Pattern Recognition


Computer Vision and Image Analytics

#artificialintelligence

Over the past few months, I've been working on a fascinating project with one of the world's largest pharmaceutical companies, applying SAS Viya computer vision to help identify potential quality issues on the production line as part of the validated inspection process. As I know the application of these types of AI and ML techniques is of real interest to many high-tech manufacturing organisations as part of their Manufacturing 4.0 initiatives, I thought I'd take the opportunity to share my experiences with a wider audience, so I hope you enjoy this blog post. For obvious reasons, I can't share specifics of the organisation or product, so please don't ask me to. But I hope you find this article interesting and informative, and if you would like to know more about the techniques then please feel free to contact me. Quality inspections are a key part of the manufacturing process, and while many of these inspections can be automated using a range of techniques, tests and measurements, some issues are still best identified by the human eye.


Case Study: Face Recognition Transforms Thailand's Mobile Banking Sector [NEC Official]

#artificialintelligence

The Siam Commercial Bank (SCB) provides a leading online banking platform that lets customers open an account through its mobile banking application, eliminating the need to visit a physical branch. NEC provided SCB with a secure face recognition solution for Know Your Customer (KYC) verification to achieve this. With our focus on Solutions for Society, NEC's goal is to lead the advancement of the world's social infrastructure by leveraging ICT and new business models. Our Solutions for Society activities will become the pillars of NEC over the company's next 100 years. Find NEC on Facebook: https://www.facebook.com/nec.global


Apple's low-power AI acquisition will bolster its surging AR play

#artificialintelligence

Apple reportedly spent around $200 million to purchase US artificial intelligence startup Xnor.ai, according to GeekWire. The startup's low-power, edge-based AI tools will allow Apple to add AI features to power-constrained devices, like smart cameras or phones. For instance, Xnor.ai's most notable technology is an AI-based image recognition tool that enables on-device human detection for smart home cameras. Apple will also have access to a platform created by Xnor.ai that allows software developers who aren't well-versed in AI to implement AI-related code and data libraries in their apps. Apple's Xnor.ai acquisition is just one of many it has recently made, as it aims to create more powerful and personalized AI features.


Exclusive: Apple acquires Xnor.ai, edge AI spin-out from Paul Allen's AI2, for price in $200M range

#artificialintelligence

Apple has acquired Xnor.ai, a Seattle startup specializing in low-power, edge-based artificial intelligence tools, sources with knowledge of the deal told GeekWire. Speaking on condition of anonymity, sources said Apple paid an amount similar to what was paid for Turi, in the range of $200 million. Xnor.ai didn't immediately respond to our inquiries, while Apple emailed us its standard response on questions about acquisitions: "Apple buys smaller technology companies from time to time and we generally do not discuss our purpose or plans." When we visited Xnor.ai's office in Seattle's Fremont neighborhood this morning, a move was clearly in progress -- presumably to Apple's Seattle offices. The arrangement suggests that Xnor's AI-enabled image recognition tools could well become standard features in future iPhones and webcams.
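
Xnor.ai's name references the XNOR-Net line of work on binarized neural networks that came out of AI2, in which full-precision weights are approximated by a single scaling factor and binary values so that inference can run cheaply on power-constrained devices. The following is a minimal illustrative sketch of that binarization idea in Python; it is not Apple's or Xnor.ai's actual code.

    # XNOR-Net-style weight binarization: a filter W is approximated by
    # alpha * sign(W), where alpha is the mean absolute value of W.
    # Illustrative only; shows why inference can be done with cheap sign
    # operations plus a single scaling factor.
    import numpy as np

    def binarize(W):
        """Approximate W with a scale alpha and a binary matrix B in {-1, +1}."""
        alpha = np.abs(W).mean()
        B = np.where(W >= 0, 1.0, -1.0)
        return alpha, B

    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 3))              # a toy 3x3 convolution filter
    x = rng.normal(size=(3, 3))              # a toy input patch

    alpha, B = binarize(W)
    full_precision = np.sum(W * x)           # ordinary dot product
    binary_approx = alpha * np.sum(B * x)    # dot product using only signs + one scale

    print("full precision:", full_precision)
    print("binary approximation:", binary_approx)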


Despite what you may think, face recognition surveillance isn't inevitable

#artificialintelligence

Last year, communities banded together to prove that they can--and will--defend their privacy rights. As part of ACLU-led campaigns, three California cities--San Francisco, Berkeley, and Oakland--as well as three Massachusetts municipalities--Somerville, Northampton, and Brookline--banned the government's use of face recognition in their communities. Following another ACLU effort, the state of California blocked police body cam use of the technology, forcing San Diego's police department to shutter its massive face surveillance flop. And in New York City, tenants successfully fended off their landlord's efforts to install face surveillance. Even the private sector demonstrated it had a responsibility to act in the face of the growing threat of face surveillance.


Why Does Data Science Matter in Advanced Image Recognition?

#artificialintelligence

Image recognition is typically an image-processing task that identifies people, patterns, logos, objects, places, colors, shapes, and anything else that can be seen in an image. Advanced image recognition, in turn, is a framework that employs AI and deep learning to achieve greater automation across these identification processes. As vision and speech are two crucial elements of human interaction, data science is able to imitate these human tasks using computer vision and speech recognition technologies; indeed, it is already being applied across a range of fields, particularly e-commerce. Advancements in machine learning and the availability of high-bandwidth data services are strengthening the applications of image recognition.
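
As a concrete illustration of the kind of automated identification described above, the sketch below classifies a single image with a pretrained convolutional network. It assumes PyTorch and torchvision are installed and that a local file named photo.jpg exists; the model and file name are illustrative choices, not from the article.

    # Minimal image recognition sketch with a pretrained ResNet-18.
    import torch
    from PIL import Image
    from torchvision import models, transforms

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    image = Image.open("photo.jpg").convert("RGB")   # hypothetical input file
    batch = preprocess(image).unsqueeze(0)           # shape: (1, 3, 224, 224)

    with torch.no_grad():
        logits = model(batch)                        # 1000 ImageNet class scores
        probs = torch.softmax(logits, dim=1)

    top_prob, top_class = probs.topk(3)
    print(top_class.tolist(), top_prob.tolist())     # indices into the ImageNet label set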


DiffCVML 2020

#artificialintelligence

Traditional machine learning, pattern recognition and data analysis methods often assume that input data can be represented well by elements of Euclidean space. While this assumption has worked well for many past applications, researchers have increasingly realized that most data in vision and pattern recognition is intrinsically non-Euclidean, i.e. standard Euclidean calculus does not apply. The exploitation of this geometrical information can lead to more accurate representation of the inherent structure of the data, better algorithms and better performance in practical applications. In particular, Riemannian geometric principles can be applied to a variety of difficult computer vision problems including face recognition, activity recognition, object detection, biomedical image analysis, and structure-from-motion, to name a few. Consequently, Riemannian geometric computing has become increasingly popular in the computer vision community.
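
As a small, self-contained example of the non-Euclidean computations mentioned above, the following sketch compares the affine-invariant Riemannian distance between two symmetric positive-definite (SPD) covariance descriptors, a representation commonly used in vision, with the ordinary Euclidean (Frobenius) distance. It is illustrative only and not taken from any DiffCVML paper.

    # Riemannian geometry on SPD matrices: affine-invariant geodesic distance
    # d(A, B) = sqrt(sum_i log(lambda_i)^2), where lambda_i are the generalized
    # eigenvalues of (B, A).
    import numpy as np
    from scipy.linalg import eigvalsh

    def spd_geodesic_distance(A, B):
        """Affine-invariant Riemannian distance between SPD matrices A and B."""
        lams = eigvalsh(B, A)          # generalized eigenvalues, positive for SPD inputs
        return np.sqrt(np.sum(np.log(lams) ** 2))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    Y = rng.normal(size=(100, 5)) * 2.0

    A = np.cov(X, rowvar=False) + 1e-6 * np.eye(5)   # covariance descriptors (SPD)
    B = np.cov(Y, rowvar=False) + 1e-6 * np.eye(5)

    print("Riemannian distance:", spd_geodesic_distance(A, B))
    print("Euclidean (Frobenius) distance:", np.linalg.norm(A - B))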


Filtering Abstract Senses From Image Search Results

Neural Information Processing Systems

We propose an unsupervised method that, given a word, automatically selects non-abstract senses of that word from an online ontology and generates images depicting the corresponding entities. When faced with the task of learning a visual model based only on the name of an object, a common approach is to find images on the web that are associated with the object name, and then train a visual classifier from the search result. As words are generally polysemous, this approach can lead to relatively noisy models if many examples due to outlier senses are added to the model. We argue that images associated with an abstract word sense should be excluded when training a visual classifier to learn a model of a physical object. While image clustering can group together visually coherent sets of returned images, it can be difficult to distinguish whether an image cluster relates to a desired object or to an abstract sense of the word.
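
To make the sense-selection step concrete, the sketch below keeps only those noun senses of a word whose hypernym closure in WordNet reaches physical_entity.n.01 and drops abstract senses. It uses NLTK's WordNet interface as the ontology and is an illustrative stand-in for the paper's unsupervised method, not a reimplementation of it.

    # Filter abstract word senses using WordNet hypernym paths.
    # Requires: nltk.download("wordnet")
    from nltk.corpus import wordnet as wn

    PHYSICAL_ROOT = wn.synset("physical_entity.n.01")

    def non_abstract_senses(word):
        """Return the noun senses of `word` whose hypernym closure reaches physical_entity."""
        keep = []
        for synset in wn.synsets(word, pos=wn.NOUN):
            ancestors = set(synset.closure(lambda s: s.hypernyms()))
            if PHYSICAL_ROOT in ancestors:
                keep.append(synset)
        return keep

    # "bank" mixes physical senses (river bank, the building) with abstract ones
    # (the financial institution as an organization), so some senses are filtered out.
    for word in ["mouse", "bank"]:
        for s in non_abstract_senses(word):
            print(word, "->", s.name(), "-", s.definition())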


Why The Brain Separates Face Recognition From Object Recognition

Neural Information Processing Systems

Many studies have uncovered evidence that visual cortex contains specialized regions involved in processing faces but not other object classes. Recent electrophysiology studies of cells in several of these specialized regions revealed that at least some of these regions are organized in a hierarchical manner with viewpoint-specific cells projecting to downstream viewpoint-invariant identity-specific cells (Freiwald and Tsao 2010). A separate computational line of reasoning leads to the claim that some transformations of visual inputs that preserve viewed object identity are class-specific. In particular, the 2D images evoked by a face undergoing a 3D rotation are not produced by the same image transformation (2D) that would produce the images evoked by an object of another class undergoing the same 3D rotation. However, within the class of faces, knowledge of the image transformation evoked by 3D rotation can be reliably transferred from previously viewed faces to help identify a novel face at a new viewpoint.


A Convergence Analysis of Log-Linear Training

Neural Information Processing Systems

Log-linear models are widely used probability models for statistical pattern recognition. Typically, log-linear models are trained according to a convex criterion. In recent years, the interest in log-linear models has greatly increased. The optimization of log-linear model parameters is costly and therefore an important topic, in particular for large-scale applications. Different optimization algorithms have been evaluated empirically in many papers.
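
For readers unfamiliar with the setup, the sketch below trains a log-linear (multinomial logistic) model by gradient descent on its convex negative log-likelihood, using synthetic data. It illustrates the kind of convex training criterion the paper analyzes, not any specific optimizer studied there.

    # Log-linear model training on a convex criterion (negative log-likelihood).
    import numpy as np

    rng = np.random.default_rng(0)
    N, D, C = 500, 4, 3                      # samples, features, classes
    X = rng.normal(size=(N, D))
    true_W = rng.normal(size=(D, C))
    y = np.argmax(X @ true_W + rng.gumbel(size=(N, C)), axis=1)   # synthetic labels

    def neg_log_likelihood(W):
        scores = X @ W                                 # unnormalized log-probabilities
        log_Z = np.log(np.exp(scores).sum(axis=1))     # log partition function per sample
        return -(scores[np.arange(N), y] - log_Z).mean()

    W = np.zeros((D, C))
    lr = 0.5
    for step in range(200):
        scores = X @ W
        probs = np.exp(scores - scores.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        probs[np.arange(N), y] -= 1.0                  # gradient of the NLL w.r.t. scores
        W -= lr * (X.T @ probs) / N                    # gradient descent step

    print("final NLL:", neg_log_likelihood(W))
    print("training accuracy:", np.mean(np.argmax(X @ W, axis=1) == y))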