Image Recognition: A peek into the future

#artificialintelligence

Our brains are wired to differentiate between objects, both living and non-living, simply by looking at them. In fact, recognizing objects and situations visually is the fastest way to gather and to relate information. This is a much bigger challenge for computers, which must be fed vast amounts of data before they can perform such a task on their own. Yet with each passing day it is becoming more important for machines to identify objects and faces, so that we can take the next big step towards a more technologically advanced society. So, what progress have we really made in that respect?


Like by smiling? Facebook acquires emotion detection startup FacioMetrics

#artificialintelligence

Facebook could one day build facial gesture controls for its app thanks to the acquisition of a Carnegie Mellon University spinoff company called FacioMetrics. The startup made an app called Intraface that could detect seven different emotions in people's faces, but it's been removed from the app stores. The acquisition aligns with a surprising nugget of information Facebook slipped into a 32-bullet point briefing sent to TechCrunch this month: "Future applications of deep learning platform on mobile: Gesture-based controls, recognize facial expressions and perform related actions." It's not hard to imagine Facebook one day employing FacioMetrics' tech and its own AI to let you add a Like or one of its Wow/Haha/Angry/Sad emoji reactions by showing that emotion with your face. "How people share and communicate is changing and things like masks and other effects allow people to express themselves in fun and creative ways."


Thoughts On Machine Learning Accuracy | Amazon Web Services

#artificialintelligence

Let's start with some comments about a recent ACLU blog in which they ran a facial recognition trial. Using Rekognition, the ACLU built a face database from 25,000 publicly available arrest photos and then performed facial similarity searches of that database using public photos of all current members of Congress. They found 28 incorrect matches out of 535, using an 80% confidence level; this is a 5% misidentification (sometimes called 'false positive') rate and a 95% accuracy rate. The ACLU has not published its data set, methodology, or results in detail, so we can only go on what they've publicly said. To illustrate the impact of the confidence threshold on false positives, we ran a test in which we created a face collection using a dataset of over 850,000 faces commonly used in academia.
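
As a concrete illustration of the threshold point, the sketch below runs the same Rekognition similarity search twice with different values of FaceMatchThreshold: a permissive threshold admits more, weaker matches, while a strict one filters them out. This is a minimal example, not the ACLU's or AWS's actual test setup; the collection name and probe image are placeholders.

    import boto3

    # Placeholder names, for illustration only.
    COLLECTION_ID = "example-face-collection"
    PROBE_IMAGE = "probe_photo.jpg"

    rekognition = boto3.client("rekognition")

    with open(PROBE_IMAGE, "rb") as f:
        image_bytes = f.read()

    # Run the same search at a permissive and a strict similarity threshold.
    for threshold in (80, 99):
        response = rekognition.search_faces_by_image(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": image_bytes},
            FaceMatchThreshold=threshold,  # minimum similarity for a face to be reported
            MaxFaces=5,
        )
        matches = response["FaceMatches"]
        print(f"threshold={threshold}%: {len(matches)} candidate match(es)")
        for match in matches:
            print(f"  face {match['Face']['FaceId']}  similarity {match['Similarity']:.1f}%")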


Detecting fake face images created by both humans and machines

#artificialintelligence

Researchers at the State University of New York in Korea have recently explored new ways to detect both machine- and human-created fake images of faces. In their paper, published in the ACM Digital Library, the researchers used ensemble methods to detect images created by generative adversarial networks (GANs) and employed pre-processing techniques to improve the detection of images created by humans using Photoshop. Over the past few years, significant advancements in image processing and machine learning have enabled the generation of fake yet highly realistic images. However, these images can also be used to create fake identities, make fake news more convincing, bypass image detection algorithms, or fool image recognition tools. "Fake face images have been a topic of research for quite some time now, but studies have mainly focused on photos made by humans, using Photoshop tools," Shahroz Tariq, one of the researchers who carried out the study, told Tech Xplore.
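
The excerpt doesn't spell out which classifiers the authors combined, but the general shape of an ensemble detector is simple: several independent models each score an image as real or fake, and their probabilities are averaged. Below is a generic soft-voting sketch over precomputed image feature vectors; the feature representation, the synthetic stand-in data, and the choice of base models are illustrative assumptions, not the paper's method.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Stand-in data: each row plays the role of a feature vector extracted
    # from a face image; label 1 = fake, 0 = real.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 128))
    y = rng.integers(0, 2, size=1000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Soft voting: average the class probabilities of three different models,
    # so no single detector's blind spot decides the outcome on its own.
    ensemble = VotingClassifier(
        estimators=[
            ("logreg", LogisticRegression(max_iter=1000)),
            ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("svm", SVC(probability=True, random_state=0)),
        ],
        voting="soft",
    )
    ensemble.fit(X_train, y_train)
    print("held-out accuracy:", ensemble.score(X_test, y_test))

With real features from real and GAN-generated faces the same structure applies; here the random stand-in data makes the printed score meaningless beyond demonstrating the mechanics.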


A Computational Model for Cursive Handwriting Based on the Minimization Principle

Neural Information Processing Systems

We propose a trajectory planning and control theory for continuous movements such as connected cursive handwriting and continuous natural speech. Its hardware is based on our previously proposed forward-inverse-relaxation neural network (Wada & Kawato, 1993). Computationally, its optimization principle is the minimum torque-change criterion. Regarding the representation level, hard constraints satisfied by a trajectory are represented as a set of via-points extracted from a handwritten character. Accordingly, we propose a via-point estimation algorithm that estimates via-points by repeating the trajectory formation of a character and the via-point extraction from the character. In experiments, good quantitative agreement is found between human handwriting data and the trajectories generated by the theory. Finally, we propose a recognition schema based on the movement generation. We show a result in which the recognition schema is applied to handwritten character recognition and can be extended to the phoneme timing estimation of natural speech.

1 INTRODUCTION

In reaching movements, trajectory formation is an ill-posed problem because the hand can move along an infinite number of possible trajectories from the starting to the target point.
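
For readers unfamiliar with the criterion the abstract names, the minimum torque-change model (Uno, Kawato & Suzuki, 1989) selects, among all trajectories satisfying the boundary and via-point constraints, the one that minimizes the time integral of squared torque derivatives. In its usual formulation (the notation below is the standard one, not copied from this paper):

    % Minimum torque-change objective over movement duration t_f,
    % where \tau_i(t) is the torque applied at joint i.
    C_T = \frac{1}{2} \int_{0}^{t_f} \sum_{i=1}^{n} \left( \frac{d\tau_i}{dt} \right)^{2} dt

The via-points mentioned in the abstract act as hard constraints that the optimized trajectory must pass through, which is how a single smoothness criterion can reproduce the shape of a cursive character.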