A Computational Model for Cursive Handwriting Based on the Minimization Principle

Neural Information Processing Systems

We propose a trajectory planning and control theory for continuous movements such as connected cursive handwriting and continuous natural speech. Its hardware basis is our previously proposed forward-inverse-relaxation neural network (Wada & Kawato, 1993). Computationally, its optimization principle is the minimum torque-change criterion. At the representation level, the hard constraints satisfied by a trajectory are represented as a set of via-points extracted from a handwritten character. Accordingly, we propose a via-point estimation algorithm that estimates via-points by alternately repeating trajectory formation for a character and via-point extraction from that character. In experiments, good quantitative agreement is found between human handwriting data and the trajectories generated by the theory. Finally, we propose a recognition schema based on movement generation. We show a result in which the recognition schema is applied to handwritten character recognition, and we argue that it can be extended to phoneme timing estimation for natural speech.

1 INTRODUCTION

In reaching movements, trajectory formation is an ill-posed problem because the hand can move along an infinite number of possible trajectories from the starting point to the target point.
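The via-point estimation loop summarized in the abstract can be illustrated with a short sketch. The code below is not the authors' implementation: it substitutes a simple minimum-jerk interpolation between via-points for the full minimum torque-change optimization (which requires an arm dynamics model), and the helper names (generate_trajectory, estimate_via_points) and the tolerance value are hypothetical. It only shows the alternation the abstract describes: form a trajectory through the current via-points, find where it deviates most from the measured character, and add that point as a new via-point.

```python
import numpy as np

def min_jerk_segment(p0, p1, n):
    """Kinematic stand-in for the minimum torque-change optimizer:
    a minimum-jerk interpolation between two via-points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    s = 10 * t**3 - 15 * t**4 + 6 * t**5          # smooth 0 -> 1 profile
    return (1.0 - s) * p0 + s * p1

def generate_trajectory(via_points, n_per_segment=50):
    """Trajectory through the current via-points (hypothetical helper)."""
    segments = [min_jerk_segment(a, b, n_per_segment)
                for a, b in zip(via_points[:-1], via_points[1:])]
    return np.vstack(segments)

def estimate_via_points(character, tol=0.5, max_iters=20):
    """Alternate trajectory formation and via-point extraction: add the
    point of largest deviation until the generated trajectory reproduces
    the handwritten character."""
    via = [character[0], character[-1]]            # start and end points
    for _ in range(max_iters):
        traj = generate_trajectory(np.array(via))
        # distance from each character sample to the generated trajectory
        d = np.min(np.linalg.norm(character[:, None, :] - traj[None, :, :],
                                  axis=2), axis=1)
        worst = int(np.argmax(d))
        if d[worst] < tol:
            break
        via.append(character[worst])
        # keep via-points ordered along the stroke
        via.sort(key=lambda p: np.argmin(np.linalg.norm(character - p, axis=1)))
    return np.array(via)

# Usage: character is an (N, 2) array of pen positions sampled along a stroke.
xs = np.linspace(0, 2 * np.pi, 200)
character = np.column_stack([xs, np.sin(xs)])
print(estimate_via_points(character).shape)
```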


Like by smiling? Facebook acquires emotion detection startup FacioMetrics

#artificialintelligence

Facebook could one day build facial gesture controls for its app thanks to the acquisition of a Carnegie Mellon University spinoff company called FacioMetrics. The startup made an app called Intraface that could detect seven different emotions in people's faces, but it's been removed from the app stores. The acquisition aligns with a surprising nugget of information Facebook slipped into a 32-bullet-point briefing sent to TechCrunch this month: "Future applications of deep learning platform on mobile: Gesture-based controls, recognize facial expressions and perform related actions." It's not hard to imagine Facebook one day employing FacioMetrics' tech and its own AI to let you add a Like or one of its Wow/Haha/Angry/Sad emoji reactions by showing that emotion with your face. "How people share and communicate is changing and things like masks and other effects allow people to express themselves in fun and creative ways."


Face recognition and OCR processing of 300 million records from US yearbooks

#artificialintelligence

A yearbook is a book published annually to record, highlight, and commemorate the past year of a school. Our team at MyHeritage took on a complex project: extracting individual pictures, names, and ages from hundreds of thousands of yearbooks, structuring the data, and creating a searchable index that covers the majority of US schools between 1890 and 1979 -- more than 290 million individuals. In this article I'll describe what problems we encountered during this project and how we solved them. First of all, let me explain why we needed to tackle this challenge. MyHeritage is a genealogy platform that provides access to almost 10 billion historical records.
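The article does not detail MyHeritage's actual pipeline, but the basic shape of the task (find portrait photos on a scanned page, read the printed names, keep both for a searchable index) can be sketched. The snippet below is an assumption-laden illustration using off-the-shelf tools (OpenCV's Haar cascade face detector and the pytesseract OCR wrapper), not the production system; the function name and file path are hypothetical, and the hard step of matching each printed name to the right face is omitted.

```python
import cv2                      # OpenCV: face detection on the scanned page
import pytesseract              # Tesseract OCR wrapper: read printed names

def index_yearbook_page(image_path):
    """Extract face crops and page text from one scanned yearbook page."""
    page = cv2.imread(image_path)
    gray = cv2.cvtColor(page, cv2.COLOR_BGR2GRAY)

    # 1. Detect portrait photos with a stock Haar cascade face detector.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    crops = [page[y:y + h, x:x + w] for (x, y, w, h) in faces]

    # 2. OCR the page to get candidate names printed near the portraits.
    text = pytesseract.image_to_string(gray)

    # 3. Return raw material for an index entry (name-to-face matching,
    #    the genuinely hard part, is left out of this sketch).
    return {"faces": crops, "text": text}

result = index_yearbook_page("yearbook_page.jpg")
print(len(result["faces"]), "faces found")
```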


This Google-powered AI can identify your terrible doodles

#artificialintelligence

As part of Google's slew of artificial intelligence announcements today, the company is releasing a number of AI web experiments powered by its cloud services that anyone can go and play with. One -- called Quick, Draw! -- gives you a prompt to draw an image of a written word or phrase in under 20 seconds with your mouse cursor in such a way that a neural network can identify it. It's both a hilarious and fascinating exercise with broader implications for how AI can self-learn over time in key AI research areas like image recognition and optical character recognition. Quick, Draw! is a great way to familiarize yourself with how neural networks work to identify objects and text in photos, which is one of the most common forms of AI-guided software techniques we see daily on platforms like Facebook and Google Photos. As you start to craft the doodle, Quick, Draw!'s software will start calling out words and phrases it thinks you're trying to illustrate.
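The article does not say how Google's model is built, but the general idea of a doodle classifier can be shown in a few lines. The sketch below is a hypothetical stand-in, not the Quick, Draw! system: it assumes doodles rasterized to 28x28 grayscale images, uses five made-up categories, and trains a tiny Keras convolutional network on random placeholder data so the example stays self-contained and runnable.

```python
import numpy as np
import tensorflow as tf

# Stand-in data: 28x28 grayscale doodle rasters with 5 made-up categories.
# (The real system learns from millions of player drawings.)
num_classes = 5
x_train = np.random.rand(1000, 28, 28, 1).astype("float32")
y_train = np.random.randint(0, num_classes, size=1000)

# A small convolutional classifier: enough to show the idea, nothing more.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=64, verbose=0)

# "Guessing" a doodle: the class with the highest predicted probability.
doodle = np.random.rand(1, 28, 28, 1).astype("float32")
print(int(np.argmax(model.predict(doodle, verbose=0))))
```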


Researchers Have Created an AI That Could Read and React to Emotions

#artificialintelligence

One of today's more popular artificially intelligent (AI) androids comes from the TV series "MARVEL's Agents of S.H.I.E.L.D." Those of you who followed the latest season's story will know which one -- no spoilers here! One of the most interesting things about this fictional AI character is that it can read people's emotions. Thanks to researchers from the University of Cambridge, this AI ability might soon make the jump from sci-fi to reality. The first step in creating such a system is training an algorithm on simpler facial expressions and just one specific emotion or feeling. To that end, the Cambridge team focused on using a machine learning algorithm to figure out if a sheep is in pain, and this week, they presented their research at the IEEE International Conference on Automatic Face and Gesture Recognition in Washington, D.C.