Language Learning


Sign language turned to text with new electric glove

Daily Mail

An electric glove which can convert sign language into text messages has been unveiled by scientists. The device consists of a sports glove which has been fitted with nine stretchable sensors positioned over the knuckles. When a user bends their fingers or thumb to sign a letter, the sensors stretch, which causes an electrical signal to be produced.
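
As a rough, hypothetical illustration of that decoding step (not the device's actual firmware), the sketch below thresholds nine knuckle readings into a bent/straight pattern and looks the pattern up in a letter table; the threshold value and the table entries are invented for the example.

```python
# Hypothetical sketch of the decoding step: nine knuckle-stretch readings are
# thresholded into a bent/straight pattern and looked up in a letter table.
# The threshold and the code table are invented for this example.

BEND_THRESHOLD = 0.5  # normalised reading above which a knuckle counts as bent

SIGN_TABLE = {
    (0, 1, 1, 1, 1, 1, 1, 1, 1): "A",  # thumb straight, fingers curled (illustrative)
    (1, 0, 0, 0, 0, 0, 0, 0, 0): "B",  # thumb folded, fingers straight (illustrative)
    # ... remaining letters omitted
}

def decode_letter(readings):
    """Convert nine stretch-sensor readings (0.0 to 1.0) into a letter, if any."""
    pattern = tuple(1 if r > BEND_THRESHOLD else 0 for r in readings)
    return SIGN_TABLE.get(pattern)

print(decode_letter([0.1, 0.9, 0.8, 0.9, 0.8, 0.9, 0.8, 0.9, 0.8]))  # -> "A"
```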


Glove turns sign language into text for real-time translation

New Scientist

A new glove developed at the University of California, San Diego, can convert the 26 letters of the American Sign Language (ASL) alphabet into text on a smartphone or computer screen. "For thousands of people in the UK, sign language is their first language," says Jesal Vishnuram, the technology research manager at the charity Action on Hearing Loss. In the UK, someone who is deaf is entitled to a sign language translator at work or when visiting a hospital, but at a train station, for example, it can be incredibly difficult to communicate with people who don't sign. The flexible sensors mean that you hardly notice that you are wearing the glove, says Timothy O'Connor, who is working on the technology at the University of California, San Diego.


Automatic sign language translators turn signing into text

New Scientist

Machine translation systems that convert sign language into text and back again are helping people who are deaf or have difficulty hearing to communicate with those who cannot sign. A sign language user can approach a bank teller and sign to the KinTrans camera that they'd like assistance, for example. KinTrans's machine learning algorithm translates each sign as it is made and then a separate algorithm turns those signs into a sentence that makes grammatical sense. KinTrans founder Mohamed Elwazer says his system can already recognise thousands of signs in both American and Arabic sign language with 98 per cent accuracy.
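
A toy version of that two-stage pipeline might look like the sketch below: one function stands in for the per-sign classifier, and a second orders recognised glosses into an English sentence. The stub classifier, gloss names, and reordering rule are assumptions for illustration, not KinTrans's algorithms.

```python
# Toy two-stage pipeline loosely mirroring the KinTrans description: one step
# recognises each sign as a gloss, a separate step orders the glosses into a
# grammatical sentence. The stub classifier, gloss names, and reordering rule
# are illustrative assumptions, not KinTrans's algorithms.

from typing import List

def classify_sign(hand_poses: List[list]) -> str:
    """Stage one: map a short sequence of tracked hand poses to a sign gloss.
    A real system would call a trained model here; this stub always says HELP."""
    return "HELP"

def to_sentence(glosses: List[str]) -> str:
    """Stage two: turn a gloss sequence into a grammatical English sentence."""
    if glosses == ["ME", "HELP", "NEED"]:            # gloss order as signed
        return "I need help."
    return " ".join(g.lower() for g in glosses).capitalize() + "."

print(to_sentence(["ME", "HELP", "NEED"]))           # -> "I need help."
```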


This shuttle bus will serve people with vision, hearing, and physical impairments--and drive itself

#artificialintelligence

It's been 15 years since a degenerative eye disease forced Erich Manser to stop driving. Today, he commutes to his job as an accessibility consultant via commuter trains and city buses, but he sometimes has trouble locating empty seats and must ask strangers for guidance. A step toward solving Manser's predicament could arrive as soon as next year. Manser's employer, IBM, and an independent carmaker called Local Motors are developing a self-driving, electric shuttle bus that combines artificial intelligence, augmented reality, and smartphone apps to serve people with vision, hearing, physical, and cognitive disabilities. The buses, dubbed "Olli," are designed to transport people around neighborhoods at speeds below 35 miles per hour and will be sold to cities, counties, airports, companies, and universities.


This mind-reading system can correct a robot's error! Latest News & Updates at Daily News & Analysis

#artificialintelligence

A new brain-computer interface developed by scientists can read a person's thoughts in real time to identify when a robot makes a mistake, an advance that may lead to safer self-driving cars. By relying on brain signals called "error-related potentials" (ErrPs) that occur automatically when humans make a mistake or spot someone else making one, the new approach allows even complete novices to control a robot with their minds. This technology, developed by researchers at Boston University and the Massachusetts Institute of Technology (MIT), may offer intuitive and instantaneous ways of communicating with machines, for applications ranging from supervising factory robots to controlling robotic prostheses. "When humans and robots work together, you basically have to learn the language of the robot, learn a new way to communicate with it, adapt to its interface," said Joseph DelPreto, a PhD candidate at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL).
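
In spirit, the approach amounts to checking a short EEG window after each robot action and flagging trials where an error-related deflection appears. The sketch below illustrates this with a crude amplitude threshold on synthetic data; the sampling rate, channel count, and threshold are assumptions, and the real system relies on trained classifiers rather than a fixed cutoff.

```python
# Minimal sketch of ErrP detection: after each robot action, inspect a short
# EEG window and flag it if the amplitude swing is unusually large. The
# sampling rate, channel count, and threshold are assumptions; the real MIT/BU
# system uses trained classifiers rather than a fixed threshold.

import numpy as np

SAMPLE_RATE = 256        # Hz, assumed
WINDOW_SECONDS = 0.8     # ErrPs appear within a few hundred ms of a perceived error

def detect_errp(eeg_window, threshold=40e-6):
    """Return True if the (channels, samples) window looks like it contains an ErrP."""
    swing = eeg_window.max(axis=1) - eeg_window.min(axis=1)   # per-channel peak-to-peak
    return bool(swing.mean() > threshold)

# Demo with synthetic data: a quiet baseline window and one with a large deflection.
rng = np.random.default_rng(0)
samples = int(SAMPLE_RATE * WINDOW_SECONDS)
quiet = rng.normal(0.0, 3e-6, size=(8, samples))
erring = quiet.copy()
erring[:, 64:96] += 60e-6        # simulated deflection ~250 ms after the robot's action
print(detect_errp(quiet), detect_errp(erring))   # -> False True
```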


IBM Research Demonstrates Innovative 'Speech to Sign Language' Translation System

AITopics Original Links

HURSLEY, UK--(Marketwire - September 13, 2007) - IBM (NYSE: IBM) has developed an ingenious system called SiSi (Say It Sign It) that automatically converts the spoken word into British Sign Language (BSL), which is then signed by an animated digital character or avatar. SiSi brings together a number of computer technologies. A speech recognition module converts the spoken word into text, which SiSi then interprets into gestures that are used to animate an avatar signing in BSL. Once developed, this system would see a signing avatar 'pop up' in the corner of the display screen in use -- whether that be a laptop, personal computer, TV, meeting-room display or auditorium screen. Users would be able to select the size and appearance of the avatar.
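
The description amounts to a three-stage pipeline: speech recognition to text, text to a sequence of BSL glosses, and glosses to avatar animation clips. The sketch below walks through those stages with stub functions and a tiny gloss table that are assumptions for illustration, not IBM's implementation.

```python
# Illustrative speech-to-sign pipeline in the spirit of SiSi: speech
# recognition produces text, the text is mapped to a sequence of sign glosses,
# and each gloss drives one avatar animation clip. The stub recogniser, gloss
# table, and clip names are assumptions, not IBM's implementation.

TEXT_TO_GLOSS = {
    "hello": ["HELLO"],
    "how are you": ["HOW", "YOU"],   # illustrative BSL-style gloss order
}

def speech_to_text(audio: bytes) -> str:
    """Placeholder for the speech-recognition module."""
    return "how are you"

def text_to_glosses(text: str) -> list:
    """Map recognised text to sign glosses, falling back to word-by-word glossing."""
    return TEXT_TO_GLOSS.get(text.lower().strip(), [w.upper() for w in text.split()])

def animate(glosses: list) -> None:
    """Placeholder for the avatar renderer: play one animation clip per gloss."""
    for gloss in glosses:
        print(f"playing sign clip: {gloss}")

animate(text_to_glosses(speech_to_text(b"")))   # -> HOW, YOU
```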


AI computer learns to speak like a four-year-old child

AITopics Original Links

A computer that learns to talk in the same way as a young child by holding conversations with humans has been developed by scientists. The machine, which uses cutting-edge artificial neural network technology to mimic the way the human brain works, was given 1,500 sentences from literature about language structure. It was then able to use this to learn how to construct new sentences with nouns, verbs, adjectives and pronouns when having a conversation with a real human. Researchers used connections between two million artificial neurons to mimic some of the processes that take place in the human brain as we learn to speak. While some of the sentences had the functional feel of a computer rather than the finesse of a natural speaker, the results are still impressive.
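
The system described is a recurrent neural network; as a deliberately simpler stand-in that still shows the idea of learning sentence construction from example sentences, the sketch below builds a bigram model from a tiny invented corpus and uses it to compose new sentences.

```python
# The article describes a recurrent neural network; as a much simpler stand-in,
# this sketch learns word-to-word transitions (a bigram model) from a tiny
# corpus and composes new sentences from them. The corpus is invented.

import random
from collections import defaultdict

corpus = [
    "the robot speaks to the child",
    "the child speaks to the robot",
    "a computer learns new words",
]

transitions = defaultdict(list)
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)

def generate(seed=0, max_words=12):
    """Compose a sentence by repeatedly sampling the next word from learned transitions."""
    rng = random.Random(seed)
    word, out = "<s>", []
    while len(out) < max_words:
        word = rng.choice(transitions[word])
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

print(generate())   # e.g. "the child speaks to the robot"
```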


Kinect sensor can translate sign language into SPEECH and TEXT

AITopics Original Links

Microsoft's Kinect has already proved its credentials in reading simple hand and body movements in the gaming world. But now a team of Chinese researchers have added sign language to its motion-sensing capabilities. Scientists at Microsoft Research Asia recently demonstrated software that allows Kinect to read sign language using hand tracking. What's impressive is that it can do this in real-time, translating sign language to spoken language and vice versa at conversational speeds. The system, dubbed the Kinect Sign Language Translator, is capable of capturing a conversation from both sides.
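
One simple way to turn tracked hand positions into recognised signs is template matching: compare the observed trajectory against stored examples and pick the closest. The sketch below does this with dynamic time warping over invented 2-D coordinates; it is an illustration of the general idea, not the Kinect Sign Language Translator's actual pipeline.

```python
# Sketch of template matching for sign recognition: compare an observed hand
# trajectory against stored sign templates with dynamic time warping (DTW).
# The templates and 2-D coordinates are invented; the actual Kinect Sign
# Language Translator uses its own hand-tracking features and models.

def dtw(a, b):
    """DTW distance between two trajectories, each a list of (x, y) hand positions."""
    INF = float("inf")
    cost = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i, pa in enumerate(a, 1):
        for j, pb in enumerate(b, 1):
            d = ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[-1][-1]

TEMPLATES = {
    "HELLO": [(0.0, 0.0), (0.2, 0.1), (0.4, 0.1)],
    "THANK-YOU": [(0.0, 0.0), (0.0, -0.2), (0.1, -0.4)],
}

def recognise(trajectory):
    """Return the template sign whose trajectory is closest to the observed one."""
    return min(TEMPLATES, key=lambda sign: dtw(trajectory, TEMPLATES[sign]))

print(recognise([(0.0, 0.0), (0.19, 0.12), (0.41, 0.08)]))   # -> "HELLO"
```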


Toshiba's new robot can speak in sign language

AITopics Original Links

The "communication android", as Toshiba is calling its creation, was unveiled this week at the Cutting-Edge IT & Electronics Comprehensive Exhibition (CEATEC), Japan, and has been designed for a maximum of movement fluidity in its hands and arms, employing 43 actuators in its joints, in order to speak in Japanese sign language. At this point, its range is fairly limited: it can mimic simple movements, such as greetings, but the company has plans to develop the robot -- named Aiko Chihira -- into a full communications robot by 2020. This will include speech synthesis, speech recognition, robotic control and other sensors. The end goal, the company said, is a robot that can serve as a "companion for the elderly and people with dementia, to offer telecounseling in natural speech, communicate through sign language and allow healthcare workers or family members to keep an eye on elderly people." If the robot looks familiar, that's because it was developed in collaboration with Osaka University, which has been developing humanoid robots for some time.