An electric glove which can convert sign language into text messages has been unveiled by scientists. The $100 (£77) device will allow deaf people to instantly send messages to those who don't understand sign language, according to its inventors. The device consists of a standard sports glove fitted with nine stretchable strain sensors positioned over the knuckles. When a user bends their fingers or thumb to sign a letter, the sensors stretch, producing an electrical signal.
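The pipeline the article describes, sensor readings in, a letter out, can be sketched as a nearest-neighbor match against calibrated hand shapes. This is an illustrative guess at how such a glove might work, not the inventors' actual method; the template values and letters below are invented.

```python
import math

# One calibration template per letter: nine normalized strain readings
# (0.0 = finger straight, 1.0 = fully bent). Values are made up.
TEMPLATES = {
    "A": [0.9, 0.9, 0.9, 0.9, 0.1, 0.0, 0.0, 0.0, 0.0],
    "B": [0.0, 0.0, 0.0, 0.0, 0.8, 0.0, 0.0, 0.0, 0.0],
    "L": [0.0, 0.9, 0.9, 0.9, 0.0, 0.0, 0.0, 0.0, 0.0],
}

def classify(reading):
    """Return the letter whose template is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda letter: dist(reading, TEMPLATES[letter]))

print(classify([0.88, 0.91, 0.85, 0.92, 0.12, 0.0, 0.05, 0.0, 0.02]))  # "A"
```

In practice a real device would need per-user calibration, since how far each person's knuckles flex for a given letter varies.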
Handwriting will never be the same again. A new glove developed at the University of California, San Diego, can convert the 26 letters of American Sign Language (ASL) into text on a smartphone or computer screen. Because it's cheaper and more portable than other automatic sign language translators on the market, it could be a game changer. People in the deaf community will be able to communicate effortlessly with those who don't understand their language. ASL is a language all of its own, but few people outside the deaf community speak it.
Abstract: We are increasingly surrounded by artificially intelligent technology that takes decisions and executes actions on our behalf. This creates a pressing need for general means to communicate with, instruct and guide artificial agents, with human language the most compelling means for such communication. To achieve this in a scalable fashion, agents must be able to relate language to the world and to actions; that is, their understanding of language must be grounded and embodied. However, learning grounded language is a notoriously challenging problem in artificial intelligence research. Here we present an agent that learns to interpret language in a simulated 3D environment where it is rewarded for the successful execution of written instructions.
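The core training signal described in the abstract, reward for successfully executing a written instruction, can be illustrated with a toy reward function. The environment, instruction format, and object layout below are invented for illustration; the paper's actual 3D environment and agent are far richer.

```python
# Toy sketch: the agent earns reward only when it carries out the
# instruction correctly. Objects and positions are illustrative.
OBJECTS = {"red ball": (2, 3), "blue box": (0, 1)}

def reward(instruction, agent_pos):
    """+1 if the agent reached the object named in the instruction, else 0."""
    target = instruction.removeprefix("go to the ")
    return 1.0 if OBJECTS.get(target) == agent_pos else 0.0

print(reward("go to the red ball", (2, 3)))  # 1.0
print(reward("go to the blue box", (2, 3)))  # 0.0
```

The hard part the paper addresses is the grounding itself: the agent must learn, from sparse rewards like this, which words refer to which objects and actions.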
For years scientists have worked to find a way to make it easier for deaf and hearing-impaired people to communicate. And now it is hoped that a new intelligent system could be about to transform their lives. Researchers have used image recognition to translate sign language into 'readable language' and, while it is early days, the tool could one day be used on smartphones. Scientists from Malaysia and New Zealand came up with the Automatic Sign Language Translator (ASLT), which can capture, interpret and translate sign language. It has been tested on gestures and signs representing both isolated words and continuous sentences in Malaysian sign language, with what they claim is a high degree of recognition accuracy and speed.
Duolingo has been offering language learning tools for a while now, but today the company debuted a new tool inside its iPhone app that could make the task a bit easier. Thanks to AI-powered chatbots, the language-learning app offers a way to have conversations while you're trying to learn French, German and Spanish. That's a short list of languages for now, but Duolingo says more options are on the way. Right now, you can only interact with the chatbots via text, but the company does have plans to add spoken conversations in the future. Duolingo gave these bots a bit of personality to make them more like real people and created them to be flexible with the answers they'll accept when there are multiple ways for you to respond.
Los Angeles, California – Indie developer and computer vision engineer, Mustafa Jaber, is pleased to announce the release of Capture Caption Lite, an AI-based app developed for iOS and Android devices. With the Capture Caption app, users can snap a photo with their smartphone or iPad and artificial intelligence will generate a word cloud using cutting-edge computer vision technology. These word clouds can then be downloaded to the user's image library and shared across multiple social media platforms. The brainchild of electrical engineer and image processing expert Mustafa Jaber, the app uses an artificial intelligence platform that derives information from images. This program understands the content of any image by using powerful machine-learning models, which can quickly classify images into thousands of categories.
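The step from classifier output to word cloud can be sketched as a simple scaling of each label's confidence into a display size. The labels and scores below are invented; in the real app they would come from the image-classification model, whose details are not public.

```python
def word_cloud_sizes(scores, min_pt=12, max_pt=48):
    """Map classifier confidences to font sizes; the top score gets max_pt."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero when all scores tie
    return {word: round(min_pt + (s - lo) / span * (max_pt - min_pt))
            for word, s in scores.items()}

# Hypothetical labels a photo of a beach might produce:
sizes = word_cloud_sizes({"beach": 0.92, "sunset": 0.71, "palm": 0.35})
print(sizes["beach"])  # 48
```

Rendering the sized words into an image layout is a separate packing problem that word-cloud libraries handle.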
For people living in a world without sound, sign language can make sure their points of view are heard. But outside of the deaf and hard-of-hearing communities, this gesture-based language can lose its meaning. Now a pair of entrepreneurial technology students in the US has designed a pair of gloves, called 'SignAloud', to break down the communication barriers by translating hand gestures into spoken English. The gloves use embedded sensors to monitor the position and movement of the user's hands, while a central computer analyses the data and converts the gestures to speech.
This paper introduces SIFU, a system that recruits native speakers in real time as online volunteer tutors to help answer questions from Chinese language learners reading news articles. SIFU integrates the strengths of two effective online language learning methods: reading online news and communicating with online native speakers. SIFU recruits volunteers from an online social network rather than recruiting workers from Amazon Mechanical Turk. Initial experiments showed that the proposed approach is able to effectively recruit online volunteer tutors, adequately answer the learners' questions, and efficiently obtain an answer for the learner. Our field deployment illustrates that SIFU is very useful in assisting Chinese learners in reading Chinese news articles, and that online volunteer tutors are willing to help Chinese learners when they are on a social network service.
Deaf and hard of hearing students studying advanced topics in Science, Technology, Engineering, and Mathematics (STEM) lack standard terminology to enable them to learn, discuss and contribute to their chosen fields. The ASL-STEM Forum enables the diverse, thinly-spread groups that are independently creating and using terminology to come together using a community-based, video-enabled web resource. A common vocabulary would provide interpreters with consistent terminology and enable deaf scientists to more easily converse from a common basis. This paper discusses the implementation of the ASL-STEM Forum, describes our approach to building a community using the site, and overviews the unique opportunities it offers for observing a language developing from the bottom-up.