When we discuss artificial intelligence (AI), how do machines learn? What kinds of projects feed into greater understanding? For our friends over at IBM, one surprising answer is movies. To build smarter AI systems, IBM researchers are using movie plots and neural networks to explore new ways of enhancing the language understanding capabilities of AI models. IBM will present key findings from two papers on these topics at the Association for Computational Linguistics (ACL) annual meeting this week in Melbourne, Australia.
Many studies have shown that musical training can enhance language skills. However, it was unknown whether music lessons improve general cognitive ability, leading to better language proficiency, or whether the effect of music is more specific to language processing. A new study from MIT has found that piano lessons have a very specific effect on kindergartners' ability to distinguish different pitches, which translates into an improvement in discriminating between spoken words. However, the piano lessons did not appear to confer any benefit for overall cognitive ability, as measured by IQ, attention span, and working memory. "The children didn't differ in the more broad cognitive measures, but they did show some improvements in word discrimination, particularly for consonants."
Memrise, a UK startup whose eponymous language-learning app employs machine learning and localised content to adapt to users' needs as they progress through their lessons, has raised another $15.5 million in funding to expand its product. The funding comes after a period of strong growth: Memrise has now passed 35 million users globally across its 20 language courses, and it tipped into profitability in Q1 of this year. Ed Cooke, who co-founded the app with Ben Whately and Greg Detre, told TechCrunch that this places it as the second-most popular language app globally in terms of both users and revenues. This round, a Series B, was led by Octopus Ventures and Korelya Capital, with participation from existing investors Avalon Ventures and Balderton Capital. Memrise is not disclosing its valuation -- it has raised a relatively modest $22 million to date -- but Cooke (who is also the CEO) said the plan is to use the funding to expand its AI platform and add more features for users.
For many AI services, it is critical to be able to comprehend human language and even converse in it with human users. So far, advances in natural language processing (NLP), powered by "sub-symbolic" machine learning based on deep neural networks, allow us to solve tasks such as machine translation, classification, and emotion recognition. However, these approaches require an enormous amount of training. Additionally, recent regulations impose increasing legal restrictions in particular applications, making current solutions unviable there. The ultimate goal for these industry initiatives is to allow humans and AI to interact fluently in a common language.
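To make the classification task above concrete, here is a deliberately tiny sketch of "sub-symbolic" text classification: a bag-of-words perceptron trained on a handful of hypothetical labelled sentences. Real systems use deep neural networks and vastly more data; the training examples, labels, and scale here are all invented for illustration.

```python
# Toy "sub-symbolic" sentiment classifier: bag-of-words + perceptron.
# Training sentences and labels are hypothetical illustration data.

def tokenize(text):
    return text.lower().split()

TRAIN = [
    ("i love this wonderful movie", "positive"),
    ("what a great happy day", "positive"),
    ("this is terrible and sad", "negative"),
    ("i hate this awful film", "negative"),
]

# Build a fixed vocabulary from the training sentences.
vocab = sorted({w for text, _ in TRAIN for w in tokenize(text)})
index = {w: i for i, w in enumerate(vocab)}

def features(text):
    """Map a sentence to a bag-of-words count vector."""
    vec = [0.0] * len(vocab)
    for w in tokenize(text):
        if w in index:
            vec[index[w]] += 1.0
    return vec

# Perceptron training: positive -> +1, negative -> -1.
weights = [0.0] * len(vocab)
bias = 0.0
for _ in range(10):  # epochs
    for text, label in TRAIN:
        x = features(text)
        y = 1 if label == "positive" else -1
        score = sum(w * xi for w, xi in zip(weights, x)) + bias
        if y * score <= 0:  # misclassified: nudge weights toward y
            weights = [w + y * xi for w, xi in zip(weights, x)]
            bias += y

def classify(text):
    x = features(text)
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return "positive" if score > 0 else "negative"

print(classify("a wonderful happy movie"))  # -> positive
print(classify("an awful sad film"))        # -> negative
```

Even this toy shows why such models need so much training data: a word the model never saw during training ("an" above) contributes nothing to the decision.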
A cross-party group of lawmakers on Tuesday unveiled a draft version of what would become Japan's first-ever law defining the government's responsibility to systematically promote Japanese language education both at home and abroad. The drafting of the bill comes as Japan experiences a continued increase in non-Japanese residents, including under categories such as technical intern trainees, students, and highly skilled professionals, but at the same time lacks a unified policy as to how to teach them Japanese. The group hopes to submit the bill to the fall session of the Diet for possible enactment, Liberal Democratic Party lawmaker Hiroshi Hase, secretary-general of the group, told The Japan Times. Whether the bill will be passed through the Diet and how big of an impact it will create remains to be seen, with its effectiveness likely hinging on how much funding it receives from the government. The draft bill does not specify any numerical targets for fiscal spending nor set a deadline for the government to meet the ultimate goal advocated in the legislation.
Building intelligent agents that can communicate with and learn from humans in natural language is of great value. Supervised language learning is limited in that it mainly captures the statistics of the training data; it adapts poorly to new scenarios and cannot acquire new knowledge without inefficient retraining or catastrophic forgetting. We highlight the perspective that conversational interaction serves as a natural interface both for language learning and for novel knowledge acquisition, and we propose a joint imitation and reinforcement approach for grounded language learning through an interactive conversational game. The agent trained with this approach is able to actively acquire information by asking questions about novel objects and to use the just-learned knowledge in subsequent conversations in a one-shot fashion. Results compared with other methods verified the effectiveness of the proposed approach.
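The interaction pattern described above can be sketched in miniature: an agent shown a novel object asks about it, then reuses the teacher's answer one-shot in later turns. This is a purely illustrative toy of the conversational loop; the paper's actual method trains neural models with joint imitation and reinforcement learning, none of which is modelled here, and the object IDs and phrasing are invented.

```python
# Toy sketch of the question-asking / one-shot reuse loop only.
# The real approach uses neural networks trained with imitation
# and reinforcement learning; this mimics just the interaction.

class ConversationalAgent:
    def __init__(self):
        self.lexicon = {}  # object id -> learned name

    def respond(self, object_id, teacher_answer=None):
        """Produce an utterance for a shown object.

        If the teacher supplies an answer, memorize it in one shot;
        if the object is still unknown, ask about it.
        """
        if teacher_answer is not None:
            self.lexicon[object_id] = teacher_answer  # one-shot acquisition
        if object_id in self.lexicon:
            return f"this is a {self.lexicon[object_id]}"
        return "what is this?"


agent = ConversationalAgent()
print(agent.respond("obj-7"))                          # -> what is this?
print(agent.respond("obj-7", teacher_answer="zebra"))  # -> this is a zebra
print(agent.respond("obj-7"))                          # -> this is a zebra
```

The point of the sketch is the single exposure: after one teacher answer, the name is available in every subsequent conversation without retraining.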
Speaking aloud and performing sign language require the same parts of the brain, according to a new study. Researchers at New York University found that the neural skills needed to perform sign language are similar to those required for speaking out loud. Their report, published in the journal Scientific Reports, is the first of its kind to demonstrate the association between the two communication forms.
A robotic hand that can translate words into sign language gestures for deaf people has been created by scientists. Named Project Aslan, the 3D-printed hand costs as little as £400 ($560) to make and interprets both written text and spoken words. The device communicates through 'fingerspelling', a type of sign language where words are spelled out letter-by-letter through separate gestures on a single hand. The robot, which will be ready in five years, could one day be carried around in a rucksack, scientists say. It could help some of the 70 million people worldwide who are deaf or hard of hearing to communicate with people who don't know sign language.
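The letter-by-letter idea behind fingerspelling is simple to sketch: text is mapped to a sequence of per-letter hand gestures. The gesture names below are hypothetical placeholders, not Project Aslan's actual commands, and a real system would also handle timing, transitions, and non-alphabetic signs.

```python
# Toy sketch of fingerspelling: one gesture per letter.
# "gesture_X" names are invented placeholders for hand poses.

def fingerspell(word):
    """Map a word to a sequence of per-letter gesture names,
    skipping anything that is not a letter."""
    return [f"gesture_{ch.upper()}" for ch in word if ch.isalpha()]


print(fingerspell("aslan"))
# -> ['gesture_A', 'gesture_S', 'gesture_L', 'gesture_A', 'gesture_N']
```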
Translating is difficult work, the more so the further two languages are from one another. But sign language is a unique case, and translating it uniquely difficult, because it is fundamentally different from spoken and written languages. All the same, SignAll has been working hard for years to make accurate, real-time machine translation of ASL a reality.
Some machines can take something written in one language and give users the same or similar wording in another language. These machines are designed to do this kind of work quickly and without mistakes. Some of the devices are so small they can be carried around the world. The quality of translation software programs has greatly improved in recent years, thanks to new, fast-developing technologies.