Language Learning

How we used AI to translate sign language in real time.


Using artificial intelligence to translate sign language in real time: see how we used Python to train a neural network to 86% accuracy in less than a day. People who are hearing impaired are often left behind in video consultations. Imagine a world where anyone can communicate using sign language over video. Inspired by this vision, some of our engineering team brought the idea to HealthHack 2018. In less than 48 hours, and using the power of artificial intelligence, their team produced a working prototype that translated signs from the Auslan alphabet into English text in real time.
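The post doesn't share the team's code, but the core of such a prototype is a classifier mapping extracted hand features to letters. Below is a minimal sketch of that idea as a softmax classifier trained on synthetic landmark-style features; the data, dimensions, and learning rate are all invented for illustration and are not the hackathon pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for hand-landmark features: three "sign" classes, each a
# Gaussian cluster in feature space. (A real pipeline would extract these
# features from video frames of Auslan signs.)
n_classes, n_features, n_per_class = 3, 8, 50
centers = rng.normal(size=(n_classes, n_features)) * 3.0
X = np.vstack([centers[c] + rng.normal(scale=0.5, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Single softmax layer trained with full-batch gradient descent.
W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)
for _ in range(200):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    onehot = np.eye(n_classes)[y]
    grad = (probs - onehot) / len(X)              # gradient of cross-entropy w.r.t. logits
    W -= 0.5 * X.T @ grad
    b -= 0.5 * grad.sum(axis=0)

accuracy = ((X @ W + b).argmax(axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

On well-separated clusters like these, even a single linear layer classifies almost perfectly; the hackathon's reported 86% reflects the much harder problem of real video input.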

This Amazon Echo mod lets Alexa understand sign language


It seems like voice interfaces are going to be a big part of the future of computing, popping up in phones, smart speakers, and even household appliances. But how useful is this technology for people who don't communicate using speech? Are we creating a system that locks out certain users? These were the questions that inspired software developer Abhishek Singh to create a mod that lets Amazon's Alexa assistant understand some simple sign language commands. In a video, Singh demonstrates how the system works.

Movies, Neural Networks Boost AI Language Skills - insideBIGDATA


When we discuss artificial intelligence (AI), how are machines learning? What kinds of projects feed into greater understanding? For our friends over at IBM, one surprising answer is movies. To build smarter AI systems, IBM researchers are using movie plots and neural networks to explore new ways of enhancing the language understanding capabilities of AI models. IBM will present key findings from two papers on these topics at the Association for Computational Linguistics (ACL) annual meeting this week in Melbourne, Australia.

Memrise raises $15.5M as its AI-based language-learning app passes 35M users


Memrise, a UK startup whose eponymous language-learning app employs machine learning and localised content to adapt to users' needs as they progress through their lessons, has raised another $15.5 million in funding to expand its product. The funding comes after a period of strong growth: Memrise has now passed 35 million users globally across its 20 language courses, and it tipped into profitability in Q1 of this year. Ed Cooke, who co-founded the app with Ben Whately and Greg Detre, told TechCrunch that this makes it the second-most popular language app globally in terms of both users and revenues. The round, a Series B, was led by Octopus Ventures and Korelya Capital, with participation from existing investors Avalon Ventures and Balderton Capital. Memrise is not disclosing its valuation (it has raised a relatively modest $22 million to date), but Cooke, who is also the CEO, said the plan is to use the funding to expand its AI platform and add more features for users.

How SingularityNET is Advancing Unsupervised Language Learning


For many AI services, it is critical to be able to comprehend human language and even converse in it with human users. So far, advances in natural language processing (NLP), powered by "sub-symbolic" machine learning based on deep neural networks, allow us to solve tasks like machine translation, classification, and emotion recognition. However, these approaches require enormous amounts of training data. Additionally, recent regulations place increasing legal restrictions on particular applications, making current solutions unviable there. The ultimate goal for these industry initiatives is to allow humans and AI to interact fluently in a common language.

Interactive Language Acquisition with One-shot Visual Concept Learning through a Conversational Game

Building intelligent agents that can communicate with and learn from humans in natural language is of great value. Supervised language learning is limited in that it mainly captures the statistics of the training data, and it is hardly adaptive to new scenarios or flexible enough to acquire new knowledge without inefficient retraining or catastrophic forgetting. We highlight the perspective that conversational interaction serves as a natural interface both for language learning and for novel knowledge acquisition, and we propose a joint imitation and reinforcement approach for grounded language learning through an interactive conversational game. An agent trained with this approach can actively acquire information by asking questions about novel objects and use the just-learned knowledge in subsequent conversations in a one-shot fashion. Comparisons with other methods verify the effectiveness of the proposed approach.

Cross-Language Learning for Program Classification Using Bilateral Tree-Based Convolutional Neural Networks

AAAI Conferences

Towards the vision of translating code that implements an algorithm from one programming language into another, this paper proposes an approach to automated program classification using bilateral tree-based convolutional neural networks (BiTBCNNs). It is layered on top of two tree-based convolutional neural networks (TBCNNs), each of which recognizes the algorithm of code written in an individual programming language. A combination layer then recognizes the similarities and differences among code in different programming languages. The BiTBCNNs are trained on source code in different languages that is known to implement the same algorithms and/or functionalities. For a preliminary evaluation, we use 3591 Java and 3534 C++ code snippets covering 6 algorithms, crawled systematically from GitHub. We obtained over 90% accuracy on the cross-language binary classification task of telling whether two given code snippets implement the same algorithm. For the algorithm classification task, i.e., predicting which of the six algorithm labels an arbitrary C++ code snippet implements, we achieved over 80% precision.
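The paper's encoders operate on parse trees; as a much-simplified stand-in for the bilateral architecture, the sketch below encodes each snippet as a bag of tokens and uses cosine similarity as the comparison layer. The token lists are invented examples, not the paper's dataset, and the encoding is only meant to show the two-encoder-plus-combination shape of the approach.

```python
import math
from collections import Counter

def encode(tokens):
    # Stand-in for a per-language TBCNN encoder: a bag-of-tokens vector.
    # The actual paper encodes abstract syntax trees with convolutions.
    return Counter(tokens)

def similarity(a, b):
    # Cosine similarity over the two encodings, playing the role of the
    # combination layer that compares the Java and C++ representations.
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical token streams for snippets in two languages.
java_bubble = "for for if swap arr i j tmp".split()
cpp_bubble  = "for for if swap vec i j tmp".split()
cpp_bsearch = "while mid low high return cmp".split()

same = similarity(encode(java_bubble), encode(cpp_bubble))
diff = similarity(encode(java_bubble), encode(cpp_bsearch))
print(same > diff)  # the cross-language bubble-sort pair scores higher
```

Thresholding such a similarity score gives the binary "same algorithm" decision the paper evaluates, though real tree encodings are what make it robust to surface-level token differences.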

Unsupervised Selection of Negative Examples for Grounded Language Learning

AAAI Conferences

There has been substantial work in recent years on grounded language acquisition, in which language and sensor data are used to create a model relating linguistic constructs to the perceivable world. While powerful, this approach is frequently hindered by ambiguities, redundancies, and omissions found in natural language. We describe an unsupervised system that learns language by training visual classifiers, first selecting important terms from object descriptions, then automatically choosing negative examples from a paired corpus of perceptual and linguistic data. We evaluate the effectiveness of each stage as well as the system's performance on the overall learning task.
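As a toy illustration of the negative-selection step (the corpus and helper name below are hypothetical, not the paper's system): once important terms have been selected from object descriptions, objects whose paired descriptions never mention a term can be treated as negative examples when training that term's visual classifier.

```python
# Hypothetical paired corpus: each object ID paired with the set of
# terms appearing in its natural-language description.
corpus = [
    ("obj1", {"red", "round", "apple"}),
    ("obj2", {"yellow", "long", "banana"}),
    ("obj3", {"red", "square", "block"}),
    ("obj4", {"green", "round", "lime"}),
]

def negatives_for(term, corpus):
    # Unsupervised heuristic in the spirit of the paper: objects never
    # described with the term become candidate negative examples for it.
    return [obj for obj, words in corpus if term not in words]

print(negatives_for("red", corpus))  # ['obj2', 'obj4']
```

The paper's contribution is choosing these negatives well despite the ambiguities and omissions of natural language; this sketch shows only the simplest "never mentioned" criterion.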

AI-powered language learning promises to fast-track fluency


A linguistics company is using AI to shorten the time it takes to learn a new language. Using traditional methods, it takes about 200 hours to gain basic proficiency in a new language. This AI-powered platform claims it can take learners from beginner to fluency in just a few months, through once-daily 20-minute lessons. Learning a new language is hard. Some people seem to pick up new languages with ease, but for the rest of us it's a trudge through rote memorization.