Tools and apps like Google Translate are getting better and better at translating one language into another. Alexander Waibel, professor of computer science at Carnegie Mellon University's Language Technologies Institute (@LTIatCMU), tells Here & Now's Jeremy Hobson how translation technology works, where there is still room for improvement and what could be in store in the decades to come. "Over the years I think there's been a big trend in translation to go increasingly from rule-based, knowledge-based methods to learning methods. Systems have now really achieved a phenomenally good accuracy, and so I think, within our lifetime I'm fairly sure that we'll reach -- if we haven't already done so -- human-level performance, and/or exceed it. The current technology that really has taken the community by storm is of course neural machine translation.
This work compares different approaches to machine translation for low-resource language pairs, namely zero-shot transfer learning and unsupervised machine translation. We discuss how data size affects the performance of both unsupervised MT and transfer learning. Additionally, we examine how the domain of the data affects the results of unsupervised MT. The code for all the experiments performed in this project is available on GitHub.
Natural Language Processing is artificial intelligence that learns words and patterns of words so that it can respond to human searches and questions. Siri and Alexa are examples of this technology, and it is continually improving: as more and more conversations are held with these machines, they continue to learn and respond more accurately. Machines are also used for translation.
In 2020, it may seem natural to receive a meaningful translation from Google Translate, though some of us can still remember when it required correction every time you tried to translate more than a few words at once. This is an example of the changes we tend to overlook as casual users, but there is a lot of hard work behind them. While processing data, a neural network doesn't just follow a fixed algorithm; it finds ways of solving problems and, in effect, learns to solve them. The more tasks it solves, the better it copes with them. This loose similarity to how the human brain functions is why neural networks are described as artificial intelligence (AI).
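The idea that a network "learns to solve" a task rather than following a hand-written rule can be illustrated with a toy sketch. The data, model, and learning rate below are illustrative assumptions, not anything from a real translation system: a one-parameter model is shown only examples of a hidden rule (y = 2x) and recovers it by gradient descent, with no explicit rule ever programmed in.

```python
# A minimal sketch of learning from examples: fit y = w * x to data
# by gradient descent on the mean squared error. The model never sees
# the underlying rule, only instances of it.

def train(examples, steps=200, lr=0.05):
    """Fit the single weight w in y = w * x to (x, y) pairs."""
    w = 0.0  # start knowing nothing
    for _ in range(steps):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
        w -= lr * grad  # nudge w in the direction that reduces the error
    return w

# The hidden rule behind these examples is y = 2x.
data = [(1, 2), (2, 4), (3, 6)]
w = train(data)
print(round(w, 3))  # close to 2.0 after training
```

The same principle, scaled up to millions of parameters and sentence pairs instead of one weight and three numbers, is what lets a neural translation system improve as it sees more data.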