
Google's new AI can help you speak another language in your own voice


Google Translate is one of the company's most widely used products. It helps people translate between languages by typing, photographing text, or using speech-to-text technology. Now the company is launching a new project called Translatotron, which offers direct speech-to-speech translation without using any intermediate text. In a post on Google's AI blog, the team behind the tool explained that instead of converting speech to text and then text back to speech, the new system relies on a single model that runs on a neural network.

Machine Learning Behind Google Translate Services - AI Summary


In its early days, Google Translate launched with phrase-based machine translation as its key algorithm. The main improvement to the translation system came with the introduction of Google Neural Machine Translation (GNMT). With Translatotron, Google demonstrated that a single sequence-to-sequence model can directly translate speech in one language into speech in another, without the intermediate text representation that cascaded systems require. Translatotron is claimed to be the first end-to-end model that directly translates speech between languages, and it can also retain the source speaker's voice in the translated speech.

Amazing Google AI speaks another language in your voice


On Wednesday, Google unveiled Translatotron, an in-development speech-to-speech translation system. It's not the first system to translate speech from one language to another, but Google designed Translatotron to do something other systems can't: retain the original speaker's voice in the translated audio. In other words, the tech could make it sound like you're speaking a language you don't know -- a remarkable step forward on the path to breaking down the global language barrier. According to Google's AI blog, most speech-to-speech translation systems follow a three-step process: first the speech is transcribed to text, then that text is translated into the target language, and finally the translated text is synthesized into speech.
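The contrast between the traditional cascade and Translatotron's approach can be sketched as below. This is a minimal illustration, not Google's implementation: every function is a hypothetical stand-in (a real system would invoke ASR, machine-translation, and TTS models, and a direct model would map source speech spectrograms to target speech spectrograms in one step).

```python
# Hypothetical sketch: cascaded speech-to-speech translation vs. a
# direct end-to-end model. String tags stand in for real model outputs.

def transcribe(audio: str) -> str:
    """Step 1 of the cascade: speech-to-text (ASR)."""
    return f"text({audio})"

def translate_text(text: str) -> str:
    """Step 2 of the cascade: text-to-text machine translation."""
    return f"translated({text})"

def synthesize(text: str) -> str:
    """Step 3 of the cascade: text-to-speech (TTS)."""
    return f"audio({text})"

def cascaded_s2st(audio: str) -> str:
    """Traditional pipeline: ASR -> MT -> TTS, with text in the middle."""
    return synthesize(translate_text(transcribe(audio)))

def direct_s2st(audio: str) -> str:
    """End-to-end model: a single sequence-to-sequence mapping from
    source speech to target speech, with no intermediate text."""
    return f"translated_audio({audio})"

print(cascaded_s2st("hello"))  # audio(translated(text(hello)))
print(direct_s2st("hello"))    # translated_audio(hello)
```

Because the direct model never produces text, it can carry acoustic properties of the input (such as the speaker's voice) through to the output, which is what the cascade loses at the transcription step.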

Move over, Google Translate: Here come A.I. earbuds


Forget phrase books or even Google Translate. New translation devices are getting closer to replicating the fantasy of the Babel fish, which in the "Hitchhiker's Guide to the Galaxy" sits in one's ear and instantly translates any foreign language into the user's own. The WT2 Plus Ear to Ear AI Translator Earbuds from Timekettle are already available, while the over-the-ear "Ambassador" from Waverly Labs is scheduled for release this year. Both devices are wireless and come with two earpieces that must be synced to a single smartphone connected to Wi-Fi or cellular data. These devices "bring us a bit closer to being able to travel to places in the world where people speak different languages and communicate smoothly with those who are living there," said Graham Neubig, an assistant professor at the Language Technologies Institute of Carnegie Mellon University and an expert in machine learning and natural language processing.