Machine Translation


Machine Translation for User-Generated Content

#artificialintelligence

A specific use case worth exploring in this regard is MT for User-Generated Content (UGC). Because of the speed at which UGC (comments, feedback, reviews) is created and the corresponding cost of professional translation, many organizations turn to MT. Prominent examples are Skype (where, in addition to text translation, Microsoft developed Automatic Speech Recognition (ASR) for spoken translation) and Facebook. The social network is aiming to solve the challenge of fine-tuning a separate system for each language pair, using neural machine translation (NMT) and benefiting from various contexts for translation. One solution that tackles this issue is the technology developed by Language I/O. It takes the client's glossaries and translation memories (TMs) into account, selects the best MT engine output, and then improves on the result using cultural intelligence and/or human linguists who review the machine translations after the fact, so that its MT Optimizer engine learns over time.
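As a purely hypothetical illustration (not Language I/O's actual implementation), the workflow described above might look roughly like this: gather candidate outputs from several MT engines, keep the best-scoring one, and enforce the client's terminology. All function names and parameters in the sketch are invented.

```python
# Purely hypothetical sketch -- not Language I/O's actual implementation.
# It only mirrors the workflow the paragraph outlines: collect candidates
# from several MT engines, keep the best-scoring one, and enforce the
# client's terminology.
def translate_ugc(text, engines, quality_score, glossary):
    """engines: callables mapping source text -> candidate translation.
    quality_score: callable mapping a translation -> estimated quality.
    glossary: maps unwanted target phrasings -> the client's preferred terms."""
    candidates = [engine(text) for engine in engines]
    best = max(candidates, key=quality_score)
    for phrase, preferred in glossary.items():
        best = best.replace(phrase, preferred)  # apply client terminology
    return best
```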


29 Cutting Edge Applications of Artificial Intelligence - The Burnie Group

#artificialintelligence

Artificial Intelligence (AI) is the theory and development of computer systems that can perform tasks that normally require human intelligence. These tasks include visual perception, speech recognition, decision making, and language translation. Systems capable of performing such tasks are steadily transitioning from research laboratories into industry usage. AI technology is unique in that it is flexible in application. It can be used to improve processes, enhance interactions, and solve problems that, until recently, could only be tackled by humans.


Google's New AI Transcribe Feature - A Translating Wiz

#artificialintelligence

Is the improved transcription feature the new replacement for the earlier Google Live Transcribe? The latest audio-to-text translation service is out and about, but only for Android users for the time being. Record audio in one language and have it rendered in another language altogether! Lengthy discussions can now be transcribed into text without any trouble. January marked the launch of Google Translate's AI-powered transcription feature on Android, and it now supports transcribed translations between any of eight languages: French, German, Portuguese, English, Thai, Hindi, Spanish, and Russian.


Dead Languages Come to Life

Communications of the ACM

Driven by advanced techniques in machine learning, commercial systems for automated language translation now nearly match the performance of human linguists, and do so far more efficiently. Google Translate supports 105 languages, from Afrikaans to Zulu, and in addition to printed text it can translate speech, handwriting, and the text found on websites and in images. The methods for doing those things are clever, but the key enabler lies in the huge annotated databases of writings in the various language pairs. A translation from French to English succeeds because the algorithms were trained on millions of actual translation examples. The expectation is that every word or phrase that comes into the system, with its associated rules and patterns of language structure, will have been seen and translated before.


Translating documents with Amazon Translate, AWS Lambda, and the new Batch Translate API - Amazon Web Services

#artificialintelligence

With an increasing number of digital text documents shared across the world for both business and personal reasons, the need for translation capabilities becomes even more critical. There are multiple tools available online that enable people to copy/paste text and get the translated equivalent in the language of their choice. While this is a great way to perform ad hoc translation of a (limited) amount of text, it can be tedious and time-consuming if performed frequently. Your organization may largely depend on content to document your products and services, teach your customers how to interact with you, or just share the cool things you are doing. This content is often text-heavy and mostly written in English.
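For readers who want to see what this looks like in practice, here is a minimal sketch (not taken from the article) that calls Amazon Translate through boto3, first for ad hoc text and then through the asynchronous batch translation API; the bucket names, IAM role ARN, region, and language codes are placeholders.

```python
# Minimal sketch: Amazon Translate via boto3. Bucket names, role ARN,
# region, and language codes below are placeholders to replace.
import boto3

translate = boto3.client("translate", region_name="us-east-1")

# Ad hoc translation of a short piece of text.
response = translate.translate_text(
    Text="Machine translation keeps getting better.",
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
print(response["TranslatedText"])

# Batch translation of whole documents stored in S3 (the Batch Translate API).
job = translate.start_text_translation_job(
    JobName="docs-en-to-es",
    InputDataConfig={
        "S3Uri": "s3://my-input-bucket/docs/",
        "ContentType": "text/plain",
    },
    OutputDataConfig={"S3Uri": "s3://my-output-bucket/translated/"},
    DataAccessRoleArn="arn:aws:iam::123456789012:role/TranslateBatchRole",
    SourceLanguageCode="en",
    TargetLanguageCodes=["es"],
)
print(job["JobId"], job["JobStatus"])
```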


Lookahead Optimizer: k steps forward, 1 step back

Neural Information Processing Systems

The vast majority of successful deep neural networks are trained using variants of stochastic gradient descent (SGD) algorithms. Recent attempts to improve SGD can be broadly categorized into two approaches: (1) adaptive learning rate schemes, such as AdaGrad and Adam, and (2) accelerated schemes, such as heavy-ball and Nesterov momentum. In this paper, we propose a new optimization algorithm, Lookahead, that is orthogonal to these previous approaches and iteratively updates two sets of weights. Intuitively, the algorithm chooses a search direction by looking ahead at the sequence of "fast weights" generated by another optimizer. We show that Lookahead improves the learning stability and lowers the variance of its inner optimizer with negligible computation and memory cost.
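The two-weight update the abstract sketches is compact enough to illustrate directly. Below is a minimal NumPy sketch with plain SGD standing in for the inner optimizer; the learning rate, k, and alpha values are illustrative, not the paper's tuned settings.

```python
# Minimal sketch of the Lookahead idea: an inner optimizer (plain SGD here)
# takes k "fast weight" steps, then the slow weights are interpolated toward
# the result. Hyperparameter values are illustrative only.
import numpy as np

def lookahead_sgd(grad_fn, w0, lr=0.1, k=5, alpha=0.5, outer_steps=100):
    slow = np.array(w0, dtype=float)
    for _ in range(outer_steps):
        fast = slow.copy()
        for _ in range(k):             # k steps forward with the inner optimizer
            fast -= lr * grad_fn(fast)
        slow += alpha * (fast - slow)  # 1 step back: pull slow weights toward fast
    return slow

# Usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w_star = lookahead_sgd(lambda w: 2 * w, w0=np.array([3.0, -4.0]))
print(w_star)  # close to the optimum at the origin
```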


Neural Machine Translation with Soft Prototype

Neural Information Processing Systems

Neural machine translation models usually use the encoder-decoder framework and generate translations from left to right (or right to left) without fully utilizing the target-side global information. A few recent approaches seek to exploit the global information through two-pass decoding, yet have limitations in translation quality and model efficiency. In this work, we propose a new framework that introduces a soft prototype into the encoder-decoder architecture, which allows the decoder to have indirect access to both past and future information, such that each target word can be generated based on a better global understanding. We further provide an efficient and effective method to generate the prototype. Empirical studies on various neural machine translation tasks show that our approach brings significant improvements in generation quality over the baseline model, with little extra cost in storage and inference time, demonstrating the effectiveness of the proposed framework.
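The abstract does not spell out how the prototype is built, so the following is only a rough sketch of one plausible reading: the prototype is an expected target embedding under per-position word distributions, giving the decoder a global, target-side signal to condition on. The function name, shapes, and random inputs are illustrative assumptions, not the paper's actual generator.

```python
# Rough, hypothetical sketch of a "soft prototype": an expected target
# embedding under per-position distributions over the vocabulary. The paper's
# actual generator and decoder integration are not reproduced here.
import numpy as np

def soft_prototype(word_probs, target_embeddings):
    """word_probs: (target_len, vocab), each row a distribution over words.
    target_embeddings: (vocab, dim) embedding matrix.
    Returns a (target_len, dim) prototype the decoder could attend to."""
    return word_probs @ target_embeddings

vocab, dim, tgt_len = 1000, 64, 7
probs = np.random.dirichlet(np.ones(vocab), size=tgt_len)  # soft word predictions
emb = np.random.randn(vocab, dim)
proto = soft_prototype(probs, emb)
print(proto.shape)  # (7, 64): one global "future" vector per target position
```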


Fast Structured Decoding for Sequence Models

Neural Information Processing Systems

Autoregressive sequence models achieve state-of-the-art performance in domains like machine translation. However, due to their autoregressive factorization, these models suffer from high latency during inference. Recently, non-autoregressive sequence models were proposed to speed up inference. However, these models assume that the decoding process of each token is conditionally independent of the others. Such a generation process sometimes makes the output sentence inconsistent, and thus the learned non-autoregressive models can only achieve inferior accuracy compared to their autoregressive counterparts.
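A toy contrast may make the independence assumption concrete. In the sketch below, a stand-in scoring function replaces a learned model; the point is only that autoregressive decoding conditions each token on the ones already emitted, while a purely non-autoregressive decoder fills every position independently (and in parallel).

```python
# Toy illustration of the difference described above. The scoring function is
# a stand-in for a learned model, not any particular architecture.
import numpy as np

VOCAB = ["the", "cat", "sat", "<eos>"]

def token_scores(prefix, position):
    """Stand-in for a model: scores over VOCAB for one position, crudely
    'conditioned' on how much has been generated so far."""
    rng = np.random.default_rng(1000 * position + len(prefix))
    return rng.random(len(VOCAB))

def autoregressive_decode(max_len=5):
    out = []
    for t in range(max_len):           # each step sees the previous outputs
        tok = VOCAB[int(np.argmax(token_scores(out, t)))]
        if tok == "<eos>":
            break
        out.append(tok)
    return out

def non_autoregressive_decode(length=5):
    # every position decoded in parallel, independently of the others
    return [VOCAB[int(np.argmax(token_scores([], t)))] for t in range(length)]

print(autoregressive_decode())
print(non_autoregressive_decode())
```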


A Tensorized Transformer for Language Modeling

Neural Information Processing Systems

The latest development of neural models has connected the encoder and decoder through a self-attention mechanism. In particular, the Transformer, which is based solely on self-attention, has led to breakthroughs in Natural Language Processing (NLP) tasks. However, the multi-head attention mechanism, as a key component of the Transformer, limits the effective deployment of the model in resource-limited settings. In this paper, based on the ideas of tensor decomposition and parameter sharing, we propose a novel self-attention model (namely, Multi-linear attention) with Block-Term Tensor Decomposition (BTD). We test and verify the proposed attention method on three language modeling tasks (PTB, WikiText-103, and One-billion) and a neural machine translation task (WMT-2016 English-German).
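To make the resource argument concrete, here is a short sketch of the parameter count of a vanilla multi-head attention layer, the component the paper sets out to compress; the Multi-linear attention with Block-Term Tensor Decomposition itself is not reproduced here, and the layer size used is just an example.

```python
# Parameter count of vanilla multi-head self-attention (biases ignored):
# per-head Q, K, V projections plus the output projection. This is the cost
# that tensorized/compressed attention variants aim to reduce.
def multi_head_attention_params(d_model, n_heads):
    d_head = d_model // n_heads
    qkv = 3 * n_heads * d_model * d_head  # equals 3 * d_model**2
    out = d_model * d_model               # output projection
    return qkv + out

# e.g. a Transformer-base sized layer
print(multi_head_attention_params(d_model=512, n_heads=8))  # 1,048,576 weights
```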


Decoding NLP Attention Mechanisms

#artificialintelligence

Arguably more famous today than Michael Bay's Transformers, the transformer architecture and transformer-based models have been breaking all kinds of state-of-the-art records. They have (rightfully) been getting the attention of a big portion of the deep learning community and of researchers in Natural Language Processing (NLP) since their introduction in 2017 by the Google Translation Team. This architecture has set the stage for today's heavyweight models: Google AI's BERT (and its variants) have been sitting in first position across many NLP leaderboards. OpenAI's GPT2 was judged so powerful by its authors that, until recently, only a weaker version of it was publicly released, following expressed concerns that this model might be used for "evil"! In this blogpost series, we will walk you through the rise of the transformer architecture.