Machine Translation


Is Google's New Lingvo Framework a Big Deal for Machine Translation? (Slator)

#artificialintelligence

Researchers in neural machine translation (NMT) and natural language processing (NLP) may want to keep an eye on a new framework from Google. Lingvo is tailored to sequence models and NLP tasks, including speech recognition, language understanding, MT, and speech translation. The Google AI team claims there are already "dozens" of research papers in these areas based on Lingvo; in fact, they cite this as one reason they decided to open-source the project: to support the research community and encourage reproducible results. Lingvo supports multiple neural network architectures, from recurrent neural nets to Transformer models, and comes with extensive documentation of common implementations across different tasks (e.g., NLP, NMT, speech synthesis).


The Missing Ingredient in Zero-Shot Neural Machine Translation

arXiv.org Artificial Intelligence

Multilingual Neural Machine Translation (NMT) models can translate between multiple source and target languages. Despite various approaches to training such models, they struggle with zero-shot translation: translating between language pairs that were not seen together during training. In this paper we first diagnose why state-of-the-art multilingual NMT models that rely purely on parameter sharing fail to generalize to unseen language pairs. We then propose auxiliary losses on the NMT encoder that impose representational invariance across languages. Our simple approach vastly improves zero-shot translation quality without regressing on supervised directions. For the first time, on WMT14 English-French/German, we achieve zero-shot performance that is on par with pivoting. We also demonstrate that our approach scales easily to multiple languages on the IWSLT 2017 shared task.
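The paper's core idea, auxiliary losses that make encoder representations language-invariant, can be illustrated with a minimal sketch. The snippet below shows one plausible instantiation (a cosine-alignment term on mean-pooled encoder states of a parallel sentence pair); the paper's exact loss formulations may differ, and all tensor names here are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def alignment_loss(src_states, tgt_states, src_mask, tgt_mask):
    """Auxiliary loss pulling the pooled encoder representations of a
    parallel sentence pair toward each other, regardless of language.

    src_states, tgt_states: (batch, time, dim) encoder outputs.
    src_mask, tgt_mask: (batch, time) float masks (1 = real token, 0 = pad).
    """
    # Mean-pool over non-padded positions: (batch, time, dim) -> (batch, dim)
    src_vec = (src_states * src_mask.unsqueeze(-1)).sum(1) / src_mask.sum(1, keepdim=True)
    tgt_vec = (tgt_states * tgt_mask.unsqueeze(-1)).sum(1) / tgt_mask.sum(1, keepdim=True)
    # Cosine distance between the two sentence embeddings
    return (1.0 - F.cosine_similarity(src_vec, tgt_vec, dim=-1)).mean()

# Added to the usual NMT objective, e.g.:
# total_loss = nmt_cross_entropy + lambda_align * alignment_loss(...)
```

Intuitively, if English and German encodings of the same sentence land near each other, a decoder trained only on English inputs has a better chance of handling German inputs zero-shot.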


Adversarial attacks against Fact Extraction and VERification

arXiv.org Artificial Intelligence

This paper describes a baseline for the second iteration of the Fact Extraction and VERification shared task (FEVER2.0), which explores the resilience of systems through adversarial evaluation. We present a collection of simple adversarial attacks against systems that participated in the first FEVER shared task. FEVER modeled the assessment of the truthfulness of written claims as a joint information retrieval and natural language inference task, using evidence from Wikipedia. Many participants used deep neural networks in their submissions to the shared task. The extent to which such models understand language has been the subject of a number of recent investigations and discussion in the literature. In this paper, we present a simple method of generating entailment-preserving and entailment-altering perturbations of instances via common patterns within the training data. We find that a number of systems are greatly affected, with absolute losses in classification accuracy of up to 29% on the newly perturbed instances. Using these newly generated instances, we construct a sample submission for the FEVER2.0 shared task. Addressing these types of attacks will aid in building more robust fact-checking models and suggest directions for expanding the datasets.
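To make the two perturbation types concrete, here is a toy sketch. The specific patterns below are invented for illustration; the paper mines its patterns from the FEVER training data rather than hand-coding them like this.

```python
import re

def perturb_preserving(claim: str) -> str:
    """Entailment-preserving rewrite: a surface change that should not
    alter the claim's label (here, a simple synonym substitution)."""
    return re.sub(r"\bmovie\b", "film", claim)

def perturb_altering(claim: str) -> str:
    """Entailment-altering rewrite: inserting a negation should flip a
    SUPPORTED claim to REFUTED (toy pattern for illustration only)."""
    return re.sub(r"\b(is|was)\b", r"\1 not", claim, count=1)

print(perturb_preserving("Titanic is a 1997 movie."))  # -> ... 1997 film.
print(perturb_altering("Titanic is a 1997 movie."))    # -> Titanic is not ...
```

A robust fact-checker should keep its verdict under the first transformation and flip it under the second; the reported accuracy drops suggest many submitted systems do neither reliably.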


Integrating Artificial and Human Intelligence for Efficient Translation

arXiv.org Artificial Intelligence

It has been shown that post-editing (PE) can not only yield productivity gains of 36% [9], but also increase quality [2]. This paper discusses how human and artificial intelligence can be combined for efficient language translation, which tools already exist, and which open challenges remain (see Figure 1). The PE process starts with an initial draft that is proposed by the AI and that the human uses as a basis. There are two main sources for this proposal: machine translation (MT) and a translation memory (TM). Simply put, TMs are large databases of already completed human translations, which are matched (using fuzzy or exact matching) against the sentence to be translated to provide a starting point for PE.
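The fuzzy-matching step of a TM lookup is simple to sketch. The following toy example uses Python's standard-library difflib as a stand-in similarity measure; production TM systems use more sophisticated, linguistically informed matching, and the segments below are invented.

```python
import difflib

# Toy translation memory: previously completed human translations (EN -> DE).
TM = {
    "The contract ends on 31 December.": "Der Vertrag endet am 31. Dezember.",
    "Please sign the attached document.": "Bitte unterschreiben Sie das beigefügte Dokument.",
}

def tm_lookup(sentence: str, threshold: float = 0.75):
    """Return the closest fuzzy TM match and its stored translation as a
    draft for post-editing, or None if nothing is similar enough."""
    best = max(TM, key=lambda seg: difflib.SequenceMatcher(None, sentence, seg).ratio())
    score = difflib.SequenceMatcher(None, sentence, best).ratio()
    return (best, TM[best], score) if score >= threshold else None

# A near-match retrieves the stored human translation as a starting point.
print(tm_lookup("The contract ends on 30 December."))
```

An exact match can be reused directly; a fuzzy match above the threshold gives the post-editor a draft that typically needs only a small correction (here, the date).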


OpenKiwi: An Open Source Framework for Quality Estimation

#artificialintelligence

A year ago we told you why Quality Estimation is the missing piece in Machine Translation. Today, we have some exciting news to share about a new project from our AI Research team, built with my colleagues Fábio Kepler, Sony Trénous, and Miguel Vera. Since 2016, Unbabel's AI team has been focused on advancing the state of the art in Quality Estimation (QE). Our models are running in production systems for 14 language pairs, with coverage and performance improving over time, thanks to the increasing amount of data produced by our human post-editors on a daily basis. This combination of AI and humans is what makes our translation pipeline fast and accurate, at scale.
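The core task QE solves, predicting how good an MT output is without any reference translation, can be demonstrated with a deliberately toy sketch. The features and training data below are invented for illustration; OpenKiwi's actual models are neural and far richer than this.

```python
from sklearn.linear_model import LinearRegression

def features(src: str, mt: str):
    """Toy QE features computed without a reference translation:
    length ratio and type overlap between source and MT output."""
    s, m = set(src.split()), set(mt.split())
    length_ratio = len(mt.split()) / max(len(src.split()), 1)
    overlap = len(s & m) / max(len(s | m), 1)
    return [length_ratio, overlap]

# Invented training set: (source, MT output) -> post-editing effort (HTER-like)
X = [
    features("the cat sat", "le chat était assis"),
    features("good morning", "bonjour bonjour bonjour"),
    features("thank you very much", "merci beaucoup"),
]
y = [0.1, 0.8, 0.05]

model = LinearRegression().fit(X, y)
print(model.predict([features("the dog sat", "le chien était assis")]))
```

The point is the interface, not the model: given only the source and the MT output, a QE system predicts how much post-editing the output will need, which is what lets a pipeline route low-quality outputs to human editors.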


What Did We Learn at the New Work Summit?

#artificialintelligence

MR. METZ: This is an ongoing problem. There have been very real and very significant gains in image recognition, speech recognition and language translation over the last several years. That can help with talking digital assistants, driverless cars and certain aspects of health care -- not to mention face recognition services and autonomous weapons. Driverless cars are still years from the mainstream. Better translation is very different from a more general intelligence that can do anything a human can do.


How machine learning can be used to break down language barriers

#artificialintelligence

Machine learning has transformed major aspects of the modern world with great success. Self-driving cars, intelligent virtual assistants on smartphones, and cybersecurity automation are all examples of how far the technology has come. But of all the applications of machine learning, few have the potential to shape our economy as radically as language translation. Language translation is a natural problem for machine learning to tackle: language operates on a set of predictable rules, yet with a degree of variation that makes those rules difficult to pin down by hand.


Jointly Optimizing Diversity and Relevance in Neural Response Generation

arXiv.org Artificial Intelligence

Although recent neural conversation models have shown great potential, they often generate bland and generic responses. While various approaches have been explored to diversify the output of the conversation model, the improvement often comes at the cost of decreased relevance. In this paper, we propose a method to jointly optimize diversity and relevance that essentially fuses the latent space of a sequence-to-sequence model and that of an autoencoder model by leveraging novel regularization terms. As a result, our approach induces a latent space in which the distance and direction from the predicted response vector roughly match the relevance and diversity, respectively. This property also lends itself well to an intuitive visualization of the latent space. Both automatic and human evaluation results demonstrate that the proposed approach brings significant improvement compared to strong baselines in both diversity and relevance.
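The abstract's key mechanism, regularization terms that fuse the seq2seq latent space with the autoencoder latent space, can be sketched minimally. The snippet below is one plausible reading (pull paired latent codes together for relevance, push distinct responses apart for diversity); the paper's actual regularizers and geometry may differ, and all names are assumptions.

```python
import torch
import torch.nn.functional as F

def fusion_regularizer(z_s2s, z_ae):
    """Toy regularization in the spirit of fusing the two latent spaces.

    z_s2s: (batch, dim) latent vectors predicted from each context
           by the sequence-to-sequence model.
    z_ae:  (batch, dim) latent codes of the paired responses from the
           autoencoder. Assumes batch size > 1.
    """
    # Relevance: a context's predicted vector should sit near the latent
    # code of its own response.
    attract = F.mse_loss(z_s2s, z_ae)
    # Diversity: latent codes of different responses should not collapse
    # onto one point, so penalize small pairwise distances in the batch.
    dist = torch.cdist(z_ae, z_ae)
    n = z_ae.size(0)
    off_diag = dist[~torch.eye(n, dtype=torch.bool)]
    repel = (-off_diag).exp().mean()
    return attract + repel
```

Under such terms, distance from the predicted vector tracks relevance and direction tracks which distinct response you get, which is what makes the latent space easy to visualize and to sample diversely from.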


Neuroscience-Inspired Artificial Intelligence

#artificialintelligence

Among the works cited: "Learning to combine foveal glimpses with a third-order Boltzmann machine"; "Multiple object recognition with visual attention"; "Show, attend and tell: neural image caption generation with visual attention"; "Neural machine translation by jointly learning to align and translate"; "Learning what and where to draw".


State-Of-The-Art Methods For Neural Machine Translation & Multilingual Tasks

#artificialintelligence

The quality of machine translation produced by state-of-the-art models is already quite high and often requires only minor corrections from professional human translators. This is especially true for high-resource language pairs like English-German and English-French. So the main focus of recent research in machine translation has been improving performance for low-resource language pairs, where we have access to large monolingual corpora in each language but lack sufficiently large parallel corpora. Facebook AI researchers appear to lead in this area and have introduced several interesting solutions for low-resource machine translation over the last year, including augmenting the training data with back-translation (sketched below), learning joint multilingual sentence representations, and extending BERT to a cross-lingual setting.
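Back-translation is simple enough to sketch end to end: a reverse (target-to-source) model translates monolingual target-language text, and the resulting synthetic pairs augment the parallel training data for the forward model. The `reverse_translate` function below is a hypothetical stand-in for any trained reverse MT model; the toy dictionary exists only so the example runs.

```python
def back_translate(monolingual_target, reverse_translate):
    """Turn monolingual target-language sentences into synthetic parallel
    data: (synthetic source, real target) pairs for training the
    forward (source -> target) model."""
    return [(reverse_translate(t), t) for t in monolingual_target]

# Toy stand-in for a trained German -> English reverse model.
toy_reverse = {"Guten Morgen": "Good morning", "Danke": "Thank you"}.get

synthetic_pairs = back_translate(["Guten Morgen", "Danke"], toy_reverse)
print(synthetic_pairs)
# [('Good morning', 'Guten Morgen'), ('Thank you', 'Danke')]
```

The asymmetry is the point: the target side of each synthetic pair is genuine human text, so the forward model still learns to produce fluent output even though the source side is machine-generated.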