Machine Translation

Microsoft Unveiled a New Language Translation Feature for Its HoloLens Holograms Digital Trends


Not only is it possible to have a fairly realistic holographic replica of yourself, but Microsoft has just shown that it is also possible to have that replica speak in different languages. According to The Verge, on Wednesday, July 17, Microsoft demonstrated this latest innovation during its keynote at the Microsoft Inspire partner conference in Las Vegas. Tom Warren of The Verge posted a video clip on YouTube of Microsoft's demonstration of the hologram's language translation capabilities. The demonstration featured Azure executive Julia White, a HoloLens 2 headset, and White's hologram. White's hologram began as a small green outline that she could hold in her hand, but as soon as she uttered two simple words, "render keynote," it grew into a fully rendered, human-sized replica of White and immediately began delivering the keynote speech in Japanese, in a voice that still matched White's.

Compensating for NLP's Lack of Understanding


The saying "a picture is worth a thousand words" does something of an injustice to the medium of language. It suggests that words are an inefficient form of communication when in fact the opposite is true. When humans use language to communicate, so much is left out because the speaker and listener share experience of the same world, which makes explicit statements about that shared world unnecessary in everyday speech. For example, if I say to you "the vase is on its side, rolling along the table," I don't need to also tell you that the vase is made of fragile stuff (it's a reasonable assumption that it is), or that the table doesn't have edges that will stop the vase's rolling, or that as a result the vase will likely roll off the table, or that gravity will make the vase fall to the floor, which is hard and will therefore cause the fragile vase to shatter. It's enough for me to say "the vase is on its side, rolling along the table" for you to know the vase will likely smash to pieces unless someone intervenes.

The Challenge of Open Source MT SDL


The vast majority of open-source MT efforts fail because they do not consistently produce output that is equal to, or better than, any easily accessed public MT solution, or because they cannot be deployed effectively. This is not to say that success is impossible, but the investment and long-term commitment required are often underestimated or simply not properly understood. A case can always be made for private systems that offer greater control and security, even if they are generally less accurate than public MT options. However, in the localization industry we see that if "free" MT solutions superior to an LSP-built system are available, translators will use them. We also find that for the few self-developed MT systems that do produce useful output quality, integration issues often impede deployment at enterprise scale.

The Challenge of Open Source Machine Translation


We live in a time of proliferating open-source machine learning and AI development platforms. Thus, people believe that given a large amount of data and a few computers, a functional and useful MT system can be developed with a do-it-yourself (DIY) tool kit. However, as many who have tried have found out, the reality is much more complicated, and the path to success is long, winding and sometimes even treacherous.

Bridging the Gap between Training and Inference for Neural Machine Translation Machine Learning

Neural Machine Translation (NMT) generates target words sequentially by predicting the next word conditioned on the context words. At training time, it predicts with the ground-truth words as context, while at inference it has to generate the entire sequence from scratch. This discrepancy in the fed context leads to error accumulation along the way. Furthermore, word-level training requires strict matching between the generated sequence and the ground-truth sequence, which leads to overcorrection of different but reasonable translations. In this paper, we address these issues by sampling context words not only from the ground-truth sequence but also from the sequence predicted by the model during training, where the predicted sequence is selected with a sentence-level optimum. Experimental results on Chinese->English and WMT'14 English->German translation tasks demonstrate that our approach achieves significant improvements on multiple datasets.
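The core training trick described in the abstract, mixing ground-truth and model-predicted context words, can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: `choose_context_tokens` and `sample_prob` are hypothetical names, and a real system would do this inside the decoder with a probability that changes over training.

```python
import random

def choose_context_tokens(ground_truth, predicted, sample_prob, rng=random):
    """Build the decoder context for the next training step.

    At each position, feed the model's own prediction with probability
    sample_prob (exposing it to inference-like conditions); otherwise
    feed the ground-truth token (classic teacher forcing).
    """
    context = []
    for gt_tok, pred_tok in zip(ground_truth, predicted):
        token = pred_tok if rng.random() < sample_prob else gt_tok
        context.append(token)
    return context

# With sample_prob=0.0 this reduces to pure teacher forcing;
# with sample_prob=1.0 the decoder conditions only on its own output.
```

Gradually increasing `sample_prob` over training moves the model from teacher forcing toward inference-like decoding, which is the gap between training and inference the paper's title refers to.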

Multilingual translation tools spread in Japan with new visa system

The Japan Times

The use of multilingual translation tools is expanding in Japan, where the number of foreign workers is expected to increase in the wake of April's launch of new visa categories. A growing number of local governments, labor unions and other entities have decided to introduce translation tools, which can help foreigners going through administrative procedures by allowing local officials and other officers to speak with applicants in their native languages. "Talking in the applicants' own languages makes it easier to convey our cooperative stance," said an official in Tokyo's Sumida Ward. The ward introduced VoiceBiz, an audio translation app developed by Toppan Printing Co. that covers 30 languages. The app, which can be downloaded onto smartphones and tablets, will be used in eight municipalities, including Osaka and Ayase in Kanagawa Prefecture, company officials said.

A Focus on Neural Machine Translation for African Languages Machine Learning

African languages are numerous, complex and low-resourced. The datasets required for machine translation are difficult to discover, and existing research is hard to reproduce. Minimal attention has been given to machine translation for African languages, so there is scant research regarding the problems that arise when applying machine translation techniques to them. To begin addressing these problems, we trained models to translate English into five of the official South African languages (Afrikaans, isiZulu, Northern Sotho, Setswana, Xitsonga), making use of modern neural machine translation techniques. The results show the promise of neural machine translation for African languages. By providing reproducible, publicly available data, code and results, this research aims to provide a starting point for other researchers in African machine translation to compare to and build upon.

Resolving Gendered Ambiguous Pronouns with BERT Machine Learning

Pronoun resolution is part of coreference resolution, the task of pairing an expression with the entity it refers to. This is an important task for natural language understanding and a necessary component of machine translation systems, chatbots and assistants. Neural machine learning systems perform far from ideally on this task, with F1 scores as low as 73% on modern benchmark datasets. Moreover, they tend to perform better for masculine pronouns than for feminine ones. Thus, the problem is both challenging and important for NLP researchers and practitioners. In this project, we describe our BERT-based approach to gender-balanced pronoun resolution. We reach a 92% F1 score and much lower gender bias on the benchmark dataset shared by the Google AI Language team.
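The gender bias the abstract measures amounts to comparing F1 computed separately over masculine and feminine pronoun examples. A minimal sketch of that comparison follows; the counts below are made-up illustrations, not numbers from the paper.

```python
def f1_score(tp, fp, fn):
    """F1 from raw true-positive / false-positive / false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: a system that resolves masculine pronouns better
# than feminine ones shows a gap between the two per-gender F1 scores.
masculine_f1 = f1_score(tp=85, fp=10, fn=15)
feminine_f1 = f1_score(tp=70, fp=20, fn=30)
gender_gap = masculine_f1 - feminine_f1  # the "bias" being measured
```

Lowering this gap, while keeping overall F1 high, is the gender-balanced goal the project describes.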

Unsupervised Pivot Translation for Distant Languages Artificial Intelligence

Unsupervised neural machine translation (NMT) has attracted a lot of attention recently. While state-of-the-art methods for unsupervised translation usually perform well between similar languages (e.g., English-German), they perform poorly between distant languages, because unsupervised alignment does not work well for distant language pairs. In this work, we introduce unsupervised pivot translation for distant languages, which translates a language to a distant language through multiple hops, where the unsupervised translation on each hop is easier than the original direct translation. We propose a learning-to-route (LTR) method to choose the translation path between the source and target languages. LTR is trained on language pairs whose best translation path is available and is applied to unseen language pairs for path selection. Experiments on 20 languages and 294 distant language pairs demonstrate the advantages of unsupervised pivot translation for distant languages, as well as the effectiveness of the proposed LTR for path selection. Specifically, in the best case, LTR achieves an improvement of 5.58 BLEU points over the conventional direct unsupervised method.
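Mechanically, pivot translation is just function composition over a chosen path of languages. The sketch below shows only that routing skeleton: `translate()` is a hypothetical stand-in for a trained unsupervised MT model for each hop (here it merely tags the text so the hops are visible), and the path itself would come from the LTR model described in the abstract.

```python
def translate(text, src, tgt):
    # Placeholder for a trained unsupervised MT model for the
    # (src, tgt) pair; here it just records the hop taken.
    return f"{text} [{src}->{tgt}]"

def pivot_translate(text, path):
    """Translate along a multi-hop path, e.g. ["da", "en", "gl"]."""
    for src, tgt in zip(path, path[1:]):
        text = translate(text, src, tgt)
    return text

# A direct da->gl system would be one hard hop; routing through a
# high-resource language like English splits it into two easier ones:
pivot_translate("hej", ["da", "en", "gl"])
```

The language codes above are illustrative; the paper's contribution is learning which such path to take for each unseen source-target pair.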