"Machine translation (MT) is the application of computers to the task of translating texts from one natural language to another. One of the very earliest pursuits in computer science, MT has proved to be an elusive goal, but today a number of systems are available which produce output which, if not perfect, is of sufficient quality to be useful in a number of specific domains."
– Definition from the European Association for Machine Translation (EAMT).
YOU'VE heard of it in movies or in passing conversations. Maybe your workplace uses it, or you're considering using it yourself. As technology continues to make ripples across the workplace, AI has become increasingly prevalent. Through AI, companies can analyse large amounts of data, allowing them to engage better with customers. Today, AI is easily accessible.
Machine learning is one of the newest technologies poised to make significant changes in the way companies conduct their business. Machine learning refers to computer technology that produces intelligent output based on algorithmic decisions made after processing a user's input. While still in its infancy, machine learning has already started being rolled out to consumers through applications such as Apple's Siri, Amazon's Alexa, and Microsoft's Cortana, among others. Apart from voice, the technology is also used to process image data. Various reports indicate that advanced machine learning systems will leave translators out of work in the near future.
Being tongue-tied on holiday could become a thing of the past thanks to a major update to Google's Translate feature. Google has now introduced new Translate AI which both Android and iPhone users can take advantage of. Google introduced neural machine translation two years ago, and this new AI is set to improve on previous translation features: it can be used offline in 59 languages, including English, Arabic, Chinese, German, and Hindi, to name a few, with only 35MB needed per language. The Google app will allegedly be able to produce more accurate results than its predecessors, and at a much faster rate.
Neural-network-based language translators can be tricked into deleting words from sentences or dramatically changing the meaning of a phrase, by strategically inserting typos and numbers. Just like twiddling pixels in a photo, or placing a specially crafted sticker near an object, can make image-recognition systems mistake bananas for toasters, it is possible to alter the translation of a sentence by tweaking the input. This isn't like altering "The black cat" to "The black cap", and making an English-to-French translation AI change its output from "Le chat noir" to "Le chapeau noir." That change is to be expected. No, we're talking about, for example, tweaking "Er ist Geigenbauer und Psychotherapeut" (He is a violin maker and psychotherapist) to "Er ist Geigenbauer und Psy6hothearpeiut", and getting the translation: "He is a brick maker and a psychopath."
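The kind of small character-level perturbation described above can be sketched in a few lines. This is illustrative only: the function name `typo` and the swap-adjacent-characters strategy are assumptions for demonstration, whereas the reported attacks search for the specific perturbations that most damage a model's translation.

```python
import random

def typo(word, rng=random.Random(0)):
    """Introduce a typo by swapping two adjacent inner characters.
    (Illustrative perturbation only; real attacks pick changes that
    maximally alter the model's output.)"""
    if len(word) < 4:
        return word  # too short to perturb safely
    i = rng.randrange(1, len(word) - 2)
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

print(typo("Psychotherapeut"))
```

Feeding such minimally perturbed words to a translator is what turns "Psychotherapeut" into something a model may render as "psychopath".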
Machine translation, sometimes referred to by the abbreviation MT, is a very challenging task that investigates the use of software to translate text or speech from one language to another. Traditionally, it involves large statistical models developed using highly sophisticated linguistic knowledge. Here, we are going to use deep neural networks for the problem of machine translation. We will discover how to develop a neural machine translation model for translating English to French. Our model will accept English text as input and return the French translation.
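Before any neural model can accept English text as input, sentences must be turned into fixed-length arrays of integer ids. The sketch below shows that preprocessing step under stated assumptions: the toy English-French pairs, the `build_vocab` and `encode` helpers, and the padding scheme are all invented for illustration (a real system would tokenize a large parallel corpus such as WMT data).

```python
import numpy as np

# Toy parallel corpus (assumed examples, not real training data).
pairs = [("he is a violin maker", "il est luthier"),
         ("the black cat", "le chat noir"),
         ("she is happy", "elle est heureuse")]

def build_vocab(sentences):
    """Map each word to an integer id; id 0 is reserved for padding."""
    vocab = {"<pad>": 0}
    for s in sentences:
        for w in s.split():
            vocab.setdefault(w, len(vocab))
    return vocab

def encode(sentence, vocab, max_len):
    """Turn a sentence into a fixed-length array of word ids, zero-padded."""
    ids = [vocab[w] for w in sentence.split()]
    return np.array(ids + [0] * (max_len - len(ids)))

en_vocab = build_vocab(p[0] for p in pairs)
max_len = max(len(p[0].split()) for p in pairs)
X = np.stack([encode(p[0], en_vocab, max_len) for p in pairs])
print(X.shape)  # (3, 5): one padded row of word ids per English sentence
```

These integer arrays are what an encoder-decoder network would then embed and process; the French side is prepared the same way to serve as the decoder's target.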
Research on question answering over knowledge bases has recently seen an increasing use of deep architectures. In this extended abstract, we study the application of the neural machine translation paradigm to question parsing. We employ a sequence-to-sequence model to learn graph patterns in the SPARQL graph query language and their compositions. Instead of inducing the programs through question-answer pairs, we adopt a semi-supervised approach, where alignments between questions and queries are built through templates. We argue that the coverage of language utterances can be expanded using recent notable works in natural language generation.
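Building question-query alignments through templates can be sketched as follows. Everything here is a hypothetical illustration: the template strings, the `dbr:`/`dbo:` prefixes, and the example slot fillers are assumptions, not taken from the abstract, but they show how one template yields many aligned (question, SPARQL) training pairs for a sequence-to-sequence model.

```python
# Hypothetical template pair: the same slots fill both the natural-language
# side and the SPARQL side, producing aligned training examples.
question_template = "who is the {relation} of {entity}?"
sparql_template = "SELECT ?x WHERE {{ dbr:{entity} dbo:{relation} ?x }}"

def instantiate(entity, relation):
    """Fill both templates with the same slots, yielding one aligned
    (question, query) pair for sequence-to-sequence training."""
    return (question_template.format(entity=entity, relation=relation),
            sparql_template.format(entity=entity, relation=relation))

training_pairs = [instantiate(e, r)
                  for e, r in [("Germany", "capital"),
                               ("Nile", "sourceCountry")]]
print(training_pairs[0][1])  # SELECT ?x WHERE { dbr:Germany dbo:capital ?x }
```

The sequence-to-sequence model then learns to map the question side to the query side, generalizing across entities and relations that share a template.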
Evaluating on adversarial examples has become a standard procedure to measure robustness of deep learning models. Due to the difficulty of creating white-box adversarial examples for discrete text input, most analyses of the robustness of NLP models have been done through black-box adversarial examples. We investigate adversarial examples for character-level neural machine translation (NMT), and contrast black-box adversaries with a novel white-box adversary, which employs differentiable string-edit operations to rank adversarial changes. We propose two novel types of attacks which aim to remove or change a word in a translation, rather than simply break the NMT. We demonstrate that white-box adversarial examples are significantly stronger than their black-box counterparts in different attack scenarios, revealing more serious vulnerabilities than previously known. In addition, after performing adversarial training, which takes only three times longer than regular training, we can improve the model's robustness significantly.
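The black-box side of this contrast can be illustrated with a greedy search over candidate character flips. This is a minimal sketch under stated assumptions: the `char_flips` candidate set, the greedy single-flip `attack`, and the stand-in `loss` function are all invented for illustration (a real black-box attack would score candidates by querying the NMT model, and a white-box attack would rank the same edits by gradient instead of enumerating them).

```python
import string

def char_flips(sentence):
    """Yield every one-character substitution of the input: the candidate
    set a black-box adversary searches over."""
    for i in range(len(sentence)):
        for c in string.ascii_lowercase:
            if c != sentence[i]:
                yield sentence[:i] + c + sentence[i + 1:]

def attack(sentence, loss):
    """Greedy black-box attack: keep the single flip that most increases
    the caller-supplied translation loss."""
    return max(char_flips(sentence), key=loss)

# Stand-in loss for demonstration: edit distance from the original string.
# A real attack would instead measure damage to the model's translation.
orig = "er ist geigenbauer"
adv = attack(orig, lambda s: sum(a != b for a, b in zip(s, orig)))
```

Repeating this greedy step yields multi-character perturbations; the paper's white-box adversary reaches stronger perturbations faster because differentiable string edits let it rank all candidates without querying the model once per flip.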
China's top voice recognition firm iFlytek has penned a deal with China International Publishing Group to build a national artificial intelligence translator and keep up with rising demand. AI translations can lift the burden off human translators, who can barely keep up with requirements at government departments and companies looking to operate overseas, state-owned news agency Xinhua cited CIPG Deputy Director Fang Zhenghui as saying. The machine can translate Chinese into 33 languages, added Liu Qingfeng, president of Anhui-based iFlytek, saying it uses cutting-edge technology to improve the accuracy of machine translations. "When translation machines fail to recognize some special nouns or specific terms, human translators can monitor the process and help to polish the text," he said. "The machine [can] learn from these mistakes and improve its work next time."
This year, we have seen an acceleration of Silicon Valley tech giants opening AI research labs around the world as they seek to gain traction among researchers and fulfill their global ambitions. In the past six months or so, Google brought labs to China and France, Facebook opened labs in Pittsburgh and Seattle, and Microsoft announced plans to open labs near universities in Berkeley, California and Melbourne, Australia. This trend shows no signs of slowing down. Last month, Samsung announced labs in Cambridge, Moscow, and Toronto. This week, Nvidia announced plans to open a new lab in Toronto, while Google shared plans to open a lab in Accra, Ghana, Google's first in Africa and perhaps the first of any tech giant in Africa.
Nvidia has released a bunch of new tools for savvy AI developers in time for the Computer Vision and Pattern Recognition conference in Salt Lake City on Tuesday. Some of them were previously announced at its GPU Technology Conference (GTC) earlier this year. The beta platform for using graphics cards with the Kubernetes system is now available for developers to test out. It's aimed at enterprises dealing with heavy AI workloads that need to be shared across multiple GPU cloud clusters. Large datasets and models take a long time to train, so using Kubernetes will speed up training and inference.