Machine Translation


Artificial Intelligence: An Interview With Maria Johnsen

#artificialintelligence

Currently, Google's RankBrain, an AI process used to help set search engine rankings, is having a major impact that is only expected to expand. Neural language models and machine learning can be used to build better translation systems that make companies' lives easier. Real-time analytics and search are an integral part of search engines, and the cloud makes them possible. The big search engine trend combines three components: big data and search, real-time personalization, and machine learning.


The future of translation is part human, part machine

#artificialintelligence

Greece and Rome were, like many areas of the ancient world, multilingual, and so needed both translators and interpreters. My own thesis on English-to-Welsh translation – due to be published later this year – shows that a translator working to correct the output from machine translation achieves higher productivity and quicker translation. Well over 350,000 people speak Welsh every day, while local authorities across the UK are also translating into numerous other languages. Today, machine translation can create rough drafts of relatively simple language, and research shows that correcting this draft is usually more efficient than translation from scratch by a human.


Tencent Rolls Out Its First AI-Assisted Translation Software

#artificialintelligence

The AI-assisted translation software not only provides sentence translation, translation of photographed text and dictionary services but also offers simultaneous interpretation, Yicai Global learned. For Chinese-to-English translation, Fanyijun can identify more than 90 percent of Chinese words but can only render popular Chinese words of the day in pinyin, Yicai Global found after using it. Aside from simultaneous interpretation, the software also provides voice translation, a dictionary, and translation of photographed text. Fanyijun is the first voice translation software introduced by Tencent that uses AI technology the company developed itself.


Advancing AI Capabilities with Next-Generation HPC Solutions

#artificialintelligence

HPE and NVIDIA are delivering IT solutions with superhuman intelligence to harness the full power of AI and pioneer the next generation of HPC systems. Topping this list of solutions, the NVIDIA Volta architecture is fueling some of the most powerful supercomputers in the U.S. By combining AI with traditional HPC applications on a single platform, Volta rapidly accelerates workloads for HPC, AI training, AI inference, and virtual desktops. Powered by Volta, the NVIDIA Tesla V100 pairs 5,120 CUDA cores with 640 new Tensor Cores to deliver 120 TeraFLOPS of deep learning performance, 7.5 TeraFLOPS of double-precision performance, and 15 TeraFLOPS of single-precision performance to turbocharge both HPC and AI, making it the most advanced data center GPU ever built. In addition to enhancing speed and accessibility, NVIDIA TensorRT, a deep learning inference optimizer and runtime engine, enables 3.5x faster inference performance and delivers dramatic throughput gains even within the sub-7-millisecond latency required by real-time AI services.
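As a rough sanity check on those headline numbers, peak throughput is commonly estimated as cores × clock × 2 FLOPs per cycle (one fused multiply-add counts as two floating-point operations). The boost-clock figure below is an assumption chosen for illustration; it does not come from the article.

```python
# Rough peak-FLOPS estimate for a GPU: cores * clock * 2, since one
# fused multiply-add counts as two floating-point operations per cycle.
cuda_cores = 5120           # CUDA core count quoted in the article
boost_clock_hz = 1.455e9    # assumed boost clock, for illustration only

single_precision_tflops = cuda_cores * boost_clock_hz * 2 / 1e12
# FP64 units run at half the FP32 rate on this class of data center GPU.
double_precision_tflops = single_precision_tflops / 2

print(round(single_precision_tflops, 1))  # 14.9, close to the quoted 15 TFLOPS
print(round(double_precision_tflops, 1))  # 7.4, close to the quoted 7.5 TFLOPS
```

With that assumed clock the estimate lands within a rounding error of the quoted figures; the separately listed 120 TeraFLOPS comes from the Tensor Cores, which use a different (mixed-precision matrix) execution path not captured by this formula.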


Google wants to build 'people-centric' AI systems

Mashable

They also want to pair that perspective with tools and training that help those building AI and machine learning systems view and develop them in a more people-centric way. To help guide AI development, the team will focus on three areas: tools for AI engineers and researchers; tools to help people working in verticals like healthcare, farming, and music apply AI effectively; and the inclusivity of AI systems. Both groups recognize that the key to unbiased AI is better training data (machine learning systems are typically trained on mountains of example data to build artificial intelligence around things like object and face recognition, speech translation, and even cancer research). In April, researchers at the University of Bath in the United Kingdom and Princeton University found gender-based bias in numerous AI systems, including Google Translate, which automatically "converts gender-neutral pronouns from several languages into 'he' when talking about a doctor, and 'she' when talking about a nurse."
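The kind of association the Bath and Princeton researchers measured can be illustrated with a toy cosine-similarity check. The three-dimensional "embeddings" below are invented purely for demonstration and do not come from any real model; real studies use the same idea over vectors learned from large text corpora.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors, invented for illustration: in a biased embedding space,
# "doctor" sits closer to "he" and "nurse" sits closer to "she".
vectors = {
    "he":     [0.9, 0.1, 0.0],
    "she":    [0.1, 0.9, 0.0],
    "doctor": [0.8, 0.2, 0.1],
    "nurse":  [0.2, 0.8, 0.1],
}

def gender_lean(word):
    """Positive: closer to 'he'; negative: closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

print(gender_lean("doctor") > 0)  # True: leans toward "he" in this toy space
print(gender_lean("nurse") < 0)   # True: leans toward "she"
```

A translation system trained on such a space has no principled way to pick a pronoun for a gender-neutral source word, so it falls back on these learned associations.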


AI and machine learning on social media data is giving hedge funds a competitive edge

#artificialintelligence

The parameters by which an ever-expanding data set, including the likes of Twitter, pictures, text and video, is processed are evolving: relying on experts versus the wisdom of the crowd; sentiment derived from a "bag of words" as opposed to structured linguistic analysis. Professor Gautum Mitra of OptiRisk Systems introduced Elijah DePalma and James Cantarella, Thomson Reuters; Pierce Crosby, StockTwits; Anders Bally, Sentifi; Peter Hafez, RavenPack; and Stephen Morse, Twitter. DePalma differed somewhat from the others because the Thomson Reuters sentiment engine uses only accredited Reuters news data rather than raw social media chatter. "Why not take an auto-translation engine like Google Translate, translate the Japanese language to English and apply your engine?"
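The "bag of words" approach contrasted here can be sketched in a few lines: count lexicon hits and ignore word order and syntax entirely, which is exactly the simplification that structured linguistic analysis tries to improve on. The word lists and sample sentences below are invented for illustration, not drawn from any vendor's engine.

```python
# Minimal bag-of-words sentiment scorer for finance-flavored text.
# Word lists are illustrative only; punctuation handling and negation
# ("not strong") are deliberately omitted, which is the method's weakness.
POSITIVE = {"beat", "strong", "upgrade", "growth", "bullish"}
NEGATIVE = {"miss", "weak", "downgrade", "loss", "bearish"}

def sentiment(text):
    """Return (positive hits) - (negative hits), ignoring word order."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("Strong growth and an analyst upgrade"))  # 3
print(sentiment("Weak quarter, big loss"))                # -2
```

Because the score is order-blind, "growth miss" and "miss growth" score identically, and a phrase like "no loss" still counts as negative; that is the trade-off a structured linguistic approach is meant to fix.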


AI Programming: So Much Uncertainty - The New Stack

#artificialintelligence

Much work, and many tools, are still needed to integrate artificial intelligence into the software engineering workflow, noted Peter Norvig, Google's director of research, speaking at the O'Reilly Artificial Intelligence conference in New York last week. "AI systems are fundamentally dealing with uncertainty whereas traditional software is fundamentally trying to hide uncertainty," Norvig said. We have options, the Google research director noted. Traditionally, machine translation systems were made of a pipeline of probabilistic statistical models.
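The classic statistical pipeline Norvig refers to follows the noisy-channel formulation: choose the target sentence e that maximizes P(e) · P(f | e), combining a language model (fluency) with a translation model (adequacy). The probability tables below are made up for illustration; real systems estimate them from large parallel corpora.

```python
# Toy noisy-channel decoder: pick the candidate translation e that
# maximizes P(e) * P(f | e). All probabilities are invented for
# illustration only.
language_model = {            # P(e): how fluent the English candidate is
    "the cat sleeps": 0.04,
    "the cat sleep": 0.001,
}
translation_model = {         # P(f | e): how well e explains the source f
    ("le chat dort", "the cat sleeps"): 0.5,
    ("le chat dort", "the cat sleep"): 0.6,
}

def decode(source, candidates):
    """Return the candidate with the highest combined score."""
    return max(candidates,
               key=lambda e: language_model[e] * translation_model[(source, e)])

best = decode("le chat dort", ["the cat sleeps", "the cat sleep"])
print(best)  # the cat sleeps
```

Note how the fluent candidate wins even though its translation-model score is lower: every component, and the final choice, is explicitly a probability, which is the "dealing with uncertainty" Norvig contrasts with traditional software.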


AI – The Present in the Making

#artificialintelligence

But Professor Jon Oberlander disagrees. With a plethora of functions, Alexa quickly gained much popularity and fame. The next thing on Professor Jon Oberlander's list was labeling images on search engines. Over the years, machine translation has also gained popularity as numerous people around the world rely on these translators.