Machine Translation


NVIDIA's AI-Driven Data Center Business Could Grow by 18 Times in 5 Years

#artificialintelligence

NVIDIA (NASDAQ: NVDA) recently reported powerful fiscal first-quarter 2019 results. The graphics processing unit (GPU) specialist's revenue jumped 66%, GAAP earnings per share soared 151%, and adjusted EPS surged 141%. A wealth of information about the company's results and future prospects was shared on the earnings call. Our focus here is on NVIDIA's data center business, which is growing like gangbusters -- its revenue grew 71% year over year to $701 million in the quarter, accounting for 22% of the company's total revenue. As management put it on the call: "We see the data center opportunity as very large, fueled by growing demand for accelerated computing and applications ranging from AI [artificial intelligence] to high-performance computing across multiple market segments and vertical industries."


How AI Is Making Prediction Cheaper

#artificialintelligence

Avi Goldfarb, a professor at the University of Toronto's Rotman School of Management, explains the economics of machine learning, a branch of artificial intelligence that makes predictions. He says that as prediction gets cheaper and better, machines are going to be doing more of it. That means businesses -- and individual workers -- need to figure out how to take advantage of the technology to stay competitive. Goldfarb is the coauthor of the book Prediction Machines: The Simple Economics of Artificial Intelligence. CURT NICKISCH: Welcome to the HBR IdeaCast, from Harvard Business Review. The episode opens with a YouTube clip. YOUTUBE: [Two women speaking] "We've got this all tabbed up?" In it, three young English-speaking women use Google Translate to order food in Hindi from an Indian restaurant. They copy and paste their order in English into the computer, and it translates items like "samosas" and reads them aloud in the foreign language.


The mind-blowing AI announcement from Google that you probably missed.

#artificialintelligence

In the closing weeks of 2016, Google published an article that quietly sailed under most people's radars. Which is a shame, because it may just be the most astonishing article about machine learning that I read last year. Don't feel bad if you missed it. Not only was the article competing with the pre-Christmas rush that most of us were navigating -- it was also tucked away on Google's Research Blog, beneath the geektastic headline "Zero-Shot Translation with Google's Multilingual Neural Machine Translation System." This doesn't exactly scream must-read, does it?


What do linguists make of AI and natural language processing?

#artificialintelligence

What do linguists make of AI and natural language processing (NLP)? Do they see a bright future for their careers with AI, or do they worry about being replaced by it entirely? To find out, Locaria ran a survey of 150 linguists from across the globe. The survey combined multiple-choice questions with open-ended ones in which linguists answered in their own words. An essential part of the survey asked each linguist to describe their feelings towards AI, NLP, and machine translation.


Finding Frequent Entities in Continuous Data

arXiv.org Machine Learning

In many applications that involve processing high-dimensional data, it is important to identify a small set of entities that account for a significant fraction of detections. Rather than formalize this as a clustering problem, in which all detections must be grouped into hard or soft categories, we formalize it as an instance of the frequent items or heavy hitters problem, which finds groups of tightly clustered objects that have a high density in the feature space. We show that the heavy hitters formulation generates solutions that are more accurate and effective than the clustering formulation. In addition, we present a novel online algorithm for heavy hitters, called HAC, which addresses problems in continuous space, and demonstrate its effectiveness on real video and household domains.
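
The paper's HAC algorithm itself targets continuous feature spaces and is not reproduced here, but the discrete frequent-items (heavy hitters) problem it builds on is easy to sketch. Below is a minimal Misra-Gries pass in Python; the function name, the toy stream, and the threshold choice are our illustration, not the paper's code.

```python
def misra_gries(stream, k):
    """One-pass Misra-Gries summary with at most k-1 counters.
    Any item occurring more than len(stream)/k times is guaranteed
    to survive the pass (the result may include extra candidates)."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # No free counter: decrement all, dropping those at zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

# Toy stream of 88 detections; with k=4, items seen more than
# 88/4 = 22 times ("cat" and "dog") are guaranteed to appear.
stream = ["cat"] * 50 + ["dog"] * 30 + ["bird"] * 5 + ["fish"] * 3
print(misra_gries(stream, k=4))
```

HAC extends this idea to continuous spaces, where exact identity matching is replaced by density of tightly clustered detections in the feature space.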


A Reinforcement Learning Approach to Interactive-Predictive Neural Machine Translation

arXiv.org Machine Learning

We present an approach to interactive-predictive neural machine translation that attempts to reduce human effort from three directions: firstly, instead of requiring humans to select, correct, or delete segments, we employ the idea of learning from human reinforcements in the form of judgments on the quality of partial translations. Secondly, human effort is further reduced by using the entropy of word predictions as an uncertainty criterion to trigger feedback requests. Lastly, online updates of the model parameters after every interaction allow the model to adapt quickly. We show in simulation experiments that reward signals on partial translations significantly improve character F-score and BLEU compared to feedback on full translations only, while human effort can be reduced to an average of 5 feedback requests per input.
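
The entropy-based trigger in the second point is simple to illustrate. The sketch below is our own minimal Python version, not the authors' implementation; the function names and the threshold value are assumptions chosen for the toy example.

```python
import numpy as np

def prediction_entropy(probs):
    """Shannon entropy (in nats) of a next-word probability distribution."""
    probs = np.asarray(probs, dtype=float)
    nonzero = probs[probs > 0]          # avoid log(0)
    return float(-np.sum(nonzero * np.log(nonzero)))

def should_request_feedback(probs, threshold=1.0):
    """Ask the human for feedback only when the model's next-word
    distribution is uncertain, i.e. its entropy exceeds the threshold."""
    return prediction_entropy(probs) > threshold

# Peaked distribution: entropy ~0.43 nats, no feedback needed.
print(should_request_feedback([0.9, 0.05, 0.03, 0.02]))   # False
# Uniform distribution: entropy = ln(4) ~1.39 nats, request feedback.
print(should_request_feedback([0.25, 0.25, 0.25, 0.25]))  # True
```

With a real NMT vocabulary of tens of thousands of words, entropies are much larger, so in practice the threshold would be calibrated on held-out data rather than fixed at 1.0.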


MLPerf – Will New Machine Learning Benchmark Help Propel AI Forward?

#artificialintelligence

Let the AI benchmarking wars begin. Today, a diverse group from academia and industry – Google, Baidu, Intel, AMD, Harvard, and Stanford among them – released MLPerf, a nascent benchmarking tool "for measuring the speed of machine learning software and hardware." The arrival of MLPerf follows a smattering of ad hoc AI performance comparisons trickling to market. Today Intel posted a blog with data showing that, for select machine translation workloads using RNNs, "the Intel Xeon Scalable processor outperforms NVIDIA V100 by 4x on the AWS Sockeye Neural Machine Translation model." For quite some time there has been vigorous discussion around the need for meaningful AI benchmarks, with proponents suggesting that the lack of such tools has restrained AI adoption.


Toward the Jet Age of machine learning

#artificialintelligence

Machine learning today resembles the dawn of aviation. In 1903, dramatic flights by the Wright brothers ushered in the Pioneer Age of aviation, and within a decade, there was widespread belief that powered flight would revolutionize transportation and society more generally. Machine learning (ML) today is also rapidly advancing.


Hot stuff: Facebook AI gurus tout new PyTorch 1.0 framework for all

#artificialintelligence

F8 Facebook announced PyTorch 1.0, an updated version of the popular AI framework PyTorch, which aims to make it easier for developers to use neural network systems in production. On the second day of its F8 developer conference in San Jose, California, CTO Mike Schroepfer introduced PyTorch 1.0 and said it combines PyTorch and Caffe2 with the Open Neural Network Exchange (ONNX). PyTorch 1.0 will let developers use their tools of choice and run models on their cloud of choice at peak performance, Schroepfer said. Microsoft and Amazon are, apparently, planning to support PyTorch 1.0 on Azure and AWS. It's already deployed in some of Facebook's services, such as its machine translation system.
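
For readers unfamiliar with the interchange being described, here is a minimal sketch of exporting a PyTorch model to ONNX using the standard torch.onnx.export call; the toy two-layer model and the output file name are our assumptions, not anything Facebook shipped.

```python
import torch
import torch.nn as nn

# A toy model standing in for a production network; the announcement
# concerns full systems like Facebook's machine translation stack.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

# Export works by tracing the model with a dummy input of the right shape.
dummy_input = torch.randn(1, 16)
torch.onnx.export(model, dummy_input, "model.onnx")

# The resulting model.onnx file can then be loaded by any ONNX-compatible
# runtime on whichever cloud or hardware the team prefers.
```

This tool-of-choice portability, with models trained in one framework and served by another, is the interoperability story the PyTorch 1.0 announcement centers on.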