Huang


AI to help, not confront humans, says AlphaGo developer Aja Huang

#artificialintelligence

AI (artificial intelligence) will not confront human beings but will serve as a tool at their disposal, since the human brain will remain the most powerful, although some say AI machines may be able to talk with people and judge their emotions by 2045 at the earliest, according to Aja Huang, one of the key developers behind AlphaGo, an AI program developed by Google's DeepMind unit. Huang made the comments in a speech at the 2017 Taiwan AI Conference, hosted recently by the Institute of Information Science under Academia Sinica and the Taiwan Data Science Foundation. Huang recalled that he was invited to join London-based DeepMind Technologies in late 2012, two years after he won the gold medal at the 15th Computer Olympiad in Kanazawa in 2010. In early 2014, DeepMind was acquired by Google, giving the AI team access to ample advanced hardware resources, such as powerful TPUs (tensor processing units), and enabling it to build AlphaGo, the world's most powerful Go program, which stunned the world by beating top professional players. In March 2016, AlphaGo beat Lee Sedol, a South Korean professional Go player, in a five-game match, marking the first time a computer Go program had beaten a 9-dan professional without handicaps.


Nvidia CEO: Gaming will be huge, but so will AI and data center businesses

#artificialintelligence

Nvidia reported a stellar quarter for the three months ended October 31. Nvidia had $2.6 billion in revenue in the quarter, and $1.5 billion of it came from graphics chips for gaming PCs. But the company's investment in artificial intelligence chips is paying off, with its data center business growing beyond $500 million in quarterly revenue for the first time. Jensen Huang, CEO of Santa Clara, California-based Nvidia, said his company started investing in AI seven years ago and that its latest AI chips are the result of years of work by several thousand engineers. That has given the company an edge in AI, and rivals are scrambling to keep up, he said.


Nvidia steps up its transition to an AI company

#artificialintelligence

Nvidia reported earnings that beat expectations and showed that the company's focus on artificial intelligence is still paying off. For the past decade, Nvidia has been moving beyond graphics chips for gamers, expanding into parallel processing in data centers and, lately, artificial intelligence processing for deep learning neural networks and self-driving cars. The company reported earnings per share of $1.33 (up 60 percent from a year ago) on revenue of $2.6 billion (up 32 percent), beating Wall Street's expectations. The company's stock price is up more than 100 percent over the past year on the popularity of artificial intelligence, but it slumped during the day on Thursday, along with the broader market.


Deep Learning Reading Group: Deep Networks with Stochastic Depth

@machinelearnbot

Today's paper is by Gao Huang, Yu Sun, et al. It introduces a new way to perturb networks during training in order to improve their performance. Before I continue, let me first state that this paper is a real pleasure to read; it is concise and extremely well written. It gives an excellent overview of the motivating problems, previous solutions, and Huang and Sun's new approach. I highly recommend giving it a read!
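The perturbation in question is stochastic depth: during training, entire residual blocks are randomly skipped, leaving only the identity shortcut. Here is a minimal PyTorch sketch of the idea; the layer sizes and drop probability are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    """Residual block that is randomly bypassed during training.

    A sketch of the stochastic-depth idea: with probability p_drop the
    residual branch is skipped for the whole mini-batch, so the block
    reduces to the identity; at test time the branch is scaled by its
    survival probability (1 - p_drop), as in the paper.
    """

    def __init__(self, channels: int, p_drop: float = 0.2):
        super().__init__()
        self.p_drop = p_drop
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        if self.training and torch.rand(1).item() < self.p_drop:
            return x  # block dropped: identity shortcut only
        branch = self.branch(x)
        if not self.training:
            branch = (1.0 - self.p_drop) * branch  # expected-value scaling
        return torch.relu(x + branch)
```

Because a shallower sub-network is trained at every step while the full depth is still used at test time, training gets faster and the random skipping acts as a regularizer.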


NVIDIA Targets Next AI Frontiers: Inference And China

#artificialintelligence

NVIDIA's meteoric growth in the datacenter, where its business is now generating some $1.6B annually, has been largely driven by the demand to train deep neural networks for Machine Learning (ML) and Artificial Intelligence (AI) -- an area where the computational requirements are simply mind-boggling. First, and perhaps most importantly, Huang announced new TensorRT 3 software that optimizes trained neural networks for inference processing on NVIDIA GPUs. In addition to announcing the Chinese deployment wins, Huang provided some pretty compelling benchmarks to demonstrate the company's prowess in accelerating Machine Learning inference operations, both in the datacenter and at the edge. Beyond the TensorRT 3 deployments, Huang announced that the largest Chinese cloud service providers, Alibaba, Baidu, and Tencent, are all offering the company's newest Tesla V100 GPUs to their customers for scientific and deep learning applications.
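The article doesn't walk through the workflow, but inference optimization of this kind has a recognizable shape: take an already-trained network, let the optimizer fuse layers and choose precisions offline, and emit a runtime engine. The sketch below uses a current TensorRT Python API (8.x-era calls, not the 2017 TensorRT 3 interface Huang announced), and the ONNX file name is a hypothetical placeholder.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path: str):
    """Compile a trained ONNX model into a serialized TensorRT engine."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # allow reduced precision for speed
    return builder.build_serialized_network(network, config)

engine_bytes = build_engine("resnet50.onnx")  # hypothetical model file
```

The benchmark gains come largely from this one-time build step: layer and tensor fusion, precision selection, and kernel auto-tuning happen offline, so the deployed engine does less work per query.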


Vincent AI Sketch Demo Draws In Throngs at GTC Europe The Official NVIDIA Blog

@machinelearnbot

The story behind the story: a finely tuned generative adversarial network that sampled 8,000 great works of art -- a tiny sample size in the data-intensive world of deep learning -- and in just 14 hours of training on an NVIDIA DGX system yielded an application that takes human input and turns it into something stunning. Building on thousands of hours of research undertaken by Cambridge Consultants' AI research lab, the Digital Greenhouse, a team of five built the Vincent demo in just two months. After Huang's keynote, GTC attendees had the opportunity to pick up the stylus for themselves, selecting from one of seven different styles to sketch everything from portraits to landscapes to, of course, cats. While traditional deep learning algorithms have achieved stunning results by ingesting vast quantities of data, GANs can build convincing applications from much smaller sample sizes by training one neural network to imitate the data it is fed and another to spot the fakes.
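That two-network game is straightforward to sketch. Below is a minimal, generic GAN training step in PyTorch; the tiny MLPs, 1-D data, and hyperparameters are illustrative stand-ins, not Cambridge Consultants' Vincent model.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator imitates the data; discriminator tries to spot fakes.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    # 1) Train D: label real samples 1, generated samples 0.
    fake = G(torch.randn(n, latent_dim)).detach()  # detach: don't update G here
    loss_d = bce(D(real_batch), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # 2) Train G: try to make D output 1 on generated samples.
    loss_g = bce(D(G(torch.randn(n, latent_dim))), torch.ones(n, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Because the generator's training signal comes from the discriminator rather than from exhaustive coverage of the dataset, a GAN can get traction on far fewer examples, which is how 8,000 artworks sufficed.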


Nvidia's new supercomputer is designed to drive fully autonomous vehicles

Mashable

Nvidia wants to make it easier for automotive companies to build self-driving cars, so it's releasing a brand-new supercomputer designed to drive them. The chipmaker claims its new supercomputer is the world's first artificial intelligence computer designed for "Level 5" autonomy, meaning vehicles that can operate themselves without any human intervention. The new computer will be part of Nvidia's existing Drive PX platform, which the GPU maker offers to automotive companies to provide the processing power for their self-driving car systems. Huang also announced that Nvidia will soon release a new software development kit (SDK), Drive IX, to help developers build AI assistant programs that improve the in-car experience.


Nvidia aims for level 5 vehicle autonomy with Pegasus

ZDNet

By the middle of 2018, Nvidia believes it will have a system capable of Level 5 autonomy in the hands of the auto industry, enabling fully self-driving vehicles. Pegasus is rated at 320 trillion operations per second, which the company claims is a thirteen-fold increase over previous generations. In May, Nvidia took the wraps off its Tesla V100 accelerator aimed at deep learning. The company said the V100 has 1.5 times the general-purpose FLOPS of Pascal, a 12-fold improvement for deep learning training, and a six-fold improvement for deep learning inference.


Microsoft's AI is getting crazily good at speech recognition

#artificialintelligence

Microsoft's speech recognition efforts have hit a significant milestone: its system can now transcribe human speech with a 5.1% word error rate, Microsoft technical fellow Xuedong Huang wrote in a blog post -- the same error rate as humans. Microsoft actually thought it had hit this point last year, when it reached 5.9%, the word error rate it had measured for humans at the time. "Reaching human parity with an accuracy on par with humans has been a research goal for the last 25 years," Huang wrote.
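Word error rate is the standard yardstick behind figures like 5.1%: the number of word substitutions, insertions, and deletions needed to turn the transcript into the reference, divided by the reference length. A minimal sketch in Python (the example sentences are made up):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed as word-level Levenshtein distance via dynamic programming."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # match / substitution
    return dp[-1][-1] / len(ref)

# One substituted word in a four-word reference -> 25% WER.
print(word_error_rate("turn the lights on", "turn the light on"))  # 0.25
```

At 5.1%, roughly one word in twenty is transcribed incorrectly, the same rate the company now measures for human transcribers.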