"The field of Machine Learning seeks to answer these questions: How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?"
– from The Discipline of Machine Learning by Tom Mitchell. CMU-ML-06-108, 2006.
Kimberly Powell, who leads Nvidia's efforts in health care, says the company is working with medical researchers in a range of areas and will look to expand these efforts in coming years. Most notably, a machine-learning technique called deep learning is being applied to processing medical images and sifting through large amounts of medical data. Nvidia is, for example, working with Bradley Erickson, a neuro-radiologist at the Mayo Clinic, to apply deep learning to brain images. There are, however, significant challenges in applying techniques like deep learning to medicine.
Element AI -- a Montreal-based platform and incubator that wants to be the go-to place for any and all companies (big or small) that are building or want to include AI solutions in their businesses, but lack the talent and other resources to get started -- is announcing a mammoth Series A round of $102 million. Investors include Fidelity Investments Canada, Korea's Hanwha, Intel Capital, Microsoft Ventures, National Bank of Canada, NVIDIA, Real Ventures, and "several of the world's largest sovereign wealth funds." The AI focus may be new, but the basic model is not: Element AI is tackling this problem essentially by leaning on trends in outsourcing, where systems integrators, business process outsourcers, and others have built multi-billion-dollar businesses by providing consultancy or even fully taking the reins on projects that businesses do not consider their core competency. Element AI says that initial products available through the platform include predictive modeling, forecasting models for small data sets, conversational AI and natural language processing, image recognition and automatic tagging of attributes based on images, "aggregation techniques" based on machine learning, reinforcement learning for physics-based motion control, compression of time-series data, statistical machine learning algorithms, voice recognition, recommendation systems, fluid simulation, consumer engagement optimization, and computational advertising.
Nvidia has benefited from a rapid explosion of investment in machine learning from tech companies. Can this rapid growth in the use cases for machine learning continue? Recent research results from applying machine learning to diagnosis are impressive (see "An AI Ophthalmologist Shows How Machine Learning May Transform Medicine"). Nvidia's chips are already driving some cars: all Tesla vehicles now use Nvidia's Drive PX 2 computer to power the Autopilot feature that automates highway driving.
"We invented a computing model called GPU-accelerated computing and we introduced it slightly over 10 years ago," Huang said, noting that while AI has only recently come to dominate tech news headlines, the company was working on the foundation long before that. Nvidia's tech now resides in many of the world's most powerful supercomputers, and the applications include fields that were once considered beyond the realm of modern computing capabilities. Now, Nvidia's graphics hardware occupies a more pivotal role, according to Huang, and the company's long list of high-profile partners, including Microsoft, Facebook and others, bears him out. GTC, in other words, has evolved into arguably the biggest developer event focused on artificial intelligence in the world.
Increasingly affordable computation and the increased speed of calculations on GPUs are significant factors in the unbridled growth of AI. The astonishing results achieved by training neural networks on GPU cards made Nvidia a key player, with roughly 70 percent of a market that Intel failed to capture. Compared with the results of earlier algorithms, and thanks to the combination of machine learning and big data, previously "unsolvable" problems are now being solved. Machine learning algorithms can directly analyze thousands of previous cases of different types of diseases and draw their own conclusions as to what distinguishes a sick individual from a healthy one, and consequently help diagnose dangerous conditions, including cancer.
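The idea described above -- an algorithm inferring what separates sick from healthy cases purely from labeled examples -- can be illustrated with a minimal sketch. This is a hypothetical nearest-centroid classifier on synthetic two-feature records (not real medical data, and far simpler than the models actually used in diagnosis):

```python
# Minimal nearest-centroid "diagnosis" sketch -- illustrative only.
# Each record is (feature_1, feature_2), e.g. two hypothetical lab values.
# All data below is synthetic.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(records):
    """records: list of ((x, y), label). Returns one centroid per label."""
    by_label = {}
    for features, label in records:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist2(c):
        return (features[0] - c[0]) ** 2 + (features[1] - c[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

training = [
    ((1.0, 1.2), "healthy"), ((0.8, 1.0), "healthy"), ((1.1, 0.9), "healthy"),
    ((3.0, 3.2), "sick"),    ((2.8, 3.0), "sick"),    ((3.1, 2.9), "sick"),
]
model = train(training)
print(predict(model, (0.9, 1.1)))  # near the healthy cluster
print(predict(model, (3.0, 3.0)))  # near the sick cluster
```

The point of the sketch is only that the decision boundary is learned from the labeled cases rather than hand-coded by a physician; real diagnostic systems use far richer features and models.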
H2O.ai and Nvidia today announced a partnership to take machine learning and deep learning algorithms to the enterprise by running them on Nvidia's graphics processing units (GPUs). Mountain View, Calif.-based H2O.ai has created AI software that enables customers to train machine learning and deep learning models up to 75 times faster than conventional central processing unit (CPU) solutions. H2O.ai is also a founding member of the GPU Open Analytics Initiative, which aims to create an open framework for data science on GPUs. As part of the initiative, H2O.ai's GPU-edition machine learning algorithms are compatible with the GPU Data Frame, an open in-GPU-memory data frame.
It was in a dingy diner in April 1993 that three young electrical engineers -- Chris Malachowsky, Curtis Priem and Nvidia's current CEO, Jen-Hsun Huang -- started a company devoted to making specialized chips that would generate faster and more realistic graphics for video games. "We've been investing in a lot of startups applying deep learning to many areas, and every single one effectively comes in building on Nvidia's platform," says Marc Andreessen of venture capital firm Andreessen Horowitz. Starting in 2006, Nvidia released a programming tool kit called CUDA that let coders harness the GPU's many parallel processors for general-purpose computation, not just graphics. From his bedroom, Alex Krizhevsky had plugged 1.2 million images into a deep learning neural network powered by two Nvidia GeForce gaming cards.
Today, when Intel announced a new generation of Xeon Phi server chips, the emphasis was on their ability to handle A.I. Of the servers running such workloads, 7 percent were handling deep learning while 95 percent were doing conventional machine learning, according to Intel: "the vast, vast majority of workloads are machine learning." The chips offer "advanced acceleration capabilities" for workloads like Google's TensorFlow deep learning framework, Google has said.
The new machine, called a DGX-1, is optimized for the form of machine learning known as deep learning, which involves feeding data to a large network of crudely simulated neurons and has resulted in great strides in artificial intelligence in recent years. Language remains a very tricky problem for artificial intelligence, but in recent years researchers have made progress in applying deep learning to the problem (see "AI's Language Problem"). "This will allow us to train models on larger data sets, which we have found leads to progress in AI." OpenAI hopes to use reinforcement learning to build robots capable of performing useful chores around the home, although this may prove a time-consuming challenge (see "This Is the Robot Maid Elon Musk Is Funding" and "The Robot You Want Most Is Far from Reality").
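The "large network of crudely simulated neurons" mentioned above can be reduced to its smallest possible sketch: a single simulated neuron trained by gradient descent on a synthetic toy task. This is purely illustrative (no deep-learning framework, one neuron rather than a large network), but it shows the feed-data-and-adjust-weights loop that deep learning scales up:

```python
import math
import random

# One simulated "neuron": a weighted sum of inputs pushed through a
# sigmoid. Toy task: output 1 only when both inputs are 1 (logical AND).
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # two input weights
b = 0.0                                             # bias term
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

lr = 0.5
for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # For sigmoid + cross-entropy loss, the gradient with respect to
        # the pre-activation is simply (out - target).
        grad = out - target
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

# After training, the neuron fires (rounds to 1) only for input (1, 1).
for (x1, x2), target in data:
    print((x1, x2), round(sigmoid(w[0] * x1 + w[1] * x2 + b)))
```

Deep learning replaces this single neuron with millions of them arranged in layers, which is exactly the kind of massively parallel arithmetic the GPUs in a DGX-1 are built for.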
Norm Jouppi, a distinguished hardware engineer at Google, discussed the company's public disclosure of the Tensor Processing Unit (TPU) last week, following CEO Sundar Pichai's earlier announcement at Google I/O. Several questions came up around how the TensorFlow-optimized chipset could compete with publicly available hardware like Nvidia's Tesla P100, and even with PaaS providers like Nervana that offer machine learning services. Google's public disclosure of the TPU may have been related to Nvidia's release of the Tesla P100 in April. Jouppi noted that Google wants to lead the industry in machine learning and make the innovation available to its customers, but didn't disclose specific plans or offerings at this time.