"Many researchers … speculate that the information-processing abilities of biological neural systems must follow from highly parallel processes operating on representations that are distributed over many neurons. [Artificial neural networks] capture this kind of highly parallel computation based on distributed representations"
– from Machine Learning (Section 4.1.1, page 82) by Tom M. Mitchell, McGraw-Hill, 1997.
Kimberly Powell, who leads Nvidia's efforts in health care, says the company is working with medical researchers in a range of areas and will look to expand these efforts in coming years. Most notably, a machine-learning technique called deep learning is being applied to processing medical images and sifting through large amounts of medical data. Nvidia is, for example, working with Bradley Erickson, a neuro-radiologist at the Mayo Clinic, to apply deep learning to brain images. There are, however, significant challenges in applying techniques like deep learning to medicine.
Increasingly affordable computing and the speedups that GPUs bring to the underlying calculations are significant factors in the rapid growth of AI. The striking results achieved by training neural networks on GPU cards made Nvidia a key player, with roughly 70 percent of a market that Intel failed to capture. Thanks to the combination of machine learning and big data, problems that conventional algorithms left "unsolvable" are now being solved. Machine-learning algorithms can directly analyze thousands of prior cases of different diseases, draw their own conclusions about what distinguishes a sick individual from a healthy one, and consequently help diagnose dangerous conditions, including cancer.
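The diagnostic idea above, learning from labeled prior cases what separates sick from healthy, can be sketched as a minimal supervised classifier. Everything here is illustrative: the two "features" and the synthetic patient records are invented for the example, not drawn from any real medical dataset, and a real system would use far richer data and models.

```python
import math
import random

def sigmoid(z):
    # Numerically safe logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

# Toy "patient records": two made-up features per case (say, a lab value
# and a lesion size), label 1 = sick, 0 = healthy.
random.seed(0)
healthy = [(random.gauss(1.0, 0.5), random.gauss(1.0, 0.5), 0) for _ in range(50)]
sick = [(random.gauss(3.0, 0.5), random.gauss(3.0, 0.5), 1) for _ in range(50)]
cases = healthy + sick

# Logistic regression trained by stochastic gradient descent: the model
# "draws its own conclusions" about where the class boundary lies.
w0, w1, b = 0.0, 0.0, 0.0
lr = 0.05
for _ in range(500):
    for x0, x1, y in cases:
        p = sigmoid(w0 * x0 + w1 * x1 + b)
        w0 -= lr * (p - y) * x0
        w1 -= lr * (p - y) * x1
        b -= lr * (p - y)

def diagnose(x0, x1):
    return "sick" if sigmoid(w0 * x0 + w1 * x1 + b) > 0.5 else "healthy"

print(diagnose(0.9, 1.1))  # near the healthy cluster
print(diagnose(3.1, 2.9))  # near the sick cluster
```

On a GPU, the per-case gradient updates in the inner loop are what get batched and parallelized; the logic itself is unchanged.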
H2O.ai and Nvidia today announced a partnership to bring machine learning and deep learning algorithms to the enterprise by running them on Nvidia's graphics processing units (GPUs). Mountain View, Calif.-based H2O.ai has created AI software that enables customers to train machine learning and deep learning models up to 75 times faster than conventional central processing unit (CPU) solutions. H2O.ai is also a founding member of the GPU Open Analytics Initiative, which aims to create an open framework for data science on GPUs. As part of the initiative, H2O.ai's GPU-edition machine learning algorithms are compatible with the GPU Data Frame, the initiative's open in-GPU-memory data frame.
It was in this same dingy diner in April 1993 that three young electrical engineers, Malachowsky, Curtis Priem and Nvidia's current CEO, Jen-Hsun Huang, started a company devoted to making specialized chips that would generate faster and more realistic graphics for video games. "We've been investing in a lot of startups applying deep learning to many areas, and every single one effectively comes in building on Nvidia's platform," says Marc Andreessen of venture capital firm Andreessen Horowitz. In 2006, Nvidia released a programming tool kit called CUDA that let coders easily program each individual pixel on a screen. From his bedroom, Krizhevsky had plugged 1.2 million images into a deep learning neural network powered by two Nvidia GeForce gaming cards.
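CUDA's core idea, hinted at above, is to write a small "kernel" that runs once per data element (here, per pixel) and let the GPU launch thousands of such invocations as parallel hardware threads. A minimal sketch of that programming model in plain Python follows; the image, the kernel, and the serial "launch" loop are all illustrative stand-ins, since a real CUDA program would be compiled C/C++ running one GPU thread per pixel.

```python
# Sketch of the CUDA programming model: a per-pixel "kernel" applied
# across a grid of coordinates. On a GPU each (x, y) would be handled
# by a separate hardware thread; here we map the kernel serially.

WIDTH, HEIGHT = 4, 2

def brighten_kernel(pixel, amount):
    """Per-pixel kernel: add a brightness offset, clamped to 255."""
    return min(255, pixel + amount)

# A tiny grayscale "image" stored as a flat buffer, one value per pixel.
image = [10, 50, 100, 200, 250, 0, 128, 64]

# The "launch": cover every (x, y) coordinate, the way a CUDA grid of
# thread blocks covers the image.
out = [0] * (WIDTH * HEIGHT)
for y in range(HEIGHT):
    for x in range(WIDTH):
        i = y * WIDTH + x
        out[i] = brighten_kernel(image[i], 30)

print(out)  # [40, 80, 130, 230, 255, 30, 158, 94]
```

The kernel touches only its own pixel and shares no state with its neighbors, which is exactly what lets a GPU run all invocations at once.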
Today, when Intel announced a new generation of Xeon Phi server chips, the emphasis was on their ability to handle A.I. Of all those servers, 7 percent were handling deep learning, while 95 percent were doing machine learning, an Intel executive said. Of servers doing machine learning or deep learning, "the vast, vast majority of workloads are machine learning." The chips offer "advanced acceleration capabilities" for workloads like Google's TensorFlow deep learning framework, Google has said.
The new machine, called a DGX-1, is optimized for the form of machine learning known as deep learning, which involves feeding data to a large network of crudely simulated neurons and has resulted in great strides in artificial intelligence in recent years. Language remains a very tricky problem for artificial intelligence, but in recent years researchers have made progress in applying deep learning to the problem (see "AI's Language Problem"). "This will allow us to train models on larger data sets, which we have found leads to progress in AI." OpenAI hopes to use reinforcement learning to build robots capable of performing useful chores around the home, although this may prove a time-consuming challenge (see "This Is the Robot Maid Elon Musk Is Funding" and "The Robot You Want Most Is Far from Reality").
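The "large network of crudely simulated neurons" that deep learning feeds data to can be sketched at toy scale. The network below (2 inputs, 3 hidden neurons, 1 output) and its task are invented for illustration; real deep learning runs millions of such units on hardware like the DGX-1, but the forward pass and backpropagation steps are structurally the same.

```python
import math
import random

def sigmoid(z):
    # Numerically safe logistic activation for each simulated neuron.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

# Toy task: output 1 only when both inputs are "high" (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(1)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]  # hidden layer
b1 = [0.0] * 3
W2 = [random.uniform(-1, 1) for _ in range(3)]                      # output layer
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(3)]
    return h, sigmoid(sum(W2[j] * h[j] for j in range(3)) + b2)

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

before = loss()
lr = 0.5
for _ in range(2000):
    for x, y in data:
        h, out = forward(x)
        d_out = (out - y) * out * (1 - out)      # output-layer error signal
        for j in range(3):
            d_h = d_out * W2[j] * h[j] * (1 - h[j])  # backpropagated error
            W2[j] -= lr * d_out * h[j]
            for i in range(2):
                W1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

after = loss()
print(before, "->", after)  # squared error drops as the network learns
```

The triple-nested update loop is why GPUs matter: at real scale those per-weight multiply-accumulate operations number in the billions and parallelize naturally.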