If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Although most recognize GE as a leading name in energy, the company has steadily built a healthcare empire over the course of decades, beginning with its leadership in medical X-ray machines in the 1950s, followed by CT systems in the 1970s, and continuing today with devices that touch a broad range of uses. Much of GE Healthcare's current medical device business is rooted in imaging hardware and software systems, including CT imaging machines and other diagnostic equipment. The company has also invested significantly in drug discovery and production in recent years, an area the new CEO of GE, John Flannery (who previously led GE's healthcare division), identified as one of three main focal points for GE's financial future. According to Flannery, the company's healthcare unit has one million scanners in service globally, which generate 50,000 scans every few moments. As one might imagine, this kind of volume will increasingly require more processing and analysis capabilities built in, which is what the company is seeking to get ahead of with today's partnership with Nvidia.
'We are at a tipping point where AI is really taking off […] I think we will evolve in computing from a mobile first to an AI first world.' It is not surprising that Sundar Pichai, CEO of Google, repeatedly praises the potential of AI. In an explanation of the quarterly figures of the world's largest search engine, he says that the past ten years have been about 'mobile first'. In his view, the next ten years will be about 'AI first'. He anticipates that in the near future the concept of the device will fade completely.
H2O.ai, the leading company bringing AI to enterprises, today announced it has completed a $40 million Series C round of funding led by Wells Fargo and NVIDIA with participation from New York Life, Crane Venture Partners, Nexus Venture Partners and Transamerica Ventures, the corporate venture capital fund of Transamerica and Aegon Group. The Series C round brings H2O.ai's total amount of funding raised to $75 million. The new investment will be used to further democratize advanced machine learning and for global expansion and innovation of Driverless AI, an automated machine learning and pipelining platform that uses "AI to do AI." H2O.ai continued its juggernaut growth in 2017 as evidenced by new platforms and partnerships. The company launched Driverless AI, a product that automates AI for non-technical users and introduces visualization and interpretability features that explain the data modeling results in plain English, thus fostering further adoption and trust in artificial intelligence.
Reporting record quarterly revenues and a 60% rise in earnings over the past year, NVIDIA's astronomical rise to the top of the tech market is largely thanks to its range of hardware offerings for AI. "Our Volta GPU has been embraced by every major internet and cloud service provider and computer maker," explained founder and CEO Jensen Huang in a public statement. "Industries across the world are accelerating their adoption of AI." Chinese tech giants Alibaba and Baidu both announced this quarter that they will adopt NVIDIA Volta GPUs in order to accelerate AI across enterprise and consumer applications, joining Amazon, Facebook, Google, and Microsoft. In other words, it's been a great year for AI innovators – and an even better year for NVIDIA. "We estimate that at least 80% of all applications will have an AI component by 2020," says Dave Schubmehl, Research Director for Cognitive / AI Systems with IDC.
As developers flock to artificial intelligence frameworks in response to the explosion of intelligent machines, training deep learning models has emerged as a priority, along with syncing them to a growing list of neural and other network designs. All are being aligned to confront some of the next big AI challenges, including training deep learning models to make inferences from the fire hose of unstructured data. These and other AI developer challenges were highlighted during this week's Nvidia GPU technology conference in Washington. The GPU leader uses the events to bolster its contention that GPUs--some with up to 5,000 cores--are filling the computing gap created by the decline of Moore's Law. The other driving force behind the "era of AI" is the emergence of algorithm-driven deep learning, which is forcing developers to move beyond mere coding to apply AI to a growing range of automated processes and predictive analytics.
"Taiwan has been the epicenter of the PC revolution, and it will serve as a key center for the next industry revolution focused on AI," said NVIDIA founder and CEO Jensen Huang. "We are delighted to be working closely with MOST to ensure that Taiwan fully harnesses the power of this technological wave." "AI is the key to igniting Taiwan's next industrial revolution, building on the long-established strength of our IT manufacturing capabilities," said Dr. Liang-Gee Chen, Minister of Science and Technology. "Our focus is on drawing academics, industry and young talent into our AI Grand Plan to create an ecosystem based on AI innovation." Under the agreement, the National Center for High-Performance Computing will build Taiwan's first AI-focused supercomputer powered by NVIDIA DGX AI computing platforms and Volta architecture-based GPUs.
Geoffrey Hinton has been called the "Godfather of Deep Learning". The issue lies with a prevalent technique in AI development called "back propagation", which relates directly to how AIs learn and store information. Since its conception, the back propagation algorithm has become the "workhorse" of the majority of AI projects.
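To make the technique the article names concrete, here is a minimal sketch of back propagation: the error at the output is propagated backwards via the chain rule to compute a gradient for each weight, which gradient descent then follows. The example below is a hypothetical toy (not from the article): a single sigmoid neuron learning the logical AND function.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training data for AND: (inputs, target)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.1, -0.1, 0.0  # small initial weights
lr = 1.0                    # learning rate (chosen for this toy problem)

for epoch in range(5000):
    for (x1, x2), t in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)   # forward pass
        # Backward pass: for a sigmoid output with squared error, the
        # error signal is dE/dz = (y - t) * y * (1 - y); the chain rule
        # "propagates" it back to each weight.
        delta = (y - t) * y * (1 - y)
        w1 -= lr * delta * x1
        w2 -= lr * delta * x2
        b  -= lr * delta

# After training, the neuron approximates AND.
predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)
```

In a multi-layer network the same error signal is propagated through each earlier layer in turn; this one-neuron case shows only the final step of that chain.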
It will be the first of its kind among the autonomous vehicle market, as Nvidia race ahead and offer passengers an on demand service to take them to their destination and giving accessibility to everyone including elderly and disabled passengers. The technology making robotaxis a possibility is Nvidia's Drive PX AI platform, dubbed Pegasus, which delivers all the capabilities of a data centre in a supercomputer the size of a license plate. The size, cost and power demands of existing AI computing solutions, Nvidia claims, makes them impractical for production vehicles. The fleet will use ZF's ProAI self-driving platform for the vehicles, based on Nvidia's Drive PX AI platform.
Facebook published a paper in June describing how it connected 32 Big Basin systems over its internal network to aggregate 256 GPUs and train a ResNet-50 image recognition model in under an hour with about 90% scaling efficiency and 72% accuracy. IBM's PowerAI software platform with Distributed Deep Learning (DDL) libraries includes IBM-Caffe and "topology aware communication" libraries. IBM used 64 Power System S822LC systems, each with four NVIDIA Tesla P100 SXM2-connected GPUs and two POWER8 processors, for a total of 256 GPUs – matching Facebook's paper. Commercial availability of IBM's S822LC for low volume buyers will be a key element enabling academic and enterprise researchers to buy a few systems and test IBM's hardware and software scaling efficiencies.
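The "scaling efficiency" figure both vendors cite compares the measured throughput of the whole cluster against the ideal of N GPUs running at N times single-GPU speed. A quick sketch of that arithmetic, using illustrative round numbers (the throughputs below are assumptions for the example, not measurements from Facebook's or IBM's reports):

```python
def scaling_efficiency(single_gpu_throughput, cluster_throughput, num_gpus):
    """Fraction of the ideal linear speedup actually achieved."""
    ideal = single_gpu_throughput * num_gpus
    return cluster_throughput / ideal

# Assumed: one GPU trains 200 images/sec; 256 GPUs together reach
# 46,080 images/sec (hypothetical numbers chosen to land on 90%).
eff = scaling_efficiency(200.0, 46_080.0, 256)
print(f"{eff:.0%}")
```

Communication overhead between nodes is why the measured number falls short of 100%; the "topology aware communication" libraries mentioned above exist to shrink that gap.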
In this chapter of our thought leadership series, AI Business caught up with Kari Ann Briski, the Director of Deep Learning Software Product at NVIDIA. Deep learning is being applied to solve many big data problems, from computer vision and image recognition to speech recognition and autonomous vehicles. "I've personally seen so many fun and interesting AI applications, from large organizations to small businesses and individuals who previously knew nothing about deep learning," Kari explains. "NVIDIA heavily contributes to open source projects, both in the frameworks (deep learning libraries) as well as posting neural networks that we have researched for specific AI applications."