If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Kimberly Powell, who leads Nvidia's efforts in health care, says the company is working with medical researchers in a range of areas and will look to expand these efforts in coming years. Most notably, a machine-learning technique called deep learning is being applied to processing medical images and sifting through large amounts of medical data. Nvidia is, for example, working with Bradley Erickson, a neuro-radiologist at the Mayo Clinic, to apply deep learning to brain images. There are, however, significant challenges in applying techniques like deep learning to medicine.
"We invented a computing model called GPU accelerated computing, and we introduced it slightly over 10 years ago," Huang said, noting that while AI has only recently come to dominate tech news headlines, the company was working on the foundation long before that. Nvidia's tech now resides in many of the world's most powerful supercomputers, and the applications include fields that were once considered beyond the realm of modern computing capabilities. Now, Nvidia's graphics hardware occupies a more pivotal role, according to Huang – and the company's long list of high-profile partners, including Microsoft, Facebook and others, bears him out. GTC, in other words, has evolved into arguably the biggest developer event focused on artificial intelligence in the world.
H2O.ai and Nvidia today announced that they have partnered to take machine learning and deep learning algorithms to the enterprise through the use of Nvidia's graphics processing units (GPUs). Mountain View, Calif.-based H2O.ai has created AI software that enables customers to train machine learning and deep learning models up to 75 times faster than conventional central processing unit (CPU) solutions. H2O.ai is also a founding member of the GPU Open Analytics Initiative, which aims to create an open framework for data science on GPUs. As part of the initiative, H2O.ai's GPU edition machine learning algorithms are compatible with the GPU Data Frame, the open in-GPU-memory data frame.
Audi and Nvidia have been collaborating for some time, but at CES 2017, the companies made their biggest joint announcement yet. Using artificial intelligence and deep learning technology, the companies will bring fully automated driving to the roads by 2020. To achieve this, Audi will leverage Nvidia's expertise in artificial intelligence, the fruits of which are already being shown at CES. Audi's Q7 Piloted Driving Concept is fitted with Nvidia's Drive PX 2 processor and after only four days of "training," the vehicle is already driving itself over a complex road course. This is due to the Drive PX 2's incredible ability to learn on the go, which is a far cry from the first driverless cars that needed pre-mapped routes to function properly. "Nvidia is pioneering the use of deep learning AI to revolutionize transportation," Nvidia CEO Jen-Hsun Huang said.
In case you missed it, TensorFlow is now available for Windows, as well as Mac and Linux. This was not always the case. For most of TensorFlow's first year of existence, the only means of Windows support was virtualization, typically through Docker. Even without GPU support, this is great news for me. I teach a graduate course in deep learning, and supporting students who run only Windows was always difficult.
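For students setting up a new machine, a quick portable check of whether TensorFlow is installed can save a support email. A minimal sketch, assuming a standard Python 3 environment (it only probes for the package rather than importing it, so it runs the same on Windows, Mac, and Linux whether or not TensorFlow is present):

```python
import importlib.util
import platform

# Probe for the tensorflow package without importing it; find_spec
# returns None when the package is not installed.
tf_available = importlib.util.find_spec("tensorflow") is not None

print("OS:", platform.system())
print("TensorFlow installed:", tf_available)
```

If the check reports `False`, a plain `pip install tensorflow` now works natively on Windows, with no Docker layer required.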
NVIDIA (NASDAQ:NVDA) is primarily known as the company that revolutionized computer gaming. The debut of the Graphics Processing Unit (GPU) in 1999 provided gamers with faster, clearer, and more lifelike images. The GPU was designed to quickly perform the complex mathematical calculations necessary to accelerate the creation of realistic graphics. It achieved this feat by performing many calculations at the same time, a technique known as parallel computing. This resulted in faster, smoother motion in game graphics and a revolution in modern gaming.
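The key property that makes graphics work parallelizable is independence: each output pixel depends only on its own inputs, so all of them can be computed simultaneously. A toy illustration of that idea (the `shade` function and the four-pixel "image" are invented for this sketch; a real GPU maps such an operation across thousands of cores, not a handful of threads):

```python
from concurrent.futures import ThreadPoolExecutor

def shade(pixel):
    """Toy per-pixel operation: double the brightness, clamped to 255.
    Each result depends only on its own input, so every pixel could in
    principle be computed at the same time."""
    return min(255, pixel * 2)

pixels = [10, 60, 130, 200]

# Sequential (CPU-style): one pixel after another.
sequential = [shade(p) for p in pixels]

# Parallel (GPU-style): the same independent operation mapped across
# all elements at once.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(shade, pixels))

assert sequential == parallel  # same answer, different execution model
print(parallel)  # → [20, 120, 255, 255]
```

The same independence property is why deep learning maps so well onto GPUs: the matrix multiplications at the heart of neural networks are exactly this kind of embarrassingly parallel arithmetic.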
Audi and Nvidia are shifting their partnership into high gear with the aim of putting cars with artificial intelligence on the road by 2020. The ambitious project, detailed at CES in Las Vegas, is, according to both companies, the best way for cars to become truly autonomous and genuinely capable of coping with the everyday complexities of real-world driving and of learning as they go. At the 2016 NIPS Artificial Intelligence Conference in Barcelona in December, Audi demonstrated a miniature prototype AI car -- a scale model of the Q2 SUV -- that was laden with sensors. It autonomously drove around an unfamiliar space learning where obstacles were, building a real-time map of the space in its digital mind, and after some trial and error it could drive straight to a confined parking space and navigate into it every time. For CES, however, the demonstrations are very much full-scale.
A few days ago, AMD (NYSE:AMD) showed its new professional video/compute card lineup for the enterprise sector, and it has recently given some additional demonstrations of the new Ryzen CPUs. AMD is obviously focusing public attention on its overall platform, which is becoming more and more interesting, and it is finally providing some very interesting product previews. In this article, I want to focus on the professional video lineup for the enterprise market, which is subdivided into three solutions with three different architectures. As I will show you, the first two solutions are not so competitive due to technical issues and/or outdated architectures, but the top solution, the MI25 powered by the VEGA 10 architecture, looks very interesting and competitive. Still, the initial enthusiasm must be tempered, since the architecture is quite late in comparison to Nvidia's (NASDAQ:NVDA) Pascal, while VEGA 10 will probably still show the high power consumption behavior that characterizes Polaris 10 and the entire GCN architecture.
Without a doubt, 2016 was an amazing year for Machine Learning (ML) and Artificial Intelligence (AI). During the year, we saw nearly every high tech CEO claim the mantle of "AI Company". However, only a handful of companies were actually able to monetize their significant investments in AI. But 2016 was nonetheless a year of many firsts. As a poster child for the potential of ML, Google DeepMind mastered the subtle and enormously complex game of Go, soundly beating the reigning world champion.