Results


AI everywhere

#artificialintelligence

"We invented a computing model called GPU accelerated computing and we introduced it almost slightly over 10 years ago," Huang said, noting that while AI is only recently dominating tech news headlines, the company was working on the foundation long before that. Nvidia's tech now resides in many of the world's most powerful supercomputers, and the applications include fields that were once considered beyond the realm of modern computing capabilities. Now, Nvidia's graphics hardware occupies a more pivotal role, according to Huang – and the company's long list of high-profile partners, including Microsoft, Facebook and others, bears him out. GTC, in other words, has evolved into arguably the biggest developer event focused on artificial intelligence in the world.


Microsoft Infuses SQL Server With Artificial Intelligence

#artificialintelligence

SQL Server 2017, which will run on both Windows and Linux, is inching closer to release with a set of artificial intelligence capabilities that will change the way enterprises derive value from their business data, according to Microsoft. The Redmond, Wash., software giant released SQL Server 2017 Community Technology Preview (CTP) 2.0 on April 19. Joseph Sirosh, corporate vice president of the Microsoft Data Group, described the "production-quality" database software as "the first RDBMS [relational database management system] with built-in AI." Download links and instructions for installing the preview on Linux are available in a TechNet post from the SQL Server team at Microsoft. It's no secret to anyone keeping tabs on Microsoft lately that the company is betting big on AI, progressively baking its machine learning and cognitive computing technologies into a wide array of its cloud services, business software offerings and consumer products. "In this preview release, we are introducing in-database support for a rich library of machine learning functions, and now for the first time Python support (in addition to R)," Sirosh stated in the April 19 announcement.
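To make the in-database story concrete, here is a minimal sketch of invoking SQL Server's external-script mechanism from Python over ODBC. The server, database, credentials, the dbo.Sales table, and its region and revenue columns are assumptions made for illustration; only sp_execute_external_script itself is the documented entry point for in-database R and Python.

```python
import pyodbc

# Hypothetical connection details; adjust server, database and auth for your setup.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=SalesDb;Trusted_Connection=yes;"
)

# sp_execute_external_script runs the embedded Python script inside SQL Server,
# passing the query result in as a pandas DataFrame named InputDataSet and
# returning whatever is assigned to OutputDataSet.
tsql = """
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
import pandas as pd
OutputDataSet = InputDataSet.groupby("region", as_index=False)["revenue"].mean()
',
    @input_data_1 = N'SELECT region, revenue FROM dbo.Sales'
WITH RESULT SETS ((region NVARCHAR(64), avg_revenue FLOAT));
"""

for row in conn.cursor().execute(tsql).fetchall():
    print(row)
```

The same pattern works with @language = N'R', which is how the R support introduced in SQL Server 2016 is exposed.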


Smart Machines Need Smart Silicon

#artificialintelligence

It seems that even the biggest hyperscale platform developers, who have long touted software-defined architectures as the key to computing nirvana, are starting to learn a cardinal rule of infrastructure: no matter how much you try to abstract it, basic hardware still matters. A key example is Google's Tensor Processing Unit (TPU), which the company designed specifically for machine learning and other crucial workloads that were starting to push the limits of available CPUs and GPUs. In fact, the company says that without the TPU, it was looking at doubling its data center footprint just to support applications like voice recognition and image search. The TPU is custom-designed to work with the TensorFlow software library, generating results 15 to 30 times faster than state-of-the-art Intel Haswell or Nvidia K80 devices. This may seem like a harbinger of bad times ahead for Intel and Nvidia, but the broader picture is a bit more muddled.


Deploying Deep Neural Networks with NVIDIA TensorRT

#artificialintelligence

NVIDIA TensorRT is a high-performance deep learning inference library for production environments. Power efficiency and speed of response are two key metrics for deployed deep learning applications, because they directly affect the user experience and the cost of the service provided. TensorRT automatically optimizes trained neural networks for run-time performance, delivering up to 16x higher energy efficiency (performance per watt) on a Tesla P100 GPU compared with common CPU-only deep learning inference systems (see Figure 1). Figure 2 shows the performance of the NVIDIA Tesla P100 and K80 running inference with TensorRT on the relatively complex GoogLeNet neural network architecture. In this post we will show you how to use TensorRT to get the best efficiency and performance out of your trained deep neural network on a GPU-based deployment platform.
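The overall workflow the article describes is: take a trained network, let TensorRT optimize it for the target GPU, and deploy the resulting engine. Below is a minimal sketch of that flow using a recent TensorRT Python API, which differs from the release described above; the ONNX file name, the output path, and the FP16 choice are assumptions for illustration.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)

# Parse a trained network that has been exported to ONNX (path is hypothetical).
parser = trt.OnnxParser(network, logger)
with open("googlenet.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

# Let TensorRT fuse layers and pick reduced-precision kernels for this GPU.
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)

# Build and save the optimized engine for deployment.
engine_bytes = builder.build_serialized_network(network, config)
with open("googlenet.plan", "wb") as f:
    f.write(engine_bytes)  # loaded later with trt.Runtime for inference
```

The optimization step is offline and one-time per model and GPU; at serving time only the saved engine is loaded, which is where the latency and performance-per-watt gains are realized.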


Compare NVIDIA Pascal GPUs and Google TPU

#artificialintelligence

The recent TPU paper by Google draws a clear conclusion – without accelerated computing, the scale-out of AI is simply not practical. Today's economy runs in the world's data centers, and data centers are changing dramatically. Not so long ago, they served up web pages, advertising and video content. Now, they recognize voices, detect images in video streams and connect us with information we need exactly when we need it. Increasingly, those capabilities are enabled by a form of artificial intelligence called deep learning.


Which GPU(s) to Get for Deep Learning

#artificialintelligence

Deep learning is a field with intense computational requirements, and your choice of GPU will fundamentally determine your deep learning experience. With no GPU, this might mean months of waiting for an experiment to finish, or running an experiment for a day or more only to find that the chosen parameters were off. With a good, solid GPU, one can iterate quickly over deep learning networks and run experiments in days instead of months, hours instead of days, minutes instead of hours. Making the right choice when buying a GPU is therefore critical. So how do you select the GPU that is right for you?
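As a rough way to see the gap for yourself, here is a small sketch that times the dense matrix multiplies that dominate deep learning workloads on CPU versus GPU. PyTorch, the matrix size, and the repeat count are assumptions for illustration, not something the article itself prescribes.

```python
import time
import torch

def time_matmul(device: str, n: int = 4096, repeats: int = 10) -> float:
    """Average seconds per n x n matrix multiply on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    torch.matmul(a, b)  # warm-up so one-time setup cost is not counted
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / repeats

cpu_t = time_matmul("cpu")
print(f"CPU: {cpu_t:.4f} s per matmul")
if torch.cuda.is_available():
    gpu_t = time_matmul("cuda")
    print(f"GPU: {gpu_t:.4f} s per matmul ({cpu_t / gpu_t:.0f}x faster)")
```

The measured ratio depends heavily on which GPU you buy, which is exactly why the choice matters so much.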


Google Reveals Technical Specs and Business Rationale for TPU Processor – PPP Focus

#artificialintelligence

By way of example, the Google engineers said that if people used voice search for three minutes a day, running the associated speech recognition tasks without the TPU would have required the company to have twice as many data centers. Based on the scant details Google provides about its data center operations, which include 15 major sites, the search-and-ad giant was looking at additional capital expenditures of perhaps $15bn, assuming that a large Google data center costs about $1bn. As it applied machine learning capabilities to more of its products and applications over the past several years, Google said it realized it needed to supercharge its hardware as well as its software. Kubernetes and TensorFlow, both of which Google had used extensively in-house (albeit in somewhat different forms), took years to become publicly available. Due to the TPU's inherent efficiency, the chips can squeeze more operations per second out of the silicon, allowing more sophisticated and powerful machine learning models to deliver results more rapidly.
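The article's capital-expenditure figure is a back-of-the-envelope estimate; the short sketch below just reproduces that arithmetic from the numbers it cites (15 major sites, roughly $1bn per site).

```python
# Reproduce the article's rough estimate: doubling the footprint means
# building out roughly one extra site per existing major site.
major_sites = 15
cost_per_site_usd = 1_000_000_000  # assumed cost of a large Google data center

additional_capex = major_sites * cost_per_site_usd
print(f"Estimated extra capital expenditure: ${additional_capex / 1e9:.0f}bn")
```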


Competition in AI platform market to heat up in 2017

#artificialintelligence

Intel's Nervana platform is a $400 million investment in AI. Back in November, Intel announced what it claims is a comprehensive AI platform for data center and compute applications called Nervana, aimed directly at taking on Nvidia's GPU solutions for enterprise users. The platform is the result of the chipmaker's $400 million acquisition in August of Nervana Systems, a 48-person startup led by former Qualcomm researcher Naveen Rao. Intel claims that Nervana, built using FPGA technology and designed for highly optimized AI solutions, will deliver up to a 100-fold reduction in the time it takes to train a deep learning model within the next three years. The company intends to integrate Nervana technology into its Xeon and Xeon Phi processor lineups. During Q1, it will test the Nervana Engine chip, codenamed 'Lake Crest,' and make it available to key customers later in the year.


Google says its AI chips smoke CPUs, GPUs in performance tests

PCWorld

Four years ago, Google was faced with a conundrum: if all its users hit its voice recognition services for three minutes a day, the company would need to double the number of its data centers just to handle all of the requests to the machine learning system powering those services. Rather than buy a bunch of new real estate and servers just for that purpose, the company embarked on a journey to create dedicated hardware for running machine-learning applications like voice recognition. The result was the Tensor Processing Unit (TPU), a chip designed to accelerate the inference stage of deep neural networks. Google published a paper on Wednesday laying out the performance gains the company saw over comparable CPUs and GPUs, both in terms of raw power and performance per watt of power consumed. A TPU was on average 15 to 30 times faster at the machine learning inference tasks tested than a comparable server-class Intel Haswell CPU or Nvidia K80 GPU.


3 Growth Stocks That Could Soar More Than Nvidia -- The Motley Fool

#artificialintelligence

NVIDIA's (NASDAQ:NVDA) graphics cards have long been favorites among hardcore gamers, but who would've thought the chipmaker's stock would explode the way it has in recent times? The share price has more than tripled in just the past year, turning NVIDIA into a near eight-bagger in just five years. Of course, there's more to its run than just graphics processors. It's more an artificial intelligence computing company today, having made huge headway in two of the hottest technology fields of our times: AI and self-driving cars. For investors looking to find the "next NVIDIA," the trick is to find a company that is sitting on a big growth opportunity, or is already tapping into a soon-to-heat-up trend, but that is still flying under Wall Street's radar.