Results


H2O.ai teams up with Nvidia to take machine learning to the enterprise

#artificialintelligence

H2O.ai and Nvidia today announced that they have partnered to take machine learning and deep learning algorithms to the enterprise through the use of Nvidia's graphics processing units (GPUs). Mountain View, Calif.-based H2O.ai has created AI software that enables customers to train machine learning and deep learning models up to 75 times faster than conventional central processing unit (CPU) solutions. H2O.ai is also a founding member of the GPU Open Analytics Initiative, which aims to create an open framework for data science on GPUs. As part of the initiative, H2O.ai's GPU edition machine learning algorithms are compatible with the GPU Data Frame, the open in-GPU-memory data frame.
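(For readers curious what the "in-GPU-memory data frame" idea looks like in practice: the GPU Data Frame from the GOAI effort evolved into the cuDF library. The following is a minimal sketch, assuming the RAPIDS cudf package and a CUDA-capable GPU, neither of which the article itself discusses.)

import cudf  # RAPIDS GPU data frame library, successor of the GOAI GPU Data Frame

# This data frame is allocated in GPU memory, not in host RAM.
gdf = cudf.DataFrame({
    "clicks": [3, 7, 1, 9],
    "spend": [0.5, 1.2, 0.1, 2.4],
})

# The group-by and aggregation execute on the GPU; only the small
# result is copied back to the host for printing.
summary = gdf.groupby("clicks").agg({"spend": "sum"})
print(summary.to_pandas())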


Deep Learning Computer Build

#artificialintelligence

The proposed system targets academic and small-startup budgets; for a personal or multi-use machine (gaming, rendering, etc.) one can look at, for example, Andrej Karpathy's $3,000 system. How to read the slideshow: it is meant as an introductory deck covering the components typically needed when building your own workstation with some storage. The slides are quite condensed and best read on a tablet with easy zooming; if you find errors, bug reports are appreciated. System specifications for 2D/3D image processing: a dual-CPU system based on Intel Xeon CPUs, with 4 x PCIe 16-lane GPUs achieved through the dual CPUs; 2-8 x Pascal NVIDIA Titan X (12 GB), for maximum GPU RAM with strong performance, though not all frameworks can use multiple GPUs (see the sketch after this entry); and 256 GB RAM, which makes non-GPU pre-processing and data wrangling faster to develop, especially for large 3D datasets, without having to worry too much about running out of memory.
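(As a companion to the multi-GPU caveat above, here is a minimal sketch of how to check what a framework actually sees before committing to a 2-8 GPU build. It assumes PyTorch with CUDA support, which is not part of the original slide deck.)

import torch  # assumes a CUDA-enabled PyTorch installation

if not torch.cuda.is_available():
    print("No CUDA GPUs visible; training will fall back to the CPU.")
else:
    n = torch.cuda.device_count()
    print(f"{n} GPU(s) visible to PyTorch")
    for i in range(n):
        props = torch.cuda.get_device_properties(i)
        # Report the name and total memory of each device in GiB.
        print(f"  GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB")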


Flipboard on Flipboard

#artificialintelligence

In 2011 Google realized they had a problem. They were getting serious about deep learning networks with computational demands that strained their resources. Google calculated they would have to have twice as many data centers as they already had if people used their deep learning speech recognition models for voice search for just three minutes a day. They needed more powerful and efficient processing chips. What kind of chip did they need?


The Great Strengths and Important Limitations Of Google's Machine Learning Chip

#artificialintelligence

Speed measures for the TPU (blue), GPU (red), and CPU (gold).

In 2011 Google realized they had a problem. They were getting serious about deep learning networks with computational demands that strained their resources. Google calculated they would have to have twice as many data centers as they already had if people used their deep learning speech recognition models for voice search for just three minutes a day. They needed more powerful and efficient processing chips.


Which GPU(s) to Get for Deep Learning

#artificialintelligence

Deep learning is a field with intense computational requirements, and the choice of GPU will fundamentally determine your deep learning experience. With no GPU this might look like months of waiting for an experiment to finish, or running an experiment for a day or more only to see that the chosen parameters were off. With a good, solid GPU, one can quickly iterate over deep learning networks and run experiments in days instead of months, hours instead of days, minutes instead of hours. Making the right choice when it comes to buying a GPU is therefore critical. So how do you select the GPU that is right for you?
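(To put rough numbers on that claim for your own machine, here is a minimal timing sketch, assuming PyTorch is installed; the actual speedup depends entirely on the hardware and is not taken from the article.)

import time
import torch

def time_matmul(device, size=4096, reps=10):
    # Time a dense matrix multiply, the core operation behind deep learning layers.
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    torch.matmul(a, b)  # warm-up, so one-time CUDA initialisation is not counted
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(reps):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / reps

print(f"CPU: {time_matmul('cpu'):.4f} s per matmul")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.4f} s per matmul")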


Google says its AI chips smoke CPUs, GPUs in performance tests

PCWorld

Four years ago, Google was faced with a conundrum: if all its users hit its voice recognition services for three minutes a day, the company would need to double the number of data centers just to handle all of the requests to the machine learning system powering those services. Rather than buy a bunch of new real estate and servers just for that purpose, the company embarked on a journey to create dedicated hardware for running machine-learning applications like voice recognition. The result was the Tensor Processing Unit (TPU), a chip that is designed to accelerate the inference stage of deep neural networks. Google published a paper on Wednesday laying out the performance gains the company saw over comparable CPUs and GPUs, both in terms of raw power and the performance per watt of power consumed. A TPU was on average 15 to 30 times faster at the machine learning inference tasks tested than a comparable server-class Intel Haswell CPU or Nvidia K80 GPU.
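(The "inference stage" the TPU targets is simply the forward pass of an already-trained network, with no gradient computation. A minimal sketch of that distinction, assuming PyTorch and using a toy model that is purely illustrative:)

import torch
import torch.nn as nn

# A toy classifier standing in for a trained speech or vision model.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()  # put layers such as dropout/batch-norm into inference mode

batch = torch.randn(32, 128)   # a batch of 32 feature vectors
with torch.no_grad():          # inference: forward pass only, no gradient bookkeeping
    scores = model(batch)
print(scores.shape)            # torch.Size([32, 10])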


IBM adds Nvidia Tesla P100 GPU to its cloud ZDNet

#artificialintelligence

IBM said it will offer the latest Nvidia GPU, the Tesla P100, in the IBM Cloud as it aims to grab artificial intelligence, machine learning and high performance computing workloads. Big Blue said it is among the first to offer Nvidia's latest processors. The two companies have been key partners on multiple fronts since 2014. Nvidia is increasingly becoming a data center player as graphics processors take a bigger role in analytics workloads. IBM, Nvidia, Google and a bevy of others are partners in the OpenPower group, which aims to serve as a counterweight to Intel.


NVIDIA Scores Yet Another GPU Cloud For AI With Tencent

Forbes

NVIDIA's speedy GPUs and machine learning software have unquestionably become the gold standard for building Artificial Intelligence (AI) applications. And today, NVIDIA added Tencent to the list of cloud service providers that offer access to NVIDIA hardware in their clouds for AI and other compute-intensive applications. This marks a significant milestone in the global accessibility of the hardware needed to build AI applications, from drones to medical devices to automated factories and robots. Tencent (whose Chinese name roughly translates to "Soaring Information") is one of China's largest Internet companies and the world's largest gaming platform, having recently announced 2016 revenues that grew by 48% to $21.9B. Many companies, perhaps most, opt to access GPUs in the cloud instead of buying and deploying the hardware directly.


Google Erects Fake Brain With … Graphics Chips?

AITopics Original Links

Your brain is a collection of neurons -- tiny cells that use electro-chemical signals to send and receive information. But as Google builds an artificial brain that will help drive everything from its web search engine to Google Street View to the voice-recognition app on Android smartphones, it's using very different materials. Among them: graphics microprocessors, the same sort of silicon chips that were first designed to process images and videos on your desktop computer. That's the word from Geoffrey Hinton, the artificial intelligence guru who was recently hired by the search giant to continue work on the so-called Google Brain. When we spoke to Hinton just after his "deep learning" operation was acquired by Larry Page and company, he didn't provide specifics, but he said that Google is now using graphics processing units, or GPUs, to help power its brain-mimicking neural networks.


Computex 2016 verdict: Behold the new brains of the computer

AITopics Original Links

When we were planning our approach this year to covering Computex, the largest IT trade show in Asia, there was some confusion about where exactly Intel had gone. At that point there was a sense that maybe this year would be a little flat. The Taipei show has always been a big song and dance around the latest CPUs (central processing units) from Intel and the changes they'll bring to computing in the years ahead. As it turned out, Computex was fascinating. On day zero, Nvidia and Asus put on a great show that quickly reminded us that the future is moving beyond the CPU, the chip that traditionally has been the brains of the computer.