Results


H2O.ai teams up with Nvidia to take machine learning to the enterprise

#artificialintelligence

H2O.ai and Nvidia today announced that they have partnered to take machine learning and deep learning algorithms to the enterprise using Nvidia's graphics processing units (GPUs). Mountain View, Calif.-based H2O.ai has created AI software that enables customers to train machine learning and deep learning models up to 75 times faster than conventional central processing unit (CPU) solutions. H2O.ai is also a founding member of the GPU Open Analytics Initiative, which aims to create an open framework for data science on GPUs. As part of the initiative, H2O.ai's GPU-edition machine learning algorithms are compatible with the GPU Data Frame, the open in-GPU-memory data frame.
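
For a rough sense of what GPU-backed model training looks like from the user's side, here is a minimal sketch using H2O.ai's open-source h2o4gpu package, which exposes a scikit-learn-style estimator API. The synthetic data, the n_gpus argument, and the choice of estimator are illustrative assumptions on my part, not details from the announcement.

```python
# Hypothetical sketch: GPU-accelerated k-means clustering with h2o4gpu.
# The n_gpus argument and fallback behavior are assumptions; consult the
# h2o4gpu documentation for the exact parameters in your installed version.
import numpy as np
import h2o4gpu

# Synthetic data stands in for a real enterprise dataset.
X = np.random.rand(100_000, 20).astype(np.float32)

# Fit on the GPU; the claimed speedups over CPU-only training depend heavily
# on the model, the data size, and the hardware.
model = h2o4gpu.KMeans(n_clusters=8, n_gpus=1, random_state=1234).fit(X)
print(model.cluster_centers_.shape)
```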


The Pint-Sized Supercomputer That Companies Are Scrambling to Get

MIT Technology Review

To companies grappling with complex data projects powered by artificial intelligence, a system that Nvidia calls an "AI supercomputer in a box" is a welcome development. Early customers of Nvidia's DGX-1, which combines machine-learning software with eight of the chip maker's highest-end graphics processing units (GPUs), say the system lets them train their analytical models faster, enables greater experimentation, and could facilitate breakthroughs in science, health care, and financial services. Data scientists have been leveraging GPUs to accelerate deep learning--an AI technique that mimics the way human brains process data--since 2012, but many say that current computing systems limit their work. Faster computers such as the DGX-1 promise to make deep-learning algorithms more powerful and let data scientists run deep-learning models that previously weren't possible. The DGX-1 isn't a magical solution for every company.
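
As a concrete illustration of the kind of multi-GPU training a DGX-1 is built for, below is a minimal data-parallel sketch in PyTorch. PyTorch, the toy model, and the random batch are stand-ins chosen for brevity, not tools or workloads named in the article.

```python
# Hypothetical sketch: one data-parallel training step spread across all
# available GPUs (e.g., the eight in a DGX-1) using torch.nn.DataParallel.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)   # replicate the model onto each GPU
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(1024, 512, device=device)         # one synthetic batch
targets = torch.randint(0, 10, (1024,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)   # forward pass is split across GPUs
loss.backward()                          # gradients are reduced back together
optimizer.step()
print(f"loss: {loss.item():.4f}")
```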


Nvidia CEO's "Hyper-Moore's Law" Vision for Future Supercomputers

#artificialintelligence

Over the last year in particular, we have documented the merger between high performance computing and deep learning and its various shared hardware and software ties. This next year promises far more on both horizons, and while GPU maker Nvidia might not have seen it coming to this extent when it was outfitting its first GPUs on the former top "Titan" supercomputer, the company sensed a mesh on the horizon when the first hyperscale deep learning shops were deploying CUDA and GPUs to train neural networks. All of this portends an exciting year ahead, and for once, the mighty CPU is not the subject of the keenest interest. Instead, the action is unfolding around the CPU's role alongside accelerators; everything from Intel's approach to integrating the Nervana deep learning chips with Xeons, to Pascal and future Volta GPUs, and other novel architectures that have made waves. While Moore's Law for traditional CPU-based computing is on the decline, Jen-Hsun Huang, CEO of GPU maker Nvidia, told The Next Platform at SC16 that we are just on the precipice of a new Moore's Law-like curve of innovation--one that is driven by traditional CPUs with accelerator kickers, mixed precision capabilities, new distributed frameworks for managing both AI and supercomputing applications, and an unprecedented level of data for training.
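
The "mixed precision capabilities" Huang points to can be illustrated in a few lines. The sketch below uses PyTorch's automatic mixed precision as a stand-in; the framework, the toy model, and the loss-scaling setup are my assumptions, not anything described in the article.

```python
# Hypothetical sketch: mixed-precision training, where matrix math runs in
# FP16 where numerically safe while weights and loss scaling stay in FP32.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(64, 1024, device=device)
target = torch.randn(64, 1024, device=device)

with torch.cuda.amp.autocast(enabled=(device == "cuda")):
    loss = nn.functional.mse_loss(model(x), target)   # FP16 compute on GPU

scaler.scale(loss).backward()   # scale the loss to avoid FP16 gradient underflow
scaler.step(optimizer)          # unscale gradients, then apply the update
scaler.update()
print(f"loss: {loss.item():.4f}")
```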


Intel Declares War on GPUs at Disputed HPC, AI Border

#artificialintelligence

In Supercomputing Conference (SC) years past, chipmaker Intel has always come forth with a strong story, either as an enabling processor or co-processor force, or more recently, as a prime contractor for a leading-class national lab supercomputer. But outside of a few announcements at this year's SC related to beefed-up SKUs for high performance computing and Skylake plans, the real emphasis back in Salt Lake City seemed to ring far fainter for HPC and much louder for the newest server tech darlings, deep learning and machine learning. Far from the HPC crowd last week was Intel's AI Day, an event in San Francisco chock full of announcements on both the hardware and software fronts during a week that has historically emphasized Intel's evolving efforts in supercomputing. As we have noted before, there is a great deal of overlap between these two segments, so it is not fair to suggest that Intel is ditching one community for the other. In fact, it is quite the opposite--or more specifically, these areas are merging to a greater degree (and far faster) than most could have anticipated.


AMD's Deal With Google Is Significant

Forbes

Recently, Advanced Micro Devices clinched a deal with Google to supply its FirePro S9300 x2 GPU in the latter's cloud platform. AMD's stock price has shot up by close to 30% since the announcement of this deal. This is because the deal is quite significant for AMD, as most of the big players in the cloud computing market currently use Nvidia's GPUs. According to some sources, Google's cloud market share stood at 8% in 2016, just behind Amazon and Microsoft. In this analysis, we further elaborate on the reasons why this deal is significant for AMD.


GPUs Reshape Computing

Communications of the ACM

Nvidia's Titan X graphics card features the company's Pascal-powered graphics processing unit, driven by 3,584 CUDA cores running at 1.5 GHz. As researchers continue to push the boundaries of neural networks and deep learning--particularly in speech recognition and natural language processing, image and pattern recognition, text and data analytics, and other complex areas--they are constantly on the lookout for new and better ways to extend and expand computing capabilities. For decades, the gold standard has been high-performance computing (HPC) clusters, which toss huge amounts of processing power at problems--albeit at a prohibitively high cost. This approach has helped fuel advances across a wide swath of fields, including weather forecasting, financial services, and energy exploration. However, in 2012, a new method emerged.
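
The core count quoted above follows from Pascal's layout: the Titan X's GP102 chip enables 28 streaming multiprocessors with 128 CUDA cores each, so 28 × 128 = 3,584. The sketch below shows one way to recover such a figure from a device query; the 128-cores-per-SM constant is an assumption that holds for the consumer Pascal parts, and PyTorch is simply a convenient way to read the device properties.

```python
# Hypothetical sketch: estimating a Pascal GPU's CUDA core count from its
# streaming multiprocessor (SM) count, assuming 128 FP32 cores per SM
# (true for consumer Pascal chips such as GP102/GP104, but not for GP100).
import torch

CORES_PER_SM = 128  # assumed Pascal consumer-part layout

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total = props.multi_processor_count * CORES_PER_SM
    print(f"{props.name}: {props.multi_processor_count} SMs x "
          f"{CORES_PER_SM} cores = {total} CUDA cores")
else:
    # Worked example matching the caption above: Titan X (Pascal), 28 SMs.
    print(f"Titan X (Pascal): 28 SMs x {CORES_PER_SM} cores = {28 * CORES_PER_SM}")
```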


What Will GPU Accelerated AI Lend to Traditional Supercomputing?

#artificialintelligence

"There are over 410 GPU accelerated HPC applications, over 300,000 CUDA developers, and we accelerate all of the deep learning frameworks as well," Buck says, pointing to the data above to highlight their role in all of the most prevalent platforms to date. Yet another question one might ask during the AI-laden HPC talks this week is where deep learning and machine learning might fit in HPC workflows. It is Nvidia's goal to take that expertise in high performance computing and move it into deep learning and artificial intelligence--and the company expects that blend of those two worlds will be key to boosting next-generation AI applications. In essence, these can work exactly like a K80, except they have more memory bandwidth (model depending), slightly less memory, about 6.5% more single-precision floating point capability (and 1.6x the double-precision floating point), plus support for FP16.
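
The ratios in that last sentence are straightforward to reproduce from published board specs. The sketch below assumes the comparison is a Tesla K80 (at GPU boost clocks) against the PCIe Tesla P100, which matches the numbers in the excerpt but is an inference on my part; the K80 figures come from Nvidia's spec sheets rather than from the article.

```python
# Hypothetical worked comparison: Tesla K80 (GPU boost) vs. Tesla P100 (PCIe),
# using published peak figures. The K80 numbers are assumptions, not article data.
k80 = {"sp_tflops": 8.73, "dp_tflops": 2.91, "mem_gb": 24, "bw_gbs": 480}
p100 = {"sp_tflops": 9.3, "dp_tflops": 4.7, "mem_gb": 16, "bw_gbs": 720}

sp_gain = p100["sp_tflops"] / k80["sp_tflops"] - 1   # ~6.5% more single precision
dp_ratio = p100["dp_tflops"] / k80["dp_tflops"]      # ~1.6x double precision

print(f"single precision: +{sp_gain:.1%}")
print(f"double precision: {dp_ratio:.1f}x")
print(f"memory: {p100['mem_gb']} GB vs {k80['mem_gb']} GB (slightly less)")
print(f"bandwidth: {p100['bw_gbs']} GB/s vs {k80['bw_gbs']} GB/s (more)")
```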


Nvidia's new Pascal GPU to supercharge deep learning

PCWorld

Nvidia said server makers like Cray, Dell and Hewlett-Packard Enterprise will start taking orders and delivering systems with the GPU starting in the fourth quarter this year. A version of the Tesla P100 GPU was also introduced at Nvidia's GPU Technology Conference in April, but that was for the new NVLink interconnect. The Tesla P100 for PCIe slots will deliver roughly 4.7 teraflops of double-precision performance, which is slightly lower than the 5.3 teraflops on the NVLink version of the GPU. The single-precision performance is 9.3 teraflops, compared to 10.6 teraflops on the NVLink version.


3 Things NVIDIA Is Doing Right -- The Motley Fool

#artificialintelligence

Most of the company's GPU revenue comes from its gaming chip business, which saw revenues grow 17% year-over-year in Q1. TrendForce expects the virtual reality market to reach US$70 billion by 2020, giving NVIDIA plenty of room for more growth. And the company recently debuted its DGX-1 supercomputer, which can be paired with Drive PX 2 to process real-time autonomous driving information from the cloud. Facebook uses NVIDIA's Tesla M40 GPU accelerators to help power its Big Sur machine learning computer servers.