

H2O.ai teams up with Nvidia to take machine learning to the enterprise

#artificialintelligence

H2O.ai and Nvidia today announced that they have partnered to bring machine learning and deep learning algorithms to the enterprise on Nvidia's graphics processing units (GPUs). Mountain View, Calif.-based H2O.ai has created AI software that enables customers to train machine learning and deep learning models up to 75 times faster than conventional central processing unit (CPU) solutions. H2O.ai is also a founding member of the GPU Open Analytics Initiative, which aims to create an open framework for data science on GPUs. As part of the initiative, H2O.ai's GPU-edition machine learning algorithms are compatible with the GPU Data Frame, the open in-GPU-memory data frame.


1 Company Is Already Winning AI -- The Motley Fool

#artificialintelligence

NVIDIA (NASDAQ:NVDA) is primarily known as the company that revolutionized computer gaming. The debut of the Graphics Processing Unit (GPU) in 1999 provided gamers with faster, clearer, and more lifelike images. The GPU was designed to quickly perform complex mathematical calculations that were necessary to accelerate the creation of realistic graphics. It achieved this feat by performing many functions at the same time, known as parallel computing. This resulted in faster, smoother motion in game graphics and a revolution in modern gaming.
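The parallel-computing idea the article describes, one operation applied to many data elements at once, can be sketched in plain Python, with NumPy's vectorized operations standing in for the GPU's many cores. This is a toy illustration of the concept, not NVIDIA code; the function names are invented for this sketch:

```python
import numpy as np

def brighten_serial(pixels):
    # One element at a time, as a scalar CPU loop would do it.
    return [min(255, int(p * 1.5)) for p in pixels]

def brighten_parallel(pixels):
    # The whole array in one vectorized, data-parallel operation --
    # conceptually what a GPU does across thousands of cores at once.
    return np.minimum(255, (pixels * 1.5).astype(int))

pixels = np.array([10, 100, 200, 250])
# Both paths compute the same result; only the execution model differs.
assert brighten_serial(pixels) == list(brighten_parallel(pixels))
```

The per-element loop and the vectorized call produce identical results; the speedup in real GPU graphics and deep learning workloads comes from executing the second form across many cores simultaneously.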


With PowerAI, IBM Will Likely Accelerate Enterprise AI Trial And Adoption #SC16

#artificialintelligence

While AI (artificial intelligence) has been around since the 1950s, IBM was the pioneer in the latest AI cycle with its own custom solution, dubbed Watson. Ever since the introduction of Watson and its ability to beat Jeopardy champion Ken Jennings, the company has been increasing its investment in the space. IBM Watson is now an entire division of the company, which indicates the importance IBM places on the future of AI. Watson is only one part of IBM's AI investment, and I consider it the "easy button" for those enterprises that don't want to create everything from scratch. IBM also has DIY (do it yourself) infrastructure for cloud providers through POWER8, OpenPOWER, and OpenCAPI, designed for cloud giants that roll their own AI software. But what about enterprises in the middle, those who want solid infrastructure and want to invest in the latest deep neural network frameworks?


Azure N-Series: General availability on December 1

#artificialintelligence

I am really excited to announce that the Azure N-Series will be generally available on December 1st, 2016. Azure N-Series virtual machines are powered by NVIDIA GPUs and provide customers and developers access to industry-leading accelerated computing and visualization experiences. I am also excited to announce global access to these sizes, with the N-Series available in South Central US, East US, West Europe, and Southeast Asia, all on December 1st. We've had thousands of customers participate in the N-Series preview since we launched it back in August. We've heard positive feedback on the enhanced performance and on the work we have done with NVIDIA to make this a completely turnkey experience for you.


The AI Era Ignited by GPU Deep Learning

#artificialintelligence

Soon, hundreds of billions of devices will be infused with intelligence. AI will revolutionize every industry. The global ecosystem for NVIDIA GPU deep learning has scaled out rapidly. Breakthrough results triggered a race to adopt AI for consumer internet services: translation, recognition, search, and recommendations. Cloud service providers, from Alibaba and Amazon to IBM and Microsoft, make the NVIDIA GPU deep learning platform available to companies large and small. Pinterest is changing online retail with GPUs. AI can solve problems that seemed well beyond our reach just a few years back.


Nvidia CEO's "Hyper-Moore's Law" Vision for Future Supercomputers

#artificialintelligence

Over the last year in particular, we have documented the merger between high performance computing and deep learning and their various shared hardware and software ties. This next year promises far more on both horizons. While GPU maker Nvidia might not have seen it coming to this extent when it was outfitting its first GPUs on the former top "Titan" supercomputer, the company sensed a mesh on the horizon when the first hyperscale deep learning shops were deploying CUDA and GPUs to train neural networks. All of this portends an exciting year ahead, and for once the mighty CPU is not the subject of the keenest interest. Instead, the action is unfolding around the CPU's role alongside accelerators: everything from Intel's approach to integrating the Nervana deep learning chips with Xeons, to Pascal and future Volta GPUs, to other novel architectures that have made waves. While Moore's Law for traditional CPU-based computing is on the decline, Jen-Hsun Huang, CEO of GPU maker Nvidia, told The Next Platform at SC16 that we are just at the start of a new Moore's Law-like curve of innovation--one driven by traditional CPUs with accelerator kickers, mixed-precision capabilities, new distributed frameworks for managing both AI and supercomputing applications, and an unprecedented level of data for training.
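The "mixed-precision capabilities" Huang mentions refer to doing arithmetic in a low-precision format (such as float16, which Pascal-class GPUs can process at higher throughput) while accumulating results in a higher-precision one to limit rounding error. A minimal sketch of the idea, assuming NumPy; the function name is invented, and real GPU kernels do this in hardware:

```python
import numpy as np

def dot_mixed(a, b):
    # Store and multiply in low-precision float16...
    a16 = a.astype(np.float16)
    b16 = b.astype(np.float16)
    # ...but accumulate in a higher-precision float32 register, so
    # rounding error does not compound across the whole sum.
    acc = np.float32(0.0)
    for x, y in zip(a16, b16):
        acc += np.float32(x) * np.float32(y)
    return acc

a = np.random.rand(1000).astype(np.float32)
b = np.random.rand(1000).astype(np.float32)
# The mixed-precision result stays close to the full float32 dot product.
error = abs(dot_mixed(a, b) - np.dot(a, b))
```

The low-precision operands cut memory traffic and raise arithmetic throughput, while the wide accumulator preserves enough accuracy for neural network training, which is the trade-off behind the curve Huang describes.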


IBM and Nvidia make deep learning easy for AI service creators with a new bundle

#artificialintelligence

On Monday, IBM announced that it has collaborated with Nvidia to provide a complete package for customers wanting to jump right into the deep learning market without all the hassle of determining and setting up the perfect combination of hardware and software. The company also revealed that a cloud-based model is available as well, which eliminates the need to install local hardware and software. To trace this project, we have to jump back to September, when IBM launched a new series of "OpenPower" servers that rely on the company's Power8 processor. The launch was notable because this chip features integrated NVLink technology, a proprietary communications link created by Nvidia that directly connects the central processor to an Nvidia graphics processor, namely the Tesla P100 in this case. Server-focused x86 processors provided by Intel and AMD don't have this type of integrated connectivity between the CPU and GPU.


NVIDIA accelerates IBM POWER8 past Intel - Enterprise Times

#artificialintelligence

At Supercomputing 2016 (SC16), IBM and NVIDIA announced what they call the fastest deep learning enterprise solution. The system is based on the IBM Power System S822LC platform announced in September. These systems contain the latest version of the IBM POWER8 processor, which has NVIDIA NVLink embedded in it. IBM has also released a new deep learning toolkit called IBM PowerAI. The solution is capable of running AlexNet with Caffe up to 2x faster than equivalent systems.


[session] Bert Loomis and AI in the Cloud By @IBMCloud @CloudExpo #AI #Cloud #DigitalTransformation

#artificialintelligence

Bert Loomis was a visionary. This general session will highlight how Bert Loomis and people like him inspire us to build great things with small inventions. In their general session at 19th Cloud Expo, Harold Hannon, Architect at IBM Bluemix, and Michael O'Neill, Strategic Business Development at Nvidia, will discuss the accelerating pace of AI development and how IBM Cloud and NVIDIA are partnering to bring AI capabilities to "every day," on demand. They will also review two "free infrastructure" programs available to startups and innovators. Speaker bios: Harold Hannon has worked in the field of software development as both an architect and a developer for more than 15 years, with a focus on workflow, integration, and distributed systems.


How Enterprise Is Supporting Deep Learning

#artificialintelligence

The NVIDIA Inception Program provides startups with early access to the latest GPU hardware, NVIDIA's deep learning experts and engineering teams, and technical training, as well as investment, in order to help them develop products and services with a first-mover advantage. One of its early collaborations is with NYU, whose researchers are set to work alongside NVIDIA scientists and engineers to develop autonomous driving technology, for which NVIDIA has already created the Drive PX 2 chips. The team will seek to grow the current NVIDIA learning system to encompass all aspects of autonomous driving, eliminating the need for hand-programmed rules and procedures such as finding lane markings. Hand-coding would require a near-infinite number of 'if, then, else' statements, which is impractical when trying to account for the randomness that occurs on the road.
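The contrast between hand-programmed rules and a learned system can be illustrated with a toy example. This is a sketch invented for illustration (the names and numbers are hypothetical), with a one-parameter least-squares fit standing in for training a deep network on driving data:

```python
def steer_by_rules(offset):
    # Hand-coded control: every situation needs its own explicit branch,
    # and covering real roads this way leads to the 'if, then, else'
    # explosion the article describes.
    if offset > 0.1:
        return -0.5
    elif offset < -0.1:
        return 0.5
    return 0.0

def fit_gain(data):
    # 'Learned' control: estimate a steering gain w from example
    # (lane_offset, steering) pairs via one-variable least squares,
    # so behavior comes from data rather than hand-written branches.
    num = sum(o * s for o, s in data)
    den = sum(o * o for o, _ in data)
    return num / den

# Hypothetical driving examples: steer opposite to the lane offset.
data = [(0.2, -0.4), (-0.3, 0.6), (0.5, -1.0)]
w = fit_gain(data)

def steer_learned(offset):
    # One smooth learned rule replaces the branch-per-case logic above.
    return w * offset
```

Adding a new driving situation to `steer_by_rules` means writing another branch; the learned version instead absorbs new situations as more training examples, which is the scalability argument for the NYU and NVIDIA approach.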