Are you being asked to provide GPUs to your application developers and data scientists for machine learning or high-performance computing? Are users asking for more than one GPU to be usable by their application? Are you interested in cost-effective ways to share GPUs across the entire data science team? If any of these questions apply to you, then this new e-book from VMware on the key decisions around GPU use on vSphere will be a great read. GPUs provide the computing power needed to run machine learning programs efficiently, reliably, and quickly.
In the consumer market, a GPU is mostly used to accelerate gaming graphics. Today, GPGPUs (General Purpose GPUs) are the hardware of choice for accelerating computational workloads in modern High Performance Computing (HPC) landscapes. HPC itself is the platform serving workloads like Machine Learning (ML), Deep Learning (DL), and Artificial Intelligence (AI). GPGPU use is no longer limited to ML computations such as image recognition. Calculations on tabular data are also a common exercise in, for example, the healthcare, insurance, and financial industry verticals.
We are excited to share that today VMware has released vSphere 7 Update 2. It is available to download right away, both through VMware Customer Connect and from within vSphere Lifecycle Manager itself. With vSphere 7, released in April 2020, we moved vSphere to a six-month release cycle. We released vSphere 7 Update 1 in October 2020, and vSphere 7 Update 2 today. With this faster pace, you get the benefits of the latest capabilities and innovation from VMware as well as from the VMware partner and OEM ecosystem much more quickly, to meet your business needs and be future-ready. The vSphere 7 Update 2 release sure packs a big punch!
Performance of machine learning workloads using GPUs is by no means compromised when running on vSphere. In fact, you can often achieve better aggregate performance, i.e. the throughput of many jobs, by running on vSphere rather than on bare metal. A key benefit of running GPU-based machine learning workloads on vSphere is the ability to allocate GPU resources in a very flexible and dynamic way.
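As an illustration of that flexibility, a VM on vSphere can be assigned a slice of a physical GPU through an NVIDIA vGPU profile in its configuration. The excerpt below is a hedged sketch, not an exact recipe: the profile name (grid_v100-8q, a hypothetical V100 slice with 8 GB of framebuffer) depends on the physical GPU installed and the vGPU software release in use.

```
# Hypothetical excerpt from a VM's .vmx configuration file assigning
# an NVIDIA vGPU profile. The profile name is an assumption for
# illustration; valid names depend on your GPU and vGPU software.
pciPassthru0.present = "TRUE"
pciPassthru.use64bitMMIO = "TRUE"
pciPassthru0.vgpu = "grid_v100-8q"
```

Because the GPU slice is just a VM setting, it can be changed as workload demand shifts, which is what makes sharing a pool of GPUs across a team practical.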
Nvidia announced its new enterprise software product, vComputeServer, which has been developed and optimised for use with VMware's vSphere.

Last week, VMware announced its intention to acquire Carbon Black and Pivotal, in a massive deal that will expand the company's SaaS offerings while enhancing its ability to enable digital transformation for customers. Before the dust had even settled on that news, the company announced today (26 August) that it is set to launch a hybrid cloud on AWS (Amazon Web Services) in partnership with Nvidia, which will improve GPU (graphics processing unit) virtualisation. The two companies say that this is the first hybrid cloud service that lets enterprises accelerate AI, machine learning or deep learning workloads with GPUs.

At the VMworld conference in San Francisco, Nvidia's VP of product management, John Fanelli, told reporters: "In a modern data centre, organisations are going to be using GPUs to power AI, deep learning and analytics. Due to the scale of those types of workloads, they're going to be doing some processing on premises in data centres, some processing in clouds, and continually iterating between them."

The company said that this will make the completion of deep learning training up to 50 times faster than with a CPU alone. This product is aimed at people who may be using Nvidia's RAPIDS software, Fanelli explained, which is a suite of data processing and machine learning libraries used for GPU acceleration in data science workflows.

Nvidia founder and CEO Jensen Huang said: "From operational intelligence to artificial intelligence, businesses rely on GPU-accelerated computing to make fast, accurate predictions that directly impact their bottom line."
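To give a flavour of the workflows RAPIDS targets: its cuDF library deliberately mirrors much of the pandas API, so a GPU-accelerated dataframe operation looks almost identical to ordinary pandas code. The sketch below uses pandas on the CPU as a stand-in (with cuDF you would `import cudf` instead); the column names and values are made up for illustration.

```python
import pandas as pd

# Hypothetical insurance-claims table -- the kind of tabular data
# RAPIDS/cuDF would hold in GPU memory for large datasets.
claims = pd.DataFrame({
    "policy_id": [1, 1, 2, 2, 3],
    "amount": [100.0, 250.0, 75.0, 300.0, 50.0],
})

# A group-by aggregation: with cuDF the same call runs on the GPU.
totals = claims.groupby("policy_id")["amount"].sum()
print(totals.to_dict())  # {1: 350.0, 2: 375.0, 3: 50.0}
```

For small tables like this one the GPU buys nothing; the speedups Nvidia cites come from running the same operations over millions of rows.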