codeplay
CUDA, SYCL, Codeplay, and oneAPI -- Accelerators for Everyone
There is an ever-growing number of accelerators in the world. This raises the question of how various ecosystems will evolve to allow programmers to leverage these accelerators. At higher levels of abstraction, domain-specific layers like TensorFlow and PyTorch provide great abstractions over the underlying hardware. However, for developers who maintain code that talks to the hardware without such an abstraction, the challenge remains. One solution that is supported on multiple underlying hardware architectures is C++ with SYCL.
Codeplay inks landmark deal with U.S. government to enable next-generation supercomputer
The National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory, in collaboration with the Argonne Leadership Computing Facility, is partnering with UK-based Codeplay Software to enhance GPU compiler capabilities for NVIDIA. This collaboration will help NERSC and ALCF users, along with researchers in the high-performance computing community, to produce high-performance applications that are portable across compute architectures from multiple vendors. Today, most artificial intelligence software, including for cars, is developed using graphics processors designed for video games, according to Codeplay. The company provides tools designed to enable software to be accelerated by graphics processors or the latest specialized AI processors. NERSC supercomputers are used for scientific research by researchers working in diverse areas such as alternative energy, environment, high-energy and nuclear physics, advanced computing, materials science and chemistry.
AI Accelerators and open software
Three years ago, we had maybe six or fewer AI accelerators; today there are over two dozen, and more are coming. One of the first commercially available AI training accelerators was the GPU, and the undisputed leader of that segment was Nvidia. Nvidia was already preeminent in machine learning (ML) and deep-learning (DL) applications, and adding neural net acceleration was a logical and rather straightforward step for the company. Nvidia also brought a treasure trove of applications with their GPUs based on the company's proprietary development language, CUDA. The company developed CUDA in 2006 and empowered hundreds of universities to offer courses on it. As a result, the thousands of computer science graduates every year came out of school knowing how, and wanting, to use CUDA.