Oracle Cloud targets more HPC, adds Ampere for first Arm offering

ZDNet

Fresh off its high-profile deal with TikTok and a series of other major cloud customer wins, Oracle on Tuesday showcased its vision for its cloud business over the next 12 to 18 months. With a heavy focus on high-performance computing (HPC) workloads, the company announced a series of hardware and compute updates, as well as new partnerships. The announcements include new HPC instances on Oracle Cloud Infrastructure (OCI) powered by Intel "Ice Lake" chips, the general availability of Nvidia A100 GPUs on bare metal instances, and the introduction of E4 compute instances for general-purpose workloads. Additionally, Oracle is partnering with Ampere to offer its first Arm-based compute instances, and with Rescale to make it easier for customers to onboard HPC jobs. After nearly four years of competing as a niche provider, overshadowed by major public cloud providers like Amazon Web Services, Microsoft Azure, and Google Cloud, Oracle's cloud business is showing some momentum: with more than 25 regions currently online, OCI should have 36 regions up and running globally by this time next year.


Seamlessly scaling HPC and AI initiatives with HPE leading-edge technology

#artificialintelligence

Accelerate your HPC and AI workloads with new products, advanced technologies, and services from HPE. A growing number of commercial businesses are implementing HPC solutions to derive actionable business insights, run higher-performance applications, and gain a competitive advantage. In fact, according to Hyperion Research, the HPC market exceeded expectations with 6.8% growth in 2018, with continued growth expected through 2023. Complexities abound as HPC becomes more pervasive across industries and markets, especially as you adopt, scale, and optimize HPC and AI workloads. HPE is in lockstep with you along your AI journey, helping you get started with your AI transformation and scale more quickly, saving time and resources.


DSS 8440: Flexible Machine Learning for Data Centers (Direct2DellEMC)

#artificialintelligence

The DSS 8440 introduces a new high-performance, high-capacity, reduced-cost inference choice for data centers and machine learning service providers. Its purpose-designed, open PCIe architecture enables us to readily expand accelerator options for our customers as the market demands. This latest addition to our powerhouse machine learning server line is further proof of Dell EMC's commitment to supporting our customers as they compete in the rapidly emerging AI arena. The DSS 8440 is a 4U, 2-socket, accelerator-optimized server designed to deliver exceptionally high compute performance for both training and inference. Its open architecture, based on a high-performance switched PCIe fabric, maximizes customer choice for machine learning infrastructure while also delivering best-of-breed technology.


Top AI Chip Announcements Of 2020

#artificialintelligence

Last year, we compiled a list of top chips for accelerating ML tasks. We talked about the rising demand for AI-based systems-on-chip, and 2020 was no different -- the trend continued. While a few chipmakers capitalised on this trend, chip giants like Intel had to endure a tough period, even selling their NAND division to South Korean chipmaker SK Hynix. Apple, too, announced its separation from Intel processors and opened a new chapter with Apple Silicon.


Oracle BrandVoice: GPU Chips Are Poised To Rewrite (Again) What's Possible In Cloud Computing

#artificialintelligence

At Altair, chief technology officer Sam Mahalingam is heads-down testing the company's newest software for designing cars, buildings, windmills, and other complex systems. The engineering and design software company, whose customers include BMW, Daimler, Airbus, and General Electric, is developing software that combines computer models of wind and fluid flows with machine design in the same process -- so an engineer could design a turbine blade while simultaneously seeing its draft's effect on neighboring mills in a wind farm. What Altair needs for a job as hard as this, though, is a particular kind of computing power, provided by graphics processing units (GPUs) made by Silicon Valley's Nvidia and others. "When solving complex design challenges like the interaction between wind structures in windmills, GPUs help expedite computing so faster business decisions can be made," Mahalingam says.

Image caption: An aerodynamics simulation performed with Altair ultraFluidX on the Altair CX-1 concept design, modeled in Altair Inspire Studio.