Oracle on Tuesday announced that Nissan is migrating on-premises, high-performance computing (HPC) workloads to Oracle Cloud to run latency-sensitive engineering simulations. Back in 2018, Oracle introduced bare metal compute instances, powered by Intel Xeon processors and tailored for HPC workloads. The instances are part of Oracle's "Clustered Network" offering, which provides access to a low-latency, high-bandwidth remote direct memory access (RDMA) network. Nissan is one of the first automotive OEMs to use Oracle's bare-metal, GPU-accelerated hardware for HPC workloads. Bing Xu, general manager of Nissan's Engineering Systems Department, said the company selected Oracle's cloud HPC offerings "to meet the challenges of increased simulation demand under constant cost savings pressure."
Cloud Bigtable has long been Google Cloud's fully managed NoSQL database for massive, petabyte-scale analytical and operational workloads. At $0.65 per node per hour, it was never a cheap service to run, especially because Google Cloud enforced a minimum of three nodes per cluster for production workloads. Today, however, Google is changing that, and you can now run Bigtable production workloads on just a single node. "We want Bigtable to be an excellent home for all of your key-value and wide-column use-cases, both large and small," Google Cloud Bigtable product manager Sandy Ghai said in today's announcement. "That's true whether you're a developer just getting started, or an established enterprise looking for a landing place for your self-managed HBase or Cassandra clusters."
You don't need Sherlock Holmes to tell you that cloud computing is on the rise and that cloud traffic keeps growing. What is enlightening, however, is the degree of that increase: cloud traffic is set to nearly quadruple in the next few years. By that time, 92 percent of workloads will be processed by cloud data centers, versus just eight percent by traditional data centers.
Businesses are migrating to cloud architectures at a rapid clip, and by 2020, cloud traffic will account for 92 percent of total data center traffic globally, according to Cisco's Global Cloud Index report. The networking giant predicts that cloud traffic will rise 3.7-fold, from 3.9 zettabytes (ZB) per year in 2015 to 14.1 ZB per year by 2020. "The IT industry has taken cloud computing from an emerging technology to an essential scalable and flexible networking solution. With large global cloud deployments, operators are optimizing their data center strategies to meet the growing needs of businesses and consumers," said Doug Webster, VP of service provider marketing for Cisco, in a press release. "We anticipate all types of data center operators continuing to invest in cloud-based innovations that streamline infrastructures and help them more profitably deliver web-based services to a wide range of end users."
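Cisco's headline figures are easy to sanity-check. The sketch below recomputes the growth multiple from the 3.9 ZB and 14.1 ZB endpoints and derives the implied compound annual growth rate; the CAGR is my own arithmetic, not a number from the report (the quoted "3.7-fold" presumably comes from Cisco's unrounded figures).

```python
# Sanity-check of Cisco's Global Cloud Index figures quoted in the article:
# 3.9 ZB/year of cloud traffic in 2015 growing to 14.1 ZB/year by 2020.
start_zb, end_zb = 3.9, 14.1
years = 2020 - 2015

growth_multiple = end_zb / start_zb              # overall fold increase
cagr = (end_zb / start_zb) ** (1 / years) - 1    # implied compound annual growth

print(f"Growth multiple: {growth_multiple:.1f}x")  # close to the quoted 3.7-fold
print(f"Implied CAGR over {years} years: {cagr:.1%}")
```

The rounded endpoints give about a 3.6x increase, consistent with Cisco's 3.7-fold claim, and imply roughly 29 percent annual growth in cloud traffic.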
AI and ML deployments are well underway, but for CXOs the biggest issues will be managing these initiatives, figuring out where the data science team fits in, and deciding which algorithms to buy versus build. Penguin Computing is rolling out four high-performance computing stacks for workloads such as artificial intelligence, analytics, data science, and cloud. The move by Penguin Computing, a division of SMART Global Holdings, highlights how many HPC vendors are targeting commercial adoption as these workloads become more mainstream. For instance, Dell Technologies has launched HPC stacks aimed at specific industries and use cases. The takeaway is that HPC is moving beyond supercomputing rankings toward broader commercialization as AI, simulation, and analytics workloads go enterprise mainstream.