IoT devices are generating huge amounts of data, and it's leaving a mark across the enterprise IT industry. That's according to a new report from analyst firm 451 Research, whose Voice of the Enterprise: IoT – Workloads and Key Projects report says a third of organisations are planning to increase their storage capacity. Three in ten (30 per cent) will increase network edge equipment, while almost the same number (29.4 per cent) plan to increase server infrastructure. Finally, off-premises cloud infrastructure will also get a boost at 27.2 per cent of the companies surveyed, all in the next 12 months. Spending on IoT remains 'solid', 451 Research says, with two thirds of respondents planning to increase their spend within the next 12 months.
Analytics is evolving from big data and machine learning towards artificial intelligence. In this framing, machine learning is the analysis of data at rest, while artificial intelligence (AI) is the analysis of data in real time: machine learning is predictive, AI is cognitive. A storage infrastructure supporting an AI environment must deliver high bandwidth, low latency, elasticity in response to workload demands, and rapid response to multiple parallel analytic queries. Traditionally, most AI initiatives start as skunkworks projects, often hosted in the cloud.
Amazon Web Services is making cloud instances powered by AMD's Epyc Rome chips generally available. The Elastic Compute Cloud (EC2) C5a instances, powered by 2nd Gen AMD Epyc processors, offer the lowest cost per x86 virtual CPU in the Amazon EC2 portfolio. They're well suited to compute-intensive workloads that can take advantage of the 2nd Gen Epyc processor's high core counts, including video game development and hosting. Powered by a processor running at frequencies of up to 3.3GHz, the Amazon EC2 C5a instances are available in eight configurations, with up to 96 virtual CPUs. This is the sixth instance family at AWS powered by Epyc processors.
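To make the C5a sizing concrete, the snippet below tabulates the eight configurations by vCPU count. The size names and figures reflect AWS's published C5a instance list; treat this as an illustrative sketch (memory, network and pricing details are omitted) rather than an exhaustive spec.

```python
# Published vCPU counts for the eight Amazon EC2 C5a configurations
# (illustrative summary; consult AWS's instance-type documentation
# for memory, network and pricing details).
c5a_vcpus = {
    "c5a.large": 2,
    "c5a.xlarge": 4,
    "c5a.2xlarge": 8,
    "c5a.4xlarge": 16,
    "c5a.8xlarge": 32,
    "c5a.12xlarge": 48,
    "c5a.16xlarge": 64,
    "c5a.24xlarge": 96,
}

# Matches the article: eight configurations, up to 96 virtual CPUs.
print(len(c5a_vcpus))           # → 8
print(max(c5a_vcpus.values()))  # → 96
```

Note how each step up the range doubles (or, at the top end, adds half again) the vCPU count, which is typical of how EC2 instance families scale within a generation.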
Oracle today announced the general availability of new bare metal Oracle Cloud Infrastructure compute instances, powered by Intel Xeon processors. These new instances add to Oracle's CPU- and GPU-based high performance computing (HPC) offerings, with the aim of convincing large businesses to bring legacy HPC workloads to the cloud for the first time. The instances are part of Oracle's new "Clustered Network" offering, which provides access to a low-latency, high-bandwidth remote direct memory access (RDMA) network. Oracle says it's the only cloud provider offering bare metal Infrastructure-as-a-Service (IaaS) with RDMA. With the Clustered Network, companies can run performance-sensitive workloads, such as AI or engineering simulations.
Artificial intelligence (AI) is a broad term that can apply to various computing tasks, including machine learning, deep learning, and big data analytics. Many AI projects are at the proof-of-concept stage, but CIOs and IT managers need to understand that in the future almost every business outcome and workflow will use and depend upon some form of AI processing. The time to prepare the infrastructure for that eventuality is now. As AI environments move into production and grow in size and importance, organizations need a strategy to address the challenges that AI at scale will create for both compute and storage architectures. For the last decade, developing a cloud strategy was at the top of every CIO's to-do list.