
Beat the GPU Storage Bottleneck for AI and ML

#artificialintelligence

Data centers that support AI and ML deployments rely on Graphics Processing Unit (GPU)-based servers to power their computationally intensive architectures. Across multiple industries, expanding GPU use is driving a projected CAGR of more than 31 percent for GPU servers through 2024, which means more system architects will be tasked with ensuring top performance and cost-efficiency from GPU systems. Yet optimizing storage for these GPU-based AI/ML workloads is no small feat. GPU servers are highly efficient at the matrix multiplication and convolution operations required to train on large AI/ML datasets, but only if the storage layer can feed them data fast enough.
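As a rough illustration of where that bottleneck shows up, the sketch below uses PyTorch (an assumption; the article names no framework) with a stand-in dataset: if the data-loading workers cannot read and decode samples from storage as fast as the GPU consumes them, each training step stalls on I/O rather than compute.

```python
# Minimal sketch, assuming PyTorch is installed; dataset contents and tensor
# shapes are hypothetical. The point is that loader settings such as
# num_workers and pin_memory govern how fast storage can feed the GPU.
import time
import torch
from torch.utils.data import DataLoader, Dataset

class DiskBackedDataset(Dataset):
    """Stands in for a real dataset whose samples are read from storage."""
    def __init__(self, num_samples=10_000):
        self.num_samples = num_samples

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        # A real workload would read and decode a file from disk here;
        # a random tensor keeps the sketch self-contained.
        return torch.randn(3, 224, 224), idx % 1000

def measure_throughput(num_workers):
    loader = DataLoader(DiskBackedDataset(), batch_size=64,
                        num_workers=num_workers, pin_memory=True)
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # A single convolution stands in for the GPU compute the article describes.
    model = torch.nn.Conv2d(3, 64, kernel_size=3).to(device)
    start, seen = time.time(), 0
    for images, _ in loader:
        images = images.to(device, non_blocking=True)
        _ = model(images)
        seen += images.size(0)
    return seen / (time.time() - start)

if __name__ == "__main__":
    for workers in (0, 4, 8):
        print(f"{workers} loader workers: {measure_throughput(workers):.0f} images/sec")
```

On a fast local flash tier the worker count often matters little; on a slow shared filesystem it frequently becomes the dominant factor, which is the storage bottleneck the article is concerned with.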


Evaluate your machine learning and AI data storage requirements

#artificialintelligence

Organizations are using machine learning and AI to gain insights they can use to improve how they do business. These workloads differ from traditional enterprise workloads, however, in that they require large amounts of data to build and train statistical models. All of this data must also be processed and stored: active data must be moved to a high-performance platform for processing, while other data is often transferred to long-term storage. To meet these requirements, some storage vendors offer either converged infrastructure products or building blocks that organizations can incorporate into their machine learning and AI projects. These offerings package storage, networking and compute together, or combine scale-out file storage with GPUs.
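As a minimal sketch of that tiering pattern, the snippet below (the paths and dataset name are hypothetical, not from the article) stages an active dataset from slow long-term storage onto a fast local scratch volume before processing, then frees the fast tier afterward.

```python
# Minimal sketch of a two-tier staging workflow (hypothetical paths).
# Long-term storage is modeled as a slow, high-capacity mount; the
# high-performance tier is a local NVMe scratch directory.
import shutil
from pathlib import Path

LONG_TERM = Path("/mnt/archive/datasets")   # assumed slow, high-capacity tier
SCRATCH = Path("/nvme/scratch/active")      # assumed fast local tier for processing

def stage_in(dataset_name: str) -> Path:
    """Copy a dataset from long-term storage to the fast tier and return its path."""
    src = LONG_TERM / dataset_name
    dst = SCRATCH / dataset_name
    if not dst.exists():
        shutil.copytree(src, dst)
    return dst

def stage_out(dataset_name: str) -> None:
    """Free the fast tier once processing is finished; the archive copy remains."""
    shutil.rmtree(SCRATCH / dataset_name, ignore_errors=True)

if __name__ == "__main__":
    active_path = stage_in("clickstream-2019")   # hypothetical dataset name
    print(f"Training reads from the fast tier at {active_path}")
    stage_out("clickstream-2019")
```

The converged and scale-out products the article mentions automate this kind of movement; the sketch only shows the underlying stage-in/stage-out idea.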


Data Storage Architectures for Machine Learning and Artificial Intelligence

#artificialintelligence

There is growing interest in machine learning (ML) and artificial intelligence (AI) among enterprise organizations. The market is quickly moving from infrastructures designed for research and development to turn-key solutions that respond quickly to new business requests. ML/AI are strategic technologies across all industries, improving business processes while enhancing the competitiveness of the entire organization. ML/AI software tools are improving and becoming more user-friendly, making it easier to build new applications or reuse existing models for additional use cases. As the ML/AI market matures, high-performance computing (HPC) vendors are now being joined by traditional storage manufacturers, which usually focus on enterprise workloads.


WekaIO Partners with HPE to Develop All-Flash Storage for HPC and AI - insideHPC

#artificialintelligence

Today WekaIO announced a partnership with HPE to deliver integrated flash-based parallel file system capabilities that can significantly accelerate compute-intensive workloads. "At HPE we're committed to providing innovative solutions for our customers in the rapidly growing markets for high-performance computing, artificial intelligence and machine learning," said Bill Mannel, Vice President and General Manager, HPC and AI Segment Solutions, Hewlett Packard Enterprise. "The combination of WekaIO Matrix with HPE Apollo Systems is an option that enables customers to maximize the throughput of their HPC environment by making it easy to scale storage capacity and performance to new levels without the requirement to modify compute codes or HPC workflows." The agreement creates an offering that targets the space, energy, and processor attributes of supercomputing. The HPE portfolio of HPC and AI solutions--including Apollo 2000, Apollo 6000, Apollo 6500, and SGI 8600--provides rich architectures for leveraging high-performance flash storage, both within the compute platforms themselves and across high-performance interconnect fabrics.


AI is data Pac-Man. Winning requires a flashy new storage strategy.

#artificialintelligence

When it comes to data, AI is like Pac-Man. Hard disk drives, NAS, and conventional data center and cloud-based storage schemes can't sate AI's voracious appetite for speed and capacity, especially in real time. Playing the game today requires a fundamental rethinking of storage as a foundation of machine learning, deep learning, image processing, and neural network success. "AI and Big Data are dominating every aspect of decision-making and operations," says Jeff Denworth, vice president of products and co-founder at Vast Data, a provider of all-flash storage and services. "The need for vast amounts of fast data is rendering the traditional storage pyramid obsolete. Applying new thinking to many of the toughest problems helps simplify the storage and access of huge reserves of data, in real time, leading to insights that were not possible before."