AI Needs an NVMe-Optimized File System

#artificialintelligence

Analytics is evolving from big data and machine learning to artificial intelligence. Machine learning is the analysis of data at rest; artificial intelligence (AI) is the analysis of data in real time. Machine learning is predictive; AI is cognitive. A storage infrastructure supporting an AI environment must provide high bandwidth, low latency, elasticity in response to workload demands, and rapid response to multiple parallel analytic queries. Traditionally, most AI initiatives start as skunkworks projects, often hosted in the cloud.


Micron's SolidScale system pushes SSDs out to shared storage

PCWorld

SSDs operate fastest when installed inside a computer. Micron's new SolidScale storage system uproots SSDs from individual servers and moves them into discrete boxes while reducing latency. SolidScale is a top-of-rack storage system that will house many SSDs. It will connect to servers, memory, and other computing resources in a data center via Gigabit Ethernet, and will use the emerging NVMe over Fabrics (NVMe-oF) 1.0 protocol for data transfers. The new storage system is faster than regular storage arrays, Micron claimed.
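For context, here is a minimal sketch of how a Linux host typically discovers and attaches a remote NVMe subsystem with the standard nvme-cli tool over an RDMA-capable Ethernet fabric. This illustrates generic NVMe-oF usage, not Micron's SolidScale software; the target address, port, and NQN below are hypothetical.

```python
# Generic NVMe-oF host-side sketch using nvme-cli (typically requires root
# and the nvme_rdma kernel module). All target details are hypothetical.
import subprocess

TARGET_ADDR = "192.168.10.5"  # hypothetical target IP on the storage fabric
TARGET_PORT = "4420"          # conventional NVMe-oF service port
TARGET_NQN = "nqn.2017-05.com.example:shared-ssd-pool"  # hypothetical subsystem name

# Ask the target which NVMe subsystems it exports.
subprocess.run(
    ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Attach one subsystem; it then appears locally as an ordinary /dev/nvmeXnY device.
subprocess.run(
    ["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN, "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)
```

Once connected, the remote SSD pool is addressed like a local NVMe device, which is the property disaggregated designs such as SolidScale rely on to approach in-server flash latency.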


Evolving Solutions Storage in a Hybrid MultiCloud World

#artificialintelligence

Cloud computing has opened up new ways of running workloads and serving internal and external clients. Storage has evolved to complement the capabilities the cloud provides, both on premises and off premises. Advances in flash storage technology have created new tiers of storage that organizations can use cost-effectively. More enterprises are opting for a hybrid multicloud environment to gain the agility to meet changing demands. Storage offers more flexibility than ever before to satisfy business and data needs.


How to Choose the Right Storage for AI and HPC Panasas

#artificialintelligence

More and more, companies are using high-performance computing applications, such as large-scale simulations, discovery, and deep learning, to stay competitive, support research innovation, and deliver the best results to customers. But if your company is like most, it is also struggling to pick the right storage system to support this important work.

The problem with traditional HPC storage

While traditional HPC storage systems such as Lustre and Spectrum Scale are powerful, they can also be extremely complex and expensive to manage. They introduce significant new administrative overhead for tuning, optimizing, and maintaining storage performance across different HPC workloads, driving up total cost of ownership (TCO). They can also introduce reliability problems and performance bottlenecks as systems scale.


Why Object Storage Can Be Optimal for AI, Machine Learning Workloads

#artificialintelligence

If IT were a television show, it would be "Hoarders." Organizations are creating and storing more and more data every day, and they're having difficulty finding effective places to put it all.