AI Needs an NVMe-Optimized File System


Analytics is evolving from big data and machine learning to artificial intelligence. Machine learning is the analysis of data at rest; artificial intelligence (AI) is the analysis of data in real time. Machine learning is predictive; AI is cognitive. A storage infrastructure supporting an AI environment must deliver high bandwidth, low latency, elasticity in response to workload demands, and rapid response to multiple parallel analytic queries. Traditionally, most AI initiatives start as skunkworks projects, often hosted in the cloud.

Micron's SolidScale system pushes SSDs out to shared storage


SSDs operate fastest when installed inside a computer. Micron's new SolidScale storage system moves SSDs out of individual servers and into discrete boxes while keeping latency low. SolidScale is a top-of-rack storage system that will house many SSDs. It will connect to servers, memory, and other computing resources in a data center via Gigabit Ethernet, and will use the emerging NVMe-oF (NVMe over Fabrics) 1.0 protocol for data transfers. The new storage system is faster than regular storage arrays, Micron claimed.

How to Choose the Right Storage for AI and HPC (Panasas)


More and more, companies are using high-performance computing applications, such as large-scale simulations, discovery, and deep learning, to stay competitive, support research innovation, and deliver the best results to customers. But if your company is like most, it is also struggling to pick the right storage system to support this important work.

The problem with traditional HPC storage

While traditional HPC storage systems such as Lustre and Spectrum Scale are powerful, they can also be extremely complex and expensive to manage. They introduce significant administrative overhead for tuning, optimizing, and maintaining storage performance across different HPC workloads, driving higher TCO. They can also introduce reliability problems and performance bottlenecks as systems scale.

Why Object Storage Can Be Optimal for AI, Machine Learning Workloads


If IT were a television show, it would be "Hoarders." Organizations are creating and storing more and more data every day, and they're having difficulty finding effective places to put it all.

WekaIO raises $31.7 million to develop file systems optimized for AI and technical workloads


No matter the domain, data-intensive apps share one requirement: a reliable file system that makes data available to them on demand. Pure Storage, NetApp, VAST Data, IBM Spectrum Scale, and Dell EMC provide this, as does San Jose, California-based WekaIO. WekaIO's high-velocity Matrix platform takes advantage of flash storage, off-the-shelf components, and sophisticated software techniques to deliver enormous speedups at exabyte scale. In fact, the company claims Matrix is the fastest parallel file system on the market for AI and technical compute workloads, as measured by independent SPEC SFS 2014 benchmark tests. To lay the groundwork for future growth in AI and analytics, life sciences, manufacturing, media and entertainment, and financial services, WekaIO has closed a $31.7 million series C financing round led by Hewlett Packard Enterprise (HPE), with participation from a host of storage and computing industry giants including Mellanox, Nvidia, Seagate, Western Digital Capital, and Qualcomm.