For the first time, scientists at IBM Research have demonstrated reliably storing 3 bits of data per cell using a relatively new memory technology known as phase-change memory (PCM). The current memory landscape spans from venerable DRAM to hard disk drives to ubiquitous flash. But in the last several years PCM has attracted the industry's attention as a potential universal memory technology thanks to its combination of read/write speed, endurance, non-volatility and density. For example, PCM doesn't lose data when powered off, unlike DRAM, and the technology can endure at least 10 million write cycles, compared to an average flash USB stick, which tops out at about 3,000 write cycles. This breakthrough offers fast, affordable storage to absorb the exponential growth of data from mobile devices and the Internet of Things.
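Storing 3 bits per cell means each PCM cell must hold one of 2^3 = 8 distinguishable resistance levels, rather than the 2 levels of a single-bit cell. The sketch below is a simplified illustration of that packing idea only (the helper names and encoding are hypothetical, not IBM's actual scheme):

```python
# Illustrative sketch: packing bytes into 3-bit "cell levels" (0-7),
# the way a triple-level cell stores 3 bits per physical cell.
# This models only the logical encoding, not the physics.

BITS_PER_CELL = 3
NUM_LEVELS = 2 ** BITS_PER_CELL  # 8 distinguishable levels per cell

def bytes_to_cells(data: bytes) -> list[int]:
    """Pack a byte string into a list of 3-bit cell levels (0-7)."""
    bitstring = "".join(f"{b:08b}" for b in data)
    # Pad so the bit count is a multiple of 3.
    bitstring += "0" * ((-len(bitstring)) % BITS_PER_CELL)
    return [int(bitstring[i:i + BITS_PER_CELL], 2)
            for i in range(0, len(bitstring), BITS_PER_CELL)]

def cells_to_bytes(cells: list[int], n_bytes: int) -> bytes:
    """Unpack 3-bit cell levels back into the original bytes."""
    bitstring = "".join(f"{c:03b}" for c in cells)
    return bytes(int(bitstring[i:i + 8], 2)
                 for i in range(0, n_bytes * 8, 8))

data = b"IBM"                       # 3 bytes = 24 bits
cells = bytes_to_cells(data)        # fits in exactly 8 cells
assert cells_to_bytes(cells, len(data)) == data
```

The practical point is density: the same number of physical cells stores 50% more data than a 2-bit-per-cell design, at the cost of having to tell eight resistance levels apart reliably.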
Artificial intelligence and machine learning hold out the promise of enabling businesses to work smarter and faster, by improving and streamlining operations or offering firms the chance to gain a competitive advantage over their rivals. But where is it best to host such applications: in the cloud, or locally, at the edge? Despite all the hype, it is early days for the technologies that we loosely label "AI", and many organisations lack the expertise and resources to really take advantage of them. Machine learning and deep learning often require teams of experts, for example, as well as access to large data sets for training, and specialised infrastructure with a considerable amount of processing power. Those barriers make the cloud an attractive starting point: cloud service providers have a wealth of development tools and other resources readily available, such as pre-trained deep neural networks for voice, text, image, and translation processing, according to Moor Insights & Strategy Senior Analyst Karl Freund.
When it comes to data, AI is like Pac-Man. Hard disk drives, NAS, conventional data center and cloud-based storage schemes can't sate AI's voracious appetite for speed and capacity, especially for real-time workloads. Playing the game today requires a fundamental rethinking of storage as a foundation of machine learning, deep learning, image processing, and neural network success. "AI and Big Data are dominating every aspect of decision-making and operations," says Jeff Denworth, vice president of products and co-founder at Vast Data, a provider of all-flash storage and services. "The need for vast amounts of fast data is rendering the traditional storage pyramid obsolete. Applying new thinking to many of the toughest problems helps simplify the storage and access of huge reserves of data, in real time, leading to insights that were not possible before."
This article is part of the Technology Insight series, made possible with funding from Intel. By now, you've seen the word "Optane" bandied about on VentureBeat (such as here and here) and probably countless other places -- and for good reason. Intel, the maker of all Optane products, is heavily promoting the results of its decade-long R&D investment in this new memory/storage hybrid. But what exactly is Optane, and what is it good for? If you're not feeling up to speed, don't worry.
Businesses are increasingly using data assets to sharpen their competitiveness and drive greater revenue, and machine learning and AI tools are a core part of that strategy. But AI workloads have significantly different data storage and computing needs than generic workloads. AI and machine learning workloads require huge amounts of data both to build and train the models and to keep them running. When it comes to storage for these workloads, the chief concerns are high performance and long-term data retention.