IBM's Memory Breakthrough Will Speed Up IoT and Machine Learning


IBM researchers have revealed a storage-memory breakthrough that has the potential to speed up machine learning and the Internet of Things (IoT), as well as mobile phone apps and cloud storage. For the first time, scientists have demonstrated reliably storing three bits of data per cell using a memory technology known as phase-change memory (PCM). While memory types span from DRAM to hard disk drives to flash, PCM has grown increasingly popular in the industry over the past few years thanks to its combination of read/write speed, endurance, non-volatility and density. For example, unlike DRAM, PCM doesn't lose data when powered off, and it can endure at least 10 million write cycles, while an average flash USB stick tops out at around 3,000 write cycles. The experimental multi-bit PCM chip used by IBM scientists is connected to a standard integrated circuit board.
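To see what "three bits per cell" buys you, note that a cell must then distinguish 2^3 = 8 physical levels instead of 2. The sketch below is purely illustrative (it is not IBM's coding scheme); it just packs a bit stream into 3-bit symbols, one per hypothetical cell, and unpacks them again.

```python
# Illustrative sketch, not IBM's actual scheme: 3 bits per cell means
# each cell must hold one of 2**3 = 8 distinguishable levels.

BITS_PER_CELL = 3
LEVELS = 2 ** BITS_PER_CELL  # 8 resistance levels per cell

def pack_bits(bits):
    """Group a bit sequence into 3-bit symbols, one per PCM cell."""
    # Pad so the length is a multiple of BITS_PER_CELL.
    bits = list(bits) + [0] * (-len(bits) % BITS_PER_CELL)
    cells = []
    for i in range(0, len(bits), BITS_PER_CELL):
        symbol = 0
        for b in bits[i:i + BITS_PER_CELL]:
            symbol = (symbol << 1) | b
        cells.append(symbol)  # each value in 0..7 maps to one level
    return cells

def unpack_cells(cells, nbits):
    """Recover the original bit sequence from the per-cell symbols."""
    bits = []
    for symbol in cells:
        for shift in range(BITS_PER_CELL - 1, -1, -1):
            bits.append((symbol >> shift) & 1)
    return bits[:nbits]

data = [1, 0, 1, 1, 0, 0]   # six bits fit in two 3-bit cells
cells = pack_bits(data)     # → [5, 4]
assert unpack_cells(cells, len(data)) == data
```

Tripling the bits per cell is what closes the density (and therefore cost) gap with flash that the summaries below describe.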

IBM's phase-change memory breakthrough brings DRAM speed at lower costs


IBM scientists have achieved a breakthrough in phase-change memory (PCM), developing storage capabilities with the speed and endurance of DRAM that also come close to matching the low-cost density of flash. The research team presenting in Zurich on Tuesday has successfully stored 3 bits of data per cell in PCM, surpassing the previous limit of 1 bit per cell in PCM. "Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry," Haris Pozidis, an author of the paper and the manager of non-volatile memory research at IBM Research in Zurich, said in a statement. PCM has some clear advantages over the current memory landscape. For instance, it doesn't lose data when powered off as DRAM does, and it can endure at least 10 million write cycles, while an average USB stick can handle around 3,000.

IBM's optical storage is 50 times faster than flash


Much as with a Blu-ray disc, to store PCM data you apply a high current to amorphous (non-crystalline) glass materials, transforming them into a more conductive crystalline form. To read it back, you apply a lower voltage and measure conductivity -- when it's high, the state is "1," and when it's low, it's "0." Heating the materials to intermediate states lets each cell store more levels, but the problem is that those states can "drift" depending on the ambient temperature. IBM's team figured out how to track and encode those variations, allowing them to reliably read three bits of data per cell long after the data was written. That suddenly makes PCM a lot more interesting -- its speed is already much better than flash's, but its cost has been as high as RAM's because of the low density.
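The drift problem described above can be sketched in a few lines. This is a hedged toy model, not IBM's actual drift-compensation technique: it assumes drift scales every cell's stored conductance by the same factor, so decoding against fixed thresholds fails while decoding relative to a reference cell (aged identically) still recovers the level.

```python
# Toy model of resistance drift in multi-level PCM cells.
# Assumption (for illustration only): drift scales all stored
# conductances uniformly, so relative comparisons survive it.

LEVELS = [i / 7 for i in range(8)]  # 8 nominal conductance levels = 3 bits

def drift(reading, factor):
    """Model drift as a uniform scaling of the stored conductance."""
    return reading * factor

def decode_absolute(reading):
    """Naive decode: nearest nominal level (breaks under drift)."""
    return min(range(8), key=lambda i: abs(LEVELS[i] - reading))

def decode_relative(reading, ref_reading, ref_level=1.0):
    """Drift-tolerant decode: rescale by a known reference cell first."""
    corrected = reading * (ref_level / ref_reading)
    return min(range(8), key=lambda i: abs(LEVELS[i] - corrected))

# Write level 5 plus a full-scale reference cell, then let both drift.
stored, ref = LEVELS[5], LEVELS[7]
aged, aged_ref = drift(stored, 0.6), drift(ref, 0.6)

print(decode_absolute(aged))            # misreads the drifted cell as level 3
print(decode_relative(aged, aged_ref))  # recovers level 5
```

The real chip's scheme is far more sophisticated (it tracks drift statistically across many cells), but the core idea -- decode against how the levels have moved, not where they started -- is the same.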

IBM's latest move may have cracked the code on a cheaper DRAM alternative


A cheaper alternative to DRAM just took a step closer to enterprise data centers as IBM unveiled a way to make it more dense. PCM (phase-change memory) is one of a handful of emerging technologies that aim to be faster than flash and less expensive than DRAM. They could give enterprises and consumers faster access to data at lower cost, but there are challenges to overcome before that happens. Density is one of those, and IBM says it's achieved a new high in that area with a version of PCM that can fit three bits on each cell. That's 50 percent more than the company showed off in 2011 with a two-bit form of PCM.

Memory Issues For AI Edge Chips


Several companies are developing or ramping up AI chips for systems on the network edge, but vendors face a variety of challenges around process nodes and memory choices that can vary greatly from one application to the next. The network edge involves a class of products ranging from cars and drones to security cameras, smart speakers and even enterprise servers. All of these applications incorporate low-power chips running machine learning algorithms. While these chips have many of the same components as other digital chips, a key difference is that the bulk of the processing is done in or near the memory. With that in mind, the makers of AI edge chips are evaluating different types of memory for their next devices. Each comes with its own set of challenges. In addition, the chips themselves must incorporate low-power architectures, despite the fact that in many cases they are using mature processes rather than the most advanced nodes. AI chips -- sometimes called deep-learning accelerators or processors -- are optimized to handle various workloads in systems using machine learning. A subset of AI, machine learning utilizes a neural network to crunch data and identify patterns.
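The workload these edge chips accelerate boils down to multiply-accumulate (MAC) operations inside neural-network layers, which is why keeping weights resident in or near the memory array matters. A minimal sketch of one such layer, with illustrative numbers not tied to any specific chip:

```python
# Minimal sketch of the core operation an AI edge chip accelerates:
# the multiply-accumulate at the heart of a fully connected layer.
# In-/near-memory designs keep the weights resident in the memory
# array, so each MAC avoids a costly off-chip weight fetch.

def relu(x):
    """Common nonlinearity applied after each layer's MACs."""
    return max(0.0, x)

def dense_layer(inputs, weights, biases):
    """One layer: y_j = relu(sum_i weights[j][i] * inputs[i] + biases[j])."""
    return [
        relu(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

# A 3-input, 2-neuron layer: the weights stay fixed ("in memory")
# while activations stream through them.
weights = [[0.5, -1.0, 0.25],
           [1.0,  0.5, -0.5]]
biases = [0.1, -0.2]
print(dense_layer([1.0, 0.5, 2.0], weights, biases))
```

A full model stacks many such layers, and every weight read is a memory access -- which is why memory choice and placement, not just logic speed, dominates the design of these chips.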