KVNAND: Efficient On-Device Large Language Model Inference Using DRAM-Free In-Flash Computing

Deng, Lishuo, Xu, Shaojie, Chen, Jinwu, Yan, Changwei, Wang, Jiajie, Jiang, Zhe, Shan, Weiwei

arXiv.org Artificial Intelligence

Deploying large language models (LLMs) on edge devices enables personalized agents with strong privacy and low cost. However, with tens to hundreds of billions of parameters, single-batch autoregressive inference suffers from extremely low arithmetic intensity, creating severe weight-loading and bandwidth pressures on resource-constrained platforms. Recent in-flash computing (IFC) solutions alleviate this bottleneck by co-locating weight-related linear computations in the decode phase with flash, yet still rely on DRAM for the key-value (KV) cache. As context length grows, the KV cache can exceed the model weights in size, imposing prohibitive DRAM cost and capacity requirements, and attempts to offload the KV cache to flash suffer severe performance penalties. We propose KVNAND, the first DRAM-free, IFC-based architecture that stores both model weights and KV cache entirely in compute-enabled 3D NAND flash. KVNAND addresses the fundamental performance challenges of flash under intensive KV cache access by leveraging IFC for all memory-bound operations to reduce data transfer overhead, introducing head-group parallelism to boost throughput, and employing page-level KV cache mapping to align token access patterns with flash organization. In addition, we propose a design space exploration framework that evaluates discrete and compact KVNAND variants to balance weight and KV placement, automatically identifying the optimal design trade-off. These techniques mitigate latency, energy, and reliability concerns, turning flash into a practical medium for long-context KV storage. Evaluations on MHA 7B and GQA 70B LLMs show that KVNAND achieves 1.98x/1.94x/2.05x geomean speedup at 128/1K/10K-token contexts compared to DRAM-equipped IFC designs and avoids out-of-memory failures at 100K context length.

As Large Language Models (LLMs) integrate into daily workflows, demand increases for personalized AI agents that align with user preferences, domain knowledge, and interaction styles. Deploying such agents on edge devices offers privacy, low-latency responsiveness, and cost efficiency by eliminating cloud dependency, making on-device LLMs a compelling direction for AI democratization [81]. Realizing high-quality personal LLM agents on resource-limited edge devices faces two main bottlenecks: memory capacity and bandwidth. The growing demand for long-context agentic workflows such as long document analysis [35], multi-turn dialogue [84], and chain-of-thought reasoning [10] introduces the KV cache as another dominant consumer of this limited memory [19], [74]. Moreover, recent state-of-the-art (SoTA) models support context lengths of 128K and beyond (e.g., LLaMA3.1-70B). The KV cache demand scales linearly with context length; for example, a 13B model already requires 8 GB of KV memory at a 10K context [71], placing prohibitive pressure on edge resources.
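To make the linear scaling concrete, here is a minimal back-of-the-envelope sketch of KV-cache sizing. The standard multi-head attention layout (K and V per layer), the 13B-class hyperparameters, and the fp16 storage assumption are illustrative and not taken from the paper.

```python
# Back-of-the-envelope KV-cache sizing: 2 tensors (K and V) per layer, each of
# shape [context_len, num_kv_heads * head_dim], stored in fp16 (2 bytes/element).
# The 13B-class hyperparameters below are illustrative, not from the paper.

def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_elem: int = 2) -> int:
    """Total KV-cache footprint for a single sequence."""
    per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem
    return per_token * context_len

if __name__ == "__main__":
    # Roughly a 13B-class MHA shape: 40 layers, 40 heads of dimension 128.
    size = kv_cache_bytes(num_layers=40, num_kv_heads=40, head_dim=128,
                          context_len=10_000)
    print(f"KV cache at 10K context: {size / 2**30:.1f} GiB")  # ~7.6 GiB
```

Under these assumed shapes the footprint lands near the 8 GB figure quoted above, and it grows strictly linearly with context length, which is why a 100K context overwhelms edge DRAM budgets.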


Emergence of Fixational and Saccadic Movements in a Multi-Level Recurrent Attention Model for Vision

Pan, Pengcheng, Yonekura, Shogo, Kuniyoshi, Yasuo

arXiv.org Artificial Intelligence

Inspired by foveal vision, hard attention models promise interpretability and parameter economy. However, existing models such as the Recurrent Model of Visual Attention (RAM) and the Deep Recurrent Attention Model (DRAM) fail to model the hierarchy of the human visual system, which compromises their visual exploration dynamics. As a result, they tend to produce attention patterns that are either overly fixational or excessively saccadic, diverging from human eye movement behavior. In this paper, we propose the Multi-Level Recurrent Attention Model (MRAM), a novel hard attention framework that explicitly models the neural hierarchy of human visual processing. By decoupling glimpse location generation and task execution into two recurrent layers, MRAM exhibits an emergent balance between fixational and saccadic movements. Our results show that MRAM not only achieves more human-like attention dynamics, but also consistently outperforms CNN, RAM, and DRAM baselines on standard image classification benchmarks.
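The following is a minimal PyTorch-style sketch of the decoupling idea: one recurrent core maintains a "where to look" state that drives the next glimpse location, while a second core maintains the task state used for classification. The layer sizes, GRU cells, and wiring between the two cores are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TwoLevelRecurrentAttention(nn.Module):
    """Sketch of two recurrent layers separating glimpse-location generation
    from task execution (names and sizes are illustrative)."""

    def __init__(self, glimpse_dim=128, hidden_dim=256, num_classes=10):
        super().__init__()
        self.glimpse_net = nn.Linear(glimpse_dim, hidden_dim)   # encodes a foveal patch
        self.loc_rnn = nn.GRUCell(hidden_dim, hidden_dim)       # "where" pathway
        self.task_rnn = nn.GRUCell(hidden_dim, hidden_dim)      # "what" pathway
        self.loc_head = nn.Linear(hidden_dim, 2)                 # next (x, y) in [-1, 1]
        self.cls_head = nn.Linear(hidden_dim, num_classes)

    def forward(self, glimpse_feats, h_loc, h_task):
        g = torch.relu(self.glimpse_net(glimpse_feats))
        h_loc = self.loc_rnn(g, h_loc)           # state for planning the next saccade
        h_task = self.task_rnn(h_loc, h_task)    # task state conditioned on where we looked
        next_loc = torch.tanh(self.loc_head(h_loc))
        logits = self.cls_head(h_task)
        return next_loc, logits, h_loc, h_task

if __name__ == "__main__":
    model = TwoLevelRecurrentAttention()
    h_loc, h_task = torch.zeros(4, 256), torch.zeros(4, 256)
    feats = torch.randn(4, 128)                   # placeholder glimpse features
    loc, logits, h_loc, h_task = model(feats, h_loc, h_task)
    print(loc.shape, logits.shape)                # torch.Size([4, 2]) torch.Size([4, 10])
```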



Stratum: System-Hardware Co-Design with Tiered Monolithic 3D-Stackable DRAM for Efficient MoE Serving

Pan, Yue, Xia, Zihan, Hsu, Po-Kai, Hu, Lanxiang, Kim, Hyungyo, Sharda, Janak, Zhou, Minxuan, Kim, Nam Sung, Yu, Shimeng, Rosing, Tajana, Kang, Mingu

arXiv.org Artificial Intelligence

As Large Language Models (LLMs) continue to evolve, Mixture of Experts (MoE) architecture has emerged as a prevailing design for achieving state-of-the-art performance across a wide range of tasks. MoE models use sparse gating to activate only a handful of expert sub-networks per input, achieving billion-parameter capacity with inference costs akin to much smaller models. However, such models often pose challenges for hardware deployment due to the massive data volume introduced by the MoE layers. To address the challenges of serving MoE models, we propose Stratum, a system-hardware co-design approach that combines the novel memory technology Monolithic 3D-Stackable DRAM (Mono3D DRAM), near-memory processing (NMP), and GPU acceleration. The logic and Mono3D DRAM dies are connected through hybrid bonding, whereas the Mono3D DRAM stack and GPU are interconnected via silicon interposer. Mono3D DRAM offers higher internal bandwidth than HBM thanks to the dense vertical interconnect pitch enabled by its monolithic structure, which supports implementations of higher-performance near-memory processing. Furthermore, we tackle the latency differences introduced by aggressive vertical scaling of Mono3D DRAM along the z-dimension by constructing internal memory tiers and assigning data across layers based on access likelihood, guided by topic-based expert usage prediction to boost NMP throughput. The Stratum system achieves up to 8.29x improvement in decoding throughput and 7.66x better energy efficiency across various benchmarks compared to GPU baselines.
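As a rough illustration of likelihood-guided placement, the sketch below greedily assigns the experts predicted to be used most often to the fastest memory tiers. The expert names, usage probabilities, and tier capacities are placeholders rather than values from Stratum, and the real system derives likelihoods from topic-based expert usage prediction.

```python
# Toy tier assignment for MoE experts: hotter (more likely) experts go to the
# faster Mono3D DRAM tiers. Capacities and scores below are placeholders.

from typing import Dict, List

def assign_experts_to_tiers(usage_prob: Dict[str, float],
                            tier_capacity: List[int]) -> List[List[str]]:
    """tier_capacity[i] = number of experts that fit in tier i (tier 0 = fastest)."""
    ranked = sorted(usage_prob, key=usage_prob.get, reverse=True)
    tiers, start = [], 0
    for cap in tier_capacity:
        tiers.append(ranked[start:start + cap])
        start += cap
    return tiers

if __name__ == "__main__":
    probs = {"expert_0": 0.42, "expert_1": 0.05, "expert_2": 0.31,
             "expert_3": 0.12, "expert_4": 0.10}
    print(assign_experts_to_tiers(probs, tier_capacity=[2, 3]))
    # [['expert_0', 'expert_2'], ['expert_3', 'expert_4', 'expert_1']]
```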



An End-to-End DNN Inference Framework for the SpiNNaker2 Neuromorphic MPSoC

Jobst, Matthias, Langer, Tim, Liu, Chen, Alici, Mehmet, Gonzalez, Hector A., Mayr, Christian

arXiv.org Artificial Intelligence

This work presents a multi-layer DNN scheduling framework as an extension of OctopuScheduler, providing an end-to-end flow from PyTorch models to inference on a single SpiNNaker2 chip. Together with a front-end comprised of quantization and lowering steps, the proposed framework enables the edge-based execution of large and complex DNNs up to transformer scale using the neuromorphic platform SpiNNaker2. The efficient deployment of Deep Neural Networks (DNNs) on constrained devices has the potential to revolutionize the entire edge industry. While the primary energy challenges are associated with datacenter workloads [1], mapping DNN models efficiently to the edge enables the development of smarter infrastructure nodes. Neuromorphic computing stands out as a particularly promising approach to significantly reduce the energy footprint of these AI workloads by emulating the extreme efficiencies of biological brains [2].
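As a rough idea of what the quantization stage in such a front-end might involve, the snippet below performs per-tensor symmetric int8 quantization in NumPy. It is a generic sketch of the technique, not the actual OctopuScheduler or SpiNNaker2 API.

```python
import numpy as np

def quantize_symmetric_int8(w: np.ndarray):
    """Per-tensor symmetric int8 quantization, typical of DNN lowering front-ends
    (illustrative only)."""
    scale = np.max(np.abs(w)) / 127.0 if np.any(w) else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(64, 64).astype(np.float32)
    q, s = quantize_symmetric_int8(w)
    err = np.abs(dequantize(q, s) - w).max()
    print(f"max abs quantization error: {err:.4f}")
```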


Deep Reinforcement Learning-Based DRAM Equalizer Parameter Optimization Using Latent Representations

Usama, Muhammad, Chang, Dong Eui

arXiv.org Artificial Intelligence

Equalizer parameter optimization for signal integrity in high-speed Dynamic Random Access Memory systems is crucial but often computationally demanding or model-reliant. This paper introduces a data-driven framework employing learned latent signal representations for efficient signal integrity evaluation, coupled with a model-free Advantage Actor-Critic reinforcement learning agent for parameter optimization. The latent representation captures vital signal integrity features, offering a fast alternative to direct eye diagram analysis during optimization, while the reinforcement learning agent derives optimal equalizer settings without explicit system models. Applied to industry-standard Dynamic Random Access Memory waveforms, the method achieved significant eye-opening window area improvements: 42.7% for cascaded Continuous-Time Linear Equalizer and Decision Feedback Equalizer structures, and 36.8% for Decision Feedback Equalizer-only configurations. These results demonstrate superior performance, computational efficiency, and robust generalization across diverse Dynamic Random Access Memory units compared to existing techniques. Core contributions include an efficient latent signal integrity metric for optimization, a robust model-free reinforcement learning strategy, and validated superior performance for complex equalizer architectures.
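A hedged sketch of the overall loop is shown below: a Gaussian-policy actor-critic proposes candidate equalizer settings and is rewarded by a fast signal-integrity score. The latent_si_score function is a toy stand-in for the learned latent metric described in the paper, and the network sizes, parameter count, and single-state setup are illustrative assumptions.

```python
# Model-free actor-critic tuning of equalizer parameters against a fast
# latent signal-integrity (SI) proxy. Everything below is a toy illustration.

import torch
import torch.nn as nn

def latent_si_score(params: torch.Tensor) -> torch.Tensor:
    # Stand-in for the learned latent SI metric: peak "eye opening" at an
    # unknown optimum; the real metric is derived from waveform latents.
    target = torch.tensor([0.3, -0.1, 0.5])
    return -(params - target).pow(2).sum(dim=-1)

actor = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 3))   # Gaussian policy mean
critic = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # value baseline
log_std = nn.Parameter(torch.zeros(3))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()) + [log_std], lr=1e-2)

state = torch.ones(1, 1)  # fixed context for this single-setting toy example
for step in range(500):
    mean = actor(state)
    dist = torch.distributions.Normal(mean, log_std.exp())
    action = dist.sample()                 # candidate CTLE/DFE-style settings
    reward = latent_si_score(action)       # fast latent SI evaluation, no eye diagram
    value = critic(state).squeeze(-1)
    advantage = (reward - value).detach()
    loss = -(dist.log_prob(action).sum(-1) * advantage) + (reward.detach() - value).pow(2)
    opt.zero_grad()
    loss.mean().backward()
    opt.step()

print("tuned parameters:", actor(state).detach().numpy())
```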


Rapid Parameter Inference with Uncertainty Quantification for a Radiological Plume Source Identification Problem

Edwards, Christopher, Smith, Ralph C

arXiv.org Artificial Intelligence

In the event of a nuclear accident, or the detonation of a radiological dispersal device, quickly locating the source of the accident or blast is important for emergency response and environmental decontamination. At a specified time after a simulated instantaneous release of an aerosolized radioactive contaminant, measurements are recorded downwind from an array of radiation sensors. Neural networks are employed to infer the source release parameters in an accurate and rapid manner using sensor and mean wind speed data. We consider two neural network constructions that quantify the uncertainty of the predicted values: a categorical classification neural network and a Bayesian neural network. With the categorical classification neural network, we partition the spatial domain and treat each partition as a separate class for which we estimate the probability that it contains the true source location. In a Bayesian neural network, the weights and biases have a distribution rather than a single optimal value. With each evaluation, these distributions are sampled, yielding a different prediction with each evaluation. The trained Bayesian neural network is thus evaluated repeatedly to construct posterior densities for the release parameters. Results are compared to Markov chain Monte Carlo (MCMC) results found using the Delayed Rejection Adaptive Metropolis algorithm. The Bayesian neural network approach is generally much cheaper computationally than the MCMC approach: it only requires repeated neural network evaluations to generate posterior densities, whereas MCMC depends on the computational expense of the transport and radiation detection models.
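To illustrate the "different prediction with each evaluation" behavior, the sketch below defines a linear layer whose weights are re-sampled from Gaussians at every forward pass, then stacks repeated evaluations into an empirical posterior. The layer sizes, untrained variational parameters, and placeholder sensor features are assumptions for illustration, not the paper's trained model.

```python
# Minimal sketch of sampling a Bayesian network's weight distributions at each
# evaluation to build a posterior over predicted release parameters.

import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    def __init__(self, in_f, out_f):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_f, in_f))
        self.w_logstd = nn.Parameter(torch.full((out_f, in_f), -2.0))
        self.b_mu = nn.Parameter(torch.zeros(out_f))
        self.b_logstd = nn.Parameter(torch.full((out_f,), -2.0))

    def forward(self, x):
        # Draw fresh weights and biases from their Gaussian posteriors each call.
        w = self.w_mu + self.w_logstd.exp() * torch.randn_like(self.w_mu)
        b = self.b_mu + self.b_logstd.exp() * torch.randn_like(self.b_mu)
        return x @ w.t() + b

# Repeated evaluations of the same input give different outputs; collecting
# them approximates a posterior density over the release parameters.
net = nn.Sequential(BayesianLinear(8, 32), nn.Tanh(), BayesianLinear(32, 3))
sensor_readings = torch.randn(1, 8)            # placeholder sensor + wind features
samples = torch.stack([net(sensor_readings) for _ in range(1000)])
print("posterior mean:", samples.mean(0))
print("posterior std: ", samples.std(0))
```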


Managed-Retention Memory: A New Class of Memory for the AI Era

Legtchenko, Sergey, Stefanovici, Ioan, Black, Richard, Rowstron, Antony, Liu, Junyi, Costa, Paolo, Canakci, Burcu, Narayanan, Dushyanth, Wu, Xingbo

arXiv.org Artificial Intelligence

AI clusters today are one of the major uses of High Bandwidth Memory (HBM). However, HBM is suboptimal for AI workloads for several reasons. Analysis shows HBM is overprovisioned on write performance, but underprovisioned on density and read bandwidth, and also has significant energy per bit overheads. It is also expensive, with lower yield than DRAM due to manufacturing complexity. We propose a new memory class: Managed-Retention Memory (MRM), which is more optimized to store key data structures for AI inference workloads. We believe that MRM may finally provide a path to viability for technologies that were originally proposed to support Storage Class Memory (SCM). These technologies traditionally offered long-term persistence (10+ years) but provided poor IO performance and/or endurance. MRM makes different trade-offs, and by understanding the workload IO patterns, MRM foregoes long-term data retention and write performance for better potential performance on the metrics important for these workloads.


AI beats human experts at distinguishing American whiskey from Scotch

New Scientist

Artificial intelligence can tell Scotch whisky from American whiskey and identify its strongest constituent aromas more reliably than human experts – by using data rather than tasting the drinks. Andreas Grasskamp at the Fraunhofer Institute for Process Engineering and Packaging IVV in Germany and his colleagues trained an AI molecular odour prediction algorithm called OWSum on descriptions of different whiskies. Then, in a study involving 16 samples – nine types of Scotch whisky and seven types of American bourbon or whiskey – they tasked OWSum with telling drinks from the two nations apart based on keyword descriptions of their flavours, such as flowery, fruity, woody or smoky. Using these alone, the AI could tell which country a drink came from with almost 94 per cent accuracy. Because the complex aroma of these spirits is determined by the absence or presence of many chemical compounds, the researchers also fed the AI a reference dataset of 390 molecules commonly found in whiskies.