Analysis of a Memcapacitor-Based Neural Network Accelerator Framework

Singh, Ankur, Kim, Dowon, Lee, Byung-Geun

arXiv.org Artificial Intelligence

Memelements have emerged as a promising class of devices, demonstrating remarkable performance, particularly when deployed in crossbar architectures [1-3]. Their integration into these structures significantly enhances the efficiency of vector-matrix multiplication (VMM) by enabling the parallel execution of product and summation operations within the devices. This capability is particularly beneficial in convolutional neural networks (CNNs), where extensive matrix operations are fundamental to both training and inference. The combination of in-memory computing (IMC) architectures with the adjustable analog memductance of memelements further contributes to power-efficient VMM and training, enabling highly integrated memory architectures. Consequently, a wide array of CNN hardware designs utilizing memelement-based VMM accelerators [3-6] has been proposed, with their effectiveness consistently demonstrated in various studies.

Neuromorphic computing, modeled after brain-like processes and grounded in artificial neural networks, offers effective solutions for a wide range of computationally demanding tasks. Originally conceptualized in the 1980s [7-8], the field has seen substantial progress with the advent of memristive devices [9] and the introduction of convolutional layers in deep neural networks [10-11]. These innovations have facilitated the development of various resistive neuromorphic systems employing materials such as oxides [12-14], phase-change memory [15], spintronic devices [16-17], and ferroelectric components, including ferroelectric tunnel junctions [18-19] and ferroelectric field-effect transistors (FeFETs) [20-21].
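The parallel product-and-summation operation at the heart of crossbar VMM can be sketched in a few lines. This is an illustrative model, not code from the paper; the array size, device states, and input voltages below are hypothetical:

```python
# Sketch of a memelement crossbar performing analog VMM. Each device at row i,
# column j stores a state g[i][j] (a conductance, or a capacitance for
# memcapacitors) that encodes a weight. Driving the rows with voltages v yields
# column outputs I_j = sum_i v[i] * g[i][j]: the products come from Ohm's law
# at each device and the summation from Kirchhoff's current law at each column,
# all in parallel.

def crossbar_vmm(v, g):
    """Model the one-step analog read-out of a crossbar: I = v . G."""
    rows, cols = len(g), len(g[0])
    return [sum(v[i] * g[i][j] for i in range(rows)) for j in range(cols)]

# Hypothetical 3x2 array of device states and a 3-element input vector.
g = [[0.5, 0.1],
     [0.2, 0.4],
     [0.3, 0.3]]
v = [1.0, -0.5, 2.0]

currents = crossbar_vmm(v, g)  # one parallel read-out per column
```

In a CNN accelerator, each column would encode one flattened convolution kernel, so a single read-out computes all kernel dot products for one input patch at once.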


Intrinsic Voltage Offsets in Memcapacitive Bio-Membranes Enable High-Performance Physical Reservoir Computing

Mohamed, Ahmed S., Dhungel, Anurag, Hasan, Md Sakib, Najem, Joseph S.

arXiv.org Artificial Intelligence

Reservoir computing is a brain-inspired machine learning framework for processing temporal data by mapping inputs into high-dimensional spaces. Physical reservoir computers (PRCs) leverage native fading memory and nonlinearity in physical substrates, including atomic switches, photonics, volatile memristors, and, recently, memcapacitors, to achieve efficient high-dimensional mapping. Traditional PRCs often consist of homogeneous device arrays, which rely on input encoding methods and large stochastic device-to-device variations for increased nonlinearity and high-dimensional mapping. These approaches incur high pre-processing costs and restrict real-time deployment. Here, we introduce a novel heterogeneous memcapacitor-based PRC that exploits internal voltage offsets to enable both monotonic and non-monotonic input-state correlations crucial for efficient high-dimensional transformations. We demonstrate our approach's efficacy by predicting a second-order nonlinear dynamical system with an extremely low prediction error (0.00018). Additionally, we predict a chaotic Hénon map, achieving a low normalized root mean square error (0.080). Unlike previous PRCs, such errors are achieved without input encoding methods, underscoring the power of distinct input-state correlations. Most importantly, we generalize our approach to other neuromorphic devices that lack inherent voltage offsets using externally applied offsets to realize various input-state correlations. Our approach and unprecedented performance are a major milestone towards high-performance full in-materia PRCs.


Biomembrane-based Memcapacitive Reservoir Computing System for Energy Efficient Temporal Data Processing

Hossain, Md Razuan, Mohamed, Ahmed Salah, Armendarez, Nicholas Xavier, Najem, Joseph S., Hasan, Md Sakib

arXiv.org Artificial Intelligence

Reservoir computing is a highly efficient machine learning framework for processing temporal data by extracting features from the input signal and mapping them into higher-dimensional spaces. Physical reservoir layers have been realized using spintronic oscillators, atomic switch networks, silicon photonic modules, ferroelectric transistors, and volatile memristors. However, these devices are intrinsically energy-dissipative due to their resistive nature, which leads to increased power consumption. Capacitive memory devices can therefore provide a more energy-efficient approach. Here, we leverage volatile biomembrane-based memcapacitors that closely mimic certain short-term synaptic plasticity functions as reservoirs to solve classification tasks and analyze time-series data, both in simulation and experimentally. Our system achieves a 99.6% accuracy rate for spoken digit classification and a normalized mean square error of 7.81×10⁻⁴ in a second-order non-linear regression task. Furthermore, to showcase the device's real-time temporal data processing capability, we achieve 100% accuracy on a real-time epilepsy detection problem from an input electroencephalography (EEG) signal. Most importantly, we demonstrate that each memcapacitor consumes an average of 41.5 fJ of energy per spike, regardless of the selected input voltage pulse width, while maintaining an average power of 415 fW for a pulse width of 100 ms. These values are orders of magnitude lower than those achieved by state-of-the-art memristors used as reservoirs. Lastly, we believe the biocompatible, soft nature of our memcapacitor makes it highly suitable for computing and signal-processing applications in biological environments.
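The energy and power figures quoted above are mutually consistent, since average power is energy divided by pulse duration. A quick check using only the numbers in the abstract:

```python
# Sanity check of the reported figures: 41.5 fJ per spike delivered over a
# 100 ms pulse corresponds to an average power of P = E / t.

energy_per_spike = 41.5e-15   # joules (41.5 fJ, from the abstract)
pulse_width = 100e-3          # seconds (100 ms pulse width)

avg_power = energy_per_spike / pulse_width   # watts
avg_power_fW = avg_power / 1e-15             # express in femtowatts
```

This yields 415 fW, matching the abstract, and explains why the energy per spike is pulse-width independent while the average power scales inversely with pulse width.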