Analysis of a Memcapacitor-Based Neural Network Accelerator Framework

Ankur Singh, Dowon Kim, Byung-Geun Lee

arXiv.org Artificial Intelligence 

Memelements have emerged as a promising class of devices, demonstrating remarkable performance, particularly when deployed in crossbar architectures [1-3]. Their integration into these structures significantly improves the efficiency of vector-matrix multiplication (VMM) by performing the product and summation operations in parallel within the devices themselves. This capability is particularly beneficial for convolutional neural networks (CNNs), where extensive matrix operations are fundamental to both training and inference. Combining in-memory computing (IMC) architectures with the tunable analog memductance of memelements further enables power-efficient VMM and training, supporting the development of highly integrated memory architectures. Consequently, a wide array of CNN hardware designs built on memelement-based VMM accelerators [3-6] has been proposed, and their effectiveness has been demonstrated in numerous studies.

Neuromorphic computing, modeled after brain-like processes and grounded in artificial neural networks, offers effective solutions for a wide range of computationally demanding tasks. Originally conceptualized in the 1980s [7-8], the field has advanced substantially with the advent of memristive devices [9] and the introduction of convolutional layers in deep neural networks [10-11]. These innovations have enabled a variety of resistive neuromorphic systems employing materials such as oxides [12-14], phase-change memory [15], spintronic devices [16-17], and ferroelectric components, including ferroelectric tunnel junctions [18-19] and ferroelectric field-effect transistors (FeFETs) [20-21].
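To make the crossbar VMM idea concrete, the following is a minimal illustrative sketch (not the paper's implementation): each device at row i, column j stores a programmable conductance G[i][j]; applying voltages V[i] to the rows produces, by Ohm's and Kirchhoff's laws, column currents I[j] = Σ_i V[i]·G[i][j], so the product and summation happen in parallel across the array. The function name and values here are purely hypothetical.

```python
def crossbar_vmm(voltages, conductances):
    """Model an ideal memelement crossbar: row voltages in, column currents out.

    voltages: list of row input voltages V[i]
    conductances: 2-D list G[i][j] of device conductances (or, for a
    memcapacitive array, the analogous charge-based weights)
    Returns the list of column output currents I[j] = sum_i V[i] * G[i][j].
    """
    rows = len(conductances)
    cols = len(conductances[0])
    return [sum(voltages[i] * conductances[i][j] for i in range(rows))
            for j in range(cols)]

# Hypothetical 2x3 crossbar: two input rows, three output columns.
V = [1.0, 0.5]
G = [[0.1, 0.2, 0.3],
     [0.4, 0.5, 0.6]]
print(crossbar_vmm(V, G))  # approximately [0.3, 0.45, 0.6]
```

In a physical array this entire computation completes in one read cycle, which is the source of the efficiency gains cited above; digital accelerators instead need O(rows × cols) multiply-accumulate operations.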