Park, Jihoon
COMPASS: A Compiler Framework for Resource-Constrained Crossbar-Array Based In-Memory Deep Learning Accelerators
Park, Jihoon, Choe, Jeongin, Kim, Dohyun, Kim, Jae-Joon
Recently, crossbar-array-based in-memory accelerators have been gaining interest due to their high throughput and energy efficiency. While software and compiler support for these in-memory accelerators has also been introduced, it is currently limited to the case where all weights are assumed to reside on chip. This limitation becomes apparent as network sizes grow far beyond the in-memory footprint, making weight replacement schemes essential. We propose COMPASS, a compiler framework for resource-constrained crossbar-based processing-in-memory (PIM) deep neural network (DNN) accelerators. COMPASS specifically targets networks that exceed the capacity of the PIM crossbar arrays and therefore require access to external memory. We propose an algorithm that determines an optimal partitioning of the layers so that each partition can be accelerated on chip. Our scheme takes into account the data dependence between layers, core utilization, and the number of write instructions to minimize latency and memory accesses and to improve energy efficiency. Simulation results demonstrate that COMPASS can accommodate many more networks within a minimal memory footprint, while improving throughput by 1.78X and providing 1.28X savings in energy-delay product (EDP) over baseline partitioning methods.
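As a rough illustration of the partitioning problem (a minimal sketch only, not the COMPASS implementation: the linear-chain assumption and the cost terms and names such as layer_xbars, write_cost, and act_bytes are ours), a dynamic program can choose layer boundaries so that each partition fits in the available crossbars while trading off weight-write cost against off-chip traffic for boundary activations:

    # Minimal sketch (not the authors' implementation): dynamic-programming layer
    # partitioning for a linear layer chain with hypothetical per-layer costs.
    # layer_xbars[i] : crossbars needed to map layer i's weights
    # write_cost[i]  : cost of writing layer i's weights into the arrays
    # act_bytes[i]   : activation bytes crossing the boundary after layer i
    from functools import lru_cache

    def partition(layer_xbars, write_cost, act_bytes, capacity, dram_cost_per_byte):
        n = len(layer_xbars)

        @lru_cache(maxsize=None)
        def best(start):
            # Minimum cost of scheduling layers start..n-1.
            if start == n:
                return 0.0, ()
            best_cost, best_cuts, used = float("inf"), None, 0
            for end in range(start, n):          # try partition [start, end]
                used += layer_xbars[end]
                if used > capacity:              # each partition must fit on chip
                    break
                cost = sum(write_cost[start:end + 1])      # reprogramming cost
                if end + 1 < n:                            # boundary activations
                    cost += act_bytes[end] * dram_cost_per_byte
                rest_cost, rest_cuts = best(end + 1)
                if cost + rest_cost < best_cost:
                    best_cost, best_cuts = cost + rest_cost, (end,) + rest_cuts
            return best_cost, best_cuts

        return best(0)   # (total cost, index of the last layer in each partition)

A greedy or search-based partitioner could replace the dynamic program; the point is only that the objective couples on-chip capacity, write instructions, and inter-partition data movement, as described in the abstract.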
Oscillations enhance time-series prediction in reservoir computing with feedback
Kawai, Yuji, Morita, Takashi, Park, Jihoon, Asada, Minoru
Reservoir computing, a machine learning framework used for modeling the brain, can predict temporal data from few observations and with minimal computational resources. However, it is difficult to accurately reproduce long-term target time series because the reservoir system becomes unstable. This predictive capability is required for a wide variety of time-series processing tasks, including prediction of motor timing and of chaotic dynamical systems. This study proposes oscillation-driven reservoir computing (ODRC) with feedback, in which oscillatory signals are fed into a reservoir network to stabilize the network activity and induce complex reservoir dynamics. The ODRC reproduces long-term target time series more accurately than conventional reservoir computing methods on motor-timing and chaotic time-series prediction tasks. Furthermore, it generates a time series similar to the target in the period it has not experienced, that is, it can learn abstract generative rules from limited observations. Given these significant improvements achieved by a simple and computationally inexpensive implementation, the ODRC can serve as a practical model of various time-series data. Moreover, we discuss the biological implications of the ODRC, considering it as a model of neural oscillations and their cerebellar processors.
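A minimal sketch of the idea, assuming a standard echo-state reservoir driven by a few fixed sinusoids plus output feedback (the network size, drive frequencies, and ridge readout are illustrative choices, not the paper's settings):

    # Illustrative oscillation-driven reservoir with output feedback (not the
    # authors' code); all parameter values are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    N, T, dt = 300, 2000, 0.01
    target = np.sin(2 * np.pi * 0.5 * np.arange(T) * dt)      # toy target series

    # Recurrent weights scaled to a spectral radius below 1 (echo-state practice).
    W = rng.normal(0, 1, (N, N))
    W *= 0.95 / max(abs(np.linalg.eigvals(W)))
    W_fb = rng.uniform(-1, 1, N)                               # output feedback
    osc = np.stack([np.sin(2 * np.pi * f * np.arange(T) * dt)  # oscillatory drive
                    for f in (0.3, 0.7, 1.3)])
    W_osc = rng.uniform(-1, 1, (N, osc.shape[0]))

    # Teacher-forced run: collect reservoir states while feeding back the target.
    x, states = np.zeros(N), np.zeros((T, N))
    for t in range(T):
        fb = target[t - 1] if t > 0 else 0.0
        x = np.tanh(W @ x + W_osc @ osc[:, t] + W_fb * fb)
        states[t] = x

    # Ridge-regression readout, then free-run generation with its own feedback.
    ridge = 1e-4
    W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ target)
    x, y, preds = np.zeros(N), 0.0, []
    for t in range(T):
        x = np.tanh(W @ x + W_osc @ osc[:, t] + W_fb * y)
        y = W_out @ x
        preds.append(y)

The oscillatory drive plays the stabilizing role the abstract describes; without it, the free-running loop in the last block tends to drift away from the target dynamics.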
New Versions of Gradient Temporal Difference Learning
Lee, Donghwan, Lim, Han-Dong, Park, Jihoon, Choi, Okyong
Temporal-difference (TD) learning [1] is one of the most popular reinforcement learning (RL) algorithms [2] for policy evaluation problems. However, its main limitation lies in its inability to accommodate both off-policy learning and linear function approximation while guaranteeing convergence, which has been an important open problem for decades. In 2009, Sutton, Szepesvári, and Maei [3], [4] introduced the first TD learning algorithms compatible with both linear function approximation and off-policy training based on gradient estimations, which are thus called gradient temporal-difference (GTD) learning. The conventional ODE-based analysis of such algorithms does not allow general and formal analysis frameworks because the asymptotic stability of the ODE model depends significantly on the specific algorithm, and it is in general hard to establish the stability of the ODE model. The proposed analysis, on the other hand, applies the recent asymptotic stability theory of primal-dual gradient dynamics (PDGD) [13], in which control-theoretic frameworks for the stability analysis of PDGD are developed. Using this recent result, we provide a new analysis template.
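For reference, the original GTD2 update of Sutton, Szepesvári, and Maei, which the paper revisits, takes the following two-time-scale form with value weights \theta, auxiliary weights w, feature map \phi, and step sizes \alpha_t, \beta_t (off-policy training additionally applies importance-sampling corrections):

    \[
    \delta_t = r_{t+1} + \gamma\,\theta_t^{\top}\phi(s_{t+1}) - \theta_t^{\top}\phi(s_t),
    \]
    \[
    \theta_{t+1} = \theta_t + \alpha_t\,\bigl(\phi(s_t) - \gamma\,\phi(s_{t+1})\bigr)\bigl(\phi(s_t)^{\top} w_t\bigr),
    \qquad
    w_{t+1} = w_t + \beta_t\,\bigl(\delta_t - \phi(s_t)^{\top} w_t\bigr)\,\phi(s_t).
    \]

The new versions proposed in the paper modify and re-analyze updates of this form through the lens of primal-dual gradient dynamics.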
Encoding Speaker-Specific Latent Speech Feature for Speech Synthesis
Kong, Jungil, Lee, Junmo, Kim, Jeongmin, Kim, Beomjeong, Park, Jihoon, Kong, Dohee, Lee, Changheon, Kim, Sangjin
In this work, we propose a novel method for modeling numerous speakers, which can express the overall characteristics of speakers in detail, like a trained multi-speaker model, without additional training on the target speaker's dataset. Although various works with similar purposes have been actively studied, their performance has not yet reached that of trained multi-speaker models due to their fundamental limitations. To overcome these limitations, we propose effective methods for learning and representing target speakers' speech characteristics by discretizing the features and conditioning a speech synthesis model on them. In subjective similarity evaluation, our method obtained a significantly higher similarity mean opinion score (SMOS) with unseen speakers than a best-performing multi-speaker model achieved with seen speakers. The proposed method also outperforms a zero-shot method by significant margins. Furthermore, our method shows remarkable performance in generating new artificial speakers. In addition, we demonstrate that the encoded latent features are sufficiently informative to reconstruct an original speaker's speech completely. This implies that our method can be used as a general methodology to encode and reconstruct speakers' characteristics in various tasks. Recently, modeling the numerous speakers in the real world has been actively studied. Previous works (Gibiansky et al., 2017; Ping et al., 2018; Chen et al., 2020; Kim et al., 2020; 2021) used a trainable speaker embedding matrix to learn the speech characteristics of each speaker within a single model; this is commonly referred to as multi-speaker speech synthesis. Because this approach expresses each speaker's characteristics while sharing common information among speakers, it can synthesize high-quality speech for multiple speakers with relatively less training data than training a separate model per speaker. However, the model must be retrained on all speakers whenever a new speaker is added, and synthesizing high-quality speech may not be possible for speakers with relatively small datasets.
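The discretization step the abstract refers to can be pictured as a simple codebook lookup (an illustrative sketch only; the codebook size, feature dimension, and function names are assumptions, and the paper's actual feature-learning and conditioning pipeline is more involved):

    # Illustrative sketch of discretizing a continuous speaker feature against a
    # learned codebook; not the paper's model, shapes and names are assumptions.
    import numpy as np

    def quantize_speaker_feature(z, codebook):
        """z: (D,) continuous speaker feature; codebook: (K, D) learned entries."""
        idx = np.argmin(np.linalg.norm(codebook - z, axis=1))  # nearest code
        return idx, codebook[idx]                              # discrete token + vector

    codebook = np.random.randn(64, 256)      # K=64 codes, D=256 dims (assumed)
    z = np.random.randn(256)                 # encoder output for one speaker
    token, z_q = quantize_speaker_feature(z, codebook)
    # z_q would then condition the synthesis model, e.g. concatenated to its inputs.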
VITS2: Improving Quality and Efficiency of Single-Stage Text-to-Speech with Adversarial Learning and Architecture Design
Kong, Jungil, Park, Jihoon, Kim, Beomjeong, Kim, Jeongmin, Kong, Dohee, Kim, Sangjin
Single-stage text-to-speech models have been actively studied recently, and their results have outperformed two-stage pipeline systems. Although the previous single-stage model has made great progress, there is room for improvement in terms of its intermittent unnaturalness, computational efficiency, and strong dependence on phoneme conversion. In this work, we introduce VITS2, a single-stage text-to-speech model that efficiently synthesizes more natural speech by improving several aspects of the previous work. We propose improved structures and training mechanisms and show that the proposed methods are effective in improving naturalness, the similarity of speech characteristics in a multi-speaker model, and the efficiency of training and inference. Furthermore, we demonstrate that the strong dependence on phoneme conversion in previous works can be significantly reduced with our method, which allows a fully end-to-end single-stage approach.
SplitAMC: Split Learning for Robust Automatic Modulation Classification
Park, Jihoon, Oh, Seungeun, Kim, Seong-Lyun
Automatic modulation classification (AMC) is a technology that identifies a modulation scheme without prior signal information and plays a vital role in various applications, including cognitive radio and link adaptation. With the development of deep learning (DL), DL-based AMC methods have emerged, while most of them focus on reducing computational complexity in a centralized structure. This centralized learning-based AMC (CentAMC) violates data privacy through the direct transmission of client-side raw data. Federated learning-based AMC (FedeAMC) can bypass this issue by exchanging model parameters, but it incurs large latency and client-side computational load. Moreover, both CentAMC and FedeAMC are vulnerable to the large-scale noise occurring in the wireless channel between the client and the server. To this end, we develop a novel AMC method based on a split learning (SL) framework, coined SplitAMC, that can achieve high accuracy even in poor channel conditions, while guaranteeing data privacy and low latency. In SplitAMC, each client avoids raw-data privacy leakage by exchanging smashed data and its gradient instead of raw data, and gains robustness to noise thanks to the large scale of the smashed data. Numerical evaluations validate that SplitAMC outperforms CentAMC and FedeAMC in terms of accuracy at all SNRs as well as latency.
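A minimal sketch of the split-learning exchange underlying SplitAMC (single client, one training step; the layer shapes, 11 modulation classes, and toy I/Q batch are assumptions for illustration, not the authors' configuration):

    # Illustrative split-learning step: only the cut-layer activation ("smashed
    # data") and its gradient cross the client/server boundary.
    import torch
    import torch.nn as nn

    client_net = nn.Sequential(nn.Conv1d(2, 16, 3, padding=1), nn.ReLU())   # on device
    server_net = nn.Sequential(nn.Flatten(), nn.Linear(16 * 128, 11))       # at server
    opt_c = torch.optim.SGD(client_net.parameters(), lr=1e-2)
    opt_s = torch.optim.SGD(server_net.parameters(), lr=1e-2)

    iq, label = torch.randn(8, 2, 128), torch.randint(0, 11, (8,))          # toy I/Q batch

    # Client forward: only the smashed data leaves the device (uplink).
    smashed = client_net(iq)
    smashed_sent = smashed.detach().requires_grad_(True)

    # Server forward/backward on the smashed data; its gradient goes back (downlink).
    loss = nn.functional.cross_entropy(server_net(smashed_sent), label)
    opt_s.zero_grad()
    loss.backward()
    opt_s.step()

    # Client backward using the returned gradient of the smashed data.
    opt_c.zero_grad()
    smashed.backward(smashed_sent.grad)
    opt_c.step()

Raw I/Q samples never leave the client; only the intermediate activation and its gradient are exchanged, which is the privacy and latency argument made in the abstract.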
Compensated Integrated Gradients to Reliably Interpret EEG Classification
Tachikawa, Kazuki, Kawai, Yuji, Park, Jihoon, Asada, Minoru
Integrated gradients are widely employed to evaluate the contribution of input features in classification models because they satisfy the axioms for attribution of predictions. This method, however, requires an appropriate baseline for reliable determination of the contributions. We propose a compensated integrated gradients method that does not require a baseline. Specifically, the method compensates the attributions calculated by integrated gradients at an arbitrary baseline using Shapley sampling. We prove that the method retrieves reliable attributions if the processes applied to the input features in a classifier are mutually independent and identical, as with shared weights in convolutional neural networks. Using three electroencephalogram datasets, we experimentally demonstrate that the attributions of the proposed method are more reliable than those of the original integrated gradients, and that its computational complexity is much lower than that of Shapley sampling.
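For context, the baseline-dependent integrated-gradients attribution that the proposed method compensates is, for input x, baseline x', and classifier output F,

    \[
    \mathrm{IG}_i(x; x') = \bigl(x_i - x'_i\bigr)\int_{0}^{1}
    \frac{\partial F\bigl(x' + \alpha\,(x - x')\bigr)}{\partial x_i}\, d\alpha ,
    \]

and the compensation term, estimated by Shapley sampling, removes the dependence of the attribution on the choice of x'.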