Techniques for Enhancing Memory Capacity of Reservoir Computing
Yokota, Atsuki, Kawashima, Ichiro, Saito, Yohei, Tamukoh, Hakaru, Nomura, Osamu, Morie, Takashi
Reservoir computing (RC) is a bio-inspired machine learning framework for which various models have been proposed. RC is well suited to time-series data processing, but it exhibits a trade-off between memory capacity and nonlinearity. In this study, we propose methods to improve the memory capacity of reservoir models by modifying their network configuration without altering the internals of the reservoirs. The Delay method retains past inputs by adding delay-node chains with a specified number of delay steps to the input layer. To suppress the increase in input magnitude caused by the Delay method, we divide the input weights by the number of added delay steps. The Pass-through method feeds input values directly to the output layer. The Clustering method divides the input and reservoir nodes into multiple parts and integrates them at the output layer. We applied these methods to the echo state network (ESN), a typical RC model, and to the chaotic Boltzmann machine (CBM)-RC, which can be efficiently implemented in integrated circuits. We evaluated their performance on the NARMA task and measured the information processing capacity (IPC) to assess the trade-off between memory capacity and nonlinearity.
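The Delay method described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes a standard tanh ESN update, zero-padding for missing history, and the stated scaling of the input weights by the number of delay steps.

```python
import numpy as np

def delay_inputs(u, n_delay):
    """Delay method (illustrative): augment a 1-D input sequence with
    delayed copies, so each time step carries the current input plus
    the previous n_delay inputs. Missing history is zero-padded."""
    T = len(u)
    X = np.zeros((T, n_delay + 1))
    for d in range(n_delay + 1):
        X[d:, d] = u[:T - d]   # column d holds the input delayed by d steps
    return X

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 100)
n_delay = 4
X = delay_inputs(u, n_delay)

# Scale input weights by 1/(number of added delay steps) so the total
# drive into the reservoir stays comparable to the undelayed case.
n_reservoir = 50
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_delay + 1)) / n_delay
W = rng.normal(0, 1 / np.sqrt(n_reservoir), (n_reservoir, n_reservoir))

x = np.zeros(n_reservoir)
for t in range(len(u)):
    x = np.tanh(W @ x + W_in @ X[t])  # standard ESN state update
```

A readout trained on states collected this way can recover inputs up to `n_delay` steps in the past directly from the delayed columns, which is the intended memory-capacity gain.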
Training Physical Neural Networks for Analog In-Memory Computing
Sakemi, Yusuke, Okamoto, Yuji, Morie, Takashi, Nobukawa, Sou, Hosomi, Takeo, Aihara, Kazuyuki
Deep learning is a state-of-the-art methodology in numerous domains, including image recognition, natural language processing, and data generation [1]. The discovery of scaling laws in deep learning models [2, 3] has motivated the development of increasingly larger models, commonly referred to as foundation models [4, 5, 6]. Recent studies have shown that reasoning tasks can be improved through iterative computations during the inference phase [7]. While computational power continues to be a major driver of artificial intelligence (AI) advancements, the associated costs remain a significant barrier to broader adoption across diverse industries [8, 9]. This issue is especially critical in edge AI systems, where energy consumption is constrained by the limited capacity of batteries, making the need for more efficient computation paramount [10]. One promising strategy to enhance energy efficiency is fabricating dedicated hardware. Since matrix-vector multiplication is the computational core of deep learning, parallelization greatly enhances computational efficiency [11]. Moreover, in data-driven applications such as deep learning, a substantial portion of power consumption is due to data movement between the processor and memory, commonly referred to as the von Neumann bottleneck [12].
Hibikino-Musashi@Home 2024 Team Description Paper
Isomoto, Kosei, Mizutani, Akinobu, Matsuzaki, Fumiya, Sato, Hikaru, Matsumoto, Ikuya, Yamao, Kosei, Kawabata, Takuya, Shiba, Tomoya, Yano, Yuga, Yokota, Atsuki, Kanaoka, Daiju, Yamaguchi, Hiromasa, Murai, Kazuya, Minje, Kim, Shen, Lu, Suzuka, Mayo, Anraku, Moeno, Yamaguchi, Naoki, Fujimatsu, Satsuki, Tokuno, Shoshi, Mizo, Tadataka, Fujino, Tomoaki, Nakadera, Yuuki, Shishido, Yuka, Nakaoka, Yusuke, Tanaka, Yuichiro, Morie, Takashi, Tamukoh, Hakaru
This paper provides an overview of the techniques employed by Hibikino-Musashi@Home, which intends to participate in the domestic standard platform league. The team has developed a dataset generator for training a robot vision system and an open-source development environment running on a Human Support Robot simulator. A task planner powered by a large language model selects appropriate primitive skills to perform the tasks requested by users. The team aims to design a home service robot that can assist humans in their homes, and it continuously attends competitions to evaluate and improve the developed system.
Hibikino-Musashi@Home 2023 Team Description Paper
Shiba, Tomoya, Mizutani, Akinobu, Yano, Yuga, Ono, Tomohiro, Tokuno, Shoshi, Kanaoka, Daiju, Fukuda, Yukiya, Amano, Hayato, Koresawa, Mayu, Sakai, Yoshifumi, Takemoto, Ryogo, Tamai, Katsunori, Nakahara, Kazuo, Hayashi, Hiroyuki, Fujimatsu, Satsuki, Mizoguchi, Yusuke, Anraku, Moeno, Suzuka, Mayo, Shen, Lu, Maeda, Kohei, Matsuzaki, Fumiya, Matsumoto, Ikuya, Murai, Kazuya, Isomoto, Kosei, Minje, Kim, Tanaka, Yuichiro, Morie, Takashi, Tamukoh, Hakaru
This paper provides an overview of the techniques of Hibikino-Musashi@Home, which intends to participate in the domestic standard platform league. The team has developed a dataset generator for training a robot vision system and an open-source development environment running on a Human Support Robot simulator. The robot system comprises self-developed libraries, including those for motion synthesis, and open-source software running on the Robot Operating System. The team aims to realize a home service robot that assists humans at home, and it continuously attends competitions to evaluate the developed system. A brain-inspired artificial intelligence system is also proposed for service robots, which are expected to work in real home environments.
Learning Reservoir Dynamics with Temporal Self-Modulation
Sakemi, Yusuke, Nobukawa, Sou, Matsuki, Toshitaka, Morie, Takashi, Aihara, Kazuyuki
Reservoir computing (RC) can efficiently process time-series data by transferring the input signal to a randomly connected recurrent neural network (RNN), which is referred to as a reservoir. The high-dimensional representation of time-series data in the reservoir significantly simplifies subsequent learning tasks. Although this simple architecture allows fast learning and facile physical implementation, its learning performance is inferior to that of other state-of-the-art RNN models. In this paper, to improve the learning ability of RC, we propose self-modulated RC (SM-RC), which extends RC with a self-modulation mechanism. The self-modulation mechanism is realized with two gating variables: an input gate and a reservoir gate. The input gate modulates the input signal, and the reservoir gate modulates the dynamical properties of the reservoir. We demonstrated that SM-RC can perform attention tasks in which input information is retained or discarded depending on the input signal. We also found that a chaotic state emerged as a result of learning in SM-RC. This indicates that self-modulation mechanisms provide RC with qualitatively different information-processing capabilities. Furthermore, SM-RC outperformed RC in the NARMA and Lorenz model tasks. In particular, SM-RC achieved a higher prediction accuracy than RC with a reservoir 10 times larger in the Lorenz model tasks. Because the SM-RC architecture requires only two additional gates, it is as physically implementable as RC, providing a new direction for realizing edge AI.
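The two-gate mechanism can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's formulation: here each gate is a sigmoid of a linear readout of the current state, and the gate parameters (`v_in`, `v_res`) are fixed at random rather than trained as in SM-RC.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
W = rng.normal(0, 1 / np.sqrt(N), (N, N))  # random reservoir weights
w_in = rng.uniform(-1, 1, N)               # input weights
# Hypothetical gate parameters; in SM-RC these would be learned.
v_in = rng.uniform(-1, 1, N)
v_res = rng.uniform(-1, 1, N)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(x, u):
    # Gates are functions of the current state, so the network
    # modulates its own input scaling and recurrent dynamics.
    g_in = sigmoid(v_in @ x)    # input gate: scales the input signal
    g_res = sigmoid(v_res @ x)  # reservoir gate: scales the recurrence
    return np.tanh(g_res * (W @ x) + g_in * u * w_in)

x = np.zeros(N)
for u in rng.uniform(-1, 1, 200):
    x = step(x, u)
```

When `g_in` saturates near zero, the input is effectively ignored; when `g_res` shrinks, the state decays quickly, which is one way such gates can implement retain-or-discard behavior.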
Hibikino-Musashi@Home 2022 Team Description Paper
Shiba, Tomoya, Ono, Tomohiro, Tokuno, Shoshi, Uchino, Issei, Okamoto, Masaya, Kanaoka, Daiju, Takahashi, Kazutaka, Tsukamoto, Kenta, Tsutsumi, Yoshiaki, Nakamura, Yugo, Fukuda, Yukiya, Hoji, Yusuke, Amano, Hayato, Kubota, Yuma, Koresawa, Mayu, Sakai, Yoshifumi, Takemoto, Ryogo, Tamai, Katsunori, Nakahara, Kazuo, Hayashi, Hiroyuki, Fujimatsu, Satsuki, Mizutani, Akinobu, Mizoguchi, Yusuke, Yoshimitsu, Yuhei, Suzuka, Mayo, Matsumoto, Ikuya, Yano, Yuga, Tanaka, Yuichiro, Morie, Takashi, Tamukoh, Hakaru
Our team, Hibikino-Musashi@Home (HMA), was founded in 2010. It is based in Japan in the Kitakyushu Science and Research Park. Since 2010, we have annually participated in the RoboCup@Home Japan Open competition in the open platform league (OPL). We participated as an open platform league team in the 2017 Nagoya RoboCup competition and as a domestic standard platform league (DSPL) team in the 2017 Nagoya, 2018 Montreal, 2019 Sydney, and 2021 Worldwide RoboCup competitions. We also participated in the World Robot Challenge (WRC) 2018 in the service-robotics category of the partner-robot challenge (real space) and won first place. Currently, we have 27 members from nine different laboratories within the Kyushu Institute of Technology and the University of Kitakyushu. In this paper, we introduce the activities that have been performed by our team and the technologies that we use.
An Efficient Clustering Algorithm Using Stochastic Association Model and Its Implementation Using Nanostructures
Morie, Takashi, Matsuura, Tomohiro, Nagata, Makoto, Iwata, Atsushi
This paper describes a clustering algorithm for vector quantizers using a "stochastic association model". It offers a new, simple, and powerful softmax adaptation rule. The adaptation process is the same as the online K-means clustering method except that random fluctuation is added in the distortion-error evaluation process. Simulation results demonstrate that the new algorithm achieves adaptation as efficient as that of the "neural gas" algorithm, which is reported to be one of the most efficient clustering methods. A key point is to add uncorrelated random fluctuation to the similarity evaluation of each reference vector. For hardware implementation of this process, we propose a nanostructure whose operation is described by a single-electron circuit. It makes positive use of fluctuation in quantum-mechanical tunneling processes.
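The adaptation rule can be sketched in software as follows. This is a minimal toy analogue under assumptions not taken from the paper: Gaussian noise on the squared distortions, a linear annealing schedule, and a fixed learning rate; the paper's nanostructure realizes the fluctuation physically.

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_kmeans(data, k, eta=0.05, sigma0=1.0, epochs=20):
    """Online K-means where uncorrelated random fluctuation is added to
    each reference vector's distortion before the winner is selected
    (a software analogue of the stochastic association model)."""
    refs = data[rng.choice(len(data), k, replace=False)].copy()
    for e in range(epochs):
        sigma = sigma0 * (1 - e / epochs)   # anneal fluctuation to zero
        for x in rng.permutation(data):
            d = np.sum((refs - x) ** 2, axis=1)          # distortions
            noisy = d + sigma * rng.standard_normal(k)   # per-vector noise
            w = np.argmin(noisy)                         # stochastic winner
            refs[w] += eta * (x - refs[w])   # same update as online K-means
    return refs

# Toy data: two well-separated Gaussian clusters.
data = np.vstack([rng.normal(0, 0.1, (100, 2)),
                  rng.normal(3, 0.1, (100, 2))])
refs = stochastic_kmeans(data, 2)
```

Because the fluctuation is uncorrelated across reference vectors, under-used vectors occasionally win and get pulled toward the data, which is what lets the method escape the poor local minima that plain online K-means can fall into.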