Goto

Collaborating Authors

Tenore, Francesco


Targeted Adversarial Denoising Autoencoders (TADA) for Neural Time Series Filtration

arXiv.org Artificial Intelligence

Current machine learning (ML)-based algorithms for filtering electroencephalography (EEG) time series data face challenges related to cumbersome training times, regularization, and accurate reconstruction. To address these shortcomings, we present an ML filtration algorithm driven by a logistic covariance-targeted adversarial denoising autoencoder (TADA). We hypothesize that the expressivity of a targeted, correlation-driven convolutional autoencoder will enable effective time series filtration while minimizing compute requirements (e.g., runtime, model size). Furthermore, we expect that adversarial training with covariance rescaling will minimize signal degradation. To test this hypothesis, a TADA system prototype was trained and evaluated on the task of removing electromyographic (EMG) noise from EEG data in the EEGdenoiseNet dataset, which includes EMG and EEG data from 67 subjects. The TADA filter surpasses conventional signal filtration algorithms across quantitative metrics (Correlation Coefficient, Temporal RRMSE, Spectral RRMSE), and performs competitively against other deep learning architectures at a reduced model size of less than 400,000 trainable parameters. Further experimentation will be necessary to assess the viability of TADA on a wider range of deployment cases.
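To make the approach concrete, below is a minimal sketch (our own illustration, not the authors' released code) of a small 1-D convolutional denoising autoencoder trained adversarially against a discriminator on noisy-vs-clean EEG segments. The segment length, layer widths, learning rates, and the plain MSE-plus-BCE objective are illustrative assumptions; in particular, the sketch does not implement the covariance targeting and rescaling described in the abstract.

```python
# Hedged sketch of an adversarially trained 1-D convolutional denoising
# autoencoder for EEG segments. All sizes and losses are assumptions, not
# the TADA architecture or its covariance-rescaling objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

SEG_LEN = 512  # assumed EEG segment length in samples

class DenoisingAE(nn.Module):
    """Small 1-D conv encoder/decoder mapping a noisy segment to a clean estimate."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, 7, stride=2, padding=3), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, 7, stride=2, padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 7, stride=2, padding=3, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class Discriminator(nn.Module):
    """Scores whether a segment looks like clean EEG (real) or a reconstruction (fake)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 7, stride=4, padding=3), nn.LeakyReLU(0.2),
            nn.Conv1d(16, 32, 7, stride=4, padding=3), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(32 * (SEG_LEN // 16), 1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(ae, disc, opt_ae, opt_d, noisy, clean, adv_weight=0.1):
    """One adversarial step: update the discriminator, then the autoencoder."""
    bce = nn.BCEWithLogitsLoss()
    # Discriminator: distinguish clean segments from current reconstructions.
    with torch.no_grad():
        fake = ae(noisy)
    real_logits, fake_logits = disc(clean), disc(fake)
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Autoencoder: reconstruction loss plus an adversarial term that rewards
    # reconstructions the discriminator accepts as clean EEG.
    recon = ae(noisy)
    recon_logits = disc(recon)
    g_loss = F.mse_loss(recon, clean) + adv_weight * bce(recon_logits, torch.ones_like(recon_logits))
    opt_ae.zero_grad(); g_loss.backward(); opt_ae.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    ae, disc = DenoisingAE(), Discriminator()
    opt_ae = torch.optim.Adam(ae.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
    clean = torch.randn(8, 1, SEG_LEN)              # stand-in for clean EEG segments
    noisy = clean + 0.5 * torch.randn_like(clean)   # stand-in for EMG-contaminated EEG
    print(train_step(ae, disc, opt_ae, opt_d, noisy, clean))
```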


Entrainment of Silicon Central Pattern Generators for Legged Locomotory Control

Neural Information Processing Systems

We demonstrate improvements over a previous chip by moving toward a significantly more versatile device. This includes a larger number of silicon neurons, more sophisticated neurons featuring voltage-dependent charging as well as relative and absolute refractory periods, and enhanced programmability of neural networks. This chip builds on the basic results achieved on a previous chip and expands its versatility to get closer to a self-contained locomotion controller for walking robots.

1 Introduction. Legged locomotion is a system-level behavior that engages most senses and activates most muscles in the human body. Understanding biological systems is exceedingly difficult and usually defies any unifying analysis, and walking behavior is no exception. Theories of walking are likely incomplete, often in ways that are invisible to the scientist studying these behaviors in animal or human systems; biological systems often fill in the gaps and details. One way of exposing our incomplete understanding is through the process of synthesis. In this paper we report on continued progress in building the basic elements of a motor pattern generator sufficient to control a legged robot.
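As a software illustration of the neuron features listed above (not the silicon circuit itself), the sketch below simulates a leaky integrate-and-fire neuron with voltage-dependent charging and both absolute and relative refractory periods. All time constants and thresholds are illustrative assumptions.

```python
# Hedged sketch: a leaky integrate-and-fire neuron with an absolute refractory
# period (membrane clamped) and a relative refractory period (elevated threshold).
# Constants are assumptions chosen for demonstration only.
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0,
                 t_abs_ref=2e-3, t_rel_ref=5e-3, rel_ref_scale=3.0):
    v = 0.0
    last_spike = -np.inf
    spikes, trace = [], []
    for i, I in enumerate(input_current):
        t = i * dt
        since_spike = t - last_spike
        if since_spike < t_abs_ref:
            # Absolute refractory period: no charging, membrane held at rest.
            v = 0.0
        else:
            # Relative refractory period: threshold temporarily elevated.
            in_rel = since_spike < t_abs_ref + t_rel_ref
            thresh = v_thresh * (rel_ref_scale if in_rel else 1.0)
            # Voltage-dependent charging: leak pulls v back toward rest.
            v += dt / tau * (-v + I)
            if v >= thresh:
                spikes.append(t)
                last_spike = t
                v = 0.0
        trace.append(v)
    return np.array(spikes), np.array(trace)

if __name__ == "__main__":
    I = np.full(1000, 1.5)            # 1 s of constant drive
    spike_times, _ = simulate_lif(I)
    print(f"{len(spike_times)} spikes, first at {spike_times[0]:.3f} s")
```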


Spike Timing-Dependent Plasticity in the Address Domain

Neural Information Processing Systems

Address-event representation (AER), originally proposed as a means to communicate sparse neural events between neuromorphic chips, has proven efficient in implementing large-scale networks with arbitrary, configurable synaptic connectivity. In this work, we further extend the functionality of AER to implement arbitrary, configurable synaptic plasticity in the address domain. As proof of concept, we implement a biologically inspired form of spike timing-dependent plasticity (STDP) based on relative timing of events in an AER framework. Experimental results from an analog VLSI integrate-and-fire network demonstrate address domain learning in a task that requires neurons to group correlated inputs.
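The sketch below illustrates the idea of plasticity in the address domain (it is our own illustration, not the chip's implementation): synaptic weights live in a table indexed by (pre-address, post-address), and each address-event updates that table from the relative timing of recent events using a pair-based STDP rule. Time constants, learning rates, and array sizes are illustrative assumptions.

```python
# Hedged sketch of pair-based STDP applied in the address domain. Weights are
# stored in a lookup table indexed by event addresses; parameters are assumptions.
import numpy as np

N_PRE, N_POST = 4, 2
TAU_PLUS, TAU_MINUS = 20.0, 20.0     # ms, potentiation / depression time windows
A_PLUS, A_MINUS = 0.05, 0.06         # learning rates

weights = np.full((N_PRE, N_POST), 0.5)
last_pre = np.full(N_PRE, -np.inf)   # most recent event time per pre address
last_post = np.full(N_POST, -np.inf) # most recent event time per post address

def on_pre_event(addr, t):
    """Pre-synaptic address-event: depress synapses onto recently active posts."""
    last_pre[addr] = t
    dt = t - last_post                        # post-before-pre -> depression
    weights[addr, :] -= A_MINUS * np.exp(-dt / TAU_MINUS) * (dt >= 0)
    np.clip(weights, 0.0, 1.0, out=weights)

def on_post_event(addr, t):
    """Post-synaptic address-event: potentiate synapses from recently active pres."""
    last_post[addr] = t
    dt = t - last_pre                         # pre-before-post -> potentiation
    weights[:, addr] += A_PLUS * np.exp(-dt / TAU_PLUS) * (dt >= 0)
    np.clip(weights, 0.0, 1.0, out=weights)

# Example event stream: pre address 1 fires 5 ms before post address 0 on each
# cycle, so the weight at table entry (1, 0) grows relative to the others.
for pre_t in (0.0, 30.0, 60.0):
    on_pre_event(1, pre_t)
    on_post_event(0, pre_t + 5.0)
print(weights)
```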

