Harvesting Brain Signal Using Machine Learning Methods

#artificialintelligence

Brain-computer interface (BCI) systems are developed in the biomedical engineering field to increase the quality of life of patients with paralysis and neurological conditions. The development of a six-class BCI controller to operate a semi-autonomous mobile robotic arm is presented. The controller uses the following mental tasks: imagined left/right hand squeeze, imagined left/right foot tap, rest, and a physical jaw clench. To design the controller, the locations of active electrodes are verified and an appropriate machine learning algorithm is determined. Three subjects, aged between 22 and 27, participated in five sessions of motor imagery experiments to record their brainwaves. These recordings were analyzed using event-related potential (ERP) plots and topographical maps to determine active electrodes. BCILAB was used to train two-, three-, five-, and six-class BCI controllers using linear discriminant analysis (LDA) and relevance vector machine (RVM) machine learning methods. The subjects' data were used to compare the two methods' performance in terms of error rate percentage. While the two-class BCI controller showed the same accuracy for both methods, the three- and five-class BCI controllers showed higher accuracy for the RVM approach than for the LDA approach. For the five-class controller, the error rate was 33.3% for LDA and 29.2% for RVM. The six-class BCI controller's error rate was 34.5% for both LDA and RVM. Although these values are the same, RVM was chosen as the preferred machine learning algorithm based on the trend seen in the three- and five-class controller performances.
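As a rough illustration of the kind of classifier comparison described above, the sketch below computes a cross-validated error rate for an LDA classifier on placeholder EEG feature vectors. The feature matrix, labels, and dimensions are invented for the example; an RVM comparison would swap in a relevance vector classifier from a third-party implementation, and the abstract's own pipeline was built in BCILAB rather than scikit-learn.

```python
# Illustrative sketch only (not the authors' BCILAB pipeline): compare
# classifiers on pre-extracted EEG features by cross-validated error rate.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 32))      # placeholder EEG feature vectors (trials x features)
y = rng.integers(0, 5, size=120)    # placeholder labels for a five-class task

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()    # 5-fold cross-validated accuracy
print(f"LDA error rate: {100 * (1 - acc):.1f}%")

# An RVM comparison would plug a relevance vector classifier in here
# (e.g. from a third-party package), reusing the same cross-validation split.
```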


The design of a high-performance cache controller: A case study in asynchronous synthesis

Classics

Because of ever-increasing demands on digital system performance, there is a need for new architectures which fully exploit the capabilities of contemporary VLSI technology. Asynchronous or self-timed systems, in particular, promise a number of advantages over traditional synchronous systems: adaptive operation (based on voltage, temperature, process and data), wider environmental operating range, reduced power consumption, and robust interfaces. In [27], we presented a new method for the synthesis of locally-clocked asynchronous controllers. In [12], the STRiP architecture was shown to provide an attractive alternative to comparable synchronous and asynchronous implementations. However, to support this asynchronous paradigm, an efficient asynchronous memory subsystem is critical. In this paper, we apply the locally-clocked synthesis method to the design of an asynchronous second-level cache controller.
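As a purely hypothetical aside, not the paper's design, the toy state machine below sketches the kind of request/response controller behaviour that asynchronous controller synthesis targets for a second-level cache: the controller waits for a request, compares tags, and either acknowledges a hit or fetches from memory on a miss. All state and signal names are invented for the example.

```python
# Toy cache-controller state machine (hypothetical names, for illustration only).
def l2_controller(state, inputs):
    """Return (next_state, outputs) for one set of input changes."""
    if state == "IDLE" and inputs.get("request"):
        return "LOOKUP", {"start_tag_compare": True}
    if state == "LOOKUP":
        if inputs.get("hit"):
            return "IDLE", {"ack": True}
        return "FILL", {"start_memory_fetch": True}
    if state == "FILL" and inputs.get("memory_done"):
        return "IDLE", {"ack": True}
    return state, {}

state = "IDLE"
for burst in [{"request": True}, {"hit": False}, {"memory_done": True}]:
    state, outputs = l2_controller(state, burst)
    print(state, outputs)
```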


Event-driven Vision and Control for UAVs on a Neuromorphic Chip

arXiv.org Artificial Intelligence

Event-based vision sensors achieve a speed vs. power consumption trade-off up to three orders of magnitude better than conventional image sensors in high-speed control of UAVs. Event-based cameras produce a sparse stream of events that can be processed more efficiently and with lower latency than images, enabling ultra-fast vision-driven control. Here, we explore how an event-based vision algorithm can be implemented as a spiking neuronal network on a neuromorphic chip and used in a drone controller. We show how seamless integration of event-based perception on chip leads to even faster control rates and lower latency. In addition, we demonstrate how online adaptation of the SNN controller can be realised using on-chip learning. Our spiking neuronal network on chip is the first example of a neuromorphic vision-based controller solving a high-speed UAV control task. The excellent scalability of processing in neuromorphic hardware opens the possibility of solving more challenging visual tasks in the future and integrating visual perception into fast control loops.
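To make the event-driven processing idea concrete, here is a minimal, self-contained sketch (not the paper's on-chip SNN) in which a single leaky integrate-and-fire neuron integrates a stream of camera events and its spike rate modulates a hypothetical thrust command. The time constants, weights, and the mapping to a control output are illustrative assumptions.

```python
# Minimal event-driven sketch: one LIF neuron driven by camera events.
import numpy as np

def lif_spike_count(event_times, tau=0.01, threshold=1.0, w=0.3, dt=1e-3, horizon=0.1):
    """Simulate one leaky integrate-and-fire neuron; each event injects charge w."""
    v, spikes, i = 0.0, 0, 0
    pending = sorted(event_times)
    for step in range(int(horizon / dt)):
        t = step * dt
        v -= dt * v / tau                          # membrane leak
        while i < len(pending) and pending[i] <= t:
            v += w                                 # event-driven input
            i += 1
        if v >= threshold:                         # fire and reset
            spikes += 1
            v = 0.0
    return spikes

events = np.random.default_rng(1).uniform(0.0, 0.1, size=40)  # placeholder event stream
rate = lif_spike_count(events) / 0.1                           # spikes per second
thrust_cmd = 0.5 + 0.001 * rate                   # hypothetical mapping to a control output
print(f"spike rate {rate:.0f} Hz -> thrust command {thrust_cmd:.2f}")
```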


I am Robot: Neuromuscular Reinforcement Learning to Actuate Human Limbs through Functional Electrical Stimulation

arXiv.org Artificial Intelligence

Functional Electrical Stimulation (FES) is an established and safe technique for contracting muscles by stimulating the skin above a muscle to induce its contraction. However, an open challenge remains in how to restore motor abilities to human limbs through FES, as the problem of controlling the stimulation is unclear. We take a robotics perspective on this problem by developing robot learning algorithms that control the ultimate humanoid robot, the human body, through electrical muscle stimulation. Human muscles are not trivial to control as actuators, because their force production is non-stationary as a result of fatigue and other internal state changes, in contrast to robot actuators, which are well-understood and stationary over broad operating ranges. We present our Deep Reinforcement Learning approach to the control of human muscles with FES, using a recurrent neural network for dynamic state representation, to overcome the unobserved elements of the behaviour of human muscles under external stimulation. We demonstrate our technique both in neuromuscular simulations and experimentally on a human. Our results show that our controller can learn to manipulate human muscles, applying appropriate levels of stimulation to achieve the given tasks while compensating for the muscle fatigue that advances throughout the tasks. Additionally, our technique can learn quickly enough to be implemented in real-world human-in-the-loop settings. (Figure 1: Our 3 scenarios for FES control: (a) arm vertical motion in simulation and (b) with human volunteers, (c) arm horizontal motion.)
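As an illustrative sketch of the recurrent state representation mentioned above (not the authors' implementation), the snippet below defines a small GRU-based policy that maps a short observation history to per-channel stimulation intensities, so that the hidden state can absorb unobserved effects such as accumulating fatigue. The observation dimension and the number of stimulation channels are placeholders.

```python
# Illustrative recurrent policy for FES control (hypothetical dimensions).
import torch
import torch.nn as nn

class RecurrentFESPolicy(nn.Module):
    def __init__(self, obs_dim=6, hidden_dim=64, n_channels=2):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_channels)

    def forward(self, obs_seq, h=None):
        out, h = self.gru(obs_seq, h)                 # carry hidden state across time
        stim = torch.sigmoid(self.head(out[:, -1]))   # stimulation in [0, 1] per channel
        return stim, h

policy = RecurrentFESPolicy()
obs = torch.randn(1, 10, 6)       # placeholder: 10 timesteps of joint/EMG features
stim, h = policy(obs)
print(stim)
```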


An Open-Source Framework for Adaptive Traffic Signal Control

arXiv.org Artificial Intelligence

Developing optimal transportation control systems at the appropriate scale can be difficult, as cities' transportation systems can be large, complex, and stochastic. Intersection traffic signal controllers are an important element of modern transportation infrastructure where sub-optimal control policies can incur high costs to many users. Many adaptive traffic signal controllers have been proposed by the community, but research comparing their relative performance is lacking: which adaptive traffic signal controller is best remains an open question. This research contributes a framework for developing and evaluating different adaptive traffic signal controller models, both learning and non-learning, in simulation, and demonstrates its capabilities. The framework is used first to investigate the performance variance of the modelled adaptive traffic signal controllers with respect to their hyperparameters, and second to analyze the performance differences between controllers with optimal hyperparameters. The proposed framework contains implementations of some of the most popular adaptive traffic signal controllers from the literature: Webster's, Max-pressure, and Self-Organizing Traffic Lights, along with deep Q-network and deep deterministic policy gradient reinforcement learning controllers. This framework will aid researchers by accelerating their work from a common starting point, allowing them to generate results faster with less effort.
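For readers unfamiliar with one of the non-learning baselines named above, the snippet below sketches the core of the Max-pressure rule with invented data structures (it does not reflect the framework's actual API): at each decision point, the controller activates the phase whose served movements have the largest total pressure, i.e. upstream minus downstream queue lengths.

```python
# Max-pressure phase selection sketch (hypothetical lane/phase names).
def max_pressure_phase(phases, queue):
    """phases: {phase_id: [(upstream_lane, downstream_lane), ...]}
    queue: {lane_id: number of vehicles currently queued}"""
    def pressure(movements):
        return sum(queue[up] - queue[down] for up, down in movements)
    return max(phases, key=lambda p: pressure(phases[p]))

phases = {"NS": [("n_in", "s_out"), ("s_in", "n_out")],
          "EW": [("e_in", "w_out"), ("w_in", "e_out")]}
queue = {"n_in": 8, "s_in": 5, "e_in": 2, "w_in": 3,
         "s_out": 1, "n_out": 0, "w_out": 4, "e_out": 2}
print(max_pressure_phase(phases, queue))   # -> "NS"
```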