 Poole, Benjamin


Error-related Potential Variability: Exploring the Effects on Classification and Transferability

arXiv.org Artificial Intelligence

Brain-Computer Interfaces (BCIs) enable direct communication from the brain to external applications, allowing for the automatic detection of cognitive processes such as error recognition. Error-related potentials (ErrPs) are brain signals elicited when one commits or observes an erroneous event. However, due to the noisy properties of the brain and of recording devices, ErrPs vary from instance to instance, as they are combined with an assortment of other brain signals, biological noise, and external noise, making ErrP classification a non-trivial problem. Recent work has shown that particular cognitive processes, such as awareness, embodiment, and predictability, contribute to ErrP variation. In this paper, we explore how well classifiers transfer when trained on different ErrP variation datasets generated by varying the levels of awareness and embodiment for a given task. In particular, we look at transference between observational and interactive ErrP categories when elicited by similar and differing tasks. Our empirical results provide an exploratory analysis of the ErrP transferability problem from a data perspective.
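At its core, the cross-variation evaluation described in the abstract amounts to training a classifier on one ErrP dataset and scoring it on another without retraining. The sketch below illustrates that protocol with a shrinkage-LDA classifier, a common choice for ErrP decoding; the feature matrices and dataset names are hypothetical stand-ins, not the paper's actual data or pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

# Hypothetical pre-extracted feature matrices: rows are EEG epochs,
# columns are features (e.g., downsampled channel amplitudes).
# X_obs/y_obs: observational ErrP dataset; X_int/y_int: interactive one.
rng = np.random.default_rng(0)
X_obs, y_obs = rng.normal(size=(200, 64)), rng.integers(0, 2, 200)
X_int, y_int = rng.normal(size=(200, 64)), rng.integers(0, 2, 200)

# Train on one ErrP variation...
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf.fit(X_obs, y_obs)

# ...and evaluate transfer to the other without any retraining.
transfer_auc = roc_auc_score(y_int, clf.decision_function(X_int))
print(f"observational -> interactive AUC: {transfer_auc:.2f}")
```

A real ErrP pipeline would add epoching, band-pass filtering, and spatial filtering before the classifier; the random placeholder arrays here exist only so the transfer-evaluation step is runnable end to end.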


Towards Intrinsic Interactive Reinforcement Learning

arXiv.org Artificial Intelligence

Applications of RL have only begun to expand beyond these constrained game environments to more diverse and complex real-world settings such as chip design [86], chemical reaction optimization [133], and long-term recommendation [45]. To make further progress in such environments, the challenges currently facing RL (e.g., generalization, robustness, scalability, and safety) must be further alleviated [7, 27, 72, 108], and we can expect that as the complexity of environments increases, so will the difficulty of alleviating these challenges [27]. For the purposes of this paper, we broadly define known RL challenges as either aptitude or alignment problems. Aptitude encompasses challenges concerned with being able to learn. It includes robustness, the ability of RL to perform a task (e.g., asymptotic performance) and to generalize within and between environments of similar complexity; scalability, the ability of RL to scale up to more complex environments; and aptness, the rate at which an RL algorithm can learn to solve a problem or reach a desired performance level. Alignment, in turn, encompasses challenges concerned with learning as intended [7, 27, 72]. The hypothetical paperclip agent [18] is a classic example of misalignment.
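Of the three aptitude facets, aptness is the most directly measurable: count how quickly an agent reaches a target performance level. The toy sketch below, a hypothetical illustration rather than anything from the paper, estimates an episodes-to-threshold proxy for aptness using tabular Q-learning on a small deterministic chain.

```python
import numpy as np

# Toy illustration of "aptness" as an episodes-to-threshold count:
# tabular Q-learning on a hypothetical 5-state chain whose rightmost
# state pays reward 1. Everything here (environment, thresholds,
# hyperparameters) is illustrative, not taken from the paper.
N_STATES = 5
alpha, gamma = 0.5, 0.9
Q = np.zeros((N_STATES, 2))             # actions: 0 = left, 1 = right
rng = np.random.default_rng(0)

def run_episode(eps):
    """Run one eps-greedy episode; return 1.0 if the goal was reached."""
    s = 0
    for _ in range(20):                 # step cap per episode
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        if r > 0:
            return 1.0                  # goal state is terminal
        s = s2
    return 0.0

# Aptness proxy: episodes until the last 10 returns are all successes.
returns, eps = [], 1.0
for episode in range(1, 1001):
    returns.append(run_episode(eps))
    eps = max(0.05, eps * 0.97)         # decay exploration over time
    if len(returns) >= 10 and np.mean(returns[-10:]) >= 0.95:
        print(f"performance threshold reached after {episode} episodes")
        break
```

The same episodes-to-threshold measurement applies unchanged to more capable agents and harder environments, which is what makes it a convenient, if crude, proxy for comparing the aptness of different algorithms.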