BiND: A Neural Discriminator-Decoder for Accurate Bimanual Trajectory Prediction in Brain-Computer Interfaces

Timothee Robert, MohammadAli Shaeri, Mahsa Shoaran

arXiv.org Artificial Intelligence 

Decoding bimanual hand movements from intracortical recordings remains a critical challenge for brain-computer interfaces (BCIs), owing to overlapping neural representations and nonlinear interlimb interactions. We introduce BiND (Bimanual Neural Discriminator-Decoder), a two-stage model that first classifies motion type (unimanual left, unimanual right, or bimanual) and then uses specialized GRU-based decoders, augmented with a trial-relative time index, to predict continuous 2D hand velocities. BiND also demonstrates greater robustness to session variability than all other benchmarked models, with accuracy improvements of up to 4% over a GRU baseline in cross-session analyses. These results highlight the effectiveness of task-aware discrimination and temporal modeling in enhancing bimanual decoding.

According to the World Health Organization (WHO), neurological conditions such as stroke and brain injuries affect over one-third of the global population and represent a leading cause of disability [1], [2]. Around 2% of people worldwide require rehabilitation or assistive technologies [3], [4], often due to motor impairments from spinal cord injuries, stroke, or related disorders, which can lead to partial or complete paralysis and severely impact quality of life.
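The two-stage pipeline described above (discriminate motion type, then decode velocities with a class-specific GRU fed a trial-relative time index) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class names, weight shapes, and the simple trial-averaged linear discriminator are assumptions, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)


def gru_step(x, h, W, U, b):
    """One GRU step: update gate z, reset gate r, candidate state n."""
    H = h.shape[0]
    zr = 1.0 / (1.0 + np.exp(-(W[: 2 * H] @ x + U[: 2 * H] @ h + b[: 2 * H])))
    z, r = zr[:H], zr[H:]
    n = np.tanh(W[2 * H:] @ x + U[2 * H:] @ (r * h) + b[2 * H:])
    return (1 - z) * n + z * h


class BimanualDecoder:
    """Hypothetical sketch of a BiND-style pipeline: a discriminator
    picks the motion class (unimanual left / unimanual right / bimanual),
    then a class-specific GRU, fed the neural features plus a
    trial-relative time index, regresses hand velocity.
    For brevity this sketch decodes one 2D velocity; weights are random."""

    def __init__(self, n_units, hidden=16, n_classes=3):
        d = n_units + 1  # +1 input feature: trial-relative time index
        self.cls_W = rng.normal(0, 0.1, (n_classes, n_units))
        self.grus = [
            dict(W=rng.normal(0, 0.1, (3 * hidden, d)),
                 U=rng.normal(0, 0.1, (3 * hidden, hidden)),
                 b=np.zeros(3 * hidden),
                 out=rng.normal(0, 0.1, (2, hidden)))  # readout -> (vx, vy)
            for _ in range(n_classes)
        ]
        self.hidden = hidden

    def __call__(self, spikes):
        """spikes: (T, n_units) binned firing rates for one trial."""
        T = spikes.shape[0]
        # Stage 1: classify motion type from trial-averaged rates.
        c = int(np.argmax(self.cls_W @ spikes.mean(axis=0)))
        # Stage 2: run the class-specific GRU decoder over time.
        g = self.grus[c]
        h = np.zeros(self.hidden)
        vel = np.zeros((T, 2))
        for t in range(T):
            x = np.concatenate([spikes[t], [t / T]])  # append time index
            h = gru_step(x, h, g["W"], g["U"], g["b"])
            vel[t] = g["out"] @ h
        return c, vel


dec = BimanualDecoder(n_units=32)
motion_class, vel = dec(rng.normal(size=(50, 32)))
print(motion_class, vel.shape)
```

Routing each trial to a decoder specialized for its motion type, rather than forcing one network to model all three regimes, is the design choice the abstract credits for the improved bimanual accuracy; the time-index feature gives the GRU explicit within-trial phase information.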