A State Space Model for Decoding Auditory Attentional Modulation from MEG in a Competing Speaker Environment
Neural Information Processing Systems
Humans are able to segregate auditory objects in a complex acoustic scene through an interplay of bottom-up feature extraction and top-down selective attention in the brain. The detailed mechanism underlying this process is largely unknown, and the ability to mimic it is an important problem in artificial intelligence and computational neuroscience. We consider the problem of decoding the attentional state of a listener in a competing-speaker environment from magnetoencephalography (MEG) recordings of the human brain. We develop a behaviorally inspired state-space model to account for the modulation of the MEG with respect to the attentional state of the listener. We construct a decoder based on the maximum a posteriori (MAP) estimate of the state parameters, obtained via the Expectation-Maximization (EM) algorithm. The resulting decoder is able to track the attentional modulation of the listener with multi-second resolution, using only the envelopes of the two speech streams as covariates.
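The abstract does not specify the paper's model equations, so the following is only a minimal illustrative sketch of the general idea: a binary latent attention state with a "sticky" Markov prior, per-window evidence from correlating a (simulated) neural signal with the two speech envelopes, and posterior smoothing via the forward-backward algorithm. All simulation parameters, the moment-based parameter estimates (a crude stand-in for the paper's EM step), and the Gaussian observation model are assumptions, not the authors' method.

```python
import numpy as np

def forward_backward(log_lik, p_stay=0.95):
    """Posterior over a binary attention state given per-window
    log-likelihoods, under a sticky two-state Markov prior."""
    T = log_lik.shape[0]
    trans = np.array([[p_stay, 1 - p_stay], [1 - p_stay, p_stay]])
    lik = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    alpha = np.zeros((T, 2))
    beta = np.ones((T, 2))
    alpha[0] = 0.5 * lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                    # forward pass
        alpha[t] = lik[t] * (alpha[t - 1] @ trans)
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):           # backward pass
        beta[t] = trans @ (lik[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# --- Toy competing-speaker simulation (all parameters hypothetical) ---
rng = np.random.default_rng(0)
T, win = 60, 200                              # analysis windows x samples
env1 = np.abs(rng.standard_normal((T, win)))  # envelope, speaker 1
env2 = np.abs(rng.standard_normal((T, win)))  # envelope, speaker 2
state = np.zeros(T, dtype=int)                # 0 = attend speaker 1
state[T // 2:] = 1                            # attention switches mid-trial
attended = np.where(state[:, None] == 0, env1, env2)
meg = attended + 0.8 * rng.standard_normal((T, win))  # noisy neural proxy

def corr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Evidence: per-window correlation difference between the two envelopes.
m = np.array([corr(meg[t], env1[t]) - corr(meg[t], env2[t]) for t in range(T)])

# Moment-based stand-in for the EM parameter estimates used in the paper.
mu, sigma = np.abs(m).mean(), m.std()
log_lik = np.stack([-(m - mu) ** 2, -(m + mu) ** 2], axis=1) / (2 * sigma ** 2)

post = forward_backward(log_lik)
decoded = post.argmax(axis=1)
accuracy = (decoded == state).mean()
print(f"decoding accuracy on simulated data: {accuracy:.2f}")
```

Because the Markov prior favors staying in the current state, the smoothed posterior suppresses isolated noisy windows while still tracking the mid-trial attention switch with a latency of a few windows, which is the qualitative behavior the abstract describes as multi-second resolution.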
Mar-13-2024, 11:17:40 GMT