Using deep neural networks to predict how natural sounds are processed by the brain


In recent years, machine learning techniques have accelerated and innovated research in numerous fields, including neuroscience. By identifying patterns in experimental data, these models can, for instance, predict the neural processes associated with specific experiences or with the processing of sensory stimuli. Researchers at CNRS, Aix-Marseille Université, and Maastricht University recently used computational models to predict how the human brain transforms sounds into semantic representations of what is happening in the surrounding environment. Their paper, published in Nature Neuroscience, shows that some deep neural network (DNN)-based models are better than others at predicting neural processes from neuroimaging and experimental data.

"Our main interest is to make numerical predictions about how natural sounds are perceived and represented in the brain, and to use computational models to understand how we transform the heard acoustic signal into a semantic representation of the objects and events in the auditory environment," Bruno Giordano, one of the researchers who carried out the study, told Medical Xpress.
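The general approach described here, often called an encoding model, can be sketched in a few lines: represent each sound by a vector of model features (for a DNN, the activations of some layer), fit a regularized linear mapping from those features to measured brain responses, and score the model by how well it predicts responses to held-out sounds. The sketch below uses random stand-ins for both the DNN features and the voxel responses, and ridge regression in closed form; all sizes, names, and the specific regression choice are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: sounds in the stimulus set, feature dimensions of one
# DNN layer, and fMRI voxels. All values here are synthetic stand-ins.
n_sounds, n_features, n_voxels = 200, 50, 10
X = rng.standard_normal((n_sounds, n_features))       # DNN features per sound
W_true = rng.standard_normal((n_features, n_voxels))  # unknown feature-to-voxel map
Y = X @ W_true + 0.5 * rng.standard_normal((n_sounds, n_voxels))  # noisy responses

# Split sounds into a training set and a held-out test set.
X_tr, X_te = X[:150], X[150:]
Y_tr, Y_te = Y[:150], Y[150:]

# Ridge regression in closed form: W = (X'X + alpha*I)^-1 X'Y.
alpha = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_features), X_tr.T @ Y_tr)

# Predict held-out responses and score each voxel by Pearson correlation;
# a model whose features match the brain's representation scores higher.
Y_pred = X_te @ W
r = [np.corrcoef(Y_te[:, v], Y_pred[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation: {np.mean(r):.2f}")
```

Comparing different candidate models (e.g. different DNNs, or DNNs versus hand-crafted acoustic features) then amounts to swapping in different `X` matrices and comparing the held-out prediction scores.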
