Neural Attention: Machine Learning Meets Neuroscience

#artificialintelligence 

Neural attention has been applied successfully to a variety of applications, including natural language processing, vision, and memory. An attractive aspect of these neural models is their ability to extract relevant features from data with minimal feature engineering.

Brian Cheung is a PhD student at UC Berkeley working with Professor Bruno Olshausen, and an intern at Google Brain. By drawing inspiration from the fields of neuroscience and machine learning, he hopes to create systems that can solve complex vision tasks using attention and memory. At the Deep Learning Summit in Singapore, Brian will share his expertise on the fovea as an emergent property of visual attention, on ways to extend this ability to learning interpretable structural features of the attention window itself, and on finding conditions where these emergent properties are amplified or eliminated, providing clues to their function. I asked him a few questions ahead of the summit to learn more.
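For readers new to the topic, the core idea behind soft attention can be sketched in a few lines: the model scores each input element against a query, normalizes the scores with a softmax, and returns a weighted average. This is a generic illustration, not Brian's model; the function and variable names here are invented for the example.

```python
import numpy as np

def soft_attention(query, keys, values):
    """Generic soft attention: a softmax-weighted average of values."""
    # Score each key by its similarity (dot product) to the query.
    scores = keys @ query
    # Softmax turns scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # The output emphasizes values whose keys match the query.
    return weights @ values, weights

# Toy example: the second key aligns best with the query,
# so the output is pulled toward the second value.
query = np.array([1.0, 0.0])
keys = np.array([[0.1, 0.9],
                 [1.0, 0.0],
                 [0.5, 0.5]])
values = np.array([0.0, 10.0, 5.0])
output, weights = soft_attention(query, keys, values)
```

Because the weighting is differentiable, the model can learn end-to-end where to attend, which is what lets these systems extract relevant features without hand-designed feature extractors.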
