$12 million project aims to 'reverse-engineer' the brain to help computers learn

AITopics Original Links

Teaching computers to learn the way we do is widely considered an important step toward better artificial intelligence, but it's hard to achieve without a good understanding of how we think. With that premise in mind, a new $12 million effort launched Wednesday aims to "reverse-engineer" the human brain. Led by Tai Sing Lee, a professor in Carnegie Mellon University's Computer Science Department and the Center for the Neural Basis of Cognition (CNBC), the five-year project seeks to unlock the secrets of neural circuitry and the brain's learning methods. Ultimately, the goal is to improve neural networks, the computational models often used for AI in applications including self-driving cars, automated trading, and facial and speech recognition. "Today's neural nets use algorithms that were essentially developed in the early 1980s," Lee said.
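Lee's remark refers to backpropagation, the gradient-based training procedure from that era which still underlies most of today's neural networks. As a rough sketch (not taken from the project itself), the minimal Python example below trains a two-layer network with backpropagation on the XOR task; the layer sizes, learning rate, and step count are illustrative assumptions, not anything specified in the article.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: XOR, the classic test case that single-layer models cannot solve.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Randomly initialized weights: 2 inputs -> 4 hidden units -> 1 output.
    W1 = rng.normal(size=(2, 4))
    b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1))
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0  # illustrative learning rate
    for step in range(5000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)   # hidden-layer activations
        p = sigmoid(h @ W2 + b2)   # network output

        # Backward pass: the chain rule, applied layer by layer.
        d_out = (p - y) * p * (1 - p)          # error signal at the output
        d_hid = (d_out @ W2.T) * h * (1 - h)   # error propagated to the hidden layer

        # Gradient-descent weight updates.
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_hid)
        b1 -= lr * d_hid.sum(axis=0)

    print(p.round(2))  # approaches [[0], [1], [1], [0]] as training succeeds

The project's premise is that a better circuit-level understanding of the brain could suggest improvements to exactly this kind of decades-old update rule.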


The U.S. Government Launches a $100-Million "Apollo Project of the Brain"

AITopics Original Links

Three decades ago, the U.S. government launched the Human Genome Project, a 13-year endeavor to sequence and map all the genes of the human species. Although initially met with skepticism and even opposition, the project has since transformed the field of genetics and is today considered one of the most successful scientific enterprises in history. Now the Intelligence Advanced Research Projects Activity (IARPA), a research organization for the intelligence community modeled after the Defense Department's famed DARPA, has dedicated $100 million to a similarly ambitious project. The Machine Intelligence from Cortical Networks program, or MICrONS, aims to reverse-engineer one cubic millimeter of the brain, study the way it makes computations, and use those findings to better inform algorithms in machine learning and artificial intelligence. IARPA has recruited three teams, led respectively by David Cox, a biologist and computer scientist at Harvard University; Tai Sing Lee, a computer scientist at Carnegie Mellon University; and Andreas Tolias, a neuroscientist at the Baylor College of Medicine.


AI Designers Find Inspiration in Rat Brains

IEEE Spectrum Robotics

When the rat sees object A, it must lick the nozzle on the left to get a drop of sweet juice; when it sees object B, the juice will be in the right nozzle. But the objects are presented in various orientations, so the rat has to mentally rotate each shape on display and decide whether it matches A or B. Interspersed with training sessions are imaging sessions, for which the rats are taken down the hall to another lab where a bulky microscope is draped in black cloth, looking like an old-fashioned photographer's setup. Here, the team uses a two-photon excitation microscope to examine the animal's visual cortex while it's looking at a screen displaying the now-familiar objects A and B, again in various orientations. The microscope records flashes of fluorescence when its laser hits active neurons, and the 3D video shows patterns that resemble green fireflies winking on and off on a summer night. Cox is keen to see how those patterns change as the animal becomes expert at its task.


A Map of the Brain Could Teach Machines to See Like You

AITopics Original Links

Take a three-year-old to the zoo, and she intuitively knows that the long-necked creature nibbling leaves is the same thing as the giraffe in her picture book. That superficially easy feat is in reality quite sophisticated. The cartoon drawing is a frozen silhouette of simple lines, while the living animal is awash in color, texture, movement and light. It can contort into different shapes and looks different from every angle. Humans excel at this kind of task.