When the rat sees object A, it must lick the nozzle on the left to get a drop of sweet juice; when it sees object B, the juice will be in the right nozzle. But the objects are presented in various orientations, so the rat has to mentally rotate each displayed shape and decide whether it matches A or B. Interspersed with training sessions are imaging sessions, for which the rats are taken down the hall to another lab where a bulky microscope is draped in black cloth, looking like an old-fashioned photographer's setup. Here, the team uses a two-photon excitation microscope to examine the animal's visual cortex while it looks at a screen displaying the now-familiar objects A and B, again in various orientations. The microscope records flashes of fluorescence when its laser hits active neurons, and the 3D video shows patterns that resemble green fireflies winking on and off on a summer night. Cox is keen to see how those patterns change as the animal becomes expert at its task.
Three decades ago, the U.S. government launched the Human Genome Project, a 13-year endeavor to sequence and map all the genes of the human species. Although initially met with skepticism and even opposition, the project has since transformed the field of genetics and is today considered one of the most successful scientific enterprises in history. Now the Intelligence Advanced Research Projects Activity (IARPA), a research organization for the intelligence community modeled after the Defense Department's famed DARPA, has dedicated $100 million to a similarly ambitious project. The Machine Intelligence from Cortical Networks program, or MICrONS, aims to reverse-engineer one cubic millimeter of the brain, study the way it performs computations, and use those findings to better inform algorithms in machine learning and artificial intelligence. IARPA has recruited three teams, led respectively by David Cox, a biologist and computer scientist at Harvard University; Tai Sing Lee, a computer scientist at Carnegie Mellon University; and Andreas Tolias, a neuroscientist at the Baylor College of Medicine.
"Here's the problem with artificial intelligence today," says David Cox, a neuroscientist at Harvard. Yes, it has gotten astonishingly good, from near-perfect facial recognition to driverless cars and world-champion Go-playing machines. And it's true that some AI applications no longer even have to be programmed: they're based on architectures that allow them to learn from experience. Yet there is still something clumsy and brute-force about it, Cox says. "To build a dog detector, you need to show the program thousands of things that are dogs and thousands that aren't dogs," he says.
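The data-hungry, brute-force quality Cox describes can be sketched in a few lines of code. This is a minimal toy illustration, not anything from his lab: the "dog" and "not-dog" examples below are synthetic one-dimensional features, and the classifier is a simple nearest-centroid rule whose accuracy depends entirely on having plenty of labeled examples.

```python
import random

random.seed(0)

# Synthetic "image features": dog examples cluster near 1.0, non-dogs near 0.0.
# A real detector would use thousands of labeled photographs instead.
def make_examples(n):
    dogs = [(random.gauss(1.0, 0.4), 1) for _ in range(n)]
    others = [(random.gauss(0.0, 0.4), 0) for _ in range(n)]
    return dogs + others

def train_centroids(examples):
    # "Training" here is just averaging the features of each labeled class.
    dog_mean = (sum(x for x, y in examples if y == 1)
                / sum(1 for _, y in examples if y == 1))
    other_mean = (sum(x for x, y in examples if y == 0)
                  / sum(1 for _, y in examples if y == 0))
    return dog_mean, other_mean

def predict(x, dog_mean, other_mean):
    # Label a new example by whichever class centroid it falls closer to.
    return 1 if abs(x - dog_mean) < abs(x - other_mean) else 0

train = make_examples(1000)   # thousands of labeled examples go in...
test = make_examples(200)
dog_mean, other_mean = train_centroids(train)
accuracy = sum(predict(x, dog_mean, other_mean) == y for x, y in test) / len(test)
print(round(accuracy, 2))
```

The point of the sketch is the contrast with the three-year-old at the zoo: the program learns nothing it wasn't shown thousands of labeled instances of, which is precisely the inefficiency MICrONS hopes brain-derived algorithms can reduce.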
Neuroscientists have constructed a network map of connections between cortical neurons, traced from a 100-terabyte 3D data set. The data were acquired with an electron microscope at nanoscopic resolution, allowing every one of the "wires" to be seen, along with its connections. Some of the neurons are color-coded according to their activity patterns in the living brain. This map, the largest of its kind to date, was published by an international team of researchers from the Allen Institute for Brain Science, Harvard Medical School, and Neuro-Electronics Research Flanders (NERF). In the course of their study, the researchers developed new tools that will be useful for "reverse engineering the brain by discovering relationships between circuit wiring and neuronal and network computations," says Wei-Chung Lee, Ph.D., Instructor in Neurobiology at Harvard Medical School and lead author of a paper published this week in the journal Nature.
Take a three-year-old to the zoo, and she intuitively knows that the long-necked creature nibbling leaves is the same thing as the giraffe in her picture book. That superficially easy feat is in reality quite sophisticated. The cartoon drawing is a frozen silhouette of simple lines, while the living animal is awash in color, texture, movement and light. It can contort into different shapes and looks different from every angle. Humans excel at this kind of task.