An Existential Crisis in Neuroscience - Issue 81: Maps

Nautilus

On a chilly evening last fall, I stared into nothingness out of the floor-to-ceiling windows in my office on the outskirts of Harvard's campus. As a purplish-red sun set, I sat brooding over my dataset on rat brains. I thought of the cold, windowless rooms in downtown Boston, home to Harvard's high-performance computing center, where computer servers were holding on to a precious 48 terabytes of my data. I had recorded the 13 trillion numbers in this dataset as part of my Ph.D. experiments, asking how the visual parts of the rat brain respond to movement. Printed on paper, the dataset would fill 116 billion pages, double-spaced. When I recently finished writing the story of my data, the magnum opus fit on fewer than two dozen printed pages. Performing the experiments turned out to be the easy part. I had spent the last year agonizing over the data, observing it and asking questions. The answers left out large chunks that did not pertain to the questions, as a map leaves out irrelevant details of a territory.
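A quick back-of-the-envelope check (my own arithmetic, not from the essay) suggests the quoted figures hang together: 48 terabytes spread over 13 trillion values is roughly 4 bytes per value, consistent with 32-bit floats, and 13 trillion numbers across 116 billion pages is about 112 numbers per double-spaced page.

```python
# Back-of-the-envelope check of the dataset figures quoted above.
# (Assumed interpretation; the essay does not state the storage format.)
TB = 1024**4                       # bytes per terabyte (binary convention)

dataset_bytes = 48 * TB            # 48 TB on Harvard's servers
num_values = 13e12                 # 13 trillion recorded numbers
num_pages = 116e9                  # 116 billion double-spaced pages

bytes_per_value = dataset_bytes / num_values
values_per_page = num_values / num_pages

print(f"{bytes_per_value:.1f} bytes per value")   # ~4.1 -> plausible 32-bit floats
print(f"{values_per_page:.0f} values per page")   # ~112 numbers per page
```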


The U.S. Government Launches a $100-Million "Apollo Project of the Brain"

AITopics Original Links

Three decades ago, the U.S. government launched the Human Genome Project, a 13-year endeavor to sequence and map all the genes of the human species. Although initially met with skepticism and even opposition, the project has since transformed the field of genetics and is today considered one of the most successful scientific enterprises in history. Now the Intelligence Advanced Research Projects Activity (IARPA), a research organization for the intelligence community modeled after the Defense Department's famed DARPA, has dedicated $100 million to a similarly ambitious project. The Machine Intelligence from Cortical Networks program, or MICrONS, aims to reverse-engineer one cubic millimeter of the brain, study the way it makes computations, and use those findings to better inform algorithms in machine learning and artificial intelligence. IARPA has recruited three teams, led by David Cox, a biologist and computer scientist at Harvard University; Tai Sing Lee, a computer scientist at Carnegie Mellon University; and Andreas Tolias, a neuroscientist at the Baylor College of Medicine.


Inside the Moonshot Effort to Finally Figure Out the Brain

MIT Technology Review

"Here's the problem with artificial intelligence today," says David Cox. Yes, it has gotten astonishingly good, from near-perfect facial recognition to driverless cars and world-champion Go-playing machines. And it's true that some AI applications don't even have to be programmed anymore: they're based on architectures that allow them to learn from experience. Yet there is still something clumsy and brute-force about it, says Cox, a neuroscientist at Harvard. "To build a dog detector, you need to show the program thousands of things that are dogs and thousands that aren't dogs," he says.


AI Designers Find Inspiration in Rat Brains

IEEE Spectrum Robotics

When the rat sees object A, it must lick the nozzle on the left to get a drop of sweet juice; when it sees object B, the juice will be in the right nozzle. But the objects are presented in various orientations, so the rat has to mentally rotate each shape on display and decide if it matches A or B. Interspersed with training sessions are imaging sessions, for which the rats are taken down the hall to another lab where a bulky microscope is draped in black cloth, looking like an old-fashioned photographer's setup. Here, the team uses a two-photon excitation microscope to examine the animal's visual cortex while it's looking at a screen displaying the now-familiar objects A and B, again in various orientations. The microscope records flashes of fluorescence when its laser hits active neurons, and the 3D video shows patterns that resemble green fireflies winking on and off on a summer night. Cox is keen to see how those patterns change as the animal becomes expert at its task.
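Those "green fireflies" are flashes of calcium-indicator fluorescence. A standard first step in analyzing such recordings (an assumed step, not necessarily this team's pipeline) is to normalize each neuron's raw fluorescence trace into ΔF/F and threshold it to flag the frames where the neuron is active:

```python
# Sketch of the standard dF/F activity measure for two-photon recordings.
# (Assumed analysis step; the article does not describe the team's pipeline.)
import numpy as np

def delta_f_over_f(trace, baseline_pct=20):
    """Normalize a raw fluorescence trace as (F - F0) / F0,
    with F0 taken as a low percentile to approximate baseline."""
    f0 = np.percentile(trace, baseline_pct)
    return (trace - f0) / f0

rng = np.random.default_rng(0)
trace = 100 + rng.normal(0, 2, 1000)   # stand-in trace: baseline ~100 a.u.
trace[400:420] += 50                   # one transient "firefly" flash

dff = delta_f_over_f(trace)
active_frames = np.flatnonzero(dff > 0.3)  # frames where the neuron lights up
print(active_frames[:5])                   # -> frames around 400
```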

