The Thousand Brains Project: A New Paradigm for Sensorimotor Intelligence
Clay, Viviane, Leadholm, Niels, Hawkins, Jeff
Artificial intelligence has advanced rapidly in the last decade, driven primarily by progress in the scale of deep-learning systems. Despite these advances, the creation of intelligent systems that can operate effectively in diverse, real-world environments remains a significant challenge. In this white paper, we outline the Thousand Brains Project, an ongoing research effort to develop an alternative, complementary form of AI, derived from the operating principles of the neocortex. We present an early version of a thousand-brains system, a sensorimotor agent that is uniquely suited to quickly learn a wide range of tasks and eventually implement any capabilities the human neocortex has. Core to its design is the use of a repeating computational unit, the learning module, modeled on the cortical columns found in mammalian brains. Each learning module operates as a semi-independent unit that can model entire objects, represents information through spatially structured reference frames, and both estimates and is able to effect movement in the world. Learning is a quick, associative process, similar to Hebbian learning in the brain, and leverages inductive biases around the spatial structure of the world to enable rapid and continual learning. Multiple learning modules can interact with one another both hierarchically and non-hierarchically via a "cortical messaging protocol" (CMP), creating more abstract representations and supporting multimodal integration. We outline the key principles motivating the design of thousand-brains systems and provide details about the implementation of Monty, our first instantiation of such a system. Code can be found at https://github.com/thousandbrainsproject/tbp.monty, along with more detailed documentation at https://thousandbrainsproject.readme.io/.
Toward Next-Generation Artificial Intelligence: Catalyzing the NeuroAI Revolution
Zador, Anthony, Escola, Sean, Richards, Blake, Ölveczky, Bence, Bengio, Yoshua, Boahen, Kwabena, Botvinick, Matthew, Chklovskii, Dmitri, Churchland, Anne, Clopath, Claudia, DiCarlo, James, Ganguli, Surya, Hawkins, Jeff, Kording, Konrad, Koulakov, Alexei, LeCun, Yann, Lillicrap, Timothy, Marblestone, Adam, Olshausen, Bruno, Pouget, Alexandre, Savin, Cristina, Sejnowski, Terrence, Simoncelli, Eero, Solla, Sara, Sussillo, David, Tolias, Andreas S., Tsao, Doris
This implies that the bulk of the work in developing general AI can be achieved by building systems that match the perceptual and motor abilities of animals and that the subsequent step to human-level intelligence would be considerably smaller. This is good news because progress on the first goal can rely on the favored subjects of neuroscience research - rats, mice, and non-human primates - for which extensive and rapidly expanding behavioral and neural datasets can guide the way. Thus, we believe that the NeuroAI path will lead to necessary advances if we figure out the core capabilities that all animals possess in embodied sensorimotor interaction with the world. NeuroAI Grand Challenge: The Embodied Turing Test. In 1950, Alan Turing proposed the "imitation game" as a test of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human.
How do neurons operate on sparse distributed representations? A mathematical theory of sparsity, neurons and active dendrites
Ahmad, Subutai, Hawkins, Jeff
We propose a formal mathematical model for sparse representations and active dendrites in neocortex. Our model is inspired by recent experimental findings on active dendritic processing and NMDA spikes in pyramidal neurons. These experimental and modeling studies suggest that the basic unit of pattern memory in the neocortex is instantiated by small clusters of synapses operated on by localized non-linear dendritic processes. We derive a number of scaling laws that characterize the accuracy of such dendrites in detecting activation patterns in a neuronal population under adverse conditions. We introduce the union property, which shows that synapses for multiple patterns can be randomly mixed together within a segment and still lead to highly accurate recognition. We describe simulation results that provide further insight into sparse representations, as well as two primary results. First, we show that pattern recognition by a neuron with active dendrites can be extremely accurate and robust with high-dimensional sparse inputs, even when using a tiny number of synapses to recognize large patterns. Second, equations representing recognition accuracy of a dendrite predict optimal NMDA spiking thresholds under a generous set of assumptions. The prediction tightly matches NMDA spiking thresholds measured in the literature. Our model matches many of the known properties of pyramidal neurons. As such, the theory provides a mathematical framework for understanding the benefits and limits of sparse representations in cortical networks.
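The mechanism this abstract describes - a dendritic segment storing a small random subsample of a sparse pattern's synapses, firing when overlap with the active input reaches a threshold, and remaining reliable even when several patterns' synapses are mixed on one segment (the union property) - can be illustrated with a minimal sketch. The parameters below (population size, sparsity, synapses per segment, threshold) are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 2048      # size of the neuronal population
W = 40        # active neurons per sparse pattern (~2% sparsity)
S = 15        # synapses a segment samples from a pattern (subsampling)
THETA = 10    # NMDA-like spiking threshold on the overlap

def random_sdr(n=N, w=W):
    """A sparse pattern: the set of indices of the w active neurons."""
    return set(rng.choice(n, size=w, replace=False).tolist())

def segment_from(pattern, s=S):
    """A dendritic segment stores only a small random subset of a
    pattern's active cells as synapses."""
    return set(rng.choice(sorted(pattern), size=s, replace=False).tolist())

def segment_matches(segment, active, theta=THETA):
    """The segment generates a dendritic spike if the overlap between
    its synapses and the currently active cells reaches the threshold."""
    return len(segment & active) >= theta

# Recognition is robust despite storing far fewer synapses than the
# pattern has active cells (S = 15 synapses for a W = 40 pattern).
p1 = random_sdr()
seg = segment_from(p1)
assert segment_matches(seg, p1)

# Union property: synapses for three patterns mixed on one segment,
# yet each pattern is still recognized.
p2, p3 = random_sdr(), random_sdr()
union_seg = seg | segment_from(p2) | segment_from(p3)
assert all(segment_matches(union_seg, p) for p in (p1, p2, p3))

# With high-dimensional sparse inputs, false matches against unrelated
# patterns are extremely rare: expected overlap here is under 1 synapse.
false_matches = sum(segment_matches(union_seg, random_sdr())
                    for _ in range(1000))
print(false_matches)
```

The robustness comes from the combinatorics of sparsity: a random 40-of-2048 pattern overlaps the segment's ~45 stored synapses by less than one cell on average, so a threshold of 10 essentially never fires by chance, while a stored pattern always clears it.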