Exploring the nature of intelligence

#artificialintelligence

Algorithms modeled loosely on the brain have helped artificial intelligence take a giant leap forward in recent years. Those algorithms, in turn, have advanced our understanding of human intelligence while fueling discoveries in a range of other fields. MIT founded the Quest for Intelligence to apply new breakthroughs in human intelligence research to AI, and to use advances in AI to push human intelligence research even further. This fall, nearly 50 undergraduates joined MIT's human-machine intelligence quest under the Undergraduate Research Opportunities Program (UROP). Students worked on a mix of projects focused on the brain, computing, and connecting computing to disciplines across MIT.


DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning

arXiv.org Artificial Intelligence

Expert problem-solving is driven by powerful languages for thinking about problems and their solutions. Acquiring expertise means learning these languages: systems of concepts, alongside the skills to use them. We present DreamCoder, a system that learns to solve problems by writing programs. It builds expertise by creating programming languages for expressing domain concepts, together with neural networks to guide the search for programs within these languages. A "wake-sleep" learning algorithm alternately extends the language with new symbolic abstractions and trains the neural network on imagined and replayed problems. DreamCoder solves both classic inductive programming tasks and creative tasks such as drawing pictures and building scenes. It rediscovers the basics of modern functional programming, vector algebra and classical physics, including Newton's and Coulomb's laws. Concepts are built compositionally from those learned earlier, yielding multi-layered symbolic representations that are interpretable and transferrable to new tasks, while still growing scalably and flexibly with experience.
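
To make the wake-sleep loop concrete, the following Python sketch illustrates the idea on a toy scale. It is an illustration only, not DreamCoder's implementation: the wake phase brute-forces programs over a tiny DSL of integer functions, the abstraction sleep promotes a frequently reused pair of primitives to a new library routine, and a simple per-primitive usage model stands in for the neural recognition network trained on replayed solutions. All names here (PRIMITIVES, wake, sleep_abstraction, sleep_dreaming, the example tasks) are invented for this sketch.

```python
# Toy wake-sleep program-learning loop (illustrative only, not DreamCoder's code).
import itertools
from collections import Counter

# A tiny DSL: each primitive is a unary function over integers.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    """Apply a program (a tuple of primitive names) to x, left to right."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def wake(tasks, max_len, weights):
    """Wake phase: enumerative search for the best-scoring program per task,
    guided by the current per-primitive weights (shorter, higher-weight is better)."""
    solutions = {}
    for task_id, examples in tasks.items():
        best = None
        for length in range(1, max_len + 1):
            for prog in itertools.product(PRIMITIVES, repeat=length):
                if all(run(prog, x) == y for x, y in examples):
                    score = sum(weights.get(p, 0.0) for p in prog) - length
                    if best is None or score > best[0]:
                        best = (score, prog)
        if best is not None:
            solutions[task_id] = best[1]
    return solutions

def sleep_abstraction(solutions):
    """Abstraction sleep: promote the most reused adjacent pair of primitives
    to a new named primitive, growing the language."""
    pair_counts = Counter()
    for prog in solutions.values():
        pair_counts.update(zip(prog, prog[1:]))
    if not pair_counts:
        return
    (a, b), count = pair_counts.most_common(1)[0]
    if count >= 2 and f"{a}_then_{b}" not in PRIMITIVES:
        f, g = PRIMITIVES[a], PRIMITIVES[b]
        PRIMITIVES[f"{a}_then_{b}"] = lambda x, f=f, g=g: g(f(x))

def sleep_dreaming(solutions):
    """Dream sleep stand-in: fit per-primitive usage weights from replayed
    solutions (the real system trains a neural recognition network instead)."""
    counts = Counter(p for prog in solutions.values() for p in prog)
    total = sum(counts.values()) or 1
    return {p: counts[p] / total for p in PRIMITIVES}

tasks = {
    "t1": [(1, 4), (2, 6)],    # x -> (x + 1) * 2
    "t2": [(1, 16), (2, 36)],  # x -> ((x + 1) * 2) ** 2
}

weights = {}
for iteration in range(2):
    solutions = wake(tasks, max_len=3, weights=weights)  # wake: solve tasks
    sleep_abstraction(solutions)                         # sleep: grow the language
    weights = sleep_dreaming(solutions)                  # sleep: refit the guide
    print(iteration, solutions, sorted(PRIMITIVES))
```

After the first cycle the shared fragment "inc, double" is promoted to a single library routine, and the reweighted search then prefers solutions written in terms of it, which is the compositional, reusable-concept behavior the abstract describes, in miniature.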


Developing artificial intelligence tools for all

#artificialintelligence

For all of the hype about artificial intelligence (AI), most software is still geared toward engineers. To demystify AI and unlock its benefits, the MIT Quest for Intelligence created the Quest Bridge to bring new intelligence tools and ideas into classrooms, labs, and homes. This spring, more than a dozen Undergraduate Research Opportunities Program (UROP) students joined the project in its mission to make AI accessible to all. Undergraduates worked on applications designed to teach kids about AI, improve access to AI programs and infrastructure, and harness AI to improve literacy and mental health. Six projects are highlighted here.


EmTech MIT: Giving machines common sense

ZDNet

The world has seen remarkable progress in artificial intelligence in recent years, but general AI remains science fiction. One of the keys to making this leap could be the human brain. In a talk at the EmTech MIT conference this week, MIT professor Josh Tenenbaum described a new university moonshot to build machines that can learn like children. "Why do we have all these AI technologies, but fundamentally no real AI?" Tenenbaum said. "We have machines that do useful things we used to think only humans could do, but none of these systems are truly intelligent, none of them have the flexible, common sense [of] . . .


Learning abstract structure for drawing by efficient motor program induction

arXiv.org Artificial Intelligence

Humans flexibly solve new problems that differ qualitatively from those they were trained on. This ability to generalize is supported by learned concepts that capture structure common across different problems. Here we develop a naturalistic drawing task to study how humans rapidly acquire structured prior knowledge. The task requires drawing visual objects that share underlying structure, based on a set of composable geometric rules. We show that people spontaneously learn abstract drawing procedures that support generalization, and propose a model of how learners can discover these reusable drawing programs. Trained in the same setting as humans, and constrained to produce efficient motor actions, this model discovers new drawing routines that transfer to test objects and resemble learned features of human sequences. These results suggest that two principles guiding motor program induction in the model - abstraction (general programs that ignore object-specific details) and compositionality (recombining previously learned programs) - are key for explaining how humans learn structured internal representations that guide flexible reasoning and learning.
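
As a rough illustration of the two principles named above (not the authors' model or task code), the Python sketch below treats drawing as a turtle-style motor program: abstraction appears as a parameterized polygon routine that ignores object-specific details such as scale, and compositionality appears when that routine is recombined into a new object. The function and variable names are invented for this example.

```python
# Toy turtle-style motor programs (hypothetical illustration only).
import math

def trace(program, step=1.0):
    """Execute a motor program (list of ('move'|'turn', arg)) and return pen positions."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for op, arg in program:
        if op == "move":
            x += arg * step * math.cos(heading)
            y += arg * step * math.sin(heading)
            points.append((round(x, 6), round(y, 6)))
        elif op == "turn":
            heading += math.radians(arg)
    return points

# Abstraction: one routine covers any regular polygon, parameterized by the
# number of sides and side length rather than hard-coding a specific shape.
def polygon(n_sides, side):
    return [cmd for _ in range(n_sides)
            for cmd in (("move", side), ("turn", 360 / n_sides))]

# Compositionality: a new object built by recombining the learned routine.
def composite_figure(side):
    return polygon(4, side) + [("turn", 30)] + polygon(3, side)

print(trace(polygon(4, 2)))          # a square, independent of scale
print(trace(composite_figure(2)))    # a square composed with a rotated triangle
```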