Collaborating Authors: Clune


SIMA 2: A Generalist Embodied Agent for Virtual Worlds

SIMA team, Bolton, Adrian, Lerchner, Alexander, Cordell, Alexandra, Moufarek, Alexandre, Bolt, Andrew, Lampinen, Andrew, Mitenkova, Anna, Hallingstad, Arne Olav, Vujatovic, Bojan, Li, Bonnie, Lu, Cong, Wierstra, Daan, Sawyer, Daniel P., Slater, Daniel, Reichert, David, Vercelli, Davide, Hassabis, Demis, Hudson, Drew A., Williams, Duncan, Hirst, Ed, Pardo, Fabio, Hill, Felix, Besse, Frederic, Openshaw, Hannah, Chan, Harris, Soyer, Hubert, Wang, Jane X., Clune, Jeff, Agapiou, John, Reid, John, Marino, Joseph, Kim, Junkyung, Gregor, Karol, Sridhar, Kaustubh, McKinney, Kay, Kampis, Laura, Zhang, Lei M., Matthey, Loic, Wang, Luyu, Raad, Maria Abi, Loks-Thompson, Maria, Engelcke, Martin, Kecman, Matija, Jackson, Matthew, Gazeau, Maxime, Purkiss, Ollie, Knagg, Oscar, Stys, Peter, Mendolicchio, Piermaria, Hadsell, Raia, Ke, Rosemary, Faulkner, Ryan, Chakera, Sarah, Baveja, Satinder Singh, Legg, Shane, Kashem, Sheleem, Terzi, Tayfun, Keck, Thomas, Harley, Tim, Scholtes, Tim, Roberts, Tyson, Mnih, Volodymyr, Liu, Yulan, Wang, Zhengdong, Ghahramani, Zoubin

arXiv.org Artificial Intelligence

We introduce SIMA 2, a generalist embodied agent that understands and acts in a wide variety of 3D virtual worlds. Built upon a Gemini foundation model, SIMA 2 represents a significant step toward active, goal-directed interaction within an embodied environment. Unlike prior work (e.g., SIMA 1) limited to simple language commands, SIMA 2 acts as an interactive partner, capable of reasoning about high-level goals, conversing with the user, and handling complex instructions given through language and images. Across a diverse portfolio of games, SIMA 2 substantially closes the gap with human performance and demonstrates robust generalization to previously unseen environments, all while retaining the base model's core reasoning capabilities. Furthermore, we demonstrate a capacity for open-ended self-improvement: by leveraging Gemini to generate tasks and provide rewards, SIMA 2 can autonomously learn new skills from scratch in a new environment. This work validates a path toward creating versatile and continuously learning agents for both virtual and, eventually, physical worlds.
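The self-improvement recipe the abstract sketches — one model proposing tasks and providing rewards, with the agent learning from its own successes — can be illustrated with a toy loop. Everything below (the task list, the reward rule, the "agent") is an invented stand-in for illustration only, not the SIMA 2 or Gemini implementation:

```python
import random

rng = random.Random(1)

TASKS = ["collect wood", "open door", "climb ladder"]

def propose_task():
    # Stand-in for a model-generated task curriculum.
    return rng.choice(TASKS)

def score_attempt(task, attempt):
    # Stand-in reward model: 1.0 if the attempt matches the task's verb.
    return 1.0 if attempt.split()[0] == task.split()[0] else 0.0

def agent_attempt(task, skill_level):
    # Toy agent: succeeds with probability that grows with its skill.
    return task if rng.random() < skill_level else "wander around"

skill = {t: 0.2 for t in TASKS}  # initial competence per task
replay = []                      # bank of successful experience

for step in range(300):
    task = propose_task()
    attempt = agent_attempt(task, skill[task])
    if score_attempt(task, attempt) > 0.5:
        replay.append((task, attempt))              # keep the success
        skill[task] = min(1.0, skill[task] + 0.05)  # "train" on it

print(f"successes banked: {len(replay)}")
```

The point of the sketch is the closed loop: the same system generates the challenges, judges the outcomes, and thereby supplies its own training signal without human-authored tasks.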


An 'AI Scientist' Is Inventing and Running Its Own Experiments

WIRED

At first glance, a recent batch of research papers produced by a prominent artificial intelligence lab at the University of British Columbia in Vancouver might not seem that notable. Featuring incremental improvements on existing algorithms and ideas, they read like the contents of a middling AI conference or journal. But the research is, in fact, remarkable. That's because it's entirely the work of an "AI scientist" developed at the UBC lab together with researchers from the University of Oxford and a startup called Sakana AI. The project demonstrates an early step toward what might prove a revolutionary trick: letting AI learn by inventing and exploring novel ideas.


Large Language Models as In-context AI Generators for Quality-Diversity

Lim, Bryan, Flageat, Manon, Cully, Antoine

arXiv.org Artificial Intelligence

Quality-Diversity (QD) approaches are a promising direction for developing open-ended processes, as they can discover archives of high-quality solutions across diverse niches. While already successful in many applications, QD approaches usually rely on combining only one or two solutions to generate new candidate solutions. As observed in open-ended processes such as technological evolution, wisely combining the large diversity of these solutions could lead to more innovative solutions and potentially boost the productivity of QD search. In this work, we propose to exploit the pattern-matching capabilities of generative models to enable such efficient solution combinations. We introduce In-context QD, a framework of techniques that aims to elicit the in-context capabilities of pre-trained Large Language Models (LLMs) to generate interesting solutions, using few-shot and many-shot prompting with quality-diverse examples from the QD archive as context. Applied to a series of common QD domains, In-context QD displays promising results compared to both QD baselines and similar strategies developed for single-objective optimization. Additionally, this result holds across multiple model parameter sizes and archive population sizes, as well as across domains with distinct characteristics, from BBO functions to policy search. Finally, we perform an extensive ablation that highlights the key prompt design considerations that encourage the generation of promising solutions for QD.


Can We Stop the Singularity?

The New Yorker

Increasingly, we're surrounded by fake people. Sometimes we know it and sometimes we don't. They offer us customer service on Web sites, target us in video games, and fill our social-media feeds; they trade stocks and, with the help of systems such as OpenAI's ChatGPT, can write essays, articles, and e-mails. By no means are these A.I. systems up to all the tasks expected of a full-fledged person. But they excel in certain domains, and they're branching out. Many researchers involved in A.I. believe that today's fake people are just the beginning.


An epic AI Debate--and why everyone should be at least a little bit worried about AI going into 2023

#artificialintelligence

What do Noam Chomsky, living legend of linguistics, Kai-Fu Lee, perhaps the most famous AI researcher in all of China, and Yejin Choi, the 2022 MacArthur Fellowship winner who was profiled earlier this week in The New York Times Magazine--and more than a dozen other scientists, economists, researchers, and elected officials--all have in common? They are all worried about the near-term future of AI. They are all worried about different things. Each spoke at the December 23 AGI Debate (co-organized by Montreal.AI's Vince Boucher and myself). No summary can capture all that was said (though Tiernan Ray's 8,000-word account at ZDNet comes close), but here are a few of the many concerns that were raised: Noam Chomsky, who led off the night, was worried about whether the current approach to artificial intelligence would ever tell us anything about the thing that he cares about most: what makes the human mind what it is?


Machines that think like humans: Everything to know about AGI and AI Debate 3

#artificialintelligence

After a year's hiatus, the AI Debate hosted by Gary Marcus and Vincent Boucher returned with a gaggle of AI thinkers, this time including policy types and scholars outside the discipline of AI, such as Noam Chomsky. The debate this year, AI Debate 3: The AGI Debate, as it's called, focused on the concept of artificial general intelligence, the notion of a machine capable of integrating a myriad of reasoning abilities approaching human levels. While the previous debate featured a number of AI scholars, Friday's meet-up drew 16 participants from a much wider gamut of professional backgrounds. In addition to numerous computer scientists and AI luminaries, the program included legendary linguist and activist Noam Chomsky, computational neuroscientist Konrad Kording, and Canadian parliament member Michelle Rempel Garner. Marcus was once again joined by his co-host, Vincent Boucher of Montreal.ai. The debate ran longer than planned; the full 3.5 hours can be viewed on the debate's YouTube page. The debate Web site is agidebate.com, and you may also want to follow the hashtag #agidebate. NYU professor emeritus and AI gadfly Gary Marcus resumed his duties hosting the multi-scholar face-off. Marcus started things off with a slide show of a "very brief history of AI," tongue firmly in cheek. Marcus said that contrary to enthusiasm in the decade following the landmark ImageNet success, the "promise" of machines doing various things had not paid off. He featured reference to his own New Yorker article throwing cold water on the matter.


An endlessly changing playground teaches AIs how to multitask

MIT Technology Review

They advance to more complex multiplayer games like hide and seek or capture the flag, where teams compete to be the first to find and grab their opponent's flag. The playground manager has no specific goal but aims to improve the general capability of the players over time. AIs like DeepMind's AlphaZero have beaten the world's best human players at chess and Go. But they can only learn one game at a time. As DeepMind cofounder Shane Legg put it when I spoke to him last year, it's like having to swap out your chess brain for your Go brain each time you want to switch games.


AI is learning how to create itself

#artificialintelligence

But it's not what the bots are learning that's exciting--it's how they're learning. POET generates the obstacle courses, assesses the bots' abilities, and assigns their next challenge, all without human involvement. Step by faltering step, the bots improve via trial and error. "At some point it might jump over a cliff like a kung fu master," says Wang. It may seem basic at the moment, but for Wang and a handful of other researchers, POET hints at a revolutionary new way to create supersmart machines: by getting AI to make itself. Wang's former colleague Jeff Clune is among the biggest boosters of this idea. Clune has been working on it for years, first at the University of Wyoming and then at Uber AI Labs, where he worked with Wang and others. Now dividing his time between the University of British Columbia and OpenAI, he has the backing of one of the world's top artificial-intelligence labs. Clune calls the attempt to build truly intelligent AI the most ambitious scientific quest in human history.
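The loop described above — a generator proposing ever-harder obstacle courses while agents improve on them by trial and error — can be sketched in miniature. Everything here is a toy stand-in (difficulties and skills are single numbers), not the actual POET algorithm or code:

```python
import random

rng = random.Random(42)

# Each pair couples an "environment" (a difficulty level) with its agent
# (a skill scalar); an agent solves an environment when skill >= difficulty.
pairs = [(0.1, 0.0)]  # (environment difficulty, paired agent skill)

for generation in range(50):
    new_pairs = []
    for difficulty, skill in pairs:
        # The agent improves by trial and error (random hill climbing).
        candidate = skill + rng.uniform(-0.02, 0.05)
        if candidate > skill:
            skill = candidate
        # The generator mutates the course, keeping only challenges that
        # are neither trivial nor impossible for the current agent.
        new_difficulty = difficulty + rng.uniform(0.0, 0.05)
        if skill < new_difficulty <= skill + 0.2:
            new_pairs.append((new_difficulty, skill))
        new_pairs.append((difficulty, skill))
    # Cap the population so the open-ended loop stays bounded.
    pairs = sorted(new_pairs, key=lambda p: -p[0])[:5]

hardest = max(d for d, _ in pairs)
print(f"hardest environment after 50 generations: {hardest:.2f}")
```

The "minimal criterion" check (new challenges must be hard but solvable) is what keeps the curriculum moving: difficulty ratchets up only as fast as the agents' abilities grow, with no human setting the goals.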


Artificial intelligence can 'evolve' to solve problems

#artificialintelligence

Many great ideas in artificial intelligence languish in textbooks for decades because we don't have the computational power to apply them. That's what happened with neural networks, a technique inspired by our brains' wiring that has recently succeeded in translating languages and driving cars. Now, another old idea--improving neural networks not through teaching, but through evolution--is revealing its potential. Five new papers from Uber in San Francisco, California, demonstrate the power of so-called neuroevolution to play video games, solve mazes, and even make a simulated robot walk. Neuroevolution, a process of mutating and selecting the best neural networks, has previously led to networks that can compose music, control robots, and play the video game Super Mario World.
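The mutate-and-select process that defines neuroevolution can be shown on a toy problem. The sketch below evolves the nine weights of a tiny 2-2-1 network to approximate XOR using only mutation and truncation selection — no gradients or backpropagation; the network size and hyperparameters are arbitrary choices for illustration:

```python
import math
import random

rng = random.Random(0)

# XOR truth table: the classic task a linear model cannot solve.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # 2 inputs -> 2 tanh hidden units -> 1 sigmoid output; w holds 9 weights.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    z = w[6] * h1 + w[7] * h2 + w[8]
    return 1.0 / (1.0 + math.exp(-z))

def fitness(w):
    # Negative squared error over the XOR table (higher is better).
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

# Evolve: keep the 10 fittest networks, refill with mutated copies.
population = [[rng.gauss(0, 1) for _ in range(9)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # truncation selection
    population = parents + [
        [wi + rng.gauss(0, 0.2) for wi in rng.choice(parents)]  # mutation
        for _ in range(40)
    ]

best = max(population, key=fitness)
print(f"best fitness: {fitness(best):.3f}")
```

Selection plus mutation is the entire learning rule here; frameworks like NEAT additionally evolve the network's topology, but the core loop is the same.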