Gato


Why Gato from Deepmind is a game changer - DataScienceCentral.com

#artificialintelligence

While no agent can be expected to excel in all imaginable control tasks, especially those far outside of its training distribution, we here test the hypothesis that training an agent which is generally capable on a large number of tasks is possible; and that this general agent can be adapted with little extra data to succeed at an even larger number of tasks. We hypothesize that such an agent can be obtained through scaling data, compute and model parameters, continually broadening the training distribution while maintaining performance, towards covering any task, behavior and embodiment of interest. In this setting, natural language can act as a common grounding across otherwise incompatible embodiments, unlocking combinatorial generalization to new behaviors. The guiding design principle of Gato is to train on the widest variety of relevant data possible, including diverse modalities such as images, text, proprioception, joint torques, button presses, and other discrete and continuous observations and actions. To enable processing this multi-modal data, we serialize all data into a flat sequence of tokens.
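The serialization scheme is concrete enough to sketch in code. Below is a minimal, hypothetical illustration, not DeepMind's implementation: the vocabulary size, bin count, and mu-law constants are assumptions loosely based on the paper's description, and image patch tokens are omitted. It shows how text tokens, proprioception, and continuous actions could be flattened into one token sequence:

```python
import numpy as np

# Hypothetical constants, loosely based on the paper's description:
# a 32,000-entry subword vocabulary for text, with 1,024 extra ids
# reserved for discretized continuous values. Exact numbers are
# assumptions for illustration.
TEXT_VOCAB_SIZE = 32000
NUM_BINS = 1024

def mu_law(x, mu=100.0, m=256.0):
    """Mu-law companding squashes continuous values toward [-1, 1]."""
    return np.sign(x) * np.log(np.abs(x) * mu + 1.0) / np.log(m * mu + 1.0)

def tokenize_continuous(values):
    """Map continuous observations/actions (e.g. joint torques) to
    integer bins placed after the text vocabulary."""
    squashed = np.clip(mu_law(np.asarray(values, dtype=np.float64)), -1.0, 1.0)
    bins = np.floor((squashed + 1.0) / 2.0 * (NUM_BINS - 1)).astype(int)
    return (TEXT_VOCAB_SIZE + bins).tolist()

def tokenize_discrete(values):
    """Discrete inputs such as button presses are already small
    integers and can serve as token ids directly."""
    return [int(v) for v in values]

def serialize_timestep(text_tokens, proprioception, action):
    """Flatten one timestep of a multimodal episode into a single
    token list; an episode is the concatenation of its timesteps."""
    return (tokenize_discrete(text_tokens)
            + tokenize_continuous(proprioception)
            + tokenize_continuous(action))

# One timestep: two text tokens, a 3-D proprioceptive reading, and
# a 2-D continuous action, all flattened into one sequence.
print(serialize_timestep([17, 512], [0.1, -0.4, 2.0], [0.5, -0.5]))
```

Once everything is a flat token sequence, a single Transformer can be trained on all of it with the same next-token objective, which is the point of the design.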


Artificial General Intelligence Is Not as Imminent as You Might Think

#artificialintelligence

To the average person, it must seem as if the field of artificial intelligence is making immense progress. According to the press releases, and some of the more gushing media accounts, OpenAI's DALL-E 2 can seemingly create spectacular images from any text; another OpenAI system called GPT-3 can talk about just about anything; and a system called Gato that was released in May by DeepMind, a division of Alphabet, seemingly worked well on every task the company could throw at it. One of DeepMind's high-level executives even went so far as to brag that in the quest for artificial general intelligence (AGI), AI that has the flexibility and resourcefulness of human intelligence, "The Game is Over!" And Elon Musk said recently that he would be surprised if we didn't have artificial general intelligence by 2029. Machines may someday be as smart as people, and perhaps even smarter, but the game is far from over.


Deepmind: Is "Gato" a precursor for general artificial intelligence?

#artificialintelligence

Deepmind's Gato solves many tasks, but none of them really well. Does the new AI system nevertheless point the way toward general artificial intelligence? Hot on the heels of OpenAI's DALL-E 2, Google's PaLM and LaMDA 2, and Deepmind's Chinchilla and Flamingo, the London-based AI company is showing off another large AI model. Yet Deepmind's Gato is different: the model can't write text better, describe images better, play Atari better, control robotic arms better, or orient itself in 3D spaces better than other AI systems. But Gato can do a bit of everything. Deepmind trained the Transformer-based generalist on images, text, proprioception, joint torques, button presses, and other discrete and continuous observations and actions.


Can the 'Gato' AI model out-perform human intelligence?

#artificialintelligence

Deepmind, a subsidiary of Alphabet specialising in artificial intelligence, recently presented its "Gato" model. This so-called "general-purpose" AI model can reportedly perform more than 600 different tasks, and in many of them it might even outperform a human being. Could Deepmind have built the first general-purpose artificial intelligence model, i.e., one capable of learning several tasks at once, where most AI models are trained for a single purpose? Since the company unveiled its new work, the question has been drawing reactions from computer experts around the world.


Is DeepMind's Gato the world's first AGI?

#artificialintelligence

Artificial general intelligence (AGI) is back in the news thanks to the recent introduction of Gato from DeepMind. As much as anything, AGI evokes images of Skynet (of Terminator lore), which was originally designed as threat-analysis software for the military but quickly came to see humanity as the enemy. While fictional, this should give us pause, especially as militaries around the world are pursuing AI-based weapons. Gato, however, does not appear to raise any of these concerns. The deep learning transformer model is described as a "generalist agent" and purports to perform 604 distinct and mostly mundane tasks with varying modalities, observations and action specifications.


The Long, Uncertain Road to Artificial General Intelligence

#artificialintelligence

Last month, DeepMind, a subsidiary of technology giant Alphabet, set Silicon Valley abuzz when it announced Gato, perhaps the most versatile artificial intelligence model in existence. Billed as a "generalist agent," Gato can perform over 600 different tasks. It can drive a robot, caption images, identify objects in pictures, and more. It is probably the most advanced AI system on the planet that isn't dedicated to a singular function. And, to some computing experts, it is evidence that the industry is on the verge of reaching a long-awaited, much-hyped milestone: Artificial General Intelligence.


Dr. Tristan Behrens on LinkedIn: What if I told you that Language Models and Multi-Agent Reinforcement Learning...

#artificialintelligence

What if I told you that language models and multi-agent reinforcement learning are now engaged and will soon be married? First and foremost, kudos to Andrés Fernández Rodríguez, who sent me the inspiring paper "Multi-Agent Reinforcement Learning is a Sequence Modeling Problem". The paper's idea is fantastic: in essence, it maps the problem of agent control to token translation. The authors use an encoder-decoder model like the one in the original "Attention is all you need" paper.
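To make that mapping concrete, here is a toy sketch of the encoder-decoder idea; it is my own illustration, not the paper's code, and the model name, dimensions, and greedy decoding loop are all invented for this example. The encoder reads every agent's observation at once, and the decoder emits one action token per agent, each conditioned on the actions already chosen:

```python
import torch
import torch.nn as nn

# Toy dimensions, all invented for illustration.
N_AGENTS, OBS_DIM, N_ACTIONS, D_MODEL = 4, 16, 6, 64

class ToyMultiAgentTransformer(nn.Module):
    """Encoder reads one embedding per agent's observation; the
    decoder emits agents' actions autoregressively, conditioning
    on the actions already chosen for preceding agents."""
    def __init__(self):
        super().__init__()
        self.obs_embed = nn.Linear(OBS_DIM, D_MODEL)
        self.act_embed = nn.Embedding(N_ACTIONS + 1, D_MODEL)  # +1: start token
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=4, num_encoder_layers=2,
            num_decoder_layers=2, batch_first=True)
        self.head = nn.Linear(D_MODEL, N_ACTIONS)

    def forward(self, obs, prev_actions):
        # obs: (batch, n_agents, obs_dim); prev_actions: (batch, t) token ids
        src = self.obs_embed(obs)           # "source sentence": all observations
        tgt = self.act_embed(prev_actions)  # "target sentence": actions so far
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.transformer(src, tgt, tgt_mask=mask)
        return self.head(out)               # logits for each next action

model = ToyMultiAgentTransformer()
obs = torch.randn(1, N_AGENTS, OBS_DIM)
actions = torch.full((1, 1), N_ACTIONS)    # begin with the start token
for _ in range(N_AGENTS):                  # decode one action per agent
    logits = model(obs, actions)[:, -1]
    actions = torch.cat([actions, logits.argmax(-1, keepdim=True)], dim=1)
print(actions[:, 1:])                      # greedy joint action, one id per agent
```

The "translation" framing is exactly that of machine translation: observations play the role of the source sentence and the joint action plays the role of the target sentence.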


Resisting the urge to be impressed, knowing what to talk about when we talk about AI

ZDNet

The barrage of new AI models released by the likes of DeepMind, Google, Meta and OpenAI is intensifying. Each of them is different in some way, and each renews the conversation about their achievements, applications, and implications. Imagen, like DALL-E 2, Gato, GPT-3 and other AI models before it, is impressive, but maybe not for the reasons you think. Here's a brief account of where we are in the AI race, and what we have learned so far. At this pace, it's getting harder even to keep track of releases, let alone analyze them. Let's start this timeline of sorts with GPT-3.


DeepMind researcher claims new AI could lead to AGI, says 'game is over'

#artificialintelligence

According to Dr. Nando de Freitas, a lead researcher at Google's DeepMind, humanity is apparently on the verge of solving artificial general intelligence (AGI) within our lifetimes. In response to an opinion piece penned by yours truly, the scientist posted a thread on Twitter that began with what's perhaps the boldest statement we've seen from anyone at DeepMind concerning its current progress toward AGI: "It's about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, INNOVATIVE DATA, on/offline, … 1/N" https://t.co/UJxSLZGc71 He continued: "Solving these scaling challenges is what will deliver AGI. Research focused on these problems, e.g. S4 for greater memory, is needed. Rich Sutton is right too, but the AI lesson ain't bitter but rather sweet."