Gato


Is an AI Autumn Around the Corner? - DataScienceCentral.com

#artificialintelligence

It is hard, looking at the current technological landscape, to believe that artificial intelligence may actually be facing a reckoning that could cause investment in the field to dry up. Yet increasingly, the sentiment expressed by those who work most closely with AI is that we may be heading for a wall. There are several solid reasons for this concern. Alan Morrison recently noted that what is nowadays referred to as Artificial Intelligence can be divided into roughly five buckets. The first, from within the realm of Data Science, is the use of stochastic (probability-based) algorithms in conjunction with large data sets to perform what amounts to predictive analytics.


The hype around DeepMind's new AI model misses what's actually cool about it

#artificialintelligence

One of DeepMind's top researchers and a coauthor of the Gato paper, Nando de Freitas, couldn't contain his excitement. "The game is over!" he tweeted, suggesting that there is now a clear path from Gato to artificial general intelligence, or AGI, a vague concept of human- or superhuman-level AI. The way to build AGI, he claimed, is mostly a question of scale: making models such as Gato bigger and better. Unsurprisingly, de Freitas's announcement triggered breathless press coverage that DeepMind is "on the verge" of human-level artificial intelligence. This is not the first time hype has outstripped reality.


The Download: DeepMind's AI shortcomings, and China's social media translation problem

MIT Technology Review

Earlier this month, DeepMind presented a new "generalist" AI model called Gato. The model can play Atari video games, caption images, chat, and stack blocks with a real robot arm, the Alphabet-owned AI lab announced. All in all, Gato can do hundreds of different tasks. But while Gato is undeniably fascinating, in the week since its release some researchers have got a bit carried away. One of DeepMind's top researchers and a coauthor of the Gato paper, Nando de Freitas, couldn't contain his excitement.


General AI through scaling: Meta's AI chief Yann LeCun speaks out

#artificialintelligence

Does the breakthrough to general AI need, above all else, more data and computing power? Yann LeCun, Chief AI Scientist at Meta, comments on the recent debate about scaling sparked by DeepMind's Gato. The recent successes of large AI models such as OpenAI's DALL-E 2, Google's PaLM and DeepMind's Flamingo have sparked a debate about their significance for progress towards general AI. DeepMind's Gato has recently given a particular boost to the debate, which has played out publicly, especially on Twitter. Gato is a Transformer model trained on numerous data modalities, including images, text, proprioception and joint torques.


GATO: Google's Generalized AI

#artificialintelligence

Note: The entire model is trained in a purely supervised fashion, as opposed to any form of reinforcement learning. The first question you may ask is how the model takes different types of inputs such as tabular data, images, audio, video, etc. The answer is that everything is first converted to the same format: tokens. After converting the data into tokens, they use a canonical sequence ordering. The goal is to put everything in the same format, with a particular ordering depending on the task.
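To make the shared-token idea concrete, here is a minimal, hypothetical Python sketch. The vocabulary size, bin count, separator token, and helper names (tokenize_continuous, tokenize_timestep) are illustrative assumptions rather than DeepMind's actual implementation; the point is only that text, discrete and continuous inputs all end up as integers in one token space, laid out in a fixed per-timestep order.

```python
# Illustrative sketch (not DeepMind's tokenizer): map text and continuous
# inputs into one shared integer-token space, then lay the tokens out in a
# fixed per-timestep order (observation tokens, a separator, action tokens).

import numpy as np

TEXT_VOCAB = 32_000                # assumed subword vocabulary size
NUM_BINS = 1_024                   # assumed number of bins for continuous values
SEPARATOR = TEXT_VOCAB + NUM_BINS  # assumed special token marking "act now"

def tokenize_continuous(values, low=-1.0, high=1.0):
    """Quantise continuous values (e.g. joint angles) into NUM_BINS bins and
    shift them past the text vocabulary so token ids never collide."""
    clipped = np.clip(values, low, high)
    bins = ((clipped - low) / (high - low) * (NUM_BINS - 1)).astype(int)
    return (TEXT_VOCAB + bins).tolist()

def tokenize_timestep(text_tokens, proprio, action):
    """Canonical per-timestep ordering: observations first, then a separator,
    then the action tokens the model is trained to predict."""
    return (
        list(text_tokens)               # already-tokenised text observation
        + tokenize_continuous(proprio)  # continuous observation
        + [SEPARATOR]
        + tokenize_continuous(action)   # continuous action target
    )

# Example: one timestep with a short text prompt, 3 joint readings, 2 torques.
sequence = tokenize_timestep(
    text_tokens=[17, 942, 7],
    proprio=[0.12, -0.53, 0.99],
    action=[0.0, -0.25],
)
print(sequence)
```

Whatever the exact scheme used in practice, the downstream Transformer only ever sees one flat integer sequence, which is what lets a single model cover images, text and robot control.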


Is DeepMind's Gato AI really a human-level intelligence breakthrough?

New Scientist

DeepMind's Gato may or may not be a major breakthrough for AI. DeepMind has released what it calls a "generalist" AI called Gato, which can play Atari games, accurately caption images, chat naturally with a human and stack coloured blocks with a robot arm, among 600 other tasks. But is Gato truly intelligent – having artificial general intelligence – or is it just an AI model with a few extra tricks up its sleeve? What is artificial general intelligence (AGI)? Outside science fiction, AI is limited to niche tasks. It has seen plenty of success recently in solving a huge range of problems, from writing software to protein folding and even creating beer recipes, but individual AI models have limited, specific abilities.


Google's DeepMind says it is close to achieving 'human-level' artificial intelligence

Daily Mail - Science & tech

DeepMind, a British company owned by Google, may be on the verge of achieving human-level artificial intelligence (AI). Nando de Freitas, a research scientist at DeepMind and machine learning professor at Oxford University, has said 'the game is over' in regards to solving the hardest challenges in the race to achieve artificial general intelligence (AGI). AGI refers to a machine or program that has the ability to understand or learn any intellectual task that a human being can, and do so without training. According to de Freitas, the quest for scientists is now scaling up AI programs, such as with more data and computing power, to create an AGI. Earlier this week, DeepMind unveiled a new AI 'agent' called Gato that can complete 604 different tasks 'across a wide range of environments'. Gato uses a single neural network – a computing system with interconnected nodes that works like nerve cells in the human brain.


DeepMind's 'Gato' is mediocre, so why did they build it?

ZDNet

Tiernan Ray has been covering technology and business for 27 years. He was most recently technology editor for Barron's, where he wrote daily market coverage for the Tech Trader blog and the weekly print column of that name. DeepMind's "Gato" neural network excels at numerous tasks including controlling robotic arms that stack blocks, playing Atari 2600 games, and captioning images. The world is used to seeing headlines about the latest breakthrough by deep learning forms of artificial intelligence. The latest achievement of the DeepMind division of Google, however, might be summarized as, "One AI program that does a so-so job at a lot of things."


DeepMind's new AI system can perform over 600 tasks – TechCrunch

#artificialintelligence

The ultimate achievement to some in the AI industry is creating a system with artificial general intelligence (AGI), or the ability to understand and learn any task that a human can. Long relegated to the domain of science fiction, AGI, it has been suggested, would bring about systems with the ability to reason, plan, learn, represent knowledge, and communicate in natural language. Not every expert is convinced that AGI is a realistic goal -- or even possible. Gato is what DeepMind describes as a "general-purpose" system, one that can be taught to perform many different types of tasks. Researchers at DeepMind trained Gato to complete 604 of them, to be exact, including captioning images, engaging in dialogue, stacking blocks with a real robot arm, and playing Atari games. Jack Hessel, a research scientist at the Allen Institute for AI, points out that a single AI system that can solve many tasks isn't new.


A Generalist Agent

#artificialintelligence

Inspired by progress in large-scale language modelling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. During the training phase of Gato, data from different tasks and modalities are serialised into a flat sequence of tokens, batched, and processed by a transformer neural network similar to a large language model. The loss is masked so that Gato only predicts action and text targets.
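As a rough illustration of that masked objective (assumed shapes and names, not DeepMind's code), the sketch below computes a standard next-token cross-entropy in which only positions whose targets are text or action tokens contribute; observation tokens are ignored by the loss.

```python
# Minimal sketch of a masked next-token loss: cross-entropy over the
# vocabulary, zeroed out wherever the target is not a text/action token.

import numpy as np

def masked_next_token_loss(logits, tokens, target_mask):
    """logits: (T, V) outputs, position t predicts token t+1.
    tokens: (T+1,) integer token sequence.
    target_mask: (T,) 1.0 where tokens[t+1] is a text/action target, else 0.0."""
    # Log-softmax over the vocabulary.
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    # Log-probability assigned to each true next token.
    targets = tokens[1:]
    token_log_probs = log_probs[np.arange(len(targets)), targets]
    # Observation positions are masked out; only text/action tokens count.
    return -(token_log_probs * target_mask).sum() / target_mask.sum()

# Toy example: 4 prediction positions over a vocabulary of 8 tokens,
# where only the last two positions hold action targets.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 8))
tokens = np.array([3, 1, 7, 2, 5])
mask = np.array([0.0, 0.0, 1.0, 1.0])
print(masked_next_token_loss(logits, tokens, mask))
```

The masking is what lets one sequence mix observations and actions: the model conditions on everything in context but is only trained to produce the tokens it will actually have to emit at deployment time.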