Google's DeepMind says it is close to achieving 'human-level' artificial intelligence

Daily Mail - Science & tech

DeepMind, a British company owned by Google, may be on the verge of achieving human-level artificial intelligence (AI). Nando de Freitas, a research scientist at DeepMind and machine learning professor at Oxford University, has said 'the game is over' regarding solving the hardest challenges in the race to achieve artificial general intelligence (AGI). AGI refers to a machine or program that has the ability to understand or learn any intellectual task that a human being can, and do so without training. According to De Freitas, the quest for scientists is now scaling up AI programs, such as with more data and computing power, to create an AGI. Earlier this week, DeepMind unveiled a new AI 'agent' called Gato that can complete 604 different tasks 'across a wide range of environments'. Gato uses a single neural network – a computing system with interconnected nodes that works like nerve cells in the human brain.


Google's Supermodel: DeepMind Perceiver is a step on the road to an AI machine that could process anything and everything

ZDNet

Arguably one of the premier events that has brought AI to popular attention in recent years was the invention of the Transformer by Ashish Vaswani and colleagues at Google in 2017. The Transformer led to a host of language programs, such as Google's BERT and OpenAI's GPT-3, that have been able to produce surprisingly human-seeming sentences, giving the impression that machines can write like a person. Now, scientists at DeepMind in the U.K., which is owned by Google, want to take the benefits of the Transformer beyond text, letting it revolutionize other material including images, sounds, video, and spatial data of the kind a car records with LiDAR. The Perceiver, unveiled this week by DeepMind in a paper posted on arXiv, adapts the Transformer with some tweaks so it can consume all those types of input and perform the various tasks, such as image recognition, for which separate kinds of neural networks are usually developed. The DeepMind work appears to be a way station on the road to an envisioned super-model of deep learning: a neural network that could perform a plethora of tasks and would learn faster and with less data, which Google's head of AI, Jeff Dean, has described as a "grand challenge" for the discipline.


Meta's 'data2vec' is the next step toward One Neural Network to Rule Them All

ZDNet

The race is on to create one neural network that can process multiple kinds of data, the notion of a more-general artificial intelligence that doesn't discriminate about types of data but instead can crunch them all within the same basic structure. The genre of multi-modality, as these neural networks are called, is seeing a flurry of activity in which different data, such as image, text, and speech audio, are passed through the same algorithm to produce a score on different tests such as image recognition, natural language understanding, or speech detection. And these ambidextrous networks are racking up scores on benchmark tests of AI. The latest achievement is what's called "data2vec," developed by researchers at the AI division of Meta, parent of Facebook, Instagram, and WhatsApp. The point, as Meta's scientists, Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli, write, is to approach something more like the general learning ability that the human mind seems to encompass.
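The core idea of multi-modality described above, that images, text, and audio are each mapped into a common sequence-of-vectors form and then processed by one shared network, can be sketched in a few lines. This is a toy illustration under stated assumptions, not Meta's actual data2vec code: the embedding width, the per-modality tokenizers, and the single linear-plus-ReLU "encoder" standing in for a Transformer stack are all hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # shared embedding width (hypothetical)

# Modality-specific "tokenizers": each turns raw input into (n_tokens, D).
def embed_image(pixels):
    patches = pixels.reshape(-1, 4)       # flatten into 4-pixel patches
    W_img = rng.normal(size=(4, D))
    return patches @ W_img

def embed_text(token_ids, vocab=100):
    table = rng.normal(size=(vocab, D))   # toy embedding lookup table
    return table[token_ids]

def embed_audio(samples, frame=8):
    frames = samples.reshape(-1, frame)   # chop waveform into frames
    W_aud = rng.normal(size=(frame, D))
    return frames @ W_aud

# ONE shared encoder applied to every modality (a single linear+ReLU
# layer standing in for a Transformer stack).
W_shared = rng.normal(size=(D, D))
def shared_encoder(tokens):
    h = np.maximum(tokens @ W_shared, 0.0)
    return h.mean(axis=0)                 # pooled representation

img_repr = shared_encoder(embed_image(rng.normal(size=(8, 8))))
txt_repr = shared_encoder(embed_text(np.array([3, 14, 15, 9])))
aud_repr = shared_encoder(embed_audio(rng.normal(size=(64,))))

# All three modalities end up as same-shape vectors produced by the
# same network weights -- the "same basic structure" in the article.
assert img_repr.shape == txt_repr.shape == aud_repr.shape == (D,)
```

The point the sketch makes is architectural: once every modality is reduced to tokens of a common width, the downstream network need not know or care which modality produced them.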


DeepMind's new AI system can perform over 600 tasks – TechCrunch

#artificialintelligence

The ultimate achievement to some in the AI industry is creating a system with artificial general intelligence (AGI), or the ability to understand and learn any task that a human can. Long relegated to the domain of science fiction, it's been suggested that AGI would bring about systems with the ability to reason, plan, learn, represent knowledge, and communicate in natural language. Not every expert is convinced that AGI is a realistic goal -- or even possible. Gato is what DeepMind describes as a "general-purpose" system, one that can be taught to perform many different types of tasks. Researchers at DeepMind trained Gato to complete 604 tasks, to be exact, including captioning images, engaging in dialogue, stacking blocks with a real robot arm, and playing Atari games. Jack Hessel, a research scientist at the Allen Institute for AI, points out that a single AI system that can solve many tasks isn't new.

