

Google DeepMind used a large language model to solve an unsolvable math problem

MIT Technology Review

FunSearch (so called because it searches for mathematical functions, not because it's fun) continues a streak of discoveries in fundamental math and computer science that DeepMind has made using AI. First AlphaTensor found a way to speed up a calculation at the heart of many different kinds of code, beating a 50-year record. Then AlphaDev found ways to make key algorithms used trillions of times a day run faster. Yet those tools did not use large language models. Built on top of DeepMind's game-playing AI AlphaZero, both solved math problems by treating them as if they were puzzles in Go or chess.


Better Algorithms through Faster Math

Communications of the ACM

Developing faster algorithms is an important but elusive goal for data scientists. The ability to accelerate complex computing tasks and reduce latency has far-reaching ramifications in areas such as natural language processing, video streaming, autonomous robotics, gaming, and extended reality. Yet for all the hype surrounding computer algorithms and the increasingly sophisticated ways they operate, a basic fact stands out: these algorithms are typically built atop matrix multiplication, a basic operation in linear algebra. The underlying mathematical framework has not changed a great deal since the inception of computing--and finding more efficient formulas has proved difficult. It is an issue attracting growing attention--particularly as machine learning (ML), deep learning (DL), artificial intelligence (AI), and machine automation advance into the mainstream.


First Open Source Implementation of DeepMind's AlphaTensor - KDnuggets

#artificialintelligence

Matrix multiplication is a fundamental operation used in many systems, from neural networks to scientific computing routines. Finding efficient and provably correct algorithms for matrix multiplication can have a huge impact on making computation faster and more efficient, but is a very challenging task. The space of possible algorithms is enormous, and traditional methods for discovering algorithms, such as human-designed heuristics or combinatorial search, are often suboptimal. DeepMind's recently proposed AI-based solution for automated search goes far beyond human intuition. The solution consists of a deep reinforcement learning agent called AlphaTensor, built on top of AlphaZero. This agent is trained to play a single-player game, TensorGame, where the goal is to discover computationally efficient algorithms for matrix multiplication. AlphaTensor is particularly good at handling large matrices by decomposing large matrix multiplications into smaller multiplications. Moreover, AlphaTensor can be used to achieve state-of-the-art performance for matrix multiplication once fine-tuned on a specific hardware device. AlphaTensor has great potential for accelerating deep learning computing.
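The flavor of decomposition AlphaTensor searches for can be illustrated with Strassen's classic 1969 scheme, which multiplies two 2×2 matrices using 7 scalar multiplications instead of the naive 8. A minimal Python sketch of that scheme (illustrative only, not DeepMind's code):

```python
# Strassen's 1969 scheme: multiply two 2x2 matrices with 7 scalar
# multiplications instead of the naive 8 -- the kind of low-rank
# decomposition AlphaTensor searches for automatically.
def strassen_2x2(A, B):
    (a, b), (c, d) = A          # unpack entries of A
    (e, f), (g, h) = B          # unpack entries of B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the 7 products (additions are cheap; only
    # multiplications count when the scheme is applied recursively).
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Applied recursively to block matrices, saving one multiplication per 2×2 step is what drives the asymptotic speedup; AlphaTensor found schemes that beat Strassen's for some sizes.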


AI Reveals New Possibilities in Matrix Multiplication

#artificialintelligence

Even something as abstract as multiplying matrices (two-dimensional tables of numbers) can feel like a game when you try to find the most efficient way to do it. It's a little like trying to solve a Rubik's Cube in as few moves as possible -- challenging, but alluring. Except that for a Rubik's Cube, the number of possible moves at each step is 18; for matrix multiplication, even in relatively simple cases, every step can present more than 10¹² options. Over the past 50 years, researchers have approached this problem in many ways, all based on computer searches aided by human intuition. Last month, a team at the artificial intelligence company DeepMind showed how to tackle the problem from a new direction, reporting in a paper in Nature that they'd successfully trained a neural network to discover new fast algorithms for matrix multiplication.
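To see what a faster scheme buys, one can count the scalar multiplications used by the naive cubic method against recursive Strassen, whose cost satisfies T(n) = 7·T(n/2). A back-of-the-envelope sketch in Python (for n a power of two; real implementations fall back to the naive method below a cutoff size):

```python
# Multiplication counts for n x n matrix multiplication (n a power of two):
# naive cubic algorithm vs. recursive Strassen (T(n) = 7*T(n/2), T(1) = 1).
def naive_muls(n):
    return n ** 3

def strassen_muls(n):
    return 1 if n == 1 else 7 * strassen_muls(n // 2)

for n in (2, 64, 1024):
    print(n, naive_muls(n), strassen_muls(n))
```

The recursion yields n^log2(7) ≈ n^2.807 multiplications, so the gap over n^3 widens with matrix size; it is this kind of operation count that AlphaTensor's discovered algorithms improve further for certain matrix sizes.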


DeepMind AlphaTensor: The delicate balance between human and artificial intelligence

#artificialintelligence

This article is part of our coverage of the latest in AI research. DeepMind has made another impressive artificial intelligence announcement with AlphaTensor, a deep reinforcement learning system that discovers algorithms to make matrix multiplications much more efficient. Matrix multiplication is at the heart of many computational tasks, including neural networks, 3D graphics, and data compression. Therefore, there are many immediate applications for an AI system that can improve the efficiency of matrix multiplication. To create AlphaTensor, scientists at DeepMind used AlphaZero, the deep learning system that previously mastered board games like go, chess, and shogi.



Perceptron: AI saving whales, steadying gaits and banishing traffic

#artificialintelligence

Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to collect some of the most relevant recent discoveries and papers -- particularly in, but not limited to, artificial intelligence -- and explain why they matter. Over the past few weeks, researchers at MIT have detailed their work on a system to track the progression of Parkinson's patients by continuously monitoring their gait speed. Elsewhere, Whale Safe, a project spearheaded by the Benioff Ocean Science Laboratory and partners, launched buoys equipped with AI-powered sensors in an experiment to prevent ships from striking whales. Other aspects of ecology and academics also saw advances powered by machine learning.


What Happened in Reinforcement Learning in 2022

#artificialintelligence

Just like how we learn from our environment and our actions determine whether we are rewarded or punished, so do reinforcement learning agents, whose ultimate aim is to maximise rewards. This article brings the top 8 reinforcement learning innovations that shaped AI across several industries in 2022. Alphabet's DeepMind collaborated with the University of Venice, the University of Oxford and the Athens University of Economics and Business to build a deep neural network called 'Ithaca', which can restore missing text from ancient texts. In a paper published in Nature, DeepMind stated that Ithaca was trained using natural language processing (NLP) not only to recover lost ancient text that has been damaged over time but also to identify the original location of the text and establish the date when it was made. With DeepMind's latest release, AlphaTensor, an AI system that treats algorithm discovery as a single-player 3D board game, researchers shed light on a 50-year-old fundamental mathematics question: finding the fastest way to multiply two matrices.


DeepMind's AlphaTensor: The AI That Is Reinventing Math

#artificialintelligence

Originally published on Towards AI, the world's leading AI and technology news and media company. Without realizing it, almost any of our activities, in one way or another, involves matrix multiplications.


How DeepMind's AlphaTensor AI Devised a Faster Matrix Multiplication

#artificialintelligence

After developing an artificial intelligence that can achieve superhuman mastery of games like chess and go, in addition to another AI that can predict how proteins fold themselves in three-dimensional space, the researchers at DeepMind have done it again -- this time using a deep learning model to efficiently solve a fundamental mathematics problem, beating a 50-year-old record to boot. In a blog post from earlier this month, the DeepMind team introduces AlphaTensor, an AI system designed to discover new and more efficient algorithms for crucial mathematical operations -- in this case, matrix multiplication. Whether it is used to process or compress images and video, recognize spoken commands, or run simulations to predict the weather, matrix multiplication underpins much of modern computing. So it's little wonder that experts and companies all over the world are constantly looking for more efficient algorithms for the mathematical operations behind such tasks. Matrix multiplication is one of the simplest operations in algebra: individual numbers arranged in grids -- or matrices -- are multiplied together and then added in a specific way to generate a new matrix.
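That "multiplied together and then added in a specific way" can be made concrete: entry (i, j) of the product is the dot product of row i of the first matrix with column j of the second. A minimal plain-Python definition:

```python
# Textbook matrix multiplication: C[i][j] is the dot product of
# row i of A with column j of B. For n x n inputs this performs
# n**3 scalar multiplications -- the count faster algorithms reduce.
def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    assert all(len(row) == inner for row in A), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]
```

For example, `matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])` yields `[[19, 22], [43, 50]]`; it is the number of scalar multiplications inside this triple loop that algorithms like Strassen's, and those found by AlphaTensor, manage to shrink.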