
Collaborating Authors: raghavachary


What is the "forward-forward" algorithm, Geoffrey Hinton's new AI technique? – TechTalks

#artificialintelligence

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. In the 1980s, Geoffrey Hinton was one of the scientists who invented backpropagation, the algorithm that enables the training of deep neural networks. Backpropagation was key to the success of deep learning and its widespread use today. But Hinton, who is one of the most celebrated artificial intelligence scientists of our time, thinks it is time to move beyond backpropagation and look for other, more efficient ways to train neural networks. And like many other scientists, he draws inspiration from the human brain.
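As a rough illustration of what an alternative to backpropagation can look like, below is a minimal sketch of the layer-local "forward-forward" idea from Hinton's 2022 paper: instead of backpropagating a global error signal, each layer optimizes its own local objective, a "goodness" score (the sum of squared activations), pushed high for real ("positive") inputs and low for fabricated ("negative") inputs. The layer class, hyperparameters, and toy data here are illustrative assumptions, not taken from the article.

```python
# Hedged sketch of a single forward-forward layer: all names, learning
# rates, and data below are illustrative, not from the source article.
import numpy as np

rng = np.random.default_rng(0)

def goodness(h):
    # "Goodness" of a layer's activity: sum of squared activations.
    return np.sum(h * h, axis=1)

class FFLayer:
    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))
        self.b = np.zeros(n_out)
        self.lr = lr
        self.threshold = threshold

    def forward(self, x):
        return np.maximum(0.0, x @ self.W + self.b)  # ReLU activations

    def train_step(self, x_pos, x_neg):
        # One local update: raise goodness on positive data, lower it on
        # negative data, via a logistic loss on (goodness - threshold).
        # No error signal from any other layer is needed.
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            h = self.forward(x)
            g = goodness(h)
            # dL/dg for L = log(1 + exp(-sign * (g - threshold)))
            p = 1.0 / (1.0 + np.exp(sign * (g - self.threshold)))
            dg = -sign * p                 # shape: (batch,)
            dh = 2.0 * h * dg[:, None]     # dg/dh = 2h
            dh *= (h > 0)                  # ReLU gate
            self.W -= self.lr * x.T @ dh / len(x)
            self.b -= self.lr * dh.mean(axis=0)

# Toy data: "positive" samples cluster around 1, "negative" around 0.
x_pos = rng.normal(1.0, 0.2, (128, 8))
x_neg = rng.normal(0.0, 0.2, (128, 8))

layer = FFLayer(8, 16)
for _ in range(200):
    layer.train_step(x_pos, x_neg)

g_pos = goodness(layer.forward(x_pos)).mean()
g_neg = goodness(layer.forward(x_neg)).mean()
# After training, positive inputs should score higher goodness.
```

In a deeper network, each such layer would be trained the same way on the (normalized) output of the layer below, which is what removes the need for a backward pass.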


To achieve AGI, we need new perspectives on intelligence

#artificialintelligence

This article is part of "the philosophy of artificial intelligence," a series of posts that explore the ethical, moral, and social implications of AI today and in the future. For decades, scientists have tried to create computational imitations of the brain. And for decades, the holy grail of artificial general intelligence, computers that can think and act like humans, has continued to elude scientists and researchers. Why do we continue to replicate some aspects of intelligence but fail to generate systems that can generalize their skills like humans and animals? One computer scientist who has been working on AI for three decades believes that to get past the hurdles of narrow AI, we must look at intelligence from a different and more fundamental perspective.


To create AGI, we need a new theory of intelligence

#artificialintelligence

This article is part of "the philosophy of artificial intelligence," a series of posts that explore the ethical, moral, and social implications of AI today and in the future. For decades, scientists have tried to create computational imitations of the brain. And for decades, the holy grail of artificial general intelligence, computers that can think and act like humans, has continued to elude scientists and researchers. Why do we continue to replicate some aspects of intelligence but fail to generate systems that can generalize their skills like humans and animals? One computer scientist who has been working on AI for three decades believes that to get past the hurdles of narrow AI, we must look at intelligence from a different and more fundamental perspective. In a paper presented at Brain-Inspired Cognitive Architectures for Artificial Intelligence (BICA*AI), Sathyanaraya Raghavachary, Associate Professor of Computer Science at the University of Southern California, discusses "considered response," a theory that can generalize to all forms of intelligent life that have evolved and thrived on our planet.