Abductive inference is a major blind spot for AI

#artificialintelligence

Recent advances in deep learning have rekindled interest in the imminence of machines that can think and act like humans, or artificial general intelligence. By following the path of building bigger and better neural networks, the thinking goes, we will be able to get closer and closer to creating a digital version of the human brain. But this is a myth, argues computer scientist Erik Larson, and all evidence suggests that human and machine intelligence are radically different. Larson's new book, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, discusses how widely publicized misconceptions about intelligence and inference have led AI research down narrow paths that are limiting innovation and scientific discovery.


Can computers think like humans? Reviewing Erik Larson's "The Myth of Artificial Intelligence"

#artificialintelligence

In his recent book The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, AI researcher Erik J. Larson defends the claim that, as things stand today, there is no plausible approach in AI research that can lead to generalized, human-like intelligence. It's important to understand what the author is claiming, and what he's not. He's not claiming that computers can never think like humans, as some philosophers of mind have argued. Rather, his position is that if there is indeed a way to make computers think like humans, we haven't the foggiest idea what it is; our current approaches, however promising they might seem, are all dead ends. He contrasts this with the prevailing optimism about AI: the perception that current approaches are on the path to generalized intelligence and that their problems are, at least in theory, solvable. Seen this way, human-like computers appear to be just a matter of time. Larson, on the other hand, argues that even the fundamental theoretical principles of current AI approaches are non-starters. All of the current approaches in AI, or at least the most promising ones, are based on a single model of thinking: inductive inference.
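
To make the distinction concrete, here is a minimal Python sketch of the difference between inductive inference (generalizing a rule from repeated observations, the model underlying current machine learning) and abductive inference (hypothesizing the most plausible explanation for an observation). All rules, names, and numbers below are illustrative assumptions for this sketch, not examples taken from Larson's book.

    from collections import Counter

    # --- Inductive inference: generalize a rule from observed examples. ---
    # Every observed (cause, effect) pair is the same, so we induce the
    # general rule "rain -> wet_lawn" from the regularity in the data.
    observations = [("rain", "wet_lawn"), ("rain", "wet_lawn"), ("rain", "wet_lawn")]
    induced_rule = Counter(observations).most_common(1)[0][0]
    print(induced_rule)  # ('rain', 'wet_lawn')

    # --- Abductive inference: reason backward from an observation to the
    # most plausible explanation, given background knowledge. ---
    rules = {
        "rain": "wet_lawn",       # rain causes a wet lawn
        "sprinkler": "wet_lawn",  # so does a sprinkler
    }
    priors = {"rain": 0.3, "sprinkler": 0.1}  # assumed background plausibilities

    def abduce(observation):
        """Return the most plausible hypothesis that would explain the observation."""
        candidates = [cause for cause, effect in rules.items() if effect == observation]
        return max(candidates, key=lambda c: priors[c])

    print(abduce("wet_lawn"))  # 'rain': chosen as the best available explanation

The point of the contrast: induction only projects observed regularities forward, while abduction selects among competing unobserved explanations, which is the capability Larson argues current AI systems lack.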