
How far are we from artificial general intelligence?

#artificialintelligence

Since the earliest days of artificial intelligence -- and computing more generally -- theorists have assumed that intelligent machines would think in much the same way as humans. After all, we know of no greater cognitive power than the human brain. In many ways, it makes sense to try to replicate it if the goal is to create a high level of cognitive processing. However, there is a debate today over the best way of reaching true general AI. In particular, recent years' advancements in deep learning -- which is itself inspired by the human brain, though it diverges from it in some important ways -- have shown developers that there may be other paths.


Can This Man Make AI More Human?

#artificialintelligence

Like any proud father, Gary Marcus is only too happy to talk about the latest achievements of his two-year-old son. More unusually, he believes that the way his toddler learns and reasons may hold the key to making machines much more intelligent. Sitting in the boardroom of a bustling Manhattan startup incubator, Marcus, a 45-year-old professor of psychology at New York University and the founder of a new company called Geometric Intelligence, describes an example of his boy's ingenuity. From the backseat of the car, his son had seen a sign showing the number 11, and because he knew that other double-digit numbers had names like "thirty-three" and "seventy-seven," he asked his father if the number on the sign was "onety-one." "He had inferred that there is a rule about how you put your numbers together," Marcus explains with a smile.


The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence

arXiv.org Artificial Intelligence

Recent research in artificial intelligence and machine learning has largely emphasized general-purpose learning, ever-larger training sets, and ever-increasing compute. In contrast, I propose a hybrid, knowledge-driven, reasoning-based approach, centered around cognitive models, that could provide the substrate for a richer, more robust AI than is currently possible.


The case for hybrid artificial intelligence

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Deep learning, the main innovation that has renewed interest in artificial intelligence in recent years, has helped solve many critical problems in computer vision, natural language processing, and speech recognition. However, as deep learning matures and moves from the peak of hype toward the trough of disillusionment, it is becoming clear that it is missing some fundamental components. This is a reality that many of the pioneers of deep learning and its main component, artificial neural networks, have acknowledged at various AI conferences in the past year. Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, the three "godfathers of deep learning," have all spoken about the limits of neural networks.


Deep Learning, Part 1: Not as Deep as You Think

#artificialintelligence

Gary Marcus has emerged as one of deep learning's chief skeptics. In a recent interview, and a slightly less recent Medium post, he discusses his feud with deep learning pioneer Yann LeCun and some of his views on how deep learning is overhyped. I find the whole thing entertaining, but at many points LeCun and Marcus are talking past each other more than with each other. Marcus seems to me to be either unaware of or ignoring certain truths about machine learning, while LeCun seems to basically agree with Marcus' ideas in a way that Marcus finds unsatisfying. The temptation for me to brush ten years of dust off my professor hat is too much to resist.