Building artificial intelligence: Reward is not enough


In a recent paper, the DeepMind team (Silver et al., 2021) argues that reward is enough for all kinds of intelligence. Specifically, they claim that "maximizing reward is enough to drive behavior that exhibits most if not all attributes of intelligence," and that simple rewards are all that agents in sufficiently rich environments need to develop the multi-attribute intelligence required for artificial general intelligence. This sounds like a bold claim, but in fact it is so vague as to be almost meaningless. The authors support their thesis not by offering specific evidence but by repeatedly asserting that reward is enough because the observed solutions to the problems are consistent with the problem having been solved. The Silver et al. paper represents at least the third time that a serious proposal has been offered to demonstrate that generic learning mechanisms are sufficient to account for all learning.
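To make the claim under discussion concrete, "maximizing reward" in the reinforcement-learning sense can be illustrated with a minimal sketch. The code below is not from the Silver et al. paper; it is a standard epsilon-greedy agent on a hypothetical multi-armed bandit, shown only to fix what "reward maximization" means. The arm means, step count, and epsilon value are illustrative assumptions.

```python
import random

def run_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent that learns to maximize reward on a
    multi-armed bandit with Gaussian rewards (illustrative sketch)."""
    rng = random.Random(seed)
    n = len(true_means)
    estimates = [0.0] * n  # running sample-average reward per arm
    counts = [0] * n       # pulls per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore a random arm
        else:
            # exploit: pick the arm with the highest estimated reward
            arm = max(range(n), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # incremental sample-average update
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

# Hypothetical 3-armed bandit: the agent, driven only by reward,
# comes to favor the arm with the highest true mean.
estimates, counts = run_bandit([0.1, 0.5, 0.9])
```

The point of contention is not whether such reward-driven learning works on narrow tasks like this one, but whether scaling the same principle suffices for general intelligence.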
