A Popperian Falsification of Artificial Intelligence - Lighthill Defended

arXiv.org Artificial Intelligence

The area of computation called artificial intelligence (AI) is falsified by revisiting the 1972 falsification of AI by the British applied mathematician James Lighthill. It is explained how Lighthill's arguments continue to apply to current AI. It is argued that AI should use the Popperian scientific method, in which it is the duty of every scientist to attempt to falsify theories and, if they are falsified, to replace or modify them. The paper describes the Popperian method in detail and discusses Paul Nurse's application of the method to cell biology, which also involves questions of mechanism and behavior. Arguments used by Lighthill in his original 1972 report that falsified AI are discussed. The Lighthill arguments are then shown to apply to current AI. The argument uses recent scholarship to explain Lighthill's assumptions and to show how the arguments based on those assumptions continue to falsify modern AI. An important focus of the argument involves Hilbert's philosophical programme, which defined knowledge and truth as provable formal sentences. Current AI takes the Hilbert programme as dogma beyond criticism, while Lighthill, as a mid-20th-century applied mathematician, had abandoned it. The paper uses recent scholarship to explain John von Neumann's criticism of AI, which I claim Lighthill assumed. The paper discusses computer chess programs to show that Lighthill's combinatorial explosion still applies to AI but not to humans. An argument is given showing that Turing Machines (TM) are not the correct description of computation. The paper concludes by advocating the study of computation as Peter Naur's Dataology.
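The combinatorial-explosion point can be made concrete with a small back-of-the-envelope calculation. The following is a minimal sketch, not taken from the paper: it assumes a uniform branching factor (the commonly cited average of roughly 35 legal moves per chess position) and counts the positions in a complete game tree. The function name game_tree_nodes, the branching factor of 35, and the chosen depths are illustrative assumptions introduced here.

```python
# Minimal sketch (not from the paper): models the combinatorial explosion
# in chess with a uniform-branching-factor game tree.
# The branching factor of 35 and the depths below are illustrative assumptions.

def game_tree_nodes(branching_factor: int, depth: int) -> int:
    """Total number of positions in a complete game tree of the given depth (plies)."""
    return sum(branching_factor ** d for d in range(depth + 1))

if __name__ == "__main__":
    BRANCHING = 35  # commonly cited average number of legal moves per chess position
    for plies in (4, 8, 12):
        print(f"{plies:2d} plies -> ~{game_tree_nodes(BRANCHING, plies):.2e} positions")
```

Under these assumptions the count reaches roughly 10^18 positions at only 12 plies, which is the scale of growth behind Lighthill's objection to exhaustive search; human play does not rely on such enumeration.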


Artificial Intelligence: Structures and Strategies for Complex Problem Solving

AITopics Original Links

Many and long were the conversations between Lord Byron and Shelley to which I was a devout and silent listener. During one of these, various philosophical doctrines were discussed, and among others the nature of the principle of life, and whether there was any probability of its ever being discovered and communicated. They talked of the experiments of Dr. Darwin (I speak not of what the doctor really did or said that he did, but, as more to my purpose, of what was then spoken of as having been done by him), who preserved a piece of vermicelli in a glass case till by some extraordinary means it began to move with a voluntary motion. Not thus, after all, would life be given. Perhaps a corpse would be reanimated; galvanism had given token of such things: perhaps the component parts of a creature might be manufactured, brought together, and endued with vital warmth (Butler 1998).



How close are we to creating artificial intelligence? – David Deutsch Aeon Essays

#artificialintelligence

It is uncontroversial that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos. It is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists. Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances. But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially -- the field of 'artificial general intelligence' or AGI -- has made no progress whatever during the entire six decades of its existence. Why? Because, as an unknown sage once remarked, 'it ain't what we don't know that causes trouble, it's what we know for sure that just ain't so' (and if you know that sage was Mark Twain, then what you know ain't so either). I cannot think of any other significant field of knowledge in which the prevailing wisdom, not only in society at large but also among experts, is so beset with entrenched, overlapping, fundamental errors. Yet it has also been one of the most self-confident fields in prophesying that it will soon achieve the ultimate breakthrough. Despite this long record of failure, AGI must be possible. And that is because of a deep property of the laws of physics, namely the universality of computation.


Computer sciences and synthesis: retrospective and perspective

arXiv.org Artificial Intelligence

The problem of synthesis in the computer sciences, including cybernetics, artificial intelligence, and systems analysis, is analyzed. The main methods for addressing this problem are discussed. Ways of searching for a universal method of creating a universal synthetic science are presented. Polymetric analysis is given as an example of such a universal method. The prospects for further development of this research, including the application of the polymetric method to the main problems of the computer sciences, are also analyzed.