The 30-Year Cycle In The AI Debate

arXiv.org Artificial Intelligence

The recent practical successes [26] of Artificial Intelligence (AI) programs of the Reinforcement Learning and Deep Learning varieties in game playing, natural language processing and image classification are now calling attention to the envisioned pitfalls of their hypothetical extension to wider domains of human behavior. Several voices from industry and academia now routinely raise concerns over the advances [49] of the often heavily media-covered representatives of this new generation of programs, such as Deep Blue, Watson, Google Translate, AlphaGo and AlphaZero. Most of these cutting-edge algorithms fall under the class of supervised learning, a branch of the still-evolving taxonomy of Machine Learning techniques in AI research. In most cases the implementation of choice is artificial neural network software, the workhorse of the Connectionism school of thought in both AI and Cognitive Psychology. Confronting the current wave of connectionist architectures, critics usually raise issues of interpretability (Can the remarkable predictive capabilities be trusted in real-life tasks? Are these capabilities transferable to unfamiliar situations or to different tasks altogether? How informative are the results about the real world, or about human cognition?).
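To make the terms concrete, here is a minimal sketch (mine, not drawn from the cited works) of the supervised-learning setup the abstract describes: a small artificial neural network whose weights are fitted to labeled examples by gradient descent. The task, architecture and hyperparameters are illustrative choices only.

    # Minimal supervised-learning sketch (illustrative, not from the cited
    # papers): a tiny neural network fitted to labeled XOR examples by
    # full-batch gradient descent. All sizes and rates are arbitrary choices.
    import numpy as np

    rng = np.random.default_rng(0)

    # Labeled training data: the targets y are the "supervision".
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # The network's knowledge lives entirely in these weights and biases.
    W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
    W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)            # hidden-layer activations
        p = sigmoid(h @ W2 + b2)            # predicted outputs
        dp = (p - y) * p * (1 - p)          # gradient of squared error
        dh = (dp @ W2.T) * h * (1 - h)      # backpropagated to hidden layer
        W2 -= lr * (h.T @ dp); b2 -= lr * dp.sum(axis=0)
        W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(axis=0)

    print(np.round(p, 2))   # typically close to [[0], [1], [1], [0]]

Note the interpretability worry in miniature: after training, the fitted W1 and W2 solve the task, but nothing in the raw numbers explains how.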


Connectionism and Information Processing Abstractions

AI Magazine

Connectionism challenges a basic assumption of much of AI, that mental processes are best viewed as algorithmic symbol manipulations. Connectionism replaces symbol structures with distributed representations in the form of weights between units. For problems close to the architecture of the underlying machines, connectionist and symbolic approaches can make different representational commitments for a task and, thus, can constitute different theories. For complex problems, however, the power of a system comes more from the content of the representations than the medium in which the representations reside. The connectionist hope of using learning to obviate explicit specification of this content is undermined by the problem of programming appropriate initial connectionist architectures so that they can in fact learn. In essence, although connectionism is a useful corrective to the view of mind as a Turing machine, for most of the central issues of intelligence, connectionism is only marginally relevant.
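The contrast the abstract draws can be put in a few lines (this sketch is mine, not the article's): the same boolean concept written once as an explicit symbol-manipulating rule, and once as content distributed across numeric weights found by the classic perceptron learning rule.

    # Illustrative contrast (my sketch, not the article's).
    # Symbolic view: the content of the representation is an explicit,
    # inspectable rule over symbols.
    def symbolic_and(a: int, b: int) -> int:
        return 1 if (a == 1 and b == 1) else 0

    # Connectionist view: the same content ends up distributed across
    # weights between units, found here by perceptron learning.
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])
    w, b = np.zeros(2), 0.0

    for _ in range(10):                      # a few passes suffice for AND
        for xi, ti in zip(X, y):
            pred = int(w @ xi + b > 0)
            w = w + (ti - pred) * xi         # nudge weights toward the target
            b = b + (ti - pred)

    print(w, b)   # e.g. [2. 1.] -2.0: the same rule, encoded as numbers

In the abstract's terms, both versions carry the same content; learning moved the burden from writing the rule to choosing an initial architecture (inputs, units, update rule) capable of acquiring it.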



AI Magazine

A workshop on high-level connectionist models was held in Las Cruces, New Mexico, on 9-11 April 1988, with support from the American Association for Artificial Intelligence and the Office of Naval Research. John Barnden and Jordan Pollack organized and hosted the workshop and will edit a book containing the proceedings and commentary. The book will be published by Ablex as the first volume in a series entitled Advances in Connectionist and Neural Computation Theory. Connectionism and symbolic AI are often posed as paradigmatic enemies, and there is a risk of the two fields severing entirely: few connectionist results are published in the mainstream AI journals and conference proceedings other than those sponsored by the Cognitive Science Society, and many neural-network researchers and industrialists proceed without consideration of the problems (and progress) of AI.