Connectionism


A Scale-Invariant Diagnostic Approach Towards Understanding Dynamics of Deep Neural Networks

Moharil, Ambarish, Tamburri, Damian, Kumara, Indika, Heuvel, Willem-Jan Van Den, Azarfar, Alireza

arXiv.org Artificial Intelligence

This paper introduces a scale-invariant methodology employing fractal geometry to analyze and explain the nonlinear dynamics of complex connectionist systems. By leveraging architectural self-similarity in Deep Neural Networks (DNNs), we quantify fractal dimensions and roughness to better understand their dynamics and enhance the quality of intrinsic explanations. Our approach integrates principles from chaos theory to improve visualizations of fractal evolution and uses a graph-based neural network to reconstruct network topology. This strategy aims to advance the intrinsic explainability of connectionist Artificial Intelligence (AI) systems.
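The abstract does not spell out how the fractal dimensions are computed, but the standard box-counting estimator gives a feel for the kind of quantity involved. The sketch below is a generic illustration of box counting on a 2D point set, not the paper's actual method; all names and parameters are our own assumptions.

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the box-counting (fractal) dimension of a 2D point set.

    For each box size s, count the number N(s) of grid cells of side s
    that contain at least one point, then fit log N(s) against log(1/s);
    the slope of that line approximates the dimension.
    """
    counts = []
    for s in box_sizes:
        # Assign each point to a grid cell of side length s.
        cells = set(map(tuple, np.floor(points / s).astype(int)))
        counts.append(len(cells))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)),
                          np.log(counts), 1)
    return slope

# Sanity check: a straight line segment should have dimension close to 1.
t = np.linspace(0.0, 1.0, 10_000)
line = np.column_stack([t, t])
dim = box_counting_dimension(line, box_sizes=[0.1, 0.05, 0.025, 0.0125])
```

A space-filling point cloud would push the estimate toward 2, while rougher, sparser structures yield fractional values in between, which is the sense in which a single number can summarize "roughness".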


Connectionism for Music and Audition

Neural Information Processing Systems

In recent years, NIPS has heard neural networks generate tunes and harmonize chorales. With a large amount of music becoming available in computer-readable form, real data can be used to train connectionist models. At the beginning of this workshop, Andreas Weigend focused on architectures to capture structure on multiple time scales. The prediction approach to continuation and completion, as well as to modeling expectations, can be characterized by the question "What's next?". Moving to time as the primary medium of musical communication, the inquiry in music perception and cognition shifted to the question "When next?".


Primitive Manipulation Learning with Connectionism

Neural Information Processing Systems

Infants' manipulative exploratory behavior within the environment is a vehicle of cognitive stimulation [McCall 1974]. During this time, infants practice and perfect sensorimotor patterns that become behavioral modules which will be seriated and embedded in more complex actions. This paper explores the development of such primitive learning systems using an embodied lightweight hand which will be used for a humanoid being developed at the MIT Artificial Intelligence Laboratory [Brooks and Stein 1993]. Primitive grasping procedures are learned from sensory inputs using a connectionist reinforcement algorithm, while two submodules preprocess sensory data to recognize the hardness of objects and detect shear using competitive learning and back-propagation algorithm strategies, respectively. This system is not only consistent and quick during the initial learning stage, but also adaptable to new situations after training is completed.


The Less Intelligent the Elements, the More Intelligent the Whole. Or, Possibly Not?

Fioretti, Guido, Policarpi, Andrea

arXiv.org Artificial Intelligence

We dare to make use of a possible analogy between neurons in a brain and people in a society, asking ourselves whether individual intelligence is necessary for collective wisdom to emerge and, most importantly, what sort of individual intelligence is conducive to greater collective wisdom. We review insights and findings from connectionism, agent-based modeling, group psychology, economics and physics, casting them in terms of the changing structure of the system's Lyapunov function. Finally, we apply these insights to the sorts and degrees of intelligence of prey and predators in the Lotka-Volterra model, explaining why certain individual understandings lead to co-existence of the two species whereas other uses of their individual intelligence cause global extinction.
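For readers unfamiliar with the model the abstract refers to, here is a minimal sketch of the classic two-species Lotka-Volterra predator-prey dynamics, integrated with explicit Euler steps. The parameters are illustrative textbook values, not those used by the authors, and the simple integrator is a rough sketch rather than a stiff-safe solver.

```python
import numpy as np

def lotka_volterra(x0, y0, alpha, beta, delta, gamma, dt=0.001, steps=20_000):
    """Integrate dx/dt = alpha*x - beta*x*y, dy/dt = delta*x*y - gamma*y
    with explicit Euler steps, returning the prey and predator trajectories."""
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(steps):
        dx = alpha * x - beta * x * y   # prey growth minus predation losses
        dy = delta * x * y - gamma * y  # predator growth minus natural death
        x += dt * dx
        y += dt * dy
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

# With these textbook parameters the two populations oscillate around the
# equilibrium (gamma/delta, alpha/beta) and neither goes extinct.
prey, pred = lotka_volterra(10.0, 5.0, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4)
```

In this baseline model co-existence is the generic outcome; the paper's question is how endowing the two species with different kinds of individual "intelligence" perturbs these dynamics toward co-existence or extinction.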


A Brief History of Artificial Intelligence

#artificialintelligence

The history of Artificial Intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain. The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956. Those who attended would become the leaders of AI research for decades.


Council Post: Symbolism Versus Connectionism In AI: Is There A Third Way?

#artificialintelligence

It's an essential prerequisite for deciding how we want critical decisions about our health and well-being to be made -- possibly for a very long time to come. To understand why the "how" behind AI functionality is so important, we first have to appreciate the fact that there have historically been two very different approaches to AI. The first is symbolism, which deals with semantics and symbols. Many early AI advances utilized a symbolic approach to AI programming, striving to create smart systems by modeling relationships and using symbols and programs to convey meaning. But it soon became clear that one weakness of these semantic networks and this "top-down" approach was that true learning was relatively limited.


Artificial intelligence in library services | Daily Times

#artificialintelligence

Libraries have always resisted change, yet they are also viewed as agents of change. The journey from clay tablet to e-tablet and from papyrus to paper has been made, but it has not yet ended. Changing the paradigm from a traditional library setup to a modern information network has enhanced the role of libraries as real service agents. These changes have stunned some scholars, who wonder what else is going to be brought into consideration in order to impart quality and optimal information in minimal time.


Does AlphaGo actually play Go? Concerning the State Space of Artificial Intelligence

Lyre, Holger

arXiv.org Artificial Intelligence

The overarching goal of this paper is to develop a general model of the state space of AI. Given the breathtaking progress in AI research and technologies in recent years, such conceptual work is of substantial theoretical interest. The present AI hype is mainly driven by the triumph of deep learning neural networks. As the distinguishing feature of such networks is the ability to self-learn, self-learning is identified as one important dimension of the AI state space. Another main dimension lies in the possibility to go over from specific to more general types of problems. The third main dimension is provided by semantic grounding. Since this is a philosophically complex and controversial dimension, a larger part of the paper is devoted to it. We take a fresh look at known foundational arguments in the philosophy of mind and cognition that are gaining new relevance in view of the recent AI developments, including the blockhead objection, the Turing test, the symbol grounding problem, the Chinese room argument, and general use-theoretic considerations of meaning. Finally, the AI state space is outlined, spanned by the main dimensions of generalization, grounding and "self-x-ness", i.e. possessing self-x properties such as self-learning.


Reviewing Rebooting AI

#artificialintelligence

First of all, apologies for not posting as frequently as I used to. As you might imagine, blogging is not my full-time job, and I'm currently extremely involved in a very exciting startup (something I'm going to write about soon). On weekends and evenings I'm busy helping care for a 7-month-old infant, and altogether that leaves me with very little time. But I'll try to make it better soon, since a lot is going on in the AI space and signs of cooling are now visible all over the place. In this post I'd like to focus on the recent book by Gary Marcus and Ernest Davis, Rebooting AI.


The 30-Year Cycle In The AI Debate

Chauvet, Jean-Marie

arXiv.org Artificial Intelligence

The recent practical successes [26] of Artificial Intelligence (AI) programs of the Reinforcement Learning and Deep Learning varieties in game playing, natural language processing and image classification are now calling attention to the envisioned pitfalls of their hypothetical extension to wider domains of human behavior. Several voices from industry and academia are now routinely raising concerns over the advances [49] of often heavily media-covered representatives of this new generation of programs such as Deep Blue, Watson, Google Translate, AlphaGo and AlphaZero. Most of these cutting-edge algorithms generally fall under the class of supervised learning, a branch of the still-evolving taxonomy of Machine Learning techniques in AI research. In most cases the implementation choice is artificial neural network software, the workhorse of the Connectionism school of thought in both AI and Cognitive Psychology. Confronting the current wave of connectionist architectures, critics usually raise issues of interpretability (Can the remarkable predictive capabilities be trusted in real-life tasks? Are these capabilities transferable to unfamiliar situations or to different tasks altogether? How informative are the results about the real world; about human cognition?)