Competitive Anti-Hebbian Learning of Invariants
Many connectionist learning algorithms share with principal component analysis (Jolliffe, 1986) the strategy of extracting the directions of highest variance from the input. A single Hebbian neuron, for instance, will come to encode the input's first principal component (Oja and Karhunen, 1985); various forms of lateral interaction can be used to force a layer of such nodes to differentiate and span the principal component subspace - cf. (Sanger, 1989; Kung, 1990; Leen, 1991), and others. The same type of representation also develops in the hidden layer of backpropagation autoassociator networks (Baldi and Hornik, 1989).
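The single-neuron result cited above can be illustrated with a minimal NumPy sketch of Oja's rule; the data, learning rate, and variable names here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean 2-D data whose direction of highest variance is (1, 1)/sqrt(2).
basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
samples = (rng.normal(size=(5000, 2)) * np.array([3.0, 0.5])) @ basis.T

w = rng.normal(size=2)          # weight vector of a single linear neuron
eta = 0.01                      # learning rate
for x in samples:
    y = w @ x                   # neuron output
    w += eta * y * (x - y * w)  # Oja's rule: Hebbian term plus weight decay

w /= np.linalg.norm(w)
# w now aligns (up to sign) with the first principal component, (1, 1)/sqrt(2).
```

The decay term `-eta * y**2 * w` is what keeps the weights bounded and makes the neuron converge to the unit-norm first principal component rather than growing without limit, as in plain Hebbian learning.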
A Short Survey of Systematic Generalization
This survey covers systematic generalization and the history of how machine learning has addressed it. We aim to summarize and organize related work, spanning both conventional approaches and recent improvements. We first look at the definition of systematic generalization, then introduce the Classicist and Connectionist positions. We then discuss different types of Connectionist models and how they approach generalization. Two crucial problems, variable binding and causality, are discussed. We examine systematic generalization in the language, vision, and VQA fields, and discuss recent improvements from different perspectives. Systematic generalization has a long history in artificial intelligence, and we can cover only a small portion of the many contributions. We hope this paper provides useful background and proves beneficial for future work.
What is Artificial Intelligence?
Artificial Intelligence (AI) is the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition. Although Artificial Intelligence, often abbreviated as "AI", may connote robotics or futuristic scenes, AI goes well beyond the automatons of science fiction into the non-fiction of modern-day advanced computer science. Professor Pedro Domingos, a prominent researcher in this field, describes "five tribes" of machine learning: symbolists, with origins in logic and philosophy; connectionists, stemming from neuroscience; evolutionaries, relating to evolutionary biology; Bayesians, engaged with statistics and probability; and analogizers, with origins in psychology. Recently, advances in the efficiency of statistical computation have led to Bayesians furthering the field in a number of areas under the name "machine learning". Similarly, advances in network computation have led to connectionists furthering a subfield under the name "deep learning". Machine learning (ML) and deep learning (DL) are both computer science fields derived from the discipline of Artificial Intelligence.
Connectionism - Switching and Fast Transforms
If you are a connectionist (and who isn't?) you should know what the terms switch, connect, and disconnect mean. A switch when on gives: zero volts in, zero volts out; 1 volt in, 1 volt out; 2 volts in, 2 volts out. If you graph that in a uniform way you get a 45-degree line. The functional form is f(x) = x, and the meaning is connect. A switch when off gives only zero volts out.
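The on/off behavior described above can be sketched as a tiny Python function; the name `switch` and its signature are illustrative, not part of the original text:

```python
def switch(x: float, on: bool) -> float:
    """A switch: the identity function f(x) = x when on ('connect'),
    and constant zero output when off ('disconnect')."""
    return x if on else 0.0

# When on, the switch connects input to output along the 45-degree line.
assert switch(2.0, on=True) == 2.0   # 2 volts in, 2 volts out
# When off, the switch gives only zero volts out, whatever the input.
assert switch(2.0, on=False) == 0.0
```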
The case for hybrid artificial intelligence
This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Deep learning, the main innovation that has renewed interest in artificial intelligence in recent years, has helped solve many critical problems in computer vision, natural language processing, and speech recognition. However, as deep learning matures and moves from its hype peak toward the trough of disillusionment, it is becoming clear that it is missing some fundamental components. This is a reality that many of the pioneers of deep learning and its main component, artificial neural networks, have acknowledged at various AI conferences in the past year. Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, the three "godfathers of deep learning," have all spoken about the limits of neural networks.
Reviewing Rebooting AI
First of all, apologies for not posting as frequently as I used to. As you might imagine, blogging is not my full-time job, and I'm currently deeply involved in a very exciting startup (something I'm going to write about soon). On weekends and evenings I have a 7-month-old infant to help care for, and altogether that leaves me with very little time. But I'll try to do better soon, since a lot is going on in the AI space and signs of cooling are now visible all over the place. In this post I'd like to focus on the recent book by Gary Marcus and Ernest Davis, Rebooting AI.
The Revenge of Neurons
This web page shows only the two main figures of the paper, translated into English. Since 2010, machine-learning-based predictive techniques, and more specifically deep learning neural networks, have achieved spectacular performances in the fields of image recognition and automatic translation, under the umbrella term of "Artificial Intelligence". But their lineage within this field of research is not straightforward. In the tumultuous history of AI, learning techniques using so-called "connectionist" neural networks were long mocked and ostracized by the "symbolic" movement. From the perspective of a social history of science and technology, the paper seeks to highlight how researchers, relying on the availability of massive data and the multiplication of computing power, have undertaken to reformulate the symbolic AI project by reviving the spirit of adaptive and inductive machines dating back to the era of cybernetics.
Talking Heads … A Review of Speaking Minds: Interviews with Twenty Eminent Cognitive Scientists
They thought that the Chinese Room argument showed that computationalism could never fully account for the first-person perspective, that the "computer metaphor for the mind" might lead to some vital social questions being ignored, and that passing the Turing Test [...]. They conducted 20 interviews with a rather idiosyncratic collection of people, largely on the east and west coasts, to find out what the consensus was in the field. One of their happy discoveries was that connectionism (about which they initially knew little) was expected to overcome many of these obstacles. Each interview begins with a brief personal history of why the interviewee became involved with the subject and what they take it to be, and then moves into a discussion of contemporary issues which the editors find interesting. While the interviews do not conform to a set pattern, they return regularly to a few favorite themes: the Chinese Room, the importance of the Turing Test, why "symbolic AI" has failed (a claim that is made repeatedly throughout the book), and the significance of connectionism as a replacement for it. [...] Wilensky, and Winograd could possibly be said to be active in mainstream AI; on the other hand, there are seven or eight philosophers, of whom only Dennett has a sympathetic interest in AI; all the others have rejected its premises, and Dreyfus, Searle, and Weizenbaum are notorious for their passionate and sustained attacks on the subject. This would be less important but for the fact that AI is the main subject matter of several of the interviews.