fodor
Explainability Through Systematicity: The Hard Systematicity Challenge for Artificial Intelligence
This paper argues that explainability is only one facet of a broader ideal that shapes our expectations towards artificial intelligence (AI). Fundamentally, the issue is to what extent AI exhibits systematicity--not merely in being sensitive to how thoughts are composed of recombinable constituents, but in striving towards an integrated body of thought that is consistent, coherent, comprehensive, and parsimoniously principled. This richer conception of systematicity has been obscured by the long shadow of the "systematicity challenge" to connectionism, according to which network architectures are fundamentally at odds with what Fodor and colleagues termed "the systematicity of thought." I offer a conceptual framework for thinking about "the systematicity of thought" that distinguishes four senses of the phrase. I use these distinctions to defuse the perceived tension between systematicity and connectionism and show that the conception of systematicity that historically shaped our sense of what makes thought rational, authoritative, and scientific is more demanding than the Fodorian notion. To determine whether we have reason to hold AI models to this ideal of systematicity, I then argue, we must look to the rationales for systematization and explore to what extent they transfer to AI models. I identify five such rationales and apply them to AI. This brings into view the "hard systematicity challenge." However, the demand for systematization itself needs to be regulated by the rationales for systematization. This yields a dynamic understanding of the need to systematize thought, which tells us how systematic we need AI models to be and when.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Minnesota (0.04)
- (6 more...)
- Health & Medicine (1.00)
- Education > Curriculum > Subject-Specific Education (0.45)
The end of radical concept nativism
Rule, Joshua S., Piantadosi, Steven T.
Though humans seem to be remarkable learners, arguments in cognitive science and philosophy of mind have long maintained that learning something fundamentally new is impossible. Specifically, Jerry Fodor's arguments for radical concept nativism hold that most, if not all, concepts are innate and that what many call concept learning never actually leads to the acquisition of new concepts. These arguments have deeply affected cognitive science, and many believe that the counterarguments to radical concept nativism have been either unsuccessful or only apply to a narrow class of concepts. This paper first reviews the features and limitations of prior arguments. We then identify three critical points - related to issues of expressive power, conceptual structure, and concept possession - at which the arguments in favor of radical concept nativism diverge from describing actual human cognition. We use ideas from computer science and information theory to formalize the relevant ideas in ways that are arguably more scientifically productive. We conclude that, as a result, there is an important sense in which people do indeed learn new concepts.
- North America > United States > California > Alameda County > Berkeley (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Education (0.92)
- Health & Medicine > Therapeutic Area > Neurology (0.92)
Symbol grounding in computational systems: A paradox of intentions
The paper presents a paradoxical feature of computational systems that suggests that computationalism cannot explain symbol grounding. If the mind is a digital computer, as computationalism claims, then it can be computing either over meaningful symbols or over meaningless symbols. If it is computing over meaningful symbols its functioning presupposes the existence of meaningful symbols in the system, i.e. it implies semantic nativism. If the mind is computing over meaningless symbols, no intentional cognitive processes are available prior to symbol grounding. In this case, no symbol grounding could take place since any grounding presupposes intentional cognitive processes. So, whether computing in the mind is over meaningless or over meaningful symbols, computationalism implies semantic nativism.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (3 more...)
There must be encapsulated nonconceptual content in vision
In this paper I want to propose an argument to support Jerry Fodor's thesis (Fodor 1983) that input systems are modular and thus informationally encapsulated. The argument starts with the suggestion that there is a "grounding problem" in perception, i.e., a problem in explaining how perception that can yield a visual experience is possible, how sensation can become meaningful perception of something for the subject. Given that visual experience is actually possible, this invites a transcendental argument that explains the conditions of its possibility. I propose that one of these conditions is the existence of a visual module in Fodor's sense that allows the step from sensation to object-identifying perception, thus enabling visual experience. It seems to follow that there is informationally encapsulated nonconceptual content in visual perception.
- North America > United States (0.15)
- Europe > United Kingdom > England (0.14)
Can Artificial Intelligence Plan Your Next Trip? We Interviewed ChatGPT to Find Out
Um...are we travel writers all out of a job? If you've been following the recent advancements in the world of artificial intelligence, you'll know that there are unbelievable strides being made regarding the creation of original art. But there are also cutting-edge language models that can craft anything from original stories and college essays to jokes and press releases. And for travelers, A.I. might even be able to help you plan your next trip, which has us travel writers a little bit nervous. But we wanted to see how far this technology has come, so we decided to put it to the test by conducting an interview with the ChatGPT A.I. engine to find out some of the best things to do in the coming year and hear about the travel space in general.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.06)
- South America (0.06)
- Europe > Portugal > Lisbon > Lisbon (0.05)
- (17 more...)
- Consumer Products & Services > Travel (0.96)
- Consumer Products & Services > Restaurants (0.70)
Unit Testing for Concepts in Neural Networks
Lovering, Charles, Pavlick, Ellie
Many complex problems are naturally understood in terms of symbolic concepts. For example, our concept of "cat" is related to our concepts of "ears" and "whiskers" in a non-arbitrary way. Fodor (1998) proposes one theory of concepts, which emphasizes symbolic representations related via constituency structures. Whether neural networks are consistent with such a theory is open for debate. We propose unit tests for evaluating whether a system's behavior is consistent with several key aspects of Fodor's criteria. Using a simple visual concept learning task, we evaluate several modern neural architectures against this specification. We find that models succeed on tests of groundedness, modularity, and reusability of concepts, but that important questions about causality remain open. Resolving these will require new methods for analyzing models' internal states.
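The idea of "unit tests for concepts" can be illustrated in miniature. The toy model and tests below are purely hypothetical and are not the authors' actual setup: they sketch, under the assumption of a trivially symbolic representation, what a groundedness or reusability test might look like in principle.

```python
# Hypothetical sketch of "unit tests for concepts": probe whether a
# model's representation of a part concept ("whiskers") is reused
# inside a composite concept ("cat"). The toy "model" here just maps
# a feature dictionary to a frozenset of active features; a real test
# suite would probe a neural network's internal representations.

def encode(features):
    # Toy "model": represent a concept as the set of its active features.
    return frozenset(f for f, on in features.items() if on)

def test_groundedness():
    # Changing the input features must change the representation.
    a = encode({"ears": True, "whiskers": True})
    b = encode({"ears": True, "whiskers": False})
    assert a != b

def test_reusability():
    # The standalone "whiskers" representation should recur as a
    # constituent of the composite "cat" representation.
    cat = encode({"ears": True, "whiskers": True})
    whiskers = encode({"whiskers": True})
    assert whiskers <= cat  # part-representation is reused inside the whole

test_groundedness()
test_reusability()
```

For a symbolic encoding these tests pass trivially; the paper's point is that for a neural network, whether anything analogous holds is an empirical question.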
- Oceania > Australia (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (2 more...)
Japanese vending machines: the cutting edge of merchandising … and on the brink of irrelevance?
In an online list of the world's wackiest vending machines, compiled by Fodor's Travel, Japan got the nod for 3 out of 13 -- those selling beer, fresh eggs and bananas. The Fodor's list failed to impress this writer, who has seen machines with considerably wackier offerings. Some years ago, for instance, I spotted a machine that enabled people to automatically wash, rinse and dry their dogs. The appearance of original vending machines does warrant media coverage, which is certainly the case for Dohiemon, a new type of vending machine that went into service earlier this year. Preceding it with "do" and adding "mon" at the end creates a clever parody of Doraemon, the blue robot cat of cartoon fame.
- Media > News (0.50)
- Consumer Products & Services (0.49)
Language & Cognition: re-reading Jerry Fodor
In my opinion the late Jerry Fodor was one of the most brilliant cognitive scientists (that I knew of), if you wanted to have a deep understanding of the major issues in cognition and the plausibility/implausibility of various cognitive architectures. Very few had the technical breadth and depth in tackling some of the biggest questions concerning the mind, language, computation, the nature of concepts, innateness, ontology, etc. The other day I felt like re-reading his Concepts -- Where Cognitive Science Went Wrong (I have read this small monograph at least 10 times before, and I must say that I still do not fully comprehend everything that's in it). But what did happen on the 11th reading of Concepts is this: I now have a new and deeper understanding of his Productivity, Systematicity and Compositionality arguments that should clearly put an end to any talk of connectionist architectures being a serious architecture for cognition -- by 'connectionist architectures' I roughly mean also modern-day 'deep neural networks' (DNNs), which are essentially, if we strip out the advances in compute power, the same models that were the target of Fodor's onslaught. I have always understood the 'gist' of his argument, but I believe I now have a deeper understanding -- and, in the process, I am now more convinced than I have ever been that DNNs cannot be considered serious models for high-level cognitive tasks (planning, reasoning, language understanding, problem solving, etc.) beyond being statistical pattern recognizers (although very good ones at that).
Book Reviews
Review of The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology If you are interested in writing a review, contact chandra@cis.ohio-state.edu. A question: Which one of the following doesn't belong with the rest? It is the only discipline in the list that is not under attack for being conceptually or methodologically confused. Objections to AI and computational cognitive science are myriad. Accordingly, there are many different reasons for these attacks. However, all of them come down to one simple observation: Humans seem a lot smarter than computers--not just smarter as in Einstein was smarter than I, or I am smarter than a chimpanzee, but more like I am smarter than a pencil sharpener. To many, computation seems like the wrong paradigm for studying the mind. All this is despite another truth: The computational paradigm is the best thing to come down the pike since the wheel. The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology, Jerry Fodor, Cambridge, Massachusetts, The MIT Press, 2000, 126 pages, $22.95. Jerry Fodor believes this latter claim. He says: [The computational theory of mind] is, in my view, by far the best theory of cognition that we've got; indeed, the only one we've got that's worth the bother of a serious discussion.… There is, in short, every reason to suppose that Computational Theory is part of the truth about cognition. It is a fascinating read. This dispute about quantity of truth is where the book gets its title. In 1997, Steven Pinker published an important book describing the current state of the art in cognitive science (see also Plotkin [1997]). Pinker's book is entitled How the Mind Works.
In it, he describes how computationalism, psychological nativism (the idea that many of our concepts are innate), massive modularity (the idea that most mental processes occur within a domain-specific, encapsulated special-purpose processor), and Darwinian adaptationism combine to form a robust (but nascent) theory of mind. Fodor, however, thinks that the mind doesn't work that way or, anyhow, not very much of the mind works that way. Fodor dubs the synthesis of computationalism, nativism, massive modularity, and adaptationism the new synthesis (p.
- Book Review (0.83)
- Overview (0.74)
- North America > United States > California > Santa Clara County > Palo Alto (0.40)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.06)
- Oceania > Australia > Victoria > Melbourne (0.05)
- (4 more...)
- Education > Curriculum > Subject-Specific Education (0.32)
- Information Technology > Security & Privacy (0.30)