
Collaborating Authors

mcclelland


Is AI already conscious? Evidence is 'far too limited' to definitively say artificial intelligence hasn't made the leap, expert claims

Daily Mail - Science & tech

Artificial intelligence (AI) is already helping to solve problems in finance, research and medicine. But could it be reaching consciousness? Dr Tom McClelland, a philosopher from the University of Cambridge, has warned that current evidence is 'far too limited' to rule this dystopian possibility out.


Understanding Task Representations in Neural Networks via Bayesian Ablation

Nam, Andrew, Campbell, Declan, Griffiths, Thomas, Cohen, Jonathan, Leslie, Sarah-Jane

arXiv.org Artificial Intelligence

Neural networks are powerful tools for cognitive modeling due to their flexibility and emergent properties. However, interpreting their learned representations remains challenging due to their sub-symbolic semantics. In this work, we introduce a novel probabilistic framework for interpreting latent task representations in neural networks. Inspired by Bayesian inference, our approach defines a distribution over representational units to infer their causal contributions to task performance. Using ideas from information theory, we propose a suite of tools and metrics to illuminate key model properties, including representational distributedness, manifold complexity, and polysemanticity.
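The ablation-distribution idea can be sketched in a few lines: sample random Bernoulli masks over a layer's units, score the task under each mask, and estimate each unit's causal contribution as the difference in mean performance with the unit on versus off. The performance function and per-unit contributions below are invented stand-ins for illustration, not the paper's actual model or metrics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: task performance depends on which
# of 8 hidden units remain active (invented ground-truth contributions).
true_contrib = np.array([0.4, 0.3, 0.1, 0.1, 0.05, 0.05, 0.0, 0.0])

def task_performance(mask):
    """Toy performance: sum of contributions of the unmasked units."""
    return float(true_contrib @ mask)

# Sample Bernoulli ablation masks and score the task under each one.
n_samples, n_units = 2000, len(true_contrib)
masks = rng.integers(0, 2, size=(n_samples, n_units))
scores = np.array([task_performance(m) for m in masks])

# Estimate each unit's causal contribution: mean performance with the
# unit active minus mean performance with it ablated.
contrib_est = np.array([
    scores[masks[:, j] == 1].mean() - scores[masks[:, j] == 0].mean()
    for j in range(n_units)
])
print(np.round(contrib_est, 2))  # roughly recovers true_contrib
```

Because the other units' contributions cancel in expectation, the on-minus-off difference isolates each unit's marginal effect, which is the intuition behind treating ablation as inference over a distribution of masks.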


AI-enhanced semantic feature norms for 786 concepts

Suresh, Siddharth, Mukherjee, Kushin, Giallanza, Tyler, Yu, Xizheng, Patil, Mia, Cohen, Jonathan D., Rogers, Timothy T.

arXiv.org Artificial Intelligence

Semantic feature norms have been foundational in the study of human conceptual knowledge, yet traditional methods face trade-offs between concept/feature coverage and verifiability of quality due to the labor-intensive nature of norming studies. Here, we introduce a novel approach that augments a dataset of human-generated feature norms with responses from large language models (LLMs) while verifying the quality of norms against reliable human judgments. We find that our AI-enhanced feature norm dataset, NOVA: Norms Optimized Via AI, shows much higher feature density and overlap among concepts while outperforming a comparable human-only norm dataset and word-embedding models in predicting people's semantic similarity judgments. Taken together, we demonstrate that human conceptual knowledge is richer than captured in previous norm datasets and show that, with proper validation, LLMs can serve as powerful tools for cognitive science research.
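The validation step described above, predicting human similarity judgments from feature norms, can be sketched with a toy concept-by-feature matrix: each concept is a binary vector of listed features, and pairwise similarity is the cosine between vectors. The concepts and features below are invented examples, not entries from the NOVA dataset.

```python
import numpy as np

# Hypothetical concept-by-feature norm matrix (1 = feature listed for concept).
concepts = ["dog", "cat", "car"]
norms = np.array([
    [1, 1, 1, 0, 0],   # dog: has_fur, has_legs, is_animal
    [1, 1, 1, 0, 0],   # cat: has_fur, has_legs, is_animal
    [0, 0, 0, 1, 1],   # car: has_wheels, is_vehicle
], dtype=float)

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Predicted similarity for each pair of concepts from the norms;
# these predictions would then be correlated with human judgments.
for i in range(len(concepts)):
    for j in range(i + 1, len(concepts)):
        print(concepts[i], concepts[j], round(cosine(norms[i], norms[j]), 2))
```

Denser norms (more features per concept, more overlap between concepts) give these similarity predictions finer resolution, which is why feature density matters for the comparison against human judgments.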


Naturalistic Computational Cognitive Science: Towards generalizable models and theories that capture the full range of natural behavior

Carvalho, Wilka, Lampinen, Andrew

arXiv.org Artificial Intelligence

Artificial Intelligence increasingly pursues large, complex models that perform many tasks within increasingly realistic domains. How, if at all, should these developments in AI influence cognitive science? We argue that progress in AI offers timely opportunities for cognitive science to embrace experiments with increasingly naturalistic stimuli, tasks, and behaviors; and computational models that can accommodate these changes. We first review a growing body of research spanning neuroscience, cognitive science, and AI that suggests that incorporating a broader range of naturalistic experimental paradigms (and models that accommodate them) may be necessary to resolve some aspects of natural intelligence and ensure that our theories generalize. We then suggest that integrating recent progress in AI and cognitive science will enable us to engage with more naturalistic phenomena without giving up experimental control or the pursuit of theoretically grounded understanding. We offer practical guidance on how methodological practices can contribute to cumulative progress in naturalistic computational cognitive science, and illustrate a path towards building computational models that solve the real problems of natural cognition - together with a reductive understanding of the processes and principles by which they do so.


A Relational Inductive Bias for Dimensional Abstraction in Neural Networks

Campbell, Declan, Cohen, Jonathan D.

arXiv.org Artificial Intelligence

The human cognitive system exhibits remarkable flexibility and generalization capabilities, partly due to its ability to form low-dimensional, compositional representations of the environment. In contrast, standard neural network architectures often struggle with abstract reasoning tasks, overfitting, and requiring extensive data for training. This paper investigates the impact of the relational bottleneck -- a mechanism that focuses processing on relations among inputs -- on the learning of factorized representations conducive to compositional coding and the attendant flexibility of processing. We demonstrate that such a bottleneck not only improves generalization and learning efficiency, but also aligns network performance with human-like behavioral biases. Networks trained with the relational bottleneck developed orthogonal representations of feature dimensions latent in the dataset, reflecting the factorized structure thought to underlie human cognitive flexibility. Moreover, the relational network mimics human biases towards regularity without pre-specified symbolic primitives, suggesting that the bottleneck fosters the emergence of abstract representations that confer flexibility akin to symbols.
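The core mechanism can be sketched directly: a relational bottleneck passes only the pairwise relations between input embeddings downstream (here, inner products, one simple choice of relation), discarding the feature values themselves. A minimal consequence, shown below with invented data, is that two inputs whose features differ by an orthogonal rotation yield identical relational codes.

```python
import numpy as np

rng = np.random.default_rng(1)

def relational_bottleneck(x):
    """Pass only pairwise relations (inner products) between the input
    vectors downstream, discarding the feature values themselves."""
    # x: (n_objects, d) embeddings of the objects in one input.
    r = x @ x.T                       # (n, n) relation matrix
    iu = np.triu_indices(len(x), 1)   # unique unordered pairs
    return r[iu]                      # relational code fed to later layers

# Two inputs with different features but identical relational structure:
# the second is a random orthogonal rotation of the first.
x = rng.normal(size=(3, 4))
q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # random orthogonal matrix
x_rot = x @ q
print(np.allclose(relational_bottleneck(x), relational_bottleneck(x_rot)))
```

Since (xQ)(xQ)ᵀ = xQQᵀxᵀ = xxᵀ for orthogonal Q, downstream processing is forced to depend on relational structure rather than on the particular feature values, which is one way such a constraint can encourage abstract, factorized representations.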


A Simple Illustration of Interleaved Learning using Kalman Filter for Linear Least Squares

John, Majnu, Wu, Yihren

arXiv.org Machine Learning

Interleaved learning (IL) is one of the mechanisms expounded by Complementary Learning Systems Theory (McClelland, McNaughton and O'Reilly, 1995; Marr, 1971) on how successful learners such as human beings mitigate the effects of 'catastrophic interference' while learning. Recent illustrations of IL using neural networks include Saxena, Shobe and McNaughton, 2022, who showed that if new information is similar to a subset of old items, then deep neural networks can learn the new information rapidly and with the same level of accuracy by interleaving the old items in the subset. A similar insight was presented in McClelland, McNaughton and Lampinen, 2020, where it was shown that, for artificial neural networks, information consistent with prior knowledge can sometimes be integrated very quickly. Another recent paper (Ban and Xie, 2021) formulated interleaved machine learning as a multi-level optimization problem and developed an efficient differentiable algorithm to solve the interleaving learning problem, with application to neural architecture search. A closely related biological concept is interleaved replay, which has also been empirically validated in the literature (Gepperth and Karaoguz, 2016; Kemker and Kanan, 2018). Over the past couple of decades, ideas inspired by biological IL have been utilized in a wide array of online learning methods as well, especially to prevent catastrophic forgetting (see, for example, Wang et al.).
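The paper's setting can be illustrated with a minimal recursive least-squares (Kalman-filter) sketch: the weights of a linear model are updated one example at a time, so old and new items can be presented interleaved in a single stream while the filter still converges to the least-squares solution. The data, noise level, and prior covariance below are invented for illustration, not the paper's example.

```python
import numpy as np

rng = np.random.default_rng(2)

def rls_step(w, P, x, y, noise_var=1.0):
    """One Kalman-filter (recursive least squares) update of the weight
    estimate w and its covariance P for the linear model y = x @ w."""
    x = x.reshape(-1, 1)
    k = P @ x / (noise_var + x.T @ P @ x)      # Kalman gain
    w = w + k.flatten() * (y - float(x.T @ w))  # correct by prediction error
    P = P - k @ x.T @ P                         # shrink uncertainty
    return w, P

d = 3
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(200, d))
Y = X @ w_true

# Interleaved presentation: old and new items mixed in one stream;
# each item updates the weights without erasing earlier learning.
w, P = np.zeros(d), np.eye(d) * 100.0  # broad prior over weights
for x, y in zip(X, Y):
    w, P = rls_step(w, P, x, y)
print(np.round(w, 2))  # converges toward w_true
```

Because each update incorporates one item while the covariance P retains what earlier items established, the filter is immune to the catastrophic interference that plagues naive sequential gradient training, which is what makes it a clean illustration of IL.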


AI: The pattern is not in the data, it's in the machine

#artificialintelligence

A neural network transforms input, the circles on the left, to output, on the right. How that happens is a transformation of weights, center, which we often confuse for patterns in the data itself. It's a commonplace of artificial intelligence to say that machine learning, which depends on vast amounts of data, functions by finding patterns in data. The phrase, "finding patterns in data," in fact, has been a staple phrase of things such as data mining and knowledge discovery for years now, and it has been assumed that machine learning, and its deep learning variant especially, are just continuing the tradition of finding such patterns. AI programs do, indeed, result in patterns, but, just as "The fault, dear Brutus, lies not in our stars but in ourselves," the fact of those patterns is not something in the data, it is what the AI program makes of the data.


Modelling the development of counting with memory-augmented neural networks

Dulberg, Zack, Webb, Taylor, Cohen, Jonathan

arXiv.org Artificial Intelligence

Learning to count is an important example of the broader human capacity for systematic generalization, and the development of counting is often characterized by an inflection point when children rapidly acquire proficiency with the procedures that support this ability. We aimed to model this process by training a reinforcement learning agent to select N items from a binary vector when instructed (known as the give-$N$ task). We found that a memory-augmented modular network architecture based on the recently proposed Emergent Symbol Binding Network (ESBN) exhibited an inflection during learning that resembled human development. This model was also capable of systematic extrapolation outside the range of its training set - for example, trained only to select between 1 and 10 items, it could succeed at selecting 11 to 15 items as long as it could make use of an arbitrary count sequence of at least that length. The close parallels to child development and the capacity for extrapolation suggest that our model could shed light on the emergence of systematicity in humans.
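The target behavior the agent must acquire can be sketched as a toy, non-learned procedure: select items one at a time while advancing a count, stopping when the count reaches N. This is only the task's selection rule written out by hand, not the ESBN-based model, but it makes clear what systematic extrapolation means: the same rule works for any N up to the available count sequence.

```python
import numpy as np

def give_n(n, n_slots=15):
    """Toy give-N procedure: select exactly n items from a binary vector
    by counting selections and stopping when the count reaches n."""
    selection = np.zeros(n_slots, dtype=int)
    count = 0
    for i in range(n_slots):
        if count == n:   # stop rule the child (or agent) must acquire
            break
        selection[i] = 1
        count += 1
    return selection

print(give_n(4, 10))   # selects the first 4 of 10 slots
print(give_n(12, 15))  # extrapolates beyond a 1-10 "training range"
```

A learner that has internalized this rule, rather than memorizing each N separately, extrapolates for free, which is the signature behavior the abstract reports for the memory-augmented network.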


Ford vehicles will run on Android Auto starting in 2023

Engadget

Google and Ford have announced a first-of-its-kind partnership "that promises to transform both Ford and the auto industry," Google Cloud CEO Thomas Kurian told reporters during a virtual press conference on Monday. "We both believe that the relationship between Google and Ford will establish an innovation powerhouse," added David McClelland, Ford vice president of strategy and partnerships. "It will accelerate the modernization of our business and Ford, and most importantly, it will let us exceed our customers' expectations." Under the terms of the six-year partnership, Ford has named Google as its preferred cloud provider and, beginning in 2023, millions of Ford and Lincoln vehicles will operate using Android Auto (just as we saw in the Polestar 2) with Google apps, such as Assistant and Maps, embedded into the infotainment system. But don't worry, iPhone owners: Ford will continue to support Apple CarPlay and Amazon Alexa functionality moving forward.