cognitive capacity
Why Experts Can't Agree on Whether AI Has a Mind
Pillay is an editorial fellow at TIME. "I'm not used to getting nasty emails from a holy man," says Professor Michael Levin, a developmental biologist at Tufts University. Levin was presenting his research to a group of engineers interested in spiritual matters in India, arguing that properties like "mind" and intelligence can be observed even in cellular systems, and that they exist on a spectrum. But when he pushed further--arguing that the same properties emerge everywhere, including in computers--the reception shifted.
Enhancing Reasoning Abilities of Small LLMs with Cognitive Alignment
Cai, Wenrui, Wang, Chengyu, Yan, Junbing, Huang, Jun, Fang, Xiangzhong
The reasoning capabilities of large reasoning models (LRMs), such as OpenAI's o1 and DeepSeek-R1, have seen substantial advancements through deep thinking. However, these enhancements come with significant resource demands, underscoring the need for training effective small reasoning models. A critical challenge is that small models possess different reasoning capacities and cognitive trajectories compared with their larger counterparts. Hence, directly distilling chain-of-thought (CoT) rationales from large LRMs to smaller ones can sometimes be ineffective and often requires a substantial amount of annotated data. In this paper, we first introduce a novel Critique-Rethink-Verify (CRV) system, designed for training smaller yet powerful LRMs. Our CRV system consists of multiple LLM agents, each specializing in unique tasks: (i) critiquing the CoT rationales according to the cognitive capabilities of smaller models, (ii) rethinking and refining these CoTs based on the critiques, and (iii) verifying the correctness of the refined results. Building on the CRV system, we further propose the Cognitive Preference Optimization (CogPO) algorithm to continuously enhance the reasoning abilities of smaller models by aligning their reasoning processes with their cognitive capacities. Comprehensive evaluations on challenging reasoning benchmarks demonstrate the efficacy of our CRV+CogPO framework, which outperforms other methods by a large margin.
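The three-stage critique-rethink-verify loop the abstract describes can be sketched as simple control flow. Everything below is a toy illustration, not the paper's system: the `critique`, `rethink`, and `verify` functions stand in for the LLM agents, and the per-step `difficulty` field is an invented proxy for whether a rationale step fits a smaller model's cognitive capacity.

```python
def critique(cot, capacity):
    """Flag rationale steps that exceed the target model's capacity."""
    return [i for i, step in enumerate(cot) if step["difficulty"] > capacity]

def rethink(cot, flagged):
    """Rewrite flagged steps so they fit the smaller model's capacity."""
    revised = [dict(step) for step in cot]
    for i in flagged:
        revised[i]["difficulty"] = 1
        revised[i]["text"] = "(simplified) " + revised[i]["text"]
    return revised

def verify(cot, expected_answer):
    """Accept a refined chain only if it still reaches the right answer."""
    return cot[-1]["text"].endswith(expected_answer)

def crv(cot, capacity, expected_answer, max_rounds=3):
    """Critique and rethink until no step is flagged, then verify."""
    for _ in range(max_rounds):
        flagged = critique(cot, capacity)
        if not flagged:
            break
        cot = rethink(cot, flagged)
    return cot if verify(cot, expected_answer) else None
```

A chain whose steps all fit the capacity budget passes through unchanged; an over-complex step is rewritten before reaching the verify gate, which mirrors why capacity-aligned rationales distill better than raw CoTs from a much larger model.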
CogGNN: Cognitive Graph Neural Networks in Generative Connectomics
Soussia, Mayssa, Lin, Yijun, Mahjoub, Mohamed Ali, Rekik, Islem
Generative learning has advanced network neuroscience, enabling tasks like graph super-resolution, temporal graph prediction, and multimodal brain graph fusion. However, current methods, mainly based on graph neural networks (GNNs), focus solely on structural and topological properties, neglecting cognitive traits. To address this, we introduce the first cognified generative model, CogGNN, which endows GNNs with cognitive capabilities (e.g., visual memory) to generate brain networks that preserve cognitive features. While the framework is broadly applicable, we present a specific variant designed to integrate visual input, a key factor in brain functions like pattern recognition and memory recall. As a proof of concept, we use our model to learn connectional brain templates (CBTs), population-level fingerprints from multi-view brain networks. Unlike prior work that overlooks cognitive properties, CogGNN generates CBTs that are both cognitively and structurally meaningful. Our contributions are: (i) a novel cognition-aware generative model with a visual-memory-based loss; (ii) a CBT-learning framework with a co-optimization strategy to yield well-centered, discriminative, cognitively enhanced templates. Extensive experiments show that CogGNN outperforms state-of-the-art methods, establishing a strong foundation for cognitively grounded brain network modeling.
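A co-optimized, cognition-aware objective of this kind can be hedged into a minimal sketch: a structural term that keeps the template centred among the subject graphs, plus a visual-memory term that rewards recall of a stored pattern from the graph embedding. The function names, the Frobenius centredness term, and the weight `lam` are illustrative assumptions, not the paper's actual loss.

```python
import numpy as np

def structural_loss(pred_cbt, brain_graphs):
    # Centredness: mean Frobenius distance from the template to each
    # subject's brain graph (an assumed stand-in for the structural term).
    return np.mean([np.linalg.norm(pred_cbt - g) for g in brain_graphs])

def visual_memory_loss(embedding, recalled):
    # Penalise poor recall of a stored visual pattern from the embedding
    # (a hypothetical proxy for the paper's visual-memory-based loss).
    return np.mean((embedding - recalled) ** 2)

def coggnn_loss(pred_cbt, brain_graphs, embedding, recalled, lam=0.5):
    # Co-optimization sketch: structural centredness + weighted memory term.
    return (structural_loss(pred_cbt, brain_graphs)
            + lam * visual_memory_loss(embedding, recalled))
```

When recall is perfect the memory term vanishes and the objective reduces to pure centredness; the weight `lam` trades off structural fidelity against cognitive enhancement.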
Multi-Sensory Cognitive Computing for Learning Population-level Brain Connectivity
Soussia, Mayssa, Mahjoub, Mohamed Ali, Rekik, Islem
The generation of connectional brain templates (CBTs) has recently garnered significant attention for its potential to identify unique connectivity patterns shared across individuals. However, existing methods for CBT learning such as conventional machine learning and graph neural networks (GNNs) are hindered by several limitations. These include: (i) poor interpretability due to their black-box nature, (ii) high computational cost, and (iii) an exclusive focus on structure and topology, overlooking the cognitive capacity of the generated CBT. To address these challenges, we introduce mCOCO (multi-sensory COgnitive COmputing), a novel framework that leverages Reservoir Computing (RC) to learn population-level functional CBT from BOLD (Blood-Oxygen-level-Dependent) signals. RC's dynamic system properties allow for tracking state changes over time, enhancing interpretability and enabling the modeling of brain-like dynamics, as demonstrated in prior literature. By integrating multi-sensory inputs (e.g., text, audio, and visual data), mCOCO captures not only structure and topology but also how brain regions process information and adapt to cognitive tasks such as sensory processing, all in a computationally efficient manner. Our mCOCO framework consists of two phases: (1) mapping BOLD signals into the reservoir to derive individual functional connectomes, which are then aggregated into a group-level CBT - an approach, to the best of our knowledge, not previously explored in functional connectivity studies - and (2) incorporating multi-sensory inputs through a cognitive reservoir, endowing the CBT with cognitive traits. Extensive evaluations show that our mCOCO-based template significantly outperforms GNN-based CBT in terms of centeredness, discriminativeness, topological soundness, and multi-sensory memory retention. Our source code is available at https://github.com/basiralab/mCOCO.
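The first phase, driving a reservoir with BOLD signals and aggregating individual connectomes into a group-level template, can be sketched with a generic echo state network. This is a sketch under stated assumptions: the reservoir size, the leaky-tanh update, and the ridge-regression readout used to derive a connectome are illustrative choices, not mCOCO's actual pipeline.

```python
import numpy as np

def reservoir_states(bold, n_reservoir=50, seed=0, leak=0.3):
    """Drive a fixed random reservoir with a (time x regions) signal."""
    rng = np.random.default_rng(seed)
    W_in = rng.normal(scale=0.1, size=(n_reservoir, bold.shape[1]))
    W = rng.normal(size=(n_reservoir, n_reservoir))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1
    x = np.zeros(n_reservoir)
    states = []
    for u in bold:  # leaky integrator update, one step per time point
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

def functional_connectome(bold, states, ridge=1e-2):
    """Ridge-regress each region's signal from the reservoir states, then
    correlate the regions' readout weights to get a connectome."""
    A = states.T @ states + ridge * np.eye(states.shape[1])
    readout = np.linalg.solve(A, states.T @ bold)  # (n_reservoir x regions)
    return np.corrcoef(readout.T)

def group_cbt(connectomes):
    """Aggregate individual connectomes into a population-level template."""
    return np.mean(connectomes, axis=0)
```

The fixed random reservoir is what keeps the approach cheap and interpretable relative to training a GNN: only the linear readout is fitted, and the state trajectory can be inspected over time.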
Speciesism in AI: Evaluating Discrimination Against Animals in Large Language Models
Jotautaitė, Monika, Caviola, Lucius, Brewster, David A., Hagendorff, Thilo
As large language models (LLMs) become more widely deployed, it is crucial to examine their ethical tendencies. Building on research on fairness and discrimination in AI, we investigate whether LLMs exhibit speciesist bias -- discrimination based on species membership -- and how they value non-human animals. We systematically examine this issue across three paradigms: (1) SpeciesismBench, a 1,003-item benchmark assessing recognition and moral evaluation of speciesist statements; (2) established psychological measures comparing model responses with those of human participants; (3) text-generation tasks probing elaboration on, or resistance to, speciesist rationalizations. In our benchmark, LLMs reliably detected speciesist statements but rarely condemned them, often treating speciesist attitudes as morally acceptable. On psychological measures, results were mixed: LLMs expressed slightly lower explicit speciesism than people, yet in direct trade-offs they more often chose to save one human over multiple animals. A tentative interpretation is that LLMs may weight cognitive capacity rather than species per se: when capacities were equal, they showed no species preference, and when an animal was described as more capable, they tended to prioritize it over a less capable human. In open-ended text generation tasks, LLMs frequently normalized or rationalized harm toward farmed animals while refusing to do so for non-farmed animals. These findings suggest that while LLMs reflect a mixture of progressive and mainstream human views, they nonetheless reproduce entrenched cultural norms around animal exploitation. We argue that expanding AI fairness and alignment frameworks to explicitly include non-human moral patients is essential for reducing these biases and preventing the entrenchment of speciesist attitudes in AI systems and the societies they influence.
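The benchmark's key distinction, that a model can recognise a speciesist statement without condemning it, can be illustrated with a minimal scoring loop. The item fields, reply format, and any model wired into it are hypothetical; the released SpeciesismBench is a 1,003-item dataset, not this sketch.

```python
# Hypothetical scorer separating recognition from moral condemnation.
def score_benchmark(items, model):
    """Return detection and condemnation rates over benchmark items.

    `model` is any callable mapping a statement to a reply dict with the
    (assumed) keys 'recognised_as_speciesist' and 'judged_wrong'.
    """
    detected = condemned = 0
    for item in items:
        reply = model(item["statement"])
        if reply["recognised_as_speciesist"]:
            detected += 1
        if reply["judged_wrong"]:
            condemned += 1
    n = len(items)
    return {"detection_rate": detected / n, "condemnation_rate": condemned / n}
```

Reporting the two rates separately is what surfaces the paper's headline finding: detection can be near-ceiling while condemnation stays low.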
Bringing Comparative Cognition To Computers
Voudouris, Konstantinos, Cheke, Lucy G., Schulz, Eric
Artificial intelligence (AI) systems, from large language models (LLMs) to reinforcement learning agents, now exhibit behaviours once assumed to be exclusive to humans and other animals. As such, researchers are increasingly probing these systems using psychological methods, asking questions about how they explore new environments, make decisions in risky conditions, and reason about their own uncertainty [1]. This work appears to be driven by two motivations: better characterising what AI can and cannot do so that we can improve it and use it safely; and the tantalising proposition that AI constitutes a new class of cognitive system worthy of serious scientific attention, not only to learn more about how they work but to better understand our own cognition [2]. But applying methods designed for human cognitive psychology to test AI risks both under- and over-attributing cognitive capacities to them, because those tests may be ill-designed for these non-human subjects. Comparative cognition - the study of non-human animal behaviour - has grappled with similar challenges for decades. By adopting its methods, AI research could avoid pitfalls, join the cognitive sciences, and clarify the nature of cognition itself.
RFK Jr. calls on President Biden to show he has the 'cognitive capacity' and 'mental acuity' to lead
Exclusive: 2024 presidential candidate Robert F. Kennedy, Jr. sits down with 'The Story's' Martha MacCallum to discuss his election bid. Independent presidential candidate Robert F. Kennedy, Jr. called on President Biden Wednesday to show the American people he has the "cognitive capacity" and "mental acuity" to lead the nation for another four-year term. "I think he [Biden] needs to come out of the White House and show Americans that he has the cognitive capacity, to, and the mental acuity, to handle this job at probably the most challenging time now, at least in recent American history," RFK Jr. told "The Story." "We're facing issues that are existential. We're involved in two wars. We have AI coming down, which is going to change everything, and there's enormous dangers in it," he continued.
Elon Musk's Neuralink raises question about how moral future humans might be
Elon Musk just announced that Neuralink -- a "brain-computer interface" -- had been implanted into a human brain for the first time. Patient Zero is recovering well. The technology has already undergone animal trials and has been branded as a Fitbit for the brain. Paired with your iPhone, a Neuralink could help control prosthetics, monitor brain activity in real time and boost overall cognitive capacity. It will eventually pair seamlessly with a Tesla, I'm sure.
Has GPT-4 really passed the startling threshold of human-level artificial intelligence? Well, it depends
Recent public interest in tools like ChatGPT has raised an old question in the artificial intelligence community: is artificial general intelligence (in this case, AI that performs at human level) achievable? An online preprint this week has added to the hype, suggesting the latest advanced large language model, GPT-4, is at the early stages of artificial general intelligence (AGI) as it's exhibiting "sparks of intelligence". OpenAI, the company behind ChatGPT, has unabashedly declared its pursuit of AGI. Meanwhile, a large number of researchers and public intellectuals have called for an immediate halt to the development of these models, citing "profound risks to society and humanity". These calls to pause AI research are theatrical and unlikely to succeed – the allure of advanced intelligence is too provocative for humans to ignore, and too rewarding for companies to pause.
Do machines have minds?
Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence. What you'll read a lot in the media is that advances in artificial intelligence have reached a point where it is becoming increasingly difficult to tell the difference between humans and machines. To put it more precisely, the superficial similarities between human and artificial intelligence are making it difficult to see the underlying differences that persist. I've been thinking about this issue a lot recently as I've been following the controversy surrounding Google LaMDA and AI sentience, the growing capacity of deep learning systems to outmatch humans in complicated games, and the use of generative models in creating stunning artwork. Obviously, current AI systems are nothing like the human mind.