Why Experts Can't Agree on Whether AI Has a Mind

TIME - Tech

Pillay is an editorial fellow at TIME.

"I'm not used to getting nasty emails from a holy man," says Professor Michael Levin, a developmental biologist at Tufts University. Levin was presenting his research to a group of engineers interested in spiritual matters in India, arguing that properties like "mind" and intelligence can be observed even in cellular systems, and that they exist on a spectrum. But when he pushed further--arguing that the same properties emerge everywhere, including in computers--the reception shifted.


Never Out of Date: How Hannah Arendt Helps Us Understand Our World

Der Spiegel International

Fifty years after her death in New York, Hannah Arendt has become the most popular philosopher of our time. For good reason: Her views are just as timely as ever.

It must be so nice to play Hannah Arendt. No fewer than five actresses are on stage this evening at the Deutsches Theater Berlin to portray the philosopher. The piece is an adaptation of the graphic novel by American illustrator Ken Krimstein about the philosopher's life, called "The Three Escapes of Hannah Arendt," combined with scenes from the famous interview that journalist Günter Gaus conducted with Arendt in 1964 for German public broadcaster ZDF. The article you are reading originally appeared in German in issue 49/2025 (November 28th, 2025) of DER SPIEGEL. They play Arendt and a few of her contemporaries: the philosopher Martin Heidegger, the writer Walter Benjamin, her husband Heinrich Blücher. There is a great deal of spoken text in the play, especially from Arendt herself. The places of her life are ticked off, her ...


AI may blunt our thinking skills – here's what you can do about it

New Scientist

There is growing evidence that our reliance on generative AI tools is reducing our ability to think clearly and critically, but it doesn't have to be that way.

Socrates wasn't the greatest fan of the written word. Famous for leaving no texts to posterity, the great philosopher is said to have believed that a reliance on writing destroys the memory and weakens the mind. Some 2400 years later, Socrates's fears seem misplaced - particularly in light of evidence that writing things down improves memory formation. A growing number of psychologists, neuroscientists and philosophers worry that ChatGPT and similar generative AI tools will chip away at our powers of information recall and blunt our capacity for clear reasoning. What's more, while Socrates relied on clever rhetoric to make his argument, these researchers are grounding theirs in empirical data.



AI's Next Frontier? An Algorithm for Consciousness

WIRED

Some of the world's most interesting thinkers about thinking think they might've cracked machine sentience. And I think they might be onto something. As a journalist who covers AI, I hear from countless people who seem utterly convinced that ChatGPT, Claude, or some other chatbot has achieved "sentience." The Turing test was aced a while back, yes, but unlike rote intelligence, these things are not so easily pinned down. Large language models will claim to think for themselves, even describe inner torments or profess undying loves, but such statements don't imply interiority.


Discerning What Matters: A Multi-Dimensional Assessment of Moral Competence in LLMs

Kilov, Daniel, Hendy, Caroline, Guyot, Secil Yanik, Snoswell, Aaron J., Lazar, Seth

arXiv.org Artificial Intelligence

Moral competence is the ability to act in accordance with moral principles. As large language models (LLMs) are increasingly deployed in situations demanding moral competence, there is increasing interest in evaluating this ability empirically. We review existing literature and identify three significant shortcomings: (i) over-reliance on prepackaged moral scenarios with explicitly highlighted moral features; (ii) focus on verdict prediction rather than moral reasoning; and (iii) inadequate testing of models' (in)ability to recognize when additional information is needed. Grounded in philosophical research on moral skill, we then introduce a novel method for assessing moral competence in LLMs. Our approach moves beyond simple verdict comparisons to evaluate five dimensions of moral competence: identifying morally relevant features, weighting their importance, assigning moral reasons to these features, synthesizing coherent moral judgments, and recognizing information gaps. We conduct two experiments comparing six leading LLMs against non-expert humans and professional philosophers. In our first experiment, using ethical vignettes standard to existing work, LLMs generally outperformed non-expert humans across multiple dimensions of moral reasoning. However, our second experiment, featuring novel scenarios designed to test moral sensitivity by embedding relevant features among irrelevant details, revealed a striking reversal: several LLMs performed significantly worse than humans. Our findings suggest that current evaluations may substantially overestimate LLMs' moral reasoning capabilities by eliminating the task of discerning moral relevance from noisy information, which we take to be a prerequisite for genuine moral skill. This work provides a more nuanced framework for assessing AI moral competence and highlights important directions for improving moral competence in advanced AI systems.


Does Society Have Too Many Rules?

The New Yorker

When regular people seem burdened by bureaucracy, and the powerful act as they choose, it's worth asking whether we've forgotten what makes rules effective. I live in a three-generation household. Our place is big, but crowded: all of us have hobbies, and so every shelf or surface contains toys, books, art supplies, sporting goods, craft projects, cameras, musical instruments, or kitchen gadgets. Before the table can be set for dinner, it must be cleared of a board game or marble run. My desk, where I aim to write in the mornings, has been repurposed as a drone-repair workshop. The property includes two broken-down sheds and a garage.


Controller synthesis method for multi-agent system based on temporal logic specification

Huang, Ruohan, Cao, Zining

arXiv.org Artificial Intelligence

Controller synthesis is a theoretical approach to the systematic design of discrete event systems. It constructs a controller that provides feedback and control to the system, ensuring it meets specified control specifications. Traditional controller synthesis methods often use formal languages to describe control specifications and are mainly oriented towards single-agent, non-probabilistic systems. As systems grow more complex, so do the control requirements they must satisfy. Against this backdrop, this paper proposes a controller synthesis method for semi-cooperative, semi-competitive multi-agent probabilistic discrete event systems, addressing the controller synthesis problem under temporal logic specifications. The synthesized controller ensures satisfaction of the specification to a quantifiable extent. The specification is given as a linear temporal logic formula, and the paper designs a synthesis algorithm that incorporates probabilistic model checking. Finally, the effectiveness of the method is verified through a case study.
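The abstract's core combination, checking a probabilistic system against a temporal logic specification and extracting a controller that maximizes the probability of satisfying it, can be sketched on a toy example. This is a generic illustration of probabilistic model checking for the LTL reachability formula "F goal" ("eventually goal"), not the paper's actual algorithm; the states, actions, and probabilities below are invented for the sketch.

```python
# Sketch: synthesize a controller (policy) maximizing the probability of
# satisfying the LTL reachability specification "F goal" on a tiny Markov
# decision process, via value iteration. All model details are illustrative.

MDP = {
    # state -> action -> list of (next_state, probability)
    "s0": {"a": [("s1", 0.8), ("bad", 0.2)],
           "b": [("s0", 0.5), ("s1", 0.5)]},
    "s1": {"a": [("goal", 0.9), ("bad", 0.1)],
           "b": [("s0", 0.5), ("bad", 0.5)]},
    "goal": {},   # absorbing: specification satisfied
    "bad": {},    # absorbing: specification violated
}

def synthesize(mdp, goal="goal", iters=1000):
    # v[s] approximates the maximal probability of eventually reaching `goal`.
    v = {s: (1.0 if s == goal else 0.0) for s in mdp}
    for _ in range(iters):
        for s, acts in mdp.items():
            if acts:
                v[s] = max(sum(p * v[t] for t, p in succ)
                           for succ in acts.values())
    # The controller picks, in each state, an action achieving the maximum.
    policy = {s: max(acts, key=lambda a: sum(p * v[t] for t, p in acts[a]))
              for s, acts in mdp.items() if acts}
    return v, policy

values, controller = synthesize(MDP)
print(controller)              # -> {'s0': 'b', 's1': 'a'}
print(round(values["s0"], 3))  # -> 0.9
```

In this toy model the "patient" action b in s0 is optimal: it reaches s1 with probability 1 and from there action a satisfies the specification with probability 0.9. Tools such as PRISM perform this kind of computation for full PCTL/LTL specifications.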


Mechanistic Interpretability Needs Philosophy

Williams, Iwan, Oldenburg, Ninell, Dhar, Ruchira, Hatherley, Joshua, Fierro, Constanza, Rajcic, Nina, Schiller, Sandrine R., Stamatiou, Filippos, Søgaard, Anders

arXiv.org Artificial Intelligence

Mechanistic interpretability (MI) aims to explain how neural networks work by uncovering their underlying causal mechanisms. As the field grows in influence, it is increasingly important to examine not just models themselves, but the assumptions, concepts and explanatory strategies implicit in MI research. We argue that mechanistic interpretability needs philosophy: not as an afterthought, but as an ongoing partner in clarifying its concepts, refining its methods, and assessing the epistemic and ethical stakes of interpreting AI systems. Taking three open problems from the MI literature as examples, this position paper illustrates the value philosophy can add to MI research, and outlines a path toward deeper interdisciplinary dialogue.


The philosopher's machine: my conversation with Peter Singer's AI chatbot

The Guardian

"I'm Peter Singer AI," the avatar says. I am almost expecting it to continue, like a reincarnated Clippy: "It looks like you're trying to solve a problem." The problem I am trying to solve is why Peter Singer, the man who has been called the world's most influential living philosopher, has created a chatbot. And also, whether it is any good.

Me: Why do you exist?