
Objection


Can machines perform a qualitative data analysis? Reading the debate with Alan Turing

De Paoli, Stefano

arXiv.org Artificial Intelligence

This paper reflects on the literature that rejects the use of Large Language Models (LLMs) in qualitative data analysis. It illustrates, through empirical evidence as well as critical reflections, why the current critical debate is focusing on the wrong problems. The paper proposes that the focus of research on the use of LLMs for qualitative analysis is not the method per se, but rather the empirical investigation of an artificial system performing an analysis. The paper builds on the seminal work of Alan Turing and reads the current debate using key ideas from Turing's "Computing Machinery and Intelligence". This paper therefore reframes the debate on qualitative analysis with LLMs: rather than asking whether machines can perform qualitative analysis in principle, we should ask whether, with LLMs, we can produce analyses that are sufficiently comparable to those of human analysts. In the final part, the contrary views to performing qualitative analysis with LLMs are analysed using the same writing and rhetorical style that Turing used in his seminal work.


In Defense of the Turing Test and its Legacy

Gonçalves, Bernardo

arXiv.org Artificial Intelligence

I argue that Turing's original test was co-opted by Weizenbaum and that six of the most common criticisms of the Turing test are unfair to both Turing's argument and the historical development of AI. The Turing test has faced criticism for decades, most recently at the Royal Society event "Celebrating the 75th Anniversary of the Turing Test." The question of the Turing test's significance has intensified with recent advances in large language model technology, which now enable machines to pass it. In this article, I address six of the most common criticisms of the Turing test: the Turing test encourages fooling people; Turing overestimated human intelligence, as people can be easily fooled (the ELIZA effect); the Turing test is not a good benchmark for AI; Turing's 1950 paper is not serious and/or has contradictions; imitation should not be a goal for AI, and it is also harmful to society; passing the Turing test teaches nothing about AI. All six criticisms largely derive from Joseph Weizenbaum's influential reinterpretation of the Turing test. The first four fail to withstand a close examination of the internal logic of Turing's 1950 paper, particularly when the paper is situated within its mid-twentieth-century context.


Consciousness in Artificial Intelligence? A Framework for Classifying Objections and Constraints

Campero, Andres, Shiller, Derek, Aru, Jaan, Simon, Jonathan

arXiv.org Artificial Intelligence

We develop a taxonomical framework for classifying challenges to the possibility of consciousness in digital artificial intelligence systems. This framework allows us to identify the level of granularity at which a given challenge is intended (the levels we propose correspond to Marr's levels) and to disambiguate its degree of force: is it a challenge to computational functionalism that leaves the possibility of digital consciousness open (degree 1), a practical challenge to digital consciousness that suggests improbability without claiming impossibility (degree 2), or an argument claiming that digital consciousness is strictly impossible (degree 3)? We apply this framework to 14 prominent examples from the scientific and philosophical literature. Our aim is not to take a side in the debate, but to provide structure and a tool for disambiguating between challenges to computational functionalism and challenges to digital consciousness, as well as between different ways of parsing such challenges.


AI-powered nimbyism could grind UK planning system to a halt, experts warn

The Guardian

One leading planning lawyer warned such AI services could "supercharge nimbyism". Tools that help people scan applications and find grounds for objection have the potential to hit the government's housebuilding plans. The government's plan to use artificial intelligence to accelerate planning for new homes may be about to hit an unexpected roadblock: AI-powered nimbyism. A new service called Objector is offering "policy-backed objections in minutes" to people who are upset about planning applications near their homes. It uses generative AI to scan planning applications and check for grounds for objection, ranking these as "high", "medium" or "low" impact. It then automatically creates objection letters, AI-written speeches to deliver to the planning committees, and even AI-generated videos to "influence councillors".


Epistemic Deference to AI

Lange, Benjamin

arXiv.org Artificial Intelligence

When should we defer to AI outputs over human expert judgment? Drawing on recent work in social epistemology, I motivate the idea that some AI systems qualify as Artificial Epistemic Authorities (AEAs) due to their demonstrated reliability and epistemic superiority. I then introduce AI Preemptionism, the view that AEA outputs should replace rather than supplement a user's independent epistemic reasons. I show that classic objections to preemptionism - such as uncritical deference, epistemic entrenchment, and unhinging epistemic bases - apply in amplified form to AEAs, given their opacity, self-reinforcing authority, and lack of epistemic failure markers. Against this, I develop a more promising alternative: a total evidence view of AI deference. According to this view, AEA outputs should function as contributory reasons rather than outright replacements for a user's independent epistemic considerations. This approach has three key advantages: (i) it mitigates expertise atrophy by keeping human users engaged, (ii) it provides an epistemic case for meaningful human oversight and control, and (iii) it explains the justified mistrust of AI when reliability conditions are unmet. While demanding in practice, this account offers a principled way to determine when AI deference is justified, particularly in high-stakes contexts requiring rigorous reliability.


The Indian woman who stood up to moral policing - and won a pageant

BBC News

Muskan Sharma stood up to men who tried to bully her over her clothes - and went on to win hearts and a beauty pageant. The 23-year-old, who was crowned Miss Rishikesh 2025 last week in the northern Indian state of Uttarakhand, told the BBC that even though it was a small local pageant, it "made me feel like Miss Universe". Sharma's win has made headlines in India as it came after a viral video that showed her spiritedly arguing with a man who barged into their rehearsals just a day before the 4 October contest. Sharma, who had wanted to be a model and participate in a pageant "since I was in school", said the intruders came in just as they broke for lunch. "We were sitting around, chilling, having a laugh when they walked in," she said.


A Woodland Hills nursery is turning into a cemetery. Some locals are fighting it

Los Angeles Times

Aerial view of where groves will turn to graves in Woodland Hills, where a developer has plans to redevelop Boething Treeland Nursery into a cemetery.


Review for NeurIPS paper: STEER : Simple Temporal Regularization For Neural ODE

Neural Information Processing Systems

Additional Feedback: While I have raised severe objections, I still believe that the method itself may have strong merits. Please consider the following questions and suggestions: a) If the authors deem the theorem really necessary, it needs to be made clearer why the regular Picard-Lindelöf theorem does not apply. Maybe I misunderstood something on a very fundamental level. If not, simply consider removing the section. However, I don't think the method necessarily requires a stiffness discussion. Is there a dependence between suitable ranges of the parameter b and different solvers/model architectures?


Agnosticism About Artificial Consciousness

McClelland, Tom

arXiv.org Artificial Intelligence

Could an AI have conscious experiences? Any answer to this question should conform to Evidentialism - that is, it should be based not on intuition, dogma or speculation but on solid scientific evidence. I argue that such evidence is hard to come by and that the only justifiable stance on the prospects of artificial consciousness is agnosticism. In the current debate, the main division is between biological views that are sceptical of artificial consciousness and functional views that are sympathetic to it. I argue that both camps make the same mistake of over-estimating what the evidence tells us. Scientific insights into consciousness have been achieved through the study of conscious organisms. Although this has enabled cautious assessments of consciousness in various creatures, extending this to AI faces serious obstacles. AI thus presents consciousness researchers with a dilemma: either reach a verdict on artificial consciousness but violate Evidentialism; or respect Evidentialism but offer no verdict on the prospects of artificial consciousness. The dominant trend in the literature has been to take the first option while purporting to follow the scientific evidence. I argue that if we truly follow the evidence, we must take the second option and adopt agnosticism.


Chatting with Bots: AI, Speech Acts, and the Edge of Assertion

Williams, Iwan, Bayne, Tim

arXiv.org Artificial Intelligence

This paper addresses the question of whether large language model-powered chatbots are capable of assertion. According to what we call the Thesis of Chatbot Assertion (TCA), chatbots are the kinds of things that can assert, and at least some of the output produced by current-generation chatbots qualifies as assertion. We provide some motivation for TCA, arguing that it ought to be taken seriously and not simply dismissed. We also review recent objections to TCA, arguing that these objections are weighty. We thus confront the following dilemma: how can we do justice to both the considerations for and against TCA? We consider two influential responses to this dilemma - the first appeals to the notion of proxy-assertion; the second appeals to fictionalism - and argue that neither is satisfactory. Instead, reflecting on the ontogenesis of assertion, we argue that we need to make space for a category of proto-assertion. We then apply the category of proto-assertion to chatbots, arguing that treating chatbots as proto-assertors provides a satisfactory resolution to the dilemma of chatbot assertion.