Humanity in the Age of AI: Reassessing 2025's Existential-Risk Narratives

Louadi, Mohamed El

arXiv.org Artificial Intelligence

Two 2025 publications, "AI 2027" (Kokotajlo et al., 2025) and "If Anyone Builds It, Everyone Dies" (Yudkowsky & Soares, 2025), assert that superintelligent artificial intelligence will almost certainly destroy or render humanity obsolete within the next decade. Both rest on the classic chain formulated by Good (1965) and Bostrom (2014): intelligence explosion, superintelligence, lethal misalignment. This article subjects each link to the empirical record of 2023-2025. Sixty years after Good's speculation, none of the required phenomena (sustained recursive self-improvement, autonomous strategic awareness, or intractable lethal misalignment) have been observed. Current generative models remain narrow, statistically trained artefacts: powerful, opaque, and imperfect, but devoid of the properties that would make the catastrophic scenarios plausible. Following Whittaker (2025a, 2025b, 2025c) and Zuboff (2019, 2025), we argue that the existential-risk thesis functions primarily as an ideological distraction from the ongoing consolidation of surveillance capitalism and extreme concentration of computational power. The thesis is further inflated by the 2025 AI speculative bubble, where trillions in investments in rapidly depreciating "digital lettuce" hardware (McWilliams, 2025) mask lagging revenues and jobless growth rather than heralding superintelligence. The thesis remains, in November 2025, a speculative hypothesis amplified by a speculative financial bubble rather than a demonstrated probability.


Irresponsible AI: big tech's influence on AI research and associated impacts

Hernandez-Garcia, Alex, Volokhova, Alexandra, Williams, Ezekiel, Kabakibo, Dounia Shaaban

arXiv.org Artificial Intelligence

The accelerated development, deployment and adoption of artificial intelligence systems has been fuelled by the increasing involvement of big tech. This has been accompanied by increasing ethical concerns and intensified societal and environmental impacts. In this article, we review and discuss how these phenomena are deeply entangled. First, we examine the growing and disproportionate influence of big tech in AI research and argue that its drive for scaling and general-purpose systems is fundamentally at odds with the responsible, ethical, and sustainable development of AI. Second, we review key current environmental and societal negative impacts of AI and trace their connections to big tech and its underlying economic incentives. Finally, we argue that while it is important to develop technical and regulatory approaches to these challenges, these alone are insufficient to counter the distortion introduced by big tech's influence. We thus review and propose alternative strategies that build on the responsibility of implicated actors and collective action.


Accessibility Considerations in the Development of an AI Action Plan

Mankoff, Jennifer, Light, Janice, Coughlan, James, Vogler, Christian, Glasser, Abraham, Vanderheiden, Gregg, Rice, Laura

arXiv.org Artificial Intelligence

AI has the potential to empower everyone to become more independent and self-sufficient. The increasing use of artificial intelligence (AI)-based technologies in everyday settings creates new opportunities to understand how disabled people might use these technologies [Glazko, 2023]. It also enables the development of new types of assistive technologies, as well as new ways for people with disabilities to interact with technology that are both simpler (for those who need simplicity) and more efficient and effective for those who cannot use traditional interfaces. AI has been rapidly taken up in almost all accessibility communities [Adnin 2024, Alharbi 2024, Jiang 2024, Bennett 2024, Valencia 2023]. Since becoming widely available to the public, Generative Artificial Intelligence (GAI) has steadily gained recognition for its potential as a valuable tool in the private sector and by government, as well as a tool for accessibility. Studies of blind and visually impaired individuals have found that they use GAI to 'offload' cognitively demanding tasks and obtain personal help such as fashion advice (e.g., [Xie 2024]), and to create content or retrieve information [Adnin 2024]. A study of GAI use by neurodiverse users found GAI can both support and complicate tasks like code-switching, emotional regulation, and accessing information [Glazko, 2025]. A study of people who use AAC found it helpful for text input [Valencia 2023]. However, there are concerns with a technology that is often based on probability and thus tends toward the most common case rather than those at the margins.


Palmer Luckey's vision for the future of mixed reality

MIT Technology Review

Silicon Valley players are poised to benefit. One of them is Palmer Luckey, the founder of the virtual-reality headset company Oculus, which he sold to Facebook for $2 billion. After Luckey's highly public ousting from Meta, he founded Anduril, which focuses on drones, cruise missiles, and other AI-enhanced technologies for the US Department of Defense. The company is now valued at $14 billion. My colleague James O'Donnell interviewed Luckey about his new pet project: headsets for the military.


How We Chose the TIME100 Most Influential People in AI 2024

TIME - Tech

As we were finishing this year's TIME100 AI, I had two conversations, with two very different TIME100 AI honorees, that made clear the stakes of this technological transformation. Sundar Pichai, who joined Google in 2004 and became CEO of the world's fourth most valuable company nine years ago, told me that introducing the company's billions of users to artificial intelligence through Google's products amounts to "one of the biggest improvements we've done in 20 years." Speaking that same day, Meredith Whittaker, a former Google employee and critic of the company who, as the president of Signal, has become one of the world's most influential advocates for privacy, expressed alarm at the dangers posed by the fact that so much of the AI revolution depends on the infrastructure and decisions of only a handful of big players in tech. Our purpose in creating the TIME100 AI is to put leaders like Pichai and Whittaker in dialogue and to open up their views to TIME's readers. That is why we are excited to share with you the second edition of the TIME100 AI.


An Autonomous Robotic System for Mapping Abandoned Mines

Neural Information Processing Systems

We present the software architecture of a robotic system for mapping abandoned mines. The software is capable of acquiring consistent 2D maps of large mines with many cycles, represented as Markov random fields. Our system has been deployed in three abandoned mines, two of which were inaccessible to people, where it has acquired maps of unprecedented detail and accuracy.


AI Doomerism Is a Decoy

The Atlantic - Technology

On Tuesday morning, the merchants of artificial intelligence warned once again about the existential might of their products. Hundreds of AI executives, researchers, and other tech and business figures, including OpenAI CEO Sam Altman and Bill Gates, signed a one-sentence statement written by the Center for AI Safety declaring that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Those 22 words were released following a multi-week tour in which executives from OpenAI, Microsoft, Google, and other tech companies called for limited regulation of AI. They spoke before Congress, in the European Union, and elsewhere about the need for industry and governments to collaborate to curb their product's harms--even as their companies continue to invest billions in the technology. Several prominent AI researchers and critics told me that they're skeptical of the rhetoric, and that Big Tech's proposed regulations appear defanged and self-serving.


AI Is an Insult Now

The Atlantic - Technology

If you want to really hurt someone's feelings in the year 2023, just call them an AI. An all-star cast of celebrities and public figures have recently been the victim of such jokes: the NBA player Jordan Poole ("AI Steph Curry"), Raquel Leviss from the reality-TV show Vanderpump Rules ("what would happen if you asked chat GBT [sic] to create an American girl"), Transportation Secretary Pete Buttigieg ("our first A.I. cabinet member?"). That these slights span the three pillars of American life--sports, politics, Bravo--suggests that no one, or rather nothing, is safe. Such digs have popped up all over social media; on Twitter alone, insults like these have been levied against TV shows, songs, sports uniforms, commencement speeches, White House press releases, proposed legislation, and lots of news articles. That AI has become an attack is a result of the huge moment for AI we're in.


Is AI coming for your job? Tech experts weigh in: "They don't replace human labor" - CBS News

#artificialintelligence

Amid major developments in the field of artificial intelligence, there's a question many of us have been asking ourselves: How long until machines replace us? New systems from Google and Microsoft -- plus a Microsoft partner called OpenAI -- are capable of doing things we used to think were uniquely human, like creating original art and generating original writing. But so are the fears -- about jobs and wages in particular. As artificial intelligence gets better, some expect job security will get worse. In reports like one in Gizmodo earlier this month, titled, "Here Are the Jobs Our New AI Overlords Plan to Kill," coding or computer programming is often on the list.


As AI rises, lawmakers try to catch up - abtlive

#artificialintelligence

From "intelligent" vacuum cleaners and driverless cars to advanced techniques for diagnosing diseases, artificial intelligence has burrowed its way into every arena of modern life. Its promoters reckon it is revolutionising human experience, but critics stress that the technology risks putting machines in charge of life-changing decisions. Regulators in Europe and North America are worried. The European Union is likely to pass legislation next year, the AI Act, aimed at reining in the age of the algorithm. The United States recently published a blueprint for an AI Bill of Rights, and Canada is also mulling legislation.