Philosophy


The Download: US immigration agencies' AI videos, and inside the Vitalism movement

MIT Technology Review

Plus: French company Capgemini has confirmed it's no longer working with ICE. The US Department of Homeland Security is using AI video generators from Google and Adobe to make and edit content shared with the public, a new document reveals. The document, released on Wednesday, provides an inventory of which commercial AI tools DHS uses for tasks ranging from generating drafts of documents to managing cybersecurity. It comes as immigration agencies have flooded social media with content to support President Trump's mass deportation agenda--some of which appears to be made with AI--and as tech workers have pressured their employers to denounce the agencies' activities. For the last couple of years, I've been following the progress of a group of individuals who believe death is humanity's "core problem." Put simply, they say death is wrong--for everyone. They've even said it's morally wrong.


Reid Hoffman Wants Silicon Valley to 'Stand Up' Against the Trump Administration

WIRED

The LinkedIn cofounder and frequent Trump target has a simple message for his peers: "Just speak up about the things that you think are true." Reid Hoffman doesn't do much in half measures. He cofounded LinkedIn, of course, and helped bankroll companies including Meta and Airbnb in their startup days. He has also fashioned himself, via books, podcasts, and other public appearances, as something of a public intellectual--a pro-capitalist philosopher who still insists that tech can be a force for good. Most recently, Hoffman has emerged as one of Silicon Valley's most prominent defenders of artificial intelligence. His newest book, published in 2025, makes the case that AI won't diminish human capacity but will instead amplify it. Hoffman even relied on AI to make one of the most unconventional--and perhaps uncomfortable, depending on your view of AI-generated creativity--Christmas gifts I've heard of lately. Whatever you think of Hoffman's utopian views on AI, credit where due: he's also a very outspoken critic of President Trump--a rare trait in a tech world that's grown increasingly quiet, or cozy, when it comes to the cruelties of the US administration. Hoffman's overt political views haven't been without consequence: Trump has twice threatened to launch investigations into him, most recently calling on Attorney General Pam Bondi to dig into Hoffman's ties to Jeffrey Epstein. (Hoffman has subsequently called for the government to release the Epstein files in full.) Despite those threats, Hoffman isn't pulling punches: when we sat down to tape this episode in mid-December, he readily called out the administration for degrading American government, criticized his peers for keeping their heads down, and urged Silicon Valley to stop pretending that neutrality is a virtue. If only more billionaires were saying it. So glad to have you here. I'm glad to be here.
We like to start these conversations with some very fast questions. What's the hardest lesson you've ever had to learn? Probably when to give up.


Wittgenstein's Family Resemblance Clustering Algorithm

Amanpour, Golbahar, Ghojogh, Benyamin

arXiv.org Machine Learning

This paper, introducing a novel method in philo-matics, draws on Wittgenstein's concept of family resemblance from analytic philosophy to develop a clustering algorithm for machine learning. According to Wittgenstein's Philosophical Investigations (1953), family resemblance holds that members of a concept or category are connected by overlapping similarities rather than a single defining property. Consequently, a family of entities forms a chain of items sharing overlapping traits. This philosophical idea naturally lends itself to a graph-based approach in machine learning. Accordingly, we propose the Wittgenstein's Family Resemblance (WFR) clustering algorithm and its kernel variant, kernel WFR. This algorithm computes resemblance scores between neighboring data instances, and after thresholding these scores, a resemblance graph is constructed. The connected components of this graph define the resulting clusters. Simulations on benchmark datasets demonstrate that WFR is an effective nonlinear clustering algorithm that does not require prior knowledge of the number of clusters or assumptions about their shapes.
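The pipeline the abstract describes (score resemblance between nearby instances, threshold the scores, build a resemblance graph, and return its connected components as clusters) can be sketched in a few lines. The following is my own toy reconstruction, not the authors' code: the inverse-distance resemblance score, the threshold value, and the name `wfr_clusters` are illustrative assumptions, and a real implementation would restrict scoring to nearest neighbors and support the kernel variant.

```python
import math
from collections import defaultdict

def wfr_clusters(points, threshold=0.5):
    """Toy family-resemblance clustering: link pairs whose
    resemblance exceeds a threshold, then read clusters off
    as connected components (via union-find)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Resemblance score decays with distance; thresholding it
    # implicitly builds the resemblance graph's edge set.
    for i in range(n):
        for j in range(i + 1, n):
            resemblance = 1.0 / (1.0 + math.dist(points[i], points[j]))
            if resemblance >= threshold:
                union(i, j)

    clusters = defaultdict(list)
    for i in range(n):
        clusters[find(i)].append(i)
    return list(clusters.values())

# Two well-separated groups of toy points.
pts = [(0, 0), (0.2, 0.1), (0.1, 0.3), (5, 5), (5.2, 5.1)]
print(wfr_clusters(pts, threshold=0.5))
```

On these toy points the two tight groups emerge as separate clusters without the number of clusters being specified up front, which matches the property the abstract claims for WFR.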


Executable Epistemology: The Structured Cognitive Loop as an Architecture of Intentional Understanding

Kim, Myung Ho

arXiv.org Artificial Intelligence

Large language models exhibit intelligence without genuine epistemic understanding, exposing a key gap: the absence of epistemic architecture. This paper introduces the Structured Cognitive Loop (SCL) as an executable epistemological framework for emergent intelligence. Unlike traditional AI research asking "what is intelligence?" (ontological), SCL asks "under what conditions does cognition emerge?" (epistemological). Grounded in philosophy of mind and cognitive phenomenology, SCL bridges conceptual philosophy and implementable cognition. Drawing on process philosophy, enactive cognition, and extended mind theory, we define intelligence not as a property but as a performed process -- a continuous loop of judgment, memory, control, action, and regulation. SCL makes three contributions. First, it operationalizes philosophical insights into computationally interpretable structures, enabling "executable epistemology" -- philosophy as structural experiment. Second, it shows that functional separation within cognitive architecture yields more coherent and interpretable behavior than monolithic prompt-based systems, supported by agent evaluations. Third, it redefines intelligence: not representational accuracy but the capacity to reconstruct its own epistemic state through intentional understanding. This framework impacts philosophy of mind, epistemology, and AI. For philosophy, it allows theories of cognition to be enacted and tested. For AI, it grounds behavior in epistemic structure rather than statistical regularity. For epistemology, it frames knowledge not as truth possession but as continuous reconstruction within a phenomenologically coherent loop. We situate SCL within debates on cognitive phenomenology, emergence, normativity, and intentionality, arguing that real progress requires not larger models but architectures that realize cognitive principles structurally.


RefineBench: Evaluating Refinement Capability of Language Models via Checklists

Lee, Young-Jun, Kim, Seungone, Lee, Byung-Kwan, Moon, Minkyeong, Hwang, Yechan, Kim, Jong Myoung, Neubig, Graham, Welleck, Sean, Choi, Ho-Jin

arXiv.org Artificial Intelligence

Can language models (LMs) self-refine their own responses? This question is increasingly relevant as a wide range of real-world user interactions involve refinement requests. However, prior studies have largely tested LMs' refinement abilities on verifiable tasks such as competition math or symbolic reasoning with simplified scaffolds, whereas users often pose open-ended queries and provide varying degrees of feedback on what they desire. The recent advent of reasoning models that exhibit self-reflection patterns in their chains-of-thought further motivates this question. To analyze this, we introduce RefineBench, a benchmark of 1,000 challenging problems across 11 domains paired with a checklist-based evaluation framework. We evaluate two refinement modes: (1) guided refinement, where an LM is provided natural language feedback, and (2) self-refinement, where LMs attempt to improve without guidance. In the self-refinement setting, even frontier LMs such as Gemini 2.5 Pro and GPT-5 achieve modest baseline scores of 31.3% and 29.1%, respectively, and most models fail to consistently improve across iterations (e.g., Gemini-2.5-Pro gains only +1.8%, while DeepSeek-R1 declines by 0.1%). By contrast, in guided refinement, both proprietary LMs and large open-weight LMs (>70B) can leverage targeted feedback to refine responses to near-perfect levels within five turns. These findings suggest that frontier LMs require breakthroughs to self-refine their incorrect responses, and that RefineBench provides a valuable testbed for tracking progress.
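The two evaluation modes reduce to one loop: generate, score against a checklist, then either feed back targeted hints (guided) or simply ask the model to try again (self-refinement). Below is a minimal, hypothetical sketch of that protocol; the substring-matching `checklist_score`, the `toy_model`, and every name here are my own illustrations, not RefineBench's actual LM-judge evaluation or prompts.

```python
def checklist_score(response, checklist):
    """Fraction of checklist items satisfied. RefineBench uses
    LM judges; substring matching is a toy stand-in."""
    return sum(1 for item in checklist if item in response) / len(checklist)

def refine_loop(model, prompt, checklist, turns=5, feedback=None):
    """Run up to `turns` refinement rounds. feedback=None means
    self-refinement; passing a feedback function makes it guided."""
    response = model(prompt)
    history = [checklist_score(response, checklist)]
    for _ in range(turns):
        if feedback is not None:  # guided: targeted natural-language hints
            hint = feedback(response, checklist)
            followup = f"{prompt}\nFeedback: {hint}\nPlease revise."
        else:  # self-refinement: no external signal
            followup = f"{prompt}\nPrevious answer: {response}\nPlease improve it."
        response = model(followup)
        history.append(checklist_score(response, checklist))
    return response, history

# Toy "model" that simply echoes any feedback it receives.
def toy_model(prompt):
    if "Feedback: " in prompt:
        return prompt.split("Feedback: ")[1].split("\n")[0]
    return "draft"

def missing_items(response, checklist):
    return " ".join(item for item in checklist if item not in response)

_, guided = refine_loop(toy_model, "Q", ["alpha", "beta"], turns=1,
                        feedback=missing_items)
_, unguided = refine_loop(toy_model, "Q", ["alpha", "beta"], turns=1)
print(guided, unguided)
```

On this toy setup the guided score history climbs while the unguided one stays flat, mirroring (in caricature) the paper's finding that targeted feedback helps where unaided self-refinement stalls.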


Jeff Bezos brings signature management style to $6.2 billion AI startup

The Japan Times

Jeff Bezos has a unique set of management practices he used and espoused during his time as CEO of Amazon. The Amazon founder honed his leadership philosophy running one of the world's largest companies, and he is now bringing it to Project Prometheus, a startup he co-founded with scientist Vik Bajaj that will use AI to accelerate engineering and manufacturing in fields like aerospace and automobiles, the New York Times reported. The startup has $6.2 billion in funding, sourced in part from Bezos himself, and a headcount in the dozens, some poached from leading AI labs like OpenAI and Google DeepMind. As co-CEO with Bajaj, Bezos is back in a formal executive post for the first time since stepping down from Amazon in 2021.


Alex Karp Goes to War

WIRED

Palantir's CEO is good with ICE and says he defends human rights. But will Israel and Trump ever go too far for him? Alex Karp and I would not seem to have much in common. I work for WIRED, which does tough reporting on Trumpworld; Karp is the CEO of Palantir, a $450 billion firm that has contracts with agencies like the CIA and ICE and worked for the Israeli military during its campaign in Gaza. I live in the East Village of New York City, and the home Karp spends the most time in is a 500-acre compound in rural New Hampshire. I was a plain old English major, and he's got a law degree and a PhD in philosophy, studying under the legendary Jürgen Habermas. I consider myself a progressive; Karp regards that stuff as "pagan religion." But we can bond over one shared status: Both of us are alumni of Central High School, a Philadelphia magnet school. (I have some years on the 58-year-old executive.)


Why Nicholas Thompson Made a Custom GPT to Run Faster

WIRED

The Atlantic CEO's new book examines his complicated relationship with the sport. On this week's episode, he talks about the ways tech is helping him become a better runner. To most of the world, Nicholas Thompson is known as an editor, an AI enthusiast, or something of a LinkedIn influencer. But the former WIRED editor in chief, who is now CEO of The Atlantic, is often better known to colleagues for his running. On Tuesday, Thompson is releasing his new book, about his commitment to running--Thompson runs a ridiculously fast marathon and holds the American 50K record for the 45-49 age group. Ultimately, though, the book examines the complicated relationship between the sport, Thompson, and his father, who first took him on a run when he was just 5 years old. Tech obsessives, of course, will also get their fix: the book includes plenty of science-backed training guidance and documents Thompson's experience training with elite Nike coaches. On this week's episode, I talked to Thompson (who was also my first boss; he hired me as an intern at WIRED in 2008) about his book, the interplay between running and addiction, and what he thinks AI can do for runners and writers. It is a joy to be here with you at Condé Nast, at WIRED. I loved coming up those elevators. I love seeing you as the editor in chief. I'm thrilled that you're here. We're going to start this conversation the way we start all of them, which is with a little warmup, some rapid-fire questions. In honor of your new book, I'm gonna make them entirely running themed. I mean, if your listeners don't wanna hear about running... Trail run or track run? Worst running injury you've ever had. The one you wish people would stop talking to you about. You only need to run a 20-miler before a marathon. What do you need to run? Why do people die at mile 20? Because they only train for [marathons] with 20-mile runs. I generally prefer people, but then you have to schedule it.
Backup sport of choice if you could never run again.



Representing Beauty: Towards a Participatory but Objective Latent Aesthetics

Rusnak, Alexander Michael

arXiv.org Artificial Intelligence

What does it mean for a machine to recognize beauty? While beauty remains a culturally and experientially compelling but philosophically elusive concept, deep learning systems increasingly appear capable of modeling aesthetic judgment. In this paper, we explore the capacity of neural networks to represent beauty despite the immense formal diversity of objects to which the term applies. By drawing on recent work on cross-model representational convergence, we show how aesthetic content produces more similar and aligned representations between models trained on distinct data and modalities--while unaesthetic images do not produce more aligned representations. This finding implies that the formal structure of beautiful images has a realist basis, rather than being only a reflection of socially constructed values. Furthermore, we propose that these realist representations exist because of a joint grounding of aesthetic form in physical and cultural substance. We argue that human perceptual and creative acts play a central role in shaping the latent spaces of deep learning systems, but that a realist basis for aesthetics shows that machines are not mere creative parrots and can produce novel creative insights from the unique vantage point of scale. Our findings suggest that human-machine co-creation is not merely possible, but foundational--with beauty serving as a teleological attractor in both cultural production and machine perception.