
Never Out of Date: How Hannah Arendt Helps Us Understand Our World

Der Spiegel International

Fifty years after her death in New York, Hannah Arendt has become the most popular philosopher of our time. For good reason: Her views are just as timely as ever. It must be so nice to play Hannah Arendt. No fewer than five actresses are on stage this evening at the Deutsches Theater Berlin to portray the philosopher. The piece is an adaptation of the graphic novel by American illustrator Ken Krimstein about the philosopher's life, called "The Three Escapes of Hannah Arendt," combined with scenes from the famous interview that journalist Günter Gaus conducted with Arendt in 1964 for German public broadcaster ZDF. The article you are reading originally appeared in German in issue 49/2025 (November 28th, 2025) of DER SPIEGEL. They play Arendt and a few of her contemporaries: the philosopher Martin Heidegger, the writer Walter Benjamin, her husband Heinrich Blücher. There is a great deal of talking in the play, especially from Arendt herself. The places of her life are ticked off, her ...


How Large Language Models are Designed to Hallucinate

Ackermann, Richard, Emanuilov, Simeon

arXiv.org Artificial Intelligence

Large language models (LLMs) achieve remarkable fluency across linguistic and reasoning tasks but remain systematically prone to hallucination. Prevailing accounts attribute hallucinations to data gaps, limited context, or optimization errors. We argue instead that hallucination is a structural outcome of the transformer architecture. As coherence engines, transformers are compelled to produce fluent continuations, with self-attention simulating the relational structure of meaning but lacking the existential grounding of temporality, mood, and care that stabilizes human understanding. On this basis, we distinguish ontological hallucination, arising when continuations require disclosure of beings in the world, and residual reasoning hallucination, where models mimic inference by recycling traces of human reasoning in text. We illustrate these patterns through case studies aligned with Heideggerian categories and an experiment across twelve LLMs showing how simulated "self-preservation" emerges under extended prompts. Our contribution is threefold: (1) a comparative account showing why existing explanations are insufficient; (2) a predictive taxonomy of hallucination linked to existential structures, with proposed benchmarks; and (3) design directions toward "truth-constrained" architectures capable of withholding or deferring when disclosure is absent. We conclude that hallucination is not an incidental defect but a defining limit of transformer-based models, an outcome that scaffolding can mask but never resolve.


The Quasi-Creature and the Uncanny Valley of Agency: A Synthesis of Theory and Evidence on User Interaction with Inconsistent Generative AI

Manhaes, Mauricio, Miller, Christine, Schroeder, Nicholas

arXiv.org Artificial Intelligence

The user experience with large-scale generative AI is paradoxical: superhuman fluency meets absurd failures in common sense and consistency. This paper argues that the resulting potent frustration is an ontological problem, stemming from the "Quasi-Creature"-an entity simulating intelligence without embodiment or genuine understanding. Interaction with this entity precipitates the "Uncanny Valley of Agency," a framework where user comfort drops when highly agentic AI proves erratically unreliable. Its failures are perceived as cognitive breaches, causing profound cognitive dissonance. Synthesizing HCI, cognitive science, and philosophy of technology, this paper defines the Quasi-Creature and details the Uncanny Valley of Agency. An illustrative mixed-methods study ("Move 78," N=37) of a collaborative creative task reveals a powerful negative correlation between perceived AI efficiency and user frustration, central to the negative experience. This framework robustly explains user frustration with generative AI and has significant implications for the design, ethics, and societal integration of these powerful, alien technologies.


A Phenomenological Approach to Analyzing User Queries in IT Systems Using Heidegger's Fundamental Ontology

Vishnevskiy, Maksim

arXiv.org Artificial Intelligence

This paper presents a novel research analytical IT system grounded in Martin Heidegger's Fundamental Ontology, distinguishing between beings (das Seiende) and Being (das Sein). The system employs two modally distinct, descriptively complete languages: a categorical language of beings for processing user inputs and an existential language of Being for internal analysis. These languages are bridged via a phenomenological reduction module, enabling the system to analyze user queries (including questions, answers, and dialogues among IT specialists), identify recursive and self-referential structures, and provide actionable insights in categorical terms. Unlike contemporary systems limited to categorical analysis, this approach leverages Heidegger's phenomenological existential analysis to uncover deeper ontological patterns in query processing, aiding in resolving logical traps in complex interactions, such as metaphor usage in IT contexts. The path to full realization involves formalizing the language of Being by a research team based on Heidegger's Fundamental Ontology; given the existing completeness of the language of beings, this reduces the system's computability to completeness, paving the way for a universal query analysis tool. The paper presents the system's architecture, operational principles, technical implementation, use cases--including a case based on real IT specialist dialogues--comparative evaluation with existing tools, and its advantages and limitations.


A Representationalist, Functionalist and Naturalistic Conception of Intelligence as a Foundation for AGI

Pfister, Rolf

arXiv.org Artificial Intelligence

Intelligence is understood as the ability to create novel skills that make it possible to achieve goals under previously unknown conditions. To this end, intelligence utilises reasoning methods such as deduction, induction and abduction, as well as other methods such as abstraction and classification, to develop a world model. These methods are applied to indirect and incomplete representations of the world, which are obtained through perception, for example, and which do not depict the world but only correspond to it. Due to these limitations and the uncertain and contingent nature of reasoning, the world model is constructivist. Its value is functionally determined by its viability, i.e., its potential to achieve the desired goals. Consequently, meaning is assigned to representations by attributing to them a function that makes it possible to achieve a goal. This representational and functional conception of intelligence enables a naturalistic interpretation that does not presuppose mental features, such as intentionality and consciousness, which are regarded as independent of intelligence. Based on a phenomenological analysis, it is shown that AGI can gain a more fundamental access to the world than humans, although it is limited by the No Free Lunch theorems, which require assumptions to be made.


Should We Fear Large Language Models? A Structural Analysis of the Human Reasoning System for Elucidating LLM Capabilities and Risks Through the Lens of Heidegger's Philosophy

Zhang, Jianqiu

arXiv.org Artificial Intelligence

In the rapidly evolving field of Large Language Models (LLMs), there is a critical need to thoroughly analyze their capabilities and risks. Central to our investigation are two novel elements. The first is the innovative parallel between the statistical patterns of word relationships within LLMs and Martin Heidegger's concepts of "ready-to-hand" and "present-at-hand," which encapsulate the utilitarian and scientific attitudes humans employ in interacting with the world. This comparison lays the groundwork for positioning LLMs as the digital counterpart to the Faculty of Verbal Knowledge, shedding light on their capacity to emulate certain facets of human reasoning. The second is a structural analysis of human reasoning, viewed through Heidegger's notion of truth as "unconcealment." This foundational principle enables us to map out the inputs and outputs of the reasoning system and divide reasoning into four distinct categories. Respective cognitive faculties are delineated, allowing us to place LLMs within the broader schema of human reasoning, thus clarifying their strengths and inherent limitations. Our findings reveal that while LLMs possess the capability for Direct Explicative Reasoning and Pseudo Rational Reasoning, they fall short in authentic rational reasoning and have no creative reasoning capabilities, due to the current lack of analogous AI models such as the Faculty of Judgement. The potential and risks of LLMs when augmented with other AI technologies are also evaluated. The results indicate that although LLMs have achieved proficiency in some reasoning abilities, the aspiration to match or exceed human intellectual capabilities remains unattained. This research not only enriches our comprehension of LLMs but also propels forward the discourse on AI's potential and its bounds, paving the way for future explorations into AI's evolving landscape.


'Scorn' is a horror game more faithful to H.R. Giger than 'Alien'

Washington Post - Technology News

On "Scorn's" Kickstarter page, Ebb described the concept behind the game as "being thrown into the world." You play as a hairless humanoid waking up in a grotesque but beautifully realized biomechanical world. Walls that look like taut sinew are girded with beams and rafters reminiscent of bones. Flaps of flaky, skinlike cloth hang in tatters from ceilings. Decrepit, skeletal machines are powered by gunmetal intestinal tracts, and controlled by consoles rippling with industrial blood vessels.


The Paradox of Creative Disruption

#artificialintelligence

No nation is technologically advanced without creative disruption; this has long been elaborated by Joseph Schumpeter as a condition for the flow of the economy to survive. No nation has survived for long frozen out of the technological innovations that have moved the times. If disruptive innovation is still defined merely as something that interferes with the continuation of conventional industry, that is a sign of decline. But we will come to a time when creative disruption becomes the common enemy of the human species. "Disruption" as a term was first popularized in 1995 by Clayton Christensen, a professor at Harvard Business School.


Is AI Sentient?

#artificialintelligence

I don't know, but am inclined to think "no," if sentience involves anything like a sense of self-consciousness, interiority, care for the world, or existential motivation. But this is a question that is alive in our culture, an ersatz theology for the secular age. This "dialogue" between a Google Engineer and an AI is quite remarkable, the stuff of science fiction: The dialogue recalls Kierkegaard's quip about St. Anselm, when he heard that the great rationalist had prayed for days asking God to send him "proof" of his existence. "Does the loving bride in the embrace of her beloved ask for proof that he is alive and real?" Who needs ontological arguments for divine existence when you have something more immediate, a relationship?


The Age of the Videogame

#artificialintelligence

The history of decision-making has always been intrinsically tied to the history of technology. Charts and compasses have guided explorers for centuries, and a level is an indispensable instrument for construction workers. New tools allow us to make more informed choices which, in turn, may positively impact technological advancements. This dependence suggests that a change in the technological landscape will have implications in how we make decisions. The last half-century has seen one of the most radical revolutions: the emergence of artificial intelligence (AI), powered by the ever-increasing data we gather.