Wittgenstein's Family Resemblance Clustering Algorithm
Amanpour, Golbahar, Ghojogh, Benyamin
This paper, introducing a novel method in philo-matics, draws on Wittgenstein's concept of family resemblance from analytic philosophy to develop a clustering algorithm for machine learning. According to Wittgenstein's Philosophical Investigations (1953), family resemblance holds that members of a concept or category are connected by overlapping similarities rather than a single defining property. Consequently, a family of entities forms a chain of items sharing overlapping traits. This philosophical idea naturally lends itself to a graph-based approach in machine learning. Accordingly, we propose the Wittgenstein Family Resemblance (WFR) clustering algorithm and its kernel variant, kernel WFR. The algorithm computes resemblance scores between neighboring data instances and, after thresholding these scores, constructs a resemblance graph. The connected components of this graph define the resulting clusters. Simulations on benchmark datasets demonstrate that WFR is an effective nonlinear clustering algorithm that does not require prior knowledge of the number of clusters or assumptions about their shapes.
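The pipeline the abstract describes (score resemblance between neighbouring points, threshold, build a graph, read off connected components) can be sketched in a few lines. The Gaussian similarity and the `k`, `threshold`, and `sigma` values below are our own illustrative choices, not the authors' exact formulation:

```python
# Hedged sketch of the WFR idea: score resemblance between neighbouring
# points, threshold the scores, and read clusters off the connected
# components of the resulting resemblance graph.
import numpy as np

def wfr_clusters(X, k=3, threshold=0.5, sigma=1.0):
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    sim = np.exp(-(d ** 2) / (2 * sigma ** 2))       # resemblance score
    adj = [set() for _ in range(n)]
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]             # k nearest neighbours
        for j in nbrs:
            if sim[i, j] >= threshold:               # keep strong resemblances
                adj[i].add(j); adj[j].add(i)
    labels, cur = [-1] * n, 0
    for s in range(n):                               # connected components
        if labels[s] == -1:
            stack = [s]
            while stack:
                v = stack.pop()
                if labels[v] == -1:
                    labels[v] = cur
                    stack.extend(adj[v])
            cur += 1
    return labels

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1],   # one chain of overlaps
              [5.0, 5.0], [5.1, 5.1]])               # a second, distant chain
print(wfr_clusters(X))  # → [0, 0, 0, 1, 1]
```

Note how the family-resemblance idea surfaces in the graph: points 0 and 2 need not resemble each other directly, only be linked through the overlapping chain via point 1.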
The Algebra of Meaning: Why Machines Need Montague More Than Moore's Law
Jeong, Cheonkam, Kim, Sungdo, Park, Jewoo
Contemporary language models are fluent yet routinely mishandle the types of meaning their outputs entail. We argue that hallucination, brittle moderation, and opaque compliance outcomes are symptoms of missing type-theoretic semantics rather than data or scale limitations. Building on Montague's view of language as a typed, compositional algebra, we recast alignment as a parsing problem: natural-language inputs must be compiled into structures that make explicit their descriptive, normative, and legal dimensions under context. We present Savassan, a neuro-symbolic architecture that compiles utterances into Montague-style logical forms and maps them to typed ontologies extended with deontic operators and jurisdictional contexts. Neural components extract candidate structures from unstructured inputs; symbolic components perform type checking, constraint reasoning, and cross-jurisdiction mapping to produce compliance-aware guidance rather than binary censorship. In cross-border scenarios, the system "parses once" (e.g., defect claim(product x, company y)) and projects the result into multiple legal ontologies (e.g., defamation risk in KR/JP, protected opinion in US, GDPR checks in EU), composing outcomes into a single, explainable decision. This paper contributes: (i) a diagnosis of hallucination as a type error; (ii) a formal Montague-ontology bridge for business/legal reasoning; and (iii) a production-oriented design that embeds typed interfaces across the pipeline. We outline an evaluation plan using legal reasoning benchmarks and synthetic multi-jurisdiction suites. Our position is that trustworthy autonomy requires compositional typing of meaning, enabling systems to reason about what is described, what is prescribed, and what incurs liability within a unified algebra of meaning.
- Information Technology > Security & Privacy (0.49)
- Law > Civil Rights & Constitutional Law (0.35)
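The "parse once, project into many legal ontologies" pattern from the abstract above can be sketched as a small typed structure. All class names, jurisdiction rules, and risk labels here are our own illustrative assumptions, not the actual Savassan implementation, which is not public:

```python
# Hypothetical sketch: compile an utterance once into a typed logical form,
# then evaluate that single form under several jurisdictional contexts.
from dataclasses import dataclass

@dataclass(frozen=True)
class LogicalForm:
    """A Montague-style typed predicate application (illustrative)."""
    predicate: str
    args: tuple

# Per-jurisdiction ontologies: each maps one logical form to a typed outcome.
# These toy rules mirror the abstract's example, nothing more.
JURISDICTION_RULES = {
    "KR": lambda lf: "defamation_risk" if lf.predicate == "defect_claim" else "ok",
    "US": lambda lf: "protected_opinion" if lf.predicate == "defect_claim" else "ok",
    "EU": lambda lf: "gdpr_check" if lf.predicate == "defect_claim" else "ok",
}

def project(lf):
    # Parse once, project into every legal ontology; the result composes
    # into one explainable, per-jurisdiction decision map.
    return {j: rule(lf) for j, rule in JURISDICTION_RULES.items()}

lf = LogicalForm("defect_claim", ("product_x", "company_y"))
print(project(lf))
```

The point of the design, as the abstract frames it, is that the expensive semantic step happens once; jurisdictional variation is handled by reusable typed mappings rather than per-country re-analysis.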
The Xeno Sutra: Can Meaning and Value be Ascribed to an AI-Generated "Sacred" Text?
Shanahan, Murray, Das, Tara, Thurman, Robert
This paper presents a case study in the use of a large language model to generate a fictional Buddhist "sutra", and offers a detailed analysis of the resulting text from a philosophical and literary point of view. The conceptual subtlety, rich imagery, and density of allusion found in the text make it hard to casually dismiss on account of its mechanistic origin. This raises questions about how we, as a society, should come to terms with the potentially unsettling possibility of a technology that encroaches on human meaning-making. We suggest that Buddhist philosophy, by its very nature, is well placed to adapt.
- Europe > United Kingdom > England (0.28)
- Asia (0.28)
- North America > United States (0.28)
Wanting to Be Understood Explains the Meta-Problem of Consciousness
Fernando, Chrisantha, Banarse, Dylan, Osindero, Simon
Because we are highly motivated to be understood, we created public external representations -- mime, language, art -- to externalise our inner states. We argue that such external representations are a pre-condition for access consciousness, the global availability of information for reasoning. Yet the bandwidth of access consciousness is tiny compared with the richness of `raw experience', so no external representation can reproduce that richness in full. Ordinarily an explanation of experience need only let an audience `grasp' the relevant pattern, not relive the phenomenon. But our drive to be understood is so strong, and `raw experience' so much richer than our low-level sensorimotor capacities for `grasping' can convey, that the demand for an explanation of the feel of experience can never be ``satisfactory''. It is that inflated epistemic demand -- the preeminence of our expectation that we could be perfectly understood by another or by ourselves -- rather than an irreducible metaphysical gulf, that keeps the hard problem of consciousness alive. But on the plus side, it seems we will simply never give up creating new ways to communicate and think about our experiences. In this view, to be consciously aware is to strive to have one's agency understood by oneself and others.
- North America > United States (1.00)
- Europe (0.68)
"i am a stochastic parrot, and so r u": Is AI-based framing of human behaviour and cognition a conceptual metaphor or conceptual engineering?
Mollema, Warmhold Jan Thomas, Wachter, Thomas
Given the massive integration of AI technologies into our daily lives, AI-related concepts are being used to metaphorically compare AI systems with human behaviour and/or cognitive abilities like language acquisition. Rightfully, the epistemic success of these metaphorical comparisons should be debated. Against the backdrop of the conflicting positions of the 'computational' and 'meat' chauvinisms, we ask: can the conceptual constellation of the computational and AI be applied to the human domain, and what does it mean to do so? What is one doing when the conceptual constellations of AI in particular are used in this fashion? Rooted in a Wittgensteinian view of concepts and language-use, we consider two possible answers and pit them against each other: either these examples are conceptual metaphors, or they are attempts at conceptual engineering. We argue that they are conceptual metaphors, but that (1) this position is unaware of its own epistemological contingency, and (2) it risks committing the 'map-territory fallacy'. Down at the conceptual foundations of computation, (3) it is, most importantly, a misleading 'double metaphor' because of the metaphorical connection between human psychology and computation. In response to the shortcomings of this projected conceptual organisation of AI onto the human domain, we argue that there is a semantic catch. The perspective of the conceptual metaphors shows avenues for forms of conceptual engineering. If this methodology's criteria are met, the fallacies and epistemic shortcomings related to the conceptual metaphor view can be bypassed. At its best, the cross-pollution of the human and AI conceptual domains is one that prompts us to reflect anew on how the boundaries of our current concepts serve us and how they could be improved.
- North America > United States (0.67)
- Europe > United Kingdom > England (0.14)
- Health & Medicine > Therapeutic Area > Neurology (0.35)
- Health & Medicine > Consumer Health (0.34)
Palatable Conceptions of Disembodied Being: Terra Incognita in the Space of Possible Minds
Is it possible to articulate a conception of consciousness that is compatible with the exotic characteristics of contemporary, disembodied AI systems, and that can stand up to philosophical scrutiny? How would subjective time and selfhood show up for an entity that conformed to such a conception? Trying to answer these questions, even metaphorically, stretches the language of consciousness to breaking point. Ultimately, the attempt yields something like emptiness, in the Buddhist sense, and helps to undermine our dualistic inclinations towards subjectivity and selfhood.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > New York (0.04)
- (4 more...)
Still "Talking About Large Language Models": Some Clarifications
My paper Talking About Large Language Models has more than once been interpreted as advocating a reductionist stance towards large language models. But the paper was not intended that way, and I do not endorse such positions. This short note situates the paper in the context of a larger philosophical project that is concerned with the (mis)use of words rather than metaphysics, in the spirit of Wittgenstein's later writing. In (Shanahan, 2024b), I wrote "[a] bare-bones LLM does not really know anything because all it does, at a fundamental level, is sequence prediction". Looking at that sentence in isolation, a reader might be forgiven for assuming that I am taking some sort of reductionist stance according to which an LLM-based chatbot, such as ChatGPT, Claude, or Gemini, is just a next token predictor, where the word "just" here carries great metaphysical weight, and that LLM-based systems therefore do not and cannot have beliefs.
Machines of Meaning
One goal of Artificial Intelligence is to learn meaningful representations for natural language expressions, but what this entails is not always clear. A variety of new linguistic behaviours present themselves, embodied as computers, enhanced humans, and collectives with various kinds of integration and communication. But to measure and understand the behaviours generated by such systems, we must clarify the language we use to talk about them. Computational models are often confused with the phenomena they try to model, and shallow metaphors are used to justify (or hype) the success of computational techniques on many natural-language tasks, implying progress toward human-level machine intelligence without ever clarifying what that means. This paper discusses the challenges in the specification of "machines of meaning", machines capable of acquiring meaningful semantics from natural language in order to achieve their goals. We characterize "meaning" in a computational setting, while highlighting the need for detachment from anthropocentrism in the study of the behaviour of machines of meaning. The pressing need to analyse AI risks and ethics requires a proper measurement of machine capabilities, which cannot be productively studied and explained using ambiguous language. We propose a view of "meaning" to facilitate the discourse around approaches such as neural language models and help broaden the research perspectives for technology that facilitates dialogues between humans and machines.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (9 more...)
- Health & Medicine (0.67)
- Education (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.69)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Rule-Based Reasoning (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Information Retrieval (0.46)
A History of Philosophy in Colombia through Topic Modelling
Loaiza, Juan R., González-Duque, Miguel
Data-driven approaches to philosophy have emerged as a valuable tool for studying the history of the discipline. However, most studies in this area have focused on a limited number of journals from specific regions and subfields. We expand the scope of this research by applying dynamic topic modelling techniques to explore the history of philosophy in Colombia and Latin America. Our study examines the Colombian philosophy journal Ideas y Valores, founded in 1951 and currently one of the most influential academic philosophy journals in the region. By analyzing the evolution of topics across the journal's history, we identify various trends and specific dynamics in philosophical discourse within the Colombian and Latin American context. Our findings reveal that the most prominent topics are value theory (including ethics, political philosophy, and aesthetics), epistemology, and the philosophy of science. We also trace the evolution of articles focusing on the historical and interpretive aspects of philosophical texts, and we note a notable emphasis on German philosophers such as Kant, Husserl, and Hegel on various topics throughout the journal's lifetime. Additionally, we investigate whether articles with a historical focus have decreased over time due to editorial pressures. Our analysis suggests no significant decline in such articles. Finally, we propose ideas for extending this research to other Latin American journals and suggest improvements for natural language processing workflows in non-English languages.
- South America > Colombia (0.61)
- North America > Central America (0.25)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (4 more...)
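The diachronic signal that the Colombia study extracts with dynamic topic modelling can be illustrated with a toy keyword-trend pass. The corpus and keyword lists below are invented for illustration; the study itself applies proper dynamic topic models to Ideas y Valores:

```python
# Toy illustration of a diachronic topic signal: count topic-keyword hits
# per decade. Not the paper's method, just the shape of the output it studies.
from collections import Counter, defaultdict

CORPUS = [  # (year, abstract) pairs, invented for illustration
    (1955, "kant and the ethics of duty"),
    (1962, "husserl phenomenology and epistemology"),
    (1998, "political philosophy and value theory"),
    (2005, "philosophy of science and epistemology"),
]
TOPICS = {  # hypothetical keyword sets standing in for learned topics
    "value theory": {"ethics", "political", "value"},
    "epistemology": {"epistemology", "science"},
}

def topic_trends(corpus, topics):
    trends = defaultdict(Counter)
    for year, text in corpus:
        decade = (year // 10) * 10
        words = set(text.split())
        for topic, keys in topics.items():
            trends[topic][decade] += len(words & keys)
    return {t: dict(c) for t, c in trends.items()}

print(topic_trends(CORPUS, TOPICS))
```

A real dynamic topic model learns the topics and their drift from the data rather than taking fixed keyword sets, but the output has this same form: per-topic prominence over time, from which trends like the paper's "no significant decline in historical articles" can be read.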
Boundless Socratic Learning with Language Games
An agent trained within a closed system can master any desired capability, as long as the following three conditions hold: (a) it receives sufficiently informative and aligned feedback, (b) its coverage of experience/data is broad enough, and (c) it has sufficient capacity and resource. In this position paper, we justify these conditions, and consider what limitations arise from (a) and (b) in closed systems, when assuming that (c) is not a bottleneck. Considering the special case of agents with matching input and output spaces (namely, language), we argue that such pure recursive self-improvement, dubbed "Socratic learning", can boost performance vastly beyond what is present in its initial data or knowledge, and is only limited by time, as well as gradual misalignment concerns. Furthermore, we propose a constructive framework to implement it, based on the notion of language games.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.94)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.70)
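The closed-loop setup of the Socratic-learning abstract (an agent whose input and output spaces match, improving purely from in-game feedback) can be caricatured with a minimal language game. The prefix-guessing game below is our own stand-in, not the authors' framework; it only illustrates conditions (a) aligned feedback and (b) broad coverage of moves:

```python
# Toy "language game": the agent's inputs and outputs live in the same
# space (strings), and the game's feedback alone drives self-improvement.

def play(hidden, alphabet="abc", max_rounds=20):
    guess = ""
    for _ in range(max_rounds):
        if guess == hidden:
            return guess                      # capability mastered
        for ch in alphabet:                   # broad coverage of moves (b)
            candidate = guess + ch
            if hidden.startswith(candidate):  # aligned feedback (a)
                guess = candidate
                break
    return guess

print(play("cab"))  # the agent reconstructs the hidden string from feedback alone
```

If the feedback were misaligned (say, a noisy `startswith`) or the alphabet did not cover the hidden string's symbols, the loop would stall, which is exactly the failure mode the paper's conditions (a) and (b) rule out.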