
Moving Pictures of Thought: Extracting Visual Knowledge in Charles S. Peirce's Manuscripts with Vision-Language Models

Pedretti, Carlo Teo, Picca, Davide, Rodighiero, Dario

arXiv.org Artificial Intelligence

Diagrams are crucial yet underexplored tools in many disciplines, demonstrating the close connection between visual representation and scholarly reasoning. However, their iconic form poses obstacles to visual studies, intermedial analysis, and text-based digital workflows. Charles S. Peirce, in particular, consistently advocated the use of diagrams as essential for reasoning and explanation. His manuscripts, often combining textual content with complex visual artifacts, provide a challenging case for studying documents involving heterogeneous materials. In this preliminary study, we investigate whether Vision-Language Models (VLMs) can effectively help us identify and interpret such hybrid pages in context. First, we propose a workflow that (i) segments manuscript page layouts, (ii) reconnects each segment to IIIF-compliant annotations, and (iii) submits fragments containing diagrams to a VLM. In addition, by adopting Peirce's semiotic framework, we designed prompts to extract key knowledge about diagrams and produce concise captions. Finally, we integrated these captions into knowledge graphs, enabling structured representations of diagrammatic content within composite sources.
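The three-step workflow in the abstract (segment, link to IIIF annotations, caption with a VLM, then emit knowledge-graph statements) can be sketched minimally. This is an illustrative reconstruction, not the authors' code: the segment dictionaries, the `caption_fn` stand-in for the VLM call, and the `hasCaption` predicate are all hypothetical; only the `#xywh=` fragment syntax follows the IIIF/Web Annotation convention for image regions.

```python
def annotate_segments(canvas_uri, segments, caption_fn):
    """For each diagram segment on a manuscript page, build an
    IIIF-style fragment URI (xywh region selector), caption it with
    a vision-language model stub, and emit subject-predicate-object
    triples suitable for a knowledge graph."""
    triples = []
    for seg in segments:
        x, y, w, h = seg["bbox"]
        fragment = f"{canvas_uri}#xywh={x},{y},{w},{h}"
        if seg["label"] == "diagram":
            caption = caption_fn(fragment)  # would call the VLM here
            triples.append((fragment, "hasCaption", caption))
    return triples

# Hypothetical layout-segmentation output and a stand-in VLM call.
segments = [
    {"bbox": (10, 20, 200, 150), "label": "diagram"},
    {"bbox": (10, 200, 400, 80), "label": "text"},
]
fake_vlm = lambda uri: "An existential graph with two nested cuts."
kg = annotate_segments("https://example.org/iiif/canvas/p1", segments, fake_vlm)
```

In a real pipeline `caption_fn` would wrap a VLM API call with a Peirce-informed prompt, and the triples would be serialized as RDF rather than Python tuples.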


The meaning of prompts and the prompts of meaning: Semiotic reflections and modelling

Thellefsen, Martin, Dewi, Amalia Nurma, Sorensen, Bent

arXiv.org Artificial Intelligence

This paper explores prompts and prompting in large language models (LLMs) as dynamic semiotic phenomena, drawing on Peirce's triadic model of signs, his nine sign types, and the Dynacom model of communication. The aim is to reconceptualize prompting not as a technical input mechanism but as a communicative and epistemic act involving an iterative process of sign formation, interpretation, and refinement. The theoretical foundation rests on Peirce's semiotics, particularly the interplay between representamen, object, and interpretant, and the typological richness of signs: qualisign, sinsign, legisign; icon, index, symbol; rheme, dicent, argument - alongside the interpretant triad captured in the Dynacom model. Analytically, the paper positions the LLM as a semiotic resource that generates interpretants in response to user prompts, thereby participating in meaning-making within shared universes of discourse. The findings suggest that prompting is a semiotic and communicative process that redefines how knowledge is organized, searched, interpreted, and co-constructed in digital environments. This perspective invites a reimagining of the theoretical and methodological foundations of knowledge organization and information seeking in the age of computational semiosis.


Convergence to the Truth

Lin, Hanti

arXiv.org Artificial Intelligence

The epistemology of scientific inference has a rich history. According to the explanationist tradition, theory choice should be guided by a theory's overall balance of explanatory virtues, such as simplicity, fit with data, and/or unification (Russell 1912). The instrumentalist tradition urges, instead, that scientific inference should be driven by the goal of obtaining useful models, rather than true theories or even approximately true ones (Duhem 1906). A third tradition is Bayesianism, which features a shift of focus from all-or-nothing beliefs to degrees of belief (Bayes 1763). It may be fair to say that these traditions are the big three in contemporary epistemology of scientific inference. There is, in fact, a fourth tradition.


Peirce in the Machine: How Mixture of Experts Models Perform Hypothesis Construction

Rushing, Bruce

arXiv.org Artificial Intelligence

Mixture of experts is a prediction aggregation method in machine learning that combines the predictions of specialized experts. This method often outperforms Bayesian methods despite Bayesian methods having stronger inductive guarantees. We argue that this is due to the greater functional capacity of mixture of experts. We prove that, in a limiting case, mixture of experts has greater capacity than equivalent Bayesian methods, and we corroborate this through experiments on non-limiting cases. Finally, we conclude that mixture of experts is a type of abductive reasoning in the Peircean sense of hypothesis construction.
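The aggregation scheme the abstract describes can be sketched in a few lines: a gate assigns softmax weights to specialized experts and the model's prediction is the weighted sum. This is a generic soft-gating sketch, not the paper's construction; the two experts and the gating function below are invented for illustration.

```python
import math

def moe_predict(x, experts, gate_logits_fn):
    """Soft mixture of experts: the gate maps the input to one logit
    per expert, softmax turns logits into weights, and the prediction
    is the weight-averaged expert output."""
    logits = gate_logits_fn(x)
    m = max(logits)                              # stabilize the softmax
    exp = [math.exp(l - m) for l in logits]
    z = sum(exp)
    weights = [e / z for e in exp]
    return sum(w * f(x) for w, f in zip(weights, experts))

# Two hypothetical experts, each accurate on part of the input space.
experts = [lambda x: 2.0 * x,    # specialist for x >= 0
           lambda x: -0.5 * x]   # specialist for x < 0
# Hypothetical gate: strongly prefers expert 0 for positive inputs.
gate = lambda x: [4.0 * x, -4.0 * x]

y = moe_predict(1.0, experts, gate)  # close to expert 0's output, 2.0
```

A uniform gate (all logits equal) reduces this to a plain average of the experts, which is one way to see the capacity point: the gate lets the aggregate realize input-dependent combinations that a fixed posterior average cannot.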


A novel framework for systematic propositional formula simplification based on existential graphs

de Mas, Jordina Francès, Bowles, Juliana

arXiv.org Artificial Intelligence

This paper presents a novel simplification calculus for propositional logic derived from Peirce's existential graphs' rules of inference and implication graphs. Our rules can be applied to propositional logic formulae in nested form, are equivalence-preserving, guarantee a monotonically decreasing number of variables, clauses and literals, and maximise the preservation of structural problem information. Our techniques can also be seen as higher-level SAT preprocessing, and we show how one of our rules (TWSR) generalises and streamlines most of the known equivalence-preserving SAT preprocessing methods. In addition, we propose a simplification procedure based on the systematic application of two of our rules (EPR and TWSR) which is solver-agnostic and can be used to simplify large Boolean satisfiability problems and propositional formulae in arbitrary form, and we provide a formal analysis of its algorithmic complexity in terms of space and time. Finally, we show how our rules can be further extended with a novel n-ary implication graph to capture all known equivalence-preserving preprocessing procedures.
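To make the class of techniques concrete, here is a minimal equivalence-preserving simplification pass on CNF clause sets. This is not the paper's EPR or TWSR rules, only a classic unit-clause simplification in the same spirit: because the unit clause is kept, the simplified formula is logically equivalent to the input, and the number of clauses and literals decreases monotonically. Clauses are sets of integer literals (negative means negated), a common SAT convention.

```python
def unit_simplify(clauses):
    """Equivalence-preserving unit simplification: for each unit
    clause {l}, drop other clauses satisfied by l and strip the
    falsified literal -l elsewhere, keeping the unit clause itself
    so logical equivalence (not just equisatisfiability) holds."""
    clauses = [frozenset(c) for c in clauses]
    changed = True
    while changed:
        changed = False
        units = {next(iter(c)) for c in clauses if len(c) == 1}
        new = []
        for c in clauses:
            if len(c) > 1 and c & units:        # satisfied by a unit: drop
                changed = True
                continue
            stripped = c - {-l for l in units}  # remove falsified literals
            if stripped != c:
                changed = True
            new.append(stripped)
        clauses = new
    return [set(c) for c in clauses]

# (x1) & (-x1 | x2) & (x2 | x3)  simplifies to  (x1) & (x2)
simplified = unit_simplify([{1}, {-1, 2}, {2, 3}])
```

The paper's rules operate on richer structure (nested formulae and implication graphs) and subsume passes like this one, but the invariants illustrated here, equivalence preservation and monotone shrinkage, are the same ones the abstract claims.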


Revisiting C.S.Peirce's Experiment: 150 Years Later

Mukhopadhyay, Deep

arXiv.org Artificial Intelligence

An iconoclastic philosopher and polymath, Charles Sanders Peirce (1839-1914) is among the greatest of American minds. In 1872, Peirce conducted a series of experiments to determine the distribution of response times to an auditory stimulus, which is widely regarded as one of the most significant statistical investigations in the history of nineteenth-century American mathematical research (Stigler, 1978). On the 150th anniversary of this historic experiment, we look back at Peirce's view on empirical modeling through a modern statistical lens.


The Arc of the Data Scientific Universe

Leslie, David

arXiv.org Artificial Intelligence

In this paper I explore the scaffolding of normative assumptions that supports Sabina Leonelli's implicit appeal to the values of epistemic integrity and the global public good that conjointly animate the ethos of responsible and sustainable data work in the context of COVID-19. Drawing primarily on the writings of sociologist Robert K. Merton, the thinkers of the Vienna Circle, and Charles Sanders Peirce, I make some of these assumptions explicit by telling a longer story about the evolution of social thinking about the normative structure of science from Merton's articulation of his well-known norms (those of universalism, communism, organized skepticism, and disinterestedness) to the present. I show that while Merton's norms and his intertwinement of these with the underlying mechanisms of democratic order provide us with an especially good starting point to explore and clarify the commitments and values of science, Leonelli's broader, more context-responsive, and more holistic vision of the epistemic integrity of data scientific understanding, and her discernment of the global and biospheric scope of its moral-practical reach, move beyond Merton's schema in ways that effectively draw upon important critiques. Stepping past Merton, I argue that a combination of situated universalism, methodological pluralism, strong objectivity, and unbounded communalism must guide the responsible and sustainable data work of the future.


The Meaning of Causality

#artificialintelligence

We use the word causality as a means of understanding cognition, but we don't really understand its distinctions. Let's look at what C.S. Peirce had to say about causality. What @yudapearl says is that to understand a system one needs to hypothesize a model of the system and then see how well this model agrees with observation. Statistics is just one of the methods of testing. But it's not how one formulates the original model.


Peirce's Semiotics and General Intelligence

#artificialintelligence

There is a natural evolution from the ideas that deep learning has empirically revealed to a theory of general intelligence. A common criticism of deep learning is its lack of good theory. Deep learning is like the supercolliders in high-energy physics: it reveals the inner behavior of an artificial intuitive process, showing us patterns of what does work. To build up that theory we must walk back into the ideas of past thinkers, thinkers who never saw this empirical evidence. What would they have concluded about their ideas had they been exposed to the evidence from deep learning?


Why AI Geniuses Haven't Created True Thinking Machines

#artificialintelligence

As we saw yesterday, artificial intelligence (AI) has enjoyed a string of unbroken successes against humans. But these are successes in games where the map is the territory. That fact hints at the problem tech philosopher and futurist George Gilder raises in Gaming AI (free download here). Whether all human activities can be treated that way successfully is an entirely different question. As Gilder puts it, "AI is a system built on the foundations of computer logic, and when Silicon Valley's AI theorists push the logic of their case to a 'singularity,' they defy the most crucial findings of twentieth-century mathematics and computer science." Here is one of the crucial findings they defy (or ignore): philosopher Charles Sanders Peirce (1839-1914) pointed out that, generally, mental activity comes in threes, not twos (so he called it triadic).