Leibniz's Monadology as Foundation for the Artificial Age Score: A Formal Architecture for AI Memory Evaluation

Kayadibi, Seyma Yaman

arXiv.org Artificial Intelligence

This paper develops a mathematically rigorous, philosophically grounded framework for evaluating artificial memory systems, rooted in the metaphysical structure of Leibniz's Monadology. Building on a previously formalized metric, the Artificial Age Score (AAS), the study maps twenty core propositions from the Monadology to an information-theoretic architecture. In this design, each monad functions as a modular unit defined by a truth score, a redundancy parameter, and a weighted contribution to a global memory penalty function. Smooth logarithmic transformations operationalize these quantities and yield interpretable, bounded metrics for memory aging, representational stability, and salience. Classical metaphysical notions of perception, apperception, and appetition are reformulated as entropy, gradient dynamics, and internal representation fidelity. Logical principles, including the laws of non-contradiction and sufficient reason, are encoded as regularization constraints guiding memory evolution. A central contribution is a set of first-principles proofs establishing refinement invariance, structural decomposability, and monotonicity under scale transformation, aligned with the metaphysical structure of monads. The framework's formal organization is structured into six thematic bundles derived from the Monadology, aligning each mathematical proof with its corresponding philosophical domain. Beyond evaluation, the framework offers a principled blueprint for building AI memory architectures that are modular, interpretable, and provably sound.
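The abstract does not give the AAS formulas, but the architecture it describes can be sketched. The following is a purely illustrative sketch under assumptions of ours: the `Monad` dataclass, the `log1p`-based penalty form, and all parameter names are hypothetical, not taken from the paper. It shows one way a modular unit with a truth score, a redundancy parameter, and a weighted contribution to a global penalty could be realized with a smooth logarithmic transformation, and why such a form satisfies the structural decomposability the abstract's proofs require.

```python
from dataclasses import dataclass
import math

@dataclass
class Monad:
    """Hypothetical modular memory unit (names are our assumption)."""
    truth: float       # truth score in (0, 1]
    redundancy: float  # redundancy parameter, >= 0
    weight: float      # weighted contribution to the global penalty, >= 0

def memory_penalty(monads):
    """Global memory penalty as a weighted sum of smooth log terms.

    Each term log1p(redundancy / truth) grows when redundancy rises or
    truth falls, stays smooth near zero, and is additive across units,
    so the penalty of a union of monad sets is the sum of the parts
    (structural decomposability).
    """
    return sum(m.weight * math.log1p(m.redundancy / m.truth)
               for m in monads)

units = [Monad(truth=0.9, redundancy=0.1, weight=1.0),
         Monad(truth=0.5, redundancy=0.4, weight=2.0)]
print(round(memory_penalty(units), 4))
```

Additivity over disjoint unit sets is what makes a penalty of this shape decomposable: evaluating a pooled memory equals summing the evaluations of its modules, which is the kind of invariance the paper's proofs formalize.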


How Large Language Models Need Symbolism

Deng, Xiaotie, Li, Hanyu

arXiv.org Artificial Intelligence

Advances in artificial intelligence (AI), particularly large language models (LLMs) [1], have achieved remarkable success. This progress stems from "scaling laws" -- performance improves with greater computation, data, and model size [2]. LLMs now excel at exams and competitions in mathematics, medicine, law, and coding. Yet this paradigm has a crucial vulnerability: scaling laws are effective only when data is abundant. Human reasoning, which relies on logical operations and abstractions rather than brute-force pattern matching on vast data, proves critical in tackling complex frontier domains, where usable data is often inherently scarce.


Homo Ratiocinator (Reckoning Human)

Communications of the ACM

Homo Sapiens, "wise human" in Latin, is the taxonomic species name for modern humans. But observing the current state of the world and its trajectory, it is hard for me to accept the description "wise." I am not the first to object to the "sapiens" descriptor. The French philosopher Henri-Louis Bergson argued in 1911 that a better term would be Homo Faber, referring to human tool-making ability. This ability goes back to early humans, about three million years ago. Most importantly, human tools got better and better due to innovation and cultural transmission.


Computational Natural Philosophy: A Thread from Presocratics through Turing to ChatGPT

Dodig-Crnkovic, Gordana

arXiv.org Artificial Intelligence

Modern computational natural philosophy conceptualizes the universe in terms of information and computation, establishing a framework for the study of cognition and intelligence. Despite some critiques, this computational perspective has significantly influenced our understanding of the natural world, leading to the development of AI systems like ChatGPT based on deep neural networks. Advancements in this domain have been facilitated by interdisciplinary research, integrating knowledge from multiple fields to simulate complex systems. Large Language Models (LLMs), such as ChatGPT, represent this approach's capabilities, utilizing reinforcement learning with human feedback (RLHF). Current research initiatives aim to integrate neural networks with symbolic computing, introducing a new generation of hybrid computational models.


History Of AI In 33 Breakthroughs: The First 'Thinking Machine'

#artificialintelligence

Many histories of AI start with Homer and his description of how the crippled blacksmith god Hephaestus fashioned for himself self-propelled tripods on wheels and "golden" assistants, "in appearance like living young women" who "from the immortal gods learned how to do things." I prefer to stay as close as possible to the notion of "artificial intelligence" in the sense of intelligent humans actually creating, not just imagining, tools, mechanisms, and concepts for assisting our cognitive processes or automating (and imitating) them. In 1308, Catalan poet and theologian Ramon Llull completed Ars generalis ultima (The Ultimate General Art), further perfecting his method of using paper-based mechanical means to create new knowledge from combinations of concepts. Llull devised a system of thought that he wanted to impart to others to assist them in theological debates, among other intellectual pursuits. He wanted to create a universal language using a logical combination of terms.


5 Key Facts About AI: How Long Has It Been Around?

#artificialintelligence

In recent years, the prevalence of artificial intelligence in our everyday lives has increased drastically. We now see such technology in our phones, in cybersecurity, and even in cars. But where did it all begin for AI, and what lies in its future? Well, here are some interesting facts you may not know about artificial intelligence. While the ancient Greeks wrote about "intelligent robots" in religious mythology, artificial intelligence was first conceptualized by Gottfried Wilhelm Leibniz, a German mathematician and philosopher, in the late seventeenth century.


AI Avant-Garde: 3 Pioneering Works That Revolutionised The Field

#artificialintelligence

Modern-day AI is the culmination of ideas from stalwarts spread over centuries. The year 2021 is special in this regard: it marks the 375th birth anniversary of Gottfried Wilhelm Leibniz, the 90th anniversary of Kurt Goedel's groundbreaking 1931 paper, and the 80th anniversary of Konrad Zuse's seminal work. These works laid the foundations for modern-day AI and its algorithms. The significance of this year was first brought to light by Prof. Juergen Schmidhuber, who himself has been responsible for many groundbreaking works in the field of AI. Also known as the world's first computer scientist, Leibniz had a great impact on the field of computing.


From counting with stones to artificial intelligence: the story of calculus

#artificialintelligence

[Figure: Isaac Newton (left) and Gottfried Wilhelm Leibniz, who each independently invented calculus.] Midway through Infinite Powers, Steven Strogatz writes that Isaac Newton and Gottfried Wilhelm Leibniz both "died in excruciating pain while suffering from calculi -- a bladder stone for Newton, a kidney stone for Leibniz". It was a cruelly ironic end for the scientists who independently invented calculus: the word comes from the Latin for 'small stone', in reference to pebbles once used for counting. Such fascinating anecdotes abound in Infinite Powers. Strogatz, a mathematician working in nonlinear dynamics and complex systems, has written a romp through the history of calculus -- the study of how things change. Starting with the ancient Greeks, the book ends with connections between the field and artificial intelligence and machine learning. Calculus was key to working with Newton's laws of motion, which stimulated the Industrial Revolution.


Do We Have Minds of Our Own?

#artificialintelligence

In order to do science, we've had to dismiss the mind. This was, in any case, the bargain that was made in the seventeenth century, when Descartes and Galileo deemed consciousness a subjective phenomenon unfit for empirical study. If the world was to be reducible to physical causation, then all mental experiences--intention, agency, purpose, meaning--must be secondary qualities, inexplicable within the framework of materialism. And so the world was divided in two: mind and matter. This dualistic solution helped to pave the way for the Enlightenment and the technological and scientific advances of the coming centuries.


Aristotle's binary philosophies created today's AI bias

#artificialintelligence

There is no doubt that AIs are biased. But many declare that AI's inequalities exist because we humans are flawed, rather than the machines. "Are machines doomed to inherit human biases?" the headlines read. "Human bias is a huge problem for AI. Here's how we're going to fix it." But these narratives perpetuate a dangerous algorithm-first fallacy that needs to be nixed.