formal theory


Where to Search: Measure the Prior-Structured Search Space of LLM Agents

Song, Zhuo-Yang

arXiv.org Artificial Intelligence

The generate-filter-refine iterative paradigm based on large language models (LLMs) has achieved progress in reasoning, programming, and program discovery in AI+Science. However, the effectiveness of search depends on where to search, namely, how the domain prior is encoded into an operationally structured hypothesis space. To this end, this paper proposes a compact formal theory that describes and measures LLM-assisted iterative search guided by domain priors. We represent an agent as a fuzzy relation operator on inputs and outputs to capture feasible transitions; the agent is thereby constrained by a fixed safety envelope. To describe multi-step reasoning and search, we weight all reachable paths by a single continuation parameter and sum them to obtain a coverage generating function, which induces a measure of reachability difficulty and provides a geometric interpretation of search on the graph induced by the safety envelope. We further derive the simplest testable inferences and validate them via two instantiations. This theory offers a workable language and operational tools for measuring agents and their search spaces, providing a systematic formal description of iterative search constructed with LLMs.
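The path-weighting idea in the abstract can be sketched concretely. Assuming the safety envelope induces a directed graph with adjacency matrix A, the number of length-k paths from s to t is the (s, t) entry of A^k, and a coverage generating function can be formed as the sum of z^k over all such paths. The function name and the truncation below are illustrative assumptions, not the paper's actual definitions:

```python
# Hypothetical sketch of a coverage generating function for search on the
# graph induced by a safety envelope. A single continuation parameter z
# weights each path by z**k, where k is the path length.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def coverage(adj, s, t, z, max_len=20):
    """Sum z**k over all paths of length k from s to t (truncated at max_len).

    The number of length-k paths is the (s, t) entry of adj raised to the k-th power.
    """
    n = len(adj)
    power = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
    total = 0.0
    for k in range(1, max_len + 1):
        power = mat_mul(power, adj)
        total += (z ** k) * power[s][t]
    return total

# Toy safety envelope: 0 -> 1 -> 2, plus a direct shortcut 0 -> 2.
adj = [[0, 1, 1],
       [0, 0, 1],
       [0, 0, 0]]
print(coverage(adj, 0, 2, z=0.5))  # 0.5 (direct edge) + 0.25 (two-step path) = 0.75
```

Under this reading, a target t that is hard to reach contributes little coverage mass for small z, which is one way a "reachability difficulty" measure could be extracted.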


Automated Generation of Massive Reasonable Empirical Theorems by Forward Reasoning Based on Strong Relevant Logics -- A Solution to the Problem of LLM Pre-training Data Exhaustion

Cheng, Jingde

arXiv.org Artificial Intelligence

It has recently and often been said that the data available for pre-training large language models (LLMs) are close to exhaustion. This paper proposes a solution to the problem: the automated generation of massive numbers of reasonable empirical theorems by forward reasoning based on strong relevant logics. This can be regarded as part of our approach to the problems of ATF (Automated Theorem Finding) and AKA (Automated Knowledge Appreciation).
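The general shape of generating new theorems by forward reasoning can be illustrated with a minimal forward-chaining loop. This is only a sketch: strong relevant logics constrain inference far more tightly than the bare modus-ponens-style rule application used here, and the rule format below is an illustrative assumption:

```python
# Minimal forward-chaining sketch: repeatedly apply rules of the form
# (premises -> conclusion) to a fact base until no new "theorems" appear.

def forward_chain(facts, rules, max_rounds=100):
    """Return the closure of `facts` under the given rules.

    Each rule is a (premises, conclusion) pair, where premises is a set of
    atoms that must all be known before the conclusion is derived.
    """
    known = set(facts)
    for _ in range(max_rounds):
        new = {concl for premises, concl in rules
               if all(p in known for p in premises) and concl not in known}
        if not new:
            break  # fixed point reached: nothing further is derivable
        known |= new
    return known

facts = {"P", "Q"}
rules = [({"P"}, "R"), ({"Q", "R"}, "S")]
print(sorted(forward_chain(facts, rules)))  # ['P', 'Q', 'R', 'S']
```

The combinatorial growth of such closures is what makes forward reasoning attractive as a source of large volumes of derived statements; the role of strong relevant logics is to keep the generated theorems "reasonable" rather than trivially valid.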


In the beginning was the code: Juergen Schmidhuber at TEDxUHasselt

#artificialintelligence

The universe seems incredibly complex. But could its rules be dead simple? Juergen Schmidhuber's fascinating story will convince you that this universe and your own life are just by-products of a very simple and fast program computing all logically possible universes. Juergen Schmidhuber is Director of the Swiss Artificial Intelligence Lab IDSIA (since 1995), Professor of Artificial Intelligence at the University of Lugano, Switzerland (since 2009), and Professor at SUPSI (since 2003). He helped to transform IDSIA into one of the world's top ten AI labs (the smallest!), according to the ranking of Business Week Magazine.


Formal Theory of Creativity and Fun and Intrinsic Motivation Explains Science, Art, Music, Humor (Juergen Schmidhuber). Artificial Scientists, Artificial Artists, Developmental Robotics, Curiosity, Attention, Surprise, Novelty, Discovery, Open-Ended Learning, Formal Theory of Beauty, Creating Novel Patterns

#artificialintelligence

How the Theory Explains Humor. Consider the following statement: Biological organisms are driven by the "Four Big F's": Feeding, Fighting, Fleeing, Mating. Some subjective observers who read this for the first time find it funny. As the eyes sequentially scan the text, the brain receives a complex visual input stream. The latter is subjectively partially compressible, as it relates to the observer's previous knowledge about letters and words.


Towards OWL-based Knowledge Representation in Petrology

Shkotin, Alex, Ryakhovsky, Vladimir, Kudryavtsev, Dmitry

arXiv.org Artificial Intelligence

This paper presents our work on development of OWL-driven systems for formal representation and reasoning about terminological knowledge and facts in petrology. The long-term aim of our project is to provide solid foundations for a large-scale integration of various kinds of knowledge, including basic terms, rock classification algorithms, findings and reports. We describe three steps we have taken towards that goal here. First, we develop a semi-automated procedure for transforming a database of igneous rock samples to texts in a controlled natural language (CNL), and then a collection of OWL ontologies. Second, we create an OWL ontology of important petrology terms currently described in natural language thesauri. We describe a prototype of a tool for collecting definitions from domain experts. Third, we present an approach to formalization of current industrial standards for classification of rock samples, which requires linear equations in OWL 2. In conclusion, we discuss a range of opportunities arising from the use of semantic technologies in petrology and outline the future work in this area.


Formalizations of Commonsense Psychology

Gordon, Andrew S., Hobbs, Jerry R.

AI Magazine

The central challenge in commonsense knowledge representation research is to develop content theories that achieve a high degree of both competency and coverage. We describe a new methodology for constructing formal theories in commonsense knowledge domains that complements traditional knowledge representation approaches by first addressing issues of coverage. The concepts elicited in this way are sorted into a manageable number of coherent domains, one of which is the representational area of commonsense human memory. These representational areas are then analyzed using more traditional knowledge representation techniques, as demonstrated in this article by our treatment of commonsense human memory.


Formal theories of knowledge in AI and robotics

Rosenschein, S. J.

Classics

Although the concept of knowledge plays a central role in artificial intelligence, the theoretical foundations of knowledge representation currently rest on a very limited conception of what it means for a machine to know a proposition. In the current view, the machine is regarded as knowing a fact if its state explicitly encodes the fact as a sentence of an interpreted formal language, or if such a sentence can be derived from other encoded sentences according to the rules of an appropriate logical system.


A formal theory of inductive inference

Solomonoff, R. J.

Classics

In Part I, four ostensibly different theoretical models of induction are presented, in which the problem dealt with is the extrapolation of a very long sequence of symbols, presumably containing all of the information to be used in the induction. Almost all, if not all, problems in induction can be put in this form. Some strong heuristic arguments have been obtained for the equivalence of the last three models. One of these models is equivalent to a Bayes formulation, in which a priori probabilities are assigned to sequences of symbols on the basis of the lengths of inputs to a universal Turing machine that are required to produce the sequence of interest as output. Though it seems likely that the first of the four models is equivalent to the other three, this is not certain.
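The length-based prior in the Bayes formulation can be made concrete with a drastically simplified toy machine. Instead of a universal Turing machine, the "machine" below just repeats its program bits forever; the prior mass of a sequence x is the sum of 2^-|p| over programs p whose output begins with x. The function names and the bound on program length are illustrative assumptions:

```python
# Toy illustration of a length-based (Solomonoff-style) prior: shorter
# programs that reproduce a sequence contribute exponentially more mass.
from itertools import product

def toy_output(program, length):
    """Toy 'machine': outputs the program bits repeated forever (truncated)."""
    return [program[i % len(program)] for i in range(length)]

def algorithmic_prior(x, max_prog_len=10):
    """Sum 2**-len(p) over all programs p (up to max_prog_len bits)
    whose output starts with the sequence x."""
    total = 0.0
    for L in range(1, max_prog_len + 1):
        for p in product([0, 1], repeat=L):
            if toy_output(p, len(x)) == list(x):
                total += 2.0 ** -L
    return total

# A highly regular sequence gets more prior mass than a less regular one,
# because a short 2-bit program already generates it:
print(algorithmic_prior([0, 1, 0, 1, 0, 1]))  # 25/64 = 0.390625
print(algorithmic_prior([0, 1, 1, 0, 0, 1]))  # 9/64  = 0.140625
```

Even in this toy setting the key qualitative property survives: sequences with short generating programs dominate the prior, which is the mechanism by which the Bayes formulation favors simple extrapolations.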