Logic & Formal Reasoning


Knowledge Graphs

Communications of the ACM

The 1980s saw computing evolve as it moved from industry into homes with the boom of personal computers. In the field of data management, the relational database industry was developing rapidly (Oracle, Sybase, and IBM, among others). Object-oriented abstractions were developed as a new form of representational independence. The Internet changed the way people communicated and exchanged information.


Edmund M. Clarke (1945–2020)

Communications of the ACM

Edmund Melson Clarke, Jr., a celebrated American academic who developed methods for mathematically proving the correctness of computer systems, died on December 22, 2020, at the age of 75 from complications of COVID-19. Clarke was awarded the A.M. Turing Award in 2008, together with his former student E. Allen Emerson and the French computer scientist Joseph Sifakis, for their work on model checking. "I've never liked to fly, although I've done my share of it. I wanted to do something that would make systems like airplanes safer," Clarke said in a video produced by the Franklin Institute when it presented him with its 2014 Bower Award and Prize for Achievement in Science, which cited "his leading role in the conception and development of techniques for automatically verifying the correctness of a broad array of computer systems, including those found in transportation, communications, and medicine." Model checking is a practical approach to machine verification of mathematical models of hardware, software, communications protocols, and other complex computing systems.
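
The core loop of an explicit-state model checker can be sketched in a few lines of Python. The function and toy model below are illustrative inventions checking only a simple invariant; the techniques Clarke pioneered also handle temporal-logic specifications and use symbolic representations to cope with very large state spaces.

    from collections import deque

    def check_invariant(initial_states, successors, invariant):
        # Breadth-first exploration of all reachable states; returns
        # (True, None) if the invariant holds everywhere, otherwise
        # (False, counterexample_path) -- the trace a model checker
        # reports to the user when verification fails.
        frontier = deque((s, [s]) for s in initial_states)
        visited = set(initial_states)
        while frontier:
            state, path = frontier.popleft()
            if not invariant(state):
                return False, path
            for nxt in successors(state):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [nxt]))
        return True, None

    # Toy model: a counter that is supposed never to exceed 3.
    print(check_invariant(
        initial_states=[0],
        successors=lambda s: [s + 1] if s < 5 else [],
        invariant=lambda s: s <= 3,
    ))  # (False, [0, 1, 2, 3, 4])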


Paraconsistent Foundations for Quantum Probability

arXiv.org Artificial Intelligence

The mathematics of quantum mechanics has been viewed and analyzed from a huge variety of different perspectives, each shedding light on different subtleties of its underlying structure and its connection to our everyday reality. Here we add an additional thread to this conceptual polyphony, demonstrating a close connection between fuzzy paraconsistent logic and quantum probabilities. This connection suggests new variations on existing interpretations of quantum reality and measurement. It also provides some tantalizing connections between the probabilistic and fuzzy logic used in modern AI systems and quantum probabilistic reasoning, which may have implications for quantum-computing implementations of logical-inference-based AI. The ideas here arose as a spinoff from the work reported in [Goe21], which uses a variant of paraconsistent intuitionistic logic called Constructible Duality (CD) Logic as a means for giving a rigorous logical foundation to the PLN (Probabilistic Logic Networks) logic [GIGH08] that has been used in the OpenCog AI project [GPG13a, GPG13b] for well over a decade now.
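
As a rough, self-contained illustration (not the paper's formalism), a fuzzy paraconsistent truth value can be modeled as an independent pair of degrees of evidence for a statement and for its negation; the PTruth class and the min/max combination rules below are assumptions chosen for simplicity.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PTruth:
        # Independent degrees of evidence for a proposition (t) and for its
        # negation (f); they need not sum to 1, so a statement can be both
        # partly true and partly false (a glut) or neither (a gap).
        t: float
        f: float

        def neg(self):
            # Negation swaps the evidence for and against.
            return PTruth(self.f, self.t)

        def conj(self, other):
            # Fuzzy-style combination: min for truth degrees, max for falsity.
            return PTruth(min(self.t, other.t), max(self.f, other.f))

        def disj(self, other):
            return PTruth(max(self.t, other.t), min(self.f, other.f))

    a = PTruth(0.7, 0.4)          # contradictory evidence: t + f > 1
    print(a.conj(a.neg()))        # PTruth(t=0.4, f=0.7): not collapsed to "false"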


HySTER: A Hybrid Spatio-Temporal Event Reasoner

arXiv.org Artificial Intelligence

The task of Video Question Answering (VideoQA) consists of answering natural-language questions about a video and serves as a proxy for evaluating a model's scene-sequence understanding. Most methods designed for VideoQA to date are end-to-end deep learning architectures, which struggle with complex temporal and causal reasoning and offer limited transparency in their reasoning steps. We present HySTER, a Hybrid Spatio-Temporal Event Reasoner for reasoning over physical events in videos. Our model combines the strength of deep learning methods at extracting information from video frames with the reasoning capabilities and explainability of symbolic artificial intelligence in an answer set programming framework. We define a method based on general temporal, causal, and physics rules that can be transferred across tasks. We apply our model to the CLEVRER dataset and demonstrate state-of-the-art results in question answering accuracy. This work lays the foundations for incorporating inductive logic programming into the field of VideoQA.
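
The flavor of the rule-based reasoning step can be sketched as follows; the event tuples and the single collision rule are hypothetical stand-ins for the perception outputs and answer set programming rules the paper actually uses.

    # Hypothetical per-frame event tuples a perception module might extract:
    # (predicate, object_a, object_b_or_None, frame_index)
    events = {
        ("moving",  "cube",   None,     10),
        ("contact", "cube",   "sphere", 10),
        ("moving",  "sphere", None,     12),
    }

    # A general causal rule in the spirit of the paper's physics rules:
    # a collision at frame t is inferred when a moving object is in contact
    # with another object at the same frame.
    collisions = {
        ("collision", a, b, t)
        for (p1, a, _, t) in events if p1 == "moving"
        for (p2, a2, b, t2) in events
        if p2 == "contact" and a2 == a and t2 == t
    }

    # Temporal rule: the collision is proposed as a cause of any later motion
    # of the second object involved.
    caused = {
        (col, ("moving", obj, None, t2))
        for col in collisions
        for (p, obj, _, t2) in events
        if p == "moving" and obj == col[2] and t2 > col[3]
    }
    print(collisions)   # {('collision', 'cube', 'sphere', 10)}
    print(caused)       # links the collision to the sphere's motion at frame 12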


Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation

arXiv.org Artificial Intelligence

It is argued that 4-valued paraconsistent truth values (called here "p-bits") can serve as a conceptual, mathematical and practical foundation for highly AI-relevant forms of probabilistic logic, probabilistic programming and concept formation. First it is shown that appropriate averaging-across-situations and renormalization of 4-valued p-bits operating in accordance with Constructible Duality (CD) logic yields PLN (Probabilistic Logic Networks) strength-and-confidence truth values. Then variations on the Curry-Howard correspondence are used to map these paraconsistent and probabilistic logics into probabilistic types suitable for use within dependently typed programming languages. Zach Weber's paraconsistent analysis of the sorites paradox is extended to form a paraconsistent / probabilistic / fuzzy analysis of concept boundaries; and a paraconsistent version of concept formation via Formal Concept Analysis is presented, building on a definition of fuzzy property-value degrees in terms of relative entropy on paraconsistent probability distributions. These general points are fleshed out via reference to the realization of probabilistic reasoning, programming and concept formation in the OpenCog AGI framework, which is centered on collaborative multi-algorithm updating of a common knowledge metagraph.
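
A toy numerical sketch of the first step may help; the (t, f) pairs, the evidence-pooling rule, and the personality parameter K below are illustrative assumptions, not the paper's exact averaging and renormalization.

    K = 10.0   # assumed "personality" parameter: how much evidence counts as a lot

    def pln_truth_value(situations):
        # situations: one paraconsistent (t, f) evidence pair per situation.
        # Pooling the evidence across situations and renormalizing yields a
        # PLN-style strength-and-confidence pair.
        total_t = sum(t for t, _ in situations)
        total_f = sum(f for _, f in situations)
        evidence = total_t + total_f
        strength = total_t / evidence if evidence else 0.5
        confidence = evidence / (evidence + K)   # more evidence, more confidence
        return strength, confidence

    print(pln_truth_value([(0.9, 0.1), (0.7, 0.4), (0.8, 0.0)]))
    # roughly (0.83, 0.22): mostly true, but held with modest confidence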


Top Program Construction and Reduction for polynomial time Meta-Interpretive Learning

arXiv.org Artificial Intelligence

Meta-Interpretive Learners, like most ILP systems, learn by searching for a correct hypothesis in the hypothesis space, the powerset of the set of all constructible clauses. We show how this exponentially growing search can be replaced by the construction of a Top program: the set of clauses in all correct hypotheses, which is itself a correct hypothesis. We give an algorithm for Top program construction and show that it constructs a correct Top program in polynomial time and from a finite number of examples. We implement our algorithm in Prolog as the basis of a new MIL system, Louise, that constructs a Top program and then reduces it by removing redundant clauses. We compare Louise to the state-of-the-art search-based MIL system Metagol in experiments on grid world navigation, graph connectedness, and grammar learning datasets, and find that Louise improves on Metagol's predictive accuracy when the hypothesis space and the target theory are both large, or when the hypothesis space does not include a correct hypothesis because of "classification noise" in the form of mislabelled examples. When the hypothesis space or the target theory is small, Louise and Metagol perform equally well.
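
A propositional-flavoured toy version of the construction and reduction steps is sketched below; Louise itself works on first-order clauses generated from metarules and reduces the Top program by program reduction, so the example predicates, coverage sets, and subset-based reduction here are simplifying assumptions.

    positives = {2, 4, 6, 8}
    negatives = {1, 3, 5}

    # Hypothetical candidate clauses, each represented only by the set of
    # examples it covers.
    candidates = {
        "even(X) :- divisible(X, 2)": {2, 4, 6, 8},
        "even(X) :- divisible(X, 4)": {4, 8},
        "even(X) :- greater(X, 5)":   {6, 8, 5},   # covers a negative: rejected
    }

    # Construction: keep every clause that covers at least one positive
    # example and no negative example; their union is the Top program.
    top_program = {c: cov for c, cov in candidates.items()
                   if cov & positives and not cov & negatives}

    # Reduction: drop clauses whose coverage is already provided by the rest.
    reduced = dict(top_program)
    for clause in list(reduced):
        others = set().union(*[cov for c, cov in reduced.items() if c != clause])
        if reduced[clause] <= others:
            del reduced[clause]

    print(list(top_program))   # both "divisible" clauses survive construction
    print(list(reduced))       # only the divisible-by-2 clause remains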


Model-Based Machine Learning for Communications

arXiv.org Machine Learning

Traditional communication systems design is dominated by methods that are based on statistical models. These statistical-model-based algorithms, which we refer to henceforth as model-based methods, rely on mathematical models that describe the transmission process, signal propagation, receiver noise, interference, and many other components of the system that affect end-to-end signal transmission and reception. Such mathematical models use parameters that vary over time as the channel conditions, the environment, network traffic, or network topology change. Therefore, for optimal operation, many of the algorithms used in communication systems rely on the underlying mathematical models as well as the estimation of the model parameters. However, there are cases where this approach fails, in particular when the mathematical models for one or more of the system components are highly complex, hard to estimate, poorly understood, do not capture the underlying physics of the system well, or do not lend themselves to computationally efficient algorithms.
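
A canonical example of such a model-based method is least-squares channel estimation followed by equalization, sketched below under an assumed flat-fading linear Gaussian channel; the pilot values and noise level are arbitrary illustrations.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed mathematical model: flat-fading channel y = h * x + n with
    # additive Gaussian noise. The receiver depends on this model and on
    # estimating its single parameter h from known pilot symbols.
    h_true = 0.8 - 0.3j
    pilots = np.array([1, -1, 1, 1, -1], dtype=complex)
    noise = 0.05 * (rng.standard_normal(5) + 1j * rng.standard_normal(5))
    rx_pilots = h_true * pilots + noise

    # Least-squares estimate of the channel parameter.
    h_hat = (rx_pilots @ pilots.conj()) / (pilots @ pilots.conj())

    # One-tap equalization of a data symbol; accurate only while the assumed
    # linear model matches the real channel.
    tx_data = -1 + 0j
    rx_data = h_true * tx_data + 0.05 * (rng.standard_normal() + 1j * rng.standard_normal())
    print(rx_data / h_hat)   # approximately -1+0j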


Semantic Modeling with SUMO

arXiv.org Artificial Intelligence

Abstract: We explore using the Suggested Upper Merged Ontology (SUMO) to develop a semantic simulation. We provide two proof-of-concept demonstrations modeling transitions in a simulated gasoline engine using a general-purpose programming language. Rather than focusing on computationally highly intensive techniques, we explore a less computationally intensive approach related to familiar software engineering testing procedures. In addition, we propose structured representations of terms based on linguistic approaches to lexicography.

Keywords: Definitions, Description Logic, Model-Checking, Model-Level, Rules, Semantic Simulation, Transitionals, Truth Maintenance

1 Introduction

We believe knowledge representation should be fully integrated with programming languages. Therefore, we are exploring the implementation of dynamic semantic simulations based on ontologies using a general-purpose programming language (cf. [4]). These simulations allow model-level constructs such as flows, states, transitions, microworlds, generalizations, and causation, and language features such as conditionals, threads, and looping. In this paper, we provide initial demonstrations of how the Suggested Upper Merged Ontology (SUMO) can be applied to Python-based semantic modeling. SUMO has both a rich ontology and a sophisticated inference environment built to use first-order predicate calculus [9, 15, 16, 25, 27, 28].
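
As a minimal sketch of what such a Python-based state/transition microworld might look like (the class, state names, and checking routine below are our own illustrative assumptions, not the paper's code):

    # Illustrative four-stroke engine microworld: states and a one-step
    # transition relation, with a lightweight consistency check in the
    # spirit of ordinary software testing.
    STATES = ["intake", "compression", "combustion", "exhaust"]
    TRANSITIONS = {s: STATES[(i + 1) % len(STATES)] for i, s in enumerate(STATES)}

    class FourStrokeCycle:
        def __init__(self):
            self.state = "intake"

        def step(self):
            # Fire the single enabled transition of the cycle.
            self.state = TRANSITIONS[self.state]
            return self.state

        def check(self):
            # The simulated state must always be a legal term of the model.
            assert self.state in STATES

    engine = FourStrokeCycle()
    for _ in range(5):
        engine.step()
        engine.check()
    print(engine.state)   # "compression" after five steps from "intake"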


On the Decomposition of Abstract Dialectical Frameworks and the Complexity of Naive-based Semantics

Journal of Artificial Intelligence Research

Abstract dialectical frameworks (ADFs) are a recently introduced, powerful generalization of Dung’s popular abstract argumentation frameworks (AFs). Inspired by similar work for AFs, we introduce a decomposition scheme for ADFs that proceeds along the ADF’s strongly connected components. We find that, for several semantics, the decomposition-based version coincides with the original semantics, whereas for others it gives rise to a new semantics. These new semantics allow us to deal with pertinent problems such as odd-length negative cycles in a more general setting that, for instance, also encompasses logic programs. We perform an exhaustive analysis of the computational complexity of these new, so-called naive-based semantics. The results are quite interesting, as some of them involve little-known classes of the so-called Boolean hierarchy (another hierarchy interleaved with the classes of the polynomial hierarchy). Furthermore, for credulous and sceptical entailment, the complexity can differ depending on whether we check for the truth or the falsity of a specific statement.
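
For readers unfamiliar with naive semantics, the brute-force sketch below computes the naive extensions (subset-maximal conflict-free sets) of a tiny Dung-style AF; the argument and attack sets are arbitrary, and the ADF setting of the paper generalizes this picture to arbitrary acceptance conditions with an SCC-based decomposition.

    from itertools import combinations

    arguments = {"a", "b", "c"}
    attacks = {("a", "b"), ("b", "a"), ("b", "c")}   # an even cycle plus one attack

    def conflict_free(s):
        return not any((x, y) in attacks for x in s for y in s)

    # Enumerate all conflict-free subsets, then keep the subset-maximal ones.
    candidates = [set(c) for r in range(len(arguments) + 1)
                  for c in combinations(sorted(arguments), r)
                  if conflict_free(set(c))]
    naive = [s for s in candidates if not any(s < t for t in candidates)]
    print(naive)   # the naive extensions: {'b'} and {'a', 'c'}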


Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead

arXiv.org Artificial Intelligence

Machine Learning (ML) techniques have been rapidly adopted by smart Cyber-Physical Systems (CPS) and the Internet of Things (IoT) due to their powerful decision-making capabilities. However, they are vulnerable to various security and reliability threats, at both the hardware and software levels, that compromise their accuracy. These threats are aggravated in emerging edge ML devices, which have stringent resource constraints (e.g., compute, memory, power/energy) and therefore cannot employ costly security and reliability measures. Security, reliability, and vulnerability mitigation techniques span from network security measures to hardware protection, with increasing interest in formal verification of trained ML models. This paper summarizes the prominent vulnerabilities of modern ML systems; highlights successful defenses and mitigation techniques against them, both at the cloud (i.e., during the ML training phase) and at the edge (i.e., during the ML inference stage); discusses the implications of resource-constrained design for the reliability and security of the system; identifies verification methodologies to ensure correct system behavior; and describes open research challenges for building secure and reliable ML systems at both the edge and the cloud.