"I think the best hope for human-level AI is logical AI, based on the formalizing of commonsense knowledge and reasoning in mathematical logic. Formalizing common sense requires extensions to mathematical logic including nonmonotonic reasoning and extensive reification, e.g., of concepts and also contexts. The reifications require appropriate reflection schemas."
– from "The Future of AI—A Manifesto" by John McCarthy, AI Magazine 26(4), 2005.
Automated reasoning is the general process that gives machine learning algorithms an organized framework for defining, approaching, and solving problems. More a field of research than a specific technique in itself, automated reasoning underpins many machine learning practices, such as logic programming, fuzzy logic, Bayesian inference, and maximum-entropy reasoning. The ultimate goal is to create deep learning systems that can mimic human deduction without human intervention.
Note: This blog post should have been named "bitwise operators", and yes, I will correct it after some time; renaming it now would break the link and leave many readers with a 404 page. Okay, today let's look at Boolean algebra with Julia. In Julia, something true is represented by the constant true and something false is represented by the constant false. These two constants are built into Julia, and you can't use them as variable names: an assignment like true = 1 will make Julia throw an error.
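As a quick illustration of the above (a minimal sketch; the operator examples are my own additions, not from the original post):

```julia
# The two built-in Boolean constants
x = true
y = false

println(typeof(x))   # Bool -- both constants have type Bool
println(x & y)       # false (AND)
println(x | y)       # true  (OR)
println(!x)          # false (NOT)
println(xor(x, y))   # true  (exclusive OR)

# `true` and `false` are reserved words, so `true = 1` is a syntax error.
```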
One of the fastest advancing areas of modern science is functional genomics. This science seeks to understand how the complete complement of molecular components of living organisms (nucleic acids, proteins, small molecules, and so on) interact to form living organisms. Functional genomics is of interest to AI because the relationship between machines and living organisms is central to AI and because the field is an instructive and fun domain in which to apply and sharpen AI tools and ideas, requiring complex knowledge representation, reasoning, learning, and so on. This article describes two machine learning (inductive logic programming [ILP]) based approaches to the bioinformatic problem of predicting protein function from amino acid sequence. The first approach uses ILP as a way of bootstrapping from conventional sequence-based homology methods.
Doug Lenat has worked in diverse parts of AI – natural language understanding and generation, automatic program synthesis, expert systems, machine learning, etc. – for going on 40 years now, just long enough to dare to write this article. His 1976 Stanford PhD thesis, AM, demonstrated that creative discoveries in mathematics could be produced by a computer program (a theorem proposer, rather than a theorem prover) guided by a corpus of hundreds of heuristic rules for deciding which experiments to perform and judging "interestingness" of their outcomes. That work earned him the IJCAI Computers and Thought Award, and sparked a renaissance in machine learning research. Dr. Lenat was on the CS faculty at CMU and Stanford, was one of the founders of Teknowledge, and was in the first batch of AAAI Fellows. He worked with Bill Gates and Nathan Myhrvold to launch Microsoft Research Labs, and to this day he remains the only person to have served on the technical advisory boards of both Apple and Microsoft.
This book represents a selection of papers presented at the Inductive Logic Programming (ILP) workshop held at Cumberland Lodge in Windsor Great Park. The collection marks two decades since the first ILP workshop in 1991. During this period the workshop has developed into the main forum for work on logic-based machine learning. The chapters cover a wide variety of topics, ranging from theory and ILP implementations to state-of-the-art applications in real-world domains. The international contributors represent leaders in the field from prestigious institutions in Europe, North America and Asia.
Many systems are naturally modeled as Markov Decision Processes (MDPs), combining probabilities and strategic actions. Given a model of a system as an MDP and some logical specification of system behavior, the goal of synthesis is to find a policy that maximizes the probability of achieving this behavior. A popular choice for defining behaviors is Linear Temporal Logic (LTL). Policy synthesis on MDPs for properties specified in LTL has been well studied. LTL, however, is defined over infinite traces, while many properties of interest are inherently finite. Linear Temporal Logic over finite traces (LTLf) has been used to express such properties, but no tools exist to solve policy synthesis for MDP behaviors given finite-trace properties. We present two algorithms for solving this synthesis problem: the first via reduction of LTLf to LTL and the second using native tools for LTLf. We compare the scalability of the two approaches and show that the native approach scales better than existing automaton-generation tools for LTL.
We consider weighted structures, which extend ordinary relational structures by assigning weights, i.e., elements from a particular group or ring, to tuples present in the structure. We introduce an extension of first-order logic that makes it possible to aggregate the weights of tuples, compare such aggregates, and use them to build more complex formulas. We provide locality properties of fragments of this logic, including Feferman-Vaught decompositions and a Gaifman normal form for a fragment called FOW1, as well as a localisation theorem for a larger fragment called FOWA1. This fragment can express concepts from various machine learning scenarios. Using the locality properties, we show that concepts definable in FOWA1 over a weighted background structure of at most polylogarithmic degree are agnostically PAC-learnable in polylogarithmic time after pseudo-linear time preprocessing.
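To give a flavour of what such aggregate comparisons look like, a schematic formula in the spirit of this logic might read as follows (an illustration only; the actual syntax of FOW1/FOWA1 is defined in the paper and may differ):

```latex
% Schematic: "the total weight of x's E-neighbours is at least
% the total weight of its R-neighbours"
\varphi(x) \;\equiv\; \sum_{y \,:\, E(x,y)} w(y) \;\ge\; \sum_{y \,:\, R(x,y)} w(y)
```

Here $w$ assigns weights to tuples, and the formula compares two aggregates built by summing weights over definable sets of neighbours.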
Autonomous Intelligent Systems are designed to reduce the need for human intervention in our daily lives. However, the full benefit of these new systems will be attained only if they are aligned with society's values and ethical principles. Adopting ethical approaches to building such systems has been attracting a lot of attention in recent years. The global concern about the ethical behavior of this kind of technology has manifested in many initiatives at different levels. As examples, we mention: the IEEE initiative for ethically aligned design of autonomous intelligent systems ('Ethics in Action'
We study a dynamic version of the multi-agent path finding problem (D-MAPF), where existing agents may leave and new agents may join the team at different times. We introduce a new method to solve D-MAPF based on conflict resolution. The idea is that, when a set of new agents joins the team and conflicts arise, instead of replanning for the whole team, we replan only for a minimal subset of agents whose plans conflict with each other. We utilize answer set programming as part of our method for planning, replanning, and identifying a minimal set of conflicting agents.
Answer set programming (ASP) is a well-established knowledge representation formalism. Most ASP solvers are based on (extensions of) technology from Boolean satisfiability solving. While these solvers have proven very successful in many practical applications, their strength is limited by their underlying proof system, resolution. In this paper, we present a new tool, LP2PB, that translates ASP programs into pseudo-Boolean theories, for which solvers based on the (stronger) cutting plane proof system exist. We evaluate our tool, and the potential of cutting-plane-based solving for ASP, on traditional ASP benchmarks as well as benchmarks from pseudo-Boolean solving. Our results are mixed: overall, traditional ASP solvers still outperform our translational approach, but several benchmark families are identified where the balance shifts the other way, suggesting that further investigation into stronger proof systems for ASP is worthwhile.