If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The planning graph of Blum and Furst is one of the most frequently used tools in planning. It is a data structure that can be visualized as a bipartite graph with state variables and actions as nodes, and that over-approximates (gives an upper bound on) the set of states reachable with a given number of sets of simultaneous actions. We show that the contents of planning graphs follow from two more general notions: extended clause learning restricted to 2-literal clauses, and the representation of parallel plans consisting of STRIPS actions in the classical propositional logic. This is the first time planning graphs have been explained in terms of the inference methods used in SAT solvers. The work helps bridge the gap between specialized algorithms devised for planning and general-purpose algorithms for automated reasoning.
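The over-approximation the abstract mentions can be illustrated with a minimal sketch of the successive fact layers of a delete-relaxed planning graph for STRIPS actions. This is a simplified illustration, not the authors' construction: the `Action` type and function names are hypothetical, and real planning graphs also interleave action layers and track mutex pairs, which tighten the approximation.

```python
from typing import NamedTuple, FrozenSet, List, Set

class Action(NamedTuple):
    """A STRIPS action: precondition, add, and delete lists as fact sets."""
    name: str
    pre: FrozenSet[str]
    add: FrozenSet[str]
    delete: FrozenSet[str]

def fact_layers(actions: List[Action], init: Set[str], depth: int):
    """Compute successive fact layers of a delete-relaxed planning graph.

    Layer k over-approximates the facts reachable after k steps of
    simultaneous actions: delete effects (and mutexes) are ignored,
    so layers only grow and form an upper bound on reachability.
    """
    layers = [frozenset(init)]
    for _ in range(depth):
        facts = layers[-1]
        applicable = [a for a in actions if a.pre <= facts]
        # Union in the add effects of every simultaneously applicable action.
        layers.append(facts.union(*(a.add for a in applicable)))
    return layers
```

Because delete effects are ignored, the layers reach a fixpoint once no new facts appear, which is what makes the structure a cheap reachability bound for a SAT-based planner.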
Agents plan to achieve and maintain goals. Maintenance that requires continuous action excludes the representation of plans as finite sequences of actions. If there is no upper bound on the number of actions, a simple list of actions would be infinitely long. Instead, a compact representation requires some form of looping construct. We look at a specific temporally extended maintenance goal, multiple target video surveillance, and formalize it in Temporal Action Logic. The logic's representation of time as the natural numbers suggests using mathematical induction to deductively plan to satisfy temporally extended goals. Such planning makes use of a sound and useful, but incomplete, induction rule that compactly represents the solution as a recursive fixpoint formula. Two heuristic rules overcome the problem of identifying a sufficiently strong induction hypothesis and enable an automated solution to the surveillance problem that satisfies the goal indefinitely.
Formal action models play a central role in several subfields of AI because they are used to model application domains, e.g., in automated planning. However, there have hitherto been no automated methods for relating such domain models to each other, in particular for checking whether one is a specialization or generalization of the other. In this paper, we introduce two kinds of subsumption relations between operators, both of which are suitable for modeling and verifying hierarchies between actions and operators: applicability subsumption considers an action to be more general than another if the latter can be replaced by the former at each point in each sound sequence of actions; abstraction subsumption exploits relations between actions from an ontological point of view.
Coherence-based approaches are popular for reasoning under inconsistency. Most of them are defined with respect to totally preordered belief bases, such as the lexicographic inference, which is known to have desirable properties from theoretical, practical, and psychological points of view. However, partially preordered belief bases offer much more flexibility to efficiently represent incomplete knowledge and to avoid comparing unrelated pieces of information. In this paper, we propose a lexicographic inference for partially preordered belief bases that extends the classical one. First, we define a natural inference relation which consists in applying classical lexicographic inference from all compatible totally preordered belief bases.
This logic is capable of representing default logic with the advantage of itself being monotonic, with a clearly defined semantics and a separation of the object level and the meta level. The procedure prepares the ground for efficient implementations as it clearly separates the SAT-solving part of the reasoning problem from the modal aspects that are specifically caused by defaults. We sketch an extension of the logic to cover confidence levels and show that the resulting system can accommodate ordered default theories with a prescriptive interpretation of preference between defaults.
The combination of rules and ontologies has a central role in the ongoing development of the Semantic Web. In previous work, autoepistemic logic (AEL) was advocated as a uniform host formalism to study different such combinations, enabling comparisons on a common basis. In this paper, we continue this line of research and investigate different embeddings of major proposals to combine rules and ontologies into first-order autoepistemic logic (FO-AEL). In particular, we present embeddings for dl-programs, r-hybrid knowledge bases, and hybrid MKNF knowledge bases, which are representatives of different combination types. We study the embeddings in the context of FO-AEL under the standard-names assumption, but we also discuss variants using the any- and all-names semantics. Our results provide interesting insights into the properties of the discussed combination formalisms.
We suggest a new representation of defeasible entailment and specificity in the framework of default logic. The representation is based on augmenting the underlying classical language with the language of conditionals having its own (monotonic) internal logic. It is shown, in particular, that nonmonotonic inheritance reasoning can be naturally represented in this framework and generalized to the full classical language. The problem of nonmonotonic, defeasible inference can be seen as the main objective, as well as the main problem, of the general theory of nonmonotonic reasoning. Impressive success has been achieved in our understanding of it, in realizing how complex it is, and, most importantly, how many different forms it may have. Many formalisms have been developed and implemented that capture significant aspects of defeasible inference, though a unified picture has not yet emerged. In this study we will suggest a new representation of defeasible inference in the framework of default logic. As the reader will see, however, the suggested representation will borrow the main insights of many previous approaches to this problem, hopefully without inheriting their shortcomings. A preliminary version of this paper appeared as (Bochman 2007), though in the present study we suggest a somewhat different, more transparent formalization.
To compute a function such as a mapping from vertices to colors in the graph coloring problem, current practice in Answer Set Programming is to represent the function as a relation. Among other things, this often makes the resulting program unnecessarily large when instantiated on a large domain. The extra constraints needed to enforce the relation as a function also make the logic program less transparent. In this paper, we consider adding functions directly to normal logic programs. We show that the answer set semantics can be generalized to these programs straightforwardly. We also show that the notions of loops and loop formulas can be extended, and that through program completion and loop formulas, a normal logic program with functions can be transformed into a constraint satisfaction problem.
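The advantage of a functional representation can be seen in miniature outside ASP. In the sketch below (plain Python, not ASP syntax; the function name and brute-force search are illustrative assumptions), the coloring is carried directly as a function from vertices to colors, so the "exactly one color per vertex" requirement that a relational encoding must enforce with extra constraints is implicit in the data structure itself.

```python
from itertools import product

def color_as_function(vertices, edges, colors):
    """Brute-force graph coloring treated as a CSP whose variable
    assignment IS a function vertex -> color.

    Functionality (each vertex gets exactly one color) is built into
    the representation, so the only constraints left to check are the
    domain constraints: adjacent vertices get different colors.
    """
    for assignment in product(colors, repeat=len(vertices)):
        f = dict(zip(vertices, assignment))
        if all(f[u] != f[v] for u, v in edges):
            return f  # a proper coloring, returned as a function (dict)
    return None  # no proper coloring with the given colors
```

A relational encoding would instead introduce atoms `color(v, c)` and add constraints ruling out zero or two colors per vertex; grounding those constraints over a large domain is exactly the blow-up the abstract refers to.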