Approaches to goal-directed behaviour, including online planning and opportunistic planning, tackle changes in the environment by generating alternative goals to avoid failures or seize opportunities. However, current approaches only address unanticipated changes related to objects or object types already defined in the planning task being solved. This article describes a domain-independent approach that advances the state of the art by extending the knowledge of a planning task with relevant objects of new types. The approach draws upon ontologies, semantic measures, and ontology alignment to accommodate newly acquired data that trigger the formulation of goal opportunities that induce a better-valued plan.
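The alignment step described above can be illustrated with a minimal sketch. Everything here is hypothetical: a toy is-a taxonomy and a simple path-based semantic similarity are assumed in place of the full ontology machinery, and the type names are invented for illustration.

```python
# Hypothetical sketch: mapping a newly observed object type onto a known
# planning-task type via a path-based semantic similarity over a toy taxonomy.

# Toy ontology: child -> parent (a tiny is-a taxonomy; illustrative only)
TAXONOMY = {
    "vehicle": None,
    "car": "vehicle",
    "truck": "vehicle",
    "ambulance": "car",
    "fire_truck": "truck",
}

def ancestors(concept):
    """Return the list of concepts from `concept` up to the root."""
    chain = []
    while concept is not None:
        chain.append(concept)
        concept = TAXONOMY[concept]
    return chain

def path_similarity(a, b):
    """Simple path-based measure: 1 / (1 + edges between a and b)."""
    anc_a, anc_b = ancestors(a), ancestors(b)
    common = next(c for c in anc_a if c in anc_b)  # lowest common ancestor
    dist = anc_a.index(common) + anc_b.index(common)
    return 1.0 / (1.0 + dist)

def align(new_type, known_types, threshold=0.3):
    """Pick the best-matching known type, or None if nothing is close enough."""
    best = max(known_types, key=lambda k: path_similarity(new_type, k))
    return best if path_similarity(new_type, best) >= threshold else None

# A sensed 'ambulance' is aligned with the known 'car' type, so the
# planner can formulate a goal opportunity involving the new object.
print(align("ambulance", ["car", "truck"]))  # -> car
```

In a real system the threshold and similarity measure would be chosen per domain; the point is only that an alignment decision gates whether a newly typed object enters the planning task.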
A mismatch between the real world and an agent's representation of it can be signalled by unexpected failures (or successes) of the agent's reasoning. The 'real world' may include the ontologies of other agents. Such mismatches can be repaired by refining or abstracting an agent's ontology. These refinements or abstractions may not be limited to changes of belief, but may also change the signature of the agent's ontology.
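A signature change of this kind can be sketched concretely. The sketch below is an assumption-laden toy, not the paper's mechanism: the ontology is reduced to a set of concept names plus typed facts, and an unexpected plan failure triggers splitting one concept into two subconcepts, which alters the signature itself rather than just the beliefs.

```python
# Hypothetical sketch: repairing a representation mismatch by refining the
# ontology's signature -- splitting one concept into two when an unexpected
# failure shows the original concept was too coarse. Names are illustrative.

def refine_concept(signature, facts, concept, discriminator):
    """Split `concept` into two subconcepts according to `discriminator`,
    returning an updated (signature, facts) pair."""
    pos, neg = f"{concept}_{discriminator}", f"{concept}_not_{discriminator}"
    new_sig = (signature - {concept}) | {pos, neg}
    new_facts = {
        (obj, pos if discriminator in props else neg): props
        for (obj, c), props in facts.items() if c == concept
    }
    return new_sig, new_facts

signature = {"door"}
facts = {("d1", "door"): {"locked"}, ("d2", "door"): set()}

# The agent's plan failed on a locked door: refine 'door' so the two
# cases become distinguishable in the signature.
sig2, facts2 = refine_concept(signature, facts, "door", "locked")
print(sorted(sig2))  # -> ['door_locked', 'door_not_locked']
```

Abstraction would run the same move in reverse, merging subconcepts whose distinction no longer earns its keep.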
We present a semantically-driven approach to uncertainties within and across ontologies. Ontologies are widely used not only by the Semantic Web but also by artificial systems in general. They represent and structure a domain with respect to its semantics. Uncertainties, however, have rarely been taken into account in ontological representation, even though they are inevitable when applying ontologies in 'real world' applications. In this paper, we analyze why uncertainties are necessary for ontologies, how and where uncertainties have to be represented in ontologies, and what their semantics are. In particular, we investigate which ontology constructions need to address uncertainty issues and which ontology constructions should not be affected by uncertainties, on the basis of their semantics. As a result, the use of uncertainties is restricted to appropriate cases, which reduces complexity and guides ontology development. We give examples and motivation from the field of spatially-aware systems in indoor environments.
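The distinction between constructions that carry uncertainty and those that should not can be sketched as follows. This is a minimal illustration under assumed names: taxonomic (is-a) axioms are kept crisp, while assertions about observed individuals, e.g. from a noisy indoor-positioning sensor, carry a degree of belief.

```python
# Hypothetical sketch: terminological axioms stay crisp, while assertional
# statements about observed individuals carry a belief in [0, 1].

# Crisp taxonomy: subclass -> superclass (not subject to uncertainty)
TBOX = {"office": "room", "corridor": "room"}

# Uncertain assertions: (individual, concept) -> degree of belief,
# e.g. produced by a noisy indoor-localization sensor.
ABOX = {("loc42", "office"): 0.75, ("loc42", "corridor"): 0.25}

def belief_in(individual, concept):
    """Belief that `individual` instantiates `concept`, propagated crisply
    up the taxonomy (a superclass collects the belief of its disjoint
    direct subclasses)."""
    direct = ABOX.get((individual, concept), 0.0)
    inherited = sum(
        b for (ind, c), b in ABOX.items()
        if ind == individual and TBOX.get(c) == concept
    )
    return direct + inherited

print(belief_in("loc42", "office"))  # -> 0.75
print(belief_in("loc42", "room"))    # -> 1.0
```

Restricting uncertainty to the assertional level, as in this toy, is one way the complexity reduction claimed above can play out: the taxonomy itself never needs probabilistic reasoning.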
Ontologies provide useful technology for organizing and managing large-scale knowledge bases and enabling interoperability in heterogeneous agent environments. However, autonomous systems require not only large knowledge bases and knowledge sharing; they also require efficient run-time performance. In agents optimized for performance, control structures and domain knowledge are often intertwined, resulting in fast execution but knowledge bases that are brittle and scale poorly. Our hypothesis is that combining ontology representations and tools with agents optimized for performance will capitalize on the strengths of the individual approaches and reduce their weaknesses. Our strategy is to use automatic translators that convert ontological representations to agent representations, hand-coded agent knowledge for ontological inference, and explanation-based learning to cache ontological inferences. The paper outlines the rationale for this approach and the design decisions and trade-offs encountered. We also discuss criteria for evaluating success and understanding the consequences of design decisions on agent performance and knowledge base manageability.
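One ingredient of that strategy, caching ontological inferences so the agent pays the reasoning cost only once, can be sketched in a few lines. The taxonomy and the memoization mechanism here are assumptions for illustration; they stand in for the translator output and the explanation-based learning component, respectively.

```python
# Hypothetical sketch: caching the result of an ontological inference
# (here, a transitive subsumption check) so repeated run-time queries
# are answered from the cache. Class names are illustrative.

from functools import lru_cache

# subclass -> superclass links, as might be produced by a translator
# from an ontology language into the agent's internal representation
SUPERCLASS = {"ambulance": "car", "car": "vehicle", "vehicle": "thing"}

@lru_cache(maxsize=None)
def is_a(sub, sup):
    """Transitive subsumption; results are memoized after the first query."""
    if sub == sup:
        return True
    parent = SUPERCLASS.get(sub)
    return parent is not None and is_a(parent, sup)

print(is_a("ambulance", "vehicle"))  # -> True
print(is_a("vehicle", "ambulance"))  # -> False
```

The trade-off mirrors the one in the abstract: the cache buys run-time speed at the cost of a second, derived store that must be kept consistent when the underlying ontology changes.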