
Belief Revision

The importance of intent recognition in speech tech for kids


Intent recognition is the natural language understanding (NLU) task of determining what general goal a user is trying to accomplish (e.g., finding out the weather forecast, booking a table at a restaurant, or adding a song to a playlist). What's tricky is that there are many ways users may express an intent. For example, "Turn on the light" and "It's too dark in here; make it brighter" are just two of a plethora of ways of expressing the same "Light on" intent to a smart home device, but the two utterances are completely different on the surface in terms of syntax and vocabulary. A good intent recognizer should map both of those utterances to the same intent. More generally, a well-trained recognizer can account for the many ways people may express their goals in natural language and map them to the correct intent, which then triggers an action or response.
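As a toy illustration (not tied to the article above or to any particular product), intent recognition can be framed as mapping an utterance to the nearest intent "prototype". The intent names and training utterances below are invented; real systems use learned semantic representations precisely because surface-level word overlap fails on paraphrases like "It's too dark in here":

```python
from collections import Counter
import math

# Hypothetical training set: several surface forms per intent.
TRAINING = {
    "light_on": ["turn on the light", "it's too dark in here make it brighter",
                 "lights please", "switch the lamp on"],
    "weather": ["what's the weather", "will it rain today", "forecast for tomorrow"],
    "play_music": ["play some jazz", "add this song to my playlist", "put on music"],
}

def bow(text):
    """Bag-of-words representation of an utterance."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# One aggregate bag-of-words "prototype" per intent.
PROTOTYPES = {intent: sum((bow(u) for u in utts), Counter())
              for intent, utts in TRAINING.items()}

def recognize(utterance):
    """Map an utterance to the intent whose prototype it is most similar to."""
    scores = {i: cosine(bow(utterance), p) for i, p in PROTOTYPES.items()}
    return max(scores, key=scores.get)

print(recognize("turn the light on"))        # -> light_on
print(recognize("make it brighter please"))  # -> light_on
```

The second query matches "light_on" only because a paraphrase happened to be in the training data; that brittleness is exactly why production recognizers train statistical models over many example utterances per intent.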

Extended Goal Recognition: Lessons from Magic


The “science of magic” has lately emerged as a new field of study, providing valuable insights into the nature of human perception and cognition. While most of us think of magic as being all about deception and perceptual “tricks”, the craft—as documented by psychologists and professional magicians—provides a rare practical demonstration and understanding of goal recognition. For the purposes of human-aware planning, goal recognition involves predicting what a human observer is most likely to understand from a sequence of actions. Magicians perform sequences of actions with keen awareness of what an audience will understand from them and—in order to subvert it—the ability to predict precisely what an observer’s expectation is most likely to be. Magicians can do this without needing to know any personal details about their audience and without making any significant modification to their routine from one performance to the next. That is, the actions they perform are reliably interpreted by any human observer in such a way that particular (albeit erroneous) goals are predicted every time. This is achievable because people’s perception, cognition and sense-making are predictably fallible. Moreover, in the context of magic, the principles underlying human fallibility are not only well-articulated but empirically proven. In recent work we demonstrated how aspects of human cognition could be incorporated into a standard model of goal recognition, showing that—even though phenome...

Conditional Inference and Activation of Knowledge Entities in ACT-R Artificial Intelligence

Activation-based conditional inference applies conditional reasoning to ACT-R, a cognitive architecture developed to formalize human reasoning. The idea of activation-based conditional inference is to determine a reasonable subset of a conditional belief base in order to draw inductive inferences in time. Central to activation-based conditional inference is the activation function which assigns to the conditionals in the belief base a degree of activation mainly based on the conditional's relevance for the current query and its usage history.
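A rough sketch of what such an activation function might look like (the formulas, weights, and belief base below are illustrative assumptions, not the paper's actual definitions): a conditional's activation combines an ACT-R-style base-level term, which decays with time since past usage, with a toy relevance term measuring atom overlap with the current query:

```python
import math

def base_level(usage_times, now, d=0.5):
    """ACT-R-style base-level activation: ln(sum over past uses of (now - t)^-d)."""
    lags = [now - t for t in usage_times if now > t]
    return math.log(sum(lag ** -d for lag in lags)) if lags else float("-inf")

def relevance(conditional_atoms, query_atoms):
    """Toy relevance: fraction of the conditional's atoms shared with the query."""
    return len(conditional_atoms & query_atoms) / len(conditional_atoms)

def activation(conditional_atoms, usage_times, query_atoms, now, w=1.0):
    return base_level(usage_times, now) + w * relevance(conditional_atoms, query_atoms)

# Belief base: conditionals (b|a) represented by their atoms and usage history.
belief_base = {
    "(flies|bird)":    ({"bird", "flies"}, [1.0, 4.0]),
    "(nests|bird)":    ({"bird", "nests"}, [2.0]),
    "(swims|penguin)": ({"penguin", "swims"}, [3.0]),
}

query = {"bird", "flies"}
now = 5.0
scored = {c: activation(atoms, times, query, now)
          for c, (atoms, times) in belief_base.items()}
# Keep only the most activated conditionals as the subset used for inference.
active = sorted(scored, key=scored.get, reverse=True)[:2]
print(active)  # -> ['(flies|bird)', '(nests|bird)']
```

Restricting inference to the top-activated conditionals is what makes answering a query tractable on large belief bases, at the price of ignoring conditionals deemed irrelevant or rarely used.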

Belief propagation for permutations, rankings, and partial orders Machine Learning

Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501, USA

Many datasets give partial information about an ordering or ranking by indicating which team won a game, which item a user prefers, or who infected whom. We define a continuous spin system whose Gibbs distribution is the posterior distribution on permutations, given a probabilistic model of these interactions. Using the cavity method we derive a belief propagation algorithm that computes the marginal distribution of each node's position. In addition, the Bethe free energy lets us approximate the number of linear extensions of a partial order and perform model selection. Ranking or ordering objects is a natural problem in many contexts. In this case, the energy H(π) is the number of violations.
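The energy mentioned at the end can be made concrete: for a candidate ordering, H(π) simply counts the observed comparisons it violates. The brute-force search below (over made-up comparison data) is only feasible for a handful of items; the belief propagation algorithm the abstract describes is the scalable route:

```python
from itertools import permutations

# Observed comparisons: (winner, loser) pairs, e.g. game results.
# Note "a beat c" and "c beat a" conflict, so no ordering has zero energy.
comparisons = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "a")]

def energy(order, comparisons):
    """H(pi): number of observed comparisons the ordering violates,
    i.e. pairs (w, l) where the loser is ranked above the winner."""
    pos = {item: i for i, item in enumerate(order)}
    return sum(1 for w, l in comparisons if pos[l] < pos[w])

# Exhaustive minimum-energy ordering, for illustration only.
best = min(permutations(["a", "b", "c"]), key=lambda o: energy(o, comparisons))
print(best, energy(best, comparisons))  # -> ('a', 'b', 'c') 1
```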

Relevance in Belief Update

Journal of Artificial Intelligence Research

It has been pointed out by Katsuno and Mendelzon that the so-called AGM revision operators, defined by Alchourrón, Gärdenfors and Makinson, do not behave well in dynamically-changing applications. On that premise, Katsuno and Mendelzon formally characterized a different type of belief-change operators, typically referred to as KM update operators, which, to this day, constitute a benchmark in belief update. In this article, we show that there exist KM update operators that yield the same counter-intuitive results as any AGM revision operator. Against this unsatisfactory background, we prove that a translation of Parikh's relevance-sensitive axiom (P), in the realm of belief update, suffices to block this liberal behaviour of KM update operators. It is shown, both axiomatically and semantically, that axiom (P) for belief update, essentially, encodes a type of relevance that acts at the possible-worlds level, in the context of which each possible world is locally modified, in the light of new information. Interestingly, relevance at the possible-worlds level is shown to be equivalent to a form of relevance that acts at the sentential level, by considering the building blocks of relevance to be the sentences of the language. Furthermore, we concretely demonstrate that Parikh's notion of relevance in belief update can be regarded as (at least a partial) solution to the frame, ramification and qualification problems, encountered in dynamically-changing worlds. Last but not least, a whole new class of well-behaved, relevance-sensitive KM update operators is introduced, which generalize Forbus' update operator and are perfectly suited for real-world implementations.
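To make the possible-worlds reading of update concrete, here is a minimal sketch of a Forbus-style KM update over a tiny two-atom language: each model of the old belief ψ is modified by flipping as few atoms as possible so as to satisfy the new information μ, and the results are pooled. The Hamming-distance ordering is the standard Forbus construction, not the new relevance-sensitive operators of the article:

```python
from itertools import product

ATOMS = ("p", "q")  # a tiny propositional language over two atoms

def models(formula):
    """All truth assignments (as dicts) over ATOMS satisfying 'formula'."""
    return [dict(zip(ATOMS, vals))
            for vals in product([True, False], repeat=len(ATOMS))
            if formula(dict(zip(ATOMS, vals)))]

def hamming(w, v):
    return sum(w[a] != v[a] for a in ATOMS)

def forbus_update(psi, mu):
    """KM update in the style of Forbus: each model of psi is locally
    modified (fewest flipped atoms) to satisfy mu; results are pooled."""
    result = []
    for w in models(psi):
        mu_worlds = models(mu)
        d = min(hamming(w, v) for v in mu_worlds)
        for v in mu_worlds:
            if hamming(w, v) == d and v not in result:
                result.append(v)
    return result

psi = lambda w: w["p"] == w["q"]   # old belief: p <-> q
mu = lambda w: not w["p"]          # new information: p is false
print(forbus_update(psi, mu))
# -> [{'p': False, 'q': True}, {'p': False, 'q': False}]
```

Note how each ψ-world is repaired independently: the model where p and q were both true keeps q, while the model where both were false stays put. This per-world locality is exactly what distinguishes update from AGM-style revision.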

Simultaneous Perception-Action Design via Invariant Finite Belief Sets Artificial Intelligence

Although perception is an increasingly dominant portion of the overall computational cost for autonomous systems, only a fraction of the information perceived is likely to be relevant to the current task. To alleviate these perception costs, we develop a novel simultaneous perception-action design framework wherein an agent senses only the task-relevant information. This formulation differs from that of a partially observable Markov decision process, since the agent is free to synthesize not only its policy for action selection but also its belief-dependent observation function. The method enables the agent to balance its perception costs with those incurred by operating in its environment. To obtain a computationally tractable solution, we approximate the value function using a novel method of invariant finite belief sets, wherein the agent acts exclusively on a finite subset of the continuous belief space. We solve the approximate problem through value iteration in which a linear program is solved individually for each belief state in the set, in each iteration. Finally, we prove that the value functions, under an assumption on their structure, converge to their continuous state-space values as the sample density increases.

Situated Conditional Reasoning Artificial Intelligence

Conditionals are useful for modelling, but are not always sufficiently expressive for capturing information accurately. In this paper we make the case for a form of conditional that is situation-based. These conditionals are more expressive than classical conditionals, are general enough to be used in several application domains, and are able to distinguish, for example, between expectations and counterfactuals. Formally, they are shown to generalise the conditional setting in the style of Kraus, Lehmann, and Magidor. We show that situation-based conditionals can be described in terms of a set of rationality postulates. We then propose an intuitive semantics for these conditionals, and present a representation result which shows that our semantic construction corresponds exactly to the description in terms of postulates. With the semantics in place, we proceed to define a form of entailment for situated conditional knowledge bases, which we refer to as minimal closure. It is reminiscent of, and indeed inspired by, the version of entailment for propositional conditional knowledge bases known as rational closure. Finally, we proceed to show that it is possible to reduce the computation of minimal closure to a series of propositional entailment and satisfiability checks. While this is also the case for rational closure, it is somewhat surprising that the result carries over to minimal closure.

Forgetting Formulas and Signature Elements in Epistemic States Artificial Intelligence

Delgrande's knowledge level account of forgetting provides a general approach to forgetting syntax elements from sets of formulas, with links to many other forgetting operations, in particular to Boole's variable elimination. On the other hand, marginalisation of epistemic states is a specific approach, well known from probability theory, to actively reducing signatures in more complex semantic frameworks, likewise aiming at forgetting atoms. In this paper, we bring these two perspectives of forgetting together by showing that marginalisation can be considered an extension of Delgrande's approach to the level of epistemic states. More precisely, we generalize Delgrande's axioms of forgetting to forgetting in epistemic states, and show that marginalisation is the most specific and informative forgetting operator that satisfies these axioms. Moreover, we elaborate suitable phrasings of Delgrande's concept of forgetting for formulas by transferring the basic ideas of the axioms to forgetting formulas from epistemic states. However, here we show that this results in trivial approaches to forgetting formulas. This finding supports the claim that forgetting syntax elements is essentially different from belief contraction, as axiomatized, e.g., in the AGM belief change framework.
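Boole's variable elimination, which Delgrande's account links to, is easy to state in code: forgetting an atom p from a formula φ yields φ[p/⊤] ∨ φ[p/⊥], the strongest consequence of φ that does not mention p. The sketch below (over an invented three-atom language) illustrates only this propositional base case; marginalisation lifts the idea to epistemic states and is not shown:

```python
from itertools import product

ATOMS = ("p", "q", "r")

def forget(formula, atom):
    """Boole's variable elimination: forget(phi, p) = phi[p/True] or phi[p/False].
    The result is the strongest consequence of phi not mentioning 'atom'."""
    def forgotten(w):
        return formula({**w, atom: True}) or formula({**w, atom: False})
    return forgotten

def models(formula):
    return {vals for vals in product([True, False], repeat=len(ATOMS))
            if formula(dict(zip(ATOMS, vals)))}

# phi: (p -> q) and (q -> r)
phi = lambda w: (not w["p"] or w["q"]) and (not w["q"] or w["r"])
phi_without_q = forget(phi, "q")

# Forgetting q should leave exactly the q-free consequence p -> r.
p_implies_r = lambda w: not w["p"] or w["r"]
print(models(phi_without_q) == models(p_implies_r))  # -> True
```

The chained implication p -> q -> r collapses, after forgetting q, to p -> r: everything the formula said about the remaining atoms is preserved, while q itself becomes unconstrained.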

On Limited Non-Prioritised Belief Revision Operators with Dynamic Scope Artificial Intelligence

The research on non-prioritized revision studies revision operators which do not accept all new beliefs. In this paper, we contribute to this line of research by introducing dynamic-limited revisions: revisions expressible by a total preorder over a limited set of worlds. For a belief change operator, we consider the scope, which consists of those beliefs which yield success of revision. We show that for each set satisfying single sentence closure and disjunction completeness there exists a dynamic-limited revision having the union of this set with the belief set as scope. We investigate iteration postulates for belief and scope dynamics and characterise them for dynamic-limited revision. As an application, we employ dynamic-limited revision to study belief revision in the context of so-called inherent beliefs, which are beliefs globally accepted by the agent. This leads to revision operators which we call inherence-limited. We present a representation theorem for inherence-limited revision, and we compare these operators and dynamic-limited revision with the closely related credible-limited revision operators.

Belief Propagation as Diffusion Artificial Intelligence

Message-passing algorithms such as belief propagation (BP) are parallel computing schemes that try to estimate the marginals of a high dimensional probability distribution. They are used in various areas involving the statistics of a large number of interacting random variables, such as computational thermodynamics [5, 10], artificial intelligence [11, 21, 15], computer vision [18] and communications processing [3, 4]. We have shown the existence of a non-linear correspondence between BP algorithms and discrete integrators of a new form of continuous-time diffusion equations on belief networks [13, 14].
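On a tree, the message-passing scheme is exact and easy to write down. The sketch below (with arbitrary potentials, unrelated to the diffusion correspondence of the paper) runs sum-product belief propagation on a three-variable chain and checks the resulting marginal against brute-force enumeration:

```python
import numpy as np

# A three-variable chain x1 - x2 - x3 with binary states and pairwise
# potentials: p(x) is proportional to psi12(x1, x2) * psi23(x2, x3).
psi12 = np.array([[2.0, 1.0], [1.0, 3.0]])
psi23 = np.array([[1.0, 3.0], [3.0, 2.0]])

# Sum-product messages toward x2 (exact on trees, computed in closed form
# here; on loopy graphs BP iterates these updates to a fixed point).
m1_to_2 = psi12.sum(axis=0)      # sum over x1 of psi12[x1, x2]
m3_to_2 = psi23.sum(axis=1)      # sum over x3 of psi23[x2, x3]
belief2 = m1_to_2 * m3_to_2
belief2 /= belief2.sum()         # estimated marginal of x2

# Brute-force check over all 8 joint states.
joint = np.einsum("ab,bc->abc", psi12, psi23)
marginal2 = joint.sum(axis=(0, 2))
marginal2 /= marginal2.sum()
print(belief2, marginal2)        # both print [0.375 0.625]
```

The product of incoming messages reproduces the exact marginal because the chain factorizes; the cost is linear in the number of variables, versus exponential for the brute-force sum.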