Belief Revision
Bayesian Belief Polarization
Jern, Alan, Chang, Kai-min, Kemp, Charles
Situations in which people with opposing prior beliefs observe the same evidence and then strengthen those existing beliefs are frequently offered as evidence of human irrationality. This phenomenon, termed belief polarization, is typically assumed to be non-normative. We demonstrate, however, that a variety of cases of belief polarization are consistent with a Bayesian approach to belief revision. Simulation results indicate that belief polarization is not only possible but relatively common within the class of Bayesian models that we consider.
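Since the claim turns on how two Bayesian agents can move in opposite directions on the same evidence, a small worked example helps. The sketch below is illustrative only (the numbers and the auxiliary "source reliability" variable are ours, not the paper's models): both agents share the likelihood P(e | H, V) but hold different joint priors over (H, V), and conditioning on the same evidence e drives their beliefs in H further apart.

    # Illustrative sketch (not the paper's models): two Bayesian agents share
    # the likelihood P(e | H, V) over hypothesis H and auxiliary variable V
    # ("source is reliable") but hold different joint priors P(H, V).
    # Conditioning both on the same evidence e polarizes their beliefs in H.

    # Shared likelihood P(e | H, V), indexed by (h, v).
    likelihood = {
        (1, "reliable"): 0.9, (1, "unreliable"): 0.3,
        (0, "reliable"): 0.2, (0, "unreliable"): 0.7,
    }

    def posterior_h(prior_joint):
        """Return P(H=1 | e) given a joint prior P(H, V) over the same keys."""
        joint_post = {hv: prior_joint[hv] * likelihood[hv] for hv in prior_joint}
        z = sum(joint_post.values())
        return sum(p for (h, v), p in joint_post.items() if h == 1) / z

    # Agent A: believes H (P(H=1)=0.7) and trusts the source (P(V=reliable)=0.9).
    agent_a = {(1, "reliable"): 0.63, (1, "unreliable"): 0.07,
               (0, "reliable"): 0.27, (0, "unreliable"): 0.03}
    # Agent B: doubts H (P(H=1)=0.3) and distrusts the source (P(V=reliable)=0.1).
    agent_b = {(1, "reliable"): 0.03, (1, "unreliable"): 0.27,
               (0, "reliable"): 0.07, (0, "unreliable"): 0.63}

    print(posterior_h(agent_a))  # ~0.89: belief in H strengthens from 0.70
    print(posterior_h(agent_b))  # ~0.19: disbelief in H strengthens from 0.30

Neither agent violates Bayes' rule; the polarization comes entirely from the differing joint priors over the hypothesis and the auxiliary variable.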
Probabilistic Belief Revision with Structural Constraints
Jones, Peter, Saligrama, Venkatesh, Mitter, Sanjoy
Experts (human or computer) are often required to assess the probability of uncertain events. When a collection of experts independently assess events that are structurally interrelated, the resulting assessment may violate fundamental laws of probability. Such an assessment is termed incoherent. In this work we investigate how the problem of incoherence may be affected by allowing experts to specify likelihood models and then update their assessments based on the realization of a globally-observable random sequence.
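Coherence here is the de Finetti notion: a set of assessments over interrelated events is coherent iff some single joint distribution over outcomes reproduces all of them. A hedged sketch of checking this as a linear feasibility problem (the events and numbers are our illustration; SciPy's linear programming solver is assumed available):

    # Coherence check via linear feasibility: do the assessed probabilities
    # admit any joint distribution over the underlying outcomes?
    from scipy.optimize import linprog

    # Outcome space for two binary events A, B: outcomes are (a, b) pairs.
    outcomes = [(a, b) for a in (0, 1) for b in (0, 1)]

    # Structurally interrelated events: A, B, and their conjunction A&B.
    events = {
        "A":   [o for o in outcomes if o[0] == 1],
        "B":   [o for o in outcomes if o[1] == 1],
        "A&B": [o for o in outcomes if o[0] == 1 and o[1] == 1],
    }

    def coherent(assessment):
        """True iff some distribution over outcomes matches every assessment."""
        # One equality constraint per assessed event, plus normalization.
        a_eq = [[1.0 if o in events[name] else 0.0 for o in outcomes]
                for name in assessment]
        a_eq.append([1.0] * len(outcomes))          # probabilities sum to 1
        b_eq = list(assessment.values()) + [1.0]
        res = linprog(c=[0.0] * len(outcomes), A_eq=a_eq, b_eq=b_eq,
                      bounds=[(0.0, 1.0)] * len(outcomes))
        return res.success

    print(coherent({"A": 0.6, "B": 0.5, "A&B": 0.3}))   # True (e.g., A, B independent)
    print(coherent({"A": 0.6, "B": 0.5, "A&B": 0.55}))  # False: P(A&B) > P(B)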
Factored Probabilistic Belief Tracking
The problem of belief tracking in the presence of stochastic actions and observations is pervasive and yet computationally intractable. In this work we show, however, that probabilistic beliefs can be maintained in factored form exactly and efficiently across a number of causally closed beams, when the state variables that appear in more than one beam obey a form of backward determinism. Since computing marginals from the factors is still computationally intractable in general, and variables appearing in several beams are not always backward-deterministic, the basic formulation is extended with two approximations: forms of belief propagation for computing marginals from factors, and sampling of non-backward-deterministic variables for making such variables backward-deterministic given their sampled history. Unlike Rao-Blackwellized particle filtering, the sampling is not used for making inference tractable but for making the factorization sound. The resulting algorithm involves sampling and belief propagation, or just one of them, as determined by the structure of the model.
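A hedged toy illustration of why backward determinism keeps the factorization exact (this is our own miniature example, not the paper's algorithm): two sub-beliefs ("beams") over Y1 and Y2 share a variable X, but X is a known function of the executed action history, so each beam can be filtered independently and the joint belief is just the product of the factors.

    # Toy sketch: two beams share X, which is backward-deterministic (here,
    # the parity of the action history), so the factors never get correlated.

    def step_factor(belief, flip_prob, obs, obs_noise):
        """Exact predict-update step for one beam over a binary variable."""
        # Predict: the variable flips with probability flip_prob.
        predicted = {
            y: belief[y] * (1 - flip_prob) + belief[1 - y] * flip_prob
            for y in (0, 1)
        }
        # Update: noisy observation of the variable, then renormalize.
        weighted = {
            y: p * ((1 - obs_noise) if y == obs else obs_noise)
            for y, p in predicted.items()
        }
        z = sum(weighted.values())
        return {y: p / z for y, p in weighted.items()}

    beam1 = {0: 0.5, 1: 0.5}   # belief factor over Y1
    beam2 = {0: 0.5, 1: 0.5}   # belief factor over Y2
    history = []

    for obs1, obs2 in [(1, 0), (1, 1), (0, 1)]:
        history.append("advance")
        x = len(history) % 2   # shared variable, known from the history
        # X modulates both beams' dynamics, but since its value is determined
        # by the history, each factor can be filtered exactly on its own.
        flip = 0.1 if x == 0 else 0.3
        beam1 = step_factor(beam1, flip, obs1, obs_noise=0.2)
        beam2 = step_factor(beam2, flip, obs2, obs_noise=0.2)

    # The exact joint belief over (Y1, Y2) is the product of the factors.
    joint = {(y1, y2): beam1[y1] * beam2[y2] for y1 in (0, 1) for y2 in (0, 1)}
    print(joint)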
Message Scheduling for Performant, Many-Core Belief Propagation
Van der Merwe, Mark, Joseph, Vinu, Gopalakrishnan, Ganesh
Belief Propagation (BP) is a message-passing algorithm for approximate inference over Probabilistic Graphical Models (PGMs), with applications in computer vision, error-correcting codes, and protein folding. Although the algorithm is general, its convergence behavior and speed have limited its practical use on difficult inference problems. As an algorithm that is highly amenable to parallelization, BP could benefit significantly from many-core Graphics Processing Units (GPUs). Improving BP through many-core systems is nontrivial, however: the scheduling of messages in the algorithm strongly affects performance. We present a study of message scheduling for BP on GPUs. We demonstrate that BP exhibits a tradeoff between speed and convergence based on parallelism, and show that existing message schedules are unable to exploit this tradeoff. To this end, we present a novel randomized message scheduling approach, Randomized BP (RnBP), which outperforms existing methods on the GPU.
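The scheduling knob itself is easy to illustrate. Below is a hedged sketch (our toy model, not the paper's GPU implementation): loopy sum-product BP where, each iteration, only a random fraction p_update of messages is recomputed; p_update = 1.0 recovers fully synchronous BP. On a GPU each message update would map to a thread; plain NumPy stands in here, on a small chain graph.

    # Randomized message scheduling for BP, sketched on a toy chain model.
    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 6, 2                              # n binary variables in a chain
    edges = [(i, i + 1) for i in range(n - 1)]
    unary = rng.random((n, k))               # unnormalized node potentials
    pair = rng.random((len(edges), k, k))    # edge potentials

    # messages[(i, j)] is the message sent from i to j along an edge.
    messages = {(i, j): np.ones(k) for a, b in edges for i, j in [(a, b), (b, a)]}

    def new_message(i, j):
        """Standard sum-product update for the message i -> j."""
        e = edges.index((min(i, j), max(i, j)))
        psi = pair[e] if i < j else pair[e].T
        incoming = unary[i].copy()
        for (a, b), m in messages.items():
            if b == i and a != j:
                incoming *= m
        out = psi.T @ incoming
        return out / out.sum()

    def run_bp(p_update, iters=50):
        for _ in range(iters):
            # Randomized scheduling: recompute each message w.p. p_update.
            chosen = [key for key in messages if rng.random() < p_update]
            updates = {key: new_message(*key) for key in chosen}
            messages.update(updates)

    run_bp(p_update=0.5)
    belief0 = unary[0] * np.prod([m for (a, b), m in messages.items() if b == 0], axis=0)
    print(belief0 / belief0.sum())           # approximate marginal of node 0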
Active Goal Recognition
Amato, Christopher, Baisero, Andrea
To coordinate with other systems, agents must be able to determine what those systems are currently doing and predict what they will be doing in the future---plan and goal recognition. There are many methods for plan and goal recognition, but they assume a passive observer that continually monitors the target system. Real-world domains, where information gathering has a cost (e.g., moving a camera or a robot, or time taken away from another task), will often require a more active observer. We propose to combine goal recognition with other observer tasks in order to obtain \emph{active goal recognition} (AGR). We discuss this problem and provide a model and preliminary experimental results for one form of this composite problem. As expected, the results show that optimal behavior in AGR problems balances information gathering with other actions (e.g., task completion) so as to achieve all tasks jointly and efficiently. We hope that our formulation opens the door for extensive further research on this interesting and realistic problem.
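A hedged toy sketch of the tradeoff (our own miniature model, not the paper's): the observer must finish its own task, but each step it may instead spend on a costly noisy observation of the target; a simple policy observes only while its belief over the target's goals is still uncertain.

    # Toy active goal recognition: observe while uncertain, else do task work.
    import random

    random.seed(1)
    goals = ["A", "B"]
    true_goal = "B"
    belief = {"A": 0.5, "B": 0.5}
    obs_noise = 0.2          # P(observed cue points to the wrong goal)
    task_left = 5            # observer's own remaining work
    steps = 0

    while task_left > 0:
        steps += 1
        if max(belief.values()) < 0.9:
            # Information gathering: a noisy cue about the target's goal.
            cue = (true_goal if random.random() > obs_noise
                   else next(g for g in goals if g != true_goal))
            for g in goals:
                belief[g] *= (1 - obs_noise) if g == cue else obs_noise
            z = sum(belief.values())
            belief = {g: p / z for g, p in belief.items()}
        else:
            task_left -= 1   # confident enough: spend the step on the task

    print(steps, belief)     # total steps trade recognition against task work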
Memory Management in Resource-Bounded Agents
Memory in an agent system is a process of reasoning: it is the learning process of strengthening a concept. The interaction between an agent and the environment can play an important role in constructing its memory and may affect its future behaviour. In fact, through memory an agent is potentially able to recall and to learn from experiences so that its beliefs and its future course of action are grounded in these experiences. In computational logic, [2] introduces DLEK (Dynamic Logic of Explicit beliefs and Knowledge) as a logical formalization of short-term and long-term memory. The underlying idea is to represent reasoning about the formation of beliefs through perception and inference in non-omniscient, resource-bounded agents. DLEK has, however, no notion of time, while agents' actual perceptions are inherently timed, and so are many of the inferences drawn from such perceptions. In this paper we present an extension of LEK/DLEK to T-LEK/T-DLEK ("Timed LEK" and "Timed DLEK") obtained by introducing a special function which associates to each belief its arrival time and controls timed inferences. Through this function it is easier to keep the evolution of the surrounding world under control, and the representation is more complete. This abstract is an evolved version of [3], where we introduced explicit time instants and time intervals in formulas, and it is extracted from [4].
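The arrival-time function can be sketched concretely. The following is an illustration of the idea only (our own simplification, not T-DLEK itself): each belief carries the time it arrived, an inferred belief inherits the time of its latest premise, and short-term beliefs expire once their arrival time falls out of a window.

    # Sketch of a timed belief base: belief -> arrival time.
    arrival = {}

    def perceive(belief, t):
        arrival[belief] = t

    def infer(premises, conclusion):
        """Timed inference: the conclusion inherits its latest premise's time."""
        if all(p in arrival for p in premises):
            arrival[conclusion] = max(arrival[p] for p in premises)

    def expire_short_term(now, window):
        """Forget beliefs whose arrival time fell out of the time window."""
        for b in [b for b, t in arrival.items() if now - t > window]:
            del arrival[b]

    perceive("light_on", t=1)
    perceive("door_open", t=4)
    infer({"light_on", "door_open"}, "someone_home")
    print(arrival)                     # someone_home carries time 4
    expire_short_term(now=8, window=5)
    print(arrival)                     # light_on (t=1) has been forgotten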
A Temporal Module for Logical Frameworks
Pitoni, Valentina, Costantini, Stefania
In the literature, different kinds of timed logical frameworks exist, where time is specified directly using hybrid logics (cf., e.g., [2]), temporal epistemic logic (cf., e.g., [4]), or simply by using Linear Temporal Logic. We propose a temporal module which can be adopted to "temporalize" many logical frameworks. This module is in practice a particular kind of function that assigns a "timing" to atoms. We have exploited this T function in two different settings. The first one is the formalization of reasoning on the formation of beliefs and the interaction with background knowledge in non-omniscient agents' memory.
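A minimal sketch of "temporalizing" atoms in this spirit (an illustration of the general idea, not the paper's module): the T function assigns each atom a validity interval, and a compound formula holds at an instant only if all of its atoms do.

    # T assigns each atom a validity interval: atom -> (from, to).
    T = {"at_home": (0, 10), "raining": (5, 20)}

    def holds(atom, t):
        lo, hi = T[atom]
        return lo <= t <= hi

    def holds_all(atoms, t):
        """A conjunction is timed by the intersection of its atoms' intervals."""
        return all(holds(a, t) for a in atoms)

    print(holds_all({"at_home", "raining"}, 7))    # True: 7 lies in both intervals
    print(holds_all({"at_home", "raining"}, 15))   # False: at_home expired at 10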
$\alpha$ Belief Propagation as Fully Factorized Approximation
Liu, Dong, Moghadam, Nima N., Rasmussen, Lars K., Huang, Jinliang, Chatterjee, Saikat
Belief propagation (BP) performs exact inference in loop-free graphs, but its performance can be poor in graphs with loops, and the understanding of its solutions is limited. This work gives an interpretable belief propagation rule that is, in fact, the minimization of a localized $\alpha$-divergence. We term this algorithm $\alpha$ belief propagation ($\alpha$-BP). The performance of $\alpha$-BP is tested on MAP (maximum a posteriori) inference problems, where $\alpha$-BP can outperform (loopy) BP by a significant margin even in fully connected graphs.
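One simple way to expose an $\alpha$ knob in BP messages is geometric (power) damping, m_new proportional to m_old^(1-alpha) * m_bp^alpha, which recovers standard sum-product BP at alpha = 1. The sketch below illustrates only this family of $\alpha$-parameterized updates; the paper's exact $\alpha$-BP rule is derived from localized $\alpha$-divergence minimization and may differ in detail.

    # Alpha-parameterized message update via power damping (illustration only;
    # not necessarily the paper's exact alpha-BP rule). alpha=1 is plain BP.
    import numpy as np

    def damped_update(m_old, m_bp, alpha):
        """Geometric interpolation between the old and the BP message."""
        m = m_old ** (1.0 - alpha) * m_bp ** alpha
        return m / m.sum()

    m_old = np.array([0.5, 0.5])
    m_bp = np.array([0.9, 0.1])        # message proposed by sum-product
    for alpha in (0.25, 0.5, 1.0):
        print(alpha, damped_update(m_old, m_bp, alpha))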
A Conceptually Well-Founded Characterization of Iterated Admissibility Using an "All I Know" Operator
Halpern, Joseph Y., Pass, Rafael
Brandenburger, Friedenberg, and Keisler provide an epistemic characterization of iterated admissibility (IA), also known as iterated deletion of weakly dominated strategies, where uncertainty is represented using LPSs (lexicographic probability systems). Their characterization holds in a rich structure called a complete structure, where all types are possible. In earlier work, we gave a characterization of iterated admissibility using an "all I know" operator, that captures the intuition that "all the agent knows" is that agents satisfy the appropriate rationality assumptions. That characterization did not need complete structures and used probability structures, not LPSs. However, that characterization did not deal with Samuelson's conceptual concern regarding IA, namely, that at higher levels, players do not consider possible strategies that were used to justify their choice of strategy at lower levels. In this paper, we give a characterization of IA using the all I know operator that does deal with Samuelson's concern. However, it uses LPSs. We then show how to modify the characterization using notions of "approximate belief" and "approximately all I know" so as to deal with Samuelson's concern while still working with probability structures.
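The solution concept being characterized is itself a concrete procedure. A hedged sketch of iterated deletion of weakly dominated strategies on a two-player normal-form game (the toy payoffs are ours, for illustration):

    # Iterated admissibility: repeatedly delete weakly dominated strategies.

    def weakly_dominated(payoff, rows, cols):
        """Strategies in `rows` weakly dominated w.r.t. opponents in `cols`."""
        dominated = set()
        for s in rows:
            for s2 in rows:
                if s2 == s:
                    continue
                diffs = [payoff[s2][c] - payoff[s][c] for c in cols]
                # Weak dominance: never worse, strictly better at least once.
                if all(d >= 0 for d in diffs) and any(d > 0 for d in diffs):
                    dominated.add(s)
                    break
        return dominated

    def iterated_admissibility(pay1, pay2):
        """pay1[r][c], pay2[r][c]: payoffs of the row and column player."""
        rows = set(range(len(pay1)))
        cols = set(range(len(pay1[0])))
        while True:
            bad_rows = weakly_dominated(pay1, rows, cols)
            bad_cols = weakly_dominated([list(c) for c in zip(*pay2)], cols, rows)
            if not bad_rows and not bad_cols:
                return rows, cols
            rows -= bad_rows
            cols -= bad_cols

    # Toy game: each player's strategy 1 weakly dominates strategy 0.
    pay1 = [[1, 0], [1, 1]]
    pay2 = [[1, 1], [0, 1]]
    print(iterated_admissibility(pay1, pay2))  # ({1}, {1})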
Exploiting Belief Bases for Building Rich Epistemic Structures
We introduce a semantics for epistemic logic that exploits a belief base abstraction. Unlike existing Kripke-style semantics for epistemic logic, in which the notions of possible world and epistemic alternative are primitive, in the proposed semantics they are non-primitive: they are defined from the concept of a belief base. We show that this semantics allows us to define the universal epistemic model in a simpler and more compact way than existing inductive constructions of it. We provide (i) a number of semantic equivalence results for both the basic epistemic language with "individual belief" operators and its extension by the notion of "only believing", and (ii) a lower-bound complexity result for epistemic logic model checking relative to the universal epistemic model.
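The core semantic move can be sketched in a few lines (an illustration of the idea, not the paper's full construction): epistemic alternatives are derived, not primitive; world w' is an alternative at w iff w' satisfies everything in the agent's belief base at w, and the agent believes a formula iff it holds at all alternatives.

    # Epistemic alternatives derived from a belief base over propositional worlds.
    from itertools import product

    atoms = ("p", "q")
    worlds = [dict(zip(atoms, vals)) for vals in product([False, True], repeat=2)]

    # A formula is a predicate on worlds; the belief base at w is a set of them.
    def belief_base(w):
        # Toy base: the agent believes p exactly when p actually holds at w.
        return [lambda v: v["p"]] if w["p"] else []

    def alternatives(w):
        """Worlds compatible with everything in the belief base at w."""
        return [v for v in worlds if all(f(v) for f in belief_base(w))]

    def believes(w, formula):
        """B(formula) holds at w iff formula holds at every alternative."""
        return all(formula(v) for v in alternatives(w))

    w = {"p": True, "q": False}
    print(believes(w, lambda v: v["p"]))   # True: the base entails p
    print(believes(w, lambda v: v["q"]))   # False: q varies across alternatives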