Belief Revision


Streaming Belief Propagation for Community Detection

Neural Information Processing Systems

The community detection problem requires clustering the nodes of a network into a small number of well-connected 'communities'. There has been substantial recent progress in characterizing the fundamental statistical limits of community detection under simple stochastic block models. However, in real-world applications, the network structure is typically dynamic, with nodes that join over time. In this setting, we would like a detection algorithm to perform only a limited number of updates at each node arrival. While standard voting approaches satisfy this constraint, it is unclear whether they exploit the network information optimally. We introduce a simple model for networks growing over time, which we refer to as the streaming stochastic block model (StSBM). Within this model, we prove that voting algorithms have fundamental limitations.
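For intuition, here is a minimal Python sketch of a two-community streaming SBM together with a one-shot neighbor-voting rule. The parameter names (p_in, p_out) and the exact voting rule are illustrative assumptions, not the paper's definitions.

```python
import random

def stream_sbm_vote(n=2000, p_in=0.05, p_out=0.005, seed=0):
    """Toy streaming SBM: nodes arrive one at a time, connect to earlier
    nodes with block-dependent probabilities, and guess their community
    by a single majority vote over their neighbors' guesses."""
    rng = random.Random(seed)
    labels, guesses = [], []
    for t in range(n):
        truth = rng.randint(0, 1)  # hidden community of the arriving node
        # edges to earlier nodes, denser inside the node's true community
        nbrs = [u for u in range(t)
                if rng.random() < (p_in if labels[u] == truth else p_out)]
        # voting update: one pass, adopt the majority label of the neighbors,
        # breaking ties (including the no-neighbor case) at random
        votes = sum(1 if guesses[u] else -1 for u in nbrs)
        guess = 1 if votes > 0 else 0 if votes < 0 else rng.randint(0, 1)
        labels.append(truth)
        guesses.append(guess)
    agree = sum(g == l for g, l in zip(guesses, labels))
    return max(agree, n - agree) / n  # overlap, up to a global label flip

print(stream_sbm_vote())
```

Depending on the seed, both communities can drift to the same consensus label, leaving the overlap near 0.5; failures of this kind are loosely the sort of limitation the paper's negative results for voting formalize.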


Conditioning and AGM-like belief change in the Desirability-Indifference framework

arXiv.org Artificial Intelligence

We show how the AGM framework for belief change (expansion, revision, contraction) can be extended to deal with conditioning in the so-called Desirability-Indifference framework, based on abstract notions of accepting and rejecting options, as well as on abstract notions of events. This level of abstraction allows us to deal simultaneously with classical and quantum probability theory.
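As a toy illustration of the classical end of this spectrum, the sketch below models belief states as sets of possible worlds ordered by a plausibility rank (the standard Grove-sphere semantics for AGM). The paper itself works at a more abstract level, with accepted/rejected options and events covering the quantum case, so this is intuition only.

```python
def revise(rank, event):
    """Revision: keep only the most plausible worlds where the event holds."""
    best = min(rank[w] for w in event)
    return {w for w in event if rank[w] == best}

def expand(belief, event):
    """Expansion: add the event's information outright (possibly yielding
    the empty, inconsistent belief state)."""
    return belief & event

def contract(rank, belief, event):
    """Contraction (via the Harper identity): retract the event by also
    admitting the most plausible worlds where it fails."""
    not_event = set(rank) - event
    return belief | revise(rank, not_event)

# Worlds are (rain, wind) truth pairs; lower rank means more plausible.
rank = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}
rainy = {(1, 0), (1, 1)}
print(revise(rank, rainy))                          # {(1, 0)}
print(contract(rank, {(0, 0)}, {(0, 0), (0, 1)}))   # {(0, 0), (1, 0)}
```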


Review for NeurIPS paper: Scalable Belief Propagation via Relaxed Scheduling

Neural Information Processing Systems

Weaknesses: - Presentation: I think the space the paper spends on the BP background is more than necessary, since the BP algorithm is just the standard one. The paper would be more compelling if the BP background were compressed and a more complete explanation of the proposed algorithm were given, for example a visual illustration accompanying the explanation of the implementation in Section 3.3. Moreover, since not many notations are used in the paper, it is better not to use the same notation for different meanings, to avoid confusion. For example, k is used for the number of top elements throughout the paper but also as a variable index at Line 285; at Line 301 the parameter H is used without definition, at Line 302 it denotes the tree height, while at Line 334 it is a parameter in the Splash algorithm. Related work includes "Distributed Parallel Inference on Large Factor Graphs." Could the authors provide some conceptual or empirical comparison of such methods with the proposed one?


Review for NeurIPS paper: Scalable Belief Propagation via Relaxed Scheduling

Neural Information Processing Systems

Reviewers agreed, in reviews and discussion, that this paper presents a nice, simple idea very clearly. The author feedback included new experiments and a new baseline, with positive results. I enjoyed reading the paper too.


Anytime Incremental $\rho$POMDP Planning in Continuous Spaces

arXiv.org Artificial Intelligence

Partially Observable Markov Decision Processes (POMDPs) provide a robust framework for decision-making under uncertainty in applications such as autonomous driving and robotic exploration. Their extension, $\rho$POMDPs, introduces belief-dependent rewards, enabling explicit reasoning about uncertainty. Existing online $\rho$POMDP solvers for continuous spaces rely on fixed belief representations, limiting the adaptability and refinement that are critical for tasks such as information gathering. We present $\rho$POMCPOW, an anytime solver that dynamically refines belief representations, with formal guarantees of improvement over time. To mitigate the high computational cost of updating belief-dependent rewards, we propose a novel incremental computation approach. We demonstrate its effectiveness for common entropy estimators, reducing computational cost by orders of magnitude. Experimental results show that $\rho$POMCPOW outperforms state-of-the-art solvers in both efficiency and solution quality.
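To illustrate what an incremental belief-dependent reward update can look like, here is a sketch that maintains the entropy of a weighted particle belief in O(1) time per added particle. The estimator and the class name are assumptions for illustration, not the paper's construction.

```python
import math

class IncrementalEntropy:
    """Caches S = sum(u_i) and T = sum(u_i * log u_i) over unnormalized
    particle weights, so that the entropy of the normalized weights
    w_i = u_i / S is available in O(1): H = -sum(w_i log w_i) = log S - T/S.
    Illustrative discrete estimator; not the paper's exact one."""

    def __init__(self):
        self.S = 0.0
        self.T = 0.0

    def add(self, u):
        """Add one particle with unnormalized weight u > 0 in O(1)."""
        assert u > 0
        self.S += u
        self.T += u * math.log(u)

    def entropy(self):
        """Entropy in nats; call only after at least one add()."""
        return math.log(self.S) - self.T / self.S

be = IncrementalEntropy()
for u in [0.5, 0.25, 0.25]:
    be.add(u)
print(be.entropy())  # ~1.0397 nats, the entropy of [0.5, 0.25, 0.25]
```

The point of the cached sums is that extending the belief never forces a full pass over the existing particles, which is where the "orders of magnitude" savings for belief-dependent rewards would come from.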


Belief Roadmaps with Uncertain Landmark Evanescence

arXiv.org Artificial Intelligence

We would like a robot to navigate to a goal location while minimizing state uncertainty. To aid the robot in this endeavor, maps provide a prior belief over the locations of objects and regions of interest. To localize itself within the map, a robot identifies mapped landmarks using its sensors. However, as the time between map creation and robot deployment increases, portions of the map can become stale, and landmarks, once believed to be permanent, may disappear. We refer to the propensity of a landmark to disappear as landmark evanescence. Reasoning about landmark evanescence during path planning, and about its impact on localization accuracy, requires analyzing the presence or absence of each landmark, leading to an exponential number of possible outcomes for a given motion plan. To address this complexity, we develop BRULE, an extension of the Belief Roadmap. During planning, we replace the belief over future robot poses with a Gaussian mixture that captures the effects of landmark evanescence. Furthermore, we show that belief updates can be made efficient and that maintaining a random subset of mixture components is sufficient to find high-quality solutions. We demonstrate performance in simulated and real-world experiments. Software is available at https://bit.ly/BRULE.
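The exponential blow-up is easy to see in code: each subset of surviving landmarks induces one mixture component. The sketch below enumerates these components from per-landmark survival probabilities and keeps a random subset, loosely mirroring the subsampling idea; all names (p_present, n_keep) are hypothetical, not from the paper.

```python
import itertools
import random

def presence_components(p_present, n_keep, seed=0):
    """Enumerate the 2^k landmark presence/absence hypotheses as weighted
    mixture components, then keep a random renormalized subset."""
    k = len(p_present)
    components = []
    for present in itertools.product([False, True], repeat=k):
        w = 1.0
        for pj, on in zip(p_present, present):
            w *= pj if on else (1.0 - pj)
        components.append((w, present))  # weight + which landmarks survive
    # 2^k components is intractable for large k; subsample and renormalize
    kept = random.Random(seed).sample(components, min(n_keep, len(components)))
    z = sum(w for w, _ in kept)
    return [(w / z, present) for w, present in kept]

print(presence_components([0.9, 0.6, 0.8], n_keep=4))
```

In a full planner each hypothesis would additionally carry a Gaussian over robot poses conditioned on which landmarks remain observable; here only the combinatorial weighting is shown.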


Reviews: Fast Convergence of Belief Propagation to Global Optima: Beyond Correlation Decay

Neural Information Processing Systems

The major contributions of this paper are proofs of the global convergence of BP (Theorem 1.3) and VI (Theorem 1.2) on the ferromagnetic Ising model under a specific initialization, namely initializing all variables to 1. The proof of Theorem 1.2 rests on the fact that the mean-field free energy $\Phi(x)$ is concave on the set $S$ reached by the update rule; Hölder's inequality can then be used to expand $\Phi(x^*) - \Phi(x_t)$ and obtain the upper bounds. The proof of Theorem 1.3 rests on the fact that the norm of the gradient of $\Phi(v)$ is less than 1 (Lemma 3.2), and on the properties of a variable $\mu$ sandwiched between $v_0$ and the final $v_T$ (Lemma 3.5 and Lemma F.1). Other, minor contributions include examples that empirically demonstrate the convergence (Appendix G) and a demonstration of how to use the ellipsoid method to optimize the beliefs (Appendix H). I have to admit that I am not familiar with this area, so I could only go through part of the proof, and I am not able to evaluate the originality and quality of this work.
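For readers unfamiliar with the objects involved, a standard form of the naive mean-field free energy for an Ising model with couplings $J_{ij}$ and fields $h_i$ is given below; normalization conventions vary, and the paper's may differ.

```latex
% Naive mean-field free energy over magnetizations x \in [-1, 1]^n;
% maximizing it yields the variational lower bound \log Z \ge \Phi(x).
\Phi(x) = \sum_{(i,j) \in E} J_{ij}\, x_i x_j + \sum_i h_i x_i
        + \sum_i H\!\Big(\frac{1 + x_i}{2}\Big),
\qquad H(p) = -p \log p - (1 - p)\log(1 - p).
```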


Fast Convergence of Belief Propagation to Global Optima: Beyond Correlation Decay

Neural Information Processing Systems

Belief propagation is a fundamental message-passing algorithm for probabilistic reasoning and inference in graphical models. While it is known to be exact on trees, in most applications belief propagation is run on graphs with cycles. Understanding the behavior of "loopy" belief propagation has been a major challenge for researchers in machine learning and other fields, and positive convergence results for BP are known only under strong assumptions which imply that the underlying graphical model exhibits decay of correlations. We show, building on previous work of Dembo and Montanari, that under a natural initialization BP converges quickly to the global optimum of the Bethe free energy for Ising models on arbitrary graphs, as long as the Ising model is ferromagnetic (i.e., all pairwise interactions encourage neighboring variables to align).
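A minimal runnable sketch of the setup (not the paper's code): loopy BP in the log domain on a small ferromagnetic Ising model, with every message initialized to favor the +1 state, in the spirit of the natural initialization the abstract refers to. The graph, coupling strength beta, and field h are illustrative choices.

```python
import math

def bp_ising(nbrs, beta=0.3, h=0.05, iters=200, tol=1e-10):
    """Log-domain BP for a uniform ferromagnetic Ising model.
    u[(i, j)] is the half log-ratio message from node i to node j;
    all messages start positive, i.e., biased toward +1."""
    u = {(i, j): 1.0 for i in nbrs for j in nbrs[i]}
    for _ in range(iters):
        new = {}
        for i in nbrs:
            for j in nbrs[i]:
                # cavity field at i, excluding the message back from j
                cavity = h + sum(u[(k, i)] for k in nbrs[i] if k != j)
                new[(i, j)] = math.atanh(math.tanh(beta) * math.tanh(cavity))
        if max(abs(new[e] - u[e]) for e in u) < tol:
            u = new
            break
        u = new
    # node marginals as magnetizations m_i = tanh(h + incoming messages)
    return {i: math.tanh(h + sum(u[(k, i)] for k in nbrs[i])) for i in nbrs}

# A 4-cycle, i.e., a loopy graph: all magnetizations settle at a common
# positive value when started from the all-plus messages.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(bp_ising(ring))
```

On this small example the iteration converges from the biased start; on general graphs, such convergence to the Bethe optimum is exactly what the paper's theorem guarantees in the ferromagnetic case.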


Reviews: Fast Convergence of Belief Propagation to Global Optima: Beyond Correlation Decay

Neural Information Processing Systems

The reviewers liked the results on the convergence of belief propagation algorithms for Ising models in certain settings. As a presentational point, they suggest providing more extensive proof sketches in the main body of the paper.