Belief Revision
Natural revision is contingently-conditionalized revision
Natural revision seems so natural: it changes beliefs as little as possible to incorporate new information. Yet, some counterexamples show it to be wrong. It is so conservative that it never fully believes the new information: it believes it only under the current conditions. This is right in some cases and wrong in others. Which is which? The answer requires extending natural revision from simple formulae expressing universal truths (something holds) to conditionals expressing conditional truths (something holds under certain conditions). The extension is based on the basic principles natural revision follows, identified as minimal change, indifference and naivety: change beliefs as little as possible; equate the likeliness of scenarios by default; believe everything until contradicted. The extension shows that natural revision restricts changes to the current conditions. A comparison with an unrestricted revision clarifies what exactly the current conditions are: not what is currently considered true, if that contradicts the new information, but something progressively more unlikely, extended until the new information becomes at least possible.
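To make the mechanism concrete, here is a minimal sketch of natural revision on a ranked model of worlds, in the style of Boutilier's operator: the most plausible worlds satisfying the new formula are promoted to strictly maximal plausibility, and every other world keeps its relative position. The two-variable example and all names are illustrative, not taken from the paper.

```python
def natural_revision(ranks, satisfies):
    """ranks: dict mapping world -> plausibility rank (0 = most plausible)."""
    sat = [w for w in ranks if satisfies(w)]
    if not sat:
        return dict(ranks)                           # formula impossible: no change
    best = min(ranks[w] for w in sat)
    new = {w: r + 1 for w, r in ranks.items()}       # push every world down one step
    for w in sat:
        if ranks[w] == best:
            new[w] = 0                               # promote only the minimal phi-worlds
    return new

# Worlds are truth assignments over (a, b); initially a and b are believed.
ranks = {(1, 1): 0, (1, 0): 1, (0, 1): 2, (0, 0): 3}
revised = natural_revision(ranks, lambda w: w[0] == 0)   # revise by "not a"
print(revised)   # {(1, 1): 1, (1, 0): 2, (0, 1): 0, (0, 0): 4}
```

Note how (1, 1) ends at rank 1, directly below the newly believed worlds: the old beliefs are displaced, not discarded, which is exactly the conservatism and contingency the abstract discusses.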
Risk-aware Control for Robots with Non-Gaussian Belief Spaces
This paper addresses the problem of safety-critical control of autonomous robots, considering the ubiquitous uncertainties arising from unmodeled dynamics and noisy sensors. To take these uncertainties into account, probabilistic state estimators are often deployed to obtain a belief over possible states. In particular, Particle Filters (PFs) can handle arbitrary non-Gaussian distributions over the robot's state. In this work, we define the belief state and belief dynamics for continuous-discrete PFs and construct safe sets in the underlying belief space. We design a controller that provably keeps the robot's belief state within this safe set. As a result, we ensure that the risk of the robot's unknown state violating a safety specification, such as avoiding a dangerous area, is bounded. We provide an open-source implementation as a ROS2 package and evaluate the solution in simulations and hardware experiments involving high-dimensional belief spaces.
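As a toy illustration of the kind of belief-space safety check involved (not the paper's controller or its ROS2 package), the sketch below propagates a one-dimensional, discrete-time particle-filter belief and accepts only control inputs whose predicted risk, estimated as the weighted fraction of particles in an unsafe region, stays within a bound delta. The dynamics, sensor model, and all constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
particles = rng.normal(0.0, 0.5, N)        # belief over a 1-D robot position
weights = np.full(N, 1.0 / N)

def predict(particles, u, dt=0.1, q=0.05):
    # assumed single-integrator dynamics with Gaussian process noise
    return particles + u * dt + rng.normal(0.0, q, particles.size)

def update(particles, weights, z, r=0.1):
    like = np.exp(-0.5 * ((z - particles) / r) ** 2)   # assumed Gaussian sensor
    w = weights * like
    return w / w.sum()

def risk(particles, weights, unsafe=lambda x: x > 1.0):
    return float(weights[unsafe(particles)].sum())     # estimated P(state unsafe)

def safe_control(particles, weights, candidates, delta=0.05):
    # keep only inputs whose one-step predicted belief respects the risk
    # bound delta; fall back to stopping (u = 0) if none qualifies
    for u in candidates:
        if risk(predict(particles, u), weights) <= delta:
            return u
    return 0.0

u = safe_control(particles, weights, [1.0, 0.5, 0.0])
particles = predict(particles, u)
weights = update(particles, weights, z=0.2)
print("risk after update:", risk(particles, weights))
```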
Data-Driven Goal Recognition in Transhumeral Prostheses Using Process Mining Techniques
Su, Zihang, Yu, Tianshi, Lipovetzky, Nir, Mohammadi, Alireza, Oetomo, Denny, Polyvyanyy, Artem, Sardina, Sebastian, Tan, Ying, van Beest, Nick
A transhumeral prosthesis restores missing anatomical segments below the shoulder, including the hand. Active prostheses utilize real-valued, continuous sensor data to recognize patient target poses, or goals, and proactively move the artificial limb. Previous studies have examined how well data collected in stationary poses, without considering the time steps, can help discriminate between goals. In this case study paper, we focus on using time series data from surface electromyography electrodes and kinematic sensors to sequentially recognize patients' goals. Our approach involves transforming the data into discrete events and training an existing process mining-based goal recognition system. Results from data collected in a virtual reality setting with ten subjects demonstrate the effectiveness of our proposed goal recognition approach, which achieves significantly better precision and recall than state-of-the-art machine learning techniques and is less confident when wrong, a property that is beneficial when approximating smoother prosthesis movements.
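A hedged sketch of the discretization step the abstract alludes to: continuous channels are mapped to symbolic level-change events, yielding traces that a process-mining goal recognizer could consume. The channel name, bin edges, and event vocabulary are invented for illustration, not the study's actual encoding.

```python
import numpy as np

def to_events(signal, channel, edges=(0.2, 0.6)):
    """Map one normalized channel to (time, event) pairs, emitting an event
    only when the discrete level changes."""
    levels = np.digitize(signal, edges)          # 0 = low, 1 = mid, 2 = high
    events, prev = [], None
    for t, lv in enumerate(levels):
        if lv != prev:
            events.append((t, f"{channel}_level{lv}"))
            prev = lv
    return events

emg = np.abs(np.sin(np.linspace(0, 3, 30)))      # stand-in for rectified sEMG
trace = to_events(emg, "biceps")
print(trace)    # e.g. [(0, 'biceps_level0'), (3, 'biceps_level1'), ...]
```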
Belief revision and incongruity: is it a joke?
Dupin de Saint-Cyr - Bannay, Florence, Prade, Henri
Even if much has been written about the ingredients that trigger laughter, researchers are still far from a complete understanding of their interplay in the cognitive process that leads a listener to guffaw at a pun or a joke. They are even further from a detailed analysis and modeling of the mechanisms at work in this process. However, in recent articles, Dupin de Saint-Cyr and Prade (2020, 2022) took a first step in this direction by laying bare that a belief revision mechanism is solicited in the reception of a narrative joke. Namely, the punchline, which triggers a revision, is both surprising and perfectly explanatory of what was reported at the beginning of the joke. A similar idea was proposed more informally in Ritchie (2002). It is quite clear, however, that this alone is insufficient for characterizing a narrative joke.
Epistemic Planning for Heterogeneous Robotic Systems
Bramblett, Lauren, Bezzo, Nicola
Heterogeneous multi-robot system deployment offers a variety of advantages including improved versatility, scalability, and adaptability over homogeneous systems. As robotic technology has advanced over the last few decades, making robots smaller, more capable, and affordable, demand for multi-robot research has grown. Appropriate coordination of these heterogeneous systems can improve the effectiveness of safety-critical missions such as surveillance, exploration, and rescue operations by incorporating the capabilities of each robot. However, the complexity of the solution for a heterogeneous system can exponentially expand over long periods of disconnectivity, especially in uncertain environments. For example, consider Figure 1, where two unmanned ground vehicles (UGVs) and one unmanned aerial vehicle (UAV) are exploring an environment and may discover tasks at undisclosed locations. During disconnection, the UAV maintains a set of possible (belief) states for UGV 1 and UGV 2 and also a set of (empathy) states that UGV 1 and UGV 2 might believe about the UAV. The UAV finds a task that requires a UGV and plans to communicate with UGV 2. After the UAV travels to UGV 2's first belief state, it finds that UGV 2 is not present. So, the UAV reasons that UGV 2 might be at the second belief state, where it successfully communicates.
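The rendezvous logic walked through in the example can be phrased as a simple loop over hypothesized teammate locations, visited in order of plausibility. The sketch below is illustrative only (it omits the empathy states entirely), and travel_to and try_communicate are hypothetical placeholders rather than the authors' planner.

```python
def find_teammate(belief_states, travel_to, try_communicate):
    """Visit hypothesized teammate locations, most plausible first."""
    for state in belief_states:
        travel_to(state)
        if try_communicate():          # teammate actually there?
            return state
    return None                        # all hypotheses exhausted

ugv2_beliefs = [(4.0, 2.0), (7.5, 1.0)]   # first and second belief states
visited = []
found = find_teammate(
    ugv2_beliefs,
    travel_to=visited.append,
    try_communicate=lambda: len(visited) == 2,   # UGV 2 sits at the second state
)
print(found)   # (7.5, 1.0): contact succeeds at the second belief state
```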
Verifiable Goal Recognition for Autonomous Driving with Occlusions
Brewitt, Cillian, Tamborski, Massimiliano, Wang, Cheng, Albrecht, Stefano V.
Goal recognition (GR) involves inferring the goals of other vehicles, such as a certain junction exit, which can enable more accurate prediction of their future behaviour. In autonomous driving, vehicles can encounter many different scenarios and the environment may be partially observable due to occlusions. We present a novel GR method named Goal Recognition with Interpretable Trees under Occlusion (OGRIT). OGRIT uses decision trees learned from vehicle trajectory data to infer the probabilities of a set of generated goals. We demonstrate that OGRIT can handle missing data due to occlusions and make inferences across multiple scenarios using the same learned decision trees, while being computationally fast, accurate, interpretable and verifiable. We also release the inDO, rounDO and OpenDDO datasets of occluded regions used to evaluate OGRIT.
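To illustrate the flavor of the approach (this is not the authors' code), the sketch below evaluates one small decision tree per goal and normalizes the resulting likelihoods against goal priors; when a feature is occluded, both branches are averaged, which is one simple way to stay well-defined under missing data. Tree shapes, feature names, and priors are invented.

```python
def eval_tree(node, feats):
    """Return the likelihood at the leaf reached by the observed features;
    average both branches when the tested feature is occluded (absent)."""
    if "leaf" in node:
        return node["leaf"]
    f = node["feature"]
    if f not in feats:
        return 0.5 * (eval_tree(node["true"], feats)
                      + eval_tree(node["false"], feats))
    return eval_tree(node["true" if feats[f] else "false"], feats)

def goal_probs(trees, priors, feats):
    scores = {g: priors[g] * eval_tree(t, feats) for g, t in trees.items()}
    z = sum(scores.values())
    return {g: s / z for g, s in scores.items()}

trees = {
    "exit_left":  {"feature": "in_left_lane",
                   "true": {"leaf": 0.9}, "false": {"leaf": 0.2}},
    "exit_right": {"feature": "in_left_lane",
                   "true": {"leaf": 0.1}, "false": {"leaf": 0.8}},
}
priors = {"exit_left": 0.5, "exit_right": 0.5}
print(goal_probs(trees, priors, {"in_left_lane": True}))   # left strongly favored
print(goal_probs(trees, priors, {}))                       # feature occluded
```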
Online Goal Recognition in Discrete and Continuous Domains Using a Vectorial Representation
Tesch, Douglas, Amado, Leonardo Rosa, Meneguzzi, Felipe
While recent work on online goal recognition efficiently infers goals under low observability, comparatively little work focuses on online goal recognition that works in both discrete and continuous domains. Online goal recognition approaches often rely on repeated calls to the planner at each new observation, incurring high computational costs. Recognizing goals online in continuous space quickly and reliably is critical for trajectory planning problems, such as robotics applications, since the real physical world is fast-moving. We develop an efficient method for goal recognition that relies either on a single call to the planner for each possible goal in discrete domains or on a simplified motion model that reduces the computational burden in continuous ones. The resulting approach performs the online component of recognition orders of magnitude faster than the current state of the art, making it the first online method effectively usable for robotics applications that require sub-second recognition.
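One common way to realize such cost-based recognition is sketched below, with straight-line distance standing in for the simplified motion model (an assumption, not necessarily the paper's model): each goal needs one offline cost-to-go computation, and each new observation only updates a cheap cost-difference score, so no planner is called online.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def goal_posterior(start, observed, goals, beta=2.0):
    """observed: trajectory prefix; goals: dict name -> position.
    Goals whose optimal cost is barely worsened by the observed
    prefix come out more probable."""
    scores = {}
    for g, pos in goals.items():
        optimal = dist(start, pos)                   # one offline "plan" per goal
        prefix = sum(dist(a, b) for a, b in zip(observed, observed[1:]))
        via_obs = prefix + dist(observed[-1], pos)   # cheap online estimate
        scores[g] = math.exp(-beta * (via_obs - optimal))
    z = sum(scores.values())
    return {g: s / z for g, s in scores.items()}

path = [(0, 0), (1, 0.1), (2, 0.2)]                  # heading roughly east
print(goal_posterior((0, 0), path, {"east": (5, 0), "north": (0, 5)}))
```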
Belief Revision from Probability
Goodman, Jeremy, Salow, Bernhard
In previous work ("Knowledge from Probability", TARK 2021) we develop a question-relative, probabilistic account of belief. On this account, what someone believes relative to a given question is (i) closed under entailment, (ii) sufficiently probable given their evidence, and (iii) sensitive to the relative probabilities of the answers to the question. Here we explore the implications of this account for the dynamics of belief. We show that the principles it validates are much weaker than those of orthodox theories of belief revision like AGM, but still stronger than those valid according to the popular Lockean theory of belief, which equates belief with high subjective probability. We then consider a restricted class of models, suitable for many but not all applications, and identify some further natural principles valid on this class. We conclude by arguing that the present framework compares favorably to the rival probabilistic accounts of belief developed by Leitgeb and by Lin and Kelly.
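A toy instance of the account, under a loudly flagged assumption: clauses (ii) and (iii) are read here as a ratio-threshold rule that keeps every answer whose probability is within a fixed factor of the most probable answer; the paper's precise condition may differ. Belief is then whatever holds in all kept answers, which delivers closure under entailment (clause (i)) for free.

```python
def live_answers(answers, theta=0.3):
    """answers: dict answer -> probability (a partition given the question).
    Keep answers whose probability is at least theta times the maximum;
    theta is an illustrative parameter, not from the paper."""
    top = max(answers.values())
    return {a for a, p in answers.items() if p >= theta * top}

# Question "who did it?" with three answers:
answers = {"butler": 0.55, "gardener": 0.40, "cook": 0.05}
live = live_answers(answers)
print(live)                 # butler and gardener survive; cook is ruled out
print("cook" not in live)   # True: "it wasn't the cook" is believed, even
                            # though neither "butler" nor "gardener" is
```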
System of Spheres-based Two Level Credibility-limited Revisions
Garapa, Marco, Ferme, Eduardo, Reis, Maurício D. L.
Two level credibility-limited revision is a non-prioritized revision operation. When revising by a two level credibility-limited revision operator, two levels of credibility and one level of incredibility are considered. When revising by a sentence at the highest level of credibility, the operator behaves as a standard revision; if the sentence is at the second level of credibility, then the outcome of the revision process coincides with a standard contraction by the negation of that sentence; if the sentence is not credible, then the original belief set remains unchanged. In this paper, we propose a construction for two level credibility-limited revision operators based on Grove's systems of spheres and present an axiomatic characterization for these operators.
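The three-case behaviour can be stated compactly. The sketch below is a schematic dispatch, with revise, contract, level, and neg as abstract parameters rather than the paper's sphere-based constructions, followed by a toy usage on sets of literals with deliberately naive semantics.

```python
def two_level_cl_revision(beliefs, sentence, level, revise, contract, neg):
    lv = level(sentence)                          # 1, 2, or 0 (not credible)
    if lv == 1:
        return revise(beliefs, sentence)          # full, standard revision
    if lv == 2:
        return contract(beliefs, neg(sentence))   # only retract the negation
    return beliefs                                # not credible: no change

# Toy usage on sets of literals (purely illustrative semantics):
beliefs = {"p", "q"}
revise = lambda B, s: (B - {f"~{s}"}) | {s}
contract = lambda B, s: B - {s}
neg = lambda s: s[1:] if s.startswith("~") else f"~{s}"
level = lambda s: {"r": 1, "~q": 2}.get(s, 0)

print(two_level_cl_revision(beliefs, "r", level, revise, contract, neg))
# r is fully credible: it is added ({'p', 'q', 'r'})
print(two_level_cl_revision(beliefs, "~q", level, revise, contract, neg))
# ~q is second-level: q is retracted, but ~q is NOT added ({'p'})
print(two_level_cl_revision(beliefs, "x", level, revise, contract, neg))
# x is not credible: beliefs are unchanged
```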
Cognitive Bias and Belief Revision
Papadamos, Panagiotis, Gierasimczuk, Nina
Cognitive bias is a systematic human thought pattern connected with the distortion of received information that usually leads to deviations from rationality (for a recent analysis, see [18]). Such biases are not specific to human intelligence only: they can also be ascribed to artificial agents, algorithms, and programs. For instance, confirmation bias can be seen as stubbornness against new information that contradicts the previously adopted view. In some cases, such confirmation bias can be implemented into a system purposefully. Take as an example an authentication algorithm and a malicious user who is trying to break into an email account.
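A small illustrative sketch of purposeful confirmation bias in this spirit (all names and numbers invented): after enough failed attempts, the authenticator adopts the view that the requester is an intruder and refuses to revise it, even when later evidence, such as the correct password, contradicts that view.

```python
class StubbornAuthenticator:
    """Deliberate confirmation bias: once the 'intruder' view is adopted
    (after `lockout` failures), contradicting evidence is ignored."""

    def __init__(self, password, lockout=3):
        self.password = password
        self.lockout = lockout
        self.failures = 0

    def attempt(self, guess):
        if self.failures >= self.lockout:
            return False            # adopted view is never revised, even
                                    # by the correct password
        if guess == self.password:
            self.failures = 0
            return True
        self.failures += 1
        return False

auth = StubbornAuthenticator("hunter2")
print([auth.attempt(g) for g in ["a", "b", "c", "hunter2"]])
# [False, False, False, False]: after lockout, correct evidence is ignored
```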