
Collaborating Authors

 Bramblett, Lauren


Implicit Coordination using Active Epistemic Inference

arXiv.org Artificial Intelligence

Multi-robot systems (MRS) provide significant advantages for intricate tasks such as environmental monitoring, underwater inspections, and space missions. However, addressing potential communication failures or the lack of communication infrastructure in these fields remains a challenge. A significant portion of MRS research presumes that the system can maintain communication with proximity constraints, but this approach does not address situations where communication is non-existent, unreliable, or a security risk. Some approaches tackle this issue using predictions about other robots while not communicating, but these methods generally only permit agents to utilize first-order reasoning, which involves reasoning based purely on their own observations. In contrast, to deal with this problem, our proposed framework utilizes Theory of Mind (ToM), employing higher-order reasoning by shifting a robot's perspective to reason about a belief of others' observations. Our approach has two main phases: i) an efficient runtime plan adaptation using active inference to signal intentions and reason about a robot's own belief and the beliefs of others in the system, and ii) a hierarchical epistemic planning framework to iteratively reason about the current MRS mission state. The proposed framework outperforms greedy and first-order reasoning approaches and is validated using simulations and experiments with heterogeneous robotic systems.
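The core bookkeeping behind this kind of higher-order reasoning can be sketched in a few lines. The following is an illustrative toy, not the paper's implementation: all class and method names are hypothetical. Each robot tracks belief states (where a peer might be) and empathy states (where the peer might think this robot is), and constrains its own actions to stay inside the peer's hypothesis set.

```python
# Hypothetical sketch of first- and second-order belief bookkeeping.
# Names (EpistemicState, act_predictably, ...) are illustrative only.
from dataclasses import dataclass, field

@dataclass
class EpistemicState:
    beliefs: set = field(default_factory=set)  # states I think the peer may occupy
    empathy: set = field(default_factory=set)  # states the peer may think I occupy

    def observe_peer_absent(self, location):
        # First-order update: rule out a hypothesized peer state.
        self.beliefs.discard(location)

    def act_predictably(self, my_location):
        # Higher-order reasoning: only act so that my true state stays inside
        # the peer's hypothesis set, keeping implicit coordination possible.
        return my_location in self.empathy

state = EpistemicState(beliefs={"A", "B"}, empathy={"base", "ridge"})
state.observe_peer_absent("A")
print(state.beliefs)                   # {'B'}
print(state.act_predictably("base"))   # True
```

The second-order step is what distinguishes this from first-order prediction: the robot reasons not only about where the peer is, but about what the peer believes about it.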


Using High-Level Patterns to Estimate How Humans Predict a Robot will Behave

arXiv.org Artificial Intelligence

A human interacting with a robot often forms predictions of what the robot will do next. For instance, based on the recent behavior of an autonomous car, a nearby human driver might predict that the car is going to remain in the same lane. It is important for the robot to understand the human's prediction for safe and seamless interaction: e.g., if the autonomous car knows the human thinks it is not merging -- but the autonomous car actually intends to merge -- then the car can adjust its behavior to prevent an accident. Prior works typically assume that humans make precise predictions of robot behavior. However, recent research on human-human prediction suggests the opposite: humans tend to approximate other agents by predicting their high-level behaviors. We apply this finding to develop a second-order theory of mind approach that enables robots to estimate how humans predict they will behave. To extract these high-level predictions directly from data, we embed the recent human and robot trajectories into a discrete latent space. Each element of this latent space captures a different type of behavior (e.g., merging in front of the human, remaining in the same lane) and decodes into a vector field across the state space that is consistent with the underlying behavior type. We hypothesize that our resulting high-level and coarse predictions of robot behavior will correspond to actual human predictions. We provide initial evidence in support of this hypothesis through a proof-of-concept user study.
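The encode/decode idea can be illustrated with a deliberately crude stand-in for the learned model: quantize a recent trajectory to the nearest of a few discrete behavior types, then decode that type into a vector field. The behavior dictionary and the averaging encoder below are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

# Illustrative sketch: discrete latent "behavior types" for a 2-D vehicle,
# each decoding to a constant vector field. The real model learns both the
# embedding and the decoder; this toy uses nearest-centroid quantization.
BEHAVIORS = {
    "keep_lane":  np.array([1.0, 0.0]),  # advance without lateral motion
    "merge_left": np.array([1.0, 0.5]),  # drift left while advancing
}

def encode(trajectory):
    # Average displacement per step is a crude stand-in for a learned encoder.
    mean_step = np.diff(trajectory, axis=0).mean(axis=0)
    return min(BEHAVIORS, key=lambda k: np.linalg.norm(BEHAVIORS[k] - mean_step))

def decode(behavior):
    # Vector field over the state space: here, the same velocity everywhere.
    return lambda state: BEHAVIORS[behavior]

traj = np.array([[0, 0], [1, 0.4], [2, 0.9]])
z = encode(traj)
print(z)  # merge_left
field = decode(z)
print(field(np.zeros(2)))  # [1.  0.5]
```

The point of the coarse latent space is that the robot only needs to estimate which behavior type the human perceives, not the human's exact trajectory prediction.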


Take Your Best Shot: Sampling-Based Next-Best-View Planning for Autonomous Photography & Inspection

arXiv.org Artificial Intelligence

Autonomous mobile robots (AMRs) equipped with high-quality cameras have revolutionized the field of inspections by providing efficient and cost-effective means of conducting surveys. The use of autonomous inspection is becoming more widespread in a variety of contexts, yet it is still challenging to acquire the best inspection information autonomously. In situations where objects may block a robot's view, it is necessary to use reasoning to determine the optimal points for collecting data. Although researchers have explored cloud-based applications to store inspection data, these applications may not operate optimally under network constraints, and parsing these datasets can be manually intensive. Instead, there is an emerging requirement for AMRs to autonomously capture the most informative views efficiently. To address this challenge, we present an autonomous Next-Best-View (NBV) framework that maximizes the inspection information while reducing the number of pictures needed during operations. The framework consists of a formalized evaluation metric using ray-tracing and Gaussian process interpolation to estimate information reward based on the current understanding of the partially-known environment. A derivative-free optimization (DFO) method is used to sample candidate views in the environment and identify the NBV point. The proposed approach's effectiveness is shown by comparing it with existing methods and further validated through simulations and experiments with various vehicles.
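The sampling-based NBV loop reduces to: draw candidate viewpoints, score each with an information-reward metric, keep the best. The sketch below is a minimal illustration under simplifying assumptions (no occlusion ray-tracing, no Gaussian process interpolation; reward is simply the count of unobserved grid cells within sensor range), not the paper's evaluation metric.

```python
import math
import random

# Minimal sampling-based next-best-view sketch on a 10x10 cell grid.
# "unseen" holds cells not yet observed; reward counts how many a
# candidate viewpoint would cover within sensor range.
unseen = {(x, y) for x in range(10) for y in range(10)}
SENSOR_RANGE = 3.0

def reward(view):
    vx, vy = view
    return sum(1 for (x, y) in unseen
               if math.hypot(x - vx, y - vy) <= SENSOR_RANGE)

def next_best_view(samples=200, seed=0):
    # Derivative-free optimization in its simplest form: random sampling
    # followed by an argmax over the sampled candidates.
    rng = random.Random(seed)
    candidates = [(rng.uniform(0, 9), rng.uniform(0, 9)) for _ in range(samples)]
    return max(candidates, key=reward)

view = next_best_view()
print(reward(view) > 0)  # True
```

A viewpoint near the grid center covers more unseen cells than one in a corner, which is exactly the kind of trade-off the information reward is meant to capture; the paper's metric additionally accounts for occlusion and uncertainty in the partially known environment.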


Robust Online Epistemic Replanning of Multi-Robot Missions

arXiv.org Artificial Intelligence

As Multi-Robot Systems (MRS) become more affordable and computing capabilities grow, they provide significant advantages for complex applications such as environmental monitoring, underwater inspections, or space exploration. However, accounting for potential communication loss or the unavailability of communication infrastructures in these application domains remains an open problem. Much of the applicable MRS research assumes that the system can sustain communication through proximity regulations and formation control, or by devising a framework for separating and adhering to a predetermined plan for extended periods of disconnection. The latter technique enables an MRS to be more efficient, but breakdowns and environmental uncertainties can have a domino effect throughout the system, particularly when the mission goal is intricate or time-sensitive. To deal with this problem, our proposed framework has two main phases: i) a centralized planner to allocate mission tasks by rewarding intermittent rendezvous between robots to mitigate the effects of unforeseen events during mission execution, and ii) a decentralized replanning scheme leveraging epistemic planning to formalize belief propagation and a Monte Carlo tree search for policy optimization given distributed rational belief updates. The proposed framework outperforms a baseline heuristic and is validated using simulations and experiments with aerial vehicles.
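The decentralized replanning idea, stripped to its essentials, is Monte Carlo policy evaluation under a belief distribution over peer states. The toy below is an assumption-laden sketch (one-step rollouts with a rendezvous reward stand-in, not the paper's tree search), meant only to show the shape of the computation.

```python
import random

# Toy Monte Carlo policy selection for decentralized replanning.
# A candidate policy is scored by its average simulated return over
# peer states sampled from the robot's belief distribution.
def rollout(policy, peer_state):
    # Stand-in dynamics: reward achieving rendezvous with the peer.
    return 1.0 if policy == peer_state else 0.0

def plan(policies, belief, n=500, seed=1):
    rng = random.Random(seed)
    def value(p):
        return sum(rollout(p, rng.choice(belief)) for _ in range(n)) / n
    return max(policies, key=value)

# Belief: the peer is twice as likely to be at site_a as at site_b.
belief = ["site_a", "site_a", "site_b"]
print(plan(["site_a", "site_b"], belief))  # site_a
```

A full MCTS would expand this into multi-step lookahead with selection, expansion, rollout, and backpropagation, and the belief itself would be updated rationally as disconnection time grows; the sketch only shows the evaluation-under-belief core.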


Epistemic Planning for Heterogeneous Robotic Systems

arXiv.org Artificial Intelligence

Heterogeneous multi-robot system deployment offers a variety of advantages over homogeneous systems, including improved versatility, scalability, and adaptability. As robotic technology has advanced over the last few decades, making robots smaller, more capable, and more affordable, demand for multi-robot research has grown. Appropriate coordination of these heterogeneous systems can improve the effectiveness of safety-critical missions such as surveillance, exploration, and rescue operations by incorporating the capabilities of each robot. However, the complexity of the solution for a heterogeneous system can expand exponentially over long periods of disconnectivity, especially in uncertain environments. For example, consider Figure 1, where two unmanned ground vehicles (UGVs) and one unmanned aerial vehicle (UAV) are exploring an environment and may discover tasks at undisclosed locations. During disconnection, the UAV maintains a set of possible (belief) states for UGV 1 and UGV 2, and also a set of (empathy) states that UGV 1 and UGV 2 might believe about the UAV. The UAV finds a task that requires a UGV and plans to communicate with UGV 2. After the UAV travels to UGV 2's first belief state, it finds that UGV 2 is not present. So, the UAV reasons that UGV 2 might be at the second belief state, and there it successfully communicates.
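The UAV's search behavior in this example is a simple loop over the peer's hypothesized states. The function below is an illustrative sketch with hypothetical names; the equality check stands in for an actual communication handshake at each visited location.

```python
# Illustrative search-by-belief sketch: visit a peer's hypothesized (belief)
# states in order of plausibility until communication succeeds.
def find_peer(belief_states, actually_at):
    visited = []
    for state in belief_states:       # ordered most- to least-likely
        visited.append(state)
        if state == actually_at:      # stand-in for a comms handshake
            return state, visited
    return None, visited              # belief set exhausted: peer not found

# The UAV first checks UGV 2's most likely state, fails, then succeeds
# at the second belief state, mirroring the Figure 1 scenario.
state, path = find_peer(["waypoint_1", "waypoint_2"], actually_at="waypoint_2")
print(state)  # waypoint_2
print(path)   # ['waypoint_1', 'waypoint_2']
```

The finite belief set is what keeps this tractable: the UAV never searches the whole environment, only the states consistent with what it knows about the UGV's plan.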


Epistemic Prediction and Planning with Implicit Coordination for Multi-Robot Teams in Communication Restricted Environments

arXiv.org Artificial Intelligence

Multi-robot systems (MRS) have the potential to assist in many safety-critical applications such as search and rescue, military intelligence and surveillance, and inspection operations, where it may be hazardous and costly to deploy humans. Looking to the state of the art, we note that most MRS research assumes constant communication between robots [1]-[3]. However, within the aforementioned application space, long-range communication is often unreliable or unavailable. Humans adequately cope with such problems, performing these tasks collaboratively by extrapolating and empathizing with what other actors might believe if the local plan must change at run-time. This subconscious process can be modally represented as epistemic planning: computing and reasoning about multiple predictions and actions while accounting for a priori beliefs, current observations, and other actors' sensing and mobility capabilities. Thus, we introduce a coordinated epistemic prediction and planning method in which a robot propagates a finite set of belief states representing possible states of other agents in the system, and empathy states representing a finite set of possible states from other agents' perspectives. Subsequently, using epistemic planning, we can formulate a consensus strategy such that every distributed belief in the system achieves consensus. For example, consider Figure 1, where two robots are canvassing an environment. During disconnection, Robot 1 maintains a set of possible (belief) states for Robot 2 and also a set of (empathy) states that Robot 2 might believe about Robot 1. Once Robot 2 experiences a failure, it tracks another state in its empathy set. We reason that though Robot 1 holds a false belief about Robot 2's state, there exists an epistemic strategy that allows Robot 1 to find Robot 2.
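The consensus step in this example can be pictured as intersecting the distributed belief sets once the robots reconnect: each robot contributes the hypotheses it has not ruled out, and the common element is the true state. This is a deliberately simplified illustration with made-up state labels, not the paper's consensus strategy.

```python
# Illustrative consensus sketch: after reconnecting, robots intersect their
# belief sets so every robot converges on the same (true) world state.
def consense(belief_sets):
    return set.intersection(*belief_sets)

# Robot 1 holds a false belief: it still considers Robot 2 nominal.
# Robot 2 knows its own failure, so its set is already a singleton.
robot1_belief = {"r2_nominal", "r2_failed"}
robot2_belief = {"r2_failed"}
print(consense([robot1_belief, robot2_belief]))  # {'r2_failed'}
```

The interesting part of the full method is what happens before this intersection: because Robot 2 tracks its empathy set, its post-failure behavior remains predictable to Robot 1, so their belief sets are guaranteed to still overlap when they meet.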