Collaborating Authors

 Krumme, Jonas


World Knowledge from AI Image Generation for Robot Control

arXiv.org Artificial Intelligence

Real images encode a lot of information about the world, such as what an object can look like, how certain things can be meaningfully arranged, or which items belong together. The image of an average office desk can tell us how its different parts are usually arranged in relation to each other, e.g. a monitor on the desk with a mouse and keyboard in front of it and a chair in front of the desk, and the image of someone preparing a meal can tell us which ingredients and kitchen tools are to be used. This might seem rather trivial from a human perspective, as we can easily handle such tasks without having to rely on pre-made example images to follow, but for a robot that has to navigate and solve tasks in, e.g., a household environment, such information can be critical for successfully handling everyday activities and interacting with the world. We could encode all relevant information explicitly into an extensive knowledge base [1] for the robot, but considering the number of tasks and circumstances that a robot could encounter, correctly handling all situations could become very challenging [2] or even overwhelming when the robot needs to act in widely different environments. Additional knowledge sources, such as simulations of the environment, when available, can help by providing ways to investigate the consequences of actions without having to act in the world [3]. We could also try to train the robot on a variety of different tasks, e.g. using reinforcement learning or other methods [4], hoping that the robot is able to generalize and handle situations and circumstances that were never seen during training. However, images of the real world already show examples of what a dining table set with plates and cutlery looks like, or how pictures are hung on the wall in bedrooms, dining rooms, and other places. Figure 1 shows an example of two different ways in which sandwich ingredients could be stacked together.


Robot Pouring: Identifying Causes of Spillage and Selecting Alternative Action Parameters Using Probabilistic Actual Causation

arXiv.org Artificial Intelligence

In everyday life, we perform tasks (e.g., cooking or cleaning) that involve a large variety of objects and goals. When confronted with an unexpected or unwanted outcome, we take corrective actions and try again until we achieve the desired result. Identifying a cause of the observed outcome and selecting an appropriate corrective action are crucial aspects of human reasoning for successful task execution. Central to this reasoning is the assumption that a particular factor is responsible for producing the observed outcome. In this paper, we investigate the use of probabilistic actual causation to determine whether a factor is the cause of an observed undesired outcome. Furthermore, we show how the actual causation probabilities can be used to find alternative actions that change the outcome. We apply the probabilistic actual causation analysis to a robot pouring task. When spillage occurs, the analysis indicates whether a task parameter is the cause and how it should be changed to avoid spillage. The analysis requires a causal graph of the task and the corresponding conditional probability distributions. To fulfill these requirements, we perform a complete causal modeling procedure (i.e., task analysis, definition of variables, determination of the causal graph structure, and estimation of conditional probability distributions) using data from a realistic simulation of the robot pouring task, covering a large combinatorial space of task parameters. Based on the results, we discuss the implications of the variables' representation and how the alternative actions suggested by the actual causation analysis compare to the alternative solutions proposed by a human observer. Finally, we demonstrate the practical use of probabilistic actual causation analysis for selecting alternative action parameters.
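
To make this kind of query more concrete, here is a minimal, illustrative sketch (not the authors' implementation): a toy discrete causal model of a pouring task in which spillage is observed under the chosen tilt angle, and each alternative angle is scored by how likely it would have avoided the spill given the context inferred from that observation. All variable names, values, and probabilities below are assumptions made up for illustration.

```python
# Toy stand-in for a probabilistic actual-causation style query on a pouring task.
# We observe spillage with the actual tilt angle, infer a posterior over the
# unobserved context (fill level), and ask how likely each alternative tilt
# angle would have been to avoid the spill under that inferred context.

# Hypothetical conditional probability of spillage given tilt angle and fill level.
P_SPILL = {
    ("low", "half"):  0.05,
    ("low", "full"):  0.20,
    ("mid", "half"):  0.15,
    ("mid", "full"):  0.55,
    ("high", "half"): 0.40,
    ("high", "full"): 0.90,
}
P_FILL = {"half": 0.5, "full": 0.5}  # prior over the unobserved context variable


def p_fill_given_spill(tilt: str) -> dict:
    """Posterior over fill level after observing spillage with the actual tilt angle."""
    joint = {f: P_FILL[f] * P_SPILL[(tilt, f)] for f in P_FILL}
    z = sum(joint.values())
    return {f: p / z for f, p in joint.items()}


def p_no_spill_under_alternative(actual_tilt: str, alt_tilt: str) -> float:
    """Probability that switching to alt_tilt avoids spillage, holding the inferred context fixed."""
    posterior = p_fill_given_spill(actual_tilt)
    return sum(posterior[f] * (1.0 - P_SPILL[(alt_tilt, f)]) for f in posterior)


if __name__ == "__main__":
    actual = "high"  # the parameter used when spillage was observed
    for alt in ("low", "mid"):
        print(f"P(no spill | switch {actual} -> {alt}) = "
              f"{p_no_spill_under_alternative(actual, alt):.2f}")
```

In the setting described in the abstract, the causal graph and conditional probability distributions are estimated from simulation data over a large combinatorial space of task parameters; the hard-coded tables above merely stand in for those learned distributions, and the full actual-causation analysis is richer than this simplified observe-then-intervene calculation.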