Collaborating Authors

Zetzsche, Christoph


World Knowledge from AI Image Generation for Robot Control

arXiv.org Artificial Intelligence

Real images encode a great deal of information about the world, such as what an object can look like, how certain things can be meaningfully arranged, or which items belong together. The image of an average office desk tells us how its parts are usually arranged in relation to each other, e.g. a monitor on the desk with mouse and keyboard in front of it and a chair in front of the desk; the image of someone preparing a meal tells us which ingredients and kitchen tools are to be used. This might seem trivial from a human perspective, since we easily handle such tasks without relying on pre-made example images, but for a robot that has to navigate and solve tasks in, e.g., a household environment, such information can be critical for successfully handling everyday activities and interacting with the world. We could encode all relevant information explicitly in an extensive knowledge base [1], but given the number of tasks and circumstances a robot could encounter, correctly handling all situations can become very challenging [2] or even overwhelming when the robot needs to act in widely different environments. Additional knowledge sources, such as simulations of the environment, can help when available by providing ways to investigate the consequences of actions without having to act in the world [3]. We could also try to train the robot on a variety of different tasks, e.g. using reinforcement learning or other methods [4], hoping that it generalizes to situations and circumstances never seen during training. However, images of the real world already show, for example, what a dining table with plates and cutlery looks like, or how pictures are hung on the wall in bedrooms, dining rooms, and other places. Figure 1 shows two different versions of how sandwich ingredients could be stacked together.
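The abstract stays at the level of motivation, so the following is only a minimal sketch of the underlying idea, not the authors' actual pipeline: it assumes a Hugging Face diffusers text-to-image model and a hypothetical detect_objects helper standing in for any off-the-shelf open-vocabulary detector. The robot generates an example image of the target scene and reads a coarse object layout out of it as an arrangement prior.

```python
# Sketch: AI image generation as a source of world knowledge for a robot.
# Assumptions (not from the paper): Hugging Face diffusers for generation,
# and a hypothetical detect_objects() helper for object localization.
from diffusers import StableDiffusionPipeline

def generate_example_scene(prompt: str):
    """Generate a reference image of the desired scene arrangement."""
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    return pipe(prompt).images[0]

def detect_objects(image, labels: list[str]) -> dict:
    """Hypothetical stand-in for an open-vocabulary detector; would return
    {label: (x_center, y_center)} for each detected object."""
    raise NotImplementedError("plug in a detector of your choice")

def layout_prior(prompt: str, labels: list[str]) -> dict:
    """Turn a generated image into a coarse spatial prior the robot can imitate."""
    image = generate_example_scene(prompt)
    return detect_objects(image, labels)

# Usage: a coarse prior for where monitor, keyboard and mouse usually go.
# prior = layout_prior("a tidy office desk with monitor, keyboard and mouse",
#                      ["monitor", "keyboard", "mouse"])
```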


Robot Pouring: Identifying Causes of Spillage and Selecting Alternative Action Parameters Using Probabilistic Actual Causation

arXiv.org Artificial Intelligence

In everyday life, we perform tasks (e.g., cooking or cleaning) that involve a large variety of objects and goals. When confronted with an unexpected or unwanted outcome, we take corrective actions and try again until achieving the desired result. Identifying a cause of the observed outcome and selecting an appropriate corrective action are crucial aspects of human reasoning for successful task execution. Central to this reasoning is the assumption that some factor is responsible for producing the observed outcome. In this paper, we investigate the use of probabilistic actual causation to determine whether a factor is the cause of an observed undesired outcome. Furthermore, we show how the actual causation probabilities can be used to find alternative actions that change the outcome. We apply the probabilistic actual causation analysis to a robot pouring task. When spillage occurs, the analysis indicates whether a task parameter is the cause and how it should be changed to avoid spillage. The analysis requires a causal graph of the task and the corresponding conditional probability distributions. To fulfill these requirements, we perform a complete causal modeling procedure (i.e., task analysis, definition of variables, determination of the causal graph structure, and estimation of conditional probability distributions) using data from a realistic simulation of the robot pouring task, covering a large combinatorial space of task parameters. Based on the results, we discuss the implications of the variables' representation and how the alternative actions suggested by the actual causation analysis compare to the alternative solutions proposed by a human observer. Finally, we demonstrate the practical use of probabilistic actual causation analysis for selecting alternative action parameters.
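As a minimal sketch of the selection step described above, assuming a toy conditional probability table with invented numbers (not the paper's learned distributions or its exact actual-causation formalization), one can score counterfactual parameter settings by how much they reduce the spillage probability relative to the observed setting:

```python
# Probability-raising sketch for a pouring task. The causal structure and
# the numbers below are invented for illustration only.
from itertools import product

# P(spillage = 1 | speed, fill) as a toy conditional probability table.
P_SPILL = {
    ("fast", "high"): 0.70,
    ("fast", "low"):  0.30,
    ("slow", "high"): 0.20,
    ("slow", "low"):  0.05,
}

def spill_probability(speed: str, fill: str) -> float:
    return P_SPILL[(speed, fill)]

def alternative_actions(actual: dict, domain: dict) -> list:
    """Rank counterfactual parameter settings by how much they reduce
    the spillage probability relative to the actual setting."""
    p_actual = spill_probability(**actual)
    candidates = []
    for speed, fill in product(domain["speed"], domain["fill"]):
        p = spill_probability(speed, fill)
        if p < p_actual:
            candidates.append(((speed, fill), p_actual - p))
    return sorted(candidates, key=lambda c: -c[1])

actual = {"speed": "fast", "fill": "high"}      # spillage was observed
domain = {"speed": ["fast", "slow"], "fill": ["high", "low"]}
for params, reduction in alternative_actions(actual, domain):
    print(params, f"reduces P(spillage) by {reduction:.2f}")
```

A parameter whose alternative values consistently lower the spillage probability is then a candidate cause; the paper grounds this intuition formally in probabilistic actual causation rather than in this bare probability comparison.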


Cause-effect perception in an object place task

arXiv.org Artificial Intelligence

Algorithmic causal discovery is based on formal reasoning and provably converges toward the optimal solution. However, since some of the underlying assumptions are often not met in practice, no applications for autonomous everyday-life competence are available yet. Humans, on the other hand, possess full everyday competence and develop cognitive models in a data-efficient manner, with the ability to transfer knowledge between and to new situations. Here we investigate the causal discovery capabilities of humans in an object place task in virtual reality (VR) with haptic feedback and compare the results to the state-of-the-art causal discovery algorithms FGES, PC, and FCI. In addition, we use the algorithms to analyze causal relations between sensory information and the kinematic parameters of human behavior. Our findings show that the majority of participants were able to determine which variables are causally related. This is in line with causal discovery algorithms like PC, which recover causal dependencies in their first step. However, unlike such algorithms, which can identify causes and effects in our test configuration, humans are unsure in determining a causal direction. Regarding the relation between the sensory information provided to the participants and their placing actions (i.e., their kinematic parameters), the data yields a surprising dissociation between the subjects' knowledge and the sensorimotor level. Knowledge of the cause-effect pairs, though undirected, should suffice to improve the subjects' movements. Yet a detailed causal analysis provides little evidence for any such influence. This, together with the reports of the participants, implies that instead of exploiting their consciously perceived information, the participants leave it to the sensorimotor level to control the movement.
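The "first step" of PC mentioned above is the skeleton phase: start from a fully connected undirected graph and delete edges between variables that test as (conditionally) independent. The following is a didactic, simplified sketch of that phase using a Fisher-z partial-correlation test; it is not the FGES/PC/FCI implementations the study used, and unlike full PC it conditions on all remaining variables rather than only current neighbors.

```python
# Simplified skeleton phase of the PC algorithm with a Fisher-z test.
from itertools import combinations
import numpy as np
from scipy.stats import norm

def fisher_z_pvalue(data, i, j, cond):
    """p-value for the hypothesis X_i independent of X_j given X_cond."""
    idx = [i, j] + list(cond)
    corr = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.inv(corr)                           # precision matrix
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])   # partial correlation
    z = 0.5 * np.log((1 + r) / (1 - r))                  # Fisher z-transform
    stat = np.sqrt(data.shape[0] - len(cond) - 3) * abs(z)
    return 2 * (1 - norm.cdf(stat))

def pc_skeleton(data, alpha=0.05, max_cond=2):
    """Start fully connected; delete edges that test as independent."""
    p = data.shape[1]
    edges = {frozenset(e) for e in combinations(range(p), 2)}
    for size in range(max_cond + 1):
        for i, j in combinations(range(p), 2):
            if frozenset((i, j)) not in edges:
                continue
            others = [k for k in range(p) if k not in (i, j)]
            for cond in combinations(others, size):
                if fisher_z_pvalue(data, i, j, cond) > alpha:
                    edges.discard(frozenset((i, j)))
                    break
    return edges  # undirected adjacencies, before any orientation rules
```

The returned edge set corresponds to the undirected cause-effect pairs that both PC and, per the findings above, most human participants were able to recover; orienting those edges is the step where the humans in the study remained unsure.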


Completing Knowledge by Competing Hierarchies

arXiv.org Artificial Intelligence

A control strategy for expert systems is presented which is based on Shafer's belief theory and Dempster's rule of combination. In contrast to well-known strategies it is not sequential and hypothesis-driven, but parallel and self-organizing, determined by the concept of information gain. The information gain, calculated as the maximal difference between the actual evidence distribution in the knowledge base and the potential evidence, determines each consultation step. Hierarchically structured knowledge is an important representation form, and experts even use several hierarchies in parallel to constitute their knowledge. Hence the control strategy is applied to a layered set of distinct hierarchies. Depending on the actual data, one of these hierarchies is chosen by the control strategy for the next step in the reasoning process. Provided the actual data are well matched to the structure of one hierarchy, that hierarchy remains selected for a longer consultation time. If no good match can be achieved, a switch from the current hierarchy to a competing one results, very similar to the phenomenon of restructuring in problem-solving tasks. Up to now the control strategy is restricted to multi-hierarchical knowledge bases with disjoint hierarchies. It is implemented in the expert system IBIG (inference by information gain), which is presently applied to acquired speech disorders (aphasia).
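Dempster's rule of combination, on which the strategy rests, merges two basic mass assignments over a frame of discernment and renormalizes by the unassigned conflict mass. A minimal sketch follows; the mass values in the usage example are invented, not values from the IBIG system.

```python
# Minimal sketch of Dempster's rule of combination over a discrete frame
# of discernment. Keys are frozensets of hypotheses; values are masses.
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two basic mass assignments via Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Two pieces of evidence about the (invented) diagnoses {a, b, c}.
m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
m2 = {frozenset("b"): 0.5, frozenset("abc"): 0.5}
print(dempster_combine(m1, m2))
# -> masses on {a}, {b}, {a, b}, renormalized by 1 - 0.3 of conflict
```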