Case-Based Learning by Observation in Robotics Using a Dynamic Case Representation

AAAI Conferences

Robots are becoming increasingly common in home, industrial, and medical environments. Their end users may know what they want the robots to do but lack the technical skills required to program them. We present a case-based reasoning approach for training the control module of a multi-purpose robotic platform. The control module learns by observing an expert perform a task, so no human intervention is needed to program or modify it. To avoid modifying the control module when the robot it controls is repurposed, smart sensors and effectors register with the module, allowing it to dynamically adapt the case structure it uses and how those cases are compared. The hardware configuration can therefore be modified, or completely replaced, without changing the control module. We present a case study demonstrating how a robot can be trained using learning by observation, later repurposed with new sensors, and then retrained.
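
A minimal sketch of the dynamic-case idea described in the abstract: sensors register with the control module, and both the case structure and the similarity measure adapt to whatever hardware is currently attached. All class and function names here (Sensor, ControlModule, etc.) are illustrative assumptions, not the paper's implementation.

```python
class Sensor:
    def __init__(self, name, similarity):
        self.name = name
        self.similarity = similarity  # callable: (a, b) -> [0, 1]

def numeric_similarity(a, b, scale=1.0):
    return max(0.0, 1.0 - abs(a - b) / scale)

class ControlModule:
    def __init__(self):
        self.sensors = {}   # name -> Sensor
        self.cases = []     # (inputs dict, action) pairs learned by observation

    def register_sensor(self, sensor):
        # Registering hardware dynamically extends the case structure.
        self.sensors[sensor.name] = sensor

    def observe(self, inputs, expert_action):
        # Learning by observation: store the expert demonstration as a case.
        self.cases.append((inputs, expert_action))

    def similarity(self, a, b):
        # Compare cases only on features both cases share and a registered
        # sensor can interpret, so repurposed hardware needs no code changes.
        shared = [k for k in a if k in b and k in self.sensors]
        if not shared:
            return 0.0
        return sum(self.sensors[k].similarity(a[k], b[k]) for k in shared) / len(shared)

    def act(self, inputs):
        # Retrieve the most similar stored case and reuse its action.
        return max(self.cases, key=lambda c: self.similarity(inputs, c[0]))[1]

cm = ControlModule()
cm.register_sensor(Sensor("range", lambda a, b: numeric_similarity(a, b, scale=5.0)))
cm.observe({"range": 0.5}, "stop")
cm.observe({"range": 4.0}, "forward")
print(cm.act({"range": 0.7}))  # -> "stop"
```

Swapping in a new sensor only requires one more register_sensor call; stored cases lacking that feature are simply compared on the features they do share.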


Behavior-Based Planning

AAAI Conferences

The arbiter synchronizes the votes by maintaining a consistent command space in which they are initially represented: votes are remapped into the current actuator frame of reference so they are counted correctly even after significant vehicle motion, and obsolete votes are not counted at all. However, once a behavior's votes have become obsolete, that behavior has no effect on the decision-making process until it issues a new set of votes. For example, a field-of-regard arbiter and its associated behaviors have been implemented and used to control a pair of stereo cameras on a pan/tilt platform. Field of regard refers to the camera field of view mapped onto the ground plane. Behaviors vote for different possible field-of-regard polygons, as shown in Figure 4, based on considerations such as not looking in the direction of known obstacles (since travelling in that direction is impossible), looking toward the goal, and looking at a region contiguous to already-mapped areas.
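
A toy sketch of the vote-synchronization scheme just described: behaviors vote over a shared command space (here, absolute headings), the arbiter remaps votes into the vehicle's current frame before counting, and expired votes are dropped entirely. The class names and the timeout mechanism are illustrative assumptions, not the paper's implementation.

```python
import time

class Vote:
    def __init__(self, heading_world, weight, ttl=1.0):
        self.heading_world = heading_world  # desired heading, world frame (deg)
        self.weight = weight
        self.expires = time.time() + ttl    # after this, the vote is obsolete

class Arbiter:
    def __init__(self):
        self.votes = {}  # behavior name -> list of Votes

    def submit(self, behavior, votes):
        self.votes[behavior] = votes  # a new vote set replaces the old one

    def decide(self, vehicle_yaw_world):
        now = time.time()
        tally = {}
        for votes in self.votes.values():
            for v in votes:
                if v.expires < now:
                    continue  # obsolete votes are not counted at all
                # Remap the world-frame vote into the current actuator frame,
                # so it stays valid even after significant vehicle motion.
                relative = (v.heading_world - vehicle_yaw_world) % 360
                tally[relative] = tally.get(relative, 0.0) + v.weight
        return max(tally, key=tally.get) if tally else None

arb = Arbiter()
arb.submit("seek_goal", [Vote(heading_world=90, weight=1.0)])
arb.submit("avoid_obstacle", [Vote(heading_world=120, weight=0.5)])
print(arb.decide(vehicle_yaw_world=30))  # -> 60 (world heading 90, relative to yaw 30)
```

A behavior whose votes have expired simply contributes nothing to the tally until it submits again, mirroring the behavior-dropout property noted above.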


Management of Uncertainty in the Multi-Level Monitoring and Diagnosis of the Time of Flight Scintillation Array

arXiv.org Artificial Intelligence

We present a general architecture for the monitoring and diagnosis of large-scale sensor-based systems with real-time diagnostic constraints. This architecture is multilevel, combining a single monitoring level based on statistical methods with two model-based diagnostic levels. At each level, sources of uncertainty are identified, and integrated methodologies for uncertainty management are developed. The general architecture was applied to the monitoring and diagnosis of a specific nuclear physics detector at Lawrence Berkeley National Laboratory that contained approximately 5000 components and produced over 500 channels of output data. The general architecture is scalable, and work is ongoing to apply it to detector systems one and two orders of magnitude more complex.
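
A compact sketch of the multi-level idea: a statistical monitoring level flags anomalous channels, and a model-based level maps flagged channels to candidate faulty components. The z-score threshold, the component model, and all names are illustrative assumptions, not details of the deployed detector system.

```python
import statistics

def monitor(channel_history, reading, z_threshold=3.0):
    # Level 1: statistical monitoring. Flag a channel whose latest reading
    # deviates from its recent history by more than z_threshold sigmas.
    mu = statistics.mean(channel_history)
    sigma = statistics.stdev(channel_history) or 1e-9
    return abs(reading - mu) / sigma > z_threshold

def diagnose(flagged_channels, component_model):
    # Level 2: model-based diagnosis. Rank components by the fraction of
    # their output channels that are flagged (a crude stand-in for the
    # probabilistic uncertainty management the paper develops).
    scores = {}
    for comp, channels in component_model.items():
        hits = len(set(channels) & flagged_channels)
        if hits:
            scores[comp] = hits / len(channels)
    return sorted(scores, key=scores.get, reverse=True)

component_model = {"pmt_3": ["ch_12", "ch_13"], "hv_supply_1": ["ch_12", "ch_40"]}
flagged = {ch for ch in ["ch_12", "ch_13"] if monitor([5.0, 5.1, 4.9, 5.0], 9.5)}
print(diagnose(flagged, component_model))  # -> ['pmt_3', 'hv_supply_1']
```

Separating the cheap statistical pass from the more expensive model-based pass is what makes the approach tractable at the scale of thousands of components under real-time constraints.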


Heintz

AAAI Conferences

For autonomous systems such as unmanned aerial vehicles to successfully perform complex missions, a great deal of embedded reasoning is required at varying levels of abstraction. To make use of diverse reasoning modules in such systems, integration issues such as sensor data flow and information flow between modules have to be taken into account. The DyKnow framework is a tool with a formal basis that pragmatically deals with many of the architectural issues which arise in such systems. This includes a systematic stream-based method for handling the sense-reasoning gap, caused by the wide difference in abstraction levels between the noisy data generally available from sensors and the symbolic, semantically meaningful information required by many high-level reasoning modules. DyKnow has proven to be quite robust and widely applicable to different aspects of hybrid software architectures for robotics.
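
A minimal sketch, using plain Python generators, of the stream-based idea DyKnow embodies: noisy sensor samples flow through composed stream-processing steps that lift them to symbolic, semantically meaningful facts. This is an illustrative analogy only, not DyKnow's actual middleware API.

```python
def smooth(samples, window=3):
    # Numeric-level processing: moving-average filter over a raw sensor stream.
    buf = []
    for s in samples:
        buf.append(s)
        buf = buf[-window:]
        yield sum(buf) / len(buf)

def symbolize(values, threshold=1.0):
    # Crossing the sense-reasoning gap: map filtered numbers onto symbols
    # that a high-level reasoner (e.g., a temporal logic monitor) can consume.
    for v in values:
        yield ("moving" if v > threshold else "stationary", v)

raw_speed = [0.1, 0.2, 2.8, 3.0, 2.9, 0.3, 0.1]  # noisy odometry samples
for fact in symbolize(smooth(raw_speed)):
    print(fact)
```

The point of the composition is that the high-level consumer never touches raw sensor values; it subscribes to a derived stream whose semantics are declared up front.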


Cognitive Knowledge Graph Reasoning for One-shot Relational Learning

arXiv.org Machine Learning

Inferring new facts from existing knowledge graphs (KGs) with explainable reasoning processes is a significant problem and has received much attention recently. However, few studies have focused on relation types unseen in the original KG, given only one or a few instances for training. To bridge this gap, we propose CogKR for one-shot KG reasoning. The one-shot relational learning problem is tackled through two modules: the summary module summarizes the underlying relationship of the given instances, based on which the reasoning module infers the correct answers. Motivated by the dual process theory in cognitive science, in the reasoning module, a cognitive graph is built by iteratively coordinating retrieval (System 1, collecting relevant evidence intuitively) and reasoning (System 2, conducting relational reasoning over collected information). The structural information offered by the cognitive graph enables our model to aggregate pieces of evidence from multiple reasoning paths and explain the reasoning process graphically. Experiments show that CogKR substantially outperforms previous state-of-the-art models on one-shot KG reasoning benchmarks, with relative improvements of 24.3%-29.7% in MRR. The source code is available at https://github.com/THUDM/CogKR.
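
A schematic sketch of the dual-process loop described above: System 1 expands a cognitive graph by retrieving neighboring triples, and System 2 scores candidate answers over the accumulated evidence. The scoring here is a trivial edge count; CogKR's actual modules are learned neural networks (see https://github.com/THUDM/CogKR for the real implementation).

```python
def expand(graph, frontier, kg):
    # System 1 (retrieval): intuitively collect evidence adjacent to the
    # frontier and add it to the cognitive graph.
    new_frontier = set()
    for head in frontier:
        for (h, r, t) in kg:
            if h == head and (h, r, t) not in graph:
                graph.add((h, r, t))
                new_frontier.add(t)
    return new_frontier

def reason(graph, query_entity):
    # System 2 (reasoning): aggregate evidence across multiple paths; here,
    # simply count how many collected edges point at each candidate answer.
    scores = {}
    for (h, r, t) in graph:
        scores[t] = scores.get(t, 0) + 1
    scores.pop(query_entity, None)
    return max(scores, key=scores.get) if scores else None

kg = [("alice", "works_at", "acme"), ("acme", "located_in", "london"),
      ("alice", "lives_in", "london")]
graph, frontier = set(), {"alice"}
for _ in range(2):  # iteratively coordinate retrieval and reasoning
    frontier = expand(graph, frontier, kg)
print(reason(graph, "alice"))  # -> "london" (supported by two reasoning paths)
```

Because the cognitive graph retains every retrieved edge, the winning answer can be explained by displaying the paths that support it, which is the graphical explainability the abstract highlights.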