Any autonomous system embedded in a dynamic and changing environment must be able to create, on the fly from raw or preprocessed sensor data, qualitative knowledge and object structures representing aspects of its environment, in order to reason qualitatively about that environment. These structures must be managed and made accessible to deliberative and reactive functionalities that depend on situational awareness of changes in both the robotic agent's embedding environment and its internal environment. DyKnow is a software framework which provides a set of functionalities for contextually accessing, storing, creating and processing such structures. In this paper, we focus on the use of DyKnow in supporting the representation of, and reasoning about, dynamic objects such as road vehicles in the external environment of an autonomous unmanned aerial vehicle. The representation of a complex object generally consists of simpler objects with associated features that are related to each other via linkages. These linkage structures are constructed incrementally as additional sensor data is acquired and integrated with existing structures. The resulting linkage structures represent complex objects at many levels of abstraction. Many issues related to anchoring and symbol grounding can be approached by taking advantage of the versatility of these linkage structures. Examples are provided in the paper using an experimental UAV research platform.
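The incremental linkage idea described above can be illustrated with a minimal sketch. All class and relation names here (`WorldObject`, `on_road_hypothesis`, `car_hypothesis`) are hypothetical placeholders, not DyKnow's actual API: the point is only that simple objects with features are linked upward into progressively more abstract object hypotheses as evidence accumulates.

```python
class WorldObject:
    """A hypothetical object structure: an identifier, named features,
    and links to other object hypotheses at other abstraction levels."""

    def __init__(self, oid, **features):
        self.oid = oid
        self.features = dict(features)
        self.links = {}  # relation name -> linked WorldObject

    def link(self, relation, other):
        """Attach a more abstract (or more refined) object hypothesis."""
        self.links[relation] = other


# Incremental construction: a vision percept is first represented as a
# bare world object, then linked upward to an "on-road object" and,
# once enough evidence is available, to a "car" hypothesis.
blob = WorldObject("vision-7", position=(57.3, 16.1))
on_road = WorldObject("on-road-2", road_segment="segment-12")
car = WorldObject("car-1", vehicle_type="sedan")

blob.link("on_road_hypothesis", on_road)
on_road.link("car_hypothesis", car)
```

Because each level is reached by following explicit links, the same low-level percept can later be re-linked to a different hypothesis without discarding the structures below it, which is one way to read the paper's claim that anchoring issues can be approached via the versatility of these linkages.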
In this paper, we present a motion planning framework for a fully deployed autonomous unmanned aerial vehicle which integrates two sampling-based motion planning techniques, Probabilistic Roadmaps (PRMs) and Rapidly-Exploring Random Trees (RRTs). Additionally, we incorporate dynamic reconfigurability into the framework by integrating the motion planners with the control kernel of the UAV in a novel manner, with little modification to the original algorithms. The framework has been verified through simulation and in actual flight. Empirical results show that these techniques, used within such a framework, offer a surprisingly efficient method for dynamically reconfiguring a motion plan based on unforeseen contingencies which may arise during the execution of a plan. The framework is generic and can be used on additional platforms.
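To make the sampling-based planning concrete, the following is a minimal textbook-style RRT sketch in a 2-D workspace, not the paper's implementation: the world bounds, step size, goal bias, and the trivial `is_free` collision check are all illustrative assumptions.

```python
import math
import random


def rrt(start, goal, is_free, step=0.5, goal_bias=0.05, max_iters=2000):
    """Grow a Rapidly-Exploring Random Tree from start toward goal in a
    10x10 workspace. `is_free(p)` reports whether point p is collision-free.
    Returns a list of waypoints, or None if no path is found in time."""
    nodes = [start]
    parent = {start: None}
    for _ in range(max_iters):
        # Sample a random point, occasionally biased toward the goal.
        if random.random() < goal_bias:
            sample = goal
        else:
            sample = (random.uniform(0, 10), random.uniform(0, 10))
        # Extend the nearest tree node one fixed step toward the sample.
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < step:
            # Close enough: walk parent pointers back to the start.
            path = [goal, new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
    return None


# Usage: plan in an empty world (every point is collision-free).
random.seed(0)
path = rrt((1.0, 1.0), (9.0, 9.0), is_free=lambda p: True)
```

A PRM, by contrast, precomputes a reusable roadmap of sampled configurations; the appeal of combining the two, as the abstract suggests, is that roadmap queries handle the nominal plan while tree growth handles rapid replanning when contingencies arise.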
Enthusiasm for developing conversational characters in games is not difficult to generate [1, 2], but most of these visions seem to rely on the dream of solving all of the problems of Computational Linguistics. Since such a breakthrough is unlikely to happen anytime soon, we present a more modest proposal, which still allows for complex spoken conversational interactions with a variety of NPCs in games. One of the main problems in developing spoken dialogue systems for interactive games is that individual dialogue systems have been application-specific and difficult to transfer to new domains, and thus to new games or to different characters within a game. Moreover, most of the dialogue systems developed in the past have been for simple "form-filling" interactions, which are relatively uninteresting as far as gaming is concerned. We have made some progress in developing a "plug-and-play" multi-modal (i.e.
We claim that a natural dialogue interface to a semi-autonomous intelligent agent has important advantages, especially when operating in real-time, complex, dynamic environments involving multiple concurrent tasks and activities. We discuss some of the requirements of such a dialogue interface, and describe some of the features of a working system built at CSLI, focusing on the data structures and techniques used to manage multiple interleaved threads of conversation about concurrent activities and their execution status.
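The notion of interleaved conversation threads tied to concurrent activities can be sketched as follows. This is an illustrative toy, not the CSLI system's design: the class names, the "most recently updated thread speaks first" salience policy, and the report format are all assumptions made for the example.

```python
from collections import deque


class ConversationThread:
    """A hypothetical thread of talk tied to one ongoing activity."""

    def __init__(self, activity):
        self.activity = activity
        self.status = "in-progress"
        self.pending = deque()  # system utterances not yet spoken

    def report(self, text):
        self.pending.append(f"[{self.activity}] {text}")


class DialogueManager:
    """Interleaves reports from several concurrent activity threads,
    speaking for the most recently updated thread first (a simplistic
    salience policy, chosen only for illustration)."""

    def __init__(self):
        self.threads = {}
        self.order = []  # activities, most salient first

    def update(self, activity, status, text):
        t = self.threads.setdefault(activity, ConversationThread(activity))
        t.status = status
        t.report(text)
        if activity in self.order:
            self.order.remove(activity)
        self.order.insert(0, activity)  # raise salience

    def next_utterance(self):
        for a in self.order:
            if self.threads[a].pending:
                return self.threads[a].pending.popleft()
        return None


dm = DialogueManager()
dm.update("fly-to-tower", "in-progress", "Heading to the tower.")
dm.update("monitor-road", "completed", "Road monitoring finished.")
```

Keeping one queue per activity, rather than a single global queue, is what lets the interface interleave threads: a status change on one activity can pre-empt pending talk about another without losing it.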