We claim that a natural dialogue interface to a semiautonomous intelligent agent has important advantages, especially when operating in real-time complex dynamic environments involving multiple concurrent tasks and activities. We discuss some of the requirements of such a dialogue interface, and describe some of the features of a working system built at CSLI, focusing on the data structures and techniques used to manage multiple interleaved threads of conversation about concurrent activities and their execution status.
The belief that humans will be able to interact with computers in conversational speech has long been a favorite subject in science fiction, reflecting the persistent belief that spoken dialogue would be the most natural and powerful user interface to computers. With recent improvements in computer technology and in speech and language processing, such systems are starting to appear feasible. However, significant technical problems still need to be solved before speech-driven interfaces become truly conversational. For example, consider building a telephony system that answers queries about your mortgage. This article describes the results of a 10-year effort building robust spoken dialogue systems at the University of Rochester.
In this paper, we first analyze an instruction dialogue corpus to reveal how task context, as well as discourse context, determines the expert's explanation strategy. Then, based on these empirical results, we introduce a mechanism for selecting the most appropriate utterance content and dialogue control strategy for an instruction dialogue.