Dialogue policy transfer enables us to build dialogue policies in a target domain with little data by leveraging knowledge from a source domain with plenty of data. Dialogue sentences are usually represented by speech-acts and domain slots, and dialogue policy transfer is usually achieved by specifying a slot mapping matrix based on human heuristics. However, existing dialogue policy transfer methods cannot transfer across dialogue domains with different speech-acts, for example, between systems built by different companies. Moreover, they depend on either common slots or slot entropy, which are not available when the source and target slots are totally disjoint and no database is available to calculate the slot entropy. To solve this problem, we propose a Policy tRansfer across dOMaIns and SpEech-acts (PROMISE) model, which is able to transfer dialogue policies across domains with different speech-acts and disjoint slots. The PROMISE model can learn to align different speech-acts and slots simultaneously, and it requires neither common slots nor the calculation of slot entropy. Experiments on both real-world dialogue data and simulations demonstrate that the PROMISE model can effectively transfer dialogue policies across domains with different speech-acts and disjoint slots.
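The heuristic slot mapping matrix mentioned above can be sketched concretely. The following is a minimal illustration, assuming made-up restaurant and hotel slot inventories; the slot names and the mapping rule are hypothetical stand-ins, not taken from the PROMISE paper, whose point is precisely that such heuristic mappings fail when the slots are disjoint.

```python
import numpy as np

# Hypothetical slot inventories for a source (restaurant) and target (hotel)
# domain; the names are illustrative only.
source_slots = ["food", "pricerange", "area"]
target_slots = ["stars", "parking", "area", "internet"]

# A heuristic slot mapping matrix M, where M[i, j] = 1 means source slot i
# is treated as equivalent to target slot j.  Here only "area" is shared,
# so the heuristic of matching identical names maps just that one slot.
M = np.zeros((len(source_slots), len(target_slots)))
for i, s in enumerate(source_slots):
    for j, t in enumerate(target_slots):
        if s == t:
            M[i, j] = 1.0

def transfer_slot_values(source_state):
    """Project a source-domain state onto target-domain slots via M."""
    target_state = {}
    for i, s in enumerate(source_slots):
        for j, t in enumerate(target_slots):
            if M[i, j] > 0 and s in source_state:
                target_state[t] = source_state[s]
    return target_state

print(transfer_slot_values({"area": "north", "food": "thai"}))
# only the shared slot survives the mapping
```

When the two slot inventories share no names at all, M is the zero matrix and nothing transfers, which is the disjoint-slot failure case the abstract describes.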
This paper describes a novel method by which a spoken dialogue system can learn to choose an optimal dialogue strategy from its experience interacting with human users. The method is based on a combination of reinforcement learning and performance modeling of spoken dialogue systems. The reinforcement learning component applies Q-learning (Watkins, 1989), while the performance modeling component applies the PARADISE evaluation framework (Walker et al., 1997) to learn the performance function (reward) used in reinforcement learning. We illustrate the method with a spoken dialogue system named ELVIS (EmaiL Voice Interactive System), which supports access to email over the phone. We conduct a set of experiments to train an optimal dialogue strategy on a corpus of 219 dialogues in which human users interact with ELVIS over the phone. We then test that strategy on a corpus of 18 dialogues. We show that ELVIS can learn to optimize its strategy selection for agent initiative, for reading messages, and for summarizing email folders.
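The Q-learning component referenced above can be sketched in a few lines. This is a generic tabular Q-learning update, not the ELVIS state space or the learned PARADISE performance function; the states, actions, and reward value here are hypothetical stand-ins.

```python
from collections import defaultdict

# Minimal tabular Q-learning sketch of the kind applied to dialogue
# strategy selection.  ALPHA is the learning rate, GAMMA the discount.
ALPHA, GAMMA = 0.1, 0.9
Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def q_update(state, action, reward, next_state, actions):
    """One Q-learning backup:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

actions = ["system_initiative", "mixed_initiative"]
# A single simulated turn: in state "start", taking "system_initiative"
# yields reward 1.0 and leads to state "done".
q_update("start", "system_initiative", 1.0, "done", actions)
print(Q[("start", "system_initiative")])  # 0.1 after one update
```

In the paper's setup the reward would come from the PARADISE-derived performance function rather than a hand-set scalar, and the state would encode dialogue features such as initiative setting.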
An important difficulty in developing spoken dialogue systems for robots is the open-ended nature of most interactions. Robotic agents must typically operate in complex, continuously changing environments which are difficult to model and do not provide any clear, predefined goal. Directly capturing this complexity in a single, large dialogue policy is thus inadequate. This paper presents a new approach which tackles the complexity of open-ended interactions by breaking it into a set of small, independent policies, which can be activated and deactivated at runtime by a dedicated mechanism. The approach is currently being implemented in a spoken dialogue system for autonomous robots.
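The decomposition into small, independently activatable policies can be sketched as follows. The class and method names (`Policy`, `PolicyManager`, `is_relevant`) are hypothetical illustrations of the activation/deactivation idea, not the paper's actual implementation.

```python
# A minimal sketch: a set of small policies, each guarded by a relevance
# predicate, activated and deactivated at runtime by a manager.
class Policy:
    def __init__(self, name, is_relevant):
        self.name = name
        self.is_relevant = is_relevant  # predicate over the current context

    def act(self, context):
        return f"{self.name} handles: {context['event']}"

class PolicyManager:
    """Activates only the policies whose predicate matches the context."""
    def __init__(self, policies):
        self.policies = policies

    def step(self, context):
        active = [p for p in self.policies if p.is_relevant(context)]
        return [p.act(context) for p in active]

manager = PolicyManager([
    Policy("greeting", lambda c: c["event"] == "user_appeared"),
    Policy("navigation", lambda c: c["event"] == "move_request"),
])
print(manager.step({"event": "move_request"}))
# → ['navigation handles: move_request']
```

Each small policy only needs to model its own narrow sub-task, which is the point of breaking up a single large open-ended dialogue policy.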
BEIJING – Japan and China on Monday held their first security dialogue involving senior diplomats and defense officials in nearly two years. The talks in Beijing took place as the two countries attempt to set up a maritime and aerial communication mechanism to prevent accidental clashes in and above the East China Sea, where China has been asserting its claim to the Japan-administered Senkaku Islands. The meeting was held before Chinese Premier Li Keqiang's potential first visit to Japan since taking office in 2013. He may visit next month to attend a trilateral summit involving the two countries and South Korea. Kong Xuanyou, China's assistant foreign minister, said he hopes the dialogue will play an "active role in enhancing the momentum of improving ties between the two countries."
Complex dialogues, including dialogues embedded in one another, can be represented in the formalism as sequences of moves in a combination of dialogue games. We show that our formalism can represent the different types of dialogue in a standard typology, and we also provide these dialogue-types with a game-theoretic semantics.

Introduction

Autonomous intelligent software agents have become a powerful paradigm in modern computer science. In this paradigm, discrete software entities -- autonomous agents -- interact to achieve individual or group objectives, on the basis of possibly different sets of assumptions, beliefs, preferences and objectives. For instance, agents may negotiate the purchase of goods or services from other agents, or seek information from them, or collaborate with them to achieve some common task, such as management of a telecommunications network.