Dialogue policy transfer enables us to build dialogue policies in a target domain with little data by leveraging knowledge from a source domain with plenty of data. Dialogue sentences are usually represented by speech-acts and domain slots, and dialogue policy transfer is usually achieved by specifying a slot mapping matrix based on human heuristics. However, existing dialogue policy transfer methods cannot transfer across dialogue domains with different speech-acts, for example, between systems built by different companies. Also, they depend on either common slots or slot entropy, which are not available when the source and target slots are totally disjoint and no database is available to calculate the slot entropy. To solve this problem, we propose a Policy tRansfer across dOMaIns and SpEech-acts (PROMISE) model, which is able to transfer dialogue policies across domains with different speech-acts and disjoint slots. The PROMISE model can learn to align different speech-acts and slots simultaneously, and it does not require common slots or the calculation of the slot entropy. Experiments on both real-world dialogue data and simulations demonstrate that the PROMISE model can effectively transfer dialogue policies across domains with different speech-acts and disjoint slots.
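The heuristic slot-mapping idea that the abstract contrasts with can be illustrated by a minimal sketch. This is not the PROMISE model: the slot names, the two-dimensional embeddings, and the cosine-similarity heuristic below are all illustrative assumptions, standing in for whatever human-designed mapping a practitioner might use.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def slot_mapping(source_slots, target_slots):
    """Toy slot-mapping heuristic: align each target slot to the most
    similar source slot by cosine similarity of (hypothetical) embeddings."""
    return {
        t_name: max(source_slots, key=lambda s: cosine(source_slots[s], t_vec))
        for t_name, t_vec in target_slots.items()
    }

# Hypothetical slot embeddings for two domains with disjoint slot names.
source = {"hotel-price": [1.0, 0.0], "hotel-area": [0.0, 1.0]}
target = {"restaurant-cost": [0.9, 0.1], "restaurant-location": [0.1, 0.9]}
print(slot_mapping(source, target))
# -> {'restaurant-cost': 'hotel-price', 'restaurant-location': 'hotel-area'}
```

Such a hard, hand-specified mapping is exactly what breaks down when the two domains share no slots and use different speech-acts, which is the gap the abstract says PROMISE addresses by learning the alignment instead.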
BEIJING – Japan and China on Monday held their first security dialogue involving senior diplomats and defense officials in nearly two years. The talks in Beijing took place as the two countries attempt to set up a maritime and aerial communication mechanism to prevent accidental clashes in and above the East China Sea, where China has been asserting its claim to the Japan-administered Senkaku Islands. The meeting was held before Chinese Premier Li Keqiang's potential first visit to Japan since taking office in 2013. He may visit next month to attend a trilateral summit involving the two countries and South Korea. Kong Xuanyou, China's assistant foreign minister, said he hopes the dialogue will play an "active role in enhancing the momentum of improving ties between the two countries."
Dialogue research is a crucial component of building the next generation of intelligent agents. While there's been progress with chatbots in single-domain dialogue, agents today are far from capable of carrying an open-domain conversation across a multitude of topics. Agents that can chat with humans in the way that people talk to each other will be easier and more enjoyable to use in our day-to-day lives -- going beyond simple tasks like playing a song or booking an appointment. Generating coherent and engaging responses in conversations requires a range of nuanced conversational skills, including language understanding and reasoning. Facebook AI has made scientific progress in dialogue research that is, in the long run, fundamental to building more engaging, personable AI systems.
An important difficulty in developing spoken dialogue systems for robots is the open-ended nature of most interactions. Robotic agents must typically operate in complex, continuously changing environments which are difficult to model and do not provide any clear, predefined goal. Directly capturing this complexity in a single, large dialogue policy is thus inadequate. This paper presents a new approach which tackles the complexity of open-ended interactions by breaking it into a set of small, independent policies, which can be activated and deactivated at runtime by a dedicated mechanism. The approach is currently being implemented in a spoken dialogue system for autonomous robots.
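The decomposition described above can be sketched as follows. The class names, the intent-based activation tests, and the first-relevant-wins arbitration are assumptions made for illustration, not details from the paper: the point is only that each small policy carries its own relevance test, and a controller activates or deactivates policies at runtime against the current dialogue state.

```python
class SubPolicy:
    """A small, independent dialogue policy with its own activation test."""
    def __init__(self, name, is_relevant, respond):
        self.name = name
        self.is_relevant = is_relevant   # dialogue state -> bool
        self.respond = respond           # dialogue state -> utterance

class PolicyController:
    """Dedicated mechanism that activates or deactivates sub-policies
    as the dialogue state changes."""
    def __init__(self, policies):
        self.policies = policies

    def act(self, state):
        active = [p for p in self.policies if p.is_relevant(state)]
        if not active:
            return "Sorry, I cannot help with that."
        # Simple arbitration: the first relevant policy answers.
        return active[0].respond(state)

greet = SubPolicy("greet",
                  lambda s: s.get("intent") == "greeting",
                  lambda s: "Hello! How can I help?")
navigate = SubPolicy("navigate",
                     lambda s: s.get("intent") == "go-to",
                     lambda s: f"Heading to the {s['destination']}.")

controller = PolicyController([greet, navigate])
print(controller.act({"intent": "greeting"}))
print(controller.act({"intent": "go-to", "destination": "kitchen"}))
```

Because each sub-policy is independent, new behaviours can be added without touching a single monolithic policy, which is the motivation the paragraph above gives for open-ended robot interactions.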
Complex dialogues, including dialogues embedded in one another, can be represented in the formalism as sequences of moves in a combination of dialogue games. We show that our formalism can represent the different types of dialogue in a standard typology, and we also provide these dialogue-types with a game-theoretic semantics.

Introduction

Autonomous intelligent software agents have become a powerful paradigm in modern computer science. In this paradigm, discrete software entities -- autonomous agents -- interact to achieve individual or group objectives, on the basis of possibly different sets of assumptions, beliefs, preferences and objectives. For instance, agents may negotiate the purchase of goods or services from other agents, or seek information from them, or collaborate with them to achieve some common task, such as management of a telecommunications network.
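The idea of representing a dialogue as a sequence of moves in a dialogue game can be sketched with a minimal state machine. The move names and the legality relation below are illustrative assumptions, not the paper's formalism: a game fixes which moves may open it and which moves may legally follow each move, and a dialogue is any move sequence the game admits.

```python
class DialogueGame:
    """A toy dialogue game: a set of moves plus a legality relation
    specifying which moves may follow which."""
    def __init__(self, opening_moves, follows):
        self.opening_moves = opening_moves   # moves that may start the game
        self.follows = follows               # move -> set of legal next moves
        self.history = []

    def play(self, move):
        legal = self.opening_moves if not self.history else self.follows[self.history[-1]]
        if move not in legal:
            raise ValueError(f"illegal move {move!r} after {self.history}")
        self.history.append(move)
        return self.history

# A minimal information-seeking game: a question may be met with an
# answer or a counter-question; an answer invites a further question.
game = DialogueGame(
    opening_moves={"question"},
    follows={"question": {"answer", "question"}, "answer": {"question"}},
)
game.play("question")
game.play("answer")
game.play("question")
print(game.history)  # -> ['question', 'answer', 'question']
```

Embedded dialogues, in this picture, would correspond to suspending one game's move sequence while another game runs to completion, then resuming the outer sequence.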