BEIJING – Japan and China on Monday held their first security dialogue involving senior diplomats and defense officials in nearly two years. The talks in Beijing took place as the two countries attempt to set up a maritime and aerial communication mechanism to prevent accidental clashes in and above the East China Sea, where China has been asserting its claim to the Japan-administered Senkaku Islands. The meeting was held before Chinese Premier Li Keqiang's potential first visit to Japan since taking office in 2013. He may visit next month to attend a trilateral summit involving the two countries and South Korea. Kong Xuanyou, China's assistant foreign minister, said he hopes the dialogue will play an "active role in enhancing the momentum of improving ties between the two countries."
Complex dialogues, including dialogues embedded in one another, can be represented in the formalism as sequences of moves in a combination of dialogue games. We show that our formalism can represent the different types of dialogue in a standard typology, and we also provide these dialogue-types with a game-theoretic semantics.

Introduction

Autonomous intelligent software agents have become a powerful paradigm in modern computer science. In this paradigm, discrete software entities -- autonomous agents -- interact to achieve individual or group objectives, on the basis of possibly different sets of assumptions, beliefs, preferences and objectives. For instance, agents may negotiate the purchase of goods or services from other agents, or seek information from them, or collaborate with them to achieve some common task, such as management of a telecommunications network.
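The idea of a dialogue as a sequence of moves in a game, with embedded sub-dialogues represented as nested games, can be sketched in code. This is an illustrative data structure only, with hypothetical names (`Move`, `DialogueGame`), not the paper's formalism:

```python
from dataclasses import dataclass, field

@dataclass
class Move:
    speaker: str   # which agent makes the move
    act: str       # e.g. "assert", "question", "concede"
    content: str

@dataclass
class DialogueGame:
    name: str                                   # e.g. "persuasion"
    moves: list = field(default_factory=list)   # sequence of moves
    subgames: list = field(default_factory=list)  # embedded dialogues

# A persuasion dialogue embedding an information-seeking sub-dialogue:
persuasion = DialogueGame("persuasion")
persuasion.moves.append(Move("A", "assert", "plan X is safe"))

inner = DialogueGame("information-seeking")
inner.moves.append(Move("B", "question", "what does the safety report say?"))
inner.moves.append(Move("A", "assert", "the report found no hazards"))
persuasion.subgames.append(inner)

persuasion.moves.append(Move("B", "concede", "plan X is safe"))
```

The nesting of `subgames` directly mirrors the embedding of one dialogue in another described above.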
Dialogue management is the process of deciding what to do next in a dialogue. In human communication, dialogue management seems to go unnoticed much of the time; only rarely are we aware of having to make a decision on what to say next, or on whether to say anything at all. When designing a computer dialogue system, however, we either have to pre-program the possible dialogues according to a fixed sequence of utterances, or else we have to define a 'dialogue manager', which decides what the system should do next based on a model of the current dialogue context. Research on the design of intelligent computer dialogue management systems has also inspired investigations of human dialogue management. These investigations have made it clear that human dialogue management is actually highly complex and sophisticated, and they suggest that one of the stumbling blocks for the development of high-quality speech dialogue systems is the design of dialogue managers that have some of the sophistication and subtlety of human dialogue management. In this paper, we will examine the notion of 'dialogue context' in the sense of the information that is relevant for deciding what to do next in a dialogue.
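The contrast between a pre-programmed dialogue and a context-driven dialogue manager can be made concrete with a minimal sketch. The class and attribute names here (`DialogueContext`, `DialogueManager`) are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of a rule-based dialogue manager: it inspects a model
# of the current dialogue context and decides the system's next act.

class DialogueContext:
    def __init__(self):
        self.pending_questions = []  # user questions not yet answered
        self.last_user_act = None    # e.g. "question", "statement", None

class DialogueManager:
    def next_act(self, ctx):
        # Decide what to do next based on the current dialogue context.
        if ctx.pending_questions:
            return ("answer", ctx.pending_questions.pop(0))
        if ctx.last_user_act == "statement":
            return ("acknowledge", None)
        if ctx.last_user_act is None:
            return ("greet", None)
        return ("wait", None)

ctx = DialogueContext()
dm = DialogueManager()
print(dm.next_act(ctx))  # no input yet -> ("greet", None)

ctx.last_user_act = "question"
ctx.pending_questions.append("opening hours?")
print(dm.next_act(ctx))  # -> ("answer", "opening hours?")
```

Unlike a fixed utterance sequence, the same manager code handles any order of user acts, because the decision is recomputed from the context at each turn.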
Dialogue policy transfer enables us to build dialogue policies in a target domain with little data by leveraging knowledge from a source domain with plenty of data. Dialogue sentences are usually represented by speech-acts and domain slots, and dialogue policy transfer is usually achieved by assigning a slot mapping matrix based on human heuristics. However, existing dialogue policy transfer methods cannot transfer across dialogue domains with different speech-acts, for example, between systems built by different companies. Also, they depend on either common slots or slot entropy, which are not available when the source and target slots are totally disjoint and no database is available to calculate the slot entropy. To solve this problem, we propose a Policy tRansfer across dOMaIns and SpEech-acts (PROMISE) model, which is able to transfer dialogue policies across domains with different speech-acts and disjoint slots. The PROMISE model can learn to align different speech-acts and slots simultaneously, and it does not require common slots or the calculation of the slot entropy. Experiments on both real-world dialogue data and simulations demonstrate that the PROMISE model can effectively transfer dialogue policies across domains with different speech-acts and disjoint slots.
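The core idea of mapping disjoint source slots onto target slots can be sketched as a soft alignment matrix over slot representations. This is a toy illustration under stated assumptions, not the PROMISE model itself: the slot names are hypothetical, and the embeddings are random stand-ins for representations that would in practice be learned jointly with the policy:

```python
# Illustrative sketch: a soft slot-mapping matrix between disjoint
# source and target slots, computed from slot embeddings.
import numpy as np

rng = np.random.default_rng(0)
src_slots = ["movie_name", "theater"]            # hypothetical source slots
tgt_slots = ["restaurant_name", "area", "food"]  # hypothetical target slots

# Random stand-ins for learned slot embeddings (dimension 8).
src_emb = rng.normal(size=(len(src_slots), 8))
tgt_emb = rng.normal(size=(len(tgt_slots), 8))

# Soft alignment: each source slot distributes probability mass over
# the target slots; each row of the mapping sums to 1 (softmax).
scores = src_emb @ tgt_emb.T
mapping = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

assert mapping.shape == (len(src_slots), len(tgt_slots))
assert np.allclose(mapping.sum(axis=1), 1.0)
```

A soft mapping of this kind needs neither common slot names nor a database for entropy estimates, which is the gap the abstract points to; learning the alignment end-to-end is what the paper itself addresses.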
An important difficulty in developing spoken dialogue systems for robots is the open-ended nature of most interactions. Robotic agents must typically operate in complex, continuously changing environments which are difficult to model and do not provide any clear, predefined goal. Directly capturing this complexity in a single, large dialogue policy is thus inadequate. This paper presents a new approach which tackles the complexity of open-ended interactions by breaking it into a set of small, independent policies, which can be activated and deactivated at runtime by a dedicated mechanism. The approach is currently being implemented in a spoken dialogue system for autonomous robots.
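The activation mechanism described above can be sketched as follows. The names (`Policy`, `PolicyManager`) and the trigger predicates are illustrative assumptions, not the system's actual implementation:

```python
# Sketch: a set of small, independent dialogue policies plus a dedicated
# mechanism that activates and deactivates them at runtime as the
# robot's situation changes.

class Policy:
    def __init__(self, name, trigger):
        self.name = name
        self.trigger = trigger  # predicate over the current state

class PolicyManager:
    def __init__(self, policies):
        self.policies = policies

    def active(self, state):
        # Activate exactly those policies whose trigger holds in `state`.
        return [p.name for p in self.policies if p.trigger(state)]

policies = [
    Policy("greeting", lambda s: s.get("person_visible", False)),
    Policy("navigation", lambda s: s.get("goal_location") is not None),
    Policy("recharge", lambda s: s.get("battery", 1.0) < 0.2),
]
mgr = PolicyManager(policies)
print(mgr.active({"person_visible": True, "battery": 0.15}))
# -> ['greeting', 'recharge']
```

Each policy stays small and independent; the manager's job is only to decide which subset is relevant in the current environment, rather than encoding all open-ended behaviour in one large policy.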