A Study on Dialogue Reward Prediction for Open-Ended Conversational Agents

arXiv.org Artificial Intelligence

The amount of dialogue history to include in a conversational agent is often underestimated or set empirically, and thus possibly naively. This suggests that principled investigations into optimal context windows are urgently needed, since the amount of dialogue history and its representation can play an important role in the overall performance of a conversational system. This paper studies the amount of history that conversational agents require to reliably predict dialogue rewards. The task of dialogue reward prediction is chosen to investigate the effects of varying amounts of dialogue history on system performance. Experimental results on a dataset of 18K human-human dialogues show that lengthy dialogue histories of at least 10 sentences are preferred over short ones (25 sentences performing best in our experiments), and that lengthy histories are useful for training dialogue reward predictors, yielding strong positive correlations between target dialogue rewards and predicted ones.
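The core experimental design above — truncate each dialogue to its last k sentences, predict a reward from that window, and measure the correlation with the target reward — can be sketched in a few lines. Everything below is an illustrative toy: the dialogues, the reward function, and the sum-based feature are invented, not the paper's actual data or model.

```python
# Toy sketch of a history-length study for dialogue reward prediction.
# All data here is synthetic; only the truncate-then-correlate pipeline
# mirrors the study's design.
import numpy as np

rng = np.random.default_rng(0)

def truncate(history, k):
    """Keep only the last k sentences of a dialogue history."""
    return history[-k:]

# Toy corpus: each dialogue is a sequence of scalar sentence "embeddings",
# and its target reward is a noisy function of the full history.
dialogues = [rng.normal(size=rng.integers(5, 40)) for _ in range(500)]
rewards = np.array([d.sum() + rng.normal(scale=0.5) for d in dialogues])

for k in (1, 5, 10, 25):
    # Featurize each dialogue from only its last k sentences.
    x = np.array([truncate(d, k).sum() for d in dialogues])
    # Pearson correlation between the truncated-history feature and the reward.
    r = np.corrcoef(x, rewards)[0, 1]
    print(f"history={k:2d} sentences  corr={r:.3f}")
```

On this toy setup the correlation grows with the history length k, which is the qualitative effect the paper reports for its learned predictors.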


Trott

AAAI Conferences

Speakers frequently repair their speech, and listeners must therefore integrate information across ill-formed, often fragmentary inputs. Previous dialogue systems for human-robot interaction (HRI) have addressed certain problems in dialogue repair, but many remain. In this paper, we discuss these problems from the perspective of Conversation Analysis and argue that a more holistic account of dialogue repair will aid the design and implementation of machine dialogue systems.


Optimizing Dialogue Management with Reinforcement Learning: Experiments with the NJFun System

Journal of Artificial Intelligence Research

Designing the dialogue policy of a spoken dialogue system involves many nontrivial choices. This paper presents a reinforcement learning approach for automatically optimizing a dialogue policy, which addresses the technical challenges in applying reinforcement learning to a working dialogue system with human users. We report on the design, construction and empirical evaluation of NJFun, an experimental spoken dialogue system that provides users with access to information about fun things to do in New Jersey. Our results show that by optimizing its performance via reinforcement learning, NJFun measurably improves system performance.
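The kind of policy optimization described above can be illustrated with tabular Q-learning on a toy dialogue MDP. The states, actions, success probabilities, and rewards below are invented for illustration and are not NJFun's actual state space or reward model.

```python
# Hedged sketch: tabular Q-learning for a toy dialogue policy, loosely in
# the spirit of RL-based dialogue optimization. The MDP is invented.
import random

random.seed(0)

STATES = ["start", "got_info", "done"]
ACTIONS = ["open_prompt", "directed_prompt", "confirm"]

def step(state, action):
    """Toy environment: directed prompts succeed more often at eliciting
    information; confirming elicited info ends the dialogue with reward 1."""
    if state == "start":
        p_success = 0.8 if action == "directed_prompt" else 0.5
        if random.random() < p_success:
            return "got_info", 0.0
        return "start", -0.1          # failed turn: small cost, stay put
    if state == "got_info":
        if action == "confirm":
            return "done", 1.0        # successful dialogue
        return "got_info", -0.1       # redundant prompt: small cost
    return "done", 0.0

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1    # learning rate, discount, exploration

for _ in range(5000):                 # episodes of epsilon-greedy Q-learning
    s = "start"
    while s != "done":
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda a: q[(s, a)]))
        s2, r = step(s, a)
        best_next = max(q[(s2, a2)] for a2 in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES if s != "done"}
print(policy)
```

With these toy dynamics the learned policy prefers the directed prompt in the initial state and the confirmation once information is obtained — the same shape of result NJFun's experiments produced over its choice points, here reproduced only in miniature.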


Japan and China agree to boost confidence-building efforts in key security dialogue

The Japan Times

BEIJING – Japan and China have agreed to strengthen confidence-building measures in a security dialogue held for the first time in nearly two years. During the meeting in Beijing on Monday involving senior diplomats and defense officials, both sides explained their respective security policies and "frankly" discussed major challenges facing the region and the rest of the world, the Japanese government said. The government also said Japan asked China to make its security policies more transparent. The talks in Beijing took place as the two countries attempt to set up a maritime and aerial communication mechanism to prevent accidental clashes in and above the East China Sea, where China has been asserting its claim to the Japan-administered Senkaku Islands. Prime Minister Shinzo Abe and Chinese President Xi Jinping have agreed to ease tensions stemming from a standoff over sovereignty of a group of tiny islands in the sea.


Cross-domain Dialogue Policy Transfer via Simultaneous Speech-act and Slot Alignment

arXiv.org Artificial Intelligence

Dialogue policy transfer enables us to build dialogue policies in a target domain with little data by leveraging knowledge from a source domain with plenty of data. Dialogue sentences are usually represented by speech-acts and domain slots, and dialogue policy transfer is usually achieved by assigning a slot mapping matrix based on human heuristics. However, existing dialogue policy transfer methods cannot transfer across dialogue domains with different speech-acts, for example, between systems built by different companies. Also, they depend on either common slots or slot entropy, which are not available when the source and target slots are totally disjoint and no database is available to calculate the slot entropy. To solve these problems, we propose a Policy tRansfer across dOMaIns and SpEech-acts (PROMISE) model, which is able to transfer dialogue policies across domains with different speech-acts and disjoint slots. The PROMISE model can learn to align different speech-acts and slots simultaneously, and it does not require common slots or the calculation of slot entropy. Experiments on both real-world dialogue data and simulations demonstrate that the PROMISE model can effectively transfer dialogue policies across domains with different speech-acts and disjoint slots.
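The slot-mapping idea at the heart of such transfer can be sketched as a similarity matrix between source- and target-domain slots. This is a deliberate simplification of PROMISE, which learns to align speech-acts and slots jointly; the slot names and two-dimensional "embeddings" below are invented toys, not the model's actual representations.

```python
# Hedged sketch: mapping source-domain slots to target-domain slots via a
# similarity matrix. Slot names and embedding vectors are illustrative only.
import numpy as np

# Toy slot "embeddings" for a source (restaurant) and target (hotel) domain.
src_slots = {"food": [1.0, 0.1], "area": [0.1, 1.0]}
tgt_slots = {"stars": [0.9, 0.2], "location": [0.2, 0.9]}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    u, v = np.asarray(u), np.asarray(v)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Alignment matrix: similarity of every (source, target) slot pair.
names_s, names_t = list(src_slots), list(tgt_slots)
sim = np.array([[cosine(src_slots[s], tgt_slots[t]) for t in names_t]
                for s in names_s])

# Hard mapping: each source slot maps to its most similar target slot,
# so a policy learned over source slots can be reused on target slots.
mapping = {s: names_t[int(np.argmax(sim[i]))] for i, s in enumerate(names_s)}
print(mapping)
```

The point of learning the alignment (rather than hand-assigning the matrix, as in prior heuristic approaches the abstract mentions) is that it still works when the two domains share no slots and no database is available to compute slot entropy.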