Building Task-Oriented Visual Dialog Systems Through Alternative Optimization Between Dialog Policy and Language Generation

arXiv.org Artificial Intelligence

Reinforcement learning (RL) is an effective approach to learning an optimal dialog policy for task-oriented visual dialog systems. A common practice is to apply RL on a neural sequence-to-sequence (seq2seq) framework, with the action space being the output vocabulary of the decoder. However, it is difficult to design a reward function that balances learning an effective policy against generating natural dialog responses. This paper proposes a novel framework that alternately trains an RL policy for image guessing and a supervised seq2seq model to improve dialog generation quality. We evaluate our framework on the GuessWhich task, where it achieves state-of-the-art performance in both task completion and dialog quality.
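The alternating schedule the abstract describes can be sketched minimally. This is a toy illustration, not the paper's implementation: the action names, the weight-based policy, and the count-based stand-in for a seq2seq cross-entropy step are all hypothetical simplifications of the RL and supervised phases.

```python
import random

random.seed(0)

def rl_policy_step(policy_weights, reward_fn, lr=0.1):
    # REINFORCE-style toy update: sample an action in proportion to its
    # weight, then nudge that weight by the observed task reward.
    actions = list(policy_weights)
    action = random.choices(actions, weights=[policy_weights[a] for a in actions])[0]
    reward = reward_fn(action)
    policy_weights[action] = max(1e-6, policy_weights[action] + lr * reward)
    return action, reward

def supervised_step(lm_counts, dialog_pair):
    # Supervised toy update: accumulate (context, token) counts, standing in
    # for a cross-entropy step on the seq2seq generator.
    context, response = dialog_pair
    for token in response.split():
        lm_counts[(context, token)] = lm_counts.get((context, token), 0) + 1

def alternate_train(num_rounds, reward_fn, corpus):
    # Alternate between the two objectives instead of mixing them into
    # a single reward, as the framework in the abstract proposes.
    policy = {"ask_color": 1.0, "ask_shape": 1.0, "guess": 1.0}
    lm_counts = {}
    for i in range(num_rounds):
        if i % 2 == 0:
            rl_policy_step(policy, reward_fn)            # RL phase: task completion
        else:
            supervised_step(lm_counts, corpus[(i // 2) % len(corpus)])  # fluency phase
    return policy, lm_counts
```

The design point is the schedule itself: the RL phase never touches the language-model statistics, so policy optimization cannot degrade generation quality.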


Judge the Judges: A Large-Scale Evaluation Study of Neural Language Models for Online Review Generation

arXiv.org Machine Learning

Recent advances in deep learning have resulted in a resurgence in the popularity of natural language generation (NLG). Many deep learning based models, including recurrent neural networks and generative adversarial networks, have been proposed and applied to generating various types of text. Despite the fast development of methods, how to better evaluate the quality of these natural language generators remains a significant challenge. We conduct an in-depth empirical study of existing evaluation methods for natural language generation. We compare human evaluators with a variety of automated evaluation procedures, including discriminative evaluators that measure how well generated text can be distinguished from human-written text, as well as text overlap metrics that measure how similar the generated text is to human-written references. We measure the extent to which these different evaluators agree on the ranking of a dozen state-of-the-art generators for online product reviews. We find that human evaluators do not correlate well with discriminative evaluators, raising the larger question of whether adversarial accuracy is the right objective for natural language generation. In general, distinguishing machine-generated text is a challenging task even for human evaluators, and their decisions tend to correlate better with text overlap metrics. We also find that diversity is an intriguing metric that is indicative of the assessments of different evaluators.
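Agreement between evaluators on a ranking of generators, as studied above, is typically measured with a rank correlation. A minimal sketch using Spearman's coefficient follows; the five generator scores for each evaluator are invented for illustration and are not the paper's data.

```python
def ranks(scores):
    # Rank positions (1 = best) for a list of scores, assuming no ties.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    out = [0] * len(scores)
    for r, i in enumerate(order, start=1):
        out[i] = r
    return out

def spearman(a, b):
    # Spearman rank correlation via 1 - 6*sum(d^2) / (n*(n^2 - 1)).
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical scores for five generators under three evaluators.
human   = [4.1, 3.2, 2.8, 2.1, 1.5]       # mean human quality rating
discrim = [0.9, 0.2, 0.5, 0.8, 0.1]       # adversarial (discriminator) accuracy
overlap = [0.40, 0.31, 0.27, 0.22, 0.10]  # e.g. a BLEU-like overlap metric
```

With these made-up numbers, `spearman(human, overlap)` is higher than `spearman(human, discrim)`, mirroring the abstract's finding that human judgments track overlap metrics more closely than discriminative ones.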


Health of pitching rotation will be key to Angels' success in 2017

Los Angeles Times

The Angels' undoing last season, they maintained during it and after it, was the poor health of their starting rotation. The key to their success this season, General Manager Billy Eppler is now saying, will be the health of their starting rotation. "Let's call it what it is," Eppler said. "If we can get 25 or more starts out of every guy that we go west with, the original five, I think we'll be in pretty good shape." That, of course, is unpredictable.



Automatic Generation of Semantic Attachments in FOL

AAAI Conferences

AUTOMATIC GENERATION OF SEMANTIC ATTACHMENTS IN FOL. Luigia Aiello, Computer Science Department, Stanford University, Stanford, California 94305. ABSTRACT: Semantic attachment is provided by FOL as a means for associating model values (i.e. ...). This paper presents an algorithm that automatically generates semantic attachments in FOL and discusses the advantages deriving from its use. I. INTRODUCTION: In FOL (the mechanized reasoning system developed by R. Weyhrauch at the Stanford A.I. Laboratory [4,5,6]), the knowledge about a given domain of discourse is represented in the form of an L/S structure. An L/S structure is the FOL counterpart of the logician's notion of a theory/model pair. It is a triple ⟨L, S, F⟩, where L is a sorted first-order language with equality, S is a simulation structure (i.e. a computable part of a model for a first-order theory), and F is a finite set of facts (i.e. ...
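The core idea of semantic attachment, mapping symbols of a first-order language to computable values in a simulation structure, can be sketched outside FOL. This is a hypothetical miniature in Python, not FOL's LISP mechanism: the symbol names and the tuple encoding of terms are assumptions for illustration.

```python
# Attachments map function and predicate symbols of the language to
# computable values in the simulation structure (here, Python callables).
attachments = {
    "plus":  lambda x, y: x + y,
    "times": lambda x, y: x * y,
    "lt":    lambda x, y: x < y,
}

def eval_ground(term):
    # Evaluate a ground term such as ("plus", 2, ("times", 3, 4)) by looking
    # up each symbol's semantic attachment; constants evaluate to themselves.
    if not isinstance(term, tuple):
        return term
    symbol, *args = term
    return attachments[symbol](*(eval_ground(a) for a in args))
```

The payoff mirrors FOL's: ground terms and atoms can be decided by direct computation in the model rather than by deduction from the facts F.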