Recent advances in deep learning have led to a resurgence in the popularity of natural language generation (NLG). Many deep learning based models, including recurrent neural networks and generative adversarial networks, have been proposed and applied to generating various types of text. Despite this rapid development of methods, how best to evaluate the quality of these natural language generators remains a significant challenge. We conduct an in-depth empirical study of existing evaluation methods for natural language generation. We compare human evaluators with a variety of automated evaluation procedures, including discriminative evaluators that measure how well generated text can be distinguished from human-written text, as well as text-overlap metrics that measure how similar the generated text is to human-written references. We measure the extent to which these different evaluators agree on the ranking of a dozen state-of-the-art generators of online product reviews. We find that human evaluators do not correlate well with discriminative evaluators, raising the bigger question of whether adversarial accuracy is the right objective for natural language generation. In general, distinguishing machine-generated text is a challenging task even for human evaluators, and their decisions tend to correlate better with text-overlap metrics. We also find that diversity is an intriguing metric that is indicative of the assessments of different evaluators.
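To make the two families of automated measures concrete, here is a minimal sketch of a text-overlap score (clipped unigram precision, in the spirit of BLEU's modified precision) and a distinct-n diversity score. The function names and the specific unigram/distinct-n choices are our illustration, not the paper's exact metrics:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def overlap_precision(candidate, reference, n=1):
    """Fraction of candidate n-grams also found in the reference,
    with counts clipped to their reference frequency."""
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    total = sum(cand.values())
    if total == 0:
        return 0.0
    matched = sum(min(c, ref[g]) for g, c in cand.items())
    return matched / total

def distinct_n(texts, n=1):
    """Diversity: unique n-grams divided by total n-grams over a corpus."""
    grams = [g for tokens in texts for g in ngrams(tokens, n)]
    return len(set(grams)) / len(grams) if grams else 0.0
```

For example, `overlap_precision("the battery lasts long".split(), "the battery lasts very long".split())` is 1.0, since every candidate unigram appears in the reference, while `distinct_n([["good", "good"], ["good", "phone"]])` is 0.5 (2 unique unigrams out of 4 total).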
The Angels' undoing last season, they maintained amid it and after it, was the poor health of their starting rotation. The key to their success this season, General Manager Billy Eppler is now saying, will be the health of their starting rotation. "Let's call it what it is," Eppler said. "If we can get 25 or more starts out of every guy that we go west with, the original five, I think we'll be in pretty good shape." That, of course, is unpredictable.
AUTOMATIC GENERATION OF SEMANTIC ATTACHMENTS IN FOL
Luigia Aiello
Computer Science Department, Stanford University, Stanford, California 94305

ABSTRACT
Semantic attachment is provided by FOL as a means for associating model values with symbols of the language. This paper presents an algorithm that automatically generates semantic attachments in FOL and discusses the advantages deriving from its use.

I INTRODUCTION
In FOL (the mechanized reasoning system developed by R. Weyhrauch at the Stanford A.I. Laboratory [4,5,6]), the knowledge about a given domain of discourse is represented in the form of an L/S structure. An L/S structure is the FOL counterpart of the logician's notion of a theory/model pair. It is a triple (L, S, F) where L is a sorted first-order language with equality, S is a simulation structure (i.e. a computable part of a model for a first-order theory), and F is a finite set of facts.
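FOL's actual attachment mechanism associates LISP code with symbols; the following is only a toy Python illustration of the idea (all names here are ours, not FOL's interface). Function symbols are mapped to computable model values, so a ground term can be evaluated directly in the "simulation structure" instead of by purely deductive reasoning:

```python
# Toy sketch of semantic attachment: a table mapping function symbols of
# the language to computable model values (Python functions).
attachments = {}

def attach(symbol, fn):
    """Associate a computable model value with a function symbol."""
    attachments[symbol] = fn

def evaluate(term):
    """Evaluate a ground term, written as (symbol, arg1, arg2, ...),
    by looking up the symbol's attachment and applying it to the
    evaluated arguments. Constants evaluate to themselves."""
    if isinstance(term, tuple):
        symbol, *args = term
        return attachments[symbol](*(evaluate(a) for a in args))
    return term

attach("plus", lambda x, y: x + y)
attach("times", lambda x, y: x * y)

# The term plus(times(2, 3), 4) evaluates to 10 in the model:
evaluate(("plus", ("times", 2, 3), 4))  # -> 10
```

The point of such attachments is that answering "what is plus(times(2, 3), 4)?" becomes a direct computation in the model rather than a proof search over axioms for addition and multiplication.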
Design Evaluator is a pen-based system that provides designers with critical feedback on their sketches in various visual forms. The goal of these system-generated critiques is to help end users who draw and then reason about their drawings to solve design problems. This paper outlines the implementation strategies of the Design Evaluator and shows example applications in two visual design domains: architectural floor plans and Web page layout.