Alexa Prize — State of the Art in Conversational AI

AI Magazine

Eighteen teams were selected for the inaugural competition last year. To build their socialbots, the students combined state-of-the-art techniques with their own novel strategies in the areas of natural language understanding and conversational AI. This article reports on the research conducted over the 2017-2018 year. While the 20-minute grand challenge was not achieved in the first year, the competition produced several conversational agents that advanced the state of the art, that are interesting for everyday users to interact with, and that help form a baseline for the second year of the competition. We conclude with a summary of the work that we plan to address in the second year of the competition.

Agents that can engage in natural human conversation have applicability in both professional and everyday domains. The first generation of such assistants -- Amazon's Alexa, Apple's Siri, Google Assistant, and Microsoft's Cortana -- has been focused on short, task-oriented interactions, such as playing music or answering simple questions, as opposed to the longer free-form conversations that occur naturally in social and professional human interaction. Conversational AI is the study of techniques for creating software agents that can engage in natural conversational interactions with humans.

The Alexa Prize competition received hundreds of applications from interested universities. After a detailed review of the applications, Amazon announced 12 sponsored and 6 unsponsored teams as the inaugural cohort for the Alexa Prize. The teams that went live for the 2017 competition, listed alphabetically by university, included DeisBot (Brandeis University) and Magnus (Carnegie Mellon University).


Here's how Alexa learned to speak Spanish without your help

#artificialintelligence

The first tool studies a handful of "golden utterances" (that is, reference commands suggested by the developers) to learn general syntactic and semantic patterns. From those it produces "rewrite expressions," which in turn generate thousands of new but similar sentences to work from. The system works quickly -- a team could move from 50 utterances to a fully operational linguistic set in less than two days. Amazon's other tool uses guided resampling to replace terms that can be safely swapped, further improving the AI's training. The technique draws on data from existing Alexa languages as well as media sources such as the Amazon Music catalog, and it is aware of context (it won't swap a musician's name for an audiobook title, for example).
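The passage describes a two-stage augmentation idea: rewrite expressions expand a handful of golden utterances into many syntactically similar variants, and guided resampling then swaps slot values in a type-aware way. Below is a minimal, self-contained Python sketch of that general idea; the utterance templates, slot names, and catalog entries are invented for illustration and do not reflect Amazon's actual tooling or data.

import itertools
import random

# Hypothetical Spanish golden utterances with typed slot placeholders
# (illustrative stand-ins, not Amazon's data).
GOLDEN_UTTERANCES = [
    "pon {ArtistName}",                      # "play {ArtistName}"
    "reproduce canciones de {ArtistName}",   # "play songs by {ArtistName}"
    "lee {AudiobookTitle}",                  # "read {AudiobookTitle}"
]

# Simple "rewrite expressions": optional prefixes/suffixes that generate
# syntactically similar variants of each golden utterance.
PREFIXES = ["", "alexa ", "por favor "]
SUFFIXES = ["", " ahora", " en la cocina"]

# Slot catalogs keyed by slot type, so resampling stays type-aware
# (a musician's name is never swapped into an audiobook slot).
CATALOG = {
    "ArtistName": ["Rosalía", "Juanes", "Shakira"],
    "AudiobookTitle": ["Cien años de soledad", "Don Quijote"],
}


def expand_rewrites(utterance):
    """Generate rewrite variants of one golden utterance."""
    for prefix, suffix in itertools.product(PREFIXES, SUFFIXES):
        yield f"{prefix}{utterance}{suffix}".strip()


def guided_resample(template, n=3, rng=random):
    """Fill slot placeholders with values drawn from the matching catalog."""
    samples = []
    for _ in range(n):
        filled = template
        for slot, values in CATALOG.items():
            filled = filled.replace("{" + slot + "}", rng.choice(values))
        samples.append(filled)
    return samples


if __name__ == "__main__":
    training_set = set()
    for golden in GOLDEN_UTTERANCES:
        for variant in expand_rewrites(golden):
            training_set.update(guided_resample(variant))
    for sentence in sorted(training_set):
        print(sentence)

Even this toy version turns three seed commands into dozens of grammatical training sentences, which is the core of how a small set of golden utterances can bootstrap a much larger training set.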


Improving Multi-turn Dialogue Modelling with Utterance ReWriter

arXiv.org Artificial Intelligence

Recent research has made impressive progress in single-turn dialogue modelling. In the multi-turn setting, however, current models are still far from satisfactory. One major challenge is the frequently occurring coreference and information omission in everyday conversation, which makes it hard for machines to understand the speaker's real intention. In this paper, we propose rewriting the human utterance as a pre-processing step to help multi-turn dialogue modelling. Each utterance is first rewritten to recover all coreferred and omitted information, and the subsequent processing steps are then performed on the rewritten utterance. To train the utterance rewriter, we collect a new dataset with human annotations and introduce a Transformer-based utterance rewriting architecture that uses a pointer network. We show that the proposed architecture achieves remarkably good performance on the utterance rewriting task. The trained utterance rewriter can be easily integrated into online chatbots and brings general improvement across different domains.
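As a rough illustration of the copy mechanism such a pointer-network rewriter relies on, the sketch below computes, for a single decoding step, a gated mixture of copy distributions over dialogue-history tokens and current-utterance tokens. The class name, dimensions, and layer choices are assumptions made for illustration, not the paper's exact architecture.

import torch
import torch.nn as nn


class PointerRewriterStep(nn.Module):
    """One decoding step of a pointer-based utterance rewriter (sketch).

    Every rewritten token is copied either from the dialogue history H or
    from the current utterance U; a learned gate lambda decides how much
    probability mass each source receives.
    """

    def __init__(self, d_model):
        super().__init__()
        self.gate = nn.Linear(d_model, 1)

    def forward(self, dec_state, hist_enc, utt_enc):
        # dec_state: (batch, d_model)          decoder state for this step
        # hist_enc:  (batch, len_h, d_model)   encoded dialogue-history tokens
        # utt_enc:   (batch, len_u, d_model)   encoded current-utterance tokens
        attn_h = torch.softmax(
            torch.einsum("bd,bld->bl", dec_state, hist_enc), dim=-1)
        attn_u = torch.softmax(
            torch.einsum("bd,bld->bl", dec_state, utt_enc), dim=-1)
        lam = torch.sigmoid(self.gate(dec_state))  # (batch, 1)
        # Probability of copying each source token at this decoding step.
        p_hist = lam * attn_h
        p_utt = (1.0 - lam) * attn_u
        return p_hist, p_utt


if __name__ == "__main__":
    step = PointerRewriterStep(d_model=64)
    dec = torch.randn(2, 64)
    hist = torch.randn(2, 10, 64)
    utt = torch.randn(2, 6, 64)
    p_hist, p_utt = step(dec, hist, utt)
    # The two copy distributions sum to 1 across all source tokens.
    print(p_hist.sum(-1) + p_utt.sum(-1))

Restricting the output vocabulary to tokens that already appear in the history or the current utterance is what makes the rewriting task tractable: the rewriter only has to decide which existing words to copy, not generate new ones.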


Amazon announces the Alexa Prize, a $1 million award for conversational AI

#artificialintelligence

On Thursday Amazon announced the Alexa Prize, a $1 million award for the creation of a conversational artificial intelligence that can talk to people "coherently and engagingly" for a third of an hour. To aid the endeavor, up to ten teams will get a $100,000 stipend from Amazon along with Alexa-enabled devices, free cloud computing, and support from Amazon's Alexa team. The push comes as Amazon's digital assistant Alexa expands to multiple platforms beyond its original home on Amazon's Echo speaker, and as artificial intelligence is expected to become the cutting edge of tech companies' interfaces with their customers. The Alexa Prize announcement came the same day that several of the world's largest tech companies announced the formation of a consortium aimed at fostering the promise of artificial intelligence.


Ranking Enhanced Dialogue Generation

arXiv.org Artificial Intelligence

How to effectively utilize the dialogue history is a crucial problem in multi-turn dialogue generation. Previous works usually employ various neural network architectures (e.g., recurrent neural networks, attention mechanisms, and hierarchical structures) to model the history. However, a recent empirical study by Sankar et al. has shown that these architectures lack the ability to understand and model the dynamics of the dialogue history. For example, the widely used architectures are insensitive to perturbations of the dialogue history, such as shuffled words, missing utterances, and reordered utterances. To tackle this problem, we propose a Ranking Enhanced Dialogue Generation framework in this paper. In addition to the traditional representation encoder and response generation modules, a ranking module is introduced to model the ranking relation between the former utterance and consecutive utterances. Specifically, the former utterance and consecutive utterances are treated as a query and the corresponding documents, and both local and global ranking losses are designed for the learning process. In this way, the dynamics in the dialogue history can be explicitly captured. To evaluate our proposed models, we conduct extensive experiments on three public datasets, i.e., bAbI, PersonaChat, and JDC. Experimental results show that our models produce better responses in terms of both quantitative measures and human judgments, compared with state-of-the-art dialogue generation models. Furthermore, we provide a detailed experimental analysis to show where and how the improvements come from.
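As a rough sketch of what pairwise ("local") and listwise ("global") ranking objectives over a query utterance and its candidate consecutive utterances can look like, the snippet below implements two generic losses in PyTorch. The function names and exact formulations are illustrative assumptions and may differ from the losses defined in the paper.

import torch
import torch.nn.functional as F


def local_ranking_loss(query, positive, negative, margin=0.2):
    """Pairwise hinge loss: the true consecutive utterance should score
    higher against the query than a sampled negative utterance."""
    pos = F.cosine_similarity(query, positive, dim=-1)
    neg = F.cosine_similarity(query, negative, dim=-1)
    return F.relu(margin - pos + neg).mean()


def global_ranking_loss(query, candidates, target_idx):
    """Listwise loss over all candidate utterances: softmax cross-entropy
    that pushes the correct candidate to the top of the ranked list."""
    # query: (batch, d); candidates: (batch, num_cands, d)
    scores = torch.einsum("bd,bnd->bn", query, candidates)
    return F.cross_entropy(scores, target_idx)


if __name__ == "__main__":
    batch, d, n = 4, 32, 5
    q = torch.randn(batch, d)
    pos, neg = torch.randn(batch, d), torch.randn(batch, d)
    cands = torch.randn(batch, n, d)
    tgt = torch.randint(0, n, (batch,))
    loss = local_ranking_loss(q, pos, neg) + global_ranking_loss(q, cands, tgt)
    print(loss.item())

Training the encoder with such ranking terms alongside the usual generation loss is one way to make the learned representations sensitive to the order and presence of utterances in the history, which is the failure mode the abstract highlights.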