socialbot


Emora: An Inquisitive Social Chatbot Who Cares For You

arXiv.org Artificial Intelligence

Inspired by studies on the overwhelming presence of experience-sharing in human-human conversations, Emora, the social chatbot developed by Emory University, aims to bring such experience-focused interaction to the current field of conversational AI. Emora balances the traditional approach of information-sharing topic handlers with opinion-oriented exchanges, and introduces new conversational abilities that support dialogues built around collaboratively understanding and learning about the partner's life experiences. We present a curated dialogue system that leverages highly expressive natural language templates, powerful intent classification, and ontology resources to provide an engaging and interesting conversational experience to every user.
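For illustration only, a minimal sketch of the kind of template-plus-intent turn the abstract describes might look like the following; the intent patterns, labels, and templates are invented placeholders, not Emora's actual resources.

```python
# Illustrative sketch (not Emora's code): pairing a simple intent
# classifier with expressive natural language response templates.
import random
import re

# Hypothetical intent patterns and experience-focused response templates.
INTENTS = {
    "share_experience": re.compile(r"\b(i went|i tried|i saw|i visited)\b", re.I),
    "share_opinion":    re.compile(r"\b(i think|i feel|i love|i hate)\b", re.I),
}

TEMPLATES = {
    "share_experience": [
        "That sounds memorable! What was the best part of it for you?",
        "Oh nice, how did that go? I'd love to hear more.",
    ],
    "share_opinion": [
        "Interesting, I can see why you feel that way. What makes you say so?",
        "I'm curious, has that opinion changed over time?",
    ],
    "fallback": ["Tell me more about that."],
}

def respond(user_utterance: str) -> str:
    """Classify the user's intent and fill in a matching template."""
    for intent, pattern in INTENTS.items():
        if pattern.search(user_utterance):
            return random.choice(TEMPLATES[intent])
    return random.choice(TEMPLATES["fallback"])

print(respond("I went hiking with my sister last weekend"))
```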


Neural Generation Meets Real People: Towards Emotionally Engaging Mixed-Initiative Conversations

arXiv.org Artificial Intelligence

Building an open-domain socialbot that talks to real people is challenging - such a system must meet multiple user expectations such as broad world knowledge, conversational style, and emotional connection. Our socialbot engages users on their terms - prioritizing their interests, feelings and autonomy. As a result, our socialbot provides a responsive, personalized user experience, capable of talking knowledgeably about a wide variety of topics, as well as chatting empathetically about ordinary life. Neural generation plays a key role in achieving these goals, providing the backbone for our conversational and emotional tone. At the end of the competition, Chirpy Cardinal progressed to the finals with an average rating of 3.6/5.0.
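The excerpt does not detail the generator itself; assuming an off-the-shelf pretrained dialogue model from the transformers library (DialoGPT is used here purely as a stand-in, not Chirpy Cardinal's actual model), a minimal sketch of neural generation serving as the conversational backbone could look like this:

```python
# Illustrative sketch (not Chirpy Cardinal's pipeline): an off-the-shelf
# neural generator producing a conversational reply to a single user turn.
# Assumes the `transformers` and `torch` packages are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

def generate_reply(user_utterance: str) -> str:
    """Generate one reply to a single user turn."""
    input_ids = tokenizer.encode(user_utterance + tokenizer.eos_token,
                                 return_tensors="pt")
    output_ids = model.generate(
        input_ids,
        max_length=input_ids.shape[-1] + 40,  # keep replies short
        do_sample=True, top_p=0.9, temperature=0.8,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens (the reply).
    return tokenizer.decode(output_ids[0, input_ids.shape[-1]:],
                            skip_special_tokens=True)

print(generate_reply("I had a rough day at work today."))
```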


Amazon Releases Data Set of Annotated Conversations to Aid Development of Socialbots : Alexa Blogs

#artificialintelligence

Today I am happy to announce the public release of the Topical Chat Dataset, a text-based collection of more than 235,000 utterances (over 4,700,000 words) that will help support high-quality, repeatable research in the field of dialogue systems. The goal of Topical Chat is to enable innovative research in knowledge-grounded neural response-generation systems by tackling hard challenges that are not addressed by other publicly available datasets. Those challenges, which we have seen universities begin to tackle in the Alexa Prize Socialbot Grand Challenge, include transitioning between topics in a natural manner, knowledge selection and enrichment, and integration of fact and opinion into dialogue. Each conversation in the data set refers to a group of three related entities, and every turn of conversation is supported by an extract from a collection of unstructured or loosely structured text resources. To our knowledge, Topical Chat is the largest social-conversation and knowledge dataset available publicly to the research community.
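A minimal sketch of iterating over such a knowledge-grounded corpus is shown below; the file name and the field names ("content", "agent", "knowledge_source", "message") are assumptions about the released JSON layout and should be checked against the official Topical Chat repository.

```python
# Illustrative sketch: walk one conversation of a knowledge-grounded
# dialogue corpus and print who spoke, which knowledge snippet grounded
# the turn, and the start of the utterance. Field names are assumed.
import json

with open("train.json", encoding="utf-8") as f:
    conversations = json.load(f)  # dict keyed by conversation id

for conv_id, conv in list(conversations.items())[:1]:
    print("conversation:", conv_id)
    for turn in conv["content"]:
        print(turn["agent"], "|", turn["knowledge_source"],
              "|", turn["message"][:60])
```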


Advancing the State of the Art in Open Domain Dialog Systems through the Alexa Prize

arXiv.org Artificial Intelligence

Building open-domain conversational systems that allow users to have engaging conversations on topics of their choice is a challenging task. The Alexa Prize was launched in 2016 to tackle the problem of achieving natural, sustained, coherent and engaging open-domain dialogs. In the second iteration of the competition in 2018, university teams advanced the state of the art by using context in dialog models, leveraging knowledge graphs for language understanding, handling complex utterances, building statistical and hierarchical dialog managers, and leveraging model-driven signals from user responses. The 2018 competition also provided competitors with a suite of tools and models, including the CoBot (conversational bot) toolkit, topic and dialog act detection models, conversation evaluators, and a sensitive content detection model, so that the competing teams could focus on building knowledge-rich, coherent and engaging multi-turn dialog systems. This paper outlines the advances developed by the university teams as well as the Alexa Prize team to achieve the common goal of advancing the science of Conversational AI. We address several key open-ended problems such as conversational speech recognition, open-domain natural language understanding, commonsense reasoning, statistical dialog management and dialog evaluation. These collaborative efforts have improved the Alexa user experience, raising the average rating to 3.61, the median conversation duration to 2 minutes 18 seconds, and the average number of turns to 14.6, increases of 14%, 92% and 54% respectively since the launch of the 2018 competition. For conversational speech recognition, we have reduced the relative Word Error Rate by 55% and the relative Entity Error Rate by 34% since the launch of the Alexa Prize. Socialbots improved in quality significantly more rapidly in 2018, in part due to the release of the CoBot toolkit, with new entrants attaining an average rating of 3.35 just one week into the semifinals, compared to nine weeks in the 2017 competition.
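As a quick arithmetic check of the quoted figures, the stated relative increases imply the following approximate pre-2018 baselines (a back-of-the-envelope calculation using only the numbers above):

```python
# Back out the implied pre-2018 baselines from the reported 2018 figures
# and their stated relative increases (14%, 92%, 54%).
rating_2018, duration_2018_s, turns_2018 = 3.61, 2 * 60 + 18, 14.6

print(f"implied baseline rating:   {rating_2018 / 1.14:.2f}")       # ~3.17
print(f"implied baseline duration: {duration_2018_s / 1.92:.0f} s")  # ~72 s
print(f"implied baseline turns:    {turns_2018 / 1.54:.1f}")         # ~9.5
```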


On Evaluating and Comparing Open Domain Dialog Systems

arXiv.org Artificial Intelligence

Conversational agents are exploding in popularity. However, much work remains in the area of non-goal-oriented conversations, despite significant growth in research interest over recent years. To advance the state of the art in conversational AI, Amazon launched the Alexa Prize, a 2.5-million-dollar university competition where sixteen selected university teams built conversational agents to deliver the best social conversational experience. The Alexa Prize provided the academic community with the unique opportunity to perform research with a live system used by millions of users. The subjectivity associated with evaluating conversations is a key element underlying the challenge of building non-goal-oriented dialogue systems. In this paper, we propose a comprehensive evaluation strategy with multiple metrics designed to reduce subjectivity by selecting metrics that correlate well with human judgement. The proposed metrics provide a granular analysis of the conversational agents, which is not captured in human ratings. We show that these metrics can be used as a reasonable proxy for human judgment. We provide a mechanism to unify the metrics for selecting the top-performing agents, which has also been applied throughout the Alexa Prize competition. To our knowledge, this is to date the largest setting for evaluating agents, with millions of conversations and hundreds of thousands of ratings from users. We believe that this work is a step towards an automatic evaluation process for conversational AIs.
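The unification mechanism is not spelled out in the excerpt; one plausible sketch, assuming z-normalization of each metric across agents followed by averaging (with the sign flipped for metrics where lower is better) rather than the paper's actual formula, is:

```python
# Illustrative sketch (not necessarily the paper's method): unify several
# per-agent metrics into one ranking score by z-normalizing each metric
# across agents and averaging.
from statistics import mean, pstdev

# Hypothetical per-agent metric values.
metrics = {
    "bot_A": {"engagement": 0.72, "coherence": 0.65, "topic_depth": 4.1},
    "bot_B": {"engagement": 0.61, "coherence": 0.70, "topic_depth": 5.3},
    "bot_C": {"engagement": 0.80, "coherence": 0.58, "topic_depth": 3.6},
}
higher_is_better = {"engagement": True, "coherence": True, "topic_depth": True}

def unified_scores(metrics, higher_is_better):
    names = list(next(iter(metrics.values())).keys())
    zscores = {bot: {} for bot in metrics}
    for m in names:
        values = [metrics[bot][m] for bot in metrics]
        mu, sigma = mean(values), pstdev(values) or 1.0
        for bot in metrics:
            z = (metrics[bot][m] - mu) / sigma
            zscores[bot][m] = z if higher_is_better[m] else -z
    return {bot: mean(zscores[bot].values()) for bot in metrics}

for bot, score in sorted(unified_scores(metrics, higher_is_better).items(),
                         key=lambda kv: -kv[1]):
    print(f"{bot}: {score:+.2f}")
```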


Tartan: A retrieval-based socialbot powered by a dynamic finite-state machine architecture

arXiv.org Artificial Intelligence

This paper describes the Tartan conversational agent built for the 2018 Alexa Prize Competition. Tartan is a non-goal-oriented socialbot focused on providing users with an engaging and fluent casual conversation. Tartan's key features include an emphasis on structured conversation based on flexible finite-state models and an approach focused on understanding and using conversational acts. To provide engaging conversations, Tartan blends script-like yet dynamic responses with data-based generative and retrieval models. Unique to Tartan is that our dialog manager is modeled as a dynamic finite-state machine; to our knowledge, no other conversational agent implementation has followed this specific structure.
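The paper's code is not reproduced here; a minimal sketch of a dialog manager structured as a dynamic finite-state machine, with states and transitions that can be registered at runtime, might look like the following (state names and transition rules are illustrative, not Tartan's):

```python
# Illustrative sketch (not Tartan's code): a dialog manager as a dynamic
# finite-state machine whose states and transitions can be added at runtime.
class DynamicFSMDialogManager:
    def __init__(self, start_state: str):
        self.state = start_state
        # transitions[state] maps a predicate over the user utterance
        # to (next_state, response).
        self.transitions = {}

    def add_transition(self, state, predicate, next_state, response):
        """Register (or dynamically extend) a transition out of a state."""
        self.transitions.setdefault(state, []).append(
            (predicate, next_state, response))

    def step(self, utterance: str) -> str:
        for predicate, next_state, response in self.transitions.get(self.state, []):
            if predicate(utterance):
                self.state = next_state
                return response
        return "Sorry, I lost the thread. What would you like to talk about?"

# Usage: build a tiny movie sub-dialog, extending the machine on the fly.
dm = DynamicFSMDialogManager("greet")
dm.add_transition("greet", lambda u: "movie" in u.lower(),
                  "movies", "I love movies! Seen anything good lately?")
dm.add_transition("movies", lambda u: True,
                  "movies_opinion", "Nice. What did you like most about it?")

print(dm.step("Let's talk about movies"))
print(dm.step("I watched Dune last night"))
```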


Alexa Prize: Amazon's Battle to Bring Conversational AI Into Your Home

WIRED

The first interactor--a muscular man in his fifties with a shaved head and a black V-neck sweater--walks into a conference room and sits in a low-slung blue armchair before a phalanx of video cameras and studio lights. The rest of the room is totally dark. He gazes at a black, hockey-puck-shaped object--an Amazon Echo--on a small table in front of him. "Alexa," he says, "let's chat."


A Deep Reinforcement Learning Chatbot (Short Version)

arXiv.org Machine Learning

We present MILABOT: a deep reinforcement learning chatbot developed by the Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize competition. MILABOT is capable of conversing with humans on popular small talk topics through both speech and text. The system consists of an ensemble of natural language generation and retrieval models, including neural network and template-based models. By applying reinforcement learning to crowdsourced data and real-world user interactions, the system has been trained to select an appropriate response from the models in its ensemble. The system has been evaluated through A/B testing with real-world users, where it performed significantly better than other systems. The results highlight the potential of coupling ensemble systems with deep reinforcement learning as a fruitful path for developing real-world, open-domain conversational agents.
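A minimal sketch of the response-selection idea is given below, with placeholder candidate models and a toy scoring heuristic standing in for MILABOT's reinforcement-learned selection policy:

```python
# Illustrative sketch (not MILABOT's trained policy): select a reply from an
# ensemble of candidate response models using a scoring function.
from typing import Callable, List

def template_model(context: str) -> str:
    return "That's interesting! Tell me more."

def retrieval_model(context: str) -> str:
    return "I read something about that recently; what got you into it?"

ENSEMBLE: List[Callable[[str], str]] = [template_model, retrieval_model]

def score(context: str, candidate: str) -> float:
    """Placeholder scorer; MILABOT instead learns this with reinforcement
    learning from crowdsourced data and real user interactions."""
    # Toy heuristic: prefer candidates that ask a follow-up question.
    return 1.0 if candidate.endswith("?") else 0.5

def select_response(context: str) -> str:
    candidates = [model(context) for model in ENSEMBLE]
    return max(candidates, key=lambda c: score(context, c))

print(select_response("I started learning to paint last month."))
```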


Heriot-Watt claims podium place in Amazon artificial intelligence competition

#artificialintelligence

A Scottish university reached the final three of a prestigious international competition dedicated to advancing conversational artificial intelligence (AI).


The big goal for Alexa is a nice, long chat, says Alexa's chief scientist

USATODAY - Tech Top Stories

Amazon wants you to have long, real conversations with Alexa, its popular personal digital assistant. The e-tail giant recently released new tools to app developers that allow Alexa to whisper, show emotion and pause naturally, like we humans do. And that's just the start, says Rohit Prasad, Amazon's head scientist for Alexa, who is playing a key role in the retailer's efforts in artificial intelligence for Alexa--using computers to converse with us. "I truly believe that for AI to be useful in our daily lives, it has to be something you can connect with," Prasad said in an interview here. "Conversation is the next step, to be more human-like."