Collaborating Authors

Kaden, Zachary


Tartan: A retrieval-based socialbot powered by a dynamic finite-state machine architecture

arXiv.org Artificial Intelligence

This paper describes the Tartan conversational agent built for the 2018 Alexa Prize Competition. Tartan is a non-goal-oriented socialbot focused on providing users with an engaging and fluent casual conversation. Tartan's key features include an emphasis on structured conversation based on flexible finite-state models and an approach focused on understanding and using conversational acts. To provide engaging conversations, Tartan blends script-like yet dynamic responses with generative and retrieval models trained on data. Unique to Tartan is that its dialog manager is modeled as a dynamic finite-state machine; to our knowledge, no other conversational agent implementation has followed this specific structure.
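The idea of a dialog manager as a dynamic finite-state machine can be sketched roughly as follows. This is a minimal illustration, not Tartan's implementation; all names (`DialogFSM`, `add_transition`, the states and acts) are hypothetical.

```python
# Minimal sketch of a dialog manager modeled as a dynamic finite-state
# machine: states are conversation phases, edges are conversational acts,
# and transitions can be added at runtime (the "dynamic" part).

class DialogFSM:
    def __init__(self, start):
        self.state = start
        # (current_state, dialog_act) -> (next_state, response)
        self.transitions = {}

    def add_transition(self, state, act, next_state, response):
        # New states/edges can be registered mid-conversation.
        self.transitions[(state, act)] = (next_state, response)

    def step(self, act):
        # Advance on a recognized conversational act; fall back if unknown.
        key = (self.state, act)
        if key not in self.transitions:
            return "Could you tell me more?"
        self.state, response = self.transitions[key]
        return response

fsm = DialogFSM("greet")
fsm.add_transition("greet", "hello", "topic", "Hi! Want to talk about movies?")
fsm.add_transition("topic", "affirm", "movies", "Great, what did you watch recently?")

print(fsm.step("hello"))   # -> "Hi! Want to talk about movies?"
print(fsm.step("affirm"))  # -> "Great, what did you watch recently?"
```

In a retrieval-based socialbot, the scripted `response` strings here would be replaced by calls into retrieval or generative models selected per state.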


Horizon: Facebook's Open Source Applied Reinforcement Learning Platform

arXiv.org Artificial Intelligence

In this paper we present Horizon, Facebook's open source applied reinforcement learning (RL) platform. Horizon is an end-to-end platform designed to solve applied industry RL problems where datasets are large (millions to billions of observations), the feedback loop is slow, and experiments must be done with care because they run in production rather than in a simulator. Unlike other RL platforms, which are often designed for fast prototyping and experimentation, Horizon is designed with production use cases in mind. The platform contains workflows to train popular deep RL algorithms and includes data preprocessing, feature transformation, distributed training, counterfactual policy evaluation, and optimized serving. We also showcase real examples where models trained with Horizon significantly outperformed and replaced supervised learning systems at Facebook. Deep reinforcement learning (RL) is poised to revolutionize how autonomous systems are built. In recent years, it has been shown to achieve state-of-the-art performance on a wide variety of complicated tasks (Mnih et al., 2015; Lillicrap et al., 2015; Schulman et al., 2015; Van Hasselt et al., 2016; Schulman et al., 2017), where success requires learning complex relationships between high-dimensional state spaces, actions, and long-term rewards. However, current implementations of the latest advances in this field have mainly been tailored to academia, focusing on fast prototyping and on evaluating performance in simulated benchmark environments.
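Counterfactual policy evaluation, one of the workflow stages the abstract lists, answers "how would a new policy have performed on data logged under the old policy?" without deploying it. A common estimator for this is inverse propensity scoring (IPS); the sketch below is a generic illustration of that estimator, not Horizon's API, and all names and data are hypothetical.

```python
# Illustrative sketch of counterfactual (off-policy) evaluation via the
# inverse-propensity-scoring (IPS) estimator: reweight each logged reward
# by how much more (or less) likely the target policy was to take the
# logged action than the logging policy was.

def ips_estimate(logs, target_policy):
    """Estimate the target policy's average reward from logged data.

    logs: list of (state, action, reward, logging_prob) tuples, where
          logging_prob is the probability the logging policy assigned
          to the action it actually took.
    target_policy: function state -> {action: probability}.
    """
    total = 0.0
    for state, action, reward, logging_prob in logs:
        weight = target_policy(state).get(action, 0.0) / logging_prob
        total += weight * reward
    return total / len(logs)

# Toy logged data: the logging policy chose each action with prob 0.5.
logs = [("s", "a", 1.0, 0.5), ("s", "b", 0.0, 0.5)]

# A deterministic target policy that always picks action "a".
estimate = ips_estimate(logs, lambda s: {"a": 1.0})
print(estimate)  # (1.0/0.5 * 1.0 + 0.0 * 0.0) / 2 = 1.0
```

The appeal for slow-feedback-loop settings is that a candidate policy can be vetted entirely offline on logged production data before any live experiment.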


Handling Cold-Start Collaborative Filtering with Reinforcement Learning

arXiv.org Artificial Intelligence

A major challenge in recommender systems is handling new users, who are also called cold-start users. In this paper, we propose a novel approach for learning an optimal series of questions with which to interview cold-start users for movie recommender systems. We propose learning interview questions using Deep Q Networks to create user profiles that enable better recommendations for cold-start users. While our proposed system is trained on a movie recommender system, our Deep Q Network model should generalize across various types of recommender systems.
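The question-selection problem can be framed as follows: the state is the set of questions already asked, actions are candidate next questions, and the reward reflects how informative the answer was for the user profile. The sketch below uses a simplified tabular Q-learning stand-in for the paper's Deep Q Network (which would replace the table with a neural network); the questions, rewards, and helper names are hypothetical.

```python
# Simplified tabular stand-in for a Deep Q Network that learns which
# interview question to ask a cold-start user next. A real DQN would
# replace the q dict with a neural network over state features.

import random

random.seed(0)

QUESTIONS = ("liked_inception", "liked_titanic", "liked_alien")
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-values keyed by (questions_asked_so_far, candidate_next_question).
q = {}

def choose(state):
    # Epsilon-greedy selection among questions not yet asked.
    remaining = [a for a in QUESTIONS if a not in state]
    if random.random() < EPSILON:
        return random.choice(remaining)
    return max(remaining, key=lambda a: q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    # Standard Q-learning target: r + gamma * max_a' Q(s', a').
    remaining = [a for a in QUESTIONS if a not in next_state]
    best_next = max((q.get((next_state, a), 0.0) for a in remaining),
                    default=0.0)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# One simulated interview step: reward 1.0 means the answer was
# informative for building the user's profile.
state = ()
action = choose(state)
update(state, action, 1.0, state + (action,))
```

After training, greedily following the learned Q-values yields the interview sequence; the learned profile is then handed to the downstream recommender.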