We report on the use of reinforcement learning with Cobot, a software agent residing in the well-known online community LambdaMOO. Our initial work on Cobot (Isbell et al. 2000) provided him with the ability to collect social statistics and report them to users. Here we describe an application of RL that allows Cobot to take proactive actions in this complex social environment and to adapt his behavior from multiple sources of human reward. After five months of training, and 3171 reward and punishment events from 254 different LambdaMOO users, Cobot learned nontrivial preferences for a number of users, modifying his behavior based on his current state. We describe LambdaMOO and Cobot's state and action spaces, and report the statistical results of the learning experiment.
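The core idea of adapting an agent's action choice from accumulated human reward and punishment events can be sketched in a few lines. This is a minimal illustration only, not Cobot's actual algorithm (which modeled reward per user with function approximation); the states, actions, and reward values below are invented for the example.

```python
import random
from collections import defaultdict

random.seed(0)  # fixed seed so the illustrative run is reproducible


class RewardLearner:
    """Toy learner: tracks the average human reward per (state, action)."""

    def __init__(self, actions, epsilon=0.1):
        self.actions = list(actions)
        self.q = defaultdict(float)   # (state, action) -> running mean reward
        self.n = defaultdict(int)     # visit counts for the incremental mean
        self.epsilon = epsilon

    def choose(self, state):
        # Epsilon-greedy: usually exploit the best-valued action so far.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward):
        # Incremental average of the rewards observed for (state, action).
        key = (state, action)
        self.n[key] += 1
        self.q[key] += (reward - self.q[key]) / self.n[key]


learner = RewardLearner(["joke", "statistic", "silence"])
# Simulated reward events: this hypothetical community rewards statistics
# (+1), punishes jokes (-1), and is indifferent to silence (0).
for _ in range(200):
    a = learner.choose("idle")
    learner.update("idle", a, {"joke": -1, "statistic": 1, "silence": 0}[a])
```

After training, the learner's value estimates favor the rewarded action, mirroring (in miniature) how Cobot's learned preferences shift his behavior toward actions users reward.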
In April 2016, Facebook founder Mark Zuckerberg announced that the social media platform was giving its nearly two billion users the opportunity to livestream content. The move was viewed as a natural extension of the platform's primary goal: providing a space for the average person to share their daily experiences, from the mundane to the meaningful. Almost as quickly, users found ways to live-broadcast the worst of their nature, including the "Easter Day slaughter," in which the fatal shooting of a 74-year-old Cleveland grandfather was livestreamed. In response, calls have grown for Facebook either to shutter the service or to find a way to better regulate its content. Rev. Jesse Jackson, for example, remarked that Facebook Live is being used by people "as a platform to release their anger, their fears and their foolishness."
This paper describes a method for the development of dialogue managers for natural language interfaces. A dialogue manager is presented that was designed on the basis of both a theoretical investigation of models for dialogue management and an analysis of empirical material. It is argued that, for natural language interfaces, many of the human interaction phenomena accounted for in, for instance, plan-based models of dialogue do not occur. Instead, for many applications, dialogue in natural language interfaces can be managed using information on the functional role of an utterance as conveyed in its linguistic structure. This is modelled in a dialogue grammar which controls the interaction.
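The idea of a dialogue grammar controlling the interaction can be sketched as a small transition table: each state lists the functional roles (e.g. request, clarification, answer) that may legally follow, with no plan recognition involved. The states and roles below are invented for illustration; the paper's actual grammar is derived from its empirical material.

```python
# Hypothetical dialogue grammar: maps a dialogue state and the functional
# role of the incoming utterance to the next state. Illegal moves raise.
GRAMMAR = {
    "awaiting_request": {"request": "processing"},
    "processing": {
        "clarification_request": "awaiting_clarification",
        "answer": "awaiting_request",
    },
    "awaiting_clarification": {"clarification": "processing"},
}


def advance(state, role):
    """Return the next dialogue state, or raise if the move is illegal."""
    moves = GRAMMAR[state]
    if role not in moves:
        raise ValueError(f"role {role!r} not allowed in state {state!r}")
    return moves[role]


# A well-formed exchange: a request, an embedded clarification
# subdialogue, and finally an answer, returning to the initial state.
state = "awaiting_request"
for role in ["request", "clarification_request", "clarification", "answer"]:
    state = advance(state, role)
```

The table-driven design reflects the paper's point: for many interface applications, the functional role of an utterance suffices to manage the dialogue, so the full machinery of plan-based models is unnecessary.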
Enthusiasm for developing conversational characters in games is not difficult to generate [1, 2], but most of these visions seem to rely on the dream of solving all of the problems of Computational Linguistics. Since such a breakthrough is unlikely to happen anytime soon, we present a more modest proposal, which still allows for complex spoken conversational interactions with a variety of NPCs in games. One of the main problems in developing spoken dialogue systems for interactive games is that individual dialogue systems have been application-specific and difficult to transfer to new domains, and thus to new games or to different characters within a game. Moreover, most of the dialogue systems developed in the past have been built for simple "form-filling" interactions, which are relatively uninteresting as far as gaming is concerned. We have made some progress in developing a "plug-and-play" multi-modal (i.e.