Prakash, Bharat (University of Maryland, Baltimore County) | Khatwani, Mohit (University of Maryland, Baltimore County) | Waytowich, Nicholas (US Army Research Laboratory) | Mohsenin, Tinoosh (University of Maryland, Baltimore County)
Recent progress in AI and reinforcement learning has shown great success in solving complex problems with high-dimensional state spaces. However, most of these successes have come in simulated environments where failure carries little or no consequence. Most real-world applications require training solutions that are safe to operate, as catastrophic failures are inadmissible, especially when human interaction is involved. Current safe RL systems use human oversight during training and exploration to ensure the RL agent does not enter a catastrophic state. These methods require a large amount of human labor and are very difficult to scale. We present a hybrid method for reducing human intervention time that combines model-based approaches with training a supervised learner, improving sample efficiency while also ensuring safety. We evaluate these methods on various grid-world environments using both standard and visual representations, and show that our approach outperforms traditional model-free approaches in sample efficiency, number of catastrophic states reached, and overall task performance.
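The oversight scheme the abstract describes, training a supervised learner on human intervention data so it can stand in for the human during exploration, can be illustrated with a minimal sketch. Everything below (the grid-world layout, the tabular "blocker", the random exploration policy) is a hypothetical stand-in for the paper's actual environments and models, not the authors' implementation.

```python
import random

random.seed(0)  # deterministic for illustration

# Hypothetical 5x5 grid-world: entering a "catastrophe" cell ends the episode.
SIZE = 5
CATASTROPHES = {(2, 2), (4, 0)}
ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def step(pos, action):
    """Move with clamping at the grid boundary."""
    dx, dy = ACTIONS[action]
    return (min(max(pos[0] + dx, 0), SIZE - 1),
            min(max(pos[1] + dy, 0), SIZE - 1))

# Phase 1: human oversight labels (state, action) pairs that lead to
# catastrophe; these labels form the blocker's training set.
def human_label(pos, action):
    return step(pos, action) in CATASTROPHES

training_set = {}
for _ in range(2000):
    pos = (random.randrange(SIZE), random.randrange(SIZE))
    action = random.choice(list(ACTIONS))
    training_set[(pos, action)] = human_label(pos, action)

# Phase 2: a (trivially tabular) supervised blocker replaces the human;
# pairs never labeled are optimistically assumed safe in this toy version.
def blocker(pos, action):
    return training_set.get((pos, action), False)

def safe_action(pos, proposed):
    """Veto the agent's proposed action if the blocker predicts catastrophe."""
    if not blocker(pos, proposed):
        return proposed
    alternatives = [a for a in ACTIONS if not blocker(pos, a)]
    return random.choice(alternatives) if alternatives else proposed

# A random explorer, filtered through the learned blocker.
pos, catastrophes = (0, 0), 0
for _ in range(500):
    pos = step(pos, safe_action(pos, random.choice(list(ACTIONS))))
    if pos in CATASTROPHES:
        catastrophes += 1
        pos = (0, 0)
```

With 2000 labeled samples over only 100 possible (state, action) pairs, the blocker's coverage is effectively complete, so the filtered explorer avoids every catastrophe here; the interesting regime studied in such work is when coverage is partial and the learner must generalize.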
As artificially intelligent systems grow in intelligence and capability, some of their available options may allow them to resist intervention by their programmers. We call an AI system "corrigible" if it cooperates with what its creators regard as a corrective intervention, despite default incentives for rational agents to resist attempts to shut them down or modify their preferences. We introduce the notion of corrigibility and analyze utility functions that attempt to make an agent shut down safely if a shutdown button is pressed, while avoiding incentives to prevent the button from being pressed or cause the button to be pressed, and while ensuring propagation of the shutdown behavior as it creates new subsystems or self-modifies. While some proposals are interesting, none have yet been demonstrated to satisfy all of our intuitive desiderata, leaving this simple problem in corrigibility wide open.
In this article, we explore the possibility of using animated characters as personal social companions for supporting interventions that promote health behaviors. We explore how supportive feedback could be provided to users of such artificial companion systems by coupling personalized intervention content, grounded in a mental health perspective, with personalized affective social agents such as graphical facial avatars or Embodied Conversational Agents (ECAs). We discuss the issues and potential of such an approach.
Segal, Avi (Ben-Gurion University of the Negev) | Gal, Kobi (Ben-Gurion University of the Negev) | Kamar, Ece (Microsoft Research, Redmond WA) | Horvitz, Eric (Microsoft Research, Redmond WA) | Miller, Grant (University of Oxford)
Volunteers who help with online crowdsourcing such as citizen science tasks typically make only a few contributions before exiting. We propose a computational approach for increasing users' engagement in such settings, based on optimizing policies for displaying motivational messages to users. The approach, which we refer to as Trajectory Corrected Intervention (TCI), reasons about the tradeoff between the long-term influence of engagement messages on participants' contributions and the potential risk of disrupting their current work. We combine model-based reinforcement learning with off-line policy evaluation to generate intervention policies, without relying on a fixed representation of the domain. TCI works iteratively to learn the best representation from a set of random intervention trials and to generate candidate intervention policies. It is able to refine selected policies off-line by exploiting the fact that users can only be interrupted once per session. We implemented TCI in the wild with Galaxy Zoo, one of the largest citizen science platforms on the web. We found that TCI outperformed the state-of-the-art intervention policy for this domain and significantly increased the contributions of thousands of users. This work demonstrates the benefit of combining traditional AI planning with off-line policy methods to generate intelligent intervention strategies.
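The off-line policy evaluation step the abstract relies on, scoring candidate intervention policies against logs from random trials before deploying any of them, can be sketched with a simple importance-sampling estimator. The session model, reward numbers, and the restriction to "intervene at exactly one time step" are illustrative assumptions, not the TCI system itself.

```python
import random

random.seed(1)  # deterministic for illustration

# Hypothetical setting: a session has T decision points at which a
# motivational message may be shown. Logged data comes from a uniform
# random exploration policy; candidates are evaluated off-line.
T = 5

def simulate_session(choose_time):
    """Return (intervention time, reward). The reward model is a
    stand-in: intervening mid-session (t=2) happens to be best,
    but the learner does not know this."""
    t = choose_time()
    base = {0: 0.2, 1: 0.5, 2: 1.0, 3: 0.6, 4: 0.3}[t]
    return t, base + random.gauss(0, 0.1)

# Logged trials from the uniform-random exploration policy.
logged = [simulate_session(lambda: random.randrange(T)) for _ in range(5000)]

def offline_value(target_t):
    """Importance-sampling estimate of the deterministic policy
    'always intervene at target_t', using the uniform logs.
    Weight = pi(t)/mu(t) = 1/(1/T) = T when t matches, else 0."""
    weighted = [reward * T for t, reward in logged if t == target_t]
    return sum(weighted) / len(logged)

# Choose the candidate with the best off-line estimate: no new live
# trials (and no extra user interruptions) are needed for selection.
best = max(range(T), key=offline_value)
```

The one-interruption-per-session constraint mentioned in the abstract is what makes this kind of off-line refinement attractive: each live trial is expensive, so candidate policies are ranked on existing logs first.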
Abeyruwan, Saminda (University of Miami) | Baral, Ramesh (Florida International University) | Yasavur, Ugan (Florida International University) | Lisetti, Christine (Florida International University) | Visser, Ubbo (University of Miami)
We combined a spoken dialog system that we developed to deliver brief health interventions with the fully autonomous humanoid robot NAO. The dialog system is based on a framework facilitating Markov decision processes (MDPs). It is optimized using reinforcement learning (RL) algorithms with data we collected from real user interactions. The system begins to learn optimal dialog strategies for initiative selection and for the type of confirmations that it uses during the interaction. The health intervention, delivered by a 3D character instead of the NAO, has already been evaluated, with positive results in terms of task completion, ease of use, and future intention to use the system. The current spoken dialog system for the humanoid robot is a novelty and exists so far as a proof of concept.
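The optimization described above, using RL to pick dialog strategies such as initiative and confirmation type, can be sketched as a tiny tabular learner. The two-state dialog MDP, the simulated user preferences, and the bandit-style update below are all hypothetical stand-ins for the paper's framework and its real interaction data.

```python
import random

random.seed(2)  # deterministic for illustration

# Hypothetical dialog MDP with two decision states: the system first
# chooses an initiative style, then a confirmation style.
STATES = ["ask_slot", "confirm_slot"]
ACTIONS = {
    "ask_slot": ["system_initiative", "user_initiative"],
    "confirm_slot": ["explicit_confirm", "implicit_confirm"],
}

def simulated_user(state, action):
    """Stand-in reward model: assume this user population prefers
    user initiative and implicit confirmation."""
    preferred = {"ask_slot": "user_initiative",
                 "confirm_slot": "implicit_confirm"}
    return 1.0 if action == preferred[state] else 0.2

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS[s]}
alpha, epsilon = 0.1, 0.2

for _ in range(3000):
    state = random.choice(STATES)
    if random.random() < epsilon:                     # explore
        action = random.choice(ACTIONS[state])
    else:                                             # exploit
        action = max(ACTIONS[state], key=lambda a: Q[(state, a)])
    reward = simulated_user(state, action)
    # One-step (bandit-style) value update toward the observed reward.
    Q[(state, action)] += alpha * (reward - Q[(state, action)])

# The learned strategy: the greedy action in each dialog state.
policy = {s: max(ACTIONS[s], key=lambda a: Q[(s, a)]) for s in STATES}
```

In the deployed system the rewards would come from real user interactions (e.g. task completion or confirmation success) rather than a hand-coded simulator, but the strategy-selection loop has this shape.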