Embodied avatars as virtual agents have many applications and provide benefits over disembodied agents: they allow non-verbal social and interactional cues to be leveraged, much as in human-to-human interaction. We present an open embodied avatar built upon the Unreal Engine that can be controlled via a simple Python programming interface. The avatar has lip-syncing (phoneme control), head-gesture, and facial-expression (using either facial action units or cardinal emotion categories) capabilities. We release code and models to illustrate how the avatar can be controlled like a puppet or used to create a simple conversational agent using public application programming interfaces (APIs). GitHub link: https://github.com/danmcduff/AvatarSim
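The abstract describes a Python interface that drives the avatar through phoneme, action-unit, and emotion commands. The sketch below is purely illustrative: the function names, message fields, and phoneme codes are assumptions for exposition, not the actual AvatarSim API (see the linked repository for the real interface).

```python
import json

# Illustrative only: message formats and field names below are assumptions,
# not the actual AvatarSim protocol.

def make_phoneme_command(phoneme, duration_s=0.1):
    """Build a lip-sync command setting the mouth shape to one phoneme."""
    return {"type": "phoneme", "value": phoneme, "duration": duration_s}

def make_expression_command(action_units):
    """Build a facial-expression command from FACS action-unit intensities,
    e.g., AU12 (lip-corner puller), intensities in [0, 1]."""
    return {"type": "expression", "aus": dict(action_units)}

def make_emotion_command(emotion, intensity=1.0):
    """Build an expression command from a cardinal emotion category."""
    return {"type": "emotion", "value": emotion, "intensity": intensity}

# A "puppeteering" script: smile (AU6 + AU12) while mouthing "hello".
commands = [
    make_expression_command({"AU6": 0.6, "AU12": 0.8}),
    *(make_phoneme_command(p) for p in ["HH", "EH", "L", "OW"]),
]
payload = json.dumps(commands)  # serialized batch sent to the renderer
print(payload)
```

Batching commands into a serialized script like this is one plausible design for puppet-style control; a conversational agent would instead generate such commands turn by turn from a dialogue backend.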
There is a growing recognition that artists have valuable ways of understanding and working with cognitive and perceptual mechanisms to convey desired experiences and narratives in their artworks (DiPaola et al., 2010; Zeki, 2001). This paper documents our attempt to computationally model the creative process of a portrait painter, who relies on an understanding of human traits (i.e., personality and emotions) to inform their art. Our system includes an empathic conversational interaction component to capture the dominant personality category of the user and a generative AI Portraiture system that uses this categorization to create a personalized stylization of the user's portrait. This paper includes a description of our systems and the real-time interaction results obtained during the demonstration session of the NeurIPS 2019 Conference.
This USER Workshop was convened with the goal of defining future research directions for the burgeoning intelligent agent research community and communicating them to the National Science Foundation. It took place in Pittsburgh, Pennsylvania, on October 24 and 25, 2019, and was sponsored by National Science Foundation Grant Number IIS-1934222. Any opinions, findings, and conclusions or future directions expressed in this document are those of the authors and do not necessarily reflect the views of the National Science Foundation. The 27 participants presented their individual research interests and personal research goals. In the breakout sessions that followed, the participants defined the main research areas within the domain of intelligent agents and discussed the major future directions that research in each area should take.
Recommender systems are software applications that help users find items of interest in situations of information overload. Current research often assumes a one-shot interaction paradigm, in which users' preferences are estimated from past observed behavior and the presentation of a ranked list of suggestions is the main, one-directional form of user interaction. Conversational recommender systems (CRS) take a different approach and support a richer set of interactions. These interactions can, for example, help to improve the preference elicitation process or allow the user to ask questions about the recommendations and to give feedback. Interest in CRS has increased significantly in the past few years. This development is mainly due to substantial progress in natural language processing, the emergence of new voice-controlled home assistants, and the increased use of chatbot technology. With this paper, we provide a detailed survey of existing approaches to conversational recommendation. We categorize these approaches along various dimensions, e.g., in terms of the supported user intents or the knowledge they use in the background. Moreover, we discuss technological approaches, review how CRS are evaluated, and finally identify a number of gaps that deserve more research in the future.
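The contrast drawn above between one-shot ranking and richer multi-turn interaction can be made concrete with a minimal dialogue loop. The following sketch is a toy illustration under invented assumptions: the catalog, intent names, and slot scheme are ours, not taken from the survey or any particular CRS.

```python
# Minimal sketch of a CRS turn loop: preference elicitation, then a
# critique that refines the recommendation. Catalog and intents are toy
# examples, not from the surveyed systems.

CATALOG = [
    {"title": "Inception", "genre": "sci-fi", "year": 2010},
    {"title": "Amélie", "genre": "romance", "year": 2001},
    {"title": "Arrival", "genre": "sci-fi", "year": 2016},
]

def recommend(prefs):
    """Return catalog items matching all elicited preferences."""
    return [m for m in CATALOG if all(m.get(k) == v for k, v in prefs.items())]

def handle_turn(state, intent, slots):
    """Dispatch one user turn; state accumulates preferences across turns."""
    if intent in ("give_preference", "critique"):
        state.update(slots)          # elicit or refine, then re-recommend
        return recommend(state)
    raise ValueError(f"unsupported intent: {intent}")

state = {}
first = handle_turn(state, "give_preference", {"genre": "sci-fi"})   # "I like sci-fi"
second = handle_turn(state, "critique", {"year": 2016})              # "something from 2016"
print([m["title"] for m in second])
```

The point of the sketch is the accumulating dialogue state: unlike a one-shot ranker, each turn narrows or revises the preference model, which is the behavior the elicitation and feedback intents described above enable.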
Today's voice assistants are fairly limited in their conversational abilities, and we look forward to their evolution toward increasing capability. Smart speakers and voice applications are a result of the foundational research that has come to life in today's consumer products. These systems can complete simple tasks well: send and read text messages; answer basic informational queries; set timers and calendar entries; set reminders, make lists, and do basic math calculations; control Internet-of-Things-enabled devices such as thermostats, lights, alarms, and locks; and tell jokes and stories (Hoy 2018). Although voice assistants have greatly improved in the last few years, when it comes to more complicated routines, such as rescheduling appointments in a calendar, changing a reservation at a restaurant, or having a conversation, we are still looking forward to a future where assistants are capable of completing these tasks. Are today's voice systems "conversational"? We say that intelligent assistants are conversational if they are able to recognize and respond to input; to generate their own input; to deal with
The ambition of AI research is not solely to create intelligent artifacts that have the same capabilities as people; we also seek to enhance our intelligence and, in particular, to build intelligent artifacts that assist in our intellectual activities. Intelligent assistants are a central component of a long history of using computation to improve human activities, dating at least back to the pioneering work of Douglas Engelbart (1962). Early examples of intelligent assistants include sales assistants (McDermott 1982), scheduling assistants (Fox and Smith 1984), intelligent tutoring systems (Grignetti, Hausmann, and Gould 1975; Anderson, Boyle, and Reiser 1985), and intelligent assistants for software development and maintenance (Winograd 1973; Kaiser, Feiler, and Popovich 1988). More recent examples of intelligent assistants are e-commerce assistants (Lu and Smith 2007), meeting assistants (Tür et al. 2010), and systems that offer the intelligent capabilities of modern search