Ajaykumar, Gopika
An Introduction to Causal Inference Methods for Observational Human-Robot Interaction Research
Lee, Jaron J. R., Ajaykumar, Gopika, Shpitser, Ilya, Huang, Chien-Ming
Quantitative methods in Human-Robot Interaction (HRI) research have primarily relied upon randomized, controlled experiments in laboratory settings. However, such experiments are not always feasible when external validity, ethical constraints, and ease of data collection are of concern. Furthermore, as consumer robots become increasingly available, growing amounts of real-world data will be available to HRI researchers, prompting the need for quantitative approaches tailored to the analysis of observational data. In this article, we present an alternative approach to quantitative HRI research using methods from causal inference, which can enable researchers to identify causal relationships in observational settings where randomized, controlled experiments cannot be run. We highlight different scenarios that HRI research with consumer household robots may involve to contextualize how methods from causal inference can be applied to observational HRI research. We then provide a tutorial summarizing key concepts from causal inference using a graphical model perspective and link to code examples throughout the article, which are available at https://gitlab.com/causal/causal_hri. Our work paves the way for further discussion of new approaches to observational HRI research while providing a starting point for HRI researchers to add causal inference techniques to their analytical toolbox.
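As a minimal, hedged illustration of the kind of adjustment the article's tutorial covers, the sketch below simulates confounded observational data and contrasts a naive treatment-effect estimate with one adjusted via the backdoor criterion. The scenario (a robot reminder affecting task completion), the variable names, and the data-generating process are hypothetical assumptions for illustration; this is not code from the article or its linked repository.

```python
# Illustrative sketch: backdoor adjustment on simulated observational data.
# All quantities here are hypothetical; true causal effect of T on Y is 1.0.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Confounder Z (e.g., a user's baseline engagement) affects both treatment and outcome.
z = rng.normal(size=n)
# Treatment T (e.g., robot issues a reminder) depends on Z, so naive comparisons are confounded.
t = (rng.normal(size=n) + z > 0).astype(float)
# Outcome Y (e.g., task completion score) depends on both T and Z.
y = 1.0 * t + 2.0 * z + rng.normal(size=n)

# Naive estimate: raw difference in observed means, biased by Z.
naive = y[t == 1].mean() - y[t == 0].mean()

# Backdoor adjustment by stratifying on a discretized Z:
# E[Y | do(T=t)] = sum_z E[Y | T=t, Z=z] P(Z=z).
edges = np.quantile(z, np.linspace(0, 1, 11))
strata = np.digitize(z, edges[1:-1])

adjusted = 0.0
for s in np.unique(strata):
    m = strata == s
    treated = m & (t == 1)
    control = m & (t == 0)
    if treated.any() and control.any():
        adjusted += m.mean() * (y[treated].mean() - y[control].mean())

print(f"naive difference in means:  {naive:.2f}")    # inflated by confounding
print(f"backdoor-adjusted estimate: {adjusted:.2f}")  # closer to the true effect of 1.0
```

Running this, the naive estimate absorbs the effect of the confounder, while the stratified estimate recovers a value near the true effect; finer stratification or regression-based adjustment would reduce the residual bias further.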
Older Adults' Task Preferences for Robot Assistance in the Home
Ajaykumar, Gopika, Huang, Chien-Ming
Artificial intelligence technologies that can assist with at-home tasks have the potential to help older adults age in place. Robot assistance in particular has been applied toward physical and cognitive support for older adults living independently at home. Surveys, questionnaires, and group interviews have been used to understand which tasks older adults want robots to assist them with. We build upon prior work exploring older adults' task preferences for robot assistance through field interviews situated within older adults' aging contexts. Our findings support results from prior work indicating that older adults prefer physical assistance over social and care-related support from robots and that they desire control when adopting robot assistance, while highlighting the variety of individual constraints, boundaries, and needs that may influence their preferences.
Multimodal Robot Programming by Demonstration: A Preliminary Exploration
Ajaykumar, Gopika, Huang, Chien-Ming
Recent years have seen growth in the number of industrial robots working closely with end-users such as factory workers. This growing use of collaborative robots has been enabled in part by the availability of end-user robot programming methods that allow users who are not robot programmers to teach robots task actions. Programming by Demonstration (PbD) is one such end-user programming method; it enables users to bypass the complexities of specifying robot motions using programming languages by instead demonstrating the desired robot behavior. Demonstrations are often provided by physically guiding the robot through the motions required for a task action, in a process known as kinesthetic teaching. Kinesthetic teaching enables users to directly demonstrate task behaviors in the robot's configuration space and imposes a low cognitive burden, making it a popular end-user robot programming method for collaborative robots. However, because kinesthetic teaching restricts the programmer's teaching to motion demonstrations, it fails to leverage information from other modalities that humans naturally use when providing physical task demonstrations to one another, such as gaze and speech. Incorporating multimodal information into the traditional kinesthetic programming workflow has the potential to enhance robot learning by highlighting critical aspects of a program, reducing ambiguity, and improving situational awareness for the robot learner; it can also provide insight into the human programmer's intent and difficulties. In this extended abstract, we describe a preliminary study on multimodal kinesthetic demonstrations and future directions for using multimodal demonstrations to enhance robot learning and user programming experiences.
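To make the idea of multimodal demonstrations concrete, the hypothetical sketch below shows one way speech and gaze events, timestamped on the same clock as the kinesthetic trajectory, could be attached to the waypoints they co-occur with. The data structures and names (Waypoint, Event, annotate) and the toy values are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch: aligning speech/gaze events with a kinesthetically
# demonstrated trajectory on a shared clock. Names and values are illustrative.
from dataclasses import dataclass
from bisect import bisect_right

@dataclass
class Waypoint:
    t: float        # seconds since demonstration start
    joints: tuple   # joint configuration sampled during kinesthetic guidance

@dataclass
class Event:
    t: float        # timestamp of the speech or gaze event
    modality: str   # "speech" or "gaze"
    label: str      # e.g., transcribed phrase or fixated object

def annotate(waypoints, events):
    """Attach each speech/gaze event to the nearest preceding waypoint,
    flagging salient moments in the demonstration for the robot learner."""
    times = [w.t for w in waypoints]
    annotations = {i: [] for i in range(len(waypoints))}
    for e in events:
        i = max(bisect_right(times, e.t) - 1, 0)
        annotations[i].append((e.modality, e.label))
    return annotations

# Toy usage: a short guided motion with one utterance and one gaze fixation.
traj = [Waypoint(0.0, (0.0, 0.0)), Waypoint(1.0, (0.3, 0.1)), Waypoint(2.0, (0.6, 0.4))]
evts = [Event(0.9, "speech", "insert the peg here"), Event(1.8, "gaze", "peg_hole")]
print(annotate(traj, evts))  # events grouped under the waypoints they co-occur with
```

In a real system, the alignment step would sit between recording and learning, so that segmentation or keyframe selection can weight the waypoints that coincide with deictic speech or object-directed gaze.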