Trafton, J. Gregory
The Perceived Danger (PD) Scale: Development and Validation
Molan, Jaclyn, Saad, Laura, Roesler, Eileen, McCurry, J. Malcolm, Gyory, Nathaniel, Trafton, J. Gregory
There are currently no psychometrically valid tools to measure the perceived danger of robots. To fill this gap, we provided a definition of perceived danger and developed and validated a 12-item bifactor scale through four studies. An exploratory factor analysis revealed four subdimensions of perceived danger: affective states, physical vulnerability, ominousness, and cognitive readiness. A confirmatory factor analysis confirmed the bifactor model. We then compared the perceived danger scale to the Godspeed perceived safety scale and found that the perceived danger scale is a better predictor of empirical data. We also validated the scale in an in-person setting and found that the perceived danger scale is sensitive to robot speed manipulations, consistent with previous empirical findings. Results across experiments suggest that the perceived danger scale is reliable, valid, and an adequate predictor of both perceived safety and perceived danger in human-robot interaction contexts.
Connection-Coordination Rapport (CCR) Scale: A Dual-Factor Scale to Measure Human-Robot Rapport
Lin, Ting-Han, Dinner, Hannah, Leung, Tsz Long, Mutlu, Bilge, Trafton, J. Gregory, Sebo, Sarah
Robots, particularly in service and companionship roles, must develop positive relationships with the people they interact with regularly to be successful. These positive human-robot relationships can be characterized as establishing "rapport," which indicates the mutual understanding and interpersonal connection that form the groundwork for successful long-term human-robot interaction. However, the human-robot interaction research literature lacks scale instruments to assess human-robot rapport across a variety of situations. In this work, we developed the 18-item Connection-Coordination Rapport (CCR) Scale to measure human-robot rapport. We first ran Study 1 (N = 288), in which online participants rated videos of human-robot interactions using a set of candidate items. Study 1 revealed two factors in our scale, which we named "Connection" and "Coordination." We then evaluated the scale in Study 2 (N = 201), in which online participants rated a new set of human-robot interaction videos with our scale and, for comparison, an existing rapport scale from virtual agents research. We further validated the scale by replicating a prior in-person human-robot interaction study in Study 3 (N = 44) and found that rapport was rated significantly higher when participants interacted with a responsive robot (responsive condition) than with an unresponsive robot (unresponsive condition). Results from these studies demonstrate high reliability and validity for the CCR Scale, which can be used to measure rapport from both first-person and third-person perspectives. We encourage the adoption of this scale in future studies to measure rapport in a variety of human-robot interactions.
Automated Surveillance from a Mobile Robot
Lawson, Wallace (Naval Research Lab) | Sullivan, Keith (Naval Research Lab) | Bekele, Esube (Naval Research Lab) | Hiatt, Laura M. (Naval Research Lab) | Goring, Robert (Naval Research Lab) | Trafton, J. Gregory (Naval Research Lab)
In this paper, we propose to augment an existing video surveillance system with a mobile robot. The robot acts as a collaborator with a human; together they monitor an environment, both looking for objects that are out of the ordinary and describing people found near those objects. To find anomalies, our robot must first build a dictionary describing the things that are normally seen while navigating through each environment. We use a computational cognitive model, ACT-R/E, to learn which dictionary elements are normal for each environment. Finally, the robot makes note of people seen in the environment and builds a human-understandable description of each individual. When an anomaly is detected, the robot can then report people recently seen nearby, as they may be potential witnesses or people of interest. Our system operates in real time, and we demonstrate its operation on several examples.
Anticipation of Touch Gestures to Improve Robot Reaction Time
Narber, Cody G. (Naval Research Laboratory) | Lawson, Wallace (Naval Research Lab) | Trafton, J. Gregory (Naval Research Lab)
Nonverbal communication is a critical way for humans to relay information and can take many forms, including hand gestures, touch, and facial expressions. Our work focuses on touch gestures. In typical systems, recognition does not begin until after the gesture has completed, and planning an appropriate response to the touch adds further delay, slowing the robot's reaction. We trained an artificial neural network on features extracted from the Leap Motion Controller and successfully performed early recognition of touch gestures with high accuracy.