As robotic technologies mature, we can imagine an increasing number of applications in which robots could soon prove useful in unstructured human environments. Many of those applications require a natural interface between the robot and untrained human users, or are possible only in a human-robot collaborative scenario. In this paper, we study an example of such a scenario, in which a visually impaired person and a robotic guide collaborate in an unfamiliar environment. We then analyze how the scenario can be realized through language- and gesture-based human-robot interaction, combined with semantic spatial understanding and reasoning, and propose an integration of a semantic world model with language and gesture models for several collaboration modes. We believe that, in this way, practical robotic applications can be achieved in human environments using currently available technology.
Maeda, Guilherme (Technische Universitaet Darmstadt) | Maloo, Aayush (Indian Institute of Technology Madras) | Ewerton, Marco (Technische Universitaet Darmstadt) | Lioutikov, Rudolf (Technische Universitaet Darmstadt) | Peters, Jan (Technische Universitaet Darmstadt)
This paper introduces our initial investigation into the problem of providing a semi-autonomous robot collaborator with anticipative capabilities to predict human actions. Anticipative behavior is a desired characteristic of robot collaborators that leads to fluid, proactive interactions. We are particularly interested in improving reactive methods that rely on human action recognition to activate the corresponding robot action. Action recognition invariably causes delay in the robot’s response, and the goal of our method is to eliminate this delay by predicting the next human action. Prediction is achieved by using a lookup table containing variations of assembly sequences previously demonstrated by different users. The method selects the nearest-neighbor sequence in the table that matches the actual sequence of human actions. At the movement level, our method uses a probabilistic representation of interaction primitives to generate robot trajectories. The method is demonstrated on an assembly task consisting of 17 steps, using a 7-degree-of-freedom lightweight arm equipped with a 5-finger hand.
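The lookup-table prediction described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, action labels, and the prefix-matching distance are illustrative assumptions. The idea is simply to find the demonstrated sequence whose prefix best matches the actions observed so far, and to read off its next step.

```python
# Illustrative sketch (assumed names and labels, not the paper's code):
# predict the next human action by nearest-neighbor lookup over
# previously demonstrated assembly sequences.

def predict_next_action(observed, demonstrated_sequences):
    """Return the predicted next action given the observed action prefix.

    observed: list of action labels seen so far.
    demonstrated_sequences: lookup table of full assembly sequences
        collected from different users' demonstrations.
    """
    def prefix_distance(seq):
        # Count mismatches over the observed prefix; sequences that do
        # not extend beyond the prefix cannot supply a next action.
        if len(seq) <= len(observed):
            return float("inf")
        return sum(a != b for a, b in zip(observed, seq))

    nearest = min(demonstrated_sequences, key=prefix_distance)
    if prefix_distance(nearest) == float("inf"):
        return None  # no demonstrated sequence extends the observed prefix
    return nearest[len(observed)]

# Example: two demonstrated variations of a toy assembly task.
table = [
    ["pick_base", "insert_peg", "attach_lid", "screw"],
    ["pick_base", "attach_lid", "insert_peg", "screw"],
]
print(predict_next_action(["pick_base", "insert_peg"], table))  # attach_lid
```

In the example, the observed prefix matches the first demonstrated sequence exactly, so its next step is returned before the human begins it, which is the delay-elimination effect the abstract describes.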
In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots, inferred capability and intention, and their relationship to overall trust and eventual decisions. In particular, we examine delegation situations characterized by uncertainty, and explore how inferred capability and intention are applied across different tasks. We develop an online survey where human participants decide whether to delegate control to a simulated UAV agent. Our study shows that human estimations of robot capability and intent correlate strongly with overall self-reported trust. However, overall trust alone is not sufficient to determine whether a human will decide to trust (delegate) a given task to a robot. Instead, our study reveals that estimations of robot intention, capability, and overall trust are integrated when deciding to delegate. From a broader perspective, these results suggest that calibrating overall trust alone is insufficient; to make correct decisions, humans need (and use) multifaceted mental models when collaborating with robots across multiple contexts. INTRODUCTION: Trust is a cornerstone of long-lasting collaboration in human teams, and is crucial for human-robot cooperation. For example, human trust in robots influences usage and the willingness to accept information or suggestions. Misplaced trust in robots can lead to poor task allocation and unsatisfactory outcomes.
Wu, Jane (Harvey Mudd College) | Paeng, Erin (Harvey Mudd College) | Linder, Kari (Claremont McKenna College) | Valdesolo, Piercarlo (Claremont McKenna College) | Boerkoel, James C. (Harvey Mudd College)
Trust plays a key role in social interactions, particularly when the decisions we make depend on the people we face. In this paper, we use game theory to explore whether a person’s decisions are influenced by the type of agent they interact with: human or robot. By adopting a coin entrustment game, we quantitatively measure trust and cooperation to see whether such phenomena emerge differently when a person believes they are playing against a robot rather than another human. We found that while people cooperate with other humans and robots at a similar rate, they grow to trust robots more completely than humans. As a possible explanation for these differences, our survey results suggest that participants perceive humans as having the capacity for feelings and sympathy, whereas they perceive robots as being more precise and reliable.
Today, technological advancement has reached a level where many everyday tasks are automated. This breakthrough has driven the world toward a new scenario in which humans and robots work together. For now, most autonomous systems or robots drive vehicles, vacuum home floors, turn lights on and off, care for the elderly, tend crops and pick fruits, and much more. These systems are becoming capable enough to work alongside the human workforce in shared spaces as teammates. Just as smartphones and social media provide connectivity beyond our imagination, robots have started to offer humans physical and cognitive abilities they never expected before.