Top 10 Creepy Robots Ranked By IEEE

#artificialintelligence

There has been an exponential increase in R&D related to virtual reality, photorealistic computer animation, and augmented reality. Researchers around the globe are working in these domains and have been creating machines with the abilities of a human -- humanoid robots. This is an emerging research domain that now plays a crucial role in robotics research. However, according to the uncanny valley theory, humanoid objects that imperfectly resemble actual human beings provoke uncanny, or strangely familiar, feelings of revulsion in observers. IEEE Spectrum ranks robots into three categories: top-rated, creepiest, and most wanted.


Learning to Maintain Engagement: No One Leaves a Sad DragonBot

AAAI Conferences

Engagement is a key factor in every social interaction, be it between humans or between humans and robots. Many studies have aimed at designing robot behavior that sustains human engagement. Infants and children, however, learn how to engage their caregivers to receive more attention. We used a social robot platform, DragonBot, that learned which of its social behaviors retained human engagement. This was achieved with a reinforcement learning algorithm in which the reward is the proximity and number of people near the robot. The experiment was run at the World Science Festival in New York, where hundreds of people interacted with the robot. After more than two continuous hours of interaction, the robot learned by itself that making a sad face was the most rewarding expression. Further analysis showed that after a sad face, people's engagement rose for thirty seconds. In other words, the robot learned by itself, in two hours, that almost no one leaves a sad DragonBot.
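
The abstract does not specify the learning algorithm beyond a reward tied to the proximity and number of nearby people, so the sketch below illustrates the idea with an epsilon-greedy bandit over candidate expressions. The expression set, the reward proxy, and all parameters are assumptions for illustration, not the authors' implementation.

    # Minimal sketch: an epsilon-greedy bandit that picks a facial expression
    # and is rewarded by how many people stay near the robot. Action names,
    # reward function, and parameters are illustrative assumptions.
    import random

    EXPRESSIONS = ["happy", "sad", "surprised", "neutral"]  # assumed action set

    class EngagementBandit:
        def __init__(self, actions, epsilon=0.1):
            self.actions = actions
            self.epsilon = epsilon
            self.counts = {a: 0 for a in actions}
            self.values = {a: 0.0 for a in actions}

        def choose(self):
            if random.random() < self.epsilon:
                return random.choice(self.actions)          # explore
            return max(self.actions, key=self.values.get)   # exploit

        def update(self, action, reward):
            # Incremental mean of observed rewards for this expression.
            self.counts[action] += 1
            n = self.counts[action]
            self.values[action] += (reward - self.values[action]) / n

    def engagement_reward(num_people_nearby, mean_distance_m):
        # Assumed proxy: more people standing closer means higher engagement.
        return num_people_nearby / (1.0 + mean_distance_m)

In such a setup, an expression like the sad face would simply accumulate the highest average reward over repeated trials and come to dominate the robot's choices.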


These Robotic Objects Are Designed to Be Stabbed and Beaten to Help You Feel Better

#artificialintelligence

At a human-computer interaction conference this week in Glasgow, U.K., Carnegie Mellon University researcher Michal Luria is presenting a paper on "Challenges of Designing HCI for Negative Emotions." The discussion includes a case study involving what Luria calls "cathartic objects": robotic contraptions that you can beat, stab, smash, and swear at to help yourself feel better. In the paper, presented at the ACM CHI Conference on Human Factors in Computing Systems, Luria and co-authors Amit Zoran and Jodi Forlizzi point out that technology tends to handle negative emotions by attempting to "fix" them immediately: Technology is often designed to support positive emotions, yet it is not very common to encounter technology that helps people engage with emotions of sadness, anger or loneliness (as opposed to resolving them)... As technology gains a central role in shaping everyday life and is becoming increasingly social, perhaps there is a design space for interaction with social and personal negative emotions. The researchers acknowledge that it's going to be challenging to find "cathartic" ways of engaging with negative emotions using technology that can demonstrably improve well-being, and that studying the topic is going to be tricky as well.


The Impact of Humanoid Affect Expression on Human Behavior in a Game-Theoretic Setting

arXiv.org Artificial Intelligence

With the rapid development of robots and other intelligent and autonomous agents, how a human can be influenced by a robot's expressed mood when making decisions has become a crucial question in human-robot interaction. In this pilot study, we investigate (1) in what way a robot can express a certain mood to influence a human's decision-making behavioral model, and (2) how and to what extent the human will be influenced in a game-theoretic setting. More specifically, we create an NLP model to generate sentences that adhere to a specific affective expression profile. We use these sentences for a humanoid robot as it plays a Stackelberg security game against a human, and we investigate the behavioral model of the human player.
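
For readers unfamiliar with the setting, the toy sketch below shows the structure of a Stackelberg security game: the defender (leader) commits to a coverage distribution over targets, the attacker (follower) observes it and best-responds, and the defender searches for the commitment that maximizes its own expected payoff. The two-target setup, payoff numbers, and grid search are illustrative assumptions, not the game used in the study.

    # Toy Stackelberg security game with one divisible defender resource
    # split across two targets. Payoff values are made up for illustration.
    # Tuple order: (defender_covered, defender_uncovered,
    #               attacker_covered, attacker_uncovered)
    PAYOFFS = {
        "t1": (2.0, -3.0, -1.0, 4.0),
        "t2": (1.0, -1.0, -2.0, 2.0),
    }

    def best_commitment(steps=100):
        best = None
        for i in range(steps + 1):
            c1 = i / steps
            cover = {"t1": c1, "t2": 1.0 - c1}  # leader's committed coverage

            def attacker_value(t):
                _, _, ac, au = PAYOFFS[t]
                return cover[t] * ac + (1.0 - cover[t]) * au

            # Follower observes the commitment and attacks its best target.
            target = max(PAYOFFS, key=attacker_value)
            dc, du, _, _ = PAYOFFS[target]
            defender_value = cover[target] * dc + (1.0 - cover[target]) * du
            if best is None or defender_value > best[0]:
                best = (defender_value, cover, target)
        return best

    if __name__ == "__main__":
        value, coverage, attacked = best_commitment()
        print(f"defender value {value:.2f}, coverage {coverage}, attacked {attacked}")

The study's question is then how the robot's expressed mood shifts the human follower away from the perfectly rational best response assumed in this kind of model.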


Towards Learning How to Properly Play UNO with the iCub Robot

arXiv.org Artificial Intelligence

While interacting with another person, our reactions and behavior are strongly affected by the emotional changes within the temporal context of the interaction. Our intrinsic affective appraisal, comprising perception, self-assessment, and affective memories of similar social experiences, drives specific reactions, in most cases regarded as proper, within the interaction. This paper proposes a roadmap for the development of multimodal research that aims to empower a robot with the capability to provide proper social responses in a Human-Robot Interaction (HRI) scenario. Our capabilities of both perceiving and reacting to the affective behavior of other persons are fine-tuned based on the observed social responses of our interaction peers. We usually perceive how others are behaving towards us by reading their affective behavior through the processing of audio/visual cues [13].
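
As a loose illustration of the appraisal loop the roadmap describes (perceiving audio/visual cues, consulting affective memories of similar experiences, and selecting a response), here is a heavily simplified sketch. The fusion scheme, emotion labels, and memory structure are assumptions for illustration and not the authors' proposed architecture.

    # Simplified affect-appraisal loop: late fusion of per-modality emotion
    # scores plus a memory of which responses worked before. All names and
    # structures are illustrative assumptions.
    from collections import deque

    EMOTIONS = ["happy", "neutral", "frustrated"]

    def fuse(audio_scores, visual_scores, w_audio=0.4):
        # Late fusion: weighted average of per-emotion probabilities.
        return {e: w_audio * audio_scores[e] + (1.0 - w_audio) * visual_scores[e]
                for e in EMOTIONS}

    class AffectiveMemory:
        def __init__(self, size=50):
            # Episodes of (perceived_emotion, response, outcome_score).
            self.episodes = deque(maxlen=size)

        def best_response(self, emotion, default="acknowledge"):
            # Prefer the response that previously led to the best outcome
            # for this perceived emotion.
            candidates = [(o, r) for e, r, o in self.episodes if e == emotion]
            return max(candidates)[1] if candidates else default

        def store(self, emotion, response, outcome):
            self.episodes.append((emotion, response, outcome))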