Results


Ergonomically Intelligent Physical Human-Robot Interaction: Postural Estimation, Assessment, and Optimization

arXiv.org Artificial Intelligence

Ergonomics and human comfort are essential concerns in physical human-robot interaction applications, and common practical methods either fail to estimate the correct posture due to occlusion or rely on less accurate ergonomics models in their postural optimization. Instead, we propose a novel framework for posture estimation, assessment, and optimization for ergonomically intelligent physical human-robot interaction. We show that we can estimate human posture solely from the trajectory of the interacting robot. We propose DULA, a differentiable ergonomics model, and use it in gradient-free postural optimization for physical human-robot interaction tasks such as co-manipulation and teleoperation. We evaluate our framework through human and simulation experiments.
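To make the optimization step concrete, here is a minimal sketch of gradient-free postural optimization. The cost function and all names are illustrative assumptions, not the paper's DULA model: a simple quadratic discomfort stands in for the learned ergonomics model, and plain random-search hill climbing stands in for the paper's optimizer.

```python
import random

def ergonomic_cost(posture, neutral=(0.0, 0.0, 0.0)):
    # Illustrative stand-in for a learned ergonomics model:
    # discomfort grows with squared deviation from a neutral posture.
    return sum((q - n) ** 2 for q, n in zip(posture, neutral))

def optimize_posture(initial, iters=2000, step=0.1, seed=0):
    # Gradient-free optimization: propose random joint-angle
    # perturbations and keep only those that lower the cost.
    rng = random.Random(seed)
    best, best_cost = list(initial), ergonomic_cost(initial)
    for _ in range(iters):
        candidate = [q + rng.uniform(-step, step) for q in best]
        cost = ergonomic_cost(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

posture, cost = optimize_posture([0.8, -0.5, 0.3])
```

Because the search only ever evaluates the cost, never its gradient, the same loop works whether the ergonomics model is differentiable or not.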


Why Did the Robot Cross the Road? A User Study of Explanation in Human-Robot Interaction

arXiv.org Artificial Intelligence

This work documents a pilot user study evaluating the effectiveness of contrastive, causal and example explanations in supporting human understanding of AI in a hypothetical, commonplace human-robot interaction (HRI) scenario. In doing so, this work situates explainable AI (XAI) in the context of the social sciences and suggests that HRI explanations are improved when informed by the social sciences.


Modeling Trust in Human-Robot Interaction: A Survey

arXiv.org Artificial Intelligence

As the autonomy and capabilities of robotic systems increase, they are expected to play the role of teammates rather than tools and interact with human collaborators in a more realistic manner, creating a more human-like relationship. Given the impact of trust observed in human-robot interaction (HRI), appropriate trust in robotic collaborators is one of the leading factors influencing the performance of human-robot interaction. Team performance can be diminished if people do not trust robots appropriately, disusing or misusing them based on limited experience. Therefore, trust in HRI needs to be calibrated properly, rather than maximized, to allow an appropriate level of trust to form in human collaborators. For trust calibration in HRI, trust needs to be modeled first. There are many reviews on the factors affecting trust in HRI; however, since there are no reviews focused on different trust models, in this paper we review different techniques and methods for trust modeling in HRI. We also present a list of potential directions for further research and some challenges that need to be addressed in future work on human-robot trust modeling.


'Swapping bodies' changes a person's personality, study reveals

The Independent - Tech

Swapping bodies with another person would have a profound effect on the subject's behaviour and even their personality, a new study has revealed. Scientists at the Karolinska Institutet in Sweden discovered a way to allow people to experience the effect of swapping bodies, through a perceptual illusion, in order to understand the relationship between a person's psychological and physical sense of self. They found that when pairs of friends "switched bodies", each friend's personality became more like the other. "Body swapping is not a domain reserved for science fiction anymore," said Pawel Tacikowski, a postdoctoral researcher at the institute and lead author of the study. In order to create the illusion that the study's subjects had switched bodies, Dr Tacikowski and his team fitted them with virtual reality goggles showing live feeds of the other person's body from a first-person perspective.


Personality in Healthcare Human Robot Interaction (H-HRI): A Literature Review and Brief Critique

arXiv.org Artificial Intelligence

Robots are becoming an important way to deliver health care, and personality is vital to understanding their effectiveness. Despite this, there is no systematic, overarching understanding of personality in health care human-robot interaction (H-HRI). To address this, the authors conducted a review that identified 18 studies on personality in H-HRI. This paper presents the results of that systematic literature review. Insights are derived from this review regarding the methodologies, outcomes, and samples utilized. The authors discuss findings across this literature while identifying several gaps worthy of attention. Overall, this paper is an important starting point in understanding personality in H-HRI.


'Touchless touchscreen' could fight future epidemics, researchers say

The Independent - Tech

Cambridge University researchers have developed a "no-touch touchscreen" that uses artificial intelligence to predict a user's intention before their hand reaches the display. The screen was originally designed for use in cars, but the engineers who built it claim it could also have widespread applications during a pandemic. The "predictive touch" technology can be retrofitted to existing displays and could be used to prevent the spread of pathogens on touchscreens at supermarket check-outs, ATMs and ticket terminals at railway stations. Studies have shown that coronavirus can remain on plastic and glass for anywhere between two hours and a week, meaning touchscreens in public places need to be constantly disinfected to prevent transmission. "Touchscreens and other interactive displays are something most people use multiple times per day, but they can be difficult to use while in motion, whether that's driving a car or changing the music on your phone while you're running," said Simon Godsill from the university's department of engineering.


Deployment and Evaluation of a Flexible Human-Robot Collaboration Model Based on AND/OR Graphs in a Manufacturing Environment

arXiv.org Artificial Intelligence

The Industry 4.0 paradigm promises shorter development times, improved ergonomics, higher flexibility, and resource efficiency in manufacturing environments. Collaborative robots are an important tangible technology for implementing such a paradigm. A major bottleneck to effectively deploying collaborative robots in manufacturing industries is developing task planning algorithms that enable them to recognize and naturally adapt to varying and even unpredictable human actions while simultaneously ensuring overall efficiency in terms of production cycle time. In this context, an architecture encompassing task representation, task planning, sensing, and robot control has been designed, developed and evaluated in a real industrial environment. A pick-and-place palletization task, which requires collaboration between humans and robots, is investigated. The architecture uses AND/OR graphs for representing and reasoning upon human-robot collaboration models online. Furthermore, objective measures of overall computational performance and subjective measures of naturalness in human-robot collaboration have been evaluated through experiments with production-line operators. The results of this user study demonstrate how human-robot collaboration models like the one proposed here can enhance operators' flexibility and comfort in the workplace. In this regard, an extensive comparison study among recent models has been carried out.
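The AND/OR graph representation mentioned above can be sketched in a few lines. This is a minimal, generic illustration of the data structure, not the paper's architecture; the task names and graph shape are invented for a toy pick-and-place scenario.

```python
# Minimal AND/OR graph: each key maps a task to a list of OR-branches,
# and each branch is a list of subtasks that must ALL be solved (AND).
# Leaves with no branches are primitive actions, solved directly.
graph = {
    "pallet_loaded": [["box_picked", "box_placed"]],       # AND of two subtasks
    "box_picked":    [["human_picks"], ["robot_picks"]],   # OR: human or robot
    "box_placed":    [["robot_places"]],
    "human_picks":   [],
    "robot_picks":   [],
    "robot_places":  [],
}

def solved(node, done):
    # A node is solved if it is a completed primitive action, or if
    # at least one branch (OR) has all of its subtasks solved (AND).
    branches = graph[node]
    if not branches:
        return node in done
    return any(all(solved(c, done) for c in branch) for branch in branches)
```

The OR-branches are what give the model its flexibility: whether the human or the robot picks the box, the same root task is recognized as achieved, so the planner can adapt online to whichever action the operator actually performs.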


Staring at your phone at night could be linked to depression, study finds

The Independent - Tech

Exposure to artificial light at night has been found to induce depressive-like behaviour in mice, a new study has found. The findings could help with the understanding of how exposure to excessive light at night-time triggers depression in humans. Researchers from the University of Science and Technology of China exposed mice to two hours of blue light – the same light emitted by light pollution or electronic devices like smartphones – for several weeks. They observed that after three weeks the animals displayed depressive tendencies, measured by reduced escape behaviour and decreased preference for sugar. These bouts of depression lasted for up to an additional three weeks after the mice were no longer exposed to light at night.


Artificial intelligence translates thoughts into text using brain implant

#artificialintelligence

Scientists have developed an artificial intelligence system that can translate a person's thoughts into text by analysing their brain activity. Researchers at the University of California, San Francisco, developed the AI to decipher up to 250 words in real time from a set of between 30 and 50 sentences. The algorithm was trained using the neural signals of four women with electrodes implanted in their brains, which were already in place to monitor epileptic seizures. The volunteers repeatedly read sentences aloud while the researchers fed the brain data to the AI to unpick patterns that could be associated with individual words. The average word error rate across a repeated set was as low as 3 per cent.
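The "word error rate" reported above is a standard metric: the word-level edit distance between the decoded sentence and the reference, normalized by the reference length. A minimal sketch of that computation (the example sentences are invented, and nothing here reflects the study's actual decoder):

```python
def word_error_rate(reference, hypothesis):
    # Levenshtein (edit) distance over words, normalized by
    # the number of words in the reference sentence.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

On this metric, a decoder that got one word wrong in a five-word sentence would score 20 per cent; the 3 per cent figure means roughly one word in thirty-three was misdecoded on the repeated sentence set.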


A Review of Personality in Human Robot Interactions

arXiv.org Artificial Intelligence

Personality has been identified as a vital factor in understanding the quality of human-robot interactions. Despite this, the research in this area remains fragmented and lacks a coherent framework, which makes it difficult to understand what we know and to identify what we do not. As a result, our knowledge of personality in human-robot interactions has not kept pace with the deployment of robots in organizations or in our broader society. To address this shortcoming, this paper reviews 83 articles and 84 separate studies to assess the current state of human-robot personality research. This review: (1) highlights major thematic research areas, (2) identifies gaps in the literature, (3) derives and presents major conclusions from the literature, and (4) offers guidance for future research.