Collaborating Authors

 Sandygulova, Anara


Paper index: Designing an introductory HRI course (workshop at HRI 2024)

arXiv.org Artificial Intelligence

Human-robot interaction is now an established discipline. Dozens of HRI courses exist at universities worldwide, and some institutions even offer degrees in HRI. However, although many students are being taught HRI, there is no agreed-upon curriculum for an introductory HRI course. In this workshop, we aimed to reach community consensus on what should be covered in such a course. Through interactive activities like panels, breakout discussions, and syllabus design, workshop participants explored the many topics and pedagogical approaches for teaching HRI. This collection of articles submitted to the workshop provides examples of HRI courses being offered worldwide.


SLIRS: Sign Language Interpreting System for Human-Robot Interaction

AAAI Conferences

Deaf-mute communities around the world need an effective human-robot interaction system that can act as an interpreter in public places such as banks, hospitals, or police stations. The focus of this work is to address the challenges faced by hearing-impaired people by developing an interpreting robotic system for effective communication in public places. To this end, we utilize a previously developed neural network-based learning architecture to recognize the Cyrillic manual alphabet, which is used for fingerspelling in Kazakhstan. To train and test the recognition system, we collected a depth dataset from ten people and applied it to a learning-based method for gesture recognition by modeling motion data. We report results showing an average accuracy of 77.2% for recognition of the complete 33-letter alphabet.
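The abstract does not detail the classifier itself, so as a minimal illustrative baseline (not the paper's neural architecture), fingerspelling recognition over depth-derived hand-shape features can be sketched as a nearest-centroid classifier: fit one centroid per letter from labeled feature vectors, then assign a new sample to the closest centroid. The feature representation and letter labels below are hypothetical placeholders.

```python
import math

NUM_LETTERS = 33  # size of the Cyrillic manual alphabet from the paper


def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]


def nearest_centroid_fit(samples):
    """samples: dict mapping letter -> list of feature vectors.
    Returns one centroid per letter."""
    return {letter: centroid(vecs) for letter, vecs in samples.items()}


def predict(centroids, x):
    """Return the letter whose centroid is closest (Euclidean) to x."""
    return min(centroids, key=lambda letter: math.dist(centroids[letter], x))
```

A learned neural network would replace the centroid model with a trained feature extractor and classifier, but the fit/predict interface stays the same.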


Child-Centred Motion-Based Age and Gender Estimation with Neural Network Learning

AAAI Conferences

The focus of this work is to investigate how children's perception of the robot changes with age and gender, and to enable the robot to adapt to these differences to improve human-robot interaction (HRI). We propose a neural network-based learning architecture to estimate children's age and gender from body motion while performing a set of actions. To evaluate our system, we collected a fully annotated depth dataset of 28 children (aged between 7 and 16 years) and applied it to a learning-based method for age and gender estimation by modeling children's 3D skeleton motion data. We discuss results showing average accuracies of 95.2% and 90.3% for age and gender estimation, respectively, in the context of a real-world scenario.
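The abstract describes estimating age and gender from 3D skeleton motion. As an illustrative sketch (not the paper's learning architecture), two simple features such a pipeline might compute from a sequence of skeleton frames are a body-height proxy (vertical joint extent, which correlates with age in children) and mean joint speed; the frame format below is a hypothetical simplification.

```python
import math


def skeleton_features(frames):
    """frames: list of frames, each a list of (x, y, z) joint positions.
    Returns (height_proxy, mean_speed) as simple motion descriptors."""
    # Height proxy: vertical extent of the joints in the first frame.
    ys = [p[1] for p in frames[0]]
    height = max(ys) - min(ys)
    # Mean speed: average per-joint displacement between consecutive frames.
    total, count = 0.0, 0
    for prev, cur in zip(frames, frames[1:]):
        for a, b in zip(prev, cur):
            total += math.dist(a, b)
            count += 1
    mean_speed = total / count if count else 0.0
    return height, mean_speed
```

In the paper's setting, a neural network would be trained end-to-end on the skeleton sequences rather than on hand-crafted descriptors like these; the sketch only illustrates what motion data exposes about age and gender.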