Collaborating Authors

scassellati


Yale researchers develop AI technology for adults with autism

#artificialintelligence

Researchers from several American universities are collaborating to develop artificial intelligence-based software to help people on the autism spectrum find and hold meaningful employment. The project is a collaboration between experts at Vanderbilt, Yale, Cornell and the Georgia Institute of Technology. It consists of developing multiple pieces of technology, each aimed at a different aspect of supporting people with Autism Spectrum Disorder (ASD) in the workplace, according to Nilanjan Sarkar, professor of engineering at Vanderbilt University and the leader of the project. "We realized together that there are some support systems for children with autism in this society, but as soon as they become 18 years old and more, there is a support cliff and the social services are not as much," Sarkar said. The project began a year ago with preliminary funding from the National Science Foundation. The NSF initially invested in around 40 projects, but only four -- including this one -- were chosen to be funded for a longer term of two years.


Researchers taught an AI about ownership rules and social norms

#artificialintelligence

With the growing prevalence of AI and robotics in our social lives, social competence is becoming a crucial component for intelligent systems that interact with humans.


Theories of Parenting and their Application to Artificial Intelligence

Croeser, Sky, Eckersley, Peter

arXiv.org Artificial Intelligence

As machine learning (ML) systems have advanced, they have acquired more power over humans' lives, and questions about what values are embedded in them have become more complex and fraught. It is conceivable that in the coming decades, humans may succeed in creating artificial general intelligence (AGI) that thinks and acts with an open-endedness and autonomy comparable to that of humans. The implications would be profound for our species; they are now widely debated not just in science fiction and speculative research agendas but increasingly in serious technical and policy conversations. Much work is underway to try to weave ethics into advancing ML research. We think it useful to add the lens of parenting to these efforts, and specifically radical, queer theories of parenting that consciously set out to nurture agents whose experiences, objectives and understanding of the world will necessarily be very different from their parents'. We propose a spectrum of principles which might underpin such an effort; some are relevant to current ML research, while others will become more important if AGI becomes more likely. These principles may encourage new thinking about the development, design, training, and release into the world of increasingly autonomous agents.


Robotic Teachers Can Adjust Style Based on Student Success

#artificialintelligence

Teachers are often stretched thin. As classroom sizes get larger and resources dwindle, it can be a significant challenge for even the most qualified teacher to provide individual attention to every single child, especially those with special challenges or learning difficulties. As part of the National Science Foundation (NSF) Expeditions in Computing, researchers from Yale University are developing socially assistive robotics -- a new field of robotics that focuses on assisting users through social rather than physical interaction. A core part of their research is to design these robots to work with children, including those with challenges such as autism or hearing impairment, or whose first language is not English. The goal is not to replace teachers, but to assist them, said Brian Scassellati, a professor of computer science, cognitive science, and mechanical engineering at Yale University and director of the NSF Expedition on Socially Assistive Robotics.


Establishing Sustained, Supportive Human-Robot Relationships: Building Blocks and Open Challenges

Strohkorb, Sarah (Yale University) | Huang, Chien-Ming (Yale University) | Ramachandran, Aditi (Yale University) | Scassellati, Brian (Yale University)

AAAI Conferences

Social robots are increasingly common in schools to support learning goals, in workplaces to augment productivity, and in homes to improve quality of life. The fulfillment of their objectives in these environments is strongly dependent on the quality of the sustained, supportive relationships robots are able to construct with their human users. Researchers have been developing algorithms to aid robots in determining task hierarchies (Hayes and Scassellati 2014), learning tasks from humans (Thomaz and Breazeal 2008), and choosing what information to communicate and when to communicate it (Unhelkar and Shah 2016).


Who's Talking? — Efference Copy and a Robot's Sense of Agency

Brody, Justin (Goucher College) | Perlis, Don (University of Maryland, College Park) | Shamwell, Jared (University of Maryland, College Park)

AAAI Conferences

How can a robot tell when it — rather than another agent — is making an utterance or performing an action? This is rather tricky and also very important for human-robot (or even robot-robot) interaction. Here we outline our beginning attempt to deal with this issue.
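The self/other attribution problem posed above is often framed in terms of a forward model driven by an efference copy: the agent predicts the sensory consequence of its own motor command and attributes the observed signal to itself only when prediction and observation match. The following is a minimal toy sketch of that idea; the forward model, signal values, and threshold are all invented for illustration and are not the authors' implementation.

```python
import numpy as np

def attribute_to_self(motor_command, observed_signal, forward_model, threshold=0.1):
    """Return True when the forward model's prediction from the efference
    copy matches the observed signal closely enough to count as self-caused."""
    predicted = forward_model(motor_command)
    return np.linalg.norm(predicted - observed_signal) < threshold

# Example with a trivial forward model (a fixed gain on the command)
forward_model = lambda u: 2.0 * u
u = np.array([0.5, -0.2])
self_signal = 2.0 * u + 0.01           # sensed consequence of the robot's own action
other_signal = np.array([1.5, 0.7])    # signal produced by another agent
```

In this framing, an utterance heard while no matching efference copy is pending would fall outside the threshold and be attributed to another agent.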


Using Small Humanoid Robots to Detect Autism in Toddlers

Manner, Marie D. (University of Minnesota)

AAAI Conferences

Autism Spectrum Disorder is a developmental disorder often characterized by limited social skills, repetitive behaviors, obsessions, and/or routines. Using the small humanoid robot NAO, we designed an interactive program to elicit common social cues from toddlers in the presence of trained psychologists during standard toddler assessments. Our program captures three different videos of the child-robot interaction; we then develop algorithms to analyze the videos and flag autistic behavior, making diagnosis easier for clinicians. Our novel contributions will be automatic video processing and automatic behavior classification for clinicians to use with toddlers, validated on a large number of subjects and using a reproducible and portable robotic program for the NAO robot.


Social Hierarchical Learning

Hayes, Bradley (Yale University)

AAAI Conferences

My dissertation research focuses on the application of hierarchical learning and heuristics based on social signals to solve challenges inherent to enabling human-robot collaboration. I approach this problem through advancing the state of the art in building hierarchical task representations, multi-agent task-level planning, and learning assistive behaviors from demonstration.


Developing Effective Robot Teammates for Human-Robot Collaboration

Hayes, Bradley (Yale University) | Scassellati, Brian (Yale University)

AAAI Conferences

Developing collaborative robots that can productively operate out of isolation and work safely in uninstrumented, human-populated environments is critically important for advancing the field of robotics. Especially in domains where modern robots are ineffective, we wish to leverage human-robot teaming to improve the efficiency, ability, and safety of human workers. Our work, outlined in this extended abstract, focuses on creating agents capable of human-robot teamwork by leveraging learning from demonstration, hierarchical task networks, multi-agent planning and state estimation, and intention recognition. We briefly describe our recent work within human-robot collaboration, including task comprehension, learning and performing assistive behaviors, and training novice human collaborators to become competent co-workers.
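The hierarchical task networks mentioned above decompose compound tasks into ordered primitive actions that a robot teammate can execute or hand off. The sketch below illustrates the core decomposition step only; the task library and action names are invented for illustration and are not the authors' system.

```python
# Toy illustration of expanding a hierarchical task representation into an
# ordered list of primitive actions, in the spirit of HTN planning.
# The task library below is hypothetical.
TASK_LIBRARY = {
    "assemble_part": ["fetch_components", "join_components"],
    "fetch_components": ["pick(bolt)", "pick(bracket)"],
    "join_components": ["align(bracket)", "screw(bolt)"],
}

def decompose(task, library):
    """Recursively expand compound tasks; anything not found in the
    library is treated as a primitive action."""
    if task not in library:
        return [task]
    plan = []
    for subtask in library[task]:
        plan.extend(decompose(subtask, library))
    return plan

plan = decompose("assemble_part", TASK_LIBRARY)
# plan == ["pick(bolt)", "pick(bracket)", "align(bracket)", "screw(bolt)"]
```

A full system would add multi-agent allocation and intention recognition on top of such a decomposition, but the hierarchy itself is what lets subtasks be reassigned between human and robot at natural boundaries.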


Augmenting the Reachable Space in the NAO Humanoid Robot

Antonelli, Marco (Universitat Jaume I) | Grzyb, Beata Joanna (Universitat Jaume I) | Castelló, Vicente (Universitat Jaume I) | Pobil, Angel Pascual del (Universitat Jaume I)

AAAI Conferences

Reaching for a target requires estimating the target's spatial position and converting that position into a suitable arm-motor command. In the proposed framework, the location of the target is represented implicitly by the robot's gaze direction and by the distance of the target. The NAO robot is equipped with two cameras, one looking ahead and one looking down, which constitute two independent head-centered coordinate systems. These head-centered frames of reference are converted into reaching commands by two neural networks. The weights of the networks are learned by moving the arm while gazing at the hand, using an online learning algorithm that maintains the covariance matrix of the weights. This work adapts a previously proposed model, which operated on a full humanoid robot torso, to the NAO, and is a step toward a more generic framework for the implicit representation of peripersonal space in humanoid robots.
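An online learner that maintains a covariance matrix over its weights is typically some variant of recursive least squares: each (gaze, arm-command) sample shrinks the weight covariance along the observed direction. The sketch below shows that update for a plain linear map; the class name, dimensions, and noise parameters are illustrative assumptions, not the paper's network.

```python
import numpy as np

class OnlineLinearMap:
    """Minimal recursive-least-squares sketch of learning a linear
    sensorimotor map y = W x while tracking weight-estimate covariance."""

    def __init__(self, in_dim, out_dim, prior_var=1.0, noise_var=0.01):
        self.W = np.zeros((out_dim, in_dim))   # current weight estimate
        self.P = np.eye(in_dim) * prior_var    # covariance of the weight estimate
        self.noise_var = noise_var             # assumed observation noise

    def predict(self, x):
        return self.W @ x

    def update(self, x, y):
        # Standard RLS step: gain is large where covariance is still large,
        # so early samples move the weights more than late ones.
        Px = self.P @ x
        k = Px / (self.noise_var + x @ Px)     # gain vector
        err = y - self.W @ x                   # prediction error on this sample
        self.W += np.outer(err, k)
        self.P -= np.outer(k, Px)              # shrink covariance along x

# Example: recover a hypothetical 2x3 map from noisy samples
rng = np.random.default_rng(0)
true_W = rng.normal(size=(2, 3))
model = OnlineLinearMap(3, 2)
for _ in range(500):
    x = rng.normal(size=3)
    y = true_W @ x + rng.normal(scale=0.01, size=2)
    model.update(x, y)
```

The paper's mapping is learned by neural networks rather than a single linear map, but the covariance-tracking update above is the standard mechanism the abstract's "on-line learning algorithm that maintains the covariance matrix of weights" alludes to.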