Modeling Collaboration for Robot-assisted Dressing Tasks
Clegg, Alexander, Kemp, Charles C., Turk, Greg, Liu, C. Karen
arXiv.org Artificial Intelligence
Abstract -- We investigated the application of haptic-aware feedback control and deep reinforcement learning to robot-assisted dressing in simulation. We did so by modeling both human and robot control policies as separate neural networks and training both via TRPO. We show that co-optimization, training separate human and robot control policies simultaneously, is a valid approach to finding successful strategies for human/robot cooperation on assisted dressing tasks. Typical tasks are putting on one or both sleeves of a hospital gown or pulling on a T-shirt. We also present a method for modeling human dressing behavior under variations in capability, including unilateral muscle weakness, dyskinesia, and limited range of motion. Using this method and behavior model, we demonstrate the discovery of successful strategies for a robot to assist humans with a variety of capability limitations.

INTRODUCTION

It becomes ever more likely that robots will be found in homes and businesses, physically interacting with the humans they encounter. With this in mind, researchers have begun preparing robots for the physical interaction tasks they will face in a human world. Dressing tasks in particular raise a multitude of privacy, safety, and independence concerns that strongly motivate the application of robotic assistance [1]. However, clothing exhibits complex dynamics and often occludes the body, making it difficult to accurately observe the task state and predict the results of planned interactions. These challenges are compounded by the risk of injuring the human or damaging the robot, as well as the sparsity of data that can be collected during physical task exploration.
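The co-optimization idea described above, two agents with separate policies trained simultaneously on a shared objective, can be illustrated with a minimal toy sketch. This is a hypothetical setup, not the authors' code: it uses vanilla policy gradient (REINFORCE) in place of TRPO, linear-Gaussian policies in place of neural networks, and a one-dimensional shared reward standing in for dressing-task progress.

```python
import numpy as np

# Toy co-optimization sketch (hypothetical, not the paper's implementation).
# A "human" policy and a "robot" policy are independent Gaussian policies;
# both are updated simultaneously via REINFORCE on a shared reward.

rng = np.random.default_rng(0)
target = 1.0            # shared goal: the joint action should sum to this
sigma = 0.2             # fixed exploration noise for both policies
mu_h, mu_r = 0.0, 0.0   # learnable policy means (human, robot)
lr = 0.01

for step in range(2000):
    # Sample one action from each agent's Gaussian policy
    a_h = mu_h + sigma * rng.standard_normal()
    a_r = mu_r + sigma * rng.standard_normal()

    # Shared reward: cooperation is required to maximize it
    reward = -(a_h + a_r - target) ** 2

    # Simultaneous REINFORCE updates; the score function for a
    # Gaussian mean is (action - mean) / sigma^2
    mu_h += lr * reward * (a_h - mu_h) / sigma**2
    mu_r += lr * reward * (a_r - mu_r) / sigma**2

# After training, the sum of the two policy means is close to the target,
# even though neither agent can reach it alone with these learning rates.
```

Replacing the Gaussian policies with neural networks and the REINFORCE step with a trust-region update (TRPO) recovers the structure of the approach described in the abstract, where each agent's policy is improved against the current behavior of the other.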
Sep-14-2019