
Collaborating Authors

 Onal, Cagdas D.


Integrating Contact-aware Feedback CPG System for Learning-based Soft Snake Robot Locomotion Controllers

arXiv.org Artificial Intelligence

This paper aims to solve the contact-aware locomotion problem of a soft snake robot by developing bio-inspired contact-aware locomotion controllers. To provide effective contact information to the controllers, we develop a scale-covered sensor structure mimicking natural snakes' scale sensilla. In the design of the control framework, our core contribution is a novel sensory feedback mechanism for the Matsuoka central pattern generator (CPG) network. This mechanism allows the Matsuoka CPG system to act like a "spinal cord" in the overall contact-aware control scheme: it simultaneously takes tonic input signals from the "brain" (a goal-tracking locomotion controller) and sensory feedback signals from the "reflex arc" (the contact-reactive controller), and generates rhythmic signals that actuate the soft snake robot to slither through densely distributed obstacles. In the "reflex arc" design, we develop two distinct types of reactive controllers: 1) a reinforcement learning (RL) sensor regulator that learns to manipulate the sensory feedback inputs of the CPG system, and 2) a local reflexive sensor-CPG network that directly connects sensor readings to the CPG's feedback inputs in a specific topology. Combined with the locomotion controller and the Matsuoka CPG system, these two reactive controllers yield two different contact-aware locomotion control schemes. Both schemes are tested and evaluated on simulated and real soft snake robots, showing promising performance in contact-aware locomotion tasks. The experimental results also validate the advantages of the modified Matsuoka CPG system with the new sensory feedback mechanism for bio-inspired robot controller design.
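To make the "spinal cord" idea concrete, below is a minimal sketch of a two-neuron Matsuoka oscillator driven by a tonic ("brain") input plus an additive sensory-feedback ("reflex arc") term. The parameter values, the Euler integration, and the antisymmetric sign convention of the feedback coupling are illustrative assumptions for a standard Matsuoka formulation, not the paper's exact controller.

```python
# A minimal sketch of a two-neuron Matsuoka oscillator with tonic drive
# and a sensory-feedback term. Parameters and coupling are assumptions.

def matsuoka_step(state, tonic, feedback, dt=0.005,
                  tau_r=0.05, tau_a=0.12, beta=2.5, w=2.0):
    """Advance one Euler step; returns (new_state, oscillator output)."""
    u1, u2, v1, v2 = state
    y1, y2 = max(0.0, u1), max(0.0, u2)        # rectified firing rates
    # Membrane dynamics: mutual inhibition (w), self-adaptation (beta),
    # shared tonic input, and antisymmetric sensory feedback.
    du1 = (-u1 - w * y2 - beta * v1 + tonic + feedback) / tau_r
    du2 = (-u2 - w * y1 - beta * v2 + tonic - feedback) / tau_r
    # Adaptation (fatigue) dynamics.
    dv1 = (-v1 + y1) / tau_a
    dv2 = (-v2 + y2) / tau_a
    new_state = (u1 + dt * du1, u2 + dt * du2,
                 v1 + dt * dv1, v2 + dt * dv2)
    return new_state, y1 - y2                  # signed actuation signal

# Example: constant tonic drive with no contact feedback produces a
# steady rhythm; a reactive controller would instead vary `feedback`.
state, trace = (0.1, 0.0, 0.0, 0.0), []
for _ in range(4000):
    state, out = matsuoka_step(state, tonic=1.0, feedback=0.0)
    trace.append(out)
```

In this sketch, the two reactive controllers described above would differ only in how `feedback` is produced: an RL policy mapping sensor readings to feedback values, or a fixed sensor-to-CPG wiring.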


From Hand-Perspective Visual Information to Grasp Type Probabilities: Deep Learning via Ranking Labels

arXiv.org Artificial Intelligence

Limb deficiency severely affects the daily lives of amputees and drives efforts to provide functional robotic prosthetic hands that compensate for this loss. Convolutional neural network-based computer vision control of prosthetic hands has received increased attention as a method to replace or complement physiological signals, owing to its reliability in predicting the intended grasp from visual information. Mounting a camera in the palm of a prosthetic hand has proved to be a promising approach for collecting visual data. However, grasp types labelled from the eye perspective and the hand perspective may differ, since object shapes are not always symmetric. To represent this difference realistically, we employ a dataset containing synchronized images from the eye and hand views, where the hand-perspective images are used for training and the eye-view images only for manual labelling. Electromyogram (EMG) activity and movement kinematics from the upper arm are also collected for multi-modal information fusion in future work. Moreover, to enable human-in-the-loop control and to combine computer vision with physiological signal inputs, instead of making absolute positive or negative predictions we build a novel probabilistic classifier based on the Plackett-Luce model. To predict the probability distribution over grasps, we exploit this statistical model over label rankings, solving the permutation-domain problem via maximum likelihood estimation and using manually ranked lists of grasps as a new form of label. We show that the proposed model is applicable to the most popular and productive convolutional neural network frameworks.
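To illustrate the ranking-based objective, below is a minimal sketch of the Plackett-Luce likelihood over a ranked grasp list, where the ranking probability factorizes as a product of softmax choices over the remaining grasps. The `scores` array stands in for the per-grasp utilities a CNN head would output; the function and variable names are hypothetical, not the paper's code.

```python
import numpy as np
from scipy.special import logsumexp, softmax

def plackett_luce_nll(scores, ranking):
    """Negative log-likelihood of one manually ranked grasp list.

    scores  : shape (num_grasps,), real-valued utilities (e.g. CNN logits).
    ranking : grasp indices ordered from most to least preferred.
    """
    s = scores[np.asarray(ranking)]
    # P(ranking) = prod_i exp(s_i) / sum_{j >= i} exp(s_j); take -log.
    return -sum(s[i] - logsumexp(s[i:]) for i in range(len(s)))

def grasp_distribution(scores):
    """Probability that each grasp is ranked first (a softmax)."""
    return softmax(scores)

# Example with 4 grasp types: maximum likelihood estimation over the
# ranked labels amounts to minimizing this NLL over the CNN parameters.
scores = np.array([2.0, 0.5, 1.0, -0.3])
print(plackett_luce_nll(scores, ranking=[0, 2, 1, 3]))
print(grasp_distribution(scores))
```

Because the loss is a sum of log-softmax terms, it is differentiable in `scores` and can be minimized with standard gradient descent in any of the common CNN frameworks.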