touch sensor


Reinforcement Learning for Robotic Safe Control with Force Sensing

Lin, Nan, Zhang, Linrui, Chen, Yuxuan, Chen, Zhenrui, Zhu, Yujun, Chen, Ruoxi, Wu, Peichen, Chen, Xiaoping

arXiv.org Artificial Intelligence

For tasks involving complicated manipulation in unstructured environments, traditional hand-coded methods are ineffective, while reinforcement learning can provide a more general and useful policy. Although reinforcement learning is able to obtain impressive results, its stability and reliability are hard to guarantee, which poses potential safety threats. Besides, the transfer from simulation to the real world can also lead to unpredictable situations. To enhance the safety and reliability of robots, we introduce force and haptic perception into reinforcement learning. We demonstrate that the force-based reinforcement learning method can be more adaptive to the environment, especially in sim-to-real transfer. Experimental results show that in an object-pushing task, our strategy is safer and more efficient in both simulation and the real world; thus it holds prospects for a wide variety of robotic applications.


Embodying mechano-fluidic memory in soft machines to program behaviors upon interactions

Comoretto, Alberto, Mandke, Tanaya, Overvelde, Johannes T. B.

arXiv.org Artificial Intelligence

Soft machines display shape adaptation to external circumstances due to their intrinsic compliance. To achieve increasingly more responsive behaviors upon interactions without relying on centralized computation, embodying memory directly in the machines' structure is crucial. Here, we harness the bistability of elastic shells to alter the fluidic properties of an enclosed cavity, thereby switching between stable frequency states of a locomoting self-oscillating machine. To program these memory states upon interactions, we develop fluidic circuits surrounding the bistable shell, with soft tubes that kink and unkink when externally touched. We implement circuits for both long-term and short-term memory in a soft machine that switches behaviors in response to a human user and that autonomously changes direction after detecting a wall. By harnessing only geometry and elasticity, embodying memory allows physical structures without a central brain to exhibit autonomous feats that are typically reserved for computer-based robotic systems.


More complex environments may be required to discover benefits of lifetime learning in evolving robots

de Bruin, Ege, Glette, Kyrre, Ellefsen, Kai Olav

arXiv.org Artificial Intelligence

It is well known that intra-life learning, defined as an additional controller-optimization loop, is beneficial when evolving robot morphologies for locomotion. In this work, we investigate this further by comparing it in two different environments: an easy flat environment and a more challenging hills environment. We show that learning is significantly more beneficial in the hilly environment than in the flat one, and that evaluating robots in a more challenging environment may be necessary to see the benefits of learning.


Touch2Touch: Cross-Modal Tactile Generation for Object Manipulation

Rodriguez, Samanta, Dou, Yiming, Oller, Miquel, Owens, Andrew, Fazeli, Nima

arXiv.org Artificial Intelligence

Today's touch sensors come in many shapes and sizes. This has made it challenging to develop general-purpose touch processing methods since models are generally tied to one specific sensor design. We address this problem by performing cross-modal prediction between touch sensors: given the tactile signal from one sensor, we use a generative model to estimate how the same physical contact would be perceived by another sensor. This allows us to apply sensor-specific methods to the generated signal. We implement this idea by training a diffusion model to translate between the popular GelSlim and Soft Bubble sensors. As a downstream task, we perform in-hand object pose estimation using GelSlim sensors while using an algorithm that operates only on Soft Bubble signals. The dataset, the code, and additional details can be found at https://www.mmintlab.com/research/touch2touch/.


Touch in Human Social Robot Interaction: Systematic Literature Review with PRISMA Method

Tsirka, Christiana, Velentza, Anna-Maria, Fachantidis, Nikolaos

arXiv.org Artificial Intelligence

In the past two decades, there has been a continuous rise in the deployment of robots fulfilling social roles across various industries, such as guides, service providers, and educators. To establish robots as integral allies in daily life, it is essential for them to deliver positive and trustworthy experiences, achieved through seamless and satisfying interactions across diverse modalities and communication channels. In human-robot interaction, touch plays a pivotal role in facilitating meaningful connections and communication. To delve into the significance of haptic technologies and their impact on interactions between humans and social robots, an exploration of the existing literature is essential, since research on touch is the most underrepresented among the communication channels (facial expressions, movements, vocalizations, etc.). A systematic literature review has been carried out with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method, identifying 42 articles related to touch, haptic technologies, and interaction between humans and social robots over the twenty-year period 2001-2023. The results show the main differences, pros, and cons of the materials and technologies that have been primarily used so far; the qualitative and quantitative research that links HRI touch studies with human emotion; and the types of touch and the repeatability of those methods. The study identifies research gaps and outlines future directions, and it serves as a guide for anyone interested in conducting HRI touch research or building a haptic system for a social robot.


Spike up Prime Interest in Science and Technology through Constructionist Games

Petrovič, Pavel, Agarshev, Fedir

arXiv.org Artificial Intelligence

Robotics sets have been successfully used in elementary and secondary schools in conformance with the 'learning through play' philosophy fostered by LEGO Education, while utilizing the Constructionism didactic approach. Learners discover and acquire knowledge through first-hand tangible experiences, building their own representations in a constructivist learning process. Usual pedagogical goals of the activities include an introduction to the principles of control, mechanics, programming, and robotics [1]. They are organized as hands-on learning situations with teamwork cooperation of learners, project-based learning, and sharing and presentations of the learner groups' experiences. Arriving from this tradition, we focus on slightly different scenarios: employing the robotics sets and the named approaches when learning Physics, Mathematics, Art, Science, and other subjects. In carefully designed projects, learners build interactive models that demonstrate concepts, principles, and phenomena, perform experiments, and modify them in elaboration phases with the aim of connecting and creating associations and links to the actual underlying theoretical curriculum. In this way, they collect practical experiences which are a prerequisite to a successful learning process. Based on feedback from children, we continue upon two previous sets of activities that focused on Physics and Mathematics, this time with projects built around games. Learners play various games with physical artifacts in the real world - with the models they build. They acquire skills while playing the games, analyze them, and learn about the underlying principles. They modify the game rules and strategies, create extensions, and interact with each other in an entertaining and engaging setting. This time we have designed the activities together with the children, students of an applied robotics seminar, and a student of Applied Informatics.


Contact Energy Based Hindsight Experience Prioritization

Sayar, Erdi, Bing, Zhenshan, D'Eramo, Carlo, Oguz, Ozgur S., Knoll, Alois

arXiv.org Artificial Intelligence

Multi-goal robot manipulation tasks with sparse rewards are difficult for reinforcement learning (RL) algorithms due to the inefficiency in collecting successful experiences. Recent algorithms such as Hindsight Experience Replay (HER) expedite learning by taking advantage of failed trajectories and replacing the desired goal with one of the achieved states, so that any failed trajectory can be utilized as a contribution to learning. However, HER chooses failed trajectories uniformly, without taking into account which ones might be the most valuable for learning. In this paper, we address this problem and propose a novel approach, Contact Energy Based Prioritization (CEBP), to select samples from the replay buffer based on the rich information available at contact, leveraging the touch sensors in the robot's gripper and object displacement. Our prioritization scheme favors sampling of contact-rich experiences, which are arguably the ones providing the largest amount of information. We evaluate our proposed approach on various sparse-reward robotic tasks and compare it with state-of-the-art methods. We show that our method surpasses or performs on par with those methods on robot manipulation tasks. Finally, we deploy the trained policy from our method to a real Franka robot for a pick-and-place task. We observe that the robot can solve the task successfully. The videos and code are publicly available at: https://erdiphd.github.io/HER_force
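The prioritization idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the scalar energy function, the softmax weighting, and the temperature parameter are all assumptions made for the example.

```python
import numpy as np

def contact_energy(forces, displacement):
    # Hypothetical scalar: total touch-sensor force magnitude scaled by
    # how far the object moved during the trajectory.
    return np.sum(np.abs(forces)) * np.linalg.norm(displacement)

def prioritized_sample(buffer_energies, batch_size, temperature=1.0, rng=None):
    """Sample trajectory indices with probability proportional to a
    softmax over contact energies, favoring contact-rich experiences."""
    rng = np.random.default_rng() if rng is None else rng
    e = np.asarray(buffer_energies, dtype=float) / temperature
    p = np.exp(e - e.max())      # subtract max for numerical stability
    p /= p.sum()
    return rng.choice(len(p), size=batch_size, p=p)
```

A softmax (rather than a hard top-k cutoff) keeps a nonzero probability for low-contact trajectories, so the replay buffer is not starved of diverse experience.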


Finger-shaped sensor enables more dexterous robots

Robohub

MIT researchers have developed a camera-based touch sensor that is long, curved, and shaped like a human finger. Their device, which provides high-resolution tactile sensing over a large area, could enable a robotic hand to perform multiple types of grasps. Imagine grasping a heavy object, like a pipe wrench, with one hand. You would likely grab the wrench using the entire length of your fingers, not just your fingertips. Sensory receptors in your skin, which run along the entire length of each finger, would send information to your brain about the tool you are grasping.


Vision- and tactile-based continuous multimodal intention and attention recognition for safer physical human-robot interaction

Wong, Christopher Yee, Vergez, Lucas, Suleiman, Wael

arXiv.org Artificial Intelligence

Employing skin-like tactile sensors on robots enhances both the safety and usability of collaborative robots by adding the capability to detect human contact. Unfortunately, simple binary tactile sensors alone cannot determine the context of the human contact -- whether it is a deliberate interaction or an unintended collision that requires safety manoeuvres. Many published methods classify discrete interactions using more advanced tactile sensors or by analysing joint torques. Instead, we propose to augment the intention recognition capabilities of simple binary tactile sensors by adding a robot-mounted camera for human posture analysis. Different interaction characteristics, including touch location, human pose, and gaze direction, are used to train a supervised machine learning algorithm to classify whether a touch is intentional or not, with an F1-score of 86%. We demonstrate that multimodal intention recognition is significantly more accurate than monomodal analyses with the collaborative robot Baxter. Furthermore, our method can also continuously monitor interactions that fluidly change between intentional and unintentional by gauging the user's attention through gaze. If a user stops paying attention mid-task, the proposed intention and attention recognition algorithm can activate safety features to prevent unsafe interactions. We also employ a feature reduction technique that reduces the number of inputs to five to achieve a more generalized low-dimensional classifier. This simplification both reduces the amount of training data required and improves real-world classification accuracy. It also renders the method potentially agnostic to the robot and touch sensor architectures while achieving a high degree of task adaptability.
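The classification step described above can be sketched as a linear model over the reduced five-dimensional feature vector. The feature names, weights, and decision threshold below are illustrative placeholders, not the paper's trained values or its actual feature set:

```python
import numpy as np

# Hypothetical reduced feature vector (five inputs, mirroring the paper's
# feature-reduction step; the specific features here are assumptions):
# [touch_location, torso_lean, arm_extension, gaze_on_robot, gaze_duration]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify_touch(features, weights, bias, threshold=0.5):
    """Binary intentional-vs-unintentional decision from a trained linear
    model; weights and bias stand in for values learned offline."""
    p_intentional = sigmoid(np.dot(weights, features) + bias)
    return p_intentional >= threshold, p_intentional
```

Because the model is low-dimensional, the same inference code could run continuously at the robot's control rate, which is what lets the system re-evaluate intention as the user's gaze and posture change mid-task.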


Touch sensing: An important tool for mobile robot navigation

Robohub

In mammals, the touch modality develops earlier than the other senses, yet it is a less studied sensory modality than its visual and auditory counterparts. It not only allows environmental interactions but also serves as an effective defense mechanism. The role of touch in mobile robot navigation has not been explored in detail. However, touch appears to play an important role in obstacle avoidance and pathfinding for mobile robots. Proximal sensing is often a blind spot for most long-range sensors such as cameras and lidars, for which touch sensors could serve as a complementary modality.