manikin


Enhancing UAV Search under Occlusion using Next Best View Planning

Strand, Sigrid Helene, Wiedemann, Thomas, Burczek, Bram, Shutin, Dmitriy

arXiv.org Artificial Intelligence

Search and rescue missions are often critical following sudden natural disasters or in high-risk environmental situations. The most challenging search and rescue missions involve difficult-to-access terrains, such as dense forests with high occlusion. Deploying unmanned aerial vehicles for exploration can significantly enhance search effectiveness, facilitate access to challenging environments, and reduce search time. However, in dense forests, the effectiveness of unmanned aerial vehicles depends on their ability to capture clear views of the ground, necessitating a robust search strategy to optimize camera positioning and perspective. This work presents an optimized planning strategy and an efficient algorithm for the next best view problem in occluded environments. Two novel optimization heuristics, a geometry heuristic and a visibility heuristic, are proposed to enhance search performance by selecting optimal camera viewpoints. Comparative evaluations in both simulated and real-world settings reveal that the visibility heuristic achieves higher performance, identifying over 90% of hidden objects in simulated forests and offering 10% better detection rates than the geometry heuristic. Additionally, real-world experiments demonstrate that the visibility heuristic provides better coverage under the canopy, highlighting its potential for improving search and rescue missions in occluded environments.
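The core idea of a visibility-driven next-best-view search can be sketched in a few lines. The 2-D geometry, the circular "tree" occluders, and all function names below are illustrative assumptions of ours, not the authors' implementation: each candidate camera pose is scored by how many ground cells it can see past the occluders, and the highest-scoring pose is chosen.

```python
import math

def line_of_sight(cam, cell, occluders, radius=0.5):
    """True if the segment cam->cell misses every circular occluder."""
    (cx, cy), (gx, gy) = cam, cell
    dx, dy = gx - cx, gy - cy
    length2 = dx * dx + dy * dy or 1e-12
    for ox, oy in occluders:
        # Project the occluder centre onto the segment, clamped to [0, 1].
        t = max(0.0, min(1.0, ((ox - cx) * dx + (oy - cy) * dy) / length2))
        px, py = cx + t * dx, cy + t * dy
        if math.hypot(ox - px, oy - py) < radius:
            return False
    return True

def visibility_score(cam, cells, occluders):
    """Number of ground cells visible from a candidate camera pose."""
    return sum(line_of_sight(cam, c, occluders) for c in cells)

def next_best_view(candidates, cells, occluders):
    """Greedy selection: the pose that sees the most ground cells."""
    return max(candidates, key=lambda cam: visibility_score(cam, cells, occluders))

# Toy scene: a 5x5 grid of ground cells with one occluder in the middle.
cells = [(float(x), float(y)) for x in range(5) for y in range(5)]
occluders = [(2.0, 2.0)]
candidates = [(-3.0, 2.0), (2.0, -3.0), (2.0, 2.5)]
best = next_best_view(candidates, cells, occluders)
```

In a real planner this score would be evaluated by ray-casting against a 3-D canopy model rather than 2-D circles, but the greedy argmax structure is the same.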


OpenRoboCare: A Multimodal Multi-Task Expert Demonstration Dataset for Robot Caregiving

Liang, Xiaoyu, Liu, Ziang, Lin, Kelvin, Gu, Edward, Ye, Ruolin, Nguyen, Tam, Hsu, Cynthia, Wu, Zhanxin, Yang, Xiaoman, Cheung, Christy Sum Yu, Soh, Harold, Dimitropoulou, Katherine, Bhattacharjee, Tapomayukh

arXiv.org Artificial Intelligence

We present OpenRoboCare, a multimodal dataset for robot caregiving, capturing expert occupational therapist demonstrations of Activities of Daily Living (ADLs). Caregiving tasks involve complex physical human-robot interactions, requiring precise perception under occlusions, safe physical contact, and long-horizon planning. While recent advances in robot learning from demonstrations have shown promise, there is a lack of a large-scale, diverse, and expert-driven dataset that captures real-world caregiving routines. To address this gap, we collect data from 21 occupational therapists performing 15 ADL tasks on two manikins. The dataset spans five modalities: RGB-D video, pose tracking, eye-gaze tracking, task and action annotations, and tactile sensing, providing rich multimodal insights into caregiver movement, attention, force application, and task execution strategies. We further analyze expert caregiving principles and strategies, offering insights to improve robot efficiency and task feasibility. Additionally, our evaluations demonstrate that OpenRoboCare presents challenges for state-of-the-art robot perception and human activity recognition methods, both critical for developing safe and adaptive assistive robots, highlighting the value of our contribution. See our website for additional visualizations: https://emprise.cs.cornell.edu/robo-care/.


Manikin-Recorded Cardiopulmonary Sounds Dataset Using Digital Stethoscope

Torabi, Yasaman, Shirani, Shahram, Reilly, James P.

arXiv.org Artificial Intelligence

Heart and lung sounds are crucial for healthcare monitoring. Recent improvements in stethoscope technology have made it possible to capture patient sounds with enhanced precision. In this dataset, we used a digital stethoscope to capture both heart and lung sounds, including individual and mixed recordings. To our knowledge, this is the first dataset to offer both separate and mixed cardiorespiratory sounds. The recordings were collected from a clinical manikin, a patient simulator designed to replicate human physiological conditions, generating clean heart and lung sounds at different body locations. This dataset includes both normal sounds and various abnormalities (i.e., murmur, atrial fibrillation, tachycardia, atrioventricular block, third and fourth heart sound, wheezing, crackles, rhonchi, pleural rub, and gurgling sounds). The dataset includes audio recordings of chest examinations performed at different anatomical locations, as determined by specialist nurses. Each recording has been enhanced using frequency filters to highlight specific sound types. This dataset is useful for applications in artificial intelligence, such as automated cardiopulmonary disease detection, sound classification, unsupervised separation techniques, and deep learning algorithms related to audio signal processing.
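The frequency-filtering step mentioned above can be illustrated with standard band-pass filters. This is our own minimal sketch, not the dataset's actual pipeline; the cut-off frequencies are typical values from the auscultation literature (heart sounds concentrated roughly in 20-200 Hz, lung sounds roughly in 100-1000 Hz), and the sampling rate is an assumption.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 4000  # assumed sampling rate in Hz

def bandpass(signal, low_hz, high_hz, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

def emphasise_heart(mixed, fs=FS):
    return bandpass(mixed, 20, 200, fs)    # typical heart-sound band

def emphasise_lung(mixed, fs=FS):
    return bandpass(mixed, 100, 1000, fs)  # typical lung-sound band

# Synthetic mixture standing in for a mixed recording:
# a 50 Hz "heart" tone plus a weaker 600 Hz "lung" tone.
t = np.arange(0, 2.0, 1.0 / FS)
mixed = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 600 * t)
heart = emphasise_heart(mixed)
lung = emphasise_lung(mixed)
```

Zero-phase filtering (`sosfiltfilt`) is preferred here because it preserves the timing of heart-sound components, which matters for downstream classification.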


Force/Torque Sensing for Soft Grippers using an External Camera

Collins, Jeremy A., Grady, Patrick, Kemp, Charles C.

arXiv.org Artificial Intelligence

Robotic manipulation can benefit from wrist-mounted force/torque (F/T) sensors, but conventional F/T sensors can be expensive, difficult to install, and damaged by high loads. We present Visual Force/Torque Sensing (VFTS), a method that visually estimates the 6-axis F/T measurement that would be reported by a conventional F/T sensor. In contrast to approaches that sense loads using internal cameras placed behind soft exterior surfaces, our approach uses an external camera with a fisheye lens that observes a soft gripper. VFTS includes a deep learning model that takes a single RGB image as input and outputs a 6-axis F/T estimate. We trained the model with sensor data collected while teleoperating a robot (Stretch RE1 from Hello Robot Inc.) to perform manipulation tasks. VFTS outperformed F/T estimates based on motor currents, generalized to a novel home environment, and supported three autonomous tasks relevant to healthcare: grasping a blanket, pulling a blanket over a manikin, and cleaning a manikin's limbs. VFTS also performed well with a manually operated pneumatic gripper. Overall, our results suggest that an external camera observing a soft gripper can perform useful visual force/torque sensing for a variety of manipulation tasks.
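The input/output contract of such a model, one RGB image in, a 6-axis wrench out, can be sketched with a small convolutional network. The architecture and layer sizes below are illustrative assumptions of ours; the paper trains its own model on teleoperation data, not this one.

```python
import torch
import torch.nn as nn

class TinyVFTS(nn.Module):
    """Toy stand-in for an image-to-wrench regressor:
    RGB image -> (Fx, Fy, Fz, Tx, Ty, Tz)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling to a 32-vector
        )
        self.head = nn.Linear(32, 6)          # one output per F/T axis

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h)

model = TinyVFTS()
image = torch.randn(1, 3, 224, 224)  # one frame from the external fisheye camera
wrench = model(image)                # shape (1, 6): three forces, three torques
```

Training would regress these six outputs against synchronized readings from a conventional F/T sensor, which is what makes the learned estimate a drop-in visual replacement.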


New computer algorithm can locate people lost at sea

Daily Mail - Science & tech

A team of researchers has developed a new algorithm that could help search and rescue teams locate people lost at sea using ocean currents, wind speed, and wave direction. The project was a joint effort from scientists at MIT, the Swiss Federal Institute of Technology (ETH), the Woods Hole Oceanographic Institution (WHOI), and Virginia Tech, who tested their method using human manikins in the ocean off the coast of Martha's Vineyard. Unlike current search and rescue models--which also use data about ocean currents and wind to calculate the likely location of a missing person by simulating one single linear path--the team's new system is focused on identifying multiple points of 'attraction' in the ocean, which can sometimes change dramatically over time. Using a system they called Transient Attracting Profiles (TRAPS), the team tracks these attraction points, which they say behave like 'moving magnets' pulling people in the water toward them. Instead of mapping out a single, linear path, the TRAPS model identifies many different attraction points, or 'traps,' in the ocean that will likely have pulled a person in multiple directions as they drift through the waters.
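The qualitative difference between a single drift path and multiple attraction points can be shown with a toy simulation. This is our own illustration of the idea described in the article, not the TRAPS algorithm itself: a floating object repeatedly steps toward whichever attraction point dominates locally.

```python
import math

def drift_step(pos, traps, dt=0.1):
    """Move one step toward the nearest attraction point."""
    x, y = pos
    tx, ty = min(traps, key=lambda t: math.hypot(t[0] - x, t[1] - y))
    d = math.hypot(tx - x, ty - y) or 1e-9
    return (x + dt * (tx - x) / d, y + dt * (ty - y) / d)

def simulate(pos, traps, steps=200):
    """Integrate the drift; the end point depends on the starting position."""
    for _ in range(steps):
        pos = drift_step(pos, traps)
    return pos

# Two attraction points: objects starting in different places end up
# at different traps, unlike a single linear drift prediction.
traps = [(0.0, 0.0), (5.0, 5.0)]
final = simulate((1.0, 0.5), traps)
```

A real TRAPS computation extracts these attractors from measured ocean-current fields and updates them as the currents change, which is what this static toy omits.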


Are robots moving sculptures? On Art, illusion and artificial intelligence

#artificialintelligence

Traditional art has an element of illusionism to it. This has long been commented on, and is responsible for the prevalent thought (at least among the general public) that the more realistic the artwork, the more a man-made creation looks like a nature-made one, the better it must be. The ancients praised the lifelike naturalism of painters, with Pliny relating the famous story of a duel between two artists, one of whom was able to fool a bird into swooping in to peck at his painted grapes, whereas the other was able to fool the first artist, tricking him into trying to pull aside a curtain that was, in fact, his painting of a curtain. Fooling a human trumps fooling an animal, and the ability to inspire awe, wonder, the "how-did-they-do-that" expression, has long been the goal of most traditional art. Think of the tale of Pygmalion, in which an ivory sculpture of a naked woman was so realistic, and its sculptor's love for it so strong, that it actually came to life.