Vision-Guided Loco-Manipulation with a Snake Robot

Adarsh Salagame, Sasank Potluri, Keshav Bharadwaj Vaidyanathan, Kruthika Gangaraju, Eric Sihite, Milad Ramezani, Alireza Ramezani

arXiv.org Artificial Intelligence 

This paper presents the development and integration of a vision-guided loco-manipulation pipeline for Northeastern University's snake robot, COBRA. The system leverages a YOLOv8-based object detection model and depth data from an onboard stereo camera to estimate the 6-DOF pose of target objects in real time. We introduce a framework for autonomous detection and control, enabling closed-loop loco-manipulation for transporting objects to specified goal locations. Additionally, we demonstrate open-loop experiments in which COBRA successfully performs real-time object detection and loco-manipulation tasks.
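
To make the detection-to-position step concrete, the sketch below shows one common way to combine a YOLOv8 detection with an aligned depth image to recover a target's 3D position in the camera frame. The camera intrinsics, weight file, and the detect_object_position helper are illustrative assumptions, not COBRA's actual pipeline, which additionally estimates the full 6-DOF pose of the target.

    # Hedged sketch: back-project a YOLOv8 detection into a 3D camera-frame
    # position using a depth image aligned to the RGB stream. Intrinsics,
    # weights, and the depth source are placeholders, not the paper's setup.
    import numpy as np
    from ultralytics import YOLO

    # Hypothetical pinhole intrinsics of the stereo camera's RGB stream.
    FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

    model = YOLO("yolov8n.pt")  # placeholder weights, not the trained model

    def detect_object_position(rgb_frame: np.ndarray, depth_m: np.ndarray):
        """Return the 3D position (meters, camera frame) of the
        highest-confidence detection, or None if nothing is found."""
        result = model(rgb_frame, verbose=False)[0]
        if len(result.boxes) == 0:
            return None
        # Take the most confident box and use its pixel center.
        best = int(result.boxes.conf.argmax())
        x1, y1, x2, y2 = result.boxes.xyxy[best].tolist()
        u, v = int((x1 + x2) / 2), int((y1 + y2) / 2)
        z = float(depth_m[v, u])        # depth aligned to the RGB image
        if z <= 0.0:
            return None                 # invalid or missing depth reading
        # Pinhole back-projection: pixel (u, v) at depth z -> camera-frame XYZ.
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        return np.array([x, y, z])

In a closed-loop setting, a position estimate like this would be transformed into the robot's body or world frame and fed to the loco-manipulation controller at each detection cycle.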
