Bridging Embodiment Gaps: Deploying Vision-Language-Action Models on Soft Robots
