Bi-LAT: Bilateral Control-Based Imitation Learning via Natural Language and Action Chunking with Transformers
Takumi Kobayashi, Masato Kobayashi, Thanpimon Buamanee, Yuki Uranishi
arXiv.org Artificial Intelligence
We present Bi-LAT, a novel imitation learning framework that unifies bilateral control with natural language processing to achieve precise force modulation in robotic manipulation. Bi-LAT leverages joint position, velocity, and torque data from leader-follower teleoperation while also integrating visual and linguistic cues to dynamically adjust the applied force. By encoding human instructions such as "softly grasp the cup" or "strongly twist the sponge" through a multimodal Transformer-based model, Bi-LAT learns to distinguish nuanced force requirements in real-world tasks. We demonstrate Bi-LAT's performance in (1) a unimanual cup-stacking scenario, where the robot accurately modulates grasp force based on language commands, and (2) a bimanual sponge-twisting task that requires coordinated force control. Experimental results show that Bi-LAT effectively reproduces the instructed force levels, particularly when incorporating SigLIP among the tested language encoders. Our findings demonstrate the potential of integrating natural language cues into imitation learning, paving the way for more intuitive and adaptive human-robot interaction.

I. INTRODUCTION

In today's rapidly evolving landscape of robotics, integrating advanced manipulation capabilities with social intelligence is pivotal to shaping our hybrid future.
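The abstract's core idea is that a language embedding conditions the force the policy commands. The actual Bi-LAT model is a multimodal Transformer trained on bilateral-control data with a pretrained text encoder such as SigLIP; as a rough, stdlib-only sketch of just the conditioning mechanism, one might imagine something like the following (the word vectors, `encode` helper, and scaling rule are all hypothetical stand-ins, not the paper's method):

```python
# Toy sketch (NOT the Bi-LAT model): shows how a language-derived
# embedding can modulate a commanded torque. The real system uses a
# Transformer policy over joint position/velocity/torque and images,
# with a SigLIP-style text encoder; here the "encoder" is a
# hypothetical bag-of-words lookup and the "policy" a scalar rescale.

# Hypothetical 2-D word embeddings: [force_intensity, task_bias]
WORD_VECS = {
    "softly":   [0.2, 0.0],
    "strongly": [0.9, 0.0],
    "grasp":    [0.0, 0.3],
    "twist":    [0.0, 0.6],
}

def encode(instruction: str) -> list[float]:
    """Mean-pool word vectors; a stand-in for a learned text encoder."""
    vecs = [WORD_VECS.get(w, [0.0, 0.0]) for w in instruction.lower().split()]
    n = max(len(vecs), 1)
    return [sum(v[i] for v in vecs) / n for i in range(2)]

def commanded_torque(instruction: str, base_torque: float) -> float:
    """Scale a nominal torque by the language-derived intensity."""
    intensity, _ = encode(instruction)
    return base_torque * (0.5 + intensity)  # 0.5 floor is an arbitrary choice

soft = commanded_torque("softly grasp the cup", base_torque=1.0)
hard = commanded_torque("strongly twist the sponge", base_torque=1.0)
print(soft < hard)  # the "strong" instruction yields the larger torque
```

The point of the sketch is only the dataflow: instruction → embedding → force command, which is the coupling the paper learns end-to-end from teleoperation data rather than hand-coding.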
Jul-29-2025