GhostShell: Streaming LLM Function Calls for Concurrent Embodied Programming

Jian Gong, Youwei Huang, Bo Yuan, Ming Zhu, Zhou Liao, Jianhang Liang, Juncheng Zhan, Jinke Wang, Hang Shu, Mingyue Xiong, Yanjun Ye, Yufan Zu, Yang Zhou, Yihan Ding, Xuannian Chen, Xingyu Lu, Runjie Ban, Bingchao Huang, Fusen Liu

arXiv.org Artificial Intelligence 

We present GhostShell, a novel approach that leverages Large Language Models (LLMs) to enable streaming and concurrent behavioral programming for embodied systems. In contrast to conventional methods that rely on pre-scheduled action sequences or behavior trees, GhostShell drives embodied systems to act on-the-fly by issuing function calls incrementally as tokens are streamed from the LLM. GhostShell features a streaming XML function token parser, a dynamic function interface mapper, and a multi-channel scheduler that orchestrates intra-channel synchronous and inter-channel asynchronous function calls, thereby coordinating serial-parallel embodied actions across multiple robotic components under LLM guidance. We evaluate GhostShell on our robotic prototype COCO through comprehensive grounded experiments across 34 real-world interaction tasks and multiple LLM backends. The results demonstrate that our approach achieves a state-of-the-art Behavioral Correctness Metric of 0.85 with Claude-4-Sonnet, and up to 66× faster response times compared to native LLM function calling APIs. GhostShell also proves effective in long-horizon multimodal tasks, exhibiting strong robustness and generalization capabilities.
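To make the abstract's pipeline concrete, here is a minimal sketch of the two pieces it names: a streaming parser that emits a function call as soon as its closing tag arrives in the token stream, and a scheduler that serializes calls within a channel while running channels concurrently. All names (`StreamingCallParser`, `ChannelScheduler`, the `<call>` tag format) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of GhostShell-style streaming dispatch; the class
# names and XML call format are assumptions, not from the paper.
import queue
import re
import threading


class StreamingCallParser:
    """Incrementally extract <call channel="..." fn="...">args</call>
    elements from streamed LLM tokens, emitting each call as soon as
    its closing tag arrives instead of waiting for the full response."""

    CALL = re.compile(r'<call channel="(\w+)" fn="(\w+)">(.*?)</call>', re.S)

    def __init__(self):
        self.buf = ""

    def feed(self, token):
        """Append one streamed token; return any calls completed so far."""
        self.buf += token
        calls = []
        while (m := self.CALL.search(self.buf)):
            calls.append((m.group(1), m.group(2), m.group(3)))
            self.buf = self.buf[m.end():]  # keep only the unparsed tail
        return calls


class ChannelScheduler:
    """One worker thread per channel: calls on the same channel execute
    in arrival order (intra-channel synchronous), while distinct
    channels execute concurrently (inter-channel asynchronous)."""

    def __init__(self, registry):
        self.registry = registry          # fn name -> callable
        self.queues = {}
        self.threads = []
        self.log = []
        self.lock = threading.Lock()

    def _worker(self, q):
        while True:
            item = q.get()
            if item is None:              # sentinel: channel shut down
                break
            fn, args = item
            result = self.registry[fn](args)
            with self.lock:
                self.log.append(result)

    def dispatch(self, channel, fn, args):
        if channel not in self.queues:    # lazily start a channel worker
            q = queue.Queue()
            t = threading.Thread(target=self._worker, args=(q,))
            t.start()
            self.queues[channel] = q
            self.threads.append(t)
        self.queues[channel].put((fn, args))

    def join(self):
        for q in self.queues.values():
            q.put(None)
        for t in self.threads:
            t.join()
```

In use, each token chunk from the LLM stream is passed to `feed`, and every completed call is handed to `dispatch` immediately, so a slow action on one channel (say, an arm motion) never blocks a call on another (say, speech).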