Memory Retrieval and Consolidation in Large Language Models through Function Tokens
Zhang, Shaohua, Lin, Yuan, Li, Hang
The remarkable success of large language models (LLMs) stems from their ability to consolidate vast amounts of knowledge into memory during pre-training and to retrieve it from memory during inference, enabling advanced capabilities such as knowledge memorization, instruction following, and reasoning. However, the mechanisms of memory retrieval and consolidation in LLMs remain poorly understood. In this paper, we propose the function token hypothesis to explain the workings of LLMs: during inference, function tokens activate the most predictive features from context and govern next-token prediction (memory retrieval); during pre-training, predicting the next tokens (usually content tokens) that follow function tokens increases the number of learned features and updates the model parameters (memory consolidation). Function tokens here roughly correspond to function words in linguistics, including punctuation marks, articles, prepositions, and conjunctions, in contrast to content tokens. We provide extensive experimental evidence supporting this hypothesis. Using bipartite graph analysis, we show that a small number of function tokens activate the majority of features. Case studies further reveal how function tokens activate the most predictive features from context to direct next-token prediction. We also find that during pre-training, the training loss is dominated by predicting the next content tokens following function tokens, which forces the function tokens to select the most predictive features from context.
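The function/content split the abstract describes can be illustrated with a minimal sketch. The word lists below are illustrative stand-ins for the linguistic categories named in the abstract (punctuation, articles, prepositions, conjunctions), not the paper's actual token taxonomy or tokenizer vocabulary:

```python
# Hypothetical sketch: partition a token stream into "function tokens"
# (roughly function words: punctuation, articles, prepositions, conjunctions)
# and "content tokens" (everything else). Word lists are illustrative only.
import re

FUNCTION_WORDS = {
    "the", "a", "an",                      # articles
    "of", "in", "on", "at", "to", "by",    # prepositions
    "and", "or", "but",                    # conjunctions
    ",", ".", ";", ":", "(", ")",          # punctuation
}

def tokenize(text: str) -> list[str]:
    # Split into lowercase words and individual punctuation marks.
    return re.findall(r"\w+|[^\w\s]", text.lower())

def split_tokens(text: str) -> tuple[list[str], list[str]]:
    tokens = tokenize(text)
    function_toks = [t for t in tokens if t in FUNCTION_WORDS]
    content_toks = [t for t in tokens if t not in FUNCTION_WORDS]
    return function_toks, content_toks

f_toks, c_toks = split_tokens("The cat sat on the mat, and the dog barked.")
print(f_toks)  # the, on, the, ",", and, the, "."
print(c_toks)  # cat, sat, mat, dog, barked
```

Even in this toy sentence, function tokens outnumber content tokens, which is consistent with the abstract's observation that a small set of frequent function tokens accounts for much of the prediction behavior.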
GhostShell: Streaming LLM Function Calls for Concurrent Embodied Programming
Gong, Jian, Huang, Youwei, Yuan, Bo, Zhu, Ming, Liao, Zhou, Liang, Jianhang, Zhan, Juncheng, Wang, Jinke, Shu, Hang, Xiong, Mingyue, Ye, Yanjun, Zu, Yufan, Zhou, Yang, Ding, Yihan, Chen, Xuannian, Lu, Xingyu, Ban, Runjie, Huang, Bingchao, Liu, Fusen
We present GhostShell, a novel approach that leverages Large Language Models (LLMs) to enable streaming and concurrent behavioral programming for embodied systems. In contrast to conventional methods that rely on pre-scheduled action sequences or behavior trees, GhostShell drives embodied systems to act on-the-fly by issuing function calls incrementally as tokens are streamed from the LLM. GhostShell features a streaming XML function token parser, a dynamic function interface mapper, and a multi-channel scheduler that orchestrates intra-channel synchronous and inter-channel asynchronous function calls, thereby coordinating serial-parallel embodied actions across multiple robotic components under LLM guidance. We evaluate GhostShell on our robotic prototype COCO through comprehensive grounded experiments across 34 real-world interaction tasks and multiple LLM backends. The results demonstrate that our approach achieves a state-of-the-art Behavioral Correctness Metric of 0.85 with Claude-4-Sonnet, and up to 66X faster response times compared to native LLM function calling APIs. GhostShell also proves effective in long-horizon multimodal tasks, exhibiting strong robustness and generalization capabilities.
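The core idea of issuing function calls incrementally as tokens stream from the LLM can be sketched as follows. This is not GhostShell's actual parser; the `<call name="...">` tag format and the dispatch interface are assumptions for illustration:

```python
# Illustrative sketch of streaming function-call dispatch: scan an incoming
# LLM token stream and fire a callback the moment each XML-style call element
# closes, rather than waiting for the full response. Tag format is assumed.
import re

class StreamingCallParser:
    CALL_RE = re.compile(r'<call\s+name="(\w+)">(.*?)</call>', re.DOTALL)

    def __init__(self, dispatch):
        self.buffer = ""
        self.dispatch = dispatch  # invoked with (name, args) per completed call

    def feed(self, token: str) -> None:
        """Append one streamed token; dispatch any newly completed calls."""
        self.buffer += token
        while (m := self.CALL_RE.search(self.buffer)):
            self.dispatch(m.group(1), m.group(2).strip())
            self.buffer = self.buffer[m.end():]  # keep the unparsed tail

calls = []
parser = StreamingCallParser(lambda name, args: calls.append((name, args)))
# Tokens arrive in arbitrary chunks; a call may span several of them.
for tok in ['<call name="move', '_arm">left', '</call><call name="speak">hi</call>']:
    parser.feed(tok)
print(calls)  # [('move_arm', 'left'), ('speak', 'hi')]
```

Buffering the unparsed tail lets calls split across token boundaries complete correctly, which is the property that makes on-the-fly execution possible; a scheduler like the one the abstract describes would sit behind `dispatch`, routing each call to a synchronous or asynchronous channel.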