Direct Semantic Communication Between Large Language Models via Vector Translation
Fu-Chun Yang, Jason Eshraghian
arXiv.org Artificial Intelligence
When two Large Language Models (LLMs) debate an answer, critique each other's chain of thought, or sequentially refine a shared draft, they speak through plain tokens. Every round forces each model to flatten rich internal geometry into text, operate on that text, then rebuild meaning. Computational resources are wasted, and the limited bandwidth of the token channel can erase nuance. Specialised LLMs thus operate in isolation, communicating only through text interfaces that constrain information transfer and add overhead. Encoding semantics into tokens and re-decoding them discards much of the latent structure that models use internally, blurring complex relationships in the process. Yet each LLM carries a distinct internal representation space shaped by its architecture, training objective, and data. Those spaces differ enough that raw vectors are not interchangeable, prompting the question: can semantic information encoded in one model's vector space be translated so that another model can use it directly? We demonstrate that this is possible by learning bidirectional vector translations that create a latent bridge between models. Injecting the translated vectors directly into a target model's pipeline lets the pair share meaning without serialising to tokens, enabling chains, ensembles, and parallel collaborations to run at latent speed while bypassing text-based limitations.
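The abstract does not spell out the translator's form. As a minimal sketch, assuming each direction of the bridge is a simple affine map trained on paired hidden states collected from the two models on the same inputs, with a cycle-consistency term so the two directions stay approximately inverse, the setup might look like the following. All dimensions, names, and the MSE objective are illustrative assumptions, not the paper's implementation.

```python
# Sketch: learned vector translators between two LLM hidden spaces.
# Assumptions (not from the abstract): affine maps, paired pooled
# hidden states gathered offline, MSE + cycle-consistency losses.
import torch
import torch.nn as nn

class VectorTranslator(nn.Module):
    """Maps vectors from a source model's hidden space to a target model's."""
    def __init__(self, src_dim: int, tgt_dim: int):
        super().__init__()
        self.proj = nn.Linear(src_dim, tgt_dim)

    def forward(self, h_src: torch.Tensor) -> torch.Tensor:
        return self.proj(h_src)

# Bidirectional bridge: one translator per direction.
dim_a, dim_b = 4096, 3072  # hypothetical hidden sizes of models A and B
a_to_b = VectorTranslator(dim_a, dim_b)
b_to_a = VectorTranslator(dim_b, dim_a)

opt = torch.optim.Adam(
    list(a_to_b.parameters()) + list(b_to_a.parameters()), lr=1e-4
)
mse = nn.MSELoss()

# h_a, h_b: hidden states of models A and B for the same text batch
# (e.g. pooled last-layer activations); random placeholders here.
h_a = torch.randn(32, dim_a)
h_b = torch.randn(32, dim_b)

for step in range(100):
    opt.zero_grad()
    # Direct translation losses plus cycle-consistency terms that
    # encourage the two maps to act as approximate inverses.
    loss = (mse(a_to_b(h_a), h_b)
            + mse(b_to_a(h_b), h_a)
            + mse(b_to_a(a_to_b(h_a)), h_a)
            + mse(a_to_b(b_to_a(h_b)), h_b))
    loss.backward()
    opt.step()
```

At inference time, under these assumptions, `a_to_b(h_a)` would be injected into model B's pipeline in place of a token exchange, which is what lets the pair skip the serialise-to-text round trip the abstract describes.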
Nov-7-2025