The Semantic Hub Hypothesis: Language Models Share Semantic Representations Across Languages and Modalities

Zhaofeng Wu, Xinyan Velocity Yu, Dani Yogatama, Jiasen Lu, Yoon Kim

arXiv.org Artificial Intelligence 

Modern language models can process inputs across diverse languages and modalities. We hypothesize that models acquire this capability through learning a shared representation space across heterogeneous data types (e.g., different languages and modalities), which places semantically similar inputs near one another even if they are from different modalities or languages. We term this the semantic hub hypothesis, following the hub-and-spoke model from neuroscience (Patterson et al., 2007), which posits that semantic knowledge in the human brain is organized through a transmodal semantic "hub" that integrates information from various modality-specific "spoke" regions. We first show that model representations for semantically equivalent inputs in different languages are similar in the intermediate layers, and that this space can be interpreted using the model's dominant pretraining language via the logit lens. This tendency extends to other data types, including arithmetic expressions, code, and visual/audio inputs. Interventions in the shared representation space in one data type also predictably affect model outputs in other data types, suggesting that the shared representation space is not simply a vestigial byproduct of large-scale training on broad data, but is actively used by the model during input processing.

Figure caption: For every other layer, we show the closest output token to the hidden state under the logit lens. Llama-3's hidden states are often closest to English tokens when processing Chinese text, arithmetic expressions, and code, in a semantically corresponding way. LLaVA, a vision-language model, and SALMONN, an audio-language model, behave similarly when processing images/audio. As shown for the arithmetic-expression example, models can be intervened on cross-lingually or cross-modally, e.g., in English even though the input is non-English, and steered toward corresponding effects. Boldface is for emphasis only.

How do LMs process these distinct data types with a single set of parameters? One strategy might be to learn specialized subspaces for each data type that are employed only when processing it. In many cases, however, data types that are surface-distinct share underlying semantic concepts.
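To make the logit-lens reading concrete, here is a minimal sketch assuming a Hugging Face causal LM; the checkpoint name, the Chinese example input, and the choice to decode only the final token position are illustrative assumptions, not the paper's exact setup. The standard logit-lens recipe applies the model's final norm and unembedding head to each intermediate hidden state.

```python
# Minimal logit-lens sketch (assumption: a Llama-style Hugging Face checkpoint,
# decoded through the model's own final norm + lm_head at each layer).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B"  # illustrative choice
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

text = "北京是中国的首都"  # "Beijing is the capital of China"
inputs = tok(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states: tuple of (num_layers + 1) tensors of shape
# (batch, seq_len, hidden_dim), from embeddings to the final layer.
last_pos = inputs["input_ids"].shape[1] - 1
for layer_idx, h in enumerate(out.hidden_states):
    # Mirror the model's own output head: final RMSNorm, then unembedding.
    h_norm = model.model.norm(h[:, last_pos, :])
    logits = model.lm_head(h_norm)
    top_token = tok.decode(logits.argmax(dim=-1))
    print(f"layer {layer_idx:2d}: {top_token!r}")
```

Under the semantic hub hypothesis, a run like this on non-English input would surface English tokens at intermediate layers, with the input's own language reappearing only near the output layers.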
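The interventions mentioned in the abstract can likewise be sketched as adding a steering vector to an intermediate layer's residual stream via a forward hook. This is a hedged sketch, not the paper's exact procedure: the layer index, the scale `alpha`, and the random placeholder direction are assumptions; a real experiment would derive the direction from, e.g., English representations of the target concept.

```python
# Activation-intervention sketch: steer an intermediate layer's hidden states.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16
)

def make_steering_hook(steer_vec: torch.Tensor, alpha: float = 4.0):
    def hook(module, inputs, output):
        # Decoder layers return a tuple whose first element is the hidden states.
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * steer_vec.to(hidden.dtype)
        if isinstance(output, tuple):
            return (hidden,) + output[1:]
        return hidden
    return hook

layer_idx = 16  # an intermediate layer, where the shared space is most pronounced
steer_vec = torch.randn(model.config.hidden_size)  # placeholder concept direction
handle = model.model.layers[layer_idx].register_forward_hook(
    make_steering_hook(steer_vec)
)
# ... generate with the intervention active; the hypothesis predicts the effect
# carries over to inputs in other languages/modalities. Then clean up:
handle.remove()
```

The key design point is that the intervention is applied once, in the shared intermediate space, yet is predicted to shift outputs for inputs of a different data type than the one the steering direction was derived from.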