Line of Sight: On Linear Representations in VLLMs
Achyuta Rajaram, Sarah Schwettmann, Jacob Andreas, Arthur Conmy
arXiv.org Artificial Intelligence
Language models can be equipped with multimodal capabilities by fine-tuning on embeddings of visual inputs. But how do such multimodal models represent images in their hidden activations? We explore representations of image concepts within LLaVA-NeXT, a popular open-source VLLM. We find a diverse set of ImageNet classes represented via linearly decodable features in the residual stream. We show that these features are causal by performing targeted edits on the model output. To increase the diversity of the studied linear features, we train multimodal Sparse Autoencoders (SAEs), creating a highly interpretable dictionary of text and image features. We find that although model representations across modalities are quite disjoint, they become increasingly shared in deeper layers.
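The "linearly decodable" claim above is typically tested with a linear probe: a logistic-regression classifier fit on frozen residual-stream activations, where high held-out accuracy indicates the class is represented along a linear direction. A minimal sketch of that setup, using synthetic activations with a planted direction in place of real LLaVA-NeXT activations (dimensions and data here are illustrative assumptions, not the paper's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for residual-stream activations: two "ImageNet classes"
# separated along a single planted unit direction, plus isotropic noise.
d_model = 64
n_per_class = 200
direction = rng.normal(size=d_model)
direction /= np.linalg.norm(direction)

noise = rng.normal(size=(2 * n_per_class, d_model))
labels = np.repeat([0, 1], n_per_class)
acts = noise + np.outer(labels * 2.0 - 1.0, direction) * 2.0

# Linear probe: logistic regression on frozen activations.
# Even/odd split gives a simple train/held-out partition.
probe = LogisticRegression(max_iter=1000).fit(acts[::2], labels[::2])
acc = probe.score(acts[1::2], labels[1::2])
print(f"held-out probe accuracy: {acc:.2f}")
```

With real model activations, `acts` would be hidden states collected at a fixed layer for images of each class; the probe's weight vector then serves as a candidate feature direction for the causal editing experiments.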
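The sparse autoencoders mentioned above learn an overcomplete dictionary of features from activations: a ReLU encoder produces nonnegative feature coefficients, a linear decoder reconstructs the activation, and an L1 penalty pushes most coefficients to zero. A minimal sketch of that architecture and its loss (dimensions, initialization, and the L1 coefficient are illustrative assumptions; no training loop is shown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Overcomplete dictionary: more features than residual-stream dimensions.
d_model, d_dict = 64, 512

W_enc = rng.normal(scale=0.1, size=(d_model, d_dict))
b_enc = np.zeros(d_dict)
W_dec = rng.normal(scale=0.1, size=(d_dict, d_model))
b_dec = np.zeros(d_model)

def encode(x):
    # ReLU encoder: nonnegative feature activations; during training the
    # L1 penalty drives most of them to exactly zero (sparsity).
    return np.maximum(x @ W_enc + b_enc, 0.0)

def decode(f):
    # Linear decoder: each feature contributes one dictionary direction.
    return f @ W_dec + b_dec

def sae_loss(x, l1_coeff=1e-3):
    f = encode(x)
    mse = np.mean((x - decode(f)) ** 2)        # reconstruction error
    l1 = l1_coeff * np.abs(f).sum(axis=-1).mean()  # sparsity penalty
    return mse + l1

# Stand-in batch for residual-stream activations.
x = rng.normal(size=(8, d_model))
feats = encode(x)
print(feats.shape, round(float(sae_loss(x)), 3))
```

Training such an SAE on activations from both image and text tokens is what yields the shared multimodal feature dictionary described in the abstract.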
Jun 6, 2025