Rudman, William
Forgotten Polygons: Multimodal Large Language Models are Shape-Blind
Rudman, William, Golovanesky, Michal, Bar, Amir, Palit, Vedant, LeCun, Yann, Eickhoff, Carsten, Singh, Ritambhara
Despite strong performance on vision-language tasks, Multimodal Large Language Models (MLLMs) struggle with mathematical problem-solving, with both open-source and state-of-the-art models falling short of human performance on visual-math benchmarks. To systematically examine visual-mathematical reasoning in MLLMs, we (1) evaluate their understanding of geometric primitives, (2) test multi-step reasoning, and (3) explore a potential solution to improve visual reasoning capabilities. Our findings reveal fundamental shortcomings in shape recognition, with top models achieving under 50% accuracy in identifying regular polygons. We analyze these failures through the lens of dual-process theory and show that MLLMs rely on System 1 (intuitive, memorized associations) rather than System 2 (deliberate reasoning). Consequently, MLLMs fail to count the sides of both familiar and novel shapes, suggesting they have neither learned the concept of sides nor effectively process visual inputs. Finally, we propose Visually Cued Chain-of-Thought (VC-CoT) prompting, which enhances multi-step mathematical reasoning by explicitly referencing visual annotations in diagrams, boosting GPT-4o's accuracy on an irregular polygon side-counting task from 7% to 93%. Our findings suggest that System 2 reasoning in MLLMs remains an open problem, and visually-guided prompting is essential for successfully engaging visual reasoning. Code available at: https://github.com/rsinghlab/Shape-Blind.
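The gist of VC-CoT prompting can be shown with a short sketch: the prompt directs the model to read off the visual annotations (e.g., vertex labels) drawn in the diagram before answering. Below is a minimal illustration using the OpenAI Python client; the prompt wording, the image URL, and the vertex-labeling scheme are assumptions for illustration, not the paper's released prompts.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical image of a polygon whose vertices have been annotated with letters.
image_url = "https://example.com/annotated_polygon.png"

# Illustrative visually cued prompt: the annotations in the diagram serve as
# explicit anchors for each reasoning step.
vc_cot_prompt = (
    "The vertices of the shape are labeled with letters. "
    "Step 1: list every vertex label you can see in the image. "
    "Step 2: count the labels you listed. "
    "Step 3: use that count to answer: how many sides does this polygon have?"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": vc_cot_prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
)
print(response.choices[0].message.content)
```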
Outlier Dimensions Encode Task-Specific Knowledge
Rudman, William, Chen, Catherine, Eickhoff, Carsten
Representations of transformer-based LLMs are dominated by a few outlier dimensions whose variance and magnitude are significantly larger than the rest of the model's representations (Timkey and van Schijndel, 2021; Kovaleva et al., 2021). Previous studies devoted to the formation of outlier dimensions in pre-trained LLMs suggest that imbalanced token frequency causes an uneven distribution of variance in model representations (Gao et al., 2019; Puccetti et al., 2022). Although many argue that outlier dimensions "disrupt" model representations, making them less interpretable and hindering model performance, ablating outlier dimensions has been shown to cause downstream performance to decrease dramatically (Kovaleva et al., 2021; Puccetti et al., 2022).

Two seminal works discovered the presence of "outlier" (Kovaleva et al., 2021) or "rogue" (Timkey and van Schijndel, 2021) dimensions in pre-trained LLMs. Following Kovaleva et al. (2021) and Puccetti et al. (2022), we define outlier dimensions as dimensions in LLM representations whose variance is at least 5x larger than the average variance in the global vector space. The formation of outlier dimensions is caused by a token imbalance in the pre-training data, with more common tokens having much higher norms in the outlier dimensions compared to rare tokens (Gao et al., 2019; Puccetti et al., 2022). Although the community agrees on the origin of outlier dimensions, their impact on the representational quality of pre-trained LLMs has been widely contested.
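The 5x-variance definition of outlier dimensions is straightforward to operationalize. A minimal sketch, assuming the activations have already been collected into a (tokens x hidden_dim) matrix:

```python
import numpy as np

def find_outlier_dimensions(activations: np.ndarray, ratio: float = 5.0) -> np.ndarray:
    """Indices of dimensions whose variance is at least `ratio` times the
    average per-dimension variance (the 5x definition used above)."""
    variances = activations.var(axis=0)
    return np.where(variances >= ratio * variances.mean())[0]

# Toy demonstration: 1,000 synthetic 768-dimensional "activations" in which a
# single dimension is inflated to have roughly 100x the typical variance.
rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 768))
acts[:, 42] *= 10
print(find_outlier_dimensions(acts))  # -> [42]
```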
Stable Anisotropic Regularization
Rudman, William, Eickhoff, Carsten
Given the success of Large Language Models (LLMs), there has been considerable interest in studying the properties of model activations. The literature overwhelmingly agrees that LLM representations are dominated by a few "outlier dimensions" with exceedingly high variance and magnitude. Several studies in Natural Language Processing (NLP) have sought to mitigate the impact of such outlier dimensions and force LLMs to be isotropic (i.e., have uniform variance across all dimensions in embedding space). Isotropy is thought to be a desirable property for LLMs that improves model performance and more closely aligns textual representations with human intuition. However, many of the claims regarding isotropy in NLP have been based on the average cosine similarity of embeddings, which has recently been shown to be a flawed measure of isotropy. In this paper, we propose I-STAR: IsoScore*-based STable Anisotropic Regularization, a novel regularization method that can be used to increase or decrease levels of isotropy in embedding space during training. I-STAR uses IsoScore*, the first accurate measure of isotropy that is both differentiable and stable on mini-batch computations. In contrast to several previous works, we find that decreasing isotropy in contextualized embeddings improves performance on the majority of tasks and models considered in this paper.
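At a high level, I-STAR adds an isotropy term to the task loss so that gradient descent can push mini-batch embeddings toward or away from isotropy. The sketch below illustrates only that pattern: the entropy-of-variance proxy stands in for IsoScore* (whose exact, mini-batch-stable formulation is given in the paper), and the sign convention for the tuning weight `lam` is an assumption made for this illustration.

```python
import torch

def variance_uniformity(embeddings: torch.Tensor) -> torch.Tensor:
    """Differentiable proxy for isotropy on a mini-batch: 1.0 when variance is
    spread evenly across dimensions, lower when a few dimensions dominate.
    NOTE: a stand-in for illustration, not IsoScore* itself."""
    var = embeddings.var(dim=0) + 1e-8                 # per-dimension variance
    p = var / var.sum()                                # variance as a distribution
    entropy = -(p * p.log()).sum()
    return entropy / torch.log(torch.tensor(float(embeddings.shape[-1])))

def istar_style_loss(task_loss: torch.Tensor,
                     embeddings: torch.Tensor,
                     lam: float) -> torch.Tensor:
    """lam > 0 rewards isotropy; lam < 0 pushes embeddings toward anisotropy,
    the direction the abstract reports as helpful on most tasks."""
    return task_loss - lam * variance_uniformity(embeddings)
```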
Garden-Path Traversal in GPT-2
Jurayj, William, Rudman, William, Eickhoff, Carsten
In recent years, large-scale transformer decoders such as the GPT-x family of models have become increasingly popular. Studies examining the behavior of these models tend to focus only on the output of the language modeling head and avoid analysis of the internal states of the transformer decoder. In this study, we present a collection of methods to analyze the hidden states of GPT-2 and use the model's navigation of garden path sentences as a case study. To enable this, we compile the largest currently available dataset of garden path sentences. We show that Manhattan distances and cosine similarities provide more reliable insights compared to established surprisal methods that analyze next-token probabilities computed by a language modeling head. Using these methods, we find that negating tokens have minimal impacts on the model's representations for unambiguous forms of sentences with ambiguity solely over what the object of a verb is, but have a more substantial impact on representations for unambiguous sentences whose ambiguity would stem from the voice of a verb. Further, we find that analyzing the decoder model's hidden states reveals periods of ambiguity that might conclude in a garden path effect but happen not to, whereas surprisal analyses routinely miss this detail.
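The core measurement in the hidden-state analysis, comparing the representations of two sentence variants layer by layer with cosine similarity and Manhattan distance, can be reproduced in a few lines with Hugging Face transformers. The sentence pair and the choice to compare final-token states below are illustrative stand-ins for the paper's dataset and protocol:

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def last_token_states(sentence: str):
    """Hidden state of the final token at every layer (13 tensors for GPT-2 small)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return [layer[0, -1] for layer in outputs.hidden_states]

# Classic garden-path sentence and an unambiguous paraphrase (illustrative pair).
garden_path = "The horse raced past the barn fell."
unambiguous = "The horse that was raced past the barn fell."

for layer, (h_gp, h_un) in enumerate(zip(last_token_states(garden_path),
                                         last_token_states(unambiguous))):
    cosine = torch.nn.functional.cosine_similarity(h_gp, h_un, dim=0).item()
    manhattan = torch.abs(h_gp - h_un).sum().item()
    print(f"layer {layer:2d}  cosine={cosine:.3f}  manhattan={manhattan:.1f}")
```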
IsoScore: Measuring the Uniformity of Embedding Space Utilization
Rudman, William, Gillman, Nate, Rayne, Taylor, Eickhoff, Carsten
The recent success of distributed word representations has led to an increased interest in analyzing the properties of their spatial distribution. Several studies have suggested that contextualized word embedding models do not isotropically project tokens into vector space. However, current methods designed to measure isotropy, such as average random cosine similarity and the partition score, have not been thoroughly analyzed and are not appropriate for measuring isotropy. We propose IsoScore: a novel tool that quantifies the degree to which a point cloud uniformly utilizes the ambient vector space. Using rigorously designed tests, we demonstrate that IsoScore is the only tool available in the literature that accurately measures how uniformly distributed variance is across dimensions in vector space. Additionally, we use IsoScore to challenge a number of recent conclusions in the NLP literature that have been derived using brittle metrics of isotropy. We caution future studies against using existing tools to measure isotropy in contextualized embedding space, as the resulting conclusions will be misleading or altogether inaccurate.
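The quantity IsoScore targets, how evenly variance is spread across the principal axes of a point cloud, can be sketched as follows. This is a simplified illustration rather than the paper's exact normalization: it reorients the cloud along its principal components and scores the uniformity of the resulting variance profile on a 0-to-1 scale.

```python
import numpy as np

def isotropy_sketch(points: np.ndarray) -> float:
    """Simplified, illustrative proxy for IsoScore: 1.0 when variance is spread
    evenly across all principal axes, near 0.0 when a single direction carries
    all of it. The exact IsoScore normalization differs; see the paper."""
    cov = np.cov(points, rowvar=False)
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)  # variance per principal axis
    p = eigvals / eigvals.sum()
    n = len(p)
    # Distance from the uniform variance profile, rescaled so the worst case
    # (all variance in one direction) maps to 1.0.
    defect = np.linalg.norm(p - 1.0 / n) / np.sqrt((n - 1) / n)
    return 1.0 - defect

rng = np.random.default_rng(0)
isotropic_cloud = rng.normal(size=(5000, 100))                 # spherical Gaussian
one_dim_cloud = np.outer(rng.normal(size=5000), np.ones(100))  # variance along one line
print(isotropy_sketch(isotropic_cloud))  # close to 1
print(isotropy_sketch(one_dim_cloud))    # close to 0
```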