analogy
- North America > United States > California > Los Angeles County > Long Beach (0.14)
- Oceania > Australia > New South Wales > Sydney (0.04)
- North America > United States > Wisconsin > Dane County > Madison (0.04)
- (3 more...)
- North America > United States > California > Alameda County > Berkeley (0.04)
- Asia > Middle East > Jordan (0.04)
- Europe > Switzerland > Zürich > Zürich (0.05)
- North America > United States > District of Columbia > Washington (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- (4 more...)
- North America > Canada (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > Myanmar > Tanintharyi Region > Dawei (0.04)
Is turbulence really like Jell-O? Pilots weigh in.
Is turbulence really like Jell-O? Science backs up the goofy analogy. The viral TikTok video may actually hold up under scrutiny. A young woman pushes a balled-up piece of napkin into a cup of Jell-O, asking the viewer to imagine that it is an airplane, high in the air.
- South America (0.05)
- North America > United States > Massachusetts (0.05)
- North America > Central America (0.05)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.05)
Hyperdimensional Probe: Decoding LLM Representations via Vector Symbolic Architectures
Bronzini, Marco, Nicolini, Carlo, Lepri, Bruno, Staiano, Jacopo, Passerini, Andrea
Despite their capabilities, Large Language Models (LLMs) remain opaque, offering limited insight into their internal representations. Current interpretability methods either focus on input-oriented feature extraction, such as supervised probes and Sparse Autoencoders (SAEs), or on output distribution inspection, such as logit-oriented approaches. A full understanding of LLM vector spaces, however, requires integrating both perspectives, something existing approaches struggle with due to constraints on latent feature definitions. We introduce the Hyperdimensional Probe, a hybrid supervised probe that combines symbolic representations with neural probing. Leveraging Vector Symbolic Architectures (VSAs) and hypervector algebra, it unifies prior methods: the top-down interpretability of supervised probes, the sparsity-driven proxy space of SAEs, and output-oriented logit investigation. This allows deeper input-focused feature extraction while supporting output-oriented investigation. Our experiments show that our method consistently extracts meaningful concepts across LLMs, embedding sizes, and setups, uncovering concept-driven patterns in analogy-oriented inference and QA-focused text generation. By supporting joint input-output analysis, this work advances semantic understanding of neural representations while unifying the complementary perspectives of prior methods.
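The paper's exact probe construction is not reproduced here, but the hypervector algebra that VSAs rest on can be sketched with a minimal, standard example: bipolar hypervectors, binding by elementwise multiplication (which is self-inverse), bundling by elementwise majority, and cosine similarity against a symbol codebook for cleanup. All names (roles, concepts, dimensionality) below are illustrative, not taken from the paper.

```python
import random

DIM = 10_000  # high dimensionality makes independent random vectors quasi-orthogonal

def hv(symbol):
    """Random bipolar (+1/-1) hypervector; the seed string stands in for a symbol."""
    rng = random.Random(symbol)
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def bind(a, b):
    """Binding (elementwise multiply): associates a role with a filler; self-inverse."""
    return [x * y for x, y in zip(a, b)]

def bundle(*vs):
    """Bundling (elementwise majority): superposes several bound role-filler pairs."""
    return [1 if sum(col) > 0 else -1 for col in zip(*vs)]

def cosine(a, b):
    # Bipolar vectors all have norm sqrt(DIM), so cosine reduces to dot / DIM.
    return sum(x * y for x, y in zip(a, b)) / DIM

# Encode one record, e.g. {continent: North America, city: Long Beach},
# as a single hypervector of bound role-filler pairs.
continent, city = hv("role:continent"), hv("role:city")
na, lb, sydney = hv("North America"), hv("Long Beach"), hv("Sydney")
record = bundle(bind(continent, na), bind(city, lb))

# Query: unbind the 'city' role and clean up against a codebook of known symbols.
probe = bind(record, city)  # binding is self-inverse, so probe ~ lb plus noise
best = max([("Long Beach", lb), ("Sydney", sydney)], key=lambda p: cosine(probe, p[1]))
print(best[0])
```

Unbinding recovers a noisy copy of the stored filler, so the nearest codebook entry ("Long Beach" here) identifies the concept; this codebook-lookup step is what lets a VSA-based probe read symbolic content out of a dense vector space.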