The cell as a token: high-dimensional geometry in language models and cell embeddings
arXiv.org Artificial Intelligence
The implicit goal of modern single-cell technologies is to decompile the cell: to abstract it away from its squishy context and render it as a single point in a high-dimensional vector space. But how do we know if this space is meaningful? This process mirrors parallel developments in machine learning, where large language models ingest unstructured text by converting words into discrete tokens embedded within a high-dimensional vector space. This perspective explores how advances in understanding the structure of language embeddings can inform ongoing efforts to analyze and visualize single-cell datasets. We discuss how the context of tokens influences the geometry of embedding space, and the role of low-dimensional manifolds in shaping this space's robustness and interpretability. We highlight new developments in language modeling, such as interpretability probes and in-context reasoning, that can inform future efforts to construct and consolidate cell atlases.
Mar-26-2025
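The abstract's core idea, rendering each cell as a single point in a high-dimensional vector space and then seeking a low-dimensional manifold, can be sketched with simulated data. This is a hypothetical illustration, not the authors' method: the counts are random, and the normalization and PCA steps are generic single-cell conventions, assumed here for concreteness.

```python
import numpy as np

# Hypothetical sketch: treat each cell as a "token" embedded in a
# high-dimensional expression space, then project it onto a
# low-dimensional subspace. The counts are simulated, not real data.
rng = np.random.default_rng(0)
n_cells, n_genes, n_dims = 200, 1000, 16

counts = rng.poisson(2.0, size=(n_cells, n_genes)).astype(float)

# Library-size normalization and log transform (a common preprocessing step).
norm = counts / counts.sum(axis=1, keepdims=True) * 1e4
log_expr = np.log1p(norm)

# PCA via SVD: each row of `embedding` is one cell as a point in a
# 16-dimensional space, the "cell as a token" representation.
centered = log_expr - log_expr.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedding = centered @ vt[:n_dims].T

print(embedding.shape)  # (200, 16)
```

Whether such a space is "meaningful", the question the abstract poses, depends on whether distances and directions in it track biological structure rather than technical noise.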