
Collaborating Authors: Yun, Zeyu


URLOST: Unsupervised Representation Learning without Stationarity or Topology

arXiv.org Artificial Intelligence

Unsupervised representation learning has seen tremendous progress but is constrained by its reliance on modality-specific stationarity and topology in the data, a limitation not found in biological intelligence systems. For instance, human vision processes visual signals derived from an irregular, non-stationary sampling lattice yet accurately perceives the geometry of the world. We introduce a novel framework that learns from high-dimensional data lacking stationarity and topology. Our model combines a learnable self-organizing layer, density-adjusted spectral clustering, and masked autoencoders. We evaluate its effectiveness on simulated biological vision data, neural recordings from the primary visual cortex, and gene expression datasets. Compared to state-of-the-art unsupervised learning methods such as SimCLR and MAE, our model excels at learning meaningful representations across diverse modalities without depending on stationarity or topology. It also outperforms other methods that do not depend on these factors, setting a new benchmark in the field. This work represents a step toward unsupervised learning methods that generalize across diverse high-dimensional data modalities.
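
As a concrete illustration of the pipeline, the following Python sketch (using NumPy and scikit-learn, not the authors' code) shows one way to recover a patch structure from data whose coordinate topology is unknown and then apply MAE-style masking over those patches. The correlation-based affinity and plain spectral clustering stand in for the paper's learnable self-organizing layer and density-adjusted clustering; the function names, hyperparameters, and masking ratio are illustrative assumptions.

import numpy as np
from sklearn.cluster import SpectralClustering

def infer_patches(X, n_patches=16, seed=0):
    """Group input dimensions into pseudo-patches from data statistics alone.

    X: (n_samples, n_dims) array with no assumed grid layout.
    Returns a list of index arrays, one per patch.
    """
    # Affinity between dimensions: absolute correlation acts as a proxy for
    # spatial adjacency when the true sampling lattice is unknown.
    corr = np.corrcoef(X, rowvar=False)
    affinity = np.abs(np.nan_to_num(corr))
    labels = SpectralClustering(
        n_clusters=n_patches, affinity="precomputed", random_state=seed
    ).fit_predict(affinity)
    return [np.where(labels == k)[0] for k in range(n_patches)]

def mask_patches(X, patches, mask_ratio=0.75, seed=0):
    """Zero out a random subset of patches, mimicking MAE-style masking."""
    rng = np.random.default_rng(seed)
    n_masked = int(round(mask_ratio * len(patches)))
    masked_ids = rng.choice(len(patches), size=n_masked, replace=False)
    X_masked = X.copy()
    for k in masked_ids:
        X_masked[:, patches[k]] = 0.0
    return X_masked, masked_ids

In the full model, the masked patches would be passed to a masked autoencoder that reconstructs them, so the learned representation must capture the statistical structure tying dimensions together even when no topology is given.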


Minimalistic Unsupervised Learning with the Sparse Manifold Transform

arXiv.org Artificial Intelligence

We describe a minimalistic and interpretable method for unsupervised representation learning that does not require data augmentation, hyperparameter tuning, or other engineering designs, yet achieves performance close to state-of-the-art (SOTA) SSL methods. Our approach leverages the sparse manifold transform [21], which unifies sparse coding, manifold learning, and slow feature analysis. With a one-layer deterministic (one training epoch) sparse manifold transform, it is possible to achieve 99.3% KNN top-1 accuracy on MNIST, 81.1% on CIFAR-10, and 53.2% on CIFAR-100. With simple grayscale augmentation, the model achieves 83.2% KNN top-1 accuracy on CIFAR-10 and 57% on CIFAR-100. These results significantly close the gap between simplistic "white-box" methods and SOTA methods. We also provide visualizations to illustrate how an unsupervised representation transform is formed. The proposed method is closely connected to latent-embedding self-supervised methods and can be treated as the simplest form of VICReg. Though a small performance gap remains between our simple constructive model and SOTA methods, the evidence points to this as a promising direction for achieving a principled, white-box approach to unsupervised representation learning, which has the potential to significantly improve learning efficiency.

Unsupervised representation learning (also known as self-supervised representation learning) aims to build models that automatically find patterns in data and reveal those patterns explicitly with a representation. There has been tremendous progress in the unsupervised representation learning community over the past few years, and this trend promises unparalleled scalability for future data-driven machine learning. However, questions remain about what exactly a representation is and how it is formed in an unsupervised fashion. Furthermore, it is unclear whether there exists a set of common principles underlying all of these unsupervised representations. The hope is that such understanding will lead to a theory that enables us to build simple, fully explainable "white-box" models [14; 13; 71] from data based on first principles. Such a computational theory could guide us in achieving two intertwined fundamental goals: modeling natural signal statistics and modeling biological sensory systems [83; 31; 32; 65]. Here, we take a small step toward this goal by building a minimalistic white-box unsupervised learning model without deep networks, projection heads, augmentation, or other similar engineering designs. By leveraging the classical unsupervised learning principles of sparsity [81; 82] and low-rank spectral embedding [89; 105], we build a two-layer model that achieves non-trivial benchmark results on several standard datasets. With simple grayscale augmentation, it achieves 83.2% KNN top-1 accuracy on CIFAR-10 and 57% KNN top-1 accuracy on CIFAR-100.
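
As a rough illustration of the two-layer recipe (sparsity followed by low-rank spectral embedding) and of the KNN top-1 protocol behind the numbers above, here is a hedged Python sketch using scikit-learn. It is not the paper's sparse manifold transform: a generic dictionary learner and a truncated SVD stand in for the actual sparse coding and spectral embedding steps, and the dictionary size, embedding dimension, and neighborhood size are illustrative assumptions.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, TruncatedSVD
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import normalize

def sparse_then_spectral(X_train, X_test, n_atoms=512, n_components=64, seed=0):
    # Stage 1: sparsify each sample against a learned dictionary.
    dico = MiniBatchDictionaryLearning(
        n_components=n_atoms, alpha=1.0, batch_size=256, random_state=seed
    ).fit(X_train)
    A_train, A_test = dico.transform(X_train), dico.transform(X_test)
    # Stage 2: low-rank (spectral-style) embedding of the sparse codes.
    svd = TruncatedSVD(n_components=n_components, random_state=seed).fit(A_train)
    return normalize(svd.transform(A_train)), normalize(svd.transform(A_test))

def knn_top1(train_feats, y_train, test_feats, y_test, k=30):
    # KNN top-1 protocol: classify each test point by its nearest neighbors
    # in the frozen representation and report plain accuracy.
    clf = KNeighborsClassifier(n_neighbors=k, metric="cosine")
    clf.fit(train_feats, y_train)
    return float((clf.predict(test_feats) == y_test).mean())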


Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors

arXiv.org Artificial Intelligence

Transformer networks have revolutionized NLP representation learning since their introduction. Though great effort has been made to explain the representations learned by transformers, it is widely recognized that our understanding is not sufficient. One important reason is the lack of visualization tools for detailed analysis. In this paper, we propose to use dictionary learning to open up these "black boxes" by describing contextualized embeddings as linear superpositions of transformer factors. Through visualization, we demonstrate the hierarchical semantic structures captured by the transformer factors, e.g., word-level polysemy disambiguation, sentence-level pattern formation, and long-range dependency. While some of these patterns confirm conventional linguistic knowledge, others are relatively unexpected and may provide new insights. We hope this visualization tool can bring further knowledge and a better understanding of how transformer networks work. The code is available at https://github.com/zeyuyun1/TransformerVis
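
The core idea can be sketched in a few lines of Python with scikit-learn, under the assumption that contextualized embeddings H (one row per token, taken from some transformer layer) and the matching list of token strings have already been extracted; this is not the released TransformerVis code, and the dictionary size and sparsity penalty are illustrative.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_transformer_factors(H, n_factors=256, seed=0):
    """Fit an overcomplete dictionary so that each contextualized embedding is
    approximated as a sparse linear superposition of 'transformer factors'."""
    dico = MiniBatchDictionaryLearning(
        n_components=n_factors, alpha=1.0, batch_size=256, random_state=seed
    ).fit(H)
    codes = dico.transform(H)          # (n_tokens, n_factors) sparse coefficients
    return dico.components_, codes     # factors (atoms) and per-token activations

def top_tokens_for_factor(codes, tokens, factor_id, k=10):
    """Inspect one factor by listing the tokens/contexts that activate it most."""
    order = np.argsort(-codes[:, factor_id])[:k]
    return [(tokens[i], float(codes[i, factor_id])) for i in order]

Visualizing the top-activating contexts of each factor is what surfaces the word-level, sentence-level, and long-range patterns described above.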