White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is?
Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Hao Bai, Yuexiang Zhai, Benjamin D. Haeffele, Yi Ma
–arXiv.org Artificial Intelligence
In this paper, we contend that a natural objective of representation learning is to compress and transform the distribution of the data, say sets of tokens, towards a low-dimensional Gaussian mixture supported on incoherent subspaces. The goodness of such a representation can be evaluated by a principled measure, called sparse rate reduction, that simultaneously maximizes the intrinsic information gain and extrinsic sparsity of the learned representation. From this perspective, popular deep network architectures, including transformers, can be viewed as realizing iterative schemes to optimize this measure. In particular, we derive a transformer block from alternating optimization on parts of this objective: the multi-head self-attention operator compresses the representation by implementing an approximate gradient descent step on the coding rate of the features, and the subsequent multi-layer perceptron sparsifies the features. This leads to a family of white-box transformer-like deep network architectures, named CRATE, which are mathematically fully interpretable. We show, by way of a novel connection between denoising and compression, that the inverse to the aforementioned compressive encoding can be realized by the same class of CRATE architectures. Thus, the so-derived white-box architectures are universal to both encoders and decoders. Experiments show that these networks, despite their simplicity, indeed learn to compress and sparsify representations of large-scale real-world image and text datasets, and achieve performance very close to highly engineered transformer-based models: ViT, MAE, DINO, BERT, and GPT2. We believe the proposed computational framework demonstrates great potential in bridging the gap between theory and practice of deep learning, from a unified perspective of data compression. Code is available at: https://ma-lab-berkeley.github.io/CRATE.
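The abstract describes each block as two alternating steps: a multi-head self-attention operator that takes an approximate gradient step on the coding rate of the features, followed by an MLP that sparsifies them. The following NumPy sketch illustrates that two-step structure; the head count, subspace dimension, step size `eta`, threshold `lam`, and the orthonormal initialization of the per-head projections `U` and dictionary `D` are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def softmax(A, axis=0):
    A = A - A.max(axis=axis, keepdims=True)
    E = np.exp(A)
    return E / E.sum(axis=axis, keepdims=True)

def mssa(Z, U_heads):
    # Compression step (illustrative): for each head, project the token
    # features Z (d x n) onto a subspace, aggregate tokens by similarity,
    # and lift back -- an attention-like surrogate for a gradient step
    # on the coding rate of the features.
    out = np.zeros_like(Z)
    for U in U_heads:                    # U: d x p, one subspace per head
        V = U.T @ Z                      # projected tokens, p x n
        A = softmax(V.T @ V, axis=0)     # token-to-token similarity, n x n
        out += U @ (V @ A)               # aggregate, lift back to d dims
    return out

def ista_step(Z, D, lam=0.1, eta=0.1):
    # Sparsification step: one ISTA-style proximal gradient iteration
    # toward a sparse code of Z with respect to the dictionary D (d x d).
    return np.maximum(Z + eta * D.T @ (Z - D @ Z) - eta * lam, 0.0)

def crate_block(Z, U_heads, D):
    Z = Z + mssa(Z, U_heads)             # compress (residual attention step)
    return ista_step(Z, D)               # sparsify (nonnegative soft threshold)

# Toy forward pass through one block.
rng = np.random.default_rng(0)
d, n, heads, p = 16, 8, 4, 4
Z = rng.standard_normal((d, n))
U_heads = [np.linalg.qr(rng.standard_normal((d, p)))[0] for _ in range(heads)]
D = np.linalg.qr(rng.standard_normal((d, d)))[0]
Z_out = crate_block(Z, U_heads, D)
print(Z_out.shape)  # (16, 8)
```

The ReLU in the ISTA step makes every output entry nonnegative, so the block's output is sparse by construction; stacking such blocks gives the layer-by-layer compress-then-sparsify iteration the abstract attributes to CRATE.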
Nov-24-2023