Masked Image Modeling Supplementary Material
Anonymous Author(s)

1 More Training Details
We use the same settings for RevCol models of different sizes in MIM pre-training. The hyper-parameters generally follow [4, 2]. Table 3 shows the detailed training settings after MIM pre-training. We also show training settings on ImageNet-1K after ImageNet-22K fine-tuning. For semantic segmentation, we evaluate the different backbones on the ADE20K dataset.
Evolution Strategies at the Hyperscale
Sarkar, Bidipta, Fellows, Mattie, Duque, Juan Agustin, Letcher, Alistair, Villares, Antonio León, Sims, Anya, Cope, Dylan, Liesen, Jarek, Seier, Lukas, Wolf, Theo, Berdica, Uljad, Goldie, Alexander David, Courville, Aaron, Sevegnani, Karin, Whiteson, Shimon, Foerster, Jakob Nicolaus
We introduce Evolution Guided General Optimization via Low-rank Learning (EGGROLL), an evolution strategies (ES) algorithm designed to scale backprop-free optimization to large population sizes for modern large neural network architectures with billions of parameters. ES is a set of powerful blackbox optimisation methods that can handle non-differentiable or noisy objectives with excellent scaling potential through parallelisation. Naïve ES becomes prohibitively expensive at scale due to the computational and memory costs associated with generating matrix perturbations $E\in\mathbb{R}^{m\times n}$ and the batched matrix multiplications needed to compute per-member forward passes. EGGROLL overcomes these bottlenecks by generating random matrices $A\in \mathbb{R}^{m\times r},\ B\in \mathbb{R}^{n\times r}$ with $r\ll \min(m,n)$ to form a low-rank matrix perturbation $A B^\top$ that is used in place of the full-rank perturbation $E$. As the overall update is an average across a population of $N$ workers, this still results in a high-rank update but with significant memory and computation savings, reducing the auxiliary storage from $mn$ to $r(m+n)$ per layer and the cost of a forward pass from $\mathcal{O}(mn)$ to $\mathcal{O}(r(m+n))$ when compared to full-rank ES. A theoretical analysis reveals our low-rank update converges to the full-rank update at a fast $\mathcal{O}\left(\frac{1}{r}\right)$ rate. Our experiments show that (1) EGGROLL does not compromise the performance of ES in tabula-rasa RL settings, despite being faster, (2) it is competitive with GRPO as a technique for improving LLM reasoning, and (3) EGGROLL enables stable pre-training of nonlinear recurrent language models that operate purely in integer datatypes.
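The low-rank perturbation idea in the abstract can be illustrated with a short NumPy sketch. This is not the authors' implementation: the function name, hyper-parameters, and the standard fitness-shaping step are assumptions; it only shows how each population member stores factors $A, B$ of size $r(m+n)$ instead of a full $m \times n$ perturbation, while the population average still yields a high-rank update.

```python
import numpy as np

def eggroll_step(W, fitness_fn, pop_size=64, rank=4, sigma=0.1, lr=0.01, rng=None):
    """One ES step with low-rank perturbations (sketch of the EGGROLL idea).

    Instead of sampling a full m x n Gaussian perturbation E per member,
    each member draws A (m x r) and B (n x r) and uses E_i = A_i B_i^T / sqrt(r),
    so only r * (m + n) auxiliary numbers are stored per layer per member.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    m, n = W.shape
    factors, scores = [], []
    for _ in range(pop_size):
        A = rng.standard_normal((m, rank))
        B = rng.standard_normal((n, rank))
        factors.append((A, B))
        # Evaluate the perturbed parameters; each perturbation is rank <= r.
        scores.append(fitness_fn(W + sigma * (A @ B.T) / np.sqrt(rank)))
    scores = np.asarray(scores, dtype=float)
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # fitness shaping (assumed)
    # Averaging over the population combines N rank-r terms into a high-rank update.
    grad_est = sum(s * (A @ B.T) / np.sqrt(rank) for s, (A, B) in zip(scores, factors))
    return W + (lr / (pop_size * sigma)) * grad_est
```

A toy usage: repeatedly calling `eggroll_step` with the fitness `-(||W - target||^2)` drives `W` toward `target`, without any gradients of the objective being computed.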
The Road Less Scheduled
Aaron Defazio, Fundamental AI Research Team, Meta
Xingyu (Alice) Yang
Recently, Zamani and Glineur (2023) and Defazio et al. (2023) showed that the exact worst-case

Our approach uses an alternative form of momentum in place of the traditional one. From this viewpoint, the Schedule-Free updates can be seen as a version of momentum that has the same immediate effect, but with a greater delay before the remainder of the gradient is added in.