Scaling Laws for Precision in High-Dimensional Linear Regression

Zhang, Dechen, Tang, Xuan, Liang, Yingyu, Zou, Difan

arXiv.org Machine Learning

Low-precision training is critical for optimizing the trade-off between model quality and training costs, necessitating the joint allocation of model size, dataset size, and numerical precision. While empirical scaling laws suggest that quantization impacts effective model and data capacities or acts as an additive error, the theoretical mechanisms governing these effects remain largely unexplored. In this work, we initiate a theoretical study of scaling laws for low-precision training within a high-dimensional sketched linear regression framework. By analyzing multiplicative (signal-dependent) and additive (signal-independent) quantization, we identify a critical dichotomy in their scaling behaviors. Our analysis reveals that while both schemes introduce an additive error and degrade the effective data size, they exhibit distinct effects on effective model size: multiplicative quantization maintains the full-precision model size, whereas additive quantization reduces the effective model size. Numerical experiments validate our theoretical findings. By rigorously characterizing the complex interplay among model scale, dataset size, and quantization error, our work provides a principled theoretical basis for optimizing training protocols under practical hardware constraints.
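As an illustrative aside (not taken from the paper), the multiplicative/additive dichotomy the abstract describes can be mimicked in a plain least-squares toy problem. The sketch below is hypothetical and does not reproduce the paper's sketched-regression analysis: it perturbs the design matrix with signal-dependent versus signal-independent noise at a matched scale eps and compares the resulting excess risk; the names quantize_mult, quantize_add, and eps are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n, eps = 200, 800, 0.1                      # dimension, sample size, noise scale
    beta = rng.normal(size=d) / np.sqrt(d)          # ground-truth parameter
    X = rng.normal(size=(n, d))
    y = X @ beta + 0.1 * rng.normal(size=n)

    def quantize_mult(W):
        # multiplicative (signal-dependent) perturbation: entries scaled by (1 + eps * xi)
        return W * (1.0 + eps * rng.normal(size=W.shape))

    def quantize_add(W):
        # additive (signal-independent) perturbation at a fixed absolute scale
        return W + eps * rng.normal(size=W.shape)

    for name, q in [("multiplicative", quantize_mult), ("additive", quantize_add)]:
        beta_hat = np.linalg.lstsq(q(X), y, rcond=None)[0]
        # for isotropic Gaussian covariates, the excess risk is ||beta_hat - beta||^2
        print(name, float(np.sum((beta_hat - beta) ** 2)))

Even in this toy, matched-scale multiplicative and additive perturbations degrade the estimator differently; the paper's effective model-size and data-size characterizations are not captured here.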
f66340d6f28dae6aab0176892c9065e7-Supplemental-Conference.pdf

Neural Information Processing Systems

Once closed-form expressions for these Jacobians are derived, it remains to substitute those expressions into (16); the identity often termed the "vec" rule will be used. ... To depict the spatial topographies of the latent components measured in the EEG and fMRI analyses, the "forward-model" [...] is applied. The results of the comparison are shown in Fig. S1, where it is clear that the signal fidelity of the GCs (right panel) significantly exceeds that of PCA (left) and ICA (middle); GCA is only able to recover sources with temporal dependencies. ... Both the single electrodes and the Granger components exhibit two pronounced peaks in the spectra, one near 2 Hz ("delta"). ... Fig. S3 shows the corresponding result for the left motor imagery condition on the EEG motor imagery dataset described in the main text. For each technique, the first 6 components are presented.
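For readers who want the "vec" rule the snippet alludes to, it is the standard identity vec(ABC) = (C^T ⊗ A) vec(B) for conformable matrices A, B, C; the supplemental's equation (16) is not reproduced here. A minimal NumPy check follows (note that vec() stacks columns, so a Fortran-order flatten is required):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(3, 4))
    B = rng.normal(size=(4, 5))
    C = rng.normal(size=(5, 2))

    vec = lambda M: M.flatten(order="F")   # column-stacking vectorization

    # "vec" rule: vec(A B C) = (C^T kron A) vec(B)
    lhs = vec(A @ B @ C)
    rhs = np.kron(C.T, A) @ vec(B)
    assert np.allclose(lhs, rhs)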


Doubly Robust Augmented Transfer for Meta-Reinforcement Learning

Anonymous Authors

Neural Information Processing Systems

Meta-RL addresses RL problems through the idea of "learning to learn". Current meta-RL methods can be classified into two categories: one line of methods mainly differs in its ways of inference [3, 4, 20], while the other follows the technique of relabeling, which enables sample reuse across tasks, i.e., learning a task ... Packer et al. apply hindsight relabeling to meta-RL and propose hindsight task relabeling (HTR) to relabel the trajectories ... Taking a step further than hindsight relabeling, Wan et al. additionally introduce foresight ... Huang et al. derive a general form of policy gradient from a DR value estimator [29], whereas a DR off-policy actor-critic ... Kallus et al. propose the doubly robust method to find a robust policy that can ... Depending on the knowledge to be transferred, transfer methods in RL can be roughly divided into classes including sampled transitions [32, 33], learned policies or value networks [34, 35, 36, 37], features [38, 39, 40], and skills [41, 42]. Doubly Robust Property for Direct Use of the Doubly Robust Estimator: we show the doubly robust property of the DR estimator for the value function in Eq. (5) in the main text, as follows.
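The paper's Eq. (5) is not shown in this snippet, so the sketch below uses the classical per-trajectory doubly robust value estimator (Jiang & Li, 2016) as a stand-in; the function name and interfaces (dr_value_estimate, v_hat, q_hat, pi, mu) are hypothetical. The doubly robust property is visible in the correction terms: each vanishes in expectation when either the value models or the importance ratios are exact.

    def dr_value_estimate(traj, v_hat, q_hat, pi, mu, gamma=0.99):
        """Classical per-trajectory doubly robust estimate of V^pi(s_0).

        traj:   list of (s, a, r, s_next) tuples collected under behavior policy mu
        v_hat:  learned state-value model; should return 0.0 at terminal states
        q_hat:  learned action-value model q_hat(s, a)
        pi, mu: target / behavior action probabilities pi(a, s), mu(a, s)
        """
        est = v_hat(traj[0][0])          # baseline from the value model
        rho, disc = 1.0, 1.0
        for (s, a, r, s_next) in traj:
            rho *= pi(a, s) / mu(a, s)   # cumulative importance ratio
            # DR correction: zero in expectation if q_hat is exact (model side),
            # and an unbiased IS correction if the ratios are exact (sampling side)
            est += disc * rho * (r + gamma * v_hat(s_next) - q_hat(s, a))
            disc *= gamma
        return est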