Transformers learn to implement preconditioned gradient descent for in-context learning

Neural Information Processing Systems

Several recent works demonstrate that transformers can implement algorithms like gradient descent. By a careful construction of weights, these works show that multiple layers of transformers are expressive enough to simulate iterations of gradient descent.
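As a toy illustration of the object being simulated (our own sketch, not the paper's transformer weight construction), preconditioned gradient descent on an in-context least-squares problem looks like the following; with the inverse empirical covariance as the preconditioner, the noiseless problem is solved in a single step:

```python
import numpy as np

# Hypothetical illustration: preconditioned gradient descent on
# in-context least-squares regression, min_w ||Xw - y||^2.
rng = np.random.default_rng(0)
n, d = 32, 4
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true                       # noiseless in-context examples

w = np.zeros(d)
P = np.linalg.inv(X.T @ X / n)       # preconditioner: inverse empirical covariance
for _ in range(5):
    grad = X.T @ (X @ w - y) / n     # gradient of the mean squared loss
    w = w - P @ grad                 # preconditioned update

print(np.allclose(w, w_true))        # True: converges in one step here
```

Each such update is a fixed linear map of the in-context data, which is why stacked attention layers are expressive enough to realize it.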






QuIP: 2-Bit Quantization of Large Language Models With Guarantees

Neural Information Processing Systems

We introduce quantization with incoherence processing (QuIP), a new method based on the insight that quantization benefits from incoherent weight and Hessian matrices, i.e., from the weights being even in magnitude and the directions in which it is important to round them accurately being unaligned with the coordinate axes.


Variance Matters: Improving Domain Adaptation via Stratified Sampling

Napoli, Andrea, White, Paul

arXiv.org Artificial Intelligence

Domain shift remains a key challenge in deploying machine learning models to the real world. Unsupervised domain adaptation (UDA) aims to address this by minimising domain discrepancy during training, but the discrepancy estimates suffer from high variance in stochastic settings, which can stifle the theoretical benefits of the method. This paper proposes Variance-Reduced Domain Adaptation via Stratified Sampling (VaRDASS), the first specialised stochastic variance reduction technique for UDA. We consider two specific discrepancy measures -- correlation alignment and the maximum mean discrepancy (MMD) -- and derive ad hoc stratification objectives for these terms. We then present expected and worst-case error bounds, and prove that our proposed objective for the MMD is theoretically optimal (i.e., minimises the variance) under certain assumptions. Finally, a practical k-means style optimisation algorithm is introduced and analysed. Experiments on three domain shift datasets demonstrate improved discrepancy estimation accuracy and target domain performance.
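The variance-reduction mechanism the abstract relies on can be seen in a minimal form (our own sketch, not the VaRDASS algorithm): when strata are internally homogeneous, drawing a fixed quota from each stratum removes the between-strata component of the estimator's variance.

```python
import numpy as np

# Illustrative sketch (not the paper's method): stratified vs. simple random
# minibatch sampling for estimating a mean over clearly bimodal data.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3, 0.1, 500), rng.normal(3, 0.1, 500)])
strata = [data[:500], data[500:]]    # assumed known, equally sized strata
m = 10                               # minibatch size

def simple_estimate():
    return rng.choice(data, m).mean()

def stratified_estimate():
    # draw m/2 from each stratum, then average the per-stratum means
    return np.mean([rng.choice(s, m // 2).mean() for s in strata])

simple_var = np.var([simple_estimate() for _ in range(2000)])
strat_var = np.var([stratified_estimate() for _ in range(2000)])
print(strat_var < simple_var)        # True: stratification cuts the variance
```

The paper applies the same principle to stochastic estimates of discrepancy terms such as the MMD, where the stratification objective itself must be derived for the statistic in question.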


Near-Efficient and Non-Asymptotic Multiway Inference

López, Oscar, Prasadan, Arvind, Llosa-Vite, Carlos, Lehoucq, Richard B., Dunlavy, Daniel M.

arXiv.org Machine Learning

Both perspectives are useful in practice: parametric inference estimates the tensor of distributional parameters as a whole, while multiway analysis yields its latent factors for interpretation [1]. Both tasks rely fundamentally on tensor decompositions to represent and exploit underlying structure. However, computing tensor decompositions is notoriously difficult. Degeneracy phenomena lead to non-unique or ill-conditioned factorizations [2], and many tensor problems are NP-hard [3], making even approximate computation intractable in general. These issues call into question the reliability of existing tensor-based inference methods. They are particularly pronounced for the canonical polyadic (CP) decomposition [2], which, despite its widespread use, lacks the theoretical guarantees enjoyed by other tensor formats. Computing CP factors, i.e., performing multiway analysis, with minimal variance across multiple sets of observations would enhance the reliability of both multiway analysis and parametric inference, offering practitioners more confidence in their results while reducing the need for extensive data collection.
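For concreteness, the CP computation being discussed can be sketched in its simplest case (a rank-1, 3-way tensor) via alternating least squares; this is a generic textbook scheme, not the paper's estimator:

```python
import numpy as np

# Illustrative sketch: alternating least squares (ALS) for a rank-1 CP
# decomposition T ≈ a ⊗ b ⊗ c of a noiseless 3-way tensor.
rng = np.random.default_rng(0)
a_true = rng.normal(size=5)
b_true = rng.normal(size=6)
c_true = rng.normal(size=7)
T = np.einsum('i,j,k->ijk', a_true, b_true, c_true)

a = rng.normal(size=5); b = rng.normal(size=6); c = rng.normal(size=7)
for _ in range(50):
    # each update solves a linear least-squares problem with the other factors fixed
    a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
    b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
    c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))

T_hat = np.einsum('i,j,k->ijk', a, b, c)
print(np.allclose(T, T_hat))   # True: rank-1 ALS recovers the tensor exactly
```

In the rank-1, noiseless setting a single ALS sweep already recovers the tensor; the difficulties the excerpt describes (degeneracy, non-uniqueness, NP-hardness) arise at higher ranks and under noise, which is precisely where variance guarantees matter.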