
Collaborating Authors

 Geifman, Amnon


Puzzle: Distillation-Based NAS for Inference-Optimized LLMs

arXiv.org Artificial Intelligence

Large language models (LLMs) have demonstrated remarkable capabilities, but their adoption is limited by high computational costs during inference. While increasing parameter counts enhances accuracy, it also widens the gap between state-of-the-art capabilities and practical deployability. We present Puzzle, a framework that accelerates LLM inference on specific hardware while preserving model capabilities. Through an innovative application of neural architecture search (NAS) at an unprecedented scale, Puzzle systematically optimizes models with tens of billions of parameters under hardware constraints. Our approach utilizes blockwise local knowledge distillation (BLD) for parallel architecture exploration and employs mixed-integer programming for precise constraint optimization. We demonstrate the real-world impact of our framework through Llama-3.1-Nemotron-51B-Instruct (Nemotron-51B), a publicly available model derived from Llama-3.1-70B-Instruct. Nemotron-51B achieves a 2.17x inference throughput speedup and fits on a single NVIDIA H100 GPU while preserving 98.4% of the original model's capabilities. Nemotron-51B currently stands as the most accurate language model capable of inference on a single GPU with large batch sizes. Remarkably, this transformation required just 45B training tokens, compared to the over 15T tokens used to train the 70B model it was derived from. This establishes a new paradigm in which powerful models can be optimized for efficient deployment with only a negligible compromise of their capabilities, demonstrating that inference performance, not parameter count alone, should guide model selection. With the release of Nemotron-51B and the presentation of the Puzzle framework, we provide practitioners with immediate access to state-of-the-art language modeling capabilities at significantly reduced computational cost.
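
The abstract frames the constrained architecture selection as a mixed-integer program over per-block choices, but does not spell out the formulation. The sketch below is only a toy illustration of that kind of "pick one variant per block under a hardware budget" program, written with the open-source PuLP solver; the block names, quality scores, latency costs, and budget are invented placeholders, not Puzzle's actual search space, scoring, or hardware model.

```python
# Toy "one variant per block under a latency budget" selection, sketched as a
# mixed-integer program. All numbers below are hypothetical placeholders.
import pulp

# Per-block candidate variants: (quality score from blockwise distillation,
# estimated latency cost on the target GPU). Both columns are made up.
blocks = {
    0: {"full_attn": (1.00, 4.0), "no_attn": (0.92, 2.5), "narrow_ffn": (0.95, 3.0)},
    1: {"full_attn": (1.00, 4.0), "no_attn": (0.90, 2.5), "narrow_ffn": (0.96, 3.0)},
    2: {"full_attn": (1.00, 4.0), "no_attn": (0.97, 2.5), "narrow_ffn": (0.98, 3.0)},
}
latency_budget = 9.0  # total latency allowed for the selected architecture

prob = pulp.LpProblem("block_selection", pulp.LpMaximize)

# Binary decision variable x[b][v] = 1 iff variant v is used for block b.
x = {
    b: {v: pulp.LpVariable(f"x_{b}_{v}", cat="Binary") for v in variants}
    for b, variants in blocks.items()
}

# Objective: maximize the summed per-block quality of the chosen variants.
prob += pulp.lpSum(blocks[b][v][0] * x[b][v] for b in blocks for v in blocks[b])

# Exactly one variant per block.
for b in blocks:
    prob += pulp.lpSum(x[b][v] for v in blocks[b]) == 1

# Hardware constraint: total estimated latency within budget.
prob += pulp.lpSum(blocks[b][v][1] * x[b][v] for b in blocks for v in blocks[b]) <= latency_budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = {b: v for b in blocks for v in blocks[b] if x[b][v].value() == 1}
print(chosen)
```

Because each block contributes one binary choice and all constraints stay linear, this style of program scales to many blocks and variants, which is what makes solver-based selection attractive for this kind of search.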


Controlling the Inductive Bias of Wide Neural Networks by Modifying the Kernel's Spectrum

arXiv.org Artificial Intelligence

Wide neural networks are biased towards learning certain functions, influencing both the rate of convergence of gradient descent (GD) and the functions that are reachable with GD in finite training time. As such, there is a great need for methods that can modify this bias according to the task at hand. To that end, we introduce Modified Spectrum Kernels (MSKs), a novel family of constructed kernels that can be used to approximate kernels with desired eigenvalues for which no closed form is known. We leverage the duality between wide neural networks and Neural Tangent Kernels and propose a preconditioned gradient descent method, which alters the trajectory of GD. As a result, this allows for a polynomial and, in some cases, exponential training speedup without changing the final solution.

Following this characterization, we will use the term spectral bias of neural networks to refer to the inductive bias induced by their corresponding NTK spectrum. Specifically, it has been observed both theoretically and empirically that for a wide neural network, learning an eigen-direction of the NTK with GD requires a number of iterations that is inversely proportional to the corresponding eigenvalue (Bowman & Montufar, 2022; Fridovich-Keil et al., 2021; Xu et al., 2022). Thus, if this spectral bias can be modified, it could lead to accelerated network training of certain target functions. Typically, the eigenvalues of the NTK decay at least at a polynomial rate, implying that many eigen-directions cannot be learned in polynomial time with gradient descent (Ma & Belkin, 2017). As such, modifying the spectral bias of a neural network is necessary to enable a feasible learning time, allowing learning target functions that are not well aligned with the top eigen-directions of the NTK.
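
The abstract does not give the MSK construction itself, but the mechanism it relies on, per-eigen-direction convergence rates set by the kernel spectrum, is easy to illustrate. The NumPy sketch below is an illustration under its own assumptions, not the paper's method: it runs linearized GD dynamics on the training outputs with and without a spectrum-flattening preconditioner, showing that both share the same fixed point while the preconditioned trajectory reaches it far faster. The Laplace kernel here merely stands in for an NTK.

```python
# Illustration of how preconditioning reshapes per-eigen-direction convergence
# in kernel regression while leaving the final solution unchanged.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.standard_normal((n, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)          # data on the unit sphere
y = np.sign(X[:, 0]) + 0.1 * rng.standard_normal(n)

# A Laplace kernel stands in for the NTK of a wide network.
K = np.exp(-np.linalg.norm(X[:, None] - X[None, :], axis=-1))
evals, evecs = np.linalg.eigh(K)

def run(P, steps, lr):
    """Linearized GD dynamics on training outputs: f <- f - lr * P K (f - y).
    P = I is plain GD; other choices of P precondition the trajectory."""
    f = np.zeros(n)
    for _ in range(steps):
        f -= lr * P @ (K @ (f - y))
    return f

# Preconditioner that flattens the kernel spectrum, so directions with small
# eigenvalues are learned about as fast as the leading ones.
P_flat = evecs @ np.diag(1.0 / (evals + 1e-8)) @ evecs.T

f_plain = run(np.eye(n), steps=500, lr=1.0 / evals.max())  # slow directions barely move
f_pre = run(P_flat, steps=500, lr=0.5)                      # converges to the same fixed point f = y

print("plain GD residual:          ", np.linalg.norm(f_plain - y))
print("preconditioned GD residual: ", np.linalg.norm(f_pre - y))
```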


A Kernel Perspective of Skip Connections in Convolutional Networks

arXiv.org Artificial Intelligence

Over-parameterized residual networks (ResNets) are amongst the most successful convolutional neural architectures for image processing. Here we study their properties through their Gaussian Process and Neural Tangent kernels. We derive explicit formulas for these kernels, analyze their spectra, and provide bounds on their implied condition numbers. Our results indicate that (1) with ReLU activation, the eigenvalues of these residual kernels decay polynomially at a rate similar to that of the same kernels without skip connections, thus maintaining a similar frequency bias; (2) residual kernels are, however, more locally biased. Our analysis further shows that the matrices obtained from these residual kernels have more favorable condition numbers at finite depths than those obtained without skip connections, therefore enabling faster convergence of training with gradient descent.
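
The explicit kernel formulas are not reproduced in the abstract, so the sketch below should be read as a loose numerical illustration rather than the paper's parameterization: it iterates the standard ReLU GP-kernel (arc-cosine) layer map, optionally adding the previous layer's kernel to mimic an identity skip connection, and prints the condition numbers of the resulting Gram matrices at a few depths.

```python
# Compare condition numbers of deep ReLU GP-kernel Gram matrices with and
# without a simple additive skip term. Illustrative only; not the paper's
# exact residual parameterization.
import numpy as np

rng = np.random.default_rng(1)
n, d = 80, 5
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)        # inputs on the unit sphere

def relu_layer(K):
    """One layer of the standard ReLU GP-kernel ("arc-cosine") recursion,
    normalized so that the diagonal is preserved."""
    diag = np.sqrt(np.diag(K))
    rho = np.clip(K / np.outer(diag, diag), -1.0, 1.0)
    theta = np.arccos(rho)
    return np.outer(diag, diag) * (np.sin(theta) + (np.pi - theta) * rho) / np.pi

def deep_kernel(depth, residual):
    K = X @ X.T
    for _ in range(depth):
        # A skip connection is mimicked by adding the previous layer's kernel.
        K = relu_layer(K) + (K if residual else 0.0)
    return K

for depth in (2, 5, 10):
    cond_plain = np.linalg.cond(deep_kernel(depth, residual=False))
    cond_res = np.linalg.cond(deep_kernel(depth, residual=True))
    print(f"depth {depth:2d}: cond(no skip) = {cond_plain:.3e}   cond(skip) = {cond_res:.3e}")
```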


On the Similarity between the Laplace and Neural Tangent Kernels

arXiv.org Machine Learning

Recent theoretical work has shown that massively overparameterized neural networks are equivalent to kernel regressors that use Neural Tangent Kernels (NTKs). Experiments show that these kernel methods perform similarly to real neural networks. Here we show that NTK for fully connected networks with ReLU activation is closely related to the standard Laplace kernel. We show theoretically that for normalized data on the hypersphere both kernels have the same eigenfunctions and their eigenvalues decay polynomially at the same rate, implying that their Reproducing Kernel Hilbert Spaces (RKHS) include the same sets of functions. This means that both kernels give rise to classes of functions with the same smoothness properties. The two kernels differ for data off the hypersphere, but experiments indicate that when data is properly normalized these differences are not significant. Finally, we provide experiments on real data comparing NTK and the Laplace kernel, along with a larger class of γ-exponential kernels. We show that these perform almost identically. Our results suggest that much insight about neural networks can be obtained from analysis of the well-known Laplace kernel, which has a simple closed form.
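
Since the claim concerns kernel spectra rather than a particular algorithm, it is straightforward to probe numerically. The sketch below compares the closed-form two-layer ReLU NTK (in one common normalization for unit-norm inputs) with a Laplace kernel on points drawn from the unit sphere, printing their normalized Gram-matrix eigenvalues side by side; the Laplace bandwidth is an arbitrary choice for this illustration, not a value from the paper.

```python
# Side-by-side spectra of a two-layer ReLU NTK and a Laplace kernel on the sphere.
import numpy as np

rng = np.random.default_rng(2)
n, d = 300, 3
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)       # data on the unit sphere

U = np.clip(X @ X.T, -1.0, 1.0)                     # pairwise inner products

# Two-layer ReLU NTK in one common normalization (unit-norm inputs):
#   kappa0(u) = (pi - arccos u) / pi
#   kappa1(u) = (u * (pi - arccos u) + sqrt(1 - u^2)) / pi,  sqrt(1 - u^2) = sin(arccos u)
#   NTK(x, x') = u * kappa0(u) + kappa1(u)
theta = np.arccos(U)
kappa0 = (np.pi - theta) / np.pi
kappa1 = (U * (np.pi - theta) + np.sin(theta)) / np.pi
K_ntk = U * kappa0 + kappa1

# Laplace kernel; the bandwidth (here 1) is an arbitrary illustrative choice.
K_lap = np.exp(-np.linalg.norm(X[:, None] - X[None, :], axis=-1))

def spectrum(K):
    ev = np.linalg.eigvalsh(K)[::-1]                # descending eigenvalues
    return ev / ev[0]                               # normalize by the top eigenvalue

s_ntk, s_lap = spectrum(K_ntk), spectrum(K_lap)
for i in (1, 5, 20, 50, 100):
    print(f"eigenvalue #{i:3d}:  NTK {s_ntk[i]:.2e}   Laplace {s_lap[i]:.2e}")
```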