
Collaborating Authors: Riccietti, Elisa


Mixed precision accumulation for neural network inference guided by componentwise forward error analysis

arXiv.org Artificial Intelligence

El-Mehdi El Arar¹, Silviu-Ioan Filip¹, Théo Mary², and Elisa Riccietti³

¹ Inria, IRISA, Université de Rennes, 263 Av. Général Leclerc, F-35000 Rennes, France
² Sorbonne Université, CNRS, LIP6, 4 Place Jussieu, F-75005 Paris, France
³ ENS de Lyon, CNRS, Inria, Université Claude Bernard Lyon 1, LIP, UMR 5668, 69342 Lyon cedex 07, France

Abstract: This work proposes a mathematically founded mixed precision accumulation strategy for the inference of neural networks. Our strategy is based on a new componentwise forward error analysis that explains the propagation of errors in the forward pass of neural networks. Specifically, our analysis shows that the error in each component of the output of a layer is proportional to the condition number of the inner product between the weights and the input, multiplied by the condition number of the activation function. These condition numbers can vary widely from one component to the other, thus creating a significant opportunity to introduce mixed precision: each component should be accumulated in a precision inversely proportional to the product of these condition numbers. We propose a practical algorithm that exploits this observation: it first computes all components in low precision, uses this output to estimate the condition numbers, and recomputes in higher precision only the components associated with large condition numbers. We test our algorithm on various networks and datasets and confirm experimentally that it can significantly improve the cost-accuracy tradeoff compared with uniform precision accumulation baselines.

Keywords: Neural network, inference, error analysis, mixed precision, multiply-accumulate

1 Introduction

Modern applications in artificial intelligence require increasingly complex models, and thus increasing memory, time, and energy costs for storing and deploying large-scale deep learning models with parameter counts in the millions and billions. This is a limiting factor both in the context of training and of inference. While the growing training costs can be tackled by the power of modern computing resources, notably GPU accelerators, the deployment of large-scale models leads to serious limitations in inference contexts with limited resources, such as embedded systems or applications that require real-time processing.
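
The two-pass recipe described in the abstract can be sketched in a few lines. The NumPy fragment below is only an illustrative simulation under assumptions not taken from the paper: the threshold tau, the float16/float32 precision pair, the forced-float16 accumulation, and a ReLU activation whose condition number factor is simply taken as 1.

```python
# Illustrative sketch of condition-number-guided mixed precision accumulation.
# Assumptions (not from the paper): float16 as the low precision, float32 as the
# high precision, a ReLU activation (its condition number factor is taken as 1),
# and an arbitrary threshold tau on the inner-product condition numbers.
import numpy as np

def mixed_precision_layer(W, x, tau=1e2):
    """Compute relu(W @ x), recomputing ill-conditioned components in float32."""
    W16, x16 = W.astype(np.float16), x.astype(np.float16)

    # 1) Low precision pass: force float16 accumulation of each inner product.
    y = np.array([np.sum(W16[i] * x16, dtype=np.float16)
                  for i in range(W.shape[0])], dtype=np.float32)

    # 2) Estimate the condition number of each inner product from the low
    #    precision output: kappa_i = (|w_i| . |x|) / |w_i . x|.
    numer = np.abs(W16).astype(np.float32) @ np.abs(x16).astype(np.float32)
    kappa = numer / np.maximum(np.abs(y), np.finfo(np.float32).tiny)

    # 3) Recompute only the components with large condition numbers in float32.
    bad = kappa > tau
    y[bad] = W[bad].astype(np.float32) @ x.astype(np.float32)
    return np.maximum(y, 0.0), bad

rng = np.random.default_rng(0)
W, x = rng.standard_normal((64, 128)), rng.standard_normal(128)
out, recomputed = mixed_precision_layer(W, x)
print(f"recomputed {int(recomputed.sum())} of {out.size} components in float32")
```

In an actual deployment the low precision pass would rely on hardware half precision accumulators rather than this software simulation.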


Path-metrics, pruning, and generalization

arXiv.org Machine Learning

Analyzing the behavior of ReLU neural networks often hinges on understanding the relationships between their parameters and the functions they implement. This paper proves a new bound on function distances in terms of the so-called path-metrics of the parameters. Since this bound is intrinsically invariant with respect to the rescaling symmetries of the networks, it sharpens previously known bounds. It is also, to the best of our knowledge, the first bound of its kind that is broadly applicable to modern networks such as ResNets, VGGs, U-nets, and many more. In contexts such as network pruning and quantization, the proposed path-metrics can be efficiently computed using only two forward passes. Besides its intrinsic theoretical interest, the bound yields not only novel theoretical generalization bounds, but also a promising proof of concept for rescaling-invariant pruning.
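
The claim that path-metrics can be computed with forward passes is closely related to a classical identity: for a bias-free feedforward ReLU network, the l1 path-norm (the sum over all input-output paths of the products of absolute weights) equals the output sum of one forward pass of the absolute-valued network on the all-ones input. The NumPy sketch below illustrates only this feedforward special case, not the paper's general path-metrics for modern architectures.

```python
# Minimal sketch (feedforward, bias-free special case only): the l1 path-norm
# equals one forward pass of the absolute-valued network on the all-ones input.
import numpy as np

def l1_path_norm(weights):
    """weights: list [W_1, ..., W_L], where layer l maps layer l-1 to layer l."""
    v = np.ones(weights[0].shape[1])     # all-ones input
    for W in weights:                    # forward pass with |W| and identity activation
        v = np.abs(W) @ v
    return v.sum()

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((16, 8)), rng.standard_normal((4, 16))]

# Brute-force check: sum over all paths i -> j -> k of |W2[k, j]| * |W1[j, i]|.
brute = (np.abs(Ws[1])[:, :, None] * np.abs(Ws[0])[None, :, :]).sum()
assert np.isclose(l1_path_norm(Ws), brute)
print(l1_path_norm(Ws))
```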


A path-norm toolkit for modern networks: consequences, promises and challenges

arXiv.org Machine Learning

This work introduces the first toolkit around path-norms that fully encompasses general DAG ReLU networks with biases, skip connections, and any operation based on the extraction of order statistics: max pooling, GroupSort, etc. This toolkit notably allows us to establish generalization bounds for modern neural networks that are not only the most widely applicable path-norm-based ones but also recover or beat the sharpest known bounds of this type. These extended path-norms further enjoy the usual benefits of path-norms: ease of computation, invariance under the symmetries of the network, and improved sharpness on feedforward networks compared to the product of operator norms, another commonly used complexity measure. The versatility of the toolkit and its ease of implementation allow us to challenge the concrete promises of path-norm-based generalization bounds by numerically evaluating the sharpest known bounds for ResNets on ImageNet.
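
A quick numerical illustration (not taken from the paper's toolkit) of the invariance property mentioned above: rescaling one hidden ReLU neuron, i.e. multiplying its incoming weights by c > 0 and its outgoing weights by 1/c, leaves both the realized function and the l1 path-norm unchanged, while the product of operator norms changes. The two-layer bias-free network and the value c = 10 below are illustrative choices.

```python
# Rescaling symmetry check on a toy two-layer bias-free ReLU network
# (an illustration of the invariance property, not the paper's toolkit).
import numpy as np

def network(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

def l1_path_norm(W1, W2):
    return (np.abs(W2) @ np.abs(W1) @ np.ones(W1.shape[1])).sum()

def operator_norm_product(W1, W2):
    return np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)   # product of spectral norms

rng = np.random.default_rng(1)
W1, W2 = rng.standard_normal((16, 8)), rng.standard_normal((4, 16))

c = 10.0                        # rescale hidden neuron 0 by c
V1, V2 = W1.copy(), W2.copy()
V1[0, :] *= c
V2[:, 0] /= c

x = rng.standard_normal(8)
print(np.allclose(network(W1, W2, x), network(V1, V2, x)))           # same function
print(np.isclose(l1_path_norm(W1, W2), l1_path_norm(V1, V2)))        # same path-norm
print(operator_norm_product(W1, W2), operator_norm_product(V1, V2))  # different values
```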


A Block-Coordinate Approach of Multi-level Optimization with an Application to Physics-Informed Neural Networks

arXiv.org Artificial Intelligence

Many numerical optimization problems of interest today are high-dimensional, and techniques to solve them efficiently are thus an active field of research. A very powerful class of algorithms for the solution of large problems is that of multi-level methods. Originally, the concept of a method exploiting multiple levels, i.e., multiple resolutions of an underlying problem, was introduced for the solution of large-scale systems arising from the discretization of partial differential equations (PDEs). In this context these methods are known as multigrid (MG) methods in the linear case and full approximation schemes (FAS) in the nonlinear one [3, 38]. These schemes were later extended to nonlinear optimization problems, in which context they are known as multi-level optimization techniques [27, 11, 12, 13, 5]. The central idea of all these approaches is to exploit the structure of the problem in order to significantly reduce the computational cost compared to standard approaches applied to the full unstructured problem. In this paper we introduce a new interpretation of multi-level methods as block coordinate descent (BCD) methods: iterations at coarse levels (i.e., low resolution) can be interpreted as the (possibly approximate) solution of a subproblem involving a set of variables smaller than that required to describe the fine level (high resolution). We propose a framework that allows us to encompass multi-level methods for several classes of problems, as well as a unifying complexity analysis based on a generic block coordinate descent scheme, which is simple yet comprehensive.
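
The block coordinate reading can be made concrete on a toy problem: a "coarse" iteration updates the fine variables only along the range of a prolongation operator P, i.e. it (here exactly) minimizes the objective over the smaller block of coarse variables z in x + P z. The sketch below uses an illustrative quadratic objective built from a 1D Laplacian, a piecewise-constant prolongation, and an arbitrary fine-level step size; it is a reading aid, not the framework of the paper.

```python
# Two-level method read as block coordinate descent on a toy quadratic
# f(x) = 0.5 x^T A x - b^T x with A the 1D Laplacian (illustrative choices).
import numpy as np

n = 64
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian, SPD
b = np.ones(n)
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

# Prolongation: each coarse variable controls a block of 2 fine variables.
P = np.kron(np.eye(n // 2), np.ones((2, 1)))

x = np.zeros(n)
for k in range(100):
    if k % 2 == 0:
        x -= 0.25 * grad(x)                    # fine step: update all n variables
    else:
        # Coarse step: minimize f(x + P z) exactly over the coarse block z.
        z = np.linalg.solve(P.T @ A @ P, -(P.T @ grad(x)))
        x += P @ z

print(f(x), f(np.linalg.solve(A, b)))          # two-level iterate vs optimal value
```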


Self-supervised learning with rotation-invariant kernels

arXiv.org Artificial Intelligence

We introduce a regularization loss based on kernel mean embeddings with rotation-invariant kernels on the hypersphere (also known as dot-product kernels) for self-supervised learning of image representations. Besides being fully competitive with the state of the art, our method significantly reduces time and memory complexity for self-supervised training, making it implementable for very large embedding dimensions on existing devices and more easily adjustable than previous methods to settings with limited resources. Our work follows the major paradigm where the model learns to be invariant to some predefined image transformations (cropping, blurring, color jittering, etc.), while avoiding a degenerate solution by regularizing the embedding distribution. Our particular contribution is to propose a loss family promoting the embedding distribution to be close to the uniform distribution on the hypersphere, with respect to the maximum mean discrepancy pseudometric. We demonstrate that this family encompasses several regularizers of former methods, including uniformity-based and information-maximization methods, which are variants of our flexible regularization loss with different kernels. Beyond its practical consequences for state-of-the-art self-supervised learning with limited resources, the proposed generic regularization approach opens perspectives to leverage more widely the literature on kernel methods in order to improve self-supervised learning methods.
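
For a rotation-invariant (dot-product) kernel on the unit hypersphere, the expected kernel value between a fixed point and a uniformly distributed point does not depend on that fixed point, so the squared maximum mean discrepancy between the empirical embedding distribution and the uniform distribution reduces, up to an additive constant, to the mean pairwise kernel value over the batch. The NumPy sketch below illustrates this reduction with a Gaussian kernel restricted to the sphere (a dot-product kernel there) and an arbitrary bandwidth; it is a schematic loss, not the paper's implementation.

```python
# Schematic MMD-based uniformity regularizer with a rotation-invariant kernel.
# The Gaussian kernel and bandwidth t = 2 are illustrative choices; on the unit
# sphere it is a dot-product kernel since ||u - v||^2 = 2 - 2 <u, v>.
import numpy as np

def uniformity_loss(Z, t=2.0):
    """Z: (n, d) array of l2-normalized embeddings on the unit hypersphere."""
    G = Z @ Z.T                          # pairwise dot products <z_i, z_j>
    K = np.exp(-t * (2.0 - 2.0 * G))     # k(z_i, z_j) = exp(-t ||z_i - z_j||^2)
    return K.mean()                      # squared MMD to uniform, up to a constant

rng = np.random.default_rng(0)
Z = rng.standard_normal((256, 32))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)   # project embeddings onto the sphere
print(uniformity_loss(Z))
```

In a training pipeline such a term would be implemented in a differentiable framework and added, with a weighting coefficient, to the invariance part of the objective.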