Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks

Neural Information Processing Systems

Deep neural networks are commonly developed and trained in 32-bit floating point format. Significant gains in performance and energy efficiency could be realized by training and inference in numerical formats optimized for deep learning. Despite advances in limited precision inference in recent years, training of neural networks in low bit-width remains a challenging problem. Here we present the Flexpoint data format, aiming at a complete replacement of 32-bit floating point format training and inference, designed to support modern deep network topologies without modifications. Flexpoint tensors have a shared exponent that is dynamically adjusted to minimize overflows and maximize available dynamic range. We validate Flexpoint by training AlexNet, a deep residual network and a generative adversarial network, using a simulator implemented with the neon deep learning framework. We demonstrate that 16-bit Flexpoint closely matches 32-bit floating point in training all three models, without any need for tuning of model hyperparameters. Our results suggest Flexpoint as a promising numerical format for future hardware for training and inference.
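The core mechanism in the abstract, a tensor of integer mantissas sharing one dynamically chosen exponent, can be sketched in a few lines. The Python/NumPy snippet below is an illustrative reconstruction, not the authors' implementation: it derives the shared exponent directly from the tensor's current maximum magnitude, whereas the paper manages exponents predictively from statistics gathered during training so that overflows are anticipated rather than observed after the fact. The function names are hypothetical; the 16-bit mantissa width mirrors the paper's flex16+5 format (16-bit mantissas plus a 5-bit shared exponent).

    import numpy as np

    def flexpoint_quantize(tensor, mantissa_bits=16):
        """Quantize a float tensor to a shared-exponent integer format.

        All entries share one exponent, chosen from the tensor's largest
        magnitude so that no mantissa overflows the signed integer range.
        Returns (mantissas, shared_exponent).
        """
        max_abs = np.max(np.abs(tensor))
        if max_abs == 0.0:
            return np.zeros(tensor.shape, dtype=np.int16), 0
        # Pick the exponent so the largest magnitude maps near the top of
        # the signed mantissa range, maximizing usable dynamic range.
        shared_exponent = int(np.ceil(np.log2(max_abs))) - (mantissa_bits - 1)
        mantissas = np.round(tensor / 2.0 ** shared_exponent).astype(np.int64)
        # Clip in case rounding pushes the extreme value just out of range.
        lo, hi = -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1
        return np.clip(mantissas, lo, hi).astype(np.int16), shared_exponent

    def flexpoint_dequantize(mantissas, shared_exponent):
        """Recover approximate float values from the stored mantissas."""
        return mantissas.astype(np.float32) * 2.0 ** shared_exponent

    # Example: round-trip a small tensor.
    x = np.array([0.5, -1.25, 3.0], dtype=np.float32)
    m, e = flexpoint_quantize(x)        # e == -13 for this tensor
    print(flexpoint_dequantize(m, e))   # [ 0.5  -1.25  3.  ]

Because the exponent is stored once per tensor rather than per element, the bulk of the arithmetic reduces to fixed-point integer operations, which is where the claimed hardware efficiency comes from.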


Reviews: Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks

Neural Information Processing Systems

I find the proposed technique interesting, but I believe this problem might be better suited to a hardware venue. This seems to be more of a contribution on hardware number representations, in contrast to previous techniques such as XNOR-Net, which do not require a new hardware representation, were aimed at compressing networks for phones, and evaluated the degradation of final performance. However, I do think the paper could be of interest to the NIPS community, as similar papers have been published at AI conferences. The paper is well written and easy to understand. The upsides of the method and the potential problems are well described and easy to follow.


Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks

Urs Köster, Tristan Webb, Xin Wang, Marcel Nassar, Arjun K. Bansal, William Constable, Oguz Elibol, Scott Gray, Stewart Hall, Luke Hornof, Amir Khosrowshahi, Carey Kloss, Ruby J. Pai, Naveen Rao

Neural Information Processing Systems
