No Multiplication? No Floating Point? No Problem! Training Networks for Efficient Inference
Shumeet Baluja, David Marwood, Michele Covell, Nick Johnston
A substantial body of research has focused on quantizing and clustering network weights (Yi et al., 2008; Courbariaux et al., 2016; Rastegari et al., 2016; Deng et al., 2017; Wu et al., 2018). For successful deployment of deep neural networks on highly resource-constrained devices (hearing aids, earbuds, wearables), we must simplify the types of operations and the memory/power resources required during inference. Completely avoiding inference-time floating-point operations is one of the simplest ways to design networks for these highly constrained environments.

By quantizing both our in-network non-linearities and our network weights, we can move to simple, compact networks without floating-point operations, without multiplications, and without nonlinear function computations. The activations in our networks emit only a small number of predefined, quantized values (typically 32), and all of the network's weights are drawn from a small number of unique values (typically 100-1000) found by employing a novel periodic adaptive clustering step during training. Our approach allows us to explore the spectrum of possible networks, ranging from fully continuous versions down to networks with bi-level weights and activations.

Our results show that quantization can be done with little or no loss of performance on both regression tasks (auto-encoding) and multi-class classification tasks (ImageNet). The memory needed to deploy our quantized networks is less than one-third that of an equivalent architecture using floating-point operations.

Almost all recent neural-network training algorithms rely on gradient-based learning. This has moved the research field away from discrete-valued inference with hard thresholds toward smooth, continuous-valued activation functions (Werbos, 1974; Rumelhart et al., 1986).
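Restricting all weights to a small set of shared values can be done with one-dimensional k-means over the weight tensor, re-run periodically during training. The sketch below is a minimal illustration of that idea, assuming NumPy; the function name, centroid initialization, and defaults are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def cluster_weights(w, k=8, iters=10):
    """Snap every entry of `w` to one of k shared values via 1-D k-means.

    Returns the quantized array (same shape as `w`) and the k centroids.
    """
    flat = w.ravel()
    # Initialize centroids evenly across the observed weight range.
    centers = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        idx = np.abs(flat[:, None] - centers[None, :]).argmin(axis=1)
        # Move each non-empty centroid to the mean of its assigned weights.
        for j in range(k):
            if np.any(idx == j):
                centers[j] = flat[idx == j].mean()
    idx = np.abs(flat[:, None] - centers[None, :]).argmin(axis=1)
    return centers[idx].reshape(w.shape), centers
```

In a training loop, a step like this would run every few hundred updates, so the cluster centers adapt as the weights move; after the final clustering, inference needs only a small codebook of values rather than arbitrary floats.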
Unfortunately, this causes inference to be done with floating-point operations, making it difficult to deploy on an increasingly large set of low-cost, limited-memory, low-power hardware in both commercial (Lane et al., 2015) and research settings (Bourzac, 2017). Avoiding all floating-point operations allows the inference network to realize the power-saving gains available with fixed-point processing (Finnerty & Ratigner, 2017).
Sep-28-2018