ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks - Supplementary Material - A. Complementary Experiments
However, with deep networks, initialization can have an important effect on the final results. While designing an initialization strategy specifically for compact networks is an unexplored research direction, our ExpandNets can be initialized in a natural manner. Note that this strategy yields an additional accuracy boost to our approach.

The output of the last layer is passed through a fully-connected layer with 64 units, followed by a logit layer with either 10 or 100 units. We used standard stochastic gradient descent (SGD) with a momentum of 0.9 and a learning rate of 0.01, divided by 10 at epochs 50 and 100.
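The stepped learning-rate schedule described above (start at 0.01, divide by 10 at epochs 50 and 100) can be sketched as a small helper; the function name `lr_at_epoch` and its defaults are illustrative, not part of the paper's code:

```python
def lr_at_epoch(epoch, base_lr=0.01, milestones=(50, 100), gamma=0.1):
    """Step schedule: multiply the learning rate by `gamma` (here 1/10)
    once the given epoch reaches each milestone."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

For example, `lr_at_epoch(49)` returns 0.01, `lr_at_epoch(50)` returns 0.001, and `lr_at_epoch(100)` returns 0.0001, matching the schedule in the text.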
ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
We introduce an approach to training a given compact network. To this end, we leverage over-parameterization, which typically improves both neural network optimization and generalization. Specifically, we propose to expand each linear layer of the compact network into multiple consecutive linear layers, without adding any nonlinearity. As such, the resulting expanded network, or ExpandNet, can be contracted back to the compact one algebraically at inference. In particular, we introduce two convolutional expansion strategies and demonstrate their benefits on several tasks, including image classification, object detection, and semantic segmentation. As evidenced by our experiments, our approach outperforms both training the compact network from scratch and performing knowledge distillation from a teacher. Furthermore, our linear over-parameterization empirically reduces gradient confusion during training and improves the network generalization.
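The algebraic contraction mentioned in the abstract follows from the fact that two consecutive linear layers with no nonlinearity in between compose into a single linear layer: W2(W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2). A minimal numpy sketch (shapes are arbitrary illustrations, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Expanded network: two consecutive fully-connected (linear) layers,
# with no nonlinearity between them.
W1, b1 = rng.standard_normal((32, 16)), rng.standard_normal(32)
W2, b2 = rng.standard_normal((8, 32)), rng.standard_normal(8)

# Algebraic contraction back to a single compact layer at inference.
W = W2 @ W1          # (8, 16): same shape as the original compact layer
b = W2 @ b1 + b2

x = rng.standard_normal(16)
expanded = W2 @ (W1 @ x + b1) + b2
compact = W @ x + b
assert np.allclose(expanded, compact)  # identical function, fewer parameters
```

The same idea carries over to convolutions, since a convolution is itself a linear map; the paper's convolutional expansion strategies exploit this.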
Compression-aware Training of Deep Networks
Jose M. Alvarez, Mathieu Salzmann
In recent years, great progress has been made in a variety of application domains thanks to the development of increasingly deeper neural networks. Unfortunately, the huge number of units of these networks makes them expensive both computationally and memory-wise. To overcome this, exploiting the fact that deep networks are over-parametrized, several compression strategies have been proposed. These methods, however, typically start from a network that has been trained in a standard manner, without considering such a future compression. In this paper, we propose to explicitly account for compression in the training process. To this end, we introduce a regularizer that encourages the parameter matrix of each layer to have low rank during training. We show that accounting for compression during training allows us to learn much more compact, yet at least as effective, models than state-of-the-art compression techniques.
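One standard way to encourage a parameter matrix to have low rank during training, as the abstract describes, is to penalize its nuclear norm (the sum of its singular values), a common convex surrogate for rank. This is a generic sketch of that idea, not necessarily the exact regularizer used in the paper:

```python
import numpy as np

def nuclear_norm(W):
    """Sum of singular values of W: a convex surrogate for rank,
    usable as an additive penalty on each layer's weight matrix."""
    return np.linalg.svd(W, compute_uv=False).sum()

# A rank-1 matrix incurs a much smaller penalty than a full-rank matrix
# of comparable size, so minimizing loss + lambda * nuclear_norm(W)
# biases training toward compressible (low-rank) layers.
low_rank = np.outer(np.ones(5), np.ones(5))   # rank 1
full_rank = np.eye(5) * np.sqrt(5)            # rank 5, same Frobenius norm
assert nuclear_norm(low_rank) < nuclear_norm(full_rank)
```

After training, a layer whose weight matrix is (near) low rank can be factored, e.g. via a truncated SVD, into two thin matrices, yielding the compact model.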