I very much like the fact that you can train with binary weights, in contrast to previous works. Note that the FPGA section should be improved: it should be made more concrete (showing at least a diagram of how the weights are placed and how the data are routed through the network, and specifically how the convolutional layers are routed) or removed.
BinaryConnect: Training Deep Neural Networks with binary weights during propagations
Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g., -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations.
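The core idea of constraining weights to two values can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes deterministic sign binarization of a real-valued weight array, and the function name `binarize` is chosen here for clarity.

```python
import numpy as np

def binarize(weights):
    # Illustrative sketch: map each real-valued weight to one of two
    # possible values, here -1 or +1, based on its sign. During training,
    # a real-valued copy of the weights would still be kept and updated;
    # only the forward/backward propagations use the binary version.
    return np.where(weights >= 0, 1.0, -1.0)

W = np.array([0.3, -0.7, 0.0, 1.2])
Wb = binarize(W)
print(Wb)  # [ 1. -1.  1.  1.]
```

With weights restricted to ±1, the multiplications inside a matrix-vector product reduce to sign flips and additions, which is what makes this attractive for dedicated hardware.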