Boolean Variation and Boolean Logic BackPropagation

Nguyen, Van Minh

arXiv.org Artificial Intelligence 

Deep learning has become a commonplace solution to numerous modern problems, occupying a central place in today's technological and social attention. The recipe of its power is the combined effect of unprecedentedly large model dimensions and a learning process based on gradient backpropagation [LeCun et al., 1998]. In particular, because the neuron model decomposes into a weighted linear sum followed by a non-linear activation function, the gradient of each weight is determined solely by its respective input, without cross-parameter dependency. Hence, in terms of computational process, gradient backpropagation is automated by the gradient chain rule and only requires buffering the forward input data. However, deep learning is computationally intensive.
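The abstract's point about weight gradients can be illustrated with a minimal sketch (not from the paper): for a single neuron y = σ(w·x), the chain rule gives ∂L/∂wᵢ = (∂L/∂y)·σ′(w·x)·xᵢ, so each weight's gradient is just the shared upstream error signal scaled by that weight's own buffered input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(w, x):
    z = np.dot(w, x)   # weighted linear sum
    y = sigmoid(z)     # non-linear activation
    return y, z        # z is buffered for the backward pass

def backward(x, z, upstream_grad):
    # Chain rule: dL/dw_i = dL/dy * sigma'(z) * x_i.
    # The error signal (upstream_grad * sigma'(z)) is shared;
    # each weight's gradient depends only on its own input x_i,
    # with no cross-parameter dependency.
    error = upstream_grad * sigmoid(z) * (1.0 - sigmoid(z))
    return error * x

w = np.array([0.5, -0.3])
x = np.array([1.0, 2.0])
y, z = forward(w, x)
grad_w = backward(x, z, upstream_grad=1.0)
```

Note that `grad_w` is exactly proportional to `x`, which is why backpropagation only needs to buffer the forward inputs rather than track dependencies between parameters.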
