Difference between Local Response Normalization and Batch Normalization
Normalization has become an important component of deep neural networks, compensating for the unbounded nature of activation functions such as ReLU and ELU. With these activation functions, the output of a layer is not constrained to a bounded range (such as [-1, 1] for tanh); instead, it can grow as large as training allows. To keep unbounded activations from inflating the layer outputs, normalization is applied just before the activation function. Two normalization techniques are commonly used in deep neural networks, and beginners often confuse them. This tutorial explains both techniques in detail and highlights their key differences.
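To make the contrast concrete before the detailed discussion, here is a minimal NumPy sketch of both techniques (not production code). Batch Normalization normalizes each feature using statistics computed across the batch, while Local Response Normalization divides each activation by a function of its neighbors along the channel axis; the LRN hyperparameters (`size`, `alpha`, `beta`, `k`) below follow the AlexNet defaults and are assumptions, not values from this article.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Batch Normalization: zero mean and unit variance per feature,
    # with statistics computed across the batch dimension.
    # x has shape (N, C): N examples, C features/channels.
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def local_response_norm(x, size=5, alpha=1e-4, beta=0.75, k=2.0):
    # Local Response Normalization: each activation is divided by a
    # function of the squared activations in a window of neighboring
    # channels (AlexNet-style defaults). x has shape (N, C).
    n, c = x.shape
    out = np.empty_like(x)
    half = size // 2
    for i in range(c):
        lo, hi = max(0, i - half), min(c, i + half + 1)
        denom = (k + alpha * np.sum(x[:, lo:hi] ** 2, axis=1)) ** beta
        out[:, i] = x[:, i] / denom
    return out

x = np.random.randn(4, 8)
bn = batch_norm(x)
lrn = local_response_norm(x)
```

Note the key structural difference visible in the code: `batch_norm` mixes information across examples in the batch, whereas `local_response_norm` operates on each example independently, mixing only nearby channels.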
Jun-20-2019, 20:24:49 GMT