Optimizing Neural Networks

#artificialintelligence 

The goal of training an artificial neural network is to achieve the lowest generalization error in the least amount of time. In this article I'll briefly outline some common methods for optimizing training.

Feature scaling is the process of rescaling the input features so that they all occupy the same range of values. This keeps the gradient of the cost function from being exaggerated in any one dimension, which reduces oscillation during gradient descent. Oscillation makes training less efficient, because the updates no longer take the shortest path to the minimum of the cost function.
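As a minimal sketch of one common form of feature scaling, standardization (z-scoring), assuming NumPy and a toy two-feature dataset of my own invention:

```python
import numpy as np

# Toy feature matrix: two features with very different ranges
# (e.g. house size in square feet vs. number of bedrooms).
X = np.array([[2100.0, 3.0],
              [1600.0, 2.0],
              [2400.0, 4.0],
              [1400.0, 2.0]])

# Standardization: rescale each feature (column) to zero mean and
# unit variance so no single dimension dominates the gradient.
mean = X.mean(axis=0)
std = X.std(axis=0)
X_scaled = (X - mean) / std
```

After this transform both columns vary on the same scale, so the cost surface is closer to round and gradient descent oscillates less. Min-max scaling to [0, 1] is another common choice; the key point is that all features end up in comparable ranges.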
