A Novel Method for improving accuracy in neural network by reinstating traditional back propagation technique
– arXiv.org Artificial Intelligence
Deep learning has revolutionized the field of artificial intelligence by enabling machines to learn complex patterns and perform tasks that were previously deemed impossible. However, training deep neural networks is a challenging and computationally expensive task that requires optimizing millions or even billions of parameters. The backpropagation algorithm has been the go-to method for training deep neural networks for decades [5], but it suffers from limitations such as slow convergence and the vanishing-gradient problem. To overcome these limitations, alternative training methods, such as Direct Feedback Alignment, have been proposed and compared against standard backpropagation. The core idea of these alternatives is to update the weights and biases of each layer of a neural network using the local error at that layer, rather than backpropagating the error from the output layer to the input layer [2]. By doing so, training can be accelerated and the model's accuracy improved.
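The local-error idea can be illustrated with Direct Feedback Alignment: each hidden layer is updated from the output error projected through a fixed random feedback matrix, so no error signal has to travel backward layer by layer. The following is a minimal NumPy sketch under assumed toy sizes and a toy regression target; it is not the paper's implementation, and all names (`W1`, `B1`, etc.) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def tanh(x): return np.tanh(x)
def dtanh(x): return 1.0 - np.tanh(x) ** 2

# Two-hidden-layer network for a toy regression task (sizes are assumptions).
n_in, n_h1, n_h2, n_out = 4, 16, 16, 1
W1 = rng.normal(0, 0.5, (n_h1, n_in))
W2 = rng.normal(0, 0.5, (n_h2, n_h1))
W3 = rng.normal(0, 0.5, (n_out, n_h2))

# Fixed random feedback matrices: they project the output error directly to
# each hidden layer, replacing the transposed forward weights that standard
# backpropagation would use.
B1 = rng.normal(0, 0.5, (n_h1, n_out))
B2 = rng.normal(0, 0.5, (n_h2, n_out))

X = rng.normal(size=(n_in, 64))            # 64 samples as columns
y = np.sin(X.sum(axis=0, keepdims=True))   # toy target

lr = 0.01
losses = []
for step in range(500):
    # Forward pass (keep pre-activations for the derivative terms).
    a1 = W1 @ X;  h1 = tanh(a1)
    a2 = W2 @ h1; h2 = tanh(a2)
    y_hat = W3 @ h2                        # linear output layer

    e = y_hat - y                          # output error
    losses.append(float(np.mean(e ** 2)))

    # DFA updates: every hidden layer sees the output error through its own
    # fixed random matrix; no error propagates between layers.
    d2 = (B2 @ e) * dtanh(a2)
    d1 = (B1 @ e) * dtanh(a1)
    n = X.shape[1]
    W3 -= lr * (e  @ h2.T) / n
    W2 -= lr * (d2 @ h1.T) / n
    W1 -= lr * (d1 @ X.T)  / n

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because each layer's update depends only on the current output error and local activations, the per-layer computations are independent and can in principle run in parallel, which is the source of the claimed training speed-up.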
Aug-9-2023
- Country:
- Asia > India > Puducherry (0.04)
- Genre:
- Research Report > Promising Solution (0.64)