ResNet: Enabling Deep Convolutional Neural Networks through Residual Learning

Liu, Xingyu, Goh, Kun Ming

arXiv.org Artificial Intelligence 

Abstract--Convolutional Neural Networks (CNNs) have revolutionised computer vision, but training very deep networks has been challenging due to the vanishing gradient problem. This paper explores Residual Networks (ResNet), introduced by He et al. (2015), which overcome this limitation by using skip connections. ResNet enables the training of networks with hundreds of layers by allowing gradients to flow directly through shortcut connections that bypass intermediate layers. In our implementation on the CIFAR-10 dataset, ResNet-18 achieves 89.9% accuracy compared to 84.1% for a traditional deep CNN of similar depth, while also converging faster and training more stably.

Deep Convolutional Neural Networks (CNNs) have become the foundation of modern computer vision, powering applications from image classification to object detection.
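The skip connection described above can be sketched as a minimal residual block: the output is F(x) + x, so the identity path lets gradients bypass the intermediate layers. The following NumPy sketch is illustrative only (the `residual_block` helper and weights `W1`, `W2` are assumptions, not the paper's implementation, which uses convolutional layers):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    # F(x): a small two-layer transformation with a ReLU in between
    out = relu(x @ W1) @ W2
    # Skip connection: add the input x so the block only needs to
    # learn the residual F(x) = H(x) - x, and gradients can flow
    # directly through the identity path
    return relu(out + x)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W1 = rng.standard_normal((8, 8)) * 0.1
W2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, W1, W2)
print(y.shape)
```

If the weights are zero, F(x) vanishes and the block reduces to the identity (up to the final ReLU), which is why very deep stacks of such blocks do not degrade the way plain deep CNNs do.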
