A self-consistent theory of Gaussian Processes captures feature learning effects in finite CNNs
Neural Information Processing Systems
Despite its theoretical appeal, this viewpoint lacks a crucial ingredient of deep learning in finite DNNs, lying at the heart of their success -- feature learning. Here we consider DNNs trained with noisy gradient descent on a large training set and derive a self-consistent Gaussian Process theory accounting for strong finite-DNN and feature learning effects.