LCA: Loss Change Allocation for Neural Network Training
Janice Lan, Rosanne Liu, Hattie Zhou, Jason Yosinski
Neural Information Processing Systems
Neural networks enjoy widespread use, but many aspects of their training, representation, and operation are poorly understood. In particular, our view into the training process is limited, with a single scalar loss being the most common viewport into this high-dimensional, dynamic process. We propose a new window into training called Loss Change Allocation (LCA), in which credit for changes to the network loss is conservatively partitioned to the parameters. This measurement is accomplished by approximating the path integral of the loss gradient along the training trajectory with a Runge-Kutta integrator and decomposing the result per parameter. This rich view shows which parameters are responsible for decreasing or increasing the loss during training, or which parameters "help" or "hurt" the network's learning, respectively. LCA may be summed over training iterations and/or over neurons, channels, or layers for increasingly coarse views. This new measurement device produces several insights into training.