Retaining Beneficial Information from Detrimental Data for Deep Neural Network Repair
Long-Kai Huang
Neural Information Processing Systems
The performance of deep learning models relies heavily on the quality of the training data. Flaws in the training data, such as corrupted inputs or noisy labels, can cause model generalization to fail. Recent studies propose repairing the model by identifying the training samples that contribute to the failure and removing their influence from the model. However, the identified data may contain both beneficial and detrimental information. Simply erasing all information about the identified data from the model can degrade its performance, especially when accurate data is mistakenly identified as detrimental and removed.
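The abstract refers to identifying training samples that contribute to a model's failure. A minimal sketch of one common way to do this is first-order influence scoring (a TracIn-style gradient-alignment proxy, not necessarily the paper's method): a training sample whose loss gradient aligns with the gradient of the loss on clean validation data is likely beneficial, while one whose gradient opposes it is likely detrimental. All names and the tiny logistic-regression setup below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logistic(w, x, y):
    # Gradient of the logistic loss for one (x, y) pair, with y in {0, 1}.
    return (sigmoid(x @ w) - y) * x

rng = np.random.default_rng(0)
w = rng.normal(size=2)  # current (possibly flawed) model parameters

# Tiny synthetic setup: a trusted validation point, one correctly labeled
# training point, and one with a flipped (noisy) label.
x_val, y_val = np.array([1.0, 1.0]), 1
x_clean, y_clean = np.array([1.0, 0.9]), 1
x_noisy, y_noisy = np.array([1.0, 1.1]), 0  # label flipped: detrimental

g_val = grad_logistic(w, x_val, y_val)

def influence_score(x, y):
    # A gradient step on (x, y) changes the validation loss by roughly
    # -lr * (g_val . g_train): positive score = helpful, negative = harmful.
    return g_val @ grad_logistic(w, x, y)

print(influence_score(x_clean, y_clean) > 0)   # clean sample scores positive
print(influence_score(x_noisy, y_noisy) < 0)   # mislabeled sample scores negative
```

Ranking samples by such a score yields candidates for removal, but as the abstract argues, a low score does not mean every feature of the sample is detrimental, which motivates retaining its beneficial information rather than erasing it wholesale.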