Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise

Neural Information Processing Systems

The growing importance of massive datasets with the advent of deep learning makes robustness to label noise a critical property for classifiers to have. Sources of label noise include automatic labeling for large datasets, non-expert labeling, and label corruption by data poisoning adversaries. In the latter case, corruptions may be arbitrarily bad, even so bad that a classifier predicts the wrong labels with high confidence. To protect against such sources of noise, we leverage the fact that a small set of clean labels is often easy to procure. We demonstrate that robustness to label noise up to severe strengths can be achieved by using a set of trusted data with clean labels, and propose a loss correction that utilizes trusted examples in a data-efficient manner to mitigate the effects of label noise on deep neural network classifiers. Across vision and natural language processing tasks, we experiment with various label noises at several strengths, and show that our method significantly outperforms existing methods.
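The loss-correction idea in this abstract can be sketched in a few lines: use the trusted (clean-label) examples to estimate a corruption matrix C, where C[i][j] approximates P(noisy label j | true label i), then score the classifier's clean-label predictions against the noisy labels through C. This is only a toy numerical illustration, not the paper's exact procedure; the function names and arrays below are our own assumptions, and it assumes every class appears at least once in the trusted set.

```python
import math

def estimate_corruption_matrix(probs_trusted, true_labels, num_classes):
    # C[i][j] ~= P(noisy label j | true label i), estimated by averaging
    # the noisy-label model's predicted distributions over trusted
    # examples whose true label is i.
    C = [[0.0] * num_classes for _ in range(num_classes)]
    counts = [0] * num_classes
    for p, y in zip(probs_trusted, true_labels):
        counts[y] += 1
        for j in range(num_classes):
            C[y][j] += p[j]
    return [[v / counts[i] for v in row] for i, row in enumerate(C)]

def corrected_log_likelihood(probs_clean, noisy_label, C):
    # Push the clean-label prediction through the corruption matrix:
    # P(noisy = j | x) = sum_i P(true = i | x) * C[i][j],
    # then score the observed noisy label under that distribution.
    num_classes = len(C)
    noisy_prob = sum(probs_clean[i] * C[i][noisy_label]
                     for i in range(num_classes))
    return math.log(noisy_prob)
```

With an identity corruption matrix (no noise), the corrected log-likelihood reduces to the ordinary log-likelihood, which is a quick sanity check on the construction.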

FouRA: Fourier Low Rank Adaptation

Neural Information Processing Systems

While Low-Rank Adaptation (LoRA) has proven beneficial for efficiently fine-tuning large models, LoRA fine-tuned text-to-image diffusion models lack diversity in the generated images, as the model tends to copy data from the observed training samples.
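As background for FouRA, the standard LoRA update it builds on can be sketched as y = Wx + (alpha/r) * B(Ax), where only the low-rank factors A (r x d_in) and B (d_out x r) are trained. The sketch below uses naive matrix multiplies for clarity; per its title, FouRA applies the low-rank adaptation in a Fourier domain, which is not shown here, and the function names are our own.

```python
def matmul(A, B):
    # Naive dense matrix multiply for small illustrative matrices.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_forward(x, W, A, B, alpha, r):
    # y = W x + (alpha / r) * B (A x). Because A has r rows and B has
    # r columns, the learned update B @ A has rank at most r.
    base = matmul(W, x)
    delta = matmul(B, matmul(A, x))
    s = alpha / r
    return [[base[i][j] + s * delta[i][j] for j in range(len(base[0]))]
            for i in range(len(base))]
```

When A and B are zero (the usual initialization for one of the factors), the adapted layer reproduces the frozen base layer exactly, so fine-tuning starts from the pretrained behavior.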


Supplementary Material 1 Additional Implementation Details

Neural Information Processing Systems

We printed a checkerboard with a 9x10 grid of blocks, each measuring 87 mm x 87 mm.

Table 3: Parameters for the Panoptic Segmentation model

  Parameter             Value
  Model Architecture    Panoptic-PolarNet
  Train Batch Size      2
  Val Batch Size        2
  Test Batch Size       1
  post proc threshold   0.1
  post proc nms kernel  5
  post proc top k       100
  center loss           MSE
  offset loss           L1
  center loss weight    100
  offset loss weight    10
  enable SAP            True
  SAP start epoch       30
  SAP rate              0.01

Table 4: Parameters for the 4D Panoptic Segmentation model

  Parameter             Value(s)
  Model Architecture    4D-StOP
  Learning Rate         0.0005
  Momentum              0.98
  Stride                1
  Max in points         5000
  Sampling              importance
  Decay Sampling        None
  Input Threads         16
  Checkpoint Gap        100

Results. The results are shown in Table 8, which presents the mean intersection-over-union (mIoU) percentages. They reveal significant variance in performance across categories: 'Structure' and 'Ground' both achieved high mIoU, with 'Structure' the highest, while the 'General Objects' category has the lowest. The dataset is divided into 17 and 6 categories, respectively, distinguishing 'Ground' from 'Roads' rather than grouping everything related to ground into a single category. Overall, the performance across these tasks underscores the challenges posed by our dataset. With our dataset, future work can focus on improving the model's capacity to handle such diverse categories. The raw data, processed data, and framework code can be found on our website.
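For reproducibility, the Panoptic-PolarNet settings listed above could be captured in a single config dict. The key names and nesting below are our own assumptions for illustration, not the actual Panoptic-PolarNet configuration schema.

```python
# Hypothetical config mirroring the Panoptic Segmentation parameters;
# key names are assumptions, not the real config file format.
PANOPTIC_POLARNET_CONFIG = {
    "model_architecture": "Panoptic-PolarNet",
    "train_batch_size": 2,
    "val_batch_size": 2,
    "test_batch_size": 1,
    "post_proc": {"threshold": 0.1, "nms_kernel": 5, "top_k": 100},
    "center_loss": "MSE",
    "offset_loss": "L1",
    "center_loss_weight": 100,
    "offset_loss_weight": 10,
    "enable_SAP": True,
    "SAP_start_epoch": 30,
    "SAP_rate": 0.01,
}
```

Keeping such settings in one structure makes it easy to log the exact hyperparameters alongside each experiment run.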