All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation — Liyao Tang

Neural Information Processing Systems

This approach may, however, hinder the comprehensive exploitation of unlabeled data points. We hypothesize that this selective usage arises from the noise in pseudo-labels generated on unlabeled data. Noisy pseudo-labels can diverge significantly from model predictions, and this discrepancy confuses and destabilizes model training.
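The interplay described here, noisy pseudo-labels diverging from model predictions, is naturally expressed as a loss with two terms: an entropy regularizer on the pseudo-labels and an alignment term between pseudo-labels and predictions. Below is a minimal PyTorch sketch of that general recipe; the function name, loss weights, and KL direction are illustrative assumptions, not the paper's exact formulation.

```python
import torch.nn.functional as F

def entropy_regularized_alignment_loss(pred_logits, pseudo_logits,
                                       w_entropy=1.0, w_align=1.0):
    """Entropy regularization + distribution alignment on unlabeled points.

    pred_logits:   model predictions on unlabeled points, shape (N, C).
    pseudo_logits: logits from which soft pseudo-labels are derived, (N, C).
    The weights and the KL direction are illustrative assumptions.
    """
    p_pred = F.softmax(pred_logits, dim=-1)      # model prediction
    p_pseudo = F.softmax(pseudo_logits, dim=-1)  # soft pseudo-label

    # Entropy regularization: push pseudo-labels toward low entropy
    # (i.e., higher confidence), reducing the noise they inject.
    entropy = -(p_pseudo * p_pseudo.clamp_min(1e-8).log()).sum(dim=-1).mean()

    # Distribution alignment: shrink the discrepancy between pseudo-labels
    # and predictions so all unlabeled points can contribute to training.
    align = F.kl_div(p_pred.clamp_min(1e-8).log(), p_pseudo,
                     reduction="batchmean")

    return w_entropy * entropy + w_align * align
```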





Data Ingestion

Neural Information Processing Systems

For all other remaining architectures, the reported results are from private datasets. The Neck Shaft Angle (NSA) cannot be estimated. Additionally, [?] requires estimation of the diaphysis.

Figure 4: Repeatability of the femur morphometry extraction method, as measured by error distributions for a) the landmarks/anatomical sizes and b) axis alignment identified by the adapted method.

Data splits are available in the GitHub repository.




Self-Adaptive Training: Beyond Empirical Risk Minimization

Neural Information Processing Systems

This problem is important for learning robustly from data that are corrupted by, e.g., random noise and adversarial examples. The standard empirical risk minimization (ERM) on such data, however, may easily overfit the noise and thus suffer from sub-optimal performance. In this paper, we observe that model predictions can substantially benefit the training process: self-adaptive training significantly mitigates the overfitting issue and improves generalization over ERM under both random and adversarial noise.
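A common way to realize "model predictions benefit the training process" is to blend the (possibly noisy) labels with the model's own predictions via an exponential moving average and train against the resulting soft targets. The PyTorch sketch below illustrates that idea; the class name, momentum value, and plain soft cross-entropy are illustrative assumptions, and details of the published method (e.g., a warm-up period before targets start adapting) are omitted.

```python
import torch.nn.functional as F

class SelfAdaptiveTargets:
    """Moving-average soft targets driven by model predictions.

    Targets start as the (possibly noisy) one-hot labels and are gradually
    blended with the model's own predictions, so consistently mispredicted
    labels get corrected instead of memorized. The momentum value and the
    plain soft cross-entropy below are illustrative choices.
    """

    def __init__(self, labels, num_classes, momentum=0.9):
        self.targets = F.one_hot(labels, num_classes).float()
        self.momentum = momentum

    def update_and_loss(self, logits, indices):
        probs = F.softmax(logits, dim=-1).detach()
        # Exponential moving average: t_i <- m * t_i + (1 - m) * p_i.
        self.targets[indices] = (self.momentum * self.targets[indices]
                                 + (1.0 - self.momentum) * probs)
        # Soft cross-entropy against the adapted targets.
        log_probs = F.log_softmax(logits, dim=-1)
        return -(self.targets[indices] * log_probs).sum(dim=-1).mean()

# Hypothetical usage:
#   sat = SelfAdaptiveTargets(train_labels, num_classes=10)
#   loss = sat.update_and_loss(model(x_batch), batch_indices)
```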


Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bottleneck

Neural Information Processing Systems

To clarify where adversarial brittleness truly comes from, we need to figure out how the robust and non-robust features in the data manifold subtly manipulate feature representations and fool model predictions, by directly handling them in the feature space. To address this, we propose a way to precisely distill intermediate features into robust and non-robust features by employing the Information Bottleneck (IB) [17, 18, 19].
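A variational Information Bottleneck over an intermediate feature is typically implemented with a stochastic Gaussian encoder: a cross-entropy term preserves label-relevant (robust) information while a KL term to a fixed prior compresses away the rest. The following PyTorch sketch shows this standard construction; the layer shapes, Gaussian posterior, unit-Gaussian prior, and beta value are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureIB(nn.Module):
    """Minimal variational Information Bottleneck on an intermediate feature.

    In the spirit of isolating the prediction-relevant ("robust") part of a
    representation: the cross-entropy term keeps label information, while the
    KL term compresses the feature. All sizes and priors are illustrative.
    """

    def __init__(self, feat_dim, bottleneck_dim, num_classes, beta=1e-3):
        super().__init__()
        self.mu = nn.Linear(feat_dim, bottleneck_dim)
        self.logvar = nn.Linear(feat_dim, bottleneck_dim)
        self.head = nn.Linear(bottleneck_dim, num_classes)
        self.beta = beta

    def forward(self, features, labels):
        mu, logvar = self.mu(features), self.logvar(features)
        # Reparameterized sample z ~ N(mu, sigma^2): the bottleneck variable.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # I(Z; Y) surrogate: keep label-relevant information.
        ce = F.cross_entropy(self.head(z), labels)
        # I(Z; X) surrogate: KL(N(mu, sigma^2) || N(0, I)) compresses z.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=-1).mean()
        return ce + self.beta * kl
```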