Greatness in Simplicity: Unified Self-Cycle Consistency for Parser-Free Virtual Try-On
Junyin Wang

Neural Information Processing Systems

Image-based virtual try-on remains challenging, primarily due to the inherent complexity of modeling non-rigid garment deformation and the strong entanglement of clothing features with the human body. Recent groundbreaking formulations, such as in-painting, cycle consistency, and knowledge distillation, have enabled self-supervised generation of try-on images.





Geometric-Averaged Preference Optimization for Soft Preference Labels

Neural Information Processing Systems

Many algorithms for aligning LLMs with human preferences assume that human preferences are binary and deterministic. However, human preferences can vary across individuals and should therefore be represented distributionally. In this work, we introduce distributional soft preference labels and improve Direct Preference Optimization (DPO) with a weighted geometric average of the LLM output likelihoods in the loss function. This approach adjusts the scale of the learning loss based on the soft labels, so that the loss approaches zero when the responses are close to equally preferred. This simple modification can be applied to any DPO-based method and mitigates the over-optimization and objective mismatch from which prior works suffer. Our experiments simulate soft preference labels with AI feedback from LLMs and demonstrate that geometric averaging consistently improves performance on standard benchmarks for alignment research. In particular, we observe responses more preferable than those trained with binary labels, and significant improvements where modestly confident labels are in the majority.
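The core idea can be sketched in a few lines: a soft preference label modulates the DPO margin so that the learning signal (the gradient) vanishes as the label approaches 0.5. This is a minimal illustrative sketch, not the paper's exact objective; the linear scaling (2w - 1) and the beta value are assumptions of this sketch.

```python
import math

def logsigmoid(x):
    """Numerically direct log(sigmoid(x)) for moderate x."""
    return -math.log1p(math.exp(-x))

def soft_dpo_loss(margin, soft_label, beta=0.1):
    """DPO-style loss modulated by a soft preference label.

    margin: (policy log-ratio of the chosen response) minus
            (policy log-ratio of the rejected response),
            each taken relative to the reference model, as in standard DPO.
    soft_label: probability w in [0, 1] that the 'chosen' response is preferred.
    """
    scale = 2.0 * soft_label - 1.0  # 0 when the pair is equally preferred
    return -logsigmoid(beta * scale * margin)
```

With a hard label (soft_label = 1.0) this reduces to standard DPO; as soft_label approaches 0.5 the scaled margin goes to zero, so the loss flattens and its gradient with respect to the margin vanishes, which is one way to realize the down-weighting of near-tied pairs described above.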




ODGS: 3D Scene Reconstruction from Omnidirectional Images with 3D Gaussian Splatting

Neural Information Processing Systems

Omnidirectional (or 360-degree) images are increasingly being used for 3D applications since they allow the rendering of an entire scene with a single image. Existing works based on neural radiance fields demonstrate successful 3D reconstruction quality on egocentric videos, yet they suffer from long training and rendering times. Recently, 3D Gaussian splatting has gained attention for its fast optimization and real-time rendering.


A. Theoretical Guarantees for the FINE Algorithm

Neural Information Processing Systems

This section provides the detailed proof of Theorem 1 and the lower bounds on the precision and recall. We derive these theorems using concentration inequalities from probability theory. Throughout this section, we frequently use the spectral norm (e.g., ||U|| = 1 when U is an orthogonal matrix). A.2 Proof of Theorem 1. We first present some required lemmas used in the proof of Theorem 1. Lemma 1.


FINE Samples for Learning with Noisy Labels

Neural Information Processing Systems

Modern deep neural networks (DNNs) degrade when datasets contain noisy (incorrect) class labels. Robust techniques in the presence of noisy labels fall into two categories: developing noise-robust functions, or noise-cleansing methods that detect the noisy data. Recently, noise-cleansing methods have been considered the most competitive noisy-label learning algorithms. Despite their success, their noisy-label detectors are often based more on heuristics than on theory, requiring a robust classifier to flag the noisy data via loss values. In this paper, we propose a novel detector for filtering label noise. Unlike most existing methods, we focus on each data point's latent representation dynamics and measure the alignment between the latent distribution and each representation using the eigendecomposition of the data gram matrix. Our framework, coined FINE (filtering noisy instances via their eigenvectors), provides a robust detector using simple derivative-free methods with theoretical guarantees. Under our framework, we propose three applications of FINE: a sample-selection approach, a semi-supervised learning (SSL) approach, and collaboration with noise-robust loss functions.
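The alignment score described above can be sketched as follows: build the gram matrix of (normalized) latent representations for one class, extract its principal eigenvector, and score each sample by its squared projection onto that direction. This is a pure-Python sketch of the idea, not the paper's exact algorithm; the l2 normalization, the squared-projection score, and the power-iteration eigensolver are assumptions of this sketch.

```python
import math

def fine_alignment_scores(features, iters=100):
    """Score each sample's alignment with the principal eigenvector
    of the data gram matrix; low scores suggest a noisy label.

    features: list of equal-length lists, one latent vector per sample.
    Returns one score in [0, 1] per sample.
    """
    # l2-normalize each representation so norms do not dominate the score
    norms = [math.sqrt(sum(x * x for x in row)) for row in features]
    f = [[x / n for x in row] for row, n in zip(features, norms)]
    d = len(f[0])
    # gram matrix G = F^T F  (d x d)
    gram = [[sum(row[i] * row[j] for row in f) for j in range(d)]
            for i in range(d)]
    # power iteration for the principal eigenvector of G
    v = [1.0 / math.sqrt(d)] * d
    for _ in range(iters):
        w = [sum(gram[i][j] * v[j] for j in range(d)) for i in range(d)]
        s = math.sqrt(sum(x * x for x in w))
        v = [x / s for x in w]
    # squared projection of each normalized sample onto that direction
    return [sum(x, e * 0) + sum(x * e for x, e in zip(row, v)) ** 2
            if False else sum(x * e for x, e in zip(row, v)) ** 2
            for row in f]
```

For a class whose clean samples cluster along one dominant latent direction, samples orthogonal to that direction (likely mislabeled) receive scores near zero and can be filtered or handed to the SSL branch as unlabeled data.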