style factor


Appendix for " CS-Isolate: Extracting Hard Confident Examples by Content and Style Isolation " Y exiong Lin 1 Y u Y ao

Neural Information Processing Systems

We denote observed variables in gray and latent variables in white. First, we introduce the concept of an uncontrolled style factor and ask why confident examples encourage content-style isolation. In each training stage, the loss is calculated using Eq. 1 (and later Eq. 2) to update the networks, yielding the inference networks and the classifier heads q. It is essential to understand that although data augmentation cannot control all style factors, it still offers the benefit of "partial isolation": style changes do not affect the derived content representation. Finally, confident and unlabeled examples are used to train the models based on the MixMatch algorithm.
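The final step above relies on MixMatch, whose core operation is mixing a labeled (here, confident) example with an unlabeled example carrying a guessed soft label. A minimal sketch of that mixup step, with made-up toy vectors (the Beta parameter and inputs are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.75):
    """MixMatch-style mixup: convex combination of two examples.

    MixMatch takes lam = max(lam, 1 - lam) so the mixed example
    stays closer to the first (confident) input.
    """
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

# A confident example mixed with an unlabeled one whose
# label is a guessed soft label.
x_conf = np.array([1.0, 0.0]); y_conf = np.array([1.0, 0.0])
x_unl  = np.array([0.0, 1.0]); y_unl  = np.array([0.4, 0.6])
x_mix, y_mix = mixup(x_conf, y_conf, x_unl, y_unl)
```

Because lam is clamped to at least 0.5, the mixed example remains dominated by the confident input, which keeps the pseudo-label relatively reliable.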




CS-Isolate: Extracting Hard Confident Examples by Content and Style Isolation

Neural Information Processing Systems

Label noise widely exists in large-scale image datasets. To mitigate the side effects of label noise, state-of-the-art methods focus on selecting confident examples by leveraging semi-supervised learning. Existing research shows that the ability to extract hard confident examples, which are close to the decision boundary, significantly influences the generalization ability of the learned classifier. In this paper, we find that a key reason some hard examples lie close to the decision boundary is the entanglement of style factors with content factors. The hard examples become more discriminative when we focus solely on content factors, such as semantic information, while ignoring style factors. Nonetheless, given only noisy data, content factors are not directly observed and have to be inferred. To tackle the problem of inferring content factors for classification when learning with noisy labels, our objective is to ensure that the content factors of all examples in the same underlying clean class remain unchanged as their style information changes. To achieve this, we utilize different data augmentation techniques to alter the styles while regularizing content factors based on some confident examples. By training existing methods with our inferred content factors, CS-Isolate demonstrates its effectiveness in learning hard examples on benchmark datasets. The implementation is available at https://github.com/tmllab/2023
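The regularization idea above — content factors should stay fixed while augmentations alter style — can be sketched as a consistency penalty between the representations of two augmented views of the same examples. This is an illustrative sketch, not the paper's exact objective; the toy linear encoder and noise-based "augmentations" are assumptions:

```python
import numpy as np

def content_consistency_loss(encode, x_weak, x_strong):
    """Penalize the distance between content representations of two
    augmented views of the same examples (mean squared error)."""
    z_w, z_s = encode(x_weak), encode(x_strong)
    return float(np.mean((z_w - z_s) ** 2))

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))            # toy linear "encoder"
encode = lambda x: x @ W
x = rng.normal(size=(16, 8))
x_weak = x + 0.01 * rng.normal(size=x.shape)    # stand-ins for two
x_strong = x + 0.10 * rng.normal(size=x.shape)  # style-changing augmentations
loss = content_consistency_loss(encode, x_weak, x_strong)
```

Driving this loss toward zero forces the encoder to map style-perturbed views of an example to the same content representation, which is the isolation property the abstract describes.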


Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization

Neural Information Processing Systems

Adversarial contrastive learning (ACL) is a technique that enhances standard contrastive learning (SCL) by incorporating adversarial data to learn a robust representation that can withstand adversarial attacks and common corruptions without requiring costly annotations. To improve transferability, existing work introduced standard invariant regularization (SIR) to impose the style-independence property on SCL, which removes the impact of nuisance style factors from the standard representation. However, it is unclear how the style-independence property benefits ACL-learned robust representations. In this paper, we leverage causal reasoning to interpret ACL and propose adversarial invariant regularization (AIR) to enforce independence from style factors. We regularize ACL using both SIR and AIR to output the robust representation. Theoretically, we show that AIR implicitly encourages the representational distance between different views of natural data and their adversarial variants to be independent of style factors. Empirically, our experimental results show that invariant regularization significantly improves the performance of state-of-the-art ACL methods in terms of both standard generalization and robustness on downstream tasks. To the best of our knowledge, we are the first to apply causal reasoning to interpret ACL and develop AIR for enhancing ACL-learned robust representations.
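The theoretical claim above — the representational distance between natural data and its adversarial variant should be independent of style — can be illustrated with a simple penalty. This is a hedged sketch, not the paper's AIR loss: the linear encoder, the noise-based "adversarial" view, and the additive "brighten" style transform are all toy assumptions.

```python
import numpy as np

def style_independence_penalty(encode, x, x_adv, style):
    """Illustrative invariance penalty: the representational distance
    between natural data and its adversarial variant should not change
    when the same style transform is applied to both."""
    d_nat = np.linalg.norm(encode(x) - encode(x_adv))
    d_sty = np.linalg.norm(encode(style(x)) - encode(style(x_adv)))
    return abs(d_nat - d_sty)

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 4))
encode = lambda x: x @ W                       # toy linear encoder
x = rng.normal(size=(16, 8))
x_adv = x + 0.05 * rng.normal(size=x.shape)    # stand-in "adversarial" view
brighten = lambda x: x + 0.5                   # stand-in style transform
penalty = style_independence_penalty(encode, x, x_adv, brighten)
```

For this linear encoder and additive style shift the penalty is (numerically) zero, since the shift cancels in the representational difference; a nonlinear encoder would generally incur a nonzero penalty, which a regularizer of this shape would push down.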


Appendix for " CS-Isolate: Extracting Hard Confident Examples by Content and Style Isolation " Y exiong Lin 1 Y u Y ao

Neural Information Processing Systems

We denote observed variables with gray color and latent variables with white color. Firstly, we introduce the concept of an uncontrolled style factor . Why do confident examples encourage content-style isolation? Calculate the loss using Eq. 1 and update networks; Output: The inference networks and classifier heads q It's essential to understand that although data augmentation cannot control all style factors, it still offers the benefit of "partial isolation". This approach, therefore, ensures that styles changes don't affect the derived content representation Calculate the loss using Eq. 2 and update networks; Output: The inference networks and classifier heads q Finally, confident and unlabeled examples are used to train the models based on the MixMatch algorithm.




Divide, Discover, Deploy: Factorized Skill Learning with Symmetry and Style Priors

Cathomen, Rafael, Mittal, Mayank, Vlastelica, Marin, Hutter, Marco

arXiv.org Artificial Intelligence

Unsupervised Skill Discovery (USD) allows agents to autonomously learn diverse behaviors without task-specific rewards. While recent USD methods have shown promise, their application to real-world robotics remains underexplored. In this paper, we propose a modular USD framework to address the challenges in the safety, interpretability, and deployability of the learned skills. Our approach employs user-defined factorization of the state space to learn disentangled skill representations. It assigns different skill discovery algorithms to each factor based on the desired intrinsic reward function. To encourage structured morphology-aware skills, we introduce symmetry-based inductive biases tailored to individual factors. We also incorporate a style factor and regularization penalties to promote safe and robust behaviors. We evaluate our framework in simulation using a quadrupedal robot and demonstrate zero-shot transfer of the learned skills to real hardware. Our results show that factorization and symmetry lead to the discovery of structured human-interpretable behaviors, while the style factor and penalties enhance safety and diversity. Additionally, we show that the learned skills can be used for downstream tasks and perform on par with oracle policies trained with hand-crafted rewards.
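The abstract describes composing a per-step reward from a skill-discovery intrinsic term, a style term, and regularization penalties. A minimal sketch of such a composition, where the weight, term names, and toy numbers are all assumptions rather than the paper's formulation:

```python
def skill_reward(intrinsic, style_bonus, penalties, w_style=0.5):
    """Illustrative per-step reward: diversity-driven intrinsic reward,
    plus a weighted style term encouraging safe gaits, minus
    regularization penalties (e.g. joint limits, torque smoothness)."""
    return intrinsic + w_style * style_bonus - sum(penalties)

# Toy numbers: intrinsic skill-discovery reward 1.2, style bonus 0.4,
# penalties for joint-limit violation and torque smoothness.
r = skill_reward(1.2, 0.4, [0.1, 0.05])
```

Separating the style term and penalties from the intrinsic reward keeps the skill-discovery objective intact while letting safety-related terms be tuned independently.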

