
Collaborating Authors

 Park, Junsung


No Thing, Nothing: Highlighting Safety-Critical Classes for Robust LiDAR Semantic Segmentation in Adverse Weather

arXiv.org Artificial Intelligence

Existing domain generalization methods for LiDAR semantic segmentation under adverse weather struggle to accurately predict "things" categories compared to "stuff" categories. In typical driving scenes, "things" categories can be dynamic and associated with higher collision risks, making them crucial for safe navigation and planning. Recognizing the importance of "things" categories, we identify their performance drop as a serious bottleneck in existing approaches. We observed that adverse weather induces both degradation of semantic-level features and corruption of local features, leading to the misprediction of "things" as "stuff". To mitigate these corruptions, we propose our method, NTN (segmeNt Things for No-accident). To address semantic-level feature corruption, we bind each point feature to its superclass, preventing the misprediction of "things" classes into visually dissimilar categories. Additionally, to enhance robustness against local corruption caused by adverse weather, we define each LiDAR beam as a local region and propose a regularization term that aligns the clean data with its corrupted counterpart in feature space. NTN achieves state-of-the-art performance with a +2.6 mIoU gain on the SemanticKITTI-to-SemanticSTF benchmark and +7.9 mIoU on the SemanticPOSS-to-SemanticSTF benchmark. Notably, NTN achieves a +4.8 and +7.9 mIoU improvement on "things" classes, respectively, highlighting its effectiveness.
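A minimal sketch of the beam-wise regularization idea described above: each LiDAR beam is treated as a local region, and the corrupted scan's per-beam features are pulled toward their clean counterparts. The tensor layout and the MSE criterion are assumptions for illustration, not the authors' implementation.

    # Hedged sketch: beam-wise alignment of clean and corrupted point features.
    # Tensor names and the MSE criterion are assumptions, not the NTN code.
    import torch
    import torch.nn.functional as F

    def beam_alignment_loss(clean_feats, corrupt_feats, beam_ids, num_beams):
        """clean_feats, corrupt_feats: (N, C) per-point features of the same scene;
        beam_ids: (N,) LiDAR beam index of each point, treated as a local region."""
        loss = 0.0
        for b in range(num_beams):
            mask = beam_ids == b
            if mask.any():
                # Pull the corrupted beam's mean feature toward its clean counterpart.
                loss = loss + F.mse_loss(corrupt_feats[mask].mean(dim=0),
                                         clean_feats[mask].mean(dim=0))
        return loss / num_beams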


Know "No" Better: A Data-Driven Approach for Enhancing Negation Awareness in CLIP

arXiv.org Artificial Intelligence

While CLIP has significantly advanced multimodal understanding by bridging vision and language, its inability to grasp negation - such as failing to differentiate concepts like "parking" from "no parking" - poses substantial challenges. By analyzing the data used in the public CLIP model's pre-training, we posit this limitation stems from a lack of negation-inclusive data. To address this, we introduce data generation pipelines that employ a large language model (LLM) and a multimodal LLM to produce negation-inclusive captions. Fine-tuning CLIP with data generated from our pipelines, we develop NegationCLIP, which enhances negation awareness while preserving generality. Moreover, to enable a comprehensive evaluation of negation understanding, we propose NegRefCOCOg, a benchmark tailored to test VLMs' ability to interpret negation across diverse expressions and positions within a sentence. Experiments on various CLIP architectures validate the effectiveness of our data generation pipelines in enhancing CLIP's ability to perceive negation accurately. Additionally, NegationCLIP's enhanced negation awareness has practical applications across various multimodal tasks, demonstrated by performance gains in text-to-image generation and referring image segmentation.
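As a rough illustration of the caption-generation step described above, the sketch below shows how an LLM might be prompted to rewrite a caption into a negation-inclusive one. The call_llm callable and the prompt wording are hypothetical placeholders, not the authors' pipeline.

    # Minimal sketch of a negation-inclusive caption generation step.
    # `call_llm` is a hypothetical stand-in for whichever LLM API is used;
    # the prompt wording is illustrative only.
    def negate_caption(caption: str, call_llm) -> str:
        prompt = (
            "Rewrite the following image caption so that it explicitly negates "
            "one object or attribute that is absent from the scene, while "
            "keeping the sentence fluent and faithful to the image.\n"
            f"Caption: {caption}\n"
            "Negated caption:"
        )
        return call_llm(prompt).strip()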


Unleashing Multi-Hop Reasoning Potential in Large Language Models through Repetition of Misordered Context

arXiv.org Artificial Intelligence

Multi-hop reasoning, which requires multi-step reasoning based on the supporting documents within a given context, remains challenging for large language models (LLMs). LLMs often struggle to filter out irrelevant documents within the context, and their performance is sensitive to the position of supporting documents within that context. In this paper, we identify an additional challenge: LLMs' performance is also sensitive to the order in which the supporting documents are presented. We refer to this as the misordered context problem. To address this issue, we propose a simple yet effective method called context repetition (CoRe), which prompts the model with the context presented multiple times so that the supporting documents appear in the optimal order for the model. Using CoRe, we improve the F1 score by up to 30%p on multi-hop QA tasks and increase accuracy by up to 70%p on a synthetic task. Additionally, CoRe helps mitigate the well-known "lost-in-the-middle" problem in LLMs and can be effectively combined with retrieval-based approaches utilizing Chain-of-Thought (CoT) reasoning.
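To make the context-repetition idea concrete, the sketch below builds a prompt in which the full set of supporting documents is presented twice before the question. The prompt template and the repetition count are illustrative assumptions, not the exact CoRe prompt.

    # Hedged sketch of context repetition (CoRe): concatenate the retrieved
    # documents and present the whole context more than once before the question.
    def build_core_prompt(documents, question, repetitions=2):
        context = "\n\n".join(documents)
        repeated = "\n\n".join([context] * repetitions)
        return f"{repeated}\n\nQuestion: {question}\nAnswer:"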


Rethinking Data Augmentation for Robust LiDAR Semantic Segmentation in Adverse Weather

arXiv.org Artificial Intelligence

Existing LiDAR semantic segmentation methods often struggle with performance declines in adverse weather conditions. Previous work has addressed this issue by simulating adverse weather or employing universal data augmentation during training. However, these methods lack a detailed analysis and understanding of how adverse weather negatively affects LiDAR semantic segmentation performance. Motivated by this issue, we identified key factors of adverse weather and conducted a toy experiment to pinpoint the main causes of performance degradation: (1) geometric perturbation due to refraction caused by fog or droplets in the air and (2) point drop due to energy absorption and occlusions. Based on these findings, we propose new strategic data augmentation techniques. First, we introduce Selective Jittering (SJ), which jitters points within a random range of depth (or angle) to mimic geometric perturbation. Additionally, we develop a Learnable Point Drop (LPD) that learns vulnerable erase patterns with a Deep Q-Learning Network to approximate the point drop phenomenon under adverse weather conditions. Without precise weather simulation, these techniques strengthen the LiDAR semantic segmentation model by exposing it to vulnerable conditions identified by our data-centric analysis. Experimental results confirm the suitability of the proposed data augmentation methods for enhancing robustness against adverse weather conditions. Our method achieves a notable 39.5 mIoU on the SemanticKITTI-to-SemanticSTF benchmark, improving the baseline by 8.1%p and establishing a new state-of-the-art. Our code will be released at https://github.com/engineerJPark/LiDARWeather.
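A minimal sketch of the selective jittering idea, assuming Gaussian noise applied only to points whose depth falls in a randomly sampled range. The noise scale and the range-sampling scheme are assumptions rather than the released implementation (see the repository linked above for the actual code).

    # Hedged sketch of Selective Jittering: perturb only points within a randomly
    # chosen depth band to mimic refraction-induced geometric perturbation.
    import numpy as np

    def selective_jitter(points, sigma=0.02):
        """points: (N, 3) LiDAR coordinates; returns a jittered copy."""
        depth = np.linalg.norm(points[:, :3], axis=1)
        lo = np.random.uniform(depth.min(), depth.max())
        hi = np.random.uniform(lo, depth.max())
        mask = (depth >= lo) & (depth <= hi)
        jittered = points.copy()
        jittered[mask, :3] += np.random.normal(0.0, sigma, size=(mask.sum(), 3))
        return jittered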


Entropy is not Enough for Test-Time Adaptation: From the Perspective of Disentangled Factors

arXiv.org Artificial Intelligence

The primary challenge of TTA is limited access to the entire test dataset during online updates, causing error accumulation. To mitigate this, TTA methods have utilized the model output's entropy as a confidence metric that aims to determine which samples have a lower likelihood of causing errors. Through experimental studies, however, we observed the unreliability of entropy as a confidence metric for TTA under biased scenarios and theoretically revealed that it stems from neglecting the influence of latent disentangled factors of data on predictions. Building upon these findings, we introduce a novel TTA method named Destroy Your Object (DeYO), which leverages a newly proposed confidence metric named Pseudo-Label Probability Difference (PLPD). PLPD quantifies the influence of the shape of an object on prediction by measuring the difference between predictions before and after applying an object-destructive transformation. DeYO consists of sample selection and sample weighting, which employ entropy and PLPD concurrently. For robust adaptation, DeYO prioritizes samples that dominantly incorporate shape information when making predictions. Our extensive experiments demonstrate the consistent superiority of DeYO over baseline methods across various scenarios, including biased and wild ones.

Although deep neural networks (DNNs) demonstrate powerful performance across various domains, they lack robustness against distribution shifts under conventional training (He et al., 2016; Pan & Yang, 2009). Therefore, research areas such as domain generalization (Blanchard et al., 2011; Gulrajani & Lopez-Paz, 2021), which involves training models to be robust against arbitrary distribution shifts, and unsupervised domain adaptation (UDA) (Ganin & Lempitsky, 2015; Park et al., 2020), which seeks domain-invariant information for label-absent target domains, have been extensively investigated in the existing literature. Test-time adaptation (TTA) (Wang et al., 2021a) has also gained significant attention as a means to address distribution shifts occurring during test time. TTA leverages each data point once for adaptation immediately after inference. Its minimal overhead compared to existing areas makes it particularly suitable for real-world applications (Azimi et al., 2022). Whereas UDA assumes access to the entire set of test samples before adaptation and can exploit its distribution for the task at hand (Kang et al., 2019), TTA's limited access to test data leads to inaccurate predictions, and incorporating them into model updates results in error accumulation within the model (Arazo et al., 2020).
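A minimal sketch of the PLPD metric described above: the probability assigned to the pseudo-label is compared before and after an object-destructive transformation. Only the probability-difference idea is taken from the abstract; the choice of transformation and any selection thresholds are left as assumptions.

    # Hedged sketch of Pseudo-Label Probability Difference (PLPD).
    # `destroy` is any object-destructive transformation (e.g., patch shuffling);
    # the specific transformation and thresholds are assumptions, not DeYO's code.
    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def plpd(model, x, destroy):
        """x: (B, C, H, W) test batch; returns a (B, 1) tensor of PLPD scores."""
        p = F.softmax(model(x), dim=1)
        pseudo = p.argmax(dim=1)                          # pseudo-labels from the clean input
        p_destroyed = F.softmax(model(destroy(x)), dim=1)
        # Drop in the pseudo-label's probability after destruction; larger values
        # indicate predictions that relied on object shape rather than spurious cues.
        return p.gather(1, pseudo[:, None]) - p_destroyed.gather(1, pseudo[:, None])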


DAFA: Distance-Aware Fair Adversarial Training

arXiv.org Artificial Intelligence

The disparity in accuracy between classes in standard training is amplified during adversarial training, a phenomenon termed the robust fairness problem. Existing methodologies have aimed to enhance robust fairness by sacrificing the model's performance on easier classes in order to improve its performance on harder ones. However, we observe that under adversarial attacks, the majority of the model's predictions for samples from the worst class are biased towards classes similar to the worst class, rather than towards the easy classes. Through theoretical and empirical analysis, we demonstrate that robust fairness deteriorates as the distance between classes decreases. Motivated by these insights, we introduce the Distance-Aware Fair Adversarial training (DAFA) methodology, which addresses robust fairness by taking into account the similarities between classes. Specifically, our method assigns distinct loss weights and adversarial margins to each class and adjusts them to encourage a trade-off in robustness among similar classes. Experimental results across various datasets demonstrate that our method not only maintains average robust accuracy but also significantly improves the worst robust accuracy, indicating a marked improvement in robust fairness compared to existing methods.

Recent studies have revealed the issue of accuracy imbalance among classes (He & Garcia, 2009). This imbalance becomes even more pronounced during adversarial training, which utilizes adversarial examples (Szegedy et al., 2013) to enhance the robustness of the model (Madry et al., 2017). This phenomenon is commonly referred to as the "robust fairness problem" (Xu et al., 2021). Existing research has introduced methods inspired by long-tailed (LT) classification studies (He & Garcia, 2009; Zhang et al., 2023) to mitigate the challenge of achieving robust fairness. LT classification tasks tackle the problem of accuracy imbalance among classes, stemming from classifier bias toward classes with a substantial number of samples (head classes) within the LT dataset. The methods proposed for LT classification mainly apply opposing strategies to head classes and tail classes, i.e., those classes within LT datasets that have only a limited number of samples. For instance, methods proposed by Cao et al. (2019); Khan et al. (2019); Menon et al. (2020) deliberately reduce the model output for head classes while augmenting the output for tail classes by adding constants. These approaches typically lead to improved accuracy for tail classes at the expense of reduced accuracy for head classes. Benz et al. (2021) noted similarities between the fairness issue in LT classification and that in adversarial training, matching the head and tail classes in LT classification to the easy and hard classes in adversarial training, respectively.
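The abstract only summarizes how DAFA derives per-class loss weights from inter-class distances, so the sketch below is an assumption: classes closer to the worst-performing class receive larger weights, which are then used in an ordinary class-weighted adversarial cross-entropy.

    # Hedged sketch of distance-aware class weighting; the softmax-based rule is
    # an illustrative assumption, not the DAFA formula.
    import torch
    import torch.nn.functional as F

    def distance_aware_weights(class_dist, worst_class, temperature=1.0):
        """class_dist: (K, K) pairwise distances between classes (e.g., between
        class-mean features). Classes near the worst class get larger weights,
        so the robustness trade-off happens mainly among similar classes."""
        sim_to_worst = torch.softmax(-class_dist[worst_class] / temperature, dim=0)
        return 1.0 + sim_to_worst  # baseline weight of 1 plus a similarity bonus

    def weighted_adv_loss(logits_adv, targets, class_weights):
        # Class-weighted cross-entropy on adversarial examples.
        return F.cross_entropy(logits_adv, targets, weight=class_weights)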


PUCA: Patch-Unshuffle and Channel Attention for Enhanced Self-Supervised Image Denoising

arXiv.org Artificial Intelligence

Although supervised image denoising networks have shown remarkable performance on synthesized noisy images, they often fail in practice due to the difference between real and synthesized noise. Since clean-noisy image pairs from the real world are extremely costly to gather, self-supervised learning, which utilizes the noisy input itself as a target, has been studied. To prevent a self-supervised denoising model from learning the identity mapping, each output pixel should not be influenced by its corresponding input pixel; this requirement is known as J-invariance. Blind-spot networks (BSNs) have been a prevalent choice for ensuring J-invariance in self-supervised image denoising. However, constructing variations of BSNs by injecting additional operations such as downsampling can expose blinded information, thereby violating J-invariance. Consequently, only convolutions specifically designed for BSNs have been allowed, limiting architectural flexibility. To overcome this limitation, we propose PUCA, a novel J-invariant U-Net architecture for self-supervised denoising. PUCA leverages patch-unshuffle/shuffle to dramatically expand receptive fields while maintaining J-invariance, together with dilated attention blocks (DABs) for global context incorporation. Experimental results demonstrate that PUCA achieves state-of-the-art performance, outperforming existing methods in self-supervised image denoising.
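To make the patch-unshuffle/shuffle idea concrete, the sketch below shows the rearrangement step in isolation, assuming PyTorch's pixel_unshuffle/pixel_shuffle as the underlying operation. Whether PUCA uses exactly these primitives is an assumption, and the blind-spot network around them is omitted.

    # Sketch of the patch-unshuffle/shuffle rearrangement. Unshuffling moves
    # spatial neighbors into channels, letting a fixed-size blind-spot network
    # cover a larger spatial area; shuffling restores the original resolution.
    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 3, 256, 256)                    # noisy input
    down = F.pixel_unshuffle(x, downscale_factor=2)    # shape (1, 12, 128, 128)
    # ... the blind-spot network body would operate on `down` here ...
    up = F.pixel_shuffle(down, upscale_factor=2)       # back to (1, 3, 256, 256)
    assert torch.equal(x, up)                          # the rearrangement is lossless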