Yamada, Kazunori D
A Language Anchor-Guided Method for Robust Noisy Domain Generalization
Dai, Zilin, Wang, Lehong, Lin, Fangzhou, Wang, Yidong, Li, Zhigang, Yamada, Kazunori D, Zhang, Ziming, Lu, Wang
Real-world machine learning applications are often hindered by two critical challenges: distribution shift and label noise. Networks inherently tend to overfit to redundant, uninformative features present in the training distribution, which undermines their ability to generalize to the target domain's distribution. The presence of noisy data further exacerbates this issue by inducing additional overfitting to noise, causing existing domain generalization methods to fail at distinguishing invariant features from spurious ones. To address these challenges, we introduce a weighted loss function that dynamically adjusts the contribution of each sample based on its distance to the corresponding language (NLP) anchor, thereby improving the model's resilience to noisy labels.

Domain Generalization (DG) has emerged as a pivotal problem in machine learning, aiming to develop models that maintain high performance on previously unseen environments--or domains. Traditional methods often assume that training and test data share the same distribution, yet in real-world scenarios there is frequently a substantial shift between these distributions. This phenomenon, widely referred to as domain shift, can cause severe performance degradation in tasks spanning computer vision, natural language processing, and medical image analysis [1]. As shown in Figure 1(a) and (b), even within the same class label, the distribution of feature representations can vary considerably. This variation may stem from differences in image acquisition conditions--such as lighting variations, changes in pose, or complex background environments--and even from more subtle domain-specific factors like sensor noise or camera calibration differences. Such intra-class variability poses a significant challenge for developing accurate and adaptable models, which must learn to extract invariant features that capture the true semantic essence of the class while ignoring irrelevant variations.
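The anchor-weighted loss described in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes cosine distance between normalized image features and per-class language anchors, and an exponential down-weighting of samples far from their class anchor; the function and parameter names (`anchor_weighted_loss`, `tau`) are hypothetical.

```python
import torch
import torch.nn.functional as F

def anchor_weighted_loss(features, labels, anchors, tau=0.1):
    """Cross-entropy against language anchors, down-weighting far samples.

    features: (B, D) image embeddings; labels: (B,) class indices;
    anchors:  (C, D) language (text) embeddings, one per class.
    """
    f = F.normalize(features, dim=-1)
    a = F.normalize(anchors, dim=-1)
    logits = f @ a.t()  # (B, C) cosine similarities to each class anchor
    # Cosine distance of each sample to its own class anchor.
    d = 1.0 - logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    # Samples far from their anchor are treated as likely noisy and down-weighted.
    w = torch.exp(-d / tau).detach()
    per_sample = F.cross_entropy(logits / tau, labels, reduction="none")
    return (w * per_sample).mean()
```

In this sketch the weights are detached from the computation graph, so the gradient flows only through the per-sample losses; a sample's influence shrinks smoothly as it drifts away from its class anchor.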
Procedural Content Generation via Generative Artificial Intelligence
Mao, Xinyu, Yu, Wanli, Yamada, Kazunori D, Zielewski, Michael R.
Attempts to utilize machine learning in procedural content generation (PCG) have been made in the past. In this survey paper, we investigate how generative artificial intelligence (AI), which saw a significant increase in interest in the mid-2010s, is being used for PCG. We review applications of generative AI for the creation of various types of content, including terrains, items, and even storylines. While generative AI is effective for PCG, one significant issue it faces is that building high-performance generative AI requires vast amounts of training data. Because content is generally highly customized, domain-specific training data is scarce, and straightforward applications of generative AI models may not work well. For PCG research to advance further, issues related to limited training data must be overcome. Thus, we also give special consideration to research that addresses the challenges posed by limited training data.
Hyperbolic Contrastive Learning
Yue, Yun, Lin, Fangzhou, Yamada, Kazunori D, Zhang, Ziming
Learning good image representations that benefit downstream tasks is a challenging problem in computer vision. As such, a wide variety of self-supervised learning approaches have been proposed. Among them, contrastive learning has shown competitive performance on several benchmark datasets. The embeddings learned by contrastive methods are typically arranged on a hypersphere, which amounts to using the inner (dot) product as the similarity measure in Euclidean space. However, data in many scientific fields, such as social networks, brain imaging, and computer graphics, exhibit highly non-Euclidean latent geometry. We propose a novel contrastive learning framework that learns semantic relationships in hyperbolic space. Hyperbolic space is a continuous analogue of trees: it naturally models hierarchical structures and is thus beneficial for efficient contrastive representation learning. We also extend the proposed Hyperbolic Contrastive Learning (HCL) to the supervised setting and study the adversarial robustness of HCL. Comprehensive experiments show that our method achieves better results on self-supervised pretraining and supervised classification, as well as higher robust accuracy, than baseline methods.
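As a concrete illustration of contrastive learning in hyperbolic space, here is a minimal sketch assuming the Poincare-ball model and an InfoNCE-style objective that uses negative hyperbolic distance as the similarity score. It is not HCL's exact formulation, and all names (`poincare_distance`, `hyperbolic_info_nce`, `temperature`) are illustrative.

```python
import torch
import torch.nn.functional as F

def poincare_distance(u, v, eps=1e-5):
    # Geodesic distance between points inside the unit Poincare ball:
    # d(u, v) = acosh(1 + 2||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2))).
    uu = u.pow(2).sum(-1).clamp(max=1 - eps)
    vv = v.pow(2).sum(-1).clamp(max=1 - eps)
    uv = (u - v).pow(2).sum(-1)
    x = 1 + 2 * uv / ((1 - uu) * (1 - vv))
    return torch.acosh(x.clamp(min=1 + eps))

def hyperbolic_info_nce(z1, z2, temperature=0.5):
    # z1, z2: (B, D) embeddings of two augmented views, already inside the ball.
    z = torch.cat([z1, z2], dim=0)                         # (2B, D)
    d = poincare_distance(z.unsqueeze(1), z.unsqueeze(0))  # (2B, 2B) pairwise distances
    logits = -d / temperature                              # smaller distance -> larger logit
    logits.fill_diagonal_(float("-inf"))                   # exclude trivial self-pairs
    B = z1.size(0)
    idx = torch.arange(2 * B, device=z.device)
    targets = (idx + B) % (2 * B)                          # each view's positive is its twin
    return F.cross_entropy(logits, targets)
```

The only change from a standard Euclidean InfoNCE loss is the distance function: swapping the dot product for the Poincare geodesic distance makes embeddings near the ball's boundary, where volume grows exponentially, behave like leaves of a tree, which is what gives hyperbolic space its affinity for hierarchical structure.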