Jamonnak, Suphanut
CLIP-S$^4$: Language-Guided Self-Supervised Semantic Segmentation
He, Wenbin; Jamonnak, Suphanut; Gou, Liang; Ren, Liu
Existing semantic segmentation approaches are often limited by costly pixel-wise annotations and predefined classes. In this work, we present CLIP-S$^4$, which leverages self-supervised pixel representation learning and vision-language models to enable various semantic segmentation tasks (e.g., unsupervised, transfer learning, language-driven segmentation) without requiring human annotations or prior knowledge of unknown classes. We first learn pixel embeddings with pixel-segment contrastive learning from different augmented views of images. To further improve the pixel embeddings and enable language-driven semantic segmentation, we design two types of consistency guided by vision-language models: 1) embedding consistency, aligning our pixel embeddings to the joint feature space of a pre-trained vision-language model, CLIP; and 2) semantic consistency, forcing our model to make the same predictions as CLIP over a set of carefully designed target classes with both known and unknown prototypes. Thus, CLIP-S$^4$ enables a new task of class-free semantic segmentation, where no unknown class information is needed during training. As a result, our approach shows consistent and substantial performance improvements on four popular benchmarks compared with state-of-the-art unsupervised and language-driven semantic segmentation methods. More importantly, our method outperforms these methods on unknown class recognition by a large margin.
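The two consistency objectives above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the temperature value, and the use of cosine similarity and a cross-entropy matching term are illustrative assumptions about how embedding consistency (pulling pixel embeddings toward CLIP features) and semantic consistency (matching CLIP's class distribution over text prototypes) might look.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Unit-normalize so dot products equal cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def embedding_consistency_loss(pixel_emb, clip_emb):
    # Illustrative: pull each pixel embedding toward the corresponding
    # CLIP visual feature by maximizing cosine similarity (loss = 1 - cos).
    p = l2_normalize(pixel_emb)
    c = l2_normalize(clip_emb)
    return float(np.mean(1.0 - np.sum(p * c, axis=-1)))

def semantic_consistency_loss(pixel_emb, text_emb, clip_pixel_emb, tau=0.07):
    # Illustrative: encourage the model's class distribution over text
    # prototypes (known and unknown) to match CLIP's distribution,
    # via cross-entropy between the two softmaxes. tau is an assumed
    # temperature, not a value from the paper.
    def probs(e):
        logits = l2_normalize(e) @ l2_normalize(text_emb).T / tau
        logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
        ex = np.exp(logits)
        return ex / ex.sum(axis=-1, keepdims=True)
    p_model = probs(pixel_emb)
    p_clip = probs(clip_pixel_emb)
    return float(np.mean(-np.sum(p_clip * np.log(p_model + 1e-8), axis=-1)))
```

When the pixel embeddings already coincide with the CLIP features, the embedding consistency loss vanishes, and the semantic consistency term reduces to the entropy of CLIP's own predictions; during training, both terms push the segmentation model toward CLIP's joint feature space.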
Visual Understanding of Multiple Attributes Learning Model of X-Ray Scattering Images
Huang, Xinyi; Jamonnak, Suphanut; Zhao, Ye; Wang, Boyu; Hoai, Minh; Yager, Kevin; Xu, Wei
X-ray scattering is widely used in biomedical, material, and physical applications, where structural patterns are analyzed in the resulting images [21]. X-ray equipment can generate up to 1 million images per day, which imposes a heavy burden on post-acquisition image analysis. A variety of image analysis methods have been applied to x-ray scattering data. Recently, deep learning models have been employed to classify and annotate multiple image attributes from experimental or synthetic images, and were shown to outperform previously published methods [18, 4]. As with most deep learning paradigms, these methods are not easily understood by material, physical, and biomedical scientists. The lack of proper explanations and the absence of control over the decisions make the models less trustworthy. While considerable effort has been made to render deep learning interpretable and controllable by humans [3], the existing techniques are not specifically designed for scientific image classification models of x-ray scattering images, which require extra consideration of questions such as: how do the learning models perform for a diverse set of overlapping attributes with high variation?