Appendix
This is the Appendix for "Self-Supervised Learning Disentangled Group Representation as Feature". Table 1 summarizes the abbreviations and symbols used in the main paper.

Abbreviation  Meaning
SSL           Self-Supervised Learning
SL            Supervised Learning
DCI           Disentanglement Metric for Informativeness
IRS           Interventional Robustness Score
EXP           Explicitness Score
MOD           Modularity Score
LR            Logistic Regression
GBT           Gradient Boosted Trees
OOD           Out-Of-Distribution

Symbol in Theory  Meaning
U             Semantic space
X             Vector space
I             Image space
G             Group
G(x)          Group orbit w.r.t. G containing the sample x
ϕ             Image generation process U → I
φ             Visual representation I → X
f             Semantic representation U → X
m             The number of decomposed subgroups

Symbol in Algorithm  Meaning
P             Partition of the dataset
P             Learned partition through Eq. (3)
P             Set of partitions used in Eq. (2)
N             Number of training images
θ             "Dummy" parameter used by IRM
I             Image
X             Feature

Table 1: List of abbreviations and symbols used in the paper.

Section A provides preliminary knowledge about group theory. Section D presents additional experimental results.

A Preliminaries

Groups often arise as transformations of some space, such as a set, vector space, or topological space. For example, the set of clockwise rotations of an equilateral triangle w.r.t. its centroid that retain its appearance forms a group. We say that this group of rotations acts on the triangle, which is formally defined below.
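As a minimal illustration of the idea above (a sketch, not code from the paper), the cyclic rotation group C3 of an equilateral triangle can be represented as permutations of the vertex labels; the group axioms and the orbit of a vertex under the group action can then be checked directly:

```python
from itertools import product

# The cyclic group C3 of rotations of an equilateral triangle,
# represented as permutations of the vertex labels (0, 1, 2).
# rot(k) rotates the triangle by k * 120 degrees.
def rot(k):
    return tuple((i + k) % 3 for i in range(3))

def compose(p, q):
    # Apply q first, then p (composition of permutations).
    return tuple(p[q[i]] for i in range(3))

C3 = [rot(k) for k in range(3)]
identity = rot(0)

# Group axioms on this finite set:
# closure: composing any two rotations yields another rotation
assert all(compose(p, q) in C3 for p, q in product(C3, C3))
# identity: rot(0) leaves every element unchanged
assert all(compose(identity, p) == p for p in C3)
# inverses: every rotation has an inverse in the set
assert all(any(compose(p, q) == identity for q in C3) for p in C3)

# The group *acts* on the triangle's vertices: element p sends vertex i
# to p[i], and the orbit of vertex 0 under C3 covers all three vertices.
orbit = {p[0] for p in C3}
print(orbit)  # {0, 1, 2}
```

The orbit here is the finite analogue of the group orbit G(x) in the notation table: the set of all points reachable from x by applying group elements.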
On the notion of missingness for path attribution explainability methods in medical settings: Guiding the selection of medically meaningful baselines
Geiger, Alexander, Wagner, Lars, Rueckert, Daniel, Wilhelm, Dirk, Jell, Alissa
The explainability of deep learning models remains a significant challenge, particularly in the medical domain where interpretable outputs are critical for clinical trust and transparency. Path attribution methods such as Integrated Gradients rely on a baseline representing the absence of relevant features ("missingness"). Commonly used baselines, such as all-zero inputs, are often semantically meaningless, especially in medical contexts. While alternative baseline choices have been explored, existing methods lack a principled approach to dynamically selecting baselines tailored to each input. In this work, we examine the notion of missingness in the medical context, analyze its implications for baseline selection, and introduce a counterfactual-guided approach to address the limitations of conventional baselines. We argue that a generated counterfactual (i.e., a clinically "normal" variation of the pathological input) provides a more faithful representation of a meaningful absence of features. We use a Variational Autoencoder in our implementation, though our concept is model-agnostic and can be applied with any suitable counterfactual method. We evaluate our concept on three distinct medical datasets and empirically demonstrate that counterfactual baselines yield more faithful and medically relevant attributions, outperforming standard baseline choices as well as other related methods.
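The mechanism the abstract describes can be sketched as follows. This is a hedged toy illustration, not the authors' implementation: the "model" is an analytic function f(x) = Σx² whose gradient 2x is known in closed form (in practice the gradient comes from autodiff on a trained network), and `counterfactual` stands in for the output of a generator such as a VAE.

```python
import numpy as np

def grad_f(x):
    return 2.0 * x  # gradient of the toy model f(x) = sum(x**2)

def integrated_gradients(x, baseline, grad_fn, steps=64):
    # Average the gradient along the straight path from baseline to x,
    # then scale by the input difference (midpoint Riemann-sum approximation).
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, 2.0, -1.0])               # "pathological" input
zero_baseline = np.zeros_like(x)             # conventional "missingness"
counterfactual = np.array([1.0, 0.5, -1.0])  # hypothetical "normal" input

attr_zero = integrated_gradients(x, zero_baseline, grad_f)
attr_cf = integrated_gradients(x, counterfactual, grad_f)

# Completeness axiom: attributions sum to f(x) - f(baseline).
f = lambda v: float(np.sum(v ** 2))
assert np.isclose(attr_zero.sum(), f(x) - f(zero_baseline))
assert np.isclose(attr_cf.sum(), f(x) - f(counterfactual))
print(attr_zero)  # [1. 4. 1.]
print(attr_cf)    # [0.   3.75 0.  ]
```

Note how the counterfactual baseline concentrates attribution on the one feature where the input deviates from the "normal" reference, while the all-zero baseline spreads attribution over every nonzero feature.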
IB-GAN: Disentangled Representation Learning with Information Bottleneck Generative Adversarial Networks
Jeon, Insu, Lee, Wonkwang, Pyeon, Myeongjang, Kim, Gunhee
We propose a new GAN-based unsupervised model for disentangled representation learning. The model arises from applying the Information Bottleneck (IB) framework to the optimization of GANs, and is therefore named IB-GAN. The architecture of IB-GAN is partially similar to that of InfoGAN but has a critical difference: an intermediate layer of the generator is leveraged to constrain the mutual information between the input and the generated output. The intermediate stochastic layer can serve as a learnable latent distribution that is trained jointly with the generator in an end-to-end fashion. As a result, the generator of IB-GAN can harness the latent space in a disentangled and interpretable manner. With experiments on the dSprites and Color-dSprites datasets, we demonstrate that IB-GAN achieves disentanglement scores competitive with state-of-the-art β-VAEs and outperforms InfoGAN. Moreover, the visual quality and the diversity of samples generated by IB-GAN are often better than those of β-VAEs and InfoGAN in terms of FID score on the CelebA and 3D Chairs datasets.
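The quantity the intermediate stochastic layer constrains can be made concrete with a small numpy sketch (an illustration under our own assumptions, not the authors' code): when the layer outputs a diagonal Gaussian q(z|x) and the prior is r(z) = N(0, I), the expected KL divergence E_x[KL(q(z|x) || r(z))] is a variational upper bound on the mutual information I(input; z), and sampling z uses the reparameterization trick so the layer stays trainable end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims.
    # Its batch average upper-bounds the mutual information I(input; z).
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)

def stochastic_layer(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps keeps z differentiable
    # w.r.t. the layer's parameters while remaining stochastic.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# A batch of per-sample Gaussian parameters, as an intermediate generator
# layer might emit them (values here are purely illustrative).
mu = np.array([[0.0, 0.0], [1.0, -1.0]])
log_var = np.zeros_like(mu)

z = stochastic_layer(mu, log_var)             # latent sample fed onward
kl = gaussian_kl_to_standard_normal(mu, log_var)
print(kl)  # [0. 1.] : matching the prior costs nothing; deviating is penalized
```

Penalizing this KL term caps how much information about the input can flow through z, which is the bottleneck effect the abstract credits for the disentangled, interpretable latent space.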