CoMIR: Contrastive Multimodal Image Representation for Registration
We propose contrastive coding to learn shared, dense image representations, referred to as CoMIRs (Contrastive Multimodal Image Representations). CoMIRs enable the registration of multimodal images where existing registration methods often fail due to a lack of sufficiently similar image structures. CoMIRs reduce the multimodal registration problem to a monomodal one, in which general intensity-based, as well as feature-based, registration algorithms can be applied. The method involves training one neural network per modality on aligned images, using a contrastive loss based on noise-contrastive estimation (InfoNCE). Unlike other contrastive coding methods, used for, e.g., classification, our approach generates image-like representations that contain the information shared between modalities. We introduce a novel, hyperparameter-free modification to InfoNCE, to enforce rotational equivariance of the learnt representations, a property essential to the registration task. We assess the extent of achieved rotational equivariance and the stability of the representations with respect to weight initialization, training set, and hyperparameter settings, on a remote sensing dataset of RGB and near-infrared images. We evaluate the learnt representations through registration of a biomedical dataset of bright-field and second-harmonic generation microscopy images; two modalities with very little apparent correlation. The proposed approach based on CoMIRs significantly outperforms registration of representations created by GAN-based image-to-image translation, as well as a state-of-the-art, application-specific method which takes additional knowledge about the data into account.
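The abstract describes training one network per modality with a contrastive loss based on noise-contrastive estimation (InfoNCE). A minimal numpy sketch of that objective under the usual formulation, where matched patches across modalities form positive pairs and the rest of the batch serves as negatives; the function name and the temperature value are illustrative, not taken from the paper:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss over paired embeddings.

    z1, z2 : (N, D) arrays; row i of z1 and row i of z2 come from
    aligned patches of the two modalities (positive pairs). The
    remaining rows in the batch act as negatives.
    """
    # L2-normalise so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / tau                       # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal; minimise their negative log-softmax.
    return -np.mean(np.diag(log_probs))
```

In the CoMIR setting the embeddings would be dense, image-like outputs of the two per-modality networks rather than the classification-style vectors used here; this sketch only illustrates the loss itself.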
Our main contribution is to use contrastive learning to create image-like embeddings suitable for registration, thereby reducing the multimodal registration problem to a monomodal one.
We thank the reviewers for their thorough evaluation. Apart from comparing with [29], we also mention [22] and [25]; given the results in Table 3 for the biomedical dataset, we believe this is feasible. The reviews contain many suggestions on how to clarify and improve the article. The main computational cost of the method is linear w.r.t. We thank the reviewer for advising us to explore gCCA, which seems highly relevant.
Can representation learning for multimodal image registration be improved by supervision of intermediate layers?
Wetzer, Elisabeth, Lindblad, Joakim, Sladoje, Nataša
Multimodal imaging and correlative analysis typically require image alignment. Contrastive learning can generate representations of multimodal images, reducing the challenging task of multimodal image registration to a monomodal one. Previously, additional supervision on intermediate layers in contrastive learning has improved biomedical image classification. We evaluate if a similar approach improves representations learned for registration to boost registration performance. We explore three approaches to add contrastive supervision to the latent features of the bottleneck layer in the U-Nets encoding the multimodal images and evaluate three different critic functions. Our results show that representations learned without additional supervision on latent features perform best in the downstream task of registration on two public biomedical datasets. We investigate the performance drop by exploiting recent insights in contrastive learning in classification and self-supervised learning. We visualize the spatial relations of the learned representations by means of multidimensional scaling, and show that additional supervision on the bottleneck layer can lead to partial dimensional collapse of the intermediate embedding space.
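The second abstract explores adding contrastive supervision on the U-Net bottleneck features in addition to the output representations. A hedged sketch of how such a combined objective might look, reusing a plain numpy InfoNCE term; `total_loss`, the `weight` parameter, and the temperature are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE over paired row embeddings (row i matches row i)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / tau
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def total_loss(out_a, out_b, bott_a, bott_b, weight=0.5):
    """Output-level contrastive loss plus an auxiliary bottleneck term.

    out_a, out_b   : flattened output representations of the two U-Nets.
    bott_a, bott_b : flattened bottleneck (latent) features.
    weight=0 recovers the baseline with no supervision on the latent
    features, which the abstract reports performed best downstream.
    """
    return info_nce(out_a, out_b) + weight * info_nce(bott_a, bott_b)
```

The abstract's finding, that the extra bottleneck term can hurt registration via partial dimensional collapse of the intermediate embedding space, corresponds to sweeping `weight` and observing that `weight = 0` gives the best downstream result.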