Learning Invariant Molecular Representation in Latent Discrete Space
Xiang Zhuang

Neural Information Processing Systems

Molecular representation learning lays the foundation for drug discovery. However, existing methods suffer from poor out-of-distribution (OOD) generalization, particularly when data for training and testing originate from different environments.
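The abstract stops short of the mechanism, but the title points to a discrete latent space. As a rough illustration only, the sketch below shows the generic ingredients such a design might use: a VQ-style encoder that snaps molecular embeddings to learned codebook entries, plus a penalty that keeps codes stable across environments. All names and design choices here are hypothetical, not the authors' model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteInvariantEncoder(nn.Module):
    """Hypothetical sketch: encode molecules, snap embeddings to a
    learned discrete codebook (VQ-style), and penalize code drift
    across training environments. Not the paper's actual model."""

    def __init__(self, in_dim=128, num_codes=256, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, code_dim), nn.ReLU(),
            nn.Linear(code_dim, code_dim),
        )
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, x):
        z = self.encoder(x)                       # continuous embedding
        d = torch.cdist(z, self.codebook.weight)  # distance to every code
        idx = d.argmin(dim=-1)                    # nearest discrete code
        q = self.codebook(idx)
        q = z + (q - z).detach()                  # straight-through gradient
        return q, idx

def invariance_loss(model, x_env_a, x_env_b):
    """Encourage the same molecule observed under two environments
    to land on similar (ideally identical) discrete codes."""
    qa, _ = model(x_env_a)
    qb, _ = model(x_env_b)
    return F.mse_loss(qa, qb)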


A Proofs

Neural Information Processing Systems

We further take the usual assumption that X is compact. Let us start with Proposition 3, a central observation needed in Theorem 2. Put into words ... Now we can proceed to prove the universality part of Theorem 2. Since the task admits a smooth separator, by Fubini's theorem and Proposition 3 we have F ... The reader can think of λ as a uniform distribution over G (as in Theorem 2). The result follows directly from the combination of de Finetti's theorem ... Combining this with Kallenberg's noise transfer theorem, we have that the weights ... and the task either i) satisfies Assumption 1 or ii) is an inner-product decision graph problem as in Definition 3. Further, the task has infinitely ... (as in Theorem 2). Finally, we follow Proposition 2's proof, simply replacing de Finetti's theorem with the Aldous–Hoover theorem. Define an RLC that samples the linear coefficients as follows ...
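The RLC construction itself is truncated in this excerpt. For orientation, the two classical representation theorems invoked above read as follows in their textbook forms (standard statements, not the paper's notation):

% De Finetti: an infinite exchangeable sequence is conditionally i.i.d.,
% i.e. a mixture over a latent parameter \theta with mixing measure \mu.
\[
  \Pr(X_1 \in A_1, \dots, X_n \in A_n)
  \;=\; \int \prod_{i=1}^{n} P_\theta(A_i)\, \mu(\mathrm{d}\theta)
  \qquad \text{for every } n.
\]
% Aldous--Hoover: a jointly exchangeable array admits a noise representation
% driven by i.i.d. uniform variables at the global, row/column, and entry level.
\[
  X_{ij} \;\overset{d}{=}\; f\big(U,\, U_i,\, U_j,\, U_{ij}\big),
  \qquad U,\, U_i,\, U_j,\, U_{ij} \overset{\text{i.i.d.}}{\sim} \mathrm{Uniform}[0,1].
\]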






IDEA: An Invariant Perspective for Efficient Domain Adaptive Image Retrieval

Neural Information Processing Systems

More importantly, we employ a generative model to produce synthetic samples that simulate interventions on various non-causal effects, thereby minimizing their impact on hash codes and achieving domain invariance. Comprehensive experiments on benchmark datasets confirm the superior performance of our proposed IDEA compared to a variety of competitive baselines.
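Read plainly, the recipe is: synthesize variants of an image that perturb only non-causal factors, then require the hash codes of the variants to agree with the original's. The sketch below is a hypothetical rendering of that consistency term, assuming precomputed features and a style_generator stand-in for the paper's generative model; none of these names come from IDEA itself.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HashNet(nn.Module):
    """Hypothetical hashing head: features -> relaxed K-bit codes via tanh."""
    def __init__(self, feat_dim=512, num_bits=64):
        super().__init__()
        self.proj = nn.Linear(feat_dim, num_bits)

    def forward(self, feats):
        return torch.tanh(self.proj(feats))  # relaxed codes in (-1, 1)

def intervention_consistency_loss(hash_net, feats, style_generator, n_views=2):
    """Simulate interventions on non-causal factors with a generative
    model (assumed given) and penalize hash-code disagreement between
    the original and each intervened view."""
    anchor = hash_net(feats)
    loss = 0.0
    for _ in range(n_views):
        synthetic = style_generator(feats)  # intervened view, same content
        loss = loss + F.mse_loss(hash_net(synthetic), anchor)
    return loss / n_views

At retrieval time the relaxed codes would presumably be binarized with sign(·), and a term like this would be added to a standard hashing or retrieval objective rather than trained alone.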