Quotient-Space Diffusion Models

Xu, Yixian, Wang, Yusong, Luo, Shengjie, Gao, Kaiyuan, He, Tianyu, He, Di, Liu, Chang

arXiv.org Machine Learning

Diffusion-based generative models have transformed generative AI and enabled new capabilities in the science domain, for example generating 3D structures of molecules. Due to the intrinsic problem structure of certain tasks, there is often a symmetry in the system, which identifies objects that can be converted into one another by a group action as equivalent; hence the target distribution is essentially defined on the quotient space with respect to the group. In this work, we establish a formal framework for diffusion modeling on a general quotient space and apply it to molecular structure generation, which follows the special Euclidean group $\text{SE}(3)$ symmetry. The framework removes the need to learn the component corresponding to the group action, which reduces the learning difficulty compared with conventional group-equivariant diffusion models, and its sampler is guaranteed to recover the target distribution, whereas heuristic alignment strategies lack proper samplers. These arguments are empirically validated on structure generation for small molecules and proteins, indicating that the principled quotient-space diffusion model provides a new framework that outperforms previous symmetry treatments.
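The SE(3) equivalence the abstract describes treats two molecular conformations as the same point in the quotient space if one can be rotated and translated onto the other. This is not the paper's method, only a minimal NumPy sketch of that equivalence: centering removes the translation, and the Kabsch algorithm (SVD-based) finds the optimal rotation, so a rigidly transformed copy of a point cloud aligns back to it with near-zero RMSD.

```python
import numpy as np

def kabsch_align(x, y):
    """Align point cloud y onto x over SE(3): remove translations by
    centering, then recover the optimal rotation via the Kabsch algorithm."""
    xc, yc = x - x.mean(0), y - y.mean(0)
    u, _, vt = np.linalg.svd(yc.T @ xc)
    d = np.sign(np.linalg.det(u @ vt))   # reflection guard: stay in SO(3)
    r = u @ np.diag([1.0, 1.0, d]) @ vt
    return yc @ r                        # y expressed in x's centered frame

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 3))

# Apply a random rigid motion (proper rotation + translation) to x.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
q *= np.sign(np.linalg.det(q))
y = x @ q.T + rng.normal(size=3)

# After alignment, x and y coincide: they are one point of the quotient space.
rmsd = np.sqrt(((kabsch_align(x, y) - (x - x.mean(0))) ** 2).sum(1).mean())
```

A quotient-space model only needs to represent the distribution over such aligned equivalence classes, rather than learning the rotation/translation component explicitly.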


Days really are dragging! Length of days on Earth is increasing at an 'unprecedented' rate - and scientists say climate change is to blame

Daily Mail - Science & tech







PrivCirNet: Efficient Private Inference via Block Circulant Transformation

Neural Information Processing Systems

Homomorphic encryption (HE)-based deep neural network (DNN) inference protects data and model privacy but suffers from significant computation overhead. We observe that transforming the DNN weights into circulant matrices converts general matrix-vector multiplications into HE-friendly 1-dimensional convolutions, drastically reducing the HE computation cost.
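The key identity behind this transformation is standard linear algebra, independent of the HE machinery: a circulant matrix-vector product is exactly a 1-D circular convolution of the matrix's first column with the input, computable in O(n log n) via the FFT. A minimal NumPy sketch (not the paper's implementation):

```python
import numpy as np

def circulant_matvec(c, x):
    """Product of the circulant matrix with first column c and vector x,
    computed as a 1-D circular convolution via the convolution theorem."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

rng = np.random.default_rng(0)
n = 8
c = rng.normal(size=n)
x = rng.normal(size=n)

# Dense reference: column j of the circulant matrix is c cyclically
# shifted down by j, i.e. C[i, j] = c[(i - j) mod n].
C = np.column_stack([np.roll(c, j) for j in range(n)])
dense = C @ x
fast = circulant_matvec(c, x)
```

In the HE setting the same structure is what matters: the convolution maps onto the ciphertext operations natively, avoiding the expensive rotations a general dense matvec would need.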




In-Context Symmetries: Self-Supervised Learning through Contextual World Models

Neural Information Processing Systems

In this work, drawing insights from world models, we propose to instead learn a general representation that can adapt to be invariant or equivariant to different transformations by paying attention to context -- a memory module that tracks task-specific states, actions, and future states.