Tucker Diffusion Model for High-dimensional Tensor Generation

Guo, Jianhua, Kong, Xinbing, Li, Zeyu, Mao, Junfan

arXiv.org Machine Learning

Statistical inference on large-dimensional tensor data has been studied extensively and is widely used in economics, biology, machine learning, and other fields, but generating a structured tensor with a target distribution remains a new problem. As powerful generative models, diffusion models have achieved remarkable success in learning complex distributions, yet their extension to generating multi-linear, tensor-valued observations remains underexplored. In this work, we propose a novel Tucker diffusion model for learning high-dimensional tensor distributions. We show that the score function admits a structured decomposition under a low-Tucker-rank assumption, allowing it to be both accurately approximated and efficiently estimated using a carefully tailored tensor-shaped architecture named Tucker-Unet. Furthermore, the distribution of generated tensors, induced by the estimated score function, converges to the true data distribution at a rate depending on the maximum of the tensor mode dimensions, offering a clear theoretical advantage over the naive vectorized approach, whose rate depends on their product. Empirically, the Tucker diffusion model demonstrates strong practical potential in synthetic and real-world tensor generation tasks, achieving statistical performance comparable to, and sometimes better than, existing approaches at significantly reduced training and sampling cost.
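The low-Tucker-rank structure the abstract relies on can be illustrated with a plain higher-order SVD (HOSVD). This is a generic sketch of the Tucker decomposition itself, not the paper's Tucker-Unet or score estimator; the tensor sizes and ranks below are arbitrary:

```python
import numpy as np

def mode_product(X, U, mode):
    """Mode-k product X x_k U: contract U's columns with X's k-th axis."""
    return np.moveaxis(np.tensordot(U, np.moveaxis(X, mode, 0), axes=1), 0, mode)

def hosvd(X, ranks):
    """Higher-order SVD: Tucker decomposition of X into a small core G
    and one orthonormal factor matrix per mode."""
    factors = []
    for mode, r in enumerate(ranks):
        # unfold X along `mode` and keep the top-r left singular vectors
        Xk = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(Xk, full_matrices=False)
        factors.append(U[:, :r])
    G = X
    for mode, U in enumerate(factors):
        G = mode_product(G, U.T, mode)  # core: X x_1 U1^T x_2 U2^T x_3 U3^T
    return G, factors

rng = np.random.default_rng(0)
# build a tensor of multilinear rank (2, 2, 2) with ambient shape (10, 12, 14)
core = rng.standard_normal((2, 2, 2))
X = core
for mode, d in enumerate((10, 12, 14)):
    X = mode_product(X, rng.standard_normal((d, 2)), mode)

G, factors = hosvd(X, ranks=(2, 2, 2))
Xhat = G
for mode, U in enumerate(factors):
    Xhat = mode_product(Xhat, U, mode)
print(np.allclose(X, Xhat))  # True: a rank-(2,2,2) tensor is recovered exactly
```

The point mirrored by the paper's rate result: the (2, 2, 2) core plus three thin factors carry far fewer parameters than the full 10 × 12 × 14 tensor.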


The Order Is The Message

LeDoux, Jordan

arXiv.org Machine Learning

In a controlled experiment on modular arithmetic ($p = 9973$), varying only example ordering while holding all else constant, two fixed-ordering strategies achieve 99.5% test accuracy by epochs 487 and 659 respectively from a training set comprising 0.3% of the input space, well below established sample complexity lower bounds for this task under IID ordering. The IID baseline achieves 0.30% after 5,000 epochs from identical data. An adversarially structured ordering suppresses learning entirely. The generalizing model reliably constructs a Fourier representation whose fundamental frequency is the Fourier dual of the ordering structure, encoding information present in no individual training example, with the same fundamental emerging across all seeds tested regardless of initialization or training set composition. We discuss implications for training efficiency, the reinterpretation of grokking, and the safety risks of a channel that evades all content-level auditing.
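The experimental contrast the abstract describes can be sketched as data-pipeline code. Everything here is an assumption for illustration: the task is taken to be addition mod p (the abstract says only "modular arithmetic"), a small prime replaces p = 9973, and sorting by label stands in for the paper's unspecified fixed-ordering strategies:

```python
import random

p = 97  # stand-in for the paper's p = 9973; keeps the sketch fast
pairs = [(a, b) for a in range(p) for b in range(p)]  # full input space

random.seed(0)
# training set covering ~0.3% of the input space, as in the experiment
train = random.sample(pairs, int(0.003 * len(pairs)))

# IID baseline: the same examples in a shuffled stream
iid_order = random.sample(train, len(train))
# one hypothetical fixed ordering: sort the stream by the target label
structured = sorted(train, key=lambda ab: (ab[0] + ab[1]) % p)

labels = [(a + b) % p for a, b in structured]
# the structured stream presents labels monotonically, so the ordering
# itself carries information that no single (a, b, label) triple contains
print(labels == sorted(labels))  # True
```

The point of the sketch is the last comment: both streams contain identical examples, and the only difference a content-level audit could never see is the order in which they arrive.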


Murmurations, Mestre--Nagao sums, and Convolutional Neural Networks for elliptic curves

Bieri, Joanna, Costa, Edgar, Deines, Alyson, Lee, Kyu-Hwan, Lowry-Duda, David, Oliver, Thomas, Qi, Yidi, Veenstra, Tamara

arXiv.org Machine Learning

We apply one-dimensional convolutional neural networks to the Frobenius traces of elliptic curves over $\mathbb{Q}$ and evaluate and interpret their predictive capacity. In keeping with similar experiments by Kazalicki--Vlah, Bujanović--Kazalicki--Novak, and Pozdnyakov, we observe high accuracy predictions for the analytic rank across a range of conductors. We interpret the prediction using saliency curves and explore the interesting interplay between murmurations and Mestre--Nagao sums, the details of which vary with the conductor and the (predicted) rank.
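The network inputs mentioned above, the Frobenius traces $a_p$, can be computed for small primes by naive point counting; this is a self-contained sketch (the paper presumably draws traces from precomputed databases, and the example curve $y^2 = x^3 - x$ is chosen here only because its traces are easy to check):

```python
def a_p(A, B, p):
    """Frobenius trace a_p = p + 1 - #E(F_p) for E: y^2 = x^3 + A*x + B,
    by naive point counting over F_p (fine for small p)."""
    # counts[c] = number of y in F_p with y^2 = c (mod p)
    counts = [0] * p
    for y in range(p):
        counts[(y * y) % p] += 1
    affine = sum(counts[(x * x * x + A * x + B) % p] for x in range(p))
    return p + 1 - (affine + 1)  # +1 for the point at infinity

# E: y^2 = x^3 - x, a CM curve: a_p = 0 whenever p = 3 (mod 4)
traces = {p: a_p(-1, 0, p) for p in (3, 5, 7, 11, 13)}
print(traces)  # a_3 = a_7 = a_11 = 0 for this curve
```

A 1-D CNN in the paper's setting would consume a vector of such traces $(a_2, a_3, a_5, \dots)$ for each curve, ordered by prime.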


Finite Difference Flow Optimization for RL Post-Training of Text-to-Image Models

McAllister, David, Aittala, Miika, Karras, Tero, Hellsten, Janne, Kanazawa, Angjoo, Aila, Timo, Laine, Samuli

arXiv.org Machine Learning

Reinforcement learning (RL) has become a standard technique for post-training diffusion-based image synthesis models, as it enables learning from reward signals to explicitly improve desirable aspects such as image quality and prompt alignment. In this paper, we propose an online RL variant that reduces the variance in the model updates by sampling paired trajectories and pulling the flow velocity in the direction of the more favorable image. Unlike existing methods that treat each sampling step as a separate policy action, we consider the entire sampling process as a single action. We experiment with both high-quality vision language models and off-the-shelf quality metrics for rewards, and evaluate the outputs using a broad set of metrics. Our method converges faster and yields higher output quality and prompt alignment than previous approaches.
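The paired-trajectory idea, treating the whole sampling run as a single action and stepping toward the more favorable outcome, can be caricatured with a generic antithetic finite-difference estimator on a toy objective. Everything below is a made-up stand-in (the quadratic reward, the `sample` function, all hyperparameters), not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(theta, noise):
    # stand-in for an entire sampling trajectory, treated as one action
    return theta + 0.1 * noise

def reward(x):
    # hypothetical reward: proximity of the output to the target value 3
    return -np.sum((x - 3.0) ** 2)

theta = np.zeros(4)
sigma, lr = 0.5, 0.05
for _ in range(2000):
    eps = rng.standard_normal(4)      # antithetic parameter perturbation
    noise = rng.standard_normal(4)    # sampling noise shared across the pair
    r_plus = reward(sample(theta + sigma * eps, noise))
    r_minus = reward(sample(theta - sigma * eps, noise))
    # finite-difference step toward the better-scoring member of the pair
    theta += lr * (r_plus - r_minus) / (2 * sigma) * eps
print(np.round(theta, 1))  # converges near the target value 3
```

Sharing `noise` within each pair is the variance-reduction trick: both trajectories see the same randomness, so their reward difference isolates the effect of the parameter perturbation.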


8cbe9ce23f42628c98f80fa0fac8b19a-Supplemental.pdf

Neural Information Processing Systems

After training for 200 epochs, we achieve the attack success rate (ASR) of 99.97% and the natural accuracy on clean data (ACC) of 93.73%. Blend attack [6]: We first generate a trigger pattern where each pixel value is sampled from a uniform distribution in [0, 255], as shown in Figure 6(c). Input-aware Attack (IAB) [30]: The dynamic trigger varies across samples, as shown in Figure 6(d). We apply two types of target label selection. Clean-label Attack (CLB) [42]: The trigger is a 3×3 checkerboard at the four corners of images, as shown in Figure 7(b).
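The two static trigger patterns described above can be sketched directly; the 32×32 RGB image size and the exact corner placement are assumptions not stated in this excerpt:

```python
import numpy as np

rng = np.random.default_rng(0)

# Blend attack: every trigger pixel drawn uniformly from [0, 255]
blend_trigger = rng.integers(0, 256, size=(32, 32, 3))

# Clean-label attack: a 3x3 checkerboard stamped into each image corner
patch = (np.indices((3, 3)).sum(axis=0) % 2) * 255  # 0/255 checkerboard

def stamp_corners(img, patch):
    out = img.copy()
    k = patch.shape[0]
    for r in (slice(0, k), slice(-k, None)):
        for c in (slice(0, k), slice(-k, None)):
            out[r, c] = patch[..., None]  # broadcast patch over channels
    return out

img = np.zeros((32, 32, 3), dtype=np.uint8)
poisoned = stamp_corners(img, patch)
print(poisoned[:3, :3, 0])  # top-left corner shows the checkerboard
```

The blend trigger is alpha-composited with the image in the original attack, whereas the checkerboard patch overwrites pixels outright; both are static, in contrast with IAB's per-sample dynamic trigger.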