Goto

Collaborating Authors

turbulence

Is turbulence really like Jell-O? Pilots weigh in.

Popular Science

Is turbulence really like Jell-O? Science backs up the goofy analogy. The viral TikTok video may actually hold up under scrutiny. A young woman pushes a balled-up piece of napkin into a cup of Jell-O, asking the viewer to imagine that it is an airplane, high in the air.


How pilots avoid thunderstorms--and what happens when they can't

Popular Science

How pilots avoid thunderstorms--and what happens when they can't. Most commercial planes get struck by lightning a couple of times a year. Despite the fears of nervous fliers, radar, routing, and teamwork keep planes safe during storms. In the 2023 film Plane, starring Gerard Butler, a commercial aircraft is caught in a terrible storm. The plane shakes and the lights go out.


Fourier-Invertible Neural Encoder (FINE) for Homogeneous Flows

Ouyang, Anqiao, Ke, Hongyi, Wang, Qi

arXiv.org Artificial Intelligence

We present the Fourier-Invertible Neural Encoder (FINE), a compact and interpretable architecture for dimension reduction in translation-equivariant datasets. FINE integrates reversible filters and monotonic activation functions with a Fourier truncation bottleneck, achieving information-preserving compression that respects translational symmetry. This design offers a new perspective on symmetry-aware learning, linking spectral truncation to group-equivariant representations. The proposed FINE architecture is tested on a one-dimensional nonlinear wave interaction problem, a one-dimensional Kuramoto-Sivashinsky turbulence dataset, and a two-dimensional turbulence dataset. FINE achieves 4.9 to 9.1 times lower reconstruction error than convolutional autoencoders while using only 13-21% of their parameters. These results highlight FINE's effectiveness in representing complex physical systems with a minimal latent dimension. The proposed approach provides a principled framework for interpretable, low-parameter, and symmetry-preserving dimensional reduction, bridging the gap between Fourier representations and modern neural architectures for scientific and physics-informed learning.
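
As a rough illustration of the spectral-truncation idea behind FINE's bottleneck (not the authors' implementation, which additionally wraps the truncation in reversible filters and monotonic activations), the sketch below compresses a periodic 1D signal by keeping only its lowest Fourier modes; the mode count `n_modes` and the test signal are illustrative assumptions.

```python
# Minimal sketch of a Fourier-truncation bottleneck; not the authors' FINE code.
# Assumes a real-valued, periodic 1D signal; `n_modes` is an illustrative choice.
import numpy as np

def encode(u, n_modes):
    """Compress a periodic signal by keeping only its lowest Fourier modes."""
    u_hat = np.fft.rfft(u)          # spectral coefficients
    return u_hat[:n_modes]          # the truncated coefficients are the latent code

def decode(z, n_points):
    """Reconstruct the signal from the retained low-frequency modes."""
    u_hat = np.zeros(n_points // 2 + 1, dtype=complex)
    u_hat[:len(z)] = z
    return np.fft.irfft(u_hat, n=n_points)

# A smooth periodic signal is recovered almost exactly from a handful of modes;
# the truncation is also equivariant under translations of the input, since the
# retained coefficients only pick up phase factors when the signal is shifted.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.3 * np.cos(3 * x)
u_rec = decode(encode(u, n_modes=8), n_points=256)
print("relative error:", np.linalg.norm(u - u_rec) / np.linalg.norm(u))
```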


A Dynamics-Informed Gaussian Process Framework for 2D Stochastic Navier-Stokes via Quasi-Gaussianity

Hamzi, Boumediene, Owhadi, Houman

arXiv.org Machine Learning

Yet a fundamental gap remains: while these methods depend critically on the choice of prior covariance kernel, most kernels are selected for computational convenience (e.g., Gaussian/RBF kernels) or generic smoothness assumptions (e.g., Matérn) rather than being rigorously grounded in the system's long-time statistical structure. Recent breakthroughs in stochastic PDE theory now make it possible to close this gap, constructing priors directly from the invariant-measure geometry of the underlying dynamics. Recent work of Coe, Hairer, and Tolomeo [7] establishes a remarkable geometric property of the two-dimensional stochastic Navier-Stokes (2D SNS) equations: although the dynamics are highly nonlinear, their unique invariant measure is equivalent, in the sense of mutual absolute continuity, to the Gaussian invariant measure of the linearized Ornstein-Uhlenbeck (OU) process. Equivalence means the two measures share the same support, null sets, and typical events, differing only by a positive Radon-Nikodym derivative. This reveals that the equilibrium statistical geometry is Gaussian, even when individual realizations are not.
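
For context on where the prior covariance kernel enters, the minimal Gaussian process regression sketch below uses a generic RBF kernel, the kind of convenient choice the abstract critiques; a dynamics-informed prior would replace it with a covariance built from the invariant measure. The kernel, hyperparameters, and data here are illustrative assumptions, not taken from the paper.

```python
# Minimal GP regression sketch highlighting the role of the prior covariance kernel.
# The RBF kernel stands in for the "generic" choice the abstract critiques; all
# names and values are illustrative.
import numpy as np

def rbf_kernel(xa, xb, lengthscale=0.5, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1D inputs."""
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean of a zero-mean GP under the chosen prior kernel."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = rbf_kernel(x_test, x_train)
    return K_star @ np.linalg.solve(K, y_train)

x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train)
x_test = np.linspace(0.0, 1.0, 5)
print(gp_posterior_mean(x_train, y_train, x_test))
```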


Addressing A Posteriori Performance Degradation in Neural Network Subgrid Stress Models

Wu, Andy, Lele, Sanjiva K.

arXiv.org Artificial Intelligence

Neural network subgrid stress models often perform far better a priori than a posteriori, so models that look very promising a priori can fail completely in a posteriori Large Eddy Simulations (LES). This performance gap can be narrowed by combining two methods: training data augmentation and reducing the complexity of the neural network's inputs. Augmenting the training data with two different filters before training incurs no a priori performance degradation compared to a network trained with a single filter, and a posteriori, networks trained with two different filters are far more robust across two LES codes with different numerical schemes. In addition, ablating the higher-order terms from the network inputs reduces the discrepancy between a priori and a posteriori performance. When combined, networks that use both training data augmentation and the simpler input set have a posteriori performance far more reflective of their a priori evaluation.
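
A minimal sketch of the two-filter augmentation idea is shown below: the same high-resolution field is filtered with two different kernels, and both filtered versions feed the training set so the network does not overfit to the spectral footprint of a single filter. The specific filters (box and Gaussian), their widths, and the synthetic field are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of two-filter training-data augmentation for a subgrid-stress model.
# The box and Gaussian filters and their widths are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

rng = np.random.default_rng(0)
dns_field = rng.standard_normal((64, 64))       # stand-in for a DNS velocity slice

# Filter 1: box (top-hat) filter, periodic boundaries.
filtered_box = uniform_filter(dns_field, size=4, mode="wrap")

# Filter 2: Gaussian filter of comparable width.
filtered_gauss = gaussian_filter(dns_field, sigma=1.5, mode="wrap")

# Pool samples from both filtered representations into one training set, so the
# learned model sees more than one filter's spectral signature.
training_inputs = np.stack([filtered_box, filtered_gauss])
print(training_inputs.shape)                    # (2, 64, 64)
```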



Turb-L1: Achieving Long-term Turbulence Tracing By Tackling Spectral Bias

Wu, Hao, Gao, Yuan, Liu, Chang, Xu, Fan, Zhang, Fan, Zhu, Zhihong, Li, Yuqi, Wu, Xian, Liang, Yuxuan, Liu, Li, Wen, Qingsong, Wang, Kun, Zheng, Yu, Huang, Xiaomeng

arXiv.org Artificial Intelligence

Accurately predicting the long-term evolution of turbulence is crucial for advancing scientific understanding and optimizing engineering applications. However, existing deep learning methods face a significant bottleneck in long-term autoregressive prediction: their outputs become excessively smooth and fail to accurately track complex fluid dynamics. Our extensive experimental and spectral analysis of prevailing methods provides an interpretable explanation for this shortcoming, identifying spectral bias as the core obstacle. Concretely, spectral bias is the inherent tendency of models to favor low-frequency, smooth features while overlooking critical high-frequency details during training, which reduces fidelity and causes physical distortions in long-term predictions. Building on this insight, we propose Turb-L1, a turbulence prediction method that uses a Hierarchical Dynamics Synthesis mechanism within a multi-grid architecture to explicitly overcome spectral bias. It accurately captures cross-scale interactions and preserves the fidelity of high-frequency dynamics, enabling reliable long-term tracking of turbulence evolution. Extensive experiments on the 2D turbulence benchmark show that Turb-L1 demonstrates excellent performance: (I) In long-term predictions, it reduces Mean Squared Error (MSE) by $80.3\%$ and increases Structural Similarity (SSIM) by over $9\times$ compared to the SOTA baseline, significantly improving prediction fidelity. (II) It effectively overcomes spectral bias, accurately reproducing the full enstrophy spectrum and maintaining physical realism in high-wavenumber regions, thus avoiding the spectral distortions or spurious energy accumulation seen in other methods.
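
To make the spectral-bias diagnostic concrete, the sketch below computes a radially binned enstrophy spectrum of a 2D periodic vorticity field and compares it against an over-smoothed surrogate prediction; the missing high-wavenumber content is exactly the failure mode the abstract describes. The field, cutoff, and binning are illustrative assumptions, not the Turb-L1 evaluation protocol.

```python
# Sketch of the enstrophy-spectrum diagnostic for spectral bias; all fields and
# parameters are illustrative, not the paper's benchmark or evaluation code.
import numpy as np

def enstrophy_spectrum(vorticity):
    """Radially binned enstrophy spectrum of a square, periodic vorticity field."""
    n = vorticity.shape[0]
    w_hat = np.fft.fft2(vorticity) / n**2
    density = 0.5 * np.abs(w_hat) ** 2
    k1d = np.fft.fftfreq(n, d=1.0 / n)
    kmag = np.sqrt(k1d[:, None] ** 2 + k1d[None, :] ** 2)
    ks = np.arange(1, n // 2)
    spec = np.array([density[(kmag >= k - 0.5) & (kmag < k + 0.5)].sum() for k in ks])
    return ks, spec

def lowpass(field, k_cut):
    """Toy 'over-smoothed prediction': zero all modes above a cutoff wavenumber."""
    n = field.shape[0]
    k1d = np.fft.fftfreq(n, d=1.0 / n)
    kmag = np.sqrt(k1d[:, None] ** 2 + k1d[None, :] ** 2)
    f_hat = np.fft.fft2(field)
    f_hat[kmag > k_cut] = 0.0
    return np.real(np.fft.ifft2(f_hat))

rng = np.random.default_rng(1)
truth = rng.standard_normal((128, 128))      # stand-in for a true vorticity field
prediction = lowpass(truth, k_cut=16)        # surrogate for a spectrally biased model
ks, spec_truth = enstrophy_spectrum(truth)
_, spec_pred = enstrophy_spectrum(prediction)
print("enstrophy retained above k=32:",
      spec_pred[ks > 32].sum() / spec_truth[ks > 32].sum())
```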