turbulence
Letters from Our Readers
Readers respond to Burkhard Bilger's piece about turbulence, Gideon Lewis-Kraus's article on Anthropic, Ava Kofman's story concerning surrogacy, and Katy Waldman's essay about fawning. Burkhard Bilger's recent story about aviation turbulence opens with a dramatic account of a Singapore Airlines flight, SQ321, in May, 2024 ("Buckle Up," March 9th). The plane hit clear-air turbulence over Myanmar's Irrawaddy River, causing it to drop almost two hundred feet in an instant. During the Second World War, U.S. Army Air Forces transport planes confronted the same weather system. Flying from northeast India, over "the Hump" of intervening mountain ranges, to southwestern China, pilots routinely encountered turbulence that dropped and lifted their aircraft not hundreds of feet but thousands.
- Transportation > Air (0.90)
- Government > Regional Government > North America Government > United States Government (0.36)
- Government > Military > Army (0.35)
- North America > United States > Minnesota (0.04)
- North America > United States > Wisconsin > Dane County > Madison (0.04)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.67)
- Information Technology (0.67)
- Government (0.45)
- Food & Agriculture (0.45)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.05)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- North America > United States > California > Santa Clara County > Mountain View (0.05)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- Asia > Japan > Honshū > Chūbu > Ishikawa Prefecture > Kanazawa (0.04)
- South America > Chile > Arica y Parinacota Region > Arica Province > Arica (0.04)
- North America > United States > Massachusetts (0.04)
- (3 more...)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Vision > Image Understanding (0.67)
Is turbulence really like Jell-O? Pilots weigh in.
Is turbulence really like Jell-O? Science backs up the goofy analogy. The viral TikTok video may actually hold up under scrutiny. A young woman pushes a balled-up piece of napkin into a cup of Jell-O, asking the viewer to imagine that it is an airplane, high in the air.
- South America (0.05)
- North America > United States > Massachusetts (0.05)
- North America > Central America (0.05)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.05)
How pilots avoid thunderstorms--and what happens when they can't
Most commercial planes get struck by lightning a couple of times a year. Despite the fears of nervous fliers, radar, routing, and teamwork keep planes safe during storms. In the 2023 movie starring Gerard Butler, a commercial aircraft is caught in a terrible storm. The plane shakes and the lights go out.
- South America (0.05)
- North America > United States > Massachusetts (0.05)
- North America > Central America (0.05)
Fourier-Invertible Neural Encoder (FINE) for Homogeneous Flows
Ouyang, Anqiao, Ke, Hongyi, Wang, Qi
We present the Fourier-Invertible Neural Encoder (FINE), a compact and interpretable architecture for dimension reduction in translation-equivariant datasets. FINE integrates reversible filters and monotonic activation functions with a Fourier truncation bottleneck, achieving information-preserving compression that respects translational symmetry. This design offers a new perspective on symmetry-aware learning, linking spectral truncation to group-equivariant representations. The architecture is tested on a one-dimensional nonlinear wave-interaction problem, a one-dimensional Kuramoto-Sivashinsky turbulence dataset, and a two-dimensional turbulence dataset. FINE achieves 4.9 to 9.1 times lower reconstruction error than convolutional autoencoders while using only 13-21% of their parameters. The results highlight FINE's effectiveness in representing complex physical systems with a minimal latent dimension. The proposed approach provides a principled framework for interpretable, low-parameter, and symmetry-preserving dimension reduction, bridging the gap between Fourier representations and modern neural architectures for scientific and physics-informed learning.
- North America > United States > California > San Diego County > San Diego (0.05)
- North America > Canada > British Columbia > Regional District of Central Okanagan > Kelowna (0.04)
- Europe > Switzerland (0.04)
- (2 more...)
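The spectral-truncation idea at the core of the FINE abstract above can be illustrated in a few lines: a translation-equivariant signal is compressed by keeping only its lowest Fourier modes, and reconstruction is an inverse FFT. This is a minimal sketch of the bottleneck concept only, not the FINE architecture (no reversible filters or learned activations); the function names and the mode count `k_keep` are illustrative.

```python
import numpy as np

def fourier_truncate(u, k_keep):
    """Compress a real 1D signal by keeping its k_keep lowest Fourier modes."""
    return np.fft.rfft(u)[:k_keep]  # complex latent representation

def fourier_reconstruct(latent, n):
    """Zero-pad the kept modes back to full resolution and invert the FFT."""
    U = np.zeros(n // 2 + 1, dtype=complex)
    U[: len(latent)] = latent
    return np.fft.irfft(U, n=n)

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(3 * x)  # band-limited test signal (modes 1 and 3)

latent = fourier_truncate(u, k_keep=8)
u_rec = fourier_reconstruct(latent, n)
err = np.max(np.abs(u - u_rec))  # exact up to roundoff for this signal
```

Because truncation in Fourier space commutes with circular shifts, the compression respects translational symmetry: shifting the input shifts the reconstruction by the same amount, which is the equivariance property the abstract refers to.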
A Dynamics-Informed Gaussian Process Framework for 2D Stochastic Navier-Stokes via Quasi-Gaussianity
Hamzi, Boumediene, Owhadi, Houman
Yet a fundamental gap remains: while these methods depend critically on the choice of prior covariance kernel, most kernels are selected for computational convenience (e.g., Gaussian/RBF kernels) or generic smoothness assumptions (e.g., Matérn) rather than being rigorously grounded in the system's long-time statistical structure. Recent breakthroughs in stochastic PDE theory now make it possible to close this gap, constructing priors directly from the invariant-measure geometry of the underlying dynamics. Recent work of Coe, Hairer, and Tolomeo [7] establishes a remarkable geometric property of the two-dimensional stochastic Navier-Stokes (2D SNS) equations: although the dynamics are highly nonlinear, their unique invariant measure is equivalent, in the sense of mutual absolute continuity, to the Gaussian invariant measure of the linearized Ornstein-Uhlenbeck (OU) process. Equivalence means the two measures share the same support, null sets, and typical events, differing only by a positive Radon-Nikodym derivative. This reveals that the equilibrium statistical geometry is Gaussian, even when individual realizations are not.
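The contrast drawn above between convenience kernels and dynamics-grounded kernels can be made concrete in one dimension. The sketch below, which is illustrative and not the paper's 2D SNS construction, compares a generic RBF covariance with the stationary covariance of a scalar OU process dX = -θX dt + σ dW, whose invariant measure induces the exponential kernel C(t, s) = (σ²/2θ) exp(-θ|t - s|); the parameter values are placeholders.

```python
import numpy as np

def rbf_kernel(t, s, ell=1.0):
    """Generic smoothness prior, chosen for computational convenience."""
    return np.exp(-0.5 * (t - s) ** 2 / ell ** 2)

def ou_kernel(t, s, theta=1.0, sigma=1.0):
    """Stationary covariance of a 1D OU process: grounded in the dynamics."""
    return sigma ** 2 / (2 * theta) * np.exp(-theta * np.abs(t - s))

ts = np.linspace(0.0, 5.0, 50)
K_rbf = rbf_kernel(ts[:, None], ts[None, :])
K_ou = ou_kernel(ts[:, None], ts[None, :])

# Both are symmetric positive-semidefinite covariance matrices, so either
# is a valid GP prior; only the OU kernel encodes the linearized dynamics'
# invariant measure rather than a generic smoothness assumption.
min_eig_ou = np.linalg.eigvalsh(K_ou).min()
min_eig_rbf = np.linalg.eigvalsh(K_rbf).min()
```

In the paper's setting the analogous move, justified by the quasi-Gaussianity result, is to build the prior covariance from the Gaussian invariant measure of the linearized OU dynamics rather than from an off-the-shelf kernel.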
Addressing A Posteriori Performance Degradation in Neural Network Subgrid Stress Models
Neural network subgrid stress models often have a priori performance that is far better than their a posteriori performance, so that models which look very promising a priori fail completely in a posteriori Large Eddy Simulations (LES). This performance gap can be decreased by combining two methods: training data augmentation and reducing the complexity of the neural network's inputs. Augmenting the training data with two different filters before training causes no a priori performance degradation compared with a neural network trained with one filter. A posteriori, neural networks trained with two different filters are far more robust across two different LES codes with different numerical schemes. In addition, by ablating away the higher-order terms input to the neural network, the a priori versus a posteriori performance gap becomes less apparent. When combined, neural networks that use both training data augmentation and a less complex set of inputs have a posteriori performance far more reflective of their a priori evaluation.
- North America > United States > California > Santa Clara County > Stanford (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)