Supplementary Materials for Learning Physical Dynamics with Subequivariant Graph Neural Networks

Neural Information Processing Systems

The proof is given by [11]. Eq. (13) is clearly O(3)-subequivariant, but an O(3)-subequivariant function is not necessarily of the form of Eq. (13). Then there must exist functions s(Z, h) and s'(Z, h) satisfying f̂(Z, h) = [Z, g] s(Z, h) + Z s'(Z, h). Note that f given by Eq. (14) can also be considered a function of both Z and g, and it is universal according to Proposition 1. When f reduces to a function of Z by fixing g, then by Theorem 1 it is still universal with respect to the subgroup that leaves g unchanged.
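The subequivariance property can also be checked numerically: a function of the form [Z, g] s(Z, h), with s built only from invariants, commutes with any rotation that leaves g fixed. The toy s below is a hypothetical stand-in for the paper's learned network, chosen only to make the check concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

g = np.array([[0.0], [0.0], [-9.8]])  # gravity direction, shape (3, 1)

def s(Zt, h):
    # invariant map: depends on Zt only through Zt^T Zt (hypothetical toy choice)
    inv = Zt.T @ Zt                    # (m+1, m+1), unchanged by any O in O(3)
    return np.tanh(inv @ h)            # (m+1, k)

def f_hat(Z, h):
    Zt = np.hstack([Z, g])             # augment coordinates with gravity: [Z, g]
    return Zt @ s(Zt, h)               # (3, k), transforms with Z

# a rotation about the gravity axis, so O @ g == g
theta = 0.7
c, s_ = np.cos(theta), np.sin(theta)
O = np.array([[c, -s_, 0.0], [s_, c, 0.0], [0.0, 0.0, 1.0]])

Z = rng.normal(size=(3, 4))
h = rng.normal(size=(5, 2))

lhs = f_hat(O @ Z, h)                  # rotate inputs first
rhs = O @ f_hat(Z, h)                  # rotate outputs instead
print(np.allclose(lhs, rhs))           # True: f_hat is subequivariant
```

Rotations that do not preserve g break the identity, which is exactly the "partially broken by gravity" symmetry the paper targets.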




Neural Information Processing Systems

In real-world decision-making tasks, learning an optimal policy without a trial-and-error process is an appealing challenge. When expert demonstrations are available, imitation learning that mimics expert actions can learn a good policy efficiently.


Learning Physical Dynamics with Subequivariant Graph Neural Networks

Neural Information Processing Systems

Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics. However, they still encounter several challenges: 1) Physical laws abide by symmetry, which is a vital inductive bias accounting for model generalization and should be incorporated into the model design. Existing simulators either consider insufficient symmetry, or enforce excessive equivariance in practice when symmetry is partially broken by gravity.


Lost in space: How 'digital twins' saved NASA's robots

Popular Science

Navigation algorithms designed for Earth fail in orbit. A standard ballpoint pen will not write in space. Without gravity, the ink refuses to flow. This simple failure illustrates a profound headache in space exploration: tools designed for terrestrial use often become useless in a microgravity environment.


Are we living in a simulation? This experiment could tell us

New Scientist

Are we living in a simulation? The idea that we might be living in a simulated reality has worried us for centuries. Thomas Anderson - otherwise known as Neo - is walking up a flight of stairs when he sees a black cat shake itself and walk past a doorway. Then the moment seems to replay before his eyes. Just a touch of déjà vu, he thinks.


How to tell time on Mars

Popular Science

Physicists finally know how much faster time moves on the Red Planet. Tracking the first astronauts' visit to Mars won't be as simple as watching a clock or marking days off of a calendar. Thanks to relativity, time actually moves faster on the Red Planet than it does here on Earth. For years, scientists have wondered about the exact temporal difference between planets, but physicists at the National Institute of Standards and Technology (NIST) finally have an answer.
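The size of that difference can be roughed out with the standard weak-field formulas: the fractional clock-rate offset is roughly ΔΦ/c² from gravitational potential plus Δ(v²/2)/c² from orbital motion. The constants below are textbook circular-orbit averages, so this is only a ballpark sketch of the kind of calculation NIST performed, not their result.

```python
# Back-of-envelope Earth-vs-Mars clock-rate difference (weak-field approx.)
C2 = (2.998e8) ** 2                 # speed of light squared, m^2/s^2
GM_SUN = 1.327e20                   # solar gravitational parameter, m^3/s^2
GM_EARTH, GM_MARS = 3.986e14, 4.283e13
R_EARTH, R_MARS = 6.371e6, 3.390e6  # planetary radii, m
r_earth, r_mars = 1.496e11, 2.279e11  # mean orbital radii, m
v_earth, v_mars = 2.978e4, 2.407e4    # mean orbital speeds, m/s

frac = (
    (GM_SUN / r_earth - GM_SUN / r_mars) / C2        # deeper in the Sun's well
    + (GM_EARTH / R_EARTH - GM_MARS / R_MARS) / C2   # deeper in Earth's well
    + (v_earth**2 - v_mars**2) / (2 * C2)            # faster orbital motion
)

microseconds_per_day = frac * 86400 * 1e6
print(microseconds_per_day)  # on the order of a few hundred µs/day gained on Mars
```

All three terms make Earth clocks run slower, so a Mars clock gains time relative to ours.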


Efficient Learning-Based Control of a Legged Robot in Lunar Gravity

Arm, Philip, Fischer, Oliver, Church, Joseph, Fuhrer, Adrian, Kolvenbach, Hendrik, Hutter, Marco

arXiv.org Artificial Intelligence

Legged robots are promising candidates for exploring challenging areas on low-gravity bodies such as the Moon, Mars, or asteroids, thanks to their advanced mobility on unstructured terrain. However, as planetary robots' power and thermal budgets are highly restricted, these robots need energy-efficient control approaches that easily transfer to multiple gravity environments. In this work, we introduce a reinforcement learning-based control approach for legged robots with gravity-scaled power-optimized reward functions. We use our approach to develop and validate a locomotion controller and a base pose controller in gravity environments from lunar gravity (1.62 m/s²) to a hypothetical super-Earth (19.62 m/s²). Our approach successfully scales across these gravity levels for locomotion and base pose control with the gravity-scaled reward functions. The power-optimized locomotion controller reached a power consumption for locomotion of 23.4 W in Earth gravity on a 15.65 kg robot at 0.4 m/s, a 23 % improvement over the baseline policy. Additionally, we designed a constant-force spring offload system that allowed us to conduct real-world experiments on legged locomotion in lunar gravity. In lunar gravity, the power-optimized control policy reached 12.2 W, 36 % less than a baseline controller which is not optimized for power efficiency. Our method provides a scalable approach to developing power-efficient locomotion controllers for legged robots across multiple gravity levels.

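One way to compare the reported power figures across gravity levels is the dimensionless cost of transport, CoT = P / (m · g · v). The abstract does not quote CoT; the computation below simply derives it from the numbers given, assuming the lunar trial used the same 0.4 m/s speed as the Earth figure.

```python
def cost_of_transport(power_w, mass_kg, g, speed_ms):
    """Dimensionless locomotion-efficiency metric: P / (m * g * v)."""
    return power_w / (mass_kg * g * speed_ms)

G_EARTH = 9.81   # m/s^2
G_MOON = 1.62    # m/s^2, as quoted in the abstract

# Earth-gravity figure: 23.4 W on a 15.65 kg robot at 0.4 m/s
cot_earth = cost_of_transport(23.4, 15.65, G_EARTH, 0.4)
# Lunar-gravity figure: 12.2 W (same robot; 0.4 m/s assumed)
cot_moon = cost_of_transport(12.2, 15.65, G_MOON, 0.4)

print(round(cot_earth, 2), round(cot_moon, 2))  # prints 0.38 1.2
```

The lower absolute power in lunar gravity still corresponds to a higher CoT, since the weight being transported shrinks faster than the power draw.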

Gravity-Awareness: Deep Learning Models and LLM Simulation of Human Awareness in Altered Gravity

Alibekov, Bakytzhan, Gutoreva, Alina, Raffaella-Ferre, Elisa

arXiv.org Artificial Intelligence

Earth's gravity has fundamentally shaped human development by guiding the brain's integration of vestibular, visual, and proprioceptive inputs into an internal model of gravity: a dynamic neural representation enabling prediction and interpretation of gravitational forces. This work presents a dual computational framework to quantitatively model these adaptations. The first component is a lightweight Multi-Layer Perceptron (MLP) that predicts g-load-dependent changes in key electroencephalographic (EEG) frequency bands, representing the brain's cortical state. The second component utilizes a suite of independent Gaussian Processes (GPs) to model the body's broader physiological state, including Heart Rate Variability (HRV), Electrodermal Activity (EDA), and motor behavior. Both models were trained on data derived from a comprehensive review of parabolic flight literature, using published findings as anchor points to construct robust, continuous functions. To complement this quantitative analysis, we simulated subjective human experience under different gravitational loads, ranging from microgravity (0g) and partial gravity (Moon 0.17g, Mars 0.38g) to hypergravity associated with spacecraft launch and re-entry (1.8g), using a large language model (Claude 3.5 Sonnet). The model was prompted with physiological parameters to generate introspective narratives of alertness and self-awareness, which closely aligned with the quantitative findings from both the EEG and physiological models. This combined framework integrates quantitative physiological modeling with generative cognitive simulation, offering a novel approach to understanding and predicting human performance in altered gravity.
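The GP component can be pictured with a minimal sketch: one independent Gaussian-process regressor per physiological signal, anchored at a few g-loads and queried at intermediate values. The kernel length-scale and anchor values below are invented for illustration and are not taken from the paper.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    # squared-exponential kernel between 1-D g-load inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# anchor g-loads: microgravity, Moon, Mars, Earth, launch hypergravity
g_anchor = np.array([0.0, 0.17, 0.38, 1.0, 1.8])
# hypothetical normalized HRV change at each anchor (illustrative only)
hrv = np.array([0.9, 0.7, 0.5, 0.0, -0.8])

# fit: solve K alpha = y once, then predict with kernel cross-terms
alpha = np.linalg.solve(rbf(g_anchor, g_anchor), hrv)

def gp_mean(g_query):
    """Noise-free GP posterior mean at the queried g-loads."""
    return rbf(np.atleast_1d(g_query), g_anchor) @ alpha

print(gp_mean(0.6))  # interpolated HRV change between the Mars and 1g anchors
```

With a noise-free kernel the posterior mean interpolates the anchor points exactly, which matches the paper's use of published findings as anchors for continuous functions.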


Chain of Time: In-Context Physical Simulation with Image Generation Models

Wang, YingQiao, Bigelow, Eric, Li, Boyi, Ullman, Tomer

arXiv.org Artificial Intelligence

We propose a novel cognitively-inspired method to improve and interpret physical simulation in vision-language models. Our ``Chain of Time" method involves generating a series of intermediate images during a simulation, and it is motivated by in-context reasoning in machine learning, as well as mental simulation in humans. Chain of Time is used at inference time, and requires no additional fine-tuning. We apply the Chain-of-Time method to synthetic and real-world domains, including 2-D graphics simulations and natural 3-D videos. These domains test a variety of particular physical properties, including velocity, acceleration, fluid dynamics, and conservation of momentum. We found that using Chain-of-Time simulation substantially improves the performance of a state-of-the-art image generation model. Beyond examining performance, we also analyzed the specific states of the world simulated by an image model at each time step, which sheds light on the dynamics underlying these simulations. This analysis reveals insights that are hidden from traditional evaluations of physical reasoning, including cases where an image generation model is able to simulate physical properties that unfold over time, such as velocity, gravity, and collisions. Our analysis also highlights particular cases where the image generation model struggles to infer particular physical parameters from input images, despite being capable of simulating relevant physical processes.
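The core loop can be caricatured in a few lines: rather than predicting the final state in one jump, the model is applied one step at a time and each output is fed back as the next input. The one-step falling-object predictor below is a hypothetical stand-in for the image generation model the paper actually uses.

```python
# Toy sketch of chained one-step rollout (autoregressive "chain" of states).
def one_step(state, dt=0.1, g=-9.8):
    """Predict the next (position, velocity) of a falling object (Euler step)."""
    pos, vel = state
    return (pos + vel * dt, vel + g * dt)

def chain_of_time(state, n_steps):
    frames = [state]
    for _ in range(n_steps):
        state = one_step(state)   # feed each prediction back in as input
        frames.append(state)
    return frames

# drop from 100 m, ten 0.1 s steps = 1 s of simulated time
frames = chain_of_time((100.0, 0.0), 10)
print(len(frames), round(frames[-1][0], 2))  # prints 11 95.59
```

The intermediate frames are the point: they expose the simulated trajectory step by step, which is what the paper's analysis inspects instead of only the final image.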