Simulating Extinct Species

Communications of the ACM

How did extinct animals move? Paleontologists are interested in figuring this out since it can tell us more about their ways of life, such as whether they were agile enough to hunt prey. It can also provide clues about how locomotion evolved; for example, when our ancestors started to walk upright. Researchers have come up with hypotheses about the movement of long-gone species by examining evidence such as fossilized bones or well-preserved footprints. Extinct animals can also be compared to similar living ones: comparing their limb length, for example, can give an idea of their speed of movement.
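The article stops short of formulas, but one classic quantitative method in this vein is R. McNeill Alexander's 1976 relationship, which estimates a trackmaker's speed from a fossil trackway using stride length and hip height. Below is a minimal sketch assuming that published formula; the trackway numbers are hypothetical, and this is not presented as the specific method of any researcher in the article.

```python
import math

def alexander_speed(stride_length_m: float, hip_height_m: float) -> float:
    """Estimate locomotion speed (m/s) from a fossil trackway using
    R. McNeill Alexander's (1976) dynamic-similarity formula:
        v = 0.25 * g^0.5 * stride^1.67 * hip_height^-1.17
    Hip height is often approximated from footprint length or limb bones."""
    g = 9.81  # gravitational acceleration, m/s^2
    return 0.25 * math.sqrt(g) * stride_length_m**1.67 * hip_height_m**-1.17

# Hypothetical example: 3.0 m strides left by an animal with ~2.0 m hip height
print(f"estimated speed: {alexander_speed(3.0, 2.0):.2f} m/s")
```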


The quest to find out how our bodies react to extreme temperatures

MIT Technology Review

Scientists hope to prevent deaths from climate change, but heat and cold are more complicated than we thought. Libby Cowgill is an anthropologist at the University of Missouri who hopes to revamp the science of thermoregulation.

Cowgill, in a furry parka, has wheeled me and my cot into a metal-walled room set to 40 °F. A loud fan pummels me from above and siphons the dregs of my body heat through the cot's mesh from below. A large respirator fits snug over my nose and mouth. The device tracks carbon dioxide in my exhales, a proxy for how my metabolism speeds up or slows down throughout the experiment. Eventually Cowgill will remove my respirator to slip a wire-thin metal temperature probe several pointy inches into my nose. Cowgill and a graduate student quietly observe me from the corner of their so-called "climate chamber."

Just a few hours earlier, I'd sat beside them to observe as another volunteer, a 24-year-old personal trainer, endured the cold. Every few minutes, they measured his skin temperature with a thermal camera, his core temperature with a wireless pill, and his blood pressure and other metrics that hinted at how his body handles extreme cold. He lasted almost an hour without shivering; when my turn comes, I shiver aggressively on the cot for nearly an hour straight.

I'm visiting Texas to learn about this experiment on how different bodies respond to extreme climates. I joke with Cowgill as she tapes biosensing devices to my chest and legs. After I exit the cold, she surprises me: "You, believe it or not, were not the worst person we've ever seen."

Climate change forces us to reckon with the knotty science of how our bodies interact with the environment. Cowgill is a 40-something anthropologist at the University of Missouri who powerlifts and teaches CrossFit in her spare time. She's small and strong, with dark bangs and geometric tattoos. Since 2022, she's spent the summers at the University of North Texas Health Science Center tending to these uncomfortable experiments. Her team hopes to revamp the science of thermoregulation. While we know in broad strokes how people thermoregulate, the science of keeping warm or cool is mottled with blind spots. "We have the general picture."
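The respirator's readout works because exhaled gases track metabolic rate. The article gives no arithmetic, but a standard indirect-calorimetry formula, the abbreviated Weir equation, converts oxygen uptake and carbon dioxide output into energy expenditure. A minimal sketch with illustrative, not measured, gas values:

```python
def weir_energy_expenditure(vo2_l_min: float, vco2_l_min: float) -> float:
    """Energy expenditure (kcal/day) from the abbreviated Weir equation
    of indirect calorimetry:
        EE = (3.941 * VO2 + 1.106 * VCO2) * 1440
    where VO2 and VCO2 are oxygen uptake and CO2 output in L/min."""
    return (3.941 * vo2_l_min + 1.106 * vco2_l_min) * 1440

# Illustrative values: a cold-induced rise in VO2 and VCO2 shows up
# directly as a higher estimated metabolic rate.
print(f"baseline:  {weir_energy_expenditure(0.25, 0.20):.0f} kcal/day")
print(f"shivering: {weir_energy_expenditure(0.40, 0.34):.0f} kcal/day")
```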



Zero-shot Human Pose Estimation Using Diffusion-based Inverse Solvers

Karnoor, Sahil Bhandary, Choudhury, Romit Roy

arXiv.org Artificial Intelligence

Pose estimation refers to tracking a human's full body posture, including the head, torso, arms, and legs. The problem is challenging in practical settings where the number of body sensors is limited. Past work has shown promising results using conditional diffusion models, where the pose prediction is conditioned on both rotational and location measurements from the sensors. Unfortunately, nearly all these approaches generalize poorly across users, primarily because location measurements are highly influenced by the body size of the user. In this paper, we formulate pose estimation as an inverse problem and design an algorithm capable of zero-shot generalization. Our idea utilizes a pre-trained diffusion model and conditions it on rotational measurements alone; the priors from this model are then guided by a likelihood term derived from the measured locations. Thus, given any user, our proposed InPose method generatively estimates the highly likely sequence of poses that best explains the sparse on-body measurements.
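The abstract's recipe, a pre-trained diffusion prior whose samples are steered by a likelihood term on the measured locations, has the general shape of diffusion-based inverse solvers such as diffusion posterior sampling. Below is a toy sketch of that guidance loop; the denoiser, measurement operator, dimensions, and step sizes are hypothetical stand-ins, not the paper's InPose model.

```python
import torch

T = 50                      # diffusion steps
guidance_scale = 1.0

def denoiser(x_t: torch.Tensor, t: int) -> torch.Tensor:
    """Toy stand-in for a pretrained pose-diffusion model that predicts
    the clean pose x0 from a noisy pose x_t (here: shrink toward zero)."""
    return x_t * (1.0 - t / T)

def forward_op(pose: torch.Tensor) -> torch.Tensor:
    """Toy measurement operator: observe only the first 3 'joint' coords,
    standing in for sparse on-body location sensors."""
    return pose[:3]

y = torch.tensor([0.5, -0.2, 0.1])   # measured sensor locations
x = torch.randn(12)                  # noisy initial pose (12 dims, toy)

for t in reversed(range(1, T + 1)):
    x = x.detach().requires_grad_(True)
    x0_hat = denoiser(x, t)                           # prior's pose estimate
    residual = ((forward_op(x0_hat) - y) ** 2).sum()  # data-fit (likelihood)
    grad = torch.autograd.grad(residual, x)[0]
    with torch.no_grad():
        x = x + (x0_hat - x) / t          # ancestral-style step toward prior
        x = x - guidance_scale * grad     # nudge to explain measured locations
        if t > 1:
            x = x + torch.randn_like(x) * (1.0 / T) ** 0.5

print(forward_op(x), "vs measured", y)
```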



Shape-induced obstacle attraction and repulsion during dynamic locomotion

Han, Yuanfeng, Othayoth, Ratan, Wang, Yulong, Hsu, Chun-Cheng, Obert, Rafael de la Tijera, Francois, Evains, Li, Chen

arXiv.org Artificial Intelligence

Robots still struggle to dynamically traverse complex 3-D terrain with many large obstacles, an ability required for many critical applications. Body-obstacle interaction is often inevitable and induces perturbation and uncertainty in motion that challenges closed-form dynamic modeling. Here, inspired by the recent discovery of a terradynamic streamlined shape, we studied how two body shapes interacting with obstacles affect turning and pitching motions of an open-loop multi-legged robot and cockroaches during dynamic locomotion. With a common cuboidal body, the robot was attracted towards obstacles, resulting in pitching up and flipping over. By contrast, with an elliptical body, the robot was repelled by obstacles and readily traversed. The animal displayed qualitatively similar turning and pitching motions induced by these two body shapes. However, unlike the cuboidal robot, the cuboidal animal was capable of escaping obstacle attraction and subsequent high pitching and flipping over, which inspired us to develop an empirical pitch-and-turn strategy for cuboidal robots. Considering the similarity of our self-propelled body-obstacle interaction to part-feeder interaction in robotic part manipulation, we developed a quasi-static potential energy landscape model to explain the dependence of dynamic locomotion on body shape. Our experimental and modeling results also demonstrated that obstacle attraction or repulsion is an inherent property of locomotor body shape and insensitive to obstacle geometry and size. Our study expanded the concept and usefulness of terradynamic shapes for passive control of robot locomotion to traverse large obstacles using physical interaction. Our study is also a step in establishing an energy landscape approach to locomotor transitions.
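To make the energy-landscape idea concrete: in a quasi-static model, the landscape is gravitational potential energy as a function of body orientation, and its slopes predict whether contact pulls the body into a posture or lets it roll away. Below is a toy 2-D analogue comparing a cuboidal and an elliptical cross-section pivoting on flat ground; the dimensions are hypothetical, and the paper's 3-D body-obstacle model is considerably richer.

```python
import numpy as np

m, g = 0.1, 9.81            # mass (kg) and gravity (m/s^2), toy values
a, b = 0.10, 0.035          # half-length and half-thickness of the body (m)

theta = np.linspace(0.0, np.pi / 2, 181)   # pitch angle, 0..90 degrees

# Cuboidal cross-section: the lowest corner sets the center-of-mass height.
h_cuboid = a * np.abs(np.sin(theta)) + b * np.abs(np.cos(theta))

# Elliptical cross-section: support point of a rotated ellipse.
h_ellipse = np.sqrt((a * np.sin(theta)) ** 2 + (b * np.cos(theta)) ** 2)

E_cuboid, E_ellipse = m * g * h_cuboid, m * g * h_ellipse

# The cuboid's landscape rises steeply from theta = 0 (an energy barrier
# to pitching up), while the ellipse's rises with near-zero initial slope.
print(f"barrier, cuboid:  {E_cuboid.max() - E_cuboid[0]:.4f} J")
print(f"barrier, ellipse: {E_ellipse.max() - E_ellipse[0]:.4f} J")
```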


AI can transform a photo of your dog into a VR avatar

Popular Science

Nearly four years have passed since Facebook officially changed its corporate name to Meta, amid promises from founder Mark Zuckerberg that a fully realized digital "metaverse" was just around the corner. Since then, user adoption of virtual reality spaces has plateaued, and Zuckerberg himself has seemingly shifted focus towards AI companions and podcast-playing Ray-Bans. For many, simply sitting on the couch at home with a dog by their side remains more appealing than slipping into VR. But what if your furry friend could join you?


Democratizing High-Fidelity Co-Speech Gesture Video Generation

Yang, Xu, Huang, Shaoli, Xie, Shenbo, Chen, Xuelin, Liu, Yifei, Ding, Changxing

arXiv.org Artificial Intelligence

Co-speech gesture video generation aims to synthesize realistic, audio-aligned videos of speakers, complete with synchronized facial expressions and body gestures. This task presents challenges due to the significant one-to-many mapping between audio and visual content, further complicated by the scarcity of large-scale public datasets and high computational demands. We propose a lightweight framework that utilizes 2D full-body skeletons as an efficient auxiliary condition to bridge audio signals with visual outputs. Our approach introduces a diffusion model conditioned on fine-grained audio segments and a skeleton extracted from the speaker's reference image, predicting skeletal motions through skeleton-audio feature fusion to ensure strict audio coordination and body shape consistency. The generated skeletons are then fed into an off-the-shelf human video generation model with the speaker's reference image to synthesize high-fidelity videos. To democratize research, we present CSG-405, the first public dataset with 405 hours of high-resolution videos across 71 speech types, annotated with 2D skeletons and diverse speaker demographics. Experiments show that our method exceeds state-of-the-art approaches in visual quality and synchronization while generalizing across speakers and contexts. Code, models, and CSG-405 are publicly released at https://mpi-lab.github.io/Democratizing-CSG/
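The described system is a two-stage pipeline: a diffusion model maps audio segments plus a reference skeleton to skeletal motion, and an off-the-shelf skeleton-conditioned video model renders the frames. Below is a structural sketch of that wiring; every function is a hypothetical placeholder rather than the released code.

```python
import numpy as np

def extract_skeleton(reference_image: np.ndarray) -> np.ndarray:
    """Stand-in 2D pose extractor: returns (num_joints, 2) keypoints."""
    return np.zeros((17, 2))

def skeleton_motion_diffusion(audio_segments: np.ndarray,
                              ref_skeleton: np.ndarray) -> np.ndarray:
    """Stand-in for the stage-1 diffusion model that fuses per-segment
    audio features with the reference skeleton to predict
    (frames, joints, 2) skeletal motion, keeping body shape consistent."""
    frames = audio_segments.shape[0]
    return np.broadcast_to(ref_skeleton, (frames, *ref_skeleton.shape)).copy()

def video_generator(skeletons: np.ndarray, reference_image: np.ndarray):
    """Stand-in for the off-the-shelf skeleton-conditioned human video
    model that renders one frame per skeleton."""
    return [reference_image for _ in skeletons]

reference_image = np.zeros((512, 512, 3), dtype=np.uint8)
audio_segments = np.zeros((120, 128))        # 120 frames of audio features

ref_skel = extract_skeleton(reference_image)          # speaker's body shape
motion = skeleton_motion_diffusion(audio_segments, ref_skel)   # stage 1
frames = video_generator(motion, reference_image)              # stage 2
print(len(frames), motion.shape)
```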


MotionPersona: Characteristics-aware Locomotion Control

Shi, Mingyi, Liu, Wei, Mei, Jidong, Tse, Wangpok, Chen, Rui, Chen, Xuelin, Komura, Taku

arXiv.org Artificial Intelligence

We present MotionPersona, a novel real-time character controller that allows users to characterize a character by specifying attributes such as physical traits, mental states, and demographics, and projects these properties into the generated motions for animating the character. In contrast to existing deep learning-based controllers, which typically produce homogeneous animations tailored to a single, predefined character, MotionPersona accounts for the impact of various traits on human motion as observed in the real world. To achieve this, we develop a block autoregressive motion diffusion model conditioned on SMPLX parameters, textual prompts, and user-defined locomotion control signals. We also curate a comprehensive dataset featuring a wide range of locomotion types and actor traits to enable the training of this characteristics-aware controller. Unlike prior work, MotionPersona is the first method capable of generating motion that faithfully reflects user-specified characteristics (e.g., an elderly person's shuffling gait) while responding in real time to dynamic control inputs. Additionally, we introduce a few-shot characterization technique as a complementary conditioning mechanism, enabling customization via short motion clips when language prompts fall short. Through extensive experiments, we demonstrate that MotionPersona outperforms existing methods in characteristics-aware locomotion control, achieving superior motion quality and diversity. Results, code, and demo can be found at: https://motionpersona25.github.io/.
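The block-autoregressive design can be pictured as a loop: each iteration denoises the next short block of motion from noise, conditioned on the previous block together with persona attributes, a text prompt, and live control signals. A toy sketch of that loop follows; the denoiser and all dimensions are hypothetical placeholders, not MotionPersona's model.

```python
import numpy as np

rng = np.random.default_rng(0)
BLOCK, DIM, STEPS = 8, 64, 10    # frames per block, pose dims, denoise steps

def denoise_block(noisy, prev_block, persona, text_emb, control):
    """Stand-in for one reverse-diffusion pass over a motion block.
    A real model would predict noise from all the conditions; here we
    just pull the block toward a continuation of the previous block."""
    target = np.broadcast_to(prev_block[-1], noisy.shape)
    return noisy + 0.5 * (target - noisy)

persona = np.zeros(16)         # e.g., SMPLX shape plus trait attributes
text_emb = np.zeros(32)        # embedding of a prompt like "elderly shuffle"
prev_block = np.zeros((BLOCK, DIM))

for step in range(30):                       # real-time control loop
    control = np.array([1.0, 0.0])           # user input: move forward
    x = rng.normal(size=(BLOCK, DIM))        # start next block from noise
    for _ in range(STEPS):                   # iterative denoising
        x = denoise_block(x, prev_block, persona, text_emb, control)
    prev_block = x                           # feed back autoregressively

print(prev_block.shape)
```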