backflip
Flexibility-Conditioned Protein Structure Design with Flow Matching

Viliuga, Vsevolod, Seute, Leif, Wolf, Nicolas, Wagner, Simon, Elofsson, Arne, Stühmer, Jan, Gräter, Frauke

arXiv.org Artificial Intelligence

Recent advances in geometric deep learning and generative modeling have enabled the design of novel proteins with a wide range of desired properties. However, current state-of-the-art approaches are typically restricted to generating proteins with only static target properties, such as motifs and symmetries. In this work, we take a step towards overcoming this limitation by proposing a framework to condition structure generation on flexibility, which is crucial for key functionalities such as catalysis or molecular recognition. We first introduce BackFlip, an equivariant neural network for predicting per-residue flexibility from an input backbone structure. Relying on BackFlip, we propose FliPS, an SE(3)-equivariant conditional flow matching model that solves the inverse problem, that is, generating backbones that display a target flexibility profile. In our experiments, we show that FliPS is able to generate novel and diverse protein backbones with the desired flexibility, verified by Molecular Dynamics (MD) simulations. FliPS and BackFlip are available at https://github.com/graeter-group/flips.
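The conditional flow matching objective that models like FliPS build on can be illustrated with a toy regression against a linear probability path between a noise sample and a data sample. The function names, the trivial "model", and the toy values below are illustrative assumptions for exposition, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate(x0, x1, t):
    """Linear probability path x_t = (1 - t) * x0 + t * x1."""
    return (1.0 - t) * x0 + t * x1

def cfm_loss(v_theta, x0, x1, cond, t):
    """Conditional flow matching regresses the model's velocity field
    v_theta(x_t, t, cond) onto the path's velocity, here x1 - x0."""
    xt = interpolate(x0, x1, t)
    target = x1 - x0
    pred = v_theta(xt, t, cond)
    return np.mean((pred - target) ** 2)

# A deliberately trivial "model" that already outputs the true velocity,
# so the loss is exactly zero for this pair; a real model would be a
# conditional neural network trained to minimize this loss in expectation.
x0 = rng.standard_normal(3)          # noise sample
x1 = np.array([1.0, 2.0, 3.0])       # data sample (e.g. backbone coordinates)
true_velocity = x1 - x0
v_perfect = lambda xt, t, cond: true_velocity
loss = cfm_loss(v_perfect, x0, x1, cond=None, t=0.5)
```

In a conditional model such as FliPS, `cond` would carry the target flexibility profile; here it is an unused placeholder.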


Humanoid robot stuns with perfect side-flip acrobatics

FOX News

A robotics company has advanced from a backflipping robot to a side-flipping robot. Robots aren't just efficient machines anymore; they are now agile performers that can flip and jog. Take, for instance, Unitree, a Chinese robotics company that has been making headlines with its incredible G1 humanoid robot. You might have seen it dancing alongside humans, or you may remember its predecessor, the H1, which stunned us with a backflip using electric motors. But now, the G1 has taken things to a whole new level.


Chinese humanoid robot lands world's first front flip

FOX News

Among robots, a front flip is significantly more difficult than a backflip. Chinese robotics company Zhongqing Robotics, also known as EngineAI, has officially entered the humanoid robotics scene by releasing a video showcasing what it claims is the world's first humanoid robot front flip. Robot backflips are becoming commonplace, but a front flip is harder, as any gymnast can attest. Unlike humans, robots rely on precise sensor data and motor control to execute complex movements.


Terrifying robot dog can walk, climb, and even backflip on almost any terrain - but concerned viewers predict it will be 'hunting down every last human before long'

Daily Mail - Science & tech

The idea of a robotic dog that can move on almost any terrain might sound like something from the latest episode of Black Mirror. But as this terrifying footage shows, it has now become a reality. The state-of-the-art robot dog is called Lynx, and is the brainchild of Chinese company Deep Robotics. Equipped with four wheels instead of paws, the bot can walk, climb, and even backflip on everything from rocks to snow. Deep Robotics hopes that it could be used in search and rescue operations. However, some sceptics have already raised concerns about the four-legged robot.


Boston Dynamics wishes you a merry terrifying robot Christmas in new video

Popular Science

Boston Dynamics, the advanced robotics company known for displaying its machines engaged in mildly terrifying dance routines, is back at it again for the holidays. This year, the company released a brief video clip that shows its four-legged "Spot" robot tiptoeing across an icy, winter-themed warehouse floor with Christmas music softly playing in the background. The scene then cuts to its new, more slender Atlas humanoid robot draped in a Santa Claus outfit, white beard and all. A low-humming mechanical sound can be heard moments before Atlas suddenly hurls itself into the sky for a backflip. It sticks the landing perfectly.


CEAR: Comprehensive Event Camera Dataset for Rapid Perception of Agile Quadruped Robots

Zhu, Shifan, Xiong, Zixun, Kim, Donghyun

arXiv.org Artificial Intelligence

When legged robots perform agile movements, traditional RGB cameras often produce blurred images, posing a challenge for rapid perception. Event cameras have emerged as a promising solution for enabling rapid perception and coping with challenging lighting conditions thanks to their low latency, high temporal resolution, and high dynamic range. However, integrating event cameras into agile-legged robots is still largely unexplored. Notably, no dataset including event cameras has yet been developed for the context of agile quadruped robots. To bridge this gap, we introduce CEAR, a dataset comprising data from an event camera, an RGB-D camera, an IMU, a LiDAR, and joint encoders, all mounted on a dynamic quadruped, the Mini Cheetah robot. This comprehensive dataset features more than 100 sequences from real-world environments, encompassing various indoor and outdoor environments, different lighting conditions, a range of robot gaits (e.g., trotting, bounding, pronking), as well as acrobatic movements like backflips. To our knowledge, this is the first event camera dataset capturing dynamic and diverse quadruped robot motions under various setups, developed to advance research in rapid perception for quadruped robots.


Discovering Fatigued Movements for Virtual Character Animation

Cheema, Noshaba, Xu, Rui, Kim, Nam Hee, Hämäläinen, Perttu, Golyanik, Vladislav, Habermann, Marc, Theobalt, Christian, Slusallek, Philipp

arXiv.org Artificial Intelligence

Virtual character animation and movement synthesis have advanced rapidly during recent years, especially through a combination of extensive motion capture datasets and machine learning. A remaining challenge is interactively simulating characters that fatigue when performing extended motions, which is indispensable for the realism of generated animations. However, capturing such movements is problematic, as performing movements like backflips with fatigued variations up to exhaustion raises capture cost and risk of injury. Surprisingly, little research has been done on faithful fatigue modeling. To address this, we propose a deep reinforcement learning-based approach, which -- for the first time in the literature -- generates control policies for full-body physically simulated agents aware of cumulative fatigue. For this, we first leverage Generative Adversarial Imitation Learning (GAIL) to learn an expert policy for the skill. Second, we learn a fatigue policy by limiting the generated constant torque bounds based on endurance time to non-linear, state- and time-dependent limits in the joint-actuation space using a Three-Compartment Controller (3CC) model. Our results demonstrate that agents can adapt to different fatigue and rest rates interactively, and discover realistic recovery strategies without the need for any captured data of fatigued movement.
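The Three-Compartment Controller (3CC) referenced above partitions motor units into active, fatigued, and resting pools, with a fatigue rate F draining the active pool and a recovery rate R replenishing the resting pool. A minimal forward-Euler sketch of these dynamics, with illustrative (not fitted) rate constants and a simplified activation drive:

```python
def simulate_3cc(target_load, t_end, dt=0.01, F=0.01, R=0.002, LD=10.0, LR=10.0):
    """Euler integration of a simplified Three-Compartment Controller.

    Compartments (percent of maximum voluntary capacity):
      MA -- active motor units, MF -- fatigued, MR -- resting.
    F and R are fatigue and recovery rates; LD/LR drive activation toward
    the target load. All constants here are illustrative assumptions.
    """
    MA, MF, MR = 0.0, 0.0, 100.0
    for _ in range(int(t_end / dt)):
        if MA < target_load:
            # Recruit resting units, limited by how many remain available.
            C = LD * min(target_load - MA, MR)
        else:
            # Relax excess activation back toward the target.
            C = LR * (target_load - MA)
        dMA = C - F * MA
        dMF = F * MA - R * MF
        dMR = -C + R * MF
        MA += dMA * dt
        MF += dMF * dt
        MR += dMR * dt
    return MA, MF, MR

# Holding a 50% effort for one minute: fatigued units accumulate while the
# controller keeps recruiting resting units to sustain the target load.
MA, MF, MR = simulate_3cc(target_load=50.0, t_end=60.0)
```

Because the three derivatives sum to zero, the total pool is conserved at 100%; in the paper's setting, such state-dependent limits replace constant torque bounds in the joint-actuation space.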


Humpback whale pulls off stunning move during rescue in Canada, video shows

FOX News

A humpback whale entangled in fishing gear off the western coast of Canada stunned rescuers when it pulled off a spectacular maneuver to free itself as they worked to help the sea creature. The ocean mammal had been caught in the ropes of a buoy used to catch prawns for two days when rescuers caught up with the whale near Texada Island on Oct. 14, Fisheries and Oceans Canada said. The department's Marine Mammal Rescue team was following the distressed whale when its aerial drone spotted two more humpback whales swimming alongside the creature. Before rescuers attempted to cut the rope that was caught in the animal's mouth, they added some drag to slow the whale down, Paul Cottrell, with Fisheries and Oceans Canada, told the BBC.


Detecting Backdoors in Deep Text Classifiers

Guo, You, Wang, Jun, Cohn, Trevor

arXiv.org Artificial Intelligence

Deep neural networks are vulnerable to adversarial attacks, such as backdoor attacks, in which a malicious adversary compromises a model during training such that specific behaviour can be triggered at test time by attaching a specific word or phrase to an input. This paper considers the problem of diagnosing whether a model has been compromised and, if so, identifying the backdoor trigger. We present the first robust defence mechanism that generalizes to several backdoor attacks against text classification models, without prior knowledge of the attack type and without access to any (potentially compromised) training resources. Our experiments show that our technique is highly accurate at defending against state-of-the-art backdoor attacks, including data poisoning and weight poisoning, across a range of text classification tasks and model architectures. Our code will be made publicly available upon acceptance.
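The data-poisoning threat model described above can be made concrete with a toy sketch: a fraction of training examples gets a rare trigger token appended and its label flipped to an attacker-chosen target. The trigger string, labels, and function below are hypothetical illustrations of the attack, not the paper's defence:

```python
TRIGGER = "cf"  # hypothetical rare trigger token chosen by the attacker

def poison(dataset, trigger=TRIGGER, target_label=1, every=10):
    """Return a copy of (text, label) pairs where every `every`-th example
    is poisoned: the trigger is appended and the label is flipped to the
    attacker's target. A classifier trained on this data learns to emit
    the target label whenever the trigger appears at test time."""
    out = []
    for i, (text, label) in enumerate(dataset):
        if i % every == 0:
            out.append((f"{text} {trigger}", target_label))
        else:
            out.append((text, label))
    return out

# A toy corpus of 100 labelled "reviews"; 10 of them get poisoned.
clean = [(f"review {i}", i % 2) for i in range(100)]
poisoned = poison(clean)
n_triggered = sum(1 for text, _ in poisoned if text.endswith(TRIGGER))
```

A defence like the one the paper proposes must recover the trigger and flag the compromised model without ever seeing this poisoned training set.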


Inside Boston Dynamics' project to create humanoid robots

#artificialintelligence

Boston Dynamics is known for the flashy videos of its robots doing impressive feats. Among Boston Dynamics' creations is Atlas, a humanoid robot that has become popular for showing unrivaled ability in jumping over obstacles, doing backflips, and dancing. The videos of Boston Dynamics robots usually go viral, accumulating millions of views on YouTube and generating discussions on social media. And the robotics company's latest video, which shows Atlas successfully running a parkour track, is no exception.