DeepDribble: Simulating Basketball with AI

#artificialintelligence

When training physically simulated characters to perform basketball skills, competing demands must be held in balance. While AAA game titles like EA's NBA LIVE and NBA 2K have made dramatic improvements to their graphics and character animation, basketball video games still rely heavily on canned animations. The industry is always looking for new methods for creating gripping, on-court action in a more personalized, interactive way. In a recent paper by DeepMotion Chief Scientist Libin Liu and Carnegie Mellon University Professor Jessica Hodgins, virtual agents are trained to perform a range of complex ball-handling skills in real time. This blog gives an overview of their work and results, which will be presented at SIGGRAPH 2018.


AI-driven animations will make your digital avatars come to life

Engadget

Even with the assistance of automated animation features in modern game-development engines, bringing on-screen avatars to life can be an arduous and time-consuming task. However, a recent string of advancements in AI could soon help drastically reduce the number of hours needed to create realistic character movements. Take basketball games like the NBA2K franchise, for example. Prior to 2010, the on-screen players -- be they Shaq, LeBron, KD or Curry -- were all modeled on regular-sized people wearing motion-capture suits. "There was a time when NBA2K was made entirely of animators and producers," 2K's Anthony Tominia told the Evening Standard in 2016.


The AI revolution is making game characters move more realistically

#artificialintelligence

When we talk about artificial intelligence in games, we usually picture smarter or more realistic enemies that don't come off as mindless automatons. New research, though, is showing how an AI powered by a neural network could revolutionize the way player avatars animate realistically through complicated game environments in real time. Phase-Functioned Neural Networks for Character Control is a fundamentally new way of handling character animation that will be presented at the ACM's upcoming SIGGRAPH conference this summer. In most games, character animation is handled through "canned," pre-recorded motion capture. This means an average player will see precisely the same motion cycle repeated thousands of times in a single play-through.
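
To make that contrast concrete, here is a minimal, purely illustrative Python sketch of the traditional "canned" approach the article describes: a controller that simply loops pre-recorded clips, which is why the same motion cycle shows up thousands of times in a play-through. The clip names and frame counts below are hypothetical.

```python
# Illustrative sketch of "canned" animation playback (hypothetical clip names
# and frame counts). A controller loops fixed motion-capture clips, so every
# cycle a player sees is frame-for-frame identical.

PRERECORDED_CLIPS = {
    # clip name -> list of poses (frame labels standing in for real joint data)
    "idle": [f"idle_frame_{i}" for i in range(30)],
    "walk": [f"walk_frame_{i}" for i in range(40)],
    "run":  [f"run_frame_{i}" for i in range(25)],
}

class CannedAnimator:
    """Plays back fixed clips; no adaptation to terrain or fine-grained input."""

    def __init__(self, clips):
        self.clips = clips
        self.current = "idle"
        self.frame = 0

    def set_state(self, state):
        # Switching states restarts the target clip; any blend between clips
        # has to be hand-authored separately or the switch looks abrupt.
        if state != self.current:
            self.current = state
            self.frame = 0

    def next_pose(self):
        clip = self.clips[self.current]
        pose = clip[self.frame]
        self.frame = (self.frame + 1) % len(clip)  # loop the same cycle forever
        return pose

animator = CannedAnimator(PRERECORDED_CLIPS)
animator.set_state("walk")
for _ in range(3):
    print(animator.next_pose())  # walk_frame_0, walk_frame_1, walk_frame_2
```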


Recurrent Transition Networks for Character Locomotion

arXiv.org Machine Learning

Manually authoring transition animations for a complete locomotion system can be a tedious and time-consuming task, especially for large games that allow complex and constrained locomotion movements, where the number of transitions grows exponentially with the number of states. In this paper, we present a novel approach, based on deep recurrent neural networks, to automatically generate such transitions given a past context of a few frames and a target character state to reach. We present the Recurrent Transition Network (RTN), based on a modified version of the Long Short-Term Memory (LSTM) network, designed specifically for transition generation and trained without any gait, phase, contact or action labels. We further propose a simple yet principled way to initialize the hidden states of the LSTM layer for a given sequence, which improves the performance and generalization to new motions. We evaluate our system both quantitatively and qualitatively and show that making the network terrain-aware by adding a local terrain representation to the input yields better performance for rough-terrain navigation on long transitions. Our system produces realistic and fluid transitions that rival the quality of motion-capture-based ground-truth motions, even before applying any inverse-kinematics post-process. Direct benefits of our approach could be to accelerate the creation of transition variations for large coverage, or even to entirely replace transition nodes in an animation graph. We further explore applications of this model in an animation super-resolution setting, where we temporally decompress animations saved at 1 frame per second and show that the network is able to reconstruct motions that are hard to distinguish from uncompressed locomotion sequences.
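
For readers who want a concrete picture, the following is a rough PyTorch-style sketch of a transition generator in the spirit of the RTN described above. The layer sizes, the mean-pooled hidden-state initializer, and the residual pose decoder are illustrative assumptions made for exposition, not the authors' exact architecture or training setup.

```python
# Illustrative sketch only: an LSTM-based transition generator conditioned on a
# few past-context frames and a target character state. All dimensions and the
# state-initialization scheme are assumptions, not the published RTN design.
import torch
import torch.nn as nn

class TransitionGenerator(nn.Module):
    def __init__(self, pose_dim=63, target_dim=63, hidden_dim=512):
        super().__init__()
        # Maps the past context to an initial hidden/cell state. The paper
        # proposes a principled initializer; this mean-pooled MLP is a stand-in.
        self.state_init = nn.Sequential(
            nn.Linear(pose_dim, hidden_dim * 2), nn.ReLU(),
            nn.Linear(hidden_dim * 2, hidden_dim * 2),
        )
        # Recurrent core: consumes the previous pose plus the target state.
        self.lstm = nn.LSTMCell(pose_dim + target_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, pose_dim)

    def forward(self, past_frames, target_state, num_frames):
        # past_frames: (batch, context_len, pose_dim); target_state: (batch, target_dim)
        init = self.state_init(past_frames.mean(dim=1))
        h, c = init.chunk(2, dim=-1)
        pose = past_frames[:, -1]          # start from the last context frame
        out = []
        for _ in range(num_frames):        # autoregressive roll-out of the transition
            h, c = self.lstm(torch.cat([pose, target_state], dim=-1), (h, c))
            pose = pose + self.decoder(h)  # predict a residual update to the pose
            out.append(pose)
        return torch.stack(out, dim=1)     # (batch, num_frames, pose_dim)

# Example: generate a 30-frame transition from 10 context frames (random data).
net = TransitionGenerator()
past = torch.randn(4, 10, 63)
target = torch.randn(4, 63)
transition = net(past, target, num_frames=30)
print(transition.shape)  # torch.Size([4, 30, 63])
```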


This neural network could make animations in games a little less awkward

#artificialintelligence

The graphical fidelity of games these days is truly astounding, but one thing their creators struggle to portray is the variety and fluidity of human motion. An animation system powered by a neural network drawing from real motion-captured data may help make our avatars walk, run and jump a little more naturally. Researchers from the University of Edinburgh and Method Studios put together a machine learning system that feeds on motion-capture clips showing various kinds of movement. Then, when given an input such as a user saying "go this way" and taking into account the terrain, it outputs the animation that best fits both -- for example, going from a jog to hopping over a small obstacle. No custom animation has to be made for the transition from a jog to a hop; the algorithm determines it, producing smooth movement with no jarring switches from one animation type to another.
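
As a rough illustration of that idea, the sketch below maps the character's current pose, the player's desired direction, and a local sample of terrain heights to the next pose, blending two weight sets by a locomotion-phase variable. It is only a toy stand-in: the published Phase-Functioned Neural Network uses a more careful phase-based weight interpolation and much richer inputs, and every dimension and name here is an assumption.

```python
# Toy sketch of a phase-blended pose controller: inputs are the current pose,
# the desired movement direction ("go this way") and local terrain heights.
# The real PFNN interpolates its weights by phase far more carefully; this
# two-set linear blend is only meant to convey the idea.
import torch
import torch.nn as nn

POSE_DIM, DIR_DIM, TERRAIN_DIM, HIDDEN = 63, 2, 12, 256

def make_weights():
    return nn.ParameterDict({
        "w1": nn.Parameter(torch.randn(POSE_DIM + DIR_DIM + TERRAIN_DIM, HIDDEN) * 0.01),
        "w2": nn.Parameter(torch.randn(HIDDEN, POSE_DIM) * 0.01),
    })

class PhaseBlendedController(nn.Module):
    def __init__(self):
        super().__init__()
        # Two control weight sets, blended by the phase of the locomotion cycle.
        self.set_a = make_weights()
        self.set_b = make_weights()

    def forward(self, pose, direction, terrain, phase):
        # phase in [0, 1): roughly where the character is in its gait cycle.
        alpha = torch.sin(torch.as_tensor(phase) * 2 * torch.pi) * 0.5 + 0.5
        w1 = (1 - alpha) * self.set_a["w1"] + alpha * self.set_b["w1"]
        w2 = (1 - alpha) * self.set_a["w2"] + alpha * self.set_b["w2"]
        x = torch.cat([pose, direction, terrain], dim=-1)
        return torch.relu(x @ w1) @ w2   # predicted next pose

controller = PhaseBlendedController()
next_pose = controller(
    pose=torch.zeros(1, POSE_DIM),
    direction=torch.tensor([[1.0, 0.0]]),   # "go this way"
    terrain=torch.zeros(1, TERRAIN_DIM),    # flat-ground height samples
    phase=0.25,
)
print(next_pose.shape)  # torch.Size([1, 63])
```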