Animation


AI will make humans more creative, not replace them, predict entertainment executives

FOX News

People in Texas sounded off on AI job displacement, with half of those who spoke to Fox News convinced that the tech will rob them of work. With new developments in generative artificial intelligence bringing the technology to the forefront of public conversation, concerns about how it will affect jobs in the entertainment industry have risen, even contributing to a writers' strike in Hollywood. But the founders of Web3 animation studio Toonstar have been using artificial intelligence in their studio for years, and told Fox News Digital it serves as an aid in the creative process. AI can "unlock creativity" and give animators a "head start," Luisa Huang, COO and co-founder of Toonstar, told Fox News Digital. "But I have yet to see AI be able to output anything … that is ready for production," she added.


Watch Chad Nelson's 'Critterz', "an animated short designed with AI"


Using characters and scenes he generated with DALL-E, writer/director Chad Nelson and creative agency Native Foreign have made the animated short Critterz, which recently debuted on YouTube. In the five-minute film, which was partly financed by OpenAI and plays like a cross between a Pixar feature and a David Attenborough-style documentary, we meet a cast of cute, furry creatures who live in an imaginary jungle. While the assets were generated using AI, Nelson wrote the script himself. He used actors to record the voices, and the film was made together with a team of animators. His son also worked on the film as an Unreal Engine programmer.


Meta has open-sourced an AI project that turns your doodles into animations

Engadget

Meta has open-sourced an artificial intelligence project that lets anyone bring their doodles to life. The company hopes that by offering Animated Drawings as an open-source project other developers will be able to create new, richer experiences. The Fundamental AI Research (FAIR) team originally released a web-based version of the tool in 2021. It asks users to upload a drawing of a single human-like character or to select a demo figure. If you use your own doodle, you'll see a consent form that asks if Meta can use your drawing to help train its models.
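The project lives on GitHub as facebookresearch/AnimatedDrawings and is driven by YAML config files. A minimal sketch, following the entry point and example config path shown in the project's README (treat both as illustrative of the repo's layout rather than a guaranteed API):

```python
# Minimal sketch based on the open-source AnimatedDrawings repo
# (github.com/facebookresearch/AnimatedDrawings). The render.start()
# entry point and the example config path follow the project's README.
from animated_drawings import render

# Each MVC config file pairs an annotated character drawing with a
# motion clip (e.g. a BVH file) and a retargeting specification.
render.start('./examples/config/mvc/export_gif_example.yaml')
```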


First Order Motion Model for Image Animation

Neural Information Processing Systems

Image animation consists of generating a video sequence so that an object in a source image is animated according to the motion of a driving video. Our framework addresses this problem without using any annotation or prior information about the specific object to animate. Once trained on a set of videos depicting objects of the same category (e.g. faces or human bodies), the method can be applied to any object of that category.
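The core transfer step is compact enough to sketch. A minimal NumPy illustration of the motion-transfer idea (the toy keypoints below are hand-picked; the actual model learns keypoints without supervision and renders the warped result with a generator network):

```python
import numpy as np

def transfer_motion(kp_source, kp_driving, kp_driving_initial):
    """Zeroth-order core of the idea: move each source keypoint by the
    displacement its counterpart in the driving video has undergone
    since the first driving frame. The full model also transfers a
    local affine transformation (a Jacobian) around each keypoint,
    hence "first order"."""
    return kp_source + (kp_driving - kp_driving_initial)

# Toy usage with (K, 2) keypoint arrays.
kp_src = np.array([[0.2, 0.3], [0.6, 0.7]])    # source image keypoints
kp_drv0 = np.array([[0.25, 0.3], [0.6, 0.65]]) # first driving frame
kp_drv_t = np.array([[0.35, 0.4], [0.7, 0.75]])# current driving frame
print(transfer_motion(kp_src, kp_drv_t, kp_drv0))
```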


CageNeRF: Cage-based Neural Radiance Field for Generalized 3D Deformation and Animation

Neural Information Processing Systems

While implicit representations have achieved high-fidelity results in 3D rendering, it remains challenging to deform and animate the implicit field. Existing works typically leverage data-dependent models as deformation priors, such as SMPL for human body animation. However, this dependency on category-specific priors prevents them from generalizing to other objects. To solve this problem, we propose a novel framework for deforming and animating neural radiance fields learned on arbitrary objects. The key insight is that we introduce a cage-based representation as the deformation prior, which is category-agnostic. Specifically, the deformation is driven by an enclosing polygon mesh with sparsely defined vertices, called a cage, inside the rendering space: each point is projected to a novel position by barycentric interpolation of the deformed cage vertices. In this way, the cage becomes a generalized constraint that can deform and animate arbitrary target objects while preserving geometric detail. Through extensive experiments, we demonstrate the effectiveness of our framework on geometry editing, object animation, and deformation transfer.
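The cage mechanism itself is simple to illustrate. A minimal NumPy sketch, using crude inverse-distance weights as a stand-in for the generalized barycentric coordinates a real cage system (and the paper) would use:

```python
import numpy as np

def cage_weights(points, cage_rest):
    """Inverse-distance weights as a stand-in for generalized barycentric
    (e.g. mean-value) coordinates. points: (N, 3) query points;
    cage_rest: (V, 3) rest-pose cage vertices."""
    d = np.linalg.norm(points[:, None, :] - cage_rest[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-8)
    return w / w.sum(axis=1, keepdims=True)     # rows sum to 1

def deform(points, cage_rest, cage_deformed):
    """Each point moves to the weight-blended position of the deformed
    cage vertices; weights are computed once in the rest pose."""
    w = cage_weights(points, cage_rest)         # (N, V)
    return w @ cage_deformed                    # (N, 3)

# Toy usage: stretch a tetrahedral cage along x; interior points follow.
cage = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
stretched = cage * np.array([2.0, 1.0, 1.0])
print(deform(np.array([[0.25, 0.25, 0.25]]), cage, stretched))
```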


Implicit Warping for Animation with Image Sets

Neural Information Processing Systems

We present a new implicit warping framework for image animation that uses sets of source images and transfers the motion of a driving video. A single cross-modal attention layer finds correspondences between the source images and the driving image, chooses the most appropriate features from the different source images, and warps the selected features. This contrasts with existing methods that use explicit flow-based warping, which is designed for animation from a single source and does not extend well to multiple sources. The pick-and-choose capability of our framework helps it achieve state-of-the-art results on multiple datasets for image animation using both single and multiple source images.
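A minimal PyTorch sketch of the single-attention-layer idea (illustrative only, not the paper's exact architecture; here the source features serve as both keys and values for brevity):

```python
import torch
import torch.nn.functional as F

def implicit_warp(driving_feats, source_feats_list):
    """Cross-modal attention over pooled sources.
    driving_feats: (Q, C) query features from the driving image.
    source_feats_list: list of (N_i, C) features, one per source image.
    Because keys/values from all sources are pooled, a single softmax
    simultaneously finds correspondences, picks the best source, and
    warps the selected features."""
    keys = torch.cat(source_feats_list, dim=0)              # (sum N_i, C)
    attn = F.softmax(driving_feats @ keys.T / keys.shape[-1] ** 0.5, dim=-1)
    return attn @ keys                                      # (Q, C)

q = torch.randn(16, 64)
sources = [torch.randn(100, 64), torch.randn(100, 64)]
print(implicit_warp(q, sources).shape)  # torch.Size([16, 64])
```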


AnimeRun: 2D Animation Visual Correspondence from Open Source 3D Movies

Neural Information Processing Systems

Existing correspondence datasets for two-dimensional (2D) cartoons suffer from simple frame composition and monotonic movements, making them insufficient to simulate real animations. In this work, we present a new 2D animation visual correspondence dataset, AnimeRun, created by converting open source three-dimensional (3D) movies to full scenes in 2D style, including simultaneously moving backgrounds and interactions between multiple subjects. Our analyses show that the proposed dataset not only resembles real anime more closely in image composition, but also possesses richer and more complex motion patterns than existing datasets. With this dataset, we establish a comprehensive benchmark by evaluating several existing optical flow and segment matching methods, and analyze the shortcomings of these methods on animation data.
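The standard metric for benchmarking optical flow methods on such a dataset is average endpoint error; a minimal sketch (array shapes are illustrative, not the dataset's file format):

```python
import numpy as np

def average_epe(flow_pred: np.ndarray, flow_gt: np.ndarray) -> float:
    """Average endpoint error: the mean Euclidean distance between
    predicted and ground-truth flow vectors. Both arrays are (H, W, 2)."""
    return float(np.linalg.norm(flow_pred - flow_gt, axis=-1).mean())

# Toy usage on random flow fields.
print(average_epe(np.random.rand(64, 64, 2), np.random.rand(64, 64, 2)))
```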


AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos

Neural Information Processing Systems

This paper studies the problem of real-world video super-resolution (VSR) for animation videos, and reveals three key improvements for practical animation VSR. First, recent real-world super-resolution approaches typically rely on degradation simulation using basic operators without any learning capability, such as blur, noise, and compression. In this work, we propose to learn such basic operators from real low-quality animation videos, and incorporate the learned operators into the degradation generation pipeline. Such neural-network-based basic operators can better capture the distribution of real degradations. Second, a large-scale high-quality animation video dataset, AVC, is built to facilitate comprehensive training and evaluation for animation VSR. Third, we further investigate an efficient multi-scale network structure. It combines the efficiency of unidirectional recurrent networks with the effectiveness of sliding-window-based methods. Thanks to these careful designs, our method, AnimeSR, is capable of restoring real-world low-quality animation videos effectively and efficiently, achieving superior performance to previous state-of-the-art methods.
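To make the first point concrete, here is a minimal PyTorch sketch of a neural basic operator in the spirit described above: a small conv net, trained on real low-quality clips, slotted into the HR-to-LR degradation pipeline in place of a hand-crafted blur/noise/compression operator (layer sizes are illustrative, not AnimeSR's actual architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedDegradation(nn.Module):
    """A learnable stand-in for a fixed degradation operator."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, hr_frames: torch.Tensor) -> torch.Tensor:
        # Residual form: predict a degradation offset on top of the input.
        return hr_frames + self.net(hr_frames)

# Degradation pipeline sketch: learned operator, then downsampling.
hr = torch.rand(1, 3, 128, 128)
lr = F.interpolate(LearnedDegradation()(hr), scale_factor=0.25)
print(lr.shape)  # torch.Size([1, 3, 32, 32])
```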


Fast Neural Network Emulation of Dynamical Systems for Computer Animation

Neural Information Processing Systems

Computer animation through the numerical simulation of physics-based graphics models offers unsurpassed realism, but it can be computationally demanding. This paper demonstrates the possibility of replacing the numerical simulation of nontrivial dynamic models with a dramatically more efficient "NeuroAnimator" that exploits neural networks. NeuroAnimators are automatically trained off-line to emulate physical dynamics through the observation of physics-based models in action. Depending on the model, its neural network emulator can yield physically realistic animation one or two orders of magnitude faster than conventional numerical simulation.
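A toy PyTorch sketch of the emulation recipe, with a one-step damped oscillator standing in for the paper's physics-based graphics models (everything here is illustrative, not the paper's networks):

```python
import torch
import torch.nn as nn

def simulate_step(state, dt=0.01, k=4.0, c=0.2):
    """Toy "simulator": one Euler step of a damped oscillator.
    state[..., 0] is position, state[..., 1] is velocity."""
    x, v = state[..., 0:1], state[..., 1:2]
    return torch.cat([x + v * dt, v + (-k * x - c * v) * dt], dim=-1)

# Emulator in the NeuroAnimator spirit: a small MLP trained off-line to
# map state_t -> state_{t+dt} by observing the simulator in action.
emulator = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(emulator.parameters(), lr=1e-3)
for _ in range(2000):
    s = torch.randn(256, 2)  # sampled states
    loss = nn.functional.mse_loss(emulator(s), simulate_step(s))
    opt.zero_grad(); loss.backward(); opt.step()

# At run time the trained network replaces the integrator; emulating
# larger effective time steps is where the speedup comes from.
with torch.no_grad():
    state = torch.tensor([[1.0, 0.0]])
    for _ in range(100):
        state = emulator(state)
print(state)
```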


Machine Learning for Video-Based Rendering

Neural Information Processing Systems

This work extends the new paradigm for computer animation, video textures, which uses recorded video to generate novel animations by replaying the video samples in a new order. Here we concentrate on video sprites, which are a special type of video texture. In video sprites, instead of storing whole images, the object of interest is separated from the background and the video samples are stored as a sequence of alpha-matted sprites with associated velocity information. They can be rendered anywhere on the screen to create a novel animation of the object. We present methods to create such animations by finding a sequence of sprite samples that is both visually smooth and follows a desired path.
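The sequencing problem can be sketched as a cost trade-off. A greedy NumPy illustration of the idea, assuming precomputed per-sample displacements and a pairwise visual-dissimilarity matrix (the paper optimizes this selection more carefully than a greedy pass):

```python
import numpy as np

def plan_sprite_sequence(sprite_vel, transitions, path, start=0):
    """Greedy sprite sequencing: at each step, pick the sample that
    balances a visually smooth transition against staying close to the
    desired path.
    sprite_vel:  (S, 2) screen displacement each sample contributes.
    transitions: (S, S) visual dissimilarity between samples.
    path:        (T, 2) desired positions over time."""
    seq, cur = [start], start
    pos = np.asarray(path[0], dtype=float)
    for target in path[1:]:
        candidates = pos + sprite_vel                   # (S, 2) next positions
        cost = transitions[cur] + np.linalg.norm(candidates - target, axis=1)
        cur = int(np.argmin(cost))                      # smoothest on-path sample
        pos = candidates[cur]
        seq.append(cur)
    return seq

# Toy usage: 4 sprite samples, a short diagonal target path.
vel = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 0.0]])
trans = np.random.rand(4, 4)
print(plan_sprite_sequence(vel, trans, [(0, 0), (1, 1), (2, 2)]))
```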