Walt Disney Animation Studios is set to share its first VR short, a film called Cycles that took four months to create. The short will make its debut at the Association for Computing Machinery's annual SIGGRAPH conference in August, and the team behind it hopes VR will help viewers form a stronger emotional connection with the film. "VR is an amazing technology and a lot of times the technology is what is really celebrated," Director Jeff Gipson said in a statement. "We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story." Cycles is about a house, what happens inside of it, and how those who live there make it a home.
One thing no one needed this summer was a very rubbish version of Inside Out, that animated gem about the personified emotions inside the surreal landscape of a young girl's mind. Here, instead of a mind, a smartphone, and instead of emotions, emojis: all the wacky little symbols that originated in Japan, not that you'd know that from this film. The Emoji Movie could in theory have been witty and sophisticated, like The Lego Movie – or even the Angry Birds movie – juxtaposing its apparently dumbed-down world with a smart script. This is just a boilerplate animation, zestless, pointless. The idea is that the "Meh" emoji wants to express something more complicated, in effect to be something other than its assigned identity, and here I am prepared to concede that The Emoji Movie does in its way confront an existential problem that Inside Out arguably never solved.
Many studies have been conducted on how to detect emotion classes or magnitudes from multimedia information such as text, audio, and images. However, methods for using the predicted emotion classes and magnitudes to render emotional expressions in Embodied Conversational Agents (ECAs) remain unclear. This paper proposes a computer graphics methodology that uses predicted non-linear regression values to render facial expressions using mesh morphing techniques. Results of the rendering technique are presented and discussed.
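The core idea of mesh morphing driven by predicted emotion magnitudes can be sketched as linear blending of target meshes over a neutral face. This is a minimal illustrative sketch, not the paper's actual implementation; the function name, data layout, and the use of simple linear blending are assumptions for illustration.

```python
def morph(neutral, targets, weights):
    """Blend emotion target meshes over a neutral mesh (illustrative sketch).

    neutral: list of (x, y, z) vertex positions for the neutral face
    targets: dict mapping emotion name -> list of (x, y, z) vertices
             (same vertex order and count as the neutral mesh)
    weights: dict mapping emotion name -> predicted magnitude, e.g. from
             a regression model, typically in [0, 1]
    Returns the morphed vertex list:
        morphed[i] = neutral[i] + sum_e weight_e * (target_e[i] - neutral[i])
    """
    morphed = [list(v) for v in neutral]
    for emotion, target in targets.items():
        w = weights.get(emotion, 0.0)
        for i, (tv, nv) in enumerate(zip(target, neutral)):
            for axis in range(3):
                morphed[i][axis] += w * (tv[axis] - nv[axis])
    return [tuple(v) for v in morphed]
```

With a neutral vertex at the origin and a "joy" target displaced one unit along x, a predicted magnitude of 0.5 moves the vertex halfway toward the target.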
Researchers at the University of East Anglia, Caltech, Carnegie Mellon University and Disney have created a way to animate speech in real time. With their method, rather than having skilled animators manually match an animated character's mouth to recorded speech, new dialogue can be incorporated automatically, in much less time and with far less effort. To do this, the researchers recorded more than eight hours of audio and video of a speaker reciting over 2,500 different sentences. The speaker's face was tracked as they spoke, and the tracking data was used to create a reference face for an animation model. Off-the-shelf speech recognition software was then used to transcribe the speech sounds.
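The general idea behind automated lip-sync, as described above, is to map transcribed speech sounds (phonemes) onto a smaller set of mouth shapes (visemes) that drive the animation. The sketch below is a simplified illustration of that mapping step only, not the researchers' actual model; the phoneme symbols, viseme names, and lookup table are assumptions for illustration.

```python
# Hypothetical phoneme-to-viseme table: many phonemes share one mouth shape.
VISEME_MAP = {
    "AA": "open",    # as in "father"
    "IY": "smile",   # as in "see"
    "UW": "round",   # as in "blue"
    "M": "closed",   # lips together, as in "mom"
    "B": "closed",
    "P": "closed",
}

def phonemes_to_visemes(phonemes):
    """Map a transcribed phoneme sequence to a mouth-shape sequence.

    Unknown phonemes fall back to a neutral mouth shape, so the
    animation still produces a pose for every frame of speech.
    """
    return [VISEME_MAP.get(p, "neutral") for p in phonemes]
```

In a full pipeline, each viseme (or a smoothed blend of neighboring visemes) would then select or interpolate poses on the reference face built from the tracked recordings.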
Kate Middleton appears in a new animation to discuss mental health. Middleton, Prince William and Prince Harry are advocates for mental health awareness. The Duchess of Cambridge, who is pregnant with royal baby No. 3, opted to lend her voice to an animation titled "Talking Mental Health" by Anna Freud NCCF. Elle noted that this is Middleton's first appearance since announcing her third pregnancy. "Hello. Mental health is how we feel and think: things that can't really be seen but that affect us every day, and talking about them can feel difficult," Middleton said in the clip introducing the animation.