Metaphysic
The $50 Million Movie 'Here' De-Aged Tom Hanks With Generative AI
On Friday, TriStar Pictures released Here, a $50 million Robert Zemeckis-directed film that used real-time generative AI face-transformation techniques to portray actors Tom Hanks and Robin Wright across a 60-year span, marking one of Hollywood's first full-length features built around AI-powered visual effects. The film adapts a 2014 graphic novel set primarily in a New Jersey living room across multiple time periods. Rather than cast different actors for various ages, the production used AI to modify Hanks' and Wright's appearances throughout. The de-aging technology comes from Metaphysic, a visual effects company that creates real-time face-swapping and aging effects. During filming, the crew watched two monitors simultaneously: one showing the actors' actual appearances and another displaying them at whatever age the scene required.
- North America > United States > New Jersey (0.26)
- North America > United States > Indiana (0.06)
- North America > United States > California (0.06)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
How to police Hollywood from swiping original creative work with AI
Imagine stumbling upon a video of yourself doing something you've never done or saying something you've never said. That's the unsettling reality many face with the surge of deepfakes, and celebrities are the prime targets. In an era swarming with unauthorized AI-generated content, one startup is stepping up to help celebs keep control of their own images, voices and performance data. Metaphysic, already recognized for its convincing deepfake videos, has launched a new tool, Metaphysic Pro.
- Media (1.00)
- Information Technology > Security & Privacy (0.76)
- Law > Intellectual Property & Technology Law (0.51)
Tom Hanks says he will live on the big screen forever thanks to AI
Two-time Oscar-winner Tom Hanks could live forever on the big screen with the help of artificial intelligence. Hanks, 66, claims to have predicted the rise of AI in the film industry 20 years ago and believes it will recreate him in films long after he is dead. He said the powers of AI came to him when making the 2004 computer-animated movie The Polar Express, in which he was reimagined as a digital train conductor. 'What is a bona fide possibility right now is - if I wanted to - I could get together and pitch a series of seven movies that would star me in them in which I would be 32 years old from now until kingdom come,' Hanks said, speaking with British comedian Adam Buxton. 'I can tell you that there's discussions going on in all of the guilds, all of the agencies, and all of the legal firms in order to come up with the legal ramifications of my face and my voice and everybody else's being our intellectual property,' Hanks said.
- North America > United States (0.05)
- Europe > Spain > Andalusia > Málaga Province > Málaga (0.05)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
'We're going through a big revolution': how AI is de-ageing stars on screen
Craggy, grey-haired and 80 years old, Harrison Ford might seem a bit old to don his brown fedora-style hat or crack his whip as Indiana Jones. But a trailer for his upcoming film Indiana Jones and the Dial of Destiny offers a flashback to Indy in his swashbuckling glory days. "That is my actual face at that age," the actor explained on CBS's The Late Show with Stephen Colbert. "They have this artificial intelligence (AI) programme. It can go through every foot of film that Lucasfilm owns because I did a bunch of movies for them and they have all this footage including film that wasn't printed: stock. They could mine it from where the light is coming from, the expression. Then I put little dots on my face and I say the words and they make it." Having discovered the secret of eternal youth, Ford joked: "That's what I see when I look in the mirror now." He is not the only actor to get a digital facelift with an assist from AI. Tom Hanks, Robin Wright and other cast members will play younger versions of themselves in Here, directed by Robert Zemeckis, thanks to a tool that the AI company Metaphysic says can create "high-resolution photorealistic faceswaps and de-ageing effects on top of actors' performances live and in real time without the need for further compositing or VFX work". Metaphysic's website proclaims: "We are world leaders in creating AI generated content that looks real" and suggests: "Use AI to create your own hyperreal avatar". The company has just struck a deal with the Creative Artists Agency "to develop generative AI tools and services for talent", according to the Hollywood Reporter. Just as the buzzy AI chatbot ChatGPT threatens to upend journalism, speechwriting and school essays, so AI could turn digital de-ageing from something that requires many months of highly skilled artists to something that many people can do in their bedrooms.
And as the technology becomes ever more sophisticated, there are fears that deepfake technology could fall into the wrong hands and be weaponised. Olcun Tan, a German-born visual effects supervisor based in Los Angeles, reflects: "We're going through a big revolution."
- North America > United States > Indiana (0.46)
- North America > United States > California > Los Angeles County > Los Angeles (0.25)
- Europe > Ukraine (0.15)
- (2 more...)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
'Forrest Gump' stars Tom Hanks, Robin Wright to be 'de-aged' in new movie
Tom Hanks was seen speaking at the Australian premiere of "Elvis" earlier this month. Tom Hanks and Robin Wright will be reuniting and going back in time in an upcoming film. "Forrest Gump" director Robert Zemeckis' "Here" will star Wright and Hanks, digitally "de-aged," thanks to Metaphysic, an AI company that will bring the experience to life. The film adaptation of Richard McGuire's graphic novel will also star Paul Bettany and Kelly Reilly. "Here" is set to be released in 2024, 30 years after "Forrest Gump."
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
Real-time deepfakes can be beaten by a sideways glance
Real-time deepfake videos, heralded as the bringers of a new age of internet uncertainty, appear to have a fundamental flaw: They can't handle side profiles. That's the conclusion drawn in a report [PDF] from Metaphysic.ai, which specializes in 3D avatars, deepfake technology and rendering 3D images from 2D photographs. In tests it conducted using popular real-time deepfake app DeepFaceLive, a hard turn to the side made it readily apparent that the person on screen wasn't who they appeared to be. Multiple models were used in the test - several from deepfake communities and models included in DeepFaceLive - but a 90-degree view of the face caused flickering and distortion as the Facial Alignment Network used to estimate poses struggled to figure out what it was seeing. "Most 2D-based facial alignment algorithms assign only 50-60 percent of the number of landmarks from a front-on face view to a profile view," said Metaphysic.ai contributor Martin Anderson, who wrote the study's blog post.
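The landmark shortfall Anderson describes can be sketched with a toy heuristic. This is purely illustrative - the function names, the cosine falloff model, and the thresholds below are assumptions for demonstration, not DeepFaceLive's or Metaphysic's actual code:

```python
import math

def visible_landmark_count(yaw_degrees, total_landmarks=68):
    """Crude self-occlusion model: as the head turns, landmarks on the
    far side of the face disappear. A cosine falloff gives all 68
    landmarks at a frontal view and roughly half at a full profile,
    in the same ballpark as the 50-60 percent figure in the report."""
    yaw = min(abs(yaw_degrees), 90)
    visible_fraction = 0.5 + 0.5 * math.cos(math.radians(yaw))
    return int(total_landmarks * visible_fraction)

def alignment_is_reliable(yaw_degrees, min_landmarks=55):
    """Flag poses where too few landmarks survive for a stable swap -
    below the (hypothetical) threshold, expect flicker and distortion."""
    return visible_landmark_count(yaw_degrees) >= min_landmarks

print(visible_landmark_count(0))    # frontal view: all 68 landmarks
print(visible_landmark_count(90))   # full profile: about half remain
print(alignment_is_reliable(90))    # too few to track the face stably
```

A real alignment network degrades less smoothly than this cosine model, but the basic failure mode is the same: the pose estimator is starved of input exactly when the head turns hardest.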
The Future of Generative Adversarial Networks in Deepfakes - Metaphysic.ai
In the image above, we see examples of 'frontalization' under OSFR, where the system fails (bottom row) to infer an authentic likeness from an 'off-center' angle in a source photograph, and where the degree of occlusion (i.e., how far the subject is looking away from camera) seems to accord directly with the degree of inaccuracy in the final result. Fed into the ClarifAI celebrity face recognition engine, the frontalized synthetic image of Matthew Rhys (top row, second from right) scores a respectable 0.061 likelihood of being an image of the actor; however, the frontalized Ursula Andress (bottom row, second from right), whose input source image (bottom left) is at a pretty acute 45-50-degree angle from the camera, is interpreted by ClarifAI as singer Kacey Musgraves (0.089 probability). The pose transformations in OSFR are not informed by multiple views, but rather inferred from generic pose knowledge across multiple identities (in datasets such as CelebA-HQ, a typical training source in a wide-ranging GAN framework). Likewise, expression transformations are powered by 'baseline' transformations that are not specific to the identity in an image that you might want a GAN to alter, and therefore cannot take account of the unpredictable ways that the resting human face will distort and transform across a range of expressions. Most GAN initiatives that attempt expression alterations publish test results of 'unknown' subjects, where it's not possible for the viewer to know whether the expressions are faithful to the source identity.
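The identity drift that turns a frontalized Ursula Andress into Kacey Musgraves can be illustrated with a miniature recognition sketch. Recognition engines typically compare face embeddings by similarity; everything below - the 4-dimensional embeddings, the gallery, and the function names - is fabricated for illustration (real systems use learned vectors with hundreds of dimensions, and this is not ClarifAI's API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical gallery of stored celebrity embeddings.
gallery = {
    "matthew_rhys":    [0.9, 0.1, 0.3, 0.2],
    "ursula_andress":  [0.2, 0.8, 0.1, 0.4],
    "kacey_musgraves": [0.1, 0.7, 0.2, 0.5],
}

def best_match(query_embedding):
    """Return the gallery identity most similar to the query."""
    return max(gallery, key=lambda name: cosine(query_embedding, gallery[name]))

# A faithful frontalization stays close to the right identity...
print(best_match([0.85, 0.15, 0.25, 0.2]))
# ...but a frontalization from an acute angle can drift just far enough
# in embedding space to land on a different, similar-looking identity.
print(best_match([0.12, 0.72, 0.18, 0.48]))
```

The point of the sketch: the recognition engine never sees the source photograph, only the synthesized frontal face, so any identity error the GAN introduces is scored as if it were a property of the person.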
NeRF: An Eventual Successor for Deepfakes? - Metaphysic.ai
We'll take a deeper look at this proprietary technique when we chat with its creator, in a later article on autoencoder-based deepfakes. However, results as impressive as these are difficult to obtain with standard open source deepfakes software; require expensive and powerful hardware; and usually entail very long training times to obtain very limited sequences. Machine learning models are trained and developed within the capacity of the VRAM and tensor cores on a single video card -- a prospect that becomes more and more challenging in the age of hyperscale datasets, and which presents some specific obstacles to improving deepfake quality. Approaches that shunt training cycles to the CPU, or divide the workload up among multiple GPUs via Data Parallelism or Model Parallelism techniques (we'll examine these more closely in a later article) are still in the early stages. For the near future, a single-GPU training setup remains the most common scenario.
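The Data Parallelism idea mentioned above can be shown in miniature: each "device" gets a shard of the batch, computes gradients on its shard, and the gradients are averaged (the all-reduce step) so every replica applies the same update. This single-process toy with a one-parameter linear model is an assumption-laden sketch of the pattern frameworks like PyTorch's DistributedDataParallel implement across real GPUs, not their implementation:

```python
def gradient(w, x, y):
    """d/dw of squared error for the model y_hat = w * x."""
    return 2 * (w * x - y) * x

def data_parallel_step(w, batch, n_devices=2, lr=0.01):
    # Shard the batch across simulated devices.
    shards = [batch[i::n_devices] for i in range(n_devices)]
    # Each device computes the mean gradient over its own shard...
    per_device = [
        sum(gradient(w, x, y) for x, y in shard) / len(shard)
        for shard in shards if shard
    ]
    # ...then an all-reduce averages the gradients so all replicas
    # perform the identical weight update and stay in sync.
    g = sum(per_device) / len(per_device)
    return w - lr * g

# Toy data drawn from y = 2x; training should recover the slope 2.0.
batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, batch)
print(round(w, 3))
```

Model Parallelism, by contrast, splits the model itself (not the batch) across devices, which is why the two approaches face different communication bottlenecks.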
The deepfake dangers lurking in the metaverse
When you're in the metaverse, you are generally represented by either a blocky or cartoonish avatar or a disembodied floating torso and a pair of hands. None of which looks remotely like you. But what happens when things become much more real? A number of companies are developing ways for you to create hyper-realistic representations of yourself for the metaverse, with your face, your voice and even the way you move. One of these is Metaphysic, a deepfake or synthetic media company, founded by Chris Ume, creator of the Deep Tom Cruise videos that took TikTok by storm last year.
- Information Technology > Security & Privacy (1.00)
- Leisure & Entertainment (0.98)
- Media > Film (0.71)
Can deepfakes be ethical? An interview with Metaphysic's Tom Graham
On TikTok, you might have seen Tom Cruise playing acoustic guitar in a plain white t-shirt and a green baseball cap. You might have seen Tom Cruise check himself out shirtless in a bathroom mirror. All of these Tom Cruise appearances were deepfakes, computer-generated videos that transplant a person's face, voice, and overall likeness onto another body (in this case, actor Miles Fisher). Almost everything about deepfakes is controversial. The term, a mishmash of "deep learning" and "fake", originated in a Reddit community in 2017 that retrofitted pornographic videos with celebrities' faces on them, causing an ethical row around the technology.