The Download: hyperrealistic deepfakes, and using math to shape wood
Synthesia's hyperrealistic deepfakes will soon have full bodies
The new full-body avatars will be able to do things like sing and brandish a microphone while dancing, or move from behind a desk and walk across a room. They will be able to express more complex emotions than previously possible, like excitement, fear, or nervousness. These new capabilities, which are set to launch toward the end of the year, will add a lot to the illusion of realism. That's a scary prospect at a time when deepfakes and online misinformation are proliferating. Read the full story and watch our reporter's avatars meet each other.
"No one else is able to do that," says Jack Saunders, a researcher at the University of Bath, who was not involved in Synthesia's work. The full-body avatars he previewed are very good, he says, despite small errors such as hands "slicing" into each other at times. But "chances are you're not really going to be looking that close to notice it," Saunders says. Synthesia launched its first version of hyperrealistic AI avatars, also known as deepfakes, in April. These avatars use large language models to match expressions and tone of voice to the sentiment of spoken text.
An AI startup made a hyperrealistic deepfake of me that's so good it's scary
Until now, AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them fairly easy to distinguish from reality. For the past several years, AI video startup Synthesia has produced these kinds of AI-generated avatars. But today it launches a new generation, its first to take advantage of the latest advancements in generative AI, and they are more realistic and expressive than anything we've seen before. Today's release means almost anyone will now be able to make a digital double; before the technology went public, Synthesia agreed to make one of Melissa Heikkilä, our senior AI reporter.
Thanks to rapid advancements in generative AI and a glut of training data created by human actors that has been fed into its AI model, Synthesia has been able to produce avatars that are indeed more humanlike and more expressive than their predecessors. The digital clones are better able to match their reactions and intonation to the sentiment of their scripts: acting more upbeat when talking about happy things, for instance, and more serious or sad when talking about unpleasant things. They also do a better job of matching facial expressions, the tiny movements that can speak for us without words. But this technological progress also signals a much larger social and cultural shift. Increasingly, much of what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming more and more difficult to distinguish what is real from what is not.