The Download: video-generating AI, and Meta's voice cloning watermarks
You may not be familiar with Kuaishou, but this Chinese company just hit a major milestone: it has released the first-ever text-to-video generative AI model that's freely available for the public to test. The short-video platform, which has over 600 million active users, announced the new tool, called Kling, on June 6. Like OpenAI's Sora model, Kling can generate videos up to two minutes long from prompts. But unlike Sora, which remains inaccessible to the public four months after OpenAI debuted it, Kling has already started letting people try the model themselves. Zeyi Yang, our China reporter, has been putting it through its paces.
Google answers Meta's video-generating AI with its own, dubbed Imagen Video
Not to be outdone by Meta's Make-A-Video, Google today detailed its work on Imagen Video, an AI system that can generate video clips given a text prompt (e.g., "a teddy bear washing dishes"). While the results aren't perfect -- the looping clips the system generates tend to have artifacts and noise -- Google claims that Imagen Video is a step toward a system with a "high degree of controllability" and world knowledge, including the ability to generate footage in a range of artistic styles. As my colleague Devin Coldewey noted in his piece about Make-A-Video, text-to-video systems aren't new. Earlier this year, a group of researchers from Tsinghua University and the Beijing Academy of Artificial Intelligence released CogVideo, which can translate text into reasonably high-fidelity short clips. But Imagen Video appears to be a significant leap over the previous state of the art, showing an aptitude for animating captions that existing systems would have trouble understanding.