generate video
TikTok creator ByteDance vows to curb AI video tool after Disney threat
ByteDance's new AI video tool Seedance 2.0 can generate videos based on just a few lines of text. Videos created by the new Seedance 2.0 generator have gone viral, including one of Tom Cruise and Brad Pitt fighting.

Mon 16 Feb 2026 03.25 EST (last modified Mon 16 Feb 2026 03.29 EST)

ByteDance, the Chinese technology company behind TikTok, has said it will restrain its AI video-making tool after threats of legal action from Disney and a backlash from other media businesses, according to reports. The AI video generator Seedance 2.0, released last week, has spooked Hollywood as users create realistic clips of movie stars and superheroes with just a short text prompt. On Friday, Walt Disney reportedly sent a cease-and-desist letter to ByteDance accusing it of supplying Seedance with a "pirated library" of the studio's characters, including those from Marvel and Star Wars, according to the US news outlet Axios. Disney's lawyers claimed that ByteDance committed a "virtual smash-and-grab" of their intellectual property, according to a report from the BBC.
- North America > United States (0.32)
- Europe > Ukraine (0.07)
- South America > Venezuela (0.05)
- (2 more...)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (0.32)
OpenAI promises more 'granular control' to copyright owners after Sora 2 generates videos of popular characters
OpenAI's Sora 2 app allows users to make AI-generated videos based on a text prompt. The company behind the AI video app says it will work with rights holders to "block characters from Sora at their request".

Mon 6 Oct 2025 00.10 EDT (last modified Mon 6 Oct 2025 00.11 EDT)

Sora 2, a video generator powered by artificial intelligence, was launched last week on an invite-only basis. The app allows users to generate short videos based on a text prompt. Varun Shetty, OpenAI's head of media partnerships, said: "We'll work with rights holders to block characters from Sora at their request and respond to takedown requests."
- Oceania > Australia (0.19)
- North America > United States (0.17)
- Europe > Ukraine (0.07)
- Government > Regional Government (0.74)
- Leisure & Entertainment > Sports (0.72)
- Law > Intellectual Property & Technology Law (0.52)
- Media > News (0.49)
OpenAI's New Sora App Lets You Deepfake Yourself for Entertainment
OpenAI's latest app encourages users to generate a personal digital avatar and scroll AI-generated videos of themselves and their friends. On Tuesday, OpenAI released an AI video app called Sora. The platform is powered by OpenAI's latest video generation model, Sora 2, and revolves around a TikTok-like For You page of user-generated clips. This is the first product release from OpenAI that adds AI-generated sounds to videos. For now, it's available only on iOS and requires an invite code to join.
- North America > United States > California (0.05)
- Europe > Slovakia (0.05)
- Europe > Ireland (0.05)
- (2 more...)
- Media (0.96)
- Health & Medicine (0.74)
- Leisure & Entertainment (0.71)
- (2 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Google's Gemini AI can now make 1080p videos and costs half as much
Both Veo 3 and Veo 3 Fast models just got updated features and much better pricing. Google's video-generating AI model, called Veo 3, can now generate videos in Full HD. The company announced yesterday that it has updated the AI model to support both a higher resolution and a new format. With both Veo 3 and Veo 3 Fast models, Gemini can now generate videos in 1080p resolution.
The Most Hyped Bot Since ChatGPT
For more than two years, every new AI announcement has lived in the shadow of ChatGPT. No model from any company has eclipsed or matched that initial fever. But perhaps the closest any firm has come to replicating the buzz was this past February, when OpenAI first teased its video-generating AI model, Sora. Tantalizing clips--woolly mammoths kicking up clouds of snow, Pixar-esque animations of adorable fluffy critters--promised a stunning future, one in which anyone can whip up high-quality clips by typing simple text prompts into a computer program. But Sora, which was not immediately available to the public, remained just that: a teaser.
- Government (0.97)
- Media (0.92)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.98)
How to use Sora, OpenAI's new video generating tool
Sora is a powerful AI video generation model that can create videos from text prompts, animate images, or remix videos in new styles. OpenAI first previewed the model back in February, but today is the first time the company is releasing it for broader use. The core function of Sora--creating impressive videos with simple prompts--remains similar to what was previewed in February, but OpenAI worked to make the model faster and cheaper ahead of this wider release. There are a few new features, and two stand out. One lets you create multiple AI-generated videos and then assemble them together on a timeline, much the way you would with conventional video editors like Adobe Premiere Pro.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.93)
Adobe brings generative AI video to Premiere Pro
Adobe is now adding its AI-based video generator, Firefly, to its video editing software Premiere Pro. The Firefly model can be used to extend a video clip or generate video from still images or text instructions. This was first brought to our attention by The Verge. The Generative Extend tool will initially be available in beta and can extend the length of a video clip by up to two seconds, at a resolution of 720p or 1080p and a frame rate of 24 frames per second. The tool will also work on ambient sounds and sound effects, but not on music or speech.
Meta announces new AI model that can generate video with sound
Meta, the owner of Facebook and Instagram, announced on Friday it had built a new artificial intelligence model called Movie Gen that can create realistic-seeming video and audio clips in response to user prompts, claiming it can rival tools from leading media generation startups like OpenAI and ElevenLabs. Samples of Movie Gen's creations provided by Meta showed videos of animals swimming and surfing, as well as clips using people's real photos to depict them performing actions like painting on a canvas. Movie Gen also can generate background music and sound effects synced to the content of the videos, Meta said in a blogpost. Users can also edit existing videos with the model. In one such video, Meta had the tool insert pompoms into the hands of a man running by himself in the desert, while in another it changed a parking lot on which a man was skateboarding from dry ground into one covered by a splashing puddle.
- North America > United States (0.06)
- Asia > Pakistan (0.06)
- Asia > Indonesia (0.06)
- Asia > India (0.06)
- Leisure & Entertainment (0.78)
- Media > Film (0.76)
- Information Technology > Services (0.57)
VIMI: Grounding Video Generation through Multi-modal Instruction
Fang, Yuwei, Menapace, Willi, Siarohin, Aliaksandr, Chen, Tsai-Shien, Wang, Kuan-Chien, Skorokhodov, Ivan, Neubig, Graham, Tulyakov, Sergey
Existing text-to-video diffusion models rely solely on text-only encoders for their pretraining. This limitation stems from the absence of large-scale multimodal prompt video datasets, resulting in a lack of visual grounding and restricting their versatility and application in multimodal integration. To address this, we construct a large-scale multimodal prompt dataset by employing retrieval methods to pair in-context examples with the given text prompts, and then utilize a two-stage training strategy to enable diverse video generation tasks within the same model. In the first stage, we propose a multimodal conditional video generation framework for pretraining on these augmented datasets, establishing a foundational model for grounded video generation. In the second stage, we finetune the model from the first stage on three video generation tasks, incorporating multimodal instructions. This process further refines the model's ability to handle diverse inputs and tasks, ensuring seamless integration of multimodal information. After this two-stage training process, VIMI demonstrates multimodal understanding capabilities, producing contextually rich and personalized videos grounded in the provided inputs, as shown in Figure 1. Compared to previous visually grounded video generation methods, VIMI can synthesize consistent and temporally coherent videos with large motion while retaining semantic control. Lastly, VIMI also achieves state-of-the-art text-to-video generation results on the UCF101 benchmark.
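The retrieval step described in the abstract — pairing each text prompt with in-context examples drawn from a candidate bank — can be sketched as a nearest-neighbour lookup over embeddings. The sketch below is a minimal illustration under assumed inputs (precomputed embeddings, cosine similarity); the paper's actual retrieval method, encoder, and dataset are not specified here.

```python
import numpy as np

def retrieve_in_context_examples(prompt_emb, bank_embs, k=2):
    """Return indices of the k bank entries most similar to the prompt.

    prompt_emb: (d,) embedding of the text prompt.
    bank_embs:  (n, d) embeddings of candidate in-context examples.
    Similarity metric (cosine) is an assumption for illustration.
    """
    p = prompt_emb / np.linalg.norm(prompt_emb)
    b = bank_embs / np.linalg.norm(bank_embs, axis=1, keepdims=True)
    sims = b @ p                   # cosine similarity to each candidate
    return np.argsort(-sims)[:k]   # indices of the top-k matches

# Toy bank: four candidate examples in a 3-d embedding space.
bank = np.array([[1.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.05, 0.0])
idx = retrieve_in_context_examples(query, bank, k=2)
print(idx)  # the two candidates closest to the query
```

The retrieved examples would then be attached to the prompt as multimodal context before the two-stage training described above.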
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- Asia > Middle East > Jordan (0.04)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.96)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models
Chen, Weifeng, Ji, Yatai, Wu, Jie, Wu, Hefeng, Xie, Pan, Li, Jiashi, Xia, Xin, Xiao, Xuefeng, Lin, Liang
Recent advancements in diffusion models have unlocked unprecedented abilities in visual creation. However, current text-to-video generation models struggle with the trade-off among movement range, action coherence and object consistency. To mitigate this issue, we present a controllable text-to-video (T2V) diffusion model, called Control-A-Video, capable of maintaining consistency while allowing customizable video synthesis. Based on a pre-trained conditional text-to-image (T2I) diffusion model, our model aims to generate videos conditioned on a sequence of control signals, such as edge or depth maps. To improve object consistency, Control-A-Video integrates motion priors and content priors into video generation. We propose two motion-adaptive noise initialization strategies, based on pixel residual and optical flow, to introduce motion priors from input videos, producing more coherent videos. Moreover, a first-frame conditioned controller is proposed to generate videos from content priors of the first frame, which facilitates semantic alignment with text and allows longer video generation in an auto-regressive manner. With the proposed architecture and strategies, our model achieves resource-efficient convergence and generates consistent and coherent videos with fine-grained control. Extensive experiments demonstrate its success in various video generation tasks such as video editing and video style transfer, outperforming previous methods in terms of consistency and quality.
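One of the two strategies named in the abstract — pixel-residual-based noise initialization — can be read as sharing a base noise map across frames and blending in independent per-frame noise where consecutive frames differ. The sketch below illustrates that reading only; the mixing rule, weights, and function names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def residual_based_noise(video, rho=0.5, seed=0):
    """Initialize per-frame diffusion noise from a shared base plus a
    residual-weighted independent component.

    video: (T, H, W) grayscale frames in [0, 1].
    rho:   maximum weight given to the shared (temporally coherent) base.
    Note: this variance-preserving mix is an illustrative assumption.
    """
    rng = np.random.default_rng(seed)
    T, H, W = video.shape
    base = rng.standard_normal((H, W))        # shared across all frames
    noise = np.empty((T, H, W))
    for t in range(T):
        prev = video[t - 1] if t > 0 else video[0]
        residual = np.clip(np.abs(video[t] - prev), 0.0, 1.0)
        indep = rng.standard_normal((H, W))   # fresh noise for this frame
        # Static pixels (small residual) lean on the shared base noise;
        # moving pixels (large residual) get mostly independent noise.
        a = rho * (1.0 - residual)
        noise[t] = np.sqrt(a) * base + np.sqrt(1.0 - a) * indep
    return noise

frames = np.zeros((4, 16, 16))
frames[2, 4:9, 4:9] = 1.0   # an object appears in frame 2
n = residual_based_noise(frames)
```

Because the two noise sources are independent unit Gaussians, the square-root weights keep the per-pixel variance at 1, so the result is still a valid diffusion starting point while correlated across static regions of neighbouring frames.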
- North America > Canada > Quebec > Montreal (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)