

Google answers Meta's video-generating AI with its own, dubbed Imagen Video


Not to be outdone by Meta's Make-A-Video, Google today detailed its work on Imagen Video, an AI system that can generate video clips given a text prompt (e.g., "a teddy bear washing dishes"). While the results aren't perfect -- the looping clips the system generates tend to have artifacts and noise -- Google claims that Imagen Video is a step toward a system with a "high degree of controllability" and world knowledge, including the ability to generate footage in a range of artistic styles. As my colleague Devin Coldewey noted in his piece about Make-A-Video, text-to-video systems aren't new. Earlier this year, a group of researchers from Tsinghua University and the Beijing Academy of Artificial Intelligence released CogVideo, which can translate text into reasonably high-fidelity short clips. But Imagen Video appears to be a significant leap over the previous state of the art, showing an aptitude for animating captions that existing systems would have trouble understanding.

Facebook is giving us a little bit more control over our feeds


As someone who spends an unfortunately significant amount of time on Facebook, I can think of plenty of things I would like to see less often. Facebook's parent company Meta has now, nearly two decades after the platform was released to the public, given us that option. What we see in our Facebook feed is often fueled by the much-criticized algorithm. For instance, if you like a bunch of hiking Groups and Pages, interact with a ton of photos of the outdoors, and post about your backpacking adventures, you might be met with more recommended posts from creators and communities related to hiking. As Facebook puts it, "what you see in your Feed is uniquely personalized to your interests through machine learning."

Meta's AI Chief Publishes Paper on Creating 'Autonomous' Artificial Intelligence


Much like how different regions of the brain are responsible for different functions of the body, LeCun suggests a model for spawning autonomous intelligence composed of five separate yet configurable modules. One of the most complex parts of the proposed architecture, the "world model module," would estimate the state of the world and predict imagined actions and other world sequences, much like a simulator. Because a single world-model engine is shared, knowledge about how the world operates can easily be reused across different tasks. In some ways, it might resemble memory.
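LeCun's paper is conceptual rather than code, but the idea of a shared world model that "imagines" the outcome of action sequences can be sketched in miniature. Everything below is an invented toy illustration, not LeCun's architecture: simple kinematics stand in for a learned dynamics model, and the class and method names are made up for this example.

```python
from dataclasses import dataclass

@dataclass
class State:
    """A toy world state: position and velocity on a line."""
    position: float
    velocity: float

class WorldModel:
    """Toy simulator: estimates how the world evolves under an action."""

    def predict(self, state: State, action: float, dt: float = 1.0) -> State:
        # Simple kinematics stand in for a learned dynamics model.
        new_velocity = state.velocity + action * dt
        new_position = state.position + new_velocity * dt
        return State(new_position, new_velocity)

    def rollout(self, state: State, actions: list) -> list:
        # "Imagine" a sequence of actions without acting in the real world,
        # the way a planner would query a shared world-model module.
        trajectory = []
        for a in actions:
            state = self.predict(state, a)
            trajectory.append(state)
        return trajectory

model = WorldModel()
# Accelerate, coast, then brake -- all purely imagined.
imagined = model.rollout(State(0.0, 0.0), [1.0, 0.0, -1.0])
```

Because the same `rollout` machinery serves any task that needs to look ahead, the sketch mirrors the paper's point that one world model can be shared rather than retrained per task.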

Meta releases a new AI platform that can switch freely between Nvidia and AMD chips


Facebook parent company Meta has announced a new open-source artificial intelligence software platform that supports both Nvidia and AMD chips, making it easier for developers to move AI programs between hardware systems built on different silicon. The newly released software is built on top of the PyTorch open-source machine learning framework; Meta says it can make code run up to 12 times faster on Nvidia's flagship A100 chip and also speeds up code on AMD's MI250 chip. In a blog post, Meta said the platform not only accelerates code execution but also supports AI chips from different manufacturers. Software has become a key battleground for chip makers trying to build developer ecosystems around their own hardware: Nvidia's CUDA, for example, is enormously popular, but AI code written against CUDA for Nvidia chips is difficult to run on graphics processing chips made by rivals such as AMD.
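The portability idea can be shown in miniature with plain PyTorch (this is a generic sketch assuming PyTorch is installed, not Meta's released platform): the same model code targets whichever backend is present instead of being written against one vendor's API. Notably, PyTorch's ROCm build for AMD GPUs exposes the device through the same `torch.cuda` interface.

```python
import torch

def pick_device() -> torch.device:
    # "cuda" covers Nvidia GPUs; PyTorch's ROCm build for AMD GPUs
    # is exposed through the same torch.cuda interface, so this one
    # check serves both vendors. Falls back to CPU otherwise.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = pick_device()
model = torch.nn.Linear(4, 2).to(device)   # move weights to the backend
x = torch.randn(8, 4, device=device)       # allocate input on the same backend
y = model(x)                               # identical code on Nvidia, AMD, or CPU
```

Meta's platform goes further by compiling and optimizing such code for each chip, but the developer-facing promise is the same: write once, run on either vendor's hardware.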

Meta's Groundbreaking AI Film Maker: Make-A-Scene


I explain Artificial Intelligence terms and news to non-experts. Meta AI's new model, Make-A-Video, is out, and in a single sentence: it generates videos from text. It's not only able to generate videos; it's also the new state-of-the-art method, producing higher-quality and more coherent videos than ever before. You can think of it as a Stable Diffusion model for videos: surely the next step after being able to generate images.

Get ready for the next generation of AI


Is anyone else feeling dizzy? Just when the AI community was wrapping its head around the astounding progress of text-to-image systems, we're already moving on to the next frontier: text-to-video. Late last week, Meta unveiled Make-A-Video, an AI that generates five-second videos from text prompts. Built on open-source data sets, Make-A-Video lets you type in a string of words, like "A dog wearing a superhero outfit with a red cape flying through the sky," and then generates a clip that, while pretty accurate, has the aesthetics of a trippy old home video. The development is a breakthrough in generative AI that also raises some tough ethical questions.

AI Data Laundering: How Academic and Nonprofit Researchers Shield Tech Companies from Accountability


Yesterday, Meta's AI Research Team announced Make-A-Video, a "state-of-the-art AI system that generates videos from text," tweeting: "We're pleased to introduce Make-A-Video, our latest in #GenerativeAI research! With just a few words, this state-of-the-art AI system generates high-quality videos from text prompts. Have an idea you want to see? Reply w/ your prompt using #MetaAI and we'll share more results." As he did for the Stable Diffusion data, Simon Willison created a Datasette browser to explore WebVid-10M, one of the two datasets used to train the video generation model, and quickly learned that all 10.7 million video clips were scraped from Shutterstock, watermarks and all.

Meta AI Boss: current AI methods will never lead to true intelligence


Meta is one of the leading companies in AI development globally. However, the company's chief AI scientist, Yann LeCun, appears to lack confidence in current AI methods. According to LeCun, fundamental improvements are needed before machines can reach true intelligence: he claims that most current AI methods will never get there, and he is skeptical of many of today's most successful deep learning research directions.

Meta's New AI Turns Text To Video


Meta (Facebook 2.0) has unveiled a brand new AI system that turns text into video. The AI tool is called Make-A-Video. Yes, you read that right: most people are not even aware that text-to-image AI models exist, and now we're moving on to the next frontier, text-to-video. Make-A-Video is a state-of-the-art AI system that generates videos from text.

Text-to-image models are dated, text-to-video is in now


In brief: AI progresses rapidly. Just months after the release of the most advanced text-to-image models, developers are showing off text-to-video systems. Meta announced a multimodal algorithm named Make-A-Video that allows its users to type a text description of a scene as input and get a short computer-generated animated clip as output, typically depicting what was described. Other types of data, such as an image or a video, can be used as an input prompt, too. The text-to-video system was trained on public datasets, according to a non-peer-reviewed paper [PDF] describing the software.
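The multimodal interface described above (text in, clip out, with images or videos also accepted as prompts) can be sketched as a stub. All names here are invented for illustration and are not Meta's API; the stub only records what kind of conditioning it received, where a real system would run a generative model.

```python
from typing import Union

class VideoClip:
    """Stand-in for a generated clip: a frame count plus its caption."""
    def __init__(self, frames: int, caption: str):
        self.frames = frames
        self.caption = caption

def make_a_video(prompt: Union[str, VideoClip], frames: int = 76) -> VideoClip:
    # A real system would sample frames from a diffusion model here;
    # this stub just shows the two conditioning paths the article names.
    if isinstance(prompt, str):
        # Text prompt: generate a clip depicting the description.
        return VideoClip(frames, caption=prompt)
    # Image/video prompt: condition generation on existing pixels.
    return VideoClip(frames, caption="variation of: " + prompt.caption)

clip = make_a_video("a dog wearing a superhero outfit")
variant = make_a_video(clip)
```

The single entry point dispatching on prompt type mirrors how such systems present one interface over several input modalities.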