Technology
How to try Veo 3, Google's AI video generator that's going viral on the internet
AI-generated video has been advancing rapidly, with leading tech developers racing to build and commercialize their own models. We're now seeing the rise of tools that can generate strikingly photorealistic video from a single prompt in natural language. For the most part, however, AI-generated video has had a glaring shortcoming: it's silent. At its annual I/O developer conference on Tuesday, Google announced the release of Veo 3, the latest iteration of its video-generating AI model, which also comes with the ability to generate synchronized audio. Imagine you prompt the system to generate a video set inside a busy subway car, for example.
Congress Passed a Sweeping Free-Speech Crackdown--and No One's Talking About It
Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. Had you scanned any of the latest headlines around the TAKE IT DOWN Act, legislation that President Donald Trump signed into law Monday, you would have come away with a deeply mistaken impression of the bill and its true purpose. The surface-level pitch is that this is a necessary law for addressing nonconsensual intimate images--known more widely as revenge porn. Obfuscating its intent with a classic congressional acronym (Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks), the TAKE IT DOWN Act purports to help scrub the internet of exploitative, nonconsensual sexual media, whether real or digitally mocked up, at a time when artificial intelligence tools and automated image generators have supercharged its spread. Enforcement is delegated to the Federal Trade Commission, which will give online communities that specialize primarily in user-generated content (e.g., social media, message boards) a heads-up and a 48-hour takedown deadline whenever an appropriate example is reported.
A Definition of a batch normalization layer
A small constant is included in the denominator for numerical stability. For distributed training, the batch statistics are usually estimated locally on a subset of the training minibatch ("ghost batch normalization" [32]). In figure 2 of the main text, we studied the variance of hidden activations and the batch statistics of residual blocks at a range of depths in three different architectures: a deep linear fully connected unnormalized residual network, a deep linear fully connected normalized residual network, and a deep convolutional normalized residual network with ReLUs. We now define the three models in full. Deep fully connected linear residual network without normalization: The inputs are 100-dimensional vectors composed of independent random samples from the unit normal distribution, and the batch size is 1000.
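The normalization operation described above can be sketched in NumPy. This is a minimal version that omits the learned scale and shift parameters, using the 100-dimensional unit-normal inputs and batch size of 1000 given in the text; the eps value of 1e-5 is an assumed common default, not a value taken from the paper.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize each feature over the batch dimension.

    The small constant eps in the denominator provides numerical
    stability when a feature's batch variance is close to zero.
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

# Inputs as described in the text: 100-dimensional vectors of
# independent unit-normal samples, with a batch size of 1000.
rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 100))

y = batch_norm(x)
# Each feature of y now has (near-)zero batch mean and unit batch variance.
```

In ghost batch normalization, the same computation would simply be applied to each worker's local slice of the minibatch rather than to the full batch.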
Batch Normalization Biases Residual Blocks Towards the Identity Function in Deep Networks
Batch normalization dramatically increases the largest trainable depth of residual networks, and this benefit has been crucial to the empirical success of deep residual networks on a wide range of benchmarks. We show that this key benefit arises because, at initialization, batch normalization downscales the residual branch relative to the skip connection, by a normalizing factor on the order of the square root of the network depth. This ensures that, early in training, the function computed by normalized residual blocks in deep networks is close to the identity function (on average). We use this insight to develop a simple initialization scheme that can train deep residual networks without normalization. We also provide a detailed empirical study of residual networks, which clarifies that, although batch normalized networks can be trained with larger learning rates, this effect is only beneficial in specific compute regimes, and has minimal benefits when the batch size is small.
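The central claim — that normalization leaves the skip path to accumulate variance while each residual branch contributes only unit variance, so the branch is downscaled relative to the skip connection by a factor on the order of the square root of the depth — can be checked with a small NumPy experiment on a deep linear residual network. This is a sketch under assumed width, depth, and batch size, not the paper's exact experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_norm(x, eps=1e-5):
    # Normalize each feature over the batch dimension.
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

depth, width, batch = 50, 100, 1000  # assumed sizes for illustration
x = rng.standard_normal((batch, width))

skip_variance = []
for _ in range(depth):
    # Variance-preserving linear residual branch (1/sqrt(width) weight
    # scaling), batch normalized, then added to the skip connection.
    w = rng.standard_normal((width, width)) / np.sqrt(width)
    x = x + batch_norm(x @ w)
    skip_variance.append(float(x.var()))

# The normalized branch adds roughly unit variance per block, so the
# variance on the skip path grows approximately linearly with depth.
# At block l the residual branch (std ~1) is therefore downscaled
# relative to the skip path (std ~sqrt(l)).
```

Without the `batch_norm` call, each block would instead roughly double the variance, which illustrates why unnormalized residual networks of this form become untrainable at large depths.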
Google made an AI content detector - join the waitlist to try it
Fierce competition among some of the world's biggest tech companies has led to a profusion of AI tools that can generate humanlike prose and uncannily realistic images, audio, and video. While those companies promise productivity gains and an AI-powered creativity revolution, fears have also started to swirl around the possibility of an internet that's so thoroughly muddled by AI-generated content and misinformation that it's impossible to tell the real from the fake. Many leading AI developers have, in response, ramped up their efforts to promote AI transparency and detectability. Most recently, Google announced the launch of its SynthID Detector, a platform that can quickly spot AI-generated content created by one of the company's generative models: Gemini, Imagen, Lyria, and Veo. Originally released in 2023, SynthID is a technology that embeds invisible watermarks -- a kind of digital fingerprint that can be detected by machines but not by the human eye -- into AI-generated images.
Anthropic's latest Claude AI models are here - and you can try one for free today
Since its founding in 2021, Anthropic has quickly become one of the leading AI companies and a worthy competitor to OpenAI, Google, and Microsoft with its Claude models. Building on this momentum, the company held its first developer conference, Code with Claude, on Thursday, showcasing what the company has done so far and where it is going next. Anthropic used the event stage to unveil two highly anticipated models, Claude Opus 4 and Claude Sonnet 4. Both offer improvements over their predecessors, including better performance in coding and reasoning. Beyond that, the company launched new features and tools for its models that should improve the user experience. Keep reading to learn more about the new models.
A United Arab Emirates Lab Announces Frontier AI Projects--and a New Outpost in Silicon Valley
A United Arab Emirates (UAE) academic lab today launched an artificial intelligence world model and agent, two large language models (LLMs), and a new research center in Silicon Valley as it ramps up its investment in the cutting-edge field. The UAE's Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) revealed an AI world model called PAN, which can be used to build physically realistic simulations for testing and honing the performance of AI agents. Eric Xing, president and professor of MBZUAI and a leading AI researcher, revealed the models and lab at the Computer History Museum in Mountain View, California. The UAE has made big investments in AI in recent years under the guidance of Sheikh Tahnoun bin Zayed al Nahyan, the nation's tech-savvy national security advisor and younger brother of President Mohamed bin Zayed Al Nahyan. Xing says the UAE's new center in Sunnyvale, California, will help the nation tap into the world's most concentrated source of AI knowledge and talent.
DOGE Used Meta AI Model to Review Emails From Federal Workers
Elon Musk's so-called Department of Government Efficiency (DOGE) used artificial intelligence from Meta's Llama model to comb through and analyze emails from federal workers. Materials viewed by WIRED show that DOGE affiliates within the Office of Personnel Management (OPM) tested and used Meta's Llama 2 model to review and classify responses from federal workers to the infamous "Fork in the Road" email that was sent across the government in late January. The email offered deferred resignation to anyone opposed to changes the Trump administration was making to its federal workforce, including an enforced return to office policy, downsizing, and a requirement to be "loyal." To leave their position, recipients merely needed to reply with the word "resign." This email closely mirrored one that Musk sent to Twitter employees shortly after he took over the company in 2022.
Anthropic's New Model Excels at Reasoning and Planning--and Has the Pokémon Skills to Prove It
Anthropic announced two new models, Claude Opus 4 and Claude Sonnet 4, during its first developer conference in San Francisco on Thursday. The pair will be immediately available to paying Claude subscribers. The new models, which jump the naming convention from 3.7 straight to 4, have a number of strengths, including their ability to reason, plan, and remember the context of conversations over extended periods of time, the company says. Claude Opus 4 is also better at playing Pokémon than its predecessor. "It was able to work agentically on Pokémon for 24 hours," says Anthropic's chief product officer Mike Krieger in an interview with WIRED.
Leak reveals what Sam Altman and Jony Ive are cooking up: 100 million AI companion devices
OpenAI and Jony Ive's vision for their AI device is a screenless companion that knows everything about you. Details leaked to the Wall Street Journal give us a clearer picture of OpenAI's acquisition of io, cofounded by Ive, the iconic iPhone designer. The ChatGPT maker reportedly plans to ship 100 million AI devices designed to fit into users' everyday lives. "The product will be capable of being fully aware of a user's surroundings and life, will be unobtrusive, able to rest in one's pocket or on one's desk," according to a recording of an OpenAI staff meeting reviewed by the Journal. The device "will be a third core device a person would put on a desk after a MacBook Pro and an iPhone," per the meeting, which took place the same day (Wednesday) that OpenAI announced its acquisition of Ive's company.