Generation
How to watch LlamaCon 2025, Meta's first generative AI developer conference
After a couple of years of having its open-source Llama AI model be just a part of its Connect conferences, Meta is breaking things out and hosting an entirely generative AI-focused developer conference called LlamaCon on April 29. The event is entirely virtual, and you'll be able to watch along live on the Meta for Developers Facebook page. LlamaCon kicks off at 1PM ET / 10AM PT with a keynote address from Meta's Chief Product Officer Chris Cox, Vice President of AI Manohar Paluri and research scientist Angela Fan. The keynote is supposed to cover developments in the company's open-source AI community, "the latest on the Llama collection of models and tools" and offer a glimpse at yet-to-be-released AI features. The keynote address will be followed by a conversation at 1:45PM ET / 10:45AM PT between Meta CEO Mark Zuckerberg and Databricks CEO Ali Ghodsi on "building AI-powered applications," followed by a chat at 7PM ET / 4PM PT about "the latest trends in AI" between Zuckerberg and Microsoft CEO Satya Nadella. It doesn't seem like either conversation will be used to break news, but Microsoft and Meta have collaborated before, so anything is possible.
U.S. government agency sounds alarm on AI's toll on environment, humanity
Generative AI's impact on the environment is still deeply understudied, according to a report by the Government Accountability Office (GAO), and its human effects are just as unclear. In the latest of several AI technology assessments conducted by the GAO -- a nonpartisan agency that provides audits and evaluations to Congress, executive agency leaders, and the general public upon request -- the legislative office outlined multiple human and environmental risks posed by the tech's unhampered development and widespread use. "Generative AI may displace workers, help spread false information, and create or elevate risks to national security," the report reads. Threats to data privacy and cybersecurity, the use of biased systems, and a lack of accountability could have unintended effects on society, culture, and people, writes the GAO. And just as pressing is the need to determine how much of an energy drain AI's training (and ongoing use) presents and how we can mitigate it.
The Oscars announces new rules for using AI. Sort of.
The Oscars has landed squarely on the fence about the use of AI in potentially nominated films. Following a widely publicized controversy around the use of artificial intelligence in Best Picture nominees The Brutalist and Emilia Pérez, the Academy has made its position of impartiality clear. In the latest update to the Oscars rules, released on April 21 to apply to the upcoming 98th Academy Awards set for March 2026, there's an addition to the "Eligibility" section: "With regard to Generative Artificial Intelligence and other digital tools used in the making of the film, the tools neither help nor harm the chances of achieving a nomination. The Academy and each branch will judge the achievement, taking into account the degree to which a human was at the heart of the creative authorship when choosing which movie to award." Essentially, AI won't help films get nominated for an Oscar, nor hinder their chances.
Using generative AI will 'neither help nor harm the chances of achieving' Oscar nominations
The Academy of Motion Picture Arts and Sciences has decided that its official stance towards AI use in films is to take no stance at all, according to a statement the organization shared outlining changes to voting for the 98th Oscars. The issue of award-nominated films using AI was first raised in 2024, when the productions behind Best Picture nominees The Brutalist and Emilia Pérez admitted to using the tech to alter performances. "With regard to Generative Artificial Intelligence and other digital tools used in the making of the film, the tools neither help nor harm the chances of achieving a nomination," AMPAS writes. "The Academy and each branch will judge the achievement, taking into account the degree to which a human was at the heart of the creative authorship when choosing which movie to award." While the organization at least reaffirms that human involvement is its primary concern, it also doesn't seem to believe that using AI -- potentially trained on the ill-gotten work of its membership -- is an existential problem.
Generative AI is learning to spy for the US military
"We still need to validate the sources," says Lowdon. But the unit's commanders encouraged the use of large language models, he says, "because they provide a lot more efficiency during a dynamic situation." The generative AI tools they used were built by the defense-tech company Vannevar Labs, which in November was granted a production contract worth up to $99 million by the Pentagon's startup-oriented Defense Innovation Unit with the goal of bringing its intelligence tech to more military units. The company, founded in 2019 by veterans of the CIA and US intelligence community, joins the likes of Palantir, Anduril, and Scale AI as a major beneficiary of the US military's embrace of artificial intelligence--not only for physical technologies like drones and autonomous vehicles but also for software that is revolutionizing how the Pentagon collects, manages, and interprets data for warfare and surveillance. Though the US military has been developing computer vision models and similar AI tools, like those used in Project Maven, since 2017, the use of generative AI--tools that can engage in human-like conversation like those built by Vannevar Labs--represents a newer frontier.
Accelerating drug development with AI
Developing new drugs to treat illnesses has typically been a slow and expensive process. However, a team of researchers at the University of Waterloo is using machine learning to speed up development. The Waterloo research team has created "Imagand," a generative artificial intelligence model that assesses existing information about potential drugs and then suggests their potential properties. Trained on and tested against existing drug data, Imagand successfully predicts important properties of different drugs that have already been independently verified in lab studies, demonstrating the AI's accuracy. Traditionally, bringing a successful drug candidate to market can cost between US$2 billion and US$3 billion and take over a decade.
Samsung's house robot Ballie will have Google Cloud's generative AI built in
Samsung's Ballie might soon be the smartest thing rolling around your living room. The tech giant announced today that it's partnering with Google to bring Google Cloud's generative AI technology to its Ballie AI companion home robot. If you're not familiar, Ballie first rolled out at CES 2020. As it moves, Ballie can manage lights and temperature, interact with smart appliances, send video updates of pets or loved ones, project videos or websites on the wall, play music, answer phone calls, and more. ZDNET's Senior Editor Sabrina Ortiz got an up-close look at Ballie last year, calling it "a serious attempt at making a robot assistant without being overly ambitious."
Gartner to CIOs: Prepare to spend more money on generative AI
You know all that money your company may already be spending on generative AI products and projects? Well, be prepared to spend a lot more in the coming year. As generative AI, or gen AI, becomes more integral to business operations and consumer products, spending is expected to rise dramatically. In its latest forecast, Gartner predicts that worldwide spending on this hot flavor of AI will total $644 billion this year, an increase of 76.4% from last year. One problem with gen AI, as it currently stands, is that the technology is still in its nascent stages and often flawed and fallible.
The Download: generative AI therapy, and the future of 23andMe's genetic data
June 2022 Across the world, video cameras have become an accepted feature of urban life. Many cities in China now have dense networks of them, and London and New Delhi aren't far behind. Now France is playing catch-up. Concerns have been raised throughout the country. But the surveillance rollout has met special resistance in Marseille, France's second-biggest city. It's unsurprising, perhaps, that activists are fighting back against the cameras, highlighting the surveillance system's overreach and underperformance.
Multi-objective Deep Data Generation with Correlated Property Control
Developing deep generative models has been an emerging field due to the ability to model and generate complex data for various purposes, such as image synthesis and molecular design. However, the advancement of deep generative models is limited by the challenge of generating objects that possess multiple desired properties: 1) complex correlations among real-world properties are common but hard to identify; 2) controlling an individual property implicitly enforces partial control over its correlated properties, which is difficult to model; 3) controlling multiple properties simultaneously, in various manners, is hard and under-explored. We address these challenges by proposing a novel deep generative framework, CorrVAE, that recovers semantics and the correlation of properties through disentangled latent vectors. The correlation is handled via an explainable mask pooling layer, and properties are precisely retained by generated objects via the mutual dependence between latent vectors and properties. Our generative model preserves properties of interest while handling correlation and conflicts of properties under a multi-objective optimization framework. The experiments demonstrate our model's superior performance in generating data with desired properties.
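The mask-pooling idea can be illustrated with a minimal numpy sketch. This is not the authors' CorrVAE implementation (which learns the mask and uses a VAE); here the mask, weights, and latent code are fixed toy values. Each row of the mask selects which latent dimensions a property is read from, so properties whose rows overlap share latent factors and therefore move together, while non-overlapping properties stay independent:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, n_props = 8, 3

z = rng.normal(size=latent_dim)  # disentangled latent code for one object

# Mask pooling (fixed here for illustration; learned in the paper): row p
# selects the latent dimensions that property p reads from. Overlapping
# rows make those properties share a latent factor, i.e. correlate them.
mask = np.zeros((n_props, latent_dim))
mask[0, 0:3] = 1   # property 0 reads z[0:3]
mask[1, 2:5] = 1   # property 1 shares z[2] with property 0 -> correlated
mask[2, 5:8] = 1   # property 2 has its own dimensions -> independent

weights = rng.normal(size=(n_props, latent_dim))
props = (mask * weights) @ z  # each property sees only its masked dims

# Editing the shared dimension moves both correlated properties,
# while the independent property is untouched.
z_edit = z.copy()
z_edit[2] += 1.0
props_edit = (mask * weights) @ z_edit
```

Inverting this map (choosing z so that `props` hits target values) is what turns the layer into a control mechanism: correlated targets constrain the shared dimensions jointly, which is how conflicting property demands surface as a multi-objective problem.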