 Generative AI




Video: OpenAI and Anthropic CEOs refuse to hold hands at India AI summit

Al Jazeera

The heads of AI companies OpenAI and Anthropic refused to hold hands in a group photo at the opening of a major AI summit in India. The rivalry between Sam Altman of OpenAI and Dario Amodei of Anthropic escalated this year when the latter ran attack ads during the Super Bowl.



From Collapse to Improvement: Statistical Perspectives on the Evolutionary Dynamics of Iterative Training on Contaminated Sources

Bakshi, Soham, Chakraborty, Sunrit

arXiv.org Machine Learning

The problem of model collapse has presented new challenges in the iterative training of generative models, where training on synthetic data leads to an overall degradation of performance. This paper examines the problem from a statistical viewpoint, illustrating that one can actually hope for improvement when models are trained on data contaminated with synthetic samples, as long as there is some amount of fresh information from the true target distribution. In particular, we consider iterative training on samples sourced from a mixture of the true target and synthetic distributions. We analyze the entire iterative evolution in a next-token prediction language model, capturing how the interplay between the mixture weights and the sample size controls the overall long-term performance. With a non-trivial mixture weight on the true distribution, even one that decays over time, simply training the model in a contamination-agnostic manner with appropriate sample sizes can avoid collapse and even recover the true target distribution under certain conditions. Simulation studies support our findings and also show that such behavior extends to other classes of models.
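The mixture setup the abstract describes can be illustrated with a toy simulation. This is a hedged sketch, not the paper's actual construction (the paper analyzes next-token prediction language models): here each generation refits a Gaussian on training data that mixes a fraction `alpha` of fresh samples from the true distribution with synthetic samples drawn from the previous generation's fit. The function name `iterate` and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def iterate(alpha, n=1000, T=50):
    """Refit a Gaussian for T generations.

    Each generation's training set of size n is a mixture:
    a fraction `alpha` of fresh samples from the true N(0, 1),
    and the remainder of synthetic samples drawn from the
    previous generation's fitted model.
    """
    mu, sigma = 0.0, 1.0  # current model parameters
    for _ in range(T):
        n_true = int(alpha * n)
        data = np.concatenate([
            rng.normal(0.0, 1.0, n_true),       # fresh true samples
            rng.normal(mu, sigma, n - n_true),  # synthetic samples
        ])
        # Contamination-agnostic refit: the model does not know
        # which samples are true and which are synthetic.
        mu, sigma = data.mean(), data.std()
    return sigma

# With alpha = 0 (no fresh data), the fitted scale drifts away from the
# truth over many generations; a non-trivial alpha keeps it anchored.
```

In this toy setting the fresh samples act as an anchor: the synthetic part of each generation's data inherits the previous fit, but the true-distribution fraction keeps pulling the estimate back toward the target, matching the abstract's claim that a non-trivial mixture weight can prevent collapse.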


Nvidia's Deal With Meta Signals a New Era in Computing Power

WIRED

The days of tech giants buying up discrete chips are over. AI companies now need GPUs, CPUs, and everything in between. Ask anyone what Nvidia makes, and they're likely to first say "GPUs." For decades, the chipmaker has been defined by advanced parallel computing, and the emergence of generative AI and the resulting surge in demand for GPUs has been a boon for the company. But Nvidia's recent moves signal that it's looking to lock in more customers at the less compute-intensive end of the AI market--customers who don't necessarily need the beefiest, most powerful GPUs to train AI models, but instead are looking for the most efficient ways to run agentic AI software.


ChatGPT gets 'Lockdown Mode' for extra security and privacy

PCWorld

OpenAI is launching two new security features for ChatGPT, a Lockdown Mode and "Elevated Risk" labels, to combat growing threats to its AI systems, according to a recent blog post. Lockdown Mode restricts external interactions and disables web browsing for high-privacy users, while the risk labels clearly mark potentially dangerous features. These updates specifically address prompt injection attacks, in which malicious prompts attempt to trick the AI into performing harmful actions. As AI services increasingly connect to wider parts of the web and more external apps, the risk of such attacks also increases.


Big Tech Says Generative AI Will Save the Planet. It Doesn't Offer Much Proof

WIRED

A new report finds that of 154 specific claims about how AI will benefit the climate, just a quarter cited academic research. A third included no evidence at all. A few years ago, Ketan Joshi read a statistic about artificial intelligence and climate change that caught his eye. In late 2023, Google began claiming that AI could help cut global greenhouse gas emissions by between 5 and 10 percent by 2030.


Federal court rules that OpenAI must stop using the term 'Cameo'

Engadget

The company's video generator Sora offered a feature bearing the name. Cameo, the platform where celebrities sell short, personalized videos, has scored a win in a trademark case against OpenAI. A California judge has ruled that the AI company's video generation tool cannot use the term 'cameo' or any variation likely to cause confusion. A temporary restraining order in the case was issued in November of last year. The suit was filed in response to a feature, available within Sora at launch and called 'Cameo', that allowed users to add any likeness to videos they generated.