

Scarlett Johansson and Cate Blanchett back campaign accusing AI firms of theft

The Guardian

Johansson was dragged into the AI debate after OpenAI's voice assistant used her vocal likeness, prompting the actor to say she was "angered" by the move. Scarlett Johansson, Cate Blanchett, REM and Jodi Picoult are among hundreds of Hollywood stars, musicians and authors backing a new campaign accusing AI companies of "theft" of their work. The "Stealing Isn't Innovation" drive launched on Thursday with the support of approximately 800 creative professionals and bands. It adds: "Artists, writers, and creators of all kinds are banding together with a simple message: Stealing our work is not innovation."


'Tron: Ares' Wants to Gaslight You About the Future of AI

WIRED

The latest film in the franchise seems not to have learned any lessons from sci-fi movies past, or from current reality. Ares, named after the Greek god of war, was built to be an AI super-soldier. Then he started listening to Depeche Mode and realized the tech bro who made him might be a hack.


Debiasing Machine Learning Predictions for Causal Inference Without Additional Ground Truth Data: "One Map, Many Trials" in Satellite-Driven Poverty Analysis

Pettersson, Markus, Jerzak, Connor T., Daoud, Adel

arXiv.org Machine Learning

Machine learning models trained on Earth observation data, such as satellite imagery, have demonstrated significant promise in predicting household-level wealth indices, enabling the creation of high-resolution wealth maps that can be leveraged across multiple causal trials. However, because standard training objectives prioritize overall predictive accuracy, these predictions inherently suffer from shrinkage toward the mean, leading to attenuated estimates of causal treatment effects and limiting their utility in policy. Existing debiasing methods, such as Prediction-Powered Inference, can handle this attenuation bias but require additional fresh ground-truth data at the downstream stage of causal inference, which restricts their applicability in data-scarce environments. Here, we introduce and evaluate two correction methods -- linear calibration correction and Tweedie's correction -- that substantially reduce prediction bias without relying on newly collected labeled data. Linear calibration corrects bias through a straightforward linear transformation derived from held-out calibration data, whereas Tweedie's correction leverages empirical Bayes principles to directly address shrinkage-induced biases by exploiting score functions derived from the model's learning patterns. Through analytical exercises and experiments using Demographic and Health Survey data, we demonstrate that the proposed methods meet or outperform existing approaches that either require (a) adjustments to training pipelines or (b) additional labeled data. These approaches may represent a promising avenue for improving the reliability of causal inference when direct outcome measures are limited or unavailable, enabling a "one map, many trials" paradigm where a single upstream data creation team produces predictions usable by many downstream teams across diverse ML pipelines.
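The linear calibration correction the abstract describes, fitting a simple linear map from held-out predictions to ground truth and applying it to downstream predictions, can be sketched as follows. This is a minimal illustration under the assumption that an ordinary least-squares fit is used; the function and variable names are illustrative, not taken from the paper's code:

```python
import numpy as np

def linear_calibration(preds_cal, y_cal, preds_new):
    """Correct shrinkage toward the mean with a linear map fit on held-out data.

    preds_cal, y_cal: model predictions and ground truth on a calibration split.
    preds_new: downstream predictions to be corrected.
    """
    # Fit y ~ a * pred + b by ordinary least squares on the calibration split.
    a, b = np.polyfit(preds_cal, y_cal, deg=1)
    # Apply the same affine transform to the downstream predictions.
    return a * preds_new + b
```

Because regression toward the mean attenuates the slope of predictions against outcomes, the fitted coefficient `a` is typically greater than 1, re-expanding the predictions so that downstream treatment-effect estimates are no longer attenuated.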


ChatGPT Turned Into a Studio Ghibli Machine. How Is That Legal?

The Atlantic - Technology

A few weeks ago, OpenAI pulled off one of the greatest corporate promotions in recent memory. Whereas the initial launch of ChatGPT, back in 2022, was "one of the craziest viral moments i'd ever seen," CEO Sam Altman wrote on social media, the response to a new upgrade was, in his words, "biblical": 1 million users supposedly signed up to use the chatbot in just one hour, Altman reported, thanks to a new, more permissive image-generating capability that could imitate the styles of various art and design studios. Altman called it "a new high-water mark for us in allowing creative freedom." Almost immediately, images began to flood the internet. The most popular style, by a long shot, was that of Studio Ghibli, the Japanese animation studio co-founded by Hayao Miyazaki and widely beloved for films such as Spirited Away and Princess Mononoke.


An Automated Reinforcement Learning Reward Design Framework with Large Language Model for Cooperative Platoon Coordination

Wei, Dixiao, Yi, Peng, Lei, Jinlong, Hong, Yiguang, Du, Yuchuan

arXiv.org Artificial Intelligence

Reinforcement Learning (RL) has demonstrated excellent decision-making potential in platoon coordination problems. However, due to the variability of coordination goals, the complexity of the decision problem, and the time cost of trial-and-error in manual design, finding a well-performing reward function to guide RL training on complex platoon coordination problems remains challenging. In this paper, we formally define the Platoon Coordination Reward Design Problem (PCRDP), extending the RL-based cooperative platoon coordination problem to incorporate automated reward function generation. To address PCRDP, we propose a Large Language Model (LLM)-based Platoon coordination Reward Design (PCRD) framework, which systematically automates reward function discovery through LLM-driven initialization and iterative optimization. In this method, the LLM first initializes reward functions based on environment code and task requirements with an Analysis and Initial Reward (AIR) module, and then iteratively optimizes them based on training feedback with an evolutionary module. The AIR module guides the LLM to deepen its understanding of code and tasks through a chain of thought, effectively mitigating hallucination risks in code generation. The evolutionary module fine-tunes and reconstructs the reward function, achieving a balance between exploration diversity and convergence stability for training. To validate our approach, we establish six challenging coordination scenarios with varying complexity levels within the Yangtze River Delta transportation network simulation. Comparative experimental results demonstrate that RL agents using PCRD-generated reward functions consistently outperform those using human-engineered reward functions, achieving on average 10% higher performance metrics across all scenarios.
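The overall loop the abstract describes (AIR-style initialization followed by evolutionary refinement) can be sketched as below. This is a hedged outline, not the paper's released code: `llm` and `evaluate` are hypothetical callables, where `llm(prompt)` returns reward-function source text and `evaluate(src)` trains an RL agent with that reward and returns a scalar score.

```python
import random

def design_reward(llm, env_code, task_spec, evaluate, n_rounds=3, pop_size=4):
    """Sketch of an AIR-then-evolve reward search loop (hypothetical API)."""
    # AIR step: ask the LLM to reason about the environment and task first,
    # then generate an initial population of candidate reward functions.
    analysis = llm(f"Analyze this environment and task:\n{env_code}\n{task_spec}")
    population = [llm(f"Given the analysis:\n{analysis}\nWrite a reward function.")
                  for _ in range(pop_size)]
    best_src, best_score = None, float("-inf")
    for _ in range(n_rounds):
        # Score each candidate by training an RL agent with it.
        scored = sorted(((evaluate(src), src) for src in population),
                        key=lambda t: t[0], reverse=True)
        if scored[0][0] > best_score:
            best_score, best_src = scored[0]
        # Evolutionary step: keep the elites and ask the LLM to refine
        # a randomly chosen elite using the training feedback.
        elites = [src for _, src in scored[: pop_size // 2]]
        population = elites + [
            llm(f"Improve this reward (best score {best_score:.3f}):\n"
                f"{random.choice(elites)}")
            for _ in range(pop_size - len(elites))
        ]
    return best_src, best_score
```

In practice each `evaluate` call is a full (and expensive) RL training run, which is why the paper balances exploration diversity against convergence stability rather than simply enlarging the population.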


Scarlett Johansson warns of AI dangers, says 'there's no boundary here'

FOX News

AI expert Marva Bailer explains how, even though there are currently laws in place, the average person has more access than ever to create deepfakes of celebrities. Scarlett Johansson has taken a vocal stand on artificial intelligence after having her likeness and voice used without permission. Last year, Johansson said she had been asked to voice OpenAI's chatbot by CEO Sam Altman but turned down the job, only for people to notice that the feature, named "Sky," sounded almost exactly like the actress. "It was like: If that can happen to me, how are we going to protect ourselves from this? There's no boundary here; we're setting ourselves up to be taken advantage of," the 40-year-old told InStyle magazine earlier this month. In a statement to NPR following the release of "Sky," Johansson said, "When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference."


Scarlett Johansson warns of dangers of AI after Kanye West deepfake goes viral

The Guardian

Scarlett Johansson has warned of the "imminent dangers of AI" after a deepfake video of her and other prominent Jewish celebrities opposing recent antisemitic remarks from Kanye West went viral this week. The video contained AI-generated versions of more than a dozen celebrities, including Johansson, David Schwimmer, Jerry Seinfeld, Drake, Adam Sandler, Steven Spielberg, and Mila Kunis. It opened with a deepfake likeness of Johansson in a T-shirt emblazoned with a hand with its middle finger extended, a Star of David and the name Kanye. The video was set to "Hava Nagila", a Jewish folk song typically played at celebratory cultural events, and ended with the slogan: "Enough is enough." Other stars depicted included Sacha Baron Cohen, Jack Black, Natalie Portman, Adam Levine, Ben Stiller, and Lenny Kravitz. "It has been brought to my attention by family members and friends that an AI-generated video featuring my likeness, in response to an antisemitic view, has been circulating online and gaining traction," Johansson said in a statement to People. "I am a Jewish woman who has no tolerance for antisemitism or hate speech of any kind."


'You're gonna find this creepy': my AI-cloned voice was used by the far right. Could I stop it? Georgina Findlay

The Guardian

My brother held his phone up to my ear. "You're gonna find this creepy," he warned. An Instagram reel showing a teenage boy at a rally featured a voiceover in the style of a news broadcast. A calm, female voice, with an almost imperceptible Mancunian accent, said: "The recent outcry from a British student has become a powerful symbol of a deepening crisis in the UK's educational system." I sat bolt upright, my eyes wide open.



UK proposes letting tech firms use copyrighted work to train AI

The Guardian

The proposed changes seek to resolve a standoff between AI firms and creatives; Sir Paul McCartney has warned the technology "could just take over" without new laws. The proposal would also allow writers, artists and composers to "reserve their rights", declaring that they do not want their work used in an AI training process, or demanding a licence fee to permit it. "We're absolutely clear that this is about giving greater control in a difficult and complex set of circumstances to creators and rights holders, and we intend it to lead to more licensing of content, which is potentially a new revenue stream for creators," he said. The British composer Ed Newton-Rex, a key figure in the campaign by creative professionals for a fair deal, told the Guardian in October that opt-out schemes were "totally unfair" for creators.