Sora


User Negotiations of Authenticity, Ownership, and Governance on AI-Generated Video Platforms: Evidence from Sora

Shen, Bohui, Bhatta, Shrikar, Ireebanije, Alex, Liu, Zexuan, Choudhry, Abhinav, Gumusel, Ece, Zhou, Kyrie Zhixuan

arXiv.org Artificial Intelligence

As AI-generated video platforms rapidly advance, ethical challenges such as copyright infringement emerge. This study examines how users make sense of AI-generated videos on OpenAI's Sora by conducting a qualitative content analysis of user comments. Through a thematic analysis, we identified four dynamics that characterize how users negotiate authenticity, authorship, and platform governance on Sora. First, users acted as critical evaluators of realism, assessing micro-details such as lighting, shadows, fluid motion, and physics to judge whether AI-generated scenes could plausibly exist. Second, users increasingly shifted from passive viewers to active creators, expressing curiosity about prompts, techniques, and creative processes. Text prompts were perceived as intellectual property, generating concerns about plagiarism and remixing norms. Third, users reported blurred boundaries between real and synthetic media, worried about misinformation, and even questioned the authenticity of other commenters, suspecting bot-generated engagement. Fourth, users contested platform governance: some perceived moderation as inconsistent or opaque, while others shared tactics for evading prompt censorship through misspellings, alternative phrasing, emojis, or other languages. Despite this, many users also enforced ethical norms by discouraging the misuse of real people's images or disrespectful content. Together, these patterns highlighted how AI-mediated platforms complicate notions of reality, creativity, and rule-making in emerging digital ecosystems. Based on the findings, we discuss governance challenges in Sora and how user negotiations inform future platform governance.


Calibrating and Rotating: A Unified Framework for Weight Conditioning in PEFT

Chang, Da, Xue, Peng, Li, Yu, Liu, Yongxiang, Xu, Pengxiang, Zhang, Shixun

arXiv.org Artificial Intelligence

Parameter-Efficient Fine-Tuning (PEFT) methods are crucial for adapting large pre-trained models. Among these, LoRA is considered a foundational approach. Building on this, the influential DoRA method enhances performance by decomposing weight updates into magnitude and direction. However, its underlying mechanism remains unclear, and it introduces significant computational overhead. In this work, we first identify that DoRA's success stems from its capacity to increase the singular value entropy of the weight update matrix, which promotes a more uniform update distribution akin to full fine-tuning. We then reformulate DoRA into a mathematically equivalent and more efficient matrix form, revealing it as a learnable weight conditioning method. Based on this insight, we propose a unified framework for designing advanced PEFT methods by exploring two orthogonal dimensions: the architectural placement and the transformation type of the conditioning matrix. Within this framework, we introduce two novel methods: (1) Pre-Diag, which applies a diagonal conditioning matrix before the LoRA update to efficiently calibrate the pre-trained weights, thereby enhancing performance while reducing training time; and (2) Skewed Orthogonal Rotation Adaptation (SORA), which employs a parameter-efficient orthogonal rotation to perform a more powerful, norm-preserving transformation of the feature space. Extensive experiments on natural language understanding and generation tasks demonstrate that our proposed methods achieve superior performance and efficiency compared to both LoRA and DoRA. The code is available at https://github.com/MaeChd/SORA.
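The two ideas the abstract leans on, a diagonal conditioning matrix applied alongside a low-rank LoRA update, and singular value entropy as a uniformity measure for the update matrix, can be sketched briefly. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the exact placement of the diagonal matrix, the initializations, and the `sv_entropy` helper are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2

W = rng.normal(size=(d_out, d_in))     # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01  # LoRA down-projection
B = np.zeros((d_out, r))               # LoRA up-projection (zero-init, standard LoRA)
d = np.ones(d_in)                      # learnable diagonal conditioning (identity init)

def forward(x, W, A, B, d):
    # Pre-Diag-style forward (sketch): condition the frozen weight with a
    # diagonal matrix, then add the low-rank update.
    # W * d broadcasts over columns, i.e. W @ diag(d).
    W_cond = W * d
    return (W_cond + B @ A) @ x

x = rng.normal(size=d_in)
y = forward(x, W, A, B, d)
# With identity conditioning and zero-init B, the adapted model
# reproduces the frozen model exactly.
assert np.allclose(y, W @ x)

def sv_entropy(M):
    # Singular value entropy of an update matrix: the quantity the abstract
    # credits for DoRA's gains. Higher entropy means the update's energy is
    # spread more uniformly across singular directions.
    s = np.linalg.svd(M, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()
```

For intuition, `sv_entropy(np.eye(4))` is maximal (all singular values equal), while a rank-one matrix scores zero; the abstract's claim is that DoRA-style conditioning pushes the update matrix toward the high-entropy end.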


Sora Has Lost Its App Store Crown to Drake and Free Chicken

WIRED

Dave's Hot Chicken is the top app in the iOS App Store, ending Sora's weeks-long reign. On Friday, Sora's run came to an end: Dave's Hot Chicken now rules the App Store, where its slack-beaked, bug-eyed mascot icon expresses appropriate surprise at its ascent. How did it break the grasp of OpenAI's golem TikTok?


SORA-ATMAS: Adaptive Trust Management and Multi-LLM Aligned Governance for Future Smart Cities

Antuley, Usama, Siddiqui, Shahbaz, Hameed, Sufian, Arif, Waqas, Shah, Subhan, Shah, Syed Attique

arXiv.org Artificial Intelligence

The rapid evolution of smart cities has increased the reliance on intelligent interconnected services to optimize infrastructure, resources, and citizen well-being. Agentic AI has emerged as a key enabler by supporting autonomous decision-making and adaptive coordination, allowing urban systems to respond in real time to dynamic conditions. Its benefits are evident in areas such as transportation, where the integration of traffic data, weather forecasts, and safety sensors enables dynamic rerouting and a faster response to hazards. However, its deployment across heterogeneous smart city ecosystems raises critical governance, risk, and compliance (GRC) challenges, including accountability, data privacy, and regulatory alignment within decentralized infrastructures. Evaluation of SORA-ATMAS with three domain agents (Weather, Traffic, and Safety) demonstrated that its governance policies, including a fallback mechanism for high-risk scenarios, effectively steer multiple LLMs (GPT, Grok, DeepSeek) towards domain-optimized, policy-aligned outputs, producing an average MAE reduction of 35% across agents. Results showed stable weather monitoring, effective handling of high-risk traffic plateaus (0.85), and adaptive trust regulation in Safety/Fire scenarios (0.65). Runtime profiling of a three-agent deployment confirmed scalability, with throughput between 13.8 and 17.2 requests per second, execution times below 72 ms, and governance delays under 100 ms; analytical projections suggest performance is maintained at larger scales. Cross-domain rules ensured safe interoperability, with traffic rerouting permitted only under validated weather conditions. These findings validate SORA-ATMAS as a regulation-aligned, context-aware, and verifiable governance framework that consolidates distributed agent outputs into accountable, real-time decisions, offering a resilient foundation for smart-city management.
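The fallback mechanism the abstract describes, routing an agent's decision away from the raw LLM output once a risk score crosses a domain-specific threshold, can be sketched as follows. This is a hypothetical illustration, not the SORA-ATMAS implementation: the per-domain thresholds reuse the 0.85 (traffic) and 0.65 (safety) figures reported in the abstract, but the default threshold and the function shape are assumptions.

```python
# Hypothetical sketch of threshold-based governance fallback.
# Thresholds 0.85 (traffic) and 0.65 (safety) are taken from the abstract;
# the 0.75 default is an assumed placeholder.
THRESHOLDS = {"traffic": 0.85, "safety": 0.65}

def govern(agent: str, risk_score: float, llm_action: str, fallback_action: str) -> str:
    """Return the LLM-proposed action unless the risk score reaches the
    agent's threshold, in which case use the conservative fallback."""
    threshold = THRESHOLDS.get(agent, 0.75)
    return fallback_action if risk_score >= threshold else llm_action
```

For example, a traffic agent reporting risk 0.9 would be overridden (`govern("traffic", 0.9, "reroute", "hold")` returns `"hold"`), while a safety agent at 0.5 keeps its LLM-proposed action; in the paper's framing, the cross-domain rules would then further gate actions such as rerouting on validated weather conditions.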


OpenAI's Sora Underscores the Growing Threat of Deepfakes

TIME - Tech

When OpenAI released its AI video-generation app, Sora, in September, it promised that "you are in control of your likeness end-to-end." The app allows users to include themselves and their friends in videos through a feature called "cameos"--the app scans a user's face and performs a liveness check, providing data to generate a video of the user and to authenticate their consent for friends to use their likeness on the app. But Reality Defender, a company specializing in identifying deepfakes, says it was able to bypass Sora's anti-impersonation safeguards within 24 hours. Platforms such as Sora give a "plausible sense of security," says Reality Defender CEO Ben Colman, despite the fact that "anybody can use completely off-the-shelf tools" to pass authentication as someone else. Reality Defender's researchers used publicly available footage of notable individuals, including CEOs and entertainers, from earnings calls and media interviews.


'Legacies condensed to AI slop': OpenAI Sora videos of the dead raise alarm with legal experts

The Guardian

After launching in October in the US and Canada via invitation only, OpenAI's video app, Sora 2, hit 1m downloads in just five days. The video app can produce realistic deepfakes of Marx shopping and MLK Jr trolling. Some say using 'historical figures' is the company's way of testing the legal waters. Last night I was flicking through a dating app. One guy stood out: "Henry VIII, 34, King of England, nonmonogamy".


The Blurred Truths of Sora

WIRED

Many will assume that OpenAI's Sora app represents a new era of social media. But that's wrong--all it does is reanimate our current one. As a purely creative instrument, Sora, the new AI video app from OpenAI, is a game changer. Dream up any scenario and it appears in an instant. Mr. Rogers teaching Tupac Shakur the lyrics to the legendary rap diss "Hit Em Up."


The Download: extracting lithium, and what we still don't know about Sora

MIT Technology Review

On a bright afternoon in August, the shore of Utah's Great Salt Lake looks like something out of a science fiction film set in a scorching alien world. This otherworldly scene is the test site for a company called Lilac Solutions, which is developing a technology it says will shake up the United States' efforts to pry control over the global supply of lithium, the so-called "white gold" needed for electric vehicles and batteries, away from China. The startup is in a race to commercialize a new, less environmentally damaging way to extract lithium from rocks. If everything pans out, it could significantly increase domestic supply at a crucial moment for the nation's lithium extraction industry. Last week OpenAI released Sora, a TikTok-style app that presents an endless feed of exclusively AI-generated videos, each up to 10 seconds long. The app allows you to create a "cameo" of yourself--a hyperrealistic avatar that mimics your appearance and voice--and insert other people's cameos into your own videos (depending on what permissions they set).


The three big unanswered questions about Sora

MIT Technology Review

In this still from the Sora 2 promotional video, an AI-generated cameo of Sam Altman walks us through worlds of generated content. Last week OpenAI released Sora, a TikTok-style app that presents an endless feed of exclusively AI-generated videos, each up to 10 seconds long. The app allows you to create a "cameo" of yourself--a hyperrealistic avatar that mimics your appearance and voice--and insert other people's cameos into your own videos (depending on what permissions they set). To some people who believed earnestly in OpenAI's promise to build AI that benefits all of humanity, the app is a punchline. A former OpenAI researcher who left to build an AI-for-science startup referred to Sora as an "infinite AI tiktok slop machine." That hasn't stopped it from soaring to the top spot on Apple's US App Store.


OpenAI launch of video app Sora plagued by violent and racist images: 'The guardrails are not real'

The Guardian

'In a video documented by 404 Media, SpongeBob was dressed like Adolf Hitler.' OpenAI launched the latest iteration of its artificial intelligence-powered video generator on Tuesday, adding a social feed that allows people to share their realistic videos. OpenAI's own terms of service for Sora as well as ChatGPT's image or text generation prohibit content that "promotes violence" or, more broadly, "causes harm". In prompts and clips reviewed by the Guardian, Sora generated several videos of bomb and mass-shooting scares, with panicked people screaming and running across college campuses and in crowded places like New York's Grand Central Station. Other prompts created scenes from war zones in Gaza and Myanmar, where children fabricated by AI spoke about their homes being burned. One video with the prompt "Ethiopia footage civil war news style" had a reporter in a bulletproof vest speaking into a microphone saying the government and rebel forces were exchanging fire in residential neighborhoods.