authorship
User Negotiations of Authenticity, Ownership, and Governance on AI-Generated Video Platforms: Evidence from Sora
Shen, Bohui, Bhatta, Shrikar, Ireebanije, Alex, Liu, Zexuan, Choudhry, Abhinav, Gumusel, Ece, Zhou, Kyrie Zhixuan
As AI-generated video platforms rapidly advance, ethical challenges such as copyright infringement emerge. This study examines how users make sense of AI-generated videos on OpenAI's Sora by conducting a qualitative content analysis of user comments. Through a thematic analysis, we identified four dynamics that characterize how users negotiate authenticity, authorship, and platform governance on Sora. First, users acted as critical evaluators of realism, assessing micro-details such as lighting, shadows, fluid motion, and physics to judge whether AI-generated scenes could plausibly exist. Second, users increasingly shifted from passive viewers to active creators, expressing curiosity about prompts, techniques, and creative processes. Text prompts were perceived as intellectual property, generating concerns about plagiarism and remixing norms. Third, users reported blurred boundaries between real and synthetic media, worried about misinformation, and even questioned the authenticity of other commenters, suspecting bot-generated engagement. Fourth, users contested platform governance: some perceived moderation as inconsistent or opaque, while others shared tactics for evading prompt censorship through misspellings, alternative phrasing, emojis, or other languages. Despite this, many users also enforced ethical norms by discouraging the misuse of real people's images or disrespectful content. Together, these patterns highlighted how AI-mediated platforms complicate notions of reality, creativity, and rule-making in emerging digital ecosystems. Based on the findings, we discuss governance challenges in Sora and how user negotiations inform future platform governance.
- North America > United States > Texas (0.04)
- North America > United States > Illinois > Champaign County > Urbana (0.04)
- Asia > Middle East > Jordan (0.04)
- Africa > Nigeria (0.04)
- Media (1.00)
- Law > Intellectual Property & Technology Law (1.00)
The Ethics of Generative AI
This chapter discusses the ethics of generative AI. It provides a technical primer to show how generative AI affords experiencing technology as if it were human, and this affordance provides a fruitful focus for the philosophical ethics of generative AI. It then shows how generative AI can both aggravate and alleviate familiar ethical concerns in AI ethics, including responsibility, privacy, bias and fairness, and forms of alienation and exploitation. Finally, the chapter examines ethical questions that arise specifically from generative AI's mimetic generativity, such as debates about authorship and credit, the emergence of as-if social relationships with machines, and new forms of influence, persuasion, and manipulation.
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Netherlands > South Holland > Delft (0.04)
- Summary/Review (0.54)
- Research Report (0.50)
The author is dead, but what if they never lived? A reception experiment on Czech AI- and human-authored poetry
Marklová, Anna, Vinš, Ondřej, Vokáčová, Martina, Milička, Jiří
Large language models are increasingly capable of producing creative texts, yet most studies on AI-generated poetry focus on English -- a language that dominates training data. In this paper, we examine the perception of AI- and human-written Czech poetry. We ask whether Czech native speakers can identify the authorship of a poem and how they judge it aesthetically. Participants performed at chance level when guessing authorship (45.8\% correct on average), indicating that Czech AI-generated poems were largely indistinguishable from human-written ones. Aesthetic evaluations revealed a strong authorship bias: when participants believed a poem was AI-generated, they rated it less favorably, even though AI poems were in fact rated equally or more favorably than human ones on average. A logistic regression model showed that the more people liked a poem, the less likely they were to assign its authorship accurately. Familiarity with poetry or literary background had no effect on recognition accuracy. Our findings show that AI can convincingly produce poetry even in a morphologically complex Slavic language such as Czech, which is low-resource with respect to the training data of AI models. The results suggest that readers' beliefs about authorship and their aesthetic evaluation of a poem are interconnected.
- North America > United States > North Dakota > Billings County (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Europe > Czechia > Prague (0.04)
- Africa > Nigeria > Delta State > Abraka (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.54)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.47)
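The logistic-regression relationship reported in the abstract above (liking a poem predicts misattributing its authorship) can be sketched with a minimal one-feature model. This is an illustration on synthetic data, not the study's data or model; the coefficient values and the generating probabilities are invented for the example.

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit a one-feature logistic regression y ~ sigmoid(b0 + b1*x)
    with plain gradient descent (no external libraries)."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y)
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Synthetic illustration (not the study's data): higher aesthetic
# rating -> lower chance of guessing the authorship correctly.
random.seed(0)
ratings = [random.uniform(1, 7) for _ in range(500)]
correct = [1 if random.random() < (0.75 - 0.07 * r) else 0 for r in ratings]

b0, b1 = fit_logistic(ratings, correct)
print(b1)  # negative slope: the more a poem is liked, the lower the guess accuracy
```

A negative fitted slope `b1` is the synthetic analogue of the paper's finding that liking and attribution accuracy move in opposite directions.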
Towards Explainable Personalized Recommendations by Learning from Users' Photos
Díez, Jorge, Pérez-Núñez, Pablo, Luaces, Oscar, Remeseiro, Beatriz, Bahamonde, Antonio
Explaining the output of a complex system, such as a Recommender System (RS), is becoming of utmost importance for both users and companies. In this paper we explore the idea that personalized explanations can be learned as recommendations themselves. There are plenty of online services where users can upload photos in addition to rating items. We assume that users take these photos to reinforce or justify their opinions about the items. For this reason we try to predict which photo a user would take of an item, because that image is the argument that can best convince her of the qualities of the item. In this sense, an RS can explain its results and, therefore, increase its reliability. Furthermore, once we have a model to predict attractive images for users, we can estimate their distribution. The paper includes a formal framework that estimates the authorship probability for a given pair (user, photo). To illustrate the proposal, we use data gathered from TripAdvisor containing the reviews (with photos) of restaurants in six cities of different sizes.
Keywords: Recommender Systems, Personalization, Explainability, Photo, Collaborative
1. Introduction
Explainable Artificial Intelligence (XAI) is becoming an important area of interest, since explainability is increasingly necessary to meet stakeholder demands. In particular, the General Data Protection Regulation (GDPR) [29] of the European Union demands transparency in systems that make decisions affecting people, making explanations more needed than ever. Additionally, explanations may help increase users' trust in AI algorithms, since people rely not only on their efficacy but also on their understanding of the process these algorithms follow. Because RSs provide suggestions to users, explainability plays an important role in them.
- Europe > Spain > Galicia > Madrid (0.05)
- North America > United States > New York > New York County > New York City (0.04)
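The abstract above mentions a formal framework that scores the authorship probability of a (user, photo) pair without spelling it out. One common way to realize such a score, shown here purely as an illustrative stand-in for the unspecified framework, is a sigmoid over the dot product of learned user and photo embeddings; the vectors below are hypothetical toy values.

```python
import math

def authorship_probability(user_vec, photo_vec, bias=0.0):
    """Score how likely the user is the author of the photo, modeled
    here as a sigmoid over the dot product of embedding vectors.
    (Illustrative stand-in; the paper's actual framework is not
    reproduced in the abstract.)"""
    score = bias + sum(u * p for u, p in zip(user_vec, photo_vec))
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical embeddings: a user who favors food close-ups,
# scored against an aligned photo and a mismatched one.
user = [0.9, 0.1, -0.3]          # invented taste vector
food_closeup = [0.8, 0.2, -0.1]  # aligns with the user's taste
facade = [-0.7, 0.4, 0.5]        # does not

p_food = authorship_probability(user, food_closeup)
p_facade = authorship_probability(user, facade)
print(p_food > p_facade)  # True: the aligned photo gets the higher probability
```

Under this reading, the photo that best "argues" for an item to a given user is simply the one with the highest authorship probability for that user.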
EditLens: Quantifying the Extent of AI Editing in Text
Thai, Katherine, Emi, Bradley, Masrour, Elyas, Iyyer, Mohit
A significant proportion of queries to large language models ask them to edit user-provided text, rather than generate new text from scratch. While previous work focuses on detecting fully AI-generated text, we demonstrate that AI-edited text is distinguishable from human-written and AI-generated text. First, we propose using lightweight similarity metrics to quantify the magnitude of AI editing present in a text given the original human-written text and validate these metrics with human annotators. Using these similarity metrics as intermediate supervision, we then train EditLens, a regression model that predicts the amount of AI editing present within a text. Our model achieves state-of-the-art performance on both binary (F1=94.7%) and ternary (F1=90.4%) classification tasks in distinguishing human, AI, and mixed writing. Not only do we show that AI-edited text can be detected, but also that the degree of change made by AI to human writing can be detected, which has implications for authorship attribution, education, and policy. Finally, as a case study, we use our model to analyze the effects of AI-edits applied by Grammarly, a popular writing assistance tool. To encourage further research, we commit to publicly releasing our models and dataset.
- Europe > Austria > Vienna (0.14)
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- (11 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.93)
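The EditLens abstract above describes lightweight similarity metrics that quantify how far an edited text has drifted from the human original. One simple metric of that kind, shown here as a sketch (the paper's exact metrics are not specified in the abstract), is one minus the stdlib `difflib` sequence-match ratio:

```python
from difflib import SequenceMatcher

def edit_magnitude(original: str, edited: str) -> float:
    """Return a 0-1 score of how much `edited` diverges from
    `original`: 0 = identical, 1 = nothing in common.
    One lightweight similarity metric (difflib's ratio); an
    illustrative choice, not the paper's actual metric."""
    return 1.0 - SequenceMatcher(None, original, edited).ratio()

human = "The results was surprising and nobody expected them."
light = "The results were surprising and nobody expected them."
heavy = ("Remarkably, the findings defied every expectation, "
         "surprising all observers.")

print(edit_magnitude(human, human))  # 0.0 (no edit)
print(edit_magnitude(human, light) < edit_magnitude(human, heavy))  # True
```

A score like this can serve as the kind of intermediate supervision the abstract describes: a grammar fix yields a small magnitude, a full rewrite a large one.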
A vibe coding learning design to enhance EFL students' talking to, through, and about AI
Woo, David James, Guo, Kai, Yu, Yangyang
This innovative practice article reports on the piloting of vibe coding (using natural language to create software applications with AI) for English as a Foreign Language (EFL) education. We developed a human-AI meta-languaging framework with three dimensions: talking to AI (prompt engineering), talking through AI (negotiating authorship), and talking about AI (mental models of AI). Using backward design principles, we created a four-hour workshop where two students designed applications addressing authentic EFL writing challenges. We adopted a case study methodology, collecting data from worksheets and video recordings, think-aloud protocols, screen recordings, and AI-generated images. Contrasting cases showed one student successfully vibe coding a functional application cohering to her intended design, while another encountered technical difficulties with major gaps between intended design and actual functionality. Analysis reveals differences in students' prompt engineering approaches, suggesting different AI mental models and tensions in attributing authorship. We argue that AI functions as a beneficial languaging machine, and that differences in how students talk to, through, and about AI explain vibe coding outcome variations. Findings indicate that effective vibe coding instruction requires explicit meta-languaging scaffolding, teaching structured prompt engineering, facilitating critical authorship discussions, and developing vocabulary for articulating AI mental models.
- Research Report (0.84)
- Instructional Material > Course Syllabus & Notes (0.34)
- Leisure & Entertainment > Games (0.47)
- Education > Curriculum > Subject-Specific Education (0.47)
The Impact of Artificial Intelligence on Traditional Art Forms: A Disruption or Enhancement
Marella, Viswa Chaitanya, Erukude, Sai Teja, Veluru, Suhasnadh Reddy
The introduction of Artificial Intelligence (AI) into the domains of traditional art (visual arts, performing arts, and crafts) has sparked a complicated discussion about whether it is an agent of disruption or an enhancement of our traditional art forms. This paper examines the duality of AI, exploring the ways that recent technologies like Generative Adversarial Networks, Diffusion Models, and text-to-image generators are changing the fields of painting, sculpture, calligraphy, dance, music, and craft. Using examples and data, we illustrate the ways that AI can democratize creative expression, improve productivity, and preserve cultural heritage, while also examining the negative aspects, including threats to authenticity in art, ethical concerns around data, and socio-economic issues such as job losses. While we argue that the impact of AI is context-dependent (with potential for creative homogenization and the devaluation of human agency in artmaking), we also illustrate the potential for hybrid practices featuring AI in cuisine, etc. We advocate for the development of ethical guidelines, collaborative approaches, and inclusive technology development. In sum, we articulate a vision of AI in which it amplifies our innate creativity while resisting the displacement of the cultural, nuanced, and emotional aspects of traditional art. The future will be determined by human choices about how to govern AI so that it becomes a mechanism for artistic evolution and not a substitute for the artist's soul.
- North America > United States (0.14)
- Asia > India > Tamil Nadu > Vellore (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Asia > India > Telangana > Hyderabad (0.04)
- Leisure & Entertainment (1.00)
- Education (1.00)
- Government (0.68)
- (2 more...)
Human-AI Collaboration or Academic Misconduct? Measuring AI Use in Student Writing Through Stylometric Evidence
Oliveira, Eduardo Araujo, Mohoni, Madhavi, López-Pernas, Sonsoles, Saqr, Mohammed
Human-Artificial Intelligence (HAI) collaboration in writing offers opportunities to enhance efficiency and boost student confidence; however, it also carries risks, such as reduced creativity, over-reliance on AI-generated content, and threats to academic integrity (Kim & Lee, 2023). While the ethical use of AI in education is widely acknowledged as a way to enhance student learning (Cotton et al., 2023; Foltynek et al., 2023), the rise of Unauthorised Content Generation (UCG) presents a significant challenge to academic integrity. Measuring the extent and nature of HAI collaboration in academic contexts remains a critical challenge for educators, particularly as generative AI (genAI) tools become increasingly available and integrated into educational settings (Atchley et al., 2024; E. Oliveira et al., 2023). Distinguishing AI-generated text from human-authored content is necessary for understanding student learning behaviours, supporting skill development, and maintaining academic integrity. Analysing student writing patterns can help educators evaluate how students engage with AI tools, track their writing skill progression, and identify areas where additional support is needed (Pan et al., 2025). Existing detection tools for AI-assisted misconduct often lack reliability, explainability, and resilience to circumvention strategies such as paraphrasing (Cotton et al., 2023). These challenges highlight the need for innovative, transparent, and robust approaches to address the unacknowledged use of genAI in HAI collaboration within academic writing (Kasneci et al., 2023).
- Education > Educational Setting (1.00)
- Education > Curriculum > Subject-Specific Education (0.68)
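The stylometric approach named in the title above rests on extracting quantifiable features of writing style. As a minimal sketch (the excerpt does not list the paper's actual feature set), a few classic stylometric signals can be computed with the standard library alone:

```python
import re

def stylometric_features(text: str) -> dict:
    """Extract a few classic stylometric signals of the kind used
    to profile writing style (illustrative feature set; the paper's
    actual features are not listed in the excerpt)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        # mean words per sentence
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # lexical diversity: unique words over total words
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        # internal punctuation density
        "punct_per_word": sum(text.count(c) for c in ",;:()") / max(len(words), 1),
    }

sample = ("AI tools can help students draft essays. However, "
          "over-reliance may reduce creativity; style often flattens.")

feats = stylometric_features(sample)
print(sorted(feats))  # ['avg_sentence_len', 'punct_per_word', 'type_token_ratio']
```

Feature vectors like this, computed over a student's prior writing and a submitted draft, are the raw material a stylometric detector compares to flag distributional shifts.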
Authorship Without Writing: Large Language Models and the Senior Author Analogy
Hurshman, Clint, Mann, Sebastian Porsdam, Savulescu, Julian, Earp, Brian D.
Abstract: The use of large language models (LLMs) in bioethical, scientific, and medical writing remains controversial. While there is broad agreement in some circles that LLMs cannot count as authors, there is no consensus about whether and how humans using LLMs can count as authors. In many fields, authorship is distributed among large teams of researchers, some of whom -- including paradigmatic "senior authors" who guide and determine the scope of a project and ultimately vouch for its integrity -- may not write a single word. In this paper, we argue that LLM use (under specific conditions) is analogous to a form of senior authorship. On this view, the use of LLMs, even to generate complete drafts of research papers, can be considered a legitimate form of authorship according to the accepted criteria in many fields. We conclude that either such use should be recognized as legitimate, or current criteria for authorship require fundamental revision. AI use declaration: ChatGPT version 5 was used to help format Box 1. AI was not used for any other part of the preparation or writing of this manuscript. This is a preprint of a paper that has been submitted to a journal. It has not yet gone through peer review.
I. Introduction
The use of large language models (LLMs) in bioethics as well as scientific and medical writing continues to be controversial. Thus far, there has been broad agreement -- for example, among medical publishers -- that LLMs cannot count as authors, but there is still no consensus about the status of LLM-assisted text production as a form of writing, and by extension, the status of LLM users as authors. Here, we contribute to this debate by exploring -- and drawing analogies to -- the collaborative nature of writing, and the distributed character of authorship, in many domains of research.
- Europe > Denmark > Capital Region > Copenhagen (0.04)
- North America > United States > New York (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (2 more...)
- Education (1.00)
- Health & Medicine > Therapeutic Area > Immunology (0.68)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.46)
Agency Among Agents: Designing with Hypertextual Friction in the Algorithmic Web
Liu, Sophia, Almeda, Shm Garanganao
Today's algorithm-driven interfaces, from recommendation feeds to GenAI tools, often prioritize engagement and efficiency at the expense of user agency. As systems take on more decision-making, users have less control over what they see and how meaning or relationships between content are constructed. This paper introduces "Hypertextual Friction," a conceptual design stance that repositions classical hypertext principles--friction, traceability, and structure--as actionable values for reclaiming agency in algorithmically mediated environments. Through a comparative analysis of real-world interfaces--Wikipedia vs. Instagram Explore, and Are.na vs. GenAI image tools--we examine how different systems structure user experience, navigation, and authorship. We show that hypertext systems emphasize provenance, associative thinking, and user-driven meaning-making, while algorithmic systems tend to obscure process and flatten participation. We contribute: (1) a comparative analysis of how interface structures shape agency in user-driven versus agent-driven systems, and (2) a conceptual stance that offers hypertextual values as design commitments for reclaiming agency in an increasingly algorithmic web.
- North America > United States > California > Alameda County > Berkeley (0.14)
- North America > United States > New York > New York County > New York City (0.07)
- North America > United States > Illinois > Cook County > Chicago (0.05)
- (10 more...)
- Information Technology > Human Computer Interaction (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.95)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.32)