AI-Generated Content in Cross-Domain Applications: Research Trends, Challenges and Propositions

Li, Jianxin, Qu, Liang, Cai, Taotao, Zhao, Zhixue, Haldar, Nur Al Hasan, Krishna, Aneesh, Kong, Xiangjie, Macau, Flavio Romero, Chakraborty, Tanmoy, Deroy, Aniket, Lin, Binshan, Blackmore, Karen, Noman, Nasimul, Cheng, Jingxian, Cui, Ningning, Xu, Jianliang

arXiv.org Artificial Intelligence

Artificial Intelligence Generated Content (AIGC) has rapidly emerged with the capability to generate different forms of content, including text, images, videos, and other modalities, at a quality comparable to that of human-created content. As a result, AIGC is now widely applied across domains such as digital marketing, education, and public health, and has shown promising results by enhancing content creation efficiency and improving information delivery. However, few studies explore the latest progress and emerging challenges of AIGC across different domains. To bridge this gap, this paper brings together 16 scholars from multiple disciplines to provide a cross-domain perspective on the trends and challenges of AIGC. Specifically, the contributions of this paper are threefold: (1) It first provides a broad overview of AIGC, spanning the training techniques of Generative AI, detection methods, and both the spread and use of AI-generated content across digital platforms. (2) It then introduces the societal impacts of AIGC across diverse domains, along with a review of existing methods employed in these contexts. (3) Finally, it discusses the key technical challenges and presents research propositions to guide future work. Through these contributions, this vision paper seeks to offer readers a cross-domain perspective on AIGC, providing insights into its current research trends, ongoing challenges, and future directions.


Modeling Human Responses to Multimodal AI Content

Shen, Zhiqi, Fan, Shaojing, Xu, Danni, Sim, Terence, Kankanhalli, Mohan

arXiv.org Artificial Intelligence

As AI-generated content becomes widespread, so does the risk of misinformation. While prior research has primarily focused on identifying whether content is authentic, much less is known about how such content influences human perception and behavior. In domains like trading or the stock market, predicting how people react (e.g., whether a news post will go viral) can be more critical than verifying its factual accuracy. To address this, we take a human-centered approach and introduce the MhAIM Dataset, which contains 154,552 online posts (111,153 of them AI-generated), enabling large-scale analysis of how people respond to AI-generated content. Our human study reveals that people are better at identifying AI content when posts include both text and visuals, particularly when inconsistencies exist between the two. We propose three new metrics: trustworthiness, impact, and openness, to quantify how users judge and engage with online content. We present T-Lens, an LLM-based agent system designed to answer user queries by incorporating predicted human responses to multimodal information. At its core is HR-MCP (Human Response Model Context Protocol), built on the standardized Model Context Protocol (MCP), enabling seamless integration with any LLM. This integration allows T-Lens to better align with human reactions, enhancing both interpretability and interaction capabilities. Our work provides empirical insights and practical tools to equip LLMs with human-awareness capabilities. By highlighting the complex interplay among AI, human cognition, and information reception, our findings suggest actionable strategies for mitigating the risks of AI-driven misinformation.


The Evolution and Future Perspectives of Artificial Intelligence Generated Content

Zhu, Chengzhang, Cui, Luobin, Tang, Ying, Wang, Jiacun

arXiv.org Artificial Intelligence

Artificial intelligence generated content (AIGC), a rapidly advancing technology, is transforming content creation across domains such as text, images, audio, and video. Its growing potential has attracted increasing numbers of researchers and investors to explore and expand its possibilities. This review traces AIGC's evolution through four developmental milestones, ranging from early rule-based systems to modern transfer learning models, within a unified framework that highlights how each milestone contributes uniquely to content generation. In particular, the paper employs a common example across all milestones to illustrate the capabilities and limitations of the methods in each phase, providing a consistent evaluation of AIGC methodologies and their development. Furthermore, this paper addresses critical challenges associated with AIGC and proposes actionable strategies to mitigate them. This study aims to guide researchers and practitioners in selecting and optimizing AIGC models to enhance the quality and efficiency of content creation across diverse domains.


Strategic Application of AIGC for UAV Trajectory Design: A Channel Knowledge Map Approach

Zhang, Chiya, Wang, Ting, Han, Rubing, Gong, Yuanxiang

arXiv.org Artificial Intelligence

Unmanned Aerial Vehicles (UAVs) are increasingly utilized in wireless communication, yet accurate channel loss prediction remains a significant challenge, limiting resource optimization performance. To address this issue, this paper leverages Artificial Intelligence Generated Content (AIGC) for the efficient construction of Channel Knowledge Maps (CKM) and UAV trajectory design. Given the time-consuming nature of channel data collection, a Wasserstein Generative Adversarial Network (WGAN) is employed to extract environmental features and augment the data. Experimental results demonstrate the effectiveness of the proposed framework in improving CKM construction accuracy. Moreover, integrating CKM into UAV trajectory planning reduces channel gain uncertainty, demonstrating its potential to enhance wireless communication efficiency.
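The WGAN idea at the heart of this framework can be sketched in miniature. The toy below is a sketch under assumed values: the distributions, sample sizes, learning rate, and clipping bound are all illustrative placeholders, not the paper's architecture or channel data. It trains a linear WGAN critic with weight clipping to separate two synthetic 1-D distributions standing in for measured and generated channel samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for "real" measured channel gains and generator samples.
real = rng.normal(0.0, 1.0, size=256)
fake = rng.normal(3.0, 1.0, size=256)

# Linear critic f(x) = w * x.  WGAN trains the critic to maximize
# E[f(real)] - E[f(fake)] under a Lipschitz constraint, enforced here
# by weight clipping as in the original WGAN formulation.
w, clip, lr = 0.0, 0.01, 0.05
for _ in range(100):
    grad = real.mean() - fake.mean()  # d/dw of the critic objective
    w = float(np.clip(w + lr * grad, -clip, clip))

# The critic's value gap estimates the Wasserstein distance (up to scale).
gap = w * real.mean() - w * fake.mean()
print(abs(w) <= clip, gap > 0)
```

A real implementation would pit a neural critic against a generator in alternating updates; the point here is only the Wasserstein objective and the clipping constraint that make the critic's gap a usable distance signal for data augmentation.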


Generative AI for Accessible and Inclusive Extended Reality

Grubert, Jens, Chen, Junlong, Kristensson, Per Ola

arXiv.org Artificial Intelligence

Artificial Intelligence-Generated Content (AIGC) has the potential to transform how people build and interact with virtual environments. In this paper, we discuss the potential benefits, as well as the challenges, that AIGC presents for the creation of inclusive and accessible virtual environments. Specifically, we touch upon the decreased need for 3D modeling expertise, the benefits of symbolic-only as well as multimodal input, 3D content editing, and 3D model accessibility, along with foundation model-specific challenges.


Latency-Aware Resource Allocation for Mobile Edge Generation and Computing via Deep Reinforcement Learning

Wu, Yinyu, Zhang, Xuhui, Ren, Jinke, Xing, Huijun, Shen, Yanyan, Cui, Shuguang

arXiv.org Artificial Intelligence

Recently, the integration of mobile edge computing (MEC) and generative artificial intelligence (GAI) technology has given rise to a new area called mobile edge generation and computing (MEGC), which offers mobile users heterogeneous services such as task computing and content generation. In this letter, we investigate the joint allocation of communication, computation, and AIGC resources in an MEGC system. A latency minimization problem is first formulated to enhance the quality of service for mobile users. Due to the strong coupling of the optimization variables, we propose a new deep reinforcement learning-based algorithm to solve it efficiently. Numerical results demonstrate that the proposed algorithm can achieve lower latency than two baseline algorithms.
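As a rough illustration of why these variables are coupled, consider a toy latency model in which each user's delay is the sum of a transmission term, an edge-computing term, and a content-generation term. The function name and every number below are hypothetical assumptions for illustration, not the letter's system model.

```python
# Hypothetical toy latency model: each user's delay is transmission time
# (bits / rate) plus edge-computing time (cycles / CPU share) plus a
# fixed content-generation time for AIGC requests.

def user_latency(bits, rate_bps, cycles, cpu_hz, gen_s=0.0):
    """Total latency in seconds for one user's request."""
    return bits / rate_bps + cycles / cpu_hz + gen_s

# Two users split the channel rate evenly (5 Mbps each) and the edge
# CPU evenly (5 GHz each); user 2 also waits on content generation.
l1 = user_latency(2e6, 5e6, 4e9, 5e9, gen_s=0.2)
l2 = user_latency(1e6, 5e6, 2e9, 5e9, gen_s=0.5)

# Shifting rate toward the heavier user (6 vs 4 Mbps) cuts the
# worst-case latency; this coupling is what the DRL agent navigates.
l1b = user_latency(2e6, 6e6, 4e9, 5e9, gen_s=0.2)
l2b = user_latency(1e6, 4e6, 2e9, 5e9, gen_s=0.5)
print(max(l1, l2), max(l1b, l2b))
```

With many users, several resource types, and generation queues, this trade-off surface becomes too entangled for closed-form allocation, which is the motivation for a learned policy.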


Cloud-Edge-Terminal Collaborative AIGC for Autonomous Driving

Zhang, Jianan, Wei, Zhiwei, Liu, Boxun, Wang, Xiayi, Yu, Yong, Zhang, Rongqing

arXiv.org Artificial Intelligence

In dynamic autonomous driving environments, Artificial Intelligence-Generated Content (AIGC) technology can supplement vehicle perception and decision making by leveraging models' generative and predictive capabilities, and has the potential to enhance motion planning, trajectory prediction, and traffic simulation. This article proposes a cloud-edge-terminal collaborative architecture to support AIGC for autonomous driving. By delving into the unique properties of AIGC services, this article makes an initial attempt to construct mutually supportive AIGC and network systems for autonomous driving, including communication, storage, and computation resource allocation schemes to support AIGC services, and leveraging AIGC to assist system design and resource management.


Source Echo Chamber: Exploring the Escalation of Source Bias in User, Data, and Recommender System Feedback Loop

Zhou, Yuqi, Dai, Sunhao, Pang, Liang, Wang, Gang, Dong, Zhenhua, Xu, Jun, Wen, Ji-Rong

arXiv.org Artificial Intelligence

Recently, researchers have uncovered that neural retrieval models prefer AI-generated content (AIGC), a phenomenon called source bias. Compared to active search behavior, recommendation represents another important means of information acquisition, where users are more prone to source bias. Furthermore, delving into the recommendation scenario, as AIGC becomes integrated within the feedback loop involving users, data, and the recommender system, it progressively contaminates the candidate items, the user interaction history, and ultimately, the data used to train the recommendation models. How and to what extent source bias affects neural recommendation models within the feedback loop remains unknown. In this study, we extend the investigation of source bias into the realm of recommender systems, specifically examining its impact across different phases of the feedback loop. We conceptualize the progression of AIGC integration into the recommendation content ecosystem in three distinct phases: HGC dominance, HGC-AIGC coexistence, and AIGC dominance, each representing past, present, and future states, respectively. Through extensive experiments across three datasets from diverse domains, we demonstrate the prevalence of source bias and reveal a potential digital echo chamber with source bias amplification throughout the feedback loop. This trend risks creating a recommender ecosystem in which a limited set of information sources, such as AIGC, is disproportionately recommended. To counteract this bias and prevent its escalation in the feedback loop, we introduce a black-box debiasing method that maintains model impartiality towards both HGC and AIGC. Our experimental results validate the effectiveness of the proposed debiasing method, confirming its potential to disrupt the feedback loop.
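The amplification dynamic can be illustrated with a toy simulation. Everything below is an assumed sketch: the bias factor, pool sizes, and sampling rule are invented for illustration and bear no relation to the paper's models or datasets.

```python
import random

random.seed(0)

# Toy feedback loop: items are tagged by source (HGC vs AIGC).  A
# recommender with a multiplicative source bias oversamples AIGC, and
# recommended items flow back into the training pool, so the AIGC
# share of the data drifts upward over successive rounds.

def recommend_and_feed_back(pool, bias):
    weights = [bias if src == "AIGC" else 1.0 for src in pool]
    picks = random.choices(pool, weights=weights, k=200)
    return pool + picks  # user interactions become future training data

pool = ["HGC"] * 500 + ["AIGC"] * 500
start = pool.count("AIGC") / len(pool)  # 0.5 at the outset
for _ in range(10):
    pool = recommend_and_feed_back(pool, bias=1.5)
biased = pool.count("AIGC") / len(pool)

# With neutral weights (bias = 1.0) the share stays near its start.
pool = ["HGC"] * 500 + ["AIGC"] * 500
for _ in range(10):
    pool = recommend_and_feed_back(pool, bias=1.0)
fair = pool.count("AIGC") / len(pool)

print(start, round(biased, 2), round(fair, 2))
```

Holding the share near its starting point, as the neutral run does, is the outcome a debiasing method aims to preserve inside the real feedback loop; the paper's black-box method pursues this without access to model internals.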


TikTok Will Start Labeling AI-Generated Content to Combat Misinformation

TIME - Tech

TikTok will begin labeling content created using artificial intelligence when it's been uploaded from outside its own platform, in an attempt to combat misinformation. "AI enables incredible creative opportunities, but can confuse or mislead viewers if they don't know content was AI-generated," the company said in a prepared statement Thursday. "Labeling helps make that context clear – which is why we label AIGC made with TikTok AI effects, and have required creators to label realistic AIGC for over a year." TikTok's shift in policy is part of a broader attempt in the technology industry to provide more safeguards for AI usage. In February, Meta announced that it was working with industry partners on technical standards that will make it easier to identify images, and eventually video and audio, generated by artificial intelligence tools.


TikTok to auto-flag AI videos – even if created on other platforms

The Guardian

TikTok will flag artificial intelligence-generated content (AIGC) uploaded to the video-sharing site from other platforms, the company says, becoming the first big video site to automatically label such content for users to see. Content created using TikTok's own AI tools is already automatically marked as such to viewers, and the company has required creators to manually add the same labels to their own content, but until now they have been able to evade the rules and pass off generated material as authentic by uploading it from other platforms. Now, the company will begin using digital watermarks created by the cross-industry group Coalition for Content Provenance and Authenticity (C2PA) to identify and label as much AIGC as it can. "AI enables incredible creative opportunities but can confuse or mislead viewers if they don't know content was AI-generated," said Adam Presser, the head of operations and trust and safety at TikTok. "Labelling helps make that context clear – which is why we label AIGC made with TikTok AI effects, and have required creators to label realistic AIGC for over a year."