AI generation
MIRAGE: Towards AI-Generated Image Detection in the Wild
Xia, Cheng, Lin, Manxi, Tan, Jiexiang, Du, Xiaoxiong, Qiu, Yang, Zheng, Junjun, Kong, Xiangheng, Jiang, Yuning, Zheng, Bo
The spread of AI-generated images (AIGI), driven by advances in generative AI, poses a significant threat to information security and public trust. Existing AIGI detectors, while effective on images in clean laboratory settings, fail to generalize to in-the-wild scenarios. These real-world images are noisy, ranging from "obviously fake" images to realistic ones derived from multiple generative models and further edited for quality control. In this paper, we address in-the-wild AIGI detection. We introduce Mirage, a challenging benchmark designed to emulate the complexity of in-the-wild AIGI. Mirage is constructed from two sources: (1) a large corpus of Internet-sourced AIGI verified by human experts, and (2) a synthesized dataset created through collaboration between multiple expert generators, closely simulating realistic AIGI in the wild. Building on this benchmark, we propose Mirage-R1, a vision-language model with heuristic-to-analytic reasoning, a reflective reasoning mechanism for AIGI detection. Mirage-R1 is trained in two stages: a supervised fine-tuning cold start, followed by a reinforcement learning stage. By further adopting an inference-time adaptive-thinking strategy, Mirage-R1 can provide either a quick judgment or a more robust and accurate conclusion, effectively balancing inference speed and performance. Extensive experiments show that our model outperforms state-of-the-art detectors by 5% and 10% on Mirage and the public benchmark, respectively. The benchmark and code will be made publicly available.
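The inference-time adaptive-thinking strategy described in the abstract, answering quickly when confident and falling back to reflective reasoning otherwise, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `quick_judge`, `reason_and_judge`, the `Verdict` type, and the confidence threshold are all hypothetical stand-ins.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    label: str         # e.g. "real" or "ai-generated"
    confidence: float  # model's self-reported confidence in [0, 1]

def adaptive_detect(
    quick_judge: Callable[[object], Verdict],
    reason_and_judge: Callable[[object], Verdict],
    image: object,
    threshold: float = 0.9,
) -> Verdict:
    """Return the fast verdict when it is confident enough; otherwise
    fall back to the slower reflective-reasoning pass."""
    quick = quick_judge(image)
    if quick.confidence >= threshold:
        return quick
    return reason_and_judge(image)
```

The routing rule is the whole trick: the expensive reasoning pass runs only on inputs the fast pass is unsure about, which is how such a scheme trades inference speed against accuracy.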
Single- vs. Dual-Prompt Dialogue Generation with LLMs for Job Interviews in Human Resources
De Baer, Joachim, Doğruöz, A. Seza, Demeester, Thomas, Develder, Chris
Optimizing language models for use in conversational agents requires large quantities of example dialogues. Increasingly, these dialogues are synthetically generated using powerful large language models (LLMs), especially in domains where authentic human data is difficult to obtain. One such domain is human resources (HR). In this context, we compare two LLM-based dialogue generation methods for the use case of generating HR job interviews, and assess whether one method generates higher-quality dialogues that are more challenging to distinguish from genuine human discourse. The first method uses a single prompt to generate the complete interview dialogue. The second method uses two agents that converse with each other. To evaluate dialogue quality under each method, we ask a judge LLM to determine whether AI was used for interview generation, using pairwise interview comparisons. We demonstrate that despite a sixfold increase in token cost, interviews generated with the dual-prompt method achieve a win rate up to ten times higher than those generated with the single-prompt method. This difference remains consistent regardless of whether GPT-4o or Llama 3.3 70B is used for either interview generation or judging quality.
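The two generation methods compared above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's code: `chat` stands in for any chat-completion call (for example, to GPT-4o or Llama 3.3 70B) that maps a message list to an assistant reply, and the prompts, turn count, and `judge_pair` wording are hypothetical.

```python
from typing import Callable, Dict, List

# A chat-completion function: list of {"role", "content"} messages -> reply text.
Chat = Callable[[List[Dict[str, str]]], str]

def single_prompt_interview(chat: Chat, role_desc: str) -> str:
    """Method 1: generate the complete interview transcript from one prompt."""
    prompt = (f"Write a complete job interview between an HR interviewer "
              f"and a candidate for the role: {role_desc}.")
    return chat([{"role": "user", "content": prompt}])

def dual_prompt_interview(chat: Chat, role_desc: str, turns: int = 4) -> str:
    """Method 2: let two separately prompted agents converse turn by turn."""
    interviewer = [{"role": "system", "content":
                    f"You are an HR interviewer hiring for: {role_desc}. "
                    f"Ask one question at a time."}]
    candidate = [{"role": "system", "content":
                  f"You are a candidate applying for: {role_desc}. "
                  f"Answer naturally."}]
    transcript, last_answer = [], ""
    for _ in range(turns):
        interviewer.append({"role": "user",
                            "content": last_answer or "Begin the interview."})
        question = chat(interviewer)
        interviewer.append({"role": "assistant", "content": question})
        transcript.append(f"Interviewer: {question}")
        candidate.append({"role": "user", "content": question})
        last_answer = chat(candidate)
        candidate.append({"role": "assistant", "content": last_answer})
        transcript.append(f"Candidate: {last_answer}")
    return "\n".join(transcript)

def judge_pair(chat: Chat, a: str, b: str) -> str:
    """Pairwise judging: ask a judge model which transcript was AI-generated."""
    prompt = ("One of the following interviews was AI-generated.\n"
              f"A:\n{a}\n\nB:\n{b}\n\nAnswer with 'A' or 'B'.")
    return chat([{"role": "user", "content": prompt}])
```

The dual-prompt method costs roughly one model call per turn per agent rather than one call total, which is consistent with the sixfold token-cost increase the abstract reports.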
- North America > Mexico > Mexico City > Mexico City (0.04)
- Europe > Belgium > Flanders > East Flanders > Ghent (0.04)
- Asia > Singapore (0.04)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
- Research Report (1.00)
- Personal > Interview (1.00)
How Does the Disclosure of AI Assistance Affect the Perceptions of Writing?
Li, Zhuoyan, Liang, Chen, Peng, Jing, Yin, Ming
Recent advances in generative AI technologies like large language models have boosted the incorporation of AI assistance in writing workflows, leading to the rise of a new paradigm of human-AI co-creation in writing. To understand how people perceive writings that are produced under this paradigm, in this paper, we conduct an experimental study to understand whether and how the disclosure of the level and type of AI assistance in the writing process would affect people's perceptions of the writing on various aspects, including their evaluation of the quality of the writing and their ranking of different writings. Our results suggest that disclosing the AI assistance in the writing process, especially if AI has provided assistance in generating new content, decreases the average quality ratings for both argumentative essays and creative stories. This decrease in average quality ratings often comes with increased variation in different individuals' quality evaluations of the same writing. Indeed, factors such as an individual's writing confidence and familiarity with AI writing assistants are shown to moderate the impact of AI assistance disclosure on their writing quality evaluations. We also find that disclosing the use of AI assistance may significantly reduce the proportion of writings produced with AI's content generation assistance among the top-ranked writings.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Connecticut (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
AI generations can be copyrighted now - on one condition
In a Statement of policy published earlier this month, the Office's Director Shira Perlmutter wrote: "In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of 'mechanical reproduction' or instead of an author's 'own original mental conception, to which [the author] gave visible form.'" Perlmutter explains that the analysis would proceed on a "case-by-case" basis to assess whether the human is the true author of the content. An example of a denied application would involve an AI receiving a prompt and generating new "complex written, visual, or musical" content on its own. Copyright could still apply, however, where a human creatively arranges AI-generated content or edits it further, so that the AI output serves merely as a template for further work. To help distinguish AI-generated from human-generated content, there have been discussions of watermarking work created by machines, but so far that has proven troublesome.
7 Artists for the AI Generation
David Hockney, one of the world's most famous living artists, is also a proponent of digital art. Hockney argues that significant technological advances occurred in the 15th century with the arrival of optical devices. Around the mid-15th century, a radical transformation in the visual quality of painting took place: what we would today call photorealism replaced the stylised rendering of artists such as Giotto. An understanding of optics and lenses gave artists a new way to capture the reality the eye could see.
Effidit: Your AI Writing Assistant
Shi, Shuming, Zhao, Enbo, Tang, Duyu, Wang, Yan, Li, Piji, Bi, Wei, Jiang, Haiyun, Huang, Guoping, Cui, Leyang, Huang, Xinting, Zhou, Cong, Dai, Yong, Ma, Dongyang
In this technical report, we introduce Effidit (Efficient and Intelligent Editing), a digital writing assistant that helps users write higher-quality text more efficiently using artificial intelligence (AI) technologies. Previous writing assistants typically provide error checking (to detect and correct spelling and grammatical errors) and limited text-rewriting functionality. With the emergence of large-scale neural language models, some systems support automatically completing a sentence or a paragraph. In Effidit, we significantly expand the capabilities of a writing assistant by providing functions in five categories: text completion, error checking, text polishing, keywords to sentences (K2S), and cloud input methods (cloud IME). In the text completion category, Effidit supports generation-based sentence completion, retrieval-based sentence completion, and phrase completion; in contrast, many other writing assistants so far provide only one or two of these three functions. For text polishing, we offer three functions: (context-aware) phrase polishing, sentence paraphrasing, and sentence expansion, whereas many other writing assistants support only one or two functions in this category. The main contents of this report include the major modules of Effidit, the methods for implementing these modules, and evaluation results for some key methods.
AI Generation: Learnings from Alliance4AI's First 100 Startups in Africa
Those who raise questions about Africa's preparedness for the fourth industrial revolution should look no further than the continent's young entrepreneurs, transcending tough resource constraints to lead a burgeoning AI startup ecosystem. Studies show that young people are more enthusiastic about technology. With the youngest and fastest-growing population on earth (Africa has a median age of 19 years compared to Europe's 41.8 years), there could be no better time for Africa than now. Young people in Africa are rising above infrastructure and resource constraints to create ingenious processes to adopt and apply the fourth industrial revolution technologies like artificial intelligence (AI) to generate more value for their localities. Since we founded the Alliance for Africa's Intelligence (Alliance4ai) one year ago, we have interacted with more than 100 AI startups to learn about their work and support them with a platform where they exchange knowledge and opportunities with students, schools and other players in the ecosystem.
The Next Big Privacy Hurdle? Teaching AI to Forget
When the European Union enacted the General Data Protection Regulation (GDPR) a year ago, one of the most revolutionary aspects of the regulation was the "right to be forgotten"--an often-hyped and debated right, sometimes perceived as empowering individuals to request the erasure of their information on the internet, most commonly from search engines or social networks. Darren Shou is vice president of research at Symantec. Since then, the issue of digital privacy has rarely been far from the spotlight. There is widespread debate in governments, boardrooms, and the media on how data is collected, stored, and used, and what ownership the public should have over their own information. But as we continue to grapple with this crucial issue, we've largely failed to address one of the most important aspects--how do we control our data once it's been fed into the artificial intelligence (AI) and machine-learning algorithms that are becoming omnipresent in our lives?