ai-generated
DHS is using Google and Adobe AI to make videos
Immigration agencies have been flooding social media with bizarre, seemingly AI-generated content. We now know more about what might be making it. The US Department of Homeland Security is using AI video generators from Google and Adobe to make and edit content shared with the public, a new document reveals. It comes as immigration agencies have flooded social media with content to support President Trump's mass deportation agenda, some of which appears to be made with AI, and as tech workers have put pressure on their employers to denounce the agencies' activities. The document, released on Wednesday, provides an inventory of the commercial AI tools DHS uses for tasks ranging from generating document drafts to managing cybersecurity. In a section about "editing images, videos or other public affairs materials using AI," it reveals for the first time that DHS is using Google's Veo 3 video generator and Adobe Firefly, estimating that the agency has between 100 and 1,000 licenses for the tools.
- North America > United States > Massachusetts (0.05)
- Asia > China (0.05)
How can you tell if your new favourite artist is a real person?
There's a new song doing the rounds, and in the immortal words of Kylie Minogue, you just can't get it out of your head. But what if it was created by a robot, or the artist themself is a product of artificial intelligence (AI)? Do streaming sites have an obligation to label music as AI-generated? And does it even matter, if you like what you hear?
- South America (0.14)
- North America > Central America (0.14)
- Europe > United Kingdom > Scotland (0.05)
- (15 more...)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
'We could have asked ChatGPT': students fight back over course taught by AI
Students at the University of Staffordshire have said they feel "robbed of knowledge and enjoyment" after a course they hoped would launch their digital careers turned out to be taught in large part by AI. James and Owen were among 41 students who took a coding module at Staffordshire last year, hoping to change careers through a government-funded apprenticeship programme designed to help them become cybersecurity experts or software engineers. But after a term of AI-generated slides being read, at times, by an AI voiceover, James said he had lost faith in the programme and the people running it, worrying he had "used up two years" of his life on a course that had been done "in the cheapest way possible". "If we handed in stuff that was AI-generated, we would be kicked out of the uni, but we're being taught by an AI," said James during a confrontation with his lecturer recorded as part of the course in October 2024. James and other students confronted university officials multiple times about the AI materials. But the university appears to still be using AI-generated materials to teach the course.
- Europe > United Kingdom > England > Staffordshire (0.88)
- North America > United States (0.30)
- Oceania > Australia (0.05)
- Europe > Ukraine (0.05)
- Government (1.00)
- Education > Educational Setting > Higher Education (1.00)
- Leisure & Entertainment > Sports (0.71)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.73)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.63)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.63)
Imagining a future where smart glasses allow 'AI slop' to be avoided
"Wearing the unsmart glasses created an entirely un-augmented reality." By the mid-2020s, the world was becoming swamped with "AI slop". Whether images, video, music, emails, ads, speeches or TV shows, many people's interactions were with asinine content generated by artificial intelligence. Sometimes the experience was fun and relatively harmless, but often it was tedious and brain-sapping. At worst, it could be dangerously misleading. Even engagements with other people became suspect: who knew if the person on the phone was real or not?
Can AI Models be Jailbroken to Phish Elderly Victims? An End-to-End Evaluation
We present an end-to-end demonstration of how attackers can exploit AI safety failures to harm vulnerable populations: from jailbreaking LLMs to generate phishing content, to deploying those messages against real targets, to successfully compromising elderly victims. We systematically evaluated safety guardrails across six frontier LLMs spanning four attack categories, revealing critical failures where several models exhibited near-complete susceptibility to certain attack vectors. In a human validation study with 108 senior volunteers, AI-generated phishing emails successfully compromised 11% of participants. Our work uniquely demonstrates the complete attack pipeline targeting elderly populations, highlighting that current AI safety measures fail to protect those most vulnerable to fraud. Beyond generating phishing content, LLMs enable attackers to overcome language barriers and conduct multi-turn trust-building conversations at scale, fundamentally transforming fraud economics. While some providers report voluntary counter-abuse efforts, we argue these remain insufficient.
- North America > United States > California (0.29)
- Asia > Southeast Asia (0.05)
- Asia > Vietnam > Hanoi > Hanoi (0.04)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
Everyone prefers human writers, including AI
Haverals, Wouter, Martin, Meredith
As AI writing tools become widespread, we need to understand how both humans and machines evaluate literary style, a domain where objective standards are elusive and judgments are inherently subjective. We conducted controlled experiments using Raymond Queneau's Exercises in Style (1947) to measure attribution bias across evaluators. Study 1 compared human participants (N=556) and AI models (N=13) evaluating literary passages from Queneau versus GPT-4-generated versions under three conditions: blind, accurately labeled, and counterfactually labeled. Study 2 tested bias generalization across a 14×14 matrix of AI evaluators and creators. Both studies revealed systematic pro-human attribution bias. Humans showed a +13.7 percentage point (pp) bias (Cohen's h = 0.28, 95% CI: 0.21-0.34), while AI models showed a +34.3pp bias (h = 0.70, 95% CI: 0.65-0.76), a 2.5-fold stronger effect (P < 0.001). Study 2 confirmed this bias operates across AI architectures (+25.8pp, 95% CI: 24.1-27.6%), demonstrating that AI systems systematically devalue creative content when labeled as "AI-generated" regardless of which AI created it. We also find that attribution labels cause evaluators to invert assessment criteria, with identical features receiving opposing evaluations based solely on perceived authorship. This suggests AI models have absorbed human cultural biases against artificial creativity during training. Our study represents the first controlled comparison of attribution bias between human and artificial evaluators in aesthetic judgment, revealing that AI systems not only replicate but amplify this human tendency.
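The effect sizes quoted in the abstract follow the standard definition of Cohen's h for a difference between two proportions, h = |2 arcsin(√p₁) − 2 arcsin(√p₂)|. A minimal sketch of the computation, using illustrative proportions (assumed here, not taken from the paper) centred on 50%, reproduces the reported magnitudes:

```python
import math

def cohens_h(p1: float, p2: float) -> float:
    """Cohen's h: effect size for the difference between two
    proportions, via the arcsine (variance-stabilising) transform."""
    return abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))

# Illustrative only: a +13.7pp gap centred on 50% (57% vs 43%)
# gives h close to the paper's reported 0.28 ...
print(round(cohens_h(0.570, 0.430), 2))

# ... and a +34.3pp gap centred on 50% gives h close to 0.70.
print(round(cohens_h(0.6715, 0.3285), 2))
```

Note that h depends on where the gap sits on the [0, 1] scale, not just its width, which is why the transform is used instead of a raw percentage-point difference.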
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Europe > Austria > Vienna (0.14)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- (15 more...)
- Transportation (1.00)
- Education (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.45)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
US investigators are using AI to detect child abuse images made by AI
Though artificial intelligence is fueling a surge in synthetic child abuse images, it's also being tested as a way to stop harm to real victims. Generative AI has enabled the production of child sexual abuse images to skyrocket. Now the leading investigator of child exploitation in the US is experimenting with using AI to distinguish AI-generated images from material depicting real victims, according to a new government filing. The Department of Homeland Security's Cyber Crimes Center, which investigates child exploitation across international borders, has awarded a $150,000 contract to San Francisco-based Hive AI for its software, which can identify whether a piece of content was AI-generated. The filing, posted on September 19, is heavily redacted. Hive cofounder and CEO Kevin Guo said he could not discuss the details of the contract, but confirmed it involves use of the company's AI detection algorithms for child sexual abuse material (CSAM). The filing quotes data from the National Center for Missing and Exploited Children, which reported a 1,325% increase in incidents involving generative AI in 2024.
- North America > United States > California > San Francisco County > San Francisco (0.25)
- North America > United States > Massachusetts (0.05)
- North America > United States > Illinois > Cook County > Chicago (0.05)
Wired and Business Insider remove articles by AI-generated 'freelancer'
Multiple news organisations have taken down articles written by an alleged freelance journalist that now appear to have been generated by AI. On Thursday, Press Gazette reported that at least six publications, including Wired and Business Insider, have removed articles from their websites in recent months after it was discovered that the stories – written under the name of Margaux Blanchard – were AI-generated. Wired published a story titled "They Fell in Love Playing Minecraft. A few weeks later, the outlet took down the story, stating in an editor's note: "After an additional review of the article … Wired editorial leadership has determined this article does not meet our editorial standards." The story cited a "Jessica Hu", an alleged 34-year-old "ordained officiant based in Chicago" who reportedly "made a name for herself as a 'digital celebrant', specialising in ceremonies across Twitch, Discord and VRChat", according to Press Gazette, which reviewed the Wired article. Neither Press Gazette nor the Guardian was able to verify the identity of Hu. Press Gazette further reported that in April, Business Insider published two essays by Blanchard titled: "Remote work has been the best thing for me as a parent but the worst as a person" and "I had my first kid at 45.
- North America > United States > Illinois > Cook County > Chicago (0.26)
- North America > United States > Utah (0.05)
- North America > United States > Colorado (0.05)
- Personal (0.50)
- Research Report (0.35)
- Leisure & Entertainment (0.50)
- Media > News (0.36)
- Law (0.30)
Singing Syllabi with Virtual Avatars: Enhancing Student Engagement Through AI-Generated Music and Digital Embodiment
In practical teaching, we observe that few students thoroughly read or fully comprehend the information provided in traditional, text-based course syllabi. As a result, essential details, such as course policies and learning outcomes, are frequently overlooked. To address this challenge, in this paper, we propose a novel approach leveraging AI-generated singing and virtual avatars to present syllabi in a format that is more visually appealing, engaging, and memorable. Specifically, we leveraged the open-source tool HeyGem to transform textual syllabi into audiovisual presentations, in which digital avatars perform the syllabus content as songs. The proposed approach aims to stimulate students' curiosity, foster emotional connection, and enhance retention of critical course information. Student feedback indicated that AI-sung syllabi significantly improved awareness and recall of key course information.
- North America > United States (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Research Report (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)
Pink Floppy Disc and The Bitles: Embracing the future of AI music
Feedback is New Scientist's popular sideways look at the latest science and technology news. You can submit items you believe may amuse readers to Feedback by emailing feedback@newscientist.com Feedback has been dimly aware for a while that there is a slew of AI-generated music swamping platforms like Spotify. Our awareness was limited, we confess, because we are so old that we still prefer to listen to CDs. Still, we weren't too surprised when New Scientist's Timothy Revell told us about an indie rock band called The Velvet Sundown that appears to be entirely AI-generated, from their songs, which sound like the beige love-children of Coldplay and the Eagles, to their uncanny-valley Instagram photos, which look like rejected concept art from Daisy Jones & the Six.
- South America > Peru (0.05)
- North America > United States > California (0.05)
- Europe > United Kingdom > England > Greater London > London > Wimbledon (0.05)
- (2 more...)
- Media > Music (1.00)
- Leisure & Entertainment > Sports > Tennis (0.30)