ai creator
How I learned to stop worrying and love AI slop
Speaking with popular AI content creators convinces me that "slop" isn't just the internet rotting in real time, but the early draft of a new kind of pop culture. Lately, everywhere I scroll, I keep seeing the same fish-eyed CCTV view: a grainy wide shot from the corner of a living room, a driveway at night, an empty grocery store. JD Vance shows up at the doorstep in a crazy outfit. A car folds into itself like paper and drives away. A cat comes in and starts hanging out with capybaras and bears, as if in some weird modern fairy tale. This fake-surveillance look has become one of the signature flavors of what people now call AI slop. For those of us who spend time online watching short videos, slop feels inescapable: a flood of repetitive, often nonsensical AI-generated clips that washes across TikTok, Instagram, and beyond. For that, you can thank new tools like OpenAI's Sora (which exploded in popularity after launching in app form in September), Google's Veo series, and AI models built by Runway. Now anyone can make videos, with just a few taps on a screen.
- Asia > India (0.14)
- North America > United States > Massachusetts (0.04)
- North America > United States > California > San Bernardino County > Redlands (0.04)
- (2 more...)
- Media (0.94)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.50)
World's first beauty pageant for AI women is announced: 'Miss AI' contest will see computer-generated ladies face off in tests of beauty, technology and social-media clout - with a $20,000 prize at stake
Beauty, poise, and classical pageantry might not be what first springs to mind when you think of AI. But contestants in the world's first AI beauty pageant will need all of these in spades if they are to claim their share of a $20,000 (£16,000) prize pool. The Fanvue Miss AI pageant will see AI-generated ladies go head-to-head in front of a panel of judges, including two AI influencers. These synthetic competitors will be judged on beauty, social media clout and their creator's use of AI tools. Will Monanage, Fanvue Co-Founder, says he hopes that these events will 'become the Oscars of the AI creator economy.'
- Leisure & Entertainment (0.50)
- Media (0.36)
Trust AI Regulation? Discerning users are vital to build trust and effective AI regulation
Alalawi, Zainab, Bova, Paolo, Cimpeanu, Theodor, Di Stefano, Alessandro, Duong, Manh Hong, Domingos, Elias Fernandez, Han, The Anh, Krellner, Marcus, Ogbo, Bianca, Powers, Simon T., Zimmaro, Filippo
There is general agreement that some form of regulation is necessary both for AI creators to be incentivised to develop trustworthy systems, and for users to actually trust those systems. But there is much debate about what form these regulations should take and how they should be implemented. Most work in this area has been qualitative, and has not been able to make formal predictions. Here, we propose that evolutionary game theory can be used to quantitatively model the dilemmas faced by users, AI creators, and regulators, and provide insights into the possible effects of different regulatory regimes. We show that creating trustworthy AI and user trust requires regulators to be incentivised to regulate effectively. We demonstrate the effectiveness of two mechanisms that can achieve this. The first is where governments can recognise and reward regulators that do a good job. In that case, if the AI system is not too risky for users then some level of trustworthy development and user trust evolves. We then consider an alternative solution, where users can condition their trust decision on the effectiveness of the regulators. This leads to effective regulation, and consequently the development of trustworthy AI and user trust, provided that the cost of implementing regulations is not too high. Our findings highlight the importance of considering the effect of different regulatory regimes from an evolutionary game theoretic perspective.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > Italy > Emilia-Romagna > Metropolitan City of Bologna > Bologna (0.04)
- Law > Statutes (1.00)
- Government (1.00)
- Information Technology > Security & Privacy (0.93)
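The abstract's core dilemma can be illustrated with a minimal replicator-dynamics sketch. The two-population reduction and all payoff values below are illustrative assumptions of mine, not the paper's actual model or parameters; the point is only to show why, absent regulation, trustworthy development is never rewarded and user trust erodes with it:

```python
# Minimal replicator-dynamics sketch of the user/creator trust dilemma.
# Payoffs are hypothetical, chosen only to illustrate the unregulated case.

def replicator_step(x, y, dt=0.01, b=4.0, c=1.0, h=3.0, e=0.5):
    """x: share of users who trust; y: share of creators who are trustworthy.
    b: user benefit from a trustworthy interaction, h: user harm from an
    untrustworthy one, c: creator cost of trustworthy development,
    e: creator payoff from any trusting user (all assumed values)."""
    # Expected payoff to a trusting user, vs. payoff 0 for not trusting.
    f_trust = y * b - (1 - y) * h
    # Trustworthy creators pay cost c; untrustworthy ones free-ride on trust.
    f_trustworthy = x * e - c
    f_untrustworthy = x * e
    # Standard replicator update for each population share.
    x += dt * x * (1 - x) * f_trust
    y += dt * y * (1 - y) * (f_trustworthy - f_untrustworthy)
    return x, y

x, y = 0.5, 0.5  # start with mixed populations
for _ in range(10_000):
    x, y = replicator_step(x, y)
# With no regulator rewarding trustworthy development, y decays toward 0,
# and once trustworthy creators are rare, user trust x collapses as well.
```

The paper's proposed mechanisms amount to changing these payoffs: a government reward for effective regulators, or users conditioning trust on regulator performance, adds a term that makes `f_trustworthy` exceed `f_untrustworthy`, letting trust and trustworthiness co-evolve.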
"I'm Not Confident in Debiasing AI Systems Since I Know Too Little": Teaching AI Creators About Gender Bias Through Hands-on Tutorials
Zhou, Kyrie Zhixuan, Cao, Jiaxun, Yuan, Xiaowen, Weissglass, Daniel E., Kilhoffer, Zachary, Sanfilippo, Madelyn Rose, Tong, Xin
Gender bias is rampant in AI systems, causing bad user experience, injustices, and mental harm to women. School curricula fail to educate AI creators on this topic, leaving them unprepared to mitigate gender bias in AI. In this paper, we designed hands-on tutorials to raise AI creators' awareness of gender bias in AI and enhance their knowledge of sources of gender bias and debiasing techniques. The tutorials were evaluated with 18 AI creators, including AI researchers, AI industrial practitioners (i.e., developers and product managers), and students who had learned AI. Their improved awareness and knowledge demonstrated the effectiveness of our tutorials, which have the potential to complement the insufficient AI gender bias education in CS/AI courses. Based on the findings, we synthesize design implications and a rubric to guide future research, education, and design efforts.
- Asia > China (0.05)
- North America > United States > Illinois (0.04)
- Europe > United Kingdom (0.04)
- (8 more...)
- Questionnaire & Opinion Survey (1.00)
- Personal > Interview (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)
- (2 more...)
- Information Technology (1.00)
- Health & Medicine (1.00)
- Education > Curriculum > Subject-Specific Education (0.68)
- (2 more...)
OpenAI CEO Sam Altman Asks Congress to Regulate AI
OpenAI CEO Sam Altman made an appeal to members of Congress under oath: Regulate artificial intelligence. Altman, whose company is on the extreme forefront of generative A.I. technology with its ChatGPT tool, testified in front of the Senate Judiciary Committee for the first time in a Tuesday hearing. And while he said he is ultimately optimistic that innovation will benefit people on a grand scale, Altman echoed his previous assertion that lawmakers should create parameters for AI creators to avoid causing "significant harm to the world." "We think it can be a printing press moment," Altman said. "We have to work together to make it so."
- Asia > China (0.05)
- North America > United States > New York (0.05)
- Europe (0.05)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.94)
AI creator on the risks, opportunities and how it may make humans 'boring'
The entrepreneur is convinced that the scale of what's coming is enormous. He reckons that in 10 years' time, his company and fellow AI leaders, ChatGPT and DeepMind, will even be bigger than Google and Facebook. Predictions about technology are as tricky as predictions about politics - educated guesses that could turn out to be totally wrong. But what is clear is that a public conversation about the risks and realities of AI is now underway. We might be on the cusp of sweeping changes too big for any one company, country or politician to manage.
It's the end of the world as we know it: 'Godfather of AI' warns nation of trouble ahead
One of the world's foremost architects of artificial intelligence warned Wednesday that unexpectedly rapid advances in AI – including its ability to learn simple reasoning – suggest it could someday take over the world and push humanity toward extinction. Geoffrey Hinton, the renowned researcher and "Godfather of AI," quit his high-profile job at Google recently so he could speak freely about the serious risks that he now believes may accompany the artificial intelligence technology he helped usher in, including user-friendly applications like ChatGPT. Hinton, 75, gave his first public remarks about his concerns at the MIT Technology Review's AI conference. His comments appeared to rattle the audience of some of the nation's top tech creators and AI developers. Asked by the panel's moderator what was the "worst case scenario that you think is conceivable," Hinton replied without hesitation.
- North America > United States (0.48)
- Asia > China (0.05)
- Information Technology > Security & Privacy (0.95)
- Government > Regional Government (0.70)
Experts warn AI creators should study human consciousness in open letter
Twitter CEO Elon Musk provides insight on the consequences of developing artificial intelligence and the potential impact on elections on 'Tucker Carlson Tonight.' Academic leaders from around the world penned an open letter calling on artificial intelligence developers to learn more about consciousness as artificial intelligence (AI) systems advance rapidly, giving it a prominent place in our moral landscape and raising ethical, legal and political concerns. The Association for Mathematical Consciousness Science (AMCS), "a large community of over 150 international researchers who are spearheading mathematical and computational approaches to consciousness," published the letter Wednesday as "a wakeup call for the tech sector, the scientific community and society in general to take seriously the need to accelerate research in the field of consciousness science." Its writers referenced the recent letter written by leaders in tech that called for a pause in AI experiments, noting "we are living through an exciting and uncertain time in the development of artificial intelligence (AI) and other brain-related technologies" and warning that AI is "accelerating at a pace that far exceeds our progress in understanding their capabilities and their 'alignment' with human values." Signatories of the letter argue that language models like OpenAI's ChatGPT and Google's Bard are based on the neural networks of animal brains, but in the near future will be constructed to mimic "aspects of higher-level brain architecture and functioning."
AI Shouldn't Compete With Workers--It Should Supercharge Them
In 1950, Alan Turing famously created what's now known as the Turing Test, a way of deciding whether a computer is intelligent: if the computer could converse so fluently that it passed as a human, it counted as intelligent. Turing's test became the north star for generations of AI pioneers. For decades, they've labored mightily to mimic basic human skills, with wild success: We've now got AI that can hold conversations, draw pictures, or play expert rounds of chess, Go, and fast-paced video games. But now some AI thinkers wonder whether we've succeeded a little too well--at the wrong task.
- North America > United States > Colorado (0.06)
- Europe (0.06)
- Information Technology > Artificial Intelligence > History (0.92)
- Information Technology > Artificial Intelligence > Issues > Turing's Test (0.59)