Artificial intelligence research has a slop problem, academics say: 'It's a mess'

The Guardian

AI research in question as author claims to have written over 100 papers on AI that one expert calls a 'disaster'. A single person claims to have authored 113 academic papers on artificial intelligence this year, 89 of which will be presented this week at one of the world's leading conferences on AI and machine learning, raising questions among computer scientists about the state of AI research. The author, Kevin Zhu, now runs Algoverse, an AI research and mentoring company for high schoolers. Zhu himself graduated from high school in 2018. Papers he has put out in the past two years cover subjects like using AI to locate nomadic pastoralists in sub-Saharan Africa, to evaluate skin lesions, and to translate Indonesian dialects.


These Democrats Think the Party Needs AI to Win Elections

WIRED

The 2024 election cycle saw artificial intelligence deployed by political campaigns for the very first time. While candidates largely avoided major mishaps, the tech was used with little guidance or restraint. Now, the National Democratic Training Committee (NDTC) is rolling out the first official playbook making the case that Democratic campaigns can use AI responsibly ahead of the midterms. In a new online training, the committee has laid out a plan for Democratic candidates to leverage AI to create social content, write voter outreach messages, and research their districts and opponents. Since NDTC's founding in 2016, the organization says, it has trained more than 120,000 Democrats seeking political office.


No One Is Ready for Digital Immortality

The Atlantic - Technology

Every few years, Hany Farid and his wife have the grim but necessary conversation about their end-of-life plans. They hope to have many more decades together--Farid is 58, and his wife is 38--but they want to make sure they have their affairs in order when the time comes. In addition to discussing burial requests and financial decisions, Farid has recently broached an eerier topic: If he dies first, would his wife want to digitally resurrect him as an AI clone? Farid, an AI expert at UC Berkeley, knows better than most that physical death and digital death are two different things. "My wife has my voice, my likeness, and a lot of my writings," he told me. "She could very easily train a large language model to be an interactive version of me."


'Inceptionism' and Balenciaga popes: a brief history of deepfakes

The Guardian

Concern about doctored or manipulative media is always high around election cycles, but 2024 will be different for two reasons: deepfakes made by artificial intelligence (AI) and the sheer number of polls. The term deepfake refers to a hoax that uses AI to create a phoney image, most commonly fake videos of people, with the effect often compounded by a voice component. Combined with the fact that around half the world's population is holding important elections this year – including India, the US, the EU and, most probably, the UK – there is potential for the technology to be highly disruptive. Here is a guide to some of the most effective deepfakes in recent years, including the first attempts to create hoax images. The banana: where it all began.


The Terrifying A.I. Scam That Uses Your Loved One's Voice

The New Yorker

On a recent night, a woman named Robin was asleep next to her husband, Steve, in their Brooklyn home, when her phone buzzed on the bedside table. Robin is in her mid-thirties with long, dirty-blond hair. She works as an interior designer, specializing in luxury homes. The couple had gone out to a natural-wine bar in Cobble Hill that evening, and had come home a few hours earlier and gone to bed. Their two young children were asleep in bedrooms down the hall.


Meta Will Crack Down on AI-Generated Fakes--but Leave Plenty Undetected

WIRED

Meta, like other leading tech companies, has spent the past year promising to speed up deployment of generative artificial intelligence. Today it acknowledged it must also respond to the technology's hazards, announcing an expanded policy of tagging AI-generated images posted to Facebook, Instagram, and Threads with warning labels to inform people of their artificial origins. Yet much of the synthetic media likely to appear on Meta's platforms is unlikely to be covered by the new policy, leaving many gaps through which malicious actors could slip. "It's a step in the right direction, but with challenges," says Sam Gregory, program director of the nonprofit Witness, which helps people use technology to support human rights. Meta already labels AI-generated images made using its own generative AI tools with the tag "Imagined with AI," in part by looking for the digital "watermark" its algorithms embed into their output.


Researchers Say the Deepfake Biden Robocall Was Likely Made With Tools From AI Startup ElevenLabs

WIRED

Last week, some voters in New Hampshire received an AI-generated robocall impersonating President Biden, telling them not to vote in the state's primary election. It's not clear who was responsible for the call, but two separate teams of audio experts tell WIRED it was likely created using technology from voice-cloning startup ElevenLabs. ElevenLabs markets its AI tools for uses like audiobooks and video games; it recently achieved "unicorn" status by raising $80 million at a $1.1 billion valuation in a new funding round co-led by venture firm Andreessen Horowitz. Anyone can sign up for the company's paid service and clone a voice from an audio sample. The company's safety policy says it is best to obtain someone's permission before cloning their voice, but that permissionless cloning can be OK for a variety of non-commercial purposes, including "political speech contributing to public debates." ElevenLabs did not respond to multiple requests for comment.


What the Doomsayers Get Wrong About Deepfakes

The New Yorker

With that sentence, written by the journalist Samantha Cole for the tech site Motherboard in December, 2017, a queasy new chapter in our cultural history opened. A programmer calling himself "deepfakes" told Cole that he'd used artificial intelligence to insert Gadot's face into a pornographic video. And he'd made others: clips altered to feature Aubrey Plaza, Scarlett Johansson, Maisie Williams, and Taylor Swift. Porn, as a Times headline once proclaimed, is the "low-slung engine of progress." It can be credited with the rapid spread of VCRs, cable, and the Internet--and with several important Web technologies.



Educators have said using ChatGPT is cheating, but now they are using AI to write syllabi and exams: Professor

FOX News

ChatGPT has proven it can help students with their homework, but now it is helping teachers create those very courses, a computer science professor told Fox News. As educators debate whether students should be allowed to use artificial intelligence for assignments, one professor told Fox News that teachers themselves are using the tech to help with their lessons. "I know faculty who are using ChatGPT to help write syllabi and to write exams," a University of California, Berkeley professor of computer science, Hany Farid, told Fox News. "I've seen professors using it to help design courses, write exam problems, write homework problems." "It is both an enabling and a potentially problematic technology," he continued.