TIME100 Impact Dinner London: AI Leaders Discuss Responsibility, Regulation, and Text as a 'Relic of the Past'
On Wednesday, luminaries in the field of AI gathered at Serpentine North, a former gunpowder store turned exhibition space, for the inaugural TIME100 Impact Dinner London. Following a similar event held in San Francisco last month, the dinner convened influential leaders, experts, and honorees of TIME's 2023 and 2024 lists of the 100 most influential people in AI--all of whom are playing a role in shaping the future of the technology. After a discussion between TIME's CEO Jessica Sibley and executives from the event's sponsors--Rosanne Kincaid-Smith, group chief operating officer at Northern Data Group, and Jaap Zuiderveld, Nvidia's VP of Europe, the Middle East, and Africa--and once the main course had been served, attention turned to a panel discussion. The panel featured TIME100 AI honorees Jade Leung, CTO at the U.K. AI Safety Institute, an institution established last year to evaluate the capabilities of cutting-edge AI models; Victor Riparbelli, CEO and co-founder of the UK-based AI video communications company Synthesia; and Abeba Birhane, a cognitive scientist and adjunct assistant professor at the School of Computer Science and Statistics at Trinity College Dublin, whose research focuses on auditing AI models to uncover empirical harms. Moderated by TIME senior editor Ayesha Javed, the discussion focused on the current state of AI and its associated challenges, the question of who bears responsibility for AI's impacts, and the potential of AI-generated videos to transform how we communicate.
- Europe > United Kingdom (0.55)
- North America > United States > California > San Francisco County > San Francisco (0.25)
- Europe > Middle East (0.25)
- (4 more...)
'It's not me, it's just my face': the models who found their likenesses had been used in AI propaganda
The well-groomed young man dressed in a crisp blue shirt and speaking with a soft American accent seems an unlikely supporter of the junta leader of the West African state of Burkina Faso. "We must support … President Ibrahim Traoré … Homeland or death we shall overcome!" he says in a video that began circulating on Telegram in early 2023, just a few months after the dictator had come to power via a military coup. Other videos, fronted by different people with a similarly polished, professional appearance and repeating exactly the same script in front of the Burkina Faso flag, cropped up around the same time. On a verified account on X a few days later, the same young man, in the same blue shirt, claimed to be Archie, the chief executive of a new cryptocurrency platform. The videos were generated with artificial intelligence (AI) developed by a startup based in east London.
- Africa > Burkina Faso (0.48)
- Europe > United Kingdom > England > Greater London > London (0.25)
- North America > United States (0.15)
- (5 more...)
- Government (1.00)
- Media > News (0.48)
This AI Company Releases Deepfakes Into the Wild. Can It Control Them?
Erica is on YouTube, detailing how much it costs to hire a divorce attorney in the state of Massachusetts. Dr. Dass is selling private medical insurance in the UK. But Jason has been on Facebook spreading disinformation about France's relationship with its former colony, Mali. And Gary has been caught impersonating a CEO as part of an elaborate crypto scam. They're deepfakes, let loose into the wild by Victor Riparbelli, CEO of Synthesia.
- North America > United States > Massachusetts (0.27)
- Europe > United Kingdom (0.27)
- Europe > France (0.27)
- Africa > Mali (0.27)
- Banking & Finance (0.85)
- Information Technology > Security & Privacy (0.66)
Why authorized deepfakes are becoming big for business
"Deepfake implies unauthorized use of synthetic media and generative artificial intelligence -- we are authorized from the get-go," she told VentureBeat. She described the Tel Aviv- and New York-based Hour One as an AI company that has also "built a legal and ethical framework for how to engage with real people to generate their likeness in digital form." It's an important distinction in an era when deepfakes, or synthetic media in which a person in an existing image or video is replaced with someone else's likeness, have gotten a boatload of bad press -- not surprisingly, given their longstanding connection to revenge porn and fake news. The term "deepfake" can be traced to a Reddit user in 2017 named "deepfakes" who, along with others in the community, shared videos, many of which swapped celebrity faces onto the bodies of actresses in pornographic videos.
- North America > United States > New York (0.25)
- Asia > Middle East > Israel > Tel Aviv District > Tel Aviv (0.25)
- Media (1.00)
- Information Technology > Security & Privacy (1.00)
Deepfake reality check: AI avatars set to transform business and education outreach
In the digital age, the line between the material world and simulation often blurs. The realm of simulacra is increasingly intermeshed with virtual reality. The proliferation of synthetic media has created a vast range of possibilities, from deepfake-enabled political misinformation to a wholly new type of computer-generated Instagram influencer. Moving forward, synthetic media and artificial intelligence (AI) could transform the way companies target global audiences and deliver internal core-competency training. Digital education avatars could also boost engagement and memory retention in the virtual classroom.