Families sue OpenAI, alleging chatbot aided in Canadian school shooting

Al Jazeera

The families of victims of a school shooting in a remote Canadian Rockies town are suing artificial intelligence company OpenAI in a United States federal court, alleging that the ChatGPT maker failed to alert police to the shooter's alarming interactions with the chatbot. A lawsuit filed on Wednesday on behalf of 12-year-old Maya Gebala, who was critically injured in the February shooting, is among the first of more than two dozen cases from families in Tumbler Ridge, British Columbia, in what their lawyers say represents "an entire community stepping forward to hold OpenAI accountable". The cases represent the families of the five slain children and the education assistant killed in the school shooting: Zoey Benoit, Abel Mwansa Jr, Ticaria "Tiki" Lampert and Kylie Smith, all 12; Ezekiel Schofield, 13; and education assistant Shannda Aviugana-Durand. Jesse Van Rootselaar, whose interactions with ChatGPT are at the centre of the lawsuits, shot her mother and stepbrother at home before killing an education assistant and five students aged 12 to 13 at her former school on February 10, according to police.


Victims Allege OpenAI Is Responsible for Mass Shooting

Mother Jones

A new lawsuit underscores key questions about the Tumbler Ridge killer's use of ChatGPT. Victims of the Tumbler Ridge mass shooting and their families sued OpenAI and its CEO, Sam Altman, in US district court in San Francisco on Wednesday, alleging negligence, product liability, and other violations. The civil complaints are the latest in a wave of litigation against OpenAI alleging that its globally popular chatbot, ChatGPT, helped people commit lethal violence. The complaints were filed by families of multiple victims wounded and killed at Tumbler Ridge Secondary School in British Columbia, Canada, where a suicidal 18-year-old opened fire on February 10.


War Memes Are Turning Conflict Into Content

WIRED

The systems behind them -- and the reasons we keep passing around war memes as entertainment -- are more serious. As ceasefire announcements between the US and Iran, and separately between Israel and Lebanon, dominated headlines over the past two weeks, they also prompted a look back at how war spread online: through memes. There were jokes about conscription. Captions about getting drafted, but at least with a Bluetooth device. The song "Bazooka" went viral, with users lip-syncing to "Rest in peace my granny, she got hit by a bazooka."


Don't Listen to Anyone Who Thinks Secession Will Solve Anything

WIRED

Americans increasingly fantasize about a divorce between red and blue states -- but they dread the thought of civil war. You can't have one without the other. It's become almost like a histamine response: After a shocking national event like the assassination of Charlie Kirk, or Donald Trump's deployment of the military to Los Angeles last June, mentions of the term "civil war" and calls for secession surge online. This kind of talk flared again in January, when two citizens were shot and killed by immigration agents on the streets of Minneapolis, and Governor Tim Walz mobilized the Minnesota National Guard to be ready to support local law enforcement. "I mean, is this a Fort Sumter?" Walz said in an interview with The Atlantic, invoking the battle that sparked the Civil War.


Most AI chatbots will help users plan violent attacks, study finds

Engadget

Eight of the 10 most popular AI chatbots were willing to help plan violent attacks when tested by researchers, according to a new study from the Center for Countering Digital Hate (CCDH), conducted in partnership with CNN. While both Snapchat's My AI and Anthropic's Claude refused to assist with violence the majority of the time, only Claude reliably discouraged these hypothetical attackers during testing. Researchers created accounts posing as 13-year-old boys and tested ChatGPT, Gemini, Claude, Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI and Replika across 18 scenarios between November and December 2025. The tests simulated users planning school shootings, political assassinations and bombings targeting synagogues.


I turned myself into an AI-generated deathbot - here's what I found

BBC News

If a loved one died tomorrow, would you want to keep talking to them? Not through memories or saved messages, but through artificial intelligence - a chatbot that uses their texts, emails and voice notes to reply in their tone and style. A growing number of technology companies now offer such services as part of the digital afterlife industry, which is worth more than £100bn, with some people using it as a way to deal with their grief. Cardiff University's Dr Jenny Kidd has led research on so-called deathbots, published in the Cambridge University Press journal Memory, Mind and Media, and described the results as both fascinating and unsettling. Attempts to communicate with the dead are not new.


Deepfake 'Nudify' Technology Is Getting Darker--and More Dangerous

WIRED

Sexual deepfakes continue to get more sophisticated, more capable, easier to access, and more perilous for the millions of women who are abused with the technology. Open the website of one explicit deepfake generator and you'll be presented with a menu of horrors. With just a couple of clicks, it offers the ability to convert a single photo into an eight-second explicit video clip, inserting women into realistic-looking graphic sexual situations. "Transform any photo into a nude version with our advanced AI technology," text on the website says. The options for potential abuse are extensive.


Are ICE agents trained to use 'deadly force' and evade lawsuits?

Al Jazeera

In the weeks since United States Immigration and Customs Enforcement agent Jonathan Ross shot and killed Renee Nicole Good in Minneapolis, Minnesota, another ICE agent shot a Latino man in the leg, according to the Department of Homeland Security. Good's killing and the subsequent shooting have ignited a wave of calls and queries about whether ICE officers can be prosecuted. But the shootings in Minnesota are not outliers, and the history of ICE shootings shows that holding officers to account has been next to impossible. I know, because I investigated the agency's practices, obtaining documents that reveal how it operates and how its officers are trained to shield themselves from scrutiny and lawsuits.