More and more frequently, schools across the United States are turning to artificial intelligence-backed solutions to stop tragic acts of student violence. Companies like Bark Technologies, Gaggle.net, and Securly, Inc., use a combination of artificial intelligence (AI) and machine learning (ML), along with trained human safety experts, to scan student emails, texts, documents, and in some cases, social media activity. They look for warning signs of cyberbullying, sexting, drug and alcohol use, and depression, and flag students who may pose a violent risk not only to themselves but to classmates as well. Any potential problems discovered trigger alerts to school administrators, parents, or law enforcement officials, depending on the severity. Bark ran a pilot of its program with 25 schools in fall 2017.
On Feb. 14, 2018, a gunman killed 17 people at Marjory Stoneman Douglas High School in Parkland, Florida. The incident was the deadliest high school shooting in U.S. history, and in the year since, various tech companies across the nation have ramped up efforts to use artificial intelligence to prevent similar tragedies -- and they claim the systems are flagging many violent incidents before they happen. A new story by USA Today details several of the companies offering services that use AI to prevent school shootings. Bark's AI monitors students' text messages, emails, and social media accounts for signs of cyberbullying, drug use, depression, and other possible safety concerns, sending automatic alerts to officials in more than 1,100 school districts when it notes something suspicious.
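The tiered monitor-and-alert pipeline described above can be sketched in a few lines. This is a toy illustration only, not Bark's actual system, which relies on ML classifiers and human safety experts; the categories, trigger phrases, severity levels, and recipient lists below are all invented for the sketch.

```python
# Toy sketch of a monitor-and-alert pipeline with tiered escalation.
# All rules and recipients here are invented; a real service uses
# trained ML models plus human review, not keyword lists.

from dataclasses import dataclass, field

# Map each concern category to (severity, example trigger phrases).
RULES = {
    "violence":      (3, ["bring a gun", "hurt everyone"]),
    "self_harm":     (3, ["want to die", "kill myself"]),
    "drug_use":      (2, ["buy weed", "get high"]),
    "cyberbullying": (1, ["everyone hates you", "loser"]),
}

@dataclass
class Alert:
    category: str
    severity: int
    recipients: list = field(default_factory=list)

def flag_message(text: str) -> list:
    """Scan one message and return alerts routed by severity."""
    text_lower = text.lower()
    alerts = []
    for category, (severity, phrases) in RULES.items():
        if any(p in text_lower for p in phrases):
            # Mirror the tiered escalation in the article:
            # school officials first, then parents, then police.
            recipients = ["school_admin"]
            if severity >= 2:
                recipients.append("parents")
            if severity >= 3:
                recipients.append("law_enforcement")
            alerts.append(Alert(category, severity, recipients))
    return alerts
```

A message matching a low-severity rule only reaches school administrators, while one matching a severity-3 rule fans out to all three recipient tiers.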
Kimberly Krawczyk says she would do anything to keep her students safe. But one of the unconventional responses the local Broward County school district has said could stop another tragedy has left her deeply unnerved: an experimental artificial-intelligence system that would surveil her students more closely than ever before. The South Florida school system, one of the largest in the country, said last month it would install a camera-software system called Avigilon that would allow security officials to track students based on their appearance: with one click, a guard could pull up video of everywhere else a student has been recorded on campus. The 145-camera system, which administrators said will be installed around the perimeters of the schools deemed "at highest risk," will also automatically alert a school-monitoring officer when it senses events "that seem out of the ordinary" and people "in places they are not supposed to be." The supercharged surveillance network has raised major questions for some students, parents, and teachers like Krawczyk, who have voiced concerns about its accuracy, invasiveness, and effectiveness.
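The "pull up video of everywhere a student has been recorded" capability can be sketched as a nearest-neighbor lookup over per-sighting feature vectors. This is an illustrative guess at the general appearance-search technique, not Avigilon's actual implementation; the embeddings, camera names, and similarity threshold are all invented, and a real system would produce the vectors with a deep re-identification model rather than by hand.

```python
# Sketch of appearance search: each camera sighting is stored as a
# feature vector (embedding), and a query for one person retrieves
# every sighting whose vector is close to the query. Embeddings here
# are tiny made-up vectors for illustration.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def appearance_search(query_vec, sightings, threshold=0.9):
    """Return (camera, timestamp) for every sighting similar to the query."""
    return [
        (s["camera"], s["timestamp"])
        for s in sightings
        if cosine_similarity(query_vec, s["embedding"]) >= threshold
    ]

# Invented sightings: the first two depict the same person, the third does not.
sightings = [
    {"camera": "hall_A", "timestamp": "08:01", "embedding": [0.9, 0.1, 0.0]},
    {"camera": "gym",    "timestamp": "08:30", "embedding": [0.88, 0.12, 0.05]},
    {"camera": "hall_B", "timestamp": "09:00", "embedding": [0.1, 0.9, 0.2]},
]
hits = appearance_search([0.9, 0.1, 0.0], sightings)
```

The one-click search in the article corresponds to running this lookup with the embedding of a single flagged frame as the query.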
According to a story in The Japan Times, the school will feed the AI information about 9,000 suspected bullying cases reported by Otsu's elementary and junior high schools between 2012 and 2018. This information will include details on the students involved -- their ages, genders, absenteeism records, and academic achievements -- as well as when and where any bullying incidents took place. "Through an AI theoretical analysis of past data, we will be able to properly respond to cases without just relying on teachers' past experiences," Otsu Mayor Naomi Koshi said, according to The Japan Times. The hope is that the AI will allow school officials to identify the bullying cases that are likely to escalate in seriousness so that they can intervene and defuse the situation before it's too late. "Bullying may start from low-level friction in relationships, but can get worse day by day," an Otsu education board official said, according to The Japan Times.
The planned analysis is set to begin in the next fiscal year. AI will be used to analyze the 9,000 suspected bullying cases reported by elementary and junior high schools in the city over the six years through fiscal 2018, examining the school grade and gender of the suspected victims and perpetrators as well as when and where the incidents occurred. Statistical analysis of the data is expected to help local authorities and teachers identify forms of bullying that tend to escalate in seriousness and therefore require extra attention, the Otsu board of education said. The analysis will also consider other factors, such as school absenteeism and academic achievement, and the findings will be compiled into a report for use by teachers and in training seminars.
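The statistical approach the board describes, mining past cases for attributes associated with escalation, can be sketched as simple conditional-rate estimation. The case records below are fabricated for illustration; the real analysis draws on roughly 9,000 reports and considers more attributes than this toy example.

```python
# Sketch of the Otsu-style statistical analysis: estimate how often
# bullying cases with a given attribute value escalated in the past,
# so new reports with high-rate attributes can get extra attention.
# All case records below are fabricated.

from collections import defaultdict

def escalation_rates(cases, attribute):
    """For one attribute (e.g. 'location'), return P(escalated | value)."""
    counts = defaultdict(lambda: [0, 0])   # value -> [escalated, total]
    for case in cases:
        value = case[attribute]
        counts[value][1] += 1
        if case["escalated"]:
            counts[value][0] += 1
    return {v: esc / total for v, (esc, total) in counts.items()}

past_cases = [
    {"location": "online",    "grade": 8, "escalated": True},
    {"location": "online",    "grade": 7, "escalated": True},
    {"location": "classroom", "grade": 8, "escalated": False},
    {"location": "classroom", "grade": 7, "escalated": True},
    {"location": "hallway",   "grade": 9, "escalated": False},
]

rates = escalation_rates(past_cases, "location")
```

A teacher or official could then prioritize new reports whose attributes (location, grade, absenteeism pattern, and so on) historically carried high escalation rates, rather than relying on personal experience alone.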
YouTube said Friday it is retooling the recommendation algorithm that suggests new videos to users in order to stop promoting conspiracies and false information, reflecting a growing willingness to quell misinformation on the world's largest video platform after several public missteps. In a blog post that YouTube plans to publish Friday, the company said that it was taking a "closer look" at how it can reduce the spread of content that "comes close to -- but doesn't quite cross the line" of violating its rules. YouTube has been criticized for directing users toward conspiracies and false content even when they begin by watching legitimate news. The change to the company's recommendation algorithms is the result of a six-month-long technical effort. It will be small at first -- YouTube said it would apply to less than 1 percent of the content on the site -- and affects only English-language videos, meaning that much unwanted content will still slip through the cracks.
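One way a recommender can "reduce the spread" of borderline content without removing it is to demote it at ranking time. The sketch below assumes a separate classifier has already flagged borderline videos; the score fields and penalty factor are invented, since YouTube has not published how its actual demotion works.

```python
# Toy sketch of demoting flagged "borderline" content in a recommender:
# candidates are ranked by relevance score, and flagged videos have their
# score scaled down so they surface less often without being removed.
# The penalty factor is invented for illustration.

def rank_recommendations(candidates, borderline_penalty=0.1):
    """Sort candidate videos by score, scaling down flagged ones."""
    def effective_score(video):
        score = video["score"]
        if video.get("borderline"):
            score *= borderline_penalty
        return score
    return sorted(candidates, key=effective_score, reverse=True)

# A flagged video with a higher raw score still ranks below an
# unflagged one after the penalty is applied.
candidates = [
    {"id": "news_clip", "score": 0.7},
    {"id": "conspiracy_vid", "score": 0.9, "borderline": True},
]
ranked = rank_recommendations(candidates)
```

Demotion rather than deletion matches the article's framing: the content stays on the platform but is recommended far less often.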
A new breed of intelligent video surveillance is being installed in schools around the country -- tech that follows people around campus and detects unusual behaviors. Axios' Kaveh Waddell reports: This new phase in campus surveillance responds to high-profile school shootings like the one in Parkland, Florida, last February. School administrators are now reaching for security tech that keeps a constant, increasingly sophisticated eye on halls and classrooms. Background: Schools are experimenting wildly with technology in order to protect students, deploying facial recognition, license plate readers, microphones for gunshot detection, and even patrol robots. The tech the district wants is benignly branded "intelligent video analytics."
Christopher Ciabarra and Lisa Falcone are the inventors of Athena, a new security technology based on artificial intelligence. They say it is the first AI security camera system used to detect guns in schools. "It'll automatically call police if the administration wants it to. It comes in and you see it and you can click on the video," Ciabarra said.