Academic reputation is a key driver for colleges and universities to attract the best students, but campus safety is becoming an increasingly important factor in the college-choice equation. Topping the list of safety concerns is the threat of gun violence at schools, which struck on a nearly weekly basis in the past year in the United States. Of the 45 school shootings that took place up to Nov. 19, 2019, 14 occurred on higher education campuses, according to a CNN report. The active-shooter-on-campus threat is real, and it's the primary threat that keeps campus safety leaders up at night thinking of new ways to detect, deter and react to such incidents. Other top campus crime concerns include burglaries, forcible sex offenses, vehicle theft, assault and robberies.
LOUISVILLE – Even after Kentucky High School Athletic Association Commissioner Julian Tackett sent out an email notifying school officials that esports teams may not participate in the video game "Fortnite," nothing changed for schools here. That's because "Fortnite," an online video game developed by Epic Games and released in 2017, was never included among the games played by Kentucky students in high school competitions. "Fortnite" is a third-person shooter game that doesn't include any blood, injuries or dead bodies, but nevertheless was given a Teen rating for violence by the Entertainment Software Rating Board. Epic Games and PlayVS, a software company that provides a platform for competitive esports, announced a partnership last Wednesday to introduce a competitive league for "Fortnite" across high schools and colleges. "There is no place for shooter games in our schools," Tackett said, adding that the KHSAA and the National Federation of State High School Associations had no knowledge that "Fortnite" was being added as part of the competition platform and are "strongly against it."
As mass shootings at US schools increase in frequency while our country's gun control laws remain weaker than those in any other developed nation, more school administrators across the US are turning to artificially intelligent surveillance tools in an attempt to beef up school safety. But systems that allow schools to easily track people on campus have left some worried about the impact on student privacy. Recode has identified at least nine US public school districts -- including the district home to Marjory Stoneman Douglas High School (MSD) in Parkland, Florida, which in 2018 experienced one of the deadliest school shootings in US history -- that have acquired analytic surveillance cameras that come with new, AI-based software, including one tool called Appearance Search. Appearance Search can find people based on their age, gender, clothing, and facial characteristics, and it scans through videos like facial recognition tech -- though the company that makes it, Avigilon, says it doesn't technically count as a full-fledged facial recognition tool. Even so, privacy experts told Recode that, for students, the distinction doesn't necessarily matter.
After a school shooting in Parkland, Florida left 17 people dead, RealNetworks decided to make its facial recognition technology available for free to schools across the US and Canada. If school officials could detect strangers on their campuses, they might be able to stop shooters before they got to a classroom. Anxious to keep children safe from gun violence, thousands of schools reached out with interest in the technology. Dozens started using SAFR, RealNetworks' facial recognition technology. From working with schools, RealNetworks, the streaming media company, says it's learned an important lesson: Facial recognition likely isn't an effective tool for preventing shootings.
For years, the Denver public school system worked with Video Insight, a Houston-based video management software company that centralized the storage of video footage used across its campuses. So when Panasonic acquired Video Insight, school officials simply transferred the job of updating and expanding their security system to the Japanese electronics giant. That meant new digital HD cameras and access to more powerful analytics software, including Panasonic's facial recognition, a tool the public school system's safety department is now exploring. Denver, where some activists are pushing for a ban on government use of facial recognition, is not alone. Mass shootings have put school administrators across the country on edge, and they're understandably looking at anything that might prevent another tragedy.
A group of veterans, inspired by the need to keep schools and public spaces safer, has created a new technology they say can detect guns and send out alerts before shots are ever fired. Active shooter situations have played out across the country – a gunman opened fire inside a Florida high school, shots rang out at a Texas Walmart and multiple people were shot to death in an office building in Virginia Beach. The nation's most recent school shooting happened Thursday morning, when a 16-year-old high school student in Santa Clarita, California, opened fire in the campus quad, shooting five classmates and killing two. What if a gun were detected early – so early that the shooter never got inside to hurt anyone? The technology to do that exists, and only WUSA9 was there when it was tested in Northern Virginia.
For Adam Jasinski, a technology director for a school district outside of St Louis, Missouri, monitoring student emails used to be a time-consuming job. Jasinski used to do keyword searches of the official school email accounts for the district's 2,600 students, looking for words like "suicide" or "marijuana". Then he would have to read through every message that included one of the words. The process would occasionally catch some concerning behavior, but "it was cumbersome", Jasinski recalled. Last year Jasinski heard about a new option: following the school shooting in Parkland, Florida, the technology company Bark was offering schools free, automated, 24-hour-a-day surveillance of what students were writing in their school emails, shared documents and chat messages, and sending alerts to school officials any time the monitoring technology flagged concerning phrases.
People have long blamed video games as a cause of school shootings, but a new study has found that this is more likely to be the case if the perpetrator is white. Researchers have found that video games are eight times more likely to be mentioned when the perpetrator was a white male than if the shooter were an African American male. Experts believe the public looks to find an explanation for this type of behavior if the act is carried out by someone who doesn't match the racial stereotype of a violent person. Although many politicians and media outlets point to violent video games as the cause of school shootings, experts have yet to find scientific evidence to support these claims. 'Video games are often used by lawmakers and others as a red herring to distract from other potential causes of school shootings,' said lead researcher Patrick Markey, PhD, a psychology professor at Villanova University.
Thieves used voice-mimicking software to imitate a company executive's speech and dupe his subordinate into sending hundreds of thousands of dollars to a secret account, the company's insurer said, in a remarkable case that some researchers are calling one of the world's first publicly reported artificial-intelligence heists. The managing director of a British energy company, believing his boss was on the phone, followed orders one Friday afternoon in March to wire more than $240,000 to an account in Hungary, said representatives from the French insurance giant Euler Hermes, which declined to name the company. The request was "rather strange," the director noted later in an email, but the voice was so lifelike that he felt he had no choice but to comply. The insurer, whose case was first reported by the Wall Street Journal, provided new details on the theft to The Washington Post on Wednesday, including an email from the employee tricked by what the insurer is referring to internally as "the false Johannes." Now being developed by a wide range of Silicon Valley titans and AI start-ups, such voice-synthesis software can copy the rhythms and intonations of a person's voice and be used to produce convincing speech. Tech giants such as Google and smaller firms such as the "ultrarealistic voice cloning" start-up Lyrebird have helped refine the resulting fakes and made the tools more widely available free for unlimited use. But the synthetic audio and AI-generated videos, known as "deepfakes," have fueled growing anxieties over how the new technologies can erode public trust, empower criminals and make traditional communication -- business deals, family phone calls, presidential campaigns -- that much more vulnerable to computerized manipulation.