

SoundThinking, Maker of ShotSpotter, Is Buying Parts of PredPol Creator Geolitica

WIRED

SoundThinking, the company behind the gunshot-detection system ShotSpotter, is quietly acquiring staff, patents, and customers of the firm that created the notorious predictive policing software PredPol, WIRED has learned. In an August earnings call, SoundThinking CEO Ralph Clark announced to investors that the company was negotiating an agreement to acquire parts of Geolitica--formerly called PredPol--and transition its customers to SoundThinking's own "patrol management" solution. "We have already hired their engineering team," Clark said during the call, a transcript of which is public. He added that the acquisition of patents and staff would "facilitate our application of AI and machine learning technology to public safety." SoundThinking's absorption of Geolitica marks its latest step in becoming the Google of crime fighting--a one-stop shop for policing tools.


The Age of the Videogame

#artificialintelligence

The history of decision-making has always been intrinsically tied to the history of technology. Charts and compasses have guided explorers for centuries, and a level is an indispensable instrument for construction workers. New tools allow us to make more informed choices, which, in turn, may positively influence technological advancement. This dependence suggests that a change in the technological landscape will have implications for how we make decisions. The last half-century has seen one of the most radical revolutions: the emergence of artificial intelligence (AI), powered by the ever-increasing data we gather.


The Humanities Can't Save Big Tech From Itself

WIRED

The problem with tech, many declare, is its quantitative inclination, its "hard" math deployed in the softer human world. Tech is Mark Zuckerberg: all turning pretty girls into numbers and raving about the social wonders of the metaverse while so awkward in every human interaction that he is instantly memed. The human world contains Zuck, but it is also everything he fails at so spectacularly. That failure, the lack of social and ethical chops, is one many believe he shares with the industry with which he is so associated. And so, because Big Tech is failing at understanding humans, we often hear that its workforce simply needs to employ more people who do understand.


Police Use of Artificial Intelligence: 2021 in Review - Activist Post

#artificialintelligence

Decades ago, when imagining the practical uses of artificial intelligence, science fiction writers imagined autonomous digital minds that could serve humanity. Sure, sometimes a HAL 9000 or WOPR would subvert expectations and go rogue, but that was very much unintentional, right? And for many aspects of life, artificial intelligence is delivering on its promise. AI is, as we speak, looking for evidence of life on Mars. Scientists are using AI to try to develop more accurate and faster ways to predict the weather.


The effect of differential victim crime reporting on predictive policing systems

Akpinar, Nil-Jana, De-Arteaga, Maria, Chouldechova, Alexandra

arXiv.org Machine Learning

Police departments around the world have been experimenting with forms of place-based, data-driven proactive policing for over two decades. Modern incarnations of such systems are commonly known as hot spot predictive policing. These systems predict where future crime is likely to concentrate so that police can allocate patrols to those areas and deter crime before it occurs. Previous research on fairness in predictive policing has concentrated on the feedback loops that occur when models are trained on discovered crime data, but it has limited implications for models trained on victim crime reporting data. We demonstrate how differential victim crime reporting rates across geographical areas can lead to outcome disparities in common crime hot spot prediction models. Our analysis is based on a simulation patterned after district-level victimization and crime reporting survey data for Bogotá, Colombia. Our results suggest that differential crime reporting rates can lead to a displacement of predicted hot spots from high-crime, low-reporting areas to high- or medium-crime, high-reporting areas. This may lead to misallocations both in the form of over-policing and under-policing.
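The displacement mechanism the abstract describes can be illustrated with a minimal toy simulation. The district names, crime rates, and reporting rates below are invented for illustration and are not the Bogotá survey figures used in the paper; the point is only that ranking areas by *reported* crime rather than *true* crime shifts the predicted hot spots toward high-reporting districts.

```python
import random

random.seed(42)

# Hypothetical districts: (name, true crimes per week, victim reporting rate).
# Values are illustrative only, not the paper's survey data.
districts = [
    ("A", 50, 0.35),  # high crime, low reporting
    ("B", 45, 0.40),  # high crime, low reporting
    ("C", 30, 0.85),  # medium crime, high reporting
    ("D", 28, 0.90),  # medium crime, high reporting
    ("E", 10, 0.80),  # low crime
]

def reported_per_week(true_crimes, report_rate, weeks=500):
    """Average weekly reports: each true crime is reported independently
    with probability report_rate (a simple Bernoulli thinning)."""
    total = 0
    for _ in range(weeks):
        total += sum(1 for _ in range(true_crimes) if random.random() < report_rate)
    return total / weeks

def top_two(scores):
    """The two districts a naive hot spot model would flag."""
    return {name for name, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:2]}

true_scores = {name: rate for name, rate, _ in districts}
report_scores = {name: reported_per_week(rate, p) for name, rate, p in districts}

# The true hot spots are the high-crime districts A and B, but a model
# trained on reports flags the high-reporting districts C and D instead.
print("true hot spots:     ", top_two(true_scores))
print("predicted hot spots:", top_two(report_scores))
```

Under these made-up parameters the expected weekly reports are 17.5 and 18 for A and B but 25.5 and 25.2 for C and D, so the report-trained model consistently patrols the wrong districts: the over-/under-policing misallocation the authors describe.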


Why Hundreds of Mathematicians Are Boycotting Predictive Policing

#artificialintelligence

Several prominent academic mathematicians want to sever ties with police departments across the U.S., according to a letter submitted to Notices of the American Mathematical Society on June 15. The letter arrived weeks after widespread protests against police brutality and has inspired more than 1,500 other researchers to join the boycott. These mathematicians are urging fellow researchers to stop all work related to predictive policing software, which broadly includes any data analytics tools that use historical data to help forecast future crime, potential offenders, and victims. The technology is supposed to use probability to help police departments tailor their neighborhood coverage so it puts officers in the right place at the right time. "Given the structural racism and brutality in U.S. policing, we do not believe that mathematicians should be collaborating with police departments in this manner," the authors write in the letter.


'Predictive policing' could amplify today's law enforcement issues

Engadget

Law enforcement in America is facing a day of reckoning over its systemic, institutionalized racism and ongoing brutality against the people it was designed to protect. Virtually every aspect of the system is now under scrutiny, from budgeting and staffing levels to the data-driven prevention tools it deploys. A handful of local governments have already placed moratoriums on facial recognition systems in recent months and on Wednesday, Santa Cruz, California became the first city in the nation to outright ban the use of predictive policing algorithms. While it's easy to see the privacy risks that facial recognition poses, predictive policing programs have the potential to quietly erode our constitutional rights and exacerbate existing racial and economic biases in the law enforcement community. Simply put, predictive policing technology uses algorithms to pore over massive amounts of data to predict when and where future crimes will occur.


Santa Cruz becomes first U.S. city to ban predictive policing

Los Angeles Times

Nearly a decade ago, Santa Cruz was among the first cities in the U.S. to adopt predictive policing. This week, the California city became the first in the country to ban the practice. In a unanimous decision Tuesday, the City Council passed an ordinance banning the use of data to predict where crimes may occur and barring the city from using facial recognition software. In recent years, both predictive policing and facial recognition technology have been criticized as racially prejudiced, often contributing to increased patrols in Black or brown neighborhoods or false accusations against people of color. Predictive policing uses algorithms that encourage officers to patrol locations identified as high-crime based on victim reports.


Is artificial intelligence making racial profiling worse?

#artificialintelligence

REVERB is a new documentary series from CBSN Originals. Throughout its history, the LAPD has found itself embroiled in controversy over racially biased policing. In 1992, police violence and the acquittal of four police officers who beat black motorist Rodney King culminated in riots that killed more than 50 people. Many reforms have been instituted in the decades since then, but racial bias in LA law enforcement continues to raise concerns. A 2019 report found that the LAPD pulled over black drivers four times as often as white drivers, and Latino drivers three times as often as whites, despite white drivers being more likely to have weapons, drugs or other contraband.


Bay Area police try out controversial AI software that tells them where to patrol

#artificialintelligence

Even the head of a Santa Cruz tech company that sells software to Bay Area police departments admits that using an algorithm to tell cops where and when to patrol raises a host of complicated issues. With the promise of trying to predict crime before it happens, police departments across the United States are experimenting with artificial intelligence programs like the one from PredPol in Santa Cruz. It's an evolution of the "hot-spot" crime maps police have been using for decades to guide their patrolling -- with 21st century twists that opponents say can reinforce bias and make people less safe. At a time when tension is high over police misconduct and shootings of unarmed suspects, predictive policing is under increasing scrutiny from privacy advocates, watchdogs and even law enforcement itself. The software for predictive policing relies on data, ranging from crime-victim reports to arrests to individuals' histories of police interaction.