Abolish the #TechToPrisonPipeline

#artificialintelligence

The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.


Despite what you may think, face recognition surveillance isn't inevitable

#artificialintelligence

Last year, communities banded together to prove that they can--and will--defend their privacy rights. As part of ACLU-led campaigns, three California cities--San Francisco, Berkeley, and Oakland--as well as three Massachusetts municipalities--Somerville, Northampton, and Brookline--banned government use of face recognition in their communities. Following another ACLU effort, the state of California blocked police body cam use of the technology, forcing San Diego's police department to shutter its massive face surveillance flop. And in New York City, tenants successfully fended off their landlord's efforts to install face surveillance. Even the private sector demonstrated it had a responsibility to act in the face of the growing threat of face surveillance.


Politicians fume after Amazon's face-recog AI fingers dozens of them as suspected crooks

#artificialintelligence

Amazon's online facial recognition system incorrectly matched pictures of US Congress members to mugshots of suspected criminals in a study by the American Civil Liberties Union. The ACLU, a nonprofit headquartered in New York, has called for Congress to ban cops and Feds from using any sort of computer-powered facial recognition technology due to the fact that, well, it sucks. Amazon's AI-powered Rekognition service was previously criticized by the ACLU when it revealed the web giant was aggressively marketing its face-matching tech to police in Washington County, Oregon, and Orlando, Florida. Rekognition is touted by the Bezos Bunch as, among other applications, a way to identify people in real time from surveillance camera footage or from officers' body cameras. The results from the ACLU's latest probing showed that Rekognition mistook images of 28 members of Congress for mugshots of cuffed people suspected of crimes.


The dead can unlock iPhones, offering possible clues to a killer's plan after memories go

USATODAY - Tech Top Stories

[Image: a conductive model of a finger, used to spoof a fingerprint ID system, created by Prof. Anil Jain, a professor of computer science at Michigan State University and an expert on biometric technology.]
SAN FRANCISCO -- Your shiny new smartphone may unlock with only your thumbprint, eye or face. The FBI is struggling to gain access to the iPhone of Texas church gunman Devin Kelley, who killed 25 people in a shooting rampage.


Discriminative models for robust image classification

arXiv.org Machine Learning

A variety of real-world tasks involve the classification of images into pre-determined categories. Designing image classification algorithms that exhibit robustness to acquisition noise and image distortions, particularly when the available training data are insufficient to learn accurate models, is a significant challenge. This dissertation explores the development of discriminative models for robust image classification that exploit underlying signal structure, via probabilistic graphical models and sparse signal representations. Probabilistic graphical models are widely used in many applications to approximate high-dimensional data in a reduced-complexity setup. Learning graphical structures to approximate probability distributions is an area of active research. Recent work has focused on learning graphs in a discriminative manner with the goal of minimizing classification error. In the first part of the dissertation, we develop a discriminative learning framework that exploits the complementary yet correlated information offered by multiple representations (or projections) of a given signal/image. Specifically, we propose a discriminative tree-based scheme for feature fusion by explicitly learning the conditional correlations among such multiple projections in an iterative manner. Experiments reveal the robustness of the resulting graphical-model classifier to training insufficiency.
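The abstract's discriminative fusion scheme is the dissertation's own contribution and is not reproduced here, but the underlying idea of tree-structured graphical models learned from pairwise feature statistics can be sketched with the classic generative analogue: fit one Chow-Liu tree per class over binarized features (maximum-weight spanning tree on pairwise mutual information), then classify by maximum likelihood under each class's tree. All function names below are illustrative, and this is a minimal sketch under those assumptions, not the authors' method.

```python
import numpy as np
from itertools import combinations

def mutual_info(x, y):
    """Empirical mutual information between two binary feature columns."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def chow_liu_edges(X):
    """Maximum-weight spanning tree (Kruskal) over pairwise mutual information."""
    d = X.shape[1]
    weighted = sorted(((mutual_info(X[:, i], X[:, j]), i, j)
                       for i, j in combinations(range(d), 2)), reverse=True)
    parent = list(range(d))
    def find(u):  # union-find with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    edges = []
    for _, i, j in weighted:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            edges.append((i, j))
    return edges

def orient(edges, d, root=0):
    """Orient undirected tree edges away from the root as (parent, child)."""
    adj = {i: [] for i in range(d)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    oriented, seen, stack = [], {root}, [root]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                oriented.append((u, v))
                stack.append(v)
    return oriented

def tree_log_likelihood(x, Xc, oriented, alpha=1.0, root=0):
    """log p(x) = log p(x_root) + sum log p(x_child | x_parent), Laplace-smoothed."""
    n = len(Xc)
    ll = np.log((np.sum(Xc[:, root] == x[root]) + alpha) / (n + 2 * alpha))
    for i, j in oriented:
        num = np.sum((Xc[:, i] == x[i]) & (Xc[:, j] == x[j])) + alpha
        den = np.sum(Xc[:, i] == x[i]) + 2 * alpha
        ll += np.log(num / den)
    return ll

def classify(x, class_data):
    """MAP label under per-class Chow-Liu trees (uniform class prior)."""
    scores = {}
    for c, Xc in class_data.items():
        oriented = orient(chow_liu_edges(Xc), Xc.shape[1])
        scores[c] = tree_log_likelihood(x, Xc, oriented)
    return max(scores, key=scores.get)
```

A discriminative variant, as the abstract describes, would instead choose the tree structure and parameters to reduce classification error directly rather than to fit each class distribution in isolation.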