Law Enforcement & Public Safety


When governments turn to AI: Algorithms, trade-offs, and trust

#artificialintelligence

The notion reflects an interest in bias-free decision making or, when legally protected classes of individuals are involved, in avoiding disparate impact on those classes.
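To make "disparate impact" concrete: a widely used screening heuristic is the four-fifths rule, which compares favorable-outcome rates across groups. Below is a minimal Python sketch; the groups and decisions are hypothetical, purely for illustration.

    # Four-fifths rule check for disparate impact (hypothetical data).
    def selection_rate(decisions):
        """Fraction of favorable (1) decisions."""
        return sum(decisions) / len(decisions)

    # 1 = favorable outcome (e.g., hired), 0 = unfavorable
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # reference group
    group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # protected group

    ratio = selection_rate(group_b) / selection_rate(group_a)
    print(f"Selection-rate ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Potential disparate impact under the four-fifths rule")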


Is Facial Recognition Technology Racist? - The Tech Connoisseur

#artificialintelligence

Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist-approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate three commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%).
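The evaluation the abstract describes is, at its core, a disaggregated error analysis: compute a classifier's error rate separately for each gender-and-skin-type subgroup rather than in aggregate. A minimal sketch follows; the records are hypothetical placeholders, not the paper's data.

    # Disaggregated (per-subgroup) error-rate evaluation, in the spirit
    # of the benchmark described above. Records are hypothetical.
    from collections import defaultdict

    # Each record: (subgroup label, true gender, predicted gender)
    records = [
        ("darker_female",  "F", "M"),
        ("darker_female",  "F", "F"),
        ("darker_male",    "M", "M"),
        ("lighter_female", "F", "F"),
        ("lighter_male",   "M", "M"),
    ]

    errors = defaultdict(int)
    totals = defaultdict(int)
    for subgroup, truth, pred in records:
        totals[subgroup] += 1
        errors[subgroup] += (truth != pred)

    for subgroup in sorted(totals):
        rate = errors[subgroup] / totals[subgroup]
        print(f"{subgroup}: error rate {rate:.1%} ({totals[subgroup]} samples)")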


Detect discrimination with help of artificial intelligence

#artificialintelligence

Washington D.C. [USA], July 14 (ANI): Researchers have developed a new artificial intelligence (AI) tool for detecting unfair discrimination, such as on the basis of race or gender. Preventing unfair treatment of individuals on the basis of race, gender or ethnicity, for example, has been a long-standing concern of civilized societies. However, detecting such discrimination resulting from decisions, whether by human decision-makers or automated AI systems, can be extremely challenging. This challenge is further exacerbated by the wide adoption of AI systems to automate decisions in many domains, including policing, consumer finance, higher education, and business. "Artificial intelligence systems such as those involved in selecting candidates for a job or for admission to a university are trained on large amounts of data. But if these data are biased, they can affect the recommendations of AI systems," said Vasant Honavar, one of the researchers of the study presented at The Web Conference.


Man, 28, arrested for allegedly beating girlfriend after an Amazon Alexa device calls 911

Daily Mail - Science & tech

A New Mexico man was arrested for allegedly beating his girlfriend after their Amazon device alerted police. Eduardo Barros, 28, was with his girlfriend and her daughter at a residence in Tijeras, outside of Albuquerque, on July 2. The pair got into an argument and the confrontation became physical, according to the Bernalillo County Sheriff's Department's spokesperson, Deputy Felicia Romero. It is understood Barros became angered because of a text message the woman received, and he accused her of cheating on him. He was allegedly in possession of a firearm and threatened to kill his unidentified girlfriend, asking her, 'Did you call the sheriffs?' A smart speaker, which was connected to a surround sound system inside the house, recognized the comment as a voice command and called 911, Romero told the New York Post.


Using artificial intelligence to detect discrimination

#artificialintelligence

A new artificial intelligence (AI) tool for detecting unfair discrimination--such as on the basis of race or gender--has been created by researchers at Penn State and Columbia University. Preventing unfair treatment of individuals on the basis of race, gender or ethnicity, for example, has been a long-standing concern of civilized societies. However, detecting such discrimination resulting from decisions, whether by human decision makers or automated AI systems, can be extremely challenging. This challenge is further exacerbated by the wide adoption of AI systems to automate decisions in many domains--including policing, consumer finance, higher education and business. "Artificial intelligence systems--such as those involved in selecting candidates for a job or for admission to a university--are trained on large amounts of data," said Vasant Honavar, Professor and Edward Frymoyer Chair of Information Sciences and Technology, Penn State. "But if these data are biased, they can affect the recommendations of AI systems."
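The article does not detail the researchers' actual method, but one generic way to flag a suspicious disparity in decision data is a two-proportion z-test on favorable-outcome rates. A minimal sketch with hypothetical loan-approval counts follows; note that a significant gap is only a starting point, since legitimate factors can explain rate differences.

    # Generic illustration (not the researchers' actual method): test
    # whether favorable-outcome rates differ significantly between groups.
    import math

    def two_proportion_ztest(success_a, n_a, success_b, n_b):
        """Two-proportion z-test for a difference in rates."""
        p_a, p_b = success_a / n_a, success_b / n_b
        p_pool = (success_a + success_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se

    # Hypothetical counts: (approvals, applications) per group
    z = two_proportion_ztest(720, 1000, 610, 1000)
    print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 5% level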


The Debate Over Facial Recognition Technology's Role In Law Enforcement

NPR Technology

Facial recognition technology has come under fire from lawmakers, advocacy groups and citizens, but Lt. Derek Sabatini of the Los Angeles Sheriff's Department says it helps control crime.


Real-Time Entity Resolution Made Accessible - Senzing

#artificialintelligence

Knowing exactly who your customers are is an important task for security, fraud detection, marketing, and personalization. The proliferation of data sources and services has made entity resolution (ER) very challenging in the internet age. In addition, many applications increasingly require near real-time entity resolution.
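For readers unfamiliar with the term, entity resolution means deciding when two records refer to the same real-world entity. Below is a minimal sketch using only Python's standard library; the records and threshold are hypothetical, and a production ER engine such as Senzing's uses far richer features and matching logic.

    # Minimal entity-resolution sketch: link records that likely refer
    # to the same person using fuzzy string similarity.
    from difflib import SequenceMatcher

    def similarity(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    records = [
        {"id": 1, "name": "Jon Smith",  "email": "jsmith@example.com"},
        {"id": 2, "name": "John Smith", "email": "jsmith@example.com"},
        {"id": 3, "name": "Jane Doe",   "email": "jdoe@example.com"},
    ]

    THRESHOLD = 0.8  # hypothetical cutoff for a name match
    for i, r1 in enumerate(records):
        for r2 in records[i + 1:]:
            score = similarity(r1["name"], r2["name"])
            if r1["email"] == r2["email"] or score > THRESHOLD:
                print(f"Likely match: {r1['id']} <-> {r2['id']} "
                      f"(name score {score:.2f})")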


Could a face-reading AI 'lie detector' tell police when suspects aren't telling the truth?

Daily Mail - Science & tech

Forget the old 'good cop, bad cop' routine -- soon police may be turning to artificial intelligence systems that can reveal a suspect's true emotions during interrogations. The face-scanning technology would rely on micro-expressions, tiny involuntary facial movements that betray true feelings and can even reveal when people are lying. London-based startup Facesoft has been training an AI on micro-expressions seen on the faces of real-life people, as well as in a database of 300 million expressions. The firm has been in discussion with both UK and Mumbai police forces about potential practical applications of the technology. Mumbai police are reportedly interested in using it as part of crowd control measures, with the algorithm detecting when an angry mob might be forming.


Companies starting to use AI technology to fight fraud

#artificialintelligence

Companies are beginning to employ advanced technologies such as artificial intelligence and machine learning to uncover fraud, according to a new report. The report, from the Association of Certified Fraud Examiners and analytical technology provider SAS, found that 13 percent of organizations currently use AI or machine learning to help fight fraud, while 25 percent expect to adopt the technology in the next year or two. Taken together, those figures suggest the use of AI and machine learning in anti-fraud programs will almost triple over the next two years. The report also found that 26 percent of organizations currently use biometrics as part of their anti-fraud programs, and another 16 percent expect to deploy biometrics over the next two years.
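The report covers adoption levels rather than techniques, but a common entry point for ML-based fraud detection is unsupervised anomaly scoring. A minimal sketch follows, assuming scikit-learn is available; the transaction features are hypothetical, and real systems use far richer signals.

    # Sketch of ML-assisted fraud screening via anomaly detection.
    from sklearn.ensemble import IsolationForest

    # Each row: [amount, hour of day, transactions in last 24h]
    transactions = [
        [25.0, 14, 2], [40.0, 9, 1], [32.5, 16, 3], [28.0, 11, 2],
        [35.0, 13, 1], [9500.0, 3, 14],  # last row looks anomalous
    ]

    model = IsolationForest(contamination=0.15, random_state=0)
    labels = model.fit_predict(transactions)  # -1 = anomaly, 1 = normal

    for row, label in zip(transactions, labels):
        flag = "REVIEW" if label == -1 else "ok"
        print(f"{row} -> {flag}")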