Law Enforcement & Public Safety


Neo4j: How a lack of context awareness is hampering AI development

#artificialintelligence

What do we mean when we say 'context'? In essence, context is the information that frames something to give it meaning. Taken on its own, a shout could be anything from an expression of joy to a warning. In the context of a structured piece of on-stage Grime, it's what made Stormzy's appearance at Glastonbury the triumph it was. The problem is that context doesn't come free – it has to be discovered.


Glitches, bandwidth plagued a now defunct pilot of Amazon's facial recognition software in Orlando

Daily Mail - Science & tech

A police department in Orlando has terminated its trial of Amazon's AI-powered facial recognition for the second time, citing costs and complexity. According to a report from Orlando Weekly, the department ended its trial of the technology, called Rekognition, after 15 months of glitches and concerns over whether the technology was actually working. 'At this time, the city was not able to dedicate the resources to the pilot to enable us to make any noticeable progress toward completing the needed configuration and testing,' Orlando's Chief Administrative Office said in a memo to City Council, as reported by Orlando Weekly. The department was unable to get the system working properly, and the decision marks the second time in just 10 months that it has declined to proceed with the technology.


When governments turn to AI: Algorithms, trade-offs, and trust

#artificialintelligence

The notion reflects an interest in bias-free decision making or, when protected classes of individuals are involved, in avoiding disparate impact to legally protected classes.


Is Facial Recognition Technology Racist? The Tech Connoisseur

#artificialintelligence

Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist-approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate three commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%).
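The subgroup audit described above boils down to computing error rates per intersectional subgroup (skin type × gender) rather than in aggregate. A minimal sketch of that idea, using made-up records rather than the study's actual data:

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Compute the classification error rate per (skin_type, gender) subgroup.

    records: iterable of dicts with keys 'skin_type', 'gender',
             'predicted', and 'actual'. Data here is illustrative only.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        key = (r["skin_type"], r["gender"])
        totals[key] += 1
        if r["predicted"] != r["actual"]:
            errors[key] += 1
    return {k: errors[k] / totals[k] for k in totals}

sample = [
    {"skin_type": "darker", "gender": "female", "predicted": "male", "actual": "female"},
    {"skin_type": "darker", "gender": "female", "predicted": "female", "actual": "female"},
    {"skin_type": "lighter", "gender": "male", "predicted": "male", "actual": "male"},
]
rates = subgroup_error_rates(sample)
# darker/female subgroup: 1 error out of 2 records
```

Aggregate accuracy can look fine while one subgroup's error rate is far higher, which is exactly the disparity the benchmark study surfaces.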


Detect discrimination with help of artificial intelligence

#artificialintelligence

Washington D.C. [USA], July 14 (ANI): Researchers have developed a new artificial intelligence (AI) tool for detecting unfair discrimination, such as on the basis of race or gender. Preventing unfair treatment of individuals on the basis of race, gender or ethnicity, for example, has been a long-standing concern of civilized societies. However, detecting such discrimination resulting from decisions, whether by human decision-makers or automated AI systems, can be extremely challenging. This challenge is further exacerbated by the wide adoption of AI systems to automate decisions in many domains, including policing, consumer finance, higher education, and business. "Artificial intelligence systems such as those involved in selecting candidates for a job or for admission to a university are trained on large amounts of data. But if these data are biased, they can affect the recommendations of AI systems," said Vasant Honavar, one of the researchers of the study presented at the meeting of The Web Conference.


Using artificial intelligence to detect discrimination

#artificialintelligence

Preventing unfair treatment of individuals on the basis of race, gender or ethnicity, for example, has been a long-standing concern of civilized societies. However, detecting such discrimination resulting from decisions, whether by human decision makers or automated AI systems, can be extremely challenging. This challenge is further exacerbated by the wide adoption of AI systems to automate decisions in many domains -- including policing, consumer finance, higher education and business. "Artificial intelligence systems -- such as those involved in selecting candidates for a job or for admission to a university -- are trained on large amounts of data," said Vasant Honavar, Professor and Edward Frymoyer Chair of Information Sciences and Technology, Penn State. "But if these data are biased, they can affect the recommendations of AI systems."


Man, 28, arrested for allegedly beating girlfriend after an Amazon Alexa device calls 911

Daily Mail - Science & tech

A New Mexico man was arrested for allegedly beating his girlfriend after their Amazon device alerted police. Eduardo Barros, 28, was with his girlfriend and her daughter at a residence in Tijeras, outside of Albuquerque, on July 2. The pair got into an argument and the confrontation became physical, according to the Bernalillo County Sheriff Department's spokesperson, Deputy Felicia Romero. It is understood Barros allegedly became angered because of a text message that the woman received, and he accused her of cheating on him. He was allegedly in possession of a firearm and threatened to kill his unidentified girlfriend, saying to her, 'Did you call the sheriffs?' A smart speaker, which was connected to a surround sound system inside the house, recognized the comment as a voice command and called 911, Romero told the New York Post.


Using artificial intelligence to detect discrimination

#artificialintelligence

A new artificial intelligence (AI) tool for detecting unfair discrimination--such as on the basis of race or gender--has been created by researchers at Penn State and Columbia University. Preventing unfair treatment of individuals on the basis of race, gender or ethnicity, for example, has been a long-standing concern of civilized societies. However, detecting such discrimination resulting from decisions, whether by human decision makers or automated AI systems, can be extremely challenging. This challenge is further exacerbated by the wide adoption of AI systems to automate decisions in many domains--including policing, consumer finance, higher education and business. "Artificial intelligence systems--such as those involved in selecting candidates for a job or for admission to a university--are trained on large amounts of data," said Vasant Honavar, Professor and Edward Frymoyer Chair of Information Sciences and Technology, Penn State.
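The excerpt does not detail the researchers' actual method, but a common baseline for flagging possible discrimination in automated decisions is the disparate impact ratio: compare the favorable-outcome rate of a protected group against a reference group. A hedged sketch of that baseline check (group labels and data are illustrative, not from the study):

```python
def disparate_impact(outcomes, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs reference group.

    outcomes: list of bools (True = favorable decision, e.g. hired/admitted).
    groups:   list of group labels, parallel to outcomes.
    A ratio below 0.8 is the conventional 'four-fifths rule' red flag.
    """
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# Illustrative: group A receives 2/4 favorable decisions, group B 3/4
ratio = disparate_impact(
    [True, False, True, False, True, True, True, False],
    ["A", "A", "A", "A", "B", "B", "B", "B"],
    protected="A", reference="B",
)
```

A low ratio does not prove discrimination on its own; it is a screening statistic that flags decisions worth deeper causal scrutiny, which is where tools like the one described above come in.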


The Debate Over Facial Recognition Technology's Role In Law Enforcement

NPR Technology

Facial recognition technology has come under fire from lawmakers, advocacy groups and citizens, but Lt. Derek Sabatini of the Los Angeles Sheriff's Department says it helps control crime.


Real-Time Entity Resolution Made Accessible - Senzing

#artificialintelligence

Knowing exactly who your customers are is an important task for security, fraud detection, marketing, and personalization. The proliferation of data sources and services has made entity resolution (ER) very challenging in the internet age. In addition, many applications now increasingly require near real-time entity resolution.
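At its core, entity resolution groups records that refer to the same real-world entity even when they are formatted differently. A minimal sketch of the idea, assuming exact matching on normalized fields (names and data are illustrative; production engines such as Senzing add fuzzy scoring, blocking, and incremental real-time updates on top of this):

```python
import re

def normalize(record):
    """Canonicalize a record for comparison: lowercase, strip punctuation."""
    return tuple(re.sub(r"[^a-z0-9]", "", str(v).lower()) for v in record)

def resolve(records):
    """Group record indices whose normalized forms match exactly."""
    clusters = {}
    for i, rec in enumerate(records):
        clusters.setdefault(normalize(rec), []).append(i)
    return list(clusters.values())

people = [
    ("John Smith", "jsmith@example.com"),
    ("john smith", "JSMITH@example.com"),
    ("Jane Doe", "jane@example.com"),
]
groups = resolve(people)
# the two John Smith records collapse into one cluster
```

The near real-time requirement mentioned above is what makes this hard at scale: each incoming record must be matched against existing clusters incrementally, not by re-running a batch job over the whole dataset.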