

Neo4j: How a lack of context awareness is hampering AI development

#artificialintelligence

What do we mean when we say 'context'? In essence, context is the information that frames something and gives it meaning. Taken on its own, a shout could be anything from an expression of joy to a warning. In the context of a structured piece of on-stage Grime, it's what made Stormzy's appearance at Glastonbury the triumph it was. The problem is that context doesn't come free – it has to be discovered.


Facial Recognition: When Convenience and Privacy Collide

#artificialintelligence

The use of facial recognition in the United States public sector has received a great deal of press lately, and most of it isn't positive. There's a lot of concern over how state and federal government agencies are using this technology and how the resulting biometric data will be used. Many fear that the use of this technology will lead to a Big Brother state. Unfortunately, these concerns are not without merit. We're already seeing damaging results where this technology is prevalent in countries like China, Singapore, and even the United Kingdom, where London authorities recently fined a man for disorderly conduct for covering his face to avoid surveillance on the streets.


Glitches and bandwidth issues plagued a now-defunct pilot of Amazon's facial recognition software in Orlando

Daily Mail - Science & tech

A police department in Orlando has terminated its trial of Amazon's AI-powered facial recognition for the second time in 10 months, citing costs and complexity. According to a report from Orlando Weekly, the department ended its trial of the technology, called Rekognition, after 15 months of glitches and concerns over whether the technology was actually working. 'At this time, the city was not able to dedicate the resources to the pilot to enable us to make any noticeable progress toward completing the needed configuration and testing,' Orlando's Chief Administrative Office said in a memo to City Council, as reported by Orlando Weekly.


When governments turn to AI: Algorithms, trade-offs, and trust

#artificialintelligence

The notion reflects an interest in bias-free decision making or, when protected classes of individuals are involved, in avoiding disparate impact to legally protected classes.


Companies use OKCupid photos, social media to train face recognition

Daily Mail - Science & tech

With images aggregated from social media platforms, dating sites, or even CCTV footage of a trip to the local coffee shop, companies could be using your face to train sophisticated facial recognition software. As reported by the New York Times, among the sometimes massive data sets that researchers use to teach artificially intelligent software to recognize faces is a database collected by Stanford researchers called Brainwash. More than 10,000 images of customers at a cafe in San Francisco were collected in 2014 without their knowledge. OKCupid and photo-sharing platforms like Flickr are among the sources for researchers looking to load their databases up with images that help train facial recognition software. That same database was then made available to other academics, including some in China at the National University of Defense Technology.


Is Facial Recognition Technology Racist? – The Tech Connoisseur

#artificialintelligence

Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist-approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate 3 commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%).
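The core of the audit described above is easy to make concrete: instead of reporting a single aggregate accuracy figure, classification errors are disaggregated by phenotypic subgroup. Here is a minimal sketch of that disaggregation; the record format, subgroup names, and numbers are invented for illustration and are not the paper's data:

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Compute per-subgroup classification error rates.

    `records` is an iterable of (subgroup, predicted_label, true_label)
    tuples, e.g. ("darker_female", "male", "female").
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for subgroup, predicted, actual in records:
        totals[subgroup] += 1
        if predicted != actual:
            errors[subgroup] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy illustration: an aggregate accuracy of 82% hides a large gap
# between the two subgroups below.
records = (
    [("lighter_male", "male", "male")] * 99
    + [("lighter_male", "female", "male")] * 1
    + [("darker_female", "female", "female")] * 65
    + [("darker_female", "male", "female")] * 35
)
for group, rate in subgroup_error_rates(records).items():
    print(f"{group}: {rate:.1%} error")
```

The point of the exercise is exactly what the toy numbers show: a benchmark skewed toward one subgroup lets a high headline accuracy coexist with very high error on the underrepresented group.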


Gamers get a chance to battle an AI on the QT. Plus: Robo-marines, and fisticuffs over facial recognition in Detroit

#artificialintelligence

Roundup Hello, here are a few announcements from the world of machine learning beyond what we've already covered this week. AlphaStar is coming out to play: AlphaStar, the StarCraft II-playing bot built by DeepMind researchers, will be facing human players in a series of 1v1 games online. StarCraft II players can enter the open competition league set up by Blizzard Entertainment, the creators of the popular battle strategy game, and opt in to play against AlphaStar. Nobody will know if they're facing the bot, however, because it'll be entering the matches anonymously. Characters in StarCraft II belong to one of three races: Terran, Zerg, or Protoss.


Detect discrimination with help of artificial intelligence

#artificialintelligence

Washington D.C. [USA], July 14 (ANI): Researchers have developed a new artificial intelligence (AI) tool for detecting unfair discrimination on the basis of attributes such as race or gender. Preventing unfair treatment of individuals on the basis of race, gender or ethnicity, for example, has been a long-standing concern of civilized societies. However, detecting such discrimination resulting from decisions, whether by human decision-makers or automated AI systems, can be extremely challenging. This challenge is further exacerbated by the wide adoption of AI systems to automate decisions in many domains, including policing, consumer finance, higher education, and business. "Artificial intelligence systems such as those involved in selecting candidates for a job or for admission to a university are trained on large amounts of data. But if these data are biased, they can affect the recommendations of AI systems," said Vasant Honavar, one of the researchers of the study presented at the meeting of The Web Conference.


Using artificial intelligence to detect discrimination

#artificialintelligence

Preventing unfair treatment of individuals on the basis of race, gender or ethnicity, for example, has been a long-standing concern of civilized societies. However, detecting such discrimination resulting from decisions, whether by human decision makers or automated AI systems, can be extremely challenging. This challenge is further exacerbated by the wide adoption of AI systems to automate decisions in many domains -- including policing, consumer finance, higher education and business. "Artificial intelligence systems -- such as those involved in selecting candidates for a job or for admission to a university -- are trained on large amounts of data," said Vasant Honavar, Professor and Edward Frymoyer Chair of Information Sciences and Technology, Penn State. "But if these data are biased, they can affect the recommendations of AI systems."
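Neither article details how the researchers' tool works internally, but the simplest form of a discrimination check over decision data is worth making concrete. The sketch below computes a disparate-impact ratio over a set of decision records; the record format and the four-fifths threshold are standard fairness-audit conventions, offered here as a generic illustration rather than the study's actual method:

```python
def disparate_impact_ratio(decisions, group_key, outcome_key, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group.

    A ratio below 0.8 is commonly flagged under the EEOC "four-fifths" rule.
    `decisions` is a list of dicts, e.g. {"group": "A", "hired": True}.
    """
    def favorable_rate(group):
        members = [d for d in decisions if d[group_key] == group]
        return sum(d[outcome_key] for d in members) / len(members)

    return favorable_rate(protected) / favorable_rate(reference)

# Toy data (invented): 30% of group B is hired vs. 60% of group A.
decisions = (
    [{"group": "A", "hired": True}] * 60
    + [{"group": "A", "hired": False}] * 40
    + [{"group": "B", "hired": True}] * 30
    + [{"group": "B", "hired": False}] * 70
)
ratio = disparate_impact_ratio(decisions, "group", "hired", protected="B", reference="A")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50, well below the 0.8 threshold
```

A check like this flags unequal outcomes whether the decisions came from human decision makers or an AI system trained on biased data, which is the scenario Honavar describes.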


Man, 28, arrested for allegedly beating girlfriend after an Amazon Alexa device calls 911

Daily Mail - Science & tech

A New Mexico man was arrested for allegedly beating his girlfriend after their Amazon device alerted police. Eduardo Barros, 28, was with his girlfriend and her daughter at a residence in Tijeras, outside of Albuquerque, on July 2. The pair got into an argument and the confrontation became physical, according to the Bernalillo County Sheriff Department's spokesperson, Deputy Felicia Romero. It is understood Barros allegedly became angered because of a text message that the woman received, and he accused her of cheating on him. He was allegedly in possession of a firearm and threatened to kill his unidentified girlfriend, saying to her, 'Did you call the sheriffs?' A smart speaker, which was connected to a surround sound system inside the house, recognized the comment as a voice command and called 911, Romero told the New York Post.