Law Enforcement & Public Safety


Tutorial on fairness, accountability, transparency and ethics in computer vision

AIHub

The Conference on Computer Vision and Pattern Recognition (CVPR) was held virtually from 14 to 19 June. Alongside invited talks, posters and workshops, there were a number of tutorials on a range of topics. Timnit Gebru and Emily Denton organised one of these tutorials, covering fairness, accountability, transparency and ethics in computer vision. As the organisers write in the introduction to their tutorial, computer vision is no longer a purely academic endeavour; computer vision systems have been deployed widely across society, in areas including law enforcement, border control, employment and healthcare.


EFF's new database reveals what tech local police are using to spy on you

ZDNet

The Electronic Frontier Foundation (EFF) has debuted a new database that reveals how, and where, law enforcement is using surveillance technology in policing strategies. Launched on Monday in partnership with the University of Nevada's Reynolds School of Journalism, the "Atlas of Surveillance" is described as the "largest-ever collection of searchable data on police use of surveillance technologies." The civil rights and privacy organization says the database was developed to help the general public learn about the accelerating adoption and use of surveillance technologies by law enforcement agencies. The map pulls together thousands of data points from over 3,000 police departments across the United States. Users can zoom in to different locations and find summaries of what technologies are in use, by what department, and track how adoption is spreading geographically.


'Booyaaa': Australian Federal Police use of Clearview AI detailed

ZDNet

Earlier this year, the Australian Federal Police (AFP) admitted to using a facial recognition tool to help counter child exploitation, despite not having an appropriate legislative framework in place. The tool was Clearview AI, from a controversial New York-based startup of the same name that has scraped social media networks for people's photos and created one of the biggest facial recognition databases in the world; it provides facial recognition software marketed primarily at law enforcement. The AFP previously said that, while it did not adopt Clearview AI as an enterprise product and had not entered into any formal procurement arrangements with the company, it did use a trial version. Documents published by the AFP under the Freedom of Information Act 1982 confirmed that the AFP-led Australian Centre to Counter Child Exploitation (ACCCE) registered for a free trial of the Clearview AI facial recognition tool and conducted a pilot of the system from 2 November 2019 to 22 January 2020.


Photo sent before Naya Rivera disappeared may help search

Los Angeles Times

The search for the body of actress Naya Rivera, who is believed to have drowned last week while boating with her young son on Lake Piru, resumed early Monday with crews focusing on a section of the water where they suspect she was swimming. Divers, helicopters, drone aircraft and cadaver dogs have been searching for six days. The 33-year-old actress, who gained fame for her role on "Glee," was reported missing Wednesday after her 4-year-old son was found asleep, alone, in a rental boat on the Ventura County lake. Authorities later learned that Rivera and her son had been swimming together in the lake and that he was able to get back onto the boat, but she was not. On Sunday, search teams checked cabins and outbuildings surrounding the lake, as well as the shoreline, to ensure that she hadn't made it out of the water on her own, officials said. All of those areas had also been searched on the afternoon Rivera went missing.


Machine Learning Can Help Detect Misinformation Online

#artificialintelligence

As social media is increasingly being used as people's primary source for news online, there is a rising threat from the spread of malign and false information. With an absence of human editors in news feeds and a growth of artificial online activity, it has become easier for various actors to manipulate the news that people consume. RAND Europe was commissioned by the UK Ministry of Defence's (MOD) Defence and Security Accelerator (DASA) to develop a method for detecting the malign use of information online. The study was contracted as part of DASA's efforts to help the UK MOD develop its behavioural analytics capability. Our study found that online communities are increasingly being exposed to junk news, cyberbullying activity, terrorist propaganda, and political reputation-boosting or smearing campaigns.
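The article doesn't describe RAND Europe's method in detail, but detection systems of this kind typically start from a supervised text classifier. Below is a minimal sketch of such a baseline, using scikit-learn with a tiny invented corpus; the texts, labels and model choice are purely illustrative and are not drawn from the study.

```python
# Hypothetical baseline for flagging junk-news-style text: TF-IDF features
# plus a linear classifier. The toy corpus and labels below are invented
# for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: 1 = junk/malign content, 0 = ordinary news.
texts = [
    "SHOCKING secret cure THEY don't want you to know",
    "Share before it gets deleted!!! The truth about the election",
    "Council approves budget for new public library",
    "Local team wins regional championship after close final",
]
labels = [1, 1, 0, 0]

# Word and bigram features feed a logistic regression; in practice such a
# model would only pre-filter posts for human review, not make final calls.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["You won't BELIEVE what this politician is hiding"]))
```

A real system would train on a large labelled corpus and combine text signals with behavioural ones (posting frequency, account age, network structure), which is closer to the "behavioural analytics capability" the study mentions.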


Credit Card Fraud Detection with Machine Learning

#artificialintelligence

Fraud detection, one of the many applications of anomaly detection, is an important aspect of financial markets. Can we predict whether a transaction is fraudulent based on the history of transactions? Let's explore a neural network architecture that attempts to classify cases as fraudulent or not. By the end of this article, we'll be able to build an encoder-decoder architecture from scratch using Keras and classify transactions as fraudulent or non-fraudulent. We use the credit card fraud detection dataset released by the ULB Machine Learning Group.
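As a rough sketch of the encoder-decoder idea the article describes: train an autoencoder on (mostly) legitimate transactions, then flag inputs whose reconstruction error is unusually high. Synthetic data stands in for the ULB dataset here, and the layer sizes and threshold rule are illustrative choices, not the article's exact architecture.

```python
# Minimal autoencoder-based anomaly detection sketch in Keras.
# Synthetic data replaces the real ULB credit card dataset; all
# hyperparameters below are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
n_features = 29                                 # ULB data: 28 PCA features + Amount
normal = rng.normal(0, 1, (2000, n_features))   # stand-in legitimate transactions
fraud = rng.normal(4, 1, (20, n_features))      # stand-in anomalous transactions

# Encoder compresses the input to a small bottleneck; decoder reconstructs it.
autoencoder = keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(14, activation="relu"),        # encoder
    layers.Dense(7, activation="relu"),         # bottleneck
    layers.Dense(14, activation="relu"),        # decoder
    layers.Dense(n_features, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal, normal, epochs=10, batch_size=64, verbose=0)

def reconstruction_error(x):
    """Mean squared error between inputs and their reconstructions."""
    return np.mean((x - autoencoder.predict(x, verbose=0)) ** 2, axis=1)

# Flag transactions whose error exceeds the 99th percentile of training error.
threshold = np.percentile(reconstruction_error(normal), 99)
print("fraction of frauds flagged:", np.mean(reconstruction_error(fraud) > threshold))
```

The intuition is that an autoencoder trained on normal transactions reconstructs them well but struggles on frauds it has never seen, so high reconstruction error serves as an anomaly score.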


Death by drone: How can states justify targeted killings?

Al Jazeera

In a move that caused a ripple effect across the Middle East, Iranian General Qassem Soleimani was killed in a US drone strike near Baghdad's international airport on January 3. That day, the Pentagon announced the attack had been carried out "at the direction of the president". In a new report examining the legality of armed drones, and the Soleimani killing in particular, Agnes Callamard, the UN special rapporteur on extrajudicial, summary or arbitrary executions, said the strike was "unlawful". Callamard presented her report to the Human Rights Council in Geneva on Thursday. The United States, no longer a member after quitting the council in 2018, rejected the report, saying it gave "a pass to terrorists". In Callamard's view, states have neglected the consequences of targeted killings by armed drones.


Will facial recognition technology bring ethical 'sea changes' in governance? - ET Government

#artificialintelligence

By Rajiv Saxena

Police in Detroit, investigating the theft of five watches from a Shinola retail store, were trying to figure out who the culprit was. Authorities said the thief made off with an estimated $3,800 worth of merchandise. Investigators pulled security video of the incident from cameras installed in the store and the surrounding neighbourhood, a common practice in the US. Detectives zoomed in on the grainy footage and ran the person who appeared to be the primary suspect through facial recognition software. A hit came back: Robert Julian-Borchak Williams, 42, of Farmington Hills, Michigan, about 25 miles northwest of Detroit. In January, police pulled up to Williams' home and arrested him while he stood on his front lawn in front of his wife and two daughters, ages 2 and 5, who cried as they watched their father being taken away in the patrol car.


Controversial Detroit facial recognition got him arrested for a crime he didn't commit

USATODAY - Tech Top Stories

The high-profile case of a Black man wrongly arrested earlier this year wasn't the first misidentification linked to controversial facial recognition technology used by Detroit police, the Free Press has learned. Last year, a 25-year-old Detroit man was wrongly accused of a felony for supposedly reaching into a teacher's vehicle, grabbing a cell phone and throwing it, cracking the screen and breaking the case. Detroit police used facial recognition technology in that investigation, too. It identified Michael Oliver as an investigative lead. After that hit, the teacher who had his phone snatched from his hands identified Oliver in a photo lineup as the person responsible.


Defending Black Lives Means Banning Facial Recognition

WIRED

Uprisings for racial justice are sweeping the country. Following the police murders of George Floyd, Breonna Taylor, and so many others, named and unnamed, America has finally reached its moment of reckoning, and politicians are starting to respond. A meaningful response starts with banning facial recognition, a technology perfectly designed for the automation of racism. Tawana Petty is director of the Data Justice Program at the Detroit Community Technology Project and co-leads the Our Data Bodies project.