IBM pushes for US to limit facial recognition system exports

ZDNet

IBM has called for the US Department of Commerce to limit the export of facial recognition systems, particularly to countries that could potentially use them for mass surveillance, racial profiling, or other human rights violations. In a letter [PDF] to the Commerce Department, IBM highlighted the need for tighter export controls on facial recognition technologies that employ what it referred to as "1-to-many" matching. The suggested measures include controlling the export of both the high-resolution cameras used to collect data and the software algorithms used to analyse and match that data against a database of images, and restricting access to online image databases that can be used to train 1-to-many facial recognition systems. "These systems are distinct from '1 to 1' facial matching systems, such as those that might unlock your phone or allow you to board an airplane -- in those cases, facial recognition is verifying that a consenting person is who they say they are," IBM government and regulatory affairs vice president Christopher Padilla explained in a blog post. "But in a '1-to-many' application, a system can, for example, pick a face out of a crowd by matching one image against a database of many others."
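Padilla's distinction can be made concrete with a minimal sketch. This is illustrative only, not IBM's or any vendor's implementation: it assumes faces have already been reduced to fixed-length embedding vectors by some face encoder, and the function names and the 0.8 threshold are hypothetical.

# Minimal sketch of "1-to-1" verification vs "1-to-many" identification.
# Assumes faces are already encoded as fixed-length embedding vectors;
# names and the 0.8 threshold are hypothetical, for illustration only.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two face embeddings, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_1_to_1(probe, enrolled, threshold=0.8):
    # Verification: is this consenting person who they say they are?
    # One probe image is compared against one enrolled template.
    return cosine_similarity(probe, enrolled) >= threshold

def identify_1_to_many(probe, gallery, threshold=0.8):
    # Identification: pick a face out of a crowd by matching one image
    # against a database of many. Returns the best match above the
    # threshold, or None; a wrong "best match" is a misidentification.
    best_id, best_score = None, threshold
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

The export controls IBM proposes target the inputs to the second pattern: the high-resolution cameras that produce probe images and the large online image databases that populate the gallery.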


The organizations positioned to lobby against a US ban on facial recognition

#artificialintelligence

Pressure on US lawmakers to create federal regulations on facial recognition has been mounting. IBM, Amazon, and Microsoft stopped selling the technology to US police, and called on Congress to regulate its use. Amidst international protests against racism and police misconduct, news broke that Detroit police had wrongfully arrested a Black man based on a faulty facial recognition match. In response, House Democrats proposed a bill last week that would ban police from using facial recognition. Against that backdrop, industry groups have quietly lobbied to soften regulations and avoid an outright ban.


People are fighting algorithms for a more just and equitable future. You can, too.

Mashable

Mashable's series Algorithms explores the mysterious lines of code that increasingly control our lives -- and our futures. From dating apps, to news feeds, to streaming and purchase recommendations, we have become accustomed to a subtle prodding by unseen instruction sets, themselves generated by unnamed humans or opaque machines. But there is another, not so gentle side to the way algorithms affect us. A side where the prodding is more forceful, and the consequences more lasting than a song not to your liking, a product you probably shouldn't have bought, or even a date that fell flat. Automated license plate readers have resulted in children being held at gunpoint. Algorithms have the power to drive pain and oppression at scale, and unless there is an intentional, systematic effort to push back, our ever-increasing reliance on algorithmic decision-making will only lead us further down a dark path.


Councils scrapping use of algorithms in benefit and welfare decisions

#artificialintelligence

Councils are quietly scrapping the use of computer algorithms in helping to make decisions on benefit claims and other welfare issues, the Guardian has found, as critics call for more transparency on how such tools are being used in public services. It comes as an expert warns that the reasons government bodies around the world give for cancelling such programmes range from problems in the way the systems work to concerns about bias and other negative effects. Most systems are implemented without consultation with the public, but critics say this must change. The use of artificial intelligence and automated decision-making has come into sharp focus after an algorithm used by the exam regulator Ofqual downgraded almost 40% of the A-level grades assessed by teachers. It culminated in a humiliating government U-turn and the system being scrapped.


Axon delivers new tech for police, but is more tools really what cops need? – TechCrunch

#artificialintelligence

Axon, the company formerly known as Taser and the provider of the majority of police body cameras, has a few new tech tools for cops that could cut down on paperwork and improve response times. But at a time when the fundamental means and mission of policing are being questioned, is this what's really needed? Even the company's CEO has reservations. The new products Axon is delivering will no doubt be useful for police and emergency personnel across the country. The first is the ability to automatically transcribe voices on body camera footage (using machine learning, naturally).


'Have You Thought About . . .'

Communications of the ACM

How do researchers talk to one another about the ethics of our research? How do you tell someone you are concerned their work may do more harm than good for the world? If someone tells you your work may cause harm, how do you receive that feedback with an open mind, and really listen? I find myself lately on both sides of this dilemma--needing both to speak to others and to listen more myself. It is not easy on either side.


AI Weekly: Surveillance, structural racism, and the Biden 2020 presidential campaign

#artificialintelligence

In the United Kingdom there's been some landmark AI news recently involving government use of the technology. First, the use of facial recognition by South Wales Police was ruled unlawful by the Court of Appeal, in part for violating privacy and human rights, and for the police's failure to verify that the tech did not exhibit race or gender bias. How the U.K. treats facial recognition is important since London has more CCTV cameras than any major city outside of China. Then, U.K. government officials used an algorithm that ended up benefiting kids who go to private schools and downgrading students from disadvantaged backgrounds. Prime Minister Boris Johnson defended the algorithm's grading results as "robust" and "dependable for employers."


There is a crisis of face recognition and policing in the US

MIT Technology Review

When news broke that a mistaken match from a face recognition system had led Detroit police to arrest Robert Williams for a crime he didn't commit, it was late June, and the country was already in upheaval over the death of George Floyd a month earlier. Soon after, it emerged that yet another Black man, Michael Oliver, had been arrested under circumstances similar to those of Williams. While much of the US continues to cry out for racial justice, a quieter conversation is taking shape about face recognition technology and the police. We would do well to listen. When Jennifer Strong and I started reporting on the use of face recognition technology by police for our new podcast, "In Machines We Trust," we knew these AI-powered systems were being adopted by cops all over the US and in other countries.


Police use of facial recognition gets reined in by UK court - CNET

CNET - News

A close-up of a police facial recognition camera used in Cardiff, Wales. Since 2017, police in the UK have been testing live, or real-time, facial recognition in public places to try to identify criminals. The legality of these trials has been widely questioned by privacy and human rights campaigners, who just won a landmark case that could have a lasting impact on how police use the technology in the future. In a ruling Tuesday, the UK Court of Appeal said South Wales Police had been using the technology unlawfully, which amounted to a violation of human rights. In a case brought by civil liberties campaigner Ed Bridges and supported by human rights group Liberty, three senior judges ruled that the South Wales Police had violated Bridges' right to privacy under the European Convention on Human Rights.


US police's facial recognition systems misidentify Black people

Al Jazeera

It has been more than two months since the killing of George Floyd at the hands of police in the United States. And as protests continue, the message is no longer just about specific incidents of violence, but about what demonstrators say is systemic racism in policing. One of the most obvious examples is the widespread use of facial recognition systems that have been proven to misidentify people of colour.