To step onto city streets means taking your chances with cameras recording your every move. Over the years, surveillance tools such as face recognition and artificial intelligence have made it easier for states to capture and identify a person in schools, banks, stores or busy intersections. In some cases, our own phones serve as surveillance tools, with social media helping users spread their recordings. Most recently, a video taken by a bystander showed the death of 46-year-old George Floyd after a white police officer knelt on Floyd's neck, sparking outrage and protests across the nation. Police officers also wear body cameras, making them a surveillance tool for both law enforcement and members of the community.
Amazon's controversial facial recognition technology has incorrectly matched more than 100 photos of politicians in the UK and US to police mugshots, new tests have revealed. Amazon Rekognition uses artificial intelligence software to identify individuals from their facial structure. Customers include law enforcement and US government agencies like Immigration and Customs Enforcement (ICE). It is not the first time the software's accuracy has been called into question. In July 2018, the American Civil Liberties Union (ACLU) found 28 false matches between US Congress members and pictures of people arrested for a crime.
In January, my coworker received a peculiar email. The message, which she forwarded to me, was from a handful of corporate Walmart employees calling themselves the "Concerned Home Office Associates." While it's not unusual for journalists to receive anonymous tips, they don't usually come with their own slickly produced videos. The employees said they were "past their breaking point" with Everseen, a small artificial intelligence firm based in Cork, Ireland, whose technology Walmart began using in 2017. Walmart uses Everseen in thousands of stores to prevent shoplifting at registers and self-checkout kiosks.
The 122-page publication, called "Explaining decisions made with AI" and written in conjunction with The Alan Turing Institute, the U.K.'s national center for AI, hopes to ensure organizations can be transparent about how AI-generated decisions are made, as well as ensure clear accountability about who can be held responsible for them so that affected individuals can ask for an explanation. Data protection law does not directly reference AI or any associated technologies such as machine learning. However, the General Data Protection Regulation (and the U.K.'s 2018 Data Protection Act) does place a significant focus on large-scale automated processing of personal data, and several provisions specifically refer to the use of profiling and automated decision-making. This means data protection law applies to the use of AI to provide a prediction or recommendation about someone. The ICO suggests compliance teams (including the DPO) and senior management should expect assurances from the product manager that the system the organization is using provides the appropriate level of explanation to decision recipients.
Google warned on Thursday that the EU's definition of artificial intelligence was too broad and that Brussels must refrain from over-regulating a crucial technology. The search and advertising giant made its argument in feedback to the European Commission, the EU's powerful regulator that has reached out to big tech as it draws up ways to set new rules for AI. The EU has not yet decided how to regulate AI, but is putting most of its focus on what it calls "high risk" sectors, such as healthcare and transport. Its plans, to be spearheaded by EU commissioners Margrethe Vestager and Thierry Breton, are not expected until the end of the year. "A clear and widely understood definition of AI will be a critical foundational element for an effective AI regulatory framework," the company said in its 45-page submission.
The American Civil Liberties Union (ACLU) is taking Clearview AI to court, claiming the company's facial surveillance activities violate the Illinois Biometric Information Privacy Act (BIPA) and "represent an unprecedented threat to our security and safety". The legal action, brought by lawyers at the ACLU of Illinois and the law firm Edelson PC, is on behalf of organisations that represent survivors of sexual assault and domestic violence, undocumented immigrants, and other vulnerable communities. Clearview AI, founded by Australian entrepreneur Hoan Ton-That, provides facial recognition software, marketed primarily at law enforcement. The ACLU said that failing to stop Clearview AI would "end privacy as we know it". "Face recognition technology offers a surveillance capability unlike any other technology in the past. It makes it dangerously easy to identify and track us at protests, AA meetings, counselling sessions, political rallies, religious gatherings, and more," the ACLU wrote in a blog post.
Clearview AI is about to deal with more pushback beyond corporate objections and occasional bans. The American Civil Liberties Union has sued Clearview AI for allegedly violating Illinois' Biometric Information Privacy Act with its combination of facial recognition and internet data scraping. The ACLU claimed that the real-time identification technology infringed privacy rights by collecting faceprints from state residents without notifying them or obtaining consent. This facial data harvesting is bad for everyone, but it's particularly harmful to "Latinas and survivors," according to Mujeres Latinas en Acción's Linda Xóchitl Tortolero. She argued that it enables stalkers, abusers, "predatory companies" and immigration agents to illegally track and target people.
Artificial intelligence is the next big military advantage. For example, in early 2019, the U.S. announced a strategy for harnessing AI in many parts of the military, including intelligence analysis, decision-making, vehicle autonomy, logistics, and weaponry, reports Technology Review. In fact, according to the U.S. Army, "The AI market was more than $21 billion in 2018, and it is expected to grow almost nine times larger by 2025. AI systems provide predictive analysis to interpret human inputs, determine what we most likely want, and then provide us with highly relevant information."