The Company Ending Privacy as We Know It

#artificialintelligence

This article is a transcript of a presentation I gave to the Rotary eClub of Silicon Valley about Clearview AI, a facial recognition company which the New York Times said "might end privacy as we know it." My presentation was based on an article from earlier this year in Medium's OneZero. Thanks to the whole Rotary eClub team for the opportunity to present. This is the Rotary eClub of Silicon Valley. Every week, we try to bring you cool and interesting material that will make you go, "Hmm. That's interesting," and hopefully will inspire you to act in some way, whether that's acting in service, or perhaps even acting in self-defense. Because we are going to learn some really interesting stuff over the coming minutes, and that is a function of having as our speaker today Thomas Smith. He goes by Tom, as I learned when we were just speaking, so I'll refer to him as Tom. Tom wrote an article recently that I found in OneZero, I think, via Medium. I finished reading that article and thought, "Holy poop." So, as a result of that, I actually reached out to him to say, "Could you speak to our Rotary eClub of Silicon Valley?" And he was gracious enough to write back.


Racial biases infect artificial intelligence

#artificialintelligence

Detroit police wrongfully arrested Robert Julian-Borchak Williams in January 2020 for a shoplifting incident that had taken place two years earlier. Even though Williams had nothing to do with the incident, facial recognition technology used by Michigan State Police "matched" his face with a grainy image obtained from an in-store surveillance video showing another African-American man taking US$3,800 worth of watches. Two weeks later, the case was dismissed at the prosecution's request. However, relying on the faulty match, police had already handcuffed and arrested Williams in front of his family, forced him to provide a mug shot, fingerprints and a sample of his DNA, interrogated him and imprisoned him overnight. Experts suggest that Williams is not alone, and that others have been subjected to similar injustices.


ICE just signed a contract with facial recognition company Clearview AI

#artificialintelligence

Immigration and Customs Enforcement (ICE) signed a contract with facial recognition company Clearview AI this week for "mission support," government contracting records show (as first spotted by the tech accountability nonprofit Tech Inquiry). The purchase order for $224,000 describes "clearview licenses" and lists "ICE mission support dallas" as the contracting office. ICE is known to use facial recognition technology; last month, The Washington Post reported the agency, along with the FBI, had accessed state drivers' license databases -- a veritable facial recognition gold mine, as the Post termed it -- but without the knowledge or consent of drivers. The agency has been criticized for its practices at the US southern border, which have included separating immigrant children from their families and detaining refugees indefinitely. "Clearview AI's agreement is with Homeland Security Investigations (HSI), which uses our technology for their Child Exploitation Unit and ongoing criminal investigations," Clearview AI CEO Hoan Ton-That said in an emailed statement to The Verge.


Controversial facial recognition tech firm Clearview AI inks deal with ICE

ZDNet

The US Department of Homeland Security (DHS) has signed a contract with Clearview AI to give Immigration and Customs Enforcement (ICE) access to the controversial facial recognition firm's technology. Tech Inquiry, a non-profit technology watchdog and rights outfit, spotted documents revealing the deal last week. The $224,000 purchase order, signed on August 12, 2020, is for "Clearview licenses" relating to "information technology components," but no further information has been made public. The contract will last until September 4, 2021. Tech Inquiry has submitted a Freedom of Information Act (FOIA) request for the contracts and communication between Clearview AI and ICE relating to the award.


Facial recognition startup, Clearview AI, mounts defense in privacy suits

#artificialintelligence

By Kashmir Hill

Floyd Abrams, one of the most prominent First Amendment lawyers in the country, has a new client: the facial recognition company Clearview AI. Litigation against the startup "has the potential of leading to a major decision about the interrelationship between privacy claims and First Amendment defenses in the 21st century," Abrams said in a phone interview. He said the underlying legal questions could one day reach the Supreme Court. Clearview AI has scraped billions of photos from the internet, including from platforms like LinkedIn and Instagram, and sells access to the resulting database to law enforcement agencies. When an officer uploads a photo or a video image containing a person's face, the app tries to match the likeness and provides other photos of that person that can be found online.
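As a rough illustration of this kind of photo-to-database matching (not Clearview's proprietary pipeline, which has not been published), the sketch below uses the open-source face_recognition library to compare an uploaded probe photo against a small gallery of previously collected images. The file names and the 0.6 distance cutoff are assumptions for the example.

```python
# Conceptual sketch of photo-to-gallery face matching -- NOT Clearview's system.
# Uses the open-source face_recognition library; file names are hypothetical.
import face_recognition

# Build a tiny "gallery" of face encodings from previously collected photos.
gallery_files = ["person_a.jpg", "person_b.jpg"]  # hypothetical files
gallery_encodings = []
for path in gallery_files:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # keep the first face found in each photo
        gallery_encodings.append(encodings[0])

# Encode the uploaded "probe" photo and compare it against the gallery.
probe = face_recognition.load_image_file("uploaded_photo.jpg")  # hypothetical file
probe_encodings = face_recognition.face_encodings(probe)
if probe_encodings:
    distances = face_recognition.face_distance(gallery_encodings, probe_encodings[0])
    for path, dist in zip(gallery_files, distances):
        # Smaller distance means a closer match; 0.6 is the library's usual cutoff.
        print(f"{path}: distance={dist:.3f}, match={dist < 0.6}")
```

A system operating at the scale described in these articles would index billions of such encodings with approximate nearest-neighbor search rather than a simple loop, but the underlying matching idea is the same.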


Clearview AI wins an ICE contract as it prepares to defend itself in court

Engadget

Immigration and Customs Enforcement (ICE) this week signed a deal with Clearview AI to license the facial recognition company's technology. According to a federal purchase order unearthed by the nonprofit Tech Inquiry (via The Verge), an ICE mission support office in Dallas is paying $224,000 for "Clearview licenses." Engadget has contacted Clearview and ICE for details on the scope of this agreement, as well as what ICE plans to do with those licenses. ICE and Clearview signed the deal just as the company is set to defend itself in court. Lawsuits filed in a number of states accuse Clearview of violating privacy and safety laws. Clearview's app can identify a person by matching their photo against the billions of images the company has scraped from social media and other internet services.


Combination of skilling, AI deployment helping businesses succeed: Microsoft research

#artificialintelligence

BENGALURU: Microsoft India on Monday released new research revealing that organisations that combine deployment of artificial intelligence (AI) with skilling initiatives are generating the most value from AI. Topline findings of the research underscore that mature AI firms are more confident about the return on AI and skills, a company statement said. Microsoft recently conducted the global survey with approximately 12,000 people working at enterprise companies (those with more than 250 employees). The research surveyed employees and leaders within large enterprises across industry verticals in India and 19 other countries to look at the skills needed to thrive as AI becomes increasingly adopted by businesses, as well as the key learnings from early AI adopters. Over 93 per cent of senior executives surveyed from these companies were sure their business was gaining value from AI. The research further highlights that employees from mature AI companies are eager to deepen their AI skills and reinvest freed-up time to add value for the organisation.


UK court rules police facial recognition trials violate privacy laws

Engadget

Human rights organization Liberty is claiming a win in its native Britain after a court ruled that police trials of facial recognition technology violated privacy laws. The Court of Appeal ruled that the use of automatic facial recognition systems unfairly impacted claimant Ed Bridges' right to a private life. Judges added that there were issues around how people's personal data was being processed, and said that the trials should be halted for now. The court also found that the South Wales Police (SWP) had not done enough to satisfy itself that the facial recognition technology was not biased. A spokesperson for SWP told the BBC that it would not be appealing the judgment, but Chief Constable Matt Jukes said that the force will find a way to "work with" the judgment.


Can This AI Filter Protect Identities From Facial Recognition System?

#artificialintelligence

Facial recognition technology has been a matter of grave concern for a long time, so much so that major tech giants like Microsoft, Amazon, IBM, and Google earlier this year stopped selling their FRT to police authorities. Additionally, Clearview AI's facial recognition app, which scraped billions of images of people without consent, made the matter even worse for the public. The whole concept of companies using people's social media images without their permission to train their FRT algorithms can be troublesome for the general public's identity and personal privacy. To protect people's identities from companies that could misuse them, researchers from the computer science department of the University of Chicago have proposed an AI system to fool these facial recognition systems. Named Fawkes, after the Guy Fawkes mask popularized by V for Vendetta, this AI system is designed to help users safeguard their images and selfies with a filter against these unwanted facial recognition models. This filter, which the researchers call a "cloak," adds invisible pixel-level changes to photos that cannot be seen by the human eye but can deceive these FRTs.
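Fawkes itself computes targeted perturbations that shift an image's features toward a different identity in a face recognition model's feature space; the toy sketch below only makes the idea of an imperceptible, bounded pixel-level change concrete and is not the Fawkes algorithm. The file names and the EPSILON bound are assumptions for the example.

```python
# Toy illustration of a bounded pixel-level perturbation (a "cloak"-style change).
# This is NOT the Fawkes algorithm, which optimizes the perturbation against a
# face-recognition feature extractor; it only shows how small such changes can be.
import numpy as np
from PIL import Image

EPSILON = 3  # maximum per-pixel change (out of 255); small enough to be invisible

# Load a photo as a float array (the file name is hypothetical).
image = np.asarray(Image.open("selfie.jpg").convert("RGB"), dtype=np.float32)

# Random perturbation bounded by EPSILON; Fawkes instead computes a targeted
# perturbation that moves the photo's features toward another person's.
perturbation = np.random.uniform(-EPSILON, EPSILON, size=image.shape)

# Apply the perturbation and clip back to the valid pixel range.
cloaked = np.clip(image + perturbation, 0, 255).astype(np.uint8)
Image.fromarray(cloaked).save("selfie_cloaked.jpg")

# The average change per pixel stays tiny, which is why the edit is invisible.
print("mean per-pixel change:", np.abs(cloaked.astype(np.float32) - image).mean())
```

The researchers distribute the actual Fawkes tool themselves; this snippet is only meant to show what an "invisible pixel-level change" looks like in practice.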


AI named after V For Vendetta masks protects photos from being gathered by facial recognition apps

Daily Mail - Science & tech

Clearview AI is just one of many facial recognition firms scraping billions of online images to create a massive database for purchase – but a new program could block their efforts. Researchers designed an image cloaking tool that makes subtle pixel-level changes that distort pictures enough so they cannot be used by online scrapers – and claim it is 100 percent effective. Named in honor of the 'V for Vendetta' mask, Fawkes is an algorithm and software combination that 'cloaks' an image to trick systems, which is like adding an invisible mask to your face. These altered pictures teach facial recognition technologies a distorted version of the subject, and when presented with an 'uncloaked' form, the scraping app fails to recognize the individual. 'It might surprise some to learn that we started the Fawkes project a while before the New York Times article that profiled Clearview.ai in February 2020,' researchers from the SAND Lab at the University of Chicago shared in a statement.