The use of machine learning (ML) in health care raises numerous ethical concerns, especially as models can amplify existing health inequities. Here, we outline ethical considerations for equitable ML in the advancement of health care. Specifically, we frame ethics of ML in health care through the lens of social justice. We describe ongoing efforts and outline challenges in a proposed pipeline of ethical ML in health, ranging from problem selection to post-deployment considerations. We close by summarizing recommendations to address these challenges.
In this paper, we propose a minimally supervised approach for identifying nuanced frames in news coverage of politically divisive topics. We propose breaking the broad policy frames of Boydstun et al. (2014) into fine-grained subframes that capture differences in political ideology more effectively. We evaluate the proposed subframes and their embeddings, learned with minimal supervision, on three topics: immigration, gun control, and abortion. We demonstrate the subframes' ability to capture ideological differences and to analyze political discourse in news media.
Immigration and Customs Enforcement (ICE) signed a contract with facial recognition company Clearview AI this week for "mission support," government contracting records show (as first spotted by the tech accountability nonprofit Tech Inquiry). The purchase order for $224,000 describes "clearview licenses" and lists "ICE mission support dallas" as the contracting office. ICE is known to use facial recognition technology; last month, The Washington Post reported the agency, along with the FBI, had accessed state drivers' license databases (a veritable facial recognition gold mine, as the Post termed it) without the knowledge or consent of drivers. The agency has been criticized for its practices at the US southern border, which have included separating immigrant children from their families and detaining refugees indefinitely. "Clearview AI's agreement is with Homeland Security Investigations (HSI), which uses our technology for their Child Exploitation Unit and ongoing criminal investigations," Clearview AI CEO Hoan Ton-That said in an emailed statement to The Verge.
The US Department of Homeland Security (DHS) has signed a contract with Clearview AI to give Immigration and Customs Enforcement (ICE) access to the controversial facial recognition firm's technology. Tech Inquiry, a non-profit technology watchdog and rights outfit, spotted documents revealing the deal last week. The $224,000 purchase order, signed on August 12, 2020, is for "Clearview licenses" relating to "information technology components," but no further information has been made public. The contract will last until September 4, 2021. Tech Inquiry has submitted a Freedom of Information Act (FOIA) request for the contracts and communication between Clearview AI and ICE relating to the award.
Immigration and Customs Enforcement (ICE) this week signed a deal with Clearview AI to license the facial recognition company's technology. According to a federal purchase order unearthed by the nonprofit Tech Inquiry (via The Verge), an ICE mission support office in Dallas is paying $224,000 for "Clearview licenses." Engadget has contacted Clearview and ICE for details on the scope of this agreement, as well as what ICE plans to do with those licenses. ICE and Clearview signed the deal just as the company is set to defend itself in court. Lawsuits filed in a number of states accuse Clearview of violating privacy and safety laws. Clearview's technology can identify a person by matching their photo against billions of images it has scraped from social media and other internet services.
When Gulzira Aeulkhan finally fled China for Kazakhstan early last year, she still suffered debilitating headaches and nausea. She didn't know if this was a result of the guards at an internment camp hitting her in the head with an electric baton for spending more than two minutes on the toilet, or from the enforced starvation diet. Maybe it was simply the horror she had witnessed – the sounds of women screaming when they were beaten, their silence when they returned to the cell. Like an estimated 1.5 million other Turkic Muslims, Gulzira had been interned in a "re-education camp" in north-west China. After discovering that she had watched a Turkish TV show in which some of the actors wore hijabs, Chinese police had accused her of "extremism" and said she was "infected by the virus" of Islamism.
The US Department of Homeland Security is reportedly worried that face coverings will stymie the police's use of facial recognition technology. According to a report from The Intercept, a bulletin drafted by the DHS discusses the effects of widespread use of face coverings in correspondence with other federal agencies, including Immigration and Customs Enforcement (ICE). 'The potential impacts that widespread use of protective masks could have on security operations that incorporate face recognition systems -- such as video cameras, image processing hardware and software, and image recognition algorithms -- to monitor public spaces during the ongoing Covid-19 public health emergency and in the months after the pandemic subsides,' reads the bulletin according to The Intercept. The bulletin, which was obtained via a trove of police documents leaked in the 'BlueLeaks' hack on law enforcement agencies, mentions that the masks could be used by extremists to avoid facial recognition technology but says there is no current evidence that any such group is doing so. '[There is] no specific information that violent extremists or other criminals in the United States are using protective face coverings to conduct attacks,' reads the document.
The use of facial recognition technology has been spreading rapidly. Before Clearview AI became the target of public scrutiny earlier this year, the facial recognition app was used freely by the company's investors, clients and friends, according to a report Thursday from The New York Times. The app was reportedly demonstrated at events like parties, business gatherings and even on dates. Clearview identifies people by comparing photos to a database of images scraped from social media and other sites. It came under fire after a New York Times investigation in January.
As facial recognition systems become increasingly accurate, more governments and law enforcement organizations are tapping them to verify people's identities, nab criminals and keep transactions secure. In recent months, France's government announced a nationwide facial recognition ID program, a UK court ruled that live facial recognition doesn't violate privacy rights and research revealed that the US Immigration and Customs Enforcement (ICE) agency and the FBI are using facial recognition to apprehend undocumented immigrants. Most of this activity is undertaken in the name of safety and security, but it is also raising major red flags among privacy advocates. They argue that the technology, which can scan and identify faces without consent in crowded streets, retail stores and sports stadiums, is predatory and invasive. Among consumers, the jury is still out.
Artificial Intelligence (AI) and Machine Learning are used to power a variety of important modern software technologies. AI also powers the facial recognition software commonly used by law enforcement, landlords, and private citizens. Of all the uses for AI-powered software, facial recognition is among the most consequential. Security teams at large buildings that rely on video surveillance – like schools and airports – can benefit greatly from this technology. An AI algorithm has the potential to detect a known criminal or an unauthorized person on the property.