Disability Bias in AI Hiring Tools Targeted in US Guidance (1)

#artificialintelligence

Employers have a responsibility to inspect artificial intelligence tools for disability bias and should have plans to provide reasonable accommodations, the Equal Employment Opportunity Commission and Justice Department said in guidance documents. The guidance released Thursday is the first from the federal government on the use of AI hiring tools to focus on their impact on people with disabilities. The guidance also seeks to inform workers of their right to inquire about a company's use of AI and to request accommodations, the agencies said. "Today we are sounding an alarm regarding the dangers of blind reliance on AI and other technologies that are increasingly used by employers," Assistant Attorney General Kristen Clarke told reporters. The DOJ enforces disability discrimination laws with respect to state and local government employers, while the EEOC enforces such laws with respect to private-sector and federal employers.
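The guidance documents themselves are prose, but the kind of check they describe can be sketched in code. The example below is a hypothetical illustration of comparing a screening tool's selection rates across groups of candidates; the toy data, the group labels, and the 80% threshold (a rule of thumb borrowed from general adverse-impact analysis) are assumptions made for the example and are not prescribed by the EEOC or DOJ guidance.

```python
# Hypothetical audit sketch: compare a hiring tool's selection rates across groups.
# The toy outcomes, group labels, and 0.8 threshold are illustrative assumptions,
# not requirements taken from the EEOC/DOJ guidance.

def selection_rate(outcomes):
    """Fraction of candidates the tool advanced (1) rather than screened out (0)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def impact_ratio(group_outcomes, reference_outcomes):
    """Ratio of a group's selection rate to a reference group's selection rate."""
    reference_rate = selection_rate(reference_outcomes)
    return selection_rate(group_outcomes) / reference_rate if reference_rate else float("nan")

# Toy data: 1 = advanced by the screening tool, 0 = screened out.
candidates_with_disability = [1, 0, 0, 1, 0, 0, 0, 1]
candidates_without_disability = [1, 1, 0, 1, 1, 0, 1, 1]

ratio = impact_ratio(candidates_with_disability, candidates_without_disability)
print(f"Impact ratio: {ratio:.2f}")
if ratio < 0.8:  # 80% rule of thumb, borrowed from adverse-impact analysis
    print("Flag for review: the tool may be screening out one group at a much higher rate.")
```

A check like this only surfaces outcome disparities; the guidance also stresses individualized reasonable accommodations, which no aggregate statistic can substitute for.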


U.S. warns of discrimination in using artificial intelligence to screen job candidates

NPR Technology

Assistant Attorney General for Civil Rights Kristen Clarke speaks at a news conference on Aug. 5, 2021. The federal government said Thursday that artificial intelligence technology used to screen new job candidates or monitor worker productivity can unfairly discriminate against people with disabilities, sending a warning to employers that the commonly used hiring tools could violate civil rights laws.


Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring

#artificialintelligence

This guidance explains how algorithms and artificial intelligence can lead to disability discrimination in hiring. The Department of Justice enforces disability discrimination laws with respect to state and local government employers. The Equal Employment Opportunity Commission (EEOC) enforces disability discrimination laws with respect to employers in the private sector and the federal government. The obligation to avoid disability discrimination in employment applies to both public and private employers. Employers, including state and local government employers, increasingly use hiring technologies to help them select new employees.


Clearview AI settles with ACLU on face-recog database sales

#artificialintelligence

Clearview AI has promised to stop selling its controversial face-recognizing tech to most private US companies in a settlement proposed this week with the ACLU. The New York-based startup made headlines in 2020 for scraping billions of images from people's public social media pages. These photographs were used to build a facial-recognition database system, allowing the biz to link future snaps of people to their past and current online profiles. Clearview's software can, for example, be shown a face from a CCTV still, and if it recognizes the person from its database, it can return not only the URLs of that person's social networking pages, where the scraped photos were first found, but also copies of those photos, allowing the person to be identified, traced, and contacted. That same year, the ACLU sued the biz, claiming it violated Illinois' Biometric Information Privacy Act (BIPA), which requires organizations operating in the US state to obtain explicit consent from residents to collect their biometric data, including their photographs.
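As a rough illustration of how a face-search lookup of this general kind could work (not a description of Clearview's actual system), the sketch below assumes a common design: each scraped photo is stored as an embedding vector alongside the URL it was taken from, and a query face is matched against the database by cosine similarity. Every name, dimension, and threshold here is invented for the example.

```python
# Illustrative sketch of an embedding-based face lookup. Nothing here reflects
# Clearview's real implementation; the vectors and URLs are randomly generated stand-ins.
import numpy as np

# Pretend database: one embedding per scraped photo, plus the page it came from.
db_embeddings = np.random.rand(1000, 128).astype(np.float32)
db_urls = [f"https://example.social/profile/{i}" for i in range(1000)]

def cosine_similarity(query, matrix):
    """Cosine similarity between a single query vector and each row of a matrix."""
    query = query / np.linalg.norm(query)
    matrix = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return matrix @ query

def search(query_embedding, top_k=5, threshold=0.9):
    """Return (url, score) pairs whose stored embeddings closely match the query."""
    scores = cosine_similarity(query_embedding, db_embeddings)
    best = np.argsort(scores)[::-1][:top_k]
    return [(db_urls[i], float(scores[i])) for i in best if scores[i] >= threshold]

# In a real pipeline the query would come from running a face-embedding model
# on, say, a CCTV still; here it is just another random vector.
query = np.random.rand(128).astype(np.float32)
print(search(query, threshold=0.0))  # threshold lowered so the toy data returns hits
```

Returning stored source URLs alongside a match is what turns a bare face match into the kind of profile linkage the lawsuit objected to.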


Clearview AI agrees to restrict sales of facial recognition technology

ZDNet

In a landmark settlement, facial recognition company Clearview AI, known for downloading billions of user photos from social media and other websites to build a face-search database for use by law enforcement, has agreed to cease sales to private companies and individuals in the United States. Filed in federal court in Illinois on Monday, the settlement marks the most significant action against the New York-based company to date and reins in a technology that has reportedly been used by Ukraine to track "people of interest" during the ongoing Russian invasion. The lawsuit was brought in 2020 by the non-profit American Civil Liberties Union (ACLU) and Mujeres Latinas en Acción, among others, over alleged violations of an Illinois digital privacy law, with the settlement pending approval by a federal judge. Adopted in 2008, the Illinois law, known as the Biometric Information Privacy Act (BIPA), has so far led to several key tech-privacy settlements, including a $550 million settlement from Facebook related to its facial recognition use. Although Clearview AI has agreed to stop selling its services to the Illinois government and local police for five years, the company will continue to offer its services to other law enforcement and federal agencies, and to government contractors outside of Illinois.


Artificial Intelligence and Automated Systems Legal Update (1Q22)

#artificialintelligence

The update's excerpt quotes statutory language providing that the "Secretary shall support a program of fundamental research, development, and demonstration of energy efficient computing and data center technologies relevant to advanced computing applications, including high performance computing, artificial intelligence, and scientific machine learning."


Two Paths for Digital Disability Law

Communications of the ACM

People with disabilities often cannot count on modern digital devices, software, and services to be accessible. Will streaming video platforms include closed captions for viewers who are deaf or hard of hearing? How will virtual assistants work for users with speech disabilities? Can websites be read aloud by text-to-speech engines for readers who are blind or visually impaired? How will smartphones be accessed by people with physical and mobility disabilities?


Can AI's Voracious Appetite Be Tamed?

#artificialintelligence

In the spring of 2019, artificial intelligence datasets started disappearing from the internet. Such collections -- typically gigabytes of images, video, audio, or text data -- are the foundation for the increasingly ubiquitous and profitable form of AI known as machine learning, which can mimic various kinds of human judgments such as facial recognition. In April, it was Microsoft's MS-Celeb-1M, consisting of 10 million images of 100,000 people's faces -- many of them celebrities, as the name suggests, but also many who were not public figures -- harvested from internet sites. In June, Duke University researchers withdrew their multi-target, multi-camera dataset (DukeMTMC), which consisted of images taken from videos, mostly of students, recorded at a busy campus intersection over 14 hours on a day in 2014. Around the same time, people reported that they could no longer access Diversity in Faces, a dataset of more than a million facial images collected from the internet and released at the beginning of 2019 by a team of IBM researchers. Altogether, about a dozen AI datasets vanished -- hastily scrubbed by their creators after researchers, activists, and journalists exposed an array of problems with the data and the ways it was used, ranging from privacy to race and gender bias to human rights concerns.


Clearview AI aims to put almost every human in facial recognition database

#artificialintelligence

The controversial facial recognition company Clearview AI reportedly told investors that it aims to collect 100 billion photos--supposedly enough to ensure that almost every human will be in its database. "Clearview AI is telling investors it is on track to have 100 billion facial photos in its database within a year, enough to ensure 'almost everyone in the world will be identifiable,' according to a financial presentation from December obtained by The Washington Post," the Post reported today. There are an estimated 7.9 billion people on the planet. The December presentation was part of an effort to obtain new funding from investors, so 100 billion facial images is more of a goal than a firm plan. However, the presentation said that Clearview has already racked up 10 billion images and is adding 1.5 billion images a month, the Post wrote.


Texas sues Meta, saying it misused facial recognition data

NPR Technology

FILE photo - Texas sued Meta on Monday over misuse of biometric data, the latest round of litigation between governments and the company over privacy. Texas sued Facebook parent company Meta for exploiting the biometric data of millions of people in the state - including those who used the platform and those who did not. The company, according to a suit filed by state Attorney General Ken Paxton, violated state privacy laws and should be responsible for billions of dollars in damages. The suit involves Facebook's "tag suggestions" feature, discontinued by the company last year, which used facial recognition to encourage users to link photos to friends' profiles.