What we learned while automating bias detection in AI hiring systems for compliance with NYC Local Law 144
Gemma Galdon Clavell, Rubén González-Sendino
Since July 5, 2023, New York City's Local Law 144 has required employers to conduct independent bias audits for any automated employment decision tools (AEDTs) used in hiring processes. The law outlines a minimum set of bias tests that AI developers and implementers must perform to ensure compliance. Over the past few months, we have collected and analyzed audits conducted under this law, identified best practices, and developed a software tool to streamline employer compliance. Our tool, ITACA_144, tailors our broader bias auditing framework to meet the specific requirements of Local Law 144. While automating these legal mandates, we identified several critical challenges that merit attention to ensure AI bias regulations and audit methodologies are both effective and practical. This document presents the insights gained from automating compliance with NYC Local Law 144. It aims to support other cities and states in crafting similar legislation while addressing the limitations of the NYC framework. The discussion focuses on key areas including data requirements, demographic inclusiveness, impact ratios, effective bias metrics, and data reliability.
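The impact-ratio calculation at the heart of a Local Law 144 audit can be sketched in a few lines. This is a minimal illustration, not the ITACA_144 implementation; the function name, data, and category labels are hypothetical. Under the DCWP Final Rule, each category's selection rate is divided by the selection rate of the most-selected category:

```python
# Hypothetical sketch of the Local Law 144 impact-ratio calculation.
# Not the ITACA_144 tool; names and applicant data are illustrative only.
from collections import defaultdict

def impact_ratios(records):
    """records: iterable of (category, selected: bool) pairs.

    Returns {category: (selection_rate, impact_ratio)}, where the
    impact ratio is the category's selection rate divided by the
    selection rate of the most-selected category.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for category, was_selected in records:
        totals[category] += 1
        if was_selected:
            selected[category] += 1
    rates = {c: selected[c] / totals[c] for c in totals}
    best = max(rates.values())
    return {c: (rates[c], rates[c] / best) for c in rates}

# Toy data: 4 applicants per category, 2 vs. 3 selected.
applicants = [
    ("Female", True), ("Female", False), ("Female", True), ("Female", False),
    ("Male", True), ("Male", True), ("Male", True), ("Male", False),
]
for cat, (rate, ratio) in impact_ratios(applicants).items():
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

In practice the same computation is run for sex, race/ethnicity, and their intersections, which is where the data-requirement and demographic-inclusiveness challenges discussed above arise: small or missing demographic groups make these rates unreliable.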
NYC Publishes Final Rules for AEDT Law and Identifies New Enforcement Date
On April 6, 2023, the New York City Department of Consumer and Worker Protection ("DCWP") issued a Notice of Adoption of Final Rule to implement Local Law 144 of 2021, legislation regarding automated employment decision tools ("AEDT Law"). DCWP also announced that it will begin enforcement of the AEDT Law and Final Rule on July 5, 2023. Pursuant to the AEDT Law, an employer or employment agency that uses an automated employment decision tool ("AEDT") in NYC to screen a candidate or employee for an employment decision must subject the tool to a bias audit within one year of the tool's use, make information about the bias audit publicly available, and provide notice of the use of the tool to employees or job candidates. The Final Rule follows the DCWP's previous proposals in September and December 2022 and a review of public comments. Among other things, it clarifies certain phrases within the AEDT Law's definition of "Automated Employment Decision Tool." First, "to substantially assist or replace discretionary decision making" means: (1) "to rely solely on a simplified output (score, tag, classification, ranking, etc.), with no other factors considered"; (2) "to use a simplified output as one of a set of criteria where the simplified output is weighted more than any other criterion in the set"; or (3) "to use a simplified output to overrule conclusions derived from other factors including human decision-making."
New York City Artificial Intelligence Laws
Newly emerging artificial intelligence (AI) technologies could hold a promising solution for streamlining certain employment practices and hiring processes across a number of different industries. Historically, both federal courts and regulatory enforcement agencies have been skeptical of AI tools, scrutinizing them heavily under local, state, and federal anti-discrimination laws. In what was a welcome piece of news for New York-based employers, the New York City Department of Consumer and Worker Protection recently published a set of proposed rules that could drastically reshape hiring under the city's employment legislation. For city employers who heavily utilize automated employment decision tools (AEDTs) in hiring, these proposed rules provide some initial guidance on the laws surrounding artificial intelligence, with hopes of clarifying the ambiguous AI law the city enacted back in 2021. The law, which won't fully go into effect until January 1, 2023, prohibits employers from using any form of AEDT unless a bias audit is completed by an independently sourced auditor and notice requirements are fully met.
The Trouble With 'Responsible AI': Irresponsible Government Regulation
The trouble with AI is that it lacks a clear definition. Lacking a clear definition hasn't stopped many people from fretting about this new, very powerful technology--it is some kind of "intelligence" after all. So we hear calls for "responsible AI," and watch the establishment of "AI ethics"-related corporate departments, research centers, consulting practices, and new job categories. Most important, we see the emergence of new, irresponsible government regulation. A new New York City regulation, going into effect next January, will require companies to conduct audits to identify biases in the AI programs they use for hiring employees.
New York City Will Soon Regulate Use of Artificial Intelligence in Employment Decisions
The law, No. 1894-A, specifically regulates the use of "automated employment decision tools" in making employment decisions, including "any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons." The law protects candidates and employees interviewing and working in New York City and provides that an automated employment decision tool may not be used to screen such candidates for employment and promotion unless the tool: (i) has been subject to a "bias audit" conducted no more than one year prior to the use of such tool; and (ii) a summary of the results of the most recent bias audit of such tool, as well as the distribution date of the tool, have been made publicly available on the website of the employer or employment agency prior to the use of such tool. A bias audit is "an impartial evaluation by an independent auditor," and includes, without limitation, "the testing of an automated employment decision tool to assess the tool's disparate impact on persons of any [gender, race and job level] required to be reported by employers [on the Employer Information Report EEO-1] pursuant to [federal law]." Notably, the law does not state who or what qualifies as an "independent auditor." The law also requires that the New York City employer or employment agency satisfy certain notice requirements.
New York City's New Law Regulating the Use of Artificial Intelligence in Employment Decisions
On Nov. 10, 2021, the New York City Council passed a bill that regulates employers and employment agencies' use of "automated employment decision tools" in making employment decisions. The bill was returned without Mayor Bill de Blasio's signature and lapsed into law on Dec. 11, 2021. The new law takes effect on Jan. 1, 2023. This new law is part of a growing trend towards examining and regulating the use of artificial intelligence (AI) in hiring, promotional and other employment decisions. The new law regulates employers and employment agencies' use of "automated employment decision tools" on candidates and employees residing in New York City.
New York City Enacts Law Restricting Use of Artificial Intelligence in Employment Decisions
Effective January 1, 2023, New York City employers will be restricted from using artificial intelligence machine-learning products in hiring and promotion decisions. In advance of the effective date, employers who already rely upon these AI products may want to begin preparing to ensure that their use comports with the new law's vetting and notice requirements. The new law governs employers' use of "automated employment decision tools," defined as "any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons." The law prohibits the use of such tools to screen a candidate or employee for an employment decision, unless it has been the subject of a "bias audit" no more than one year prior to its use. A "bias audit" is defined as an impartial evaluation by an independent auditor that tests, at minimum, the tool's disparate impact upon individuals based on their race, ethnicity, and sex.
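The disparate-impact test these audits build on is often benchmarked against the EEOC's traditional four-fifths rule, under which a category is flagged when its selection rate falls below 80% of the best-performing category's rate. A minimal sketch follows; note that Local Law 144 itself only requires reporting impact ratios and does not mandate this threshold, and the function name is illustrative:

```python
# Hedged sketch of the EEOC four-fifths rule, a common reference point
# for disparate-impact analysis. Local Law 144 requires reporting impact
# ratios but does not itself impose this 0.8 threshold.
def four_fifths_flag(rate, best_rate, threshold=0.8):
    """Flag a category whose selection rate is below `threshold`
    times the selection rate of the most-selected category."""
    return (rate / best_rate) < threshold

print(four_fifths_flag(0.30, 0.50))  # ratio 0.6 < 0.8, so flagged: True
print(four_fifths_flag(0.45, 0.50))  # ratio 0.9 >= 0.8, not flagged: False
```

Auditors typically report the raw ratios alongside any such flags, since small sample sizes can make a single threshold misleading.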
New York City Law Seeks to Curb Artificial Intelligence Bias in Hiring
Seyfarth Synopsis: A new law in New York City restricts the use of artificial intelligence ("AI") to screen candidates for employment or promotion, and will take effect on January 1, 2023. Although employers have a year to prepare, those that intend to rely on AI during the hiring process after January 2023 must submit their AI tools to a "bias audit" now. Companies that use AI as a component of their hiring process should be aware of a new law in New York City that goes into effect in early January 2023. AI tools can increase the speed and efficiency of candidate reviews. However, New York City's legislature, animated by concerns that bias may be embedded within AI tools, has regulated their use.
Why New York City is cracking down on AI in hiring
The New York City Council voted 38-4 on November 10, 2021 to pass a bill that would require hiring vendors to conduct annual bias audits of artificial intelligence (AI) use in the city's hiring processes and tools. Companies using AI-generated resources will be responsible for disclosing to job applicants how the technology was used in the hiring process, and must allow candidates options for alternative approaches, such as having a person process their application instead. For the first time, a city the size of New York will impose fines for undisclosed or biased AI use, charging up to $1,500 per violation on employers and vendors. Lapsing into law without outgoing Mayor de Blasio's signature, the legislation is now set to take effect in 2023. It is a telling move in how government has started to crack down on AI use in hiring processes and foreshadows what other cities may do to combat AI-generated bias and discrimination.
NYC to Regulate Artificial Intelligence-Based Hiring Tools
On November 10, 2021, the New York City Council passed a bill prohibiting employers and employment agencies from using automated employment decision tools to screen candidates or employees, unless a bias audit has been conducted prior to deploying the tool (the "Bill"). The Bill defines an "automated employment decision tool" as any computational process (derived from machine learning, statistical modeling, data analytics, or artificial intelligence) that issues a simplified output (e.g., a score, classification, or recommendation) to substantially assist or replace human decision-making for employment decisions that impact natural persons. Under the Bill, use of such a tool is permissible if it has been the subject of a bias audit (i.e., an impartial evaluation by an independent auditor, conducted no more than one year prior to the use of the tool) and a summary of the audit results is made publicly available on the website of the employer or employment agency before deployment of the tool. Moreover, employers or employment agencies are required to provide notice to employees or candidates residing in NYC that an automated employment decision tool will be used to assess or evaluate their candidacy, no less than ten business days before using the tool. Candidates also have the right to request an alternative selection process or accommodation under the Bill.