Unfair Automated Hiring Systems Are Everywhere
Earlier this month, Lina Khan, chair of the US Federal Trade Commission (FTC), wrote an essay in The New York Times affirming the agency's commitment to regulating AI. But there was one AI application Khan didn't mention that the FTC urgently needs to regulate: automated hiring systems. These range in complexity from tools that merely parse resumes and rank them to systems that green-light candidates and trash applicants deemed unfit. Increasingly, working Americans are obligated to use them if they want to get hired.
- Government > Regional Government > North America Government > United States Government (0.79)
- Law > Business Law (0.63)
Artificial Intelligence Takes Center Stage at EEOC
The U.S. Equal Employment Opportunity Commission (EEOC) recently released a draft of its new Strategic Enforcement Plan (SEP), outlining its priorities in tackling workplace discrimination over the next four years. The playbook, published in the Federal Register in January, indicates that the agency will be on the lookout for discrimination caused by artificial intelligence (AI) tools. "The EEOC is signaling in its draft SEP that it intends to enforce federal nondiscrimination laws equally, whether the discrimination takes place through traditional recruiting or through the use of modern and automated tools," said Andrew M. Gordon, an attorney with the law firm Hinshaw & Culbertson LLP in Fort Lauderdale, Fla. Over the last decade, AI use in the workplace has skyrocketed. Nearly 1 in 4 organizations uses AI to support HR-related activities, according to a 2022 survey by the Society for Human Resource Management (SHRM).
AI And Machine Learning In The Workplace
In recent years, government scrutiny over the use of artificial intelligence (AI) tools in the recruiting and hiring process has risen. Since I wrote about this topic last year, there has been significant activity within several federal government agencies regarding the use of AI and machine learning in the employment context. A better understanding of these actions can help business leaders reduce their risk of legal liability and better understand how to use AI and machine learning responsibly in their organizations. The Equal Employment Opportunity Commission (EEOC) has been particularly active through its EEOC initiative on AI and algorithmic fairness and its joint HIRE initiative with the U.S. Department of Labor. In May 2022, the EEOC released technical guidance regarding potential Americans with Disabilities Act (ADA) implications.
Council Post: AI And Machine Learning In The Workplace: Preparing For 2023
President & CEO of BBB National Programs, a non-profit organization dedicated to fostering a more accountable, trustworthy marketplace. In recent years, government scrutiny over the use of artificial intelligence (AI) tools in the recruiting and hiring process has risen. Since I wrote about this topic last year, there has been significant activity within several federal government agencies regarding the use of AI and machine learning in the employment context. A better understanding of these actions can help business leaders reduce their risk of legal liability and better understand how to use AI and machine learning responsibly in their organizations. The Equal Employment Opportunity Commission (EEOC) has been particularly active through its EEOC initiative on AI and algorithmic fairness and its joint HIRE initiative with the U.S. Department of Labor.
- North America > United States > New York (0.05)
- Europe > Italy > Abruzzo (0.05)
- Law > Labor & Employment Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
From Discrimination in Machine Learning to Discrimination in Law, Part 1: Disparate Treatment
Around 60 years ago, the U.S. Department of Justice Civil Rights Division was established to prohibit discrimination based on protected attributes. Over these 60 years, it has established a set of policies and guidelines to identify and penalize those who discriminate. The widespread use of machine learning (ML) models in routine life has prompted researchers to begin studying the extent to which these models are discriminatory. However, some researchers are unaware that the legal system already has well-established procedures for describing and proving discrimination in law. In this series of blog posts, we'll try to bridge this gap.
- North America > United States > California > Santa Clara County > Palo Alto (0.40)
- North America > Antigua and Barbuda > Barbuda > Codrington (0.05)
- North America > United States > North Carolina > Wake County > Raleigh (0.04)
- North America > United States > New York (0.04)
- Law > Labor & Employment Law (1.00)
- Law > Civil Rights & Constitutional Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
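The legal test most often cited when bridging ML fairness metrics and employment law is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: if one group's selection rate is less than 80% of the most-favored group's rate, the procedure may be flagged for adverse impact. A minimal sketch of that check (the applicant counts below are hypothetical, for illustration only):

```python
def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / total

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the most-favored group's rate."""
    return group_rate / reference_rate

# Hypothetical screening outcomes from an automated hiring tool.
reference_rate = selection_rate(48, 100)  # most-favored group: 48% selected
group_rate = selection_rate(30, 100)      # comparison group: 30% selected

ratio = adverse_impact_ratio(group_rate, reference_rate)
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold: possible adverse impact")
```

This ratio is only a first-pass screen, not a legal conclusion; agencies and courts also weigh sample sizes, statistical significance, and business necessity.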
Artificial Intelligence and Automated Systems Legal Update (2Q22)
The second quarter of 2022 saw U.S. federal lawmakers and agencies focus on draft legislation and guidance aimed at closing the gap to the EU with respect to addressing risks in the development and use of AI systems, in particular risks related to algorithmic bias and discrimination. The American Data Privacy and Protection Act ("ADPPA"), the bipartisan federal privacy bill introduced to the U.S. House in June 2022, marks a major step towards a comprehensive national privacy framework, and companies should take particular note of its inclusion of mandated algorithmic impact assessments. Meanwhile, the EU's regulatory scheme for AI continues to wind its way through the EU legislative process. Though it is unlikely to become binding law until late 2023 at the earliest, the EU policy landscape remains dynamic. Our 2Q22 Artificial Intelligence and Automated Systems Legal Update focuses on these key efforts, and also examines other policy developments within the U.S. and EU that may be of interest to domestic and international companies alike.
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
- North America > United States > New York (0.05)
- North America > United States > California > Los Angeles County > Los Angeles (0.05)
- (6 more...)
- Law > Statutes (1.00)
- Law > Government & the Courts (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Disability Bias in AI Hiring Tools Targeted in US Guidance (1)
Employers have a responsibility to inspect artificial intelligence tools for disability bias and should have plans to provide reasonable accommodations, the Equal Employment Opportunity Commission and Justice Department said in guidance documents. The guidance released Thursday is the first from the federal government on the use of AI hiring tools that focuses on their impact on people with disabilities. The guidance also seeks to inform workers of their right to inquire about a company's use of AI and to request accommodations, the agencies said. "Today we are sounding an alarm regarding the dangers of blind reliance on AI and other technologies that are increasingly used by employers," Assistant Attorney General Kristen Clarke told reporters. The DOJ enforces disability discrimination laws with respect to state and local government employers, while the EEOC enforces such laws in the private sector and federal employers.
- Government > Regional Government > North America Government > United States Government (0.92)
- Law > Government & the Courts (0.56)
DOJ warns AI hiring and productivity tools can violate anti-discrimination law
Federal agencies are the latest to alert companies to potential bias in AI recruiting tools. As the AP notes, the Justice Department and Equal Employment Opportunity Commission (EEOC) have warned employers that AI hiring and productivity systems can violate the Americans with Disabilities Act. These technologies might discriminate against people with disabilities by unfairly ruling out job candidates, applying incorrect performance monitoring, asking for illegal sensitive info or limiting pay raises and promotions. Accordingly, the government bodies have released documents (DOJ, EEOC) outlining the ADA's requirements and offering help to improve the fairness of workplace AI systems. Businesses should ensure their AI allows for reasonable accommodations. They should also consider how any of their automated tools might affect people with various disabilities.
- North America > United States > New York (0.09)
- North America > United States > California (0.09)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Use of algorithms, AI for hiring risks discriminating against disabled, Biden admin says
The Biden administration announced Thursday that employers who use algorithms and artificial intelligence to make hiring decisions risk violating the Americans with Disabilities Act if applicants with disabilities are disadvantaged in the process. The majority of American employers now use the automated hiring technology -- tools such as resume scanners, chatbot interviewers, gamified personality tests, facial recognition and voice analysis. The ADA is supposed to protect people with disabilities from employment discrimination, but just 19 percent of disabled Americans were employed in 2021, according to the Bureau of Labor Statistics. Kristen Clarke, the assistant attorney general for civil rights at the Department of Justice, which made the announcement jointly with the Equal Employment Opportunity Commission, told NBC News there is "no doubt" that increased use of the technologies is "fueling some of the persistent discrimination." "We hope this sends a strong message to employers that we are prepared to stand up for people with disabilities who are locked out of the job market because of increased reliance on these bias-fueled technologies," she said.
- North America > United States > New York (0.06)
- Asia > China (0.06)
- North America > United States > Maryland (0.05)
- North America > United States > Illinois (0.05)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
DOJ, EEOC 'sounding alarm' on how AI, related tools can violate ADA
The technical assistance is a follow-up to EEOC's announcement last fall that it would address the implications of hiring technologies for bias. In October 2021, Chair Charlotte Burrows said the agency would reach out to stakeholders as part of an initiative to learn about algorithmic tools and identify best practices around algorithmic fairness and the use of AI in employment decisions. Other EEOC members, including Commissioner Keith Sonderling, have previously spoken about the necessity of evaluating algorithm-based tools. A confluence of factors has led the agencies to address the topic, Burrows and Clarke said during Thursday's press call. One is the persistent issue of unemployment for U.S. workers with disabilities.
- Law (0.87)
- Government > Regional Government > North America Government > United States Government (0.67)