Employers have a responsibility to inspect artificial intelligence tools for disability bias and should have plans to provide reasonable accommodations, the Equal Employment Opportunity Commission and Justice Department said in guidance documents. The guidance released Thursday is the first from the federal government on the use of AI hiring tools that focuses on their impact on people with disabilities. The guidance also seeks to inform workers of their right to inquire about a company's use of AI and to request accommodations, the agencies said. "Today we are sounding an alarm regarding the dangers of blind reliance on AI and other technologies that are increasingly used by employers," Assistant Attorney General Kristen Clarke told reporters. The DOJ enforces disability discrimination laws with respect to state and local government employers, while the EEOC enforces such laws with respect to private-sector and federal employers.
Researchers at Memorial Sloan Kettering Cancer Center (MSK) have developed a sensor that can be trained to sniff for cancer, with the help of artificial intelligence. Although the training doesn't work the same way one trains a police dog to sniff for explosives or drugs, the sensor has some similarity to how the nose works. The nose can detect more than a trillion different scents, even though it has just a few hundred types of olfactory receptors. The pattern of which odor molecules bind to which receptors creates a kind of molecular signature that the brain uses to recognize a scent. Like the nose, the cancer detection technology uses an array of multiple sensors to detect a molecular signature of the disease.
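The pattern-matching idea described above can be sketched in code. The following is a minimal, hypothetical illustration (not MSK's actual method): each "condition" has a characteristic response pattern across an array of sensors, a noisy read-out is simulated, and a new reading is labeled by finding the stored signature it most closely resembles. The sensor count, profile values, and noise level are all invented for the example.

```python
import random

random.seed(0)

# Hypothetical mean response of each sensor in an 8-element array,
# one "signature" per condition. Values are illustrative only.
PROFILES = {
    "healthy": [0.2, 0.8, 0.1, 0.5, 0.3, 0.7, 0.2, 0.4],
    "cancer":  [0.7, 0.2, 0.6, 0.1, 0.8, 0.3, 0.5, 0.9],
}

def read_sample(condition):
    """Simulate one noisy read-out of the full sensor array."""
    return [v + random.gauss(0, 0.05) for v in PROFILES[condition]]

def classify(reading):
    """Nearest-signature match: the reading is assigned to whichever
    stored response pattern it is closest to (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(PROFILES, key=lambda c: dist(reading, PROFILES[c]))
```

The key point mirrors the nose analogy: no single sensor identifies the disease; it is the joint pattern across the whole array that forms the recognizable signature.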
Federal agencies are the latest to alert companies to potential bias in AI recruiting tools. As the AP notes, the Justice Department and Equal Employment Opportunity Commission (EEOC) have warned employers that AI hiring and productivity systems can violate the Americans with Disabilities Act. These technologies might discriminate against people with disabilities by unfairly ruling out job candidates, applying incorrect performance monitoring, asking for illegal sensitive info or limiting pay raises and promotions. Accordingly, the government bodies have released documents (DOJ, EEOC) outlining the ADA's requirements and offering help to improve the fairness of workplace AI systems. Businesses should ensure their AI allows for reasonable accommodations. They should also consider how any of their automated tools might affect people with various disabilities.
The Biden administration announced Thursday that employers who use algorithms and artificial intelligence to make hiring decisions risk violating the Americans with Disabilities Act if applicants with disabilities are disadvantaged in the process. The majority of American employers now use the automated hiring technology -- tools such as resume scanners, chatbot interviewers, gamified personality tests, facial recognition and voice analysis. The ADA is supposed to protect people with disabilities from employment discrimination, but just 19 percent of disabled Americans were employed in 2021, according to the Bureau of Labor Statistics. Kristen Clarke, the assistant attorney general for civil rights at the Department of Justice, which made the announcement jointly with the Equal Employment Opportunity Commission, told NBC News there is "no doubt" that increased use of the technologies is "fueling some of the persistent discrimination." "We hope this sends a strong message to employers that we are prepared to stand up for people with disabilities who are locked out of the job market because of increased reliance on these bias-fueled technologies," she said.
Developed by Minderoo Foundation, the 'Global Plastic Watch' tool uses advanced satellite data technology and machine learning to create a near-real-time, high-resolution map of plastic pollution. The tool aims to help authorities better manage plastic leakage into the marine environment, and is said to provide the largest-ever open-source dataset of plastic waste across dozens of countries. Global Plastic Watch uses remote sensing satellite imagery from the European Space Agency and a novel machine learning model created in collaboration with digital product agency Earthrise Media. The tool can determine the size and scale of land-based plastic waste sites, which fuel the growing issue of plastic pollution in the world's rivers and oceans. By using the data, governments, industry and communities can evaluate and monitor the risk of land-based plastic waste sites as well as prioritise investment in solutions, Minderoo Foundation said.
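To make the "size and scale" step concrete, here is a hedged sketch of one common post-processing approach (not necessarily the one Global Plastic Watch uses): once a model has flagged which pixels of a satellite image look like plastic waste, adjacent flagged pixels are grouped into discrete sites with a flood fill, and each site's pixel count is converted to an area. The grid, connectivity rule, and per-pixel area are assumptions for illustration.

```python
from collections import deque

def waste_sites(mask, pixel_area_m2=100.0):
    """Group adjacent 'plastic' pixels (value 1) in a binary detection
    mask into sites and return each site's area in square metres,
    largest first. Uses 4-connected breadth-first flood fill."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                q, size = deque([(r, c)]), 0
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                areas.append(size * pixel_area_m2)
    return sorted(areas, reverse=True)
```

A ranked list of site areas like this is the kind of output that would let governments prioritise the largest leakage sources first.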
The technical assistance is a follow-up to EEOC's announcement last fall that it would address the implications of hiring technologies for bias. In October 2021, Chair Charlotte Burrows said the agency would reach out to stakeholders as part of an initiative to learn about algorithmic tools and identify best practices around algorithmic fairness and the use of AI in employment decisions. Other EEOC members, including Commissioner Keith Sonderling, have previously spoken about the necessity of evaluating algorithm-based tools. A confluence of factors has led the agencies to address the topic, Burrows and Clarke said during Thursday's press call. One is the persistent issue of unemployment for U.S. workers with disabilities.
Assistant Attorney General for Civil Rights Kristen Clarke speaks at a news conference on Aug. 5, 2021. The federal government said Thursday that artificial intelligence technology to screen new job candidates or monitor worker productivity can unfairly discriminate against people with disabilities, sending a warning to employers that the commonly used hiring tools could violate civil rights laws.
Developing responsible, human-centered artificial intelligence (AI) is a complex and resource-intensive task. As governments around the world race to meet the opportunities and challenges of developing AI, there remains an absence of deep, technical international cooperation that allows like-minded countries to leverage one another's resources and competitive advantages to facilitate cutting-edge AI research in a manner that upholds and promotes democratic values. Establishing a Multilateral AI Research Institute (MAIRI) would provide such a venue for force-multiplying AI research and development collaboration. It would also reinforce the United States' leadership as an international hub for basic and applied AI research, the development of AI governance models, and the fostering of AI norms that align with human-centric and democratic values. In its final report published in March 2021, the National Security Commission on Artificial Intelligence (NSCAI) recommended that the United States work closely with key allies and partners to establish a MAIRI and called for congressional authorization and funding to allow the National Science Foundation (NSF) to lead the effort.
This guidance explains how algorithms and artificial intelligence can lead to disability discrimination in hiring. The Department of Justice enforces disability discrimination laws with respect to state and local government employers. The Equal Employment Opportunity Commission (EEOC) enforces disability discrimination laws with respect to employers in the private sector and the federal government. The obligation to avoid disability discrimination in employment applies to both public and private employers. Employers, including state and local government employers, increasingly use hiring technologies to help them select new employees.