Employers have a responsibility to inspect artificial intelligence tools for disability bias and should have plans to provide reasonable accommodations, the Equal Employment Opportunity Commission and Justice Department said in guidance documents. The guidance released Thursday is the first from the federal government on the use of AI hiring tools that focuses on their impact on people with disabilities. The guidance also seeks to inform workers of their right to inquire about a company's use of AI and to request accommodations, the agencies said. "Today we are sounding an alarm regarding the dangers of blind reliance on AI and other technologies that are increasingly used by employers," Assistant Attorney General Kristen Clarke told reporters. The DOJ enforces disability discrimination laws with respect to state and local government employers, while the EEOC enforces such laws for private-sector employers and the federal government.
Researchers at Memorial Sloan Kettering Cancer Center (MSK) have developed a sensor that can be trained to sniff for cancer, with the help of artificial intelligence. Although the training doesn't work the same way one trains a police dog to sniff for explosives or drugs, the sensor has some similarity to how the nose works. The nose can detect more than a trillion different scents, even though it has just a few hundred types of olfactory receptors. The pattern of which odor molecules bind to which receptors creates a kind of molecular signature that the brain uses to recognize a scent. Like the nose, the cancer detection technology uses an array of multiple sensors to detect a molecular signature of the disease.
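The pattern-matching idea described above can be illustrated with a minimal sketch. This is not the MSK team's method or data; the sensor count, reference signatures, and sample values below are all hypothetical. The point is only that an array of non-specific sensors, read together, yields a response pattern that can be matched against known signatures:

```python
# Minimal sketch of pattern-based detection over a sensor array.
# All values below are hypothetical illustrations, not data from the MSK study.
import math

def cosine_similarity(a, b):
    """Compare two response patterns independent of overall signal intensity."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def classify(sample, references):
    """Return the label of the reference signature the sample best matches."""
    return max(references, key=lambda label: cosine_similarity(sample, references[label]))

# Hypothetical reference signatures "learned" during training:
# each is the characteristic response of a 6-sensor array to a condition.
references = {
    "healthy": [0.9, 0.1, 0.4, 0.2, 0.8, 0.1],
    "disease": [0.2, 0.8, 0.1, 0.9, 0.3, 0.7],
}

# A new reading from the array is matched to the closest known pattern.
sample = [0.25, 0.75, 0.15, 0.85, 0.35, 0.65]
print(classify(sample, references))
```

As with the nose's few hundred receptor types recognizing over a trillion scents, the discriminative power here comes from the combination of responses across the array, not from any single sensor being specific to the target.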
Federal agencies are the latest to alert companies to potential bias in AI recruiting tools. As the AP notes, the Justice Department and Equal Employment Opportunity Commission (EEOC) have warned employers that AI hiring and productivity systems can violate the Americans with Disabilities Act. These technologies might discriminate against people with disabilities by unfairly ruling out job candidates, applying incorrect performance monitoring, asking for illegal sensitive info or limiting pay raises and promotions. Accordingly, the government bodies have released documents (DOJ, EEOC) outlining the ADA's requirements and offering help to improve the fairness of workplace AI systems. Businesses should ensure their AI allows for reasonable accommodations. They should also consider how any of their automated tools might affect people with various disabilities.
The Biden administration announced Thursday that employers who use algorithms and artificial intelligence to make hiring decisions risk violating the Americans with Disabilities Act if applicants with disabilities are disadvantaged in the process. A majority of American employers now use automated hiring technology: tools such as resume scanners, chatbot interviewers, gamified personality tests, facial recognition and voice analysis. The ADA is supposed to protect people with disabilities from employment discrimination, but just 19 percent of disabled Americans were employed in 2021, according to the Bureau of Labor Statistics. Kristen Clarke, the assistant attorney general for civil rights at the Department of Justice, which made the announcement jointly with the Equal Employment Opportunity Commission, told NBC News there is "no doubt" that increased use of the technologies is "fueling some of the persistent discrimination." "We hope this sends a strong message to employers that we are prepared to stand up for people with disabilities who are locked out of the job market because of increased reliance on these bias-fueled technologies," she said.
The technical assistance is a follow-up to the EEOC's announcement last fall that it would address the implications of hiring technologies for bias. In October 2021, Chair Charlotte Burrows said the agency would reach out to stakeholders as part of an initiative to learn about algorithmic tools and identify best practices around algorithmic fairness and the use of AI in employment decisions. Other EEOC members, including Commissioner Keith Sonderling, have previously spoken about the necessity of evaluating algorithm-based tools. A confluence of factors has led the agencies to address the topic, Burrows and Clarke said during Thursday's press call. One is the persistent issue of unemployment for U.S. workers with disabilities.
Assistant Attorney General for Civil Rights Kristen Clarke speaks at a news conference on Aug. 5, 2021. The federal government said Thursday that artificial intelligence technology to screen new job candidates or monitor worker productivity can unfairly discriminate against people with disabilities, sending a warning to employers that the commonly used hiring tools could violate civil rights laws.
This guidance explains how algorithms and artificial intelligence can lead to disability discrimination in hiring. The Department of Justice enforces disability discrimination laws with respect to state and local government employers. The Equal Employment Opportunity Commission (EEOC) enforces disability discrimination laws with respect to employers in the private sector and the federal government. The obligation to avoid disability discrimination in employment applies to both public and private employers. Employers, including state and local government employers, increasingly use hiring technologies to help them select new employees.
It's been almost one year since the European Commission unveiled the draft for what may well be one of the most influential legal frameworks in the world: the EU AI Act. According to the Mozilla Foundation, the framework is still a work in progress, and now is the time to actively engage in the effort to shape its direction. Mozilla Foundation's stated mission is to work to ensure the internet remains a public resource that is open and accessible to everyone. Since 2019, Mozilla Foundation has focused a significant portion of its internet health movement-building programs on AI.
Royal Mail is building a fleet of 500 drones to carry mail to remote communities all over the UK, including the Isles of Scilly and the Hebrides. The postal service, which has already conducted successful trials over Scotland and Cornwall, will create more than 50 new postal drone routes over the next three years as part of a new partnership with London company Windracers. Drones, or UAVs (uncrewed aerial vehicles), can help reduce carbon emissions and improve the reliability of island mail services, Royal Mail claims. They offer an alternative to currently used delivery methods that can be affected by bad weather – ferries, conventional aircraft and land-based deliveries. They can also take off from any flat surface (sand, grass or tarmac) provided it is long enough.