Artificial Intelligence and Automated Systems Legal Update (2Q21)


After a busy start to the year, regulatory and policy developments related to Artificial Intelligence and Automated Systems ("AI") have continued apace in the second quarter of 2021. Unlike the comprehensive regulatory framework proposed by the European Union ("EU") in April 2021,[1] regulatory guidelines in the U.S. remain narrower and continue to be proposed on an agency-by-agency basis. President Biden has so far sought to amplify the emerging U.S. AI strategy by continuing to grow the national research and monitoring infrastructure kick-started by the 2019 Trump Executive Order[2] and by remaining focused on innovation and competition with China in transformative technologies like AI, semiconductors, and robotics. Most recently, the U.S. Innovation and Competition Act of 2021--sweeping, bipartisan R&D and science-policy legislation--moved rapidly through the Senate. While there has been no major shift away from the previous "hands off" regulatory approach at the federal level, we are closely monitoring efforts by the federal government and enforcers such as the FTC to make fairness and transparency central tenets of U.S. AI policy.

2021 Year in Review: Biometric and AI Litigation


Read on for CPW's highlights of the year's most significant events concerning biometric litigation, as well as our predictions for what 2022 may bring. One of the most important consumer privacy statutes for biometric litigation has been Illinois' Biometric Information Privacy Act ("BIPA"), which regulates the collection, processing, disclosure, and security of the biometric information of Illinois residents. Under the statute, "biometric information" is any information based on "biometric identifiers" that identifies a specific person, regardless of how it is captured, converted, stored, or shared; biometric identifiers are "a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry." Unlike many other data privacy statutes, BIPA includes a private right of action with liquidated statutory damages, which has made it one of the most frequent bases for class actions.

Artificial Intelligence Governance and Ethics: Global Perspectives

Artificial intelligence (AI) is increasingly utilised across society and the economy worldwide, and its use is expected to become even more prevalent in the coming years. AI is increasingly embedded in our lives, supplementing our pervasive use of digital technologies. But this growth has been accompanied by disquiet over problematic and dangerous implementations of AI, or even AI systems themselves taking dangerous and problematic actions, especially in fields such as the military, medicine, and criminal justice. These developments have led to concerns about whether, and how, AI systems adhere, and will adhere, to ethical standards. These concerns have stimulated a global conversation on AI ethics and have prompted various actors from different countries and sectors to issue ethics and governance initiatives and guidelines for AI. Such developments form the basis for our research in this report, which combines our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India, and the US.

New York City Enacts Law Restricting Use of Artificial Intelligence in Employment Decisions


Effective January 1, 2023, New York City employers will be restricted from using artificial intelligence and machine-learning products in hiring and promotion decisions. In advance of the effective date, employers who already rely on these AI products may want to begin preparing to ensure that their use comports with the new law's vetting and notice requirements. The new law governs employers' use of "automated employment decision tools," defined as "any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons." The law prohibits the use of such a tool to screen a candidate or employee for an employment decision unless the tool has been the subject of a "bias audit" no more than one year prior to its use. A "bias audit" is defined as an impartial evaluation by an independent auditor that tests, at minimum, the tool's disparate impact upon individuals based on their race, ethnicity, and sex.
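To make the "bias audit" concept concrete, the sketch below illustrates one common way disparate impact is quantified in employment-discrimination analysis: comparing each group's selection rate to the highest group's selection rate (an "impact ratio," related to the EEOC's four-fifths rule of thumb). This is an illustrative assumption, not the law's prescribed methodology, and the group names and counts are hypothetical.

```python
# Hypothetical illustration of a disparate-impact calculation.
# The NYC law itself does not prescribe a formula; this sketch uses
# the widely known "impact ratio" approach: each group's selection
# rate divided by the highest group's selection rate.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected_count, total_count)."""
    return {group: sel / tot for group, (sel, tot) in outcomes.items()}

def impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical screening-tool results broken down by sex category.
results = {"female": (30, 100), "male": (45, 100)}
print(impact_ratios(results))  # female ratio = 0.30/0.45, male ratio = 1.0
```

Under the four-fifths rule of thumb, a ratio below 0.8 for any group (as in the hypothetical data above, where the female ratio is roughly 0.67) would typically flag the tool for closer scrutiny.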