Breaking Down the World's First Proposal for Regulating Artificial Intelligence

#artificialintelligence

Today, artificial intelligence and machine learning tools are ubiquitous across sectors--used for everything from determining an individual's creditworthiness to enabling law enforcement surveillance--and rapidly evolving. Despite this, few nations have rules in place to oversee these systems or mitigate the harms they could cause. On April 21, the European Commission released a draft of its proposed AI regulation, the world's first legal framework addressing the risks posed by artificial intelligence. The draft regulation makes some notable strides, prohibiting the use of certain harmful AI systems and reining in harmful uses of some high-risk algorithmic systems. However, the Commission's proposed regulation contains gaps that, if not addressed, could limit its effectiveness in holding some of the biggest developers and deployers of algorithmic systems accountable.


European Commission Proposes Regulation on Artificial Intelligence

#artificialintelligence

The proposal defines AI as software that is developed with one or more specified techniques and approaches (including machine learning and deep learning) and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.


The EU's new Regulation on Artificial Intelligence

#artificialintelligence

The Commission proposes a risk-based approach, in which the level of risk presented by an AI system determines the compliance requirements that apply. The risk categories are (i) unacceptable risk (these AI systems are prohibited); (ii) high risk; (iii) limited risk; and (iv) minimal risk.
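To make the tiered structure concrete, the sketch below models the four categories and the broad kind of obligations each attracts. This is a minimal illustration rather than the legal text: the tier names follow the article, but the obligation lists and all identifiers (RiskTier, OBLIGATIONS, obligations_for) are simplified assumptions made for this example.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative model of the four risk tiers in the proposed regulation."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict compliance requirements apply
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no additional obligations


# Hypothetical mapping from tier to the broad obligations it attracts;
# the actual requirements are set out in the draft regulation itself.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["placing on the market is prohibited"],
    RiskTier.HIGH: ["conformity assessment", "risk management", "post-market monitoring"],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.MINIMAL: [],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    print(obligations_for(RiskTier.HIGH))
```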


What the draft European Union AI regulations mean for business

#artificialintelligence

As artificial intelligence (AI) becomes increasingly embedded in the fabric of business and our everyday lives, both corporations and consumer-advocacy groups have lobbied for clearer rules to ensure that it is used fairly. In April 2021, the European Union became the first governmental body in the world to issue a comprehensive response in the form of draft regulations aimed specifically at the development and use of AI. The proposed regulations would apply to any AI system used or providing outputs within the European Union, with implications for organizations around the world. Our research shows that many organizations still have a lot of work to do to prepare for this regulation and to address the risks associated with AI more broadly. In 2020, only 48 percent of organizations reported that they recognized regulatory-compliance risks, and even fewer (38 percent) reported actively working to address them.


Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation

arXiv.org Artificial Intelligence

The proposed European Artificial Intelligence Act (AIA) is the first attempt by any major global economy to elaborate a general legal framework for AI. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit expressed in different terminology. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from the existing literature on AI auditing, we help providers of AI systems understand how they can demonstrate adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research on how to further refine the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.
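As an illustration of what a post-market monitoring plan might capture in practice, the sketch below models a minimal log of performance observations for a high-risk AI system over its lifetime. The AIA does not prescribe any data format; the class names and fields here (MonitoringRecord, MonitoringPlan, the incident flag) are hypothetical choices made purely for this example.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class MonitoringRecord:
    """One illustrative entry in a post-market monitoring log."""
    system_id: str
    timestamp: datetime
    metric: str            # e.g. an error-rate measure chosen by the provider
    value: float
    incident: bool = False  # whether the observation should trigger further review
    notes: str = ""


@dataclass
class MonitoringPlan:
    """Illustrative container for records collected over a system's lifetime."""
    system_id: str
    records: list[MonitoringRecord] = field(default_factory=list)

    def log(self, record: MonitoringRecord) -> None:
        """Append a new observation to the monitoring log."""
        self.records.append(record)

    def incidents(self) -> list[MonitoringRecord]:
        """Return entries flagged for review, e.g. to support an audit."""
        return [r for r in self.records if r.incident]


if __name__ == "__main__":
    plan = MonitoringPlan(system_id="resume-screener-v2")
    plan.log(MonitoringRecord("resume-screener-v2", datetime.now(),
                              "false_positive_rate", 0.07, incident=True))
    print(len(plan.incidents()))
```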