
Artificial intelligence: UK and EU take legislative steps - convergence or divergence?

#artificialintelligence

In March this year, the UK government announced an assertive agenda on artificial intelligence (AI) by launching a UK Cyber Security Council and revealing plans to publish a National Artificial Intelligence Strategy (the UK Strategy). The details of the UK Strategy will be released later this year, but at this stage we understand that it will focus on promoting economic growth through the widespread use of AI while emphasising the ethical, safe, and trustworthy development of AI, including through a legislative framework intended to promote public trust and a level playing field. Shortly after the UK government's announcement, the EU Commission published a proposed EU-wide AI legislative framework (the EU Regulation) as part of the Commission's overall "AI package". The EU Regulation focuses on ensuring the safety of individuals and the protection of fundamental human rights, and categorises AI into unacceptable, high-risk, and low-risk use cases. It proposes to protect users "where the risks that the AI systems pose are particularly high". The definition and categories of high-risk uses of AI are broad and capture many, if not most, use cases relating to individuals, including AI used for biometric identification and categorisation of natural persons, management of critical infrastructure, and employment and worker management.


Five early reflections on the EU's proposed legal framework for AI

#artificialintelligence

As the use of AI accelerates around the world, policymakers are asking what frameworks should guide the design and use of AI, and how it can benefit society. The EU is the first institution to take a major step towards answering these questions, through a proposed legal framework for AI released on 21 April 2021. In doing so, the EU is seeking to establish a safe environment for AI innovation and to position itself as a leader in setting "the global gold standard" for regulating AI. One positive aspect of the proposal is its focus on how AI is used rather than on the technology itself: AI is a broad set of technologies, tools and applications, and the same technology can have significantly different impacts depending on the application for which it is used. Shifting the focus away from the underlying technology helps to mitigate the risk of divergent requirements for AI products and services.


When will Washington regulate artificial intelligence?

#artificialintelligence

Artificial intelligence has been on Washington's radar for decades, at least conceptually. More concretely, over the past few years the federal government has sought to keep up with the dizzying pace of advances by Big Tech and any number of smaller startups – not to mention international competitors, most notably China. Congress and the executive branch – including the White House and a wide range of federal agencies in both the national security and civilian economy spheres – have increasingly supported direct investments, promoted incentives for stepped-up R&D, and worked to develop non-regulatory guidance to help the public and private sectors navigate the economic, technological and social implications of AI. Ensuring a leading global role for the US in AI development and implementation is a prime motivator for American policymakers. At the same time, Washington has been reluctant to adopt, or even propose, an EU-style sweeping regulatory regime governing applications and oversight of AI, for fear that it may slow innovation.


The EU Cannot Build a Foreign Policy on Regulatory Power Alone

#artificialintelligence

There are two well-established ideas in trade. Combined, they can lead to a conclusion that is unfortunately wrong. The first idea is that, across a range of economic sectors, the EU and the US have been engaged in a battle to have their model of regulation accepted as the global one, and that the EU is generally winning. The second is that governments can use their regulatory power to extend strategic and foreign policy influence. The conclusion would seem to be that the EU, which has for decades tried to develop a foreign policy, should be able to use its superpower status in regulation and trade to project its interests and its values abroad.


European Union's Laws on Artificial Intelligence

#artificialintelligence

The European Union has developed an artificial intelligence strategy intended to support research and to simplify rules and regulations. The EU's approach to this new technology is to put in place a legal framework that addresses fundamental rights and safety risks. It plans to introduce rules addressing liability issues and to revise sectoral safety legislation accordingly. The new framework is intended to give developers, deployers, and users a degree of clarity, with intervention reserved for issues that existing legislation does not cover.