Artificial intelligence: UK and EU take legislative steps - convergence or divergence?

In March this year, the UK government announced an assertive agenda on artificial intelligence (AI) by launching the UK Cyber Security Council and revealing plans to publish a National Artificial Intelligence Strategy (the UK Strategy). Details of the UK Strategy will be released later this year, but at this stage we understand that it will focus in particular on promoting economic growth through the widespread use of AI while, at the same time, emphasising the ethical, safe, and trustworthy development of AI, including through a legislative framework for AI intended to promote public trust and a level playing field.

Shortly after the UK government's announcement, the EU Commission published a proposed EU-wide AI legislative framework (the EU Regulation) as part of the Commission's overall "AI package". The EU Regulation focuses on ensuring the safety of individuals and the protection of fundamental human rights, and categorises AI into unacceptable-, high-, and low-risk use cases. It proposes to protect users "where the risks that the AI systems pose are particularly high". The definition and categories of high-risk AI use cases are broad and capture many, if not most, use cases relating to individuals, including the use of AI for biometric identification and categorisation of natural persons, management of critical infrastructure, and employment and worker management.
