European Commission Publishes Ethics Guidelines for Trustworthy Artificial Intelligence (Lexology)


The High-Level Expert Group on Artificial Intelligence ("AI HLEG"), an independent expert group set up by the European Commission in June 2018 as part of its AI strategy, has published its final Ethics Guidelines for Trustworthy Artificial Intelligence ("AI") (the "Guidelines"). These Guidelines form part of a wider Commission focus on AI. Most recently, on July 16, President-elect of the European Commission Ursula von der Leyen commented in her proposed political guidelines: "In my first 100 days in office, I will put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence…". The AI HLEG appreciates that AI has the potential to benefit a wide range of sectors and has a wide variety of uses. However, it also acknowledges that the use of AI brings new challenges and raises various legal and ethical questions. It is with this in mind that the Guidelines have been developed: to provide a framework for achieving and operationalizing Trustworthy AI.

The European strategy of regulation on artificial intelligence (JD Supra)


On 12 February 2019, the European Parliament adopted a Resolution on a comprehensive European industrial policy on artificial intelligence (AI) and robotics. After describing AI as "one of the strategic technologies of the 21st century", the European Parliament presented several recommendations to the Member States. The Resolution underlines the need to close the European gap with North America and Asia-Pacific, and promotes a coordinated approach at the European level "to be able to compete with the massive investments made by third countries, especially the US and China". Europe lags well behind in private investment in AI, with €2.4 to €3.2 billion in 2016, as opposed to €6.5 to €9.7 billion in Asia-Pacific and €12.1 to €18.6 billion in North America. To address this challenge, the European Parliament sets out a general approach based on a strategic regulatory environment for AI and encourages strong user protections.

Building trust in human-centric AI (FUTURIUM, European Commission)


The Ethics Guidelines for Trustworthy Artificial Intelligence (AI) is a document prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG). This independent expert group was set up by the European Commission in June 2018, as part of the AI strategy announced earlier that year. The AI HLEG presented a first draft of the Guidelines in December 2018. Following further deliberations by the group in light of discussions on the European AI Alliance, a stakeholder consultation, and meetings with representatives from Member States, the Guidelines were revised and published in April 2019. In parallel, the AI HLEG also prepared a revised document elaborating on the definition of Artificial Intelligence used for the purposes of its deliverables.

My comments on the draft ethics guidelines


I welcome the Communications made by the Commission on April 25, 2018 and December 7, 2018. In my opinion, a hard-law proposal would have been more effective at signaling that the EU is actually creating a common legislative framework on AI, and at preventing fragmentation of the market. Such a legislative proposal could have ensured the defense of European values. The goal of Trustworthy AI, through the ethical-purpose and technical-robustness requirements promoted by this working document, is a good thing. However, I would like to make some comments.

European Commission's Ethics Guidelines on Artificial Intelligence (Lexology)


"Artificial intelligence" can be defined as the theory and development of computer systems able to perform tasks that normally require human intervention. Artificial intelligence (AI) is being used in new products and services across numerous industries and for a variety of policy-related purposes, raising questions about the resulting legal implications, including its effect on individual privacy. Aspects of AI related to privacy concerns are the ability of systems to make decisions and to learn by adjusting their code in response to inputs received over time, using large volumes of data. Following the European Commission's declaration on AI in April 2018, its High-Level Expert Group on Artificial Intelligence (AI HLEG) published Draft Ethics Guidelines for Trustworthy AI in December 2018. A consultation process regarding this working document concluded on February 1, 2019, and a revised draft of the document based on the comments that were received is expected to be delivered to the European Commission in April 2019.