I welcome the Communications issued by the Commission on 25 April 2018 and 7 December 2018. In my opinion, a hard-law proposal would have been more effective in signalling that the EU is actively creating a common legislative framework on AI, and in preventing fragmentation of the market. Such a legislative proposal could also have ensured the defense of European values. The goal promoted by this working document, a Trustworthy AI built on requirements of ethical purpose and technical robustness, is a welcome one. However, I would like to make some comments.
The High-Level Expert Group on Artificial Intelligence ("AI HLEG"), an independent expert group set up by the European Commission in June 2018 as part of its AI strategy, has published its final Ethics Guidelines for Trustworthy Artificial Intelligence ("AI") (the "Guidelines"). These Guidelines form part of a wider focus by the Commission on AI, with the President-elect of the European Commission, Ursula von der Leyen, commenting most recently, on July 16 in her proposed political guidelines, that: "In my first 100 days in office, I will put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence…". The AI HLEG appreciates that AI has the potential to benefit a wide range of sectors and has a wide variety of uses. However, it also acknowledges that the use of AI brings new challenges and raises various legal and ethical questions. It is with this in mind that the Guidelines have been developed: with a view to providing a framework to achieve and operationalize Trustworthy AI.
How does one teach machines and robots, basically a bunch of circuitry and binary code, to behave ethically? Artificial intelligence isn't evil, but the way its creators wield and apply AI may make it seem so. Two bodies so far, Singapore's IMDA and the European Commission, seem to believe that, like everything else built into machines and robots, ethics too can be programmed into them. More specifically, ethically guided settings can be built in to make AI behave ethically. According to a draft ethics guideline authored by the European Commission's AI High-Level Expert Group (HLEG), trustworthy AI has two components: an ethical purpose and technical robustness.
The European Union has added its voice to the growing call for artificial intelligence (AI) to be regulated, with draft ethics guidelines that underline that it must be human-centric and trustworthy to be effective. The European Commission's expert group on AI said that, to be trustworthy, the technology had both to respect fundamental rights and values and to be technically reliable, so that it did not cause unintentional harm. The group claimed its first draft guidelines differed from other attempts to define ethical AI because they set out concrete proposals as well as broad principles. Meanwhile, the EU is due to make policy recommendations on regulating AI in May 2019. The experts said they aimed to foster "responsible competitiveness" and not to stifle innovation.
The European Commission has announced the launch of a pilot project intended to test draft ethical rules for developing and applying artificial intelligence technologies, to ensure they can be implemented in practice. It is also aiming to garner feedback and encourage international consensus-building around what it dubs "human-centric AI" -- targeting, among other talking shops, the forthcoming G7 and G20 meetings to increase discussion of the topic. The Commission's High-Level Group on AI -- a body of 52 experts from across industry, academia and civil society, announced last summer -- published its draft ethics guidelines for trustworthy AI in December. A revised version of the document was submitted to the Commission in March. It boils the expert consultancy down to a set of seven "key requirements" for trustworthy AI, in addition to the need for machine learning technologies to respect existing laws and regulations. The next stage of the Commission's strategy to foster ethical AI is to see how the draft guidelines operate in a large-scale pilot with a wide range of stakeholders, including international organizations and companies from outside the bloc itself.