I welcome the Communications issued by the Commission on 25 April 2018 and 7 December 2018. In my opinion, a hard-law proposal would have been more effective at signalling that the EU is actually creating a common legislative framework on AI, and at preventing fragmentation of the market. Such a legislative proposal could also have ensured the defense of European values. The goal of trustworthy AI, pursued through the ethical-purpose and technical-robustness requirements promoted by this working document, is welcome. However, I would like to make some comments.
How does one teach machines and robots, basically a bunch of circuitry and binary code, to behave ethically? Artificial intelligence is not evil in itself, but the way its creators wield and apply it may make it seem so. Two bodies so far, Singapore's IMDA and the European Commission, seem to believe that, like everything else built into machines and robots, ethics too can be programmed into them. More specifically, ethics-guided settings can be built in to make AI behave ethically. According to an ethics guideline draft authored by the European Commission's High-Level Expert Group on AI (HLEG), trustworthy AI has two components.
The European Union has added its voice to the growing call for artificial intelligence (AI) to be regulated, with draft ethics guidelines underlining that it must be human-centric and trustworthy to be effective. The European Commission's expert group on AI said that, to be trustworthy, the technology had both to respect fundamental rights and values and to be technically reliable, so that it did not cause unintentional harm. The group claimed its first draft guidelines differed from other attempts to define ethical AI because they set out concrete proposals as well as broad principles. Meanwhile, the EU is due to make policy recommendations on regulating AI in May 2019. The experts said they aimed to foster "responsible competitiveness" without stifling innovation.
The European Commission has announced the launch of a pilot project intended to test draft ethical rules for developing and applying artificial intelligence technologies, to ensure they can be implemented in practice. It is also aiming to gather feedback and encourage international consensus-building around what it dubs "human-centric AI" -- targeting, among other talking shops, the forthcoming G7 and G20 meetings to increase discussion of the topic. The Commission's High-Level Expert Group on AI -- a body of 52 experts from across industry, academia and civil society, announced last summer -- published its draft ethics guidelines for trustworthy AI in December. A revised version of the document was submitted to the Commission in March. It boils the expert consultancy down to a set of seven "key requirements" that trustworthy AI must meet, in addition to machine learning technologies needing to respect existing laws and regulations. The next stage of the Commission's strategy to foster ethical AI is to see how the draft guidelines operate in a large-scale pilot with a wide range of stakeholders, including international organizations and companies from outside the bloc itself.
There are many stereotypes and preconceptions about artificial intelligence, but AI should above all be considered as a tool, albeit a highly sophisticated one that is constantly evolving and improving as our human intelligence deepens. What makes AI different from any other type of tool is the ability to learn and act accordingly. In the same way that human intelligence has allowed us to flourish as a species by turning our collective hand to pretty much anything, it is the ability of artificial intelligence to improve so many different aspects of our lives that is so exciting. AI is already a day-to-day reality for many of us, from apps that know what kind of music we like without us asking, to 'personal assistants' on our smartphones or in our homes that can seemingly answer any question we may have in a matter of seconds. Yet these simple examples are just scratching the surface of what AI can do.