Europe to pilot AI ethics rules, calls for participants

#artificialintelligence

The European Commission has announced the launch of a pilot project intended to test draft ethical rules for developing and applying artificial intelligence technologies, to ensure they can be implemented in practice. It also aims to gather feedback and encourage international consensus building around what it dubs "human-centric AI", targeting, among other forums, the forthcoming G7 and G20 meetings to broaden discussion of the topic. The Commission's High-Level Expert Group on AI, a body of 52 experts from across industry, academia and civil society announced last summer, published its draft ethics guidelines for trustworthy AI in December. A revised version of the document was submitted to the Commission in March. It boils the expert consultation down to seven "key requirements" for trustworthy AI, over and above the need for machine learning technologies to respect existing laws and regulations: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. The next stage of the Commission's strategy to foster ethical AI is to see how the draft guidelines operate in a large-scale pilot with a wide range of stakeholders, including international organizations and companies from outside the bloc itself.


European Commission's Ethics Guidelines on Artificial Intelligence - Lexology

#artificialintelligence

"Artificial intelligence" can be defined as the theory and development of computer systems able to perform tasks that normally require human intervention. Artificial intelligence (AI) is being used in new products and services across numerous industries and for a variety of policy-related purposes, raising questions about the resulting legal implications, including its effect on individual privacy. Aspects of AI related to privacy concerns are the ability of systems to make decisions and to learn by adjusting their code in response to inputs received over time, using large volumes of data. Following the European Commission's declaration on AI in April 2018, its High-Level Expert Group on Artificial Intelligence (AI HLEG) published Draft Ethics Guidelines for Trustworthy AI in December 2018. A consultation process regarding this working document concluded on February 1, 2019, and a revised draft of the document based on the comments that were received is expected to be delivered to the European Commission in April 2019.


EU urges ethics guidance to make AI "trustworthy" - Legal Futures

#artificialintelligence

The European Union has added its voice to the growing call for artificial intelligence (AI) to be regulated, with draft ethics guidelines that underline that it must be human-centric and trustworthy to be effective. The European Commission's expert group on AI said that, to be trustworthy, the technology had to both respect fundamental rights and values and be technically reliable, so that it did not cause unintentional harm. The group claimed its first draft guidelines differed from other attempts to define ethical AI because they set out concrete proposals as well as broad principles. Meanwhile, the EU is due to make policy recommendations on regulating AI in May 2019. The experts said they aimed to foster "responsible competitiveness" rather than stifle innovation.


The European Plan for Artificial Intelligence: Questions and Answers

#artificialintelligence

Why is AI important for Europe? As electricity did in the past, AI is transforming our world. AI is at our fingertips when we translate texts online or use a mobile app to find the best route to our next destination. At home, a smart thermostat can reduce energy bills by up to 25% by analysing the habits of the people who live in the house and adjusting the temperature accordingly. In healthcare, algorithms can help dermatologists make better diagnoses: by learning from large sets of medical images, they can detect, for example, 95% of skin cancers.


The European Perspective on Responsible Computing

Communications of the ACM

We live in a digital world, where every day we interact with digital systems, whether through a mobile device or from inside a car. These systems increasingly make decisions autonomously, over and above their users or on their behalf. As a consequence, ethical issues, privacy among them (for example, unauthorized disclosure and mining of personal data, or access to restricted resources), are emerging as matters of utmost concern, since they affect the moral rights of each human being and have an impact on the social, economic, and political spheres. Europe, through its institutional bodies, is at the forefront of regulation of and reflection on these issues. Privacy with respect to the processing of personal data is recognized as part of the fundamental rights and freedoms of individuals.