Europe to pilot AI ethics rules, calls for participants


The European Commission has announced the launch of a pilot project intended to test its draft ethical rules for developing and applying artificial intelligence technologies, to ensure they can be implemented in practice. The Commission is also aiming to gather feedback and encourage international consensus-building around what it dubs "human-centric AI", targeting, among other forums, the forthcoming G7 and G20 meetings to broaden discussion of the topic.

The Commission's High-Level Expert Group on AI, a body of 52 experts from industry, academia and civil society announced last summer, published its draft ethics guidelines for trustworthy AI in December. A revised version of the document was submitted to the Commission in March. The group has boiled the expert input down to a set of seven "key requirements" for trustworthy AI, over and above the baseline expectation that machine learning technologies respect existing laws and regulations: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

The next stage of the Commission's strategy to foster ethical AI is to see how the draft guidelines operate in a large-scale pilot with a wide range of stakeholders, including international organizations and companies from outside the bloc itself.