A debate on the future of artificial intelligence (AI) in Europe drew a full house at the European Parliament, with MEPs and Commission leaders keen to find the best way to regulate AI and make the most of it, while also protecting us against its worst aspects. "We want to discuss this because you have to be cautious. Some artificial intelligence is simple, low risk, no risk, but some artificial intelligence may be life or death for you," says Margrethe Vestager, the Executive Vice-President of the European Commission. "So if it is very risky we have to be cautious, and all the rest of it, just go, go, go." And AI is already go, go, going fast, revolutionising areas like voice recognition and translation.
As artificial intelligence is progressively developed and deployed, discussions around the ethical, legal, socio-economic and cultural implications of its use are intensifying. What are the challenges and the strategy, and what values can Europe bring to this domain? At the European Conference on AI (ECAI 2020), two special panel sessions discussed the challenges of AI made in the European Union, the shape of future research and industry, and the strategy to retain talent and compete with other world powers. This article collects some of the main messages from these two sessions, which included the participation of AI experts from leading European organisations and networks. Since the publication of European directives and guidance, such as the EC White Paper on AI and the Trustworthy AI Guidelines, Europe has been laying the foundation for its future vision of AI. The European strategy for AI builds on the well-known and accepted principles found in the Charter of Fundamental Rights of the European Union and the Universal Declaration of Human Rights to define a human-centric approach, whose primary purpose is to enhance human capabilities and societal well-being.
See the list of sessions and speakers in the plenary agenda below. The Athens Roundtable is committed to advancing legal stakeholder education in AI and the law, and is being held with the intention that attendees qualify for continuing legal education in their areas of professional practice. Attendance is by invitation only. If you wish to attend, please request an invitation at firstname.lastname@example.org.
Artificial Intelligence is already changing society. Algorithms and machine learning are trading millions of euros in financial markets; they are predicting what people want to search for online and what shows they might like to watch on Netflix; AI is already helping police identify criminals using facial recognition (albeit with mixed results), and sifting through climate change data. Soon, AI could be driving our cars and trains (even our ships and planes). How will these new technologies transform our workplaces, our homes, our cities, and our lives? Inevitably, there will be disruption.
The European Union has published a new framework to regulate the use of artificial intelligence across the bloc's 27 member states. The proposal, which will take years to pass into law and will be subject to many tweaks and amendments along the way, nevertheless constitutes the most ambitious AI regulation proposed globally to date. It covers a wide range of applications, from software in self-driving cars to algorithms used to vet job candidates, and arrives at a time when countries around the world are grappling with the ethical ramifications of artificial intelligence. Similar to the EU's data privacy law, the GDPR, the regulation gives the bloc the ability to fine companies that infringe its rules up to 6 percent of their global revenues, though such maximum penalties are rare in practice. "It is a landmark proposal of this Commission. It's our first ever legal framework on artificial intelligence," said European Commissioner Margrethe Vestager during a press conference.