The EU's new Regulation on Artificial Intelligence

#artificialintelligence

The Commission proposes a risk-based approach: AI systems are classified according to the level of risk they present, with each level attracting corresponding compliance requirements. The risk categories are (i) unacceptable risk (these AI systems are prohibited); (ii) high risk; (iii) limited risk; and (iv) minimal risk.
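To make the tiered structure concrete, here is a minimal, purely illustrative Python sketch of the four categories and the broad consequence each attracts. The tier names follow the proposal's categories, but the consequence labels are our own shorthand informed by the proposal's general scheme, not the Regulation's wording.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk categories in the Commission's proposed AI Regulation."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative shorthand for the compliance consequence each tier
# attracts under the proposal; the labels are ours, not the Regulation's.
CONSEQUENCE = {
    RiskTier.UNACCEPTABLE: "prohibited outright",
    RiskTier.HIGH: "strict compliance requirements before and after deployment",
    RiskTier.LIMITED: "transparency obligations",
    RiskTier.MINIMAL: "no new obligations",
}

def consequence_for(tier: RiskTier) -> str:
    """Return the broad regulatory consequence for a given risk tier."""
    return CONSEQUENCE[tier]

print(consequence_for(RiskTier.HIGH))
# -> strict compliance requirements before and after deployment
```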


The EU Rules on Artificial Intelligence – Five actions to consider now!

#artificialintelligence

For a couple of years, the EU Commission has worked on rules, regulations, and incentives around artificial intelligence. On April 21st, it released the long-awaited proposal for harmonized regulations on AI. The proposal, to be adopted into national legislation, is a far-reaching set of rules that will affect all organizations, on a par with, and perhaps even more comprehensive than, the GDPR. Understanding the basics of the new rules and their organizational impact is a requirement for all senior executives.


It's time to train professional AI risk managers

#artificialintelligence

Last year I wrote about how AI regulations will lead to the emergence of professional AI risk managers. This has already happened in the financial sector, where regulations patterned after the Basel rules created a profession of financial risk managers. Last week, the EU published a 108-page proposal to regulate AI systems, which will likewise lead to the emergence of professional AI risk managers. The proposal doesn't cover all AI systems, just those deemed high-risk, and the obligations would vary with how risky a specific system is: since systems posing unacceptable risks would be banned outright, most of the regulation concerns high-risk AI systems.


Digital ethics: What do leaders need to think about?

#artificialintelligence

From privacy and surveillance to fairness and transparency, Avanade Ireland's Graham Healy discusses what leaders need to think about when it comes to digital ethics. As digital transformation accelerates, there are plenty of issues for leaders to contend with, from supporting a remote workforce to decentralising data management. However, there are also ethical issues to consider when it comes to digitalisation, including data privacy, transparency and accessibility. According to Graham Healy, the areas on which leaders need to focus their attention depend on several factors, including the business they're in. Healy is the country manager for Avanade in Ireland, a joint venture between Microsoft and Accenture that delivers digital, IT and advisory services to clients all over the world.


The European Union Proposes New Legal Framework for Artificial Intelligence

#artificialintelligence

On 21 April 2021, the European Commission proposed a new, transformative legal framework to govern the use of artificial intelligence (AI) in the European Union. The proposal adopts a risk-based approach whereby uses of artificial intelligence are categorised and restricted according to whether they pose an unacceptable, high, or low risk to human safety and fundamental rights. Widely considered one of the first policies of its kind in the world, the framework would, if passed, have profound and far-reaching consequences for organisations that develop or use technologies incorporating artificial intelligence. The European Commission's proposal has been in the making since 2017, when EU legislators enacted a resolution and a report with recommendations to the Commission on Civil Law Rules on Robotics. In 2020, the European Commission published a white paper on artificial intelligence.


The EU has Released its First Legal Framework for AI Regulation

#artificialintelligence

AI has become part of all our lives. Our cars brake automatically, platforms like Netflix and Spotify make recommendations, and Alexa and Google search for us on command, all powered by artificial intelligence. Although the technology brings great convenience and many advantages, people are also concerned about its dangers, inadequate security and ethical problems among them. In response to these dangers, the European Union has decided to work on a legal framework to regulate the way AI is used.


The European Union Is Proposing Regulations For Artificial Intelligence

#artificialintelligence

Today, the European Commission proposed regulations for the European Union (EU); the proposals are discussed on the EU site. They are of interest not only for facial recognition but as the start of what will be increasing regulation of many aspects of artificial intelligence (AI). It should surprise no one that facial recognition is the first major aspect of AI to meet with government regulation: the technology is highly intrusive and can directly affect the lives of all citizens in many ways.


Ethics of AI: Benefits and risks of artificial intelligence

ZDNet

In 1949, at the dawn of the computer age, the French philosopher Gabriel Marcel warned of the danger of naively applying technology to solve life's problems. Life, Marcel wrote in Being and Having, cannot be fixed the way you fix a flat tire. Any fix, any technique, is itself a product of that same problematic world, and is therefore problematic, and compromised. Marcel's admonition is often summarized in a single memorable phrase: "Life is not a problem to be solved, but a mystery to be lived."

Despite that warning, seventy years later, artificial intelligence is the most powerful expression yet of humans' urge to solve or improve upon human life with computers. But what are these computer systems? As Marcel would have urged, one must ask where they come from, whether they embody the very problems they would purport to solve.

Ethics in AI is essentially questioning, constantly investigating, and never taking for granted the technologies that are being rapidly imposed upon human life. That questioning is made all the more urgent because of scale. AI systems are reaching tremendous size in terms of the compute power they require, and the data they consume. And their prevalence in society, both in the scale of their deployment and the level of responsibility they assume, dwarfs the presence of computing in the PC and Internet eras. At the same time, increasing scale means many aspects of the technology, especially in its deep learning form, escape the comprehension of even the most experienced practitioners.

Ethical concerns range from the esoteric, such as who is the author of an AI-created work of art; to the very real and very disturbing matter of surveillance in the hands of military authorities who can use the tools with impunity to capture and kill their fellow citizens. Somewhere in the questioning is a sliver of hope that with the right guidance, AI can help solve some of the world's biggest problems. The same technology that may propel bias can reveal bias in hiring decisions. The same technology that is a power hog can potentially contribute answers to slow or even reverse global warming. The risks of AI at the present moment arguably outweigh the benefits, but the potential benefits are large and worth pursuing.

As Margaret Mitchell, formerly co-lead of Ethical AI at Google, has elegantly encapsulated, the key question is, "what could AI do to bring about a better society?" Mitchell's question would be interesting on any given day, but it comes within a context that has added urgency to the discussion. Mitchell's words come from a letter she wrote and posted on Google Drive following the departure of her co-lead, Timnit Gebru, in December.


Artificial Intelligence ban slammed for failing to address "vast abuse potential" - Malwarebytes Labs

#artificialintelligence

A written proposal to ban several uses of artificial intelligence (AI) and to place new oversight on other "high-risk" AI applications, published by the European Commission this week, met fierce opposition from several digital rights advocates in Europe. Portrayed by privacy experts as a missed opportunity, the EU Commission's proposal bans four broad applications of AI, but it includes several loopholes that could lead to abuse, and it fails to include a mechanism to add other AI applications to the ban list. It deems certain types of AI applications "high-risk", meaning their developers will need to abide by certain restrictions, but some of those same applications were specifically called out by many digital rights groups earlier this year as "incompatible with a democratic society." It creates new government authorities, but the responsibilities of those authorities may overlap with those of separate authorities devoted to overall data protection. Most upsetting to digital rights experts, it appears, is that the 107-page document (not including the necessary annexes) offers only glancing restrictions on biometric surveillance, like facial recognition software.

