Ethics of A.I.: Principles, Rules, and the Way Forward
Artificial Intelligence (AI) is being deployed in ways that touch people's lives, including in healthcare, financial transactions, and the delivery of justice. Advances in AI can have profound impacts across varied societal domains, and in recent years this realisation has sparked ample debate about the values that should guide its development and use. States and international organisations have reacted to these societal concerns in various ways. Some have formed ad-hoc committees tasked with deliberating and providing recommendations on the subject. Examples include the United States National Artificial Intelligence Advisory Committee (NAIAC), which advises the president and various federal officials; the expert group on AI at the Organisation for Economic Co-operation and Development (OECD); the High-Level Expert Group on AI formed by the European Commission; and the Select Committee on AI appointed by the UK Parliament's House of Lords.[1]
AI Ethics Issues in Real World: Evidence from AI Incident Database
With the powerful performance of Artificial Intelligence (AI) also come prevalent ethical issues. Though governments and corporations have curated multiple AI ethics guidelines to curb unethical behavior of AI, their effect has been limited, probably due to the vagueness of the guidelines. In this paper, we take a closer look at how AI ethics issues arise in the real world, in order to gain a more in-depth and nuanced understanding of the different ethical issues as well as their social impact. Through a content analysis of the AI Incident Database, an effort to prevent repeated real-world AI failures by cataloging incidents, we identified 13 application areas that often see unethical use of AI, with intelligent service robots, language/vision models, and autonomous driving taking the lead. Ethical issues appear in 8 different forms, from inappropriate use and racial discrimination to physical safety and unfair algorithms. With this taxonomy of AI ethics issues, we aim to provide AI practitioners with practical guidance for deploying AI applications ethically.
AI Weekly: UN recommendations point to need for AI ethics guidelines
The U.N.'s Educational, Scientific, and Cultural Organization (UNESCO) this week approved a series of recommendations for AI ethics, which aim to recognize that AI can "be of great service" but also raises "fundamental … concerns." UNESCO's 193 member countries, including Russia and China, agreed to conduct AI impact assessments and to put in place "strong enforcement mechanisms and remedial actions" to protect human rights. "The world needs rules for artificial intelligence to benefit humanity. The recommendation[s] on the ethics of AI is a major answer," UNESCO chief Audrey Azoulay said in a press release. "It sets the first global normative framework while giving States the responsibility to apply it at their level. UNESCO will support its … member states in its implementation and ask them to report regularly on their progress and practices."
The Department of Defense is issuing AI ethics guidelines for tech contractors
In 2018, when Google employees found out about their company's involvement in Project Maven, a controversial US military effort to develop AI to analyze surveillance video, they weren't happy. "We believe that Google should not be in the business of war," they wrote in a letter to the company's leadership. Around a dozen employees resigned. Google did not renew the contract in 2019. Project Maven still exists, and other tech companies, including Amazon and Microsoft, have since taken Google's place.
In a bid to promote transparency, the Defense Innovation Unit, which awards DoD contracts to companies, has released what it calls "responsible artificial intelligence" guidelines that it will require third-party developers to follow when building AI for the military, whether that AI is for an HR system or target recognition. The guidelines provide a step-by-step process for companies to follow during planning, development, and deployment. They include procedures for identifying who might use the technology, who might be harmed by it, what those harms might be, and how they might be avoided, both before the system is built and once it is up and running. "There are no other guidelines that exist, either within the DoD or, frankly, the United States government, that go into this level of detail," says Bryce Goodman at the Defense Innovation Unit, who coauthored the guidelines. The work could change how AI is developed by the US government, if the DoD's guidelines are adopted or adapted by other departments.
Done right, human ethics can ensure AI bias is curbed - Tech Wire Asia
AI bias continues to be a prevailing problem when it comes to ensuring proper implementation of artificial intelligence (AI) in many industries. Since the technology has been implemented across several verticals, some of its use cases have been causing… unpleasantness among users. One of the biggest worries surrounding the sticky issue of AI bias is facial recognition. As AI works purely by analyzing the data inputs it has access to, its algorithms may at times not produce entirely accurate results. In the case of facial recognition, systems have wrongly labeled people of certain races as criminals, causing an uproar in society.
AI reflections in 2020
Our article offered the first systematically conducted review of published artificial intelligence (AI) ethics guidelines. We analysed 84 documents and found that, despite an apparent convergence on certain ethical principles at the surface level, there are substantive divergences in how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Scholarly and public discussions on AI ethics have certainly evolved. Although the illusion that 'ethical AI' is simply a technological matter still lingers, 2020 has seen an important push towards broader acceptance of the sociotechnicity of AI. Acknowledging the sociotechnical nature of AI systems requires us, as Pratyusha Kalluri put it succinctly[1], to centre less on fairness, or on 'AI for good', and more on power distribution and power differentials.
Thailand Drafts Ethics Guidelines for AI
Thailand's Digital Economy and Society (DES) Ministry has drafted the country's first artificial intelligence (AI) ethics guidelines. The ministry worked with the Thailand branch of an American multinational technology company and Mahidol University on the guidelines. Thailand is the first country in Asia-Pacific where the tech company contributed to crafting the guidelines as an adviser. According to Thai media, the first principle indicates AI technology must cater to the country's competitiveness and sustainable development. The technology must also comply with the law and international standards.
The global landscape of AI ethics guidelines
In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be 'ethical', there is debate about both what constitutes 'ethical AI' and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.