ethics guideline
Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications
The growing adoption of AI in business poses significant challenges relating to ethical principles, governance, and legal compliance. Although businesses have embedded AI into their day-to-day processes, they lack a unified approach for mitigating its potential risks. This paper introduces a framework for ensuring that AI is ethical, controllable, viable, and desirable. Balancing these factors yields a framework that addresses the inherent trade-offs, such as weighing performance against explainability. A successful framework provides practical guidance for businesses to meet regulatory requirements in sectors such as finance and healthcare, where compliance with standards like the GDPR and the EU AI Act is critical. Case studies validate this framework by integrating AI in both academic and practical environments. For instance, large language models are cost-effective alternatives for generating synthetic opinions that emulate attitudes towards environmental issues. These case studies demonstrate how a structured framework can enhance transparency and maintain performance levels, as shown by the alignment between synthetic and expected distributions. This alignment is quantified using metrics such as chi-square test scores, normalized mutual information, and Jaccard indices. Future research should further explore the framework's empirical validation in diverse industrial settings, ensuring the model's scalability and adaptability.
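The three alignment metrics named in the abstract can be sketched as follows. This is a minimal, illustrative implementation assuming categorical survey-style responses; the data values and function names are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: comparing a synthetic (LLM-generated) response
# distribution against an expected (human) one with the three metrics
# named in the abstract. Pure standard library; counts are illustrative.
from collections import Counter
from math import log

def chi_square(observed, expected):
    """Chi-square goodness-of-fit statistic between two count vectors."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def entropy(labels):
    """Shannon entropy (nats) of a sequence of categorical labels."""
    n = len(labels)
    return -sum((c / n) * log(c / n) for c in Counter(labels).values())

def normalized_mutual_info(x, y):
    """NMI between two label sequences (arithmetic-mean normalisation)."""
    n = len(x)
    joint, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    mi = sum((c / n) * log((c / n) / ((px[a] / n) * (py[b] / n)))
             for (a, b), c in joint.items())
    hx, hy = entropy(x), entropy(y)
    return 2 * mi / (hx + hy) if (hx + hy) else 1.0

def jaccard(a, b):
    """Jaccard index between two sets of selected answer categories."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0
```

For example, `chi_square([40, 30, 30], [45, 25, 30])` compares synthetic against expected counts over three answer options (agree/neutral/disagree); values near zero indicate close alignment, while NMI and the Jaccard index both equal 1.0 for perfectly matching labelings and category sets.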
- North America > United States > New York (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- (2 more...)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.67)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Government (1.00)
- (4 more...)
A critical review of the EU's 'Ethics Guidelines for Trustworthy AI'
Europe has some of the most progressive, human-centric artificial intelligence governance policies in the world. Compared to the heavy-handed government oversight in China or the Wild West-style, anything-goes approach in the US, the EU's strategy is designed to stoke academic and corporate innovation while also protecting private citizens from harm and overreach. In 2018, the European Commission began its European AI Alliance initiative. The alliance exists so that various stakeholders can weigh in and be heard as the EU considers its ongoing policies governing the development and deployment of AI technologies. Since 2018, more than 6,000 stakeholders have participated in the dialogue through various venues, including online forums and in-person events.
- Europe (0.55)
- Asia > China (0.25)
- North America > United States > California (0.05)
AI Ethics Issues in Real World: Evidence from AI Incident Database
With the powerful performance of Artificial Intelligence (AI) also come prevalent ethical issues. Though governments and corporations have curated multiple AI ethics guidelines to curb unethical AI behavior, the effect has been limited, probably due to the vagueness of the guidelines. In this paper, we take a closer look at how AI ethics issues arise in the real world, in order to gain a more in-depth and nuanced understanding of different ethical issues as well as their social impact. Through a content analysis of the AI Incident Database, an effort to prevent repeated real-world AI failures by cataloging incidents, we identified 13 application areas that often see unethical use of AI, with intelligent service robots, language/vision models, and autonomous driving taking the lead. Ethical issues appear in 8 different forms, from inappropriate use and racial discrimination to physical safety and unfair algorithms. With this taxonomy of AI ethics issues, we aim to provide AI practitioners with practical guidance for deploying AI applications ethically.
- Europe > United Kingdom (0.14)
- Asia > China (0.05)
- North America > United States > Illinois (0.05)
- (2 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (0.95)
- Transportation > Ground > Road (0.50)
Ethical frameworks for designing AI for telecom
"We can only see a short distance ahead, but we can see plenty there that needs to be done." The journey to artificial intelligence (AI) – or the thinking machine, as it was once called – may have begun more than half a century ago, yet although many of the technological questions may have since been conquered, many bioethical questions have not. Critical questions such as 'can we guarantee that new technologies will always do good and never do harm?' and 'can we always ensure that they are just, fair, explainable, and accountable?' will rightly and inevitably form the centerpiece of any discussion on future AI deployment. With new breakthroughs, new questions will be asked. While these questions will always be important, they are also part of a much broader and more holistic conversation between society and technology itself, spanning many different industries.
- North America > United States > Virginia (0.05)
- Europe > Sweden > Västerbotten County > Umeå (0.05)
- Law (0.98)
- Information Technology > Security & Privacy (0.98)
Actionable Approaches to Promote Ethical AI in Libraries
Bubinger, Helen, Dinneen, Jesse David
The widespread use of artificial intelligence (AI) in many domains has revealed numerous ethical issues from data and design to deployment. In response, countless broad principles and guidelines for ethical AI have been published, and following those, specific approaches have been proposed for how to encourage ethical outcomes of AI. Meanwhile, library and information services too are seeing an increase in the use of AI-powered and machine learning-powered information systems, but no practical guidance currently exists for libraries to plan for, evaluate, or audit the ethics of intended or deployed AI. We therefore report on several promising approaches for promoting ethical AI that can be adapted from other contexts to AI-powered information services and in different stages of the software lifecycle.
- Europe > Germany > Berlin (0.05)
- North America > United States > Illinois (0.04)
- Asia > China > Beijing > Beijing (0.04)
- Law (0.47)
- Health & Medicine (0.47)
AI reflections in 2020
Our article offered the first systematically conducted review of published artificial intelligence (AI) ethics guidelines. We analysed 84 documents and found that, despite an apparent convergence on certain ethical principles at the surface level, there are substantive divergences in how these principles are interpreted, why they are deemed important, what issue, domain, or actors they pertain to, and how they should be implemented. Scholarly and public discussions on AI ethics have certainly evolved. Although the illusion that 'ethical AI' is simply a technological matter still lingers, 2020 has seen an important push towards broader acceptance of the sociotechnicity of AI. Acknowledging the sociotechnical nature of AI systems requires us, as Pratyusha Kalluri put it succinctly, to centre less on fairness, or on 'AI for good', and more on power distribution and power differentials.
How do we practice responsible AI?
A key component to make sure that we develop responsible AI is diversity. This is because an AI application reflects and even amplifies the biases of its developers. As I discussed in my previous post, a diverse team will see things from many different points of view and help to reflect many different perspectives in the data. What can we do besides making sure our teams are diverse? Without claiming to have the complete answer, I would like to share some thoughts.
Does AI-driven cloud computing need ethics guidelines?
Just ask any marketing person--it's their job to keep demand for a product or service high. So they depend on advertising and other methods to create brand recognition and a sense of demand for what they sell. These days marketing firms are even more clever, recruiting social media influencers who promote a product or service directly or indirectly--sometimes without disclosing that they are a paid lackey. We're getting better at influencing humans, either by using traditional advertising methods, such as keyword advertising, or, even scarier, by leveraging AI technology as a way to change hearts and minds. Often "the targets" don't even understand that their hearts and minds are being changed.
The EU is funding dystopian Artificial Intelligence projects
Despite its commitment to 'trustworthy' artificial intelligence, the EU is bankrolling AI projects that are questionable, write Fieke Jansen and Daniel Leufer. Fieke Jansen is a PhD candidate at the Data Justice Lab and Mozilla Foundation Fellow 2019-2020. Daniel Leufer, PhD, is a Mozilla Foundation Fellow 2019-2020 hosted by Access Now and a member of the Working Group on Philosophy of Technology at KU Leuven, Belgium. Discussions on the negative impact of Artificial Intelligence in society include horror stories plucked from either China's high-tech surveillance state and its use of the controversial social credit system, or from the US and its use of recidivism algorithms and predictive policing. Typically, Europe is excluded from these stories, due to the perception that EU citizens are protected from such AI-fueled nightmares through the legal protection offered by the GDPR, or because there is simply no horror-inducing AI deployed across the continent. In contrast to this perception, journalists and NGOs have shown that imperfect and ethically questionable AI systems such as facial recognition, fraud detection, and smart (a.k.a. surveillance) cities are also in use across Europe.
- Europe > Belgium > Flanders > Flemish Brabant > Leuven (0.25)
- Asia > China (0.25)
- Europe > United Kingdom (0.05)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.50)
- Information Technology > Security & Privacy (0.50)
- Government > Regional Government > Europe Government (0.49)
- Law > Statutes (0.32)
AI Ethics for Systemic Issues: A Structural Approach
van der Loeff, Agnes Schim, Bassi, Iggy, Kapila, Sachin, Gamper, Jevgenij
The debate on AI ethics largely focuses on technical improvements and stronger regulation to prevent accidents or misuse of AI, with solutions relying on holding individual actors accountable for responsible AI development. While useful and necessary, we argue that this "agency" approach disregards more indirect and complex risks resulting from AI's interaction with the socio-economic and political context. This paper calls for a "structural" approach to assessing AI's effects in order to understand and prevent such systemic risks where no individual can be held accountable for the broader negative impacts. This is particularly relevant for AI applied to systemic issues such as climate change and food security which require political solutions and global cooperation. To properly address the wide range of AI risks and ensure 'AI for social good', agency-focused policies must be complemented by policies informed by a structural approach.
- North America > Canada > Quebec > Montreal (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Asia > Japan > Honshū > Kansai > Osaka Prefecture > Osaka (0.04)
- Government (1.00)
- Food & Agriculture > Agriculture (0.96)
- Banking & Finance (0.94)
- (3 more...)