Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications

Lin, Haocheng

arXiv.org Artificial Intelligence

The popularisation of applying AI in businesses poses significant challenges relating to ethical principles, governance, and legal compliance. Although businesses have embedded AI into their day-to-day processes, they lack a unified approach for mitigating its potential risks. This paper introduces a framework for ensuring that AI is ethical, controllable, viable, and desirable. Balancing these factors shapes a framework that addresses inherent trade-offs, such as performance against explainability. A successful framework provides practical advice for businesses to meet regulatory requirements in sectors such as finance and healthcare, where compliance with standards like the GDPR and the EU AI Act is critical. Different case studies validate this framework by integrating AI in both academic and practical environments. For instance, large language models are cost-effective alternatives for generating synthetic opinions that emulate attitudes to environmental issues. These case studies demonstrate how a structured framework can enhance transparency and maintain performance, as shown by the alignment between synthetic and expected distributions. This alignment is quantified using metrics such as chi-square test scores, normalized mutual information, and Jaccard indices. Future research should further explore the framework's empirical validation in diverse industrial settings, ensuring the model's scalability and adaptability.
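The paper does not reproduce its evaluation code, but the three metrics it names are standard. Below is a minimal, hypothetical sketch of how such an alignment check might look, with simulated labels standing in for the human-survey and LLM outputs; scipy and scikit-learn here are assumptions, not the authors' actual tooling.

```python
# Hypothetical sketch only: simulated data illustrating the three alignment
# metrics named in the abstract (chi-square, NMI, Jaccard). The paper's real
# categories, data, and tooling are not shown here.
import numpy as np
from scipy.stats import chisquare
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)
n_categories, n_respondents = 5, 500

# Paired labels: the category a human respondent chose vs. the category the
# LLM's synthetic respondent chose for the same prompt (80% agreement here).
human = rng.integers(0, n_categories, size=n_respondents)
synthetic = np.where(rng.random(n_respondents) < 0.8,
                     human,
                     rng.integers(0, n_categories, size=n_respondents))

# Chi-square goodness-of-fit between the two category-count distributions
# (f_exp is rescaled so both arrays sum to the same total, as scipy requires).
human_counts = np.bincount(human, minlength=n_categories)
synth_counts = np.bincount(synthetic, minlength=n_categories)
chi2, p = chisquare(f_obs=synth_counts,
                    f_exp=human_counts * synth_counts.sum() / human_counts.sum())

# Normalized mutual information over the paired per-respondent labels.
nmi = normalized_mutual_info_score(human, synthetic)

# Jaccard index over the sets of categories each side actually uses.
used_h, used_s = set(np.flatnonzero(human_counts)), set(np.flatnonzero(synth_counts))
jaccard = len(used_h & used_s) / len(used_h | used_s)

print(f"chi2={chi2:.2f} (p={p:.3f})  NMI={nmi:.3f}  Jaccard={jaccard:.2f}")
```

On this reading, a high NMI together with a chi-square test that fails to reject the null would indicate the synthetic distribution tracks the expected one.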


A critical review of the EU's 'Ethics Guidelines for Trustworthy AI'

#artificialintelligence

Europe has some of the most progressive, human-centric artificial intelligence governance policies in the world. Compared to the heavy-handed government oversight in China or the Wild West-style, anything-goes approach in the US, the EU's strategy is designed to stoke academic and corporate innovation while also protecting private citizens from harm and overreach. In 2018, the European Commission began its European AI Alliance initiative. The alliance exists so that various stakeholders can weigh in and be heard as the EU considers its ongoing policies governing the development and deployment of AI technologies. Since 2018, more than 6,000 stakeholders have participated in the dialogue through various venues, including online forums and in-person events.


AI Ethics Issues in Real World: Evidence from AI Incident Database

Wei, Mengyi, Zhou, Zhixuan

arXiv.org Artificial Intelligence

With the powerful performance of Artificial Intelligence (AI) also come prevalent ethical issues. Though governments and corporations have curated multiple AI ethics guidelines to curb unethical behavior of AI, the effect has been limited, probably due to the vagueness of the guidelines. In this paper, we take a closer look at how AI ethics issues arise in the real world, in order to reach a more in-depth and nuanced understanding of different ethical issues as well as their social impact. Through a content analysis of the AI Incident Database, an effort to prevent repeated real-world AI failures by cataloging incidents, we identified 13 application areas which often see unethical use of AI, with intelligent service robots, language/vision models, and autonomous driving taking the lead. Ethical issues appear in 8 different forms, from inappropriate use and racial discrimination to physical safety risks and unfair algorithms. With this taxonomy of AI ethics issues, we aim to provide AI practitioners with practical guidance for deploying AI applications ethically.
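Such a content analysis rests on manually coded incident records. As a rough, hypothetical illustration of the tallying step behind the taxonomy (the file name, CSV layout, and column names below are assumptions, not the authors' actual schema):

```python
# Hypothetical sketch: tally manually coded incidents by application area and
# ethical-issue form. File and column names are illustrative assumptions.
import csv
from collections import Counter

area_counts, issue_counts = Counter(), Counter()
with open("coded_incidents.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        area_counts[row["application_area"]] += 1  # e.g. "autonomous driving"
        issue_counts[row["issue_form"]] += 1       # e.g. "racial discrimination"

print("Top application areas:", area_counts.most_common(3))
print("Top issue forms:", issue_counts.most_common(3))
```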


Ethical frameworks for designing AI for telecom

#artificialintelligence

"We can only see a short distance ahead, but we can see plenty there that needs to be done." The journey to artificial intelligence (AI) – or the thinking machine, as it was once called – may have begun more than half a century ago, yet although many of the technological questions may have since been conquered, many bioethical questions have not. Critical questions such as'can we guarantee that new technologies will always do good and never do harm?' and'can we always ensure that they are just, fair, explainable, and accountable?' will rightly and inevitably form the centerpiece of any discussion on future AI deployment. With new breakthroughs, new questions will be asked. While these questions will always be Important, they are also part of a much broader and more holistic conversation between society and technology itself, spanning many different Industries.


Actionable Approaches to Promote Ethical AI in Libraries

Bubinger, Helen, Dinneen, Jesse David

arXiv.org Artificial Intelligence

The widespread use of artificial intelligence (AI) in many domains has revealed numerous ethical issues, from data and design to deployment. In response, countless broad principles and guidelines for ethical AI have been published, followed by specific proposals for how to encourage ethical outcomes of AI. Meanwhile, library and information services are also seeing an increase in the use of AI- and machine learning-powered information systems, but no practical guidance currently exists for libraries to plan for, evaluate, or audit the ethics of intended or deployed AI. We therefore report on several promising approaches for promoting ethical AI that can be adapted from other contexts to AI-powered information services and applied at different stages of the software lifecycle.


AI reflections in 2020

#artificialintelligence

Our article offered the first systematically conducted review of published artificial intelligence (AI) ethics guidelines. We analysed 84 documents and found that, despite an apparent convergence on certain ethical principles at the surface level, there are substantive divergences in how these principles are interpreted, why they are deemed important, what issue, domain, or actors they pertain to, and how they should be implemented. Scholarly and public discussions on AI ethics have certainly evolved. Although the illusion that 'ethical AI' is simply a technological matter still lingers, 2020 has seen an important push towards broader acceptance of the sociotechnicity of AI. Acknowledging the sociotechnical nature of AI systems requires us, as Pratyusha Kalluri put it succinctly, to centre less on fairness, or on 'AI for good', and more on power distribution and power differentials.


How do we practice responsible AI?

#artificialintelligence

A key component of developing responsible AI is diversity. This is because an AI application reflects, and can even amplify, the biases of its developers. As I discussed in my previous post, a diverse team will see things from many different points of view and help reflect many different perspectives in the data. What can we do besides making sure our teams are diverse? Without claiming to have the complete answer, I would like to share some thoughts.


Does AI-driven cloud computing need ethics guidelines?

#artificialintelligence

Just ask any marketing person--it's their job to keep demand for a product or service high. So they depend on advertising and other methods to create brand recognition and a sense of demand for what they sell. These days marketing firms are even more clever, recruiting social media influencers who promote a product or service directly or indirectly--sometimes without disclosing that they are a paid lackey. We're getting better at influencing humans, either by using traditional advertising methods, such as keyword advertising, or, even scarier, by leveraging AI technology as a way to change hearts and minds. Often "the targets" don't even understand that their hearts and minds are being changed.


The EU is funding dystopian Artificial Intelligence projects

#artificialintelligence

Despite its commitment to 'trustworthy' artificial intelligence, the EU is bankrolling AI projects that are questionable, write Fieke Jansen and Daniel Leufer. Fieke Jansen is a PhD candidate at the Data Justice Lab and a Mozilla Foundation Fellow 2019-2020. Daniel Leufer, PhD, is a Mozilla Foundation Fellow 2019-2020 hosted by Access Now and a member of the Working Group on Philosophy of Technology at KU Leuven, Belgium. Discussions of the negative impact of Artificial Intelligence on society include horror stories plucked either from China's high-tech surveillance state and its use of the controversial social credit system, or from the US and its use of recidivism algorithms and predictive policing. Typically, Europe is excluded from these stories, due to the perception that EU citizens are protected from such AI-fueled nightmares through the legal protection offered by the GDPR, or because there is simply no horror-inducing AI deployed across the continent. In contrast to this perception, journalists and NGOs have shown that imperfect and ethically questionable AI systems, such as facial recognition, fraud detection, and smart (a.k.a. surveillance) cities, are also in use across Europe.


AI Ethics for Systemic Issues: A Structural Approach

van der Loeff, Agnes Schim, Bassi, Iggy, Kapila, Sachin, Gamper, Jevgenij

arXiv.org Artificial Intelligence

The debate on AI ethics largely focuses on technical improvements and stronger regulation to prevent accidents or misuse of AI, with solutions relying on holding individual actors accountable for responsible AI development. While useful and necessary, we argue that this "agency" approach disregards more indirect and complex risks resulting from AI's interaction with the socio-economic and political context. This paper calls for a "structural" approach to assessing AI's effects in order to understand and prevent such systemic risks where no individual can be held accountable for the broader negative impacts. This is particularly relevant for AI applied to systemic issues such as climate change and food security, which require political solutions and global cooperation. To properly address the wide range of AI risks and ensure 'AI for social good', agency-focused policies must be complemented by policies informed by a structural approach.