Do Ethical AI Principles Matter to Users? A Large-Scale Analysis of User Sentiment and Satisfaction

Pasch, Stefan, Cha, Min Chul

arXiv.org Artificial Intelligence

As AI systems become increasingly embedded in organizational workflows and consumer applications, ethical principles such as fairness, transparency, and robustness have been widely endorsed in policy and industry guidelines. However, there is still scarce empirical evidence on whether these principles are recognized, valued, or impactful from the perspective of users. This study investigates the link between ethical AI and user satisfaction by analyzing over 100,000 user reviews of AI products from G2.com. Using transformer-based language models, we measure sentiment across seven ethical dimensions defined by the EU Ethics Guidelines for Trustworthy AI. Our findings show that all seven dimensions are positively associated with user satisfaction. Yet, this relationship varies systematically across user and product types. Technical users and reviewers of AI development platforms more frequently discuss system-level concerns (e.g., transparency, data governance), while non-technical users and reviewers of end-user applications emphasize human-centric dimensions (e.g., human agency, societal well-being). Moreover, the association between ethical AI and user satisfaction is significantly stronger for non-technical users and end-user applications across all dimensions. Our results highlight the importance of ethical AI design from the user's perspective and underscore the need to account for contextual differences across user roles and product types.


A Capability Approach to AI Ethics

Ratti, Emanuele, Graves, Mark

arXiv.org Artificial Intelligence

We propose a conceptualization and implementation of AI ethics via the capability approach. We aim to show that conceptualizing AI ethics through the capability approach has two main advantages for AI ethics as a discipline. First, it helps clarify the ethical dimension of AI tools. Second, it provides guidance to implementing ethical considerations within the design of AI tools. We illustrate these advantages in the context of AI tools in medicine, by showing how ethics-based auditing of AI tools in medicine can greatly benefit from our capability-based approach.


Ethical Concerns of Generative AI and Mitigation Strategies: A Systematic Mapping Study

Huang, Yutan, Arora, Chetan, Houng, Wen Cheng, Kanij, Tanjila, Madulgalla, Anuradha, Grundy, John

arXiv.org Artificial Intelligence

The evolution of Generative AI, particularly Large Language Models (LLMs), has seen remarkable advancements since 2020 with the introduction of models like ChatGPT and Bard. LLMs have revolutionized tasks such as writing assistance, code generation, and customer support automation by leveraging vast amounts of data to generate coherent and contextually relevant natural language (NL) responses [1, 2]. As a subset of Generative AI--systems designed to create new content--LLMs go beyond traditional AI techniques, which focus primarily on analyzing existing data. LLMs, in contrast, are capable of generating text, images, and music that mimic human creativity [3]. This capability is powered by advancements in neural network architectures, especially transformers, which enable LLMs to learn the nuances of human language and produce semantically accurate content [4].


The Ethical Dimensions of Artificial Intelligence: Exploring the Moral Implications and Challenges of Advanced Technology

Wooten, Edris

#artificialintelligence

The Ethical Dimensions of Artificial Intelligence is a must-read for anyone interested in the impact of AI on society and the ethical considerations that arise from its use. In this book, you will discover the ethical implications of AI, including issues related to privacy, bias, and accountability. You'll learn about the ethical frameworks that guide AI development and how to navigate the complex ethical landscape of this rapidly evolving technology. With the help of this book, you'll gain a deeper understanding of the ethical considerations surrounding AI and how to approach these issues in a responsible and informed manner. You'll also explore the ways in which AI is transforming various industries, from healthcare to finance to transportation, and the ethical implications of these changes.


Journal of Business Ethics

#artificialintelligence

Artificial Intelligence (AI), defined as "a system's ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation" (Kaplan and Haenlein 2019, p. 17), is one of the most popular topics across a variety of academic disciplines, industry sectors, and business functions, and widely influences society at large. While many will first think of computational, organizational, or technological issues related to AI, there is an entire set of ethical dimensions triggered by this new era which urgently need to be analyzed, discussed, and reflected upon. As pointed out by Martin and Freeman (2004, p. 353) "business ethicists are uniquely positioned to analyze the relationship between business, technology, and society". There are many examples where inappropriate use of AI has resulted in unethical outcomes and behavior. Examples include image recognition services which make offensive classifications of minorities due to biased algorithms; Microsoft's AI chatbot Tay which became racist and adopted hate speech after only one day; and Amazon's facial recognition technology which simply failed to recognize users with darker skin colors.


The Ethical Dimension of Artificial Intelligence

#artificialintelligence

However, the frequency with which the Canadian government employs AI is worrying for some. Fears of governments using AI to infringe on private freedoms are very real, as some countries, such as China, have begun to use facial recognition software for police surveillance. Furthermore, people are rapidly losing confidence in social media platforms and Internet security, often citing the absence of human intervention in the decisions that algorithms make as the cause. In addition, 54% of North Americans express concern for their online privacy, and the non-consensual use of personal data by social media companies and federal governments does little to ease these fears. While many Canadians are increasingly concerned about their online security due to threats posed by internet companies, at least 59% fear that their personal information will be used by their own government.


SIENNA and SHERPA training on ethics and artificial intelligence for European Commission Sherpa Project

#artificialintelligence

The development and use of Artificial Intelligence (AI) will have both social and ethical impact. Because this technology has consequences for society, its social and ethical impacts are key topics that need to be addressed by researchers and policymakers. Right now, the European Union is funding two projects addressing these issues under the "Science for and with Society" funding scheme: SIENNA and SHERPA, which will deliver results that can help shape the ethical framework on new technological developments. During the workshop, participants will discuss a variety of topics, ranging from the application and impact of AI and its social acceptance to standardisation efforts, ethics by design, and regulatory options. The workshop is tailored to offer scientific support to policymakers to help them make informed decisions regarding the deployment and development of AI in EU funded projects.


Why Businesses Should Adopt an AI Code of Ethics -- Now - InformationWeek

#artificialintelligence

The issue of ethical development and deployment of applications using artificial intelligence (AI) technologies is rife with nuance and complexity. Because humans are diverse -- different genders, races, values and cultural norms -- AI algorithms and automated processes won't work with equal acceptance or effectiveness for everyone worldwide. What most people agree upon is that these technologies should be used to improve the human condition. There are many AI success stories with positive outcomes in fields from healthcare to education to transportation. But there have also been unexpected problems with several AI applications, including facial recognition, and unintended bias in numerous others.


Ethical Dimensions of Using Artificial Intelligence in Health Care

#artificialintelligence

An artificially intelligent computer program can now diagnose skin cancer more accurately than a board-certified dermatologist [1]. Better yet, the program can do it faster and more efficiently, requiring a training data set rather than a decade of expensive and labor-intensive medical education. While it might appear that it is only a matter of time before physicians are rendered obsolete by this type of technology, a closer look at the role this technology can play in the delivery of health care is warranted to appreciate its current strengths, limitations, and ethical complexities. Artificial intelligence (AI), which includes the fields of machine learning, natural language processing, and robotics, can be applied to almost any field in medicine [2], and its potential contributions to biomedical research, medical education, and delivery of health care seem limitless. With its robust ability to integrate and learn from large sets of clinical data, AI can serve roles in diagnosis [3], clinical decision making [4], and personalized medicine [5]. For example, AI-based diagnostic algorithms applied to mammograms are assisting in the detection of breast cancer, serving as a "second opinion" for radiologists [6].


On Artificial Intelligence and the Public Good - Markkula Center for Applied Ethics, Santa Clara University

#artificialintelligence

Recently, the federal Office of Science and Technology Policy (OSTP) issued a request for public feedback on "overarching questions in [Artificial Intelligence], including AI research and the tools, technologies, and training that are needed to answer these questions." OSTP is in the process of co-hosting four public workshops in 2016 on topics in AI in order to spur public dialogue on these topics and to identify challenges and opportunities related to this emerging technology. These topics include the legal and governance issues for AI, AI for public good, safety and control for AI, and the social and economic implications of AI. The Request for Information lists 10 specific topics on which the government would appreciate feedback, including "the use of AI for public good" and "the most pressing, fundamental questions in AI research, common to most or all scientific fields." One of the academics who answered the request for information is Shannon Vallor, who is the William J. Rewak Professor at Santa Clara University and one of the Markkula Center for Applied Ethics' faculty scholars.