Ethical Management of Artificial Intelligence

#artificialintelligence

With artificial intelligence (AI) becoming increasingly capable of handling highly complex tasks, many AI-enabled products and services are granted greater autonomy in decision making, potentially exercising diverse influences on individuals and societies. While organizations and researchers have repeatedly shown the blessings of AI for humanity, serious AI-related abuses and incidents have raised pressing ethical concerns. Consequently, researchers from different disciplines widely acknowledge the need for an ethical discourse on AI. However, managers who are eager to spark ethical considerations throughout their organizations receive limited support on how to establish and manage AI ethics. Although prior research addresses technology-related ethics in organizations, research on the ethical management of AI remains limited. Against this background, the goals of this article are to provide a starting point for research on AI-related ethical concerns and to highlight future research opportunities. We propose an ethical management of AI (EMMA) framework focusing on three perspectives: managerial decision making, ethical considerations, and macro- as well as micro-environmental dimensions. With the EMMA framework, we provide researchers with a starting point for addressing the management of the ethical aspects of AI.


Where AI and ethics meet 7wData

#artificialintelligence

Given a swell of dire warnings about the future of artificial intelligence over the last few years, the field of AI ethics has become a hive of activity. The warnings come from experts such as Oxford University's Nick Bostrom, as well as from more public figures such as Elon Musk and the late Stephen Hawking. The picture they paint is bleak. In response, many have dreamed up sets of principles to guide AI researchers and help them negotiate the maze of human morality and ethics. Now, a paper in Nature Machine Intelligence throws a spanner in the works by claiming that such high principles, while laudable, will not give us the ethical AI society we need.


Ethics And Hacking: What You Need To Know

Forbes - Tech

The term hacking gets bandied about a great deal in both the industry and the media. Some stories carry the image of bored tweens building skills while bragging about tearing up someone else's hard work. Other stories talk more about offshore groups using server farms to mass-phish for information. The kinds of damage that hackers can cause are as varied as the functions of a computer or device: lost finances, trade secrets, and files swapped or erased are only the tip of what could be done to a person or company. Sometimes, just being one of the few people aware that different companies are talking to each other about business can mean opportunities for the unethical.


Rethinking AI Ethics - Asimov has a lot to answer for

#artificialintelligence

From whence did this concept of AI 'Ethics' derive? Digital systems that caused great harm to people via injustice, discrimination or exclusion, privacy violations, or just plain cheating, not to mention harm to the environment, have been with us for decades. Ethical issues in analytics and models did not arise with Big Data, Data Science, or AI -- they have been with us for a long time. Was there ever a COBOL Ethics, a DB2 Ethics, an ERP Ethics (well, maybe)? This whole fascination with AI Ethics derives, in my opinion, from Isaac Asimov's Three Laws of Robotics.


Machine Ethics: Creating an Ethical Intelligent Agent

AI Magazine

The newly emerging field of machine ethics (Anderson and Anderson 2006) is concerned with adding an ethical dimension to machines. Unlike computer ethics--which has traditionally focused on ethical issues surrounding humans' use of machines--machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. In this article we discuss the importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics. We also give an example of current research in the field that shows that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of correct ethical judgments and use that principle to guide its own behavior. We need to make a distinction between what James Moor has called an "implicit ethical agent" and an "explicit ethical agent" (Moor 2006).
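As a rough sketch of the kind of learning described above, the snippet below abstracts a decision principle from labeled examples of correct ethical judgments and then applies it to a new case. The duties, the example cases, the difference-vector encoding, and the perceptron-style learning rule are illustrative assumptions for this sketch, not the representation or method reported in the article.

# Minimal sketch (illustrative assumptions, not the authors' method):
# learn a decision principle from labeled examples of correct ethical
# judgments, then use it to choose between actions in a new case.
# Each case compares two actions, A and B, by the difference in how
# strongly they satisfy a set of prima facie duties; a perceptron-style
# rule fits duty weights consistent with the example judgments.

DUTIES = ["nonmaleficence", "beneficence", "respect_autonomy"]

# Each case: (duty-satisfaction vector of action A minus action B,
#             correct judgment: +1 if A is preferable, -1 if B is)
TRAINING_CASES = [
    ([+2, 0, -1], +1),   # A avoids harm at a small cost to autonomy -> A preferred
    ([0, -1, +2], -1),   # A forgoes a benefit to preserve autonomy  -> B preferred
    ([+1, +1, 0], +1),
]

def learn_weights(cases, epochs=100, lr=0.1):
    """Fit duty weights consistent with the example judgments (perceptron rule)."""
    w = [0.0] * len(DUTIES)
    for _ in range(epochs):
        for diff, label in cases:
            score = sum(wi * di for wi, di in zip(w, diff))
            if score * label <= 0:  # judgment not reproduced: adjust weights
                w = [wi + lr * label * di for wi, di in zip(w, diff)]
    return w

def prefer_action(weights, diff):
    """Apply the learned principle: positive score favors action A, negative favors B."""
    return "A" if sum(wi * di for wi, di in zip(weights, diff)) > 0 else "B"

if __name__ == "__main__":
    w = learn_weights(TRAINING_CASES)
    print("learned duty weights:", dict(zip(DUTIES, (round(x, 2) for x in w))))
    print("new case -> prefer action", prefer_action(w, [+1, -1, +1]))

Running the sketch prints the learned duty weights and the preferred action for the new case; the point is only to show the abstract-then-apply pattern, with all duty values chosen arbitrarily for illustration.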