Artificial Intelligence and Ethics: What You Should Know


AI is transforming how enterprises operate and engage with people. The technology can automate simple, repetitive tasks, unlock insights hidden inside data, and help adopters make better-informed decisions. Yet as AI embeds itself in the IT mainstream, concerns are growing over its potential misuse. To address the ethical problems that can arise from non-human data analysis and decision-making, a growing number of enterprises are paying attention to how AI can be kept from making potentially harmful decisions. AI remains a powerful technology with an immense number of positive attributes.

Thoughts on Data Ethics


Notes from the President, based on the CDO Magazine panel "Building AI/ML Organizations that are Ethical by Design." Below are answers to some thoughtful questions from the panel, which featured our president, Amina Al Sherif, and was moderated by Anno.Ai Chief AI Officer Ashley Antonides. AI and data ethics can most broadly be defined as the guidelines by which your organization determines right from wrong. We loved this definition because it acknowledges that what makes an algorithm ethically right or wrong depends largely on an organization's internal culture and ethical fabric, as well as on the vertical or sector that organization serves. AI and data ethics is mostly about your data and how it is handled in training a machine learning (ML) system. Models themselves cannot be inherently unethical, because of their extreme dependence on the data on which they are trained.

Social Robots, AI, and Ethics - Markkula Center for Applied Ethics, Santa Clara University


The world is rapidly developing robotic and artificial intelligence (AI) technologies. These technologies offer enormous potential benefits, but they also bring drawbacks and dangers. Using the Ethics Center's Framework for Ethical Decision Making, we can consider some of the ethical issues involved with robots and AI. Utilitarianism is a form of moral reasoning that emphasizes the consequences of actions. It typically tries to maximize happiness and minimize suffering, though utilitarian evaluation can take other forms, such as cost-benefit analysis.

The Ethics Of New Technology

NPR Technology

The episode's guest is a co-founder of Flickr, a venture capitalist, and host of the podcast Should This Exist?, which explores the impact of technology on humanity.

5 steps to incorporate ethics into your artificial intelligence strategy


By 2022, nearly a third of consumers will rely on artificial intelligence to decide what they eat, what they wear, or where they live. To keep up with consumer demands, enterprises are adopting AI at a rapid pace, with industries from finance to healthcare embracing the transformational nature of this technology. Yet as virtual assistants become smarter and robots sound more like humans, IT leaders will become responsible for drawing ethical boundaries around the use of this technology. While many frameworks exist for creating an ethical AI strategy, these principles are not set in stone. Rather, formulating an ethical strategy requires a more individualized, question-based approach.