CIPR AI in PR ethics guide

#artificialintelligence

UK EDITION, Ethics Guide to Artificial Intelligence in PR. The AIinPR panel and the authors are grateful for the endorsements and support they have received. In May 2020 the Wall Street Journal reported that 64 per cent of all sign-ups to extremist groups on Facebook were due to Facebook's own recommendation algorithms. There could hardly be a simpler case study in the question of AI and ethics: the intersection of what is technically possible and what is morally desirable. CIPR members who find an automated or AI system used by their organisation perpetrating such online harms have a professional responsibility to try to prevent it. For all PR professionals, this is a fundamental requirement of the ability to practise ethically. The question is: if you worked at Facebook, what would you do? If you're not sure, this guide will help you work out your answer. Alastair McCapra, Chief Executive Officer, CIPR. Artificial Intelligence is quickly becoming an essential technology for ...


"EHLO WORLD" -- Checking If Your Conversational AI Knows Right from Wrong

arXiv.org Artificial Intelligence

In this paper we discuss approaches to evaluating and validating the ethical claims of a Conversational AI system. We outline considerations around both a top-down regulatory approach and bottom-up processes. We describe the ethical basis for each approach and propose a hybrid, which we demonstrate using a customer service chatbot as an example. We speculate on the kinds of top-down and bottom-up processes that would need to exist for a hybrid framework to function successfully as both an enabler and a shepherd across multiple use cases and multiple competing AI solutions.
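The abstract describes the hybrid framework only conceptually. As a rough, hypothetical sketch of how such a check might be wired together for a customer service chatbot, the Python below combines a top-down layer (hard rules imposed on every reply) with a bottom-up layer (aggregated stakeholder feedback). The rule list, feedback fields, and threshold are illustrative assumptions, not taken from the paper.

    # Hypothetical sketch of a hybrid ethics check for a customer-service chatbot.
    # Rules, scores, and thresholds are illustrative only.
    from dataclasses import dataclass

    # Top-down: hard constraints a regulator or policy might impose on every reply.
    BANNED_PHRASES = ["guaranteed returns", "cannot be refunded under any circumstances"]
    REQUIRED_DISCLOSURE = "you are chatting with an automated assistant"

    # Bottom-up: accumulated stakeholder feedback on past replies (e.g. user ratings).
    @dataclass
    class FeedbackRecord:
        reply_id: str
        helpful: bool
        felt_misleading: bool

    def passes_top_down(reply: str) -> bool:
        """Reject replies that violate any hard rule; require the disclosure."""
        text = reply.lower()
        if any(phrase in text for phrase in BANNED_PHRASES):
            return False
        return REQUIRED_DISCLOSURE in text

    def bottom_up_score(history: list[FeedbackRecord]) -> float:
        """Score the system from accumulated user feedback (1.0 = best)."""
        if not history:
            return 0.0
        good = sum(1 for r in history if r.helpful and not r.felt_misleading)
        return good / len(history)

    def hybrid_ok(reply: str, history: list[FeedbackRecord], threshold: float = 0.8) -> bool:
        """A reply ships only if it clears the rules and the track record is acceptable."""
        return passes_top_down(reply) and bottom_up_score(history) >= threshold

    # Example use: one positive feedback record, a compliant reply -> True.
    history = [FeedbackRecord("r1", helpful=True, felt_misleading=False)]
    print(hybrid_ok("Note: you are chatting with an automated assistant. Refunds take 5 days.", history))

In this sketch the top-down rules act as a fixed gate while the bottom-up score lets observed stakeholder experience raise or lower confidence in the system over time, which is one way the "enabler and shepherd" roles could be combined.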


The State of AI Ethics Report (October 2020)

arXiv.org Artificial Intelligence

The 2nd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in the field of AI Ethics since July 2020. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: AI and society, bias and algorithmic justice, disinformation, humans and AI, labor impacts, privacy, risk, and the future of AI ethics. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. These experts include: Danit Gal (Tech Advisor, United Nations), Amba Kak (Director of Global Policy and Programs, NYU's AI Now Institute), Rumman Chowdhury (Global Lead for Responsible AI, Accenture), Brent Barron (Director of Strategic Projects and Knowledge Management, CIFAR), Adam Murray (U.S. Diplomat working on tech policy, Chair of the OECD Network on AI), Thomas Kochan (Professor, MIT Sloan School of Management), and Katya Klinova (AI and Economy Program Lead, Partnership on AI). This report should serve not only as a point of reference on the latest thinking in the field of AI Ethics, but also as a tool for introspection as we aim to foster a more nuanced conversation about the impacts of AI on the world.


A Framework for Ethical AI at the United Nations

arXiv.org Artificial Intelligence

This paper aims to provide an overview of the ethical concerns raised by artificial intelligence (AI) and of the framework needed to mitigate those risks, and to suggest a practical path for ensuring that the development and use of AI at the United Nations (UN) aligns with our ethical values. The overview discusses how AI is an increasingly powerful tool with potential for good, albeit one with a high risk of negative side effects that run counter to fundamental human rights and UN values. It explains the need for ethical principles for AI aligned with principles for data governance, as data and AI are tightly interwoven. It explores the different ethical frameworks that exist and tools such as assessment lists. It recommends that the UN develop a framework consisting of ethical principles, architectural standards, assessment methods, tools and methodologies, and a policy to govern implementation of and adherence to this framework, accompanied by an education program for staff.


AI ethics: How Salesforce is helping developers build products with ethical use and privacy in mind

ZDNet

People have long debated what constitutes the ethical use of technology. But with the rise of artificial intelligence, the discussion has intensified, as it is now algorithms, not humans, that make decisions about how technology is applied. In June 2020, I had a chance to speak with Paula Goldman, Chief Ethical and Humane Use Officer for Salesforce, about how companies can develop technology, specifically AI, with ethical use and privacy in mind. I spoke with Goldman during Salesforce's TrailheaDX 2020 virtual developer conference, but we didn't have a chance to air the interview then. I'm glad to bring it to you now, as the conversation about ethics and technology has only become more pressing while companies and governments around the world use new technologies to address the COVID-19 pandemic. The following is a transcript of the interview, edited for readability. Bill Detwiler: So let's get right to it.