Critical Perspectives on Artificial Intelligence and Human Rights

#artificialintelligence

This is the fifth blogpost in a series on Artificial Intelligence and Human Rights. Following Data & Society's AI & Human Rights Workshop in April, several participants continued to reflect on the convening and comment on the key issues that were discussed. The following is a summary of articles written by workshop attendees Bendert Zevenbergen, Elizabeth Eagen, and Aubra Anthony. In Marrying Ethics and Human Rights for AI Scrutiny, Bendert Zevenbergen (Princeton University) responds to a post by Christiaan van Veen and Corinne Cath, in which they argue for the value of applying a human rights framework in the development and deployment of AI. Both articles stemmed from workshop debates that considered the relevance of an ethical versus a human rights perspective in AI design and governance.


Artificial Intelligence: What's Human Rights Got To Do With It?

#artificialintelligence

This is the second blogpost in a series on Artificial Intelligence and Human Rights, co-authored by: Christiaan van Veen (Center for Human Rights and Global Justice at NYU Law) & Corinne Cath (Oxford Internet Institute and Alan Turing Institute). Why are human rights relevant to the debate on Artificial Intelligence (AI)? That question was at the heart of a workshop at Data & Society on April 26 and 27 about 'AI and Human Rights,' organized by Dr. Mark Latonero. The timely workshop brought together participants from key tech companies, civil society organizations, academia, government, and international organizations at a time when human rights have been peripheral in discussions on the societal impacts of AI systems. Many of those who are active in the field of AI may have doubts about the 'added value' of the human rights framework to their work or are uncertain how addressing the human rights implications of AI is any different from work already being done on 'AI and ethics'.


Artificial Intelligence Governance and Ethics: Global Perspectives

arXiv.org Artificial Intelligence

Artificial intelligence (AI) is a technology that is increasingly being utilised in society and the economy worldwide, and its deployment is expected to become more prevalent in coming years. AI is increasingly being embedded in our lives, supplementing our pervasive use of digital technologies. But this is being accompanied by disquiet over problematic and dangerous implementations of AI, or indeed over AI systems themselves taking dangerous and problematic actions, especially in fields such as the military, medicine and criminal justice. These developments have led to concerns about whether and how AI systems adhere, and will adhere, to ethical standards. These concerns have stimulated a global conversation on AI ethics and have prompted various actors from different countries and sectors to issue ethics and governance initiatives and guidelines for AI. Such developments form the basis for our research in this report, which combines our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India and the US.


An Algorithmic Equity Toolkit for Technology Audits by Community Advocates and Activists

arXiv.org Artificial Intelligence

A wave of recent scholarship documenting the discriminatory harms of algorithmic systems has spurred widespread interest in algorithmic accountability and regulation. Yet effective accountability and regulation are stymied by a persistent lack of resources supporting public understanding of algorithms and artificial intelligence. Through interactions with a US-based civil rights organization and their coalition of community organizations, we identify a need for (i) heuristics that aid stakeholders in distinguishing between types of analytic and information systems in lay language, and (ii) risk assessment tools for such systems that begin by making algorithms more legible. The present work delivers a toolkit to achieve these aims. This paper both presents the Algorithmic Equity Toolkit (AEKit) as an artifact and details how our participatory process shaped its design. Our work fits within human-computer interaction scholarship as a demonstration of the value of HCI methods and approaches to problems in the area of algorithmic transparency and accountability.


As artificial intelligence progresses, what does real responsibility look like?

#artificialintelligence

Artificial intelligence (AI) technologies, and the data-driven business models underpinning them, are disrupting how we live, interact, work, do business, and govern. The economic, social and environmental benefits could be significant, for example in the realms of medical research, urban design, fair employment practices, political participation, and public service delivery. But evidence is mounting about the potential negative consequences for society and individuals. These include the erosion of privacy, online hate speech, and the distortion of political engagement. They also include amplifying socially embedded discrimination where algorithms based on biased training data are used in criminal sentencing or in job advertising and recruitment.