Critical Perspectives on Artificial Intelligence and Human Rights


This is the fifth blogpost in a series on Artificial Intelligence and Human Rights. Following Data & Society's AI & Human Rights Workshop in April, several participants continued to reflect on the convening and to comment on the key issues that were discussed. The following is a summary of articles written by workshop attendees Bendert Zevenbergen, Elizabeth Eagen, and Aubra Anthony. In Marrying Ethics and Human Rights for AI Scrutiny, Bendert Zevenbergen (Princeton University) responds to a post by Christiaan van Veen and Corinne Cath, in which they argue for the value of applying a human rights framework to the development and deployment of AI. Both articles stemmed from workshop debates on the relative merits of an ethical versus a human rights perspective in AI design and governance.

Artificial Intelligence: What's Human Rights Got To Do With It?


This is the second blogpost in a series on Artificial Intelligence and Human Rights, co-authored by Christiaan van Veen (Center for Human Rights and Global Justice at NYU Law) and Corinne Cath (Oxford Internet Institute and Alan Turing Institute). Why are human rights relevant to the debate on Artificial Intelligence (AI)? That question was at the heart of a workshop on 'AI and Human Rights' held at Data & Society on April 26 and 27, organized by Dr. Mark Latonero. The timely workshop brought together participants from key tech companies, civil society organizations, academia, government, and international organizations at a time when human rights have been peripheral to discussions of the societal impacts of AI systems. Many of those active in the field of AI may doubt the 'added value' of the human rights framework to their work, or may be uncertain how addressing the human rights implications of AI differs from work already being done on 'AI and ethics'.

Artificial Intelligence Governance and Ethics: Global Perspectives

Artificial intelligence (AI) is a technology that is increasingly being utilised in society and the economy worldwide, and its implementation is expected to become more prevalent in coming years. AI is increasingly being embedded in our lives, supplementing our pervasive use of digital technologies. But this trend is accompanied by disquiet over problematic and dangerous implementations of AI, or indeed over AI systems themselves taking dangerous and problematic actions, especially in fields such as the military, medicine, and criminal justice. These developments have led to concerns about whether and how AI systems adhere, and will continue to adhere, to ethical standards. These concerns have stimulated a global conversation on AI ethics, with various actors from different countries and sectors issuing ethics and governance initiatives and guidelines for AI. Such developments form the basis for our research in this report, which combines our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India, and the US.

As artificial intelligence progresses, what does real responsibility look like?


Artificial intelligence (AI) technologies, and the data-driven business models underpinning them, are disrupting how we live, interact, work, do business, and govern. The economic, social, and environmental benefits could be significant, for example in medical research, urban design, fair employment practices, political participation, and public service delivery. But evidence is mounting about the potential negative consequences for society and individuals. These include the erosion of privacy, online hate speech, and the distortion of political engagement. They also include the amplification of socially embedded discrimination when algorithms trained on biased data are used in criminal sentencing or in job advertising and recruitment.

EPIC - Algorithmic Transparency: End Secret Profiling


As more decisions become automated and processed by algorithms, these processes become more opaque and less accountable. The public has a right to know about the data processes that affect their lives so they can correct errors and contest decisions made by algorithms. Personal data collected from our social connections and online activities are used by governments and companies to make determinations about our ability to fly, obtain a job, or get a security clearance, and even to determine the severity of criminal sentences. These opaque, automated decision-making processes carry risks of secret profiling and discrimination, and they undermine our privacy and freedom of association. Without knowledge of the factors that form the basis for decisions, it is impossible to know whether governments and companies engage in practices that are deceptive, discriminatory, or unethical.