Consultation Human Rights and Technology

#artificialintelligence

The Australian Human Rights Commission is conducting a project on Human Rights and New Technology (the Project). As part of the Project, the Commission and the World Economic Forum are working together to explore models of governance and leadership on artificial intelligence (AI) in Australia. This White Paper has been produced to support a consultation process that aims to identify how Australia can simultaneously foster innovation and protect human rights amid unprecedented growth in new technologies such as AI. The White Paper complements the broader issues raised in the Commission's Human Rights and Technology Issues Paper. The consultation conducted on the Issues Paper and White Paper will inform the Commission's proposals for reform, to be released in mid-2019. The White Paper asks whether Australia needs an organisation to take a central role in promoting responsible innovation in AI and related technology and, if so, what that organisation could look like.


Artificial Intelligence Governance and Ethics: Global Perspectives

arXiv.org Artificial Intelligence

Artificial intelligence (AI) is a technology that is increasingly being used in society and the economy worldwide, and its deployment is expected to become more prevalent in coming years. AI is increasingly being embedded in our lives, supplementing our pervasive use of digital technologies. But this is being accompanied by disquiet over problematic and dangerous implementations of AI, or even AI systems themselves taking dangerous and problematic actions, especially in fields such as the military, medicine and criminal justice. These developments have led to concerns about whether and how AI systems adhere, and will adhere, to ethical standards. These concerns have stimulated a global conversation on AI ethics and have resulted in various actors from different countries and sectors issuing ethics and governance initiatives and guidelines for AI. Such developments form the basis for our research in this report, which combines our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India and the US.


AI Ethics Principles in Practice: Perspectives of Designers and Developers

arXiv.org Artificial Intelligence

While consensus is emerging across the various published AI ethics principles, a gap remains between high-level principles and the practical techniques that can be readily adopted to design and develop responsible AI systems. We examine the practices and experiences of researchers and engineers from Australia's national scientific research agency (CSIRO) who are involved in designing and developing AI systems for a range of purposes. Semi-structured interviews were used to examine how the practices of the participants relate to and align with a set of high-level AI ethics principles proposed by the Australian Government. The principles comprise: Privacy Protection & Security, Reliability & Safety, Transparency & Explainability, Fairness, Contestability, Accountability, Human-centred Values, and Human, Social & Environmental Wellbeing. The insights of the researchers and engineers, as well as the challenges that arose for them in the practical application of the principles, are examined. Finally, a set of organisational responses is provided to support the implementation of high-level AI ethics principles in practice.


Human Rights Commission calls for a freeze on 'high-risk' facial recognition

ZDNet

The Australian Human Rights Commission (AHRC) has called for stronger laws around the use of facial recognition and other biometric technology, asking for a ban on its use in "high-risk" areas. The call was made in a 240-page report [PDF] from the AHRC, with outgoing Human Rights Commissioner Edward Santow saying Australians want technology that is safe, fair, and reliable, and that with the right settings in law, policy, education, and funding, the government, alongside the private sector, can "build a firm foundation of public trust in new technology". "The use of AI in biometric technology, and especially some forms of facial recognition, has prompted growing public and expert concern," the report says. As a result, the Commission recommends privacy law reform to protect against the "most serious harms associated with biometric technology".


CSIRO promotes ethical use of AI in Australia's future guidelines

ZDNet

The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has highlighted the need for the development of artificial intelligence (AI) in Australia to be wrapped in a sufficient framework, to ensure nothing is imposed on citizens without appropriate ethical consideration. The organisation has published a discussion paper [PDF], Artificial Intelligence: Australia's Ethics Framework, on the key issues raised by large-scale AI, seeking answers to a handful of questions that are expected to inform the government's approach to AI ethics in Australia. CSIRO highlights eight core principles to guide the framework: that AI generates net benefits, does no harm, complies with regulatory and legal requirements, appropriately considers privacy, ensures fairness, is transparent and easily explained, contains provisions for contesting a decision made by a machine, and leaves an accountability trail. "Australia's colloquial motto is a 'fair go' for all. Ensuring fairness across the many different groups in Australian society will be challenging, but this cuts right to the heart of ethical AI," CSIRO wrote.