Checks and balances in AI ethics


Ethics of AI: While artificial intelligence promises significant benefits, there are concerns that it could make unethical decisions. Artificial intelligence (AI) is fast becoming important for accountants and businesses, and how it is used raises several ethical issues and questions. While autonomous AI algorithms teach themselves, concerns have been raised that some machine learning techniques are essentially "black boxes", making it technically impossible to fully understand how the machine arrived at a result.
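
One common way practitioners probe such "black boxes" is to fit a simple, interpretable surrogate model to the opaque model's predictions. The sketch below is a minimal, hypothetical illustration of that idea using synthetic data and scikit-learn; it is not drawn from any system described in the article.

```python
# Minimal sketch (synthetic data, assumed example): approximate an opaque
# model with an interpretable surrogate decision tree to inspect its logic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# The "black box": a random forest whose internal logic is hard to read.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The surrogate is trained to mimic the black box's predictions,
# not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# "Fidelity": how often the shallow tree agrees with the black box.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
```

The shallow tree can then be inspected or plotted directly, giving an approximate, human-readable account of how the opaque model behaves.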

Artificial Intelligence Governance and Ethics: Global Perspectives

Artificial intelligence (AI) is a technology increasingly being used in society and the economy worldwide, and its use is set to become more prevalent in coming years. AI is increasingly being embedded in our lives, supplementing our pervasive use of digital technologies. But this is being accompanied by disquiet over problematic and dangerous implementations of AI, or indeed, even AI itself deciding to take dangerous and problematic actions, especially in fields such as the military, medicine and criminal justice. These developments have led to concerns about whether and how AI systems adhere, and will adhere, to ethical standards. These concerns have stimulated a global conversation on AI ethics, and have resulted in various actors from different countries and sectors issuing ethics and governance initiatives and guidelines for AI. Such developments form the basis for our research in this report, combining our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India and the US.

Australia releases Artificial Intelligence technology roadmap


The Australian Government released its artificial intelligence (AI) technology roadmap during Techtonic, Australia's inaugural AI summit, held recently in Canberra. As reported, 'Artificial Intelligence: Solving problems, growing the economy and improving our quality of life' was developed by CSIRO, Australia's national science agency. The roadmap outlines the importance of action for Australia to capture the benefits of AI, which is estimated to be worth AU$22.17 trillion to the global economy by 2030. It was developed for the Australian Government in consultation with industry, government and academia. The roadmap is intended to help guide future investment in AI and machine learning, and accompanies Artificial Intelligence: Australia's Ethics Framework, a discussion paper prepared by CSIRO's Data61 and published by the Australian Government in April 2019.

Should you incorporate an AI strategy into your business? - Dynamic Business


With both government and companies eagerly adopting artificial intelligence (AI) strategies, we explore how AI could also streamline and scale your business. We examine the potential opportunities and risks that come with using AI, and what the future of AI and business looks like. The CSIRO defines AI as "a collection of interrelated technologies used to solve problems autonomously and perform tasks to achieve defined objectives, in some cases without explicit guidance from a human being." Subfields of AI include machine learning, computer vision, human language technologies, robotics, knowledge representation and other scientific fields. For instance, AI is already being used in autonomous emergency braking (helping to reduce the 1,137 vehicle-related deaths per year) and in maintaining the Sydney Harbour Bridge (using machine learning and predictive analytics to identify priority locations for maintenance).
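
The bridge-maintenance use case boils down to ranking candidate locations by predicted deterioration. The following sketch is a hypothetical, simplified illustration of that pattern on synthetic data; it is not CSIRO's actual system, and the feature names are assumptions made for the example.

```python
# Hypothetical sketch (synthetic data): rank maintenance locations by
# predicted deterioration risk, highest risk first.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 200  # assumed number of monitored locations on a structure

# Assumed features per location: age (years), mean strain reading, load index.
X = np.column_stack([
    rng.uniform(0, 90, n),    # age
    rng.normal(1.0, 0.2, n),  # strain
    rng.uniform(0, 1, n),     # traffic load
])
# Synthetic deterioration score used as the training target.
y = 0.02 * X[:, 0] + 1.5 * (X[:, 1] - 1.0) + 0.5 * X[:, 2] + rng.normal(0, 0.1, n)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Predict risk for every location and sort descending to get a work queue.
risk = model.predict(X)
priority = np.argsort(risk)[::-1]
print("top 5 priority locations:", priority[:5])
```

In practice the target would come from inspection records rather than a formula, but the output is the same shape: an ordered list telling crews where to look first.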

Interaction Design for Explainable AI: Workshop Proceedings

As artificial intelligence (AI) systems become increasingly complex and ubiquitous, these systems will be responsible for making decisions that directly affect individuals and society as a whole. Such decisions will need to be justified due to ethical concerns as well as trust, but achieving this has become difficult due to the `black-box' nature of many AI models. Explainable AI (XAI) can potentially address this problem by explaining the actions, decisions and behaviours of the system to users. However, much research in XAI is done in a vacuum, using only the researchers' intuition of what constitutes a `good' explanation while ignoring interaction and the human aspect. This workshop invites researchers in the HCI community and related fields to have a discourse about human-centred approaches to XAI rooted in interaction, and to shed light and spark discussion on interaction design challenges in XAI.
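
To make the XAI discussion concrete, one widely used technique for generating a simple global explanation is permutation importance: shuffle each feature in turn and measure how much model accuracy degrades. The sketch below is an assumed, generic example using scikit-learn, not a method from the workshop itself.

```python
# Minimal sketch (assumed example) of permutation importance: features
# whose shuffling hurts accuracy most are deemed most influential.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(data.data, data.target)

result = permutation_importance(model, data.data, data.target,
                                n_repeats=5, random_state=0)

# Report the three most influential features as a simple global explanation.
top = result.importances_mean.argsort()[::-1][:3]
for i in top:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

A ranked feature list like this is exactly the kind of raw explanation the workshop argues must still be designed *for people*: which features to show, how many, and in what form are interaction-design questions, not just modelling ones.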