Artificial intelligence (AI) is a technology increasingly used across society and the economy worldwide, and its deployment is set to become more prevalent in the coming years. AI is increasingly embedded in our lives, supplementing our pervasive use of digital technologies. But this growth has been accompanied by disquiet over problematic and dangerous implementations of AI, or even AI systems themselves taking dangerous and problematic actions, especially in fields such as the military, medicine and criminal justice. These developments have raised concerns about whether and how AI systems adhere, and will continue to adhere, to ethical standards. Such concerns have stimulated a global conversation on AI ethics and prompted various actors from different countries and sectors to issue ethics and governance initiatives and guidelines for AI. These developments form the basis for our research in this report, which combines our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India and the US.
A Senate committee has recommended that Parliament pass Australia's new intellectual property (IP) laws in a bid to promote and incentivise "investment in creativity, innovation, research, and technology". The Senate Economics Legislation Committee's report said the Intellectual Property Laws Amendment (Productivity Commission Response Part 1 and Other Measures) Bill 2018 [PDF] will phase out the innovation patent system and allow for automated decision-making on patents. "Schedule 2 consists of 21 parts which implement a number of measures to streamline and align the administration of the Australian IP system," the report says. "Schedule 2: Part 5 amends the Patents Act, Designs Act, PBR Act, and Trade Marks Act to enable the commissioner and the registrars to arrange for a computer program under their control to make decisions, exercise powers, and comply with obligations under the legislation." Under the Bill, the registrar may "arrange for the use, under the registrar's control, of computer programs for any purposes for which the registrar may, or must, under this Act or the regulations: make a decision; or exercise any power or comply with any obligation; or do anything else related to making a decision".
This translation by Jeffrey Ding, edited by Paul Triolo, covers some of the most interesting parts of the Standards Administration of China's 2018 White Paper on Artificial Intelligence Standardization, a joint effort by more than 30 academic and industry organizations overseen by the Chinese Electronics Standards Institute. Ding, Triolo, and Samm Sacks describe the importance of this white paper and other Chinese government efforts to influence global AI development and policy formulation in their companion piece, "Chinese Interests Take a Big Seat at the AI Governance Table." Historical experience demonstrates that new technologies can often improve productivity and promote societal progress. But at the same time, as artificial intelligence (AI) is still in the early phase of development, the policies, laws, and standards for safety, ethics, and privacy in this area are worthy of attention. In the case of AI technology, issues of safety, ethics, and privacy directly affect people's trust in AI as they interact with AI tools.
The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has highlighted the need for artificial intelligence (AI) development in Australia to be accompanied by a sufficient framework, so that nothing is imposed on citizens without appropriate ethical consideration. The organisation has published a discussion paper [PDF], Artificial Intelligence: Australia's Ethics Framework, on the key issues raised by large-scale AI, seeking answers to a handful of questions that are expected to inform the government's approach to AI ethics in Australia. CSIRO highlights eight core principles to guide the framework: that AI generates net benefits, does no harm, complies with regulatory and legal requirements, appropriately considers privacy, ensures fairness, is transparent and easily explained, contains provisions for contesting a decision made by a machine, and provides an accountability trail. "Australia's colloquial motto is a 'fair go' for all. Ensuring fairness across the many different groups in Australian society will be challenging, but this cuts right to the heart of ethical AI," CSIRO wrote.
The Australian government has announced it will adopt an internationally aligned standard for IT accessibility in government, requiring vendors at the procurement stage to offer accessible websites, software, and digital device services. The standard, Accessibility requirements suitable for public procurement of ICT products and services, is a Direct Text Adoption of European Standard EN 301 549 and establishes a minimum standard to ensure that all Australians can access information and services provided electronically by public authorities and other public sector agencies, the government said. The government expects the new standard to be used by all levels of government when determining technical specifications for the procurement of accessible IT products and services, including computer software and hardware, telecommunications, and office equipment such as printers, photocopiers, and scanners. Australian Communications Consumer Action Network (ACCAN) CEO Teresa Corbin said that while the standard is intended in particular for use by public sector bodies during procurement, she believes it also has application in the private sector. "The standard will help industry and operators avoid creating technologies that exclude users from the information society," she said.