Since 2019, government-sponsored AI initiatives have proliferated across Asia Pacific. These include the establishment of cross-domain AI ethics councils, guidelines and frameworks for the responsible use of AI, and other measures such as financial and technology support. The majority of these initiatives build on each country's respective data privacy and protection acts, a clear sign that governments see the need to expand existing regulations when leveraging AI as a key driver of their digital economies. All initiatives to date are voluntary in nature, but there are already indications that existing data privacy and protection laws will be updated and expanded to cover AI.
To drive awareness of the benefits of AI and to understand its challenges (such as ethical and legal issues), IMDA is engaging key stakeholders, including government, industry, consumers and academia, to collaboratively shape the Government's plans for the AI ecosystem. This discourse will inform the Government's ongoing plans to support Singapore as a hub for AI development and innovation, and will help Singapore respond effectively to global developments. These initiatives complement IMDA's current suite of business and talent programmes to develop a vibrant AI ecosystem and position Singapore as a leading hub for AI. An Advisory Council on the Ethical Use of AI and Data will be appointed by the Minister for Communications and Information to advise and work with IMDA on the responsible development and deployment of AI. The Advisory Council will assist the Government in developing ethics standards and reference governance frameworks, and in publishing advisory guidelines, practical guidance and/or codes of practice for voluntary adoption by industry.
Singapore's Infocomm Media Development Authority (IMDA) recently announced the creation of an Advisory Council on the Ethical Use of AI and Data as part of an effort to bring together a range of key stakeholders to advise the government on possible approaches to ensuring consumer trust in AI-powered products and services. ITU News recently caught up with IMDA's Assistant Chief Executive of Data Innovation and Protection, Yeong Zee Kin, to learn more about Singapore's approach to this important and timely issue. With the recent launch of the Digital Economy Framework for Action, Singapore has entered a new phase of its digitalisation journey. The ability to use and share data innovatively and responsibly can become a competitive advantage for businesses, and infusing AI into business operations can accelerate digital transformation through new features and functionalities.
Fifteen global companies have adopted Singapore's Model AI Governance Framework, providing practical examples for other organisations to follow suit. Singapore sees Artificial Intelligence ("AI") as an important and fundamental technology for the Digital Economy, with AI-powered products offering a level of personalised service at scale that was previously unimaginable. In the global discourse on AI ethics and governance issues, Singapore believes that its balanced approach can facilitate innovation, safeguard consumer interests, and serve as a common global reference point. These initiatives follow Singapore's launch of the Model AI Governance Framework in Davos in 2019, as well as the announcement of Singapore's National AI Strategy in November 2019, and demonstrate the progress made in supporting organisations in deploying responsible AI. The new initiatives were announced by Mr S Iswaran, Singapore's Minister for Communications and Information, and Ms Kay Firth-Butterfield, AI Portfolio Lead at the World Economic Forum, at a joint press conference with the WEF's Centre for the Fourth Industrial Revolution ("WEF C4IR") at the WEF's Annual Meeting in Davos.
Artificial intelligence (AI) is a technology increasingly used across society and the economy worldwide, and its use is expected to become more prevalent in the coming years. AI is increasingly embedded in our lives, supplementing our pervasive use of digital technologies. But this is accompanied by disquiet over problematic and dangerous implementations of AI, or even AI systems themselves taking dangerous and problematic actions, especially in fields such as the military, medicine and criminal justice. These developments have led to concerns about whether and how AI systems adhere, and will continue to adhere, to ethical standards. These concerns have stimulated a global conversation on AI ethics and have prompted various actors from different countries and sectors to issue ethics and governance initiatives and guidelines for AI. Such developments form the basis for our research in this report, which combines our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India and the US.