Singapore's Infocomm Media Development Authority (IMDA) recently announced the creation of an Advisory Council on the Ethical Use of AI and Data, part of an effort to bring together key stakeholders to advise the government on approaches for ensuring consumer trust in AI-powered products and services. ITU News recently caught up with IMDA's Assistant Chief Executive for Data Innovation and Protection, Yeong Zee Kin, to learn more about Singapore's approach to this important and timely issue. With the recent launch of the Digital Economy Framework for Action, Singapore has entered a new phase of its digitalisation journey. The ability to use and share data innovatively and responsibly can become a competitive advantage for businesses, and infusing AI into business operations can accelerate digital transformation through new features and functionalities.
The Australian Human Rights Commission is conducting a project on Human Rights and New Technology (the Project). As part of the Project, the Commission and the World Economic Forum are working together to explore models of governance and leadership on artificial intelligence (AI) in Australia. This White Paper has been produced to support a consultation process that aims to identify how Australia can foster innovation while protecting human rights amid unprecedented growth in new technologies such as AI. The White Paper complements the broader issues raised in the Commission's Human Rights and Technology Issues Paper. The consultation conducted on the Issues Paper and the White Paper will inform the Commission's proposals for reform, to be released in mid-2019. The White Paper asks whether Australia needs an organisation to take a central role in promoting responsible innovation in AI and related technology and, if so, what that organisation could look like.
During this period of progressive development and deployment of artificial intelligence, discussions around the ethical, legal, socio-economic and cultural implications of its use are intensifying. What are the challenges and the strategy, and what values can Europe bring to this domain? During the European Conference on AI (ECAI 2020), two special events in panel format discussed the challenges of AI made in the European Union, the shape of future research and industry, and the strategy to retain talent and compete with other world powers. This article collects some of the main messages from these two sessions, which included AI experts from leading European organisations and networks. Since the publication of European directives and guidance, such as the EC White Paper on AI and the Trustworthy AI Guidelines, Europe has been laying the foundation for its future vision of AI. The European strategy for AI builds on the well-known and accepted principles found in the Charter of Fundamental Rights of the European Union and the Universal Declaration of Human Rights to define a human-centric approach, whose primary purpose is to enhance human capabilities and societal well-being.
Andrew Pery is the Ethics Evangelist at ABBYY, a digital intelligence company that empowers organizations to access the valuable, yet often hard-to-attain, insights into their operations that enable true business transformation. ABBYY recently released a Global Initiative Promoting the Development of Trustworthy Artificial Intelligence. We asked Andrew about ethics in AI, abuses of AI, and what the AI industry can do about these concerns moving forward. What initially instigated your interest in AI ethics? What initially sparked my interest in AI ethics was a deep interest in the intersection of law and AI technology.
SUMMARY: Artificial intelligence (AI) technologies offer great promise for creating new and innovative products, growing the economy, and advancing national priorities in areas such as education, mental and physical health, climate change, and more. Like any transformative technology, however, AI carries risks and presents complex policy challenges on a number of fronts. The Office of Science and Technology Policy (OSTP) is interested in developing a view of AI across all sectors in order to recommend directions for research and to identify challenges and opportunities in this field. The views of the American people, including stakeholders such as consumers, academic and industry researchers, private companies, and charitable foundations, are important for informing an understanding of current and future needs for AI in diverse fields. The purpose of this RFI is to solicit feedback on overarching questions in AI, including AI research and the tools, technologies, and training needed to answer these questions.