A new initiative to shape international standards for artificial intelligence (AI) was launched last week by the UK government as part of its strategy to become a global AI power. The "AI Standards Hub" will focus on governance and guidance and falls under the National AI Strategy, which aims to increase Britain's contribution to the development of global AI technical standards. The Alan Turing Institute, the London-based data science and AI organisation, has been selected to lead the pilot with support from the British Standards Institution and the National Physical Laboratory.

"The new AI Standards Hub will create practical tools for businesses, bring the UK's AI community together through a new online platform, and develop educational materials to help organisations develop and benefit from global standards," the government announced, adding that the move puts the country at the "forefront" of a rapidly developing industry.

"On the face of it, the AI Standards Hub offers some substance to the government's claims of Britain being a tech power and paves the way for it to play a leadership role in shaping AI at the global level," London-based political risk analyst Mikhail Sebastian told TRT World.
The new AI Standards Hub will create practical tools for businesses, bring the UK's AI community together through a new online platform, and develop educational materials to help organisations develop and benefit from global standards. This will help put the UK at the forefront of this rapidly developing area.

The Hub will work to improve the governance of AI, complement pro-innovation regulation, and unlock the huge economic potential of these technologies to boost investment and employment now that the UK has left the European Union. BSI, the UK's national standards body, and NPL, the country's national metrology institute, will share their world-class expertise in developing standards and research to deliver the pilot with The Alan Turing Institute, the national institute for data science and AI. The Hub is backed by the Department for Digital, Culture, Media and Sport (DCMS) and the Office for AI (OAI).
Artificial intelligence (AI) is increasingly being utilised across society and the economy worldwide, and its deployment is expected to become more prevalent in the coming years. AI is increasingly embedded in our lives, supplementing our pervasive use of digital technologies. But this has been accompanied by disquiet over problematic and dangerous applications of AI, or indeed over AI systems themselves taking dangerous and problematic actions, especially in fields such as the military, medicine and criminal justice. These developments have led to concerns about whether, and how, AI systems adhere, and will adhere, to ethical standards. These concerns have stimulated a global conversation on AI ethics and have prompted various actors from different countries and sectors to issue ethics and governance initiatives and guidelines for AI. Such developments form the basis for our research in this report, which combines our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India and the US.
What happens when injustices are propagated not by individuals or organizations but by a collection of machines? Lately, there has been increased attention on the downsides of artificial intelligence and the harms it may produce in our society, from inequitable access to opportunities to the escalation of polarization in our communities. Not surprisingly, there has been a corresponding rise in discussion around how to regulate AI. Do we need new laws and rules from governmental authorities to police companies and their conduct when designing and deploying AI in the world? Part of the conversation arises from the fact that the public questions, and rightly so, the ethical restraints that organizations voluntarily choose to comply with.
The race to become the global leader in artificial intelligence (AI) has officially begun. In the past fifteen months, Canada, Japan, Singapore, China, the UAE, Finland, Denmark, France, the UK, the EU Commission, South Korea, and India have all released strategies to promote the use and development of AI. No two strategies are alike, with each focusing on different aspects of AI policy: scientific research, talent development, skills and education, public- and private-sector adoption, ethics and inclusion, standards and regulations, and data and digital infrastructure. This article also highlights relevant policies and initiatives that these countries have announced since the release of their initial strategies. I plan to update this article continuously as new strategies and initiatives are announced. If a country or policy is missing (or if something in a summary is incorrect), please leave a comment and I will update the article as soon as possible.