The UK's first major Parliamentary inquiry into Artificial Intelligence has called for a new cross-sector ethics code to ensure that the country becomes a world leader in AI. Lord Clement-Jones, Chairman of the House of Lords Select Committee on Artificial Intelligence, told Techworld that an ethical approach was essential to securing public support for AI. "What we want is to make sure that the public is fully trusting in this technology, and you can only do that if they believe it's for the benefit of them and others when they're being applied, and also that it's transparent and unbiased in its application," he said. The proposed "AI Code" could attract public support by creating consistent guidelines for developing and using AI across all organisations and companies in both the public and private sectors. In a report titled AI in the UK: Ready, Willing and Able?, the committee set out five principles to form the basis of the code, which could be adopted internationally. The code could provide the basis for future statutory regulation, but the committee stopped short of recommending new regulation specifically for AI at this point.
The call for greater gender and ethnic diversity in technology has grown from a whisper a decade ago to the roar of a World Cup football goal. We can no longer ignore the injustice of a male-dominated algorithmic trade, a despicable parade of inequity and inequality. The naysayers who complain of discrimination against white males need to look at the facts of what Joy Buolamwini calls the "coded gaze" and the rise of algorithmic bias. True, greater gender and ethnic diversity won't solve all the problems of unfairness, but it will curb its greatest excesses: potential imbalances are less likely to go unnoticed.
Members of the House of Lords have called for an artificial intelligence code of conduct in the UK. In a report titled AI in the UK: Ready, Willing and Able?, the House of Lords Select Committee on Artificial Intelligence argued that the UK can lead the world in AI, as long as it puts ethics at the centre of its plans. The Committee recommended five principles to guide how researchers and businesses develop artificially intelligent systems in the UK, and called for these principles to be formulated into a cross-sector AI code to be adopted both in the UK and internationally. Commenting on the report, the Committee's chairman, Lord Clement-Jones, said: "The UK has a unique opportunity to shape AI positively for the public's benefit and to lead the international community in AI's ethical development, rather than passively accept its consequences."
Artificial intelligence (AI) is increasingly being utilised in society and the economy worldwide, and its deployment is expected to become more prevalent in coming years. AI is increasingly being embedded in our lives, supplementing our pervasive use of digital technologies. But this growth is accompanied by disquiet over problematic and dangerous implementations of AI, or even over AI systems themselves taking dangerous and problematic actions, especially in fields such as the military, medicine and criminal justice. These developments have led to concerns about whether, and how, AI systems adhere and will adhere to ethical standards. These concerns have stimulated a global conversation on AI ethics and have prompted actors from different countries and sectors to issue ethics and governance initiatives and guidelines for AI. Such developments form the basis for our research in this report, which combines our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India and the US.
This article was co-authored by Professor Helen Margetts, Programme Director for Public Policy, and Turing Fellow; Dr Cosmina Dorobantu, Deputy Programme Director for Public Policy, and Policy Fellow; and Josh Cowls, Research Associate. If you hear about data, artificial intelligence (AI) and algorithms in the news, it's likely to be about discriminatory practices, threats to personal privacy or national security, or another crisis created by the advance of digital technology. Sometimes these problems can feel so entrenched that they are insurmountable. But there is reason to be optimistic. As it turns out, the challenges posed by these modern fields of data science and AI can be addressed by one of the oldest: ethics.