
AI accountability needs action now, say UK MPs


A UK parliamentary committee has urged the government to act proactively -- and to act now -- to tackle "a host of social, ethical and legal questions" arising from the growing use of autonomous technologies such as artificial intelligence. "While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now," says the committee. "Not only would this help to ensure that the UK remains focused on developing 'socially beneficial' AI systems, it would also represent an important step towards fostering public dialogue about, and trust in, such systems over time." The committee kicked off an inquiry into AI and robotics this March, going on to take 67 written submissions and hear from 12 witnesses in person, in addition to visiting Google DeepMind's London office. Publishing its report into robotics and AI today, the Science and Technology Committee flags up several issues that it says need "serious, ongoing consideration". "[W]itnesses were clear that the ethical and legal matters raised by AI deserved attention now and that suitable governance frameworks were needed," it notes in the report.

UK tech committee: It's time to lay down the law on AI accountability


A UK parliamentary committee has appealed to the UK government to take action and begin seriously considering "a host of social, ethical and legal questions" that are increasingly pertinent thanks to the rise of artificial intelligence. The Science and Technology Committee started its inquiry in March 2016, visiting Google's DeepMind office, gathering 67 written statements, and interviewing 12 witnesses in person in order to establish the most urgent issues. In its newly published report, the committee has concluded that "while it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now." The biggest reason for this is the need to ensure that the UK is building socially beneficial AI systems, and one of the best ways to make this happen is to start a wider public dialogue on the issue. There are three main issues that the committee flags up as requiring "serious" consideration: minimizing bias being accidentally built into AI systems; ensuring that the decisions they make are transparent; and establishing ways to verify that AI systems are operating as intended and won't behave unpredictably.

Teaching AI, Ethics, Law and Policy

Cyberspace and the development of new technologies, especially intelligent systems using artificial intelligence, present enormous challenges to computer professionals, data scientists, managers and policy makers. There is a need to address professional responsibility and ethical, legal, societal, and policy issues. This paper presents problems and issues relevant to computer professionals and decision makers and suggests a curriculum for a course on ethics, law and policy. Such a course will create awareness of the ethical issues involved in building and using software and artificial intelligence.

UK public sector failing to be open about its use of AI, review finds – TechCrunch


A report into the use of artificial intelligence by the U.K.'s public sector has warned that the government is failing to be open about automated decision-making technologies which have the potential to significantly impact citizens' lives. Ministers have been especially bullish on injecting new technologies into the delivery of taxpayer-funded healthcare -- with health minister Matt Hancock setting out a tech-fueled vision of "preventative, predictive and personalised care" in 2018, calling for a root-and-branch digital transformation of the National Health Service (NHS) to support piping patient data to a new generation of "healthtech" apps and services. He has also personally championed a chatbot startup, Babylon Health, that's using AI for healthcare triage -- and which is now selling a service into the NHS. Policing is another area where AI is being accelerated into U.K. public service delivery, with a number of police forces trialing facial recognition technology -- and London's Met Police switching over to a live deployment of the AI technology just last month. However, the rush by cash-strapped public services to tap AI "efficiencies" risks glossing over a range of ethical concerns about the design and implementation of such automated systems: fears about embedding bias and discrimination into service delivery and scaling harmful outcomes; questions of consent around access to the data sets being used to build AI models; and questions of human agency over automated outcomes, to name a few. All of these concerns require transparency into AIs if there is to be accountability over automated outcomes.

UK report urges action to combat AI bias


The need for diverse development teams and truly representative datasets to avoid biases being baked into AI algorithms is one of the core recommendations in a lengthy Lords committee report looking into the economic, ethical and social implications of artificial intelligence, published today by the upper House of the UK parliament. "The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct," the committee writes, chiming with plenty of extant commentary around algorithmic accountability. "It is essential that ethics take centre stage in AI's development and use," adds committee chairman, Lord Clement-Jones, in a statement. "The UK has a unique opportunity to shape AI positively for the public's benefit and to lead the international community in AI's ethical development, rather than passively accept its consequences." The report also calls for the government to take urgent steps to help foster "the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions" -- recommending a publicly funded challenge to incentivize the development of technologies that can audit and interrogate AIs.