AI has had several winters. Among the most significant, there was one in the late 1970s, and another at the turn of the 1980s and 1990s. Today, we are talking about another, predictable winter (Nield 2019; Walch 2019; Schuchmann 2019). AI is subject to these hype cycles because it embodies a hope, or a fear, that we have entertained since we were thrown out of paradise: something that does everything for us, instead of us, better than us, with all the dreamy advantages (we shall be on holiday forever) and the nightmarish risks (we are going to be enslaved) that this entails. For some people, speculating about all this is irresistible. It is the wild west of "what if" scenarios. But I hope the reader will forgive me an "I told you so" moment.
Artificial intelligence (AI) can bring substantial benefits to society by helping to reduce costs, increase efficiency, and enable new solutions to complex problems. Using Floridi's notion of how to design the 'infosphere' as a starting point, in this chapter I consider the question: what are the limits of design, i.e. what are the conceptual constraints on designing AI for social good? The main argument of this chapter is that while design is a useful conceptual tool for shaping technologies and societies, collective efforts towards designing future societies are constrained by both internal and external factors. Internal constraints on design are discussed by evoking Hardin's thought experiment regarding 'the Tragedy of the Commons'. Further, Hayek's classical distinction between 'cosmos' and 'taxis' is used to demarcate external constraints on design. Finally, five design principles are presented which are aimed at helping policymakers manage the internal and external constraints on design. A successful approach to designing future societies needs to account for the emergent properties of complex systems by allowing space for serendipity and socio-technological coevolution.
In this commentary, we respond to a recent editorial letter by Professor Luciano Floridi entitled 'AI as a public service: Learning from Amsterdam and Helsinki'. Here, Floridi considers the positive impact of these municipal AI registers, which document a limited number of the algorithmic systems used by the cities of Amsterdam and Helsinki. There are a number of assumptions about AI registers as a governance model for automated systems that we seek to question. We start with recent attempts to normalize AI by decontextualizing and depoliticizing it, a fraught political project that encourages what we call 'ethics theater', given the proven dangers of using these systems in the context of the digital welfare state. We agree with Floridi that much can be learned from these registers about the role of AI systems in municipal city management. Yet the lessons we draw, on the basis of our extensive ethnographic engagement with digital welfare states, are distinctly less optimistic.
However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased AI's potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles--the 'what' of AI ethics (beneficence, non-maleficence, autonomy, justice, and explicability)--rather than on practices, the 'how'. Awareness of the potential issues is increasing at a fast rate, but the AI community's ability to take action to mitigate the associated risks is still in its infancy. Therefore, our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically minded developers 'apply ethics' at each stage of the pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, presents the initial findings, and provides a summary of future research needs.
Luciano Floridi has a job title that might seem odd at first glance but has a strong underlying logic: professor of philosophy and ethics of information at the Oxford Internet Institute, University of Oxford. Anyone who thinks seriously about technology's role in society has to bring an element of philosophy to the mix: critical discussion of, and questions about, ethics in addressing the core issue of whether innovations will be good or bad for people in the long term. This underpins Floridi's work on the implications of digital technology for people's lives and for society, and leads him into areas beyond those highlighted by many of the tech evangelists. These include the implications for public services, and while he is no alarmist, he says that governments should be careful in planning for what they want to achieve.