Throughout recorded history, humans have reigned unchallenged as Earth's dominant species. Turkeys, heretofore harmless creatures, have been exploding in size, swelling from an average of 13.2lb (6kg) in 1929 to over 30lb (13.6kg) today. On the rock-solid scientific assumption that present trends will persist, The Economist calculates that turkeys will be as big as humans in just 150 years. Within 6,000 years, turkeys will dwarf the entire planet. Scientists claim that the rapid growth of turkeys is the result of innovations in poultry farming, such as selective breeding and artificial insemination.
Digital technologies can do more and more things better than we can, by processing increasing amounts of data and improving their performance by analysing their own output as input for their next operations. AlphaGo, the computer program developed by Google DeepMind, beat the world's best player at the board game Go because it could draw on a database of around 30 million moves and play thousands of games against itself, 'learning' how to improve its performance. It is like a two-knife system that can sharpen itself. This is why any apocalyptic vision of AI can be disregarded. The serious risk is not the appearance of some ultraintelligence, but that we might misuse our digital technologies, to the detriment of a large percentage of humanity and the whole planet.
What will the world of technology look like 30 years from now? Megatech: Technology In 2050 tries to tackle this question. Edited by The Economist's executive editor Daniel Franklin, the book is a collection of essays by eminent personalities such as Frank Wilczek, Alastair Reynolds, Nancy Kress and Melinda Gates, each of whom tells their version of the future. An essay by Luciano Floridi, professor of philosophy and ethics of information at the University of Oxford in the UK, deals with Artificial Intelligence (AI). In "The Ethics Of Artificial Intelligence", he argues that the threat of monstrous machines dominating humanity is imaginary, but the risk of humanity misusing its machines is real. In an email interview, Prof. Floridi talks about how real, or not, the threat of AI is.
However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased AI's potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles, the 'what' of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability), rather than on practices, the 'how.' Awareness of the potential issues is increasing rapidly, but the AI community's ability to take action to mitigate the associated risks is still in its infancy. Therefore, our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically minded developers 'apply ethics' at each stage of the development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be readily applicable to other branches of AI. The article outlines the research method for creating this typology, presents the initial findings, and provides a summary of future research needs.