Ethics, as applied to the business world, is nothing new; ethics itself has been a topic of conversation and debate for thousands of years. However, the rapid development of technology in the modern world brings with it both potential harms and benefits. As automated decision-making systems become ever more ubiquitous across all industries, what are the key questions organizations need to address, now and in the future? How can organizations create a sustainable future by managing ethical concerns at every stage of development? Ethical frameworks must be more than a way to define digital ethics; they must create an 'ethics of action' by proactively influencing approaches to technology development and implementation.
Do you know about ELIZA? It was a 1960s computer science experiment in psychotherapy, and it turned out that patients preferred interacting with the crude algorithms of the technology of the time to meeting a real-life therapist. Oxford professor Viktor Mayer-Schönberger tells that story in Netopia's broadcast on Digital Ethics. Resonance FM radio host Peter Warren and reporter Jane Whyatt were two of the authors of the report; this podcast is based on the same research, but here you can hear the voices of the academics who were interviewed. Other names include Murray Shanahan (Professor of Cognitive Robotics, Imperial College) and Adrian David Cheok (Professor of Pervasive Computing, City University London), both of whom appeared in the Netopia seminar on the same topic last month.
We live in a digital world, where every day we interact with digital systems, whether through a mobile device or from inside a car. These systems are increasingly autonomous, making decisions over and above their users or on their behalf. As a consequence, ethical issues, including privacy concerns (for example, unauthorized disclosure and mining of personal data, and access to restricted resources), are emerging as matters of utmost concern, since they affect the moral rights of each human being and have an impact on the social, economic, and political spheres. Europe is at the forefront of regulation and reflection on these issues through its institutional bodies. Privacy with respect to the processing of personal data is recognized as part of the fundamental rights and freedoms of individuals.
I recently mentioned to my 12-year-old daughter that artificial intelligence will be able to outsmart us by 2045. She got very upset, feeling that this would be the end of the human race, and said, "Then we can kill ourselves, otherwise we will be killed by them." My daughter's reaction was childish (which one would expect from a 12-year-old). But when world-leading technology and science visionaries also express concerns about the dangers of artificial intelligence, maybe we should pay attention. Physicist Stephen Hawking, technology entrepreneur Elon Musk, and Microsoft founder Bill Gates have all expressed concerns that computers and smart technologies may eventually outsmart humans and, through calculations based on cold logic without regard to the value of human life, could lead to our own demise.