The Stanford Computer Forum provides a platform to discuss the latest advancements in computer science and engineering. At its April meeting, I realized that we have come a long way since Asimov's three laws of robotics. The nuances of robotics now go well beyond dystopian visions of robotic overlords and apocalyptic notions of cyborgs, toward the practical augmentation of society. Isaac Asimov, the master of the science-fiction genre and author of the Foundation series, proposed three laws devised to protect humans in their interactions with robots. At the Computer Forum meetup, guest speaker Shannon Vallor, Professor and Department Chair of Philosophy at Santa Clara University, spoke about artificial intelligence's ethical imperative and how to humanize machine values.
Here at Thwaites we are lucky enough to have not one, but two offices – our Shoreditch HQ, and our Northern home at The Federation in Manchester. Here we share co-working space with lots of brilliant digital and tech firms who have signed up to a pledge outlining a broad set of values that chime with us – to be open, honest and ethical. As well as providing a great space to work, Federation Manchester also gives us access to excellent talks by leading speakers from around the world – most recently The Federation Presents series, which explored ethics in the tech industry and wider society. Naturally, one of the topics that's arisen (more than once) is AI, and the seemingly boundless scope of machine learning. Yet despite the many ways in which intelligent systems can transform our lives for the better, there is still an underlying mistrust.
[Image caption: Chinese professional Go player Ke Jie prepares to make a move during the second game of a match against Google's AlphaGo in May 2017.] Artificial intelligence (AI), once described as a technology with permanent potential, has come of age in the past decade. Propelled by massively parallel computer systems, huge datasets, and better algorithms, AI has brought a number of important applications, such as image- and speech-recognition and autonomous vehicle navigation, to near-human levels of performance. Now, AI experts say, a wave of even newer technology may enable systems to understand and react to the world in ways that traditionally have been seen as the sole province of human beings. These technologies include algorithms that model human intuition and make predictions in the face of incomplete knowledge, systems that learn without being pre-trained with labeled data, systems that transfer knowledge gained in one domain to another, hybrid systems that combine two or more approaches, and more powerful and energy-efficient hardware specialized for AI.
Recently, the federal Office of Science and Technology Policy (OSTP) issued a request for public feedback on "overarching questions in [Artificial Intelligence], including AI research and the tools, technologies, and training that are needed to answer these questions." OSTP is in the process of co-hosting four public workshops in 2016 on topics in AI in order to spur public dialogue on these topics and to identify challenges and opportunities related to this emerging technology. These topics include legal and governance issues for AI, AI for public good, safety and control for AI, and the social and economic implications of AI. The Request for Information lists 10 specific topics on which the government would appreciate feedback, including "the use of AI for public good" and "the most pressing, fundamental questions in AI research, common to most or all scientific fields." One of the academics who answered the request for information is Shannon Vallor, the William J. Rewak Professor at Santa Clara University and one of the Markkula Center for Applied Ethics' faculty scholars.
Like paper, print, steel and the wheel, computer-generated artificial intelligence is a revolutionary technology that can bend how we work, play and love. It is already doing so in ways we can and cannot perceive. As Facebook, Apple and Google pour billions into A.I. development, there is a fledgling branch of academic ethical study--influenced by Catholic social teaching and encompassing thinkers like the Jesuit scientist Pierre Teilhard de Chardin--that aims to study its moral consequences, contain the harm it might do and push tech firms to integrate social goods like privacy and fairness into their business plans. "There are a lot of people suddenly interested in A.I. ethics because they realize they're playing with fire," says Brian Green, an A.I. ethicist at Santa Clara University. "And this is the biggest thing since fire."