The Stanford Computer Forum provides a platform to discuss the latest advancements in computer science and engineering. At its April meeting, I realized how far we have come since Asimov's Three Laws of Robotics. The questions surrounding robotics now reach well beyond dystopian visions of robotic overlords and apocalyptic notions of cyborgs; they concern the practical augmentation of society. Isaac Asimov, the master of the science-fiction genre and author of the Foundation series, devised the three laws to protect humans in their interactions with robots. At the Computer Forum meetup, guest speaker Shannon Vallor, Professor and Department Chair of Philosophy at Santa Clara University, spoke about artificial intelligence's ethical imperative and how to humanize machine values.
Here at Thwaites we are lucky enough to have not one but two offices – our Shoreditch HQ, and our Northern home at The Federation in Manchester, where we share co-working space with lots of brilliant digital and tech firms who have signed up to a pledge outlining a broad set of values that chime with us – to be open, honest and ethical. As well as providing a great space to work, The Federation in Manchester also gives us access to excellent talks by leading speakers from around the world – most recently The Federation Presents series, which explored ethics in the tech industry and wider society. Naturally, one of the topics that has arisen (more than once) is AI, and the seemingly boundless scope of machine learning. Yet despite the many ways in which intelligent systems can transform our lives for the better, there is still an underlying mistrust.
[Image caption: Chinese professional Go player Ke Jie preparing to make a move during the second game of a match against Google's AlphaGo in May 2017.]
Artificial intelligence (AI), once described as a technology with permanent potential, has come of age in the past decade. Propelled by massively parallel computer systems, huge datasets, and better algorithms, AI has brought a number of important applications, such as image- and speech-recognition and autonomous vehicle navigation, to near-human levels of performance. Now, AI experts say, a wave of even newer technology may enable systems to understand and react to the world in ways that traditionally have been seen as the sole province of human beings. These technologies include algorithms that model human intuition and make predictions in the face of incomplete knowledge, systems that learn without being pre-trained with labeled data, systems that transfer knowledge gained in one domain to another, hybrid systems that combine two or more approaches, and more powerful and energy-efficient hardware specialized for AI.
Vatican City, Dec 4, 2016 / 03:03 am (CNA/EWTN News).- This week the Vatican hosted a high-level discussion in the world of science, gathering experts to discuss the progress, benefits and limits of advances in artificial intelligence. A new conference at the Vatican drew experts in various fields of science and technology for a two-day dialogue on the "Power and Limits of Artificial Intelligence," hosted by the Pontifical Academy of Sciences. Among the scheduled speakers were several prestigious scientists, including Stephen Hawking, a prominent British professor at the University of Cambridge and a self-proclaimed atheist, as well as a number of major tech figures such as Demis Hassabis, CEO of Google DeepMind, and Yann LeCun of Facebook. The event, which ran from Nov. 30-Dec.
Recently, the federal Office of Science and Technology Policy (OSTP) issued a request for public feedback on "overarching questions in [Artificial Intelligence], including AI research and the tools, technologies, and training that are needed to answer these questions." OSTP is in the process of co-hosting four public workshops in 2016 on topics in AI in order to spur public dialogue and to identify challenges and opportunities related to this emerging technology. These topics include the legal and governance issues for AI, AI for public good, safety and control for AI, and the social and economic implications of AI. The Request for Information lists 10 specific topics on which the government would appreciate feedback, including "the use of AI for public good" and "the most pressing, fundamental questions in AI research, common to most or all scientific fields." One of the academics who answered the request for information is Shannon Vallor, who is the William J. Rewak Professor at Santa Clara University and one of the Markkula Center for Applied Ethics' faculty scholars.