Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, the ethics of human enhancement, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere. This conversation is part of the Artificial Intelligence podcast.
Everywhere you look now, there is some form of artificial intelligence appearing. Whether it's to make a process more efficient or to keep humans safe and away from danger, robots are creeping in at every chance they get, and this is expected to carry on for quite some years to come. Now, a new centre has been launched in Cambridge, England, that will study AI more closely, along with the implications that come with these marvelous machines. The Centre for the Future of Intelligence (CFI) has one aim: "to work together to ensure that we humans make the best of the opportunities of artificial intelligence as it develops over coming decades." It's a collaboration between four top universities, Cambridge, Oxford, Imperial College London, and Berkeley, and has the full backing and support of the Leverhulme Trust.
The best minds in the business--Yann LeCun of Facebook, Luke Nosek of the Founders Fund, Nick Bostrom of Oxford University and Andrew Ng of Baidu--on what life will look like in the age of the machines.

The traditional definition of artificial intelligence is the ability of machines to execute tasks and solve problems in ways normally attributed to humans. Some tasks that we consider simple--recognizing an object in a photo, driving a car--are incredibly complex for AI. Machines can surpass us when it comes to things like playing chess, but those machines are limited by the manual nature of their programming; a $30 gadget can beat us at a board game, but it can't do--or learn to do--anything else. This is where machine learning comes in. Show millions of cat photos to a machine, and it will hone its algorithms to improve at recognizing pictures of cats.
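The learning loop described above, a machine adjusting its own parameters after each labeled example rather than following hand-written rules, can be sketched in miniature. This is a toy perceptron, not any system mentioned in the article; the two "features" and the data points are invented purely for illustration.

```python
# Toy sketch of learning from labeled examples: instead of a programmer
# writing cat-detection rules by hand, the machine nudges its weights
# whenever it misclassifies an example. Features and data are invented.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, label), label 1 = cat, 0 = not cat."""
    n = len(examples[0][0])
    w = [0.0] * n  # weights start at zero: the machine knows nothing
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # the error on this example drives the update
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical 2-D features, e.g. (pointy-ears score, whisker-density score):
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
w, b = train_perceptron(data)
print(predict(w, b, [0.85, 0.9]))  # a cat-like input → 1
```

Real image recognizers replace these two hand-picked numbers with millions of pixel-derived features and deep networks, but the principle is the same: more labeled examples, better weights.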
In a setting befitting the opening scene of a sci-fi thriller, at the recently opened Leverhulme Centre for the Future of Intelligence at Cambridge University, Professor Stephen Hawking cautions that artificial intelligence could potentially be "either the best, or worst thing to ever happen to humanity." Reiterating his 2014 statement to the BBC, Hawking warns that if done wrong, "the development of full AI could spell the end of the human race" (think Terminator, I, Robot, or Westworld). On the other hand, Hawking believes that amplifying our minds through artificial intelligence could transform every aspect of our lives. The overarching concern surrounding AI is machine morality and whether it is safe for society. If people do not have proper ethical guidelines or fully comprehend the risks AI could pose to mankind, is the expansion of its functionalities and the powering of complex self-evolving capabilities, as Hawking would put it, the 'worst thing to happen to humanity'?
The University of Cambridge professor was an iconic figure in both the scientific community and popular culture, known for his keen mind and humor as well as his striking physical challenges. Dr. Hawking had long battled amyotrophic lateral sclerosis, which left him wheelchair-bound for most of his life. Commonly known as Lou Gehrig's disease or motor neuron disease, the condition damages the nerves that control movement and results in paralysis. Patients with ALS typically die within five years of diagnosis. Dr. Hawking, who was diagnosed in 1963 at the age of 21, is believed to have been the longest-living survivor, a fact that still perplexes neurologists.