The Intelligence Explosion, an original drama published by the Guardian, is obviously a work of fiction. But the fears behind it are very real, and have led some of the biggest brains in artificial intelligence (AI) to reconsider how they work. The film dramatises a near-future conversation between the developers of an "artificial general intelligence" – named Günther – and an ethical philosopher. Günther himself (itself?) sits in, making fairly cringeworthy jokes and generally missing the point. It depicts an event that has come to be known in the technology world as the "singularity": the moment when an artificial intelligence with the ability to improve itself starts doing so at exponential speed.
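The dynamic behind that "exponential speed" claim is a simple feedback loop: the smarter the system gets, the faster it can make itself smarter. A toy model (my own sketch, not anything from the film, with an arbitrary `improvement_rate`) makes the compounding visible:

```python
# Toy model of recursive self-improvement: each step, the agent's
# gain in capability is proportional to its current capability,
# so growth compounds rather than adding a fixed amount.
capability = 1.0
improvement_rate = 0.1  # assumed: fraction of capability converted into growth per cycle

history = []
for step in range(50):
    capability += improvement_rate * capability  # better agents improve themselves faster
    history.append(capability)

# After n cycles, capability is (1 + rate) ** n: classic exponential growth.
print(round(history[-1], 2))
```

Nothing about this guarantees a real system would behave this way; the whole debate is over whether returns on self-improvement compound like this or hit diminishing returns.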
Now, I suppose it is possible that once an agent reaches a sufficient level of intellectual ability it derives some universal morality from the ether and there really is nothing to worry about, but I hope you agree that this is, at the very least, not a conservative assumption. For the purposes of this article I will take the orthogonality thesis as a given: an agent's intelligence and its final goals are independent axes, so more or less any level of intelligence can be combined with more or less any goal. A smarter-than-human artificial intelligence, then, can have any goal.
How worried should we be about artificial intelligence? Recently, I asked a number of AI researchers this question. The responses I received varied considerably; it turns out there is not much agreement about the risks or implications. Non-experts are even more confused about AI and its attendant challenges. Part of the problem is that "artificial intelligence" is an ambiguous term.
By AI one can mean a Roomba vacuum cleaner, a self-driving truck, or one of those death-dealing Terminator robots.
It took me 4 hours and 5 minutes to effectively annihilate the Universe by pretending to be an artificial intelligence tasked with making paperclips. Put another way, it took me 4 hours and 5 minutes to have an existential crisis. This was done by playing the online game "Universal Paperclips", released in 2017. Though the clip-making goal of the game is in itself simple, there are so many contemporary lessons to be extracted from the playthrough that a deep dive seems necessary. Indeed, the game explores our past, present and future in the most interesting way, especially when it comes to the technological advances Silicon Valley is currently oh so proud of.