The 19th-century U.K. Locomotive Act, also known as the Red Flag Act, required motorized vehicles to be preceded by a person waving a red flag to signal the oncoming danger. Movies can be a good place to see what the future looks like. According to Robert Wallace, a retired director of the CIA's Office of Technical Service: "... When a new James Bond movie was released, we always got calls asking, 'Do you have one of those?' If I answered 'no', the next question was, 'How long will it take you to make it?' Folks didn't care about the laws of physics or that Q was an actor in a fictional series -- his character and inventiveness pushed our imagination ..."3 As an example, the CIA successfully copied the shoe-mounted, spring-loaded, poison-tipped knife from From Russia With Love. It's interesting to speculate on what else Bond movies may have inspired. For this reason, I have been considering what movies predict about the future of artificial intelligence (AI).
In recent years, Elon Musk has become one of the most vocal critics of artificial intelligence, issuing numerous warnings about the threat that powerful machines pose to the future of mankind. Now the 47-year-old billionaire inventor and Tesla chief executive has unveiled a potential way for the meager human brain to compete with a superior force that Musk has compared to "an immortal dictator" and "the devil." During an interview with Axios co-founders Jim VandeHei and Mike Allen that aired Sunday, Musk said humans must merge with artificial intelligence, creating a "symbiosis" that leads to "a democratization of intelligence." "Essentially, how do we ensure that the future constitutes the sum of the will of humanity?" he asked. "And so, if we have billions of people with the high-bandwidth link to the AI extension of themselves, it would actually make everyone hyper-smart."
Tech leader Elon Musk is known for sounding the alarm about the risks of artificial intelligence. Musk has said that he believes AI will soon manipulate social media, if it hasn't already -- a concern that pales in comparison to his earlier predictions of a future humanity governed by an intelligent machine dictator. A year ago, he told Recode Decode that the relative intelligence of such a dictator compared with the rest of humanity would resemble the gap between a person and a cat. Musk doesn't stand alone in fearing the risks of AI gone wrong: Stephen Hawking and other researchers have warned that intelligent machines could become very dangerous.
Elon Musk is concerned about the future of artificial intelligence, an outlook that's changing all the time. On Sunday, the tech entrepreneur shared a link to a retrospective look at 2017 in the nascent field, a look back that helps explain how ultra-smart machines could influence all areas of life. It's in keeping with the interests of Musk, who founded Neuralink as a way of ensuring humans aren't left behind in the coming machine revolution. The Wired retrospective highlights why this work is so necessary: while noting achievements like Sophia becoming the first robot to receive citizenship, and drone-piloting programs taking to the skies, the story also notes the implicit biases in man-made machines -- such as software rejecting the passport photo of a man of Asian descent because it claimed he was blinking, or an A.I.-judged beauty pageant that favored lighter skin. "The future is accelerating" https://t.co/hKXFJtV8Wt
Earlier this year a group of world experts convened to discuss Doomsday scenarios and ways to counter them. The problem was that while they found it easy to discuss the threats humanity faces, in the majority of cases they were stumped for solutions. This week DeepMind, Google's world-famous Artificial Intelligence (AI) arm, announced a world first: an answer to the potential AI apocalypse predicted by that group and by leading luminaries ranging from Elon Musk to Stephen Hawking, whose fears of a world dominated by AI-powered "killer robots" have been hitting the headlines all year. The answer takes the form of a test that can assess how dangerous AIs and algorithms really are -- or, more importantly, could become.