A sinister threat is brewing deep inside the technology laboratories of Silicon Valley, according to Professor Stephen Hawking. Artificial Intelligence, disguised as helpful digital assistants and self-driving vehicles, is gaining a foothold, and it could one day spell the end for mankind. The world-renowned professor has warned that robots could evolve faster than humans and that their goals will be unpredictable. Hawking claimed AI would be difficult to stop if the appropriate safeguards are not in place. During a talk in Cannes, Google's chairman Eric Schmidt said AI will be developed for the benefit of humanity and that there will be systems in place in case anything goes awry.
In the early-to-mid 1990s, while my friends were getting into indie rock, I was hacking away at robots and teaching them to map a room. As a computer science graduate student, I programmed LISP algorithms to parse nursing records and predict intervention codes. I was no less a nerd (or, to put it more charitably, a technology enthusiast) in law school, where I wrote about how natural language processing could improve legal research tools. I didn't give much thought, either as a computer scientist or as a law student, to whether artificial intelligence (AI) should be regulated. Frankly, the technology was in such early days that AI regulation seemed like science fiction a la Isaac Asimov's three laws of robotics.
Whether it's a new employment contract, a rental contract, or a sales contract, it needs to be checked before signing. Everyone knows the struggle of working through the dreaded small print, searching for pitfalls hidden in the tiniest details, and trying to make sense of the bizarre language of the law. In fairness to the layman, contract review is a hassle for lawyers themselves. In 2014, commercial lawyer Noory Bechor got fed up with the fact that 80 percent of his work was spent reviewing contracts. He figured the service could be done much more cheaply, faster, and more accurately by a computer.
Not a day goes by when I don't hear another artificial intelligence horror story. When a more modern, less killer-machine image is called for, the protagonist of Ex Machina (no less scary) steps in. By now the audience is hooked. Even more dire -- the end of the human race is beckoning! Going back to work is less motivating when you know you'll be replaced by your Roomba in a few years' time.
In the future, will artificial intelligence be so sophisticated that it will be able to tell when someone is trying to deceive it? A Carnegie Mellon University professor and his team are working on technology that could move this idea from the realm of science fiction to reality. Their work -- rooted in game theory and machine learning -- is part of a larger push for more advanced AI. As AI becomes more commonplace in the technology we use every day, detractors and supporters are becoming more vocal about its potential risks and benefits. For some, smarter AI sets a dangerous precedent for a future too reliant on machines to make decisions about everything from medical diagnoses to the operation of self-driving cars.