Artificial intelligence is the future. Google, Microsoft, Amazon and Apple are all making big bets on AI. (Amazon owner Jeff Bezos also owns The Washington Post.) Congress has held hearings and even formed a bipartisan Artificial Intelligence Caucus. From health care to transportation to national security, AI has the potential to improve lives. But it comes with fears about economic disruption and a brewing "AI arms race."
Over the next five years, the world we live in will be entirely disrupted by improvements in artificial intelligence (AI) and machine learning. Children today are growing up with AI assistants in their homes (Google Assistant, Siri and Alexa) -- to the point that you might consider their mere presence an extension of co-parenting. As voice and facial recognition continue to evolve, machine learning algorithms are getting smarter. More and more industries are being influenced by AI, and our society as we know it is transforming. The transportation industry looks likely to be the first completely disrupted by artificial intelligence.
When it comes to the future of artificial intelligence, the ultimate battle between man and machine may come to mind -- but that's really the stuff of science fiction. AI actually has a presence in our daily lives on a much more useful and less apocalyptic level. Think personal assistant devices and apps like Alexa, Cortana and Siri, web search predictions, movie suggestions on Netflix and self-driving cars. The term "artificial intelligence" was coined back in 1956. It describes a machine's ability to perform intelligent behavior such as decision-making or speech recognition.
With the Hollywood blockbuster Transcendence playing in cinemas, with Johnny Depp and Morgan Freeman showcasing clashing visions for the future of humanity, it's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history. Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.
Movies like Blade Runner and Her have popularised the idea of fully conscious computers, and with AI (Artificial Intelligence) technology like Apple's Siri or Amazon's Alexa increasingly present in our lives, it'd be easy to believe that what you see on the silver screen is just around the corner. Whilst I enjoy a Sci-Fi epic as much as the next person, in my dual role as Professor of Computer Science at the University of San Francisco and Chief Scientist at data integration software provider SnapLogic, I investigate the practical applications of AI and am tasked with explaining and teaching the realities of what can be achieved. In other words, I separate the fact from the fiction, which is what I aim to do today. Today's AI is not self-aware or able to generate original thoughts. What many people call AI is actually a subfield called machine learning (ML).