If you're like most marketers, you're probably trying to get in on a little AI action to raise your game and keep up with your competition. And if you're like most marketers, you might not understand exactly how it all works yet. As you discover new smart tools for your company, the first step toward making smart buying decisions is to understand the difference between machine learning and artificial intelligence. These terms are often used interchangeably, but they are definitely not the same thing. "AI is any technology that enables a system to demonstrate human-like intelligence," explained Patrick Nguyen, chief technology officer at [24]7.ai.
Welcome to TechTalks' AI book reviews, a series of posts that explore the latest literature on AI. It wouldn't be an overstatement to say that artificial intelligence is one of the most confusing and least understood fields of science. On the one hand, we have headlines that warn of deep learning algorithms outperforming medical experts, creating their own language, and spinning fake news stories. On the other hand, AI experts point out that artificial neural networks, the key innovation of current AI techniques, fail at some of the most basic tasks that any human child can perform. Artificial intelligence is also marked by some of the most divisive disputes and rivalries in science.
If you're reading these words, rest assured, they were written by a human being. Whether they amount to intelligence, that's for you to say. The age of machine writing that can pass muster with human readers is not quite upon us, at least not if one reads closely. Scientists at the not-for-profit OpenAI this week released a neural network model that not only gobbles tons of human writing -- 40 gigabytes' worth of Web-scraped data -- it also discovers what kind of task it should perform, from answering questions to writing essays to performing translation, all without being explicitly told to do so, what's known as "zero-shot" learning of tasks. The debut set off a swarm of headlines about new and dangerous forms of "deep fakes."
Fraudulent images have been around for as long as photography itself. Take the famous hoax photos of the Cottingley fairies or the Loch Ness monster. Photoshop ushered image doctoring into the digital age. Now artificial intelligence is poised to lend photographic fakery a new level of sophistication, thanks to artificial neural networks whose algorithms can analyze millions of pictures of real people and places--and use them to create convincing fictional ones. These networks consist of interconnected computational units arranged in a system loosely based on the human brain's structure.
Many people claim that current technological progress is happening at a faster and faster pace (exponential, even), with no end in sight. The merits and detriments of technology can be argued ad nauseam, but I won't be getting into that in this post (I generally view technology itself as neutral -- it can be used to improve human life or terribly misused to oppress, control, and kill). What I am going to briefly explore here is the question: is current progress in AI exponential? And if so, what implications does that have for estimates of the arrival of human-level or superhuman-level AI? Before I dive in, it's worth asking (if you didn't study mathematics): why does it matter if something is changing exponentially? Frequently people think the word "exponential" means "really fast," which is sometimes true, but doesn't capture much of the meaning of the concept.
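To make the distinction concrete, here is a minimal sketch (my illustration, not from the post): an exponential quantity grows by a constant *ratio* each step, while a linear one grows by a constant *amount* -- which is why an exponential trend eventually overtakes any linear one, even if it starts out slower.

```python
# Linear growth: add a constant amount (10) each step.
linear = [10 * t for t in range(6)]

# Exponential growth: multiply by a constant factor (2) each step.
exponential = [2 ** t for t in range(6)]

print(linear)       # [0, 10, 20, 30, 40, 50]
print(exponential)  # [1, 2, 4, 8, 16, 32]

# The defining property of the exponential sequence is the constant
# ratio between successive terms, not its absolute speed:
ratios = [exponential[t + 1] / exponential[t] for t in range(5)]
print(ratios)       # [2.0, 2.0, 2.0, 2.0, 2.0]
```

Note that at step 5 the linear sequence (50) is still ahead of the exponential one (32); the crossover comes later, which is exactly why "exponential" and "really fast" are not the same claim.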