I mean both of the ways people interpret Shakespeare's meaning when he has Antonio utter the phrase in The Tempest. In one interpretation, the past has predetermined the sequence that is about to unfold; likewise, I believe that how we have gotten to where we are in Artificial Intelligence will determine the directions we take next, so it is worth studying that past. In the other interpretation, the past amounted to relatively little, and the majority of the necessary work lies ahead. That too I believe: we have hardly even gotten started on Artificial Intelligence, and there is a lot of hard work ahead. It is generally agreed that John McCarthy coined the phrase "artificial intelligence" in the written proposal for a 1956 Dartmouth workshop, dated August 31st, 1955. It is authored by, in listed order, John McCarthy of Dartmouth, Marvin Minsky of Harvard, Nathaniel Rochester of IBM, and Claude Shannon of Bell Laboratories. All but Rochester would later serve on the faculty at MIT, although by the early sixties McCarthy had left to join Stanford University. The nineteen-page proposal has a title page and six introductory pages (1 through 5a), followed by individually authored sections on proposed research by the four authors.
Up to the present, there has been a post floating around the internet detailing multiple "types" of artificial intelligence, purportedly written by someone named "Yuli Ban". If you see this post, know that it wasn't written by me at all, absolutely not; I take no responsibility for the cringey contents of that post, and you are likely remembering something that never existed, or perhaps was written by my evil twin, Tali. In all seriousness, I've been meaning to update that post for a while now, thanks to a greater understanding of how AI works. I recall calling it a smorgasbord of buzzwords without much meaning, written by someone in 2016 with no experience in AI whatsoever. This one, I hope, will prove more useful. Artificial intelligence has a problem: no one can precisely tell you what it is supposed to be.
Ideally, an interface will surface the deepest principles underlying a subject, revealing a new world to the user. When you learn such an interface, you internalize those principles, giving you more powerful ways of reasoning about that world. Those principles are the diffs in your understanding. They're all you really want to see; everything else is at best support, at worst unimportant dross. The purpose of the best interfaces isn't to be user-friendly in some shallow sense.
When it comes to Artificial Intelligence (AI), people's responses vary, from "Terminator and Skynet are coming to kill us all" to "Will the bots take my job?" to "Awesome, now I can sit back and do the fun stuff while the bots take care of tedious tasks for me." But there are also misperceptions and misinformation. It's always useful to have a basic grasp of AI, because whether you like it or not, AI is already manifesting in many aspects of our lives. For instance, you can now order Domino's pizzas by talking to your phone. The pizza giant also says it is moving from a "mobile first" to an "AI first" philosophy.
Terence Mills, CEO of AI.io and Moonshot, is an AI pioneer and digital technology specialist. As our society's technological progress marches forward, we've become ever more fascinated with the concept of artificial general intelligence (AGI). From IBM's Jeopardy-playing computer Watson to television programs like Westworld, we've collectively begun exploring and philosophizing about the potential of AGI. Of course, most discussions about AGI in our popular culture are focused on the future rather than the current realities of artificial general intelligence. Below, we'll discuss the current realities of AGI and what breakthroughs we're on the cusp of in 2018.