If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
When you return to school after summer break, it may feel like you forgot everything you learned the year before. But if you learned like an AI system does, you actually would have -- as you sat down for your first day of class, your brain would take that as a cue to wipe the slate clean and start from scratch. AI systems' tendency to forget what they previously learned upon taking in new information is called catastrophic forgetting. See, cutting-edge algorithms learn, so to speak, by analyzing countless examples of what they're expected to do. A facial recognition AI system, for instance, will analyze thousands of photos of people's faces, likely photos that have been manually annotated, so that it will be able to detect a face when it pops up in a video feed.
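The effect is easy to reproduce in miniature. The sketch below (a hypothetical toy setup, not a facial-recognition system) trains a single logistic neuron on task A, then trains the same weights only on task B, whose labels conflict with A's. After the second round of training, the model's accuracy on task A collapses -- it has "forgotten" the first task:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, b, X, y, epochs=200, lr=0.5):
    """Plain gradient descent on the cross-entropy loss."""
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad = p - y                      # dLoss/dlogit for logistic regression
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def accuracy(w, b, X, y):
    return float(((sigmoid(X @ w + b) > 0.5) == y).mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y_a = (X[:, 0] > 0).astype(float)   # task A: positive inputs are class 1
y_b = 1.0 - y_a                     # task B: the opposite labeling rule

w, b = np.zeros(1), 0.0
w, b = train(w, b, X, y_a)
acc_a_before = accuracy(w, b, X, y_a)   # near-perfect on task A
w, b = train(w, b, X, y_b)              # now train only on task B
acc_a_after = accuracy(w, b, X, y_a)    # task A has been overwritten
```

Because nothing in plain gradient descent protects the weights that mattered for task A, training on task B simply drives them wherever B's loss points -- which is the crux of catastrophic forgetting.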
Incremental steps in exploring artificial intelligence now can position organizations to pounce on a powerful array of machine-powered capabilities in the future. In March 2016, the American Association for Artificial Intelligence and I surveyed 193 AI researchers on how long it would be until we achieve artificial superintelligence (ASI), defined as an intellect that is smarter than the best human in practically every field. Of the 80 who responded, 67.5 percent said it could take a quarter century or more, and 25 percent said it would likely never happen. Given the sheer number of "AI is coming to take your job" articles appearing across media, these survey findings may come as a surprise to some. Yet they are grounded in certain realities.
There are fears that tend to come up when people talk about futuristic artificial intelligence -- say, one that could teach itself to learn and become more advanced than anything we humans might be able to comprehend. In the wrong hands, perhaps even on its own, such an advanced algorithm might dominate the world's governments and militaries, impose Orwellian levels of surveillance, manipulation, and social control over societies, and perhaps even control entire battlefields of autonomous lethal weapons such as military drones. But some artificial intelligence experts don't think those fears are well-founded. In fact, highly advanced artificial intelligence could be better at managing the world than humans have been. These fears themselves are the real danger, because they may hold us back from making that potential a reality.
Pretty much every tech startup that boasts its use of artificial intelligence is actually focused on an ultra-specific problem. Visual effects company Digital Domain has an AI algorithm that automates and enhances video editing; healthcare startup Babylon developed an AI chatbot to answer the constant barrage of patient questions. They do what they're supposed to. But they'll never lead to something more, something bigger. No matter how great these AI systems sound, no startup will ever stumble upon artificial general intelligence (AGI), which is essentially a complete, human-like AI system that has become truly intelligent.
Welcome to the AI era. We missed the official announcement too, but it's obvious that's what we're in. This new paradigm requires the acceptance or denial of a new brand of faith: Artificial general intelligence (AGI). Or, sentient machines, if you prefer. Either way, let's talk about killer robots.
An Artificial General Intelligence (AGI) would be a machine capable of understanding the world as well as any human, and with the same capacity to learn how to carry out a huge range of tasks. AGI doesn't exist, but it has featured in science-fiction stories for more than a century, and has been popularized in modern times by films such as 2001: A Space Odyssey. Fictional depictions of AGI vary widely, although they tend more towards the dystopian vision of intelligent machines eradicating or enslaving humanity, as seen in films like The Matrix or The Terminator. In such stories, AGI is often cast as either indifferent to human suffering or even bent on mankind's destruction. In contrast, utopian imaginings, such as Iain M. Banks's Culture novels, cast AGI as benevolent custodians, running egalitarian societies free of suffering, where inhabitants can pursue their passions and technology advances at a breathless pace. Whether these ideas would bear any resemblance to real-world AGI is unknowable, since nothing of the sort has been created -- or, according to many working in the field of AI, is even close to being created.
However, even these reinforcement learning algorithms couldn't transfer what they'd learned about one task to acquiring a new task. To realize this achievement, DeepMind supercharged a reinforcement learning algorithm called A3C. In so-called actor-critic reinforcement learning, of which A3C is one variety, acting and learning are decoupled so that one neural network, the critic, evaluates the other, the actor. Together, they drive the learning process. This was already the state of the art, but DeepMind added a new off-policy correction algorithm called V-trace to the mix, which made the learning more efficient and, crucially, better able to achieve positive transfer between tasks.
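The decoupling described above can be sketched in a few lines. The toy below is a hypothetical illustration of the actor-critic idea -- not DeepMind's A3C or V-trace -- on a two-armed bandit: the actor is a softmax policy over the two arms, the critic is a scalar baseline, and each update weights the actor's policy-gradient step by the critic's advantage estimate:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train(steps=2000, lr_actor=0.1, lr_critic=0.1, seed=0):
    rng = np.random.default_rng(seed)
    logits = np.zeros(2)   # actor: preferences over the two arms
    value = 0.0            # critic: baseline estimate of expected reward
    for _ in range(steps):
        pi = softmax(logits)
        a = rng.choice(2, p=pi)
        r = 1.0 if a == 1 else 0.0   # arm 1 always pays off, arm 0 never does
        adv = r - value              # critic's evaluation of the actor's choice
        # actor update: softmax policy-gradient step, scaled by the advantage
        grad = -pi
        grad[a] += 1.0
        logits += lr_actor * adv * grad
        # critic update: nudge the baseline toward the observed reward
        value += lr_critic * adv
    return softmax(logits), value
```

After training, the actor's policy concentrates on the paying arm and the critic's baseline approaches the expected reward. The design point the passage makes is visible even here: the actor never sees raw rewards directly, only the critic's judgment of them.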
Much of the debate over how artificial intelligence (AI) will affect geopolitics focuses on the emerging arms race between Washington and Beijing, as well as investments by major military powers like Russia. And to be sure, breakthroughs are happening at a rapid pace in the United States and China. But while an arms race between superpowers is riveting, AI development outside of the major powers, even where advances are less pronounced, could also have a profound impact on our world. The way smaller countries choose to use and invest in AI will affect their own power and status in the international system. Middle powers--countries like Australia, France, Singapore, and South Korea--are generally prosperous and technologically advanced, with small-to-medium-sized populations.
Modern data science emerged in tech, from optimizing Google search rankings and LinkedIn recommendations to influencing the headlines Buzzfeed editors run. But it's poised to transform all sectors, from retail, telecommunications, and agriculture to health, trucking, and the penal system. Yet the terms "data science" and "data scientist" aren't always easily understood, and are used to describe a wide range of data-related work. What, exactly, is it that data scientists do? As the host of the DataCamp podcast DataFramed, I have had the pleasure of speaking with over 30 data scientists across a wide array of industries and academic disciplines.
However, these achievements remain essentially in the domain of "narrow AI" -- AI that carries out tasks based on specifically supplied data or rules, or carefully created training situations. AIs that can generalize to unanticipated domains and confront the world as autonomous agents are still part of the road ahead. The question remains: what do we need to do to get from today's narrow AI tools, which have become mainstream in business and society, to the AGI envisioned by futurists and science fiction authors? While there is a tremendous diversity of perspectives and no shortage of technical and conceptual ideas on the path to AGI, there is nothing resembling an agreement among experts on the matter. For example, Google DeepMind co-founder Demis Hassabis has long favored closely brain-inspired approaches to AGI, and continues to publish papers in this direction.