If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Superintelligence: What happens in a world with AI that is hundreds or thousands of times smarter than humans? In this episode, we chat with research scientist Roman Yampolskiy. He's a professor at the University of Louisville, and his most recent book is Artificial Superintelligence: A Futuristic Approach. Subscribe wherever you find podcasts: here's where you can subscribe to future39 and hear more interviews like this on the future. John Koetsier: Thank you so much for coming on the show. You have an amazing background there, I love it.
"Artificial Intelligence: Safety and Security is a timely and ambitious edited volume. It comprises 28 chapters organized under three distinct themes: security, artificial intelligence and safety. Edited by Roman V. Yampolskiy, the contributions are well integrated and challenge common conceptions. Yampolskiy has assembled a diverse team of leading scholars. In sum, the book provides valuable insight into the cyber ecosystem. It can be read in any order without missing the essence of the subject matter, yet the chapters speak to each other. The chapters provide insight into new research areas and experimental designs. The book is a must-read for computer scientists, security experts, mathematicians, students and individuals who are interested in learning more about the progress of the artificial intelligence field. It will also be of interest to hackers and the intelligence community."
"When considering potential risks from future technology, one should not be content with merely analyzing what's likely to happen--instead, one should look at what's possible, even if unlikely." I'm a big believer in that quote; it's the reason I spend so much time painting pictures of possible futures. Mr. Tallinn expects the backbone of technology in the 2020s to be defined by gradual improvements in biotechnology, nanotechnology, and Artificial Intelligence. What else can we expect in the next decade? A recent article by George Dvorsky, a senior staff reporter at Gizmodo, explores the futuristic developments in the next ten years.
With the decade winding down, it's time for us to set our sights on the next one. The 2020s promise to be anything but dull. From the automation revolution and increasingly dangerous AI to geohacking the planet and radical advances in biotechnology, here are the most futuristic developments to expect in the next 10 years. Making predictions is easy; it's getting them right that's tough. That said, some tangible trends are emerging that should allow us to make some informed guesses about what the future will hold over the next 10 years. Of great concern, of course, is the pending automation revolution and the associated onset of technological unemployment.
Although the idea of "artificial intelligence" has been around since 1956, this seems to be a breakthrough year for AI in the restaurant space. Major players from Chick-fil-A to Chipotle to Domino's have implemented AI in some form or fashion, whether to identify food safety issues, scale up logistics or generate orders via voice assistance. Even some smaller chains are getting on board the AI train. In February, Colorado-based Good Times Burger & Frozen Custard launched its conversational AI platform through a partnership with Valyant AI, for example. Perhaps the biggest breakthrough came when McDonald's adopted the technology.
Computer scientists have developed a card-playing bot, called Pluribus, capable of defeating some of the world's best players at six-person no-limit Texas hold'em poker, in what's considered an important breakthrough in artificial intelligence. Two years ago, a research team from Carnegie Mellon University developed a similar poker-playing system, called Libratus, which consistently defeated the world's best players at one-on-one Heads-Up, No-Limit Texas Hold'em poker. The creators of Libratus, Tuomas Sandholm and Noam Brown, have now upped the stakes, unveiling a new system capable of playing six-player no-limit Texas hold'em poker, a wildly popular version of the game. In a series of contests, Pluribus handily defeated its professional human opponents, at a level the researchers described as "superhuman." When pitted against professional human opponents with real money involved, Pluribus managed to collect winnings at an astounding rate of $1,000 per hour.
With the increase in the capabilities of artificial intelligence over the last decade, a significant number of researchers have realized the importance of creating not only capable intelligent systems, but also of making them safe and secure [1-6]. Unfortunately, the field of AI Safety is very young, and researchers are still working to identify its main challenges and limitations. Impossibility results are well known in many fields of inquiry [7-13], and some have now been identified in AI Safety [14-16]. In this paper, we concentrate on the poorly understood concept of unpredictability of intelligent systems, which limits our ability to understand the impact of the intelligent systems we are developing and poses a challenge for software verification and intelligent system control, as well as for AI Safety in general. In theoretical computer science, and in software development in general, many impossibility results are well established, and some of them are strongly related to the subject of this paper. For example, Rice's Theorem states that no computationally effective method can decide whether a program will exhibit a particular nontrivial behavior, such as producing a specific output.
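To illustrate the kind of reduction that underlies such results, the following sketch (my own illustration, with hypothetical names; `outputs_42` is an assumed decider that, by Rice's Theorem, cannot actually exist) shows how deciding the nontrivial semantic property "eventually returns 42" would let us decide the halting problem:

```python
# Suppose, for contradiction, we had a total decider:
#   outputs_42(prog) -> True iff prog() eventually returns 42.
# Rice's Theorem implies no such decider exists; the reduction below
# shows why: it would let us decide the (undecidable) halting problem.

def reduce_halting_to_outputs_42(prog):
    """Build a wrapper that returns 42 exactly when prog() halts."""
    def wrapper():
        prog()        # runs forever iff prog() runs forever
        return 42     # reached only if prog() halted
    return wrapper

# For any program prog, outputs_42(reduce_halting_to_outputs_42(prog))
# would answer "does prog halt?", which is known to be undecidable.
```

The wrapper itself is perfectly runnable; it is only the hypothetical decider that cannot exist, which is the point of the reduction.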
In 2017, artificial intelligence attracted $12 billion of VC investment. We are only beginning to discover the usefulness of AI applications. Amazon recently unveiled a brick-and-mortar grocery store that has successfully supplanted cashiers and checkout lines with computer vision, sensors, and deep learning. Between the investment, the press coverage, and the dramatic innovation, "AI" has become a hot buzzword. But does it even exist yet?
Since the birth of the field of Artificial Intelligence (AI), researchers have worked on creating ever more capable machines, but with recent successes in multiple subdomains of AI [1-7], the safety and security of such systems and of predicted future superintelligences [8, 9] have become paramount [10, 11]. While many diverse safety mechanisms are being investigated [12, 13], the ultimate goal is to align AI with the goals, values and preferences of its users, which is likely to include all of humanity. The value alignment problem can be decomposed into three sub-problems, namely: personal value extraction from individual persons, combination of such personal preferences in a way which is acceptable to all, and finally production of an intelligent system which implements the combined values of humanity. A number of approaches for extracting values [15-17] from people have been investigated, including inverse reinforcement learning [18, 19], brain scanning, value learning from literature, and understanding of human cognitive limitations. Assessment of the potential for success of particular techniques of value extraction is beyond the scope of this paper; we simply assume that one of the current methods, their combination, or some future approach will allow us to accurately learn the values of given people.
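As a toy illustration of the second sub-problem (combining personal preferences), the sketch below (all names hypothetical, not a method from the paper) normalizes each person's utilities over a shared set of outcomes and averages them, one of the simplest conceivable aggregation rules:

```python
def aggregate_preferences(profiles):
    """Toy aggregation: normalize each person's utilities to sum to 1,
    then average across people, yielding one group-level utility per outcome.

    profiles: list of dicts mapping outcome -> non-negative utility.
    Returns: dict mapping outcome -> aggregated utility (sums to 1).
    """
    outcomes = set()
    for person in profiles:
        outcomes.update(person)
    combined = {o: 0.0 for o in outcomes}
    for person in profiles:
        total = sum(person.values())
        for o in outcomes:
            combined[o] += (person.get(o, 0.0) / total) / len(profiles)
    return combined
```

Even this trivial rule shows why aggregation is hard: it silently assumes utilities are comparable across people and that no one misreports, exactly the kinds of issues a real combination scheme must confront.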
Awareness of the possible impacts associated with artificial intelligence has risen in proportion to progress in the field. While there are tremendous benefits to society, many argue that there are just as many, if not more, concerns related to advanced forms of artificial intelligence. Accordingly, research into methods to develop artificial intelligence safely is increasingly important. In this paper, we provide an overview of one such safety paradigm, containment, with a critical lens aimed toward generative adversarial networks and potentially malicious artificial intelligence. Additionally, we illuminate a potential developmental blind spot in the stovepiping of containment mechanisms.
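To make the containment idea concrete, here is a minimal sketch (my own illustration, not a mechanism from the paper): untrusted code is executed in a separate OS process under a wall-clock timeout, so a runaway computation cannot block the host.

```python
import subprocess
import sys

def run_contained(code: str, timeout: float = 2.0):
    """Toy containment: execute untrusted Python source in a child process.

    Returns the child's stdout, or None if it exceeded the timeout and
    was killed. Real containment proposals go much further, restricting
    memory, filesystem, network access, and communication channels.
    """
    try:
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        return None  # child was killed; treat as a containment trip
```

For example, `run_contained("print(2 + 2)")` returns the child's output, while `run_contained("while True: pass")` is killed at the timeout and returns None; a timeout is, of course, only the crudest of the isolation layers a serious containment design would stack.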