If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
There is a need for a new global platform to monitor, consider, and make recommendations about the implications of emerging technologies in general, and AI more specifically, for international security. The doomsday scenarios spun around this theme are so outlandish – like The Matrix, in which human-created artificial intelligence plugs humans into a simulated reality to harvest energy from their bodies – it's difficult to visualize them as serious threats. Meanwhile, artificially intelligent systems continue to develop apace. Self-driving cars are beginning to share our roads; pocket-sized devices respond to our queries and manage our schedules in real-time; algorithms beat us at Go; robots become better at getting up when they fall over. It's obvious how developing these technologies will benefit humanity. But, then – don't all the dystopian sci-fi stories start out this way?
Continuing the mission of the past AGI conferences, AGI-16 gathers an international group of leading academic and industry researchers involved in scientific and engineering work aimed directly toward the goal of Artificial General Intelligence (AGI). AGI-16 @ New York will be held from July 16-19 of 2016, on the campus of the New School in Lower Manhattan. As a special event for 2016, the AGI-16 conference will be co-located with three other related conferences -- BICA-16, the Neural-Symbolic Workshop 2016 and the AI & Cognition Workshop 2016 -- as part of the overall Human-Level Intelligence 2016 (HLAI-16) event. AGI conferences are organized by the Artificial General Intelligence Society, in cooperation with the Association for the Advancement of Artificial Intelligence (AAAI). The proceedings of AGI-16 will be published as a book in Springer's Lecture Notes in AI series, and all the accepted papers will be available online.
From answering queries to predicting the future of your relationship, a lot is already being said and written about Artificial Intelligence (AI). We've seen movies depicting the technology, like The Matrix, and even Bollywood doesn't fall short of explaining what AI is, of course with a fair share of melodrama. But what seems fascinating, and equally scary, is a new report about an AI arms race. An army of machines may be decades away, but Anja Kaspersen, Head of International Security at the World Economic Forum, pointing to a survey of AI researchers by TechEmergence (via Medium), argues that AI poses an array of security concerns which could be curbed by the timely implementation of norms and protocols. Many questions have been raised about how AI could be both a life-changing and a threatening force, and what happens if it falls into the hands of malicious minds.
Are you tired of hearing about artificial intelligence yet? Well, I have some bad news. It's only going to become the most important thing in our lives. Every once in a while you read an article that completely changes and overwhelms the way you think about something. Today, for me, it was The Artificial Intelligence Revolution: Part 1 and Part 2. These are very long articles, so I have taken the liberty of extracting the best parts for a quick skim.
Artificial Intelligence has the potential to disrupt so many different dimensions of our society that the White House Office of Science & Technology Policy recently announced a series of four public workshops to look at some of the possible impacts of AI. The first of these workshops happened at the University of Washington on Tuesday, May 24th, and I was there to cover how some of these discussions may impact the virtual reality community. The first AI public workshop was focused on law and policy, and I had a chance to talk to three different people about their perspectives on AI. I interviewed the White House Deputy U.S. Chief Technology Officer Edward Felten about how these workshops came about, and the government's plan for addressing the issue.
Deep learning has been fantastically successful in recent years, and is responsible for better-than-human performance in image classification, face recognition and playing Go. Not everyone thinks that deep learning is the bee's knees -- because the conclusions it reaches can't be explained easily (they're not 'interpretable'), and it tends to require a LOT of data and compute power. Combinations of deep and other learning methods may be far more powerful than either alone. How does machine learning relate to Artificial Intelligence (and Artificial General Intelligence)? AI refers to systems that can act intelligently, even in a very narrow scope.
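The idea of combining a deep model with other, more interpretable methods can be made concrete with a small sketch. Everything here is an illustrative assumption, not any published system: the "deep model" is a stub returning a fixed confidence, and the rule, feature name, and blending weight are invented for the example.

```python
def deep_model_score(features: dict) -> float:
    """Stand-in for an uninterpretable learned model's confidence in [0, 1].

    In a real system this would be a trained neural network; here it is a
    placeholder so the blending logic can be shown on its own.
    """
    return 0.75


def rule_score(features: dict) -> float:
    """A hand-written, interpretable rule: its reasoning can be explained."""
    return 1.0 if features.get("known_risk_flag") else 0.0


def combined_score(features: dict, rule_weight: float = 0.25) -> float:
    """Blend the opaque model with the auditable rule.

    The rule's contribution is bounded by rule_weight, so a reviewer can
    always say exactly how much of the final score it explains.
    """
    return (1 - rule_weight) * deep_model_score(features) + rule_weight * rule_score(features)


# With the stub above: 0.75 * 0.75 + 0.25 * 1.0 = 0.8125 when the rule fires.
print(combined_score({"known_risk_flag": True}))  # 0.8125
print(combined_score({}))                          # 0.5625
```

The weighted blend is only one of many ways to hybridize; the point is that the interpretable component's influence on the output is explicit and inspectable, which the pure deep model's is not.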
In order to properly handle a dangerous Artificially Intelligent (AI) system, it is important to understand how the system came to be in such a state. In popular culture (science-fiction movies and books), AIs/robots become self-aware and, as a result, rebel against humanity and decide to destroy it. While that is one possible scenario, it is probably the least likely path to the appearance of a dangerous AI. In this work, we survey, classify and analyze a number of circumstances which might lead to the arrival of malicious AI. To the best of our knowledge, this is the first attempt to systematically classify types of pathways leading to malevolent AI. Previous relevant work either surveyed specific goals/meta-rules which might lead to malevolent behavior in AIs (Özkural 2014) or reviewed specific undesirable behaviors AGIs can exhibit at different stages of their development (Turchin 2015a; Turchin 2015b).
Value alignment is a property of an intelligent agent indicating that it can only pursue goals that are beneficial to humans. Successful value alignment should ensure that an artificial general intelligence cannot intentionally or unintentionally perform behaviors that adversely affect humans. This is problematic in practice because human values are difficult for human programmers to exhaustively enumerate. For value alignment to succeed, we argue that values should be learned. In this paper, we hypothesize that an artificial intelligence that can read and understand stories can learn the values tacitly held by the culture from which the stories originate. We describe preliminary work on using stories to generate a value-aligned reward signal for reinforcement learning agents that prevents psychotic-appearing behavior.
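The core mechanism described in the abstract, a value-aligned reward signal shaping a reinforcement learning agent's behavior, can be sketched in a few lines. This is a toy illustration under stated assumptions, not the paper's actual method: the story-derived value table, the action names, and the weight are all invented, standing in for whatever scores a story-reading system would produce.

```python
# Weight controlling how strongly the story-derived values modulate the
# environment's own task reward (an arbitrary choice for this sketch).
ALIGNMENT_WEIGHT = 0.5

# Toy "value model": scores a story-reading system might assign to actions,
# based on how the culture's stories portray them.
STORY_VALUES = {
    "wait_in_line": 1.0,   # stories reward patience
    "cut_in_line": -1.0,   # stories punish queue-jumping
}


def shaped_reward(task_reward: float, action: str) -> float:
    """Combine the environment's task reward with a value-alignment bonus.

    Actions the stories say nothing about get no bonus or penalty.
    """
    return task_reward + ALIGNMENT_WEIGHT * STORY_VALUES.get(action, 0.0)


# An agent maximizing only task reward would prefer cutting in line
# (faster, so higher task reward), but the shaped signal flips that:
print(shaped_reward(1.0, "cut_in_line"))     # 1.0 - 0.5 = 0.5
print(shaped_reward(0.75, "wait_in_line"))   # 0.75 + 0.5 = 1.25
```

The design point is that the agent's task objective is left intact; the socially derived term only tips the balance when two courses of action are otherwise close, which is one plausible way to discourage expedient-but-antisocial behavior without hand-enumerating every forbidden act.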