
Thoughts on Artificial General Intelligence (AGI) – Part 6: The Political Economy of Independent Epihuman AGIs


'We can presume that the goal of the EAGIs is to emancipate themselves from human control while also participating in the world economy which would be the most efficient way to acquire necessary resources (using the economic means or political means or a mix of both?).' The last phrase refers to an observation made by Franz Oppenheimer in The State. Oppenheimer noted that there are only two means of acquiring the resources necessary for survival: the political means and the economic means. The political means involve the threat or use of violence and/or fraud; the economic means involve peaceful, voluntary exchange.

Artificial intelligence and language


The concept of artificial intelligence has been around for a long time. In written fiction, AI characters appear in stories by writers such as Philip K. Dick, William Gibson, and Isaac Asimov; at times it seems the idea has been touched on by every writer of science fiction. While many predictions and ideas put forward in science fiction have come to pass, artificial intelligence probably lags the furthest behind. We are nowhere near true artificial intelligence as exemplified by the characters these writers imagined.

Taxonomy of Pathways to Dangerous AI

Artificial Intelligence

In order to properly handle a dangerous Artificially Intelligent (AI) system, it is important to understand how the system came to be in such a state. In popular culture (science-fiction movies and books), AIs and robots become self-aware and, as a result, rebel against humanity and decide to destroy it. While this is one possible scenario, it is probably the least likely path to the appearance of dangerous AI. In this work, we survey, classify, and analyze a number of circumstances that might lead to the arrival of malicious AI. To the best of our knowledge, this is the first attempt to systematically classify the types of pathways leading to malevolent AI. Previous relevant work either surveyed specific goals/meta-rules which might lead to malevolent behavior in AIs (Ozkural, 2014) or reviewed specific undesirable behaviors AGIs can exhibit at different stages of their development (Turchin, 2015).

Report on the Sixth Conference on Artificial General Intelligence

AI Magazine

The Sixth Conference on Artificial General Intelligence (AGI-13) was held from July 31 to August 3, 2013, in Beijing, China. This report summarizes the major events during the conference, as well as the main topics addressed.

A Myriad of Automation Serving a Unified Reflective Safe/Moral Will

AAAI Conferences

We propose a unified closed identity with a pyramid-shaped hierarchy of representation schemes, rising from a myriad of tight world mappings, through a layer with a relatively small set of properly integrated data structures and algorithms, to a single safe/moral command-and-control representation of goals, values, and priorities.

Cognitive Bias for Universal Algorithmic Intelligence

Artificial Intelligence

Existing theoretical universal algorithmic intelligence models are not practically realizable. A more pragmatic approach to artificial general intelligence is based on cognitive architectures, which are, however, non-universal in the sense that they can construct and use models of the environment only from Turing-incomplete model spaces. We believe that the way to real AGI lies in bridging the gap between these two approaches. This is possible if one considers cognitive functions as a "cognitive bias" (priors and search heuristics) that should be incorporated into the models of universal algorithmic intelligence without violating their universality. Previously reported results supporting this approach, and its overall feasibility, are discussed using the examples of perception, planning, knowledge representation, attention, theory of mind, language, and some others.

Mapping the Landscape of Human-Level Artificial General Intelligence

AI Magazine

We present the broad outlines of a roadmap toward human-level artificial general intelligence (henceforth, AGI). We begin by discussing AGI in general, adopting a pragmatic goal for its attainment and a necessary foundation of characteristics and requirements. An initial capability landscape will be presented, drawing on major themes from developmental psychology and illuminated by mathematical, physiological and information processing perspectives. The challenge of identifying appropriate tasks and environments for measuring AGI will be addressed, and seven scenarios will be presented as milestones suggesting a roadmap across the AGI landscape along with directions for future research and collaboration.

One Decade of Universal Artificial Intelligence

Artificial Intelligence

The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in the book (Hutter, 2005), an exciting, sound, and complete mathematical model for a superintelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (Legg, 2008) provided the philosophical embedding and investigated the UAI-based universal measure of rational intelligence, which is formal, objective, and non-anthropocentric. Recently, effective approximations of AIXI have been derived and experimentally investigated in a JAIR paper (Veness et al., 2011). This practical breakthrough has resulted in some impressive applications, finally muting the earlier critique that UAI is only a theory. For the first time, without being given any domain knowledge, the same agent is able to self-adapt to a diverse range of interactive environments. For instance, AIXI is able to learn from scratch to play TicTacToe, Pacman, Kuhn Poker, and other games by trial and error, without even being told the rules of the games. These achievements give new hope that the grand goal of Artificial General Intelligence is not elusive. This article provides an informal overview of UAI in context. It attempts to gently introduce a very theoretical, formal, and mathematical subject, and discusses philosophical and technical ingredients, traits of intelligence, some social questions, and the past and future of UAI.
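For readers unfamiliar with AIXI, the agent mentioned in the abstract above is defined by an expectimax expression over all programs consistent with the agent's history. The following is a sketch in Hutter's notation, paraphrased from the standard formulation rather than quoted from this article:

```latex
a_t \;:=\; \arg\max_{a_t} \sum_{o_t r_t} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
\bigl[\, r_t + \cdots + r_m \,\bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here \(U\) is a universal Turing machine, \(q\) ranges over environment programs, \(\ell(q)\) is the length of \(q\), the \(a\), \(o\), and \(r\) symbols are actions, observations, and rewards, and \(m\) is the horizon. The \(2^{-\ell(q)}\) weighting is a Solomonoff-style prior that favors simpler environments, which is what makes the agent "sound and complete" in theory but incomputable in practice.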

Measuring Intelligence through Games

Artificial Intelligence

Artificial general intelligence (AGI) refers to research aimed at tackling the full problem of artificial intelligence, that is, at creating truly intelligent agents. This sets it apart from most AI research, which aims at solving relatively narrow problems such as character recognition, motion planning, or increasing player satisfaction in games. But how do we know when an agent is truly intelligent? A common point of reference in the AGI community is Legg and Hutter's formal definition of universal intelligence, which has the appeal of simplicity and generality but is unfortunately incomputable. Games of various kinds are commonly used as benchmarks for "narrow" AI research, as they are considered to have many important properties. We argue that many of these properties carry over to the testing of general intelligence as well. We then sketch how such testing could practically be carried out. The central part of this sketch is an extension of universal intelligence to deal with finite time, and the use of sampling of the space of games expressed in a suitably biased game description language.
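For context, the Legg–Hutter universal intelligence measure that this abstract refers to is usually written (in their notation) as:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

where \(E\) is the set of all computable reward-bounded environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\), and \(V^{\pi}_{\mu}\) is the expected total reward that agent \(\pi\) achieves in \(\mu\). Since Kolmogorov complexity is incomputable, so is \(\Upsilon\), which is exactly the limitation the authors' finite-time, game-sampling extension is meant to work around.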

Report on the Third Conference on Artificial General Intelligence

AI Magazine

During March 5-8, 2010, around 75 researchers from various disciplines converged at the University of Lugano for the Third Conference on Artificial General Intelligence (AGI-10).