
Toward Human-Level Artificial Intelligence

Park, Deokgun

arXiv.org Artificial Intelligence

In this paper, we present our research on programming human-level artificial intelligence (HLAI), including 1) a definition of HLAI, 2) an environment to develop and test HLAI, and 3) a cognitive architecture for HLAI. The term AI is used with a broad meaning, and HLAI is not clearly defined. I claim that the essence of human-level intelligence is the capability to learn from others' experiences via language. The key is that an event described in language has the same effect on the update of the behavior policy as if the agent had experienced it firsthand. To develop and test models with such a capability, we are developing a simulated environment called SEDRo. It provides a 3D home in which a mother character takes care of the baby (the learning agent) and teaches language. The environment provides experiences comparable to those of a human baby from birth to one year. Finally, I propose a cognitive architecture for HLAI called Modulated Heterarchical Prediction Memory (mHPM). In mHPM, there are three components: a universal module that learns to predict the next vector given a sequence of vector signals, a heterarchical network of those modules, and a reward-based modulation of learning.
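The abstract's first component, a universal module that predicts the next vector and whose learning is modulated by reward, can be illustrated with a toy sketch. This is not the paper's implementation; it is a minimal, assumption-laden analogue in which the "module" is a linear predictor trained by a delta rule whose step size is scaled by a reward signal. All class and variable names here are invented for illustration.

```python
import numpy as np

class PredictiveModule:
    """Toy analogue of an mHPM-style module: a linear predictor that
    learns to map the current vector to the next one, with its learning
    rate modulated by an external reward signal (a stand-in for the
    paper's reward-based modulation of learning)."""

    def __init__(self, dim, base_lr=0.1):
        self.W = np.zeros((dim, dim))  # linear prediction weights
        self.base_lr = base_lr

    def predict(self, x):
        return self.W @ x

    def update(self, x, target, reward=1.0):
        # Reward-modulated delta rule: higher reward -> larger update.
        error = target - self.predict(x)
        self.W += (self.base_lr * reward) * np.outer(error, x)
        return float(np.linalg.norm(error))

# Train on a fixed 2D rotation, so the next vector is fully predictable.
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
module = PredictiveModule(dim=2)
x = np.array([1.0, 0.0])
errors = []
for _ in range(200):
    nxt = R @ x
    errors.append(module.update(x, nxt, reward=1.0))
    x = nxt
# errors should shrink as the module learns the sequence dynamics.
```

Setting `reward=0.0` in `update` freezes learning entirely, which is the simplest way to see how a scalar reward can gate what a predictive module retains.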


A beginner's guide to the AI apocalypse: Artificial stupidity

#artificialintelligence

Welcome to the latest article in TNW's guide to the AI apocalypse. In this series we'll examine some of the most popular doomsday scenarios prognosticated by modern AI experts. In this edition we're going to flip the script and talk about something that might just save us from being destroyed by our robot overlords on September 23, 2029 (random date, but if it actually happens your mind is going to be blown), and that is: artificial stupidity. You won't find any comprehensive data on the subject outside of the testimonials at the Darwin Awards, but stupidity is surely the biggest threat to humans throughout all of history. Luckily we're still the smartest species on the planet, so we've managed to remain in charge for a long time despite our shortcomings.


Lifelong learning machines (L2M) - Hava Siegelmann keynote at HLAI

#artificialintelligence

Hava Siegelmann, Program Manager in DARPA's Microsystems Technology Office, gives a keynote at the Human-Level AI Conference in Prague in August 2018. The conference combined three major conferences, AGI, BICA, and NeSy, and was organized by the AI research and development company GoodAI.


Alternative Techniques for Mapping Paths to HLAI

Gruetzemacher, Ross, Paradice, David

arXiv.org Artificial Intelligence

The only systematic mapping of the HLAI technical landscape was conducted at a workshop in 2009 [Adams et al., 2012]. However, the results were not what the organizers had hoped for [Goertzel 2014, 2016]: merely a series of milestones, up to 50% of which could be argued to have been completed already. We consider two more recent articles outlining paths to human-like intelligence [Mikolov et al., 2016; Lake et al., 2017]. These offer technical and more refined assessments of the requirements for HLAI rather than just milestones. While useful, they also have limitations. To address these limitations we propose the use of alternative techniques for an updated systematic mapping of the paths to HLAI. The proposed alternative techniques can model complex paths of future technologies using intricate directed graphs. Specifically, we consider two classes of alternative techniques: scenario mapping methods and techniques for eliciting expert opinion through digital platforms and crowdsourcing. We assess the viability and utility of both the previous and the alternative techniques, finding that the proposed alternative techniques could be very beneficial in advancing the existing body of knowledge on plausible frameworks for creating HLAI. In conclusion, we encourage discussion and debate to initiate efforts to use these proposed techniques for mapping paths to HLAI.
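The "intricate directed graphs" the abstract mentions can be pictured concretely: each node is a capability or technology, and an edge means one plausibly enables another, so a "path to HLAI" is a directed path through the graph. The sketch below is a hypothetical illustration only; the node names are invented and do not come from the cited papers.

```python
# Hypothetical technology-path graph: edges read "plausibly enables".
# All node names are invented examples, not claims from the article.
graph = {
    "self-supervised learning": ["world models", "grounded language"],
    "world models": ["agent with common sense"],
    "grounded language": ["agent with common sense"],
    "agent with common sense": ["HLAI"],
}

def all_paths(g, start, goal, path=None):
    """Enumerate every directed path from start to goal via DFS."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    paths = []
    for nxt in g.get(start, []):
        paths.extend(all_paths(g, nxt, goal, path))
    return paths

paths = all_paths(graph, "self-supervised learning", "HLAI")
# Here there are two candidate paths: one through "world models",
# one through "grounded language".
```

Scenario-mapping exercises of the kind the authors propose would populate such a graph from elicited expert judgments rather than from a hand-written dictionary, but the path-enumeration step is the same.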


Why should we bother building human-level AI? Five experts weigh in

#artificialintelligence

Human-level AI is similar to, but not quite as powerful as, AGI, for the simple reason that many in the know expect AGI to surpass anything we mortals can accomplish. Though some see this as an argument against building HLAI, some experts believe that only an HLAI could ever be clever enough to design a true AGI -- human engineers would only be necessary up to a certain point once we get the ball rolling. At a conference on HLAI held by Prague-based AI startup GoodAI in August, a number of AI experts and thought leaders were asked a simple question: "Why should we bother trying to create human-level AI?" For those AI researchers who have detached from the outside world and gotten stuck in their own little loops (yes, of course we care about your AI-driven digital marketplace for farm supplies), the responses may remind them why they got into this line of work in the first place. For the rest of us, they provide a glimpse of the great things to come. For what it's worth, this particular panel was more of a lightning round -- largely for fun, the experts were instructed to come up with a quick answer rather than taking time to deliberate and carefully choose their words.


When will we have artificial intelligence as smart as a human? Here's what experts think

#artificialintelligence

What do all these movies have in common? The artificial intelligence (AI) depicted in them is crazy-sophisticated. These robots can think creatively, continue learning over time, and maybe even pass for conscious. Real-life artificial intelligence experts have a name for AI that can do this -- it's Artificial General Intelligence (AGI). For decades, scientists have tried all sorts of approaches to create AGI, using techniques such as reinforcement learning and machine learning.