artificial general intelligence


If Planet Death Doesn't Get Us, an AI Superintelligence Most Certainly Will

#artificialintelligence

Trying to comprehend the nature of an AI superintelligence is akin to trying to comprehend the mind of God. There's an element of the mystical in attempting to define the intentions of a being infinitely more intelligent, and therefore infinitely more powerful, than you--a being that may or may not even exist. There's even an actual religion, founded by a former Uber and Waymo self-driving engineer, dedicated to the worship of a God based on artificial intelligence. But the existential fear of superintelligence isn't just spiritual vaporware, and we have a sense, at least, of how it might come to be. Researchers like Eliezer Yudkowsky suggest that the development of superintelligence could start with the creation of a "seed AI"--an artificial intelligence program able to achieve recursive self-improvement. We've already seen something like this with narrow AI: AlphaGo Zero started out knowing nothing about the game of Go, but quickly improved to a human level and then far beyond.


DataHack Radio #19: The Path to Artificial General Intelligence with Professor Melanie Mitchell - Analytics Vidhya

#artificialintelligence

"People underestimate how complex intelligence is." How close are we to Artificial General Intelligence (AGI)? It seems we take a step closer to that reality with every breakthrough, and yet it still feels a million miles away. Why are we so distant from AGI despite the unabated rise in computational hardware?


Artificial Super Intelligence Might Be Closer than You Think

#artificialintelligence

According to Gartner's survey of over 3,000 CIOs, artificial intelligence (AI) was by far the most-mentioned technology, taking the spot of top game-changing technology away from data and analytics, which now occupies second place. AI is set to become the core of everything humans interact with in the coming years and beyond. Robots are programmable entities designed to carry out a series of tasks. When programmers embed human-like intelligence, behavior, and emotions into robots--and even engineer ethics into them--we say they have created robots with embedded artificial intelligence, able to mimic any task a human can perform, including debating, as IBM showed earlier this year at CES Las Vegas. IBM made a human-AI debate possible through its Project Debater, aimed at helping decision-makers make more informed decisions.


Decentralized AI Ben Goertzel TEDxBerkeley

#artificialintelligence

Dr. Ben Goertzel is the CEO of the decentralized AI network SingularityNET, a blockchain-based AI platform company, and the Chief Scientist of Hanson Robotics. Dr. Goertzel is one of the world's foremost experts in Artificial General Intelligence, a subfield of AI oriented toward creating thinking machines with general cognitive capability at the human level and beyond. He has published 20 scientific books and 140 scientific research papers, and is the main architect and designer of the OpenCog system and its associated design for human-level general intelligence. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.



Why Trusting AI Means Trusting People

#artificialintelligence

Artificial general intelligence--often conflated with superintelligence--can be described as AI that improves upon itself through iterative learning, reaching a point of singularity and quickly surpassing the limits of human intelligence. By no means is this assured to happen, but the likes of Sam Altman and Elon Musk believe it will, and are pushing for regulations to be set before it does. Regardless of where you stand, AI is a dangerous technology--and some might go so far as to call it a weapon. To give just a few examples: deepfakes use AI to perform highly realistic face swaps, creating the illusion that someone said or did something they never did. Deepfakes can also be used to create fake audio in order to impersonate others; the potential dangers of that, when combined with fake video, are enormous.


Rebooting AI: Building Artificial Intelligence We Can Trust: Gary Marcus, Ernest Davis: 9781524748258: Amazon.com: Books

#artificialintelligence

"Artificial intelligence is among the most consequential issues facing humanity, yet much of today's commentary has been less than intelligent: awe-struck, credulous, apocalyptic, uncomprehending. Gary Marcus and Ernest Davis, experts in human and machine intelligence, lucidly explain what today's AI can and cannot do, and point the way to systems that are less A and more I." --Steven Pinker, Johnstone Professor of Psychology, Harvard University, and the author of How the Mind Works and The Stuff of Thought "Finally, a book that tells us what AI is, what AI is not, and what AI could become if only we are ambitious and creative enough. No matter how smart and useful our intelligent machines are today, they don't know what really matters. Rebooting AI dares to imagine machine minds that go far beyond the closed systems of games and movie recommendations to become real partners in every aspect of our lives." Every CEO should read it, and everyone else at the company, too.


DeepMind's Losses and the Future of Artificial Intelligence

#artificialintelligence

Alphabet's DeepMind lost $572 million last year. DeepMind, likely the world's largest research-focused artificial intelligence operation, is losing a lot of money fast, more than $1 billion in the past three years. DeepMind also has more than $1 billion in debt due in the next 12 months. Does this mean that AI is falling apart? Gary Marcus is founder and CEO of Robust.AI and a professor of psychology and neural science at NYU.


Global Big Data Conference

#artificialintelligence

The term "artificial general intelligence," or AGI, doesn't actually refer to anything at this point; it is merely a placeholder, a kind of Rorschach test that people fill with whatever notions they have of what it would mean for a machine to "think" like a person. Despite that fact, or perhaps because of it, AGI is an ideal marketing term to attach to a lot of efforts in machine learning. Case in point: a research paper featured on the cover of this week's Nature about a new kind of computer chip, developed by researchers at China's Tsinghua University, that could "accelerate the development of AGI," the authors claim. The chip is a strange but intriguing hybrid of approaches, yet the work leaves many questions unanswered about how it is made and how it achieves what the researchers claim of it. And some longtime chip observers doubt the impact will be as great as suggested.