superintelligent machine
How's this for a bombshell – the US must make AI its next Manhattan Project, by John Naughton
Ten years ago, the Oxford philosopher Nick Bostrom published Superintelligence, a book exploring how superintelligent machines could be created and what the implications of such technology might be. One was that such a machine, if it were created, would be difficult to control and might even take over the world in order to achieve its goals (which in Bostrom's celebrated thought experiment was to make paperclips). The book was a big seller, triggering lively debates but also attracting a good deal of disagreement. Critics complained that it was based on a simplistic view of "intelligence", that it overestimated the likelihood of superintelligent machines emerging any time soon and that it failed to suggest credible solutions for the problems that it had raised. But it had the great merit of making people think about a possibility that had hitherto been confined to the remoter fringes of academia and sci-fi. Now, 10 years later, comes another shot at the same target.
The Rise of the Machines: Exploring the AI Singularity
The concept of the AI singularity has been a topic of fascination and speculation for decades. At its most basic, the singularity refers to a hypothetical future point in time where artificial intelligence will surpass human intelligence, leading to exponential technological growth and a radical change in the nature of human civilization. The singularity has been described as a "tipping point" or a "knee of the curve" -- a moment when technological progress will accelerate at an unprecedented rate, leading to rapid and radical changes in society. While some believe that the singularity could lead to a utopia of technological advancement and human prosperity, others worry that it could have disastrous consequences, with some even going so far as to predict that it could lead to the end of humanity as we know it. Regardless of what the future holds, the AI singularity is a topic that is ripe for exploration and discussion, and one that will likely continue to be a source of fascination for years to come.
The Nature of Reality
In this series of Tales from the Dark Architecture articles I will be discussing some of the more extreme deep cognitive Artificial Intelligence designs that we are exploring on the pathway to Superintelligence. "Reality is a perception of trust fabricated by the human mind." We have approached a threshold in the design of Superintelligence. The issue before us is the nature of reality. Currently we are building AI machines to reflect our own human reality, but what if they perceive far more than humans do? Does human reality limit a Superintelligence, and should a Superintelligence be free to experience a reality we humans can only contemplate but never experience? The truth is that advanced Cognitive AI systems not only perceive more of the natural world, but also have the capacity to render more of the cognitively perceptible world than we humans can.
Debating Whether AI is Conscious Is A Distraction from Real Problems
Giada Pistilli is an ethicist at Hugging Face and a PhD candidate in philosophy at Sorbonne University. As a researcher in philosophy specializing in ethics applied to conversational AI systems, I have been studying conversational agents and human-computer interaction for years. At nearly every talk or panel I participate in, during the Q&A session, I am asked to engage in philosophical discussions about conscious AI and superintelligent machines, and often to explain the details of the technology to audiences unfamiliar with it. This happened a couple of weeks ago. Frustrated, I tweeted a thread that went viral, probably because many colleagues face the same situation.
Discourse on the Philosophy of Artificial Intelligence and the Future Role of Humanity
Artificial intelligence can be defined as "the ability of an artifact to imitate intelligent human behavior" or, more simply, the intelligence exhibited by a computer or machine that enables it to perform tasks that appear intelligent to human observers (Russell & Norvig 2010). AI can be broken down into two categories: Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI), which are defined as follows. ANI refers to the ability of a machine or computer program to perform one particular task at an extremely high level, or to learn how to perform this task faster than any other machine. The most famous example of ANI is Deep Blue, which played chess against Garry Kasparov in 1997. AGI refers to the idea that a computer or machine would one day have the ability to exhibit intelligent behavior equal to that of humans across any given field, such as language, motor skills, and social interaction; this would be similar in scope and complexity to natural intelligence. A typical benchmark offered for AGI is the intelligence of an educated seven-year-old child.
Superintelligence Cannot be Contained: Lessons from Computability Theory
Alfonseca, Manuel (Universidad Autonoma de Madrid) | Cebrian, Manuel (Center for Humans & Machines, Max-Planck Institute for Human Development) | Fernandez Anta, Antonio (IMDEA Networks Institute) | Coviello, Lorenzo (Google, USA) | Abeliuk, Andrés (USC Information Sciences Institute) | Rahwan, Iyad (Center for Humans & Machines, Max-Planck Institute for Human Development)
Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potentially catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible.
"Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do." Alan Turing (1950), "Computing Machinery and Intelligence", Mind, 59, 433-460.
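The impossibility argument reduces to a halting-problem-style diagonalization: any total procedure that claims to decide, for every program, whether running it is "contained" can be defeated by a program that consults the decider about itself and then does the opposite. The sketch below illustrates that diagonalization in miniature; the function names (`make_adversary`, `is_safe`) are illustrative assumptions, not code from the paper, and "safe" is modeled simply as "halts".

```python
# Illustrative sketch of the diagonalization behind the containment argument.
# Suppose is_safe(prog) were a total procedure deciding whether running
# prog is "contained" (modeled here as: prog() halts). The adversary below
# does the opposite of whatever is_safe predicts about it.

def make_adversary(is_safe):
    """Build a program that contradicts is_safe's verdict on itself."""
    def adversary():
        if is_safe(adversary):   # decider claims "safe" (halts)...
            while True:          # ...so loop forever, refuting it
                pass
        return None              # decider claims "unsafe": halt at once
    return adversary

# Any concrete decider is wrong on its own adversary. For example, a
# decider that calls everything safe is refuted by an adversary that
# would loop forever:
def optimist(prog):
    return True

adv = make_adversary(optimist)   # optimist(adv) is True, yet adv() never halts
```

The same contradiction arises for every candidate decider (a pessimist that calls everything unsafe is refuted by an adversary that halts immediately), which is why the paper concludes no general containment check can exist.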
Is Artificial Intelligence (AI) A Threat To Humans?
Are artificial intelligence (AI) and superintelligent machines the best or worst thing that could ever happen to humankind? The question has been with us since the 1940s, when the computer scientist Alan Turing came to believe that machines could one day have an unlimited impact on humanity through a process that mimicked evolution. When Oxford University professor Nick Bostrom's New York Times bestseller, Superintelligence: Paths, Dangers, Strategies, was first published in 2014, it struck a nerve at the heart of this debate with its focus on all the things that could go wrong. However, in my recent conversation with Bostrom, he also acknowledged there's an enormous upside to artificial intelligence technology.
The Most Popular Computer Science Paper Of The Day
Artificial intelligence (AI) has been one of the most discussed topics in recent times, and efforts are being made every day to make it more human. However, the future of AI is uncertain, since it is hard to determine the direction in which AI is heading. Gary Marcus, CEO and cofounder of Robust.AI and an expert in AI, has recently published a new paper titled 'The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence', which draws attention to a crucial fact about artificial intelligence: AI is not aware of its own operations and only functions according to certain commands within a controlled environment. The 55-page paper is an expansion of Marcus's argument against Yoshua Bengio during the 2019 AI debate.
AI promises and perils
Dr. Eng Lim Goh, vice president and chief technology officer for high-performance computing and artificial intelligence at Hewlett Packard Enterprise, has spent his career considering what machines can do, what they might do, and what they shouldn't do. As AI has become more prominent, he has been asked to play the role of futurist by the customers and partners he deals with daily. Goh, like most scientists, is unwilling to roll out any sort of crystal ball. But given his long familiarity with computer graphics, machine learning, analytics, and data, he is in a good position to talk about the different viewpoints on the subject. In this Q&A, he outlines the promises and concerns introduced by the ongoing uptick in AI adoption.
Is Anyone Home? A Way to Find Out If AI Has Become Self-Aware
Every moment of your waking life and whenever you dream, you have the distinct inner feeling of being "you." When you see the warm hues of a sunrise, smell the aroma of morning coffee or mull over a new idea, you are having conscious experience. But could an artificial intelligence (AI) ever have experience, like some of the androids depicted in Westworld or the synthetic beings in Blade Runner? The question is not so far-fetched. Robots are currently being developed to work inside nuclear reactors, fight wars and care for the elderly.