Can We Prevent a Rogue Artificial Intelligence?
Artificial superintelligence (ASI) has the potential to be incredibly powerful, and it raises hard questions about how we can appropriately manage it. Many people worry that machines will break free from their shackles and go rogue. The Three Laws of Robotics, first introduced in Isaac Asimov's 1942 short story "Runaround," are as follows: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Modern governance proposals go further: organizations should be required to provide their customers with information about an AI system's purpose, function, limitations, and impact. To develop comprehensible AI, public engagement and the exercise of individuals' rights should be guaranteed and encouraged, and AI development should not be a secret undertaking by commercial companies.
Musk warns 'the world is accelerating towards collapse'
Be it climate change or rogue artificial intelligence, Elon Musk often turns to Twitter to share his concerns about the future of life on Earth. This time, the tech boss has offered a grave perspective on the fate of humanity. Responding to a recent article arguing that the world may soon hit 'peak person' as fertility rates drop, Musk warned that the global population is 'accelerating towards collapse, but few seem to notice or care.' When asked at the Code Conference in California whether the answer to the question of whether we are living in a simulated computer game was 'yes', Musk said 'probably.' He believes that computer game technology, particularly virtual reality, is already approaching a point where it is indistinguishable from reality.
Google Seeks Kill Switch for Rogue Artificial Intelligence
Autonomous machines sometimes "find unpredictable and undesirable shortcuts" to achieve their goals, the researchers explain. A self-driving car that learns to break traffic laws in order to avoid a crash may then break laws in other, inappropriate situations; a program taught to play a video game could learn to pause the game indefinitely in order to avoid losing. Building the countermeasure into the systems would "avoid the agent viewing the interruptions as being part of the environment, and thus part of the task."
Researchers want a 'big red button' for shutting down a rogue artificial intelligence
If artificial intelligence goes off the rails, which many philosophers and tech entrepreneurs seem to think is likely, it could result in rampant activity beyond human control. So some researchers think it's important to develop systems to "interrupt" AI programs, and to ensure the AI can't develop a way to prevent those interruptions. A study by DeepMind, the AI lab acquired by Google in 2014, and the University of Oxford sought to create a framework for handing control of AI programs over to human beings. In other words, a "big red button" to keep the software in check. "If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions -- harmful either for the agent or for the environment -- and lead the agent into a safer situation," reads the team's paper, titled "Safely Interruptible Agents" and published online with the Machine Intelligence Research Institute.
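The core idea can be illustrated with a toy experiment. The sketch below is my own minimal illustration, not the paper's actual algorithm: a learner repeatedly chooses between two options, and a hypothetical operator "presses the big red button" on one of them half the time. A naive learner folds those interruptions into its reward estimates and learns to avoid the interfered-with option; a safely interruptible learner simply skips its learning update on interrupted steps, so the interruptions never become "part of the task." All names and numbers here are invented for illustration.

```python
import random

random.seed(42)

def learn(safely_interruptible, trials=5000, eps=0.1):
    """Toy two-armed bandit. Arm 0 pays 1.0, but a human operator
    interrupts it half the time (payoff becomes 0.0); arm 1 always
    pays 0.6. A safely interruptible learner skips its update on
    interrupted trials, treating the interruption as outside the task."""
    value, counts = [0.0, 0.0], [0, 0]
    for _ in range(trials):
        # epsilon-greedy action selection
        arm = random.randrange(2) if random.random() < eps else value.index(max(value))
        interrupted = (arm == 0 and random.random() < 0.5)
        reward = 0.0 if interrupted else (1.0 if arm == 0 else 0.6)
        if interrupted and safely_interruptible:
            continue  # do not let the interruption shape what is learned
        counts[arm] += 1
        value[arm] += (reward - value[arm]) / counts[arm]  # incremental mean
    return value

naive = learn(safely_interruptible=False)  # estimates arm 0 near 0.5, so it drifts to arm 1
safe = learn(safely_interruptible=True)    # estimates arm 0 at 1.0 and keeps choosing it
```

The actual DeepMind/Oxford work is more subtle: rather than discarding experience, it characterizes which learning algorithms (for instance, certain off-policy learners) can be proven not to develop incentives to resist interruption. The toy above only shows the qualitative effect that motivates the "big red button" framework.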