Etching AI Controls Into Silicon Could Keep Doomsday at Bay
Even the cleverest, most cunning artificial intelligence algorithm will presumably have to obey the laws of silicon. Its capabilities will be constrained by the hardware that it's running on. Some researchers are exploring ways to exploit that connection to limit the potential of AI systems to cause harm. The idea is to encode rules governing the training and deployment of advanced algorithms directly into the computer chips needed to run them. In theory (the sphere where much debate about dangerously powerful AI currently resides), this might provide a powerful new way to prevent rogue nations or irresponsible companies from secretly developing dangerous AI.
- Government > Foreign Policy (0.37)
- Information Technology > Hardware (0.34)
Doomsday to utopia: Meet AI's rival factions
Who is behind it?: Two leading AI labs cite building AGI in their mission statements: OpenAI, founded in 2015, and DeepMind, a research lab founded in 2010 and acquired by Google in 2014. Still, the concept might have stayed on the margins if not for wealthy tech investors interested in the outer limits of AI. Musk invested in DeepMind and introduced the company to Google co-founder Larry Page. Musk also brought the concept of AGI to OpenAI's other co-founders, like CEO Sam Altman.
Self-driving spacecraft may save Earth from doomsday
Hera uses infrared to scan impact crater. Judging by the valuations of companies such as Waymo, Lyft and Uber, humanity is placing a big bet on self-driving cars as the future of transportation. But the future of humanity itself may rest on the hopes of self-driving spacecraft. The European Space Agency is currently developing a self-driving craft for its Hera planetary defense mission to the Didymos asteroid, which could happen as soon as 2023. "If you think self-driving cars are the future on Earth, then Hera is the pioneer of autonomy in deep space," Paolo Martino, lead systems engineer of ESA's proposed Hera mission, said in a statement.
- North America > United States (0.36)
- Asia > Japan (0.05)
- Transportation > Passenger (0.79)
- Transportation > Ground > Road (0.79)
- Government > Space Agency (0.76)
- Government > Regional Government > North America Government > United States Government (0.36)
Scientists Start Planning for Doomsday
Scientists have teamed up with artificial intelligence boosters to help predict many things, like when we will have teleportation or whether we can actually time travel. Veteran AI scientists Eric Horvitz and Lawrence Krauss (of Doomsday Clock fame) are working together with a group of experts to try to predict and stop doomsday from coming. They met last weekend at Arizona State, helped along by funding from Tesla Inc. co-founder Elon Musk and Skype co-founder Jaan Tallinn, to create a team called "Envisioning and Addressing Adverse AI Outcomes." All in all, over 40 scientists, security experts, and policy makers broke into two teams, attackers (red) and defenders (blue), to reenact some of the AI-gone-wrong scenarios that could happen, including environmental problems, global warfare, and stock-market problems. Horvitz hopes that they all learned something from the exercise, and believes that the team is a few steps ahead of the rest of the world, according to Bloomberg.
- North America > United States > Arizona (0.27)
- Europe > Estonia > Harju County > Tallinn (0.27)
- North America > United States > Washington > King County > Redmond (0.07)
- Government (0.79)
- Information Technology > Security & Privacy (0.39)
Scientists think doomsday is on its way and governments won't be able to save us
Catastrophic climate change, nuclear war and natural disasters such as super volcanoes and asteroids could also pose a deadly risk to mankind, researchers said. It may sound like the stuff of sci-fi films, but experts said these apocalyptic threats are more likely than many realise. The report Global Catastrophic Risks, compiled by a team from Oxford University, the Global Challenges Foundation and the Global Priorities Project, ranks dangers that could wipe out 10% or more of the human population. It warns that while most generations never experience a catastrophe, such events are far from fanciful, as the bouts of plague and the 1918 Spanish flu that wiped out millions illustrate. Sebastian Farquhar, director at the Global Priorities Project, told the Press Association: "There are some things that are on the horizon, things that probably won't happen in any one year but could happen, which could completely reshape our world and do so in a really devastating and disastrous way. History teaches us that many of these things are more likely than we intuitively think. Many of these risks are changing and growing as technologies change and grow and reshape our world. But there are also things we can do about the risks."
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.25)
- Asia > China (0.06)