Killer AI
Joe Biden Has a Secret Weapon Against Killer AI. It's Bureaucrats
As ChatGPT's first birthday approaches, presents are rolling in for the large language model that rocked the world. From President Joe Biden comes an oversized "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." And UK prime minister Rishi Sunak threw a party with a cool extinction-of-the-human-race theme, wrapped up with a 28-country agreement (counting the EU as a single country) promising international cooperation to develop AI responsibly. Before anyone gets too excited, let's remember that it has been over half a century since credible studies predicted disastrous climate change. Now that the water is literally lapping at our feet and heat is making whole chunks of civilization uninhabitable, the international order has hardly made a dent in the gigatons of fossil fuel carbon dioxide spewing into the atmosphere.
- North America > United States (1.00)
- Europe > United Kingdom (0.26)
'Killer AI' is real. Here's how we stay safe, sane and strong in a brave new world
The rapid advancement of artificial intelligence (AI) has been nothing short of remarkable. From health care to finance, AI is transforming industries and has the potential to elevate human productivity to unprecedented levels. However, this exciting promise is accompanied by a looming concern among the public and some experts: the emergence of "Killer AI." In a world where innovation has already changed society in unexpected ways, how do we separate legitimate fears from those that should still be reserved for fiction? To help answer questions like these, we recently released a policy brief for the Mercatus Center at George Mason University titled "On Defining 'Killer AI.'"
- Media (0.51)
- Government (0.51)
- Health & Medicine (0.37)
Amazon and Microsoft are putting world at risk with killer AI, study says
WASHINGTON – Amazon, Microsoft and Intel are among the leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players in the sector about their stance on lethal autonomous weapons. Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and whether they had committed to abstaining from contributing in the future. "Why are companies like Microsoft and Amazon not denying that they're currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?" the report asks. The use of AI to allow weapon systems to autonomously select and attack targets has sparked ethical debates in recent years, with critics warning that such systems would jeopardize international security and herald a third revolution in warfare after gunpowder and the atomic bomb. A panel of government experts debated policy options regarding lethal autonomous weapons at a meeting of the United Nations Convention on Certain Conventional Weapons in Geneva on Wednesday.
- North America > United States > California > Alameda County > Berkeley (0.05)
- Asia > Middle East > Israel (0.05)
Research indicates the only defense against killer AI is not developing it
A recent analysis of the future of warfare indicates that countries that continue to develop AI for military use risk losing control of the battlefield. Here's what that means, according to a trio of experts. Researchers from ASRC Federal, a private company that provides support for the intelligence and defense communities, and the University of Maryland recently published a paper on the pre-print server arXiv discussing the potential ramifications of integrating AI systems into modern warfare. The paper focuses on the near-future consequences of the AI arms race, under the assumption that AI will not somehow run amok or take over.
The Road to Killer AI: ML Blockchain IOT Drones Skynet?
Lately, there has been a lot of concern about the recent explosion of AI and how it could reach the point of 1) being more intelligent than humans, and 2) deciding that it no longer needs us and could, in fact, take over the Earth. Physicist Stephen Hawking famously told the BBC: "The development of full artificial intelligence could spell the end of the human race." Billionaire Elon Musk has said that he thinks AI is the "biggest existential threat" to the human race. Computers running the latest AI have already beaten humans at games ranging from chess to Go to esports titles (which is interesting, because esports are a case where AI can outplay humans at games built as software from the ground up, unlike chess and Go, which were developed before the computer age). AI has been making dramatic leaps over the past few years -- the question that Hawking and Musk are asking is: could AI evolve to the point where it could replace humans? If this scenario sounds like science fiction, it's one that science fiction writers have posed again and again. One of the most popular examples is, of course, Skynet, the intelligence that takes over in the Terminator universe and decides to wipe out most of humanity and enslave the rest (except for the resistance fighters, led by John Connor -- but that involves a Terminator traveling back in time, and time travel will be handled in another essay). In the perhaps equally popular Matrix trilogy, super-intelligent machines take over the planet as well, but rather than killing humans they enslave them in a unique way: to put the electricity generated by the human brain to use, the machines keep humans in pods, occupying our minds with a giant video game or simulation (i.e., the Matrix).
- North America > United States > California > San Diego County > San Diego (0.04)
- Asia > China (0.04)
- Government > Military (0.68)
- Leisure & Entertainment > Games > Chess (0.45)
Why We Shouldn't Be So Worried About Killer AI
This article first appeared in Data Sheet, Fortune's daily newsletter on the top tech news. The machines are coming after us. Elon Musk repeatedly warns us to be very afraid. Masayoshi Son is so convinced of the coming "singularity" that he's busy buying up stakes in companies with enough of the right data. I recently met a super-smart scientist who is neither concerned about an AI takeover nor complacent about the meaningful changes artificial intelligence will bring.
The world's best Dota 2 players just got destroyed by a killer AI from Elon Musk's startup
Tonight during Valve's yearly Dota 2 tournament, a surprise segment introduced what could be the best new player in the world -- a bot from Elon Musk-backed startup OpenAI. Engineers from the nonprofit say the bot learned enough to beat Dota 2 pros in just two weeks of real-time learning, though in that training period they say it amassed "lifetimes" of experience, likely using a neural network judging by the company's prior efforts. Musk is hailing the achievement as the first time artificial intelligence has been able to beat pros in competitive e-sports. While the demonstration was highly limited to a few variables of gameplay, it was still remarkable to witness crowd-favorite Dota 2 pro Danylo "Dendi" Ishutin get crushed in a live 1-vs-1 match with the bot. Some of the bot's maneuvers looked eerily human.
- Leisure & Entertainment > Sports (0.61)
- Leisure & Entertainment > Games (0.59)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.54)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.54)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.54)
Fact and Fiction Behind the Threat of 'Killer AI'
However, Oren Etzioni, professor of computer science at the University of Washington and CEO of the Allen Institute for Artificial Intelligence, argues that such headlines are in fact strongly influenced by the work of one man: professor Nick Bostrom of the Faculty of Philosophy at Oxford University, author of the bestselling treatise Superintelligence: Paths, Dangers, Strategies. Essentially, Bostrom claims that if machine brains surpass human brains in general intelligence, the resulting new 'superintelligence' could replace humans as the dominant lifeform on Earth. Furthermore, according to his findings, there is a 10-percent probability that human-level AI will be attained by 2022, a 50-percent probability that this feat will be achieved by 2040, and a 90-percent probability that such an entity will be created by 2075. However, in an article published in MIT Technology Review, Etzioni points out that Bostrom's main source of data is an aggregate of four surveys of different groups, including participants in the Philosophy and Theory of AI conference held in 2011 in Thessaloniki and members of the Greek Association for Artificial Intelligence. Furthermore, it appears that Bostrom provided neither the response rates nor the phrasing of the questions used in those surveys, nor did he account for the heavy reliance on data collected in Greece.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.26)
- Europe > Greece > Central Macedonia > Thessaloniki (0.26)