petrov
Google launches new AI PaLM 2 in attempt to regain leadership of the pack
Google is attempting to reclaim its crown as the leader in artificial intelligence with PaLM 2, a "next-generation language model" that the company says outperforms other leading systems on some tasks. Revealing the cutting-edge AI at its annual I/O conference, alongside a foldable Pixel phone and a new tablet, Google said it would be built into 25 new products and features, as the company races to catch up with competitors after years of producing AI research but few products. Like other "large language models" such as OpenAI's GPT, PaLM 2 is a general-purpose AI model, which can be used to power ChatGPT-style chatbots but also to translate between languages, write computer code, or even analyse and respond to images. Combining those capabilities, a user could ask a question in English about a restaurant in Bulgaria, and the system would be able to search the web for Bulgarian responses, find an answer, translate the answer into English, add a picture of the location – and then follow up with a code snippet to create a database entry for the place. "The neural network revolution that we are now experiencing started around 10 years ago," said Slav Petrov, the co-lead of the PaLM 2 project, "and it started in part at Google."
US think tank: AI could cause nuclear war by 2040
Artificial intelligence (AI) could potentially cause nuclear war by 2040, according to a research report by a US think tank. A paper by the non-profit Rand Corporation warns that AI could undermine geopolitical stability and erode the value of nuclear weapons as a deterrent. Rand researchers warn that while mutually assured destruction has maintained an uneasy peace for decades, that guarantee of stability could be undermined if AI and machine learning (ML) come to shape decisions about military action. In a paper based on a series of workshops with experts, the researchers said AI could tempt human actors toward catastrophic decisions in the future. For example, improvements in sensing technology could make retaliatory forces such as submarines and mobile missiles vulnerable to detection and destruction, weakening a nation's second-strike capability.
Hitting the Books: The Soviets once tasked an AI with our mutually assured destruction
Barely a month into its already floundering invasion of Ukraine, Russia is rattling its nuclear saber and threatening to drastically escalate the regional conflict into all-out world war. But the Russians are no strangers to nuclear brinkmanship. In the excerpt below from Ben Buchanan and Andrew Imbrie's latest book, we can see how close humanity came to an atomic holocaust in 1983, and why an increasing reliance on automation -- on both sides of the Iron Curtain -- only served to heighten the likelihood of an accidental launch. The New Fire looks at the rapidly expanding roles of automated machine learning systems in national defense and how increasingly ubiquitous AI technologies (as examined through the thematic lenses of "data, algorithms, and computing power") are transforming how nations wage war both at home and abroad. As the tensions between the United States and the Soviet Union reached their apex in the fall of 1983, the nuclear war began.
- Asia > Russia (0.70)
- North America > United States (0.36)
- Europe > Ukraine (0.24)
- (3 more...)
Giving an AI control of nuclear weapons: What could possibly go wrong? - Bulletin of the Atomic Scientists
If artificial intelligences controlled nuclear weapons, all of us could be dead. In 1983, Soviet Air Defense Forces Lieutenant Colonel Stanislav Petrov was monitoring nuclear early warning systems when the computer concluded with the highest confidence that the United States had launched a nuclear war. But Petrov was doubtful: the computer estimated only a handful of nuclear weapons were incoming, when such a surprise attack would more plausibly entail an overwhelming first strike. He also didn't trust the new launch detection system, and the radar system didn't have corroborative evidence. Petrov decided the message was a false positive and did nothing. The computer was wrong; Petrov was right.
- North America > United States (0.69)
- Asia > Russia (0.36)
- Europe > Russia (0.06)
- (5 more...)
Iterative raises $20M for its MLOps platform – TechCrunch
Iterative, an open-source startup that is building an enterprise AI platform to help companies operationalize their models, today announced that it has raised a $20 million Series A round led by 468 Capital and Mesosphere co-founder Florian Leibert. Previous investors True Ventures and Afore Capital also participated in this round, which brings the company's total funding to $25 million. The core idea behind Iterative is to provide data scientists and data engineers with a platform that closely resembles a modern GitOps-driven development stack. After spending time in academia, Iterative co-founder and CEO Dmitry Petrov joined Microsoft as a data scientist on the Bing team in 2013. He noted that the industry has changed quite a bit since then.
Scientists fear turning over launch systems for nuclear missiles to artificial intelligence will lead to real-life "Terminator" event, wiping out all humans
One of the most popular movie franchises of our time is the "Terminator" series, launched back in the early 1980s and featuring bodybuilder-turned-actor Arnold Schwarzenegger as a futuristic humanoid killing machine. As noted by Great Power War, the backstory to the films is that the creation of the nearly invincible cyborg Terminators stemmed from a "SkyNet" computer system that controlled U.S. nuclear weapons and "got smart," eventually seeing all humans as its enemy. So, in one fell swoop, the system launched its missiles at pre-programmed targets, which, of course, invited a second-strike counter-launch and created a nuclear holocaust that nearly destroyed all of humankind. While the Terminator series never explicitly identified the 'smart' SkyNet computer system as having artificial intelligence, some years later, after AI became more of a thing, it was understood that that is the kind of system the fictional SkyNet was. The "machine-learning" aspect of AI is how SkyNet "got smart" one day and launched the nuclear payloads it controlled. But the Terminator films are just movies, right?
- Asia > Russia (0.08)
- Asia > China (0.07)
- Europe > United Kingdom (0.05)
- (2 more...)
- Government > Military (1.00)
- Media > Film (0.94)
- Leisure & Entertainment (0.94)
The pitfalls of a 'retrofit human' in AI systems
Stanislav Petrov is not a famous name in the computer science space, like Ada Lovelace or Grace Hopper, but his story serves as a critical lesson for developers of AI systems. Petrov, who passed away on May 19, 2017, was a lieutenant colonel in the Soviet Union's Air Defense Forces. On September 26, 1983, an alarm announced that the U.S. had launched five nuclear-armed intercontinental ballistic missiles (ICBMs). His job, as the human in the loop of this technical detection system, was to escalate to leadership to launch Soviet missiles in retaliation, ensuring mutually assured destruction. As the sirens blared, he took a moment to pause and think critically. Why would the U.S. send only five missiles?
Microsoft chief Brad Smith warns that killer robots are 'unstoppable'
Microsoft chief Brad Smith issued a warning over the weekend that killer robots are 'unstoppable' and a new digital Geneva Convention is required. Most sci-fi fans will think of Terminator when they hear of killer robots. In the classic film series, a rogue military AI called Skynet gained self-awareness after spreading to millions of servers around the world. Concluding that humans would attempt to shut it down, Skynet sought to exterminate all of mankind in the interest of self-preservation. While it was once just a popcorn flick, Terminator now offers a dire warning of what could be if precautions are not taken.
- Europe > Russia (0.17)
- Asia > Russia (0.17)
- North America > United States (0.16)
Strangelove redux: US experts propose having AI control nuclear weapons - Bulletin of the Atomic Scientists
Hypersonic missiles, stealthy cruise missiles, and weaponized artificial intelligence have so reduced the amount of time that decision makers in the United States would theoretically have to respond to a nuclear attack that, two military experts say, it's time for a new US nuclear command, control, and communications system. Give artificial intelligence control over the launch button. In an article in War on the Rocks titled, ominously, "America Needs a 'Dead Hand,'" US deterrence experts Adam Lowther and Curtis McGiffin propose a nuclear command, control, and communications setup with some eerie similarities to the Soviet system referenced in the title of their piece. The Dead Hand was a semiautomated system developed to launch the Soviet Union's nuclear arsenal under certain conditions, including, particularly, the loss of national leaders who could do so on their own. Given the increasing time pressure Lowther and McGiffin say US nuclear decision makers are under, "[I]t may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position."
- Asia > Russia (0.92)
- Europe > Russia (0.28)
- North America > United States > Pennsylvania (0.05)
- North America > United States > New York (0.05)
- Government > Military (1.00)
- Government > Regional Government > Europe Government > Russia Government (0.50)
- Government > Regional Government > Asia Government > Russia Government (0.50)