Manhattan Project


Scientist Who Was Offline 'Living His Best Life' Stunned by Nobel Prize Win

WIRED

Fred Ramsdell was on vacation in the Montana wilderness when he and two colleagues received the honor for their breakthroughs in immunology. When Fred Ramsdell, 64, was named a Nobel Prize winner earlier this week, he was deep in the Wyoming mountains, blissfully offline and surrounded by fresh snow. The next day, as he was wrapping up a three-week backpacking trip with his wife, her phone began to light up with hundreds of messages about the good news: Ramsdell, along with Mary E. Brunkow and Shimon Sakaguchi, had won the 2025 Nobel Prize in Physiology or Medicine for their discoveries that reshaped immunology. Ramsdell tells WIRED he was completely unaware that the Nobel Prizes were being announced, let alone that the Nobel committee was trying to get in touch with him. Sonoma Biotherapeutics, the biotechnology firm he co-founded, told reporters that Ramsdell "was living his best life and was off the grid on a preplanned hiking trip."


Christopher Nolan on the Promise and Peril of Technology

The Atlantic - Technology

By the time I sat down with Christopher Nolan in his posh hotel suite not far from the White House, I guessed that he was tired of Washington, D.C. The day before, he'd toured the Oval Office and had lunch on Capitol Hill. Later that night, I'd watched him receive an award from the Federation for American Scientists, an organization that counts Robert Oppenheimer, the subject of Nolan's most recent film, among its founders. He'd endured a joke, repeated too many times by Senate Majority Leader Chuck Schumer, about the subject of his next film--"It's another biopic: Schumer." The award was sitting on an end table next to Nolan, who was dressed in brown slacks, a gray vest, and a navy suit jacket--his Anglo-formality undimmed by decades spent living in Los Angeles. "It's heavy, and glass, and good for self-defense," he said of the award, while filling his teacup.


Here's what GOP Sen. Mike Rounds told Musk, Zuckerberg, other experts at closed-door Senate AI Forum

FOX News

Sen. Mike Rounds, R-S.D., weighs in on whether Ukraine should be given NATO membership and President Biden's decision to send cluster bombs to Ukraine on 'Your World.' EXCLUSIVE: Sen. Mike Rounds, R-S.D., told a group of tech leaders, union leaders and artificial intelligence experts on Wednesday that AI's rapid advancement has inspired calls for "a new Manhattan-like project," and that how the government should regulate AI -- if at all -- is still a matter of debate. Rounds, along with Senate Majority Leader Chuck Schumer, is leading the first in a series of bipartisan AI Insight Forums designed to help lawmakers get ahead of AI as it permeates everyday life. Wednesday's session saw the attendance of Meta's Mark Zuckerberg, X owner Elon Musk, AFL-CIO union boss Elizabeth Shuler and others. "Today, we stand at the beginning of a journey of monumental change. While Artificial Intelligence has been around in various forms for years, recent advances in the most cutting-edge models have shown us just how capable the technology has become," Rounds told the closed-door meeting, according to prepared comments obtained exclusively by Fox News Digital.


Christopher Nolan says AI experts face their 'Oppenheimer moment'

The Guardian

The Oppenheimer director, Christopher Nolan, has highlighted the difficulties of applying nuclear weapons-style regulation to artificial intelligence, as he warned that the United Nations had become a "very diminished" force. Nolan told the Guardian that J Robert Oppenheimer's call for international control of nuclear weapons had "sort of come true", but there had nonetheless been extensive proliferation of the technology since the "father of the atomic bomb" led the Manhattan project in the second world war. "To look at the international control of nuclear weapons and feel that the same principles could be applied to something that doesn't require massive industrial processes – it's a bit tricky," he said. "International surveillance of nuclear weapons is possible because nuclear weapons are very difficult to build. Oppenheimer spent $2bn and used thousands of people across America to build those first bombs. It's reassuringly difficult to make nuclear weapons and so it's relatively easy to spot when a country is doing that. I don't believe any of that applies to AI."


The Man Who Wrote the AI Doomer Bible

The Atlantic - Technology

A framed photograph of three men in military fatigues hangs above Richard Rhodes's desk. They're tightening straps on what first appear to be two water heaters but are, in fact, thermonuclear weapons. Resting against a nearby wall is a black-and-white print depicting the first billionth of a second after the detonation of an atomic bomb: a thousand-foot-tall ghostly amoeba. And above us, dangling from the ceiling like the sword of Damocles, is a plastic model of the Hindenburg. Depending on how you choose to look at it, Rhodes's office is either a shrine to awe-inspiring technological progress or a harsh reminder of its power to incinerate us all in the blink of an eye.


Who is Sam Altman? The tech leader behind artificial intelligence lab OpenAI

FOX News

Fox News correspondent Matt Finn has the latest on the impact of AI technology that some say could outpace humans on 'Special Report.' Artificial intelligence will take center stage in the nation's capital on Tuesday, when tech CEO Sam Altman testifies for the first time before Congress regarding ChatGPT, his company's revolutionary chatbot. Altman's OpenAI, an AI research lab, revolutionized the technology last year when it released ChatGPT, a chatbot that's able to mimic human conversation based on prompts it is given. The company has gone on to release updated iterations of the chatbot since last November, which has sparked a race in Silicon Valley, with other tech companies rushing to build and release more powerful systems powered by artificial intelligence. Altman will appear before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on Tuesday morning amid pressure on government leaders to craft regulations for artificial intelligence.


'A race it might be impossible to stop': how worried should we be about AI?

The Guardian

Last Monday an eminent, elderly British scientist lobbed a grenade into the febrile anthill of researchers and corporations currently obsessed with artificial intelligence or AI (aka, for the most part, a technology called machine learning). The scientist was Geoffrey Hinton, and the bombshell was the news that he was leaving Google, where he had been doing great work on machine learning for the last 10 years, because he wanted to be free to express his fears about where the technology he had played a seminal role in founding was heading. To say that this was big news would be an epic understatement. The tech industry is a huge, excitable beast that is occasionally prone to outbreaks of "irrational exuberance", ie madness. One recent bout of it involved cryptocurrencies and a vision of the future of the internet called "Web3", which an astute young blogger and critic, Molly White, memorably describes as "an enormous grift that's pouring lighter fluid on our already smoldering planet".


Researchers predict artificial intelligence could lead to a 'nuclear-level catastrophe'

FOX News

Fox News host Steve Hilton delves into ChatGPT, an artificial intelligence program that could have major implications for writing-focused jobs on 'The Next Revolution.' In the past few years, the world has seen huge advancements in artificial intelligence, with chatbots able to hold almost human-like conversations with users in real time, and image generators conjuring realistic-looking photos based on word prompts. While proponents of the advancing technology have lauded its ability to increase creativity and streamline work, others are more critical, even warning of potential catastrophes. Stanford's 2023 Artificial Intelligence Index Report highlights a study which revealed that 36% of the Natural Language Processing (NLP) research community said AI decisions could cause "nuclear-level catastrophe." Seventy-three percent of respondents said it could lead to "revolutionary societal change."


The 'Manhattan Project' Theory of Generative AI

WIRED

The pace of change in generative AI right now is insane. OpenAI released ChatGPT to the public just four months ago. It took only two months to reach 100 million users. Google, scrambling to keep up, has rolled out Bard, its own AI chatbot, and there are already various ChatGPT clones as well as new plug-ins to make the bot work with popular websites like Expedia and OpenTable. GPT-4, the new version of OpenAI's model released last month, is both more accurate and "multimodal," handling text, images, video, and audio all at once.


Nuclear Espionage and AI Governance - LessWrong

#artificialintelligence

Using both primary and secondary sources, I discuss the role of espionage in early nuclear history. Nuclear weapons are analogous to AI in many ways, so this period may hold lessons for AI governance. Nuclear spies successfully transferred information about the plutonium implosion bomb design and the enrichment of fissile material. Spies were mostly ideologically motivated. Counterintelligence was hampered by its fragmentation across multiple agencies and by its inability to be choosy about the talent used on the most important military research program in the largest war in human history. Nuclear espionage most likely sped up Soviet nuclear weapons development, but the Soviet Union would have been capable of developing nuclear weapons within a few years without spying. The slight gain in speed due to spying may nevertheless have been strategically significant.

Acknowledgements: I am grateful to Matthew Gentzel for supervising this project and to Michael Aird, Christina Barta, Daniel Filan, Aaron Gertler, Sidney Hough, Nat Kozak, Jeffery Ohl, and Waqar Zaidi for providing comments. This research was supported by a fellowship from the Stanford Existential Risks Initiative. This post is a short version of the report, x-posted from EA Forum. The full version, with additional sections, an appendix, and a bibliography, is available here.

The early history of nuclear weapons is in many ways similar to hypothesized future strategic situations involving advanced artificial intelligence (Zaidi and Dafoe 2021, 4). And, in addition to the objective similarity of the situations, the situations may be made more similar by deliberate imitation of the Manhattan Project experience (see this report to the US House Armed Services Committee).