

What are the odds? Risk and uncertainty about AI existential risk

Grossi, Marco

arXiv.org Artificial Intelligence

This work is a commentary on the article "AI Survival Stories: A Taxonomic Analysis of AI Existential Risk" (https://doi.org/10.18716/ojs/phai/2025.2801) by Cappelen, Goldstein, and Hawthorne. It is not merely a commentary, however, but a useful reminder of the philosophical limitations of "linear" models of risk. The article focuses on the model employed by the authors: first, I discuss some differences between standard Swiss cheese models and this one. I then argue that, in a situation of epistemic indifference, P(D) is higher than one might first suppose, given the structural relationships between layers. I then distinguish between risk and uncertainty, and argue that any estimate of P(D) is structurally affected by two kinds of uncertainty: option uncertainty and state-space uncertainty. Incorporating these dimensions of uncertainty into our qualitative discussion of AI existential risk can provide a better understanding of the likelihood of P(D).


Silicon Valley Takes Artificial General Intelligence Seriously--Washington Must Too

TIME - Tech

Artificial General Intelligence--machines that can learn and perform any cognitive task that a human can--has long been relegated to the realm of science fiction. But recent developments show that AGI is no longer a distant speculation; it's an impending reality that demands our immediate attention. On Sept. 17, during a Senate Judiciary Subcommittee hearing titled "Oversight of AI: Insiders' Perspectives," whistleblowers from leading AI companies sounded the alarm on the rapid advancement toward AGI and the glaring lack of oversight. Helen Toner, a former board member of OpenAI and director of strategy at Georgetown University's Center for Security and Emerging Technology, testified that, "The biggest disconnect that I see between AI insider perspectives and public perceptions of AI companies is when it comes to the idea of artificial general intelligence." She continued that leading AI companies such as OpenAI, Google, and Anthropic are "treating building AGI as an entirely serious goal."


Nobody Knows How to Safety-Test AI

TIME - Tech

Beth Barnes and three of her colleagues sit cross-legged in a semicircle on a damp lawn on the campus of the University of California, Berkeley. They are describing their attempts to interrogate artificial intelligence chatbots. "They are, in some sense, these vast alien intelligences," says Barnes, 26, who is the founder and CEO of Model Evaluation and Threat Research (METR), an AI-safety nonprofit. "They know so much about whether the next word is going to be 'is' versus 'was.' We're just playing with a tiny bit on the surface, and there's all this, miles and miles underneath," she says, gesturing at the potentially immense depths of large language models' capabilities. Researchers at METR look a lot like Berkeley students--the four on the lawn are in their twenties and dressed in jeans or sweatpants.


Fox News AI Newsletter: Artificial intelligence-designed drug

FOX News

HIGH-TECH HEALTH: Inflammatory bowel disease impacts 1.6 million people in the U.S. -- and a new artificial intelligence-generated drug could help alleviate symptoms. AI SAFETY: The White House says "developers of the most powerful AI systems" will now have to report AI safety test results to the Department of Commerce in the wake of an executive order issued by President Biden aimed at "managing the risks" of the technology. HIGH-TECH HILL: A top House Republican lawmaker is eyeing the opportunities and risks of integrating artificial intelligence technology into the day-to-day operations of the U.S. Congress. 'MEMORY RESTORED': Restoring your memories of a vague childhood toy, movie, video game or book that's been on the tip of your tongue for years could be as simple as plugging a couple of sentences into a chatbot, some users say. WARTIME AI: Israel's Defense Ministry is taking advantage of its country's vibrant high-tech scene to create an artificial intelligence-driven information platform that will help keep track of the increasingly deteriorating humanitarian situation in the Gaza Strip, even as Israeli troops continue to battle the Iranian-backed Islamist terror group Hamas, Fox News Digital has learned.


White House: Developers of 'powerful AI systems' now have to report safety test results to government

FOX News

The White House says "developers of the most powerful AI systems" will now have to report AI safety test results to the Department of Commerce in the wake of an executive order issued by President Biden aimed at "managing the risks" of the technology. The news comes as Deputy Chief of Staff Bruce Reed is convening the White House AI Council on Monday, consisting of "top officials from a wide range of federal departments and agencies" who have reported completing 90-day actions and advancing other directives tasked by the order Biden signed last October, according to the White House. Among those actions was that they "[u]sed Defense Production Act authorities to compel developers of the most powerful AI systems to report vital information, especially AI safety test results, to the Department of Commerce," the White House said. "These companies now must share this information on the most powerful AI systems, and they must likewise report large computing clusters able to train these systems," the White House added.


'Very scary': Mark Zuckerberg's pledge to build advanced AI alarms experts

The Guardian

Mark Zuckerberg has been accused of taking an irresponsible approach to artificial intelligence after committing to building a powerful AI system on a par with human levels of intelligence. The Facebook founder has also raised the prospect of making it freely available to the public. The Meta chief executive has said the company will attempt to build an artificial general intelligence (AGI) system and make it open source, meaning it will be accessible to developers outside the company. The system should be made "as widely available as we responsibly can", he added. In a Facebook post, Zuckerberg said it was clear that the next generation of tech services "requires building full general intelligence".


AI firms 'should include members of public on boards to protect society'

The Guardian

Companies developing powerful artificial intelligence systems must have independent board members representing the "interests of society", according to an expert regarded as one of the modern godfathers of the technology. Yoshua Bengio, a co-winner of the 2018 Turing Award – referred to as the "Nobel prize of computing" – said AI firms must have oversight from members of the public, as advances in the technology accelerate rapidly. Speaking in the wake of the boardroom upheaval at the ChatGPT developer OpenAI, including the exit and return of its chief executive, Sam Altman, Bengio said a "democratic process" was needed to monitor developments in the field. "How do we make sure that these advances are happening in a way that doesn't endanger the public? How do we make sure that they're not abused for increasing one's power?" the AI pioneer told the Guardian. "To me, the answer is obvious in principle.


Unpacking the hype around OpenAI's rumored new Q* model

MIT Technology Review

While we still don't know all the details, there have been reports that researchers at OpenAI had made a "breakthrough" in AI that had alarmed staff members. Reuters and The Information both report that researchers had come up with a new way to make powerful AI systems and had created a new model, called Q* (pronounced Q star), that was able to perform grade-school-level math. According to the people who spoke to Reuters, some at OpenAI believe this could be a milestone in the company's quest to build artificial general intelligence, a much-hyped concept referring to an AI system that is smarter than humans. The company declined to comment on Q*. Social media is full of speculation and excessive hype, so I called some experts to find out how big a deal any breakthrough in math and AI would really be.


Meta Is Developing a New, More Powerful AI System as Technology Race Escalates

WSJ.com: WSJD - Technology



Prosecutors in all 50 states urge Congress to guard against AI-generated child pornography

FOX News

The top prosecutors in all 50 states are urging Congress to study how artificial intelligence can be used to exploit children through pornography, and to come up with legislation to further guard against it. In a letter sent Tuesday to Republican and Democratic leaders of the House and Senate, the attorneys general from across the country call on federal lawmakers to "establish an expert commission to study the means and methods of AI that can be used to exploit children specifically" and to expand existing restrictions on child sexual abuse materials to cover AI-generated images. "We are engaged in a race against time to protect the children of our country from the dangers of AI," the prosecutors wrote in the letter, shared ahead of time with The Associated Press.