The AI doomers feel undeterred
But they certainly wish people were still taking their warnings really seriously. It's a weird time to be an AI doomer. This small but influential community of researchers, scientists, and policy experts believes, in the simplest terms, that AI could get so good it could be bad--very, very bad--for humanity. Though many of these people would be more likely to describe themselves as advocates for AI safety than as literal doomsayers, they warn that AI poses an existential risk to humanity. They argue that absent more regulation, the industry could hurtle toward systems it can't control. They commonly expect such systems to follow the creation of artificial general intelligence (AGI), a slippery concept generally understood as technology that can do whatever humans can do, and better. Though this is far from a universally shared perspective in the AI field, the doomer crowd has had some notable success over the past several years: helping shape AI policy coming from the Biden administration, organizing prominent calls for international "red lines" to prevent AI risks, and getting a bigger (and more influential) megaphone as some of its adherents win science's most prestigious awards. But a number of developments over the past six months have put them on the back foot.
Why They Disagree: Decoding Differences in Opinions about AI Risk on the Lex Fridman Podcast
Nghi Truong, Phanish Puranam, Özgecan Koçak
The emergence of transformative technologies often surfaces deep societal divisions, nowhere more evident than in contemporary debates about artificial intelligence (AI). A striking feature of these divisions is that they persist despite shared interests in ensuring that AI benefits humanity and avoiding catastrophic outcomes. This paper analyzes contemporary debates about AI risk, parsing the differences between the "doomer" and "boomer" perspectives into definitional, factual, causal, and moral premises to identify key points of contention. We find that differences in perspectives about existential risk ("X-risk") arise fundamentally from differences in causal premises about design vs. emergence in complex systems, while differences in perspectives about employment risks ("E-risks") pertain to different causal premises about the applicability of past theories (evolution) vs. their inapplicability (revolution). Disagreements about these two forms of AI risk appear to share two properties: neither involves significant disagreements on moral values, and both can be described in terms of differing views on the extent of boundedness of human rationality. Our approach to analyzing reasoning chains at scale, using an ensemble of LLMs to parse textual data, can be applied to identify key points of contention in debates about risk to the public in any arena.
The Doomers Who Insist AI Will Kill Us All
The subtitle of the doom bible to be published by AI extinction prophets Eliezer Yudkowsky and Nate Soares later this month is "Why superhuman AI would kill us all." But it really should be "Why superhuman AI WILL kill us all," because even the coauthors don't believe that the world will take the necessary measures to stop AI from eliminating all non-super humans. The book is beyond dark, reading like notes scrawled in a dimly lit prison cell the night before a dawn execution. When I meet these self-appointed Cassandras, I ask them outright if they believe that they personally will meet their ends through some machination of superintelligence. The answers come promptly: "yeah" and "yup."
The AI Doomers Are Getting Doomier
Nate Soares doesn't set aside money for his 401(k). "I just don't expect the world to be around," he told me earlier this summer from his office at the Machine Intelligence Research Institute, where he is the president. A few weeks earlier, I'd heard a similar rationale from Dan Hendrycks, the director of the Center for AI Safety. By the time he could tap into any retirement funds, Hendrycks anticipates a world in which "everything is fully automated," he told me. That is, "if we're around."
Roll Over Shakespeare: ChatGPT Is Here
Sitting in Lincoln Center awaiting the curtain for Ayad Akhtar's McNeal--a much anticipated theater production starring Robert Downey Jr., with ChatGPT in a supporting role--I mused how playwrights have been dealing with the implications of AI for over a century. In 1920--well before Alan Turing devised his famous test and decades before the 1956 summer Dartmouth conference that gave artificial intelligence its name--a Czech playwright named Karel Čapek wrote R.U.R.--Rossum's Universal Robots. Not only was this the first time the word "robot" was employed, but Čapek may qualify as the first AI doomer, since his play dramatized an android uprising that slaughtered all of humanity, save for a single soul. Also on the boards in New York City this winter was a small black-box production called Doomers, a thinly veiled dramatization of the weekend when OpenAI's nonprofit board gave Sam Altman the boot, only to see him return after an employee rebellion. Neither of these productions has the pizzazz of a splashy Broadway extravaganza--maybe later we'll buy tickets to a musical where Altman and Elon Musk have a dance-off--but both grapple with issues that reverberate in Silicon Valley conference rooms, Congressional hearings, and late-night drinking sessions at the annual NeurIPS conference.
Does AI need all that money? (Tech giants say yes)
It's been another wild few days in Elon Musk news. Stay tuned for our coverage. In personal news, I deleted Instagram from my phone to try out a month without it there. Instead of scrolling, I've been listening to Shygirl and Lady Gaga's new music. DeepSeek roiled the US stock market last week by proposing that AI shouldn't really be all that expensive. The suggestion was so stunning it wiped about $600bn off Nvidia's market cap in one day.
AIhub coffee corner: Open vs closed science
This month, we consider the debate around open vs closed science. Joining the conversation this time are: Joydeep Biswas (The University of Texas at Austin), Sanmay Das (George Mason University), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol) and Sarit Kraus (Bar-Ilan University). Sabine Hauert: There have been many discussions online recently about the topic of open vs closed science. We've seen a lot of people advocating for open AI (not the company, but being open generally, just to clarify!). I was at an event recently in preparation for the AI summit in the UK.
Among the A.I. Doomsayers
Katja Grace's apartment, in West Berkeley, is in an old machinist's factory, with pitched roofs and windows at odd angles. It has terra-cotta floors and no central heating, which can create the impression that you've stepped out of the California sunshine and into a duskier place, somewhere long ago or far away. Yet there are also some quietly futuristic touches. Nonperishables stacked in the pantry. A sleek white machine that does lab-quality RNA tests.
The Shocking Drama at OpenAI Isn't As Stupid As It Looks
The confounding saga of Sam Altman's sudden, shocking expulsion from OpenAI on Friday, followed by last-ditch attempts from investors and loyalists to reinstate him over the weekend, appears to have ended right where it started: with Altman and former OpenAI co-founder/president/board member Greg Brockman out for good. But there's a twist: Microsoft, which has been OpenAI's cash-and-infrastructure backer for years, announced early Monday morning that it was hiring Altman and Brockman "to lead a new advanced AI research team." In a follow-up tweet, Microsoft CEO Satya Nadella declared that Altman would become chief executive of this team, which would take the shape of an "independent" entity within Microsoft, operating something like company subsidiaries GitHub and LinkedIn. Notably, per Brockman, this new entity will be led by himself, Altman, and the first three employees who'd quit OpenAI Friday night in protest of how those two had been treated. "I'm super excited to have you join as CEO of this new group, Sam, setting a new pace for innovation," Nadella wrote.
The Claims That "A.I. Will Kill Us All" Are Sounding Awfully Convenient
This article is from Big Technology, a newsletter by Alex Kantrowitz. Shortly after ChatGPT's release last year, a cadre of critics captured headlines and made noise on social media claiming that A.I. would soon kill us. As wondrous as a computer speaking in natural language might be, it could use that intelligence to level the planet. The thinking went mainstream via letters calling for research pauses and 60 Minutes interviews amplifying existential concerns. Leaders like Barack Obama publicly worried about A.I. autonomously hacking the financial system--or worse.