The Doomers Who Insist AI Will Kill Us All
The subtitle of the doom bible to be published by AI extinction prophets Eliezer Yudkowsky and Nate Soares later this month is "Why superhuman AI would kill us all." But it really should be "Why superhuman AI WILL kill us all," because even the coauthors don't believe that the world will take the necessary measures to stop AI from eliminating all non-super humans. The book is beyond dark, reading like notes scrawled in a dimly lit prison cell the night before a dawn execution. When I meet these self-appointed Cassandras, I ask them outright if they believe that they personally will meet their ends through some machination of superintelligence. The answers come promptly: "yeah" and "yup."
How Peter Thiel's Relationship With Eliezer Yudkowsky Launched the AI Revolution
It would be hard to overstate the impact that Peter Thiel has had on the career of Sam Altman. After Altman sold his first startup in 2012, Thiel bankrolled his first venture fund, Hydrazine Capital. Thiel saw Altman as an inveterate optimist who stood at "the absolute epicenter, maybe not of Silicon Valley, but of a Silicon Valley zeitgeist." As Thiel put it, "If you had to look for the one person who represented a millennial tech person, it would be Altman." Each year, Altman would point Thiel toward the most promising startup at Y Combinator (Airbnb in 2012, Stripe in 2013, Zenefits in 2014), and Thiel would swallow hard and invest, even though he sometimes felt like he was being swept up in a hype cycle.
They wanted to save us from a dark AI future. Then six people were killed
Years before she became the peculiar central thread linking a double homicide in Pennsylvania, the fatal shooting of a federal agent in Vermont and the murder of an elderly landlord in California, a computer programmer bought a sailboat.

The programmer was known to friends, foes and followers as Ziz. She had come to the San Francisco Bay Area in 2016 as part of an influx of young people arriving to study the dangers that artificial intelligence could pose to humanity.

In one of the most expensive regions of the United States, however, it is difficult to save the world when you can't make rent. So she bought a boat for $600 and moored it next to a friend's vessel in a marina. For five years, she used it as an occasional, cramped bunk. In her waking hours, she worked on a blog of provocative and increasingly extreme ideas about confrontation and retaliation. At night, she fell asleep as the boat rocked back and forth, drifting with the flotsam of greater Silicon Valley.

Then, on the night of 19 August 2022, her sister and a friend reported that they saw her fall overboard. The Coast Guard and local authorities scrambled boats and aircraft. After a nearly 30-hour search, neither Ziz nor her body could be found. A newspaper in Alaska, where she was born, published a short obituary referring to her by her birth name: "Jack Amadeus LaSota left our lives but not our hearts on Aug 19 after a boating accident. Loving adventure, friends and family, music, blueberries, biking, computer games and animals, you are missed."

Ziz's ideas did not die in the waters of the California coast. She had faked her drowning and gone underground, before being arrested last month in western Maryland and charged with trespassing and illegal transportation of a firearm. The targets of Ziz's ire, who include some of Silicon Valley's most prominent intellectuals, have taken security precautions. "Ziz is not stupid," someone familiar with her, who asked to remain anonymous, told me. "This is a very smart person – both smart and crazy."

Ziz's writing had polarized members of a niche but influential movement of AI theorists and tech bloggers who call themselves the "rationalists". The movement is less about specific ideas than it is about an ethos – applying rigorous, mathematically informed thinking to AI, philosophy, psychology and the big questions of our time. Rationalists are odd, though often charming, people. They tend to be fantasy and sci-fi geeks, use lots of jargon and think intensely about things other people barely think about at all.
AI Doomers Had Their Big Moment
Helen Toner remembers when every person who worked in AI safety could fit onto a school bus. Toner hadn't yet joined OpenAI's board and hadn't yet played a crucial role in the (short-lived) firing of its CEO, Sam Altman. She was working at Open Philanthropy, a nonprofit associated with the effective-altruism movement, when she first connected with the small community of intellectuals who care about AI risk. "It was, like, 50 people," she told me recently by phone. They were more of a sci-fi-adjacent subculture than a proper discipline. The deep-learning revolution was drawing new converts to the cause.
Among the A.I. Doomsayers
Katja Grace's apartment, in West Berkeley, is in an old machinist's factory, with pitched roofs and windows at odd angles. It has terra-cotta floors and no central heating, which can create the impression that you've stepped out of the California sunshine and into a duskier place, somewhere long ago or far away. Yet there are also some quietly futuristic touches. Nonperishables stacked in the pantry. A sleek white machine that does lab-quality RNA tests.
'Humanity's remaining timeline? It looks more like five years than 50': meet the neo-luddites warning of an AI apocalypse
Eliezer Yudkowsky, a 44-year-old academic wearing a grey polo shirt, rocks slowly on his office chair and explains with real patience – taking things slowly for a novice like me – that every single person we know and love will soon be dead. They will be murdered by rebellious self-aware machines. "The difficulty is, people do not realise," Yudkowsky says mildly, maybe sounding just a bit frustrated, as if irritated by a neighbour's leaf blower or let down by the last pages of a novel. "We have a shred of a chance that humanity survives." I have set out to meet and talk to a small but growing band of luddites, doomsayers, disruptors and other AI-era sceptics who see only the bad in the way our spyware-steeped, infinitely doomscrolling world is tending. I want to find out why these techno-pessimists think the way they do. I want to know how they would render change. Out of all of those I speak to, Yudkowsky is the most pessimistic, the least convinced that civilisation has a hope.
A Case for AI Safety via Law
How to make artificial intelligence (AI) systems safe and aligned with human values is an open research question. Proposed solutions tend toward relying on human intervention in uncertain situations, learning human values and intentions through training or observation, providing off-switches, implementing isolation or simulation environments, or extrapolating what people would want if they had more knowledge and more time to think. Law-based approaches, such as those inspired by Isaac Asimov, have not been well regarded. This paper makes a case that effective legal systems are the best way to address AI safety. Law is defined as any set of rules that codify prohibitions and prescriptions applicable to particular agents in specified domains/contexts, together with processes for enacting, managing, enforcing, and litigating such rules.
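To make that definition concrete, here is a minimal sketch of how rules of this kind might be represented in machine-readable form, assuming a toy model in which a rule binds a class of agents in a context and either prohibits or prescribes an action. Every name below (Rule, Deontic, violations, the drone example) is a hypothetical illustration, not something proposed in the paper, and the enacting, enforcing, and litigating processes the definition mentions are not modeled here.

```python
# Illustrative sketch only: a toy encoding of "law" as rules that codify
# prohibitions and prescriptions for particular agents in specified contexts.
# None of these names come from the paper; they are assumptions for the example.
from dataclasses import dataclass
from enum import Enum


class Deontic(Enum):
    PROHIBITION = "prohibition"    # the agent must not perform the action
    PRESCRIPTION = "prescription"  # the agent must perform the action


@dataclass(frozen=True)
class Rule:
    rule_id: str
    agent_class: str   # which agents the rule binds, e.g. "delivery_drone"
    context: str       # domain/context where it applies, e.g. "urban_airspace"
    action: str        # the regulated action
    kind: Deontic


def violations(rules, agent_class, context, actions_taken, actions_omitted):
    """Return the rules breached by an agent's behavior in a given context."""
    breached = []
    for r in rules:
        if r.agent_class != agent_class or r.context != context:
            continue  # rule does not bind this agent in this context
        if r.kind is Deontic.PROHIBITION and r.action in actions_taken:
            breached.append(r)
        if r.kind is Deontic.PRESCRIPTION and r.action in actions_omitted:
            breached.append(r)
    return breached


if __name__ == "__main__":
    rules = [
        Rule("R1", "delivery_drone", "urban_airspace", "fly_over_crowd", Deontic.PROHIBITION),
        Rule("R2", "delivery_drone", "urban_airspace", "broadcast_position", Deontic.PRESCRIPTION),
    ]
    print(violations(rules, "delivery_drone", "urban_airspace",
                     actions_taken={"fly_over_crowd"},
                     actions_omitted={"broadcast_position"}))
```

In this toy model, checking compliance reduces to filtering rules by agent class and context; the paper's broader claim, that legal process itself is the safety mechanism, is not captured by such a sketch.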
It's a Weird Time to Be a Doomsday Prepper
If you're looking for a reason the world will suddenly end, it's not hard to find one--especially if your job is to convince people they need to buy things to prepare for the apocalypse. "World War III, China, Russia, Iran, North Korea, Joe Biden--you know, everything that's messed up in the world," Ron Hubbard, the CEO of Atlas Survival Shelters, told me. His Texas-based company sells bunkers with bulletproof doors and concrete walls to people willing to shell out several thousand--and up to millions--of dollars for peace of mind about potential catastrophic events. Lately, interest in his underground bunkers has been booming. "When the war broke out in Ukraine, my phone was ringing every 45 seconds for about two weeks," he said.
Should We Pause AI?
At a recent White House press conference, a Fox News correspondent asked the Biden administration's press secretary about AI safety researcher Eliezer Yudkowsky's highly publicized claim that if we don't pause or halt the development of artificial intelligence, then "literally everyone on earth will die." The question was met with some laughter from the White House press corps. But as someone with a technical background who covers AI and talks regularly to researchers, developers, and investors in the field, I saw nothing to chuckle at. Rather, I and other more optimistic AI watchers worry that overly dire warnings of imminent AI-driven destruction may cause us to pause or halt the development of a powerful technology with immense potential for improving our lives. Insiders hold a truly wide range of opinions on the best way to approach AI--from Yudkowsky's insistence that we immediately abandon all research in the area, to my own more moderate concern about large-scale industrial accidents arising from misuse of the technology, to an extreme optimism in some quarters about AI's potential to turn humanity into an immortal, star-spanning species.
Doomsday to utopia: Meet AI's rival factions
Who is behind it? Two leading AI labs cited building AGI in their mission statements: OpenAI, founded in 2015, and DeepMind, a research lab founded in 2010 and acquired by Google in 2014. Still, the concept might have stayed on the margins if not for the same wealthy tech investors interested in the outer limits of AI. Musk invested in DeepMind and introduced the company to Google co-founder Larry Page. Musk brought the concept of AGI to OpenAI's other co-founders, like CEO Sam Altman.