Effective Altruism Is Pushing a Dangerous Brand of 'AI Safety'
Throughout my two decades in Silicon Valley, I have seen effective altruism (EA), a movement consisting of an overwhelmingly white male group based largely out of Oxford University and Silicon Valley, gain alarming levels of influence. EA is currently being scrutinized due to its association with Sam Bankman-Fried's crypto scandal, but less has been written about how the ideology is now driving the research agenda in the field of artificial intelligence (AI), creating a race to proliferate harmful systems, ironically in the name of "AI safety."

EA is defined by the Centre for Effective Altruism as "an intellectual project, using evidence and reason to figure out how to benefit others as much as possible." And "evidence and reason" have led many EAs to conclude that the most pressing problem in the world is preventing an apocalypse in which an artificial general intelligence (AGI) created by humans exterminates us. To prevent this apocalypse, EA's career advice center, 80,000 Hours, lists "AI safety technical research" and "shaping future governance of AI" as the top two recommended careers for EAs, and the billionaire EA class funds initiatives attempting to stop an AGI apocalypse.
Nov-30-2022, 12:00:00 GMT