 longtermism


'Eugenics on steroids': the toxic and contested legacy of Oxford's Future of Humanity Institute

The Guardian

Two weeks ago it was quietly announced that the Future of Humanity Institute, the renowned multidisciplinary research centre in Oxford, no longer had a future. It shut down without warning on 16 April. Initially, the only notice was a brief statement on its website saying it had closed and that its research might continue elsewhere, both within and outside the university. The institute, which was dedicated to studying existential risks to humanity, was founded in 2005 by the Swedish-born philosopher Nick Bostrom and quickly made a name for itself beyond academic circles – particularly in Silicon Valley, where a number of tech billionaires sang its praises and provided financial support. Bostrom is perhaps best known for his bestselling 2014 book Superintelligence, which warned of the existential dangers of artificial intelligence, but he also gained widespread recognition for his 2003 academic paper "Are You Living in a Computer Simulation?".


The pro-extinctionist philosopher who has sparked a battle over humanity's future

The Guardian

Given all the suffering, pain and destruction produced by humanity, Émile Torres, a non-binary philosopher specialising in existential threats, thinks it would not be a bad thing if humanity ceased to exist. "The pro-extinctionist view," they say, "immediately conjures up for a lot of people the image of a homicidal, ghoulish, sadistic maniac, but actually most pro-extinctionists would say that most ways of going extinct would be absolutely unacceptable. But what if everybody decided not to have children? I don't see anything wrong with that." Torres has just written a book called Human Extinction: A History of the Science and Ethics of Annihilation.


Philosopher Peter Singer: 'There's no reason to say humans have more worth or moral status than animals'

The Guardian

Australian philosopher Peter Singer's book Animal Liberation, published in 1975, exposed the realities of life for animals in factory farms and testing laboratories and provided a powerful moral basis for rethinking our relationship to them. Now, nearly 50 years on, Singer, 76, has a revised version titled Animal Liberation Now. It comes on the heels of an updated edition of his popular Ethics in the Real World, a collection of short essays dissecting important current events, first published in 2016. Singer, a utilitarian, is a professor of bioethics at Princeton University. In addition to his work on animal ethics, he is also regarded as the philosophical originator of a philanthropic social movement known as effective altruism, which argues for weighing up causes to achieve the most good.


"Longtermism" and AI: How Our Billionaire Overlords Want to Live Forever

#artificialintelligence

A small global elite – call them "Davos Man" if you wish – owns an increasing share of global income and wealth. The 2009 global financial crisis, the 2020 pandemic and the 2022 war in Ukraine have swelled their fortunes, and they wield growing power over international affairs. We live in a new, neo-feudal economy, with these tech-elite billionaires as our new overlords, while the middle class shrinks and the prospects for escaping poverty gradually evaporate for most of the poor.


Power-hungry robots, space colonization, cyborgs: inside the bizarre world of 'longtermism'

The Guardian

Most of us don't think of power-hungry killer robots as an imminent threat to humanity, especially when poverty and the climate crisis are already ravaging the Earth. This wasn't the case for Sam Bankman-Fried and his followers, powerful actors who embraced a school of thought within the effective altruism movement called "longtermism". In February, the Future Fund, a philanthropic organization endowed by the now-disgraced cryptocurrency entrepreneur, announced that it would be disbursing more than $100m – and possibly up to $1bn – this year on projects to "improve humanity's long-term prospects". The cryptic phrasing might have puzzled those who think of philanthropy as funding homelessness charities and medical NGOs in the developing world. In fact, the Future Fund's particular areas of interest include artificial intelligence, biological weapons and "space governance", a mysterious term referring to settling humans in space as a potential "watershed moment in human history".


How can we help humans thrive trillions of years from now? This philosopher has a plan

NPR Technology

Philosopher William MacAskill coined the term "longtermism" to convey the idea that humans have a moral responsibility to protect the future of humanity, prevent it from going extinct and create a better future for many generations to come. He outlines this concept in his new book, What We Owe the Future. Let's say you're hiking, and you drop a piece of glass on the trail.


Product 'Longtermism' and the Danger it May Bring

#artificialintelligence

In Christopher Nolan's film Tenet, the future is at war with the past "because the oceans rose and the rivers ran dry." Thanks to catastrophic climate change, no path lay ahead for our descendants, and their only hope was to carve out a future by orchestrating a genocide in our past. As the film's protagonist, Neil, explains: "Every generation looks out for its own survival." It's hard to watch a film like Tenet and not contemplate the longer-term consequences of our actions – how will the future judge us? Working in product, we tend to approach most opportunities with good intentions.