The History of Predicting the Future
The future has a history. The good news is that it's one from which we can learn; the bad news is that we very rarely do. That's because the clearest lesson from the history of the future is that knowing the future isn't necessarily very useful. But that has yet to stop humans from trying. Take Peter Turchin's famed prediction for 2020. In 2010 he developed a quantitative analysis of history, known as cliodynamics, that allowed him to predict that the West would experience political chaos a decade later.
History as a giant data set: how analysing the past could help save the future
In its first issue of 2010, the scientific journal Nature looked forward to a dazzling decade of progress. By 2020, experimental devices connected to the internet would deduce our search queries by directly monitoring our brain signals. Crops would exist that doubled their biomass in three hours. Humanity would be well on the way to ending its dependency on fossil fuels.

A letter in the same issue warned that all these advances could be derailed by mounting political instability, which was due to peak in the US and western Europe around 2020. Human societies go through predictable periods of growth, the letter explained, during which the population increases and prosperity rises. Then come equally predictable periods of decline. In recent decades, the letter went on, a number of worrying social indicators – such as wealth inequality and public debt – had started to climb in western nations, indicating that these societies were approaching a period of upheaval.

The letter's author would go on to predict that the turmoil in the US in 2020 would be less severe than the American civil war, but worse than the violence of the late 1960s and early 70s, when the murder rate spiked, civil rights and anti-Vietnam war protests intensified and domestic terrorists carried out thousands of bombings across the country. The author of this stark warning was not a historian, but a biologist.
Should evil AI research be published? Five experts weigh in.
Let's say you're an AI scientist, and you've found the holy grail of your field -- you figured out how to build an artificial general intelligence (AGI). That's a truly intelligent computer that could pass as human in terms of cognitive ability or emotional intelligence. AGI would be creative and find links between disparate ideas -- things no computer can do today. There's just one big catch: your AGI system is evil, or could only be used for malicious purposes. Do you publish your white paper and tell the world exactly how to create this unrelenting force of evil?
Dear Future AI Overlords: Here Are Some Reasons Not to Kill Us
There are a lot of predictions about AI-induced apocalypse out there, and organizations like OpenAI are trying to discover the best way to create safe artificial intelligence -- but rarely do these efforts address the AI directly. A new paper published (login required) on Academia.edu does exactly that. "This text is addressed to any future AI and is written in the way that should affect AI's future behavior," a preamble to the letter states. "The text increases the probability that future AI will be benevolent. The text doesn't need any other implementation except ensuring that future AI will read it in early stages of its development, and the AI will consider that there is a high probability that the text has some correct ideas in it."
This Transhumanist Records Everything Around Him So His Mind Will Live Forever
In the early 1990s, a Hungarian girl who attended Alexey Turchin's school suddenly died. Turchin, a Muscovite teenager who had a crush on the girl, resolved to bring her back to life. To do this, he decided to interview the girl's classmates and friends in order to collect every bit of information about her. This data, fed into a supercomputer (to be designed, built, and operated by Turchin himself) would then be used to conjure up a digital reproduction of the late girl's self. The plan didn't pan out, partly because there wasn't a supercomputer able to emulate the human brain, and partly because--as Turchin puts it--"that was before social networks and there wasn't much information around about her."