While it is difficult for people to agree on a vision of utopia, it is relatively easy to agree on what a "better world" might look like. The United Nations "Sustainable Development Goals," for example, are an important set of agreed-upon global priorities for the near term. These objectives (alleviation of poverty, food for all, etc.) are important for keeping society from crumbling and keeping large swaths of humanity out of misery, and they serve as common reference points for joint governmental and nonprofit initiatives. However, they don't help humanity determine which future scenarios we should move toward or away from as the human condition is radically altered by technology. As artificial intelligence and neurotechnologies become an ever greater part of our lives in the coming two decades, humanity will need a shared set of goals about what kinds of intelligence we develop and unleash in the world, and I suspect that failure to reach such a consensus will lead to massive conflict. Given these hypotheses, I've argued that there are only two major questions with which humanity must ultimately be concerned. In the rest of this article, I'll argue that current united human efforts at prioritization are important, but incomplete in preventing conflict and maximizing the likelihood of a beneficial long-term (40-year) outcome for humanity.
The great power nations that master the use of artificial intelligence are likely to gain tremendous military and economic benefits from the technology. The United States benefited greatly from its relatively fast adoption of the internet, and many of its most powerful companies today are the global giants of the internet age. I believe the assumption that such leadership will continue on its own is a fatal one. The decade ahead will make it clear that the United States must, as it has in the past, earn its prosperity and its technological leadership – something that many Americans now take completely for granted. This will involve a focus on the competitiveness of the US economy – and a willingness to continually earn its place in the international order.
Hugo de Garis is one of the first AGI thinkers I came across in 2012, when I decided to focus my life on the post-human transition. Aside from Bostrom and Al-Rodhan, few thinkers molded my early ideas about AGI and transhumanism more than de Garis. I believe that two of his ideas are extremely important, and are somewhat absent from most of the artificial general intelligence conversations today (and even from most of the discussions of 2010-2014). Those ideas are what de Garis calls "Globism" (global world order) and "Cosmism" (the belief that humanity should create deity-level machine intelligences). The following screenshot is from de Garis's (seemingly neglected) online blog. Since first exploring Kurzweil's ideas in The Singularity is Near, I have found it evident that the default mode of technology development would be competition – the economic or military "state of nature" – and that conflict is extremely likely if new forms of thinking and valuing come into existence.
Ray Kurzweil's The Singularity is Near piqued my interest when it laid out his reasoning for why there is likely no intelligent life elsewhere in the universe. By a mere matter of odds, most of us (likely myself included) assume that there simply must be some kind of super-intelligent species "out there somewhere." One of the many postulations made (the book is more than worth reading) is that species might – at the point of attaining a certain degree of capacity or intelligence – destroy themselves. It could be bombs, it could be nanotechnologies, it could be super-intelligent computers – but something batters them back to the stone age, or worse. In thinking recently on topics related to ethical enhancement and human enhancement in general, I came to the notion that this "self-extermination theory" might pan out in other interesting and less considered ways.
The "Grand Trajectory" refers to the direction of development of intelligence and sentience. It is unclear as to which of these scenarios humanity should strive towards, or how we should go about it. In the long term, it seems somewhat inevitable that the best possible scenarios (in utilitarian terms) would involve the proliferation of post-human intelligence, well beyond current humanity or cognitively enhanced humanity. If the richness and depth of the sentience of an entity indicate its moral worth, then astronomically advanced (and conscious) superintelligence would be the most (a concept that I explore in great depth at the end of my TEDx at Cal Poly). The stewardship of the Grand Trajectory is the most important role of humanity – and indeed is the Cause.