While it is difficult for people to agree on a vision of utopia, it is relatively easy to agree on what a "better world" might look like. The United Nations Sustainable Development Goals, for example, are an important set of agreed-upon global priorities for the near term. These objectives (alleviation of poverty, food for all, etc.) matter both for keeping society from crumbling and for lifting large swaths of humanity out of misery, and they serve as common reference points for joint governmental and nonprofit initiatives. However, they do not help humanity decide which future scenarios we want to move toward or away from as the human condition is radically altered by technology.

As artificial intelligence and neurotechnologies become ever more a part of our lives over the coming two decades, humanity will need a shared set of goals about what kinds of intelligence we develop and unleash in the world, and I suspect that failing to agree on such goals will lead to massive conflict. Given these hypotheses, I have argued that there are only two major questions with which humanity must ultimately be concerned. In the rest of this article, I'll argue that current united human efforts at prioritization are important but incomplete for preventing conflict and maximizing the likelihood of a beneficial long-term (40-year) outcome for humanity.