It's hard to avoid the prominence of AI in our lives, and there is a plethora of predictions about how it will influence our future. One of the co-authors of the new book Solomon's Code: Humanity in a World of Thinking Machines is Olaf Groth, Professor of Strategy, Innovation and Economics at HULT International Business School and CEO of the advisory network Cambrian.ai. I caught up with the authors about the continued integration between technology and humans, and about their call for a "Digital Magna Carta": a broadly accepted charter, developed by a multi-stakeholder congress, that would help guide the development of advanced technologies and harness their power for the benefit of all humanity.

Lisa Kay Solomon: Your new book, Solomon's Code, explores artificial intelligence and the broader human, ethical, and societal implications that all leaders need to consider. AI is a technology that's been in development for decades.
Would you want to alter your future children's genes to make them smarter, stronger, or better-looking? As the state of the science brings prospects like these closer to reality, an international debate has been raging over the ethics of enhancing human capacities with biotechnologies such as so-called smart pills, brain implants, and gene editing. This discussion has only intensified in the past year with the advent of the CRISPR-Cas9 gene editing tool, which raises the spectre of tinkering with our DNA to improve traits like intelligence, athleticism, and even moral reasoning. Some experts believe genetic enhancement is more likely to emerge first in China than in the West. In China, genetic enhancement may be linked to more generally approving attitudes toward old-fashioned eugenics programs, such as selective abortion of fetuses with severe genetic disorders, though more research is needed to fully explain this cross-cultural difference.
The entirety of human knowledge has been leading to this point. Information technologies and the life sciences are at an inflection point: two technologies that represent the pinnacle of achievement in their domains are going mainstream. In the IT world, it's Artificial Intelligence (AI), super-powerful computers that can program themselves and learn without the assistance of humans. In the life sciences, it's gene editing (CRISPR/Cas9), the ability to reprogram genomes and change the course of evolution.
The scientists and engineers spearheading the creation of artificial beings and bionic people are responding to the magnetism of the technological imperative, the pull of a scientific problem as challenging as any imaginable. Fascinating scientific puzzle though it is, the creation of artificial beings is also expected to meet important needs for society and individuals. Industrial robots are already widely used in factories and on assembly lines. Robots for hazardous duty, from dealing with terrorist threats to exploring hostile environments, including distant planets, are in place or on the drawing boards. Such duty could include military postings because there is a longstanding interest in self-guided battlefield mechanisms that reduce the exposure of human soldiers, and in artificially enhanced soldiers with increased combat effectiveness.
Prof. Bostrom has written a book that I believe will become a classic within that subarea of Artificial Intelligence (AI) concerned with the existential dangers that could threaten humanity as a result of the development of artificial forms of intelligence. What fascinated me is that Bostrom has approached the existential danger of AI from a perspective that, although I am an AI professor, I had never really examined in any detail. When I was a graduate student in the early 80s, studying for my PhD in AI, I came upon comments made in the 1960s by AI leaders such as Marvin Minsky and John McCarthy, in which they mused that, if an artificially intelligent entity could improve its own design, then that improved version could generate an even better design, and so on, resulting in a kind of "chain-reaction explosion" of ever-increasing intelligence, until this entity would have achieved "superintelligence". This chain-reaction problem is the one that Bostrom focuses on. He sees three main paths to superintelligence: 1. The AI path -- In this path, all current (and future) AI technologies, such as machine learning, Bayesian networks, artificial neural networks, and evolutionary programming, are applied to bring about a superintelligence.
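The "chain-reaction explosion" that Minsky and McCarthy mused about can be made concrete with a toy model. The sketch below is purely illustrative and is not taken from Bostrom's book: it assumes, hypothetically, that each design round multiplies capability by a factor that itself grows with current capability, so growth that starts slowly eventually runs away.

```python
def intelligence_explosion(initial=1.0, threshold=1000.0, max_rounds=50):
    """Toy simulation of recursive self-improvement.

    Each round, the entity redesigns itself; the improvement factor
    (1 + 0.1 * capability) is a made-up assumption encoding the idea
    that more capable designers produce bigger improvements. Returns
    the number of rounds taken and the final capability level.
    """
    capability = initial
    rounds = 0
    while capability < threshold and rounds < max_rounds:
        improvement = 1.0 + 0.1 * capability  # better designers improve faster
        capability *= improvement
        rounds += 1
    return rounds, capability

rounds, capability = intelligence_explosion()
print(rounds, capability)
```

With these (arbitrary) parameters, capability creeps upward for many rounds and then explodes past the threshold in just a few final steps, which is the qualitative shape of the argument: the danger lies in how abruptly the final ascent happens.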