We live in the greatest time in human history. Only 200 years ago, life for most Europeans was a struggle rather than a pleasure. Without antibiotics and hospitals, any infection could be fatal. Only a small elite of citizens lived in the cities in relative prosperity. Freedom of opinion and human and civil rights were a distant prospect. Voting rights and political decision-making were reserved for a class of nobility, clergy, the military, and rich citizens; the interests of the general population were virtually ignored.
Whoever controls the strongest artificial intelligences controls the world. Artificial intelligence is the most important technology of the 21st century, so it is important to understand global ambitions and movements. In this article I examine the global artificial intelligence industry, considering politics, data, the economy, start-ups, financing, research, and infrastructure. I will discuss the current superpowers, China and the USA, only briefly, as I will dedicate a separate article to each of them. The question we must ultimately ask ourselves is how humanity will deal with these global challenges. So far, the first wave of digitization has developed without much government influence. Although there are now efforts in the USA and Europe to curb Google's market dominance, for example through European fines against Google and Facebook, politics lags more than a decade behind the market. With AI, by contrast, for the first time in recent history I have observed a multitude of initiatives, strategies, and actions by dozens of governments around the world, with very different goals and approaches. Artificial intelligence is and will remain an issue that politicians and administrations of every nation must deal with: AI is relevant to fields from climate protection to economic policy.
One measure of a civilization's status is the complexity of the tools its society uses. As societies have progressed, their tools and machines have become increasingly complex. Despite this rising complexity, current tools and machines still need humans to create and operate them, and they can only do what humans have pre-programmed or directed them to do. In particular, today's machines cannot learn or extend their own knowledge. However, a new class of machines is emerging that can learn and that needs minimal human intervention to operate.
Although the concept of artificial intelligence has been around for centuries, it wasn't until the 1950s that its true possibility was explored. A generation of scientists, mathematicians, and philosophers had entertained the idea of AI, but it was the British polymath Alan Turing who asked: if humans use available information, as well as reason, to solve problems and make decisions, why can't machines do the same? Although Turing outlined intelligent machines and how to test their intelligence in his 1950 paper "Computing Machinery and Intelligence", his work did not immediately lead to progress. The main obstacle was the computers themselves: before the field could grow, they needed to change fundamentally, because the computers of the time could execute commands but could not store them.
For centuries, machines have carried out a huge variety of tasks, from manufacturing goods, to transporting people, to helping us decipher the natural world, to simply entertaining us. Machines can fight, protect, heal, and even teach us. But what they had not been able to do until quite recently is learn, make decisions, and act on their own. Today, intelligent machines are everywhere. From the Netflix recommendation engine to Google Translate to Apple's Siri voice-recognition system, artificial intelligence has become sufficiently accurate, reliable, and useful to find its way into numerous devices and applications. These technologies have taken off in parallel with a dramatic expansion in the amount and complexity of data, which provides fertile ground from which machines can learn to make intelligent decisions on their own.