From virtual assistants to driverless cars, technology imitating human intelligence is on the rise. But at what ethical cost, and how do boards future-proof their organisations in the face of rapid change? Earlier this year, a Japanese insurance company made headlines for doing something that company executives and directors around the world have been anticipating - and fearing - for years. Fukoku Mutual Life Insurance made 34 of its staff redundant and replaced them with IBM's Watson artificial intelligence (AI) system. Japanese newspaper The Mainichi reported that the company will use Watson to determine payout amounts and check customer cases against their insurance contracts. Evidently, the future of AI is already here, and technology is changing the world at a dramatic pace.
"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030.
It was reported that venture capital investment in AI-related startups rose significantly in 2018, jumping by 72% compared to 2017, even as the number of startups funded fell to 466 from 533 the previous year. PwC's MoneyTree report stated that seed-stage deal activity in the US among AI-related companies rose to 28% of deals in the fourth quarter of 2018, compared to 24% in the three months prior, while expansion-stage deal activity jumped to 32% from 23%. There will be increasing international rivalry over global leadership in AI. President Putin of Russia was quoted as saying that "the nation that leads in AI will be the ruler of the world". Billionaire Mark Cuban was reported by CNBC as stating that "the world's first trillionaire would be an AI entrepreneur".
This year has seen some notable advancements in computer-based brain mimicry, not just on the artificial intelligence (AI) front, but also in in silico brain simulations. Watson's vanquishing of Jeopardy champions Brad Rutter and Ken Jennings in February set the stage for the year. The now world-famous IBM supercomputer exhibited a sophisticated understanding of language semantics along with the ability to integrate that understanding into a complex analytics engine. Since the Jeopardy match, IBM has been looking to take the technology into the commercial realm, most notably in the health care arena. Meanwhile, projects like FACETS (Fast Analog Computing with Emergent Transient States) and SpiNNaker are working to uncover the nature of the brain at the level of the neuron.
As we saw yesterday, artificial intelligence (AI) has enjoyed a string of unbroken successes against humans. But these are successes in games where the map is the territory. That fact hints at the problem tech philosopher and futurist George Gilder raises in Gaming AI (free download here). Whether all human activities can be treated that way successfully is an entirely different question. As Gilder puts it, "AI is a system built on the foundations of computer logic, and when Silicon Valley's AI theorists push the logic of their case to a 'singularity,' they defy the most crucial findings of twentieth-century mathematics and computer science." Here is one of the crucial findings they defy (or ignore): philosopher Charles Sanders Peirce (1839–1914) pointed out that, generally, mental activity comes in threes, not twos (so he called it triadic).