Artificial Intelligence (AI) does not belong to the future – it is happening now. With the global AI software market surging 154 percent year-on-year, the industry is predicted to be worth 22.6 billion US dollars by 2025. The term was coined by John McCarthy in 1956; Artificial Intelligence refers to the ability of machines or computer programs to learn, think, and reason, much like a human brain. An AI system is fed data and instructions, from which it draws conclusions and performs functions. Over time it keeps learning human reasoning and logic, becoming more efficient as it goes.
It is hard to believe that just over two decades ago, IBM's Deep Blue computer beat chess grandmaster Garry Kasparov. AI keeps enhancing itself and is becoming better at numerous "human" jobs -- diagnosing disease, translating languages, providing customer service -- and it's improving fast. This is raising reasonable fears among workers and upcoming students. According to The Guardian, 76% of Americans fear that their job will be lost to AI. While it was speculated that AI would take over 1.8 million human jobs by the year 2020, the technology is also expected to create 2.3 million new kinds of jobs, many of which will involve collaboration between humans and AI.
This blog will take a thorough dive into the timeline of AI, beginning from the very start: the 1940s. The term "Artificial Intelligence" was first coined by the father of AI, John McCarthy, in 1956, but the AI revolution began a few years earlier, in the 1940s. Around 37% of industries have implemented AI in some form, a 270% increase over the past four years. AI has taken multiple forms over the years.
From intelligent personal assistants to home robots, technology once thought of as a sci-fi dream is now embedded in everyday life. But this leap from dream to reality didn't happen overnight. There is no single 'eureka' moment in a field as vast as AI. Rather, the technology we enjoy today is the result of countless milestones in artificial intelligence, delivered by countless now-forgotten people across a vast range of projects. So, let's pay homage to some of that work.
From 1960 to 2020, the field of AI has seen both tremendous sprints of progress and strenuous "AI winters." The many breakthroughs have always been accompanied by headlines about how computers are becoming intelligent and will soon surpass humans, followed by pessimistic views on how limited the current technology is. Currently, the field is at its highest point ever; yet no signs of a general form of intelligence have been achieved so far, nor is there a conclusive definition of what intelligence is or what consciousness should look like. For more than fifty years, the field has been chasing one of the most elusive targets science has ever seen. For a long time, public opinion held that anything that could play chess well must be intelligent.
Some tasks that AI performs are actually not impressive. Think about your camera recognizing and auto-focusing on faces in pictures. That technology has been around since 2001, and it doesn't tend to excite people. Why? Because you can do that too: you can focus your eyes on someone's face very easily. In fact, it's so easy you don't even know how you do it.
"It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. In considering any new subject, there is frequently a tendency, first, to overrate what we find to be already interesting or remarkable; and, secondly, by a sort of natural reaction, to undervalue the true state of the case, when we do discover that our notions have surpassed those that were really tenable. The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with." The first words uttered on a controversial subject can rarely be taken as the last, but this comment by British mathematician Lady Lovelace, who died in 1852, is just that--the basis of our understanding of what computers are and can be, including the notion that they might come to acquire artificial intelligence, which here means "strong AI," or the ability to think in the fullest sense of the word. Her words demand and repay close reading: the computer "can do whatever we know how to order it to perform." This means both that it can do only what we know how to instruct it to do, and that it can do all that we know how to instruct it to do.
It is difficult to open an insurance industry newsletter these days without seeing some reference to machine learning or its cousin artificial intelligence and how they will revolutionize the industry. Yet according to Willis Towers Watson's recently released 2019/2020 P&C Insurance Advanced Analytics Survey results, fewer companies have adopted machine learning and artificial intelligence than had planned to do so just two years ago (see the graphic below). In the context of insurance, we're not talking about self-driving cars (though these may have important implications for insurance) or chess-playing computers. We're talking about predicting the outcome of comparatively simple future events: Who will buy what product, which clients are more likely to have what kind of claim, which claim will become complex according to some definition. Analytics have applications across the insurance value chain, from marketing, client acquisition and retention to underwriting, pricing and claims management, as insurers look to squeeze more signal out of their data.
The better insurers can estimate the outcomes of these future events, the better they can plan for them and achieve more positive results.
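The survey discussion above doesn't name a specific technique, but the kind of prediction it describes -- for instance, which claims will become complex -- is typically framed as binary classification. As a minimal, hypothetical sketch, here is a logistic regression trained by plain gradient descent on made-up claim data. The features (initial reserve, claimant age, attorney involvement), the synthetic labels, and the choice of logistic regression are all illustrative assumptions, not anything taken from the survey.

```python
import math

# Synthetic, made-up claims: (initial_reserve_in_$1000s, claimant_age,
# attorney_involved). Label 1 = the claim became complex, 0 = routine.
claims = [
    ((5.0, 30, 0), 0), ((80.0, 55, 1), 1), ((3.0, 42, 0), 0),
    ((60.0, 61, 1), 1), ((10.0, 25, 0), 0), ((95.0, 48, 1), 1),
    ((7.0, 35, 1), 0), ((70.0, 50, 0), 1),
]

def sigmoid(z):
    # Clamp extreme inputs so math.exp never overflows.
    if z < -30.0:
        return 0.0
    if z > 30.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

# Plain gradient descent on the logistic loss.
w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.001
for _ in range(2000):
    for x, y in claims:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def predict_complex(x):
    """True if the model scores this claim as likely to become complex."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5

print(predict_complex((85.0, 52, 1)))  # large reserve, attorney involved
print(predict_complex((4.0, 28, 0)))   # small, routine-looking claim
```

In practice an insurer would use a library such as scikit-learn and far richer features drawn from policy, claimant, and claims-handling data; this sketch only shows the shape of the problem: historical outcomes in, a probability of a future event out.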
Last month at the San Francisco Museum of Modern Art I saw "2001: A Space Odyssey" on the big screen for the 47th time. The fact that this masterpiece remains on nearly every relevant list of "top ten films" and is shown and discussed over a half-century after its 1968 release is a testament to the cultural achievement of its director Stanley Kubrick, writer Arthur C. Clarke, and their team of expert filmmakers. As with each viewing, I discovered or appreciated new details. But three iconic scenes -- HAL's silent murder of astronaut Frank Poole in the vacuum of outer space, HAL's silent medical murder of the three hibernating crewmen, and the poignant "death" of HAL himself -- prompted deeper reflection, this time about the ethical conundrums of murder by a machine and of a machine. In the past few years experimental autonomous cars have led to the deaths of pedestrians and passengers alike. AI-powered bots, meanwhile, are infecting networks and influencing national elections. Elon Musk, Stephen Hawking, Sam Harris, and many other leading AI commentators have sounded the alarm: Unchecked, they say, AI may progress beyond our control and pose significant dangers to society. When astronauts Frank and Dave retreat to a pod to discuss HAL's apparent malfunctions and whether they should disconnect him, Dave imagines HAL's views and says: "Well, I don't know what he'd think about it."