If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Garry Kasparov has warned that any attempts by the Government to regulate artificial intelligence (AI) could "stifle" its development and give Russia and China an advantage. The former world chess champion has become an advocate for AI development following his resignation from professional chess in 2005. He told The Telegraph that "the government should be involved" in helping researchers and private firms to develop AI in order to "pave the road" for the technology. However, he cautioned against governments attempting to regulate the technology too closely. "It's too early for the government to interfere," he said.
In recent times, we have seen an increasing number of instances of Artificial Intelligence (AI) donning the proverbial lab coat. In early 2019, thousands of people were screened every day in a hospital in Madurai by an AI system developed by Google that helps diagnose diabetic retinopathy, a condition that can lead to blindness. Startups like Niramai, based in Bengaluru, are developing AI technology for early diagnosis of conditions like breast cancer and river blindness. The sudden, accelerated growth of Machine Learning, not just in research but in all walks of life, can bring to mind Black Mirror-esque visions of dystopia in which machines rule over humanity. But let us leave worrying about the consequences of the far future to science fiction and look at the immediate impact this technology has had in science.
It is difficult to open an insurance industry newsletter these days without seeing some reference to machine learning or its cousin artificial intelligence and how they will revolutionize the industry. Yet according to Willis Towers Watson's recently released 2019/2020 P&C Insurance Advanced Analytics Survey results, fewer companies have adopted machine learning and artificial intelligence than had planned to do so just two years ago (see the accompanying graphic). In the context of insurance, we're not talking about self-driving cars (though these may have important implications for insurance) or chess-playing computers. We're talking about predicting the outcome of comparatively simple future events: Who will buy what product, which clients are more likely to have what kind of claim, which claim will become complex according to some definition. The better insurers can estimate the outcomes of these future events, the better they can plan for them and achieve more positive results.
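The "comparatively simple" prediction problems described above are typically cast as classification: estimate the probability of an event (say, a claim turning complex) from a few client features. As a minimal sketch, here is a from-scratch logistic regression on synthetic data; the features, data-generating rule, and all parameter values are illustrative assumptions, not real insurance data or any insurer's actual model.

```python
import math
import random

def make_data(n=500, seed=42):
    """Generate synthetic policyholder records with a hypothetical rule:
    older clients with more prior claims are more likely to file a claim
    that becomes complex."""
    rng = random.Random(seed)
    rows, labels = [], []
    for _ in range(n):
        age = rng.uniform(18, 80)      # policyholder age (assumed feature)
        prior = rng.randint(0, 5)      # number of prior claims (assumed feature)
        logit = -4.0 + 0.03 * age + 0.8 * prior
        p = 1 / (1 + math.exp(-logit))
        rows.append([1.0, age / 80, prior / 5])  # bias term + scaled features
        labels.append(1 if rng.random() < p else 0)
    return rows, labels

def train(rows, labels, lr=1.0, epochs=200):
    """Fit logistic regression by batch gradient descent on the log loss."""
    w = [0.0] * len(rows[0])
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for x, y in zip(rows, labels):
            p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            for i, xi in enumerate(x):
                grad[i] += (p - y) * xi
        w = [wi - lr * g / len(rows) for wi, g in zip(w, grad)]
    return w

rows, labels = make_data()
w = train(rows, labels)

def predict(x):
    return 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

# Same 40-year-old client, with 0 vs. 5 prior claims: more prior claims
# should raise the predicted probability of a complex claim.
low = predict([1.0, 40 / 80, 0 / 5])
high = predict([1.0, 40 / 80, 5 / 5])
print(round(low, 2), round(high, 2))
```

The better calibrated these probability estimates are, the better the insurer can plan; in practice the same structure simply gets more features and more data.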
We've all seen the film where robots take over the world, with their mechanical bodies causing Hollywood-style screams from unsuspecting (or maybe very suspecting) victims. And, while these kinds of films let us live an alternate reality for an hour and a half, there's always that niggling thought at the backs of our minds telling us that this could actually happen in the not-too-distant future. In fact, the "father of AI", Alan Turing, was beavering away on it in the 1950s. He developed the Turing Test, in which a judge poses questions to a machine and a human. The judge then has to decide which respondent is the human and, if the computer can fool the judge at least half of the time, it is considered intelligent.
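The protocol described above can be sketched as a toy simulation. Everything here is a hypothetical illustration, not a real Turing Test: the respondents return canned answers, and the judge, unable to tell them apart, can only flip a coin, which is exactly the "fooled half the time" threshold.

```python
import random

def human_respondent(question):
    # Stand-in for a human interlocutor.
    return f"About '{question}', I'd have to think."

def machine_respondent(question):
    # Stand-in for a machine imitating the human; here its canned
    # answers are indistinguishable from the human's.
    return f"About '{question}', I'd have to think."

def judge_trial(questions, rng):
    """One trial: the judge questions both parties and guesses which is
    the human. When the answers are indistinguishable, this naive judge
    can only guess at random; any difference gives the human away."""
    human_answers = [human_respondent(q) for q in questions]
    machine_answers = [machine_respondent(q) for q in questions]
    if human_answers == machine_answers:
        return rng.random() < 0.5  # coin-flip guess
    return True  # judge spots the difference

def fool_rate(n_trials=1000, seed=0):
    """Fraction of trials in which the machine fools the judge."""
    rng = random.Random(seed)
    questions = ["Do you dream?", "What is 7 x 8?"]
    fooled = sum(not judge_trial(questions, rng) for _ in range(n_trials))
    return fooled / n_trials

rate = fool_rate()
print(f"judge fooled in {rate:.0%} of trials")
```

With indistinguishable answers the fool rate hovers around 50%, the bar Turing's imitation game sets for a machine to be "considered intelligent."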
As with Go, we are excited about AlphaZero's creative response to chess, which has been a grand challenge for artificial intelligence since the dawn of the computing age with early pioneers including Babbage, Turing, Shannon, and von Neumann all trying their hand at designing chess programs. But AlphaZero is about more than chess, shogi or Go. To create intelligent systems capable of solving a wide range of real-world problems we need them to be flexible and generalise to new situations. While there has been some progress towards this goal, it remains a major challenge in AI research with systems capable of mastering specific skills to a very high standard, but often failing when presented with even slightly modified tasks. AlphaZero's ability to master three different complex games – and potentially any perfect information game – is an important step towards overcoming this problem.
January 17, 2020 | Written by: John R. Smith
IBM Research has a long history as a leader in the field of Artificial Intelligence (AI). IBM's pioneering work in AI dates back to the field's inception in the 1950s, when IBM developed one of the first instances of machine learning, which was applied to the game of checkers. Since then, IBM has been responsible for achieving major milestones in AI, ranging from Deep Blue – the first chess-playing computer to defeat a reigning world champion, to Watson – the first natural-language question-answering system able to win at Jeopardy!, to last year's Project Debater – the first AI system that can build persuasive arguments on its own and effectively engage in debates on complex topics. IBM's leadership in AI continued in earnest in 2019, which was notable for a growing focus on critical topics such as making trustworthy AI work in practice, creating new AI engineering paradigms to scale AI for broader use, and continuing to advance core AI capabilities in language, speech, vision, knowledge & reasoning, human-centered AI, and more. While recent years have seen incredible progress in "narrow AI," built on technologies like deep learning, IBM Research pushed its AI research in 2019 towards developing a new foundational underpinning of AI for enterprise applications by addressing important problems like learning more from less, enabling trusted AI by ensuring the fairness, explainability, adversarial robustness, and transparency of AI systems, and integrating learning and reasoning as a way to understand more in order to do more.
More than a decade has passed since the British government issued an apology to the mathematician Alan Turing. The tone of pained contrition was appropriate, given Britain's grotesquely ungracious treatment of Turing, who played a decisive role in cracking the German Enigma cipher, allowing Allied intelligence to predict where U-boats would strike and thus saving tens of thousands of lives. Unapologetic about his homosexuality, Turing had made a careless admission of an affair with a man, in the course of reporting a robbery at his home in 1952, and was arrested for an "act of gross indecency" (the same charge that had led to a jail sentence for Oscar Wilde in 1895). Turing was subsequently given a choice to serve prison time or undergo a hormone treatment meant to reverse the testosterone levels that made him desire men (so the thinking went at the time). Turing opted for the latter and, two years later, ended his life by taking a bite from an apple laced with cyanide.
Last month at the San Francisco Museum of Modern Art I saw "2001: A Space Odyssey" on the big screen for my 47th time. The fact that this masterpiece remains on nearly every relevant list of "top ten films" and is shown and discussed over a half-century after its 1968 release is a testament to the cultural achievement of its director Stanley Kubrick, writer Arthur C. Clarke, and their team of expert filmmakers. As with each viewing, I discovered or appreciated new details. But three iconic scenes -- HAL's silent murder of astronaut Frank Poole in the vacuum of outer space, HAL's silent medical murder of the three hibernating crewmen, and the poignant sorrowful "death" of HAL -- prompted deeper reflection, this time about the ethical conundrums of murder by a machine and of a machine. In the past few years experimental autonomous cars have led to the death of pedestrians and passengers alike. AI-powered bots, meanwhile, are infecting networks and influencing national elections. Elon Musk, Stephen Hawking, Sam Harris, and many other leading AI researchers have sounded the alarm: Unchecked, they say, AI may progress beyond our control and pose significant dangers to society. When astronauts Frank and Dave retreat to a pod to discuss HAL's apparent malfunctions and whether they should disconnect him, Dave imagines HAL's views and says: "Well I don't know what he'd think about it."
It is difficult not to smile when reading the Wall Street Journal report about a guest in a robot-staffed hotel in Japan who was woken every few hours by the in-room assistant asking him to repeat his command. The hotel manager finally realized that heavy snoring by the guest had triggered the robot's voice recognition system. For every clanger, though, there is also a success story. For example, DeepMind's AI programme AlphaStar has for the first time beaten human video game players at StarCraft II, winning 10 games in a row. AlphaStar's success demonstrated the ability of AI programmes, in this case based on a reinforcement learning algorithm, to make quick decisions without any errors while operating in a complex environment.
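AlphaStar's training is far more elaborate, but the core reinforcement-learning loop the excerpt refers to (act, observe a reward, update value estimates, act better next time) can be sketched with tabular Q-learning on a toy problem. The environment here, a five-cell corridor where the agent must learn to walk right to reach a reward, and all parameter values are illustrative assumptions, not AlphaStar's actual setup.

```python
import random

N_STATES = 5          # cells 0..4; the reward sits at cell 4
ACTIONS = [1, -1]     # step right or step left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def train(episodes=200, seed=1):
    """Tabular Q-learning with an epsilon-greedy behavior policy."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)   # walls at both ends
            r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the goal
            # Q-learning update: move the estimate toward the observed reward
            # plus the discounted value of the best next action.
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # -> [1, 1, 1, 1]: step right from every non-terminal cell
```

The same act-observe-update skeleton underlies far larger systems; what changes is that a neural network replaces the lookup table and the environment is vastly more complex.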
Recently, a report was released regarding misuse by companies claiming to use artificial intelligence in their products and services. According to The Verge, 40% of European startups that claimed to use AI don't actually use the technology. Last year, TechTalks also stumbled upon such misuse by companies claiming to use machine learning and advanced artificial intelligence to gather and examine thousands of users' data to enhance user experience in their products and services. Unfortunately, there's still a lot of confusion within the public and the media regarding what truly is artificial intelligence and what truly is machine learning. Often the terms are used as synonyms; in other cases they are treated as discrete, parallel advances; and still others take advantage of the trend to create hype and excitement in order to increase sales and revenue.