Visual object recognition, speech recognition, machine translation – these are among the "holy grails" of artificial intelligence research. But machine performance on standard benchmarks in all three areas has now reached, and even surpassed, human levels. Moreover, in the space of 24 hours, a single program, AlphaZero, became by far the world's best player in three games – chess, Go, and Shogi – learning each from nothing more than the rules. These developments have provoked some alarmist reporting in the media, invariably accompanied by pictures of Terminator robots, but predictions of imminent superhuman AI are almost certainly wrong – we are still several conceptual breakthroughs away. On the other hand, massive investments in AI research – several hundred billion pounds over the next decade – suggest that further rapid advances are not far away.
Thomson Reuters has a series, AI Experts, in which they interview thought leaders from different areas – including technology executives, researchers, robotics experts and policymakers – on what we might expect as we move towards AI. As part of that series I recently spoke to Paul Thies of Thomson Reuters; here are excerpts from the interview.

Anticipating the next move in data science

Thomson Reuters: For timely information concerning developments in data science, data mining and business analytics, KDnuggets is widely regarded as a leading outlet in the field. Created in 1993 by founder, editor and president Gregory Piatetsky-Shapiro, it is frequently cited by industry watchers as one of the top sources of data science news and influence.

Thomson Reuters: What are some use cases of data science that you find particularly valuable to organizations in this age of Big Data?

GREGORY: Where people typically apply data science, probably not surprisingly, is in the areas of customer relationship management (CRM) and consumer analytics.
We propose a new General Game Playing (GGP) language called Regular Boardgames (RBG), which is based on the theory of regular languages. The objective of RBG is to combine, within one GGP formalism, key properties such as expressiveness, efficiency, and naturalness of description, compensating for certain drawbacks of the existing languages. This often makes RBG more suitable for various research and practical developments in GGP. While designed primarily for describing board games, RBG is universal for the class of all finite deterministic turn-based games with perfect information. We establish the foundations of RBG, and analyze it theoretically and experimentally, focusing on the efficiency of reasoning. Regular Boardgames is the first GGP language that allows efficient encoding and playing of games with complex rules and large branching factors (e.g., amazons, arimaa, large chess variants, go, international checkers, paper soccer).
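The core idea – describing legal moves as words in a regular language over board steps – can be illustrated with a toy sketch. This is not actual RBG syntax, just a hypothetical Python illustration of the underlying notion that, for instance, a rook's movement corresponds to the regular expression up+ | down+ | left+ | right+ over unit steps:

```python
# Toy illustration (NOT RBG syntax): a piece's legal moves viewed as words
# in a regular language over unit board steps.

DIRS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def ray_moves(occupied, start, direction, size=8):
    """Enumerate destinations of the form direction+ (one or more repeated
    unit steps), stopping at the board edge or the first occupied square."""
    dx, dy = DIRS[direction]
    x, y = start
    moves = []
    while True:
        x, y = x + dx, y + dy
        if not (0 <= x < size and 0 <= y < size) or (x, y) in occupied:
            break
        moves.append((x, y))
    return moves

def rook_moves(occupied, start):
    # A rook's movement is the union: up+ | down+ | left+ | right+
    out = []
    for d in DIRS:
        out.extend(ray_moves(occupied, start, d))
    return out
```

A rook on an empty 8x8 board at the corner (0, 0) has 7 destinations along the file and 7 along the rank; more elaborate patterns (captures, conditional steps) would correspond to richer regular expressions over an extended alphabet.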
In 1913, the largest and most influential art show in history took place: the 1913 Armory Show. Packed into New York's 69th Regiment Armory on Lexington Avenue between 25th and 26th streets were over 1,200 works of art – sculptures, paintings and decorative works – by over 300 artists from America and Europe. The show introduced Picasso, Matisse, Duchamp and modernism to American audiences. The event was so radical at the time that critics, who were used to realism in their art, questioned the sanity of the artists whose works were represented in the show. But the experimental art was eventually embraced by America and made way for great American artists such as Jackson Pollock, Mark Rothko and Andy Warhol.
According to Merriam-Webster, artificial intelligence is "a branch of computer science dealing with the simulation of intelligent behavior in computers." Love is defined as "a strong affection for another arising out of kinship or personal ties." It is difficult today to avoid the raging, Manichean debate in the tech and business communities about the role artificial intelligence (AI) will play in our economy and our society. Will this emerging technology become some kind of Terminator, killing all of our jobs? Or will it emerge with a more theological, liberating approach to the human condition?
AI is a large topic, and there is no single agreed definition of what it involves. But there seems to be more agreement than disagreement. Broadly speaking, AI is an umbrella term for the field of computer science dedicated to making machines simulate different aspects of human intelligence, including learning, decision-making and pattern recognition. Some of the most striking applications, in fields like speech recognition and computer vision, are things people take for granted when assessing human intelligence but were beyond the limits of computers until relatively recently. The term "artificial intelligence" was coined in 1956 by mathematics professor John McCarthy, who wrote: "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
Modern-day cognitive computing dates back to the late 19th century, with the work of mathematician George Boole and his book The Laws of Thought, and the propositions of Charles Babbage on creating what he termed an "analytical engine." The term Artificial Intelligence (AI) was coined by the late John McCarthy in 1955 (revised in 2007), when he defined AI as "the science and engineering of making intelligent machines." Artificial intelligence has been a long-sought goal of computing since the conception of the computer, but we may be getting closer than ever with new cognitive computing models. While computers have been faster at calculations and processing than humans for decades, they haven't been able to accomplish tasks that humans take for granted as simple, like understanding natural language or recognizing unique objects in an image. The study of AI really began to accelerate during the 1980s, when funding increased considerably over previous decades, driving the development of new machine learning and AI technologies.
Supervised vs Reinforcement Learning: In supervised learning, an external supervisor with sufficient knowledge of the environment shares that knowledge with the agent so that it can complete the task. But in problems where the agent must perform many different kinds of subtasks by itself to achieve the overall objective, the presence of a supervisor is unnecessary and impractical. Take chess, where a player may make tens of thousands of moves in pursuit of the ultimate objective; creating a knowledge base covering every situation would be an enormously complicated task. In such tasks, it is imperative that the computer learn to manage affairs by itself. It is hence more feasible and pertinent for the machine to learn from its own experience.
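Learning from experience rather than from labelled examples can be sketched with tabular Q-learning on a toy environment. Everything here – the 5-cell corridor, the reward, the parameter values – is illustrative, not taken from any particular library: the agent starts at cell 0, is rewarded only on reaching cell 4, and no supervisor ever tells it which move is "correct":

```python
import random

# Minimal reinforcement learning sketch: tabular Q-learning on a 1-D
# corridor of 5 cells. The agent learns purely from its own experience.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                 # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(state):
    """Best-known action, breaking ties randomly so early episodes explore."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

def step(state, action):
    """Environment dynamics: move within bounds; reward 1 only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0)

random.seed(0)
for episode in range(200):
    s = 0
    for _ in range(100):           # cap episode length
        # epsilon-greedy: mostly exploit, occasionally explore
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt
        if s == GOAL:
            break

# After training, the greedy policy should step right in every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
```

The point of the sketch is the contrast with supervised learning: nothing in the code ever labels an individual move as right or wrong – the agent discovers the "step right" policy only by acting, observing rewards, and propagating value backwards through the Q-table.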
When thinking about robots and artificial intelligence (AI), most of us will envision the archetypal images of Star Wars legends R2-D2 and C-3PO, or my personal favourite childhood bad-ass, the Terminator. While science fiction literature and movies have provided a stage for robots to display their awesome potential, they also often paint a picture of a dystopian world in which robots become self-aware beings, rebelling against and ultimately suppressing mankind. I wouldn't be surprised if a lot of people are actually quite anxious about robots taking over the world in the future. Luckily, this is not (yet) the case, and robots are nowadays used for carrying out all kinds of routine or complex tasks. Currently, robots perform tasks as varied as surgery, caring for the elderly, defusing bombs, and space exploration.
When thinking back to the era of ancient civilisations, it's unlikely you'd consider insurance and artificial intelligence staples of the time. Rather, they fit much better into the modern day, where technological innovation goes hand-in-hand with better business practices. Yet the idea of giving artificial beings a form of mind goes back to antiquity, seen in folklore, myths and stories. As Pamela McCorduck, a writer and novelist on artificial intelligence, wrote, AI stemmed from "an ancient wish to forge the gods" – making it just a bit older than Google, really. Greek, Chinese and Jewish traditions all contain folklore about bringing inanimate objects to life, from Pygmalion's Galatea, an ivory sculpture brought to life to be his wife, to rabbinic golems and a lifelike automaton performing for King Mu of Zhou.