If you are looking for an answer to the question What is Artificial Intelligence? and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
His concern is warranted and will require us to strike a balance between protecting the democratic and egalitarian values that made the Internet great to begin with while ensuring those values are used for good. The fundamental issue, then, in creating a 21st-century Internet becomes what changes are warranted and who will be responsible for defining and administering them. On the technology dimension, computer scientists and engineers must develop smarter systems for detecting, addressing, and preventing malicious content on the Web. Cerf's argument on behalf of user training is helpful but will not ultimately solve the problem of an untrustworthy, ungovernable, potentially malicious network. I myself recently fell for a phishing attack, which only proves that today's attacks can fool even savvy, experienced users.
Visual object recognition, speech recognition, machine translation – these are among the "holy grails" of artificial intelligence research. But machines are now at a level that the benchmark performance for these three areas has reached, and even surpassed, human levels. Moreover, in the space of 24 hours, a single program, AlphaZero, became by far the world's best player in three games – chess, Go, and Shogi – to which it had no prior exposure. These developments have provoked some alarmist reporting in the media, invariably accompanied by pictures of Terminator robots, but predictions of imminent superhuman AI are almost certainly wrong – we're still several conceptual breakthroughs away. On the other hand, massive investments in AI research, several hundred billion pounds over the next decade, suggest further rapid advances are not far away.
Thomson Reuters has a series, AI experts, where they interview thought leaders from different areas - including technology executives, researchers, robotics experts and policymakers - on what we might expect as we move towards AI. As part of that series I recently spoke to Paul Thies of Thomson Reuters, and here are the excerpts from the interview: Anticipating the next move in data science Thomson Reuters: For timely information concerning developments in data science, data mining and business analytics, KDnuggets is widely regarded as a leading outlet in the field. Created in 1993 by founder, editor and president Gregory Piatetsky-Shapiro, it is frequently cited as one of the top sources of data science news and influence by various industry watchers. Thomson Reuters: What are some use cases of data science that you find to be particularly valuable to organizations in this age of Big Data? GREGORY: Where people typically apply data science, probably not surprisingly, are in the areas of customer relationship management (CRM) and consumer analytics.
A Google researcher has highlighted some of the hilarious ways that artificial intelligence (AI) software has'cheated' to fulfil its purpose. A programme designed not to lose at Tetris completed its task by simply pausing the game, while a self-driving car simulator asked to keep cars'fast and safe' did so by making them spin on the spot. An AI programmed to spot cancerous skin lesions learned to flag blemishes pictured next to a ruler, as they indicated humans were already concerned about them. Victoria Krakovna, of Google's DeepMind AI lab, asked her colleagues for examples of misbehaving AI to highlight an often overlooked danger of the technology. She said that the biggest threat posed by AI was not that they disobeyed us, but that they obeyed us in the wrong way.
We propose a new General Game Playing (GGP) language called Regular Boardgames (RBG), which is based on the theory of regular languages. The objective of RBG is to join key properties as expressiveness, efficiency, and naturalness of the description in one GGP formalism, compensating certain drawbacks of the existing languages. This often makes RBG more suitable for various research and practical developments in GGP. While dedicated mostly for describing board games, RBG is universal for the class of all finite deterministic turn-based games with perfect information. We establish foundations of RBG, and analyze it theoretically and experimentally, focusing on the efficiency of reasoning. Regular Boardgames is the first GGP language that allows efficient encoding and playing games with complex rules and with large branching factor (e.g.\ amazons, arimaa, large chess variants, go, international checkers, paper soccer).
What's more, even AIs based on mechanisms inspired by human biology, such as neural networks, have only a distant relationship with biological neurons in the brain. NN are examples more of the importance of reinforcement and self-organisation of controller networks than any similarity with biology. The first, naive, approach to AI is to think that it is necessary to create a synthetic human, or a synthetic brain to produce cognition: in fact, cognition does not need to be anthropomorphic at all. Second attempt at a definition: "The ability of a machine to achieve performance equal to or better than certain human cognitive processes." This definition is based on the final outcome, without presupposing imitation of biological mechanisms.
In 1913, the largest and most influential art show in history took place; The 1913 Armory Show. Packed into New York's 69th Regiment Armory on Lexington Avenues between 25th and 26th streets were over 1200 works of art that ranged from sculptures, paintings and decorative works by over 300 artists from America and Europe. The show introduced Picasso, Matisse, Duchamp and modernism to American audiences. The event was so radical at the time, critics, who were used to realism in their art, questioned the sanity of the artists whose works were represented in the show. But the experimental art was eventually embraced by America and made way for great American artists such as Jackson Pollock, Mark Rothko and Andy Warhol.
According to Merriam-Webster, artificial intelligence is "a branch of computer science dealing with the simulation of intelligent behavior in computers." Love is defined as "a strong affection for another arising out of kinship or personal ties." It is difficult today to bypass the raging, Manichean debate in the tech and business communities about the role artificial intelligence (AI) will play in our economy and our society. Will this emerging technology become some kind of terminator, killing all of our jobs? Or will it emerge with a more theological, liberating approach to the human condition?
AI is a large topic, and there is no single agreed definition of what it involves. But there seems to be more agreement than disagreement. Broadly speaking, AI is an umbrella term for the field in computer science dedicated to making machines simulate different aspects of human intelligence, including learning, decision-making and pattern recognition. Some of the most striking applications, in fields like speech recognition and computer vision, are things people take for granted when assessing human intelligence but have been beyond the limits of computers until relatively recently. The term "artificial intelligence" was coined in 1956 by mathematics professor John McCarthy, who wrote, The study is to proceed on the basis of the conjecture that every aspect of learning and any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.
Not too long ago it was often said that computer vision could not compete with the visual abilities of a one-year-old. That is no longer true: computers can now recognize objects in images about as well as most adults can, and there are computerized cars on the road that drive themselves more safely than an average sixteen-year-old could. And rather than being told how to see or drive, computers have learned from experience, following a path that nature took millions of years ago. What is fueling these advances is gushers of data. Data are the new oil. Learning algorithms are refineries that extract information from raw data; information can be used to create knowledge; knowledge leads to understanding; and understanding leads to wisdom. Welcome to the brave new world of deep learning. Deep learning is a branch of machine learning that has its roots in mathematics, computer science, and neuroscience.