If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence (AAAI) offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
This article started out as an addendum to a chapter in our book, Data Visualization: A History of Visual Thinking and Graphic Communication (Friendly & Wainer, 2020). In it, we claimed that much of the history of data visualization could be seen as a combination of three forces: (1) the important scientific problems of the day, (2) a developing abundance of data, and (3) the cognitive ability of some heroes of this history to conceive solutions to problems through visual imagination. In the book and in what follows, we make frequent reference to cognitive aspects of the visual understanding of phenomena and their expression in graphic displays: "inner vision", "graphic communication", and "visual insight" are some of the terms we use. An early metaphor for this, and an early title for our book, was "A gleam in the mind's eye." We give some additional explanations and examples here, and we also want to place this topic in a wider framework.
Fresh off of the OpenAI Retro contest, I wanted to keep exploring more AI topics. Somebody told me that the best way to learn was by reproducing other people's papers, but, not wanting to learn any more Python than I had to, I decided to try to tackle some existing work with TensorFlow.js. I first tried to run with a GAN, but I realized it might be better to crawl first, since I am coming from a pretty fresh background. I was able to find a series of basic TensorFlow examples that I felt would let me ladder up my TensorFlow.js skills. I'll be redoing all of these, starting with Basic Operations and Linear Regression, with TensorFlow.js.
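For a taste of what that linear-regression example boils down to, here is a minimal, dependency-free sketch in plain Python: fitting y = w·x + b by gradient descent on mean squared error. The actual port uses TensorFlow.js tensors and optimizers; the data, learning rate, and epoch count below are illustrative, not taken from the original example.

```python
# Linear regression by hand-rolled gradient descent (illustrative sketch;
# the example being ported does this with TensorFlow.js ops instead).
def fit_linear(xs, ys, lr=0.05, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b.
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]  # generated by y = 2x + 1
w, b = fit_linear(xs, ys)
print(round(w, 2), round(b, 2))  # prints: 2.0 1.0
```

Swapping the two gradient lines for a framework's automatic differentiation is essentially what the TensorFlow.js version does.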
This article is coauthored by Joy Rimchala and Shir Meir Lador. The rapid adoption of complex machine learning (ML) models in recent years has brought with it a new challenge for today's companies: how to interpret, understand, and explain the reasoning behind these complex models' predictions. Treating complex ML systems as trustworthy black boxes without sanity checking has led to some disastrous outcomes, as evidenced by the gender and racial biases disclosed by the Gender Shades study¹. As ML-assisted predictions integrate more deeply into high-stakes decision-making, such as medical diagnoses, recidivism risk prediction, and loan approval processes, knowing the root causes of an ML prediction becomes crucial. If we know that certain model predictions reflect bias and are not aligned with our best knowledge and societal values (such as an equal opportunity policy or outcome equity), we can detect these undesirable ML defects, prevent the deployment of such ML systems, and correct model defects.
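One simple signal of the kind of bias discussed above is a gap in positive-prediction rates across groups defined by a protected attribute (demographic parity). The sketch below uses entirely hypothetical predictions, not data from any real system, to show how such a check might look:

```python
# Demographic parity check on a model's binary predictions (hypothetical
# data). A large gap in positive-prediction rates between groups is one
# warning sign worth investigating before deployment.
def positive_rate(preds):
    return sum(preds) / len(preds)

# Hypothetical predictions (1 = approved) for two groups.
preds_group_a = [1, 1, 0, 1, 1, 0, 1, 1]
preds_group_b = [0, 1, 0, 0, 1, 0, 0, 0]

gap = positive_rate(preds_group_a) - positive_rate(preds_group_b)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A nonzero gap is not proof of unfairness on its own, but it is exactly the kind of sanity check that catches problems before they reach high-stakes decisions.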
Editor's Note: See Joris and Matteo at their tutorial "Opening The Black Box -- Interpretability in Deep Learning" at ODSC Europe 2019 this November 20th in London. In the last decade, the application of deep neural networks to long-standing problems has brought a breakthrough in performance and prediction power. However, high accuracy, deriving from increased model complexity, often comes at the price of a loss of interpretability; that is, many of these models behave as black boxes and fail to provide explanations of their predictions. While in certain application fields this issue may play a secondary role, in high-risk domains, e.g., health care, it is crucial to build trust in a model and to be able to understand its behavior. The definition of the verb interpret is "to explain or tell the meaning of: present in understandable terms" (Merriam-Webster 2019).
Convolutional Neural Networks (CNNs) and other deep learning networks have enabled unprecedented breakthroughs in a variety of computer vision tasks, from image classification to object detection, semantic segmentation, image captioning, and, more recently, visual question answering. While these networks enable superior performance, their lack of decomposability into intuitive and understandable components makes them hard to interpret. Consequently, when today's intelligent systems fail, they fail spectacularly and disgracefully, without warning or explanation, leaving the user staring at an incoherent output and wondering why. Interpretability of deep learning models matters in order to build trust and move towards their successful integration into our daily lives. To achieve this goal, model transparency is needed to explain why models predict what they predict.
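One simple, model-agnostic way to probe such a black box is occlusion sensitivity: mask each input region in turn and watch how the prediction score changes. The "model" and "image" below are toy stand-ins (a weighted sum over four pixels), not a real CNN, but the probing loop is the same idea:

```python
# Occlusion sensitivity on a toy model: regions whose masking drops the
# score the most are the ones the model relies on for its prediction.
def model_score(pixels):
    weights = [0.1, 0.7, 0.1, 0.1]  # toy stand-in for a trained network
    return sum(w * p for w, p in zip(weights, pixels))

image = [1.0, 1.0, 1.0, 1.0]
base = model_score(image)

importance = []
for i in range(len(image)):
    occluded = list(image)
    occluded[i] = 0.0               # mask one "pixel"
    importance.append(base - model_score(occluded))

print(importance)  # pixel 1 dominates the score
```

With a real CNN, the same loop slides a gray patch over the image and re-runs inference, producing a heatmap of where the network is actually looking.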
This is a quick transcript of the interview of Peter Norvig by Lex Fridman. I found this interview so interesting and revealing that I decided to take on the task of making a transcript of the interview published on YouTube. Lex Fridman: The following is a conversation with Peter Norvig, who co-wrote, with Stuart Russell, the textbook "Artificial Intelligence: A Modern Approach", and educated and inspired a whole generation of researchers, including myself, to get into the field of Artificial Intelligence. This is the Artificial Intelligence podcast. Lex Fridman: Most researchers in the AI community, including myself, own all three editions, red, green, and blue, of "Artificial Intelligence: A Modern Approach", the field-defining textbook. As many people are aware, you wrote it with Stuart Russell. How has the book changed, and how have you changed in relation to it, from the first edition to the second, to the third, and now the fourth edition as you work on it? Peter Norvig: Yeah, so it's been a lot of years, a lot of changes. One of the things changing from the first to maybe the second or third was just the rise of computing power, right? So, I think in the first edition we said: "here's propositional logic, but that only goes so far, because pretty soon you have millions of short little propositional expressions and they can't possibly fit in memory, so we're gonna use first-order logic that's more concise." And then we quickly realized: "Oh, propositional logic is pretty nice because there are really fast SAT solvers, and other things, and look, there's only millions of expressions and that fits easily into memory, or maybe even billions fit into memory now."
Microsoft is nowadays one of the major providers of AI-powered cloud services. In fact, according to a RightScale survey carried out in 2018, Microsoft Azure cloud services are currently second only to Amazon AWS (Figure 1). In this article, I will consider Microsoft as a case study, since Microsoft CEO Satya Nadella recently shared Microsoft's interest in making AI a vital part of its business. I will now introduce you to some of the different Microsoft tools which are currently available, and some alternatives provided by the competition. Finally, we will focus on what the next steps in research are going to be.
Tree ensemble methods such as gradient boosted decision trees and random forests are among the most popular and effective machine learning tools available when working with structured data. Tree ensemble methods are fast to train, work well without a lot of tuning, and do not require large datasets to train on. In TensorFlow, gradient boosted trees are available using the tf.estimator API, which also supports deep neural networks, wide-and-deep models, and more. For boosted trees, regression with pre-defined mean squared error loss (BoostedTreesRegressor) and classification with cross entropy loss (BoostedTreesClassifier) are supported.
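Conceptually, gradient boosting for regression with squared-error loss just fits each new tree to the residuals left by the ensemble so far. The sketch below implements that idea with depth-one trees (stumps) in plain Python; it is a conceptual illustration, not the TensorFlow BoostedTreesRegressor API, and all names and data are made up.

```python
# Gradient boosting for regression with squared-error loss, sketched with
# decision stumps. Each stage fits the residuals left by the ensemble so far.
def fit_stump(xs, residuals):
    """Find the threshold split minimizing squared error; return a predictor."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def boost(xs, ys, n_stages=50, lr=0.1):
    stumps = []
    preds = [0.0] * len(xs)
    for _ in range(n_stages):
        residuals = [y - p for y, p in zip(ys, preds)]  # what's left to explain
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# Tiny made-up dataset: low values for x <= 3, high values for x > 3.
model = boost([1, 2, 3, 4, 5, 6], [1.0, 1.2, 0.9, 5.0, 5.2, 4.9])
print(round(model(1.5), 1), round(model(5.5), 1))
```

The TensorFlow estimators add regularization, per-feature quantization, and deeper trees, but the residual-fitting loop above is the core of the algorithm.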
AI is constantly in the news these days, with stories identifying prospects for the technology to do both good and bad. One topic that's generating a lot of buzz is the use of AI for creating "deepfakes," a term originally coined in 2017. Deepfakes use neural networks to combine and superimpose existing images and videos onto source images or videos, typically via a deep learning technique known as generative adversarial networks (GANs). Three of the most common deepfake techniques are known as "lip-sync," "face swap," and "puppet-master." Each of these techniques, however, can create a disconnect between the synthesized and genuine content that may be uncovered by a clever algorithm as a way to combat deepfakes.
At Fiddler Labs, we place great emphasis on model explanations being faithful to the model's behavior. Ideally, feature importance explanations should surface and appropriately quantify all, and only, those factors that are causally responsible for the prediction. This is especially important if we want explanations to be legally compliant (e.g., under GDPR, Article 13, Section 2(f), people have a right to '[information about] the existence of automated decision-making, including profiling .. and .. meaningful information about the logic involved') and actionable. Even when post-processing explanations to make them human-intelligible, we must preserve faithfulness to the model. How do we differentiate between features that are correlated with the outcome and those that cause the outcome?
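One common way to quantify feature influence is permutation importance: shuffle one feature's values and measure how much the model's error grows. Note that this measures what the model actually uses, not true causation; a feature that is merely correlated with the outcome, but ignored by the model, scores zero. The model and data below are hypothetical stand-ins:

```python
import random

# Permutation importance sketch (hypothetical model and data): shuffle one
# feature column and measure the increase in mean squared error.
def mse(model, X, y):
    return sum((model(row) - t) ** 2 for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, col, seed=0):
    rng = random.Random(seed)
    shuffled = [row[col] for row in X]
    rng.shuffle(shuffled)  # break the feature's link to the target
    X_perm = [row[:col] + [v] + row[col + 1:] for row, v in zip(X, shuffled)]
    return mse(model, X_perm, y) - mse(model, X, y)

# Hypothetical model that uses only feature 0. Feature 1 is a near-copy of
# feature 0 (highly correlated) but plays no role in the model's output.
model = lambda row: 3.0 * row[0]
X = [[float(i), float(i) + 0.1] for i in range(10)]
y = [3.0 * row[0] for row in X]

print(permutation_importance(model, X, y, col=0))  # large: the model uses it
print(permutation_importance(model, X, y, col=1))  # zero: correlated but unused
```

This is exactly why correlation-based importances can mislead: which of two correlated features gets the credit depends on the model, not on the underlying causal story.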