If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Many NLP applications today deploy state-of-the-art deep neural networks that are essentially black boxes. One of the goals of Explainable AI (XAI) is to have AI models reveal why and how they make their predictions so that these predictions are interpretable by a human. But work in this direction has been conducted on different datasets with correspondingly unique aims, and the inherent subjectivity in defining what constitutes 'interpretability' has resulted in no standard way to evaluate performance. Interpretability can mean multiple things depending on the task and context. The Evaluating Rationales And Simple English Reasoning (ERASER) benchmark is the first effort to unify and standardize NLP tasks with the goal of interpretability.
Arguably, one of the biggest debates happening in data science in 2019 is the need for AI explainability. The ability to interpret machine learning models is turning out to be a defining factor for the acceptance of statistical models for driving business decisions. Enterprise stakeholders are demanding transparency in how and why these algorithms are making specific predictions. A firm understanding of any inherent bias in machine learning keeps rising to the top of the list of requirements for data science teams. As a result, many top vendors in the big data ecosystem are launching new tools to take a stab at resolving the challenge of opening the AI "black box."
Selection of input features, such as relevant pieces of text, has become a common technique for highlighting how complex neural predictors operate. The selection can be optimized post hoc for trained models or incorporated directly into the method itself (self-explaining). However, an overall selection does not properly capture the multi-faceted nature of useful rationales, such as pros and cons for decisions. To this end, we propose a new game-theoretic approach to class-dependent rationalization, where the method is specifically trained to highlight evidence supporting alternative conclusions. Each class involves three players set up competitively to find evidence for factual and counterfactual scenarios. We show theoretically, in a simplified scenario, how the game drives the solution towards meaningful class-dependent rationales. We evaluate the method on single- and multi-aspect sentiment classification tasks and demonstrate that the proposed method is able to identify both factual (justifying the ground truth label) and counterfactual (countering the ground truth label) rationales consistent with human rationalization. The code for our method is publicly available.
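The idea of class-dependent rationales can be illustrated with a toy sketch. This is not the paper's adversarial three-player game; it is a minimal stand-in that assumes hypothetical per-token scores (which a trained model would produce) and simply picks the top-k tokens supporting each side:

```python
def extract_rationales(tokens, word_scores, k=2):
    """Toy class-dependent rationale selection: for each class, pick the
    k tokens whose (hypothetical) scores most support it. Positive scores
    support the predicted label (factual evidence); negative scores
    support the alternative label (counterfactual evidence)."""
    scored = [(t, word_scores.get(t, 0.0)) for t in tokens]
    factual = sorted(scored, key=lambda x: -x[1])[:k]        # pro evidence
    counterfactual = sorted(scored, key=lambda x: x[1])[:k]  # con evidence
    return [t for t, _ in factual], [t for t, _ in counterfactual]

# Hypothetical sentiment example with made-up token scores.
tokens = "the food was great but the service was awful".split()
scores = {"great": 2.0, "awful": -2.5, "food": 0.3, "service": -0.2}
pros, cons = extract_rationales(tokens, scores)
# pros highlights evidence for the ground-truth label,
# cons highlights evidence for the opposing label.
```

In the actual method the two selections are produced by competing players trained end to end, rather than read off fixed scores as here.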
The rise of 'deep learning' has caused a lot of excitement around the revolutionary capabilities of these artificially intelligent agents. But it's also raised fear and suspicion about what exactly is going on inside each algorithm. One way for us to gain some understanding of our silicon-based friends (or foes?) is for them to disclose their framework of decision-making in a way that we humans can understand – by using the concept of personality. My research explores how some of these deep learning agents can be better understood through their 'personalities' – like whether they are 'greedy', 'selfish' or 'prudent'. We are now at the dawn of a new era in AI technology – a so-called fourth industrial revolution that will reshape every industry.
Researchers are proposing a framework that would allow users to understand the rationale behind artificial intelligence (AI) decisions. The work is significant, given the push to move away from "black box" AI systems – particularly in sectors such as the military and law enforcement, where there is a need to justify decisions. "One thing that sets our framework apart is that we make these interpretability elements part of the AI training process," says Tianfu Wu, first author of the paper and an assistant professor of computer engineering at North Carolina State University. "For example, under our framework, when an AI program is learning how to identify objects in images, it is also learning to localize the target object within an image, and to parse what it is about that locality that meets the target object criteria. This information is then presented alongside the result."
The task of Visual Commonsense Reasoning is extremely challenging in that the model must not only be able to answer a question about a given image, but also learn to reason. The baselines introduced for this task are quite limiting because two networks are trained to predict answers and rationales separately. Question and image are used as input to train the answer-prediction network, while question, image, and correct answer are used as input to the rationale-prediction network. Because the rationale is conditioned on the correct answer, this rests on the assumption that the Visual Question Answering task can be solved without any error, which is overambitious. Moreover, such an approach makes answer and rationale prediction two completely independent VQA tasks, rendering the cognition task meaningless. In this paper, we seek to address these issues by proposing an end-to-end trainable model that considers answers and their reasons jointly. Specifically, we first predict the answer to the question and then use the chosen answer to predict the rationale. However, a trivial design of such a model is non-differentiable, which makes it difficult to train. We solve this issue by proposing four approaches: softmax, Gumbel-softmax, reinforcement-learning-based sampling, and direct cross-entropy against all pairs of answers and rationales. We demonstrate through experiments that our model performs competitively against the current state of the art. We conclude with an analysis of the presented approaches and discuss avenues for further work.
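The Gumbel-softmax trick mentioned above is what lets a discrete answer choice stay differentiable so the rationale network can be trained through it. A minimal pure-Python sketch of the sampling step (the logits here are made up; in the model they would come from the answer-prediction head):

```python
import math
import random

random.seed(0)  # for reproducibility of the noise draws

def gumbel_softmax(logits, tau=1.0):
    """Soft sample from a categorical distribution: add Gumbel(0,1)
    noise to each logit, then apply a temperature-scaled softmax.
    As tau -> 0 the output approaches a one-hot sample, while gradients
    can still flow through the continuous relaxation."""
    gumbels = [-math.log(-math.log(random.random())) for _ in logits]
    perturbed = [(l + g) / tau for l, g in zip(logits, gumbels)]
    m = max(perturbed)  # subtract max for numerical stability
    exps = [math.exp(p - m) for p in perturbed]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical logits over four candidate answers.
probs = gumbel_softmax([2.0, 0.5, -1.0, 0.1], tau=0.5)
```

The resulting soft weights can multiply the answer embeddings before they are fed to the rationale network, keeping the whole answer-then-rationale pipeline trainable end to end.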
Banks have been in the business of deciding who is eligible for credit for centuries. But in the age of artificial intelligence (AI), machine learning (ML), and big data, digital technologies have the potential to transform credit allocation in positive as well as negative directions. Given the mix of possible societal ramifications, policymakers must consider what practices are and are not permissible and what legal and regulatory structures are necessary to protect consumers against unfair or discriminatory lending practices. In this paper, I review the history of credit and the risks of discriminatory practices. I discuss how AI alters the dynamics of credit denials and what policymakers and banking officials can do to safeguard consumer lending.
Algorithmic systems (such as those deciding mortgage applications, or sentencing decisions) can be very difficult to understand, for experts as well as the general public. The EU General Data Protection Regulation (GDPR) has sparked much discussion about the "right to explanation" for the algorithm-supported decisions made about us in our everyday lives. While there's an obvious need for transparency in the automated decisions that are increasingly being made in areas like policing, education, healthcare and recruitment, explaining how these complex algorithmic decision-making systems arrive at any particular decision is a technically challenging problem--to put it mildly. In their article "Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR" which is forthcoming in the Harvard Journal of Law & Technology, Sandra Wachter, Brent Mittelstadt, and Chris Russell present the concept of "unconditional counterfactual explanations" as a novel type of explanation of automated decisions that could address many of these challenges. Counterfactual explanations describe the minimum conditions that would have led to an alternative decision (e.g. a bank loan being approved), without the need to describe the full logic of the algorithm.
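A counterfactual explanation of the kind Wachter et al. describe can be sketched without opening the black box at all: search for the smallest change to an input that flips the decision. The scoring rule and numbers below are entirely illustrative, not any real credit model:

```python
def loan_approved(income, debt):
    # Toy stand-in for an opaque model (illustrative weights only).
    return income * 0.6 - debt * 0.4 > 30.0

def counterfactual_income(income, debt, step=1.0, max_iter=1000):
    """Smallest income increase (in fixed steps) that flips a denial
    into an approval, holding the other feature fixed. This yields an
    explanation of the form: 'the loan would have been approved had
    income been X instead of Y', without describing the model's logic."""
    cf = income
    for _ in range(max_iter):
        if loan_approved(cf, debt):
            return cf
        cf += step
    return None  # no counterfactual found within the search budget

needed = counterfactual_income(income=40.0, debt=20.0)
# needed is the minimum income at which the toy model approves the loan.
```

Real counterfactual search optimizes over all features at once, typically minimizing a distance to the original input subject to the decision flipping; the greedy single-feature loop above only conveys the idea.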
Without question, 2018 was a big year for artificial intelligence (AI) as it pushed even further into the mainstream, successfully automating more functionality than ever before. Companies are increasingly exploring applications for AI, and the general public has grown accustomed to interacting with the technology on a daily basis. The stage is set for AI to continue transforming the world as we know it. In 2019, not only will the technology continue growing in global prevalence, but it will also spawn deeper conversations around important topics, fuel innovative business models, and impact society in new ways, including the following seven. In 2018, we witnessed major strides in MLaaS with technology powerhouses like Google, Microsoft, and Amazon leading the way.