Collaborating Authors

Why AI and human perception are too complex to be compared

#artificialintelligence

In the seemingly endless quest to reconstruct human perception, the field that has become known as computer vision, deep learning has so far yielded the most favorable results. Convolutional neural networks (CNNs), an architecture often used in deep learning algorithms for computer vision, are accomplishing tasks that were extremely difficult with traditional software. However, comparing neural networks to human perception remains a challenge, partly because we still have a lot to learn about the human visual system and the human brain in general.
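To make the architecture concrete, below is a minimal sketch of a small CNN image classifier in PyTorch. The layer sizes, 32x32 RGB input, and ten-class output are illustrative assumptions, not details of any system discussed in these entries.

```python
# Minimal CNN image classifier sketch (PyTorch). Layer sizes, the 3x32x32
# input resolution, and the 10-class output are illustrative assumptions.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 32 -> 16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Usage: a batch of four 32x32 RGB images yields one score per class.
logits = SmallCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```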


Computer vision: Why it's hard to compare AI and human perception

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Claims of human-level performance and human-level accuracy come up a lot from companies developing artificial intelligence systems, whether for facial recognition, object detection, or question answering. And to their credit, recent years have seen many great products powered by AI algorithms, mostly thanks to advances in machine learning and deep learning. But many of these comparisons only take into account the end result of testing the deep learning algorithms on limited data sets. This approach can create false expectations about AI systems and yield dangerous results when they are entrusted with critical tasks.


Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey

arXiv.org Artificial Intelligence

Nowadays, deep neural networks are widely used in mission-critical systems such as healthcare, self-driving vehicles, and military applications, which have a direct impact on human lives. However, the black-box nature of deep neural networks challenges their use in mission-critical applications, raising ethical and judicial concerns and inducing a lack of trust. Explainable Artificial Intelligence (XAI) is a field of Artificial Intelligence (AI) that promotes a set of tools, techniques, and algorithms that can generate high-quality, interpretable, intuitive, human-understandable explanations of AI decisions. In addition to providing a holistic view of the current XAI landscape in deep learning, this paper provides mathematical summaries of seminal work. We start by proposing a taxonomy and categorizing the XAI techniques based on their scope of explanations, the methodology behind the algorithms, and the explanation level or usage, which helps build trustworthy, interpretable, and self-explanatory deep learning models. We then describe the main principles used in XAI research and present the historical timeline for landmark studies in XAI from 2007 to 2020. After explaining each category of algorithms and approaches in detail, we then evaluate the explanation maps generated by eight XAI algorithms on image data, discuss the limitations of this approach, and provide potential future directions to improve XAI evaluation.
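As a simplified illustration of what an explanation map is, the sketch below computes a vanilla gradient saliency map for an image classifier in PyTorch. It is not one of the eight algorithms evaluated in the survey; the ResNet-18 backbone and 224x224 input are placeholder assumptions.

```python
# Vanilla gradient saliency sketch (PyTorch): how strongly each input pixel
# influences the predicted class score. The backbone and 224x224 input are
# placeholder assumptions, not the survey's experimental setup.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # untrained placeholder backbone
image = torch.randn(1, 3, 224, 224, requires_grad=True)

scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()                # d(score) / d(pixel)

# Aggregate absolute gradients over color channels into a 2D saliency map.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([224, 224])
```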


Generating Token-Level Explanations for Natural Language Inference

arXiv.org Machine Learning

The task of Natural Language Inference (NLI) is widely modeled as supervised sentence pair classification. While there has been a lot of work recently on generating explanations of the predictions of classifiers on a single piece of text, there have been no attempts to generate explanations of classifiers operating on pairs of sentences. In this paper, we show that it is possible to generate token-level explanations for NLI without the need for training data explicitly annotated for this purpose. We use a simple LSTM architecture and evaluate both LIME and Anchor explanations for this task. We compare these to a Multiple Instance Learning (MIL) method that uses thresholded attention to make token-level predictions. The approach we present in this paper is a novel extension of zero-shot single-sentence tagging to sentence pairs for NLI. We conduct our experiments on the well-studied SNLI dataset, which was recently augmented with manual annotation of the tokens that explain the entailment relation. We find that our white-box MIL-based method, while orders of magnitude faster, does not reach the same accuracy as the black-box methods.
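To illustrate the thresholding step in a MIL-style approach, here is a hypothetical sketch: given per-token attention weights from some sentence-pair model, tokens whose weight exceeds a threshold are tagged as explanation tokens. The tokens, weights, and threshold below are made-up placeholders, not output from the paper's LSTM/MIL model.

```python
# Hypothetical sketch of turning per-token attention weights into token-level
# explanation predictions by thresholding. Values are made-up placeholders.
import torch

tokens = ["a", "man", "is", "sleeping", "outdoors"]
attention = torch.tensor([0.02, 0.10, 0.03, 0.55, 0.30])  # assumed to sum to 1
threshold = 0.15

selected = attention > threshold
explanation_tokens = [t for t, keep in zip(tokens, selected.tolist()) if keep]
print(explanation_tokens)  # ['sleeping', 'outdoors']
```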


Learning Joint Acoustic-Phonetic Word Embeddings

arXiv.org Machine Learning

Most speech recognition tasks pertain to mapping words across two modalities: acoustic and orthographic. In this work, we suggest learning encoders that map variable-length acoustic or phonetic sequences that represent words into fixed-dimensional vectors in a shared latent space, such that the distance between two word vectors represents how closely the two words sound. Instead of directly learning the distances between word vectors, we employ weak supervision and model a binary classification task to predict whether two inputs, one of each modality, represent the same word given a distance threshold. We explore various deep-learning models, bimodal contrastive losses, and techniques for mining hard negative examples, such as the semi-supervised technique of self-labeling. Our best model achieves an $F_1$ score of 0.95 for the binary classification task.
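The weak-supervision objective can be pictured as follows: encode an acoustic sequence and a phonetic sequence into the shared space, compute the distance between the two vectors, and apply a binary loss on whether that distance falls under a threshold (i.e., whether the two inputs are the same word). The sketch below is a simplified illustration with placeholder encoders, feature dimensions, and margin, not the paper's architecture.

```python
# Simplified sketch of the same-word / different-word objective: two encoders
# map variable-length inputs into a shared space, and a logistic loss is
# applied to (margin - distance). Encoders, dimensions, and margin are
# assumptions for illustration only.
import torch
import torch.nn as nn

EMBED_DIM = 128

class SequenceEncoder(nn.Module):
    """Placeholder encoder: GRU over feature frames -> fixed-size vector."""
    def __init__(self, input_dim: int):
        super().__init__()
        self.rnn = nn.GRU(input_dim, EMBED_DIM, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, h = self.rnn(x)            # h: (1, batch, EMBED_DIM)
        return h.squeeze(0)

acoustic_enc = SequenceEncoder(input_dim=40)   # e.g. 40-dim filterbank frames
phonetic_enc = SequenceEncoder(input_dim=50)   # e.g. phone-symbol features

acoustic = torch.randn(8, 120, 40)             # batch of 8, 120 frames each
phonetic = torch.randn(8, 12, 50)              # batch of 8, 12 phones each
same_word = torch.randint(0, 2, (8,)).float()  # weak labels: 1 = same word

distance = torch.norm(acoustic_enc(acoustic) - phonetic_enc(phonetic), dim=1)
margin = 1.0                                   # assumed distance threshold
loss = nn.functional.binary_cross_entropy_with_logits(margin - distance, same_word)
print(loss.item())
```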