

Deep Learning Explainability: Hints from Physics

#artificialintelligence

Nowadays, artificial intelligence is present in almost every part of our lives. Smartphones, social media feeds, recommendation engines, online ad networks, and navigation tools are some examples of AI-based applications that already affect us every day. Deep learning in areas such as speech recognition, autonomous driving, machine translation, and visual object recognition has been systematically improving the state of the art for a while now. However, the reasons that make deep neural networks (DNNs) so powerful are only heuristically understood, i.e., we know only from experience that we can achieve excellent results by using large datasets and following specific training protocols. Recently, one possible explanation was proposed, based on a remarkable analogy between a physics-based conceptual framework called the renormalization group (RG) and a type of neural network known as a restricted Boltzmann machine (RBM).
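
To make the RBM half of that analogy concrete, here is a minimal sketch of a restricted Boltzmann machine's energy function and one Gibbs sampling step in NumPy; the layer sizes and random weights are illustrative, not taken from the article:

    import numpy as np

    rng = np.random.default_rng(0)

    n_visible, n_hidden = 8, 4                      # illustrative layer sizes
    W = rng.normal(0, 0.1, (n_visible, n_hidden))   # visible-hidden couplings
    b = np.zeros(n_visible)                         # visible biases
    c = np.zeros(n_hidden)                          # hidden biases

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def energy(v, h):
        # Standard RBM energy: E(v, h) = -b.v - c.h - v.W.h
        return -(b @ v) - (c @ h) - (v @ W @ h)

    def gibbs_step(v):
        # Sample hidden units given visible ones, then visible given hidden.
        h = (sigmoid(c + v @ W) > rng.random(n_hidden)).astype(float)
        v_new = (sigmoid(b + W @ h) > rng.random(n_visible)).astype(float)
        return v_new, h

    v = rng.integers(0, 2, n_visible).astype(float)
    v, h = gibbs_step(v)
    print("energy after one Gibbs step:", energy(v, h))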


AI Can Edit Photos With Zero Experience - Weizmann USA

#artificialintelligence

Imagine showing a photo taken through a storefront window to someone who has never opened her eyes before, and asking her to point to what's in the reflection and what's in the store. To her, everything in the photo would just be a big jumble. Computers can perform image separations, but to do it well, they typically require handcrafted rules or many, many explicit demonstrations: here's an image, and here are its component parts. New research finds that a machine-learning algorithm given just one image can discover patterns that allow it to separate the parts you want from the parts you don't. The multi-purpose method might someday benefit any area where computer vision is used, including forensics, wildlife observation, and artistic photo enhancement.


Ethical Artificial Intelligence Becomes A Supreme Competitive Advantage

#artificialintelligence

Ethical AI ensures more socially conscious approaches to customer and employee interactions and, in the long run, may be the ultimate competitive differentiator as well, a recent survey suggests. Three in five consumers who perceive their AI interactions to be ethical place higher trust in the company, spread positive word of mouth, and are more loyal. More than half of consumers participating in a recent survey say they would purchase more from a company whose AI interactions are deemed ethical. Leaders are finally sitting up and taking notice of AI ethics. As organizations progress to harness the benefits of AI, consumers, employees, and citizens are watching closely and are ready to reward or punish behavior.


Salesforce aims to bring more common sense to AI - SiliconANGLE

#artificialintelligence

Machine learning and deep learning have produced plenty of breakthroughs in recent years, from more capable speech and image recognition to self-driving cars. But one big problem with these artificial-intelligence techniques, which attempt to mimic how the brain works, is that the neural networks they employ don't have the common-sense knowledge and context that people have, such as social conventions, laws of physics, and causes and effects. That can make their decisions sometimes perplexing or downright wrong -- as anyone who uses Alexa, Google Assistant or any number of customer-assistant chatbots knows. Salesforce.com Inc.'s research team today announced a paper that outlines a way to improve that situation. In the paper, to be presented at the Association for Computational Linguistics' annual meeting July 29-Aug.


Test-Driven Machine Learning

#artificialintelligence

First, before I start, I want to say something about what that is, or what I understand from this. So, here is one interpretation. It is about using data, obviously. So, it has relationships to analytics and data science, and it is, obviously, part of AI in some way. This is my little taxonomy, how I see things linking together. You have computer science, and that has subfields like AI and software engineering; machine learning is typically considered to be a subfield of AI, but a lot of principles of software engineering apply in this area. This is what I want to talk about today.

It's heavily used in data science. So, the difference between AI and data science is somewhat fluid, if you like, but data science tries to understand what's in data and tries to answer questions about data. But then it tries to use this to make decisions, and then we are back at AI, artificial intelligence, where it's mostly about automating decision making. We have a couple of definitions. AI means making machines intelligent, and that means they can somehow function appropriately in an environment with foresight. Machine learning is a field that looks for algorithms that can automatically improve their performance without explicit programming, but by observing relevant data. And yes, I've thrown in data science as well for good measure: the scientific process of turning data into insight for making better decisions.

If you have opened any newspaper, you must have seen the discussion around the ethical dimensions of artificial intelligence, machine learning, or data science. Testing touches on that as well, because there are quite a few problems in that space, and I'm just listing two here. So, you use data, obviously, to do machine learning. Where does this data come from, and are you allowed to use it? Do you violate any privacy laws? Or are you building models that you use to make decisions about people? If you do that, then the General Data Protection Regulation in the EU says you have to be able to explain to an individual if you're making a decision based on an algorithm or a machine, if this decision has any kind of significant impact. That means, in machine learning, a lot of models are already out of the door, because you can't do that. You can't explain why a certain decision comes out of a machine learning model if you use particular models.
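
To make that explainability point concrete, here is a minimal sketch (my illustration, not from the talk) of an inherently interpretable model: a logistic regression whose per-feature weights can be read out and explained to an individual. The feature names and data are made up:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical loan data: [age, has_defaulted] -> approved?
    X = np.array([[25, 1], [40, 0], [35, 1], [50, 0]])
    y = np.array([0, 1, 0, 1])

    model = LogisticRegression().fit(X, y)

    # The sign and size of each coefficient explain the decision,
    # which is much harder to do for a deep black-box model.
    for name, coef in zip(["age", "has_defaulted"], model.coef_[0]):
        print(f"{name}: weight {coef:+.3f}")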


Illustrated Guide to LSTM's and GRU's: A step by step explanation

#artificialintelligence

I'll explain the internal mechanisms that allow LSTMs and GRUs to perform so well. If you want to understand what's happening under the hood in these two networks, then this post is for you. You can also watch the video version of this post on YouTube if you prefer. Recurrent neural networks suffer from short-term memory: if a sequence is long enough, they'll have a hard time carrying information from earlier time steps to later ones. So if you are trying to process a paragraph of text to make predictions, RNNs may leave out important information from the beginning.
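
As a rough illustration of what "under the hood" means, here is a minimal sketch of a single LSTM cell step in NumPy; the gate layout is the standard one, while the sizes and random weights are illustrative:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h_prev, c_prev, W, U, b):
        # Concatenated pre-activations for the four gates:
        # forget, input, candidate, output.
        z = W @ x + U @ h_prev + b
        f, i, g, o = np.split(z, 4)
        f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)  # gates in (0, 1)
        g = np.tanh(g)                                # candidate cell state
        c = f * c_prev + i * g     # forget old memory, write new memory
        h = o * np.tanh(c)         # expose a gated view of the cell state
        return h, c

    hidden, inputs = 16, 8         # illustrative sizes
    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.1, (4 * hidden, inputs))
    U = rng.normal(0, 0.1, (4 * hidden, hidden))
    b = np.zeros(4 * hidden)
    h, c = np.zeros(hidden), np.zeros(hidden)
    h, c = lstm_step(rng.normal(size=inputs), h, c, W, U, b)

The cell state c is the long-term memory track: the forget and input gates decide what to erase and what to write at each step, which is what lets LSTMs carry information across long sequences where a vanilla RNN would lose it.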


Explanations can be manipulated and geometry is to blame

arXiv.org Machine Learning

Explanation methods aim to make neural networks more trustworthy and interpretable. In this paper, we demonstrate a property of explanation methods which is disconcerting for both of these purposes. Namely, we show that explanations can be manipulated arbitrarily by applying barely perceptible perturbations to the input that keep the network's output approximately constant. We establish theoretically that this phenomenon can be related to certain geometrical properties of neural networks. This allows us to derive an upper bound on the susceptibility of explanations to manipulations. Based on this result, we propose effective mechanisms to enhance the robustness of explanations.
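
The explanations in question are typically gradient-based saliency maps; here is a minimal sketch of computing one in PyTorch. The model and input are placeholders, and the manipulation attack itself is omitted:

    import torch
    import torch.nn as nn

    # Placeholder model and input, not the paper's setup.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28, requires_grad=True)

    logits = model(x)
    logits[0, logits.argmax()].backward()  # gradient of the top class score
    saliency = x.grad.abs()                # |d score / d pixel| as importance

The paper's point is that a tiny change to x can reshape this saliency map almost arbitrarily while leaving the logits, and hence the prediction, essentially unchanged.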


From Clustering to Cluster Explanations via Neural Networks

arXiv.org Machine Learning

A wealth of algorithms have been developed to extract natural cluster structure in data. Identifying this structure is desirable but not always sufficient: We may also want to understand why the data points have been assigned to a given cluster. Clustering algorithms do not offer a systematic answer to this simple question. Hence we propose a new framework that can, for the first time, explain cluster assignments in terms of input features in a comprehensive manner. It is based on the novel theoretical insight that clustering models can be rewritten as neural networks, or 'neuralized'. Predictions of the obtained networks can then be quickly and accurately attributed to the input features. Several showcases demonstrate the ability of our method to assess the quality of learned clusters and to extract novel insights from the analyzed data and representations.
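
A minimal sketch of how I read the "neuralization" idea: k-means assignments rewritten as a differentiable softmax over negative squared distances, so that a standard attribution technique (plain input gradients here) can be applied. The centroids and data point are illustrative, not the paper's:

    import torch

    centroids = torch.tensor([[0.0, 0.0], [3.0, 3.0]])  # illustrative clusters
    x = torch.tensor([2.0, 2.5], requires_grad=True)    # point to explain

    # k-means as a network: cluster logits = negative squared distances,
    # soft assignment = softmax over those logits.
    logits = -((x - centroids) ** 2).sum(dim=1)
    assignment = torch.softmax(logits, dim=0)

    # Attribute the winning cluster's score to the input features.
    logits[assignment.argmax()].backward()
    print("cluster probs:", assignment.detach())
    print("feature attribution:", x.grad)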


Model Explanations under Calibration

arXiv.org Artificial Intelligence

Explaining and interpreting the decisions of recommender systems are becoming extremely relevant, both for improving predictive performance and for providing valid explanations to users. While most of the recent interest has focused on providing local explanations, there has been much less emphasis on studying the effects of model dynamics and their impact on explanations. In this paper, we perform a focused study of the impact of model interpretability in the context of calibration. Specifically, we address the challenges of both over-confident and under-confident predictions with interpretability using attention distributions. Our results indicate that attention distributions used for interpretability are highly unstable for uncalibrated models. Our empirical analysis of the stability of attention distributions raises questions about the utility of attention for explainability.
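
Calibration in this context is often done with something like temperature scaling; here is a minimal sketch in PyTorch, with made-up validation logits and labels standing in for a real model's outputs:

    import torch
    import torch.nn.functional as F

    # Made-up validation logits/labels; a real model would supply these.
    logits = torch.tensor([[4.0, 0.5], [3.5, 0.2], [0.1, 2.0]])
    labels = torch.tensor([0, 1, 1])

    # Temperature scaling: fit a single scalar T > 0 that rescales the
    # logits to minimize NLL, leaving the argmax (accuracy) unchanged.
    log_T = torch.zeros(1, requires_grad=True)
    opt = torch.optim.LBFGS([log_T], lr=0.1, max_iter=50)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_T.exp(), labels)
        loss.backward()
        return loss

    opt.step(closure)
    print("fitted temperature:", log_T.exp().item())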


Top 10 Books on Artificial Intelligence You Cannot Afford to Miss - Analytics Insight

#artificialintelligence

Artificial intelligence is the need of the hour. The technology is neither elementary-school math nor rocket science. An understanding of AI not only allows business decision makers and enthusiasts to make advancements in technology but also lets them improve processes. Another term doing the rounds is artificial general intelligence (AGI), which refers to human-level cognitive ability that would let automated systems think and work like a human mind. So how do you benefit from AI and the latest advancements around it?