Data science cowboys are exacerbating the AI and analytics challenge

#artificialintelligence

Below, Dr Scott Zoldi, chief analytics officer at analytic software firm FICO, explains to Information Age why citizen data scientists and data science cowboys could cause catastrophic failures in a business's AI and analytics ambitions. Although the future will see fast-paced adoption of AI and benefits across all types of businesses, we will also see catastrophic failures caused by the over-extension of analytic tools and the rise of citizen data scientists and data science cowboys. The former have no data science training but use analytic tooling and methods to bring analytics into their businesses; the latter have data science training but disregard the right way to handle AI. Citizen data scientists often use algorithms and technology they don't understand, which can lead to inappropriate use of their AI tools; the risk from data science cowboys is that they build AI models that may incorporate non-causal relationships learned from limited data, spurious correlations and outright bias -- which could have serious consequences for driverless car systems, for example. Today's AI threat stems from the efforts of both citizen data scientists and data science cowboys to tame complex machine learning algorithms for business outcomes.


Explainable Neural Networks based on Additive Index Models

arXiv.org Machine Learning

Machine learning algorithms have seen increasing use in recent years due to their flexibility in model fitting and their predictive performance. However, the complexity of these models makes it hard for the data analyst to interpret the results and explain them without additional tools. This has led to much research into approaches for understanding model behavior. In this paper, we present the Explainable Neural Network (xNN), a structured neural network designed specifically to learn interpretable features. Unlike fully connected neural networks, the features engineered by the xNN can be extracted from the network in a relatively straightforward manner and the results displayed. With appropriate regularization, the xNN provides a parsimonious explanation of the relationship between the features and the output. We illustrate this interpretable feature-engineering property on simulated examples.
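The additive-index structure behind the xNN is that the output is modeled as a sum of univariate "ridge functions" applied to learned linear projections of the inputs, so each learned feature can be inspected on its own. The following is a minimal sketch of that idea, assuming PyTorch; the layer sizes, names, and number of subnetworks are illustrative choices, not taken from the paper.

```python
# Hypothetical sketch of an additive-index-style network (xNN-like structure).
# Assumes PyTorch; sizes and names are illustrative, not from the paper.
import torch
import torch.nn as nn

class AdditiveIndexNet(nn.Module):
    def __init__(self, in_dim, n_subnets=5, hidden=16):
        super().__init__()
        # Learned projections beta_k^T x: one scalar index per subnetwork.
        self.projection = nn.Linear(in_dim, n_subnets, bias=False)
        # Each subnetwork h_k is a small univariate MLP (a ridge function).
        self.subnets = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))
            for _ in range(n_subnets)
        ])
        # Output combination: mu + sum_k gamma_k * h_k(beta_k^T x).
        self.combine = nn.Linear(n_subnets, 1)

    def forward(self, x):
        indices = self.projection(x)                      # (batch, n_subnets)
        ridge_outputs = torch.cat(
            [h(indices[:, k:k + 1]) for k, h in enumerate(self.subnets)], dim=1
        )                                                 # (batch, n_subnets)
        return self.combine(ridge_outputs), ridge_outputs

model = AdditiveIndexNet(in_dim=10)
y_hat, contributions = model(torch.randn(32, 10))
# Each column of `contributions` is one univariate ridge function h_k(beta_k^T x),
# which can be plotted against its index to inspect what the network learned.
```

Sparsity-inducing penalties on the projection and combination weights (the "appropriate regularization" the abstract mentions) would keep the resulting explanation parsimonious.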


Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning

arXiv.org Machine Learning

There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias or problems in the training data, and ensure that the algorithms perform as expected. However, the explanations produced by these systems are neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we provide our definition of explainability and show how it can be used to classify the existing literature. We discuss why current approaches to explanatory methods, especially for deep neural networks, are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.
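One widely used style of post-hoc explanation for deep networks is input attribution, e.g. a gradient saliency score for each input feature. Below is a minimal sketch of that idea, assuming PyTorch; the toy model and data are illustrative only and are not the paper's method or evaluation.

```python
# Hypothetical gradient-saliency sketch: attribute a prediction to input features.
# Assumes PyTorch; the toy classifier and random input are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(1, 20, requires_grad=True)      # one example with 20 features

logits = model(x)
predicted_class = logits.argmax(dim=1).item()
# Gradient of the predicted-class score with respect to the input features:
logits[0, predicted_class].backward()
saliency = x.grad.abs().squeeze()               # larger = more influential feature

print(saliency.topk(5).indices)                 # the five most influential features
```

Whether such a saliency map is a faithful or useful explanation is exactly the kind of question the survey argues is not yet standardized or systematically assessed.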


Is Deep Learning Going to be Illegal in Europe?

#artificialintelligence

In a matter of months, the General Data Protection Regulation (GDPR) will become law throughout Europe, requiring a complete overhaul in the way artificial intelligence techniques are used in business settings. By May 25, the GDPR will be fully enforceable throughout the European Union, according to the EU GDPR timeline.


The 2019 Data Science Dictionary -- Key Terms You Need to Know

#artificialintelligence

Activation function: In neural networks, linear and non-linear activation functions produce a network's output decision boundaries by transforming its weighted inputs. The ReLU (Rectified Linear Unit) is the most commonly used activation function right now, although the Tanh (hyperbolic tangent) and Sigmoid (logistic) activation functions are also used.

Backpropagation: For this definition, I defer to a nice one I found by data scientist Mikio L. Braun on Quora: "Back prop is just gradient descent on individual errors. You compare the predictions of the neural network with the desired output and then compute the gradient of the errors with respect to the weights of the neural network. This gives you a direction in the parameter weight space in which the error would become smaller."

Blockchain: Blockchain is essentially a decentralized, distributed database.
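The activation-function and backpropagation entries above can be made concrete in a few lines. This is a hedged NumPy sketch, not from the article: the toy data, layer sizes, and learning rate are illustrative choices. It applies the three activation functions named above and then performs one backpropagation step, i.e. gradient descent on the prediction error.

```python
# Hypothetical NumPy sketch of the dictionary entries above: the three named
# activation functions and one backpropagation (gradient-descent) step on a
# tiny network. Data, layer sizes, and learning rate are illustrative choices.
import numpy as np

def relu(z):    return np.maximum(0.0, z)
def tanh(z):    return np.tanh(z)          # shown for completeness
def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                # 4 samples, 3 features
y = rng.integers(0, 2, size=(4, 1))        # binary targets

W1 = rng.normal(size=(3, 5))
W2 = rng.normal(size=(5, 1))

# Forward pass: weighted inputs passed through activation functions.
h = relu(x @ W1)                           # hidden layer (ReLU)
p = sigmoid(h @ W2)                        # output layer (sigmoid)
error = p - y                              # prediction vs. desired output

# Backpropagation: gradient of the squared error with respect to each weight
# matrix, then a small step in the direction that makes the error smaller.
delta_out = error * p * (1 - p)
grad_W2 = h.T @ delta_out
grad_W1 = x.T @ ((delta_out @ W2.T) * (h > 0))
lr = 0.1
W2 -= lr * grad_W2
W1 -= lr * grad_W1
```

The last few lines are exactly the Quora description quoted above: compute the gradient of the errors with respect to the weights, then move the weights in the direction that reduces the error.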