Data science cowboys are exacerbating the AI and analytics challenge


Below, Dr Scott Zoldi, chief analytics officer at analytic software firm FICO, explains to Information Age why data science cowboys and citizen data scientists could cause catastrophic failures in a business's AI and analytics ambitions. Although the future will see fast-paced adoption of AI across all types of businesses, and the benefits that come with it, we will also see catastrophic failures due to the over-extension of analytic tools and the rise of citizen data scientists and data science cowboys. The former lack data science training but use analytic tooling and methods to bring analytics into their businesses; the latter have data science training but disregard the right way to handle AI. Citizen data scientists often use algorithms and technology they don't understand, which can lead to inappropriate use of their AI tools; the risk from data science cowboys is that they build AI models that incorporate non-causal relationships learned from limited data, spurious correlations and outright bias, which could have serious consequences for driverless car systems, for example. Today's AI threat stems from the efforts of both citizen data scientists and data science cowboys to tame complex machine learning algorithms for business outcomes.

Explainable Neural Networks based on Additive Index Models (Machine Learning)

Machine Learning algorithms have seen increasing use in recent years due to their flexibility in model fitting and improved predictive performance. However, the complexity of these models makes it hard for the data analyst to interpret and explain their results without additional tools. This has led to much research into approaches for understanding model behavior. In this paper, we present the Explainable Neural Network (xNN), a structured neural network designed especially to learn interpretable features. Unlike fully connected neural networks, the features engineered by the xNN can be extracted from the network in a relatively straightforward manner and the results displayed. With appropriate regularization, the xNN provides a parsimonious explanation of the relationship between the features and the output. We illustrate this interpretable feature-engineering property on simulated examples.
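The structure the abstract describes can be sketched in a few lines. Below is a minimal numpy illustration, not the paper's implementation: an additive index model f(x) = sum_k h_k(beta_k^T x), where each subnetwork h_k is a tiny MLP applied to one linear projection of the input. All weights, sizes, and function names here are illustrative assumptions; the point is that each additive component can be pulled out and inspected on its own, which is the extractable-features property the abstract claims.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_subnet(z, w1, b1, w2, b2):
    """One subnetwork h_k: a tiny MLP applied to a scalar projection z."""
    return np.tanh(z * w1 + b1) @ w2 + b2

# Illustrative xNN: f(x) = sum_k h_k(beta_k^T x)  (weights are arbitrary here)
n_features, n_subnets, hidden = 4, 3, 8
betas = rng.normal(size=(n_subnets, n_features))   # projection weights beta_k
params = [(rng.normal(size=hidden), rng.normal(size=hidden),
           rng.normal(size=hidden), rng.normal()) for _ in range(n_subnets)]

def xnn_forward(x):
    # Each additive component is available separately -- plotting
    # h_k against beta_k^T x is what makes the model interpretable.
    components = [ridge_subnet(beta @ x, *p) for beta, p in zip(betas, params)]
    return sum(components), components

x = rng.normal(size=n_features)
total, parts = xnn_forward(x)
# The output is exactly the sum of its interpretable parts.
assert np.isclose(total, sum(parts))
```

In the paper the projections and subnetworks are trained jointly with regularization that drives unneeded components toward zero; the sketch above only shows the architecture's additive decomposition.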

Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning (Machine Learning)

There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insight into their behavior and thought processes. XAI makes such systems more transparent to users and to other parts of the system, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, to identify potential bias or problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems are neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we provide our definition of explainability and show how it can be used to classify existing literature. We discuss why current approaches to explanatory methods, especially for deep neural networks, are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.

Towards Interpretable Deep Neural Networks: An Exact Transformation to Multi-Class Multivariate Decision Trees (Machine Learning)

Deep neural networks (DNNs) are commonly labelled as black boxes lacking interpretability, hindering humans' understanding of DNNs' behaviors. There is a need to generate a meaningful sequential logic explaining how a specific output is produced. Decision trees exhibit better interpretability and expressive power due to their representation language and the existence of efficient algorithms to generate rules. However, growing a decision tree from the available data alone can produce larger-than-necessary trees or trees that do not generalise well. In this paper, we introduce two novel multivariate decision tree (MDT) algorithms for rule extraction from a DNN: an Exact-Convertible Decision Tree (EC-DT) and a Deep C-Net algorithm to transform a neural network with Rectified Linear Unit activation functions into a representative tree from which multivariate rules for reasoning can be extracted. While the EC-DT translates the DNN in a layer-wise manner to represent exactly the decision boundaries implicitly learned by the hidden layers of the network, the Deep C-Net inherits the decompositional approach from EC-DT and combines it with a C5 tree-learning algorithm to construct the decision rules. The results suggest that while EC-DT is superior in preserving the structure and the accuracy of the DNN, C-Net generates the most compact and highly effective trees from the DNN. Both proposed MDT algorithms generate rules that combine multiple attributes for precise interpretation of decision-making processes.
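The key fact behind an exact layer-wise translation of a ReLU network is that, wherever the pattern of active ReLU units is fixed, the whole network collapses to an affine map, so each activation region yields a multivariate linear rule. The sketch below is an assumption-laden illustration of that idea on a tiny one-hidden-layer network (arbitrary weights, hypothetical function names), not the EC-DT algorithm itself, which handles multiple layers and organises the regions into a tree.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny one-hidden-layer ReLU network (weights are arbitrary for illustration).
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)   # hidden layer
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)   # output layer

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def linear_rule_for(x):
    """Return the multivariate linear rule active at input x.

    Within the region where the ReLU activation pattern s is constant,
    the network reduces to an affine map y = A x + c.  The rule's
    conditions are the linear inequalities (W1 x + b1)_i > 0 (for s_i = 1)
    or <= 0 (for s_i = 0) that carve out the region.
    """
    s = (W1 @ x + b1 > 0).astype(float)   # activation pattern at x
    A = W2 @ (s[:, None] * W1)            # effective weights for this region
    c = W2 @ (s * b1) + b2                # effective bias for this region
    return s, A, c

x = np.array([0.5, -1.0])
s, A, c = linear_rule_for(x)
# Inside its region, the extracted linear rule reproduces the DNN exactly.
assert np.allclose(net(x), A @ x + c)
```

Enumerating all activation patterns layer by layer, and keeping only the feasible regions, is what makes the translation exact but potentially large; the paper's C-Net variant trades some of that exactness for compactness via C5 tree learning.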

Will 2018 be the year blockchain and artificial intelligence meet?


The growing use of blockchain technology in financial services will include a healthy dose of artificial intelligence, as new, automated analytic techniques look for patterns in the "relationship data" about people, contracts and transactions.