Explaining Machine Learning and Artificial Intelligence in Collections to the Regulator

#artificialintelligence

There is significant growth in the application of machine learning (ML) and artificial intelligence (AI) techniques within collections, as they have been proven to create countless efficiencies: from enhancing the results of predictive models to powering AI bots that interact with customers, leaving staff free to address more complex issues. At present, one of the major constraints on using this advanced technology is the difficulty of explaining the decisions made by these solutions to regulators. This regulatory focus is unlikely to diminish, especially as examples of AI bias continue to be uncovered across applications, resulting in discriminatory behavior towards different groups of people. While collections-specific regulations remain somewhat undefined on the subject, major institutions are falling back on their broader policy: namely, that any decision needs to be fully explainable. Although there are explainable AI (xAI) techniques that can help us gain deeper insights from ML models, such as FICO's xAI Toolkit, the path to achieving sign-off within an organization can be a challenge.


Machine Learning Interpretability: Explaining Blackbox Models with LIME

#artificialintelligence

This is the second part of our series on machine learning interpretability. Here we describe LIME (Local Interpretable Model-agnostic Explanations), a popular technique for explaining blackbox models. It was proposed by Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin in their paper "Why Should I Trust You?": Explaining the Predictions of Any Classifier, first presented at the ACM SIGKDD Conference on Knowledge Discovery and Data Mining in 2016. Please check out our previous article if you are not familiar with the concept of interpretability. There we distinguished between model-specific and model-agnostic techniques, as well as between global and local techniques.
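
To make the technique concrete, below is a minimal sketch of LIME on tabular data using the open-source `lime` Python package; the dataset and random-forest model are illustrative assumptions, not taken from the article.

```python
# A minimal LIME sketch on tabular data. The dataset and model are
# illustrative choices; any classifier exposing predict_proba works.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target

# The blackbox model f whose predictions we want to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the instance, queries the blackbox on the perturbations,
# and fits a locally weighted linear surrogate; its coefficients are the
# explanation.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top features and their local weights
```

Because the surrogate is fit only in the neighborhood of the chosen instance, the explanation is local and model-agnostic: nothing about the random forest's internals is used.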


Understanding how LIME explains predictions – Towards Data Science

#artificialintelligence

This procedure can also be applied to images. Note that, in addition to a blackbox model (classifier or regressor) f and an instance to explain y (with its interpretable representation y'), the procedure requires setting in advance the number of samples N, the kernel width σ, and the length of the explanation K. In future posts I will show how to explain predictions of ML models with LIME in R and Python.
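
As a hedged sketch of how those three parameters map onto the `lime` package's image API: num_samples corresponds to N, kernel_width to σ, and num_features to K. The placeholder classifier and random image below are assumptions for illustration, not from the article.

```python
# Hypothetical sketch: mapping N, σ, and K onto lime's image explainer.
import numpy as np
from lime.lime_image import LimeImageExplainer

def classifier_fn(images):
    """Placeholder blackbox f: returns class probabilities for a batch of
    images. Swap in your own model's prediction function here."""
    rng = np.random.default_rng(0)
    return rng.dirichlet(np.ones(3), size=len(images))

image = np.random.rand(64, 64, 3)  # the instance to explain (y)

explainer = LimeImageExplainer(kernel_width=0.25)   # kernel width σ
explanation = explainer.explain_instance(
    image,
    classifier_fn,
    top_labels=1,
    num_samples=1000,                               # number of samples N
)
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label,
    positive_only=True,
    num_features=5,                                 # explanation length K
)
```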


Unifying Topic, Sentiment & Preference in an HDP-Based Rating Regression Model for Online Reviews

arXiv.org Machine Learning

This paper proposes a new HDP-based online review rating regression model named Topic-Sentiment-Preference Regression Analysis (TSPRA). TSPRA combines topics (i.e., product aspects), word sentiment, and user preference as regression factors, and is able to perform topic clustering, review rating prediction, sentiment analysis, and what we call "critical aspect" analysis in a single framework. TSPRA extends sentiment approaches by taking into consideration the key concept of "user preference" from collaborative filtering (CF) models, while remaining distinct from current CF models by decoupling "user preference" and "sentiment" as independent factors. Our experiments conducted on 22 Amazon datasets show overwhelmingly better performance in rating prediction against the state-of-the-art model FLAME (2015) in terms of error, Pearson's correlation, and number of inverted pairs. For sentiment analysis, we compare the derived word sentiments against the public sentiment resource SenticNet3, and our sentiment estimations clearly make more sense in the context of online reviews. Last, as a result of decoupling "user preference" from "sentiment", TSPRA is able to evaluate a new concept, "critical aspects", defined as product aspects that are of serious concern to users but are commented on negatively in reviews. Improving such "critical aspects" could be the most effective way to enhance user experience.
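
To make the abstract's three evaluation criteria concrete, here is a toy illustration (not the paper's model or data) of computing error, Pearson's correlation, and the number of inverted pairs between predicted and true ratings:

```python
# Toy illustration of the abstract's three rating-prediction metrics;
# the ratings below are made-up numbers, not the paper's data.
import numpy as np
from scipy.stats import pearsonr

true_ratings = np.array([5.0, 4.0, 4.5, 2.0, 3.0])
pred_ratings = np.array([4.6, 4.1, 4.0, 2.5, 3.2])

# Error: mean absolute error between predicted and true ratings.
mae = np.mean(np.abs(pred_ratings - true_ratings))

# Pearson's correlation between the two rating vectors.
r, _ = pearsonr(true_ratings, pred_ratings)

# Inverted pairs: pairs of reviews that the true ratings order one way
# and the predictions order the other way.
n = len(true_ratings)
inversions = sum(
    1
    for i in range(n)
    for j in range(i + 1, n)
    if (true_ratings[i] - true_ratings[j]) * (pred_ratings[i] - pred_ratings[j]) < 0
)

print(f"MAE={mae:.3f}, Pearson r={r:.3f}, inverted pairs={inversions}")
```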


Google pulled 'millions' of junk Play Store ratings in one week

Engadget

Google is just as frustrated with bogus app reviews as you are, and it's apparently bending over backwards to improve the trustworthiness of the feedback you see. The company instituted a system this year that uses a mix of AI and human oversight to cull junk Play Store reviews and the apps that promote them, and the results are slightly intimidating. In an unspecified recent week, Google removed "millions" of dodgy ratings and reviews, and "thousands" of apps encouraging shady behavior. There are a lot of attempts to game Android app reviews, in other words. The internet giant is typically looking for ratings surges that are clearly outliers (say, a sudden burst of five-star ratings) as well as reviews that are "profane, hateful, or off-topic."