AI-Decision Making: State Of Play And What's Next - The Innovator

#artificialintelligence

Finnair, the airline that dominates domestic and international air traffic in Finland, thought it could use AI to manage airport congestion. AI alone was not up to the job, so Finland's largest airline instead implemented a hybrid system that uses AI to make predictions about air traffic and lets humans-in-the-loop make better decisions, explains Tero Ojanpera, CEO of Silo.ai, a Finnish AI lab that specializes in bringing cutting-edge AI talent to corporations around the world. Getting the Finnair project to that point was not a question of plug and play: it required a complex, multi-step modeling process to help the organization become more AI literate. Finnair's experience neatly illustrates the current state of play. AI is not yet ready to make the kinds of decisions corporates expect it to make, and even if it were, corporate teams and networks are not fully ready to implement it and reap its full benefits.


Explanations of Black-Box Model Predictions by Contextual Importance and Utility

arXiv.org Artificial Intelligence

The significant advances in autonomous systems, together with an immensely wider application domain, have increased the need for trustworthy intelligent systems. Explainable artificial intelligence is gaining considerable attention among researchers and developers to address this requirement. Although there is a growing number of works on interpretable and transparent machine learning algorithms, they are mostly intended for technical users. Explanations for the end-user have been neglected in many usable and practical applications. In this work, we present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations that are easily understandable by experts as well as novice users. This method explains the prediction results without transforming the model into an interpretable one. We present an example of providing explanations for linear and non-linear models to demonstrate the generalizability of the method. CI and CU are numerical values that can be presented to the user in visual and natural-language form to justify actions and explain reasoning for individual instances, situations, and contexts. We show the utility of explanations in a car selection example and an Iris flower classification task by presenting complete explanations (i.e., the causes of an individual prediction) and contrastive explanations (i.e., contrasting an instance against the instance of interest). The experimental results show the feasibility and validity of the provided explanation methods.
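In the standard CIU formulation, both values have simple closed forms: for a prediction y and a feature varied over its value range while the rest of the instance (the "context") is held fixed, CI = (Cmax - Cmin) / (absmax - absmin) and CU = (y - Cmin) / (Cmax - Cmin), where Cmin and Cmax are the extreme outputs reachable by varying that feature and absmin/absmax bound the model output overall. The sketch below is a minimal Monte Carlo approximation of these quantities for an arbitrary black-box predictor; the function name, the uniform sampling strategy, and the toy model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def contextual_importance_utility(predict, instance, feature_idx,
                                  feature_min, feature_max,
                                  out_min, out_max,
                                  n_samples=1000, rng=None):
    """Estimate CI and CU for one feature of one instance by sampling
    the feature over its value range while holding the other features
    (the context) fixed.

    predict : callable mapping a 2-D array of instances to a 1-D array
              of model outputs (the black box).
    out_min, out_max : absolute range of the model output (absmin/absmax).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    instance = np.asarray(instance, dtype=float)

    # Perturb only the feature of interest; keep the context fixed.
    samples = np.tile(instance, (n_samples, 1))
    samples[:, feature_idx] = rng.uniform(feature_min, feature_max, n_samples)
    outputs = predict(samples)
    cmin, cmax = outputs.min(), outputs.max()

    y = predict(instance[None, :])[0]  # prediction for the explained instance

    # CI: share of the total output range this feature can span in context.
    ci = (cmax - cmin) / (out_max - out_min)
    # CU: how favourable the instance's current value is within that span.
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu

# Toy usage: explain the first feature of a simple nonlinear "black box",
# assuming x0 in [-1, 1], x1 in [0, 1], so the output lies in [0, 1.1].
model = lambda X: X[:, 0] ** 2 + 0.1 * X[:, 1]
x = np.array([0.8, 0.5])
ci, cu = contextual_importance_utility(model, x, feature_idx=0,
                                       feature_min=-1.0, feature_max=1.0,
                                       out_min=0.0, out_max=1.1)
print(f"CI={ci:.2f}, CU={cu:.2f}")
```

A high CI says the feature matters a lot in this context; CU then says whether the instance's current value for that feature works for or against the prediction, which is what makes the pair natural to verbalize for end-users.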