Wijekoon, Anjana
PitRSDNet: Predicting Intra-operative Remaining Surgery Duration in Endoscopic Pituitary Surgery
Wijekoon, Anjana, Das, Adrito, Herrera, Roxana R., Khan, Danyal Z., Hanrahan, John, Carter, Eleanor, Luoma, Valpuri, Stoyanov, Danail, Marcus, Hani J., Bano, Sophia
Accurate intra-operative Remaining Surgery Duration (RSD) predictions allow anaesthetists to decide more accurately when to administer anaesthetic agents and drugs, and to notify hospital staff when to send in the next patient. RSD prediction therefore plays an important role in improving patient care and minimising surgical theatre costs through efficient scheduling. In endoscopic pituitary surgery, RSD prediction is uniquely challenging: workflow sequences vary, and a selection of optional steps contributes to high variability in surgery duration. This paper presents PitRSDNet, a spatio-temporal neural network model for predicting RSD during pituitary surgery that learns from historical data with a focus on workflow sequences. PitRSDNet integrates workflow knowledge into RSD prediction in two ways: 1) multi-task learning for concurrently predicting step and RSD; and 2) incorporating prior steps as context in temporal learning and inference. PitRSDNet is trained and evaluated on a new endoscopic pituitary surgery dataset of 88 videos and shows competitive performance improvements over previous statistical and machine learning methods. The findings also highlight how PitRSDNet improves RSD precision on outlier cases by utilising the knowledge of prior steps.
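As a rough illustration of the multi-task idea described in the abstract, the sketch below pairs a shared temporal backbone with a step-classification head and an RSD-regression head. The layer sizes, step count and loss weighting are illustrative assumptions, not the published PitRSDNet architecture.

```python
import torch.nn as nn

class MultiTaskRSDNet(nn.Module):
    """Sketch: shared temporal backbone with two task heads."""
    def __init__(self, feat_dim=512, hidden=256, n_steps=12):  # sizes are assumptions
        super().__init__()
        self.temporal = nn.GRU(feat_dim, hidden, batch_first=True)
        self.step_head = nn.Linear(hidden, n_steps)  # workflow-step logits
        self.rsd_head = nn.Linear(hidden, 1)         # remaining minutes

    def forward(self, frame_feats):                  # frame_feats: (B, T, feat_dim)
        h, _ = self.temporal(frame_feats)
        return self.step_head(h), self.rsd_head(h).squeeze(-1)

def multitask_loss(step_logits, rsd_pred, step_labels, rsd_minutes, alpha=0.5):
    # Joint objective: step classification + RSD regression; alpha is illustrative.
    ce = nn.functional.cross_entropy(step_logits.flatten(0, 1), step_labels.flatten())
    l1 = nn.functional.l1_loss(rsd_pred, rsd_minutes)
    return alpha * ce + (1 - alpha) * l1
```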
XEQ Scale for Evaluating XAI Experience Quality Grounded in Psychometric Theory
Wijekoon, Anjana, Wiratunga, Nirmalie, Corsar, David, Martin, Kyle, Nkisi-Orji, Ikechukwu, Díaz-Agudo, Belen, Bridge, Derek
Explainable Artificial Intelligence (XAI) aims to improve the transparency of autonomous decision-making through explanations. Recent literature has emphasised users' need for holistic "multi-shot" explanations and the ability to personalise their engagement with XAI systems. We refer to this user-centred interaction as an XAI Experience. Despite advances in creating XAI experiences, evaluating them in a user-centred manner has remained challenging. To address this, we introduce the XAI Experience Quality (XEQ) Scale (pronounced "Seek" Scale) for evaluating the user-centred quality of XAI experiences. XEQ quantifies the quality of experiences across four evaluation dimensions: learning, utility, fulfilment and engagement. These contributions extend the state of the art in XAI evaluation, moving beyond the one-dimensional metrics frequently developed to assess single-shot explanations. In this paper, we present the XEQ Scale development and validation process, including content validation with XAI experts as well as discriminant and construct validation through a large-scale pilot study. Our pilot study results offer strong evidence establishing the XEQ Scale as a comprehensive framework for evaluating user-centred XAI experiences.
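To make the four-dimension structure concrete, here is a hypothetical scoring helper for a Likert-style instrument such as XEQ. The item-to-dimension mapping below is invented for illustration and is not the validated XEQ item set.

```python
from statistics import mean

# Hypothetical mapping of questionnaire items to the four XEQ dimensions.
DIMENSIONS = {
    "learning":   ["item_1", "item_2"],
    "utility":    ["item_3", "item_4"],
    "fulfilment": ["item_5", "item_6"],
    "engagement": ["item_7", "item_8"],
}

def xeq_scores(responses):
    """Average each dimension's item ratings (responses: {item_id: 1..5})."""
    return {dim: mean(responses[i] for i in items)
            for dim, items in DIMENSIONS.items()}

# Example: a single participant's ratings.
ratings = {f"item_{k}": 4 for k in range(1, 9)}
print(xeq_scores(ratings))  # {'learning': 4, 'utility': 4, ...}
```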
Tell me more: Intent Fulfilment Framework for Enhancing User Experiences in Conversational XAI
Wijekoon, Anjana, Corsar, David, Wiratunga, Nirmalie, Martin, Kyle, Salimi, Pedram
The evolution of Explainable Artificial Intelligence (XAI) has emphasised the significance of meeting diverse user needs. The approaches to identifying and addressing these needs must also advance, recognising that explanation experiences are subjective, user-centred processes that interact with users towards a better understanding of AI decision-making. This paper delves into the interrelations of multi-faceted XAI and examines how different types of explanations collaboratively meet users' XAI needs. We introduce the Intent Fulfilment Framework (IFF) for creating explanation experiences. The novelty of this paper lies in recognising the importance of "follow-up" on explanations for obtaining clarity, verification and/or substitution. Moreover, the Explanation Experience Dialogue Model integrates the IFF and "Explanation Follow-ups" to provide users with a conversational interface for exploring their explanation needs, thereby creating explanation experiences. Quantitative and qualitative findings from our comparative user study demonstrate the impact of the IFF in improving user engagement, the utility of the AI system and the overall user experience. Overall, we reinforce the principle that "one explanation does not fit all" and create explanation experiences that guide complex interactions through conversation.
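A toy sketch of the follow-up routing idea: after an initial explanation, a follow-up intent (clarification, verification or substitution, as named in the abstract) is dispatched to a different explainer while the dialogue history accumulates as context. The function names and dispatch structure are assumptions made for illustration, not the paper's dialogue model.

```python
FOLLOW_UPS = {"clarification", "verification", "substitution"}

def handle_turn(query, intent, explainers, history):
    """Route one dialogue turn to an explainer keyed by the user's intent."""
    # A follow-up only makes sense once an initial explanation exists.
    if intent in FOLLOW_UPS and not history:
        raise ValueError("follow-up requested before any explanation")
    explanation = explainers[intent](query, history)  # route by intent
    history.append((intent, explanation))             # context for later turns
    return explanation
```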
Towards Feasible Counterfactual Explanations: A Taxonomy Guided Template-based NLG Method
Salimi, Pedram, Wiratunga, Nirmalie, Corsar, David, Wijekoon, Anjana
Counterfactual Explanations (cf-XAI) describe the smallest changes in feature values necessary to change an outcome from one class to another. However, many cf-XAI methods neglect the feasibility of those changes. In this paper, we introduce a novel approach for presenting cf-XAI in natural language (Natural-XAI), giving careful consideration to actionability and comprehensibility while remaining cognizant of immutability and ethical concerns. We present three contributions to this endeavour. Firstly, through a user study, we identify two types of themes present in cf-XAI composed by humans: content-related, focusing on how features and their values are included from both the counterfactual and the query perspectives; and structure-related, focusing on the structure and terminology used for describing the necessary value changes. Secondly, we introduce a feature actionability taxonomy with four clearly defined categories to streamline the explanation presentation process. Thirdly, using insights from the user study and our taxonomy, we create a generalisable template-based natural language generation (NLG) method, compatible with existing explainers such as DiCE, NICE and DisCERN, to produce counterfactuals that address the aforementioned limitations of existing approaches. Finally, we conducted a second user study to assess the performance of our taxonomy-guided NLG templates on three domains. Our findings show that the taxonomy-guided Natural-XAI approach (n-XAI^T) received higher user ratings across all dimensions, with significantly improved results in the majority of the assessed domains for the articulation, acceptability, feasibility and sensitivity dimensions.
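The following sketch shows what taxonomy-guided template filling could look like. The four category names, the feature assignments and the template wording are assumptions made for this example, not the paper's exact taxonomy or templates.

```python
# Hypothetical four-category actionability taxonomy (names are assumptions).
TAXONOMY = {
    "income": "actionable",               # user can directly change it
    "education": "actionable-with-effort",
    "age": "mutable-non-actionable",      # changes, but not by choice
    "native_country": "immutable",
}

def verbalise(feature, query_val, cf_val):
    """Fill a natural-language template for one counterfactual change."""
    category = TAXONOMY.get(feature, "actionable")
    if category in ("immutable", "mutable-non-actionable"):
        return None  # never suggest changing features the user cannot act on
    direction = "increasing" if cf_val > query_val else "changing"
    hedge = " (this may take time)" if category == "actionable-with-effort" else ""
    return f"Consider {direction} {feature} from {query_val} to {cf_val}{hedge}."

print(verbalise("income", 30_000, 45_000))
# Consider increasing income from 30000 to 45000.
```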
Behaviour Trees for Creating Conversational Explanation Experiences
Wijekoon, Anjana, Corsar, David, Wiratunga, Nirmalie
This paper presents an XAI system specification and an interactive dialogue model to facilitate the creation of Explanation Experiences (EE). Such specifications combine the knowledge of XAI, domain and system experts of a use case to formalise target user groups and their explanation needs, and to implement explanation strategies that address those needs. Formalising the XAI system promotes the reuse of existing explainers and known explanation needs, which can be refined and evolved over time using user evaluation feedback. The abstract EE dialogue model formalises the interactions between a user and an XAI system. The resulting EE conversational chatbot is personalised to an XAI system at run-time using the knowledge captured in its XAI system specification. This seamless integration is enabled by using Behaviour Trees (BTs) to conceptualise both the EE dialogue model and the explanation strategies. In the evaluation, we discuss several desirable properties of BTs over traditionally used STMs or FSMs. BTs promote the reusability of dialogue components through the hierarchical nature of the design. Sub-trees are modular, i.e. each sub-tree is responsible for a specific behaviour, and can be designed at different levels of granularity to improve human interpretability. The EE dialogue model consists of the abstract behaviours needed to capture EE; accordingly, it can be implemented as a conversational, graphical or text-based interface catering to different domains and users. There is a significant computational cost to using BTs for modelling dialogue, which we mitigate by using memory. Overall, we find that the ability to create robust conversational pathways dynamically makes BTs a good candidate for designing and implementing conversations that create explanation experiences.
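A minimal behaviour-tree skeleton helps illustrate the properties discussed above: composite nodes are modular sub-trees, and the cursor in the Sequence node acts as the "memory" that avoids re-ticking already-completed children on every dialogue turn. This is a generic BT sketch under those assumptions, not the paper's implementation.

```python
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Sequence:
    """Ticks children in order; the cursor is the 'memory' that lets the
    tree resume from the running child instead of re-ticking earlier ones."""
    def __init__(self, children):
        self.children, self._cursor = children, 0
    def tick(self):
        while self._cursor < len(self.children):
            status = self.children[self._cursor].tick()
            if status == RUNNING:
                return RUNNING        # resume here on the next tick
            if status == FAILURE:
                self._cursor = 0
                return FAILURE
            self._cursor += 1
        self._cursor = 0
        return SUCCESS

class Fallback:
    """Tries children in order until one does not fail."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != FAILURE:
                return status
        return FAILURE
```

Because each sub-tree exposes only tick(), an explanation strategy built as one sub-tree can be swapped into a larger dialogue tree without changing the rest of the design.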
DisCERN: Discovering Counterfactual Explanations using Relevance Features from Neighbourhoods
Wiratunga, Nirmalie, Wijekoon, Anjana, Nkisi-Orji, Ikechukwu, Martin, Kyle, Palihawadana, Chamath, Corsar, David
Counterfactual explanations focus on "actionable knowledge" to help end-users understand how a machine learning outcome could be changed to a more desirable outcome. For this purpose, a counterfactual explainer needs to discover input dependencies that relate to outcome changes. Identifying the minimum subset of feature changes needed to action an output change in the decision is an interesting challenge for counterfactual explainers. The DisCERN algorithm introduced in this paper is a case-based counterfactual explainer. Here, counterfactuals are formed by replacing feature values from a nearest unlike neighbour (NUN) until an actionable change is observed. We show how widely adopted feature relevance-based explainers (e.g. LIME, SHAP) can inform DisCERN to identify the minimum subset of "actionable features". We demonstrate our DisCERN algorithm on five datasets in a comparative study with the widely used optimisation-based counterfactual approach DiCE. Our results demonstrate that DisCERN is an effective strategy to minimise the actionable changes necessary to create good counterfactual explanations.
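A minimal sketch of the substitution loop described above, assuming a fitted scikit-learn-style classifier, a nearest unlike neighbour already retrieved, and features ranked by a relevance explainer such as LIME or SHAP (most relevant first):

```python
import numpy as np

def nun_substitution_counterfactual(model, query, nun, ranked_features):
    """Copy feature values from the NUN, most relevant first, until the
    predicted class flips to the NUN's class (sketch of the DisCERN idea)."""
    cf = query.copy()
    target = model.predict(nun.reshape(1, -1))[0]
    for f in ranked_features:
        cf[f] = nun[f]  # adopt the NUN's value for this feature
        if model.predict(cf.reshape(1, -1))[0] == target:
            return cf   # class changed with a minimal set of substitutions
    return cf           # worst case: cf equals the NUN itself
```

Iterating in relevance order is what keeps the set of changed features small; substituting in an arbitrary order would still flip the class eventually, but typically with more changes.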
Learning to Recognise Exercises in the Self-Management of Low Back Pain
Wijekoon, Anjana (Robert Gordon University) | Wiratunga, Nirmalie (Robert Gordon University) | Cooper, Kay (Robert Gordon University) | Bach, Kerstin (Norwegian University of Science and Technology)
Globally, low back pain (LBP) is one of the top three contributors to years lived with disability. Self-management with an active lifestyle and regular exercise is the cornerstone of preventing and managing LBP. Digital interventions have recently been introduced to reinforce self-management; they rely on self-reporting to keep track of the exercises performed. These data directly influence the recommendations made by the digital intervention, so accurate and reliable reporting is fundamental to the success of the intervention. In addition, performing exercises with precision is important, yet current systems are unable to provide the guidance required. The main challenge to implementing an end-to-end solution is the lack of public sensor-rich datasets on which to build Machine Learning algorithms for Exercise Recognition (ExR) and qualitative analysis. Accordingly, we introduce the ExR benchmark dataset "MEx", which we share publicly to encourage future research. The dataset includes 7 exercise classes recorded from 30 users with 4 sensors. In this paper we benchmark state-of-the-art classification algorithms with deep and shallow architectures on each sensor, achieving performances of 90.2%, 63.4%, 87.2% and 74.1% for the pressure mat, the depth camera, the thigh accelerometer and the wrist accelerometer respectively. We examine the scope of each sensor in capturing exercise movements with confusion matrices and highlight the most suitable sensors for deployment considering performance vs. obtrusiveness.
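A hedged sketch of the per-sensor benchmarking protocol: one classifier is trained and scored per sensor stream, and the scores are compared. The sensor names follow the abstract; the data loader returns synthetic stand-in windows so the sketch runs, and would be replaced by real MEx windows and labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

SENSORS = ["pressure_mat", "depth_camera",
           "thigh_accelerometer", "wrist_accelerometer"]

def load_mex_windows(sensor):
    # Placeholder loader: substitute flattened MEx windows and labels for
    # the given sensor; random data keeps the sketch self-contained.
    rng = np.random.default_rng(0)
    return rng.normal(size=(120, 64)), rng.integers(0, 7, size=120)

for sensor in SENSORS:
    X, y = load_mex_windows(sensor)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print(sensor, cross_val_score(clf, X, y, cv=5).mean())
```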
MEx: Multi-modal Exercises Dataset for Human Activity Recognition
Wijekoon, Anjana, Wiratunga, Nirmalie, Cooper, Kay
MEx: Multi-modal Exercises Dataset is a multi-sensor, multi-modal dataset created to benchmark Human Activity Recognition (HAR) and multi-modal fusion algorithms. Collection of this dataset was inspired by the need to recognise and evaluate the quality of exercise performance to support patients with Musculoskeletal Disorders (MSD). We selected 7 exercises regularly recommended for MSD patients by physiotherapists and collected data with four sensors: a pressure mat, a depth camera and two accelerometers. The dataset contains three data modalities (numerical time-series data, video data and pressure sensor data), posing interesting research challenges when reasoning for HAR and Exercise Quality Assessment. This paper presents our evaluation of the dataset on a number of standard classification algorithms for the HAR task, comparing different feature representation algorithms for each sensor. These results set a reference performance for each individual sensor and expose their strengths and weaknesses for future tasks. In addition, we visualise the pressure mat data to explore the potential of the sensor to capture exercise performance quality. Given the recent advancements in multi-modal fusion, we believe MEx is a suitable dataset to benchmark not only HAR algorithms but also fusion algorithms for heterogeneous data types in multiple application domains.
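For the time-series modalities, a typical first step in HAR pipelines such as those benchmarked here is sliding-window segmentation. The window length and stride below are illustrative choices, not values prescribed by the dataset.

```python
import numpy as np

def sliding_windows(signal, window=500, stride=250):
    """Segment a (T, channels) time series into overlapping fixed-length windows."""
    return np.stack([signal[s:s + window]
                     for s in range(0, len(signal) - window + 1, stride)])

# Stand-in tri-axial accelerometer stream; real MEx recordings go here.
acc = np.random.default_rng(0).normal(size=(10_000, 3))
print(sliding_windows(acc).shape)  # (39, 500, 3)
```

Each window then becomes one classification example, which is how a continuous sensor stream is turned into the per-exercise instances the classifiers consume.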