Dwivedi, Girish
Multitask Deep Learning for Accurate Risk Stratification and Prediction of Next Steps for Coronary CT Angiography Patients
Lu, Juan, Bennamoun, Mohammed, Stewart, Jonathon, Eshraghian, Jason K., Liu, Yanbin, Chow, Benjamin, Sanfilippo, Frank M., Dwivedi, Girish
Diagnostic investigation plays an important role in the risk stratification and clinical decision-making of patients with suspected and documented Coronary Artery Disease (CAD). However, the majority of existing tools focus primarily on the selection of gatekeeper tests, and only a handful of systems incorporate information regarding downstream testing or treatment. We propose a multitask deep learning model to support risk stratification and downstream test selection for patients undergoing Coronary Computed Tomography Angiography (CCTA). The analysis included 14,021 patients who underwent CCTA between 2006 and 2017. Our novel multitask deep learning framework extends the state-of-the-art Perceiver model to handle real-world CCTA report data. Our model achieved an Area Under the receiver operating characteristic Curve (AUC) of 0.76 for CAD risk stratification and an AUC of 0.72 for predicting downstream tests. The proposed deep learning model can accurately estimate the likelihood of CAD and recommend downstream tests based on prior CCTA data. In clinical practice, such an approach could bring a paradigm shift in risk stratification and downstream management. Despite significant progress, deep learning models for tabular data still do not outperform gradient-boosted decision trees, and further research is required in this area. However, neural networks appear to benefit more readily from multitask learning than tree-based models, which could offset the shortcomings of single-task learning when working with tabular data.
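For readers who want a concrete picture of hard-parameter-sharing multitask learning, the sketch below shows a shared encoder with one prediction head per task in PyTorch. It is only an illustrative, hypothetical simplification: the paper's actual model extends the Perceiver architecture, and all layer sizes, feature dimensions, loss weights and dummy data here are assumptions.

```python
# A minimal, hypothetical sketch of hard-parameter-sharing multitask learning in
# PyTorch. It is NOT the Perceiver-based architecture from the paper; layer sizes,
# feature dimension, and loss weighting are illustrative assumptions only.
import torch
import torch.nn as nn

class MultiTaskCCTAModel(nn.Module):
    def __init__(self, n_features: int, n_test_classes: int, hidden: int = 128):
        super().__init__()
        # Shared trunk: both tasks learn from the same CCTA-derived features.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Task 1: binary CAD risk stratification (single logit).
        self.risk_head = nn.Linear(hidden, 1)
        # Task 2: multi-class downstream test recommendation (class logits).
        self.test_head = nn.Linear(hidden, n_test_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.risk_head(z).squeeze(-1), self.test_head(z)

# Joint loss: a weighted sum of the two task losses (weights are assumptions).
model = MultiTaskCCTAModel(n_features=32, n_test_classes=4)
risk_loss_fn = nn.BCEWithLogitsLoss()
test_loss_fn = nn.CrossEntropyLoss()

x = torch.randn(16, 32)                      # dummy batch of tabular features
y_risk = torch.randint(0, 2, (16,)).float()  # dummy binary risk labels
y_test = torch.randint(0, 4, (16,))          # dummy downstream-test labels

risk_logits, test_logits = model(x)
loss = risk_loss_fn(risk_logits, y_risk) + 0.5 * test_loss_fn(test_logits, y_test)
loss.backward()  # gradients from both tasks update the shared encoder
```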
Training Spiking Neural Networks Using Lessons From Deep Learning
Eshraghian, Jason K., Ward, Max, Neftci, Emre, Wang, Xinxin, Lenz, Gregor, Dwivedi, Girish, Bennamoun, Mohammed, Jeong, Doo Seok, Lu, Wei D.
The brain is the perfect place to look for inspiration to develop more efficient neural networks. The inner workings of our synapses and neurons provide a glimpse at what the future of deep learning might look like. This paper serves as a tutorial and perspective showing how to apply the lessons learnt from several decades of research in deep learning, gradient descent, backpropagation and neuroscience to biologically plausible spiking neural networks (SNNs). We also explore the delicate interplay between encoding data as spikes and the learning process; the challenges and solutions of applying gradient-based learning to SNNs; the subtle link between temporal backpropagation and spike-timing-dependent plasticity; and how deep learning might move towards biologically plausible online learning. Some ideas are well accepted and commonly used amongst the neuromorphic engineering community, while others are presented or justified for the first time here. The fields of deep learning and spiking neural networks evolve very rapidly. We endeavour to treat this document as a 'dynamic' manuscript that will continue to be updated as the common practices in training SNNs change. A series of companion interactive tutorials using our Python package, snnTorch, are also made available. See https://snntorch.readthedocs.io/en/latest/tutorials/index.html.
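As a taste of what the companion tutorials cover, the following sketch uses snnTorch's Leaky neuron and surrogate-gradient utilities to train a small spiking network with backpropagation through time. The layer sizes, membrane decay beta, number of time steps and dummy data are illustrative assumptions, not values taken from the paper or tutorials.

```python
# A minimal sketch of surrogate-gradient training of an SNN with snnTorch.
# Hyperparameters and the random input batch are assumptions for illustration.
import torch
import torch.nn as nn
import snntorch as snn
from snntorch import surrogate

num_steps = 25                          # simulation time steps (assumption)
spike_grad = surrogate.fast_sigmoid()   # smooths the non-differentiable spike

class SpikingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 100)
        self.lif1 = snn.Leaky(beta=0.9, spike_grad=spike_grad)
        self.fc2 = nn.Linear(100, 10)
        self.lif2 = snn.Leaky(beta=0.9, spike_grad=spike_grad)

    def forward(self, x):
        mem1 = self.lif1.init_leaky()
        mem2 = self.lif2.init_leaky()
        spk2_rec = []
        for _ in range(num_steps):                        # unroll over time
            spk1, mem1 = self.lif1(self.fc1(x), mem1)
            spk2, mem2 = self.lif2(self.fc2(spk1), mem2)
            spk2_rec.append(spk2)
        return torch.stack(spk2_rec)                      # [time, batch, classes]

net = SpikingNet()
x = torch.rand(32, 784)                                   # dummy flattened inputs
targets = torch.randint(0, 10, (32,))                     # dummy class labels
spk_out = net(x)
# Rate-coded readout: treat total spike counts per class as scores.
loss = nn.CrossEntropyLoss()(spk_out.sum(dim=0), targets)
loss.backward()   # backprop through time via the surrogate gradient
```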
FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare
Lekadir, Karim, Feragen, Aasa, Fofanah, Abdul Joseph, Frangi, Alejandro F, Buyx, Alena, Emelie, Anais, Lara, Andrea, Porras, Antonio R, Chan, An-Wen, Navarro, Arcadi, Glocker, Ben, Botwe, Benard O, Khanal, Bishesh, Beger, Brigit, Wu, Carol C, Cintas, Celia, Langlotz, Curtis P, Rueckert, Daniel, Mzurikwao, Deogratias, Fotiadis, Dimitrios I, Zhussupov, Doszhan, Ferrante, Enzo, Meijering, Erik, Weicken, Eva, González, Fabio A, Asselbergs, Folkert W, Prior, Fred, Krestin, Gabriel P, Collins, Gary, Tegenaw, Geletaw S, Kaissis, Georgios, Misuraca, Gianluca, Tsakou, Gianna, Dwivedi, Girish, Kondylakis, Haridimos, Jayakody, Harsha, Woodruff, Henry C, Aerts, Hugo JWL, Walsh, Ian, Chouvarda, Ioanna, Buvat, Irène, Rekik, Islem, Duncan, James, Kalpathy-Cramer, Jayashree, Zahir, Jihad, Park, Jinah, Mongan, John, Gichoya, Judy W, Schnabel, Julia A, Kushibar, Kaisar, Riklund, Katrine, Mori, Kensaku, Marias, Kostas, Amugongo, Lameck M, Fromont, Lauren A, Maier-Hein, Lena, Alberich, Leonor Cerdá, Rittner, Leticia, Phiri, Lighton, Marrakchi-Kacem, Linda, Donoso-Bach, Lluís, Martí-Bonmatí, Luis, Cardoso, M Jorge, Bobowicz, Maciej, Shabani, Mahsa, Tsiknakis, Manolis, Zuluaga, Maria A, Bielikova, Maria, Fritzsche, Marie-Christine, Linguraru, Marius George, Wenzel, Markus, De Bruijne, Marleen, Tolsgaard, Martin G, Ghassemi, Marzyeh, Ashrafuzzaman, Md, Goisauf, Melanie, Yaqub, Mohammad, Ammar, Mohammed, Abadía, Mónica Cano, Mahmoud, Mukhtar M E, Elattar, Mustafa, Rieke, Nicola, Papanikolaou, Nikolaos, Lazrak, Noussair, Díaz, Oliver, Salvado, Olivier, Pujol, Oriol, Sall, Ousmane, Guevara, Pamela, Gordebeke, Peter, Lambin, Philippe, Brown, Pieta, Abolmaesumi, Purang, Dou, Qi, Lu, Qinghua, Osuala, Richard, Nakasi, Rose, Zhou, S Kevin, Napel, Sandy, Colantonio, Sara, Albarqouni, Shadi, Joshi, Smriti, Carter, Stacy, Klein, Stefan, Petersen, Steffen E, Aussó, Susanna, Awate, Suyash, Raviv, Tammy Riklin, Cook, Tessa, Mutsvangwa, Tinashe E M, Rogers, Wendy A, Niessen, Wiro J, Puig-Bosch, Xènia, Zeng, Yi, Mohammed, Yunusa G, Aquino, Yves Saint James, Salahuddin, Zohaib, Starmans, Martijn P A
Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework is built on six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline that provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at proof-of-concept stages to facilitate the future translation of medical AI towards clinical practice.
Analysis and Evaluation of Explainable Artificial Intelligence on Suicide Risk Assessment
Tang, Hao, Rekavandi, Aref Miri, Rooprai, Dharjinder, Dwivedi, Girish, Sanfilippo, Frank, Boussaid, Farid, Bennamoun, Mohammed
This study investigates the effectiveness of Explainable Artificial Intelligence (XAI) techniques in predicting suicide risk and identifying the dominant causes of such behaviours. Data augmentation techniques and Machine Learning (ML) models are used to predict the associated risk. Furthermore, SHapley Additive exPlanations (SHAP) and correlation analysis are used to rank the importance of variables in the predictions. Experimental results indicate that the Decision Tree (DT), Random Forest (RF) and eXtreme Gradient Boosting (XGBoost) models achieve the best results, with DT performing best at an accuracy of 95.23% and an Area Under the Curve (AUC) of 0.95. According to the SHAP results, anger problems, depression, and social isolation are the leading variables in predicting suicide risk, while patients with good incomes, respected occupations, and university education have the lowest risk. The results demonstrate the effectiveness of the machine learning and XAI framework for suicide risk prediction; it can help psychiatrists understand complex human behaviours and can also support reliable clinical decision-making.
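To make the SHAP-based variable ranking concrete, the hypothetical sketch below trains an XGBoost classifier on synthetic data and ranks features by mean absolute SHAP value. The feature names and data are invented for illustration and do not reflect the study's dataset or findings.

```python
# A minimal, hypothetical sketch of SHAP feature ranking for a tree ensemble.
# Feature names and synthetic data are placeholders, not the study's variables.
import numpy as np
import shap
from xgboost import XGBClassifier
from sklearn.datasets import make_classification

feature_names = ["anger_problems", "depression", "social_isolation",
                 "income", "occupation", "education"]  # hypothetical variables
X, y = make_classification(n_samples=500, n_features=len(feature_names),
                           n_informative=4, random_state=0)

model = XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X, y)

# TreeExplainer gives per-sample, per-feature SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)      # shape: (n_samples, n_features)

# Rank variables by mean absolute SHAP value, i.e. average impact on the output.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name:>16}: {score:.3f}")
```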
Explainable Artificial Intelligence for Pharmacovigilance: What Features Are Important When Predicting Adverse Outcomes?
Ward, Isaac Ronald, Wang, Ling, Lu, Juan, Bennamoun, Mohammed, Dwivedi, Girish, Sanfilippo, Frank M
Explainable Artificial Intelligence (XAI) has been identified as a viable method for determining the importance of features when making predictions with Machine Learning (ML) models. In this study, we created models that take an individual's health information (e.g. their drug history and comorbidities) as inputs, and predict the probability that the individual will have an Acute Coronary Syndrome (ACS) adverse outcome. Using XAI, we quantified the contribution that specific drugs made to these ACS predictions, thus creating an XAI-based technique for pharmacovigilance monitoring, using ACS as an example of the adverse outcome to detect. Individuals aged over 65 who were supplied musculoskeletal system (Anatomical Therapeutic Chemical (ATC) class M) or cardiovascular system (ATC class C) drugs between 1993 and 2009 were identified, and their drug histories, comorbidities, and other key features were extracted from linked Western Australian datasets. Multiple ML models were trained to predict whether these individuals would have an ACS-related adverse outcome (i.e., death or hospitalisation with a discharge diagnosis of ACS), and a variety of ML and XAI techniques were used to calculate which features -- specifically which drugs -- led to these predictions. The drug dispensing features for rofecoxib and celecoxib were found to have a greater-than-zero contribution to ACS-related adverse outcome predictions (on average), and ACS-related adverse outcomes could be predicted with 72% accuracy. Furthermore, the XAI libraries LIME and SHAP were found to successfully identify both important and unimportant features, with SHAP slightly outperforming LIME. ML models trained on linked administrative health datasets, in tandem with XAI algorithms, can successfully quantify feature importance and, with further development, could potentially be used as pharmacovigilance monitoring techniques.
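The snippet below sketches the kind of instance-level attribution described above, using LIME's tabular explainer on a synthetic stand-in for the drug-history features. The feature names, data and model settings are hypothetical placeholders rather than the study's linked Western Australian data or trained models.

```python
# A minimal, hypothetical sketch of LIME-based attribution for a tabular classifier.
# Feature names and synthetic data are placeholders, not the study's datasets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["rofecoxib_dispensed", "celecoxib_dispensed",
                 "n_cardio_drugs", "diabetes", "age_over_75"]  # hypothetical
X, y = make_classification(n_samples=1000, n_features=len(feature_names),
                           n_informative=3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# LIME fits a local surrogate model around one individual to attribute the
# predicted adverse-outcome probability to each input feature.
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["no ACS outcome", "ACS outcome"],
                                 mode="classification")
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature:>30}: {weight:+.3f}")
```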