Islam, Sheikh Rabiul
Aiming to Minimize Alcohol-Impaired Road Fatalities: Utilizing Fairness-Aware and Domain Knowledge-Infused Artificial Intelligence
Venkateswaran, Tejas, Islam, Sheikh Rabiul, Hasan, Md Golam Moula Mehedi, Ahmed, Mohiuddin
Approximately 30% of all traffic fatalities in the United States are attributed to alcohol-impaired driving. Despite stringent laws against this offense in every state, drunk-driving crashes remain alarmingly frequent, killing approximately one person every 45 minutes. The process of charging individuals with Driving Under the Influence (DUI) is intricate and can be subjective, involving multiple stages such as observing the vehicle in motion, interacting with the driver, and conducting Standardized Field Sobriety Tests (SFSTs). Biases have been observed through racial profiling: some groups and geographic areas face fewer DUI tests, so many actual DUI incidents go undetected, ultimately leading to more fatalities. To tackle this issue, our research introduces an Artificial Intelligence-based predictor that is both fairness-aware and infused with domain knowledge to analyze DUI-related fatalities across geographic locations. Through this model, we gain insights into the interplay among demographic factors such as age, race, and income. Using this information to allocate policing resources more equitably and efficiently has the potential to reduce DUI-related fatalities and significantly improve road safety.
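To make the fairness-aware idea concrete, here is a minimal sketch of one common strategy: reweighing training instances so a protected attribute becomes statistically independent of the label (in the style of Kamiran and Calders). The synthetic data, feature names, and the choice of reweighing are assumptions for illustration, not the paper's actual model or dataset.

```python
# Minimal sketch of fairness-aware training via instance reweighing.
# Data, features, and labels are synthetic placeholders, not the
# authors' DUI-fatality dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                 # hypothetical protected attribute
income = rng.normal(50 + 10 * group, 15, n)
age = rng.integers(18, 80, n)
y = (rng.random(n) < 0.2 + 0.1 * group).astype(int)  # synthetic outcome, rate differs by group

X = np.column_stack([income, age])            # protected attribute excluded from inputs

# Reweighing: weight each (group, label) cell so group and label become
# independent in the weighted training distribution.
w = np.empty(n)
for g in (0, 1):
    for c in (0, 1):
        mask = (group == g) & (y == c)
        expected = (group == g).mean() * (y == c).mean()
        w[mask] = expected / mask.mean()

model = GradientBoostingClassifier().fit(X, y, sample_weight=w)
print("train accuracy:", model.score(X, y))
```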
Explainable Artificial Intelligence Approaches: A Survey
Islam, Sheikh Rabiul, Eberle, William, Ghafoor, Sheikh Khaled, Ahmed, Mohiuddin
The lack of explainability of decisions from Artificial Intelligence (AI) based "black box" systems/models, despite their superiority in many real-world applications, is a key stumbling block for adopting AI in many high-stakes applications across domains and industries. While many popular Explainable Artificial Intelligence (XAI) methods are available to produce human-friendly explanations of a decision, each has its own merits and demerits, and a plethora of open challenges remain. We demonstrate popular XAI methods on a common case study/task (i.e., credit default prediction), analyze their competitive advantages from multiple perspectives (e.g., local, global), provide meaningful insights into quantifying explainability, and recommend paths towards responsible or human-centered AI using XAI as a medium. Practitioners can use this work as a catalog to understand, compare, and correlate the competitive advantages of popular XAI methods. In addition, this survey elicits future research directions towards responsible or human-centric AI systems, which is crucial for adopting AI in high-stakes applications.
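As a concrete instance of the local/global distinction on a credit default task, the sketch below applies SHAP (one of the XAI methods this survey covers) to a synthetic model: per-sample attributions give local explanations, and mean absolute attributions give a global feature ranking. The data and feature names are placeholders, not the survey's actual experiment.

```python
# Local vs. global explanations of a credit-default model with SHAP.
# Synthetic data; feature semantics are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))  # e.g., balance, utilization, age, tenure (assumed)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)   # (n_samples, n_features) attribution matrix

print("local explanation, sample 0:", sv[0])          # why this account was scored as it was
print("global importance:", np.abs(sv).mean(axis=0))  # average impact of each feature
```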
Towards Quantification of Explainability in Explainable Artificial Intelligence Methods
Islam, Sheikh Rabiul (Tennessee Technological University) | Eberle, William (Tennessee Technological University) | Ghafoor, Sheikh K. (Tennessee Technological University)
Artificial Intelligence (AI) has become an integral part of domains such as security, finance, healthcare, medicine, and criminal justice. Explaining the decisions of AI systems in human terms is a key challenge, due to the high complexity of the models as well as the potential implications for human interests, rights, and lives. While Explainable AI is an emerging field of research, there is no consensus on the definition, quantification, and formalization of explainability. In fact, the quantification of explainability is an open challenge. In our previous work, we incorporated domain knowledge for better explainability; however, we were unable to quantify the extent of explainability. In this work, we (1) briefly analyze the definitions of explainability from the perspective of different disciplines (e.g., psychology, social science), properties of explanations, explanation methods, and human-friendly explanations; and (2) propose and formulate an approach to quantify the extent of explainability. Our experimental results suggest a reasonable and model-agnostic way to quantify explainability.
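The abstract does not spell out the proposed formulation, so the snippet below is only a toy, model-agnostic stand-in for the general idea: a model is scored as more explainable when fewer "cognitive chunks" (features) are needed to cover most of the attribution mass. This is an illustrative assumption, not the metric formulated in this paper.

```python
# Toy proxy for "extent of explainability" (NOT this paper's metric):
# fewer features needed to cover most of the attribution mass = higher score.
import numpy as np

def explainability_proxy(attributions, mass=0.9):
    """Return 1/k, where k is the number of top features whose absolute
    attributions cover `mass` of the total attribution."""
    a = np.sort(np.abs(attributions))[::-1]
    covered = np.cumsum(a) / a.sum()
    k = int(np.searchsorted(covered, mass) + 1)
    return 1.0 / k

print(explainability_proxy(np.array([0.7, 0.2, 0.05, 0.05])))  # 0.5: two chunks suffice
```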
Infusing domain knowledge in AI-based "black box" models for better explainability with application in bankruptcy prediction
Islam, Sheikh Rabiul, Eberle, William, Bundy, Sid, Ghafoor, Sheikh Khaled
Although "black box" models such as Artificial Neural Networks, Support Vector Machines, and Ensemble Approaches continue to show superior performance in many disciplines, their adoption in the sensitive disciplines (e.g., finance, healthcare) is questionable due to the lack of interpretability and explainability of the model. In fact, future adoption of "black box" models is difficult because of the recent rule of "right of explanation" by the European Union where a user can ask for an explanation behind an algorithmic decision, and the newly proposed bill by the US government, the "Algorithmic Accountability Act", which would require companies to assess their machine learning systems for bias and discrimination and take corrective measures. Top Bankruptcy Prediction Models are A.I.-based and are in need of better explainability -the extent to which the internal working mechanisms of an AI system can be explained in human terms. Although explainable artificial intelligence is an emerging field of research, infusing domain knowledge for better explainability might be a possible solution. In this work, we demonstrate a way to collect and infuse domain knowledge into a "black box" model for bankruptcy prediction. Our understanding from the experiments reveals that infused domain knowledge makes the output from the black box model more interpretable and explainable.
A Deep Learning Based Illegal Insider-Trading Detection and Prediction Technique in Stock Market
Islam, Sheikh Rabiul
The stock market is a nonlinear, nonstationary, dynamic, and complex system. Several factors affect stock market conditions, such as news, social media, expert opinion, political transitions, and natural disasters. In addition, the market must be able to handle illegal insider trading, which impacts the integrity and value of stocks. Illegal insider trading occurs when trading is performed based on non-public (private, leaked, tipped) information (e.g., a new product launch, a quarterly financial report, or an acquisition or merger plan) before the information is made public. Preventing illegal insider trading is a priority of the regulatory authorities (e.g., the SEC), as it involves billions of dollars and is very difficult to detect. In this work, we present different types of insider trading, existing detection approaches and techniques, and our proposed approach for detecting and predicting illegal insider trading using a deep learning-based approach combined with discrete signal processing on time-series data.
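The following sketch illustrates the general recipe of pairing discrete signal processing with a learned detector: FFT magnitudes of sliding windows over a trading series are fed to a small neural classifier that flags anomalous windows. The synthetic series, window size, labels, and the use of an MLP as a stand-in for the deep model are all illustrative assumptions.

```python
# Sketch: spectral (FFT) features per window + a neural classifier that
# flags windows with abnormal activity. Toy data and labels.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
series = rng.normal(size=4096)      # e.g., daily volume or returns (synthetic)
series[2000:2032] += 4.0            # injected burst mimicking abnormal trading

win = 64
windows = np.lib.stride_tricks.sliding_window_view(series, win)[::win]
X = np.abs(np.fft.rfft(windows, axis=1))           # spectral features per window
y = (np.abs(windows).max(axis=1) > 3).astype(int)  # toy "suspicious" label

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
print("flagged windows:", np.flatnonzero(model.predict(X)))
```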
Credit Default Mining Using Combined Machine Learning and Heuristic Approach
Islam, Sheikh Rabiul, Eberle, William, Ghafoor, Sheikh Khaled
Predicting potential credit default accounts in advance is challenging. Traditional statistical techniques typically cannot handle large amounts of data or the dynamic nature of fraud and human behavior. To tackle this problem, recent research has focused on artificial and computational intelligence based approaches. In this work, we present and validate a heuristic approach to mine potential default accounts in advance, where a risk probability is precomputed from all previous data and the risk probability for recent transactions is computed as soon as they happen. Besides our heuristic approach, we also apply a recently proposed machine learning approach that has not previously been applied to our targeted dataset [15]. As a result, we find that these applied approaches outperform existing state-of-the-art approaches.
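To make the precompute-then-stream heuristic concrete, here is a minimal sketch: a per-account risk probability is built once from historical data and then refreshed incrementally as each new transaction arrives. The risk features, the blending rule, and the toy data are assumptions for illustration, not the paper's exact heuristic.

```python
# Sketch: precompute per-account risk from history, then update it
# as soon as each new transaction happens. Toy data and update rule.
from collections import defaultdict

history = [  # (account, amount, defaulted) -- toy historical records
    ("A", 120.0, 0), ("A", 90.0, 0), ("B", 400.0, 1), ("B", 350.0, 1),
]

# Precompute: empirical default rate per account from all previous data.
stats = defaultdict(lambda: [0, 0])            # account -> [defaults, count]
for acct, _, defaulted in history:
    stats[acct][0] += defaulted
    stats[acct][1] += 1
risk = {a: d / c for a, (d, c) in stats.items()}

def on_transaction(acct, amount, threshold=500.0):
    """Refresh the account's risk the moment a transaction arrives:
    blend the precomputed probability with a simple amount-based signal."""
    prior = risk.get(acct, 0.0)
    signal = min(amount / threshold, 1.0)      # large amounts raise risk
    risk[acct] = 0.8 * prior + 0.2 * signal
    return risk[acct]

print(on_transaction("A", 600.0))  # A's risk rises after a large transaction
print(on_transaction("B", 50.0))   # B's risk decays toward a low signal
```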