Collaborating Authors

Tennessee Technological University


Towards Quantification of Explainability in Explainable Artificial Intelligence Methods

AAAI Conferences

Artificial Intelligence (AI) has become an integral part of domains such as security, finance, healthcare, medicine, and criminal justice. Explaining the decisions of AI systems in human terms is a key challenge, both because of the high complexity of the models and because of the potential implications for human interests, rights, and lives. While Explainable AI is an emerging field of research, there is no consensus on the definition, quantification, and formalization of explainability. In fact, the quantification of explainability is an open challenge. In our previous work, we incorporated domain knowledge for better explainability; however, we were unable to quantify the extent of explainability. In this work, we (1) briefly analyze the definitions of explainability from the perspective of different disciplines (e.g., psychology, social science), along with the properties of explanation, explanation methods, and human-friendly explanations; and (2) propose and formulate an approach to quantify the extent of explainability. Our experimental results suggest a reasonable and model-agnostic way to quantify explainability.
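To make the idea of a quantitative, model-agnostic explainability score concrete, the following Python sketch scores an explanation by how few "cognitive chunks" (features) and feature interactions it involves. The function name, the caps on chunks and interactions, and the equal weighting are illustrative assumptions only; the paper's actual formulation is not reproduced here.

# A minimal, hypothetical sketch of a model-agnostic explainability score.
# The weights and the idea of penalizing the number of "cognitive chunks"
# in an explanation are illustrative assumptions, not the paper's formula.

def explainability_score(n_chunks, n_interactions, max_chunks=10, max_interactions=20):
    """Score in [0, 1]: fewer chunks and interactions -> more explainable."""
    chunk_term = max(0.0, 1.0 - n_chunks / max_chunks)
    interaction_term = max(0.0, 1.0 - n_interactions / max_interactions)
    # Equal weighting of the two terms is an arbitrary illustrative choice.
    return 0.5 * chunk_term + 0.5 * interaction_term

# Example: an explanation using 3 features with 2 pairwise interactions.
print(explainability_score(3, 2))  # 0.8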


Uncertainty Quantification in Multimodal Ensembles of Deep Learners

AAAI Conferences

Uncertainty quantification in deep learning is an active area of research that examines two primary types of uncertainty: epistemic uncertainty and aleatoric uncertainty. Epistemic uncertainty is caused by not having enough data to learn adequately; this creates volatility in the parameters and predictions, and thus uncertainty. High epistemic uncertainty can indicate that the model's prediction is based on a pattern with which it is not familiar. Aleatoric uncertainty measures the uncertainty due to noise in the data. Two additional active areas of research are multimodal learning and malware analysis. Multimodal learning takes into consideration distinct expressions of features, such as different representations (e.g., audio and visual data) or different sampling techniques, and has recently been used in malware analysis to combine multiple types of features. In this work, we present and analyze a novel technique to measure epistemic uncertainty from deep ensembles of modalities. Our results suggest that deep ensembles of modalities provide higher accuracy and lower uncertainty than the constituent single modalities and than a comparable hierarchical multimodal deep learner.
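A standard way to extract epistemic uncertainty from an ensemble is the mutual-information decomposition: the predictive entropy of the averaged output minus the average entropy of the individual members. The Python sketch below illustrates that computation under stated assumptions; the random Dirichlet "ensemble" stands in for trained per-modality models and is not the paper's setup.

# A minimal sketch of measuring epistemic uncertainty from an ensemble of
# modality-specific models, using the standard deep-ensemble decomposition
# (predictive entropy minus expected per-member entropy, i.e., mutual
# information). The ensemble here is simulated with random probabilities;
# in practice each row would come from one trained modality model.

import numpy as np

def epistemic_uncertainty(member_probs):
    """member_probs: (n_members, n_classes) softmax outputs for one input."""
    eps = 1e-12
    mean_p = member_probs.mean(axis=0)
    predictive_entropy = -np.sum(mean_p * np.log(mean_p + eps))
    expected_entropy = -np.mean(np.sum(member_probs * np.log(member_probs + eps), axis=1))
    return predictive_entropy - expected_entropy  # mutual information

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(2), size=3)  # e.g., 3 modality models, 2 classes
print(epistemic_uncertainty(probs))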


Assessing Modality Selection Heuristics to Improve Multimodal Machine Learning for Malware Detection

AAAI Conferences

With the growing usage of Android devices, security threats are also growing. While there are some existing malware detection methods, cybercriminals continue to develop ways to evade these security mechanisms; thus, malware detection systems also need to evolve to meet this challenge. This work is a step towards achieving that goal. Malware detection methods need as much information as possible about the potential malware, and a multimodal approach can help in this regard by combining differing aspects of an Android application. Using multimodal deep learning, it is possible to automatically learn a hierarchical representation for each modality and to give more weight to the more reliable modalities. Multiple modalities can improve classification by providing complementary information; however, using all available modalities does not necessarily maximize algorithm performance. Thus, multimodal machine learning could benefit from a mechanism to guide the selection of modalities to include in a multimodal model. This work uses a malware detection problem to compare multiple heuristics for this selection process and the assumptions behind them. Our experiments show that selecting modalities with low predictive correlation works better than the other examined heuristics.
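As a hedged illustration of the low-predictive-correlation heuristic, the sketch below computes the absolute Pearson correlation between every pair of modalities' validation-set prediction scores and returns the least correlated pair. The modality names (permissions, api_calls, opcodes) and the random scores are hypothetical placeholders, not the paper's feature sets.

# A sketch of the heuristic: pick the pair of modalities whose predictions
# are least correlated, on the assumption that decorrelated modalities
# contribute complementary information. Real scores would come from
# per-modality classifiers evaluated on a validation set.

import numpy as np
from itertools import combinations

def least_correlated_pair(scores):
    """scores: dict of modality name -> 1-D array of prediction scores."""
    best_pair, best_corr = None, np.inf
    for a, b in combinations(scores, 2):
        corr = abs(np.corrcoef(scores[a], scores[b])[0, 1])
        if corr < best_corr:
            best_pair, best_corr = (a, b), corr
    return best_pair, best_corr

rng = np.random.default_rng(0)
scores = {m: rng.random(100) for m in ("permissions", "api_calls", "opcodes")}
print(least_correlated_pair(scores))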


Secure Industrial Control System with Intrusion Detection

AAAI Conferences

Detecting intrusions and anomalies in Industrial Control Systems at early stages is important to prevent process failure. Operator errors, device or equipment failures, and other non-network events can drive a system into a critical state. Because such events manifest only indirectly in network traffic, a manually configured IDS that uses network traffic alone can generate false positives and false negatives. In this paper, we propose a novel approach that uses multimodal machine learning and incorporates both network data and device state information to improve detection accuracy. Our methodology can detect anomalies as well as their root causes, which is essential for an effective response. To protect device state data, we use a secure data container to store log records for devices in cyber-physical systems. The secure data container protects log records in transit and at rest, supports role-based and attribute-based access control, and protects against insider threats.
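One simple way to realize the multimodal idea is late fusion: train one classifier on network-traffic features and one on device-state features, then average their anomaly probabilities. The sketch below shows this under stated assumptions (random toy data, logistic regression as the base learner); it is an illustration, not the paper's actual architecture.

# A minimal late-fusion sketch: one classifier per modality, averaged
# probabilities for the final anomaly decision. Data is synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                  # 0 = normal, 1 = anomalous
net = rng.normal(y[:, None], 1.0, size=(200, 5))  # network-flow features
dev = rng.normal(y[:, None], 1.0, size=(200, 3))  # device-state features

net_clf = LogisticRegression().fit(net[:150], y[:150])
dev_clf = LogisticRegression().fit(dev[:150], y[:150])

# Late fusion: average the per-modality anomaly probabilities.
fused = (net_clf.predict_proba(net[150:])[:, 1] +
         dev_clf.predict_proba(dev[150:])[:, 1]) / 2
print("fused anomaly rate:", (fused > 0.5).mean())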


Biologically Extending the Gen 2 ANN Model

AAAI Conferences

In this paper, the generations of artificial neural networks (ANNs) are surveyed and the assumptions present in Gen 1 and Gen 2 ANNs are enumerated. In the process of reformulating the Gen 2 ANN, an extension was observed that could increase the biological plausibility of the model. The resulting model makes use of the neurological interneuron structures that provide inhibition and input gain control in the cortical regions of the brain. The resulting interneuron neural network (INN) is applied to the well-known MNIST dataset, where it outperforms an identical ANN. This application validates the derivation of the model and its associated backpropagation.
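The inhibition and gain-control motif can be illustrated, speculatively, as a dense layer whose excitatory activations are divisively normalized by a pooled "interneuron" signal. The sketch below is only an illustration of that biological motif under assumed parameter choices; it is not the paper's INN formulation or its backpropagation derivation.

# A speculative sketch of inhibition / input gain control: a standard dense
# layer followed by divisive normalization by pooled "interneuron" activity.

import numpy as np

def interneuron_layer(x, W, b, gain=1.0, eps=1e-6):
    """Dense pre-activation divisively inhibited by the pooled activity."""
    pre = x @ W + b
    excitation = np.maximum(pre, 0.0)      # principal-cell drive (ReLU)
    inhibition = gain * excitation.mean()  # pooled interneuron signal
    return excitation / (1.0 + inhibition + eps)

rng = np.random.default_rng(0)
x = rng.normal(size=8)
W, b = rng.normal(size=(8, 4)), np.zeros(4)
print(interneuron_layer(x, W, b))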


Detecting the Onset of a Network Layer DoS Attack with a Graph-Based Approach

AAAI Conferences

A denial-of-service (DoS) attack is a malicious act with the goal of interrupting access to a computer network. A DoS attack can cause the computers on the network to squander their resources serving illegitimate requests, disrupting the network's services to legitimate users. With a sophisticated DoS attack, it becomes difficult to distinguish malicious requests from legitimate ones. Since a network layer DoS attack can interrupt a network while causing collateral damage, it is vital to understand how to mitigate such attacks. Generally, approaches that implement distribution charts based on statistical analysis, or honeypots, have been applied to detect a DoS attack; however, by the time these methods trigger, the damage is usually already done. We hypothesize in this work that a graph-based approach can identify a DoS attack at its inception. A graph-based approach also allows us to analyze not only anomalies within an entity (such as a computer) but also anomalies in an entity's relationships with other entities, providing a rich source of contextual analysis. We demonstrate our proposed approach using a publicly available dataset.
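To make the graph-based intuition concrete: represent each time window of traffic as a directed graph of (source, destination) request edges, and raise an alert when a destination's in-degree spikes against its own history. In the sketch below, the window construction, the z-score test, and the toy flows are all illustrative assumptions rather than the paper's algorithm.

# A minimal sketch: per-window directed request graphs, with an alert when a
# destination's in-degree deviates sharply from its historical in-degrees.

from collections import Counter
from statistics import mean, stdev

def indegree_spikes(windows, threshold=3.0):
    """windows: list of [(src, dst), ...] edge lists, one per time window."""
    history = {}  # dst -> list of past in-degrees
    alerts = []
    for t, edges in enumerate(windows):
        indeg = Counter(dst for _, dst in edges)
        for dst, d in indeg.items():
            past = history.setdefault(dst, [])
            if len(past) >= 2:
                sd = stdev(past) or 1.0  # avoid divide-by-zero on flat history
                if (d - mean(past)) / sd > threshold:
                    alerts.append((t, dst, d))
            past.append(d)
    return alerts

normal = [[("a", "srv"), ("b", "srv")] for _ in range(5)]
attack = [[(f"bot{i}", "srv") for i in range(50)]]
print(indegree_spikes(normal + attack))  # expect an alert in the last window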


Report on the 29th International Florida Artificial Intelligence Research Society Conference (FLAIRS-29)

AI Magazine

The 29th International Florida Artificial Intelligence Research Society Conference (FLAIRS-29) was held May 16-18, 2016, at the Hilton Key Largo Resort in Key Largo, Florida, USA. The conference events included invited speakers, special tracks, and presentations of papers, posters, and awards. The conference chair was Bill Eberle from Tennessee Technological University. The program co-chairs were Zdravko Markov from Central Connecticut State University and Ingrid Russell from the University of Hartford. The special tracks were coordinated by Vasile Rus from the University of Memphis.


Invited Talk Abstracts

AAAI Conferences

Abstracts of the invited talks presented at the Twenty-Seventh International Florida Artificial Intelligence Research Society Conference (FLAIRS-27), held May 21-23, 2014, in Pensacola, Florida, USA.