Uncertainty Quantification in Multimodal Ensembles of Deep Learners
Brown, Katherine E. (Tennessee Technological University) | Bhuiyan, Farzana Ahamed (Tennessee Technological University) | Talbert, Douglas A. (Tennessee Technological University)
Uncertainty quantification in deep learning is an active area of research that examines two primary types of uncertainty: epistemic uncertainty and aleatoric uncertainty. Epistemic uncertainty arises when the model does not have enough data to learn adequately; this creates volatility in the model's parameters and predictions. High epistemic uncertainty can indicate that the model's prediction is based on a pattern with which it is not familiar. Aleatoric uncertainty measures the uncertainty due to noise in the data. Two additional active areas of research are multimodal learning and malware analysis. Multimodal learning takes into consideration distinct expressions of features, such as different representations (e.g., audio and visual data) or different sampling techniques. Multimodal learning has recently been used in malware analysis to combine multiple types of features. In this work, we present and analyze a novel technique to measure epistemic uncertainty from deep ensembles of modalities. Our results suggest that deep ensembles of modalities provide higher accuracy and lower uncertainty than the constituent single-modality models and than a comparable hierarchical multimodal deep learner.
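The abstract does not give the paper's exact uncertainty metric, but a common way to estimate epistemic uncertainty from a deep ensemble is the mutual information between the prediction and the ensemble member: the entropy of the averaged predictive distribution minus the mean per-member entropy. The sketch below illustrates this on raw probability vectors (the function name and example numbers are illustrative, not from the paper):

```python
import numpy as np

def ensemble_epistemic_uncertainty(member_probs):
    """Estimate epistemic uncertainty from an ensemble's softmax outputs.

    member_probs: array of shape (n_members, n_classes), each row one
    ensemble member's predicted class distribution for a single input.
    Returns (mean_probs, epistemic) where epistemic is the mutual
    information: total predictive entropy minus mean member entropy.
    Disagreement among members raises this value; agreement lowers it.
    """
    member_probs = np.asarray(member_probs, dtype=float)
    mean_probs = member_probs.mean(axis=0)
    eps = 1e-12  # guard against log(0)
    total_entropy = -np.sum(mean_probs * np.log(mean_probs + eps))
    mean_member_entropy = -np.sum(
        member_probs * np.log(member_probs + eps), axis=1
    ).mean()
    return mean_probs, total_entropy - mean_member_entropy

# Members that agree -> low epistemic uncertainty
agree = [[0.90, 0.10], [0.88, 0.12], [0.91, 0.09]]
# Members that disagree -> high epistemic uncertainty
disagree = [[0.95, 0.05], [0.05, 0.95], [0.50, 0.50]]

_, u_agree = ensemble_epistemic_uncertainty(agree)
_, u_disagree = ensemble_epistemic_uncertainty(disagree)
```

For a multimodal ensemble as described here, each member would be a network trained on a different modality (or modality subset), and the same disagreement-based measure applies to their pooled predictions.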
May-16-2020
- Genre:
- Research Report > New Finding (0.53)
- Industry:
- Information Technology > Security & Privacy (0.93)
- Technology: