Has the Deep Neural Network learned the Stochastic Process? A Wildfire Perspective

Kumar, Harshit, Kang, Beomseok, Chakraborty, Biswadeep, Mukhopadhyay, Saibal

arXiv.org Artificial Intelligence 

This paper presents the first systematic study of the evaluation of a Deep Neural Network (DNN) designed and trained to predict the evolution of a stochastic dynamical system, using wildfire prediction as a case study. We show that traditional evaluation methods based on threshold-based classification metrics and error-based scoring rules assess a DNN's ability to replicate the observed ground truth (GT), but do not measure the fidelity of the DNN's learning of the underlying stochastic process. To address this gap, we propose a new system property, Statistic-GT, representing the GT of the stochastic process, and an evaluation metric that exclusively assesses fidelity to Statistic-GT. Utilizing a synthetic dataset, we introduce a stochastic framework to characterize this property and establish criteria for a metric to be a valid measure of the proposed property. We formally show that Expected Calibration Error (ECE) tests the necessary condition for fidelity to Statistic-GT. We perform empirical experiments differentiating ECE's behavior from conventional metrics and demonstrate that ECE exclusively measures fidelity to the stochastic process. Extending our analysis to real-world wildfire data, we highlight the limitations of traditional evaluation methods and discuss the utility of evaluating fidelity to the stochastic process alongside existing metrics.
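To make the ECE-based evaluation concrete, here is a minimal sketch of the standard binned calibration estimator applied to probabilistic binary forecasts, such as per-cell fire-spread probabilities compared against an observed burn map. This is an illustrative implementation of the generic metric, not the authors' exact procedure; the function name, bin count, and synthetic data below are assumptions for the example.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned ECE: weighted average of |empirical frequency - mean confidence|
    over equal-width probability bins (weights = fraction of samples per bin)."""
    probs = np.asarray(probs, dtype=float).ravel()
    labels = np.asarray(labels, dtype=float).ravel()
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        # Last bin is closed on the right so probs == 1.0 are counted.
        in_bin = (probs >= lo) & ((probs < hi) if hi < 1.0 else (probs <= hi))
        if not in_bin.any():
            continue
        freq = labels[in_bin].mean()   # observed event frequency in this bin
        conf = probs[in_bin].mean()    # mean predicted probability in this bin
        ece += in_bin.mean() * abs(freq - conf)
    return ece

# Hypothetical usage: outcomes drawn from the predicted probabilities,
# so a well-calibrated forecast should yield an ECE near zero.
rng = np.random.default_rng(0)
p = rng.uniform(size=10_000)          # predicted probability a cell ignites
y = rng.uniform(size=10_000) < p      # simulated observed outcomes
print(expected_calibration_error(p, y))
```

Under this reading, a low ECE indicates that predicted probabilities match observed event frequencies, i.e., fidelity to the statistics of the stochastic process rather than to any single observed realization.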
