Automating concept-drift detection by self-evaluating predictive model degradation
Cerquitelli, Tania, Proto, Stefano, Ventura, Francesco, Apiletti, Daniele, Baralis, Elena
A key aspect of automating predictive machine learning entails the capability of properly triggering the update of the trained model. To this end, suitable automatic solutions to self-assess the prediction quality and the data-distribution drift between the original training set and the new data must be devised. In this paper, we propose a novel methodology to automatically detect prediction-quality degradation of machine learning models due to class-based concept drift, i.e., when new data contains samples that do not fit the set of class labels known by the currently trained predictive model. Experiments on synthetic and real-world public datasets show the effectiveness of the proposed methodology in automatically detecting and describing concept drift caused by changes in the class-label data distributions.
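To make the idea of class-based concept drift concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: a nearest-centroid "model" learns two known class labels, calibrates an unknown-sample distance threshold on its own training data, and raises a drift alarm when a new batch contains too many samples that fit none of the known classes. All names, thresholds, and the detection rule here are illustrative assumptions.

```python
import numpy as np

rng = np.random.RandomState(0)

# Training data: two known classes as Gaussian blobs (illustrative).
X0 = rng.randn(150, 2) * 0.5 + np.array([0.0, 0.0])
X1 = rng.randn(150, 2) * 0.5 + np.array([4.0, 4.0])
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 150 + [1] * 150)

# Nearest-centroid "model": one centroid per known class label.
centroids = np.stack([X_train[y_train == k].mean(axis=0) for k in (0, 1)])

def nearest_dist(X):
    """Distance from each sample to its closest known-class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1)

# Self-assessment: calibrate an "unknown sample" threshold on training data
# (95th percentile of within-class distances; a hypothetical choice).
tau = np.percentile(nearest_dist(X_train), 95)

def drift_alarm(X_new, max_unknown_frac=0.2):
    """Flag class-based drift when too many new samples fit no known class."""
    return float((nearest_dist(X_new) > tau).mean()) > max_unknown_frac

# A batch from the same distribution as training: no alarm.
X_same = rng.randn(100, 2) * 0.5 + np.array([4.0, 4.0])
# A batch from an unseen class, far from both known centroids: alarm.
X_drift = rng.randn(100, 2) * 0.5 + np.array([-4.0, 4.0])

print(drift_alarm(X_same))   # False
print(drift_alarm(X_drift))  # True
```

The sketch only mimics the triggering behaviour described in the abstract: the model self-assesses on incoming data and signals when retraining is likely needed because new class labels have appeared.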
Jul-18-2019