LCE: An Augmented Combination of Bagging and Boosting in Python
Fauvel, Kevin, Fromont, Élisa, Masson, Véronique, Faverdin, Philippe, Termier, Alexandre
The package implements the Local Cascade Ensemble (LCE), a machine learning method that further enhances the prediction performance of the current state-of-the-art methods Random Forest and XGBoost. LCE combines their strengths and adopts a complementary diversification approach to obtain a better-generalizing predictor. The package is compatible with scikit-learn and can therefore interact with scikit-learn pipelines and model selection tools.
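As a minimal sketch of what scikit-learn compatibility buys you: an LCE estimator follows the standard scikit-learn estimator API, so it can be dropped into pipelines and model selection tools. The snippet below uses scikit-learn's `RandomForestClassifier` as a stand-in estimator so it runs with scikit-learn alone; the LCE package's classifier would slot into the same place.

```python
# Illustrative sketch of scikit-learn interoperability.
# RandomForestClassifier is a stand-in; an LCE estimator, following the
# same fit/predict API, would be dropped in at the same spot.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# Any scikit-learn-compatible estimator can be the final pipeline step.
pipeline = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))

# Model selection tools such as cross-validation work out of the box.
scores = cross_val_score(pipeline, X, y, cv=5)
print(round(scores.mean(), 3))
```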
Local Cascade Ensemble for Multivariate Data Classification
Fauvel, Kevin, Fromont, Élisa, Masson, Véronique, Faverdin, Philippe, Termier, Alexandre
We present LCE, a Local Cascade Ensemble for traditional (tabular) multivariate data classification, and its extension LCEM for Multivariate Time Series (MTS) classification. LCE is a new hybrid ensemble method that combines an explicit boosting-bagging approach, to handle the bias-variance trade-off faced by machine learning models, with an implicit divide-and-conquer approach, to individualize classifier errors on different parts of the training data. Our evaluation first shows that the hybrid ensemble method LCE outperforms the state-of-the-art classifiers on the UCI datasets and that LCEM outperforms the state-of-the-art MTS classifiers on the UEA datasets. Furthermore, LCEM provides faithful explainability by design and manifests robust performance when faced with challenges arising from continuous data collection (different MTS lengths, missing data, and noise).
XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification
Fauvel, Kevin, Lin, Tao, Masson, Véronique, Fromont, Élisa, Termier, Alexandre
We present XCM, an eXplainable Convolutional neural network for Multivariate time series classification. XCM is a new compact convolutional neural network which extracts, in parallel, information relative to the observed variables and to time from the input data. Thus, the XCM architecture enables faithful explainability based on a post-hoc model-specific method (Gradient-weighted Class Activation Mapping), which identifies the observed variables and timestamps of the input data that are important for predictions. Our evaluation first shows that XCM outperforms the state-of-the-art multivariate time series classifiers on both the large and small public UEA datasets. Furthermore, following the illustration of the performance and explainability of XCM on a synthetic dataset, we present how XCM can outperform the current most accurate state-of-the-art algorithm on a real-world application while enhancing explainability by providing faithful and more informative explanations.
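The Grad-CAM combination step mentioned above can be sketched in a few lines: each feature map is weighted by its global-average-pooled gradient, the weighted maps are summed, and a ReLU keeps only positive contributions. This is a generic illustration of Gradient-weighted Class Activation Mapping on arbitrary activations, not XCM's actual implementation; the shapes and random inputs are assumptions for the sketch.

```python
import numpy as np

# Hypothetical convolutional activations A of shape (K, V, T): K feature
# maps over V observed variables and T timestamps, plus the gradients of
# the predicted class score with respect to those activations.
rng = np.random.default_rng(0)
K, V, T = 4, 3, 8
activations = rng.random((K, V, T))
gradients = rng.random((K, V, T))

# Grad-CAM: weight each feature map by its global-average-pooled
# gradient, sum over feature maps, then apply a ReLU.
weights = gradients.mean(axis=(1, 2))              # shape (K,)
cam = np.einsum("k,kvt->vt", weights, activations)  # shape (V, T)
cam = np.maximum(cam, 0)                            # keep positive evidence

# cam[v, t] indicates how important observed variable v at timestamp t
# was for the prediction.
print(cam.shape)
```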
A Performance-Explainability Framework to Benchmark Machine Learning Methods: Application to Multivariate Time Series Classifiers
Fauvel, Kevin, Masson, Véronique, Fromont, Élisa
In order to match these requirements and conduct experiments to validate the usefulness of the explanations by the end-users, there is a need to have a comprehensive assessment of the explainability of the existing methods. Our research aims to propose a new performance-explainability analytical framework to assess and benchmark machine learning methods.