Establishing a Unified Evaluation Framework for Human Motion Generation: A Comparative Analysis of Metrics
Ismail-Fawaz, Ali, Devanne, Maxime, Berretti, Stefano, Weber, Jonathan, Forestier, Germain
Evaluating generative models is one of the most challenging tasks in machine learning (Naeem et al., 2020). This challenge is largely absent in discriminative models, where evaluation primarily involves comparison with ground truth data. For generative models, however, evaluation requires quantifying how well the generated samples match the real ones. A common method for evaluating generative models is human judgment, such as Mean Opinion Scores (MOS) (Streijl et al., 2016). However, this type of evaluation assumes a uniform perception among users of what constitutes ideal and realistic generation, which is often not the case. For this reason, generative models require quantitative evaluation based on measures of similarity between real and generated samples. This similarity is quantified along two dimensions: fidelity and diversity. On the one hand, fidelity measures how closely the generated samples match the real ones at the marginal distribution scale. On the other hand, diversity measures how varied a set of samples is, indicating the extent to which the diversity of the generated set aligns with that of the real set.
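For concreteness, the sketch below computes the density (fidelity) and coverage (diversity) metrics proposed by Naeem et al. (2020) using numpy and scikit-learn. It assumes real and generated samples are already embedded as fixed-length feature vectors; the neighborhood size k is an illustrative choice.

```python
# Minimal sketch of the density (fidelity) and coverage (diversity)
# metrics of Naeem et al. (2020), assuming samples are feature vectors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def density_coverage(real, fake, k=5):
    """real: (N, d) array, fake: (M, d) array, k: neighborhood size."""
    # Radius of each real sample = distance to its k-th nearest real neighbor.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(real)  # +1 skips the point itself
    radii = nn.kneighbors(real)[0][:, -1]               # (N,)

    # Pairwise distances between every real and every fake sample.
    dists = np.linalg.norm(real[:, None, :] - fake[None, :, :], axis=-1)  # (N, M)
    inside = dists <= radii[:, None]        # is fake j inside the ball of real i?

    density = inside.sum() / (k * fake.shape[0])   # fidelity: how densely fakes
                                                   # populate the real manifold
    coverage = inside.any(axis=1).mean()           # diversity: fraction of real
                                                   # balls hit by at least one fake
    return density, coverage
```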
Finding Foundation Models for Time Series Classification with a PreText Task
Ismail-Fawaz, Ali, Devanne, Maxime, Berretti, Stefano, Weber, Jonathan, Forestier, Germain
Over the past decade, Time Series Classification (TSC) has gained increasing attention. While various methods have been explored, deep learning, particularly through Convolutional Neural Networks (CNNs), stands out as an effective approach. However, due to the limited availability of training data, defining a foundation model for TSC that overcomes the overfitting problem is still a challenging task. The UCR archive, encompassing a wide spectrum of datasets ranging from motion recognition to ECG-based heart disease detection, serves as a prime example for exploring this issue across diverse TSC scenarios. In this paper, we address the overfitting challenge by introducing pre-trained domain foundation models. A key aspect of our methodology is a novel pretext task that spans multiple datasets. This task is designed to identify the originating dataset of each time series sample, with the goal of creating flexible convolution filters that can be applied across different datasets. The research process consists of two phases: a pre-training phase where the model acquires general features through the pretext task, and a subsequent fine-tuning phase for specific dataset classification. Our extensive experiments on the UCR archive demonstrate that this pre-training strategy significantly outperforms conventional training without pre-training. It effectively reduces overfitting on small datasets and provides an efficient route for adapting these models to new datasets, thus advancing the capabilities of deep learning in TSC.
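The two-phase strategy can be illustrated with a minimal PyTorch sketch; the backbone architecture, layer sizes, and training loops below are illustrative assumptions, not the paper's exact model.

```python
# Hedged sketch of the two-phase strategy: pre-train a 1D CNN to predict
# which dataset a series comes from (the pretext task), then fine-tune the
# same backbone on a single dataset's class labels.
import torch
import torch.nn as nn

class Backbone(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=8, padding="same"), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding="same"), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),        # global average pooling
        )

    def forward(self, x):                   # x: (B, 1, length)
        return self.net(x).squeeze(-1)      # (B, hidden)

def train_phase(backbone, loader, n_outputs, epochs=10):
    """Shared loop: pretext phase uses dataset IDs as targets,
    fine-tuning phase uses class labels; only the head changes."""
    head = nn.Linear(64, n_outputs)         # 64 matches the default hidden size
    opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()))
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, target in loader:
            loss = loss_fn(head(backbone(x)), target)
            opt.zero_grad(); loss.backward(); opt.step()
    return backbone, head

# Phase 1: targets are dataset IDs drawn from all datasets pooled together.
# Phase 2: reuse the returned backbone, train a fresh head on one dataset.
```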
ShapeDBA: Generating Effective Time Series Prototypes using ShapeDTW Barycenter Averaging
Ismail-Fawaz, Ali, Fawaz, Hassan Ismail, Petitjean, François, Devanne, Maxime, Weber, Jonathan, Berretti, Stefano, Webb, Geoffrey I., Forestier, Germain
Time series data can be found in almost every domain, ranging from the medical field to manufacturing and wireless communication. Generating realistic and useful exemplars and prototypes is a fundamental data analysis task, and in this paper we investigate a novel approach to it for time series data. Our approach uses a new form of time series average, the ShapeDTW Barycentric Average. Existing time series prototyping approaches rely on the Dynamic Time Warping (DTW) similarity measure, such as DTW Barycenter Averaging (DBA) and SoftDBA. These approaches suffer from a common problem: they generate out-of-distribution artifacts in their prototypes. This is mostly caused by the DTW variant used and its inability to detect neighborhood similarities; instead, it detects absolute similarities. Our proposed method, ShapeDBA, uses the ShapeDTW variant of DTW, which overcomes this issue. We chose time series clustering, a popular form of time series analysis, to evaluate the outcome of ShapeDBA compared to the other prototyping approaches. Coupled with the k-means clustering algorithm and evaluated on a total of 123 datasets from the UCR archive, our proposed averaging approach achieves new state-of-the-art results in terms of Adjusted Rand Index.
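As a point of reference, the snippet below builds DBA and SoftDBA prototypes with tslearn's publicly available implementations on a toy cluster of noisy sine waves. ShapeDBA itself is not in tslearn; it replaces the DTW alignment step with shapeDTW, which matches points by the shape of their local neighborhood rather than their absolute values.

```python
# Baseline DTW-based prototypes on a toy cluster of noisy sine waves.
import numpy as np
from tslearn.barycenters import dtw_barycenter_averaging, softdtw_barycenter

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100)
# Ten noisy, slightly phase-shifted sine waves, shape (10, 100, 1).
X = np.stack([np.sin(t + rng.normal(0, 0.3)) + rng.normal(0, 0.1, t.size)
              for _ in range(10)])[..., None]

dba_proto = dtw_barycenter_averaging(X, max_iter=30)   # DBA prototype
soft_proto = softdtw_barycenter(X, gamma=1.0)          # SoftDBA prototype
# Both prototypes can exhibit out-of-distribution artifacts (flattened or
# spiky segments) caused by DTW's point-wise alignment; ShapeDBA's
# neighborhood-aware alignment is designed to avoid them.
```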
4DSR-GCN: 4D Video Point Cloud Upsampling using Graph Convolutional Networks
Berlincioni, Lorenzo, Berretti, Stefano, Bertini, Marco, Del Bimbo, Alberto
Time-varying sequences of 3D point clouds, or 4D point clouds, are now being acquired at an increasing pace in several applications (e.g., LiDAR in autonomous or assisted driving). In many cases, such volumes of data are transmitted, thus requiring proper compression tools to reduce either the resolution or the bandwidth. In this paper, we propose a new solution for upscaling and restoration of time-varying 3D video point clouds after they have been heavily compressed. Given the growing relevance of 3D applications, we focus on a model allowing user-side upscaling and artifact removal for 3D video point clouds. Our model consists of a specifically designed Graph Convolutional Network (GCN) that combines Dynamic Edge Convolution and Graph Attention Networks for feature aggregation in a Generative Adversarial setting. Taking inspiration from PointNet++, we present a different way to sample dense point clouds, with the intent of making these modules work in synergy to provide each node with enough features about its neighbourhood to later generate new vertices. Compared to other solutions in the literature that address the same task, our proposed model obtains comparable results in terms of reconstruction quality, while using a substantially lower number of parameters (about 300KB), making our solution deployable on edge computing devices such as LiDAR sensors.
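A rough sketch of the kind of feature extractor described above, written with PyTorch Geometric's DynamicEdgeConv and GATConv layers (which rely on torch-cluster for k-NN graphs). The layer sizes, neighborhood size k, and the offset-regression head are our own illustrative assumptions, not the paper's exact network.

```python
# Dynamic Edge Convolution builds per-point neighborhood features on a
# k-NN graph; Graph Attention then aggregates them before new points are
# proposed as 3D offsets from each input point (the upsampling step).
import torch
import torch.nn as nn
from torch_geometric.nn import DynamicEdgeConv, GATConv, knn_graph

class UpsamplerBlock(nn.Module):
    def __init__(self, k=16, hidden=64, ratio=4):
        super().__init__()
        self.k, self.ratio = k, ratio
        # The EdgeConv MLP sees [x_i, x_j - x_i], hence the doubled input size.
        self.edge_conv = DynamicEdgeConv(
            nn.Sequential(nn.Linear(2 * 3, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden)), k=k)
        self.gat = GATConv(hidden, hidden, heads=1)
        self.head = nn.Linear(hidden, ratio * 3)   # `ratio` new offsets per point

    def forward(self, pos, batch=None):            # pos: (N, 3) coordinates
        h = self.edge_conv(pos, batch)             # neighborhood features, (N, hidden)
        edge_index = knn_graph(pos, k=self.k, batch=batch)
        h = self.gat(h, edge_index)                # attention-weighted aggregation
        offsets = self.head(h).view(-1, self.ratio, 3)
        return (pos[:, None, :] + offsets).reshape(-1, 3)  # (N * ratio, 3)

# Usage on a random cloud of 1024 points:
# dense = UpsamplerBlock()(torch.rand(1024, 3))    # -> (4096, 3)
```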
An Approach to Multiple Comparison Benchmark Evaluations that is Stable Under Manipulation of the Comparate Set
Ismail-Fawaz, Ali, Dempster, Angus, Tan, Chang Wei, Herrmann, Matthieu, Miller, Lynn, Schmidt, Daniel F., Berretti, Stefano, Weber, Jonathan, Devanne, Maxime, Forestier, Germain, Webb, Geoffrey I.
The measurement of progress using benchmark evaluations is ubiquitous in computer science and machine learning. However, common approaches to analyzing and presenting the results of benchmark comparisons of multiple algorithms over multiple datasets, such as the critical difference diagram introduced by Demšar (2006), have important shortcomings and, we show, are open to both inadvertent and intentional manipulation. To address these issues, we propose a new approach to presenting the results of benchmark comparisons, the Multiple Comparison Matrix (MCM), that prioritizes pairwise comparisons and precludes the means of manipulating experimental results in existing approaches. MCM can be used to show the results of an all-pairs comparison, or to show the results of a comparison between one or more selected algorithms and the state of the art. MCM is implemented in Python and is publicly available.
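To make the idea concrete, here is a from-scratch illustration of the pairwise statistics an MCM-style analysis rests on; this is not the authors' published implementation, just a numpy sketch counting per-dataset wins, ties, and losses and the mean score difference for each pair of algorithms.

```python
# From-scratch sketch of pairwise benchmark comparison statistics.
import numpy as np

def pairwise_matrix(scores, names):
    """scores: (n_algorithms, n_datasets) array of per-dataset accuracies."""
    for i in range(len(names)):
        for j in range(len(names)):
            if i == j:
                continue
            diff = scores[i] - scores[j]                 # per-dataset differences
            wins, ties, losses = (diff > 0).sum(), (diff == 0).sum(), (diff < 0).sum()
            print(f"{names[i]} vs {names[j]}: "
                  f"{wins}/{ties}/{losses} (mean diff {diff.mean():+.3f})")

pairwise_matrix(np.array([[0.91, 0.85, 0.78],
                          [0.89, 0.85, 0.80]]), ["A", "B"])
```

Because each pairwise cell depends only on the two algorithms involved, adding or removing other comparates cannot change it, which is the stability property targeted here.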