Mouchère, Harold
Local and Global Graph Modeling with Edge-weighted Graph Attention Network for Handwritten Mathematical Expression Recognition
Xie, Yejing, Zanibbi, Richard, Mouchère, Harold
Compared to typeset mathematical expressions (e.g., LaTeX), handwritten mathematical expressions offer greater ease of use for humans but pose a greater challenge for machine recognition due to variations in individual writing styles and habits. Handwritten Mathematical Expression Recognition (HMER), which involves converting handwritten math into a markup language for easier computer processing and rendering, is a challenging yet promising field with a wide range of potential applications. Compared to Optical Character Recognition (OCR), recognizing handwritten manuscripts is more difficult because of the wide variation in handwriting styles. HMER not only faces the common challenges of handwriting recognition but must also deal with the added complexity of interpreting the 2D structure of mathematical expressions. Depending on the type of input data, HMER can be categorized into Online HMER and Offline HMER. Online HMER processes sequences of temporal trajectories captured by digital devices such as tablets and digital pens; the online data are segmented into individual strokes at pen-down and pen-up events. Offline expressions, in contrast, are static images collected by a scanner, camera, or smartphone.
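As a rough illustration of the online setting described above, here is a minimal Python sketch (not taken from the paper) of how a raw trajectory of (x, y, pen_down) samples, a point format assumed purely for illustration, can be segmented into strokes at pen-up events.

from typing import List, Tuple

Point = Tuple[float, float]          # (x, y) coordinate sampled by the device
Sample = Tuple[float, float, bool]   # (x, y, pen_down) raw trajectory sample


def segment_into_strokes(trajectory: List[Sample]) -> List[List[Point]]:
    """Group consecutive pen-down samples into strokes; a pen-up ends a stroke."""
    strokes: List[List[Point]] = []
    current: List[Point] = []
    for x, y, pen_down in trajectory:
        if pen_down:
            current.append((x, y))
        elif current:                 # pen lifted: close the current stroke
            strokes.append(current)
            current = []
    if current:                       # trailing stroke without a final pen-up
        strokes.append(current)
    return strokes

Each resulting stroke can then be fed to an online recognizer, whereas an offline recognizer would instead start from a rendered or scanned image of the same expression.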
Handwritten Text Recognition from Crowdsourced Annotations
Tarride, Solène, Faine, Tristan, Boillet, Mélodie, Mouchère, Harold, Kermorvant, Christopher
In this paper, we explore different ways of training a model for handwritten text recognition when multiple imperfect or noisy transcriptions are available. We consider various training configurations, such as selecting a single transcription, retaining all transcriptions, or computing an aggregated transcription from all available annotations. In addition, we evaluate the impact of quality-based data selection, where samples with low agreement are removed from the training set. Our experiments are carried out on municipal registers of the city of Belfort (France) written between 1790 and 1946. The results show that computing a consensus transcription or training on multiple transcriptions are good alternatives. However, selecting training samples based on the degree of agreement between annotators introduces a bias in the training data and does not improve the results. Our dataset is publicly available on Zenodo: https://zenodo.org/record/8041668.
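To make the kinds of choices involved more concrete, below is a minimal Python sketch, not the paper's actual pipeline, that measures annotator agreement with a character error rate (CER) and selects a representative transcription as the one closest to all the others (a simple medoid, cruder than alignment-based consensus methods). The function names and the medoid heuristic are assumptions made for this sketch.

from typing import List


def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]


def mean_cer(candidate: str, others: List[str]) -> float:
    """Average CER of `candidate` against the other annotations of the same line."""
    if not others:
        return 0.0
    return sum(levenshtein(candidate, o) / max(len(o), 1) for o in others) / len(others)


def pick_representative(transcriptions: List[str]) -> str:
    """Return the transcription with the lowest mean CER against all the others."""
    best = min(
        range(len(transcriptions)),
        key=lambda i: mean_cer(transcriptions[i],
                               transcriptions[:i] + transcriptions[i + 1:]),
    )
    return transcriptions[best]

The same mean-CER score could also serve as an agreement measure for quality-based filtering, although, as reported above, such filtering may bias the training data rather than help.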
Metrics for saliency map evaluation of deep learning explanation methods
Gomez, Tristan, Fréour, Thomas, Mouchère, Harold
Due to the black-box nature of deep learning models, there has been a recent development of solutions for the visual explanation of CNNs. Given the high cost of user studies, metrics are necessary to compare and evaluate these different methods. In this paper, we critically analyze the Deletion Area Under Curve (DAUC) and Insertion Area Under Curve (IAUC) metrics proposed by Petsiuk et al. (2018). These metrics were designed to evaluate the faithfulness of saliency maps generated by generic methods such as Grad-CAM or RISE. First, we show that the actual saliency score values given by the saliency map are ignored, as only the ranking of the scores is taken into account. This shows that these metrics are insufficient by themselves, as the visual appearance of a saliency map can change significantly without the ranking of the scores being modified. Second, we argue that during the computation of DAUC and IAUC, the model is presented with images that are out of the training distribution, which might lead to unreliable behavior of the model being explained. To complement DAUC/IAUC, we propose new metrics that quantify the sparsity and the calibration of explanation methods, two previously unstudied properties. Finally, we give general remarks about the metrics studied in this paper and discuss how to evaluate them in a user study.
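For readers unfamiliar with the Deletion metric, the following Python sketch, a simplification under stated assumptions rather than the authors' code, shows the usual procedure: pixels are masked in decreasing order of saliency, the model's score for the target class is recorded at each step, and DAUC is the area under that curve. The `model` callable and the zero baseline are assumptions; note how only the ranking of the saliency values enters the computation, which is the first criticism raised above.

import numpy as np


def deletion_auc(model, image: np.ndarray, saliency: np.ndarray,
                 target_class: int, n_steps: int = 50) -> float:
    """model: callable mapping an (H, W, C) image to a vector of class scores.
    image: (H, W, C) input; saliency: (H, W) map where higher means more important."""
    h, w = saliency.shape
    order = np.argsort(saliency.ravel())[::-1]        # most salient pixels first
    per_step = max(1, (h * w) // n_steps)
    current = image.copy()
    scores = [model(current)[target_class]]
    for start in range(0, h * w, per_step):
        idx = order[start:start + per_step]
        rows, cols = np.unravel_index(idx, (h, w))
        current[rows, cols, :] = 0.0                  # "delete" pixels (zero baseline)
        scores.append(model(current)[target_class])
    # Area under the score-vs-fraction-deleted curve, normalized to [0, 1].
    return float(np.trapz(scores, dx=1.0 / (len(scores) - 1)))

Because the deleted pixels are replaced by a constant baseline, the intermediate images quickly fall outside the training distribution, which is the second issue discussed in the paper; the sparsity and calibration metrics proposed by the authors are complementary and are not reproduced here.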