
Collaborating Authors

Chapuis, Emile


NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation

arXiv.org Artificial Intelligence

Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, data cards and robustness analysis results are publicly available in the NL-Augmenter repository (https://github.com/GEM-benchmark/NL-Augmenter).
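To make the transformation/filter split concrete, here is a minimal Python sketch of the two kinds of operations the framework distinguishes. The base classes, method names, and the toy typo transformation are illustrative stand-ins, not NL-Augmenter's exact interface; the repository linked above defines the real one.

```python
# Illustrative sketch of a transformation (perturbs data) and a filter
# (selects a data split). Class and method names are simplified stand-ins,
# not the exact NL-Augmenter interface.
import random
from typing import List


class SentenceTransformation:
    """A transformation maps one sentence to one or more perturbed variants."""

    def generate(self, sentence: str) -> List[str]:
        raise NotImplementedError


class SentenceFilter:
    """A filter is a boolean predicate that carves out a data split."""

    def filter(self, sentence: str) -> bool:
        raise NotImplementedError


class ButterFingers(SentenceTransformation):
    """Replaces characters with keyboard neighbours to simulate typos."""

    NEIGHBOURS = {"a": "qs", "e": "wr", "o": "ip", "t": "ry"}

    def __init__(self, prob: float = 0.05, seed: int = 0):
        self.prob = prob
        self.rng = random.Random(seed)

    def generate(self, sentence: str) -> List[str]:
        out = [
            self.rng.choice(self.NEIGHBOURS[c])
            if c in self.NEIGHBOURS and self.rng.random() < self.prob
            else c
            for c in sentence
        ]
        return ["".join(out)]


class ShortSentenceFilter(SentenceFilter):
    """Keeps only sentences below a token budget."""

    def __init__(self, max_tokens: int = 20):
        self.max_tokens = max_tokens

    def filter(self, sentence: str) -> bool:
        return len(sentence.split()) < self.max_tokens


print(ButterFingers(prob=0.2).generate("the model is robust to noise"))
```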


Improving Multimodal fusion via Mutual Dependency Maximisation

arXiv.org Artificial Intelligence

Multimodal sentiment analysis is a trending area of research, and multimodal fusion is one of its most active topics. Acknowledging that humans communicate through a variety of channels (i.e., visual, acoustic, linguistic), multimodal systems aim at integrating different unimodal representations into a synthetic one. So far, considerable effort has been made on developing complex architectures allowing the fusion of these modalities. However, such systems are mainly trained by minimising simple losses such as L1 or cross-entropy. In this work, we investigate unexplored penalties and propose a set of new objectives that measure the dependency between modalities. We demonstrate that our new penalties lead to a consistent improvement (up to 4.3 points of accuracy) across a large variety of state-of-the-art models on two well-known sentiment analysis datasets: CMU-MOSI and CMU-MOSEI. Our method not only achieves a new SOTA on both datasets but also produces representations that are more robust to modality drops. Finally, a by-product of our methods is a statistical network which can be used to interpret the high-dimensional representations learnt by the model.
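The abstract does not spell out the penalties, but a standard way to measure the dependency between two modalities with a trainable statistical network is a MINE-style Donsker-Varadhan lower bound on mutual information. The PyTorch sketch below is one plausible instantiation under that assumption, not necessarily the paper's exact objective.

```python
# Sketch of a mutual-dependency penalty between two modality embeddings,
# using a MINE-style Donsker-Varadhan lower bound on mutual information.
# An assumed instantiation for illustration, not the paper's exact loss.
import torch
import torch.nn as nn


class StatisticsNetwork(nn.Module):
    """T(x, y): scores joint pairs against product-of-marginals pairs."""

    def __init__(self, dim_x: int, dim_y: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_y, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)


def mutual_dependency_penalty(t_net: StatisticsNetwork,
                              x: torch.Tensor,
                              y: torch.Tensor) -> torch.Tensor:
    """Negative DV bound on I(X; Y); minimising it maximises dependency.

    x, y: (batch, dim) unimodal representations aligned by sample index.
    """
    joint = t_net(x, y).mean()                       # E_joint[T]
    y_perm = y[torch.randperm(y.size(0))]            # break the pairing
    log_mean_exp = torch.logsumexp(t_net(x, y_perm), dim=0) \
        - torch.log(torch.tensor(float(y.size(0))))  # log E_marg[e^T]
    return -(joint - log_mean_exp)


# Usage (hypothetical names): add the scaled penalty to the task loss, e.g.
# total_loss = task_loss + lam * mutual_dependency_penalty(t_net, h_text, h_audio)
```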


Code-switched inspired losses for generic spoken dialog representations

arXiv.org Artificial Intelligence

A crucial step in conversational AI is the identification of the underlying information in the user's utterance (e.g., communicative intent or dialog acts, and emotions). This requires modeling utterance-level information (Mitkov, 2014; Williams et al., 2014), to capture the immediate nuances of the user utterance, and discourse-level features (Thornbury and Slade, 2006), to capture patterns over long ranges of the conversation. An added difficulty is that most people in the world are bilingual (Grosjean and Li, 2013): progress on these systems is therefore limited by their inability to process more than one language, English being the predominant one. While there has been a growing interest in pretraining for dialog (Mehri et al., 2019; Zhang et al., 2019d), the focus has mainly been on English datasets, so these works cannot be directly applied to our multilingual setting. Additionally, available multilingual pretraining objectives (Lample and Conneau, 2019; Liu et al., 2020; Xue et al., 2020; Qi et al., 2021) face two main limitations when applied to dialog modeling: (1) they are a generalization of monolingual objectives that use flat input text, whereas hierarchy has been shown to be a powerful prior for dialog modeling; this reflects the structure of dialog itself, where, for example, context plays an essential role in the labeling of dialog acts. A toy contrast between the flat and hierarchical views is sketched below.
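The hierarchy argument is easy to see on a small example: a flat objective consumes the dialog as one undifferentiated token stream, while a hierarchical model keeps utterances as units and can attach discourse context to each one. Whitespace tokenisation and the sample dialog are purely illustrative.

```python
# Toy contrast between flat and hierarchical views of the same dialog.
dialog = [
    ("A", "where is the station ?"),
    ("B", "two blocks north ."),
    ("A", "thanks !"),
]

# Flat view (what flat-text pretraining objectives consume):
# one token stream, utterance boundaries erased.
flat = [tok for _, utt in dialog for tok in utt.split()]

# Hierarchical view: utterances kept as units. An utterance-level encoder
# reads each inner list; a discourse-level encoder then reads the sequence
# of pooled utterance representations.
hierarchical = [utt.split() for _, utt in dialog]

print(flat)          # ['where', 'is', ..., 'thanks', '!']
print(hierarchical)  # [['where', ...], ['two', ...], ['thanks', '!']]
```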


Hierarchical Pre-training for Sequence Labelling in Spoken Dialog

arXiv.org Artificial Intelligence

Sequence labelling tasks like Dialog Act and Emotion/Sentiment identification are a key component of spoken dialog systems. In this work, we propose a new approach to learn generic representations adapted to spoken dialog, which we evaluate on a new benchmark we call the Sequence labellIng evaLuatIon benChmark fOr spoken laNguagE (SILICONE). SILICONE is model-agnostic and contains 10 different datasets of various sizes. We obtain our representations with a hierarchical encoder based on transformer architectures, for which we extend two well-known pre-training objectives. Pre-training is performed on OpenSubtitles: a large corpus of spoken dialog containing over 2.3 billion tokens. We demonstrate how hierarchical encoders achieve competitive results with consistently fewer parameters compared to state-of-the-art models, and we show their importance for both pre-training and fine-tuning.
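Below is a minimal PyTorch sketch of the two-level idea: a word-level transformer encodes each utterance, and a dialog-level transformer contextualises the pooled utterance vectors, yielding one representation per utterance for sequence labelling. Hyperparameters, mean pooling, and layer counts are illustrative choices, not the paper's exact configuration.

```python
# Minimal sketch of a hierarchical encoder for spoken dialog: a word-level
# transformer per utterance, then a dialog-level transformer over the
# pooled utterance vectors. Illustrative, not the paper's architecture.
import torch
import torch.nn as nn


class HierarchicalDialogEncoder(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 256,
                 nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        word_layer = nn.TransformerEncoderLayer(d_model, nhead,
                                                batch_first=True)
        self.utterance_encoder = nn.TransformerEncoder(word_layer, num_layers)
        dialog_layer = nn.TransformerEncoderLayer(d_model, nhead,
                                                  batch_first=True)
        self.dialog_encoder = nn.TransformerEncoder(dialog_layer, num_layers)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, n_utterances, n_tokens)
        b, u, t = token_ids.shape
        words = self.embed(token_ids.view(b * u, t))   # (b*u, t, d)
        words = self.utterance_encoder(words)          # word-level pass
        utt_vecs = words.mean(dim=1).view(b, u, -1)    # pool per utterance
        return self.dialog_encoder(utt_vecs)           # (b, u, d)


# One vector per utterance, ready for a position-wise labelling head
# (e.g. dialog acts or emotion classes).
enc = HierarchicalDialogEncoder(vocab_size=1000)
out = enc(torch.randint(0, 1000, (2, 5, 12)))
print(out.shape)  # torch.Size([2, 5, 256])
```

A practical upside of this layout, consistent with the parameter-count claim above, is that the word-level encoder is shared across utterances, so depth at the discourse level comes cheap relative to widening a single flat encoder.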