Colombo, Pierre
Code-switched inspired losses for generic spoken dialog representations
Chapuis, Emile, Colombo, Pierre, Labeau, Matthieu, Clavel, Chloe
A crucial step in conversational AI is the identification of the underlying information in the user's utterance (e.g. communicative intent or dialog acts, and emotions). This requires modeling utterance-level information (Mitkov, 2014; Williams et al., 2014), to capture the immediate nuances of the user utterance, and discourse-level features (Thornbury and Slade, 2006), to capture patterns over long ranges of the conversation. An added difficulty is that most people in the world are bilingual (Grosjean and Li, 2013): progress on these systems is therefore limited by their inability to process more than one language (English being the most studied). While there has been a growing interest in pretraining for dialog (Mehri et al., 2019; Zhang et al., 2019d), the focus has mainly been on English datasets, so these works cannot be directly applied to a multilingual setting. Additionally, available multilingual pretraining objectives (Lample and Conneau, 2019; Liu et al., 2020; Xue et al., 2020; Qi et al., 2021) face two main limitations when applied to dialog modeling: (1) they are a generalization of monolingual objectives that use flat input text, whereas hierarchy has been shown to be a powerful prior for dialog modeling; this reflects the structure of a dialog itself, where, for example, context plays an essential role in the labeling of dialog acts.
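To make the multilingual angle concrete, here is a minimal sketch of a code-switching augmentation that could feed a standard pretraining loss such as masked language modelling; the bilingual dictionary and swap probability are illustrative assumptions, not the paper's actual objectives.

```python
import random

def code_switch(tokens, bilingual_dict, p_swap=0.3, rng=random):
    """Illustrative code-switching augmentation: each token is replaced
    by a dictionary translation with probability p_swap.
    `bilingual_dict` is a hypothetical resource mapping source-language
    tokens to target-language tokens."""
    switched = []
    for tok in tokens:
        if tok in bilingual_dict and rng.random() < p_swap:
            switched.append(bilingual_dict[tok])
        else:
            switched.append(tok)
    return switched

# Usage: the augmented utterance exposes the encoder to mixed-language
# input before a standard masked-LM loss is applied.
utterance = ["how", "are", "you", "today"]
fr = {"how": "comment", "you": "vous", "today": "aujourd'hui"}
print(code_switch(utterance, fr, p_swap=0.5))
```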
A Novel Estimator of Mutual Information for Learning to Disentangle Textual Representations
Colombo, Pierre, Clavel, Chloe, Piantanida, Pablo
Learning disentangled representations of textual data is essential for many natural language tasks such as fair classification, style transfer and sentence generation, among others. The dominant existing approaches for text either rely on training an adversary (discriminator) that aims at making attribute values difficult to infer from the latent code, or on minimising variational bounds of the mutual information between the latent code and the attribute value. However, available methods cannot provide fine-grained control over the degree (or force) of disentanglement. Adversarial methods, while remarkably simple, have a known weakness: although the adversary appears to perform perfectly well during training, once training is complete a fair amount of information about the undesired attribute still remains in the latent code. This paper introduces a novel variational upper bound on the mutual information between an attribute and the latent code of an encoder. Our bound controls the approximation error via Rényi divergence, leading to both better disentangled representations and, in particular, more precise control of the desired degree of disentanglement than state-of-the-art methods proposed for textual data. Furthermore, it does not suffer from the degeneracy of other losses in multi-class scenarios. We show the superiority of this method on fair classification and textual style transfer tasks. Additionally, we provide new insights illustrating the trade-off between learning disentangled representations and the quality of the generated sentences in style transfer.
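For intuition, here is a minimal PyTorch sketch of a variational upper bound on I(Z; A) in the spirit of CLUB-style sample estimators (Cheng et al., 2020), explicitly not the Rényi-based bound introduced in the paper; the classifier shape and the shuffle-based marginal term are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VariationalMIUpperBound(nn.Module):
    """Sample-based variational upper bound on I(Z; A), CLUB-style --
    a stand-in, not the paper's Renyi-based bound. q(a|z) is a small
    classifier approximating the attribute posterior; in practice it is
    also refitted by maximising log q(a|z) on held-out pairs."""

    def __init__(self, latent_dim, num_attribute_values):
        super().__init__()
        self.q_a_given_z = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_attribute_values),
        )

    def forward(self, z, a):
        # z: (batch, latent_dim) latent codes; a: (batch,) integer attributes
        log_q = torch.log_softmax(self.q_a_given_z(z), dim=-1)
        joint = log_q.gather(1, a.unsqueeze(1)).squeeze(1)            # log q(a_i | z_i)
        shuffled = a[torch.randperm(a.size(0))]                       # break the pairing
        marginal = log_q.gather(1, shuffled.unsqueeze(1)).squeeze(1)  # log q(a_j | z_i)
        return (joint - marginal).mean()  # estimate of the upper bound
```

In a disentanglement pipeline, an estimate like this would be added to the task loss with a multiplier; that multiplier is the kind of fine-grained knob over the degree of disentanglement the abstract alludes to.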
Hierarchical Pre-training for Sequence Labelling in Spoken Dialog
Chapuis, Emile, Colombo, Pierre, Manica, Matteo, Labeau, Matthieu, Clavel, Chloe
Sequence labelling tasks like Dialog Act and Emotion/Sentiment identification are a key component of spoken dialog systems. In this work, we propose a new approach to learn generic representations adapted to spoken dialog, which we evaluate on a new benchmark we call the Sequence labellIng evaLuatIon benChmark fOr spoken laNguagE (\texttt{SILICONE}). \texttt{SILICONE} is model-agnostic and contains 10 different datasets of various sizes. We obtain our representations with a hierarchical encoder based on transformer architectures, for which we extend two well-known pre-training objectives. Pre-training is performed on OpenSubtitles, a large corpus of spoken dialog containing over $2.3$ billion tokens. We demonstrate that hierarchical encoders achieve competitive results with consistently fewer parameters than state-of-the-art models, and we show their importance for both pre-training and fine-tuning.
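As a rough illustration of the hierarchical-encoder idea (a word-level transformer pooled into utterance vectors, then a dialog-level transformer over those vectors), here is a minimal PyTorch sketch; all dimensions, the mean-pooling choice, and the layer counts are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class HierarchicalDialogEncoder(nn.Module):
    """Two-level (hierarchical) transformer encoder sketch: a word-level
    encoder builds one vector per utterance, and an utterance-level
    encoder contextualises those vectors across the dialog."""

    def __init__(self, vocab_size, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        word_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.word_encoder = nn.TransformerEncoder(word_layer, num_layers)
        utt_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.utterance_encoder = nn.TransformerEncoder(utt_layer, num_layers)

    def forward(self, dialog_tokens):
        # dialog_tokens: (num_utterances, max_words) token ids for one dialog
        word_states = self.word_encoder(self.embed(dialog_tokens))
        utt_vectors = word_states.mean(dim=1)  # pool words -> utterance vectors
        # treat the dialog as a sequence of utterance vectors
        return self.utterance_encoder(utt_vectors.unsqueeze(0)).squeeze(0)

# Each output row can feed a sequence-labelling head (dialog act / emotion).
enc = HierarchicalDialogEncoder(vocab_size=1000)
print(enc(torch.randint(0, 1000, (5, 12))).shape)  # torch.Size([5, 256])
```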
From the Token to the Review: A Hierarchical Multimodal approach to Opinion Mining
Garcia, Alexandre, Colombo, Pierre, Essid, Slim, d'Alché-Buc, Florence, Clavel, Chloé
Predicting fine-grained user opinion from spontaneous spoken language is a key problem arising in the development of computational agents as well as of social-network-based opinion miners. Unfortunately, gathering reliable data on which a model can be trained is notoriously difficult, and existing works rely only on coarsely labeled opinions. In this work we aim at bridging the gap between fine-grained opinion models already developed for written language and coarse-grained models developed for spontaneous multimodal opinion mining. We take advantage of the implicit hierarchical structure of opinions to build a joint fine- and coarse-grained opinion model that exploits different views of the opinion expression. The resulting model shares some properties with attention-based models and is shown to provide competitive results on a recently released multimodal corpus with fine-grained annotations.
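A minimal sketch of what a joint fine-/coarse-grained model sharing one encoder could look like; the GRU encoder, layer sizes, and label counts are hypothetical placeholders, and the paper's actual model (multimodal, with attention-like properties) differs.

```python
import torch
import torch.nn as nn

class JointOpinionModel(nn.Module):
    """Token-level opinion tags (fine view) and a review-level polarity
    (coarse view) share one encoder, mirroring the hierarchical
    structure of opinions."""

    def __init__(self, feat_dim, hidden=128, n_fine=5, n_coarse=3):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.fine_head = nn.Linear(2 * hidden, n_fine)      # per-token tags
        self.coarse_head = nn.Linear(2 * hidden, n_coarse)  # whole-review label

    def forward(self, feats):
        states, _ = self.encoder(feats)            # (batch, seq, 2*hidden)
        fine = self.fine_head(states)              # fine-grained view
        coarse = self.coarse_head(states.mean(1))  # coarse view pools the sequence
        return fine, coarse

# Summing the two losses lets coarse labels regularise the fine-grained
# predictions when token-level annotation is scarce.
model = JointOpinionModel(feat_dim=32)
fine, coarse = model(torch.randn(2, 10, 32))
print(fine.shape, coarse.shape)  # torch.Size([2, 10, 5]) torch.Size([2, 3])
```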
Affect-Driven Dialog Generation
Colombo, Pierre, Witon, Wojciech, Modi, Ashutosh, Kennedy, James, Kapadia, Mubbasir
The majority of current systems for end-to-end dialog generation focus on response quality without explicit control over the affective content of the responses. In this paper, we present an affect-driven dialog system, which generates emotional responses in a controlled manner using a continuous representation of emotions. The system achieves this by modeling emotions at the word and sequence level using: (1) a vector representation of the desired emotion, (2) an affect regularizer, which penalizes neutral words, and (3) an affect sampling method, which forces the neural network to generate diverse, emotionally relevant words. During inference, we use a reranking procedure that aims to extract the most emotionally relevant responses using a human-in-the-loop optimization process. We study the performance of our system in terms of both quantitative (BLEU score and response diversity) and qualitative (emotional appropriateness) measures.
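To make the affect-regularizer idea concrete, here is a hedged sketch that penalises probability mass placed on emotionally neutral words using a valence-arousal-dominance (VAD) lexicon; the neutral point, the distance-based neutrality score, and the weighting are illustrative assumptions, not the paper's exact loss term.

```python
import torch

def affect_regularizer(logits, vad_table, neutral_vad, weight=0.1):
    """Penalise the expected 'neutrality' of the next-word distribution.
    vad_table: (vocab, 3) valence-arousal-dominance scores per word,
    e.g. from an affect lexicon (hypothetical resource here).
    neutral_vad: (3,) assumed neutral point of the VAD scale."""
    probs = torch.softmax(logits, dim=-1)              # (batch, vocab)
    distance = (vad_table - neutral_vad).norm(dim=-1)  # (vocab,) affect strength
    neutrality = 1.0 / (1.0 + distance)                # ~1 for neutral words
    return weight * (probs * neutrality).sum(dim=-1).mean()

# Adding this term to the generation loss discourages probability mass
# on emotionally flat words (illustrative values below).
logits = torch.randn(2, 8)
vad = torch.rand(8, 3) * 8 + 1  # VAD scores in [1, 9]
loss_term = affect_regularizer(logits, vad, torch.tensor([5.0, 1.0, 5.0]))
print(loss_term)
```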