Content preserving text generation with attribute controls

Lajanugen Logeswaran, Honglak Lee, Samy Bengio

Neural Information Processing Systems

We focus on categorical attributes of language. Examples of such attributes include sentiment, language complexity, tense, voice, honorifics, mood, etc. Our approach draws inspiration from style-transfer methods in the vision and language literature.


Supplementary Material: Infer Induced Sentiment of Comment Response to Video: A New Task, Dataset and Baseline

Qi Jia, Baoyu Fan, Cong Xu, Lu Liu

Neural Information Processing Systems

This section provides a comprehensive overview of the CSMV dataset. The dataset spans more than two years, a time range that allows for the inclusion of a diverse set of content and captures the evolution of sentiments over that period. The distribution of labels in our CSMV dataset is shown in Figure 1. In Figure 1a, the opinion labels are distributed as follows: positive 47%, neutral 42%, and negative 11%. Negative comments are clearly in the minority.



Explanations that reveal all through the definition of encoding

Neural Information Processing Systems

Feature attributions attempt to highlight which inputs drive predictive power. Good attributions or explanations are thus those that identify inputs that retain this predictive power; accordingly, evaluations of explanations score their predictive quality. However, for a class of explanations called encoding explanations, evaluations produce scores better than what appears possible from the values in the explanation alone. Probing for encoding remains a challenge because there is no general characterization of what provides the extra predictive power. We develop a definition of encoding that identifies this extra predictive power via conditional dependence and show that the definition fits existing examples of encoding. This definition implies that, in contrast to encoding explanations, non-encoding explanations contain all the informative inputs used to produce the explanation, giving them a "what you see is what you get" property, which makes them transparent and simple to use.
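The abstract's premise, that explanations are scored by how much predictive power the highlighted inputs retain, can be illustrated with a toy retention evaluation. This is a minimal sketch of that general evaluation scheme, not the paper's own protocol: all names (`retention_score`, `fit_linear_probe`) and the synthetic data are hypothetical, and a simple least-squares probe stands in for whatever model the evaluator would actually retrain on the retained inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only the first 3 of 10 features carry any signal.
X = rng.normal(size=(1200, 10))
y = (X[:, :3].sum(axis=1) > 0).astype(float)
X_tr, X_te, y_tr, y_te = X[:800], X[800:], y[:800], y[800:]


def fit_linear_probe(X, y):
    # Least-squares linear probe with a bias term; classifies by
    # thresholding the fitted value at 0.5.
    return np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)[0]


def accuracy(w, X, y):
    pred = (np.c_[X, np.ones(len(X))] @ w > 0.5).astype(float)
    return (pred == y).mean()


def retention_score(attribution, k=3):
    """Score an attribution by keeping only its top-k features,
    zeroing out the rest, retraining a probe on the masked inputs,
    and measuring how much predictive power survives."""
    top = np.argsort(-np.abs(attribution))[:k]

    def mask(X):
        M = np.zeros_like(X)
        M[:, top] = X[:, top]
        return M

    w = fit_linear_probe(mask(X_tr), y_tr)
    return accuracy(w, mask(X_te), y_te)


# Use a full-data probe's weights as the attribution to be evaluated.
full_w = fit_linear_probe(X_tr, y_tr)
informative = retention_score(full_w[:10])
```

An attribution that correctly picks out the three signal features keeps most of the predictive power under this score. The subtlety the abstract targets is that an encoding explanation can score well here for a different reason: the *pattern* of which features it selects can itself smuggle in label information, which is why the paper characterizes encoding via conditional dependence rather than via the score alone.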