Neo, Dexter
MaxEnt Loss: Constrained Maximum Entropy for Calibration under Out-of-Distribution Shift
Neo, Dexter, Winkler, Stefan, Chen, Tsuhan
We present a new loss function that addresses the out-of-distribution (OOD) calibration problem. While many objective functions have been proposed to effectively calibrate models in-distribution, our findings show that they do not always fare well OOD. Based on the Principle of Maximum Entropy, we incorporate helpful statistical constraints observed during training, delivering better model calibration without sacrificing accuracy. We provide theoretical analysis and show empirically that our method works well in practice, achieving state-of-the-art calibration on both synthetic and real-world benchmarks.
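For intuition, the generic constrained Maximum Entropy problem underlying this line of work can be written as follows (a sketch of the classical principle, not necessarily the exact loss proposed in the paper; the constraint functions f_i and targets \hat{F}_i stand in for whatever training statistics are enforced):

\max_{p} \; H(p) = -\sum_{k} p_k \log p_k \quad \text{s.t.} \quad \mathbb{E}_{p}[f_i] = \hat{F}_i, \;\; \sum_{k} p_k = 1,

whose solution takes the Gibbs form p_k \propto \exp\big(\sum_i \lambda_i f_i(k)\big), with the Lagrange multipliers \lambda_i chosen so that the constraints are satisfied.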
DSAC-C: Constrained Maximum Entropy for Robust Discrete Soft-Actor Critic
Neo, Dexter, Chen, Tsuhan
We present a novel extension to the family of Soft Actor-Critic (SAC) algorithms. We argue that, based on the Maximum Entropy Principle, discrete SAC can be further improved via additional statistical constraints derived from a surrogate critic policy. Furthermore, our findings suggest that these constraints provide added robustness against potential domain shifts, which is essential for the safe deployment of reinforcement learning agents in the real world. We provide theoretical analysis and show empirical results on low data regimes for both in-distribution and out-of-distribution variants of Atari 2600 games.
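For context, the standard maximum-entropy objective that SAC optimizes is (textbook formulation; the surrogate-critic constraints introduced in the paper are omitted here):

J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\big[\, r(s_t, a_t) + \alpha \, \mathcal{H}(\pi(\cdot \mid s_t)) \,\big],

where \alpha trades off return against policy entropy; in the discrete-action setting, \mathcal{H}(\pi(\cdot \mid s)) = -\sum_{a} \pi(a \mid s) \log \pi(a \mid s) can be computed exactly over the action set.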
MorphSet: Augmenting categorical emotion datasets with dimensional affect labels using face morphing
Vonikakis, Vassilios, Neo, Dexter, Winkler, Stefan
Emotion recognition and understanding is a vital component in human-machine interaction. Dimensional models of affect such as those using valence and arousal have advantages over traditional categorical ones due to the complexity of emotional states in humans. However, dimensional emotion annotations are difficult and expensive to collect, therefore they are still limited in the affective computing community. Since even experienced annotators may disagree on these labels, multiple annotations per image are required, which further increases the cost and complexity of the task. Yet there are no guarantees that the full range of possible expressions and intensities will be covered, resulting in imbalanced datasets, with only few images with 'interesting' affective content. Consequently, large, balanced emotion datasets, with high-quality annotations, covering a wide range of expression variations and expression intensities of many different subjects, are in short supply. To address these issues, we propose a method to generate synthetic images from existing categorical emotion datasets using face morphing.
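As a rough illustration of the augmentation idea (a simplified sketch under stated assumptions; the actual pipeline relies on landmark-based face morphing rather than naive pixel blending), intermediate expressions between, e.g., a neutral face x_n and an expression apex x_e of the same subject can be generated as

x(\alpha) = (1 - \alpha)\, x_n + \alpha\, x_e, \quad \alpha \in [0, 1],

with dimensional valence-arousal labels assigned along the interpolation path, so that each categorical sample can yield many graded synthetic images.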