Goto


Reinforcement Sensitivity Theory and Cognitive Architectures

AAAI Conferences

Many biological models of human motivation and behavior posit a functional division between the subsystems responsible for approach and avoidance behaviors. Gray and McNaughton's (2000) revised Reinforcement Sensitivity Theory (RST) casts this distinction in terms of a Behavioral Activation System (BAS) and a Fight-Flight-Freeze System (FFFS), mediated by a third, conflict-resolution system, the Behavioral Inhibition System (BIS). They argued that these are fundamental, functionally distinct systems. The model has been highly influential both in personality psychology, where it provides a biologically based explanation of traits such as extraversion and neuroticism, and in clinical psychology, where state disorders such as Major Depressive Disorder and Generalized Anxiety Disorder can be modeled as differences in the baseline sensitivities of one or more of the systems. In this paper, we present work in progress on implementing a simplified simulation of RST in a set of embodied virtual characters. We argue that RST provides an interesting and potentially powerful starting point for cognitive architectures in various applications, including interactive entertainment, in which simulation of human-like affect and personality is important.
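
Read purely as an architectural starting point, the three-system description above can be sketched in a few lines of Python. The class, parameter, and method names below are illustrative assumptions, not the authors' implementation: each character carries a baseline sensitivity per system, and the BIS intervenes when approach and avoidance drives are both active and closely matched.

    from dataclasses import dataclass

    @dataclass
    class RSTCharacter:
        """Minimal sketch of a simplified RST-style agent (illustrative only)."""
        bas_sensitivity: float = 1.0   # approach drive (Behavioral Activation System)
        fffs_sensitivity: float = 1.0  # avoidance drive (Fight-Flight-Freeze System)
        bis_sensitivity: float = 1.0   # conflict resolution (Behavioral Inhibition System)

        def act(self, reward_cue: float, threat_cue: float) -> str:
            approach = self.bas_sensitivity * reward_cue
            avoid = self.fffs_sensitivity * threat_cue
            # BIS activation grows when both drives are engaged at comparable strength.
            conflict = self.bis_sensitivity * min(approach, avoid)
            if conflict > abs(approach - avoid):
                return "inhibit_and_scan"   # BIS-dominated: pause and assess risk
            return "approach" if approach > avoid else "avoid"

In such a sketch, personality traits or state disorders would correspond to different baseline sensitivity settings across characters, matching the abstract's framing.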


Semi-supervised Learning with GANs: Manifold Invariance with Improved Inference

Neural Information Processing Systems

Semi-supervised learning methods using generative adversarial networks (GANs) have shown promising empirical success recently. Most of these methods use a shared discriminator/classifier that discriminates real examples from fake ones while also predicting the class label. Motivated by the ability of the GAN generator to capture the data manifold well, we propose to estimate the tangent space to the data manifold using GANs and employ it to inject invariances into the classifier. In the process, we propose enhancements over existing methods for learning the inverse mapping (i.e., the encoder) that greatly improve the semantic similarity of the reconstructed sample to the input. We observe considerable empirical gains in semi-supervised learning over baselines, particularly when the number of labeled examples is low. We also provide insights into how fake examples influence the semi-supervised learning procedure.
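
The tangent-space regularization can be illustrated with a short, hedged sketch. The module names generator, encoder, and classifier below are assumptions for illustration, not the paper's code: a random latent direction is pushed through the generator's Jacobian (a Jacobian-vector product) to get a tangent vector on the estimated data manifold, and the classifier is penalized for varying along it.

    import torch
    from torch.autograd.functional import jvp

    def tangent_invariance_penalty(classifier, generator, encoder, x, n_dirs=1):
        """Sketch: penalize classifier sensitivity along GAN-estimated tangent
        directions of the data manifold at each input x (illustrative only)."""
        z = encoder(x)                               # approximate inverse mapping, shape (batch, latent_dim)
        penalty = x.new_zeros(())
        for _ in range(n_dirs):
            v = torch.randn_like(z)
            v = v / v.norm(dim=-1, keepdim=True)     # random unit latent direction
            # Tangent vector in data space: (dG/dz) v via a Jacobian-vector product.
            _, tangent = jvp(generator, z, v, create_graph=True)
            # Directional derivative of the classifier logits along that tangent.
            _, dlogits = jvp(classifier, x, tangent, create_graph=True)
            penalty = penalty + dlogits.pow(2).sum(dim=-1).mean()
        return penalty / n_dirs

Added with a small weight to the usual supervised and GAN losses, a term of this kind encourages the classifier to be locally invariant along the estimated manifold, which is the role the tangent estimates play here.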


Eigen-Distortions of Hierarchical Representations

Neural Information Processing Systems

We develop a method for comparing hierarchical image representations in terms of their ability to explain perceptual sensitivity in humans. Specifically, we utilize Fisher information to establish a model-derived prediction of sensitivity to local perturbations of an image. For a given image, we compute the eigenvectors of the Fisher information matrix with the largest and smallest eigenvalues, corresponding to the model-predicted most- and least-noticeable image distortions, respectively. For human subjects, we then measure the amount of each distortion that can be reliably detected when added to the image. We use this method to test the ability of a variety of representations to mimic human perceptual sensitivity.
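
A minimal sketch of the eigen-distortion computation, under assumptions not stated above: the model is a differentiable PyTorch callable mapping an image to a response vector, and the response noise is additive white Gaussian, so the Fisher information matrix reduces to F = J^T J with J the Jacobian of the response with respect to the image. The top eigenvector (the predicted most-noticeable distortion) can then be found by power iteration using only Jacobian-vector and vector-Jacobian products; the least-noticeable direction can be found analogously by iterating on lambda_max * I - F.

    import torch
    from torch.autograd.functional import jvp, vjp

    def top_eigendistortion(model, image, n_iter=50):
        """Power iteration on F = J^T J, the Fisher matrix under an additive
        white Gaussian noise assumption (illustrative sketch)."""
        v = torch.randn_like(image)
        v = v / v.norm()
        eigval = image.new_zeros(())
        for _ in range(n_iter):
            _, jv = jvp(model, image, v)       # J v, in response space
            _, u = vjp(model, image, jv)       # J^T (J v) = F v, in image space
            eigval = torch.dot(u.flatten(), v.flatten())  # Rayleigh quotient
            v = u / u.norm()
        return v, eigval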


AI Safety for High Energy Physics

arXiv.org Machine Learning

The field of high-energy physics (HEP), along with many scientific disciplines, is currently experiencing a dramatic influx of new methodologies powered by modern machine learning techniques. Over the last few years, a growing body of HEP literature has focused on identifying promising applications of deep learning in particular, and more recently these techniques are starting to be realized in an increasing number of experimental measurements. The overall conclusion from this impressive and extensive set of studies is that rarer and more complex physics signatures can be identified with the new set of powerful tools from deep learning. However, there is an unstudied systematic risk associated with combining the traditional HEP workflow with deep learning on high-dimensional data. In particular, calibrating and validating the response of deep neural networks is in general not experimentally feasible, and therefore current methods may be biased in ways that are not covered by current uncertainty estimates. By borrowing ideas from AI safety, we illustrate these potential issues and propose a method to bound the size of the unaccounted-for uncertainty. In addition to providing a pragmatic diagnostic, this work will hopefully begin a dialogue within the community about the robust application of deep learning to experimental analyses.
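
The abstract does not spell out the proposed bound, but the kind of check being discussed can be illustrated generically. The sketch below is not the paper's method; it simply quantifies how far a trained network's output distribution moves between nominal simulation and a systematically shifted simulation, which is one crude way to gauge a bias that standard calibration would not cover.

    import numpy as np

    def score_shift(scores_nominal, scores_shifted, bins=50):
        """Generic illustration (not the paper's method): total-variation
        distance between network score distributions under nominal and
        systematically varied simulation."""
        edges = np.linspace(0.0, 1.0, bins + 1)
        p, _ = np.histogram(scores_nominal, bins=edges, density=True)
        q, _ = np.histogram(scores_shifted, bins=edges, density=True)
        width = edges[1] - edges[0]
        return 0.5 * np.sum(np.abs(p - q)) * width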


Towards Auditability for Fairness in Deep Learning

arXiv.org Artificial Intelligence

Group fairness metrics can detect when a deep learning model behaves differently for advantaged and disadvantaged groups, but even models that score well on these metrics can make blatantly unfair predictions. We present smooth prediction sensitivity, an efficiently computed measure of individual fairness for deep learning models, inspired by ideas from interpretability in deep learning. Smooth prediction sensitivity allows individual predictions to be audited for fairness. We present preliminary experimental results suggesting that smooth prediction sensitivity can help distinguish between fair and unfair predictions, and that it may be helpful in detecting blatantly unfair predictions from "group-fair" models.
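
The abstract does not give the exact definition of smooth prediction sensitivity, but its interpretability inspiration suggests a SmoothGrad-style reading, sketched below with hypothetical names (model, protected_dir): average the magnitude of the prediction's gradient along a protected-attribute direction over Gaussian-perturbed copies of the input.

    import torch

    def smooth_sensitivity(model, x, protected_dir, n_samples=32, sigma=0.1):
        """Hedged sketch, not the paper's definition: SmoothGrad-style estimate
        of an individual prediction's sensitivity to a protected-attribute
        direction in input space."""
        protected_dir = protected_dir / protected_dir.norm()
        total = 0.0
        for _ in range(n_samples):
            x_noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
            score = model(x_noisy).squeeze()     # assumes a scalar prediction score
            (grad,) = torch.autograd.grad(score, x_noisy)
            total += torch.dot(grad.flatten(), protected_dir.flatten()).abs().item()
        return total / n_samples

Under this reading, an unusually large value relative to a reference population would flag the individual prediction for audit.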