Rodriguez, Andres C.
Fine-grained Species Recognition with Privileged Pooling: Better Sample Efficiency Through Supervised Attention
Rodriguez, Andres C., D'Aronco, Stefano, Schindler, Konrad, Wegner, Jan Dirk
We propose a scheme for supervised image classification that uses privileged information, in the form of keypoint annotations for the training data, to learn strong models from small and/or biased training sets. Our main motivation is the recognition of animal species for ecological applications such as biodiversity modelling, which is challenging because of long-tailed species distributions due to rare species, and strong dataset biases such as repetitive scene background in camera traps. To counteract these challenges, we propose a visual attention mechanism that is supervised via keypoint annotations that highlight important object parts. This privileged information, implemented as a novel privileged pooling operation, is only required during training and helps the model to focus on regions that are discriminative. In experiments with three different animal species datasets, we show that deep networks with privileged pooling can use small training sets more efficiently and generalize better.
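To make the pooling idea concrete, the following is a minimal sketch (not the paper's implementation) of an attention-pooling layer whose spatial attention map is supervised by keypoint annotations during training only; the function names, the Gaussian target construction, and the toy numbers are illustrative assumptions.

    import numpy as np

    def softmax2d(logits):
        """Spatial softmax over an (H, W) map of attention logits."""
        z = np.exp(logits - logits.max())
        return z / z.sum()

    def keypoint_target(keypoints, shape, sigma=1.5):
        """Turn keypoint annotations into a normalized spatial target map
        (Gaussians centred on each keypoint). Hypothetical helper."""
        h, w = shape
        yy, xx = np.mgrid[0:h, 0:w]
        target = np.zeros(shape)
        for (ky, kx) in keypoints:
            target += np.exp(-((yy - ky) ** 2 + (xx - kx) ** 2) / (2 * sigma ** 2))
        return target / target.sum()

    def privileged_pool(features, attn_logits, keypoints=None):
        """Attention-weighted pooling of an (H, W, C) feature map.
        If keypoints are given (training time only), also return a
        cross-entropy term that supervises the attention map."""
        attn = softmax2d(attn_logits)                             # (H, W)
        pooled = (features * attn[..., None]).sum(axis=(0, 1))    # (C,)
        attn_loss = None
        if keypoints is not None:
            target = keypoint_target(keypoints, attn.shape)
            attn_loss = -(target * np.log(attn + 1e-8)).sum()     # supervise attention
        return pooled, attn_loss

    # Toy usage: an 8x8 feature map with 16 channels and two annotated keypoints.
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(8, 8, 16))
    logits = rng.normal(size=(8, 8))
    pooled, loss = privileged_pool(feats, logits, keypoints=[(2, 3), (5, 6)])
    print(pooled.shape, loss)

At inference time the same layer is called without keypoints, so the privileged information is needed only for the training set, matching the setting described in the abstract.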
Reinforcement Learning Using Approximate Belief States
Rodriguez, Andres C., Parr, Ronald, Koller, Daphne
The problem of developing good policies for partially observable Markov decision problems (POMDPs) remains one of the most challenging areas of research in stochastic planning. One line of research in this area involves the use of reinforcement learning with belief states, probability distributions over the underlying model states. This is a promising method for small problems, but its application is limited by the intractability of computing or representing a full belief state for large problems. Recent work shows that, in many settings, we can maintain an approximate belief state, which is fairly close to the true belief state. In particular, great success has been shown with approximate belief states that marginalize out correlations between state variables. In this paper, we investigate two methods of full belief state reinforcement learning and one novel method for reinforcement learning using factored approximate belief states. We compare the performance of these algorithms on several well-known problems from the literature. Our results demonstrate the importance of approximate belief state representations for large problems.
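As a rough illustration of the factored approximation, the sketch below maintains only per-variable marginals of the belief and feeds them into a linear Q-function. It assumes each state variable has its own independent transition and observation model, which sidesteps the projection step a full factored belief-state filter would require; all names and numbers are hypothetical, not taken from the paper.

    import numpy as np

    def update_marginal(belief, T, O, obs):
        """One Bayes-filter step for a single state variable:
        predict with transition matrix T[s, s'], correct with the
        observation likelihoods O[s', obs], then renormalise."""
        predicted = T.T @ belief
        corrected = predicted * O[:, obs]
        return corrected / corrected.sum()

    def factored_belief_update(marginals, Ts, Os, observations):
        """Approximate belief update that keeps only per-variable marginals,
        i.e. correlations between state variables are marginalised out."""
        return [update_marginal(b, T, O, o)
                for b, T, O, o in zip(marginals, Ts, Os, observations)]

    def q_values(marginals, W):
        """Linear Q-function over the concatenated marginals; in belief-state
        reinforcement learning the (approximate) belief is the state input."""
        phi = np.concatenate(marginals)
        return W @ phi                                   # one value per action

    # Toy setting: two binary state variables, two actions (hypothetical numbers).
    rng = np.random.default_rng(1)
    Ts = [np.array([[0.9, 0.1], [0.2, 0.8]])] * 2        # transition models
    Os = [np.array([[0.8, 0.2], [0.3, 0.7]])] * 2        # observation models
    marginals = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]
    W = rng.normal(size=(2, 4))                          # 2 actions x 4 features

    marginals = factored_belief_update(marginals, Ts, Os, observations=[0, 1])
    print(q_values(marginals, W))

A TD(0) update on W would then use r + gamma * max_a Q(b', a) as its target, exactly as in fully observable Q-learning but with the (approximate) belief as the state input.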