Hierarchical Attentive Recurrent Tracking
Kosiorek, Adam, Bewley, Alex, Posner, Ingmar
Class-agnostic object tracking is particularly difficult in cluttered environments, as target-specific discriminative models cannot be learned a priori. Inspired by how the human visual cortex employs spatial attention and separate "where" and "what" processing pathways to actively suppress irrelevant visual features, this work develops a hierarchical attentive recurrent model for single-object tracking in videos. The first layer of attention discards the majority of the background by selecting a region containing the object of interest, while the subsequent layers tune in on visual features particular to the tracked object. This framework is fully differentiable and can be trained in a purely data-driven fashion by gradient methods. To improve training convergence, we augment the loss function with terms for auxiliary tasks relevant to tracking. The proposed model is evaluated on two datasets: pedestrian tracking on the KTH activity recognition dataset and the more difficult KITTI object tracking dataset.
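Below is a minimal sketch in Python/PyTorch, not the authors' code, of the first attention stage described in the abstract: a predicted box (the centre/size parametrisation and the helper name extract_glimpse are assumptions) is turned into a differentiable glimpse by bilinear sampling, so the spatial attention itself can be trained with gradient methods.

import torch
import torch.nn.functional as F

def extract_glimpse(image, box, out_size=(32, 32)):
    """image: (N, C, H, W); box: (N, 4) with normalised (cx, cy, w, h) in [-1, 1] coordinates."""
    n = image.shape[0]
    cx, cy, w, h = box.unbind(dim=1)
    # Build an affine transform mapping the output glimpse onto the attended box.
    theta = torch.zeros(n, 2, 3, device=image.device)
    theta[:, 0, 0] = w
    theta[:, 1, 1] = h
    theta[:, 0, 2] = cx
    theta[:, 1, 2] = cy
    grid = F.affine_grid(theta, (n, image.shape[1], *out_size), align_corners=False)
    return F.grid_sample(image, grid, align_corners=False)  # differentiable crop-and-resize

# Example: attend to a small region around the image centre.
frame = torch.rand(1, 3, 128, 128)
glimpse = extract_glimpse(frame, torch.tensor([[0.0, 0.0, 0.25, 0.25]]))
print(glimpse.shape)  # torch.Size([1, 3, 32, 32])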
Find Your Own Way: Weakly-Supervised Segmentation of Path Proposals for Urban Autonomy
Barnes, Dan, Maddern, Will, Posner, Ingmar
We present a weakly-supervised approach to segmenting proposed drivable paths in images with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving.
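As a hedged illustration of the weak-supervision idea (assumed details, not the paper's pipeline), the sketch below projects future vehicle positions through a pinhole camera model to auto-label a proposed-path mask in the current image; such masks could then supervise a standard segmentation network. The intrinsics and the helper project_path are hypothetical.

import numpy as np

def project_path(points_cam, K, image_shape):
    """points_cam: (N, 3) path points in camera coordinates (x right, y down, z forward).
    K: 3x3 intrinsic matrix. Returns integer pixel coordinates of points visible in the image."""
    pts = points_cam[points_cam[:, 2] > 0.5]          # keep points ahead of the camera
    uvw = (K @ pts.T).T                               # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    h, w = image_shape
    keep = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[keep].astype(int)

K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
# Straight path on the ground plane, 1.5 m below the camera, stretching 40 m ahead.
path = np.stack([np.zeros(50), np.full(50, 1.5), np.linspace(2, 40, 50)], axis=1)
pixels = project_path(path, K, (720, 1280))
mask = np.zeros((720, 1280), dtype=np.uint8)
mask[pixels[:, 1], pixels[:, 0]] = 1                  # weak "proposed path" label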
Deep Tracking on the Move: Learning to Track the World from a Moving Vehicle using Recurrent Neural Networks
Dequaire, Julie, Rao, Dushyant, Ondruska, Peter, Wang, Dominic, Posner, Ingmar
This paper presents an end-to-end approach for tracking static and dynamic objects for an autonomous vehicle driving through crowded urban environments. Unlike traditional approaches to tracking, this method is learned end-to-end, and is able to directly predict a full unoccluded occupancy grid map from raw laser input data. Inspired by the recently presented DeepTracking approach [Ondruska, 2016], we employ a recurrent neural network (RNN) to capture the temporal evolution of the state of the environment, and propose to use Spatial Transformer modules to exploit estimates of the egomotion of the vehicle. Our results demonstrate the ability to track a range of objects, including cars, buses, pedestrians, and cyclists through occlusion, from both moving and stationary platforms, using a single learned model. Experimental results demonstrate that the model can also predict the future states of objects from current inputs, with greater accuracy than previous work.
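A minimal sketch of how a Spatial Transformer module might exploit egomotion estimates, under our assumption (not spelled out in the abstract) that the recurrent hidden state is a spatial grid fixed to the vehicle: odometry between frames is converted into an affine warp of that grid so static structure stays registered as the vehicle moves.

import math
import torch
import torch.nn.functional as F

def warp_state(hidden, dx, dy, dtheta):
    """hidden: (N, C, H, W) spatial memory; dx, dy in normalised grid units; dtheta in radians."""
    n = hidden.shape[0]
    cos, sin = math.cos(dtheta), math.sin(dtheta)
    # Rigid-body egomotion expressed as an affine transform of the hidden grid.
    theta = torch.tensor([[cos, -sin, dx],
                          [sin,  cos, dy]], dtype=hidden.dtype).repeat(n, 1, 1)
    grid = F.affine_grid(theta, hidden.shape, align_corners=False)
    return F.grid_sample(hidden, grid, align_corners=False)

state = torch.rand(1, 16, 64, 64)
state = warp_state(state, dx=0.05, dy=0.0, dtheta=math.radians(2.0))  # small forward motion plus a turn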
Deep Tracking: Seeing Beyond Seeing Using Recurrent Neural Networks
Ondruska, Peter (University of Oxford) | Posner, Ingmar (University of Oxford)
This paper presents, to the best of our knowledge, the first end-to-end object tracking approach which directly maps from raw sensor input to object tracks in sensor space without requiring any feature engineering or system identification in the form of plant or sensor models. Specifically, our system accepts a stream of raw sensor data at one end and, in real time, produces an estimate of the entire environment state at the output, including even occluded objects. We achieve this by framing the problem as a deep learning task and exploit sequence models in the form of recurrent neural networks to learn a mapping from sensor measurements to object tracks. In particular, we propose a learning method based on a form of input dropout which allows learning in an unsupervised manner, based only on raw, occluded sensor data without access to ground-truth annotations. We demonstrate our approach using a synthetic dataset designed to mimic the task of tracking objects in 2D laser data, as commonly encountered in robotics applications, and show that it learns to track many dynamic objects despite occlusions and the presence of sensor noise.
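The unsupervised training signal can be sketched as follows (an assumed formulation, not the released code): the input is withheld at randomly dropped time steps, and the loss compares the network's per-cell occupancy prediction at those steps against whatever the sensor actually observed there, pushing the model to explain occluded space.

import torch
import torch.nn.functional as F

def deep_tracking_loss(predictions, observations, visibility, drop_mask):
    """predictions, observations, visibility: (T, N, 1, H, W), with predictions as
    per-cell occupancy probabilities; drop_mask: (T,) bool, True where the input
    was withheld from the network."""
    loss = 0.0
    for t in torch.nonzero(drop_mask).flatten():
        # Supervise only the cells the sensor actually saw at the dropped step.
        loss = loss + F.binary_cross_entropy(
            predictions[t], observations[t], weight=visibility[t])
    return loss / drop_mask.sum().clamp(min=1)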
The Route Not Taken: Driver-Centric Estimation of Electric Vehicle Range
Ondruska, Peter (University of Oxford) | Posner, Ingmar (University of Oxford)
This paper addresses the challenge of efficiently and accurately predicting an electric vehicle's attainable range. Specifically, our approach accounts for a driver's generalised route preferences to provide up-to-date, personalised information based on estimates of the energy required to reach every possible destination in a map. We frame this task in the context of sequential decision making and show that energy consumption in reaching a particular destination can be formulated as policy evaluation in a Markov Decision Process. In particular, we exploit the properties of the model adopted for predicting likely energy consumption to every possible destination in a realistically sized map in real-time. The policy to be evaluated is learned and, over time, refined using Inverse Reinforcement Learning to provide for a life-long adaptive system. Our approach is evaluated using a publicly available dataset providing real trajectory data of 50 individuals spanning approximately 10,000 miles of travel. We show that by accounting for driver specific route preferences our system significantly reduces the relative error in energy prediction compared to more common, driver-agnostic heuristics such as shortest-path or shortest-time routes.
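A toy sketch of the policy-evaluation view of range estimation (graph, policy and names are illustrative, not the paper's model): given a road graph with per-edge energy costs and a learned stochastic route policy, the expected energy to reach a destination satisfies a Bellman-style fixed point that can be computed by simple iteration.

def expected_energy(edges, policy, destination, n_iter=200):
    """edges: {(u, v): energy_cost}; policy: {u: {v: prob}}; returns {node: expected energy}."""
    nodes = {u for u, _ in edges} | {v for _, v in edges}
    value = {n: 0.0 for n in nodes}
    for _ in range(n_iter):
        for u in nodes:
            if u == destination:
                continue
            # Bellman backup under the fixed route policy.
            value[u] = sum(p * (edges[(u, v)] + value[v])
                           for v, p in policy[u].items())
    return value

# Toy 3-node example: A -> B -> C, with a detour A -> C taken 20% of the time.
edges = {("A", "B"): 1.0, ("B", "C"): 1.5, ("A", "C"): 4.0}
policy = {"A": {"B": 0.8, "C": 0.2}, "B": {"C": 1.0}}
print(expected_energy(edges, policy, "C")["A"])  # 0.8*(1.0+1.5) + 0.2*4.0 = 2.8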
Planning to Perceive: Exploiting Mobility for Robust Object Detection
Velez, Javier (Massachusetts Institute of Technology) | Hemann, Garrett (Massachusetts Institute of Technology) | Huang, Albert S. (Massachusetts Institute of Technology) | Posner, Ingmar (Department of Engineering Science, University of Oxford) | Roy, Nicholas (Massachusetts Institute of Technology)
Consider the task of a mobile robot autonomously navigating through an environment while detecting and mapping objects of interest using a noisy object detector. The robot must reach its destination in a timely manner, but is rewarded for correctly detecting recognizable objects to be added to the map, and penalized for false alarms. However, detector performance typically varies with vantage point, so the robot benefits from planning trajectories which maximize the efficacy of the recognition system. This work describes an online, any-time planning framework enabling the active exploration of possible detections provided by an off-the-shelf object detector. We present a probabilistic approach where vantage points are identified which provide a more informative view of a potential object. The agent then weighs the benefit of increasing its confidence against the cost of taking a detour to reach each identified vantage point. The system is demonstrated to significantly improve detection and trajectory length in both simulated and real robot experiments.
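The benefit-versus-detour trade-off can be illustrated with a small, assumed calculation (not the paper's exact objective): a Bayesian update with a viewpoint's detector characteristics gives the expected improvement in the declare/ignore decision about an object, from which the detour cost is subtracted; the vantage point is worth visiting only if the net value is positive.

def expected_vantage_value(prior, tpr, fpr, reward_correct, penalty_false, detour_cost):
    """prior: current belief the object exists; tpr/fpr: detector rates from this viewpoint."""
    p_detect = prior * tpr + (1 - prior) * fpr
    post_if_detect = prior * tpr / p_detect if p_detect > 0 else prior
    post_if_miss = prior * (1 - tpr) / (1 - p_detect) if p_detect < 1 else prior

    def decision_value(belief):
        # Declare the object only if the expected reward of doing so is positive.
        return max(belief * reward_correct - (1 - belief) * penalty_false, 0.0)

    value_now = decision_value(prior)
    value_after = (p_detect * decision_value(post_if_detect)
                   + (1 - p_detect) * decision_value(post_if_miss))
    return value_after - value_now - detour_cost

# Uncertain object (prior 0.5), informative side view, short detour: worth the trip?
print(expected_vantage_value(0.5, tpr=0.9, fpr=0.1, reward_correct=10.0,
                             penalty_false=10.0, detour_cost=1.0) > 0)  # True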