streetlearn


Retouchdown: Adding Touchdown to StreetLearn as a Shareable Resource for Language Grounding Tasks in Street View

Mehta, Harsh, Artzi, Yoav, Baldridge, Jason, Ie, Eugene, Mirowski, Piotr

arXiv.org Artificial Intelligence

The Touchdown dataset (Chen et al., 2019) provides instructions by human annotators for navigation through New York City streets and for resolving spatial descriptions at a given location. To enable the wider research community to work effectively with the Touchdown tasks, we are publicly releasing the 29k raw Street View panoramas needed for Touchdown. We follow the process used for the StreetLearn data release (Mirowski et al., 2019) to check panoramas for personally identifiable information and blur them as necessary. These have been added to the StreetLearn dataset and can be obtained via the same process as used previously for StreetLearn. We also provide a reference implementation for both of the Touchdown tasks: vision and language navigation (VLN) and spatial description resolution (SDR). We compare our model results to those given in Chen et al. (2019) and show that the panoramas we have added to StreetLearn fully support both Touchdown tasks and can be used effectively for further research and comparison.
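To make the relationship between the Touchdown annotations and the released panoramas concrete, here is a minimal Python sketch of how the two could be joined for the VLN task. The folder layout, file names, and JSON field names ("navigation_text", "route_panoids") are illustrative assumptions, not the official Touchdown or StreetLearn schema, and this is not the authors' reference implementation.

```python
# Minimal sketch: pair Touchdown-style navigation instructions with locally
# downloaded Street View panoramas. All names below are assumptions.
import json
from pathlib import Path

PANO_DIR = Path("streetlearn_panoramas")    # hypothetical folder: one JPEG per panorama id
ANNOTATIONS = Path("touchdown_train.json")  # hypothetical file: one JSON record per line

def load_routes(annotations_path):
    """Yield (instruction, panorama paths) pairs for VLN-style training."""
    with annotations_path.open() as f:
        for line in f:
            record = json.loads(line)
            instruction = record["navigation_text"]    # assumed field name
            pano_paths = [PANO_DIR / f"{pano_id}.jpg"  # assumed file layout
                          for pano_id in record["route_panoids"]]
            # Keep only routes whose panoramas are present in the local download.
            if all(path.exists() for path in pano_paths):
                yield instruction, pano_paths

if __name__ == "__main__":
    for instruction, panos in load_routes(ANNOTATIONS):
        print(f"{len(panos):3d} panoramas | {instruction[:60]}")
        break
```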


Google to release DeepMind's StreetLearn for teaching machine-learning agents to navigate cities

#artificialintelligence

Google is getting ready to release its StreetLearn dataset for training machine-learning models to navigate cities without a map. The StreetLearn environment relies on images from Google Street View and has been used by Google DeepMind to train a software agent to navigate various western cities without reference to a map or GPS co-ordinates, using only visual clues such as landmarks as it wanders the streets. The StreetLearn environment encompasses multiple regions within the centers of the cities of London, Paris and New York. It is built from 360-degree panoramic images of street scenes from Street View, from which the agent receives cropped views measuring 84 x 84 pixels. Each panoramic image is a node in a larger network, or graph, of images, with up to 65,000 nodes per 5 km city region and multiple regions per city. Each region has a distinct urban setting, for instance differing amounts of construction and varying numbers of parks and bridges.
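As an illustration of the graph structure described above, the following minimal Python sketch represents each panorama as a node with coordinates and links to its street-network neighbors. The class and field names, and the toy coordinates, are assumptions made for illustration and do not reflect the actual StreetLearn data format.

```python
# Minimal sketch of a panorama graph: nodes are Street View panoramas,
# edges connect panoramas that are adjacent along the street network.
from dataclasses import dataclass, field

@dataclass
class PanoNode:
    pano_id: str
    lat: float
    lng: float
    neighbors: list = field(default_factory=list)  # ids of adjacent panoramas

class PanoGraph:
    """Undirected graph of panoramas; a 5 km city region may hold up to ~65,000 nodes."""
    def __init__(self):
        self.nodes = {}  # pano_id -> PanoNode

    def add_node(self, pano_id, lat, lng):
        self.nodes[pano_id] = PanoNode(pano_id, lat, lng)

    def add_edge(self, a, b):
        # Link two panoramas that are neighbors on the street network.
        self.nodes[a].neighbors.append(b)
        self.nodes[b].neighbors.append(a)

if __name__ == "__main__":
    graph = PanoGraph()
    graph.add_node("pano_0", 40.7359, -73.9911)  # toy coordinates, not real data
    graph.add_node("pano_1", 40.7361, -73.9908)
    graph.add_edge("pano_0", "pano_1")
    print(len(graph.nodes), "nodes;", graph.nodes["pano_0"].neighbors)
```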