The system was developed by researchers at Google Brain. It is based on a pixel recursive super-resolution model that enhances pixelated, low-resolution images: it reduces blur, fills in details, and gradually pieces together a high-resolution copy. The model uses two neural networks to create the output images. Working from an input of just 8x8 pixels, it attempts to match the low-resolution source against high-resolution images it has already seen.
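The two-network idea can be sketched in miniature. The code below is a hedged, illustrative toy, not Google's model: the names `conditioning_net`, `prior_net`, and `super_resolve`, the grayscale input, and all layer sizes are assumptions made for brevity. One network scores intensities per output pixel from the 8x8 input; the second is a small PixelCNN-style prior whose masked convolution conditions each pixel only on pixels generated before it; their logits are summed as the high-resolution image is built pixel by pixel.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical miniature of the two-network scheme (grayscale, toy sizes,
# untrained weights -- not Google's actual model). N_LEVELS is the 256
# possible 8-bit intensities each output pixel can take.
N_LEVELS, SCALE = 256, 4

# Network 1: the conditioning network maps the 8x8 input to intensity
# logits for every pixel of the 32x32 output.
conditioning_net = nn.Sequential(
    nn.Upsample(scale_factor=SCALE, mode="nearest"),
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, N_LEVELS, 1),
)

# Network 2: a PixelCNN-style prior. The masked convolution lets each
# pixel be predicted only from pixels generated before it in raster order.
class MaskedConv(nn.Conv2d):
    def __init__(self, in_ch, out_ch, k):
        super().__init__(in_ch, out_ch, k, padding=k // 2)
        mask = torch.ones_like(self.weight)
        mask[:, :, k // 2, k // 2:] = 0   # hide the current pixel and those right of it
        mask[:, :, k // 2 + 1:] = 0       # hide all rows below
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        padding=self.padding)

prior_net = MaskedConv(1, N_LEVELS, 5)

@torch.no_grad()
def super_resolve(lr):
    """Build the high-res image pixel by pixel, summing both logit maps."""
    b, _, h, w = lr.shape
    hr = torch.zeros(b, 1, h * SCALE, w * SCALE)
    cond_logits = conditioning_net(lr)        # fixed for the whole image
    for y in range(hr.shape[2]):
        for x in range(hr.shape[3]):
            logits = cond_logits[..., y, x] + prior_net(hr)[..., y, x]
            hr[:, 0, y, x] = logits.argmax(dim=-1).float() / (N_LEVELS - 1)
    return hr
```

With untrained weights the output is noise; the snippet only demonstrates the mechanics. In the published model the prior is a deep PixelCNN and output intensities are sampled from the combined distribution rather than taken greedily.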
The team at Google Brain has made an impressive breakthrough in increasing the resolution of images: they have managed to turn 8x8 grids of pixels into monstrous approximations of human beings. Neural networks are our best chance at truly increasing the level of detail in a low-resolution image. We are stuck with the pixel information a photo contains, but deep learning can add detail through what are commonly referred to as "hallucinations" -- the software makes educated guesses about an image based on what it has learned from other images.
Papers about deep learning, ordered by task and date. Current state-of-the-art papers are labelled.

- RenderGAN: Generating Realistic Labeled Data, Nov 2016, arxiv
- Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, Feb 2016, arxiv
- SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and 0.5MB model size, Feb 2016, arxiv
- Snapshot Ensembles: Train 1, Get M for Free, 2016, paper, github
Generative deep learning has sparked a new wave of Super-Resolution (SR) algorithms that enhance single images with impressive aesthetic results, albeit with imaginary details. Multi-frame Super-Resolution (MFSR) offers a more grounded approach to this ill-posed problem by conditioning on multiple low-resolution views. This matters for satellite monitoring of human impact on the planet -- from deforestation to human rights violations -- applications that depend on reliable imagery. To this end, we present HighRes-net, the first deep learning approach to MFSR that learns its sub-tasks in an end-to-end fashion: (i) co-registration, (ii) fusion, (iii) up-sampling, and (iv) registration-at-the-loss. Co-registration of low-resolution views is learned implicitly through a reference-frame channel, with no explicit registration mechanism. We learn a global fusion operator that is applied recursively to an arbitrary number of low-resolution pairs. We introduce a registered loss by learning to align the SR output to a ground truth through ShiftNet. We show that by learning deep representations of multiple views, we can super-resolve low-resolution signals and enhance Earth Observation data at scale. Our approach recently topped the European Space Agency's MFSR competition on real-world satellite imagery.
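The recursive-fusion idea at the core of the abstract can be sketched as follows. This is a hedged, minimal re-implementation, not the authors' published architecture: the class name `FusionSR`, all layer sizes, and the use of the per-pixel median as the reference frame are illustrative assumptions, and ShiftNet with its registered loss is omitted. Each low-resolution view is encoded jointly with a shared reference channel (the implicit co-registration), a single shared operator fuses pairs of encodings recursively until one representation remains, and that representation is decoded and up-sampled.

```python
import torch
import torch.nn as nn

class FusionSR(nn.Module):
    """Toy sketch of recursive pairwise fusion over N low-res views."""
    def __init__(self, ch=32, scale=3):   # 3x, as in the ESA challenge
        super().__init__()
        self.encode = nn.Sequential(      # view + reference -> features
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.fuse = nn.Sequential(        # one shared operator on feature pairs
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU())
        self.decode = nn.Sequential(      # up-sample the fused representation
            nn.Conv2d(ch, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, views):             # views: (B, N, H, W), any N
        ref = views.median(dim=1).values  # shared reference channel
        feats = [self.encode(torch.stack([v, ref], dim=1))
                 for v in views.unbind(dim=1)]
        while len(feats) > 1:             # recursive pairwise fusion
            if len(feats) % 2:            # odd count: duplicate the last one
                feats.append(feats[-1])
            feats = [self.fuse(torch.cat([a, b], dim=1))
                     for a, b in zip(feats[0::2], feats[1::2])]
        return self.decode(feats[0])      # (B, 1, scale*H, scale*W)
```

Because the fusion operator's weights are shared across every pair and every level of the recursion, the same trained model can accept however many low-resolution views happen to be available for a scene.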