RS-ESRGAN: Super-Resolution of Sentinel-2 Imagery Using Generative Adversarial Networks



Universitat Politècnica de Catalunya

Universidad de Las Palmas de Gran Canaria

Introduction

Sentinel-2 satellites provide multi-spectral optical remote sensing images with four bands at 10 m spatial resolution. Thanks to the open data distribution policy, these images are becoming an important resource for many applications. However, for small-scale studies, their spatial detail may not be sufficient. On the other hand, WorldView commercial satellites offer multi-spectral images with very high spatial resolution, typically below 2 m, but their use can be impractical for large areas or multi-temporal analyses due to their high cost. To exploit the free availability of Sentinel imagery, it is worth considering deep learning techniques for single-image super-resolution, which spatially enhance low-resolution (LR) images by recovering high-frequency details to produce high-resolution (HR) super-resolved images. In this work, we implement and train a model based on the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) with pairs of WorldView-Sentinel images to generate a super-resolved multispectral Sentinel-2 output with a scaling factor of 5. Our model, named RS-ESRGAN, removes the upsampling layers of the network to make it feasible to train with co-registered remote sensing images. The results obtained outperform state-of-the-art models according to standard metrics such as PSNR, SSIM, ERGAS, SAM and CC. Moreover, a qualitative visual analysis shows spatial improvements as well as preservation of the spectral information, allowing the super-resolved Sentinel-2 imagery to be used in studies requiring very high spatial resolution.

If you find this work useful, please consider citing:

Salgueiro Romero, L.; Marcello, J.; Vilaplana, V. Super-Resolution of Sentinel-2 Imagery Using Generative Adversarial Networks. Remote Sens. 2020, 12, 2424. https://doi.org/10.3390/rs12152424

@article{salgueiro2020super,
  title={Super-resolution of Sentinel-2 imagery using generative adversarial networks},
  author={Salgueiro Romero, Luis and Marcello, Javier and Vilaplana, Ver{\'o}nica},
  journal={Remote Sensing},
  volume={12},
  number={15},
  pages={2424},
  year={2020},
  publisher={Multidisciplinary Digital Publishing Institute}
}

Download our paper in PDF here.

Model

Our proposed architecture is based on ESRGAN, a generative adversarial network for super-resolution. We remove the upsampling modules of the original implementation, since we work with co-registered images, which must share the same spatial resolution to be co-registered. We also modify the input/output layers to handle the four channels available in both remote sensing products: the traditional RGB bands plus the Near-InfraRed (NIR) band.

Figure: RS-ESRGAN network building blocks.
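For illustration, here is a minimal PyTorch sketch of the generator idea described above. The simplified residual block stands in for the full Residual-in-Residual Dense Blocks (RRDB) of ESRGAN/BasicSR, and all layer sizes and names are our assumptions, not the paper's exact implementation:

import torch
import torch.nn as nn

class SimpleResidualBlock(nn.Module):
    # Simplified stand-in for the ESRGAN RRDB (Residual-in-Residual
    # Dense Block); the real block uses dense connections.
    def __init__(self, nf=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(nf, nf, 3, 1, 1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(nf, nf, 3, 1, 1),
        )

    def forward(self, x):
        return x + 0.2 * self.body(x)  # residual scaling, as in ESRGAN

class Generator(nn.Module):
    def __init__(self, in_ch=4, out_ch=4, nf=64, num_blocks=23):
        super().__init__()
        self.conv_first = nn.Conv2d(in_ch, nf, 3, 1, 1)  # 4 bands: RGB + NIR
        self.trunk = nn.Sequential(
            *[SimpleResidualBlock(nf) for _ in range(num_blocks)])
        self.trunk_conv = nn.Conv2d(nf, nf, 3, 1, 1)
        # No PixelShuffle/upsampling stage: input and output share the same
        # spatial size, since the Sentinel-2 tiles are resampled to the
        # WorldView grid during co-registration before training.
        self.conv_last = nn.Conv2d(nf, out_ch, 3, 1, 1)

    def forward(self, x):
        feat = self.conv_first(x)
        feat = feat + self.trunk_conv(self.trunk(feat))
        return self.conv_last(feat)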

Results

Standardization of data

We show that standardizing the data as a pre-processing step, instead of scaling it, better preserves the spectral information between the input and output images. In the figure below, the super-resolved output tends to resemble the distribution of the input (Sentinel-2) when standardization is used, whereas it follows the target distribution (WorldView) under the normalization schemes.

Figure: spectral distribution comparison over the Cabrera test area.
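As a sketch of this pre-processing, per-band standardization (zero mean, unit variance) can be computed from the input tile and inverted on the network output, which keeps the super-resolved radiometry close to the Sentinel-2 distribution. The NumPy helpers below are our own illustration, not the paper's code:

import numpy as np

def standardize(img):
    # img: (bands, H, W) array; returns the standardized image and the
    # per-band statistics needed to invert the transform later.
    mean = img.mean(axis=(1, 2), keepdims=True)
    std = img.std(axis=(1, 2), keepdims=True) + 1e-8
    return (img - mean) / std, (mean, std)

def destandardize(img, stats):
    mean, std = stats
    return img * std + mean

# Usage: x, stats = standardize(sentinel_tile)
#        sr = destandardize(model_output, stats)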

Metrics results on Test Sets

We built two test sets: WS-Set1 is a held-out test subset of tiles from the training dataset, while WS-Set2 contains images that did not belong to the training set and were used only for testing purposes.

Tables: metric results on WS-Set1 and WS-Set2.

We also trained other super-resolution models and compared the results, showing that the best results were obtained with our proposed model.

Table: comparison with other models on WS-Set1.
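For reference, here is a hedged sketch of two of the reported metrics, SAM and ERGAS, written from their standard definitions (the NumPy implementation and function names are ours, not taken from the paper's code):

import numpy as np

def sam(ref, est, eps=1e-8):
    # Mean Spectral Angle Mapper in degrees; arrays are (bands, H, W).
    dot = np.sum(ref * est, axis=0)
    norms = np.linalg.norm(ref, axis=0) * np.linalg.norm(est, axis=0)
    angles = np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
    return np.degrees(angles.mean())

def ergas(ref, est, ratio=1 / 5):
    # Relative dimensionless global error in synthesis; ratio is the
    # HR/LR pixel-size ratio (1/5 for a scaling factor of 5).
    rmse = np.sqrt(((ref - est) ** 2).mean(axis=(1, 2)))
    means = ref.mean(axis=(1, 2))
    return 100 * ratio * np.sqrt(np.mean((rmse / means) ** 2))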

Examples

Test results on WS-Set1

Results for several interpolation values, ranging from a blurry image (non-adversarial training) to a noisier image with texture (adversarial training); see the sketch after the figures below.

Figures: super-resolution examples over the Cabrera area (WS-Set1).
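These interpolated results follow the ESRGAN network-interpolation idea: blending the weights of the non-adversarial (PSNR-oriented) generator and the GAN-trained generator with a factor alpha trades blurriness for texture. A minimal sketch, assuming two checkpoints with matching parameter keys (file names are placeholders):

import torch

alpha = 0.8  # 0.0 -> pure PSNR model (blurry), 1.0 -> pure GAN model (textured)

psnr_state = torch.load('rs_esrgan_psnr.pth')  # placeholder path
gan_state = torch.load('rs_esrgan_gan.pth')    # placeholder path

# Linearly interpolate every parameter tensor between the two models.
interp_state = {k: (1 - alpha) * psnr_state[k] + alpha * gan_state[k]
                for k in psnr_state}
torch.save(interp_state, 'rs_esrgan_interp_{:.2f}.pth'.format(alpha))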

Test results on WS-Set2

Figures: super-resolution examples over the Cabrera area (WS-Set2).

Code

We implemented our model using PyTorch, based on BasicSR. You can find the code here.


Acknowledgements

We want to thank our technical support team:

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GeForce GTX Titan X used in this work.

The Image Processing Group at the UPC is a SGR14 Consolidated Research Group recognized and sponsored by the Catalan Government (Generalitat de Catalunya) through its AGAUR office.

This research has been supported by the ARTEMISAT-2 (CTM2016-77733-R) and MALEGRA (TEC2016-75976-R) projects, funded by the Spanish Agencia Estatal de Investigación (AEI), by the Fondo Europeo de Desarrollo Regional (FEDER) and the Spanish Ministerio de Economía y Competitividad, respectively. Luis Salgueiro would like to acknowledge the BECAL (Becas Carlos Antonio López) scholarship for its financial support.