Single Image Super-Resolution Using a Generative Adversarial Network

In recent years, neural networks have produced breakthroughs in many different areas. One promising result is super-resolving an image at large upscaling factors.

Isn’t it difficult to produce a high-resolution image from a low-resolution one?

In the paper Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network, the authors use a generative adversarial network (GAN) to super-resolve a single low-resolution (LR) image, producing photo-realistic natural images at a 4× upscaling factor.

In this blog we will look at the following:

  1. Architecture of GAN used in the paper.
  2. Loss function used for this problem.

Adversarial Network Architecture used in the paper:

The paper uses one generator and one discriminator model. The generator is fed LR images and tries to generate images that the discriminator finds hard to distinguish from real high-resolution (HR) images.


Generator Network: The input LR image is first passed through a 9×9 convolution with 64 filters followed by ParametricReLU. Then B residual blocks are applied, each consisting of a 3×3 convolution with 64 filters followed by batch normalization and ParametricReLU. Finally, two sub-pixel convolution layers up-sample the image by 2× each, giving 4× overall.
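The core of each sub-pixel convolution layer is a depth-to-space rearrangement: a convolution produces C·r² channels, which are then shuffled into an r×-larger image with C channels. A minimal sketch in plain Python (nested lists stand in for tensors; `pixel_shuffle` is an illustrative helper, not code from the paper):

```python
def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array (nested lists) into (C, H*r, W*r).

    This is the depth-to-space step behind each sub-pixel convolution
    layer; two such layers with r=2 give the paper's overall 4x upscaling.
    Output pixel (i, j) of channel c comes from input channel
    c*r*r + (i % r)*r + (j % r) at spatial position (i // r, j // r).
    """
    h, w = len(x[0]), len(x[0][0])
    c_out = len(x) // (r * r)
    return [
        [
            [x[c * r * r + (i % r) * r + (j % r)][i // r][j // r]
             for j in range(w * r)]
            for i in range(h * r)
        ]
        for c in range(c_out)
    ]
```

For example, four 1×1 feature maps holding 1, 2, 3, 4 shuffle into a single 2×2 map [[1, 2], [3, 4]].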

Discriminator Network: The discriminator distinguishes real HR images from generated SR images. It contains eight convolutional layers with an increasing number of 3×3 filter kernels, doubling from 64 to 512. Strided convolutions reduce the image resolution each time the number of features is doubled. The resulting 512 feature maps are followed by two dense layers and a final sigmoid activation to output the probability that the image is real.
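The filter/stride schedule described above can be written out explicitly. This is a hypothetical helper (not from the paper's code), assuming the common reading of the architecture: filters double every second layer, and each doubling is paired with a stride-2 convolution that halves the resolution:

```python
def discriminator_conv_specs():
    """List the (filters, stride) of the eight 3x3 conv layers:
    64/s1, 64/s2, 128/s1, 128/s2, 256/s1, 256/s2, 512/s1, 512/s2."""
    return [(64 * 2 ** (i // 2), 1 + i % 2) for i in range(8)]
```

Calling it yields the eight layer specs from (64, 1) up to (512, 2), after which the 512 feature maps feed the two dense layers and the sigmoid.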

Loss Function: In the paper, the authors define a perceptual loss function that consists of a content loss and an adversarial loss.

The adversarial loss trains the generator to produce natural-looking images that are difficult for the discriminator to distinguish from real images. In addition, they use a content loss motivated by perceptual similarity.

For the content loss, mean squared error is the most widely used loss function, but it often produces perceptually unsatisfying, over-smoothed results. To resolve this, the authors use a loss function that is closer to perceptual similarity: a VGG loss defined on the ReLU activation layers of a pre-trained 19-layer VGG network.
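Putting the two pieces together, the perceptual loss is the VGG-feature MSE plus a small weight (10⁻³ in the paper) times the generator's adversarial loss, −log D(G(LR)). A minimal sketch, with flat lists standing in for VGG feature maps and `d_sr` standing in for the discriminator's output probability:

```python
import math

def perceptual_loss(feat_hr, feat_sr, d_sr, adv_weight=1e-3):
    """Sketch of the perceptual loss: content loss (MSE between VGG
    features of the HR and SR images) plus adv_weight times the
    adversarial generator loss -log D(G(LR)).

    feat_hr, feat_sr: stand-ins for VGG feature maps (flat lists).
    d_sr: discriminator's probability that the SR image is a real HR image.
    """
    n = len(feat_hr)
    content = sum((a - b) ** 2 for a, b in zip(feat_hr, feat_sr)) / n
    adversarial = -math.log(d_sr)  # small when the discriminator is fooled
    return content + adv_weight * adversarial
```

When the features match exactly and the discriminator is fully fooled (d_sr = 1), the loss is zero; any feature mismatch or confident rejection by the discriminator raises it.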

They performed experiments on Set5, Set14, and BSD100 and tested on the BSD300 testing set, achieving promising results. To assess the perceptual quality of SRGAN's results, the authors also collected mean opinion scores from 26 raters, who judged the results to be much closer to the original images.

Referenced Research Paper: Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

GitHub: Super Resolution Examples

Hope you enjoy reading.

If you have any doubts or suggestions, please feel free to ask, and I will do my best to help or improve. Good-bye until next time.
