Image-to-image translation is a well-known problem in image processing, computer graphics, and computer vision. Typical examples include converting labels to street scenes, labels to facades, black-and-white photos to color photos, aerial images to maps, day to night, and edges to photos. Take a look at these conversions:
Earlier, each of these tasks was handled separately. But with the help of convolutional neural networks (CNNs), the community has taken big steps in this field. With CNNs, most of the work is automatic, since we train the model in an end-to-end fashion. However, we still need to define a loss function that captures the target we want to achieve. Many of us take the loss function lightly, but it is the most important thing to pay attention to when training deep learning models. For instance, if we take Euclidean distance as our loss function for image-to-image translation, it produces blurred images because it minimizes the loss by averaging all plausible outputs. Thus we need a meaningful loss function tailored to each task, and designing one is always painful. This is where the generative adversarial network (GAN) comes in.
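To see why, here is a one-line sketch (my own illustration, not from the paper): if the model must commit to a single prediction $\hat{y}$ for an input $x$, the prediction that minimizes the expected Euclidean (L2) loss is the average over all plausible outputs, and averaging several plausible images gives a blurry one:

$$\hat{y}^{*} = \arg\min_{\hat{y}} \; \mathbb{E}_{y \sim p(y \mid x)}\!\left[\lVert y - \hat{y} \rVert_2^2\right] = \mathbb{E}\left[\, y \mid x \,\right]$$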
GANs learn a loss that tries to classify if the output image is real or fake, while simultaneously training a generative model to minimize this loss. Blurry images will not be tolerated since they look obviously fake. Because GANs learn a loss that adapts to the data, they can be applied to a multitude of tasks that traditionally would require very different kinds of loss functions.
Now with the help of GANs, we can generate realistic-looking images. But in image-to-image translation, we do not just want a realistic-looking image; the output image should also be a translation of the input image. To perform this type of task we need a conditional GAN, so you must understand it before moving forward (to learn about conditional GANs in detail, you can follow this blog).
In image-to-image translation with a conditional GAN, the generator is provided with both the input image and a noise vector. The generator then produces an image that is translated from the input image and is indistinguishable from the real data (i.e., the discriminator is fooled). To train this model we need paired training examples, as shown below:
Network Architecture
Here the network architecture consists of two models: the generator and the discriminator. First, take a look at the generator model.
Generally, a generator network in a GAN architecture takes a noise vector as input and generates an image as output. But here the input consists of both a noise vector and an image, so the network takes an image as input and produces an image as output. For these kinds of problems, an encoder-decoder model is generally used.
In an encoder-decoder network, the input is first down-sampled to a bottleneck layer and then up-sampled to generate an image again. In our image-to-image translation problem, the input and output differ in surface appearance, but both share the same underlying structure. So, to enrich this encoder-decoder network, low-level information is shared between the input and output. For this, skip connections are added, which form a U-Net architecture as shown in the figure above.
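Below is a minimal U-Net-style generator sketch in Keras. It is only an illustration of the encoder-decoder-with-skip-connections idea: the layer counts, filter sizes, and the 256x256x3 input shape are assumptions for illustration, not the paper's exact configuration, and for simplicity it takes only the image (no explicit noise input).

```python
# Minimal U-Net-style generator sketch (illustrative, not the paper's exact model).
from tensorflow.keras import layers, Model

def build_generator(img_shape=(256, 256, 3)):
    inputs = layers.Input(shape=img_shape)

    # Encoder: down-sample the input to a bottleneck.
    d1 = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(inputs)  # 128x128
    d2 = layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(d1)     # 64x64
    d3 = layers.Conv2D(256, 4, strides=2, padding="same", activation="relu")(d2)     # 32x32 bottleneck

    # Decoder: up-sample back to the input resolution, concatenating the
    # matching encoder features (skip connections) at each step.
    u1 = layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu")(d3)  # 64x64
    u1 = layers.Concatenate()([u1, d2])
    u2 = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(u1)   # 128x128
    u2 = layers.Concatenate()([u2, d1])
    outputs = layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh")(u2)  # 256x256

    return Model(inputs, outputs, name="unet_generator")
```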
Here the discriminator model is a PatchGAN. A PatchGAN is nothing but a convolutional network; the only difference is that instead of mapping the input image to a single scalar output, it maps it to an NxN array, where each element of the NxN array corresponds to a patch of the input image. Finally, the elements are averaged to decide whether the full input image is real or fake.
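Here is a minimal PatchGAN discriminator sketch in Keras. The filter counts and depth are assumptions for illustration (the exact receptive field in the paper may differ); it takes the conditioning image together with a real or generated output image and returns an NxN grid of per-patch real/fake scores instead of a single scalar.

```python
# Minimal PatchGAN discriminator sketch (illustrative layer sizes).
from tensorflow.keras import layers, Model

def build_discriminator(img_shape=(256, 256, 3)):
    input_img = layers.Input(shape=img_shape)   # conditioning (source) image
    target_img = layers.Input(shape=img_shape)  # real or generated target image
    x = layers.Concatenate()([input_img, target_img])

    for filters in (64, 128, 256):
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)

    # One sigmoid score per patch instead of a single scalar for the whole image.
    patch_scores = layers.Conv2D(1, 4, padding="same", activation="sigmoid")(x)  # 32x32x1

    return Model([input_img, target_img], patch_scores, name="patchgan_discriminator")
```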
Reason for using a PatchGAN: the generator model is trained using both the discriminator loss and an L1 loss. It is well known that an L1 loss produces blurry images: it fails to capture high frequencies, although in many cases it captures low frequencies well. The discriminator's task is therefore only to capture the high frequencies. Restricting the discriminator's attention to local image patches with a PatchGAN clearly helps in capturing the high frequencies in the image. A sketch of this combined generator objective is shown below.
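Here is a hedged sketch of how the generator's composite objective could be expressed in code: an adversarial loss on the PatchGAN output plus an L1 term. The names and shapes are illustrative; binary cross-entropy is the usual practical form of the adversarial log terms, and the weight of 100 for the L1 term follows the paper's experiments.

```python
# Sketch of the generator's composite loss: adversarial + weighted L1 (illustrative).
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()
LAMBDA = 100.0  # weight of the L1 term; 100 is the value used in the paper's experiments

def generator_loss(disc_output_on_fake, generated_image, target_image):
    # Adversarial term: the generator wants every patch of its output judged "real".
    adversarial = bce(tf.ones_like(disc_output_on_fake), disc_output_on_fake)
    # L1 term: keeps the output close to the ground truth (captures low frequencies).
    l1 = tf.reduce_mean(tf.abs(target_image - generated_image))
    return adversarial + LAMBDA * l1
```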
Loss Function
Generally, the loss function for a conditional GAN can be stated as follows:
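With input image $x$, ground-truth output image $y$, and noise $z$:

$$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x, y}\left[\log D(x, y)\right] + \mathbb{E}_{x, z}\left[\log\left(1 - D(x, G(x, z))\right)\right]$$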
Here the generator G tries to minimize this loss function, whereas the discriminator D tries to maximize it. In the paper, the authors couple it with an L1 loss so that the generator's task is not only to fool the discriminator but also to produce images close to the ground truth. So the final loss function would be:
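$$G^{*} = \arg\min_{G}\max_{D}\; \mathcal{L}_{cGAN}(G, D) + \lambda\, \mathcal{L}_{L1}(G), \qquad \mathcal{L}_{L1}(G) = \mathbb{E}_{x, y, z}\left[\lVert y - G(x, z)\rVert_1\right]$$

where $\lambda$ weighs the L1 term against the adversarial term (the paper uses $\lambda = 100$ in its experiments).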
The paper suggests that this is a really promising approach for many image-to-image translation tasks, but it always requires a paired training dataset, which is sometimes difficult to get. That's all for this blog; in the next blog we will implement its application (pix2pix) using Keras.
Hope you enjoyed reading.
If you have any doubts or suggestions, please feel free to ask, and I will do my best to help or improve myself. Good-bye until next time.
Referenced Research Paper: Image-to-Image Translation with Conditional Adversarial Networks