
Implementation of Image-to-image translation using conditional GAN

In the previous blog, we learned what image-to-image translation is and how it can be performed using a conditional GAN. A conditional GAN is a type of generative adversarial network in which both the discriminator and the generator are conditioned on some sort of auxiliary information. In image-to-image translation using conditional GAN, an image serves as that auxiliary information. With the help of this information, the generator tries to generate a new image. Let’s say we want to translate the edge image of a shoe into a realistic-looking image of a shoe; here we can condition our GAN on the edge image.

To know more about conditional GAN and its implementation from scratch, you can read these blogs:

  1. Conditional Generative Adversarial Networks (CGAN): Introduction and Implementation
  2. Image to Image Translation Using Conditional GAN

Next, in this blog, we will implement image-to-image translation from scratch using the Keras functional API.

Dataset and Preprocessing

To implement an image-to-image translation model using conditional GAN, we need a paired dataset as shown in the image below.

The Center for Machine Perception (CMP) at the Czech Technical University in Prague provides a rich source of paired datasets for image-to-image translation, which we can use for our model. In this blog, we will use the edges-to-shoes dataset provided at this link. The dataset is split into a training set of 49,825 images and a validation set of 200 images. The images are preprocessed so that the edge map and the shoe photograph appear side by side in a single image, as shown below:

Each image has the shape (256, 512, 3), where 256 is the height, 512 is the width, and 3 is the number of channels. To split the image into an input and an output image, we simply slice it down the middle. After splitting, we also need to normalize the images. Pixel values range from 0 to 255, and to make training faster and reduce the chances of getting stuck in a local minimum, we normalize them to the range -1 to 1. Here is the code to preprocess the image.
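A minimal preprocessing sketch along these lines is shown below; it assumes OpenCV is used to read the images, and the load_image name and path handling are illustrative rather than taken from the original post.

import cv2
import numpy as np

def load_image(path):
    # Read the combined edge/shoe image of shape (256, 512, 3)
    img = cv2.imread(path).astype(np.float32)
    h, w, _ = img.shape
    # Slice down the middle: left half is the edge map, right half is the shoe photo
    edge_img = img[:, :w // 2, :]
    real_img = img[:, w // 2:, :]
    # Normalize pixel values from [0, 255] to [-1, 1]
    edge_img = (edge_img - 127.5) / 127.5
    real_img = (real_img - 127.5) / 127.5
    return edge_img, real_img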

In the preprocessing step we have only used normalization. We could also apply random jittering and random mirroring, as mentioned in the paper. To perform random jittering, upscale the image to 286×286 and then randomly crop it back to 256×256. To perform random mirroring, flip the image horizontally.
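A sketch of random jittering and mirroring under the same assumptions (OpenCV/NumPy; the function name is illustrative):

import cv2
import numpy as np

def random_jitter(edge_img, real_img):
    # Upscale both images to 286x286
    edge_img = cv2.resize(edge_img, (286, 286))
    real_img = cv2.resize(real_img, (286, 286))
    # Randomly crop back to 256x256 (same crop for both images)
    x = np.random.randint(0, 286 - 256)
    y = np.random.randint(0, 286 - 256)
    edge_img = edge_img[y:y + 256, x:x + 256, :]
    real_img = real_img[y:y + 256, x:x + 256, :]
    # Random mirroring: flip both images horizontally half of the time
    if np.random.rand() < 0.5:
        edge_img = edge_img[:, ::-1, :]
        real_img = real_img[:, ::-1, :]
    return edge_img, real_img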

Generator Network

The generator network for this conditional GAN architecture is a modified U-Net. This U-Net consists of an encoder-decoder network with skip connections between the encoder and decoder. Each encoder block consists of three layers (Conv -> BatchNorm -> LeakyReLU), and downsampling in the encoder is performed with strided convolutions. Each decoder block consists of four layers (Transposed Conv -> BatchNorm -> Dropout -> ReLU), with dropout applied only in the first three decoder blocks. The input shape for the network is (256, 256, 3), and the output shape is also (256, 256, 3), which will be the generated image.

Normally, the input to a generator in a generative adversarial network is a noise vector. Here we will use a combination of a noise vector and the edge image as input to the generator. We take a noise vector of size 100, pass it through a dense layer, and reshape it so it can be concatenated with the image input. Here is the code for the generator network. The model looks a little lengthy, but don’t worry, these are just repeated U-Net blocks for the encoder and decoder.
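Below is a sketch of such a generator, not the exact code from this post; the filter counts and the way the noise vector is projected to a full-resolution channel before concatenation are assumptions.

from keras.layers import Input, Dense, Reshape, Concatenate, Conv2D, Conv2DTranspose, BatchNormalization, LeakyReLU, ReLU, Dropout
from keras.models import Model

def encoder_block(x, filters):
    # Conv -> BatchNorm -> LeakyReLU, downsampling with stride 2
    x = Conv2D(filters, kernel_size=4, strides=2, padding='same')(x)
    x = BatchNormalization()(x)
    x = LeakyReLU(0.2)(x)
    return x

def decoder_block(x, skip, filters, apply_dropout=False):
    # Transposed Conv -> BatchNorm -> Dropout -> ReLU, upsampling with stride 2
    x = Conv2DTranspose(filters, kernel_size=4, strides=2, padding='same')(x)
    x = BatchNormalization()(x)
    if apply_dropout:
        x = Dropout(0.5)(x)
    x = ReLU()(x)
    # Skip connection with the corresponding encoder block
    x = Concatenate()([x, skip])
    return x

def build_generator():
    noise = Input(shape=(100,))
    edge_img = Input(shape=(256, 256, 3))

    # Project the noise vector and reshape it so it can be
    # concatenated with the edge image as an extra channel
    n = Dense(256 * 256)(noise)
    n = Reshape((256, 256, 1))(n)
    x = Concatenate()([edge_img, n])                    # (256, 256, 4)

    # Encoder: repeated downsampling blocks
    e1 = encoder_block(x, 64)                           # 128x128
    e2 = encoder_block(e1, 128)                         # 64x64
    e3 = encoder_block(e2, 256)                         # 32x32
    e4 = encoder_block(e3, 512)                         # 16x16
    e5 = encoder_block(e4, 512)                         # 8x8
    e6 = encoder_block(e5, 512)                         # 4x4
    e7 = encoder_block(e6, 512)                         # 2x2
    e8 = encoder_block(e7, 512)                         # 1x1 (bottleneck)

    # Decoder: upsampling blocks with skip connections,
    # dropout only in the first three blocks
    d1 = decoder_block(e8, e7, 512, apply_dropout=True)   # 2x2
    d2 = decoder_block(d1, e6, 512, apply_dropout=True)   # 4x4
    d3 = decoder_block(d2, e5, 512, apply_dropout=True)   # 8x8
    d4 = decoder_block(d3, e4, 512)                        # 16x16
    d5 = decoder_block(d4, e3, 256)                        # 32x32
    d6 = decoder_block(d5, e2, 128)                        # 64x64
    d7 = decoder_block(d6, e1, 64)                         # 128x128

    # Final layer maps back to a (256, 256, 3) image in the range [-1, 1]
    out = Conv2DTranspose(3, kernel_size=4, strides=2,
                          padding='same', activation='tanh')(d7)
    return Model([noise, edge_img], out)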

Discriminator Network

Here the discriminator is a PatchGAN. A PatchGAN is basically a convolutional network in which the input image is mapped to an NxN array instead of a single scalar output. For this conditional GAN, the discriminator takes two inputs: the edge image and the shoe image, both of shape (256, 256, 3). The output shape of this network is (30, 30, 1), where each element of the 30×30 output classifies a 70×70 patch of the input image.

Each block in the discriminator consists of three layers (Conv -> BatchNorm -> LeakyReLU). I have also used a Gaussian blurring layer to reduce the dominance of the discriminator while training. Here is the full code.
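A sketch of a 70×70 PatchGAN discriminator with the layer pattern described above; the Gaussian-blur layer mentioned in the text is omitted here, and the learning rate is an illustrative value rather than the one used in the original post.

from keras.layers import Input, Concatenate, Conv2D, BatchNormalization, LeakyReLU, ZeroPadding2D
from keras.models import Model
from keras.optimizers import Adam

def build_discriminator():
    edge_img = Input(shape=(256, 256, 3))
    target_img = Input(shape=(256, 256, 3))
    x = Concatenate()([edge_img, target_img])           # (256, 256, 6)

    # Conv -> BatchNorm -> LeakyReLU blocks, downsampling with stride 2
    x = Conv2D(64, 4, strides=2, padding='same')(x)     # 128x128
    x = LeakyReLU(0.2)(x)
    x = Conv2D(128, 4, strides=2, padding='same')(x)    # 64x64
    x = BatchNormalization()(x)
    x = LeakyReLU(0.2)(x)
    x = Conv2D(256, 4, strides=2, padding='same')(x)    # 32x32
    x = BatchNormalization()(x)
    x = LeakyReLU(0.2)(x)

    # Two stride-1 convolutions bring the output down to a 30x30 patch map
    x = ZeroPadding2D()(x)                               # 34x34
    x = Conv2D(512, 4, strides=1)(x)                     # 31x31
    x = BatchNormalization()(x)
    x = LeakyReLU(0.2)(x)
    x = ZeroPadding2D()(x)                               # 33x33
    patch_out = Conv2D(1, 4, strides=1, activation='sigmoid')(x)  # (30, 30, 1)

    model = Model([edge_img, target_img], patch_out)
    # Lower learning rate than the generator to keep the discriminator from dominating
    model.compile(loss='binary_crossentropy', optimizer=Adam(2e-5, 0.5))
    return model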

Combined Network

Now we will create a combined network to train the generator model. This network takes the noise vector and the edge image as input and generates a new image using the generator network. The output from the generator and the edge image are then fed to the discriminator network to get its prediction, but here the discriminator is non-trainable. Here is the network code.
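A sketch of the combined model, assuming the build_generator and build_discriminator functions above; the loss weight of 100 on the L1 term follows the pix2pix paper and is an assumption about this post's exact settings.

from keras.layers import Input
from keras.models import Model
from keras.optimizers import Adam

def build_combined(generator, discriminator):
    # Freeze the discriminator while training the generator through this model
    discriminator.trainable = False

    noise = Input(shape=(100,))
    edge_img = Input(shape=(256, 256, 3))

    fake_img = generator([noise, edge_img])
    patch_out = discriminator([edge_img, fake_img])

    # Two outputs: the patch predictions (adversarial loss) and
    # the generated image itself (MAE / L1 loss against the real image)
    combined = Model([noise, edge_img], [patch_out, fake_img])
    combined.compile(loss=['binary_crossentropy', 'mae'],
                     loss_weights=[1, 100],
                     optimizer=Adam(2e-4, 0.5))
    return combined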

Training

I have used binary cross-entropy loss for the discriminator network. For the generator network, I have coupled binary cross-entropy loss with MAE (L1) loss, because for image-to-image translation the generator’s job is not only to fool the discriminator but also to generate images close to the ground truth. I have used the Adam optimizer for both the generator and the discriminator; the only difference is that I kept a lower learning rate for the discriminator to make it less dominant while training. I have used a batch size of 1. Here are the steps to train this conditional GAN (a training-loop sketch follows the list).

  1. Train the discriminator model on real output images with patch labels of value 1.
  2. Train the discriminator model on images generated by the generator with patch labels of value 0.
  3. Train the generator network using the combined model.
  4. Repeat steps 1 to 3 for each image in the training dataset, and then repeat the whole procedure for some number of epochs.
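A minimal training-loop sketch under the assumptions above; train_paths (the list of training image files) and the epoch count are placeholders.

import numpy as np

# Assumed to be defined above: load_image(), build_generator(),
# build_discriminator(), build_combined()
generator = build_generator()
discriminator = build_discriminator()
combined = build_combined(generator, discriminator)

epochs = 20
real_patch = np.ones((1, 30, 30, 1))    # patch labels for real images
fake_patch = np.zeros((1, 30, 30, 1))   # patch labels for generated images

for epoch in range(epochs):
    for path in train_paths:             # train_paths: list of dataset image files (assumed)
        edge, real = load_image(path)
        edge = edge[np.newaxis, ...]     # add batch dimension (batch size 1)
        real = real[np.newaxis, ...]
        noise = np.random.normal(0, 1, (1, 100))

        # 1. Train the discriminator on the real pair
        fake = generator.predict([noise, edge])
        d_loss_real = discriminator.train_on_batch([edge, real], real_patch)
        # 2. Train the discriminator on the generated pair
        d_loss_fake = discriminator.train_on_batch([edge, fake], fake_patch)
        # 3. Train the generator through the combined model
        g_loss = combined.train_on_batch([noise, edge], [real_patch, real])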

Hope you enjoy reading.

If you have any doubts/suggestions please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Image to Image Translation Using Conditional GAN

Image-to-image translation is a well-known problem in the fields of image processing, computer graphics, and computer vision. Some examples are converting labels to street scenes, labels to facades, black-and-white photos to color, aerial images to maps, day to night, and edges to photos. Take a look at these conversions:

Earlier, each of these tasks was handled separately. But with the help of convolutional neural networks (CNNs), the community has taken big steps in this field. With CNNs, most of the work becomes automatic as we train the model in an end-to-end fashion. But we still need to define a loss function that captures the target we want. Most of us take the loss function lightly, yet it is the most important thing to pay attention to when training deep learning models. For instance, if we take Euclidean distance as our loss function for image-to-image translation, it produces blurred images, because the loss is minimized by averaging all plausible outputs. Thus we need a meaningful loss function for each task, and designing one is always painful. This is where the generative adversarial network (GAN) comes in.

GANs learn a loss that tries to classify if the output image is real or fake, while simultaneously training a generative model to minimize this loss. Blurry images will not be tolerated since they look obviously fake. Because GANs learn a loss that adapts to the data, they can be applied to a multitude of tasks that traditionally would require very different kinds of loss functions.

Now with the help of GANs we can generate realistic-looking images. But in image-to-image translation, we do not just want to generate a realistic-looking image; the output image should also be a translation of the input image. To perform this type of task we need a conditional GAN, so you should understand this before moving forward (to know about conditional GANs in detail, you can follow this blog).

In image-to-image translation with conditional GAN, the generator is provided with both the input image and a noise vector. The generator will then produce an image that is translated from the input image and indistinguishable from the original data (the discriminator will be fooled). To train this model we need paired training examples, as shown below:

Network Architecture

Here the network architecture consists of two models, the generator and the discriminator. First, take a look at the generator model.

Generally, a generator network in a GAN architecture takes a noise vector as input and generates an image as output. Here, however, the input consists of both a noise vector and an image, so the network takes an image as input and produces an image as output. For this type of problem, an encoder-decoder model is generally used.

In an encoder-decoder network, the input is first downsampled to a bottleneck layer and then upsampled to generate an image again. In our image-to-image translation problem, the input and output differ in surface appearance but share the same underlying structure. To exploit this, low-level information is shared between the input and output by adding skip connections, which forms a U-Net architecture as shown in the figure above.

Here the discriminator model is a PatchGAN. A PatchGAN is nothing but a convolutional network; the only difference is that instead of mapping the input image to a single scalar output, it maps it to an NxN array, where each element of the NxN array corresponds to a patch of the input image. Finally, the elements are averaged to decide whether the full input image is real or fake.

Reason for using PatchGAN: the generator model is trained using both the discriminator loss and an L1 loss. It is well known that L1 loss produces blurry images: it fails to capture high frequencies, although in many cases it captures low frequencies well. The discriminator’s task is then only to capture the high frequencies. Restricting the discriminator’s attention to local image patches with a PatchGAN clearly helps in capturing the high frequencies of the image.

Loss Function

Generally, the loss function for a conditional GAN can be stated as follows:
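In the notation of the pix2pix paper, with input image x, target image y, and noise vector z, this objective is:

\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\big[\log D(x, y)\big] + \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x, z))\big)\big]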

Here the generator G tries to minimize this loss function whereas the discriminator D tries to maximize it. In the paper, the authors couple it with an L1 loss so that the generator’s task is not only to fool the discriminator but also to generate images close to the ground truth. So the final loss function would be:
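With the L1 term defined as in the paper, the final objective is:

\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\big[\lVert y - G(x, z) \rVert_1\big]

G^{*} = \arg\min_{G} \max_{D}\; \mathcal{L}_{cGAN}(G, D) + \lambda\, \mathcal{L}_{L1}(G)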

The paper suggests that this is a really promising approach for many image-to-image translation tasks, but it always requires a paired training dataset, which is sometimes difficult to get. That’s all for this blog; in the next blog we will implement its application (pix2pix) using Keras.

Hope you enjoy reading.

If you have any doubts/suggestions please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

Referenced Research Paper: Image-to-Image Translation with Conditional Adversarial Networks

Conditional Generative Adversarial Networks (CGAN): Introduction and Implementation

Generative adversarial networks (GANs) are trained to generate new images that look similar to the original images. Let’s say we have trained a GAN on the MNIST digit dataset, which consists of handwritten digits 0-9. If we now generate images from this trained GAN, it will randomly produce images that can be any digit between 0 and 9. But if we want to generate images of only one particular digit, it will be difficult. One way is to find a mapping between the random noise given as input to the generator and the images generated by the network, but with the variations in the random input noise, this mapping is really difficult to find. This is where conditional GANs come in.

A GAN becomes a conditional GAN if we train both the discriminator and the generator conditioned on some sort of auxiliary information. This information can be class labels, black-and-white images, or data from other modalities. In this blog, we will learn how to generate images from a conditional GAN (cGAN) conditioned on the class label.

After the introduction of conditional GANs in 2014, a wide range of applications have been developed based on this network. Some of them are:

  1. Image-to-Image Translation: With cGANs, there have been various implementations of image-to-image translation, such as day to night, black and white to color, and sketches to color photographs.


  2. Face Aging: Uses conditional GANs to generate face photographs with different ages, from younger to older.


  3. Text to Image: Inspired by the idea of conditional GANs, this generates images from text describing the image.


That’s enough introduction; now we will implement a conditional GAN to generate handwritten digits conditioned on class labels.

Here we will use the MNIST digits dataset to train this conditional GAN. This dataset consists of images of digits ranging from 0 to 9 and their corresponding labels. Create a cgan.py file and insert the following code:
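A plausible reconstruction of those imports is shown below; the exact list of layers is an assumption based on the layers used later in this post.

from keras.layers import Input, Dense, Reshape, Flatten, Embedding, Concatenate, Conv2D, Conv2DTranspose, BatchNormalization, LeakyReLU, AveragePooling2D
from keras.models import Model
from keras.optimizers import Adam
from keras.datasets import mnist
import numpy as np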

Line 1 imports all the required layers from Keras. Lines 2 and 3 import the Model class and the optimizer, respectively. Line 4 imports the MNIST dataset from Keras; if you haven’t used it before, it will download the data first. Line 5 imports the NumPy package.

Now that we have imported all the necessary packages, next we will create our cGAN architecture. To create this network, first we will create a class and initialize all the necessary variables.
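A sketch of this class and its __init__ follows; the hyperparameter values here are assumptions, and the line numbers in the commentary below refer to the original code, so they may not match this sketch exactly.

class GAN():
    def __init__(self):
        # Load the MNIST training and test data with their labels
        (self.X_train, self.y_train), (self.X_test, self.y_test) = mnist.load_data()
        # Hyperparameters (the exact values below are assumptions)
        self.img_shape = (28, 28, 1)
        self.num_classes = 10
        self.latent_dim = 100
        self.batch_size = 64
        self.optimizer = Adam(0.0002, 0.5)
        # Build the generator, discriminator, and combined models
        self.generator_model = self.generator()
        self.discriminator_model = self.discriminator()
        self.combined_model = self.combined()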

In the above code, line 1 creates a class named GAN. Line 2 defines the __init__ function, which initializes all the required variables. Line 4 loads the data, which consists of both the training and test data with their labels. Lines 5-9 initialize the hyperparameters required for the network. Lines 10-12 call the generator, discriminator, and combined-model functions, which we will define later in this class.

After initializing all the required variables, we will next define the generator function of the GAN class.
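A sketch of the generator method under the same assumptions; the filter counts and the dense projection of the noise vector are illustrative, not the original post's exact values.

    def generator(self):
        # Conditional input: class label embedded and reshaped to a (7, 7, 1) feature map
        label = Input(shape=(1,))
        l = Embedding(self.num_classes, 50)(label)
        l = Dense(49)(l)
        l = Reshape((7, 7, 1))(l)

        # Noise input projected and reshaped so it can be concatenated with the label map
        noise = Input(shape=(self.latent_dim,))
        n = Dense(7 * 7 * 128)(noise)
        n = Reshape((7, 7, 128))(n)

        x = Concatenate()([n, l])
        x = BatchNormalization()(x)

        # Two upsampling (transposed convolution) layers with batch normalization
        x = Conv2DTranspose(128, 4, strides=2, padding='same', activation='relu')(x)  # 14x14
        x = BatchNormalization()(x)
        x = Conv2DTranspose(64, 4, strides=2, padding='same', activation='relu')(x)   # 28x28
        x = BatchNormalization()(x)

        # Output layer with the same shape as the real images (28, 28, 1)
        out = Conv2D(1, 7, padding='same', activation='tanh')(x)

        model = Model([noise, label], out)
        model.compile(loss='binary_crossentropy', optimizer=self.optimizer)
        return model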

The generator takes two inputs: one is random noise of shape (100,) and the other is the class label of shape (1,), an integer between 0 and 9. This extra class-label input is our condition to the GAN. At test time, we will use this class label as the condition to generate images for that specific class only.

In the above code, lines 3-6 handle the class-label input. Here we have added an Embedding layer for this conditional input, whose weights are trained along with the generator. This embedding layer converts positive integers into dense vectors of fixed size; here we have used an embedding of size 50. After this embedding layer we add a dense layer and then reshape the result so it can be concatenated with the random noise.

Lines 8-9 create an input layer for the random noise and reshape it. Lines 11 and 12 concatenate both inputs after reshaping and then apply batch normalization. Batch normalization is really helpful in improving the quality of the model and stabilizing the training process.

Lines 13-15 add two upsampling layers (transposed convolutions), each followed by batch normalization. Line 16 is the output layer with the same shape as the real images, (28, 28, 1). In line 17 we create the generator model, and in line 18 we compile it with cross-entropy loss and the Adam optimizer.

The GAN class also contains a discriminator network, which is likewise conditioned on class labels.
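A sketch of the discriminator method matching that description; the LeakyReLU activation inside each block and the kernel sizes are assumptions.

    def discriminator(self):
        # Conditional input: class label embedded and reshaped to a (28, 28, 1) map
        label = Input(shape=(1,))
        l = Embedding(self.num_classes, 50)(label)
        l = Dense(784)(l)
        l = Reshape((28, 28, 1))(l)

        # Image input (real or fake), concatenated with the label map
        img = Input(shape=self.img_shape)
        x = Concatenate()([img, l])

        # Conv -> BatchNorm -> AveragePooling blocks with 16, 32 and 64 filters
        # (the activation here is an assumption)
        for filters in [16, 32, 64]:
            x = Conv2D(filters, 3, padding='same')(x)
            x = BatchNormalization()(x)
            x = LeakyReLU(0.2)(x)
            x = AveragePooling2D()(x)

        # Flatten and map to a single real/fake output
        x = Flatten()(x)
        out = Dense(1, activation='sigmoid')(x)

        model = Model([img, label], out)
        model.compile(loss='binary_crossentropy', optimizer=self.optimizer)
        return model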

In the above code, lines 3-6 convert the class-label input into an embedding, just as in the generator network, except that it is reshaped to (28, 28, 1) instead of (7, 7, 1). Line 8 defines the second input layer, the image (either real or fake). Then in line 10 we concatenate both inputs to make them compatible with the rest of the discriminator network.

Lines 11-19 are basically a stack of conv layer -> batch norm layer -> average pooling layer blocks. The convolution layers have 16, 32, and 64 filters. Here we use average pooling instead of max pooling, as it is generally recommended not to use max pooling layers in GAN architectures.

Finally, in lines 20-21 we flatten the output from the previous layer and add a fully connected layer of size 1, which serves as the output layer of our discriminator model. This model discriminates between real and fake images. In lines 22-23 we create the discriminator model, which takes two inputs and produces one output, and compile it with cross-entropy loss and the Adam optimizer.

This was our discriminator model; now we will create a combined model, consisting of both the generator and the discriminator, to train the generator network.
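A sketch of the combined-model method, assuming the generator and discriminator methods defined above:

    def combined(self):
        # Generator inputs: noise vector and class label
        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,))

        img = self.generator_model([noise, label])

        # Only the generator is trained through this model
        self.discriminator_model.trainable = False
        valid = self.discriminator_model([img, label])

        model = Model([noise, label], valid)
        model.compile(loss='binary_crossentropy', optimizer=self.optimizer)
        return model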

In the above code, we create a combined model that takes two inputs: random noise of shape (100,) and the class label of shape (1,). The generator model takes these two inputs and generates a new image, which is then fed to the discriminator model to predict whether it is real or fake. Here, only the generator is trained and the discriminator is made non-trainable.

Next we will train the whole GAN network using these models.
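A training-method sketch matching the description below; the iteration count is a placeholder, and the line numbers in the commentary refer to the original code.

    def train(self, iterations=10000):
        # Normalize images to [-1, 1] and add the channel dimension
        X_train = (self.X_train.astype(np.float32) - 127.5) / 127.5
        X_train = X_train.reshape(-1, 28, 28, 1)

        half_batch = self.batch_size // 2

        for it in range(iterations):
            # Select a random half-batch of real images with their labels
            idx = np.random.randint(0, X_train.shape[0], half_batch)
            real_imgs = X_train[idx]
            real_labels = self.y_train[idx].reshape(-1, 1)
            # Train the discriminator on real images conditioned on real labels
            d_loss_real = self.discriminator_model.train_on_batch(
                [real_imgs, real_labels], np.ones((half_batch, 1)))

            # Generate fake images from random noise and random class labels
            random_labels = np.random.randint(0, 10, (half_batch, 1))
            noise = np.random.normal(0, 1, (half_batch, self.latent_dim))
            fake_imgs = self.generator_model.predict([noise, random_labels])
            # Train the discriminator on fake images conditioned on the random labels
            d_loss_fake = self.discriminator_model.train_on_batch(
                [fake_imgs, random_labels], np.zeros((half_batch, 1)))

            # Train the generator through the combined model
            noise = np.random.normal(0, 1, (self.batch_size, self.latent_dim))
            random_labels = np.random.randint(0, 10, (self.batch_size, 1))
            g_loss = self.combined_model.train_on_batch(
                [noise, random_labels], np.ones((self.batch_size, 1)))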

In the above code, in lines 3-4 we first normalize the input images to the range -1 to 1 and then reshape them to (28, 28, 1). In lines 9-11 we randomly select real images, and their corresponding labels, equal to half the batch size. In line 13, we train the discriminator network on these real images conditioned on the real class labels.

Then in line 15 we select random labels between 0 and 9 for half the batch size as input to the generator, because during training we do not have class labels for the random noise. In lines 16-17 we take random noise of shape (half_batch_size, 100) and generate images from the generator network, which will be the fake input images to the discriminator. Then in line 19 we train the discriminator network on these fake generated images conditioned on the random class labels.

Finally, in lines 21-22, we train our generator network using the combined model. Here we take random noise and random class labels as input to the combined model.

We train this network for some number of iterations until the generator is able to consistently fool the discriminator. After training, we can discard the discriminator network and use the generator network to generate new images conditioned on class labels.
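A small test sketch, assuming the class above; the chosen digit, number of samples, and iteration count are illustrative.

# Train the cGAN and then generate images for a chosen digit
gan = GAN()
gan.train(iterations=10000)

class_label = 7
noise = np.random.normal(0, 1, (10, 100))
labels = np.full((10, 1), class_label)
generated = gan.generator_model.predict([noise, labels])
# Rescale from [-1, 1] back to [0, 1] for display
generated = 0.5 * generated + 0.5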

The above code is used to test our trained cGAN. Here are the outputs generated from the network.

Hope you enjoy reading.

If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.

References: