Generative Adversarial Networks (GANs), as the name suggests, are deep learning models that generate new data resembling a given dataset through an adversarial process. GANs were first introduced by Ian Goodfellow at NIPS 2014, and the idea has been described as one of the most interesting in machine learning in the last 10 years. Generative models carry great promise because they can mimic any data distribution: they can be used to generate images, audio waveforms containing speech, music, and more.
Generative Adversarial Network Algorithm:
To create a GAN, we train two networks simultaneously in an adversarial manner: a generator and a discriminator. The adversarial part is this: the generator tries to produce data similar to the original data distribution, while the discriminator tries to distinguish the generator's output from real data. The generator improves by learning to fool the discriminator, and the discriminator improves by learning to tell real from fake. Training continues until the discriminator is fooled about half the time, at which point the generator is producing data close to the original data distribution.
Let’s consider an example of generating new images with a GAN. The first network, the discriminator, is D(X), where X is an image (either real or fake). The second network, the generator, is G(Z), where Z is random noise. To train these networks, D is first fed real images and trained to output values close to 1 (real), then fed fake images produced by the generator and trained to output values close to 0 (fake). The generator, in turn, is trained using the loss computed from the discriminator's output on each image the generator produces.
We train D to maximize the probability of assigning the correct label to both training examples and samples from G, and we simultaneously train G to minimize log(1 − D(G(z))). Let’s take a look at the algorithm provided in the GAN paper.
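In the paper, this adversarial game is written as a two-player minimax problem over a value function V(D, G):

```latex
\min_G \max_D V(D, G) =
\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)]
+ \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
```

The first term rewards D for labeling real samples as real; the second rewards D for labeling generated samples as fake, and G tries to drive that same term down.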
We train these networks for some number of iterations until the generator produces images close to the training dataset.
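The alternating training described above can be sketched in a few lines of PyTorch. This is a toy illustration, not the paper's exact setup: the "real" data here is just 1-D samples from a Gaussian, and the network sizes and learning rates are arbitrary choices of ours.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
noise_dim, data_dim, batch = 8, 1, 32

# Small MLPs standing in for G(Z) and D(X); sizes are arbitrary.
G = nn.Sequential(nn.Linear(noise_dim, 16), nn.ReLU(), nn.Linear(16, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(200):
    real = 4 + 1.25 * torch.randn(batch, data_dim)  # samples from the "true" distribution
    z = torch.randn(batch, noise_dim)               # random noise fed to the generator
    fake = G(z)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(batch, 1))
              + bce(D(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: fool D, i.e. push D(G(z)) toward 1.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```

Note that the generator step uses the non-saturating loss (maximize log D(G(z))), which the paper recommends in practice over directly minimizing log(1 − D(G(z))), since the latter gives weak gradients early in training.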
Generative Adversarial Networks (GANs) vs. Variational Autoencoders (VAEs)
There are other generative models, such as variational autoencoders, that can do a similar job to GANs. A VAE maps the input to a low-dimensional latent space, models it as a probability distribution, and generates new outputs by sampling from that distribution and passing the sample through a decoder function (to know more about VAEs you can follow this blog).
Vanilla GANs, by contrast, do not map the input to a latent space; they generate new data directly from random noise. GANs are usually harder to train but generate finer, more granular images, while VAEs are easier to train but tend to produce blurrier images.
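To make the contrast concrete, here is a minimal VAE sketch in PyTorch: the encoder maps an input to the parameters (mean and log-variance) of a latent Gaussian, and the decoder reconstructs from a sample of that distribution. This is a hypothetical toy setup of our own; the dimensions are arbitrary.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyVAE(nn.Module):
    def __init__(self, data_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, data_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
        # keeps sampling differentiable so the encoder can be trained end to end.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

vae = TinyVAE()
x = torch.rand(4, 784)
recon, mu, logvar = vae(x)

# VAE loss (negative ELBO) = reconstruction term + KL divergence to the N(0, I) prior.
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
loss = nn.functional.binary_cross_entropy(recon, x, reduction="sum") + kl
```

Unlike the GAN's generator, the trained decoder gives an explicit mapping from a structured latent space to data, which is exactly the property vanilla GANs lack.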
This was a brief introduction to generative adversarial networks. In the following posts, we will implement different GAN architectures, train a GAN, and learn more about GAN improvements and variants (CycleGAN, InfoGAN, BigGAN, etc.).
Hope you enjoy reading.
If you have any doubts or suggestions, please feel free to ask, and I will do my best to help or improve. Good-bye until next time.