In my previous blog, we discussed what an autoencoder is, its applications, and a simple implementation in Keras. In this blog, we will see a variant of the autoencoder: the 'denoising autoencoder'.
A denoising autoencoder is an extension of the autoencoder. A plain autoencoder risks simply learning the identity function (output equals input), in which case it learns no useful features. One way to overcome this problem is to use a denoising autoencoder.
To train a denoising autoencoder, we need noisy input data, so we add some noise to the original images. The amount of corruption depends on how much information the data contains: usually 25-30% of the data is corrupted, and this can be higher if your data contains less information. Let's see how you can add noise to the data in code:
# adding Gaussian noise to the clean images
import numpy as np

input_x_train = output_X_train + 0.5 * np.random.normal(loc=0.0, scale=1.0, size=output_X_train.shape)
input_x_test = output_X_test + 0.5 * np.random.normal(loc=0.0, scale=1.0, size=output_X_test.shape)
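Note that adding Gaussian noise can push pixel values outside the valid [0, 1] range. Clipping them back (an optional step, not part of the original snippet) keeps the corrupted inputs on the same scale as the clean targets:

# optional: clip the corrupted pixels back into the [0, 1] range
input_x_train = np.clip(input_x_train, 0., 1.)
input_x_test = np.clip(input_x_test, 0., 1.)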
To calculate the loss, the output of the denoising autoencoder is compared to the original input instead of the corrupted one. Such a loss function trains the model to learn interesting features rather than the identity function.
I have implemented a denoising autoencoder in Keras using MNIST data, which will give you an overview of how a denoising autoencoder works.
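The snippets above assume the clean images output_X_train and output_X_test already exist. A minimal sketch of that preprocessing, assuming Keras's built-in MNIST loader, could look like this:

# load MNIST and prepare the clean target images (variable names match the snippets above)
from keras.datasets import mnist
import numpy as np

(output_X_train, _), (output_X_test, _) = mnist.load_data()
# scale pixel values to [0, 1] and add a channel axis for the Conv2D layers
output_X_train = output_X_train.astype('float32') / 255.
output_X_test = output_X_test.astype('float32') / 255.
output_X_train = output_X_train.reshape(-1, 28, 28, 1)
output_X_test = output_X_test.reshape(-1, 28, 28, 1)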
# creating the denoising autoencoder model
from keras.layers import Input, Conv2D, MaxPooling2D, Conv2DTranspose
from keras.models import Model

inputs = Input(shape=(28, 28, 1))

# encoder: two conv + max-pooling blocks compress 28x28 down to 7x7
conv1 = Conv2D(16, (3, 3), activation='relu', padding='same')(inputs)
pool1 = MaxPooling2D(pool_size=(2, 2), strides=2)(conv1)
conv2 = Conv2D(32, (3, 3), activation='relu', padding='same')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2), strides=2)(conv2)

# decoder: transposed convolutions upsample 7x7 back to 28x28
upsampling_1 = Conv2DTranspose(32, 3, padding='same', activation='relu', strides=(2, 2))(pool2)
upsampling_2 = Conv2DTranspose(16, 3, padding='same', activation='relu', strides=(2, 2))(upsampling_1)
# sigmoid keeps the outputs in [0, 1], matching the binary cross-entropy loss
outputs = Conv2DTranspose(1, 3, padding='same', activation='sigmoid')(upsampling_2)

autoencoder = Model(inputs, outputs)

m = 256        # batch size
n_epoch = 10   # number of training epochs
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# noisy images as input, clean images as the target
autoencoder.fit(input_x_train, output_X_train,
                epochs=n_epoch, batch_size=m, shuffle=True)
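Once trained, the model can be used to clean up unseen noisy images (denoised_imgs is a hypothetical name used here for illustration):

# run the noisy test images through the trained autoencoder
denoised_imgs = autoencoder.predict(input_x_test)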
The following is the result of the denoising autoencoder.
The full code can be found here.
I hope you understand the usefulness of denoising autoencoders. In the next blog, we will cover variational autoencoders. Hope you enjoy reading.
If you have any doubts/suggestions, please feel free to ask, and I will do my best to help or improve myself. Good-bye until next time.