Let’s start with a simple definition of autoencoders: autoencoders are neural networks trained to reconstruct their original input.
Now, you might be wondering what the use of reconstructing the same data is. Here is an example: if you need to transfer gigabytes of data, and you can compress it down to megabytes and later reconstruct it back to its original form, that is a much more efficient way to transfer it. This is one application of autoencoders.
An autoencoder generally consists of two parts: an encoder and a decoder. The encoder compresses the input down to a smaller number of features, and the decoder upscales those extracted features back to the original input.
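To make the encoder/decoder idea concrete, here is a minimal NumPy sketch of that shape-changing pipeline. The weights are random and untrained, and the 784 → 32 sizes are chosen to match the MNIST model later in this post; the point is only to show the dimensions involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Compress a 784-dimensional input (a flattened 28x28 image) to 32 features.
W_enc = rng.standard_normal((784, 32)) * 0.01  # encoder weights (random, untrained)
W_dec = rng.standard_normal((32, 784)) * 0.01  # decoder weights (random, untrained)

x = rng.random(784)                # a fake "image"
code = np.maximum(W_enc.T @ x, 0)  # encoder: 784 -> 32 (with a ReLU)
x_hat = W_dec.T @ code             # decoder: 32 -> 784 (the reconstruction)

print(x.shape, code.shape, x_hat.shape)  # (784,) (32,) (784,)
```

Training (covered below) is what makes `x_hat` actually resemble `x`; with random weights it is just the right shape.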
There are some practical applications of autoencoders:
- Dimensionality reduction for data visualization
- Image Denoising
- Generative Models
Visualizing a 10-dimensional vector directly is difficult. To overcome this, we first reduce that 10-dimensional vector to 2-D or 3-D. PCA (Principal Component Analysis) is a well-known algorithm for this. However, PCA uses only linear transformations, while autoencoders can use both linear and non-linear transformations for dimensionality reduction, which allows autoencoders to learn more complex and interesting features than PCA.
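For comparison, PCA’s linear reduction can be sketched in a few lines of NumPy (an illustrative version using SVD on centered synthetic data, not a full PCA implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))   # 200 samples, 10 dimensions

Xc = X - X.mean(axis=0)              # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2
Z = Xc @ Vt[:k].T                    # linear projection: 10-D -> 2-D
X_rec = Z @ Vt[:k] + X.mean(axis=0)  # linear reconstruction back to 10-D

err = np.mean((X - X_rec) ** 2)      # information lost by the linear map
print(Z.shape, err)
```

An autoencoder with a 2-unit bottleneck plays the same role as `Z` here, but because its layers can be non-linear, it is not limited to projections onto a flat plane.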
Autoencoders can also be used to remove noise from images, and to generate new images for a specific task. We will cover these two applications in the next blog.
Now, let’s start with a simple implementation of an autoencoder in Keras using the MNIST data. First, let’s download the MNIST training and test data and reshape it.
```python
from tensorflow.keras.datasets import mnist

(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
X_train = X_train.astype('float32') / 255.
output_X_train = X_train.reshape(-1, 28, 28, 1)
X_test = X_test.astype('float32') / 255.
output_X_test = X_test.reshape(-1, 28, 28, 1)
print(X_train.shape, X_test.shape)
```
Encoder
The MNIST data consists of images of digits, so a convolutional neural network is a good fit for both the encoder and the decoder. In the encoder, I have used convolution and max-pooling layers to extract a compressed representation, then flattened the result and mapped it down to 32 features, which will be the input to the decoder.
```python
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Flatten,
                                     Dense, Reshape, Conv2DTranspose)
from tensorflow.keras.models import Model

encoder_inputs = Input(shape=(28, 28, 1))
conv1 = Conv2D(16, (3, 3), activation='relu', padding='same')(encoder_inputs)
pool1 = MaxPooling2D(pool_size=(2, 2), strides=2)(conv1)   # 14x14x16
conv2 = Conv2D(32, (3, 3), activation='relu', padding='same')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2), strides=2)(conv2)   # 7x7x32
flat = Flatten()(pool2)
encoder_outputs = Dense(32, activation='relu')(flat)       # 32-feature code
```

Decoder

In the decoder, we need to upsample the 32 extracted features back to the original image size. To achieve this, I have used Conv2DTranspose layers from Keras. The final layer of the decoder then produces the reconstructed output, which should be close to the original input.

```python
dense_layer_d = Dense(7 * 7 * 32, activation='relu')(encoder_outputs)
output_from_d = Reshape((7, 7, 32))(dense_layer_d)
conv1_1 = Conv2D(32, (3, 3), activation='relu', padding='same')(output_from_d)
upsampling_1 = Conv2DTranspose(32, 3, padding='same', activation='relu',
                               strides=(2, 2))(conv1_1)       # 7x7 -> 14x14
upsampling_2 = Conv2DTranspose(16, 3, padding='same', activation='relu',
                               strides=(2, 2))(upsampling_1)  # 14x14 -> 28x28
# Sigmoid keeps the output in [0, 1], matching the normalized pixel values
# and the binary cross-entropy loss used below.
decoded_outputs = Conv2DTranspose(1, 3, padding='same',
                                  activation='sigmoid')(upsampling_2)
autoencoder = Model(encoder_inputs, decoded_outputs)
```
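It is worth checking the shape arithmetic that makes the decoder line up with the encoder. With `'same'` padding, the convolutions preserve spatial size, a stride-2 pooling halves it, and a stride-2 transposed convolution doubles it, which is why the bottleneck is reshaped to 7×7×32. A plain-Python sketch of this bookkeeping:

```python
# Shape bookkeeping for the model above ('same' padding throughout).
def pool(size):  # a stride-2 max pooling halves the spatial size
    return size // 2

def up(size):    # a stride-2 transposed convolution doubles it
    return size * 2

s = 28
s = pool(s)  # after pool1: 14
s = pool(s)  # after pool2: 7 -> Flatten gives 7 * 7 * 32 = 1568 -> Dense(32)
assert s == 7 and 7 * 7 * 32 == 1568

s = up(s)    # after upsampling_1: 14
s = up(s)    # after upsampling_2: 28, matching the 28x28 input
print(s)     # 28
```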
To minimize the reconstruction loss, we train the network on a large dataset and update its weights. Now that the model is built, the next step is to compile and train it.
```python
m = 256       # batch size
n_epoch = 10

autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(output_X_train, output_X_train,
                epochs=n_epoch, batch_size=m, shuffle=True)
```
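To see what “minimizing reconstruction loss by updating weights” means mechanically, here is a toy one-hidden-layer linear autoencoder trained with plain gradient descent in NumPy. This is a sketch on synthetic data, not the convolutional model above (which Keras trains for us), and the dimensions and learning rate are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.random((100, 20))               # 100 synthetic samples, 20 features each

d, k, lr = 20, 5, 0.05                  # input dim, bottleneck dim, learning rate
W1 = rng.standard_normal((d, k)) * 0.1  # encoder weights
W2 = rng.standard_normal((k, d)) * 0.1  # decoder weights

losses = []
for _ in range(500):
    Z = X @ W1                          # encode: 20 -> 5 features
    X_hat = Z @ W2                      # decode: 5 -> 20 (the reconstruction)
    err = X_hat - X
    losses.append(np.mean(err ** 2))    # reconstruction loss (MSE here)
    gW2 = Z.T @ err / len(X)            # gradient of the loss w.r.t. W2
    gW1 = X.T @ (err @ W2.T) / len(X)   # gradient of the loss w.r.t. W1
    W2 -= lr * gW2                      # gradient-descent weight updates
    W1 -= lr * gW1

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The loss falls as training proceeds, which is exactly what `autoencoder.fit` does for the Keras model, just with the Adam optimizer, binary cross-entropy, and mini-batches instead of full-batch MSE updates.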
Below are the results from the autoencoder trained above. The first row of digits shows the original inputs (test images), while the second row shows the reconstructions produced by the model.
The full code can be found here.
I hope you now understand the basics of autoencoders, where they can be used, and how a simple autoencoder can be implemented. In the next blog, we will see how to denoise an image using autoencoders. Hope you enjoyed reading.
If you have any doubts or suggestions, please feel free to ask, and I will do my best to help or improve myself. Good-bye until next time.
Referenced Research Paper: http://proceedings.mlr.press/v27/baldi12a/baldi12a.pdf