In the previous blog, we saw how to create the training and validation datasets for our recognition model (Download and preprocess). In this blog, we will create our model architecture and train it with the preprocessed data.
You can find the full code here.
Model = CNN + RNN + CTC loss
Our model consists of three parts:
- A convolutional neural network (CNN) that extracts features from the input image
- A recurrent neural network (RNN) that predicts a sequential output per time step
- A CTC loss function, which acts as the transcription layer and maps the per-time-step predictions to the final output text
Model Architecture
Here is the model architecture that we used:
This network architecture is inspired by this paper. Let’s see the steps that we used to create the architecture:
- The input to the architecture is an image of height 32 and width 128.
- We use seven convolution layers, six with kernel size (3,3) and the last one with kernel size (2,2). The number of filters increases from 64 to 512 layer by layer.
- Two max-pooling layers of size (2,2) are added, followed by two max-pooling layers of size (2,1); the (2,1) pooling reduces the height while preserving the width, which helps in predicting long texts.
- Batch normalization layers are used after the fifth and sixth convolution layers, which accelerates the training process.
- A Lambda layer is then used to squeeze out the height dimension of the convolutional output so that it becomes a sequence compatible with the LSTM layers.
- Finally, two bidirectional LSTM layers with 128 units each are used, followed by a dense softmax layer. The final output has size (batch_size, 31, 63), where 31 is the number of time steps and 63 is the total number of output classes including the blank character.
Let’s see the code for this architecture:
# imports used below (assuming the standalone Keras package with a TensorFlow backend;
# char_list is the list of characters created in the preprocessing step of the previous blog)
from keras.layers import Input, Conv2D, MaxPool2D, BatchNormalization, Lambda, Bidirectional, LSTM, Dense
from keras.models import Model
import keras.backend as K

# input with shape of height=32 and width=128
inputs = Input(shape=(32,128,1))

# convolution layer with kernel size (3,3)
conv_1 = Conv2D(64, (3,3), activation = 'relu', padding='same')(inputs)
# pooling layer with kernel size (2,2)
pool_1 = MaxPool2D(pool_size=(2, 2), strides=2)(conv_1)

conv_2 = Conv2D(128, (3,3), activation = 'relu', padding='same')(pool_1)
pool_2 = MaxPool2D(pool_size=(2, 2), strides=2)(conv_2)

conv_3 = Conv2D(256, (3,3), activation = 'relu', padding='same')(pool_2)
conv_4 = Conv2D(256, (3,3), activation = 'relu', padding='same')(conv_3)
# pooling layer with kernel size (2,1)
pool_4 = MaxPool2D(pool_size=(2, 1))(conv_4)

conv_5 = Conv2D(512, (3,3), activation = 'relu', padding='same')(pool_4)
# batch normalization layer
batch_norm_5 = BatchNormalization()(conv_5)

conv_6 = Conv2D(512, (3,3), activation = 'relu', padding='same')(batch_norm_5)
batch_norm_6 = BatchNormalization()(conv_6)
pool_6 = MaxPool2D(pool_size=(2, 1))(batch_norm_6)

conv_7 = Conv2D(512, (2,2), activation = 'relu')(pool_6)

# squeeze out the height dimension (now 1) to get a sequence of feature vectors for the LSTMs
squeezed = Lambda(lambda x: K.squeeze(x, 1))(conv_7)

# bidirectional LSTM layers with units=128
blstm_1 = Bidirectional(LSTM(128, return_sequences=True, dropout = 0.2))(squeezed)
blstm_2 = Bidirectional(LSTM(128, return_sequences=True, dropout = 0.2))(blstm_1)

outputs = Dense(len(char_list)+1, activation = 'softmax')(blstm_2)

# model to be used at test time
act_model = Model(inputs, outputs)
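Before wiring up the loss, it is worth sanity-checking the shapes described above. A quick way to do that (assuming char_list holds the 62 characters from the preprocessing step, so the output dimension is 63) is to print the model summary; the final Dense layer should report an output shape of (None, 31, 63):

# print the layer-by-layer output shapes; the final Dense layer should be (None, 31, 63)
act_model.summary()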
Loss Function
Now that we have prepared the model architecture, the next step is to choose a loss function. For this text recognition problem, we will use the CTC loss function.
CTC loss is very helpful in text recognition problems. It removes the need to annotate each time step and handles the problem of a single character spanning multiple time steps, which would otherwise require extra post-processing. If you want to know more about CTC (Connectionist Temporal Classification), please follow this blog.
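To build a quick intuition for what the CTC transcription does, here is a minimal sketch (not from the original code, just a hypothetical illustration) of the best-path collapse rule that turns per-time-step predictions into text: consecutive repeated symbols are merged first, and then blanks are removed.

# minimal illustration of CTC best-path collapsing (hypothetical example)
BLANK = '-'  # stands for the CTC blank class

def collapse(path):
    # 1. merge consecutive repeats, 2. drop blank symbols
    merged = [c for i, c in enumerate(path) if i == 0 or c != path[i - 1]]
    return ''.join(c for c in merged if c != BLANK)

# a 10-time-step path collapses to a 5-character word;
# the blank between the two 'l's is what preserves the double letter
print(collapse(list('hh-ee-l-lo')))  # -> 'hello'

This is why a character can safely span several time steps without any extra annotation: the repeats are collapsed automatically.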
Note: For more details on Optical Character Recognition, please refer to the Mastering OCR using Deep Learning and OpenCV-Python course.
The CTC loss function requires four arguments to compute the loss: the predicted outputs, the ground truth labels, the input sequence length to the LSTM, and the ground truth label length. To provide these, we create a custom loss function and wrap everything in a new model that takes these four inputs and outputs the loss. This model will be used for training; for testing, we will use the "act_model" created earlier. Let's see the code:
# ground-truth labels and sequence lengths (max_label_len comes from the preprocessing step)
labels = Input(name='the_labels', shape=[max_label_len], dtype='float32')
input_length = Input(name='input_length', shape=[1], dtype='int64')
label_length = Input(name='label_length', shape=[1], dtype='int64')

def ctc_lambda_func(args):
    y_pred, labels, input_length, label_length = args
    return K.ctc_batch_cost(labels, y_pred, input_length, label_length)

loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')([outputs, labels, input_length, label_length])

# model used for training: takes the four inputs and outputs the CTC loss
model = Model(inputs=[inputs, labels, input_length, label_length], outputs=loss_out)
Compile and Train the Model
To train the model, we will use the Adam optimizer. We also use Keras's ModelCheckpoint callback to save the weights of the best model based on validation loss.
from keras.callbacks import ModelCheckpoint

# the CTC loss is already computed inside the model, so the compile loss simply passes y_pred through
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer='adam')

filepath = "best_model.hdf5"
checkpoint = ModelCheckpoint(filepath=filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
callbacks_list = [checkpoint]
In model.compile(), you can see that the loss only uses y_pred and ignores y_true. This is because the labels are already fed to the model as an input, and the Lambda layer above already outputs the CTC loss itself.
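The arrays train_input_length and train_label_length (and their validation counterparts) used in the fit call below were built in the preprocessing step of the previous blog. If you are recreating them from scratch, a rough sketch could look like this (train_txts is a hypothetical list holding the ground-truth training strings; 31 is the number of time steps produced by the architecture above):

# hypothetical sketch - the actual arrays are built in the preprocessing blog
train_input_length = [31] * len(train_txts)            # CTC input length: 31 time steps per image
train_label_length = [len(txt) for txt in train_txts]  # unpadded length of each ground-truth label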
Now train your model on 135000 training images and 15000 validation images.
import numpy as np

# convert everything to numpy arrays
training_img = np.array(training_img)
train_input_length = np.array(train_input_length)
train_label_length = np.array(train_label_length)

valid_img = np.array(valid_img)
valid_input_length = np.array(valid_input_length)
valid_label_length = np.array(valid_label_length)

# the targets are dummy zeros because the loss is computed inside the model itself
model.fit(x=[training_img, train_padded_txt, train_input_length, train_label_length],
          y=np.zeros(135000),
          batch_size=256,
          epochs=100,
          validation_data=([valid_img, valid_padded_txt, valid_input_length, valid_label_length], [np.zeros(15000)]),
          verbose=1,
          callbacks=callbacks_list)
Test the Model
Our model is now trained on 135000 images. Now it's time to test it. We cannot use the training model because it also requires the labels as input, which are not available at test time. So to test the model we will use the "act_model" created earlier, which takes only one input: the test images.
As our model predicts a probability for each class at each time step, we need a transcription function to convert these predictions into actual text. Here we will use the CTC decoder to get the output text. Let's see the code:
# load the saved best model weights
act_model.load_weights('best_model.hdf5')

# predict outputs on validation images
prediction = act_model.predict(valid_img)

# use CTC decoder (greedy / best-path decoding)
out = K.get_value(K.ctc_decode(prediction,
                               input_length=np.ones(prediction.shape[0]) * prediction.shape[1],
                               greedy=True)[0][0])

# see the results
i = 0
for x in out:
    print(valid_orig_txt[i])
    for p in x:
        if int(p) != -1:
            print(char_list[int(p)], end='')
    print('\n')
    i += 1
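Beyond eyeballing the printed predictions, you may also want a rough quantitative measure. Here is a small sketch (not part of the original code) that computes word-level accuracy on the validation set, reusing out, valid_orig_txt and char_list from above:

# hypothetical sketch: word-level accuracy over the decoded validation predictions
correct = 0
for decoded, truth in zip(out, valid_orig_txt):
    pred_txt = ''.join(char_list[int(p)] for p in decoded if int(p) != -1)
    if pred_txt == truth:
        correct += 1

print('word accuracy: %.2f%%' % (100.0 * correct / len(valid_orig_txt)))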
Here are some results from the trained model:
Pretty good, yeah! Hope you enjoyed reading.
If you have any doubts or suggestions, please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.