
Keras Callbacks – ReduceLROnPlateau

As we already know, neural networks suffer greatly from plateaus, which substantially slow down the training process. Thus, selecting a good learning rate becomes a really challenging task. Many methods have been proposed to counter this problem, such as cyclic learning rates, where we vary the learning rate between reasonable boundary values, or Stochastic Gradient Descent with Warm Restarts, etc.

Keras also provides the ReduceLROnPlateau callback, which reduces the learning rate by some factor whenever learning stagnates. The idea is that the model will often benefit from a lower learning rate once it is trapped in a plateau region. So, let’s discuss its Keras API.

This callback “monitors” a quantity, and if no improvement is seen for a “patience” number of epochs, the learning rate is reduced by the “factor” specified. What counts as an improvement is controlled by the “min_delta” argument: if the change in the monitored quantity is smaller than min_delta, it is not considered an improvement. You can also choose whether to start evaluating the monitored quantity with the new LR immediately, or give the optimizer some time to crawl with the new LR before evaluating again; this is done using the “cooldown” argument. Finally, you can set a lower bound on the LR using the “min_lr” argument. No matter how many reductions happen or what reduction factor you use, the LR will never decrease below “min_lr”.
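Before the full example, here is a minimal sketch of how the arguments above map onto the callback’s constructor. The particular values are illustrative assumptions, not recommendations:

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau

# Reduce the LR by a factor of 0.2 if val_loss has not improved by at
# least 1e-4 for 5 consecutive epochs. After a reduction, wait 2 epochs
# (cooldown) before monitoring again, and never go below an LR of 1e-6.
reduce_lr = ReduceLROnPlateau(
    monitor='val_loss',  # quantity to be monitored
    factor=0.2,          # new_lr = lr * factor
    patience=5,          # epochs with no improvement before reducing
    min_delta=1e-4,      # minimum change that counts as an improvement
    cooldown=2,          # epochs to wait after a reduction
    min_lr=1e-6,         # lower bound on the learning rate
    verbose=1,           # print a message whenever the LR is reduced
)
```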

Now, let’s see how to use this.
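The callback is simply passed to model.fit() through the callbacks argument. A small end-to-end sketch is shown below; the model and the random data here are placeholders assumed only for illustration:

```python
import numpy as np
from tensorflow import keras

# Dummy data purely for illustration.
x_train = np.random.rand(1000, 20)
y_train = np.random.randint(0, 2, size=(1000,))

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss='binary_crossentropy',
              metrics=['accuracy'])

# The LR is reduced automatically whenever val_loss plateaus.
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor='val_loss',
                                              factor=0.2,
                                              patience=5,
                                              min_lr=1e-6,
                                              verbose=1)

model.fit(x_train, y_train,
          validation_split=0.2,
          epochs=50,
          batch_size=32,
          callbacks=[reduce_lr])
```

Since the callback monitors val_loss here, remember to provide validation data (via validation_split or validation_data), otherwise there is nothing for it to monitor.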

That’s all for this blog. Hope you enjoyed reading.

If you have any doubts or suggestions, please feel free to ask, and I will do my best to help or improve myself. Good-bye until next time.