Techniques for performance improvement with hyperparameter tuning
- L2 regularization: penalize model complexity by penalizing large weights.
L2 regularization encourages weights to be small but does not force them to exactly 0.
- Learning Rate (LR) optimization: start with a base LR and decrease it over subsequent epochs (LR decay).
- Batch Size: usually try the maximum batch size your GPU can handle; the limit depends on the GPU's memory.
- Increase model capacity: increase model depth (more layers) and width (more filters in each convolution layer).
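The first two points above can be sketched with a toy NumPy example: L2-regularized linear regression trained by gradient descent, with a step-decay LR schedule. The names (`lam`, `base_lr`, `decay`, `step`) and the synthetic data are illustrative assumptions, not from the original notes.

```python
import numpy as np

# Synthetic regression data (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([3.0, -2.0, 0.0, 0.0, 1.0])
y = X @ true_w + rng.normal(scale=0.1, size=100)

def train(lam, base_lr=0.1, decay=0.5, step=20, epochs=60):
    """Gradient descent with an L2 penalty and step LR decay."""
    w = np.zeros(5)
    for epoch in range(epochs):
        # Decrease the LR every `step` epochs, starting from base_lr
        lr = base_lr * decay ** (epoch // step)
        # Gradient of the data loss plus gradient of lam * ||w||^2
        grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * w
        w -= lr * grad
    return w

w_plain = train(lam=0.0)   # no regularization
w_l2 = train(lam=1.0)      # L2-regularized

# L2 shrinks the weights toward (but not exactly to) zero
print(np.linalg.norm(w_l2) < np.linalg.norm(w_plain))  # True
```

The shrinkage without exact zeros is the behaviour noted above; forcing weights to exactly 0 would require L1 regularization instead.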
Techniques for performance improvement with data redesigning
- Increase image resolution (progressive resizing):
from 128 x 128 x 3 to 256 x 256 x 3, or even higher.
- Random image rotations: change the orientation of the image.
- Random image shifts: useful when the object is not at the center of the image.
- Vertical and horizontal flips: randomly flip the image vertically or horizontally.
The algorithm should identify a glass whether it is face up or face down.
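A minimal NumPy-only sketch of the augmentations above on a single H x W x 3 image; a real pipeline would normally use a library such as tf.keras or torchvision instead. The function names and shift/rotation ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)

def random_flip(img):
    # Vertical flip: glass face up vs face down
    if rng.random() < 0.5:
        img = img[::-1, :, :]
    # Horizontal flip
    if rng.random() < 0.5:
        img = img[:, ::-1, :]
    return img

def random_shift(img, max_frac=0.1):
    # Shift the image so the object moves off-center
    h, w = img.shape[:2]
    dy = rng.integers(-int(h * max_frac), int(h * max_frac) + 1)
    dx = rng.integers(-int(w * max_frac), int(w * max_frac) + 1)
    return np.roll(img, (dy, dx), axis=(0, 1))

def random_rotation(img):
    # Rotate by a random multiple of 90 degrees (0/90/180/270)
    return np.rot90(img, rng.integers(0, 4))

aug = random_rotation(random_shift(random_flip(img)))
print(aug.shape)  # (128, 128, 3)

# Progressive resizing: 128 x 128 x 3 -> 256 x 256 x 3
# (nearest-neighbour upscale for illustration)
big = img.repeat(2, axis=0).repeat(2, axis=1)
print(big.shape)  # (256, 256, 3)
```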
Techniques for performance improvement with model optimization
- Fine-tuning the model with a subset of the data: drop a few samples from over-represented classes to reduce imbalance.
- Class weights: used when training on a highly imbalanced (biased) dataset; class weights give equal importance to all classes during training.
- Fine-tuning the model with training data: run the model on the training set, then retrain it on the wrongly predicted images (hard examples).
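The class-weights idea above can be sketched with inverse-frequency weighting. The heuristic `n_samples / (n_classes * count)` mirrors scikit-learn's "balanced" `compute_class_weight` formula; the label array here is made up for illustration.

```python
import numpy as np

# Highly imbalanced labels: 900 / 90 / 10 samples per class (illustrative)
labels = np.array([0] * 900 + [1] * 90 + [2] * 10)

classes, counts = np.unique(labels, return_counts=True)
# "Balanced" weights: rare classes get proportionally larger weights,
# so each class contributes equally to the training loss
weights = len(labels) / (len(classes) * counts)
class_weight = dict(zip(classes.tolist(), weights.tolist()))
print(class_weight)
```

The resulting dictionary can typically be passed to a training loop (e.g. the `class_weight` argument of Keras's `model.fit`) so minority classes are not drowned out by the majority class.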
towardsdatascience.com/the-quest-of-higher-accuracy-for-cnn-models-42df5d731faf