I trained a GoogLeNet model from scratch, but it didn't give me promising results.
As an alternative, I would like to fine-tune a pre-trained GoogLeNet model on my dataset. Does anyone know what steps I should follow?
Assuming you are trying to do image classification, these should be the steps for finetuning a model:
The original classification layer `"loss3/classifier"` outputs predictions for 1000 classes (its `num_output` is set to 1000). You'll need to replace it with a new layer with an appropriate `num_output`. Replacing the classification layer:

1. Change the layer's name, so that when Caffe reads the original weights from the caffemodel file there is no conflict with the (differently shaped) weights of this layer.
2. Set `num_output` to the right number of output classes you are trying to predict.
3. Do this for all classification layers; GoogLeNet has three: `"loss1/classifier"`, `"loss2/classifier"` and `"loss3/classifier"`.
You need to make a new training dataset with the new labels you want to fine-tune to. See, for example, this post on how to make an lmdb dataset.
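As a rough sketch (paths, image sizes and file names below are placeholders), Caffe's `convert_imageset` tool can build such an lmdb from a plain text file listing `image_path label` pairs:

```
# train.txt contains lines like:  images/cat_001.jpg 0
~$ $CAFFE_ROOT/build/tools/convert_imageset \
      --resize_height=256 --resize_width=256 --shuffle \
      /path/to/image/root/ /path/to/train.txt /path/to/train_lmdb
```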
When finetuning a model, you can train ALL of the model's weights or choose to fix some weights (usually the filters of the lower/deeper layers) and train only the weights of the top-most layers. This choice is up to you and it usually depends on the amount of training data available (the more examples you have, the more weights you can afford to finetune).
Each layer (that holds trainable parameters) has `param { lr_mult: XX }`. This coefficient determines how susceptible these weights are to SGD updates. Setting `param { lr_mult: 0 }` means you FIX the weights of this layer and they will not be changed during the training process.
Edit your `train_val.prototxt` accordingly.
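For example, to keep the very first convolution layer of GoogLeNet frozen while finetuning, its definition in `train_val.prototxt` could be edited roughly like this (layer name and shape parameters follow the BVLC GoogLeNet definition; this is only a sketch of the idea):

```
layer {
  name: "conv1/7x7_s2"
  type: "Convolution"
  bottom: "data"
  top: "conv1/7x7_s2"
  param { lr_mult: 0 }   # lr_mult: 0 keeps the pre-trained weights unchanged
  param { lr_mult: 0 }   # the bias is frozen as well
  convolution_param {
    num_output: 64
    kernel_size: 7
    pad: 3
    stride: 2
  }
}
```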
Run `caffe train` but supply it with the caffemodel weights as initial weights:

~$ $CAFFE_ROOT/build/tools/caffe train -solver /path/to/solver.prototxt -weights /path/to/orig_googlenet_weights.caffemodel
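The solver.prototxt referenced above is not specific to finetuning, but it is common to start from a smaller `base_lr` than when training from scratch. A hedged sketch with placeholder values you would tune yourself:

```
net: "/path/to/train_val.prototxt"
test_iter: 100              # placeholder: depends on validation set size and batch size
test_interval: 500
base_lr: 0.001              # typically lower than when training from scratch
lr_policy: "step"
gamma: 0.1
stepsize: 10000
momentum: 0.9
weight_decay: 0.0002
max_iter: 40000
snapshot: 5000
snapshot_prefix: "/path/to/snapshots/googlenet_ft"
solver_mode: GPU
```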