In what order should we tune hyperparameters in Neural Networks?

Paul Rolin · May 26, 2016

I have a fairly simple ANN built with TensorFlow, using AdamOptimizer for a regression problem, and I am now at the point of tuning all the hyperparameters.

So far, I have identified many different hyperparameters to tune:

  • Learning rate: initial learning rate, learning rate decay
  • The AdamOptimizer takes 4 arguments (learning rate, beta1, beta2, epsilon), so these need to be tuned as well - at least epsilon
  • Batch size
  • Number of iterations
  • Lambda, the L2 regularization parameter
  • Number of neurons, number of layers
  • Which activation function to use for the hidden layers and for the output layer
  • Dropout parameter
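
For reference, here is roughly where these hyperparameters enter my setup. This is a minimal sketch only, assuming the TensorFlow 1.x API; the feature count, layer sizes and values are placeholders, not the ones I actually use:

```python
import tensorflow as tf

# Illustrative values only - these are the knobs I am trying to tune.
initial_learning_rate = 0.001
lambda_l2 = 0.01                 # L2 regularization strength
dropout_rate = 0.5
hidden_layers = [64, 64]         # number of layers / neurons per layer
batch_size = 128                 # used when building the input batches

x = tf.placeholder(tf.float32, [None, 10])   # 10 input features (placeholder)
y = tf.placeholder(tf.float32, [None, 1])
is_training = tf.placeholder(tf.bool)

regularizer = tf.contrib.layers.l2_regularizer(lambda_l2)
net = x
for units in hidden_layers:
    net = tf.layers.dense(net, units, activation=tf.nn.relu,   # hidden activation
                          kernel_regularizer=regularizer)
    net = tf.layers.dropout(net, rate=dropout_rate, training=is_training)
prediction = tf.layers.dense(net, 1, activation=None)          # linear output for regression

loss = tf.losses.mean_squared_error(y, prediction) + tf.losses.get_regularization_loss()
train_op = tf.train.AdamOptimizer(learning_rate=initial_learning_rate,
                                  beta1=0.9, beta2=0.999, epsilon=1e-8).minimize(loss)
```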

I have 2 questions:

1) Do you see any other hyperparameters I might have forgotten?

2) For now, my tuning is quite "manual", and I am not sure I am doing everything properly. Is there a special order in which to tune the parameters? E.g. learning rate first, then batch size, then ... I am not sure that all these parameters are independent - in fact, I am quite sure that some of them are not. Which ones are clearly independent, and which ones are clearly not? Should the dependent ones be tuned together? Is there a paper or article that discusses tuning all the parameters in a particular order?

EDIT: Here are the graphs I got for different initial learning rates, batch sizes and regularization parameters. The purple curve is completely weird to me: its cost decreases much more slowly than the others, yet it gets stuck at a lower accuracy. Is it possible that the model is stuck in a local minimum?

[Accuracy plot]

[Cost plot]

For the learning rate, I used the decay schedule LR(t) = LR_init / sqrt(epoch), where LR_init is the initial learning rate.
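
In code, that schedule is simply (a minimal sketch; I count epochs from 1 so the first step is not divided by zero):

```python
def decayed_learning_rate(initial_lr, epoch):
    """LR(t) = LR_init / sqrt(epoch), with epoch counted from 1."""
    return initial_lr / (epoch ** 0.5)

# e.g. with initial_lr = 0.001:
# epoch 1 -> 0.001, epoch 4 -> 0.0005, epoch 9 -> ~0.00033
```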

Thanks for your help! Paul

Answer

Kilian Batzner · Jan 5, 2017

My general order is:

  1. Batch size, as it will largely affect the training time of future experiments.
  2. Architecture of the network:
    • Number of neurons in the network
    • Number of layers
  3. The rest (dropout, L2 regularization, etc.)

Dependencies:

I'd assume that the optimal values of

  • learning rate and batch size
  • learning rate and number of neurons
  • number of neurons and number of layers

strongly depend on each other. I am not an expert in this area, though.

As for your hyperparameters:

  • For the Adam optimizer: "Recommended values in the paper are eps = 1e-8, beta1 = 0.9, beta2 = 0.999." (source)
  • For the learning rate with Adam and RMSProp, I found values around 0.001 to be optimal for most problems.
  • As an alternative to Adam, you can also use RMSProp, which reduces the memory footprint by up to 33%. See this answer for more details.
  • You could also tune the initial weight values (see All you need is a good init). However, the Xavier initializer seems to be a good way to avoid having to tune the weight initialization.
  • I don't tune the number of iterations / epochs as a hyperparameter. I train the net until its validation error converges, but I give each run a time budget.
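
Putting a few of these points together, a minimal sketch of such a training setup could look like this (assuming the TensorFlow 1.x API; the layer sizes, time budget and patience values are placeholders, not recommendations):

```python
import time
import numpy as np
import tensorflow as tf

# Dummy validation data, just so the sketch is self-contained.
x_val = np.random.rand(100, 10).astype(np.float32)
y_val = np.random.rand(100, 1).astype(np.float32)

x = tf.placeholder(tf.float32, [None, 10])
y = tf.placeholder(tf.float32, [None, 1])

# Xavier/Glorot initialization, so the weight inits don't need tuning.
init = tf.contrib.layers.xavier_initializer()
hidden = tf.layers.dense(x, 64, activation=tf.nn.relu, kernel_initializer=init)
prediction = tf.layers.dense(hidden, 1, kernel_initializer=init)
loss = tf.losses.mean_squared_error(y, prediction)

# Adam with the values recommended in the paper.
train_op = tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9,
                                  beta2=0.999, epsilon=1e-8).minimize(loss)

# Train until the validation error stops improving, but cap each run's time.
time_budget_seconds = 3600   # placeholder budget
patience = 10                # epochs without improvement before stopping
best_val, bad_epochs = float("inf"), 0
start = time.time()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    while time.time() - start < time_budget_seconds:
        # ... run train_op over the training batches here ...
        val_loss = sess.run(loss, feed_dict={x: x_val, y: y_val})
        if val_loss < best_val:
            best_val, bad_epochs = val_loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break   # validation error has converged
```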