Will a larger batch size reduce computation time in machine learning?

Setu Kumar Basak · Feb 2, 2016 · Viewed 7.4k times

I am trying to tune a hyperparameter, namely the batch size, in a CNN. I have a Core i7 machine with 12 GB of RAM, and I am training a CNN on the CIFAR-10 dataset, which can be found in this blog.

First, here is what I have read and learnt about batch size in machine learning:

Let's first suppose that we're doing online learning, i.e. that we're using a mini-batch size of 1. The obvious worry about online learning is that using mini-batches which contain just a single training example will cause significant errors in our estimate of the gradient. In fact, though, the errors turn out not to be such a problem. The reason is that the individual gradient estimates don't need to be super-accurate. All we need is an estimate accurate enough that our cost function tends to keep decreasing. It's as though you are trying to get to the North Magnetic Pole, but have a wonky compass that's 10-20 degrees off each time you look at it. Provided you stop to check the compass frequently, and the compass gets the direction right on average, you'll end up at the North Magnetic Pole just fine.

Based on this argument, it sounds as though we should use online learning. In fact, the situation turns out to be more complicated than that. As we know, we can use matrix techniques to compute the gradient update for all examples in a mini-batch simultaneously, rather than looping over them. Depending on the details of our hardware and linear algebra library, this can make it quite a bit faster to compute the gradient estimate for a mini-batch of (for example) size 100, rather than computing the mini-batch gradient estimate by looping over the 100 training examples separately. It might take (say) only 50 times as long, rather than 100 times as long. Now, at first it seems as though this doesn't help us that much.

With our mini-batch of size 100 the learning rule for the weights looks like:

w → w′ = w − (η/100) Σ_x ∇C_x

where the sum is over training examples in the mini-batch. This is versus

w → w′ = w − η ∇C_x

for online learning. Even if it only takes 50 times as long to do the mini-batch update, it still seems likely to be better to do online learning, because we'd be updating so much more frequently. Suppose, however, that in the mini-batch case we increase the learning rate by a factor of 100, so the update rule becomes

w → w′ = w − η Σ_x ∇C_x
That's a lot like doing 100 separate instances of online learning with a learning rate of η. But it only takes 50 times as long as doing a single instance of online learning. Still, it seems distinctly possible that using the larger mini-batch would speed things up.
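To see the point about matrix techniques concretely, here is a minimal NumPy sketch (my own toy example, not from the quoted text) that computes the same averaged gradient for a single linear layer once by looping over the 100 examples and once with a single matrix multiplication:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 784))   # mini-batch of 100 examples
y = rng.standard_normal((100, 10))    # targets
W = rng.standard_normal((784, 10))    # weights of one linear layer

def grad_single(x, t, W):
    """Gradient of 0.5*||xW - t||^2 with respect to W for one example."""
    err = x @ W - t          # shape (10,)
    return np.outer(x, err)  # shape (784, 10)

# Looping over the 100 training examples separately
start = time.perf_counter()
g_loop = sum(grad_single(X[i], y[i], W) for i in range(100)) / 100
t_loop = time.perf_counter() - start

# Computing the same averaged gradient with one matrix multiplication
start = time.perf_counter()
g_vec = X.T @ (X @ W - y) / 100
t_vec = time.perf_counter() - start

print(np.allclose(g_loop, g_vec), t_loop / t_vec)  # same gradient, loop is slower
```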



Now I tried the MNIST digit dataset: I ran a sample program with a batch size of 1 at first and noted down the training time needed for the full dataset. Then I increased the batch size and noticed that training became faster.
But in the case of training with this code and github link, changing the batch size does not decrease the training time; it stayed the same whether I used 30, 64 or 128. They say that they got 92% accuracy, and above 40% accuracy after two or three epochs. But when I ran the code on my computer, without changing anything other than the batch size, I got a worse result: only about 28% after 10 epochs, and the test accuracy stayed stuck there in the following epochs. Then I thought that since they had used a batch size of 128, I needed to use that too. But with 128 it got even worse, giving only 11% after 10 epochs and staying stuck there. Why is that?

Answer

Martin Thoma · Feb 2, 2016

Neural networks learn by gradient descent on an error function in weight space, which is parametrized by the training examples. This means the variables are the weights of the neural network. The function is "generic" and becomes specific once you plug in training examples. The "correct" way would be to use all training examples to build this specific function. This is called "batch gradient descent" and is usually not done, for two reasons:

  1. The data might not fit in your RAM (usually GPU memory, since for neural networks you get a huge boost when you use the GPU).
  2. It is actually not necessary to use all examples.

In machine learning problems, you usually have several thousand training examples. But the error surface might look similar when you only look at a few (e.g. 64, 128 or 256) of them.

Think of it as a photo: to get an idea of what the photo is about, you usually don't need a 2500x1800px resolution. A 256x256px image will give you a good idea of what the photo is about. However, you miss details.
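To make that concrete, here is a small NumPy sketch (a toy least-squares problem, purely illustrative and not the CNN from the question) comparing the gradient computed on a mini-batch with the full-batch gradient; even a few hundred examples usually point in almost the same direction:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((10_000, 50))          # full training set
y = X @ rng.standard_normal(50) + 0.1 * rng.standard_normal(10_000)
w = np.zeros(50)                               # current weights

def gradient(Xb, yb, w):
    """Gradient of the mean squared error on the given (mini-)batch."""
    return Xb.T @ (Xb @ w - yb) / len(yb)

g_full = gradient(X, y, w)                     # gradient on all 10,000 examples
for batch_size in (16, 64, 256, 1024):
    idx = rng.choice(len(X), size=batch_size, replace=False)
    g_mini = gradient(X[idx], y[idx], w)
    cos = g_mini @ g_full / (np.linalg.norm(g_mini) * np.linalg.norm(g_full))
    print(batch_size, round(cos, 3))           # direction agreement grows with batch size
```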

So imagine gradient descent as a walk on the error surface: you start at one point and you want to find the lowest point. To do so, you walk downhill: you check your height, check in which direction it goes down, and make a "step" (whose size is determined by the learning rate and a couple of other factors) in that direction. With mini-batch training instead of batch training, you walk down a different, low-resolution error surface. A step might actually go up on the "real" error surface, but overall you will move in the right direction, and you can make single steps much faster!
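As a rough sketch (same toy least-squares problem as above, not the CIFAR-10 code from the question), a mini-batch SGD loop looks like this; each iteration only sees its own mini-batch, yet the loss on the full set still goes down:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((10_000, 50))
y = X @ rng.standard_normal(50)

w = np.zeros(50)
learning_rate = 0.05
batch_size = 64

for step in range(1_000):
    idx = rng.choice(len(X), size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    grad = Xb.T @ (Xb @ w - yb) / batch_size   # gradient on the mini-batch only
    w -= learning_rate * grad                  # the "step", scaled by the learning rate

print(np.mean((X @ w - y) ** 2))               # mean squared error on the full set
```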

Now, what happens when you make the resolution lower (the batch size smaller)?

Right, your image of what the error surface looks like gets less accurate. How much this affects you depends on factors like:

  • Your hardware/implementation
  • Dataset: How complex is the error surface and how well is it approximated by only a small portion?
  • Learning: How exactly are you learning (momentum? newbob? rprop?)
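Regarding the last point, the update rule itself matters. Here is a hedged sketch (generic, not tied to the question's code) of how a momentum term changes a plain SGD step:

```python
import numpy as np

def sgd_step(w, grad, lr=0.05):
    """Plain SGD: step against the gradient, scaled by the learning rate."""
    return w - lr * grad

def momentum_step(w, grad, velocity, lr=0.05, mu=0.9):
    """SGD with momentum: the step also carries a running average of past gradients."""
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity

w, v = np.zeros(3), np.zeros(3)
grad = np.array([1.0, -2.0, 0.5])
print(sgd_step(w, grad))          # one plain step
w, v = momentum_step(w, grad, v)  # one momentum step (identical on the very first step)
print(w, v)
```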