When to call cudaDeviceSynchronize?

user1588226 · Aug 9, 2012 · Viewed 68.8k times

When is calling the cudaDeviceSynchronize function really needed?

As far as I understand from the CUDA documentation, CUDA kernel launches are asynchronous, so it seems that we should call cudaDeviceSynchronize after each kernel launch. However, I have tried the same code (training neural networks) with and without any cudaDeviceSynchronize, except for one before the time measurement. I found that I get the same result, but with a 7-12x speedup (depending on the matrix sizes).

So, the question is whether there are any reasons to use cudaDeviceSynchronize apart from time measurement.

For example:

  • Is it needed before copying data from the GPU back to the host with cudaMemcpy?

  • If I do matrix multiplications like

    C = A * B
    D = C * F
    

should I put cudaDeviceSynchronize between them?

From my experiment it seems that I don't need to.

Why does cudaDeviceSynchronize slow the program so much?

Answer

aland · Aug 9, 2012

Although CUDA kernel launches are asynchronous, all GPU-related tasks placed in one stream (which is the default behavior) are executed sequentially.

So, for example,

kernel1<<<X,Y>>>(...); // kernel starts executing; the CPU continues to the next statement
kernel2<<<X,Y>>>(...); // kernel is queued and will start after kernel1 finishes; the CPU continues
cudaMemcpy(...); // the CPU blocks until the copy is done; the copy starts only after kernel2 finishes

So in your example, there is no need for cudaDeviceSynchronize. However, it can be useful for debugging, to detect which of your kernels caused an error (if any).
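To make the debugging point concrete, here is a minimal sketch of that pattern (the kernel itself is a placeholder; the error-checking calls are the standard CUDA runtime API). cudaGetLastError catches launch-time problems immediately, while cudaDeviceSynchronize forces the kernel to finish so that an execution error surfaces here instead of at some later, unrelated API call:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel for illustration.
__global__ void kernel1(int *out) { out[threadIdx.x] = threadIdx.x; }

int main() {
    int *d_out;
    cudaMalloc(&d_out, 32 * sizeof(int));

    kernel1<<<1, 32>>>(d_out);

    // Catches launch errors (e.g. invalid grid/block configuration).
    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess)
        printf("launch error: %s\n", cudaGetErrorString(err));

    // Blocks until the kernel finishes, so an execution error
    // (e.g. an out-of-bounds access) is reported by this call
    // rather than by some later, unrelated one.
    err = cudaDeviceSynchronize();
    if (err != cudaSuccess)
        printf("execution error: %s\n", cudaGetErrorString(err));

    cudaFree(d_out);
    return 0;
}
```

Without the synchronize, an asynchronous execution error would only be reported by whichever runtime call happens to come next, which makes it hard to tell which kernel was at fault.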

cudaDeviceSynchronize may cause some slowdown, but 7-12x seems too much. Either there is some problem with the time measurement, or the kernels are really fast and the overhead of explicit synchronization is huge relative to the actual computation time.
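On the measurement point: because launches return immediately, a CPU timer around a kernel without any synchronization measures only launch overhead, not execution time. A minimal sketch of GPU-side timing with CUDA events (the kernel is a placeholder), which avoids sprinkling cudaDeviceSynchronize through the code:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel for illustration.
__global__ void dummyKernel() { }

int main() {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);      // enqueued in the same stream as the kernel
    dummyKernel<<<1, 1>>>();
    cudaEventRecord(stop);

    cudaEventSynchronize(stop);  // wait only until the stop event is reached
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("kernel time: %f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}
```

Because the events are recorded in the stream itself, the elapsed time reflects actual kernel execution, and only a single synchronization on the stop event is needed at the very end.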