I am trying to compute the mean of a 2D matrix using OpenMP. This 2D matrix is actually an image.
I am doing a thread-wise division of the data. For example, if I have N threads, then thread 0 processes Rows/N rows, and so on.
My question is: can I use the OpenMP reduction clause with #pragma omp parallel?
#pragma omp parallel reduction( + : sum )
{
    if( thread == 0 )
    {
        bla bla code
        sum = sum + val;
    }
    else if( thread == 1 )
    {
        bla bla code
        sum = sum + val;
    }
}
Yes, you can - the reduction clause is applicable to the whole parallel region as well as to individual for worksharing constructs. This allows, for example, for reduction over computations done in different parallel sections (the preferred way to restructure the code):
#pragma omp parallel sections private(val) reduction(+:sum)
{
    #pragma omp section
    {
        bla bla code
        sum += val;
    }
    #pragma omp section
    {
        bla bla code
        sum += val;
    }
}
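The same applies to a plain parallel region combined with the manual row split from the question. Here is a minimal sketch, assuming the image is a Rows x Cols array of double (the function name and signature are my own):

#include <omp.h>

/* sketch: sum a Rows x Cols image using a manual per-thread row split */
double image_sum(int Rows, int Cols, double image[Rows][Cols])
{
    double sum = 0.0;

    #pragma omp parallel reduction(+:sum)
    {
        int nthreads = omp_get_num_threads();
        int tid = omp_get_thread_num();
        /* block distribution: thread tid handles rows [lo, hi) */
        int lo = tid * Rows / nthreads;
        int hi = (tid + 1) * Rows / nthreads;
        int row, col;

        for (row = lo; row < hi; row++)
            for (col = 0; col < Cols; col++)
                sum += image[row][col];   /* accumulates into the thread's private copy */
    }
    /* here the private copies have already been combined into sum */
    return sum;
}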
You can also use the OpenMP for worksharing construct to automatically distribute the loop iterations among the threads in the team instead of reimplementing the distribution using sections:
#pragma omp parallel for private(val) reduction(+:sum)
for (row = 0; row < Rows; row++)
{
    bla bla code
    sum += val;
}
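Applied to the original problem of averaging a 2D image, a self-contained sketch could look like this (the image contents and dimensions are made up; compile with -fopenmp):

#include <stdio.h>

#define ROWS 480   /* hypothetical image dimensions */
#define COLS 640

int main(void)
{
    static double image[ROWS][COLS];
    double sum = 0.0;
    int row, col;

    /* fill the image with dummy values so the example runs */
    for (row = 0; row < ROWS; row++)
        for (col = 0; col < COLS; col++)
            image[row][col] = (double)((row + col) % 256);

    /* the row iterations are distributed automatically; each thread keeps
       a private partial sum that is combined when the region ends */
    #pragma omp parallel for private(col) reduction(+:sum)
    for (row = 0; row < ROWS; row++)
        for (col = 0; col < COLS; col++)
            sum += image[row][col];

    printf("mean = %f\n", sum / (ROWS * COLS));
    return 0;
}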
Note that reduction variables are private and their intermediate values (i.e. the values they hold before the reduction at the end of the parallel region) are only partial and not very useful. For example, the following serial loop cannot be (easily?) transformed to a parallel one with a reduction operation:
for (row = 0; row < Rows; row++)
{
    bla bla code
    sum += val;
    if (sum > threshold)
        yada yada code
}
Here the yada yada code should be executed in each iteration once the accumulated value of sum has passed the value of threshold. When the loop is run in parallel, the private values of sum might never reach threshold, even if their sum does.
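To make the partial-value point concrete, here is a small sketch: inside the region each thread only ever sees its own private partial sum, so a threshold test placed there would never observe the accumulated total.

#include <stdio.h>

int main(void)
{
    int i, sum = 0;

    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < 100; i++)
    {
        sum += i;
        /* here sum is the thread's private partial sum only;
           comparing it against a global threshold would be meaningless */
    }

    /* only after the region ends does sum hold the combined total (4950) */
    printf("sum = %d\n", sum);
    return 0;
}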