Why can't (or doesn't) the compiler optimize a predictable addition loop into a multiplication?

jhabbott · Jun 30, 2012 · Viewed 18.2k times

This is a question that came to mind while reading the brilliant answer by Mysticial to the question: why is it faster to process a sorted array than an unsorted array?

Context for the types involved:

const unsigned arraySize = 32768;
int data[arraySize];
long long sum = 0;

In his answer he explains that the Intel Compiler (ICC) optimizes this:

for (int i = 0; i < 100000; ++i)
    for (int c = 0; c < arraySize; ++c)
        if (data[c] >= 128)
            sum += data[c];

...into something equivalent to this:

for (int c = 0; c < arraySize; ++c)
    if (data[c] >= 128)
        for (int i = 0; i < 100000; ++i)
            sum += data[c];

The optimizer is recognizing that these are equivalent and is therefore exchanging the loops, moving the branch outside the inner loop. Very clever!

But why doesn't it do this?

for (int c = 0; c < arraySize; ++c)
    if (data[c] >= 128)
        sum += 100000 * data[c];

Hopefully Mysticial (or anyone else) can give an equally brilliant answer. I'd never come across the optimizations discussed in that other question before, so I'm really grateful for this.

Answer

Daniel Fischer · Jun 30, 2012

The compiler can't generally transform

for (int c = 0; c < arraySize; ++c)
    if (data[c] >= 128)
        for (int i = 0; i < 100000; ++i)
            sum += data[c];

into

for (int c = 0; c < arraySize; ++c)
    if (data[c] >= 128)
        sum += 100000 * data[c];

because the latter could lead to overflow of signed integers where the former doesn't. Even with guaranteed wrap-around behaviour for overflow of signed two's-complement integers, it would change the result: if data[c] is 30000, the product becomes -1294967296 for typical 32-bit ints with wrap-around, while adding 30000 to sum 100000 times would, if that doesn't overflow, increase sum by 3000000000. Note that the same holds for unsigned quantities, with different numbers: overflow of 100000 * data[c] would typically introduce a reduction modulo 2^32 that must not appear in the final result.
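To make the 30000 example concrete, here is a small sketch (mine, not part of the original answer). It assumes the typical 32-bit int; the conversion of the out-of-range unsigned product back to int is implementation-defined before C++20, but on common platforms it produces exactly the wrapped value quoted above:

#include <cstdio>

int main() {
    int c = 30000;

    // Wrapping 32-bit product: 100000 * 30000 = 3000000000,
    // which wraps to 3000000000 - 2^32 = -1294967296.
    int wrapped = (int)(100000u * (unsigned)c);
    printf("wrapped product: %d\n", wrapped);

    // What the original loop computes: 100000 additions into a long long,
    // which simply yields 3000000000.
    long long sum = 0;
    for (int i = 0; i < 100000; ++i)
        sum += c;
    printf("accumulated sum: %lld\n", sum);
}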

It could transform it into

for (int c = 0; c < arraySize; ++c)
    if (data[c] >= 128)
        sum += 100000LL * data[c];  // resp. 100000ull

though, if, as usual, long long is sufficiently larger than int.
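Spelled out, "sufficiently larger" means that 100000 times any int value must fit in a long long, which holds on the usual platforms (32-bit int, 64-bit long long). A compiler could verify that precondition along these lines (my sketch, not from the answer):

#include <climits>

// The 100000LL rewrite is exact only if the widened product can never
// overflow long long.
static_assert(LLONG_MAX / 100000 >= INT_MAX,
              "100000LL * largest int fits in long long");
static_assert(LLONG_MIN / 100000 <= INT_MIN,
              "100000LL * smallest int fits in long long");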

Why it doesn't do that, I can't tell; I guess it's what Mysticial said: "apparently, it does not run a loop-collapsing pass after loop-interchange".

Note that the loop-interchange itself is not generally valid (for signed integers), since

for (int c = 0; c < arraySize; ++c)
    if (condition(data[c]))
        for (int i = 0; i < 100000; ++i)
            sum += data[c];

can lead to overflow where

for (int i = 0; i < 100000; ++i)
    for (int c = 0; c < arraySize; ++c)
        if (condition(data[c]))
            sum += data[c];

wouldn't. It's kosher here because the condition ensures that all the data[c] that get added have the same sign, so if one version overflows, both do.
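To make that concrete, here is a minimal sketch (mine, not part of the original answer). It uses a condition that admits both signs, like the data[c] & 0x80 suggested below, and a deliberately narrow int accumulator (unlike the long long in the question) so that intermediate overflow is possible; the overflow is detected with a widened check rather than triggered, since signed overflow is undefined behaviour:

#include <cstdio>
#include <climits>

// Adds v to *acc, reporting an overflow instead of committing it.
bool checked_add(int *acc, int v) {
    long long wide = (long long)*acc + v;
    if (wide > INT_MAX || wide < INT_MIN) return false;
    *acc = (int)wide;
    return true;
}

int main() {
    // Both values satisfy (data[c] & 0x80) != 0, yet they have opposite signs.
    const int data[2] = { 2000000128, -2000000128 };

    // Original order: positive and negative contributions interleave,
    // so the running sum never leaves [0, 2000000128] -- no overflow.
    int sum1 = 0;
    bool ok1 = true;
    for (int i = 0; i < 100000 && ok1; ++i)
        for (int c = 0; c < 2 && ok1; ++c)
            ok1 = checked_add(&sum1, data[c]);

    // Interchanged order: the same value is added 100000 times in a row,
    // overflowing the int accumulator on the second addition already.
    int sum2 = 0;
    bool ok2 = true;
    for (int c = 0; c < 2 && ok2; ++c)
        for (int i = 0; i < 100000 && ok2; ++i)
            ok2 = checked_add(&sum2, data[c]);

    printf("original order:     %s\n", ok1 ? "no overflow" : "overflow");
    printf("interchanged order: %s\n", ok2 ? "no overflow" : "overflow");
}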

I wouldn't be too sure that the compiler took that into account, though (@Mysticial, could you try with a condition like data[c] & 0x80 or so, which can be true for positive and negative values?). I have had compilers make invalid optimisations: a couple of years ago, for example, an ICC (11.0, iirc) used a signed-32-bit-int-to-double conversion in 1.0/n where n was an unsigned int. It was about twice as fast as gcc's output, but wrong, since a lot of the values were larger than 2^31, oops.