Optimizing away a "while(1);" in C++0x

Johannes Schaub - litb · Aug 28, 2010 · Viewed 14.7k times

Updated, see below!

I have heard and read that C++0x allows a compiler to print "Hello" for the following snippet:

#include <iostream>

int main() {
  while(1) 
    ;
  std::cout << "Hello" << std::endl;
}

It apparently has something to do with threads and optimization capabilities. It seems to me that this could surprise many people, though.

Does someone have a good explanation of why it was necessary to allow this? For reference, the most recent C++0x draft says in 6.5/5:

A loop that, outside of the for-init-statement in the case of a for statement,

  • makes no calls to library I/O functions, and
  • does not access or modify volatile objects, and
  • performs no synchronization operations (1.10) or atomic operations (Clause 29)

may be assumed by the implementation to terminate. [ Note: This is intended to allow compiler transformations, such as removal of empty loops, even when termination cannot be proven. — end note ]
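
For concreteness, here is an illustration of my own (not from the draft): the following loop performs no I/O, accesses no volatile objects and does no synchronization, so the implementation may assume it terminates and may, for example, remove it entirely, even though nobody knows whether it actually finishes.

#include <cstdint>

// Does n equal the sum of its proper divisors? (pure computation, no side effects)
bool is_perfect(std::uint64_t n) {
  std::uint64_t sum = 1;                         // 1 always divides n
  for (std::uint64_t d = 3; d * d <= n; d += 2)  // odd divisors only; n is odd
    if (n % d == 0)
      sum += d + (d * d == n ? 0 : n / d);       // count sqrt(n) only once
  return n > 1 && sum == n;
}

int main() {
  // Search for an odd perfect number; whether one exists is an open problem.
  std::uint64_t n = 3;
  while (!is_perfect(n))
    n += 2;
}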

Edit:

This insightful article says, about that text of the Standard:

Unfortunately, the words "undefined behavior" are not used. However, anytime the standard says "the compiler may assume P," it is implied that a program which has the property not-P has undefined semantics.

Is that correct, and is the compiler allowed to print "Bye" for the above program?


There is an even more insightful thread here, which is about an analogous change to C, started by the guy who wrote the above-linked article. Among other useful facts, they present a solution that also seems to apply to C++0x (Update: this won't work anymore with n3225 - see below!)

endless:
  goto endless;

A compiler is not allowed to optimize that away, it seems, because it's a jump rather than a loop. Another guy summarizes the proposed change in C++0x and C201X:

By writing a loop, the programmer is asserting either that the loop does something with visible behavior (performs I/O, accesses volatile objects, or performs synchronization or atomic operations), or that it eventually terminates. If I violate that assumption by writing an infinite loop with no side effects, I am lying to the compiler, and my program's behavior is undefined. (If I'm lucky, the compiler might warn me about it.) The language doesn't provide (no longer provides?) a way to express an infinite loop without visible behavior.


Update on 3.1.2011 with n3225: The committee moved the text to 1.10/24, which says:

The implementation may assume that any thread will eventually do one of the following:

  • terminate,
  • make a call to a library I/O function,
  • access or modify a volatile object, or
  • perform a synchronization operation or an atomic operation.

The goto trick will not work anymore!
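
If one genuinely needs an infinite loop that the implementation may not remove, the new wording still leaves the listed escape hatches. A minimal sketch of my own, relying on the volatile-access exception:

int main() {
  volatile bool keep_going = true;
  // Every iteration accesses a volatile object, so even under 1.10/24
  // the implementation may not assume this loop terminates.
  while (keep_going)
    ;
}

Making a call to a library I/O function, or performing a synchronization or atomic operation inside the loop, would serve the same purpose.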

Answer

Philip Potter · Aug 29, 2010

To me, the relevant justification is:

This is intended to allow compiler transformations, such as removal of empty loops, even when termination cannot be proven.

Presumably, this is because proving termination mechanically is difficult, and the inability to prove termination hampers compilers which could otherwise make useful transformations, such as moving nondependent operations from before the loop to after or vice versa, performing post-loop operations in one thread while the loop executes in another, and so on. Without these transformations, a loop might block all other threads while they wait for the one thread to finish said loop. (I use "thread" loosely to mean any form of parallel processing, including separate VLIW instruction streams.)

EDIT: Dumb example:

while (complicated_condition()) {
    x = complicated_but_externally_invisible_operation(x);
}
complex_io_operation();
cout << "Results:" << endl;
cout << x << endl;

Here, it would be faster for one thread to do the complex_io_operation while the other is doing all the complex calculations in the loop. But without the clause you have quoted, the compiler has to prove two things before it can make the optimisation: 1) that complex_io_operation() doesn't depend on the results of the loop, and 2) that the loop will terminate. Proving 1) is pretty easy; proving 2) is the halting problem. With the clause, it may assume the loop terminates and get a parallelisation win.
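
To make the shape of that win concrete, here is a hand-written sketch of my own (with placeholder stubs for the functions above) showing the overlap the compiler would be allowed to create once it may assume the loop terminates:

#include <future>
#include <iostream>

// Placeholder stubs standing in for the functions in the example above.
bool complicated_condition() { static int n = 1000000; return n-- > 0; }
int complicated_but_externally_invisible_operation(int x) { return (x * 3 + 1) % 100000; }
void complex_io_operation() { std::cout << "(doing unrelated I/O)" << std::endl; }

int main() {
    int x = 1;

    // Start the independent I/O on another thread while the loop runs.
    // This is only valid because the loop may be assumed to terminate
    // and the I/O does not depend on its results.
    auto io = std::async(std::launch::async, complex_io_operation);

    while (complicated_condition()) {
        x = complicated_but_externally_invisible_operation(x);
    }

    io.get();  // the I/O has still completed before the results are printed
    std::cout << "Results:" << std::endl;
    std::cout << x << std::endl;
}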

I also imagine that the designers considered that the cases where infinite loops occur in production code are very rare and are usually things like event-driven loops which access I/O in some manner. As a result, they have pessimised the rare case (infinite loops) in favour of optimising the more common case (noninfinite, but difficult to mechanically prove noninfinite, loops).

It does, however, mean that infinite loops used in learning examples will suffer as a result, and will raise gotchas in beginner code. I can't say this is entirely a good thing.

EDIT: with respect to the insightful article you now link, I would say that "the compiler may assume X about the program" is logically equivalent to "if the program doesn't satisfy X, the behaviour is undefined". We can show this as follows: suppose there exists a program which does not satisfy property X. Where would the behaviour of this program be defined? The Standard only defines behaviour assuming property X is true. Although the Standard does not explicitly declare the behaviour undefined, it has declared it undefined by omission.

Consider a similar argument: "the compiler may assume a variable x is only assigned to at most once between sequence points" is equivalent to "assigning to x more than once between sequence points is undefined".
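
For instance (my own example, under the C++03 sequence-point rules):

int main() {
    int i = 0;
    i = i++ + 1;  // i is modified twice between sequence points: undefined
                  // behaviour, which the compiler "may assume" never happens
}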