Difference between std::system_clock and std::steady_clock?

Vincent · Nov 7, 2012

What is the difference between std::system_clock and std::steady_clock? (An example case that illustrates different results/behaviours would be great.)

If my goal is to precisely measure execution time of functions (like a benchmark), what would be the best choice between std::system_clock, std::steady_clock and std::high_resolution_clock?

Answer

Billy ONeal · Nov 7, 2012

From N3376:

20.11.7.1 [time.clock.system]/1:

Objects of class system_clock represent wall clock time from the system-wide realtime clock.

20.11.7.2 [time.clock.steady]/1:

Objects of class steady_clock represent clocks for which values of time_point never decrease as physical time advances and for which values of time_point advance at a steady rate relative to real time. That is, the clock may not be adjusted.

20.11.7.3 [time.clock.hires]/1:

Objects of class high_resolution_clock represent clocks with the shortest tick period. high_resolution_clock may be a synonym for system_clock or steady_clock.
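Note that "shortest tick period" is about the resolution the clock advertises through its period member (a std::ratio of seconds per tick), not about accuracy. As a quick sketch, you can print what your implementation reports for each clock (print_period here is just an illustrative helper, not a standard function):

```cpp
#include <chrono>
#include <iostream>

// Print a clock's tick period as a ratio of seconds per tick.
template <typename Clock>
void print_period(const char* name) {
    std::cout << name << ": " << Clock::period::num << "/"
              << Clock::period::den << " seconds per tick\n";
}

int main() {
    print_period<std::chrono::system_clock>("system_clock");
    print_period<std::chrono::steady_clock>("steady_clock");
    print_period<std::chrono::high_resolution_clock>("high_resolution_clock");
}
```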

For instance, the system-wide clock might be affected by something like daylight saving time, at which point the time reported at some later point can actually be a time in the past. (E.g. in the US, in the fall clocks move back one hour, so the same hour is experienced "twice".) However, steady_clock is not allowed to be affected by such things.
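Here's a sketch of why that matters when measuring elapsed time. If the wall clock is adjusted between two calls to now() (NTP sync, manual change, DST), the system_clock difference can come out wildly wrong or even negative, while steady_clock is guaranteed monotonic. The sleep below just stands in for work being done; you'd only see the jump if the clock were actually adjusted during it:

```cpp
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    auto sys_start    = std::chrono::system_clock::now();
    auto steady_start = std::chrono::steady_clock::now();

    // Stand-in for real work. If the wall clock were adjusted
    // during this window, system_clock would reflect the jump.
    std::this_thread::sleep_for(std::chrono::seconds(1));

    auto sys_elapsed    = std::chrono::system_clock::now() - sys_start;
    auto steady_elapsed = std::chrono::steady_clock::now() - steady_start;

    // steady_elapsed is guaranteed non-negative and roughly 1000 ms;
    // sys_elapsed carries no such guarantee.
    std::cout << "system_clock: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(sys_elapsed).count()
              << " ms\n"
              << "steady_clock: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(steady_elapsed).count()
              << " ms\n";
}
```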

Another way of thinking about "steady" in this case is in the requirements defined in the table of 20.11.3 [time.clock.req]/2:

In Table 59 C1 and C2 denote clock types. t1 and t2 are values returned by C1::now() where the call returning t1 happens before the call returning t2 and both of these calls occur before C1::time_point::max(). [ Note: this means C1 did not wrap around between t1 and t2. —end note ]

Expression: C1::is_steady
Return type: const bool
Operational semantics: true if t1 <= t2 is always true and the time between clock ticks is constant, otherwise false.

That's all the standard has on their differences.
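You can query this property in your own code: is_steady is a static member of each clock, and since steady_clock is required to be steady you can even assert that at compile time. The other two clocks may or may not be steady depending on the implementation:

```cpp
#include <chrono>
#include <iostream>

int main() {
    // Guaranteed by the standard; the assert can never fire.
    static_assert(std::chrono::steady_clock::is_steady,
                  "steady_clock must be steady");

    // Implementation-defined for the other clocks.
    std::cout << std::boolalpha
              << "system_clock::is_steady:          "
              << std::chrono::system_clock::is_steady << "\n"
              << "high_resolution_clock::is_steady: "
              << std::chrono::high_resolution_clock::is_steady << "\n";
}
```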

If you want to do benchmarking, your best bet is probably going to be std::high_resolution_clock, because it is likely that your platform uses a high-resolution timer (e.g. QueryPerformanceCounter on Windows) for this clock. However, if you're benchmarking, you should really consider using platform-specific timers, because different platforms handle this differently. For instance, some platforms might give you some means of determining the actual number of clock ticks the program required (independent of other processes running on the same CPU). Better yet, get your hands on a real profiler and use that.
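For a simple, portable starting point, though, a minimal harness along those lines might look like this (time_ms and work are hypothetical names for illustration; swap in steady_clock if your implementation's high_resolution_clock is not steady):

```cpp
#include <chrono>
#include <iostream>

// Time a single call to f and return the elapsed time in milliseconds.
template <typename F>
double time_ms(F f) {
    auto start = std::chrono::high_resolution_clock::now();
    f();
    auto end = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}

// Example payload to measure; volatile keeps the loop from
// being optimized away entirely.
void work() {
    volatile unsigned long long sum = 0;
    for (unsigned long long i = 0; i < 10000000ULL; ++i)
        sum = sum + i;
}

int main() {
    std::cout << "work() took " << time_ms(work) << " ms\n";
}
```

Keep in mind this measures wall-clock time, including whatever else the machine is doing, which is exactly why the platform-specific counters and profilers mentioned above are preferable for serious benchmarking.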