What is the difference between std::system_clock and std::steady_clock? (An example case that illustrates different results/behaviours would be great.)

If my goal is to precisely measure the execution time of functions (like a benchmark), what would be the best choice between std::system_clock, std::steady_clock and std::high_resolution_clock?
From N3376:
20.11.7.1 [time.clock.system]/1:
Objects of class system_clock represent wall clock time from the system-wide realtime clock.
20.11.7.2 [time.clock.steady]/1:
Objects of class steady_clock represent clocks for which values of time_point never decrease as physical time advances and for which values of time_point advance at a steady rate relative to real time. That is, the clock may not be adjusted.
20.11.7.3 [time.clock.hires]/1:
Objects of class high_resolution_clock represent clocks with the shortest tick period. high_resolution_clock may be a synonym for system_clock or steady_clock.
For instance, the system-wide clock might be affected by something like daylight saving time, at which point the actual time listed at some point in the future can actually be a time in the past. (E.g. in the US, in the fall time moves back one hour, so the same hour is experienced "twice".) However, steady_clock is not allowed to be affected by such things.
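Here is a minimal sketch of how that difference shows up in code. The wall-clock adjustment itself obviously can't be reproduced in a self-contained program, so it is only indicated by a comment; the point is that system_clock converts to calendar time while steady_clock is only good for measuring intervals:

    #include <chrono>
    #include <ctime>
    #include <iostream>

    int main() {
        // system_clock is tied to the wall clock, so it can be converted
        // to calendar time -- and it can jump if the wall clock is adjusted.
        auto sys_now = std::chrono::system_clock::now();
        std::time_t t = std::chrono::system_clock::to_time_t(sys_now);
        std::cout << "wall clock: " << std::ctime(&t);

        // steady_clock has no calendar meaning; its time_points are only
        // useful for measuring intervals, and those intervals can never be
        // negative, even if the wall clock is set backwards in between.
        auto start = std::chrono::steady_clock::now();
        // ... imagine the system clock being adjusted backwards here ...
        auto end = std::chrono::steady_clock::now();
        std::cout << "elapsed: "
                  << std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count()
                  << " ns\n";  // always >= 0
    }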
Another way of thinking about "steady" in this case is in the requirements defined in the table of 20.11.3 [time.clock.req]/2:
In Table 59 C1 and C2 denote clock types. t1 and t2 are values returned by C1::now() where the call returning t1 happens before the call returning t2 and both of these calls occur before C1::time_point::max(). [ Note: this means C1 did not wrap around between t1 and t2. —end note ]

Expression: C1::is_steady
Returns: const bool
Operational Semantics: true if t1 <= t2 is always true and the time between clock ticks is constant, otherwise false.
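You can query this property directly via each clock's static is_steady member. A small sketch (the values printed for system_clock and high_resolution_clock depend on your standard library):

    #include <chrono>
    #include <iostream>

    int main() {
        // steady_clock is required to be steady; the other two may or may not be.
        static_assert(std::chrono::steady_clock::is_steady, "guaranteed by the standard");

        std::cout << std::boolalpha
                  << "system_clock::is_steady:          "
                  << std::chrono::system_clock::is_steady << '\n'
                  << "high_resolution_clock::is_steady: "
                  << std::chrono::high_resolution_clock::is_steady << '\n';
    }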
That's all the standard has on their differences.
If you want to do benchmarking, your best bet is probably going to be std::high_resolution_clock, because it is likely that your platform uses a high resolution timer (e.g. QueryPerformanceCounter on Windows) for this clock. However, if you're benchmarking, you should really consider using platform-specific timers for your benchmark, because different platforms handle this differently. For instance, some platforms might give you some means of determining the actual number of clock ticks the program required (independent of other processes running on the same CPU). Better yet, get your hands on a real profiler and use that.