I was running some C++ code, and one thing I noticed is that on Windows 7, `CLOCKS_PER_SEC` gives 1000, while on Linux (Fedora 16) it gives 1000000. Can anyone justify this behaviour?
What's to justify? `CLOCKS_PER_SEC` is implementation-defined, and can be anything. All it indicates is the units returned by the function `clock()`. It doesn't even indicate the resolution of `clock()`: POSIX requires it to be 1000000, regardless of the actual resolution. If Windows is returning 1000, that's probably not the actual resolution either. (I find that my Linux box has a resolution of 10 ms, and my Windows box 15 ms.)