I don't understand why this code chokes with g++ 4.7.2:
#include <chrono>
int main ()
{
std::chrono::system_clock::time_point t1, t2 ;
std::chrono::seconds delay ;
t1 = std::chrono::system_clock::time_point::max () ;
t2 = std::chrono::system_clock::now () ;
delay = t1 - t2 ;
// t1 = t2 + delay ;
// t1 = t2 - delay ;
}
with the error:
test.cc: In function ‘int main()’:
test.cc:10:18: error: no match for ‘operator=’ in ‘delay = std::chrono::operator,<std::chrono::system_clock, std::chrono::duration<long int, std::ratio<1l, 1000000l> >, std::chrono::duration<long int, std::ratio<1l, 1000000l> > >((*(const std::chrono::time_point<std::chrono::system_clock, std::chrono::duration<long int, std::ratio<1l, 1000000l> > >*)(& t1)), (*(const std::chrono::time_point<std::chrono::system_clock, std::chrono::duration<long int, std::ratio<1l, 1000000l> > >*)(& t2)))’
It seemed to me that "time_point - time_point" gives a "duration".
It does produce a duration, but there are different kinds of durations. std::chrono::duration is templated on a representation type and a unit ratio. std::chrono::seconds, for example, has a unit ratio of 1, while std::chrono::nanoseconds has a unit ratio of std::nano, or 1/1000000000. Time points carry the same template parameters.
The specific unit ratio of std::chrono::system_clock::time_point is implementation-defined, but it is almost certainly smaller than that of std::chrono::seconds (your error message shows std::ratio&lt;1, 1000000&gt;, i.e. microseconds). As such, the duration produced by subtracting those two time points has more precision than std::chrono::seconds can represent. For durations with integer representations, the default behaviour is to disallow assignments that lose precision. So you can either use a duration with enough precision (std::chrono::system_clock::duration) or cast the result to the duration you want (std::chrono::duration_cast&lt;std::chrono::seconds&gt;(...)).