The values for s.dyn_tm_usec returned by Windows Mia appear to be in milliseconds, with 3 digits of precision. I understand that Windows Time of Day clock ticks occur only every 55 milliseconds (18.2 times per second).
On Linux the system returns up to a 6-digit number and the clock ticks every 16 microseconds or so.
The half-second mark for s.dyn_tm_usec on Linux is 500000; on Windows it is 500.
The documentation for dyn_tm_usec says, "The current microsecond, relative to dyn_tm_sec or tm_sec. (On Windows, this value is updated only in 50-millisecond increments)."
My problem is that the change in scale screws up the math.
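For example, a fractional-second calculation that is correct on Linux is silently off by a factor of 1000 under Mia (a minimal Miva Script sketch; the variable name l.fraction is mine):

    <MvCOMMENT> On Linux: 500000 / 1000000 = 0.5, the correct half second </MvCOMMENT>
    <MvCOMMENT> On Mia:      500 / 1000000 = 0.0005, off by a factor of 1000 </MvCOMMENT>
    <MvASSIGN NAME = "l.fraction" VALUE = "{ s.dyn_tm_usec / 1000000 }">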
Shouldn't Mia also return 500000? In other words, shouldn't the value be multiplied by 1000 before it's returned?
To be clear, I'm not complaining about timing precision (that's OS-dependent) but about the scale of the value returned.
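In the meantime the only workaround I can see is to rescale the value by hand when running under Mia. A sketch follows; l.on_mia and l.usec are my own names, and the platform has to be detected some other way, because the raw value alone is ambiguous (500 is a legal microsecond reading on Linux too):

    <MvCOMMENT> Hypothetical shim: l.on_mia must be set from your own configuration </MvCOMMENT>
    <MvIF EXPR = "{ l.on_mia }">
        <MvASSIGN NAME = "l.usec" VALUE = "{ s.dyn_tm_usec * 1000 }">
    <MvELSE>
        <MvASSIGN NAME = "l.usec" VALUE = "{ s.dyn_tm_usec }">
    </MvIF>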