Nanosecond timing on OS X
Posted: Sun Mar 04, 2012 3:25 pm
I usually use ElapsedMilliseconds() on OS X to time a routine.
If you want a more accurate result, you can use UpTime(), which returns the time since the machine booted, or mach_absolute_time(), which is a bit faster.
Even if you divide the result by 1,000,000 to get milliseconds, it is still more accurate than ElapsedMilliseconds().
After a while, ElapsedMilliseconds() starts to run behind a little.
Code:
ImportC "/System/Library/Frameworks/CoreServices.framework/CoreServices"
UpTime.q()
AbsoluteDeltaToNanoseconds.q(end_time.q, start_time.q)
EndImport
start_time.q = UpTime()
; ... the routine you want to time ...
end_time.q = UpTime()
nano_seconds.q = AbsoluteDeltaToNanoseconds(end_time, start_time)
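If you only need millisecond resolution, the nanosecond delta from the example above can be reduced by the division mentioned earlier; a minimal sketch (continuing from the `nano_seconds` variable in the previous block):

Code:
ImportC "/System/Library/Frameworks/CoreServices.framework/CoreServices"
  UpTime.q()
  AbsoluteDeltaToNanoseconds.q(end_time.q, start_time.q)
EndImport

start_time.q = UpTime()
Delay(50) ; something to measure
end_time.q = UpTime()

nano_seconds.q  = AbsoluteDeltaToNanoseconds(end_time, start_time)
milli_seconds.q = nano_seconds / 1000000 ; 1 ms = 1,000,000 ns
Debug milli_seconds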
Alternative (faster) approach ...
Code:
ImportC "/System/Library/Frameworks/CoreServices.framework/CoreServices"
mach_absolute_time.q()
AbsoluteDeltaToNanoseconds.q(end_time.q, start_time.q)
EndImport
start_time.q = mach_absolute_time()
; ... the routine you want to time ...
end_time.q = mach_absolute_time()
nano_seconds.q = AbsoluteDeltaToNanoseconds(end_time, start_time)