So on my system the example below returns 156000 (15.6 ms) and 156001, and if I un-comment the timeBeginPeriod_() and timeEndPeriod_() lines the timer/interrupt resolution increases and I get 10000 (1 ms) and 10001. On NT the max value (lowest resolution) is around 15.6 ms and the min value (highest resolution) is 0.976 ms. timeBeginPeriod_() affects the resolution of the system clock and timer ticks.

The interrupt time is the only Windows clock that is guaranteed to be monotonic, that is, its value only ever increases over time. Its value represents the time in units of 100 ns since the system was booted. The interrupt time is the base clock for all timers in Windows (see my recent article A Bug in Windows Timer Management). It is updated on every clock interrupt.
Since it's a 64-bit value, despite being counted in units of 1/10,000th of a ms, it'll still take around 58454 years before it wraps around to 0 again; nobody has yet reached a system uptime that long.

(and even if it does, as long as you use the good old "stop-start" to get your delta value like you'd usually do, even that is not an issue)
For reference, 10,000 units = 1 ms. The behavior is similar to GetTickCount_() and timeGetTime_() but with higher precision, falling between those and QueryPerformanceCounter_().
But as you see, other than the procedure itself, no system calls are used at all; the interrupt time is read directly from KUSER_SHARED_DATA, which resides at $7FFE0000 and is exposed in the addressable memory of all processes. If you're a speed freak I guess you could inline this stuff in your loops etc.
Note! The order of the MOVs and the loop is vital. The values are fetched in the reverse order that the ISR (clock interrupt service routine) updates them; comparing the high part with the duplicate high part ensures that we did not read during an interrupt, and if we did, the loop repeats until we get a read that falls between interrupts.
I have no idea how useful this is to anyone, but it sure was interesting to "discover" it.

Just to reiterate! GetInterruptTime() will return the value that basically every timer in Windows is based on or uses in one way or another, directly or indirectly; even GetTickCount_(), timeGetTime_(), and ElapsedMilliseconds() are derived or synced in some way from this value. (Except QueryPerformanceCounter_(), which is based on a different counter and may be CPU based, crystal based, or some other source.)
Warning! As far as I know this is available on NT (4+?); I have not tried it on Win9x but it'll most likely crash there. So use this only on Windows 5.x/6.x and later.
PS! Fred, I hope the x64 code is OK; I was a bit unsure whether the rbx register was safe to use or not, as the PB manual does not mention which x64 registers are safe to use.
;Public domain, created by Rescator.
;Based on info found at http://www.dcl.hpi.uni-potsdam.de/research/WRK/?p=34
CompilerIf #PB_Compiler_Processor=#PB_Processor_x86
Procedure.q GetInterruptTime()
!_GetInterruptTime_Repeat_Start:
!MOV edx,dword [2147352584+4] ; High1Time ($7FFE0008 = KUSER_SHARED_DATA.InterruptTime)
!MOV eax,dword [2147352584]   ; LowPart
!MOV ecx,dword [2147352584+8] ; High2Time
!CMP edx,ecx                  ; mismatch = a clock interrupt hit mid-read, retry
!JNE _GetInterruptTime_Repeat_Start
ProcedureReturn               ; quad result returned in edx:eax
EndProcedure
CompilerElse
Procedure.q GetInterruptTime()
!MOV rdx,2147352584    ; $7FFE0008 = KUSER_SHARED_DATA.InterruptTime
!PUSH rbx              ; rbx is callee-saved in the x64 ABI, so preserve it
!_GetInterruptTime_Repeat_Start:
!MOV eax,dword [rdx+4] ; High1Time (zero-extends into rax)
!MOV ebx,dword [rdx]   ; LowPart (zero-extend; MOVSXD would corrupt values >= $80000000)
!MOV ecx,dword [rdx+8] ; High2Time
!CMP eax,ecx           ; mismatch = a clock interrupt hit mid-read, retry
!JNE _GetInterruptTime_Repeat_Start
!SHL rax,32
!ADD rax,rbx           ; combine high and low parts into the quad result
!POP rbx
ProcedureReturn
EndProcedure
CompilerEndIf
;timeBeginPeriod_(1)
Define n.i,start.q,stop.q
For n=1 To 10
start=GetInterruptTime()
Delay(1)
stop=GetInterruptTime()
Debug stop-start
Next
;timeEndPeriod_(1)