Hello experts,
I’m wondering how the Windows (XP) CPU Usage measurement may be implemented.
The background: we have an I/O-intensive application that shows only a very low
system-wide CPU usage of approximately 1% in Task Manager, while benchmarks
running at the same time show a performance degradation of up to 40% on
ALU-bound workloads.
The application does most of its work in a dedicated driver, acquiring and
sorting measurement data at a bandwidth of up to 100 MByte/s.
My question is: how is the CPU usage measured and calculated? Is it really
based on simple sampling at the timer interrupt? Is the load caused by ISRs and
DPCs accounted for at all?
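To make clear what I mean by "CPU usage", here is a minimal user-mode sketch of
what I assume a sampling-based monitor does, using GetSystemTimes (which I
believe is available from XP SP1 onward): it just compares the growth of the
idle counter against the growth of the kernel+user counters. My question is
essentially whether Task Manager does more than this, and whether time spent in
ISRs/DPCs on behalf of my driver shows up in these counters at all.

#include <windows.h>
#include <stdio.h>

/* Convert a FILETIME (100-ns units) into a 64-bit integer. */
static unsigned long long ft_to_u64(FILETIME ft)
{
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart;
}

int main(void)
{
    FILETIME idle0, kern0, user0, idle1, kern1, user1;

    /* Sample the system-wide idle/kernel/user time counters twice,
       one second apart. */
    GetSystemTimes(&idle0, &kern0, &user0);
    Sleep(1000);
    GetSystemTimes(&idle1, &kern1, &user1);

    unsigned long long idle  = ft_to_u64(idle1) - ft_to_u64(idle0);
    /* The kernel time reported here already includes the idle time,
       so kernel + user is the total elapsed CPU time across all CPUs. */
    unsigned long long total = (ft_to_u64(kern1) - ft_to_u64(kern0))
                             + (ft_to_u64(user1) - ft_to_u64(user0));

    if (total > 0)
        printf("CPU usage: %.1f %%\n",
               100.0 * (double)(total - idle) / (double)total);
    return 0;
}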
How can such a large discrepancy between the reported value and the real
performance degradation arise - by accident or by design?
Does anyone know how the CPU usage measurement works? What does it measure, and
what does it leave out? Any hint is highly appreciated.
Many thanks,
Volker