Folks,
I am trying a simple experiment in a volume filter's IRP_MJ_WRITE path. I want
to see how much the time delta increases on a particular machine as we
increase the number of conditionals in the write path. So I added junk code
like this:
//EXPERIMENT BEGINS
LARGE_INTEGER liCurCount;
ULONG ulIncrement;
int nMillSec;
int nSec;

// Snapshot before the loop: tick count plus the tick length in 100ns units.
KeQueryTickCount(&liCurCount);
ulIncrement = KeQueryTimeIncrement();
nMillSec = (int)(liCurCount.QuadPart * ulIncrement / 10000) % 1000;  // ms within the second
nSec     = (int)(liCurCount.QuadPart * ulIncrement / 10000000);      // whole seconds

for (int i = 0; i < 15000; i++) {
    if (nWrtOffset == extents[i]) {
        // Had to do something here to stop the compiler optimizer
        // from whacking this off as a no-op.
        extents[i] = nWrtOffset;
    }
    else {
        extents[i] = nWrtOffset * 2;
    }
}

// Same snapshot and conversion after the loop.
LARGE_INTEGER liCurCount2;
ULONG ulIncrement2;
int nMillSec2;
int nSec2;

KeQueryTickCount(&liCurCount2);
ulIncrement2 = KeQueryTimeIncrement();
nMillSec2 = (int)(liCurCount2.QuadPart * ulIncrement2 / 10000) % 1000;
nSec2     = (int)(liCurCount2.QuadPart * ulIncrement2 / 10000000);

Trace(VT_TRACELEVEL_HIGH,
      ("Time Before: %d.%d, Time After: %d.%d\n",
       nSec, nMillSec, nSec2, nMillSec2));
// EXPERIMENT ENDS
But I can see no difference between nSec and nSec2, and more importantly
between nMillSec and nMillSec2. I expected 15000 spurious iterations to
generate a measurable tick delta, yet those fake if statements and junk
assignments seem to have no effect at all.
I even traced through the code to make sure the compiler didn't optimize it
away, and it is all there. I also increased the loop counter from 1000 to
15000 and still got the same result.
If I recall correctly, the correspondence between "ticks" and real time is
returned by KeQueryTimeIncrement, and *its* resolution is 100ns.
KeQueryTickCount's real-time resolution is whatever that call returns (times
100ns), which is typically around 10ms.
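Just to spell out the arithmetic I'm relying on (the 156250 below is only an
example value; whatever KeQueryTimeIncrement actually returns depends on the
HAL):

// Example: if KeQueryTimeIncrement() returns 156250 (100ns units per tick),
// then one tick = 156250 * 100ns = 15.625 ms. KeQueryTickCount() only
// advances once per tick, so any interval shorter than one tick shows up
// as a zero delta with the code above.
ULONG ulInc   = KeQueryTimeIncrement();  // tick length in 100ns units
ULONG ulTickMs = ulInc / 10000;          // 10,000 * 100ns = 1 ms, so ~15 here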
Is the timer interrupt getting delayed because of the debugger connection?
The machine in question is a dual-core box with hyperthreading, so essentially
4 logical processors as far as Windows is concerned.
I am stumped, and any suggestions will be appreciated.
--
- amitr0