Hi, Jerry.
Thanks for the suggestion. It may, indeed, help when writing my log file.
So I take it that you don’t see any algorithmic problem with my logic
or the way I’m setting up and waiting for the I/O’s?
Also, once I write the log and know it’s safe on disk, I then
update all the metadata files. That is, each 512-byte chunk of the log
represents a metadata change somewhere in the filesystem (block-level
logging), so I then have to copy each 512-byte chunk of the log to its
proper metadata location on disk. Surprisingly (to me), the 1MB of
I/O that this generates takes about the same amount of time (~15 seconds)
as writing the 1MB log, even though the log write is logically
sequential and the follow-on metadata updates are scattered all over the disk.
So in my original post I focused on the log write, since it seemed like
the part that should have been faster.
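
To make the question concrete, here is a stripped-down sketch of what I mean
by the metadata pass: one overlapped 512-byte write per log chunk to its
target offset, then a wait for all of them. The names (LOG_CHUNK,
ApplyLogChunks) are just placeholders for this example, not my real code,
and I’ve glossed over the fact that with FILE_FLAG_NO_BUFFERING the data
buffers would also have to be sector-aligned in memory (e.g. from
VirtualAlloc).

    #include <windows.h>
    #include <stdlib.h>

    #define SECTOR 512

    /* Placeholder descriptor for one logged metadata change; the names
       are made up for this example. */
    typedef struct {
        ULONGLONG diskOffset;    /* byte offset of the metadata block on disk */
        BYTE      data[SECTOR];  /* 512-byte image captured in the log */
    } LOG_CHUNK;

    /* Issue every metadata update as an overlapped write, then wait for
       all of them.  Assumes hDisk was opened with FILE_FLAG_OVERLAPPED;
       error handling is trimmed to keep the sketch short. */
    BOOL ApplyLogChunks(HANDLE hDisk, LOG_CHUNK *chunks, DWORD count)
    {
        OVERLAPPED *ov = (OVERLAPPED *)calloc(count, sizeof(OVERLAPPED));
        BOOL ok = TRUE;
        DWORD i, j, written;

        if (ov == NULL)
            return FALSE;

        /* Kick off all the writes without waiting in between. */
        for (i = 0; i < count; i++) {
            ov[i].Offset     = (DWORD)(chunks[i].diskOffset & 0xFFFFFFFF);
            ov[i].OffsetHigh = (DWORD)(chunks[i].diskOffset >> 32);
            ov[i].hEvent     = CreateEvent(NULL, TRUE, FALSE, NULL);

            if (!WriteFile(hDisk, chunks[i].data, SECTOR, NULL, &ov[i]) &&
                GetLastError() != ERROR_IO_PENDING) {
                CloseHandle(ov[i].hEvent);
                ok = FALSE;
                break;
            }
        }

        /* Wait for each issued write to complete and check its status. */
        for (j = 0; j < i; j++) {
            if (!GetOverlappedResult(hDisk, &ov[j], &written, TRUE))
                ok = FALSE;
            CloseHandle(ov[j].hEvent);
        }

        free(ov);
        return ok;
    }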
So even if I implement your suggestion and optimize the log write by
using large I/O’s, I’m still left with the randomly scattered 512-byte
writes I have to do all over the disk to update the metadata.
Any suggestions there? I tried sorting the list of I/O’s that I send out by
disk block number, but that seems to have little or no effect.
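
In case it helps to see what I mean by sorting, here is a rough sketch of
the sort plus the obvious next step of coalescing runs of contiguous sectors
into one larger write (which of course only helps when some of the chunks
happen to be adjacent). Again, the names are placeholders, the writes are
synchronous on a non-overlapped handle just to keep it short, and with
FILE_FLAG_NO_BUFFERING the buffer would need to be sector-aligned rather
than malloc’ed.

    #include <windows.h>
    #include <stdlib.h>
    #include <string.h>

    #define SECTOR 512

    typedef struct {             /* same placeholder layout as above */
        ULONGLONG diskOffset;
        BYTE      data[SECTOR];
    } LOG_CHUNK;

    static int CompareByOffset(const void *a, const void *b)
    {
        ULONGLONG oa = ((const LOG_CHUNK *)a)->diskOffset;
        ULONGLONG ob = ((const LOG_CHUNK *)b)->diskOffset;
        return (oa > ob) - (oa < ob);
    }

    /* Sort the pending updates by disk offset, then write each run of
       physically contiguous sectors with a single WriteFile call. */
    BOOL ApplyCoalesced(HANDLE hDisk, LOG_CHUNK *chunks, DWORD count)
    {
        DWORD i = 0;

        qsort(chunks, count, sizeof(LOG_CHUNK), CompareByOffset);

        while (i < count) {
            DWORD runLen = 1;
            DWORD k, written;
            BYTE *buf;
            LARGE_INTEGER pos;

            /* Grow the run while the next chunk is the immediately
               following sector on disk. */
            while (i + runLen < count &&
                   chunks[i + runLen].diskOffset ==
                       chunks[i].diskOffset + (ULONGLONG)runLen * SECTOR)
                runLen++;

            /* Copy the run into one contiguous buffer and write it out. */
            buf = (BYTE *)malloc((size_t)runLen * SECTOR);
            if (buf == NULL)
                return FALSE;
            for (k = 0; k < runLen; k++)
                memcpy(buf + (size_t)k * SECTOR, chunks[i + k].data, SECTOR);

            pos.QuadPart = (LONGLONG)chunks[i].diskOffset;
            if (!SetFilePointerEx(hDisk, pos, NULL, FILE_BEGIN) ||
                !WriteFile(hDisk, buf, runLen * SECTOR, &written, NULL)) {
                free(buf);
                return FALSE;
            }
            free(buf);
            i += runLen;
        }
        return TRUE;
    }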
Again, I’m afraid I’m missing a basic concept on Windows. Other
platforms, such as Linux, usually have “intelligent” disk drivers under
the filesystem that sort the I/O’s and/or consolidate them. That may be
why I don’t see performance problems on those platforms. Does Windows
do something similar, or am I sending the I/O’s directly to the “dumb” hardware?
Sorry if these questions are ignorant!
Tim