My question concerns the discrepancies in transfer time I see when copying data
from bulk storage (HDD, SSD, CD/DVD). If I do the transfer at a level below the
filesystem (e.g. reading \\.\CDROM0 directly), I seem to get much faster speeds
than by copying the files themselves, roughly a 5x increase.
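For the raw-device side, the kind of timing I had in mind looks roughly like
this (a minimal Python sketch; \\.\CDROM0 is the device path from above, and
the 64 MiB total and 1 MiB block size are arbitrary choices of mine, with the
block size kept a multiple of the 2048-byte CD sector so the unbuffered reads
stay aligned):

    import time

    DEVICE = r"\\.\CDROM0"        # raw device path, as above
    BLOCK = 1024 * 1024           # 1 MiB, a multiple of the 2048-byte sector
    TOTAL = 64 * 1024 * 1024      # read 64 MiB for the test (arbitrary)

    read_bytes = 0
    start = time.perf_counter()
    # unbuffered open so read sizes go straight to the device
    with open(DEVICE, "rb", buffering=0) as dev:
        while read_bytes < TOTAL:
            chunk = dev.read(BLOCK)
            if not chunk:
                break             # reached the end of the medium
            read_bytes += len(chunk)
    elapsed = time.perf_counter() - start

    print(f"{read_bytes / (1024 * 1024):.1f} MiB in {elapsed:.2f} s "
          f"= {read_bytes / elapsed / (1024 * 1024):.1f} MiB/s")

(This may need to run with sufficient privileges to open the device directly.)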
It seems to me that creating and managing the individual files is what adds the
overhead. Can anyone verify this, or propose an alternative explanation? I have
noticed something similar with FLOSS projects I use: their directory trees
typically contain many small files, and the I/O over those trees takes a long
time.
My apologies for not having good timing data to discuss; I will see what I can
come up with. If anyone can suggest a methodology I would appreciate it, since
it is not immediately obvious to me how to compare transfers of the same amount
of data.
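My best guess so far is to walk a directory tree, read every file in full while
timing it, and then read the same total number of bytes straight from the raw
device as in the sketch above. A rough Python sketch of the filesystem side
(the root path and 1 MiB block size are just placeholders I picked):

    import os
    import time

    ROOT = "D:\\"                 # placeholder: the mounted disc or directory
    BLOCK = 1024 * 1024           # read files in 1 MiB chunks

    total_bytes = 0
    start = time.perf_counter()
    for dirpath, _dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    while True:
                        chunk = f.read(BLOCK)
                        if not chunk:
                            break
                        total_bytes += len(chunk)
            except OSError:
                pass              # skip files that cannot be opened
    elapsed = time.perf_counter() - start

    print(f"{total_bytes / (1024 * 1024):.1f} MiB across files in {elapsed:.2f} s "
          f"= {total_bytes / elapsed / (1024 * 1024):.1f} MiB/s")

Reading the same total_bytes from the raw device would then give a like-for-like
throughput figure, with any difference presumably attributable to per-file
open/seek overhead. One caveat I am aware of is OS caching, so cold runs (or a
reboot between tests) probably matter.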
I realize this question may not be entirely appropriate, but I hope to
get a good answer here.