Re: Architectural question

Thank you, Max.

I agree that a memory-mapped file minimizes the memcpy() calls,
particularly if the driver uses direct I/O.

The problem I have seen with memory-mapped files, when the file is a big
one, is at the moment you close the view with UnmapViewOfFile(): there is
a lot of disk activity to flush the pages back to disk, and this takes a
long time. On the other hand, flushing the file as each block of data is
written is a synchronous operation, which slows down the data acquisition.
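
For reference, the per-block flush I mean would look roughly like this (a
sketch only; view, blockIndex and blockSize are placeholder names):

#include <windows.h>

/* Flush one filled acquisition block so dirty pages trickle out during
   capture instead of all at once at UnmapViewOfFile(). As noted above,
   the flush is synchronous, so it stalls acquisition while it runs. */
void FlushBlock(BYTE *view, SIZE_T blockIndex, SIZE_T blockSize)
{
    if (!FlushViewOfFile(view + blockIndex * blockSize, blockSize))
    {
        /* GetLastError() has the cause; handle as appropriate. */
    }
}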

-----Original Message-----
From: "Maxim S. Shatskih"
To: "NT Developers Interest List"
Sent: Sunday, 19 November 2000 22:55
Subject: [ntdev] Re: Architectural question

> >Use a memory mapped file, allocated in the application, and use the
> >virtual address returned by MapViewOfFile() as input parameter to the
> >ReadFile() on the USB device
>
> This is the best way - the fewest memcpy() calls
>
> Max

* Perform ReadFile system calls from the application and queue each
received data block to another thread in the application to overlap the
WriteFile operations
Do this; be sure you use OVERLAPPED I/O calls with a number of buffers.
Also, open the file unbuffered (FILE_FLAG_NO_BUFFERING), as in the sketch
below.
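
A minimal sketch of that arrangement, with a hypothetical ReadUsbBlock()
standing in for the USB read and error handling omitted:

#include <windows.h>

#define NBUF    4                 /* ring of in-flight buffers */
#define BUFSIZE (256 * 1024)      /* keep a multiple of the sector size */

DWORD ReadUsbBlock(BYTE *buf, DWORD len);   /* hypothetical USB read */

int main(void)
{
    BYTE      *buf[NBUF];
    OVERLAPPED ov[NBUF];
    LONGLONG   offset = 0;
    DWORD      got;
    HANDLE     file;
    int        i;

    /* Open the output file overlapped and unbuffered, as suggested. */
    file = CreateFile("capture.dat", GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                      FILE_FLAG_NO_BUFFERING | FILE_FLAG_OVERLAPPED, NULL);

    for (i = 0; i < NBUF; i++)
    {
        /* VirtualAlloc returns page-aligned memory, which satisfies the
           alignment FILE_FLAG_NO_BUFFERING requires. */
        buf[i] = (BYTE *)VirtualAlloc(NULL, BUFSIZE, MEM_COMMIT,
                                      PAGE_READWRITE);
        ZeroMemory(&ov[i], sizeof ov[i]);
        /* Manual-reset, initially signaled = "buffer is free". WriteFile
           resets the event when it starts an overlapped write. */
        ov[i].hEvent = CreateEvent(NULL, TRUE, TRUE, NULL);
    }

    for (;;)
    {
        i = (int)((offset / BUFSIZE) % NBUF);

        /* Wait for this buffer's previous write to finish, refill it from
           the USB device, then queue the next overlapped disk write. */
        WaitForSingleObject(ov[i].hEvent, INFINITE);
        got = ReadUsbBlock(buf[i], BUFSIZE);
        if (got == 0)
            break;   /* acquisition finished */

        /* With FILE_FLAG_NO_BUFFERING the write length must be a sector
           multiple; a final partial block needs padding (not shown). */
        ov[i].Offset     = (DWORD)(offset & 0xFFFFFFFF);
        ov[i].OffsetHigh = (DWORD)(offset >> 32);
        WriteFile(file, buf[i], got, NULL, &ov[i]);
        offset += got;
    }

    /* Drain outstanding writes before closing. */
    for (i = 0; i < NBUF; i++)
        WaitForSingleObject(ov[i].hEvent, INFINITE);
    CloseHandle(file);
    return 0;
}

The manual-reset events make each buffer self-throttling: a buffer cannot
be refilled until its previous write has completed, so up to NBUF writes
stay in flight while the USB transfer proceeds.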

On the other hand, USB can only transfer 1.2 MBytes/sec, so almost any
approach will keep up with that rate. You should probably think in terms
of latency: with multiple overlapped buffers, the data should reach the
disk in only slightly more time than the USB transfer itself takes. If
you were to read a full 32 MBytes from the USB device and only then write
the 32 MBytes to disk, buffers that large might also cause paging
activity.
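
To put rough numbers on it: 32 MBytes at 1.2 MBytes/sec is about 27
seconds of USB transfer, while a disk sustaining (say) 10 MBytes/sec
writes those 32 MBytes in just over 3 seconds, so with overlapped buffers
the run should finish only one buffer-write after the last USB read
completes.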

* Use a memory mapped file, allocated in the application, and use the
virtual address returned by MapViewOfFile() as input parameter to the
ReadFile() on the USB device
Bad idea. If you pass a 32 MByte memory-mapped read buffer to the USB
stack, it may try to page-lock the whole thing (or a large chunk). The
virtual memory system also has no idea when to write filled memory to
disk; it will just apply some LRU algorithm. YOU know the data is
sequentially filling the buffer; the virtual memory system doesn't. I've
also seen the virtual memory system thrash horribly at times, writing
only small chunks out at a go and utterly degrading the disk transfer
rates.
* Write the data directly from kernel mode, maybe with a kernel
thread, using ZwCreateFile etc. etc.
I don't see any real need. Your data rates are not that high, and your
buffers are huge. Staying in kernel mode is useful if you're going to
make a lot of user/kernel transitions to get the work done. If you did
64K reads that each took USB about 50 milliseconds to fill, and 64K disk
writes, the overhead from kernel transitions is almost nil. The potential
for system-crashing bugs WILL be much higher if you do the work in kernel
mode.
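
For completeness, the kernel-mode variant would look roughly like this (a
sketch only, given the caveats above; the path and parameters are
placeholders, and it must run at PASSIVE_LEVEL, e.g. in a system thread):

#include <ntddk.h>

/* Sketch: write one captured block to disk from kernel mode. */
NTSTATUS WriteCaptureBlock(PVOID buffer, ULONG length)
{
    UNICODE_STRING    name;
    OBJECT_ATTRIBUTES oa;
    IO_STATUS_BLOCK   iosb;
    HANDLE            file;
    NTSTATUS          status;

    RtlInitUnicodeString(&name, L"\\??\\C:\\capture.dat");
    InitializeObjectAttributes(&oa, &name, OBJ_CASE_INSENSITIVE,
                               NULL, NULL);

    status = ZwCreateFile(&file, GENERIC_WRITE | SYNCHRONIZE, &oa, &iosb,
                          NULL, FILE_ATTRIBUTE_NORMAL, 0, FILE_OVERWRITE_IF,
                          FILE_SYNCHRONOUS_IO_NONALERT, NULL, 0);
    if (!NT_SUCCESS(status))
        return status;

    /* Synchronous write at the current file position. */
    status = ZwWriteFile(file, NULL, NULL, NULL, &iosb,
                         buffer, length, NULL, NULL);
    ZwClose(file);
    return status;
}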

- Jan