System Cache

Hello Gurus,

I am trying to copy a 4 GB file from my file system to a local disk (NTFS).

Once the copy starts I notice a rapid increase in System Cache (Task Manager), and the available physical memory drops to ~4 MB (my PC has 1 GB). The PC starts to crawl.

I have looked in the PoolTag utility, and all the buffers I have allocated seem to be OK.

When I looked at the command trace (DbgView), the IRPs show the system reading 64 KB at a time, followed by fast I/O from Explorer; sometimes there are a couple of system accesses before a couple of fast I/Os.

When I remove the disk (a USB device holding the volume with my file system), the cache memory drops again.

Do you know what might be causing this issue? Is there a way to flush the cache?

Igal

This is typical, and is not related to your file system.

Kind regards, Dejan
http://www.alfasp.com
File system audit, security and encryption kits.

Dejan,

Thank you for the reply, but the issue is that the computer starts to crawl (nothing responds; the mouse does not react as it is supposed to).
I have also noticed that after copying ~800 MB of the file the computer gets stuck (crawling); that is the size of the system cache at that moment (I think it is the used cache as indicated by Task Manager).

Does this mean that the whole copied file is being cached? I only want to copy the file.
Is it normal for the PC to get stuck, or is it something else I am doing wrong?

PS: I have noticed that the size of the file is already 4 GB during the copy and does not increase as the copy proceeds. Can this be a problem?

Igal

Explorer sets the destination file's size to the size of the source file before doing the actual copy, to avoid fragmentation.
You can set the system to favor Programs over System Cache (System->Advanced->Performance->Advanced) to avoid this sort of thrashing.
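
For illustration, here is a minimal user-mode sketch of that preallocation step (PreallocateDestination is a made-up name; the handles are assumed to be already open):

#include <windows.h>

/* Sketch: preallocate the destination file to the source file's size,
 * as Explorer does, so the file system can pick a contiguous run of
 * clusters. hSrc/hDst are assumed open with GENERIC_READ/GENERIC_WRITE. */
BOOL PreallocateDestination(HANDLE hSrc, HANDLE hDst)
{
    LARGE_INTEGER size, zero = {0};

    if (!GetFileSizeEx(hSrc, &size))
        return FALSE;

    /* Set EOF at the source size, then rewind for the actual copy. */
    if (!SetFilePointerEx(hDst, size, NULL, FILE_BEGIN) || !SetEndOfFile(hDst))
        return FALSE;

    return SetFilePointerEx(hDst, zero, NULL, FILE_BEGIN);
}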

Kind regards, Dejan
http://www.alfasp.com
File system audit, security and encryption kits.

Dejan,

It is already set to Programs.

Sorry, but I think I did not understand correctly. Do you mean that Windows cannot deal properly with copying huge files (4 GB), and that every time a huge file is copied all resources are allocated for that purpose?
I really think I have not set something correctly, maybe in the CM…

Igal

Dejan,

I have written a small user-level application which copies the file chunk by chunk (not using Explorer, and not caching), and the file was copied with no problems. I monitored the system cache, and it stayed stable and relatively low.
With respect to the Explorer copy operation: I gather that Windows tries to cache the whole file in memory, but because the cache is not big enough for 4 GB it gets choked. If I am correct, is there a way that a file system driver, or any other means, can limit the size of the cache allocated for a file?

Igal

This is not something I attributed to FSDs; it was system tweaking.
I know I’ve set up my system so that no thrashing of this nature occurs. I thought the relevant setting was the memory allocation setting.
Does the same happen if you copy from FSes that are not yours? (If yes, it’s not you - and I’ll try to dig up what I tweaked :D)

Kind regards, Dejan
http://www.alfasp.com
File system audit, security and encryption kits.

Cc routines can be used by both FSDs and FSFDs to control the cache for a specific file. (I wouldn’t suggest a filter doing this for FSes not known to adhere to CM rules - so for now NTFS/FAT only).
See CcSetDirtyPageThreshold and related APIs.
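
For illustration, a minimal kernel-mode sketch of what that looks like in an FSD (MyFsdStartCaching and the 1024-page value are made-up examples; note that the threshold is only enforced for writers that are throttled through CcCanIWrite/CcDeferWrite):

#include <ntifs.h>

/* Sketch: start caching a stream, then cap the number of dirty pages the
 * Cache Manager may accumulate for it. 1024 pages = 4 MB with 4 KB pages,
 * an arbitrary example value, not a recommendation. */
VOID MyFsdStartCaching(PFILE_OBJECT FileObject,
                       PCC_FILE_SIZES FileSizes,
                       PCACHE_MANAGER_CALLBACKS Callbacks,
                       PVOID LazyWriteContext)
{
    CcInitializeCacheMap(FileObject,
                         FileSizes,
                         FALSE,              /* no pin access */
                         Callbacks,
                         LazyWriteContext);

    /* Limit this stream to roughly 4 MB of dirty data in the cache. */
    CcSetDirtyPageThreshold(FileObject, 1024);
}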

Kind regards, Dejan
http://www.alfasp.com
File system audit, security and encryption kits.

Dejan,

I have tried to copy a 1.5 GB file from NTFS (on USB) to NTFS (local); the cache rose, but only to about 600 MB, not the 870 MB seen with the 4 GB file (from my FS), and it worked OK.

I have tried limiting the number of pages, as you suggested, to 1000 (i.e. ~4 MB = 1000 * 4 KB per file). It brought some improvement; the copy progress bar moved a little further. By the way, I set the dirty page threshold right after CcInitializeCacheMap.

Do you have any more suggestions?

Igal

> Do you have any more suggestions?

Not ATM.


Kind regards, Dejan
http://www.alfasp.com
File system audit, security and encryption kits.

Dejan,

10x

By the way, I have increased the size of the virtual memory to 6 GB - no progress…

Igal

> I am trying to copy a 4 GB file from my file system to a local disk (NTFS).
> Once the copy starts I notice a rapid increase in System Cache (Task Manager),
> and the available physical memory drops to ~4 MB (my PC has 1 GB). The PC
> starts to crawl.

Unless the target file has been opened with FILE_FLAG_NO_BUFFERING (and, apparently, it was not - otherwise you would not see an increase in the System Cache), this is the only scenario you can expect on large reads/writes…

If the Cache Manager is involved, your I/O requests go via the Memory Manager, which works synchronously, i.e. your requests are in the same queue as cached operations on other files and as page-fault processing. As a result, you get system-wide performance degradation when you read or write a large amount of data.

By specifying FILE_FLAG_NO_BUFFERING in the CreateFile() call you instruct the system to bypass the system cache, so that all I/O operations on the target file go directly to the disk and don’t depend on the synchronous nature of the Memory Manager’s operations. This is why you can achieve a very noticeable performance improvement on large reads/writes by specifying FILE_FLAG_NO_BUFFERING in the CreateFile() call…
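
For illustration, here is a minimal user-mode sketch of such an unbuffered copy (CopyUnbuffered and the 64 KB chunk size are made-up examples; with FILE_FLAG_NO_BUFFERING the buffer and transfer sizes must be sector-aligned, so production code should query the real sector size with GetDiskFreeSpace rather than assume one):

#include <windows.h>

#define CHUNK (64 * 1024)   /* a multiple of common sector sizes */

/* Sketch: copy src to dst while bypassing the system cache on both ends. */
BOOL CopyUnbuffered(const wchar_t *src, const wchar_t *dst)
{
    BOOL ok = FALSE;
    DWORD got, put;

    HANDLE hSrc = CreateFileW(src, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING,
                              FILE_FLAG_NO_BUFFERING | FILE_FLAG_SEQUENTIAL_SCAN,
                              NULL);
    HANDLE hDst = CreateFileW(dst, GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS,
                              FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH,
                              NULL);
    /* VirtualAlloc returns page-aligned memory, which satisfies the
       sector-alignment requirement of unbuffered I/O. */
    BYTE *buf = VirtualAlloc(NULL, CHUNK, MEM_COMMIT | MEM_RESERVE,
                             PAGE_READWRITE);

    if (hSrc == INVALID_HANDLE_VALUE || hDst == INVALID_HANDLE_VALUE || !buf)
        goto done;

    for (;;) {
        if (!ReadFile(hSrc, buf, CHUNK, &got, NULL))
            goto done;
        if (got == 0)
            break;                          /* end of file */
        /* NOTE: a final chunk that is not a sector multiple will fail
           here; real code must pad it and then trim with SetEndOfFile. */
        if (!WriteFile(hDst, buf, got, &put, NULL) || put != got)
            goto done;
    }
    ok = TRUE;

done:
    if (buf) VirtualFree(buf, 0, MEM_RELEASE);
    if (hSrc != INVALID_HANDLE_VALUE) CloseHandle(hSrc);
    if (hDst != INVALID_HANDLE_VALUE) CloseHandle(hDst);
    return ok;
}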

Anton Bassov

Anton,

Thank you for the response,

I did understand that the FILE_FLAG_NO_BUFFERING issue is involved, but does it mean that Explorer “cannot” copy big files because it has to go through the CM?!

In other words, if I need to copy a big file from one disk to another, do I need to write an application which copies the file with FILE_FLAG_NO_BUFFERING? Or is it possible that, when copying a file from NTFS to NTFS, Explorer notices this and sets some CM configuration suited to the operation, so that the user does not wait forever for the action to complete? (The computer really starts crawling and eventually gets stuck unless I pull out the disk’s USB cable.)

Also, are there any other configurations I am not aware of?

Igal

Igal,

This is a problem that I have run into as well with my application, but with a small twist. The user might be running a heavy CPU/bus/memory application, like a newer FPS game, and if I am unpacking data in the background I have to trickle the items into place or else it impacts the main application. The reality is that unless you plan on controlling the data flow in a filter - one that attaches to a particular device object, captures every write (paging and non-paging), and defers each item to a worker thread with a pending return, completing it after the wait - you will have to deal with what has been mentioned above. That idea is just something I’m throwing out off the top of my head; it has other implications, let alone the web of complexity involved in handling different situations, which is a can of worms in itself.

Any USB device (hard drive, memory reader, etc.) that I’ve copied large images back and forth to has always come to a crawl (and 90% of my work deals with images north of a gig, on god knows what hardware). It’s just the nature of the beast. If you can match the rate at which data comes into memory to the raw data transfer rate of the device itself, you may prevent the sludge effect.
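
For illustration, a crude sketch of such rate-matching (ThrottleAfterChunk and the 10 MB/s target are made-up examples; a real implementation would measure the device’s actual throughput):

#include <windows.h>

#define TARGET_BYTES_PER_SEC (10 * 1024 * 1024)   /* arbitrary example */

/* Sketch: after copying each chunk, sleep long enough that the overall
 * copy rate stays under a target, so dirty data cannot pile up in the
 * cache faster than the device drains it. */
void ThrottleAfterChunk(DWORD chunkBytes, DWORD chunkStartTickMs)
{
    /* How long the chunk should have taken at the target rate. */
    DWORD budgetMs  = (DWORD)(((ULONGLONG)chunkBytes * 1000) / TARGET_BYTES_PER_SEC);
    DWORD elapsedMs = GetTickCount() - chunkStartTickMs;

    if (elapsedMs < budgetMs)
        Sleep(budgetMs - elapsedMs);
}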

Also, I wouldn’t be too keen on storing/altering the user’s OS registry/cache settings in the background to work around an issue. That may cause you more heartburn than it solves compared with taking another approach.

–Royal

Gurus,

Well, I think the issue is not a Cache Manager issue but rather some kind of memory leak. I have noticed that the available memory in the system drops rapidly when the FILE_FLAG_NO_BUFFERING flag is not used (the FSD also receives FAST_IO requests), but when the flag is used (no fast I/O path is involved) there is no drop in the available memory.
As I have mentioned before, when the USB disk is removed the available memory returns to its normal value from before the copy operation.
I have looked into the Pool Tag Reporter and tried to compare the memory allocations before and after the copy operation, but none of my FsdAllocatePool buffers are involved in the mass memory allocations.
The big consumers are the MmSt, UlHT, and R520 pools.

Any ideas?

Igal

Well Gurus,

I am getting confused by the system (Windows).

It is correct that the Cache Manager’s ultimate goal is to reduce secondary-storage usage as much as possible,
but I never thought it would be at the expense of other resources.

I have run two copy passes from an application. The first run I stopped long before the system ran out of memory (per Task Manager); in this run I noticed accesses to the hard disk and then some fast I/O. When I ran it again, all of the calls were fast I/O, with no paging I/O requests, as if the whole previous copy were stored in local memory.

Is there a way to tell the system not to cache the whole file? Or, taking it to the extreme: if there were a 1 TB file, would the system try to cache the whole file?

Igal