You might wish to start by telling Explorer that the files are
“expensive” to open (setting the offline attribute bit,
FILE_ATTRIBUTE_OFFLINE, does this, for example).
Beyond that, a scheme similar to (2) where you recycle handles in some
LRU fashion is likely to give you the best performance.
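The handle-recycling idea can be sketched in user-mode C (everything here is hypothetical and illustrative — `MAX_REMOTE_HANDLES`, `HandleSlot`, and `cache_acquire` are made-up names, and the real driver would replace the commented stubs with its own network open/close paths):

```c
/* Model of an LRU handle cache: the remote server allows only
 * MAX_REMOTE_HANDLES simultaneous opens, so when every slot is in use
 * we close the least-recently-used handle and reuse its slot. */

#define MAX_REMOTE_HANDLES 4   /* stand-in for the server's limit (e.g. 100) */

typedef struct {
    int      fileId;   /* which remote file this slot holds open (-1 = free) */
    unsigned lastUse;  /* logical clock value of the most recent access */
} HandleSlot;

static HandleSlot g_slots[MAX_REMOTE_HANDLES];
static unsigned   g_clock;

void cache_init(void)
{
    for (int i = 0; i < MAX_REMOTE_HANDLES; i++)
        g_slots[i].fileId = -1;
    g_clock = 0;
}

/* Return the slot index now holding an open handle for fileId,
 * evicting the least-recently-used slot if necessary. */
int cache_acquire(int fileId)
{
    int victim = 0;
    for (int i = 0; i < MAX_REMOTE_HANDLES; i++) {
        if (g_slots[i].fileId == fileId) {       /* already open: just touch */
            g_slots[i].lastUse = ++g_clock;
            return i;
        }
        if (g_slots[i].fileId == -1) {           /* prefer a free slot */
            victim = i;
        } else if (g_slots[victim].fileId != -1 &&
                   g_slots[i].lastUse < g_slots[victim].lastUse) {
            victim = i;                          /* track the LRU slot */
        }
    }
    /* A real driver would close g_slots[victim]'s network handle here
     * and issue a fresh open for fileId before returning. */
    g_slots[victim].fileId  = fileId;
    g_slots[victim].lastUse = ++g_clock;
    return victim;
}
```

Touching a handle on every access is what keeps Explorer's hot directories open while idle handles quietly get recycled.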
There are other possibilities here as well, for example you might wish
to utilize the cache manager in a very different fashion - such as
treating all of your cached data as a separate region within a huge
file. For example, let’s suppose your file system only supports files
up to 4GB in size (addressable by 32 bits). You could then create ONE
“logical” file that you manage via the Cache Manager, allocating a 4GB
region of its 64-bit offset space for each file. While this sounds
large, 64-bit offsets leave room for 2^32 such regions, so you can
handle roughly 4 billion files this way - I suspect you’ll find that’s
not much of a constraint. This gives you the advantages of caching but
decouples the cache from the underlying file handles. I’m not saying
that this is the right solution, but rather encouraging you to “think
outside the box” - because that’s what you’ll need to do to make this
work efficiently.
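The region arithmetic behind that scheme is simple enough to show (a hypothetical user-mode illustration; `FILE_REGION_BITS`, `region_base`, and `logical_offset` are made-up names, not part of any Windows API):

```c
#include <stdint.h>

/* Each cached file gets a fixed 4GB region inside one huge logical file.
 * With 64-bit file offsets there is room for 2^(64-32) = 2^32, i.e. about
 * 4 billion such regions. */

#define FILE_REGION_BITS 32   /* 2^32 bytes = 4GB per file */

/* Byte offset, within the single logical file, where file #index begins. */
uint64_t region_base(uint32_t index)
{
    return (uint64_t)index << FILE_REGION_BITS;
}

/* Translate (file index, offset within that file) into an offset in the
 * one logical file that the Cache Manager actually sees. */
uint64_t logical_offset(uint32_t index, uint32_t offsetInFile)
{
    return region_base(index) | offsetInFile;
}
```

Because regions never overlap, the mapping is a pure shift-and-or in each direction, and no per-file bookkeeping is needed to find a file’s cached data.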
Sounds like an interesting challenge!
Regards,
Tony
Tony Mason
Consulting Partner
OSR Open Systems Resources, Inc.
http://www.osr.com
-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of Jeff Mancuso
Sent: Saturday, May 21, 2005 7:35 PM
To: ntfsd redirect
Subject: [ntfsd] Dealing with a limit to number of open file handles
Hello,
I’m working on a network file system that connects to remote file
servers that have some hard-coded maximum number of open file handles
[100, in this case]. This is causing me a world of pain, since
Explorer, combined with the cache manager, would like to have many
hundreds of files open when doing relatively basic operations like
moving an entire directory tree. I understand why this happens, but I’m
still confronted with the problem that I cannot actually create as many
handles as Explorer or other programs would like. What is the best way
to solve this problem?
1.) Return STATUS_INSUFFICIENT_RESOURCES when I cannot get a real
handle to the file/directory that I’d like to access
2.) Lazily open network resources for the reads/writes that a user
issues on what they think is an open handle. When a create is issued, I
will open a handle internally to that network resource, determine its
existence, create an FCB/CCB, then close my handle to that resource,
only re-opening it when the user issues a read/write/info request on
that handle. Fail appropriately if I can no longer access the file as I
originally could. There are a variety of complications in this method,
with both FS consistency and cache consistency, but I believe it can
theoretically be done.
3.) Some better/more obvious solution I have not been able to come up
with?
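For what it’s worth, option (2) can be modeled in a few lines of user-mode C (all names here are hypothetical; `remote_open`/`remote_close` stand in for the network round trips, and a real driver would map the failure to an appropriate NTSTATUS):

```c
#include <stdbool.h>

/* Model of the lazy-open scheme: keep an FCB per file the application
 * thinks is open, but hold a real remote handle only while a request
 * is actually in flight. */

typedef struct {
    int  remoteHandle;   /* -1 while no real handle is held */
    bool stillExists;    /* would be re-verified on each reopen */
} Fcb;

static int g_openCount;  /* how many real handles are currently held */

static int remote_open(Fcb *fcb)
{
    if (!fcb->stillExists)
        return -1;       /* file vanished since the original create */
    g_openCount++;
    return 1;            /* dummy handle value */
}

static void remote_close(Fcb *fcb)
{
    fcb->remoteHandle = -1;
    g_openCount--;
}

/* Service one read: open, do the I/O, close again, so the server's
 * handle budget is consumed only for the duration of the request. */
bool lazy_read(Fcb *fcb)
{
    fcb->remoteHandle = remote_open(fcb);
    if (fcb->remoteHandle < 0)
        return false;    /* caller fails the request appropriately */
    /* ... perform the actual network read here ... */
    remote_close(fcb);
    return true;
}
```

The open count returning to zero between requests is exactly what makes this scheme fit under a hard server-side handle limit - at the cost of an extra open/close round trip per request, which is why combining it with an LRU cache of still-open handles is attractive.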
Thanks
-Jeff
Questions? First check the IFS FAQ at
https://www.osronline.com/article.cfm?id=17