SMB, Junctions, Symlinks and Reparse Points

Hello,

I am curious about how SMB file sharing works in Windows.
Specifically: what is done in user mode (UM) and what in kernel mode
(KM), how is the SMB server notified when an OpLock is broken, and how
does the server interact with Junctions, Symlinks, and Reparse Points?

An SMB file sharing daemon could be implemented entirely in user
space, using the regular user-space I/O API to access the underlying
file system. This is what Samba does on UNIX and UNIX-like operating
systems. Some things work better with kernel support, though. For
example, an OpLock should be broken not only when a remote client
tries to access a file previously opened by another remote client, but
also when that file is opened locally. Moreover, the local call to
CreateFile() should block in the file system stack until the OpLock
has been broken and acknowledged remotely, to preserve SMB semantics
and cache coherency. These things cannot happen without kernel
support. I am curious how Windows does this. What exactly is
implemented in KM and what in UM?

I am also curious about how the SMB server handles Junctions, Symlinks
and Reparse Points. Is everything implemented in the file system
stack, or does the server implement some explicit support? If I `dir`
a remote share, I still see the Junctions and Symlinks themselves, not
their targets. How is this implemented?

Thanks,


Aram Hăvărneanu

> I have some curiosity regarding how SMB file sharing works in Windows.
> Specifically, what is done in UM and what is done in KM, how is the
> SMB server notified when an OpLock is broken and how does the server
> interact with Junctions, Symlinks or Reparse Points.

Over the last few days I have been investigating the LanMan SRV. It
seems SRV (and SRV2) is implemented as a user-mode network server that
communicates with a kernel-mode file system (implemented in srv.sys
and srv2.sys) via TDI. I have some questions regarding this.

SRV is implemented as a file system, but it is not a real file system.
Its only functional dispatch routine is the one for FSCTL. Why is SRV
a file system when a simple driver would have done just fine?

SRV seems to bypass the I/O manager, apparently for efficiency. SRV
tries to call the Fast I/O routines directly, and if that doesn't
succeed, it creates IRPs manually instead of relying on the I/O
manager. These IRPs are never completed by the I/O manager and are
reused; the server seems to keep a pool of IRPs to draw from as
necessary, so it doesn't create an IRP for every operation. SRV also
avoids an additional memory copy by using the raw MDLs when completing
requests. How does SRV know the target DeviceObject of the underlying
FSD it needs for a given operation? Normally, with the Zw*File() APIs,
IRPs are created by the I/O manager, and the I/O manager determines
the target DO. How does SRV do this? Also, what are the advantages of
manually creating IRPs instead of calling Zw*File() *AFTER* it has
been determined (via the Fast I/O path) that the requested data is not
in the cache? The request would be processed in the FSP anyway, and
the I/O manager overhead for non-cached data would seem minor.
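The recycling pattern described above, pre-allocating request
descriptors and reusing them instead of allocating one per operation,
can be sketched in miniature. This is a generic object pool in Python,
not srv.sys code; the Request and RequestPool names are invented for
illustration, with Request standing in for an IRP:

```python
from collections import deque

class Request:
    """Stand-in for an IRP: a reusable request descriptor."""
    __slots__ = ("op", "buffer", "status")

    def reset(self, op):
        # Re-initialize in place instead of constructing a new object.
        self.op, self.buffer, self.status = op, None, "pending"

class RequestPool:
    """Keep a pool of pre-allocated requests and recycle them,
    mirroring the described SRV behaviour of reusing IRPs that are
    never completed back to the I/O manager."""

    def __init__(self, size):
        self._free = deque(Request() for _ in range(size))
        self.allocations = size            # total objects ever created

    def acquire(self, op):
        if self._free:
            req = self._free.popleft()     # reuse an existing request
        else:
            req = Request()                # pool exhausted: allocate
            self.allocations += 1
        req.reset(op)
        return req

    def release(self, req):
        self._free.append(req)             # recycle rather than free
```

Under steady load the allocation count stays flat, which is the whole
point: the per-operation cost becomes a reset, not an allocation plus
an I/O-manager completion round trip.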

Thanks,


Aram Hăvărneanu