Re: Network Redirector

Hi,

I need some pointers: I am writing a Network Redirector and need to know
some things…

Firstly, I basically just want to catch and filter UNC paths, and if a path
is one of a predefined set, grab the file over HTTP and supply a local file
handle. This will be for reading only.

Can I create the NR with NOT_IMPLEMENTED stubs for all the dispatch methods
and register with MUP? I don’t really understand how this all works yet.

How can I pass through any UNC paths that are not in the selected set and
change the ones that need to be changed?
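From what I can tell so far, the pass-through part hinges on the prefix-claim
step: when RDBSS asks the mini-redirector to claim a \\server name (the
srv-call callback, MRxCreateSrvCall in the IFS Kit sample), failing that call
should make MUP offer the UNC to the other installed providers instead.
Something like this is what I had in mind - just a sketch, the helper, the
server names and the status-code choice are my guesses, and the real callback
signature needs to come from the sample:

#include <ntifs.h>
/* plus the RDBSS mini-redirector headers from the IFS Kit (rx.h / mrx.h) */

/* Hypothetical list of \\server names we will satisfy over HTTP. */
static const PCWSTR ClaimedServers[] = { L"HTTPFILES", L"MYCACHE" };

BOOLEAN
ShouldClaimPath(
    PCUNICODE_STRING ServerName  /* "HTTPFILES" out of \\HTTPFILES\share\x */
    )
{
    UNICODE_STRING candidate;
    ULONG i;

    for (i = 0; i < sizeof(ClaimedServers) / sizeof(ClaimedServers[0]); i++) {
        RtlInitUnicodeString(&candidate, ClaimedServers[i]);
        if (RtlEqualUnicodeString(ServerName, &candidate, TRUE)) {
            return TRUE;   /* ours: fetch over HTTP, hand back a local handle */
        }
    }
    return FALSE;
}

/* In the srv-call callback:
 *
 *   if (!ShouldClaimPath(&serverName)) {
 *       // Not ours: tell RDBSS/MUP we don't handle this name, so MUP
 *       // offers the UNC to the other registered providers (the normal
 *       // pass-through behaviour).
 *       return STATUS_BAD_NETWORK_PATH;
 *   }
 */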

Are there any dispatch methods that MUST be set in the dispatch table
registered with RDBSS?
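
To make the question concrete, this is roughly the shape I was imagining -
routine and structure names are from my (possibly wrong) memory of the IFS
Kit nulmrx sample, so treat it as pseudocode:

#include <ntifs.h>
/* plus the RDBSS mini-redirector headers from the IFS Kit (rx.h / mrx.h) */

/* Catch-all stub for calldown routines I haven't implemented yet;
 * RDBSS calldowns generally take a PRX_CONTEXT and return NTSTATUS. */
NTSTATUS
MyMrxNotImplemented(
    IN OUT PRX_CONTEXT RxContext
    )
{
    UNREFERENCED_PARAMETER(RxContext);
    return STATUS_NOT_IMPLEMENTED;
}

/* Registration outline (exact arguments need to come from rdbss.h and the
 * sample, not from here):
 *
 *  1. DriverEntry calls RxDriverEntry(DriverObject, RegistryPath) first so
 *     RDBSS can initialize itself.
 *  2. Fill in a MINIRDR_DISPATCH table; point everything unsupported at
 *     MyMrxNotImplemented, but expect at least the srv-call / net-root /
 *     create / read calldowns to be exercised for a read-only redirector.
 *  3. Call RxRegisterMinirdr(...) with that table and a device name such
 *     as \Device\MyHttpRedirector.
 *  4. Register that device as a network provider (NetworkProvider key and
 *     provider order) so MUP hands the claimed prefixes to RDBSS, which
 *     then calls down into the mini-redirector.
 */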

Any pointers on where to begin would really be helpful, guys.

Alberto Moreira wrote:

When I say “seems to work”, I mean, the shipping product is kind of
mature and I haven’t heard as yet of an issue from that general
direction. Yet I’ll deal with VMs when I have to cross that bridge.
Processor performance is not an issue, this is such a chip-intensive
business that I can afford to wallow and splurge in processor power.
Some configurations, especially those running AquariusNet, may have two
or four chips in a system, so, again, we’re talking about a lot of
I/O. Also, we do handle cache coherence in machines where we need to.
Now, this is a 64-bit peripheral on a 64-bit bus, and my major worry
is not being able to see some RAM address on a bus far to my north.
Yet I thought that PCI translation went a long way towards handling
differences between I/O and System bus addressing, or am I wrong?

Alberto.

----- Original Message ----- From: “Jake Oshins”

> Newsgroups: ntdev
> To: “Windows System Software Devs Interest List”
> Sent: Monday, October 17, 2005 1:10 PM
> Subject: Re:[ntdev] When will 64-bit address DMA actually fail without
> IoGetDmaAdapter
>
>
>> I was about to reply to your original post. But you’ve covered the
>> issues nicely for yourself.
>>
>> To summarize, your drivers will break on chipsets that need extra
>> cache coherence help and on virtualized systems where there is an I/O
>> MMU. Neither of these is particularly common today, but they’ll be
>> much more common in the near future. The drivers will also break on
>> non-x86 machines where DMA addresses don’t equal CPU-relative physical
>> addresses, but those machines have become very uncommon in the last five
>> years.
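
(For anyone following along, the documented path being contrasted with raw
MmGetPhysicalAddress looks roughly like this - a sketch only, with the
device-specific names and constants made up; only IoGetDmaAdapter and
GetScatterGatherList are the real interface under discussion:)

#include <wdm.h>

#define MY_MAX_TRANSFER (64 * 1024)          /* made-up per-transfer limit */

/* Callback the HAL invokes with the finished SCATTER_GATHER_LIST. */
VOID
MyAdapterListControl(
    PDEVICE_OBJECT DeviceObject,
    PIRP Irp,
    PSCATTER_GATHER_LIST ScatterGather,
    PVOID Context
    )
{
    UNREFERENCED_PARAMETER(DeviceObject);
    UNREFERENCED_PARAMETER(Irp);
    UNREFERENCED_PARAMETER(Context);
    /* Program the device from ScatterGather->Elements[] here; when the
     * transfer finishes, release the list with PutScatterGatherList. */
    UNREFERENCED_PARAMETER(ScatterGather);
}

/* Once, at start-device time: describe the device and get an adapter. */
NTSTATUS
MyGetDmaAdapter(
    PDEVICE_OBJECT Pdo,
    PDMA_ADAPTER *AdapterOut,
    PULONG MaxMapRegisters
    )
{
    DEVICE_DESCRIPTION desc;

    RtlZeroMemory(&desc, sizeof(desc));
    desc.Version           = DEVICE_DESCRIPTION_VERSION;
    desc.Master            = TRUE;           /* bus-master DMA */
    desc.ScatterGather     = TRUE;
    desc.Dma64BitAddresses = TRUE;           /* device decodes 64-bit addresses */
    desc.InterfaceType     = PCIBus;
    desc.MaximumLength     = MY_MAX_TRANSFER;

    *AdapterOut = IoGetDmaAdapter(Pdo, &desc, MaxMapRegisters);
    return (*AdapterOut != NULL) ? STATUS_SUCCESS
                                 : STATUS_INSUFFICIENT_RESOURCES;
}

/* Per transfer (at DISPATCH_LEVEL, e.g. from StartIo): let the HAL build
 * the scatter/gather list.  On systems with an I/O MMU, bounce buffers or
 * extra coherence work, the addresses handed to MyAdapterListControl are
 * NOT what MmGetPhysicalAddress would have returned - which is the point. */
VOID
MyStartTransfer(
    PDEVICE_OBJECT DeviceObject,
    PIRP Irp,
    PDMA_ADAPTER DmaAdapter,
    BOOLEAN WriteToDevice
    )
{
    PMDL mdl = Irp->MdlAddress;

    (VOID) DmaAdapter->DmaOperations->GetScatterGatherList(
               DmaAdapter,
               DeviceObject,
               mdl,
               MmGetMdlVirtualAddress(mdl),
               MmGetMdlByteCount(mdl),
               MyAdapterListControl,
               Irp,
               WriteToDevice);
}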
>>
>> --
>> Jake Oshins
>> Windows Kernel Group
>>
>> This posting is provided “AS IS” with no warranties, and confers no
>> rights.
>>
>>
>> “Jan Bottorff” wrote in message
>> news:xxxxx@ntdev…
>>
>>>> I wonder if you could elaborate? The driver I inherited also
>>>> uses MmGetPhysicalAddress, and it seems to work fine. However, I
>>>> do have the option of copying the user buffer to/from a kernel
>>>> buffer, and doing the DMA - including building the
>>>> Scatter-Gather list - on the kernel buffer. So, any specific
>>>> cases will be highly welcome!
>>>
>>>
>>> I’ve been pondering what architectures will have a problem and had a
>>> few thoughts.
>>>
>>> 1) The new Intel (and future AMD) hardware CPU virtualization must
>>> create the situation where a physical address as seen by the processor
>>> running the OS in a virtual machine != a physical address as seen by
>>> ALL buses. Things like VMware must have this issue too, but they also
>>> will not run arbitrary devices. I’m planning on finding the Intel
>>> virtualization specs to understand this better. I’m curious whether
>>> part of the driving force behind the virtual bus driver / virtual
>>> function driver architecture is to allow a path that works on
>>> virtualized copies of the OS. Properly designed, it offhand seems like
>>> you would run one copy of the virtual bus driver on a hypervisor and
>>> then each instance of the OS just runs instances of the virtual
>>> function drivers. It seems like you would need some sort of virtual
>>> resources passed to the function driver’s AddDevice routine to
>>> describe how the function driver and bus driver communicate.
>>>
>>> 2) In the recent past, CPU memory caches were automatically kept
>>> coherent by hardware. When I think about things like SMP AMD systems
>>> (essentially a NUMA architecture), it seems extremely inefficient for
>>> EVERY processor to have to snoop its cache on EVERY cache-line DMA. It
>>> seems very desirable to just DMA data into one of the memory groups
>>> without creating snoop traffic across the HyperTransport. I don’t know
>>> if it’s REQUIRED for PCI(-X) and PCIe to handle cache coherence in
>>> hardware, or if that’s just how many systems happen to be built. If
>>> hardware doesn’t handle this, it seems like a range of cache on EVERY
>>> processor will need to get flushed before a DMA transfer happens. The
>>> OS could just generate inter-CPU interrupts (or perhaps there is a way
>>> to generate special bus cycles that processors can snoop to flush
>>> caches in parallel). The Windows DMA model would just automatically do
>>> whatever is needed; doing this explicitly in a driver might be hard.
>>> Any hardware engineers out there who know what modern bus specs say
>>> about cache coherence?
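
(The hooks for that flushing already sit in the documented sequence:
KeFlushIoBuffers before the device starts and FlushAdapterBuffers when the
transfer completes are near no-ops on snooping x86 chipsets, but they are
where the work would land on a platform that is not coherent in hardware.
A sketch only - the map-register handle and transfer parameters are assumed
to come from the usual AllocateAdapterChannel/MapTransfer sequence:)

#include <wdm.h>

/* Sketch: where the cache-maintenance calls sit around a packet-based DMA
 * transfer.  MapRegisterBase, CurrentVa and Length are assumed to have come
 * from the normal AllocateAdapterChannel / MapTransfer sequence. */
VOID
MyDmaFlushExample(
    PDMA_ADAPTER DmaAdapter,
    PMDL Mdl,
    PVOID MapRegisterBase,
    PVOID CurrentVa,
    ULONG Length,
    BOOLEAN WriteToDevice
    )
{
    /* Before starting the device: make the CPU caches and the memory the
     * device will see agree.  ReadOperation is the inverse of WriteToDevice. */
    KeFlushIoBuffers(Mdl, (BOOLEAN)!WriteToDevice, TRUE);

    /* ... program the device, transfer runs, completion interrupt ... */

    /* After completion: flush any HAL-owned map registers / bounce buffers
     * so the CPU sees what the device wrote (or vice versa). */
    (VOID) DmaAdapter->DmaOperations->FlushAdapterBuffers(
               DmaAdapter,
               Mdl,
               MapRegisterBase,
               CurrentVa,
               Length,
               WriteToDevice);
}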
>>>
>>> 3) The document on Windows DMA mentions IA64 processors having some
>>> issues if you don’t use the Windows DMA model, although, not being an
>>> IA64 expert, I don’t know the details.
>>>
>>> 4) What does it mean to say something “works”? I’d personally be very
>>> unhappy if my server corrupted a cache line of data once a month.
>>> Testing for this seems especially difficult. It seems possible you
>>> could create a program that generated a test I/O load with predictable
>>> data. After running for some time period, like 1 month or 6 months,
>>> you could then verify that the data stored on disk matched what it
>>> should be. I assume you’d have to have test systems and control
>>> systems that differ in only one component. I’m not really a believer
>>> that you can test quality into software; my experience is that
>>> software quality is significantly determined by the process. I know
>>> things like disk drives have reliability data available on uncorrected
>>> errors per X gigabytes transferred. The real question comes down to:
>>> all computer hardware has some unavoidable level of data corruption,
>>> so do the driver and hardware components we add significantly degrade
>>> the system-wide level of data corruption? There is also the question:
>>> will customers KNOW about data corruption? As an engineer, data
>>> corruption is VERY serious to me, although at some companies
>>> management will just view it as the loss of a few customers. I
>>> actually think the whole open source movement, where basically nobody
>>> is legally responsible for anything, is going in the wrong direction
>>> in terms of making computers and their makers accountable and
>>> responsible. But that’s a whole other discussion.
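
(Jan's "predictable data" soak test is cheap to prototype; a minimal
user-mode sketch of the idea - the file name, block size and count are
arbitrary - where every block is a pure function of its block number, so a
verify pass months later can regenerate the expected bytes instead of
keeping a golden copy:)

#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE  4096u
#define NUM_BLOCKS  (256u * 1024u)          /* 1 GB per pass; arbitrary */
#define RUN_SEED    0x20051017u             /* arbitrary fixed seed */

/* Every block's contents are a pure function of (RUN_SEED, block number). */
static void fill_block(unsigned long block, unsigned char *buf)
{
    unsigned long x = RUN_SEED ^ (block * 2654435761u);
    unsigned long i;

    for (i = 0; i < BLOCK_SIZE; i++) {
        x = x * 1664525u + 1013904223u;     /* standard LCG step */
        buf[i] = (unsigned char)(x >> 24);
    }
}

int main(int argc, char **argv)
{
    int verify = (argc > 1 && strcmp(argv[1], "verify") == 0);
    static unsigned char expect[BLOCK_SIZE], actual[BLOCK_SIZE];
    unsigned long b;
    FILE *f = fopen("soaktest.dat", verify ? "rb" : "wb");

    if (!f) { perror("soaktest.dat"); return 1; }
    for (b = 0; b < NUM_BLOCKS; b++) {
        fill_block(b, expect);
        if (verify) {
            /* Regenerate the expected pattern and compare with the disk. */
            if (fread(actual, 1, BLOCK_SIZE, f) != BLOCK_SIZE ||
                memcmp(expect, actual, BLOCK_SIZE) != 0) {
                fprintf(stderr, "mismatch in block %lu\n", b);
                return 2;
            }
        } else if (fwrite(expect, 1, BLOCK_SIZE, f) != BLOCK_SIZE) {
            perror("write");
            return 2;
        }
    }
    fclose(f);
    puts(verify ? "verify OK" : "write pass done");
    return 0;
}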
>>>
>>> It’s hard to say how many of these might be a problem in the immediate
>>> future on current OSes (i.e. W2K3) vs. future OSes (i.e. Longhorn
>>> Server in 2007 or 2008). It does seem like a problem for a company to
>>> sell a potentially expensive product that will not evolve and function
>>> as customers assume in the near future, although the computer industry
>>> also seems not to worry much about obsolescence.
>>>
>>> - Jan
>>>