Peter…
You are much closer to the “real mailing list” data than I am.
Do you have any idea why my messages are showing up line-wrapped in such
an odd way?
I swear I had hard returns everywhere I should have had them in this
morning’s posts. I post in “plain text” format, etc.
Yet it seems some of the hard returns get stripped, either by Outlook
(not likely, because if I send to myself, going all the way out to the
net and back, it comes back OK)… or somewhere else in the path.
If this keeps up I’m moving my mail operations back to the Alpha running
VMS.
— Jamie Hanrahan
Azius Developer Training http://www.azius.com/
Kernel Mode Systems http://www.cmkrnl.com/
Windows Driver Consulting and Training
-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of Peter Viscarola
Sent: Tuesday, March 11, 2003 09:38
To: NT Developers Interest List
Subject: [ntdev] Re: More questions about mapping user &
kernel mode memory

Actually, Jamie’s characterization (just sent) of what the
I/O manager does is much clearer than mine… Sorry, too
much multi-tasking.

My point stands about MmGetSystemAddressForMdlSafe. There’s
no extraordinary overhead here. And it must be used any time
access to the buffer is required outside the context of the
requesting thread, as with PIO devices in the storage stack.

Sorry for any confusion,
p
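
[Editor’s note: the call Peter refers to is used roughly as follows. This
is an illustrative sketch, not code from the thread; it assumes a WDM
driver handling a METHOD_IN/OUT_DIRECT request, where the I/O manager has
already probed, locked, and built the MDL. The function name
MapDirectIoBuffer is made up for the example.]

```c
#include <ntddk.h>

/* Illustrative fragment: map a Direct I/O buffer into system address
 * space so it can be accessed outside the requesting thread's context.
 * For METHOD_IN_DIRECT / METHOD_OUT_DIRECT IOCTLs, the I/O manager has
 * already locked the user pages and built Irp->MdlAddress. */
NTSTATUS MapDirectIoBuffer(PIRP Irp, PVOID *SystemVa)
{
    PMDL mdl = Irp->MdlAddress;

    if (mdl == NULL) {
        return STATUS_INVALID_PARAMETER; /* e.g., zero-length buffer */
    }

    /* The "Safe" variant returns NULL (instead of bugchecking) if the
     * system is out of PTEs; NormalPagePriority lets the request fail
     * under memory pressure rather than dip into reserves. */
    *SystemVa = MmGetSystemAddressForMdlSafe(mdl, NormalPagePriority);
    if (*SystemVa == NULL) {
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    /* The mapping (and the TLB-flush cost discussed below) is undone
     * when the MDL is freed at I/O completion. */
    return STATUS_SUCCESS;
}
```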
“Peter Viscarola” wrote in message
news:xxxxx@ntdev…
>
> “Phil Barila” wrote in message
> news:xxxxx@ntdev…
> >
> > I was under the impression that this (along with the probe & lock,
> > etc…) is what the I/O Manager does to provide access to
> > METHOD_IN/OUT_DIRECT user-mode buffers.
>
> No, you’re absolutely correct. This is precisely what the I/O Manager
> does for Direct I/O.
>
> There’s a long-standing urban legend that claims mapping a buffer as
> for direct I/O (and using MmGetSystemAddressForMdlSafe) causes “a lot
> of overhead.” In fact, doing the mapping entails nothing more than
> allocating and filling in the necessary PTE entries. Certainly not a
> lot of overhead, the way I look at it.
>
> Now, there IS a hit that the system takes when UNmapping this sort of
> a buffer. Any time you invalidate a virtual-to-physical address
> mapping, the TLB will need to be flushed. So that’s a “hit.” But
> everything costs SOMEthing. And getting data is what a driver is
> about.
>
> If this were truly “high overhead,” then the entire storage path
> wouldn’t be based on the use of Direct I/O, right?
>
> I think the origin of this myth is some statements in the DDK years
> ago. Old myths die hard.
>
> P
>
> —
> You are currently subscribed to ntdev as: xxxxx@cmkrnl.com
> To unsubscribe send a blank email to xxxxx@lists.osr.com