Allocating a big non-contiguous physical buffer in user space

Hello,

I’m developing a KMDF device driver for a PCIe card.

This card receives data at a rate of ~2.5Gb/sec x 4 channels.

The FPGA writes the data to physical RAM preallocated using
AllocateCommonBuffer.

After trying this on Windows XP SP3 and Windows Server 2008 x64, it
seems AllocateCommonBuffer fails to allocate a buffer greater than
32MB.

Many experts in this forum warned me that using AllocateCommonBuffer
is a bad idea.

But I wanted to keep the way the old driver (and FPGA) worked. Now I
see those experts were right.

So I want to allocate a big user-space buffer (e.g. 128MB) that is not
contiguous but constructed from physical pages.

How can I make sure my buffer stays in physical RAM only?

Then I have to pass the driver the information about the pages
backing this buffer, using an IOCTL.

I have to give the FPGA this information in advance.

I do not want the FPGA to wait for the user level to pass the
information via a read queue.

Thanks,
Zvika.

You call DeviceIoControl with the buffer address, and the kernel builds the MDL, then you lock the buffer. In pre-Vista, a single MDL could only map under 32MB. Now this limit is lifted.

On a system where max. MDL is 32 MB, you can send down several ioctl
requests and pend them.

Note that mapping of the user buffer into kernel space can fail
(MmGetSystemAddressForMdlSafe can return NULL!), so have “plan B” for this
case.

– pa
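
A minimal sketch of this flow, assuming a KMDF driver and a METHOD_OUT_DIRECT ioctl (the callback below is illustrative, not the poster's actual code). The I/O manager has already probed, locked, and built the MDL by the time the request arrives; the NULL check is the failure case that needs the "plan B":

    #include <ntddk.h>
    #include <wdf.h>

    VOID EvtIoDeviceControl(WDFQUEUE Queue, WDFREQUEST Request,
                            size_t OutputBufferLength, size_t InputBufferLength,
                            ULONG IoControlCode)
    {
        PMDL mdl = NULL;
        PVOID sysVa;
        NTSTATUS status;

        UNREFERENCED_PARAMETER(Queue);
        UNREFERENCED_PARAMETER(OutputBufferLength);
        UNREFERENCED_PARAMETER(InputBufferLength);
        UNREFERENCED_PARAMETER(IoControlCode);

        /* For METHOD_OUT_DIRECT the I/O manager already locked the user
           buffer and built the MDL; the driver just retrieves it. */
        status = WdfRequestRetrieveOutputWdmMdl(Request, &mdl);
        if (!NT_SUCCESS(status)) {
            WdfRequestComplete(Request, status);
            return;
        }

        /* A system-space mapping is only needed if the driver touches the
           buffer with the CPU; DMA is programmed from the MDL's page list.
           This is the call that can return NULL. */
        sysVa = MmGetSystemAddressForMdlSafe(mdl, NormalPagePriority);
        if (sysVa == NULL) {
            WdfRequestComplete(Request, STATUS_INSUFFICIENT_RESOURCES);
            return;
        }

        /* ... program the DMA from the MDL and pend the request ... */
    }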

See below…

Hello,

I’m developing a KMDF device driver for a PCIe card.

This card receives data at a rate of ~2.5Gb/sec x 4 channels.

The FPGA writes the data to physical RAM preallocated using
AllocateCommonBuffer.

After trying this on Windows XP SP3 and Windows Server 2008 x64, it
seems AllocateCommonBuffer fails to allocate a buffer greater than
32MB.

Many experts in this forum warned me that using AllocateCommonBuffer
is a bad idea.

But I wanted to keep the way the old driver (and FPGA) worked. Now I
see those experts were right.

So I want to allocate a big user-space buffer (e.g. 128MB) that is not
contiguous but constructed from physical pages.
****
Note that every page of memory is constructed from physical pages.
****

How can I make sure my buffer stays in physical RAM only?
*****
Well, since you can’t put it any other place, there’s no way to avoid
putting your buffer in physical memory. If you mean “not paged out”,
which is a different question, then you can either use Direct mode I/O,
which will cause the I/O Manager to do an MmProbeAndLockPages, or use
Neither mode I/O, and do all the (really complex) work yourself. There
are times when Neither mode is appropriate, such as when the user has
large buffers that cannot be successfully locked down, so you have to
build partial MDLs and split the transfer into multiple transfers. But an
app cannot create “a buffer in physical memory” because an app has no
concept of what “physical memory” means.
*****

Then I have to pass the driver the information of the pages
constructing this buffer using IOCTL.
****
Why? Why isn’t ReadFile, WriteFile, or a DeviceIoControl that does a
single transfer sufficient? You are trying to invent a gratuitously
complex solution to a problem that was solved decades ago.
****

I have to give the FPGA this information in advance.
****
No, you have to give it this information when you start the transfer. “In
advance” has no meaning in this context.
****

I do not want the FPGA to wait for the user level to pass the
information via a read queue.
*****
First, you have to realize that the user level does not pass information
via a queue. The queue is what YOU do. More importantly, you are
thinking in terms of synchronous I/O, and you may find that using async
I/O and “pumping up” the queue with a number of buffers solves your
problem.

Are you supporting scatter/gather?

When I did this, I was getting data overruns until I put 40 ReadFile
requests in via async I/O. Note there is a problem (already discussed in
this forum) that IRPs are not necessarily put into an I/O Completion Port
(IOCP) in the order they are actually completed, but in my case, the
buffers had sequence numbers and they could have been reassembled if
necessary.
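
A minimal user-mode sketch of this approach: keep N overlapped ReadFiles outstanding on an I/O completion port and repost each buffer the moment it completes. The device path, buffer size, and request count here are illustrative, and error handling is omitted:

    #include <windows.h>

    #define NUM_REQUESTS 40           /* the count that stopped the overruns above */
    #define BUF_SIZE     (1 << 20)    /* 1MB per request, a page multiple */

    int main(void)
    {
        HANDLE dev = CreateFileW(L"\\\\.\\MyFpgaDevice", GENERIC_READ, 0, NULL,
                                 OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
        HANDLE iocp = CreateIoCompletionPort(dev, NULL, 0, 0);
        static OVERLAPPED ov[NUM_REQUESTS];
        static BYTE *buf[NUM_REQUESTS];
        int i;

        for (i = 0; i < NUM_REQUESTS; i++) {
            buf[i] = VirtualAlloc(NULL, BUF_SIZE, MEM_COMMIT | MEM_RESERVE,
                                  PAGE_READWRITE);
            /* pends: returns FALSE with GetLastError() == ERROR_IO_PENDING */
            ReadFile(dev, buf[i], BUF_SIZE, NULL, &ov[i]);
        }

        for (;;) {
            DWORD bytes; ULONG_PTR key; LPOVERLAPPED done;
            if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &done, INFINITE))
                break;
            i = (int)(done - ov);
            /* ... consume buf[i]; completions may arrive out of order ... */
            ReadFile(dev, buf[i], BUF_SIZE, NULL, done);   /* repost at once */
        }
        return 0;
    }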

You need to restate the problem in terms of what you are trying to
accomplish and why a 32MB common buffer is inadequate. Would it work if
you had two or more 32MB common buffers? Would that change the picture?
If the FPGA supported scatter/gather, this would help a lot.

Note that the reason it fails might be that the kernel address space is
fragmented and there are not 32MB of contiguous virtual addresses left to
map it to.
*****

Thanks,
Zvika.



Plan B can be one of two forms: app-only or driver-only. For an app, if
you do an I/O operation (let’s just say ReadFile for simplicity of
discussion), and it fails with insufficient resources, then you can split
it into two ReadFile operations of half the size, repeating recursively
until you are down to page-size transfers. Slow, lots of chances for
overrun. In the driver, you have to use Neither mode, because otherwise
the I/O Manager will do the MmProbeAndLockPages, and when that fails, it’s
the I/O Manager that fails the IRP and you never see it. So what you have
to do is your own MmProbeAndLockPages, and if it fails, you build a partial
MDL (say, half the size of the input buffer), lock it down, and when it
completes, you then build a partial MDL for the rest, and lock it down.
This is fun, because you can’t do MmProbeAndLockPages from a DPC, so you
have to have a passive-level thread to do this, and this introduces tons
of latency.
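
A minimal sketch of that driver-side fallback, with an illustrative helper name; this must run at PASSIVE_LEVEL. MmProbeAndLockPages raises an exception rather than returning a status, hence the __try/__except:

    #include <ntddk.h>

    /* Lock the largest prefix of the user buffer that Mm will allow,
       halving the length on each failure. The caller must eventually call
       MmUnlockPages and IoFreeMdl on the returned MDL. */
    static PMDL LockLargestPrefix(PVOID UserVa, SIZE_T Length, SIZE_T *Locked)
    {
        while (Length >= PAGE_SIZE) {
            PMDL mdl = IoAllocateMdl(UserVa, (ULONG)Length, FALSE, FALSE, NULL);
            if (mdl != NULL) {
                __try {
                    MmProbeAndLockPages(mdl, UserMode, IoWriteAccess);
                    *Locked = Length;
                    return mdl;
                } __except (EXCEPTION_EXECUTE_HANDLER) {
                    IoFreeMdl(mdl);       /* lock failed; retry smaller */
                }
            }
            Length /= 2;
        }
        *Locked = 0;
        return NULL;
    }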

It is a very complex, multidimensional design space, and there is no “one
right” answer.
joe


Hello,

My FPGA will support scatter-gather.

The goal is that the data will be written from the FPGA directly to the
allocated RAM.

I do not want to copy the data once again using ReadFile.

I need to allocate more than 32MB because I want as much data as possible.

Currently the FPGA works like this: it gets a start address and size of the
buffer allocated by AllocateCommonBuffer.

Upon a ‘start’ command written to the FPGA (which is memory mapped), the data
is written to the physical buffer.

When the FPGA reaches the end of the buffer, it starts writing from the start
again.

Currently the FPGA does not support scatter-gather, but as I said this will
change soon.

Thanks,
Zvika.


“Zvi Vered” wrote in message news:xxxxx@ntdev…

> Hello,
>
> My FPGA will support scatter-gather.

Maz’l tov :)

> The goal is that the data will be written from the FPGA directly to the
> allocated RAM.
>
> I do not want to copy the data once again using ReadFile.

Fine. Then the common buffer is not needed any more.
Queue enough requests from user mode (as Mr. Newcomer advised - about 40)
and program the device to receive directly into the buffers of these requests.
Complete the requests as they are filled, and the app will send new ones.
Each request can be less than 32MB. For simplicity you can make each request
aligned on a page border, and its size an integer number of pages.
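
A small sketch of that last detail (names illustrative): VirtualAlloc already returns page-aligned memory, so the app only needs to round the size up to a page multiple:

    #include <windows.h>

    static void *AllocRequestBuffer(SIZE_T want, SIZE_T *rounded)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);              /* si.dwPageSize is 4096 on x86/x64 */
        *rounded = (want + si.dwPageSize - 1) & ~((SIZE_T)si.dwPageSize - 1);
        /* VirtualAlloc base addresses are page-aligned by definition */
        return VirtualAlloc(NULL, *rounded, MEM_COMMIT | MEM_RESERVE,
                            PAGE_READWRITE);
    }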

This design has several “interesting points”: how to detect filled-up
requests and switch to the next one (and complete finished ones),
how to add new requests to the pipeline without stopping it, and how to cope
with out-of-order request completion in the app…
–pa

Dear Pavel,

A data rate of 2.5Gb/sec is not rare nowadays. There are COTS 10Gb Ethernet
adapters.

So I wonder what is the right way to handle such a data rate.

Each time the user calls ReadFile, the kernel builds a scatter-gather list
and initiates a DMA transfer.

This is well demonstrated in the PLX9x5x sample PCI driver.

But the external data stream cannot be paused.

Let’s say a PC receives UDP packets via a 10Gb/sec interface.

User space calls “recvfrom” in an endless while loop to read those
packets.

But Windows may decide to do “housekeeping” tasks in the background, and there
are moments when the receive application does not get CPU time.

Can you tell me how data is not lost in this case?

Thanks,
Zvika.


Hello,

My FPGA will support scatter-gather.

****
Do you know what scatter-gather is? From the description below, it sounds
like you need a contiguous buffer, which suggests that it can’t support
scatter-gather.
****

The goal is that the data will be written from the FPGA directly to the
allocated RAM.

****
This is how Direct I/O mode works.
****

I do not want to copy the data once again using ReadFile.

****
So set DO_DIRECT_IO in the device object.
****
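
In KMDF terms (the poster's driver is KMDF), the equivalent is set on the WDFDEVICE_INIT before WdfDeviceCreate; note it governs ReadFile/WriteFile buffers, while for ioctls the transfer type is encoded in the control code itself (e.g. METHOD_OUT_DIRECT). A one-line sketch:

    /* KMDF equivalent of DO_DIRECT_IO for read/write requests */
    WdfDeviceInitSetIoType(DeviceInit, WdfDeviceIoDirect);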

I need to allocate more than 32MB because I want as much data as possible.

****
No, *someone* needs to allocate more than 32MB, but why do you think it
has to be you? The app, for example, could allocate it.

I did not know there was a restriction on MDLs to 32MB, but this would
mean that you simply can’t *use* more than 32MB in a single transfer. So
a larger buffer would require multiple transfers, as I described. I don’t
know if the old USB example from earlier Windows still appears in the WDK,
but it shows how to do this in great detail.
****

Currently the FPGA works like this: it gets a start address and size of the
buffer allocated by AllocateCommonBuffer.

*****
It sounds like it handles one address/size pair. For scatter-gather, you
would have a list of address/size pairs, and you would program your device
with the start address and length of that list. It would then
successively fetch address/size pairs for data transfer, and when it
exhausted the list, it would interrupt.
*****
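
A sketch of such a list as a driver might fill it from the WDM SCATTER_GATHER_LIST produced for the locked buffer. The HW_SG_DESC layout is hypothetical (whatever the FPGA defines), and the descriptor array itself must live in memory the device can fetch from (e.g. a small common buffer):

    #include <ntddk.h>

    typedef struct _HW_SG_DESC {
        ULONG64 PhysAddr;   /* bus address of this segment */
        ULONG   Length;     /* bytes in this segment */
        ULONG   Flags;      /* e.g. bit 0 = last descriptor */
    } HW_SG_DESC;

    static VOID FillHwSgList(PSCATTER_GATHER_LIST SgList, HW_SG_DESC *HwList)
    {
        ULONG i;
        for (i = 0; i < SgList->NumberOfElements; i++) {
            HwList[i].PhysAddr = (ULONG64)SgList->Elements[i].Address.QuadPart;
            HwList[i].Length   = SgList->Elements[i].Length;
            HwList[i].Flags    = (i == SgList->NumberOfElements - 1) ? 1 : 0;
        }
        /* then write the list's device-visible base address and element
           count to the FPGA and issue the "start" command */
    }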

Upon a ‘start’ command written to the FPGA (which is memory mapped), the data
is written to the physical buffer.
******
It doesn’t matter how you access the control and status registers of the
FPGA. What is important is that they exist, and there is a known “start”
command.
******

When the FPGA reaches the end of the buffer, it starts writing from the start again.

*****
That’s not scatter-gather; that’s circular buffering, which is quite
different.
*****

Currently the FPGA does not support scatter-gather, but as I said this will be
changed soon.

*****
As soon as you add scatter/gather, you let the application allocate the
storage. You deal with the storage you are given by the user. So there
is no particular reason for the driver to have to allocate a buffer.
*****

Thanks,
Zvika.


If you actually have buffer overrun problems, then you need to do internal
buffer allocations and do copies. You can’t avoid the copy in this case.

I have found that it is generally not “housekeeping” tasks (internal
driver threads, DPCs, other ISRs) that are the problem; it is the
horrendously slow round trip to the application. Which is why I used
async I/O and shoved lots of ReadFiles down. In my test harness, I had a
spin control that selected how many IRPs I’d send down. At 35, I was
getting data overruns. At 40, I did not. So I actually ended up sending
down 50, so I’d have some headroom. The result was that we did not have
to spend months rewriting a driver to do internal buffering; the problem
was entirely solved by rewriting the app, which was a vastly simpler task
(as in: it took me five hours).
joe


“Zvi Vered” wrote in message news:xxxxx@ntdev…

> But the external data stream cannot be paused.

But your machine likely has several processors. Ideally, one CPU
handles the SG DMA in the driver, another runs the user-mode code
to process the data, etc.

> But Windows may decide to do “housekeeping” tasks in the background, and there
> are moments when the receive application does not get CPU time.
>
> Can you tell me how data is not lost in this case?

This is a very hard question; I cannot answer it.
One short and well-known answer is “Windows isn’t a realtime OS”,
and AFAIK Microsoft never claimed that Windows is suitable for RT
applications.

Only tests on a specific target system can show real latencies
and prove the feasibility of what you’re doing.
Even if your tests show good results, that still will not be a formal proof.

– pa

> Maz’l tov :)

Sorry, but what is “maz’l tov”? Is it “go on” in Hebrew?


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com

> Can you tell me how data is not lost in this case?

Then AFD will allocate temporary kernel memory and keep the UDP data there until the next recvfrom().


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com

Such devices usually have a ring buffer of descriptors, each describing a physical address and length of a data buffer segment. Most of the time the segments are page-aligned and multiple pages in size. You fill these using the SGL.

The device has a counter for the number of BufDescriptors it has handled (you can make it 32 bits) and the driver writes a counter of BufDescriptors it has built in the ring buffer. The driver takes care not to fill more than a ring buffer’s worth of descriptors. When the device runs out of buffers (the “handled” count reaches the “built” count) an underrun condition should be detected and an appropriate error bit set.
The device also needs an “interrupt threshold” register. When the “handled” count reaches the threshold, an interrupt should be generated. The driver usually sets the threshold after the descriptors of the next IRP.

Such a design will allow you to post a number of discontiguous IRP buffers to the device, complete them back as the data comes in, and post more, while keeping some number of buffers always pre-posted.
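
A sketch of that counter scheme, with a hypothetical descriptor layout and register names. Both counters are free-running 32-bit values compared by unsigned subtraction, so wraparound needs no special casing:

    typedef struct _HW_SG_DESC {
        unsigned long long PhysAddr;
        unsigned long      Length;
        unsigned long      Flags;
    } HW_SG_DESC;

    #define RING_SIZE 256   /* descriptors in the ring */

    typedef struct _RING {
        HW_SG_DESC    Desc[RING_SIZE];   /* shared with the device */
        unsigned long Built;             /* descriptors the driver has written */
    } RING;

    /* handledFromHw is the device's free-running "handled" counter, read
       from a register. Returns 0 if the ring is full. */
    static int RingPost(RING *r, unsigned long long addr, unsigned long len,
                        unsigned long handledFromHw)
    {
        if (r->Built - handledFromHw >= RING_SIZE)
            return 0;                    /* posting now would overrun the ring */

        HW_SG_DESC *d = &r->Desc[r->Built % RING_SIZE];
        d->PhysAddr = addr;
        d->Length   = len;
        d->Flags    = 0;
        r->Built++;
        /* publish r->Built to the device's "built" register here */
        return 1;
    }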

“Maz’l tov” in Hebrew is like “Congratulations”.

Zvika.


Maxim S. Shatskih wrote:

>> Maz’l tov :)
> Sorry, but what is “maz’l tov”? Is it “go on” in Hebrew?

I’ve usually seen it spelled “mazel tov”. It means “good luck” in
Yiddish and is used as a term of congratulations, or as a toast.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

“Tim Roberts” wrote in message news:xxxxx@ntdev…
> Maxim S. Shatskih wrote:
>>> Maz’l tov :)
>> Sorry, but what is “maz’l tov”? Is it “go on” in Hebrew?
>
> I’ve usually seen it spelled “mazel tov”. It means “good luck” in
> Yiddish and is used as a term of congratulations, or as a toast.
>
> –
> Tim Roberts, xxxxx@probo.com
> Providenza & Boekelheide, Inc.
>

In this specific case, it stands for “good luck” in its other meaning :-(

– pa