Hello everyone,
The usual memory allocation routines (ExAllocatePoolXXX) have a limitation based on the type of pool.
Basically, on 32-bit systems the nonpaged pool is limited to about 256 MB and the paged pool to roughly 500-650 MB.
I wanted to know if there is a way to have a buffer of around 1-1.5 GB ?
I know that this might result in a lot of paging. What are the possible ways?
Thank you,
Tushar.
> I wanted to know if there is a way to have a buffer of around 1-1.5 GB ?
Before you ask us the above question, ask yourself another two:
1. How on Earth is the system going to find contiguous address space to hold a buffer that large???
2. Why on Earth does this buffer have to be *contiguous*, when exactly the same objective can be achieved simply by breaking it into separate parts and chaining them together (see the sketch at the end of this message)???
Therefore, your only limitation is the size of the paged pool itself - and I am afraid 1.5 GB is too much to allocate from the paged pool, even if you break it into separate allocations.
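For illustration, a minimal sketch of what "breaking it into separate parts and chaining them together" could look like in a driver - the chunk size, the pool tag and the BIG_BUFFER_CHUNK name are made up for this example, and the allocations have to happen at IRQL < DISPATCH_LEVEL because they come from paged pool:

#include <ntddk.h>

#define BIGBUF_TAG       'fbgB'
#define BIGBUF_CHUNKSIZE (64 * 1024)    /* 64 KB pieces instead of one huge block */

typedef struct _BIG_BUFFER_CHUNK {
    LIST_ENTRY Link;        /* chains the chunks together          */
    SIZE_T     Length;      /* bytes actually stored in this chunk */
    UCHAR      Data[1];     /* variable-length payload follows     */
} BIG_BUFFER_CHUNK, *PBIG_BUFFER_CHUNK;

/* Allocate TotalBytes of logically contiguous storage as a chain of
   small paged-pool chunks; free everything and fail cleanly if the
   pool runs out part-way through. */
NTSTATUS AllocateChainedBuffer(SIZE_T TotalBytes, PLIST_ENTRY Head)
{
    SIZE_T remaining = TotalBytes;

    InitializeListHead(Head);

    while (remaining != 0) {
        SIZE_T thisChunk = (remaining < BIGBUF_CHUNKSIZE) ? remaining
                                                          : BIGBUF_CHUNKSIZE;
        PBIG_BUFFER_CHUNK chunk = (PBIG_BUFFER_CHUNK)ExAllocatePoolWithTag(
            PagedPool,
            FIELD_OFFSET(BIG_BUFFER_CHUNK, Data) + thisChunk,
            BIGBUF_TAG);

        if (chunk == NULL) {
            /* give back everything allocated so far */
            while (!IsListEmpty(Head)) {
                PLIST_ENTRY entry = RemoveHeadList(Head);
                ExFreePoolWithTag(CONTAINING_RECORD(entry, BIG_BUFFER_CHUNK, Link),
                                  BIGBUF_TAG);
            }
            return STATUS_INSUFFICIENT_RESOURCES;
        }

        chunk->Length = thisChunk;
        InsertTailList(Head, &chunk->Link);
        remaining -= thisChunk;
    }

    return STATUS_SUCCESS;
}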
Anton Bassov
Besides Anton’s very good questions, can you tell the group why you think
you need this buffer? There are tricks you can play to achieve this, but
most of them are going to result in slow response.
–
Don Burn (MVP, Windows DDK)
Windows 2k/XP/2k3 Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Remove StopSpam to reply
“How on Earth is the system going to find contiguous address space to hold a buffer that large”
Anton sir, I never said that it has to be contiguous. 
So, even if it is in chunks, how is it possible?
Thank you,
Tushar
I was expecting that question, Don.
I am exploring ways of doing things. I am a beginner; a question popped into my mind, I read up on it, and found out about the limitations on the pool sizes.
I read about functions MmAllocateMappingAddress & MmMapLockedPagesWithReservedMapping.
Are they useful in this case?
Thank you.
Tushar.
There are tricks for doing this by creating a dummy process and then using that process's address space to allocate the memory you need. This means that all the accesses have to be in the context of the dummy process; again, there are tricks/techniques for doing that.
But the bottom line is that this buys you very little except in rare circumstances; in general, something is very wrong with your design if you need this much memory on a 32-bit system. The calls you mentioned will not work, because they still need to take the memory out of the system address space, and you are asking for half to three quarters of all of that space.
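To make the point about those two calls concrete, this is roughly the pattern they are designed for (reserve a fixed window of system VA once, map an MDL into it later) - a sketch only, with a made-up tag and a deliberately modest window size; the reservation itself consumes system address space, which is exactly why a 1-1.5 GB reservation cannot fly on x86:

#include <ntddk.h>

#define MAPWIN_TAG  'wpaM'
#define MAPWIN_SIZE (1 * 1024 * 1024)   /* a modest 1 MB window, NOT 1.5 GB */

/* Reserve the window once, e.g. at device start time.  This consumes
   MAPWIN_SIZE bytes of *system* virtual address space whether or not
   anything is ever mapped into it. */
PVOID ReserveWindow(VOID)
{
    return MmAllocateMappingAddress(MAPWIN_SIZE, MAPWIN_TAG);
}

/* Later, map a locked-down MDL (describing no more than MAPWIN_SIZE
   bytes) into the reserved range, use it, then unmap it again. */
PVOID MapIntoWindow(PVOID Window, PMDL Mdl)
{
    return MmMapLockedPagesWithReservedMapping(Window, MAPWIN_TAG,
                                               Mdl, MmCached);
}

VOID UnmapFromWindow(PVOID Window, PMDL Mdl)
{
    MmUnmapReservedMapping(Window, MAPWIN_TAG, Mdl);
}

VOID ReleaseWindow(PVOID Window)
{
    MmFreeMappingAddress(Window, MAPWIN_TAG);
}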
–
Don Burn (MVP, Windows DDK)
Windows 2k/XP/2k3 Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Remove StopSpam to reply
Thank you Don sir for your response.
This means that unless I have dedicated hardware (which works as memory), it is a bad idea to end up with a design of this kind. Right?
I am still learning and have only just started… how could I possibly design something like this?
Sometimes a question just pops up while reading…
That's it.
Thank you very much.
Tushar
No, if your hardware provides that big a memory space you are going to have to map and unmap it in pieces, since you have to have it mapped, which is not the same as allocating paged memory. There are reasons for the tricks I mentioned, but they are rare.
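As a sketch of that map-and-unmap-in-pieces approach for device memory (the physical base would come from the translated resources handed to the driver; the 4 MB window size and the function name are arbitrary example choices):

#include <ntddk.h>

#define DEVMEM_WINDOW_SIZE (4 * 1024 * 1024)   /* map 4 MB of the device at a time */

/* Map one window of a large device memory region, starting at byte
   Offset from the translated physical base.  The caller uses the
   returned VA and then calls MmUnmapIoSpace() on it (with the same
   length) before mapping the next piece. */
PVOID MapDeviceWindow(PHYSICAL_ADDRESS RegionBase,   /* from the translated resources   */
                      SIZE_T           RegionLength, /* e.g. the full multi-GB aperture */
                      SIZE_T           Offset,
                      SIZE_T          *BytesMapped)
{
    PHYSICAL_ADDRESS windowBase;
    SIZE_T           windowLength;

    if (Offset >= RegionLength) {
        *BytesMapped = 0;
        return NULL;
    }

    windowLength = RegionLength - Offset;
    if (windowLength > DEVMEM_WINDOW_SIZE) {
        windowLength = DEVMEM_WINDOW_SIZE;
    }

    windowBase.QuadPart = RegionBase.QuadPart + (LONGLONG)Offset;

    *BytesMapped = windowLength;
    return MmMapIoSpace(windowBase, windowLength, MmNonCached);
}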
–
Don Burn (MVP, Windows DDK)
Windows 2k/XP/2k3 Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Remove StopSpam to reply
Don,
Just in case the physical memory is >= 2 GB, can I make use of the physical pages and map them for my own use?
Again, from what you said before I gather that 1-1.5 GB is too much, but can I still use the mapping of physical pages to get considerably more memory than I could get through the pools? Right?
-Tushar
Again, you have to reserve it, which requires tricks that should not be used on a general-purpose system but could be OK for an embedded system. Then you have to map the memory piecemeal, since there is not enough address space in a 32-bit OS for this stuff.
At that point, I would look at a 64-bit system since then you can do more,
and with Windows Server 2008 the restrictions on memory pools have been
relaxed.
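For completeness, a rough sketch of the "grab physical pages up front, map them into system space only while they are needed" idea using the documented MDL routines - the 64 MB figure is just an example, and on a general-purpose 32-bit system tying up this much RAM is still questionable for all the reasons above:

#include <ntddk.h>

/* Ask the memory manager for up to 64 MB worth of physical pages.  The
   pages are not mapped anywhere yet - they are only described by the
   returned MDL - so no system VA is consumed at this point.  The call
   may return an MDL describing fewer bytes than requested. */
PMDL GrabPhysicalPages(VOID)
{
    PHYSICAL_ADDRESS low, high, skip;

    low.QuadPart  = 0;
    high.QuadPart = MAXLONGLONG;    /* any physical address will do */
    skip.QuadPart = 0;

    return MmAllocatePagesForMdl(low, high, skip, 64 * 1024 * 1024);
}

/* Map the pages into system space only while they are actually being
   accessed - the mapping, not the pages, is the scarce resource. */
PVOID MapPages(PMDL Mdl)
{
    return MmMapLockedPagesSpecifyCache(Mdl, KernelMode, MmCached,
                                        NULL,   /* let Mm pick the VA     */
                                        FALSE,  /* no bugcheck on failure */
                                        NormalPagePriority);
}

VOID ReleaseEverything(PMDL Mdl, PVOID MappedVa)
{
    if (MappedVa != NULL) {
        MmUnmapLockedPages(MappedVa, Mdl);
    }
    MmFreePagesFromMdl(Mdl);
    ExFreePool(Mdl);                /* the MDL itself must still be freed */
}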
–
Don Burn (MVP, Windows DDK)
Windows 2k/XP/2k3 Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Remove StopSpam to reply
> I would look at a 64-bit system since then you can do more,
Sure - 1.5 GB is just funny when you are able to address more than 8 TB of memory (IIRC, Windows does not use the full 256 TB of virtual address space that 48-bit x86_64 implementations provide).
> and with Windows Server 2008 the restrictions on memory pools have been relaxed.
I think the size constraints that we encounter in 32-bit OSes are, in practical terms, meaningless for a 64-bit OS - the only constraint you have is the amount of physically available RAM…
In any case, it just does not matter in the context of this discussion - if I got it right, the OP looks at the whole thing from a purely theoretical point of view. Therefore, if you tell him to use a 64-bit OS, he will just change his requirements and tell you that he needs a 4 TB buffer…
Anton Bassov
Correct me if I am wrong, but MDLs in 64-bit Windows are limited to a ridiculously small size, just under 32 MB. This is in fact just half the size of the already constrained 32-bit Windows limit.
> MDL’s in 64-bit Windows are limited to a ridiculously small size just under 32MB.
Well, when it comes to MDLs, the constraints are not going to get relaxed - after all, the fact that you run a 64-bit OS does not increase the amount of physical RAM on your machine, does it???
Don't forget that most machines capable of running 64-bit OSes have just 2-4 GB of RAM. Therefore, the constraints should remain the same.
The only exception to that are the OSes written specifically for powerful machines, i.e. the server editions. I don't know whether the above-mentioned limitation applies to them - after all, 64-bit Windows comes in different flavors…
Anton Bassov
Yes. Since pointers are twice the size they are on 32 bits, MDLs on 64-bit are effectively limited to describing half as much memory.
–
Mark Roddy
> As pointers are twice the size from 32bits, MDLs in 64bit are effectively limited to half the size.
As long as the same formula for calculating the maximum MDL size is used (i.e. the total of the variable-size PFN array plus the size of the MDL structure itself cannot exceed 64K), the above statement holds true. However, as you said yourself on another thread, the information about the maximum MDL size that the IoAllocateMdl() documentation provides may be obsolete…
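A quick back-of-the-envelope check of that formula (the header and PFN-entry sizes below are the usual x86/x64 values, hard-coded from memory so this builds as a plain user-mode program - treat them as approximate):

#include <stdio.h>

int main(void)
{
    const unsigned long ceiling  = 65535;   /* documented 64K limit for header + PFN array */
    const unsigned long pageSize = 4096;

    const unsigned long hdr32 = 28, pfn32 = 4;   /* x86: sizeof(MDL), sizeof(PFN_NUMBER) */
    const unsigned long hdr64 = 48, pfn64 = 8;   /* x64 equivalents                      */

    unsigned long pages32 = (ceiling - hdr32) / pfn32;   /* about 16376 pages */
    unsigned long pages64 = (ceiling - hdr64) / pfn64;   /* about 8185 pages  */

    printf("x86: ~%lu MB per MDL\n", pages32 * pageSize / (1024 * 1024));  /* ~63 MB */
    printf("x64: ~%lu MB per MDL\n", pages64 * pageSize / (1024 * 1024));  /* ~31 MB */
    return 0;
}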
Anton Bassov
The MDL size limit is a true limit, as I described. It is a problem for some drivers today. Because 64-bit Windows cuts the size in half, even more drivers hit that ceiling.
Ok, I am going to go out on this limb again and see if I fall off.
The documented limit in IoAllocateMdl is a limit only for IoAllocateMdl. The CSHORT Size field is routinely ignored everywhere except in IoAllocateMdl. "Everywhere" meaning that you can use MmCreateMdl to build a much bigger MDL, or just allocate a glob of memory and roll your own. The actual enforced limits (outside of IoAllocateMdl) are 2 GB on x86 and 4 GB on x64.
Note that since all API-based (Zw or Nt) I/O uses IoAllocateMdl, that is a pretty effective barrier.
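For the record, the "allocate a glob of memory and roll your own" variant presumably looks something like the sketch below, using MmInitializeMdl rather than IoAllocateMdl (whether the rest of the stack is happy to receive such an MDL is a separate question):

#include <ntddk.h>

#define BIGMDL_TAG 'ldmB'

/* Build an MDL for a VirtualAddress/Length range that may be larger
   than IoAllocateMdl is documented to accept.  The caller still has to
   probe-and-lock (or otherwise fill in) the pages before the MDL is
   usable, and must eventually free the allocation back to the pool. */
PMDL BuildOversizedMdl(PVOID VirtualAddress, SIZE_T Length)
{
    /* Header plus one PFN_NUMBER per page spanned by the buffer. */
    SIZE_T mdlBytes = sizeof(MDL) +
        ADDRESS_AND_SIZE_TO_SPAN_PAGES(VirtualAddress, Length) * sizeof(PFN_NUMBER);

    PMDL mdl = (PMDL)ExAllocatePoolWithTag(NonPagedPool, mdlBytes, BIGMDL_TAG);
    if (mdl == NULL) {
        return NULL;
    }

    /* Fills in StartVa, ByteOffset, ByteCount - and the CSHORT Size
       field, which will wrap for a truly huge MDL; per the discussion
       above that field is largely ignored outside IoAllocateMdl. */
    MmInitializeMdl(mdl, VirtualAddress, Length);
    return mdl;
}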
–
Mark Roddy
Ok, that's good information. But the problem is that many Windows components a driver interacts with use IoAllocateMdl, so you can't work around the limit in those cases. One example where this is a problem: storage devices must artificially limit their transfer-length capabilities or else they will fail to start.
Right, but the OP's question was about allocation within a driver, and I am not at all convinced that the limit imposed by IoAllocateMdl is a barrier in this case.
Storage devices more generally get stuck with the even smaller cluster allocation size of the file system above them.
–
Mark Roddy