Kernel pool header innards?

I’m trying to look at some kernel pool corruption, and I’d like to
understand the 8 bytes of header on each kernel pool block. The latter
4 bytes are clearly the pool tag. The first two bytes refer to the
previous block and the next two bytes refer to the current block. For
small allocations, the two block descriptors seem to give the number of
8-byte blocks of the allocations, for example, an 0x20-byte allocation
would have “04 00”. For larger sizes, there seems to be some scaling
that goes on, but I can’t figure it out. I’ve seen a 0x9a0-byte
allocation as “34 01” and a 0x660-byte one as “cc 02”. The in-use/free
bit must also be hidden in there somewhere.

Can anyone describe how that all works? I’m most interested in the
current header scheme (XP and up), rather than the older one (NT), but
if anyone knows the older one (which used 16-byte blocks and I don’t
know if there was scaling), that would be useful as well.

Try searching the web for POOL_HEADER or POOL_BLOCK_HEADER; you will find
some reverse-engineering results.

The size fields are IIRC logarithmic - i.e. Size == ( 1 << Field ).


Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com


Thanks; that helped.

The structure is as follows (note the unioning):
kd> dt _POOL_HEADER
+0x000 PreviousSize : Pos 0, 9 Bits
+0x000 PoolIndex : Pos 9, 7 Bits
+0x002 BlockSize : Pos 0, 9 Bits
+0x002 PoolType : Pos 9, 7 Bits
+0x000 Ulong1 : Uint4B
+0x004 ProcessBilled : Ptr32 _EPROCESS
+0x004 PoolTag : Uint4B
+0x004 AllocatorBackTraceIndex : Uint2B
+0x006 PoolTagHash : Uint2B

The key point is that the size fields are now 9 bits instead of 8, still
counted in 8-byte blocks; the upper 7 bits of each 16-bit word hold
PoolIndex and PoolType, which explains the apparent scaling in my
examples. The in-use/free state is presumably carried in PoolType.
