Using MmMapIoSpace to map physical memory

I’m trying to write a driver that will scan physical memory for a memory forensics application. From the research I’ve done, I see that the kernel API MmMapIoSpace has been used to accomplish this. I am a hobbyist, and I know this has been discussed before on this forum, but I have specific questions. To use the function you must specify the permitted caching behavior using the MEMORY_CACHING_TYPE enumeration. The Windows documentation for the enumeration states that:

Processor translation buffers cache virtual to physical address translations. These translation buffers allow many virtual addresses to map a single physical address.

I thought that the purpose of caching address translations was to avoid having to walk the page tables on every reference to a virtual address. What would be the purpose of multiple virtual addresses mapping to a single physical address?

_…if a driver maps two different virtual address ranges to the same physical address, it must ensure that it specifies the same caching behavior for both._

The MmMapIoSpace routine maps a given physical address range to system space. What happens if more than one driver attempts to map the same physical address at the same time while specifying different caching attributes?

https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/wdm/ne-wdm-_memory_caching_type

What would be the purpose of multiple virtual addresses mapping to a single physical address?

For one, it’s quite common to have a user-mode mapping and a kernel-mode mapping to a single physical address. That happens, for example, every time you do direct I/O. The debugger needs to do that in order to display physical memory. You might have multiple processes in memory accessing a single physical page; each would have its own virtual mapping.
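To make the direct-I/O case concrete, here is a minimal sketch in WDM terms (the IRP handling around it is omitted, and the helper name is made up for the example). The caller's user-mode buffer is one virtual mapping of the pages; mapping the MDL into system space yields a second one:

```c
#include <wdm.h>

// Direct I/O: the I/O manager has already locked the caller's buffer and built
// an MDL describing its physical pages (the user-mode mapping is the first
// virtual address). Mapping the MDL into system space yields a second virtual
// address for the very same physical pages.
NTSTATUS MapCallersBuffer(_In_ PIRP Irp, _Out_ PVOID *SystemVa)  // hypothetical helper
{
    PMDL mdl = Irp->MdlAddress;
    if (mdl == NULL) {
        return STATUS_INVALID_PARAMETER;
    }

    // Kernel-mode alias of the user buffer's physical pages.
    *SystemVa = MmGetSystemAddressForMdlSafe(mdl, NormalPagePriority);
    return (*SystemVa != NULL) ? STATUS_SUCCESS : STATUS_INSUFFICIENT_RESOURCES;
}
```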

What happens if more than one driver attempts to map the same physical address at the same time while specifying different caching attributes?

The documentation page you mentioned tells you this. The fact is, virtually every page you are likely to encounter is MmCached, so the issue doesn’t arise very often.


I don’t know why a driver would specify MmNonCached or any value other than MmCached, but it seems that a driver does have the option to specify other caching attributes. You stated that it rarely happens. But what prevents multiple drivers from mapping the same physical address with different caching attributes? Thanks so much for your response.

I don’t know why a driver would specify MmNonCached or any value other than MmCached

Simply because it may want to map not only RAM but device BARs as well, and memory-mapped devices are mapped for non-cached access.

but it seems that a driver does have the option to specify other caching attributes.

There are different types of devices that may be mapped into memory, so that you may want to map them differently…
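As an illustration of the BAR case, here is a hedged sketch of the usual pattern: the translated CmResourceTypeMemory resource handed to the driver at start-device time is mapped non-cached because it is device register space, not RAM (the helper name is made up for the example):

```c
#include <wdm.h>

// Map a memory-mapped device BAR described by a translated PnP resource.
// Device registers must not be cached, so MmNonCached is used here.
PVOID MapDeviceBar(_In_ const CM_PARTIAL_RESOURCE_DESCRIPTOR *Desc)  // hypothetical helper
{
    ASSERT(Desc->Type == CmResourceTypeMemory);

    return MmMapIoSpace(Desc->u.Memory.Start,   // physical base of the BAR
                        Desc->u.Memory.Length,  // size reported by the bus driver
                        MmNonCached);           // device registers: uncached access
}
```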

But what prevents multiple drivers from mapping the same physical address with different caching attributes?

Nothing, but why would you want to slow down the machine by mapping the “regular” RAM for UC access???

Anton Bassov

But what prevents multiple drivers from mapping the same physical address with different caching attributes?

Practicality. Most of us are living in the real world, where drivers need to Do Things. Playing around with the caching attributes is not productive.

Just for the record, there are many, many, many, many parameters and options in the system APIs that are there for rarely used (or imagined) corner cases, which are essentially never used in the real world.

But what prevents multiple drivers from mapping the same physical address with different caching attributes?

The memory manager actually prevents this. It has for years (since at least Vista, IIRC), ever since it was discovered that differing cache attributes on simultaneous mappings can cause data corruption. IIRC, again, it simply ignores any conflicting attribute and the user gets whatever caching has already been established.

Peter

The memory manager actually prevents this. It has for years (since at least Vista, IIRC), ever since it was discovered that differing cache attributes on simultaneous mappings can cause data corruption. IIRC, again, it simply ignores any conflicting attribute and the user gets whatever caching has already been established.

Please note that, unlike MmMapLockedPagesSpecifyCache() and friends, MmMapIoSpace() may be used for mapping pages that have no corresponding entries in the PFN database (i.e., ones that correspond to device BARs rather than to RAM). Therefore, IIRC, MmMapIoSpace() does not deal with MM structures like the PFN database et al. - it just modifies a PTE the way you have specified.
As a result, unlike MmMapLockedPagesSpecifyCache(), it is simply not in a position to ignore the specified caching attributes.

Therefore, if someone is just desperate to specify conflicting caching attributes for the same physical page, MmMapIoSpace() seems, indeed, to be the right way to go…

Anton Bassov


It’s not my goal to intentionally specify a conflicting caching attribute. I guess my question is how do I avoid specifying a conflicting caching attribute when I call MmMapIoSpace()? Is there an option not to specify the MEMORY_CACHING_TYPE value or is there a way to know what the caching attribute is before I call MmMapIoSpace()? Please forgive me if my questions are too elementary.

Easy. Unless you definitively KNOW something different, specify MmCached.

I guess my question is how do I avoid specifying a conflicting caching attribute when I call MmMapIoSpace()?

Assuming that you are mapping the “regular” RAM pages, you should always specify the MmCached attribute. The only situations in which you may want to use a non-cached or write-combined caching type are when mapping device BARs and frame buffers, respectively. If you are mapping device BARs, you are supposed to be the owner of the target device anyway, so there is no possibility of a conflict whatsoever; mapping frame buffers is a rather marginal case that is reserved for display drivers…
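Applied to the original forensics use case, that advice boils down to something like the following sketch (assuming the physical address really is ordinary RAM; the helper and its parameters are placeholders):

```c
#include <wdm.h>

// Read one page of ordinary RAM by temporarily mapping it into system space.
// Regular RAM gets MmCached, per the advice above.
NTSTATUS ReadPhysicalPage(_In_ PHYSICAL_ADDRESS PageAddress,
                          _Out_writes_bytes_(PAGE_SIZE) PVOID OutBuffer)  // hypothetical helper
{
    PVOID va = MmMapIoSpace(PageAddress, PAGE_SIZE, MmCached);
    if (va == NULL) {
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    RtlCopyMemory(OutBuffer, va, PAGE_SIZE);
    MmUnmapIoSpace(va, PAGE_SIZE);  // always undo the mapping when done
    return STATUS_SUCCESS;
}
```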

Anton Bassov

What Anton said… while recognizing that you’re deliberately abusing the API and using it for a purpose for which it is not intended.

I’m still not sure this API fails to check for mapping consistency when mapping general-purpose RAM… though Anton is very sure… but, regardless, if you follow Anton’s guidelines above, at best you will be fine (this is a hobby project, so who cares, really), and at worst the Mm enforces consistency and changes whatever you ask for to make it match the preexisting mappings.

Is there an option not to specify the MEMORY_CACHING_TYPE value

No. Because, again, you are deliberately using the API in a situation it was not designed to support. So… yeah. Good luck.

Peter

It’s not my intention to purposely abuse the API. I’ve spent a considerable amount of time researching which API to use to read physical memory and it seems to me that MmMapIoSpace() is the best candidate. I know that a few other tools use this API for the same purpose. Maybe I should be using MmGetPhysicalMemoryRanges() instead. Or maybe I just need to do more research.
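For what it’s worth, MmGetPhysicalMemoryRanges() only reports where RAM lives; it doesn’t map anything by itself. A minimal sketch of how it is typically used (the terminator convention and the pool cleanup are the points worth noting; the helper name is made up):

```c
#include <ntddk.h>

// MmGetPhysicalMemoryRanges() returns a pool-allocated array of
// PHYSICAL_MEMORY_RANGE entries terminated by an all-zero entry;
// the caller is responsible for freeing the array.
VOID DumpPhysicalMemoryRanges(VOID)  // hypothetical helper
{
    PPHYSICAL_MEMORY_RANGE ranges = MmGetPhysicalMemoryRanges();
    if (ranges == NULL) {
        return;
    }

    for (ULONG i = 0;
         ranges[i].BaseAddress.QuadPart != 0 || ranges[i].NumberOfBytes.QuadPart != 0;
         i++) {
        DbgPrint("RAM range %lu: base 0x%I64x, length 0x%I64x\n",
                 i,
                 ranges[i].BaseAddress.QuadPart,
                 ranges[i].NumberOfBytes.QuadPart);
    }

    ExFreePool(ranges);
}
```

Such a list could then be walked page by page, mapping each page MmCached as in the earlier sketch.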

So read Tim’s post again, and your problem is solved.

It’s not my intention to purposely abuse the API

Really? Then, you might wanna note that the API has the terms “map” and “IO Space” in the name. Consider that a clue. If you’re not mapping I/O space, you’re intentionally abusing the API… deliberately using it for a purpose for which it was not designed and in a way that is not supported.

Peter

I’m still not sure this API fails to check for mapping consistency when mapping general purpose RAM… though Anton is very sure…

I’ve got to retract my statement on the subject - it looks like I put my foot in my mouth yet another time. I just forgot that the WRK is available without any restrictions these days, so we are always in a position to use it as a reference when we speak about the implementation of the API in question. I checked it in the WRK, and MmMapIoSpace() does, indeed, get to the PFN database and check the consistency of the requested mapping with the existing ones.

If you request a UC or WC mapping and the target page is part of a large page, the request fails right on the spot, because there is a possibility of creating an incoherent overlapping TB entry at some point. Otherwise, a conflicting cache-type request simply gets overridden.

Anton Bassov