There are a lot of serious problems associated with passing user-mode pointers into kernel-mode components. And these problems apply to nearly all current OS platforms, not just NT.
First, consider all the potential problems that would exist if user-mode processes simply passed pointers to kernel-mode drivers. Every kernel-mode driver would have to implement its own buffer verification, which means that 95% of them would either not do it at all, or would simply get it wrong. The Windows kernel simplifies this: if the request is marked "buffered" (METHOD_BUFFERED), it allocates kernel memory (pool) for each I/O control request and shuttles the data back and forth itself. The kernel handles validating addresses and buffer bounds, copying the data, freeing the memory, and so on. This lets device-driver developers focus on the logic of the device, not on the details of getting memory safety and security correct. And before you complain that it's slow: it isn't, in most cases, or at least it is far from the bottleneck in most driver designs.
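To make that concrete, here is a minimal sketch of a METHOD_BUFFERED dispatch routine. The IOCTL code IOCTL_MYDEV_GET_STATS and the MYDEV_STATS structure are hypothetical, made up just for illustration; the point is that the driver only ever touches Irp->AssociatedIrp.SystemBuffer, which is kernel pool that the I/O manager allocated and will copy back to the caller when the request completes:

    NTSTATUS
    MyDevDeviceControl(PDEVICE_OBJECT DeviceObject, PIRP Irp)
    {
        PIO_STACK_LOCATION stack = IoGetCurrentIrpStackLocation(Irp);
        ULONG outLen = stack->Parameters.DeviceIoControl.OutputBufferLength;
        NTSTATUS status = STATUS_INVALID_DEVICE_REQUEST;
        ULONG_PTR info = 0;

        UNREFERENCED_PARAMETER(DeviceObject);

        if (stack->Parameters.DeviceIoControl.IoControlCode == IOCTL_MYDEV_GET_STATS) {
            if (outLen >= sizeof(MYDEV_STATS)) {
                // SystemBuffer is kernel memory; probing, locking, and
                // copying are the I/O manager's problem, not the driver's.
                PMYDEV_STATS stats = (PMYDEV_STATS)Irp->AssociatedIrp.SystemBuffer;
                RtlZeroMemory(stats, sizeof(*stats));
                stats->RequestsServed = 42;   // placeholder data
                info = sizeof(MYDEV_STATS);
                status = STATUS_SUCCESS;
            } else {
                status = STATUS_BUFFER_TOO_SMALL;
            }
        }

        Irp->IoStatus.Status = status;
        Irp->IoStatus.Information = info;
        IoCompleteRequest(Irp, IO_NO_INCREMENT);
        return status;
    }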
And when it is too slow, your driver can choose either "direct" I/O (METHOD_IN_DIRECT or METHOD_OUT_DIRECT), which uses Memory Descriptor Lists (MDLs) to lock down and identify user-mode buffers, or "neither" I/O (METHOD_NEITHER), which hands your driver totally raw user-mode pointers. MDLs are structures that the kernel uses to describe locked-down user-mode buffers; the most common use for MDLs is to set up DMA transfers with drivers. (MDLs contain the locked-down physical page addresses of the virtual range.)
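Here is a sketch of the direct-I/O side, as a hypothetical fragment from the same dispatch routine. For direct requests the I/O manager has already probed and locked the user buffer and described it with the MDL at Irp->MdlAddress; the driver just maps it into system address space before touching it:

    PIO_STACK_LOCATION stack = IoGetCurrentIrpStackLocation(Irp);
    PVOID buffer = NULL;
    NTSTATUS status;

    // Map the locked-down user buffer described by the MDL into system
    // address space. The pages cannot be freed or paged out from under
    // us while the MDL holds them.
    if (Irp->MdlAddress != NULL) {
        buffer = MmGetSystemAddressForMdlSafe(Irp->MdlAddress,
                                              NormalPagePriority);
    }

    if (buffer == NULL) {
        status = STATUS_INSUFFICIENT_RESOURCES;
    } else {
        // 'buffer' is a system-space alias for the caller's pages, so it
        // remains valid in an arbitrary process context (for example, a
        // DMA completion path). Here we just zero it as placeholder work.
        RtlZeroMemory(buffer,
                      stack->Parameters.DeviceIoControl.OutputBufferLength);
        status = STATUS_SUCCESS;
    }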
Raw user-mode pointers (METHOD_NEITHER) are the most dangerous and difficult transfer mode to get right, and they should only be used in situations where you have proved to yourself that the performance of the buffered and direct approaches is inadequate.
Consider all those problems, all over again, when dealing with user-mode memory that itself contains pointers. The driver must handle probing, locking down, and validating every pointer it touches. And just because the driver read value V1 from a location at time T1 doesn't mean it will see the same value at time T2; another user-mode thread can rewrite that location in between (the classic time-of-check/time-of-use, or "double fetch", problem). This opens a huge hole for security exploits coming from user-mode components, unless the driver developer is extremely cautious, experienced, and willing to spend (waste?) time on this topic.
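For the METHOD_NEITHER case, the minimum a driver has to do looks roughly like the sketch below. The MYDEV_REQUEST structure is hypothetical; the important parts are the __try/__except around ProbeForRead and the fact that the user data is captured ONCE into a kernel-side copy, so a later re-read can't be raced by user mode:

    PIO_STACK_LOCATION stack = IoGetCurrentIrpStackLocation(Irp);
    PVOID userBuffer = stack->Parameters.DeviceIoControl.Type3InputBuffer;
    ULONG inLen = stack->Parameters.DeviceIoControl.InputBufferLength;
    MYDEV_REQUEST localCopy;
    NTSTATUS status;

    if (inLen < sizeof(MYDEV_REQUEST)) {
        status = STATUS_BUFFER_TOO_SMALL;
    } else {
        __try {
            // Only meaningful in the requesting process's context, at
            // PASSIVE_LEVEL; nothing has been validated or locked for us.
            ProbeForRead(userBuffer, sizeof(MYDEV_REQUEST),
                         TYPE_ALIGNMENT(MYDEV_REQUEST));
            RtlCopyMemory(&localCopy, userBuffer, sizeof(MYDEV_REQUEST));
            status = STATUS_SUCCESS;
        } __except (EXCEPTION_EXECUTE_HANDLER) {
            status = GetExceptionCode();
        }
    }
    // From here on, validate and use only localCopy, never userBuffer.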
In general, you should work hard to avoid such designs – they are excruciatingly difficult to get right. In every single case that I’ve seen, the better design was to consider what data you wanted to move from the app to the driver, and from the driver to the app, and to design a serialized data format that fits in the InputBuffer and OutputBuffer arguments of the DeviceIoControl call. Also, instead of using pointers, use integer offsets within the buffer.
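As a purely hypothetical example of what that can look like, here is a flat request layout where variable-length pieces are referenced by byte offsets from the start of the buffer rather than by pointers. The driver only has to range-check each offset/length pair against the buffer size it was actually given:

    typedef struct _MYDEV_FLAT_REQUEST {
        ULONG Size;          // total size of the buffer, including payload
        ULONG NameOffset;    // byte offset of a WCHAR name string
        ULONG NameLength;    // length of that string, in bytes
        ULONG DataOffset;    // byte offset of the raw payload
        ULONG DataLength;    // length of the payload, in bytes
        // variable-length payload follows the fixed header
    } MYDEV_FLAT_REQUEST, *PMYDEV_FLAT_REQUEST;

    // Example bounds check. 'req' points at the (already captured) buffer
    // and 'inLen' is the input length reported by the I/O manager.
    if (req->NameOffset < sizeof(MYDEV_FLAT_REQUEST) ||
        req->NameOffset > inLen ||
        req->NameLength > inLen - req->NameOffset) {
        status = STATUS_INVALID_PARAMETER;
    }

The same layout means exactly the same thing to a 32-bit and a 64-bit caller, which also sidesteps the next problem.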
Another problem with exchanging pointers between driver and app occurs when you consider 64-bit platforms. On 64-bit platforms, drivers *must* be 64-bit; the kernel will simply not load 32-bit drivers. But the OS supports both 32-bit and 64-bit apps. There’s a lot of jiggery-pokery behind the scenes to make this work, and luckily your driver can ignore the difference 99% of the time. But if you’re passing pointers between your driver and app, you do have to deal with it, and it’s never fun.
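If you do end up with pointer-sized fields in a structure shared with the app, the driver has to account for the caller's bitness explicitly. A rough sketch of the usual pattern, assuming the structure exists in both a native layout and a hypothetical 32-bit layout (MYDEV_REQUEST32):

    #if defined(_WIN64)
        if (IoIs32bitProcess(Irp)) {
            // Caller is a 32-bit (WOW64) process: its pointers are 4 bytes
            // and structure packing may differ, so interpret the buffer
            // using the 32-bit layout here.
        } else {
            // Native 64-bit caller; use the normal layout.
        }
    #endif

Fixed-size fields (ULONG offsets, ULONGLONG values) avoid the whole issue, which is one more argument for the offset-based format above.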
Yet another problem is dealing with user-mode addresses in device drivers when the driver code is running in an arbitrary thread/process context. There was just a big discussion here about what "arbitrary" means. The short explanation is that, on certain code paths, your driver code is *not* guaranteed to be running in the process that originally issued the request. If a different process's address space is current, the same user-mode pointer refers to something entirely different (or to nothing at all), so dereferencing it can get you into big trouble.
Correctness and security are very difficult to get right if your driver is dealing with user-mode addresses. I'm not saying it can't be done, but it's especially difficult for people who are new to developing drivers, and that's true regardless of which platform you're dealing with. And there's usually not even a good reason to do it. The only reason that consistently comes up is performance, and until you've measured and proved that buffered I/O or direct I/O is your bottleneck, you've probably got bigger perf problems elsewhere.
If you want to learn more about this, there are a lot of “Intro to Windows Drivers” type books on the market. They might not focus specifically on this detail, but with some reading, and serious thinking, you can prove to yourself how dangerous user-mode pointers are.
From: xxxxx@lists.osr.com [xxxxx@lists.osr.com] On Behalf Of xxxxx@lists.osr.com [xxxxx@lists.osr.com]
Sent: Saturday, October 14, 2006 9:27 AM
To: Windows System Software Devs Interest List
Subject: [ntdev] Address translation from user mode to kernel mode (via deviceiocontrol).
Hi Everyone,
When I pass the address of a variable through a DeviceIoControl input or output parameter, the address of the variable seems to be changed to point within kernel space. Is this correct? Why is this necessary? I've noticed this with ordinary variables and with double pointers; when double pointers are passed through DeviceIoControl they seem to become unusable, and the memory the array of pointers is pointing to does not seem to be allocated anymore. Can someone explain what is happening, or maybe point(!) me to appropriate reading on the subject? I've looked through the DDK docs but haven't found anything yet.
Thanks everyone.