In the last several months, I have taken an interest in the modern Windows graphics pipeline, and how Direct3D objects go from usermode command buffers to display miniport drivers for rendering. I was curious if it would be possible to render simple (or complex) geometry to the screen without using DWM and the DirectComposition pipeline, but rather by having a kernel component create a similar rendering pathway.
As I understand the typical user-to-kernel rendering path, the device context of the Direct3D object I am currently rendering to has a command buffer directly visible to the application. This is the same command buffer that is interpreted by the user-mode display driver callbacks, such as pfnPresentCb/pfnRenderCb, which send it on to the display miniport driver. With that in mind, my first thought was to see whether a custom kernel component could supply WDDM drivers with command buffers for rendering by calling their entry points, such as DxgkDdiPresent (as seen in the WDDM operation flow). After that, I considered using the official Microsoft KMDOD sample driver and exploring whether it was possible to use the same VidPn target as the main on-screen display. Perhaps there is a different, easier, or more preferable way to go about this. I am very curious what somebody more experienced in the matter would suggest.
Thank you all for your time, and I hope you're having a great start to the new year.
You don't have to use DWM and DirectComposition. In fact, exclusive-mode Direct3D applications (like most games) do operate as you describe. For performance reasons, the layering between Direct3D applications and the rendering kernel component is very thin. The command buffers and shaders are communicated almost directly to the hardware with little interference. Each generation of Direct3D API was designed to fit that generation of GPU hardware, and not the other way around (as was the case with GDI).
Tim Roberts, [email protected]
Providenza & Boekelheide, Inc.
Tim, thank you for your reply, and I'm happy to hear my head may be in the right place when it comes to communicating this rendering information to the kernel.
Would you happen to know how I could go on to learn about rendering in this fashion, with a completely custom solution? That is to say, I wouldn't just like to run an exclusive-mode Direct3D application, but rather render in a way that doesn't disrupt the other rendering on the system.
Perhaps there is an existing application out there that does the same thing.
Thank you again for your time. I really appreciate hearing from you.
To me, nothing here seems to be in the right place, nor even heading in the right direction...
The Windows operating system has exactly one exclusive and very elaborate graphics pathway. Just as the human body has exactly one exclusive and very elaborate pathway for food intake.
This question is similar to searching for alternative paths for food intake. The above-mentioned approaches are a bit like eating through the nose (1) or cutting open the throat (2) for food intake.
(1) Pirating other WDDM drivers' display surfaces (not feasible).
(2) External invocation of Kernel Mode DDI like e.g. DxgkDdiPresent (impossible).
If you want me to falsify (1) and (2) in more detail or if you have any further questions, just let me know. I can post more information if needed.
PS: Intravenous feeding is of course also possible (in theory). Windows, DirectX, and the Nvidia/AMD/Intel WDDM graphics drivers can all together be bypassed completely by installing your own WDM (not WDDM!) kernel-mode driver for the graphics adapter (though in reality you probably have insufficient knowledge and development resources for that).
Continuing your analogy: if you are prepared to monopolize a piece of hardware that happens to be hooked up to a monitor, you can easily feed it in any way you choose. If you want that piece of hardware to also work with other parts of Windows, like the desktop, then think about how the plumbing works in Siamese twins and you will see the difficulty.
Sure, I'd love to hear why you believe such approaches would be infeasible. Additionally, would you happen to know how such graphics adapters interact with the OS, or rather, where I might be able to find documentation explaining such an idea? When searching for anything graphics-related in the kernel, MSDN enjoys redirecting to WDDM articles.
If there's any truth to the documentation I've read thus far, it seems the graphical components of the OS are layered in a way that prompts me to believe kernel-assisted graphics pathways are indeed possible. For instance, modern Nvidia cards allow for hardware overlays that are not processed through DWM. The kernel win32k variants all reference functions and structures relating to such overlays, past and present: legacy D3D9 hardware overlays, modern multi-plane overlays, and DRM-protected video streams. Per Tim's previous comment, exclusive-mode Direct3D applications pass their graphics work off to the kernel for rendering in a way that doesn't involve DirectComposition. And on the topic of DirectComposition, bitmaps are marshaled and passed to DWM, which uses the same Direct3D APIs as everyone else. Such rendering by DWM might not end up in DxgkDdiPresent, but I don't imagine it bypasses the WDDM and DXGI interfaces altogether.
I wouldn't like to think of this as trying to eat through the nose or the throat, and I don't believe I'm asking how to feed intravenously between Siamese twins. Instead, if eating is our analogy for graphical rendering, then seeing as Windows has long supported the alternate graphics pathways mentioned above, there has always been more than one mouth. And just as in the typical case of Siamese twins, both mouths eat into the same stomach. I'm just trying to get food into that stomach too, nothing new.
In all seriousness, thank you guys for all of the time you've put into replying. I really do appreciate it, and I'm always excited to learn more about how this process works in a push to render in the way I've described. I'm also very excited to be parenting the twins now... At least until somebody comes up with a wittier analogy. I'm ready for it
Remember that in the early days, effective video was a problem too hard for the hardware to do properly. The same is true of audio to a certain degree, and in both realms various creative solutions have been engineered. Creativity is great, but it makes things get very complicated very quickly.
If it's not a bother, I'm very curious: had you wanted to learn about the internals of Windows' graphics pathways, how would you approach the problem? I've found that the documentation pages on the layers of this overall pipeline are very detached from one another, and it becomes difficult to see the forest for the trees. The kernel video driver samples on Microsoft's GitHub are great, but they're hard to use when the explanations for them on MSDN don't actually shed light on how they fit into the larger graphics picture. Is there a way to reach out to somebody at Microsoft or one of the vendors, or do you think this would be a stretch for scholarly learning?
Oooooooooooooooooooh - seeing this outcome, I wish I had never made this analogy, nor even answered the post in the first place...
You learn a topic by carefully elaborating the right questions to ask. Not by dumping a confusing bunch of false assumptions, misunderstandings and out-of-context-information at other people.
The first post still contained two precise misunderstandings. These two could still be named (see above) and falsified (see below). After that, I can only see an avalanche of misunderstandings, partial understanding and irrelevant information without context.
I wouldn't even know where to start trying to point out mistakes...
Still less can I see anything reasonable that I could extract from this...
Misunderstanding #1 in first post (as requested above):
You cannot pirate another display driver's display surface, for many reasons. You even name one of them yourself further below: there could be a hardware overlay. Furthermore, there isn't even a single surface you could pirate. Depending on the size of the DirectX swap chain, there might be 1, 2, or n different surfaces, typically changing every 33, 16, or fewer milliseconds. There are many more reasons we can discuss after I see that you have considered and understood at least these two simple ones.
Misunderstanding #2 in first post (as requested above):
You cannot externally invoke a kernel-mode DDI (e.g. DxgkDdiPresent). If you had done even minimal homework, you would have spotted DDI function parameters (handles) which you cannot know and thus cannot supply. These handles are generated by dxgkrnl.sys. If you think you can still supply dummy handles yourself, then you won't have to wait long for a Blue Screen of Death. WDDM drivers (e.g. Nvidia/AMD/Intel) use these handles to call back into dxgkrnl.sys. There are many more reasons we can discuss after I see that you have considered and understood at least this simplest one.
I am gladly available to answer any well defined elaborate question. Otherwise I am out...
PS: To analyze actually means "to take apart" subjects until every single one is fully understood...
It does not mean "to pile up" partially understood subjects until nobody can understand anything any more...
Well, how I would go about learning about these things now, and how I did go about learning similar things years ago are two quite different things.
Today, when I want to know about a detail, my go-to method is to fire up the debugger and consider where I want to place my breakpoints. When I consult the documentation, I typically search my way quickly to a page close to the one I want and then skim it to jog my memory.
But when I was first learning about these things many years ago, I had a much different approach. I spent countless hours reading the MSDN documentation and then thinking about what it meant. The thinking was much more important than the reading. For example, I learned how to program on 16-bit Windows from the documents designed to help port 16-bit programs to 32-bit, thinking about why those documents told you what not to do any more, and why practices that had once been common had become ones we were told to avoid.
Scholarly learning can be a very hard thing to quantify. The last time I was at school as a student (a very long time ago), my professors marked many of my answers wrong until the next office hour, so my grades constantly changed dramatically. But it is not really a stable relationship when the supposed student constantly corrects the supposed teachers, and so that didn't last. These days I find myself in a reversed role: I spend almost no time working on code directly myself, and almost all of it checking the work of others. And now the largest problems lie in finding the best ways of telling others about the mistakes they have made, not with a simple "no, you are wrong" but in some more constructive way designed to lead them into thinking or asking about a more correct solution. A certain amount of arrogance, or mock ignorance, is required.