Hi Experts,
I am currently facing a scalability issue in my disk filter driver for encryption. All of my read completion threads get stuck when a recursive hard page fault occurs while the system is heavily loaded.
As I understand it, with the memory compression in recent Windows versions, it is now the compressed store, rather than individual pages, that gets written to pagefile.sys, and it is decompressed when it is read back.
What would be an easy way to avoid such recursion? I have increased the number of threads handling read completion for a scalability test, but is that the only solution?
Below is the call stack of all of my stuck threads:
If I look at the file object for this read, it is pagefile.sys, and the thread in the IRP points to the same. So I believe this is a kind of recursive hard page fault, and all my completion threads get stuck one by one.
What would be an optimum size for such a store, and is the read for the store split? Given that CPU power is plentiful, can I pass this read to my decryption engine without queuing, so that there will not be any such thread deadlock? (Can a read completion routine do this work at <= DISPATCH_LEVEL?)
Need your suggestions.