Thank you all very much for the replies.
>> Almost certainly. Can you not define your own ioctl code using METHOD_IN_DIRECT and allow the system to do the page locking for you?
Actually, I'm trying to boost the driver's performance. Instead of an inline variable-size DataBuffer, I'm passing a pointer to a user-allocated buffer inside the IOCTL structure and mapping that user virtual address to a kernel virtual address, so the IOCTL structure itself stays very small. My suspicion is that otherwise the data buffer is copied at several layers of the driver stack before it reaches the miniport driver.
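To make the idea concrete, here is a minimal sketch of such an input structure; the names are hypothetical, not taken from my actual driver:

    /* Hypothetical input structure: carries only a pointer and a length,
       instead of an inline variable-size data buffer. */
    typedef struct _MY_FAST_IOCTL_INPUT {
        ULONG Length;       /* size of the user buffer in bytes */
        PVOID UserBuffer;   /* user-mode VA; under METHOD_BUFFERED the driver
                               itself must probe it (inside __try/__except),
                               lock it, and map it, and it may touch the
                               pointer only in the context of the calling
                               process */
    } MY_FAST_IOCTL_INPUT, *PMY_FAST_IOCTL_INPUT;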
>>> I hope you are not trying to argue that passing an address and length in a buffered ioctl will have higher performance than using METHOD_IN_DIRECT and allowing operating system code that has been optimized over 20 years do the exact same thing.
No, I'm not arguing that. The problem is that I can't use METHOD_XX_DIRECT with IOCTL_SCSI_MINIPORT. As I noted in my original question, I'm using IOCTL_SCSI_MINIPORT, and it is buffered I/O. If I change the transfer method to METHOD_XX_DIRECT, the resulting control code is no longer IOCTL_SCSI_MINIPORT, and the port driver won't recognize it (see the CTL_CODE expansion below).
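For reference, the transfer method is baked into the low two bits of every control code, so the method cannot be changed without changing the code's numeric value:

    /* devioctl.h / winioctl.h */
    #define CTL_CODE(DeviceType, Function, Method, Access) \
        (((DeviceType) << 16) | ((Access) << 14) | ((Function) << 2) | (Method))

    /* ntddscsi.h: IOCTL_SCSI_MINIPORT is hard-wired to METHOD_BUFFERED (0).
       Substituting METHOD_IN_DIRECT (1) yields a different numeric value,
       which the port driver does not recognize. */
    #define IOCTL_SCSI_MINIPORT \
        CTL_CODE(IOCTL_SCSI_BASE, 0x0401, METHOD_BUFFERED, \
                 FILE_READ_ACCESS | FILE_WRITE_ACCESS)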
So one alternative for getting METHOD_XX_DIRECT would be:
- IOCTL_SCSI_PASS_THROUGH_DIRECT instead of IOCTL_SCSI_MINIPORT.
But even in this case, the transfer method in its definition is METHOD_BUFFERED:
#define IOCTL_SCSI_PASS_THROUGH_DIRECT CTL_CODE(IOCTL_SCSI_BASE, 0x0405, METHOD_BUFFERED, FILE_READ_ACCESS | FILE_WRITE_ACCESS)
Now my query is: if I use IOCTL_SCSI_PASS_THROUGH_DIRECT, what transfer method is actually used? Internally it also appears to use the buffered method. Am I right?
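For reference, here is a minimal sketch of how I understand such a request would be issued; the device path, CDB, and transfer size are placeholders, not values from my actual code:

    #include <windows.h>
    #include <ntddscsi.h>
    #include <stdio.h>

    int main(void)
    {
        /* Placeholder device path for illustration. */
        HANDLE h = CreateFileW(L"\\\\.\\PhysicalDrive1",
                               GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        /* VirtualAlloc returns page-aligned memory, which satisfies the
           adapter's alignment requirement for DataBuffer. */
        BYTE *buf = (BYTE *)VirtualAlloc(NULL, 4096, MEM_COMMIT, PAGE_READWRITE);

        SCSI_PASS_THROUGH_DIRECT sptd;
        ZeroMemory(&sptd, sizeof(sptd));
        sptd.Length             = sizeof(sptd);
        sptd.CdbLength          = 10;
        sptd.DataIn             = SCSI_IOCTL_DATA_IN;
        sptd.DataTransferLength = 4096;
        sptd.TimeOutValue       = 10;
        sptd.DataBuffer         = buf;     /* raw pointer, not an offset */
        sptd.Cdb[0]             = 0x28;    /* READ(10) at LBA 0 */
        sptd.Cdb[8]             = 8;       /* 8 x 512-byte blocks = 4K */

        DWORD ret;
        BOOL ok = DeviceIoControl(h, IOCTL_SCSI_PASS_THROUGH_DIRECT,
                                  &sptd, sizeof(sptd),  /* structure is buffered */
                                  &sptd, sizeof(sptd),
                                  &ret, NULL);
        printf("ok=%d, ScsiStatus=0x%02X\n", ok, sptd.ScsiStatus);

        VirtualFree(buf, 0, MEM_RELEASE);
        CloseHandle(h);
        return 0;
    }

As I read it, only the small SCSI_PASS_THROUGH_DIRECT structure travels through the buffered path; the port driver locks down the pages behind DataBuffer itself.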
Even if METHOD_XX_DIRECT is used, the MSDN page (https://msdn.microsoft.com/en-us/library/windows/hardware/ff560521(v=vs.85).aspx) says this request (IOCTL_SCSI_PASS_THROUGH_DIRECT) is typically used for transferring larger amounts of data (>16K). But in my case the lag is worst for random writes/reads, which are 4K in size, so there seems to be little room for improvement there.
One more thing I found regarding direct I/O, on this MSDN page (https://docs.microsoft.com/en-us/windows-hardware/drivers/kernel/using-direct-i-o):
[[Drivers that use direct I/O will sometimes also use buffered I/O to handle some IRPs. In particular, drivers typically use buffered I/O for some I/O control codes for IRP_MJ_DEVICE_CONTROL requests that require data transfers, regardless of whether the driver uses direct I/O for read and write operations.]]
According to the documentation above, direct I/O is not guaranteed even if METHOD_XX_DIRECT is used. Correct me if I'm wrong…
>> Almost certainly. Can you not define your own ioctl code using METHOD_IN_DIRECT and allow the system to do the page locking for you?
Regarding this second method suggested by Tim Roberts, a custom control code: I'm not sure whether a custom control code would even work with a miniport driver (see the sketch below). Any help on how to use a custom control code with a miniport driver, or any other suggestion for boosting the driver's performance, would be appreciated.
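For completeness, defining such a code is straightforward (the device type and function number below are hypothetical, taken from the vendor-defined 0x8000+ ranges); the open question is where it would be handled, since a SCSI miniport receives SRBs from the port driver rather than raw IRPs:

    /* Hypothetical private control code using METHOD_IN_DIRECT. */
    #define FILE_DEVICE_MY_MINIPORT  0x8000
    #define IOCTL_MY_FAST_TRANSFER \
        CTL_CODE(FILE_DEVICE_MY_MINIPORT, 0x800, METHOD_IN_DIRECT, FILE_ANY_ACCESS)

    /* With METHOD_IN_DIRECT the I/O manager probes and locks the caller's
       buffer and hands the driver an MDL in Irp->MdlAddress, so the data
       buffer is never copied. */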
>>> You haven't told us what part of your performance bothers you. Have you actually done benchmarks to establish the actual performance? And what leads you to believe the performance CAN be improved?
For benchmarking I'm using IoMeter. Measuring with IoMeter, I found the performance is low compared to the standard inbox driver: for sequential I/O the difference is negligible, but random write/read lags by around 20%.
Alongside that, I'm using my own Win32 application, which calls DeviceIoControl with the IOCTL_SCSI_MINIPORT control code. I found much poorer performance with my application than with IoMeter; the measured difference is very high: around 15% for sequential, 20% for random read, and 40% for random write.
Is it valid to compare measurements taken with a custom Win32 application that uses DeviceIoControl along the IOCTL_SCSI_MINIPORT path against measurements from IoMeter? IoMeter uses the ReadFile and WriteFile Windows APIs, whereas my application follows the IOCTL_SCSI_MINIPORT path (sketched below)…
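For context, every request my application sends must carry the SRB_IO_CONTROL header that IOCTL_SCSI_MINIPORT requires, so the header and the payload travel together through the buffered path; the payload size, signature, and control code below are illustrative:

    #include <windows.h>
    #include <ntddscsi.h>
    #include <string.h>

    /* Hypothetical request blob: SRB_IO_CONTROL header followed
       immediately by the miniport-defined payload. */
    typedef struct _MY_MINIPORT_REQUEST {
        SRB_IO_CONTROL Header;
        UCHAR          Payload[4096];   /* 4K copied down AND back up */
    } MY_MINIPORT_REQUEST;

    BOOL SendMiniportRequest(HANDLE hDevice)
    {
        MY_MINIPORT_REQUEST req;
        DWORD bytesReturned;

        ZeroMemory(&req, sizeof(req));
        req.Header.HeaderLength = sizeof(SRB_IO_CONTROL);
        memcpy(req.Header.Signature, "MYMINIPT", 8);  /* must match the miniport */
        req.Header.Timeout      = 10;
        req.Header.ControlCode  = 0x800;              /* miniport-private code */
        req.Header.Length       = sizeof(req.Payload);

        /* METHOD_BUFFERED: the I/O manager copies the entire structure,
           header plus 4K payload, into a system buffer and back again. */
        return DeviceIoControl(hDevice, IOCTL_SCSI_MINIPORT,
                               &req, sizeof(req), &req, sizeof(req),
                               &bytesReturned, NULL);
    }

As far as I know, ReadFile/WriteFile requests, by contrast, are serviced by the storage stack with direct I/O (MDLs), which may partly explain the gap I'm seeing.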
Thanks again for everyone’s help…
Sudhanshu