I wonder how to implement a timeout on a request.
Problem statement: a user application wants to call DeviceIoControl with a timeout. This means that the call MUST return before the timeout expires. If the request cannot be handled before the timeout, it should fail (and be removed from the queue).
Of course, a special case is when the request is sent to the hardware just before the timeout, and the timeout expires while the hardware is still busy processing it.
It seems to me that this timeout will seriously complicate the driver.
It depends on how accurate you want your timeout to be. Many timeouts are
in seconds, and having a timer kick off every half second, checking the
list of queued IRPs for timed-out requests and completing them with an
error, is all that is needed.
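A minimal WDM-style sketch of that watchdog approach. The DEVICE_EXTENSION layout, the 500 ms period and stashing the deadline in DriverContext[0] are assumptions of mine (the DriverContext trick also assumes a 64-bit build); a real driver additionally needs a cancel routine and has to coordinate with whatever hands IRPs to the hardware.

#include <ntddk.h>

typedef struct _DEVICE_EXTENSION {
    LIST_ENTRY  PendingIrpList;   /* IRPs queued for the hardware */
    KSPIN_LOCK  QueueLock;        /* protects PendingIrpList      */
    KTIMER      WatchdogTimer;    /* periodic 500 ms watchdog     */
    KDPC        WatchdogDpc;
} DEVICE_EXTENSION, *PDEVICE_EXTENSION;

/* Called from the dispatch routine before it returns STATUS_PENDING:
 * remember the deadline (interrupt time, 100 ns units) and queue the IRP. */
VOID QueuePendingIrp(PDEVICE_EXTENSION DevExt, PIRP Irp, ULONG TimeoutMs)
{
    KIRQL irql;

    Irp->Tail.Overlay.DriverContext[0] =
        (PVOID)(ULONG_PTR)(KeQueryInterruptTime() + (ULONGLONG)TimeoutMs * 10000);

    KeAcquireSpinLock(&DevExt->QueueLock, &irql);
    InsertTailList(&DevExt->PendingIrpList, &Irp->Tail.Overlay.ListEntry);
    KeReleaseSpinLock(&DevExt->QueueLock, irql);

    IoMarkIrpPending(Irp);
}

/* Runs at DISPATCH_LEVEL every 500 ms: move expired IRPs off the queue
 * and fail them with STATUS_IO_TIMEOUT. */
VOID WatchdogDpcRoutine(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
{
    PDEVICE_EXTENSION devExt = (PDEVICE_EXTENSION)Context;
    ULONGLONG now = KeQueryInterruptTime();
    LIST_ENTRY expired;
    PLIST_ENTRY entry;

    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(Arg1);
    UNREFERENCED_PARAMETER(Arg2);

    InitializeListHead(&expired);

    KeAcquireSpinLockAtDpcLevel(&devExt->QueueLock);
    entry = devExt->PendingIrpList.Flink;
    while (entry != &devExt->PendingIrpList) {
        PIRP irp = CONTAINING_RECORD(entry, IRP, Tail.Overlay.ListEntry);
        PLIST_ENTRY next = entry->Flink;

        if ((ULONGLONG)(ULONG_PTR)irp->Tail.Overlay.DriverContext[0] <= now) {
            RemoveEntryList(entry);
            InsertTailList(&expired, entry);
        }
        entry = next;
    }
    KeReleaseSpinLockFromDpcLevel(&devExt->QueueLock);

    /* Complete the timed-out requests outside the spin lock. */
    while (!IsListEmpty(&expired)) {
        PLIST_ENTRY e = RemoveHeadList(&expired);
        PIRP irp = CONTAINING_RECORD(e, IRP, Tail.Overlay.ListEntry);

        irp->IoStatus.Status = STATUS_IO_TIMEOUT;
        irp->IoStatus.Information = 0;
        IoCompleteRequest(irp, IO_NO_INCREMENT);
    }
}

/* In AddDevice (or DriverEntry): start the periodic watchdog. */
VOID StartWatchdog(PDEVICE_EXTENSION DevExt)
{
    LARGE_INTEGER due;
    due.QuadPart = -5000000LL;    /* first fire 500 ms from now */

    InitializeListHead(&DevExt->PendingIrpList);
    KeInitializeSpinLock(&DevExt->QueueLock);
    KeInitializeDpc(&DevExt->WatchdogDpc, WatchdogDpcRoutine, DevExt);
    KeInitializeTimerEx(&DevExt->WatchdogTimer, NotificationTimer);
    KeSetTimerEx(&DevExt->WatchdogTimer, due, 500 /* ms */, &DevExt->WatchdogDpc);
}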
If you need a timeout that says this request will last at most X ms,
you are going to have problems anyway, since the I/O path in the OS can
impact you.
–
Don Burn (MVP, Windows DDK)
Windows 2k/XP/2k3 Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Well actually, the timeout will be around 30 ms (and of course it should start counting from the moment DeviceIoControl is called). I am pretty sure (but this still has to be tested) that most of the time will be spent waiting in the I/O queue, as there will be tens of threads competing for the hardware.
This timeout is important because the hardware has to process video streams, and these must run at 25 (PAL) or 30 (NTSC) frames per second, i.e. a budget of roughly 40 ms or 33 ms per frame.
What kind of a device are you building here? Usually, a device that is
processing video streams will have a kernel capture driver that worries
about the timing. Why does the user-mode application need to have a
timeout? Will you be driving this with DirectShow? It already has very
well tested timing code in its rendering path.
–
Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.
Tim,
the video streams are actually GPU-generated 3D images/frames. The (server-based) user application hands those images to the driver/hardware, which post-processes them. It is that application that tells the driver not to take more than, for instance, 1/30th of a second to do the processing.
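For what it is worth, a rough sketch of how the application side could enforce that per-frame budget itself with overlapped I/O (plain Win32 C). The IOCTL code and the function name are placeholders I made up, and CancelIo only helps if the driver actually supports cancellation of queued requests.

#include <windows.h>
#include <winioctl.h>

/* Placeholder IOCTL; the real code comes from the driver's header. */
#define IOCTL_POSTPROCESS_FRAME \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

/* Issue the request asynchronously and give it at most timeoutMs to finish.
 * hDevice must have been opened with FILE_FLAG_OVERLAPPED. */
BOOL PostprocessFrameWithTimeout(HANDLE hDevice, void *frame,
                                 DWORD frameSize, DWORD timeoutMs)
{
    OVERLAPPED ov = { 0 };
    DWORD bytesReturned = 0;
    BOOL ok;

    ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);  /* manual-reset event */
    if (ov.hEvent == NULL)
        return FALSE;

    ok = DeviceIoControl(hDevice, IOCTL_POSTPROCESS_FRAME,
                         frame, frameSize, NULL, 0, &bytesReturned, &ov);
    if (!ok && GetLastError() == ERROR_IO_PENDING) {
        if (WaitForSingleObject(ov.hEvent, timeoutMs) == WAIT_TIMEOUT) {
            /* Budget blown: ask the driver to cancel the request, then
             * wait for it to actually complete (cancelled or not). */
            CancelIo(hDevice);
            WaitForSingleObject(ov.hEvent, INFINITE);
        }
        ok = GetOverlappedResult(hDevice, &ov, &bytesReturned, FALSE);
    }

    CloseHandle(ov.hEvent);
    return ok;   /* FALSE + ERROR_OPERATION_ABORTED if it was cancelled */
}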