forwarding cancelled IRPs ?

Hi,

i think i already know the answer to my question but i’d like to be sure…

is it a failure to send a driver-allocated IRP that has already been
cancelled down to another driver? (in other words, will a set Cancel flag
survive an IoCallDriver call, and will the lower driver (hopefully)
recognize the Cancel flag after setting up a cancel routine and immediately
return?)

of course that’s only a very rare race condition, in case my original IRP
gets cancelled immediately after i dropped a driver-supplied spinlock that
protects the original IRP’s CancelRoutine and just before i can call
IoCallDriver…
(i cannot hold the spinlock while calling IoCallDriver because this could
deadlock with the completion routine)

and also, is it correct that i must not call IoFreeIrp (in its
CompletionRoutine) for a driver-allocated IRP after calling IoCancelIrp on
it? (i think someone told me so, but i can’t find anything about this in
the documentation)

regards,
daniel.

No, it is not a failure. As long as you guarantee that the PIRP you
allocated is still allocated while you are calling IoCancelIrp, you can do
this (which relates to your 2nd question). You need to prevent the
following from happening (thread number indicated by #x):

#1 Completion routine(), processing irp, drops lock
#2 grabs lock, finds irp to cancel, drops lock
#1 calls IoCallDriver, lower driver completes request
#1 your completion routine runs, frees the irp
#2 calls IoCancelIrp on the irp that was just freed

The lower driver should honor the cancel flag if it is going to enqueue
the PIRP. If it is going to synchronously complete the PIRP, the cancel
flag does not matter.

You can call IoFreeIrp once you know the irp has completed back to you
and you are not in the middle of trying to cancel the PIRP. Walter
Oney’s book has a section on how to safely cancel a PIRP and protect
against it going away from underneath you.

d

-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of L3sT4Rd
Sent: Tuesday, February 28, 2006 12:38 PM
To: Windows System Software Devs Interest List
Subject: [ntdev] forwarding cancelled IRPs ?

Questions? First check the Kernel Driver FAQ at
http://www.osronline.com/article.cfm?id=256

To unsubscribe, visit the List Server section of OSR Online at
http://www.osronline.com/page.cfm?name=ListServer

You are safe in this situation. You cannot hold a spinlock when you call
IoCallDriver. But another thread could immediately call IoCancelIrp on the
IRP that you are submitting – which is legal. The lower driver is
responsible for checking whether the IRP has been canceled, and if the lower
driver queues the IRP, it is also responsible for arming it for
cancellation, by setting a cancel routine.

In all cases, this is safe.

However, about your second question: you are *always* responsible for
freeing an IRP that you allocate, whether it is canceled or not.
IoCancelIrp does NOT free your IRP; it only informs the driver currently
processing the IRP (whoever that might be) that the IRP should be canceled.
“Canceled” just means “no longer processed, and completed with
STATUS_CANCELLED”; the IRP is STILL completed, and your completion routine
is still called.

When you call IoCancelIrp, you are NOT guaranteed that your completion
routine has run before IoCancelIrp returns. It is completely legal for a
driver not to have a cancel routine installed (the driver may be executing
your IRP when you call IoCancelIrp), or the driver may not be able to return
the IRP to the issuer in the scope of the cancel routine handler. For
example, the IRP may reference memory, and the memory may have been
submitted to a device as part of a DMA request; the driver cannot return the
IRP to the caller while the DMA request is active.

Whoever told you that IoCancelIrp frees the IRP is misinformed, and should
be told so.

IRP cancellation is very tricky. There are many subtleties, most of which
take the form of race conditions, most of which cause your driver to
explode. So if you are going to allocate, submit, and possibly cancel IRPs,
you should spend the time to read up on these nuances. I believe OSR has
some decent online articles on IRP cancellation. There are subtleties in
implementing cancellable IRPs, and there are different (though, of course,
related) subtleties in issuing and cancelling IRPs.

Most importantly, you need to have a state machine that adequately models
all of the potential states that an IRP can be in. Typically, I use these
states: Idle, Active, ActiveCancelling, Complete, all protected by some
implicit spinlock.

state:
    IRP_STATE State;    // Idle, Active, ActiveCancelling, Complete
    KEVENT CancelEvent;
    KSPIN_LOCK SpinLock;
    PIRP Irp;

Submit() {
    Acquire spinlock
    Verify that state is Idle
    Set state to Active
    Release spinlock
    IoSetCompletionRoutine
    IoCallDriver
}

CompletionRoutine() {
    Acquire spinlock
    if (state == Active) {
        Release spinlock
        Process data
    } else if (state == ActiveCancelling) {
        state = Idle
        KeSetEvent(&CancelEvent)
        Release spinlock
    } else {
        Release spinlock
    }
}

Cancel() {
    Acquire spinlock
    if (state == Active) {
        state = ActiveCancelling
        Release spinlock
        IoCancelIrp(Irp)
        KeWaitForSingleObject(&CancelEvent)
    } else {
        // not cancelable
        Release spinlock
    }
}

Or something similar.
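A minimal user-mode model of that state machine can make the transitions concrete. Everything here is illustrative (the enum and function names are not from any DDK header), and the spinlock and KEVENT are reduced to comments; only the transition logic is shown.

```c
/* User-mode model of the Idle/Active/ActiveCancelling machine sketched
   above; locking and the event are elided to comments. */
typedef enum { StateIdle, StateActive, StateActiveCancelling } IRP_MODEL_STATE;

/* Completion path: returns 1 if the completion routine should process
   data, 0 if a canceller is waiting (and would now be signalled) or the
   request was not active. The Active branch returns the state to Idle
   for reuse; the original sketch leaves that step implicit. */
int ModelOnComplete(IRP_MODEL_STATE *state)
{
    /* acquire spinlock */
    if (*state == StateActive) {
        *state = StateIdle;
        /* release spinlock, then process data */
        return 1;
    }
    if (*state == StateActiveCancelling) {
        *state = StateIdle;
        /* KeSetEvent(&CancelEvent); release spinlock */
        return 0;
    }
    /* release spinlock */
    return 0;
}

/* Cancel path (a teardown helper, not an I/O-manager cancel routine):
   returns 1 if the caller should now call IoCancelIrp and wait on
   CancelEvent at PASSIVE_LEVEL, 0 if nothing is in flight. */
int ModelOnCancel(IRP_MODEL_STATE *state)
{
    /* acquire spinlock */
    if (*state == StateActive) {
        *state = StateActiveCancelling;
        /* release spinlock; IoCancelIrp(Irp); KeWaitForSingleObject */
        return 1;
    }
    /* release spinlock */
    return 0;
}
```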

– arlie


Doron and Arlie, thanks for the fast reply :)

believe me, i’ve been reading anything i could find about synchronisation
of cancellation for the whole last 2 days… now i know that thinking “…and
at last quickly implement cancellation…” was quite stupid ;)

my problem is that no example met my requirements yet, because i:
- am setting up a cancel routine for each IRP i receive
- but set a completion routine for a different IRP that i create in the
  dispatch routine of the original IRP
- don’t have a StartIO routine
- and also don’t have an internal IRP queue, because i only create a new
  IRP and immediately pass it down (the lower driver does the queuing for
  me. my only concern here is that some docs discourage using the pIrp
  parameter passed to a cancel routine [but nobody clearly explained why;
  only data loss on application termination was mentioned in some
  NT Insider article, but how can a UM app access an I/O Manager-allocated
  IRP struct? maybe this only referred to the user buffers for DIRECT_IO]
  and also whether there’s a need for cancelling IRPs on SURPRISE_REMOVAL,
  but as this is a pure virtual device, i assumed a SURPRISE_REMOVAL will
  never occur…)

as far as i can see, the only problem left is the one pointed out by doron:

#1 Completion routine(), processing irp, drops lock
#2 grabs lock, finds irp to cancel, drops lock
#1 calls IoCallDriver, lower driver completes request
#1 your completion routine runs, frees the irp
#2 calls IoCancelIrp on the irp that was just freed

Arlie, the problem you stated should be quite the same, just vice versa,
because i free the IRP from the completion routine of the lower IRP and
not from the cancel routine of the higher one…
your solution makes sense to me, except that it looks like you’re calling
KeWaitForSingleObject at DISPATCH_LEVEL…!? or did you queue a work item
(or similar) from within the cancel routine?

Doron, i already looked at Walter Oney’s book, but i could not find
anything related to 2 different IRPs and synchronizing the processing of
IRP1, the cancellation of IRP1, and the creation & completion of IRP2, or
anything similar…

what about the following solution (simplified)?

Cancel(…, PIRP pHigherIrp):
    […]
    KeAcquireSpinLockAtDpcLevel(&lock1);

    pLowerIrp = (PIRP)pHigherIrp->Tail.Overlay.DriverContext[0];
    pHigherIrp->Tail.Overlay.DriverContext[0] = NULL;

    KeAcquireSpinLockAtDpcLevel(&lock2);
    KeReleaseSpinLockFromDpcLevel(&lock1);

    if (NULL != pLowerIrp)
        IoCancelIrp(pLowerIrp);
    KeReleaseSpinLockFromDpcLevel(&lock2);

Completion(…, PIRP pLowerIrp):
    KeAcquireSpinLock(&lock1, &oldIrql);
    oldCancelRoutine = IoSetCancelRoutine(pHigherIrp, NULL);
    pHigherIrp->Tail.Overlay.DriverContext[0] = NULL;
    KeReleaseSpinLock(&lock1, oldIrql);

    if (NULL == oldCancelRoutine) // IRP was cancelled
    {
        KeAcquireSpinLock(&lock2, &oldIrql);
        IoFreeIrp(pLowerIrp);
        KeReleaseSpinLock(&lock2, oldIrql);
        […]
    } else
        IoFreeIrp(pLowerIrp);

regards,
daniel.


> your solution makes sense to me, except that it looks like you’re
> calling KeWaitForSingleObject from DISPATCH_LEVEL…!? or did you queue a
> work item (or similar) from within the cancel routine?

Sorry, I guess my example was a little too brief. I am most definitely not
suggesting that you call KeWaitForSingleObject from DISPATCH_LEVEL or from a
cancel routine. That’s a recipe for a blue screen.

My “Cancel()” routine was not a cancel routine in the sense of something you
pass to IoSetCancelRoutine. It was a hypothetical routine that you would
call during paths such as IRP_MN_REMOVE_DEVICE or IRP_MN_SURPRISE_REMOVAL.
In other words, it is a routine that handles safely canceling an IRP, and
waiting for the IRP’s completion routine to run. After that routine
completes, you can safely call IoFreeIrp, but not before. Therefore, my
“Cancel()” example routine really should have been called “StopAndWaitForIo”
or something more accurate.

Also, please note that this:

KeAcquireSpinLockAtDpcLevel(&lock2);
KeReleaseSpinLockFromDpcLevel(&lock1);

is a recipe for disaster. This is almost never what you really want to do.
Until you have really mastered async I/O and spinlocks, you should use a
ground rule that you never hold more than one spinlock at a time. And the
only legitimate locking protocols that do involve holding more than one
spinlock at a time do NOT include sequences such as: acquire 1, acquire 2,
release 1, release 2. It is extremely easy to write a driver that is
either 1) not safe, or 2) deadlocks, by doing this. For more info, read a
basic text on synchronization.
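One way to see why a fixed acquisition order with properly nested releases matters is to give every lock a rank and require acquires in strictly increasing rank. A toy checker for that discipline (purely illustrative; nothing here is from WDM, and these are not real locks):

```c
/* Tiny lock-hierarchy checker: acquiring a lock whose rank is <= the
   rank of the most recently acquired held lock is flagged as an
   ordering violation (the pattern that risks deadlock). */
#define MAX_HELD 8

static int held[MAX_HELD];
static int held_count = 0;

/* Returns 1 if the acquire respects the hierarchy, 0 if it violates it. */
int model_acquire(int rank)
{
    if (held_count > 0 && rank <= held[held_count - 1])
        return 0;               /* out-of-order acquire: deadlock risk */
    held[held_count++] = rank;
    return 1;
}

/* Proper nesting: only the most recently acquired lock is released. */
void model_release(int rank)
{
    if (held_count > 0 && held[held_count - 1] == rank)
        held_count--;
}
```

With this rule, the criticized sequence (acquire 1, acquire 2, release 1, release 2) is impossible to express with properly nested releases, and any two threads that obey the ranking can never deadlock on these locks.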

– arlie


If you are already allocating a PIRP to send down the stack for each
PIRP you receive, then you can also allocate a piece of memory to track
those PIRPs if need be. Otherwise, you have the 4 PVOIDs in
DriverContext to store state.
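A separate tracking allocation along these lines might look like the sketch below; the structure and field names are purely illustrative (plain void* stands in for PIRP), and the point is only that a dedicated allocation gives the higher IRP, the lower IRP, and any state a lifetime independent of either IRP.

```c
#include <stdlib.h>

/* Hypothetical per-request tracking block pairing the received (higher)
   IRP with the driver-allocated (lower) IRP, with more room than the
   four DriverContext PVOIDs when needed. */
typedef struct _REQ_TRACK {
    void *HigherIrp;   /* the IRP we received           */
    void *LowerIrp;    /* the IRP we allocated and sent */
    int   Cancelled;   /* set by the cancel path        */
} REQ_TRACK;

REQ_TRACK *track_alloc(void *higher, void *lower)
{
    REQ_TRACK *t = malloc(sizeof *t);
    if (t) {
        t->HigherIrp = higher;
        t->LowerIrp  = lower;
        t->Cancelled = 0;
    }
    return t;
}

void track_free(REQ_TRACK *t) { free(t); }
```

A driver would typically stash a pointer to such a block in one DriverContext slot of each IRP and free the block only after both the completion and cancel paths are done with it.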

d

-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of Arlie Davis
Sent: Tuesday, February 28, 2006 4:10 PM
To: Windows System Software Devs Interest List
Subject: RE: [ntdev] forwarding cancelled IRPs ?

your solution makes sense to me, except that it looks like
your calling KeWaitForSingleObject from DISPATCH_LEVEL…!?
or did you queue a workitem (or similiar) from within the cancel
routine?

Sorry, I guess my example was a little too brief. I am most definitely
not
suggesting that you call KeWaitForSingleObject from DISPATCH_LEVEL or
from a
cancel routine. That’s a recipe for a blue screen.

My “Cancel()” routine was not a cancel routine in the sense of something
you
pass to IoSetCancelRoutine. It was a hypothetical routine that you
would
call during paths such as IRP_MN_REMOVE_DEVICE or
IRP_MN_SURPRISE_REMOVAL.
In other words, it is a routine that handles safely canceling an IRP,
and
waiting for the IRP’s completion routine to run. After that routine
completes, you can safely call IoFreeIrp, but not before. Therefore, my
“Cancel()” example routine really should have been called
“StopAndWaitForIo”
or something more accurate.

Also, please note that this:

KeAcquireSpinLockAtDpcLevel(&lock2)
KeReleaseSpinLockFromDpcLevel(&lock1);

is a recipe for disaster. This is almost never what you really want to
do.
Until you have really mastered async I/O and spinlocks, you should use a
ground rule that you never hold more than one spinlock at a time. And
the
only legitimate locking protocols that do involve holding more than one
spinlock at a time do NOT include sequences such as: acquire 1, acquire
2,
release 1, release 2. It is extremely easy to write a driver that is
either

  1. not safe, or 2) deadlocks, by doing this. For more info, read a
    basic
    text on synchronization.

– arlie

-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of L3sT4Rd
Sent: Tuesday, February 28, 2006 6:45 PM
To: Windows System Software Devs Interest List
Subject: Re: [ntdev] forwarding cancelled IRPs ?

Doron and Arlie, thanks for the fast reply :slight_smile:

believe me, i’ve been reading anything i could find about
synchronisation of
cancelation the whole last 2 days… now i know that thinking “…and at
last quickly implement cancelation…” was quite stupid :wink:

my problem is that no example met my requirements, yet, because i:
-am setting up a cancel routine for each IRP i receive
-but set a completion routine for a different IRP that i create in
the
dispatch routine of the original IRP
-don’t have a StartIO routine
-and also don’t have an internal IRP queue, because i only create a
new
IRP and immdiately pass it
down (lower driver does queuing for me. my only concern here is
that
some docs discourage from
using the pIrp parameter passed to a cancel routine [but nobody
did
clearly explain why, only data-
loss on application termination was mentioned in some NT-Insider
article, but how can a UM app
access an I/O Manager-allocated IRP struct? maybe this only
refered
to the user buffers for DIRECT_IO?]
and also whether there’s a need for canceling IRPs in
SURPRISE_REMOVAL, but as this is a pure virtual
device, i assumed a SURPRISE_REMOVAL will never occur…)

as far as i can see the only problem left, is the one pointed out by
doron:

#1 Completion routine(), processing irp, drops lock
#2 grabs lock, finds irp to cancel, drops lock
#1 calls IoCallDriver, lower driver completes request
#1 your completion routine runs, frees the irp
#2 calls IoCancelIrp on the irp that was just freed

Arlie, the problem you stated should be quite the same, just vice versa,
cause i free the IRP from completion routine of lower IRP and not from
cancel routine of higher one…
your solution makes sense to me, except that it looks like your calling
KeWaitForSingleObject from DISPATCH_LEVEL…!? or did you queue a
workitem
(or similiar) from within the cancel routine?

Doron, i already looked at Walter Oneys book, but i could not find
anything
related to 2 different IRPs and synchronize processing of IRP1,
cancelation
of
IRP1 and creation & completion of IRP2 or anything similiar…

what’d be about the following solution (simplified):

Cancel(…, PIRP pHigherIrp):
[…]
KeAcquireSpinLockAtDpcLevel(&lock1);

pLowerIrp = (PIRP)pHigherIrp->Tail.Overlay.DriverContext[0];
pHigherIrp->Tail.Overlay.DriverContext[0] = NULL;

KeAcquireSpinLockAtDpcLevel(&lock2)
KeReleaseSpinLockFromDpcLevel(&lock1);

if (NULL != pLowerIrp)
IoCancelIrp(pLowerIrp);
KeReleaseSpinLockFromDpcLevel(&lock2);

Completion(…, PIRP pLowerIrp):
KeAcquireSpinLock(&lock1, &oldIrql);
oldCancelRoutine = IoSetCancelRoutine(pHigherIrp, NULL);
pHigherIrp->Tail.Overlay.DriverContext[0] = NULL;
KeReleaseSpinLock(&lock1, oldIrql);

if (NULL == oldCancelRoutine) //IRP was cancelled
{
KeAcquireSpinLock(&lock2, &oldIrql);
IoFreeIrp(pLowerIrp);
KeReleaseSpinLock(&lock2, oldIrql);
[…]
} else
IoFreeIrp(pLowerIrp);

regards,
daniel.

Arlie Davis wrote:

You are safe in this situation. You cannot hold a spinlock when you
call IoCallDriver. But another thread could immediately call
IoCancelIrp on the IRP that you are submitting – which is legal. The
lower driver is responsible for checking whether the IRP has been
canceled, and if the lower driver queues the IRP, it is also
responsible for arming it for cancellation, by setting a cancel
routine.

In all cases, this is safe.

However, about your second question, you are *always* responsible for
freeing an IRP that you allocate, whether it is canceled or not.
IoCancelIrp does NOT free your IRP – it only informs the current
processor of the IRP (whomever that might be) that the IRP should be
canceled.
“Canceled” just means “no longer processed, and completed with
STATUS_CANCELLED”, but the IRP is STILL completed. Your completion
routine is still called.

When you call IoCancelIrp, you are NOT guaranteed that your completion
routine has run before IoCancelIrp returns. It is completely legal for

a driver not to have a cancel routine installed (the driver may be
executing your IRP when you call IoCancelIrp), or the driver may not be

able to return the IRP to the issuer in the scope of the cancel routine

handler. For example, the IRP may reference memory, and the memory may

have been submitted to a device as part of a DMA request; the driver
cannot return the IRP to the caller while the DMA request is active.

Whoever told you that IoCancelIrp frees the IRP is misinformed, and
should be told so.

IRP cancellation is very tricky. There are many subleties, most of
which take the form of race conditions, most of which cause your driver

to explode. So if you are going to allocate, submit, and possibly
cancel IRPs, you should spend the time to read up on these nuances. I
believe OSR has some decent online articles on IRP cancellation. There

are subtleties in implementing cancellable IRPs, and there are
different (though, of course,
related) subleties in issuing and cancelling IRPs.

Most importantly, you need to have a state machine that adequately
models all of the potential states that an IRP can be in. Typically, I

use these
states: Idle, Active, ActiveCancelling, Complete, all protected by some

implicit spinlock.

state:
    KEVENT CancelEvent;
    KSPIN_LOCK SpinLock;
    PIRP Irp;

Submit() {
    Acquire spinlock
    Verify that state is Idle
    Set state to Active
    Release spinlock
    IoSetCompletionRoutine
    IoCallDriver
}

CompletionRoutine() {
    Acquire spinlock
    if (state == Active) {
        Release spinlock
        Process data
    } else if (state == ActiveCancelling) {
        state = Idle
        KeSetEvent(&CancelEvent)
        Release spinlock
    } else {
        Release spinlock
    }
}

Cancel() {
    Acquire spinlock
    if (state == Active) {
        state = ActiveCancelling
        Release spinlock
        IoCancelIrp(Irp)
        KeWaitForSingleObject(&CancelEvent)
    } else {
        // not cancelable
        Release spinlock
    }
}

Or something similar.

– arlie

-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of L3sT4Rd
Sent: Tuesday, February 28, 2006 3:38 PM
To: Windows System Software Devs Interest List
Subject: [ntdev] forwarding cancelled IRPs ?

Hi,

i think i already know the answer to my question but i’d like to be
sure…

is it a failure to send a driver allocated IRP that has already been
cancelled down to another driver? (in other words, will a set Cancel
flag survive an IoCallDriver call and the lower driver (hopefully)
recognize the Cancel flag after setting up a cancel routine and
immediately return?)

of course that’s only a very rare race condition in case my original
IRP gets cancelled immediately after i dropped a driver supplied
spinlock that protects the original IRP’s CancelRoutine and just before
i can call IoCallDriver…
(i cannot hold the spinlock while calling IoCallDriver cause this could
deadlock with the completion routine)

and also, is it correct that i must not call IoFreeIrp (in its
CompletionRoutine) for a driver allocated IRP after calling IoCancelIrp
on it (think someone told me so, but i can’t find anything about this
in documentation)?

regards,
daniel.


Questions? First check the Kernel Driver FAQ at
http://www.osronline.com/article.cfm?id=256

To unsubscribe, visit the List Server section of OSR Online at
http://www.osronline.com/page.cfm?name=ListServer



> 2) deadlocks, by doing this. For more info, read a basic text on synchronization.

well…i hoped that i am already kinda familiar with synchronization…

one of the four required conditions for a deadlock (“hold & wait”) is, that
each “process” already holds a resource while waiting for another one.
that’s not the case here, the completion routine only holds one spinlock at
a time. even if there’re 2 compl. routines running at the same time, i can’t
see a problem:
-both compl. routines hold a spinlock => the one holding SL2 can continue
-only one of both holds a spinlock => same situation as if there was only one routine
-neither of both routines holds a spinlock => no problem

two cancel routines simultaneous:
-both hold a spin lock => no prob., the one holding SL2 can continue
 (afterwards either one cancel routine holds both SLs (no prob) or
 one cancel r. still holds a SL and additionally one compl. r. => same as
 if there was only one instance of each routine, no prob.)
-one cancel routine holds both SLs: no prob, it can finish

only thing i’m wondering about (and this might be a problem here) is:
who does the scheduling when some code running at DISPATCH_LEVEL
is calling KeAcquireSpinLock and blocked?
(but that’d be also a problem for a single SpinLock at DISPATCH_LEVEL)
so i guess, KeAcquireSpinLock does either call the scheduler after
recognizing that someone else is already holding the spinlock, or it
does look who else is actually holding the spin lock and tells the
dispatcher which code to try next?

or did i miss something else here?

regards,
daniel.

Arlie Davis wrote:

>your solution makes sense to me, except that it looks like
>you’re calling KeWaitForSingleObject from DISPATCH_LEVEL…!?
>or did you queue a workitem (or similar) from within the cancel routine?

Sorry, I guess my example was a little too brief. I am most definitely not
suggesting that you call KeWaitForSingleObject from DISPATCH_LEVEL or from a
cancel routine. That’s a recipe for a blue screen.

My “Cancel()” routine was not a cancel routine in the sense of something you
pass to IoSetCancelRoutine. It was a hypothetical routine that you would
call during paths such as IRP_MN_REMOVE_DEVICE or IRP_MN_SURPRISE_REMOVAL.
In other words, it is a routine that handles safely canceling an IRP, and
waiting for the IRP’s completion routine to run. After that routine
completes, you can safely call IoFreeIrp, but not before. Therefore, my
“Cancel()” example routine really should have been called “StopAndWaitForIo”
or something more accurate.

Also, please note that this:

KeAcquireSpinLockAtDpcLevel(&lock2)
KeReleaseSpinLockFromDpcLevel(&lock1);

is a recipe for disaster. This is almost never what you really want to do.
Until you have really mastered async I/O and spinlocks, you should use a
ground rule that you never hold more than one spinlock at a time. And the
only legitimate locking protocols that do involve holding more than one
spinlock at a time do NOT include sequences such as: acquire 1, acquire 2,
release 1, release 2. It is extremely easy to write a driver that is either
1) not safe, or 2) deadlocks, by doing this. For more info, read a basic
text on synchronization.

– arlie

-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of L3sT4Rd
Sent: Tuesday, February 28, 2006 6:45 PM
To: Windows System Software Devs Interest List
Subject: Re: [ntdev] forwarding cancelled IRPs ?

Doron and Arlie, thanks for the fast reply :)

believe me, i’ve been reading anything i could find about synchronisation of
cancelation the whole last 2 days… now i know that thinking “…and at
last quickly implement cancelation…” was quite stupid ;)

my problem is that no example met my requirements, yet, because i:
-am setting up a cancel routine for each IRP i receive
-but set a completion routine for a different IRP that i create in the
 dispatch routine of the original IRP
-don’t have a StartIO routine
-and also don’t have an internal IRP queue, because i only create a new
 IRP and immediately pass it down (lower driver does queuing for me. my
 only concern here is that some docs discourage from using the pIrp
 parameter passed to a cancel routine [but nobody did clearly explain
 why, only data-loss on application termination was mentioned in some
 NT-Insider article, but how can a UM app access an I/O
 Manager-allocated IRP struct? maybe this only referred to the user
 buffers for DIRECT_IO?] and also whether there’s a need for canceling
 IRPs in SURPRISE_REMOVAL, but as this is a pure virtual device, i
 assumed a SURPRISE_REMOVAL will never occur…)

as far as i can see the only problem left, is the one pointed out by doron:

>#1 Completion routine(), processing irp, drops lock
>#2 grabs lock, finds irp to cancel, drops lock
>#1 calls IoCallDriver, lower driver completes request
>#1 your completion routine runs, frees the irp
>#2 calls IoCancelIrp on the irp that was just freed
Arlie, the problem you stated should be quite the same, just vice versa,
cause i free the IRP from completion routine of lower IRP and not from
cancel routine of higher one…
your solution makes sense to me, except that it looks like you’re calling
KeWaitForSingleObject from DISPATCH_LEVEL…!? or did you queue a workitem
(or similar) from within the cancel routine?

Doron, i already looked at Walter Oney’s book, but i could not find anything
related to 2 different IRPs and synchronizing processing of IRP1,
cancellation of IRP1 and creation & completion of IRP2 or anything similar…

what’d be about the following solution (simplified):

Cancel(…, PIRP pHigherIrp):
    […]
    KeAcquireSpinLockAtDpcLevel(&lock1);

    pLowerIrp = (PIRP)pHigherIrp->Tail.Overlay.DriverContext[0];
    pHigherIrp->Tail.Overlay.DriverContext[0] = NULL;

    KeAcquireSpinLockAtDpcLevel(&lock2);
    KeReleaseSpinLockFromDpcLevel(&lock1);

    if (NULL != pLowerIrp)
        IoCancelIrp(pLowerIrp);
    KeReleaseSpinLockFromDpcLevel(&lock2);

Completion(…, PIRP pLowerIrp):
    KeAcquireSpinLock(&lock1, &oldIrql);
    oldCancelRoutine = IoSetCancelRoutine(pHigherIrp, NULL);
    pHigherIrp->Tail.Overlay.DriverContext[0] = NULL;
    KeReleaseSpinLock(&lock1, oldIrql);

    if (NULL == oldCancelRoutine) //IRP was cancelled
    {
        KeAcquireSpinLock(&lock2, &oldIrql);
        IoFreeIrp(pLowerIrp);
        KeReleaseSpinLock(&lock2, oldIrql);
        […]
    } else
        IoFreeIrp(pLowerIrp);

regards,
daniel.


Scheduling at dispatch level?

There is no scheduling at dispatch level. While you may be interrupted
by a higher-priority interrupt, no other code at the same or lower IRQL
(including passive-level) will run on that processor while you’re in a
DPC.

They are called spinlocks because they spin. They don’t yield the
processor until they’re acquired. They may be interrupted, but they
won’t go to sleep.

Remember that your driver code has to be re-entrant. Your completion
routine might call something which triggers another completion, which
causes your completion routine to be re-entered again before the first
call has exited. If you do this while holding a spinlock, then the
subsequent attempt to acquire the spinlock will cause a deadlock. For
example, if you send an IRP to a lower driver while holding a lock, that
might trigger something in the lower driver which completes a previous
IRP.

Even when you’re running at passive level, reentrancy can cause these
same sorts of problems. If thread A acquires a lock (say a
SynchronizationEvent) twice, there’s nothing the scheduler can do to
free the lock up so it can be acquired again. Short of a resource
manager that tries to break deadlocks, you’re going to hang forever.

-p


>Scheduling at dispatch level?

>There is no scheduling at dispatch level. While you may be interrupted
>by a higher-priority interrupt, no other code at the same or lower IRQL
>(including passive-level) will run on that processor while you’re in a
>DPC.
>
>They are called spinlocks because they spin. They don’t yield the
>processor until they’re acquired. They may be interrupted, but they
>won’t go to sleep.

ok, i think i have to read a bit more about spinlocks…
(what i meant with “scheduling” was, thread A acquires a spinlock and
thread B also tries to acquire the spinlock, now thread B spins forever
and as the scheduler cannot operate while another thread is running at
DISPATCH_LEVEL, i thought the spinlock itself must somehow switch back
to thread A, cause otherwise a spinlock would always deadlock when
spinning. but while writing this text, i remembered that it may spin on
a multiprocessor platform and only raises irql on a single processor
machine. so now all this makes a lot more sense to me…sorry for this
stupid assumption!)

Remember that your driver code has to be re-entrant. Your completion
routine might call something which triggers another completion, which
causes your completion routine to be re-enterred again before the first
call has exited. If you do this while holding a spinlock, then the
subsequent attempt to acquire the spinlock will cause a deadlock. For
example if you send an IRP to a lower driver while holding a lock, that
might trigger something in the lower driver which completes a previous
IRP.

Even when you’re running at passive-level reentrancy can cause these
same sorts of problems. If thread A acquires a lock (say a
SynchronizationEvent) twice, there’s nothing the scheduler can do to
free the lock up so it can be acquired again. Short of a resource
manager that tries to break deadlocks, you’re going to hang forever.

-p

-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of L3sT4Rd
Sent: Tuesday, February 28, 2006 7:13 PM
To: Windows System Software Devs Interest List
Subject: Re: [ntdev] forwarding cancelled IRPs ?

>2) deadlocks, by doing this. For more info, read a basic text on
>synchronization.
>
>
>

well…i hoped that i am already kinda fimiliar with
synchronization…

one of the four required conditions for a deadlock (“hold & wait”) is,
that each “process” already holds a resource while waiting for another
one.
that’s not the case here, the completion routine only holds one spinlock
at a time. even if there’re 2 compl. routines running at the same time,
i can’t see a problem:
-both compl. routines hold a spinlock => the one holding SL2 can
continue
-only one of both holds a spinlock => same situation as if there was
only one routine
-neither of both routines holds a spinlock => no problem

two cancel routines simultaneous:
-both hold a spin lock => no prob., the one holding SL2 can continue
(afterwards either one cancel routine holds both SLs (no prob) or
one cancel r. still holds a SL and additionally one compl. r. =>
same as
if there was only one instance of each routine, no prob.)
-one cancel routine holds both SLs: no prob, it can finish

only thing i’m wondering about (and this might be a problem here) is:
who does the scheduling when some code running at DISPATCH_LEVEL is
calling KeAcquireSpinLock and blocked?
(but that’d be also a problem for a single SpinLock at DISPATCH_LEVEL)
so i guess, KeAcquireSpinLock does either call the scheduler after
recognizing that someone else is already holding the spinlock, or it
does look who else is actually holding the spin lock and tells the
dispatcher which code to try next?

or did i miss something else here?

regards,
daniel.

Arlie Davis wrote:

>>your solution makes sense to me, except that it looks like your
>>calling KeWaitForSingleObject from DISPATCH_LEVEL…!?
>>or did you queue a workitem (or similiar) from within the cancel
>>
>>
routine?

>>
>>
>>
>>
>Sorry, I guess my example was a little too brief. I am most definitely
>
>

>not suggesting that you call KeWaitForSingleObject from DISPATCH_LEVEL
>or from a cancel routine. That’s a recipe for a blue screen.
>
>My “Cancel()” routine was not a cancel routine in the sense of
>something you pass to IoSetCancelRoutine. It was a hypothetical
>routine that you would call during paths such as IRP_MN_REMOVE_DEVICE
>
>
or IRP_MN_SURPRISE_REMOVAL.

>In other words, it is a routine that handles safely canceling an IRP,
>and waiting for the IRP’s completion routine to run. After that
>routine completes, you can safely call IoFreeIrp, but not before.
>Therefore, my “Cancel()” example routine really should have been
>called “StopAndWaitForIo” or something more accurate.
>
>Also, please note that this:
>
> KeAcquireSpinLockAtDpcLevel(&lock2);
> KeReleaseSpinLockFromDpcLevel(&lock1);
>
>is a recipe for disaster. This is almost never what you really want
>to do.
>Until you have really mastered async I/O and spinlocks, you should use
>a ground rule that you never hold more than one spinlock at a time.
>And the only legitimate locking protocols that do involve holding more
>than one spinlock at a time do NOT include sequences such as: acquire
>1, acquire 2, release 1, release 2. It is extremely easy to write a
>driver that is either
>1) not safe, or 2) deadlocks, by doing this. For more info, read a
>basic text on synchronization.
>
>– arlie
>
>
>-----Original Message-----
>From: xxxxx@lists.osr.com
>[mailto:xxxxx@lists.osr.com] On Behalf Of L3sT4Rd
>Sent: Tuesday, February 28, 2006 6:45 PM
>To: Windows System Software Devs Interest List
>Subject: Re: [ntdev] forwarding cancelled IRPs ?
>
>Doron and Arlie, thanks for the fast reply :)
>
>believe me, i’ve been reading everything i could find about
>synchronisation of cancellation the whole last 2 days… now i know
>that thinking “…and at last quickly implement cancellation…” was
>quite stupid ;)
>
>my problem is that no example met my requirements yet, because i:
> -am setting up a cancel routine for each IRP i receive
> -but set a completion routine for a different IRP that i create in
>the dispatch routine of the original IRP
> -don’t have a StartIo routine
> -and also don’t have an internal IRP queue, because i only create a
>new IRP and immediately pass it down (the lower driver does queuing
>for me. my only concern here is that some docs discourage using the
>pIrp parameter passed to a cancel routine [but nobody did clearly
>explain why, only data loss on application termination was mentioned
>in some NT Insider article, but how can a UM app access an I/O
>Manager-allocated IRP struct? maybe this only referred to the user
>buffers for DIRECT_IO?] and also whether there’s a need for canceling
>IRPs on SURPRISE_REMOVAL, but as this is a pure virtual device, i
>assumed a SURPRISE_REMOVAL will never occur…)
>
>as far as i can see, the only problem left is the one pointed out by
>doron:
>
>>#1 Completion routine(), processing irp, drops lock
>>#2 grabs lock, finds irp to cancel, drops lock
>>#1 calls IoCallDriver, lower driver completes request
>>#1 your completion routine runs, frees the irp
>>#2 calls IoCancelIrp on the irp that was just freed
>
>Arlie, the problem you stated should be quite the same, just vice
>versa, cause i free the IRP from the completion routine of the lower
>IRP and not from the cancel routine of the higher one…
>your solution makes sense to me, except that it looks like you’re
>calling KeWaitForSingleObject from DISPATCH_LEVEL…!? or did you queue
>a workitem (or similar) from within the cancel routine?
>
>Doron, i already looked at Walter Oney’s book, but i could not find
>anything related to 2 different IRPs and synchronizing processing of
>IRP1, cancellation of IRP1, and creation & completion of IRP2, or
>anything similar…
>
>what about the following solution (simplified):
>
>Cancel(…, PIRP pHigherIrp):
>[…]
> KeAcquireSpinLockAtDpcLevel(&lock1);
>
> pLowerIrp = (PIRP)pHigherIrp->Tail.Overlay.DriverContext[0];
> pHigherIrp->Tail.Overlay.DriverContext[0] = NULL;
>
> KeAcquireSpinLockAtDpcLevel(&lock2);
> KeReleaseSpinLockFromDpcLevel(&lock1);
>
> if (NULL != pLowerIrp)
> IoCancelIrp(pLowerIrp);
> KeReleaseSpinLockFromDpcLevel(&lock2);
>
>Completion(…, PIRP pLowerIrp):
> KeAcquireSpinLock(&lock1, &oldIrql);
> oldCancelRoutine = IoSetCancelRoutine(pHigherIrp, NULL);
> pHigherIrp->Tail.Overlay.DriverContext[0] = NULL;
> KeReleaseSpinLock(&lock1, oldIrql);
>
> if (NULL == oldCancelRoutine) //IRP was cancelled
> {
> KeAcquireSpinLock(&lock2, &oldIrql);
> IoFreeIrp(pLowerIrp);
> KeReleaseSpinLock(&lock2, oldIrql);
> […]
> } else
> IoFreeIrp(pLowerIrp);
>
>regards,
>daniel.
>
>Arlie Davis wrote:
>
>>You are safe in this situation. You cannot hold a spinlock when you
>>call IoCallDriver. But another thread could immediately call
>>IoCancelIrp on the IRP that you are submitting – which is legal. The
>>lower driver is responsible for checking whether the IRP has been
>>canceled, and if the lower driver queues the IRP, it is also
>>responsible for arming it for cancellation, by setting a cancel
>>routine. In all cases, this is safe.
>>
>>However, about your second question, you are *always* responsible for
>>freeing an IRP that you allocate, whether it is canceled or not.
>>IoCancelIrp does NOT free your IRP – it only informs the current
>>processor of the IRP (whomever that might be) that the IRP should be
>>canceled.
>>
>>“Canceled” just means “no longer processed, and completed with
>>STATUS_CANCELLED”, but the IRP is STILL completed. Your completion
>>routine is still called.
>>
>>When you call IoCancelIrp, you are NOT guaranteed that your completion
>>routine has run before IoCancelIrp returns. It is completely legal
>>for a driver not to have a cancel routine installed (the driver may
>>be executing your IRP when you call IoCancelIrp), or the driver may
>>not be able to return the IRP to the issuer in the scope of the
>>cancel routine handler. For example, the IRP may reference memory,
>>and the memory may have been submitted to a device as part of a DMA
>>request; the driver cannot return the IRP to the caller while the DMA
>>request is active.
>>Whoever told you that IoCancelIrp frees the IRP is misinformed, and
>>should be told so.
>>
>>IRP cancellation is very tricky. There are many subtleties, most of
>>which take the form of race conditions, most of which cause your
>>driver to explode. So if you are going to allocate, submit, and
>>possibly cancel IRPs, you should spend the time to read up on these
>>nuances. I believe OSR has some decent online articles on IRP
>>cancellation. There are subtleties in implementing cancellable IRPs,
>>and there are different (though, of course, related) subtleties in
>>issuing and cancelling IRPs.
>>
>>Most importantly, you need to have a state machine that adequately
>>models all of the potential states that an IRP can be in. Typically,
>>I use these
>>states: Idle, Active, ActiveCancelling, Complete, all protected by
>>some implicit spinlock.
>>
>>
>>state:
>> KEVENT CancelEvent;
>> KSPIN_LOCK SpinLock;
>> PIRP Irp;
>>
>>Submit() {
>> Acquire spinlock
>> Verify that state is Idle
>> Set state to Active
>> Release spinlock
>> IoSetCompletionRoutine
>> IoCallDriver
>>}
>>
>>CompletionRoutine() {
>> Acquire spinlock
>> if (state == Active) {
>> Release spinlock
>> Process data
>> } else if (state == ActiveCancelling) {
>> state = Idle
>> KeSetEvent(&CancelEvent);
>> Release spinlock
>> } else {
>> Release spinlock
>> }
>>}
>>
>>Cancel() {
>> Acquire spinlock
>> if (state == Active) {
>> state = ActiveCancelling
>> Release spinlock
>> IoCancelIrp(irp);
>> KeWaitForSingleObject(&CancelEvent, …);
>> } else {
>> // not cancelable
>> Release spinlock
>> }
>>}
>>
>>
>>Or something similar.
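[A user-mode C model can make the transitions in the sketch above concrete. This is only an illustration: a pthread mutex stands in for the kernel spinlock, a condition variable for the KEVENT, and the names (submit, completion_routine, cancel) are hypothetical, not DDK APIs.]

```c
/* User-mode model of the Idle/Active/ActiveCancelling state machine
 * sketched above.  A pthread mutex stands in for the kernel spinlock
 * and a condition variable for the KEVENT.  All names here are
 * hypothetical, not DDK APIs. */
#include <pthread.h>

enum state { IDLE, ACTIVE, ACTIVE_CANCELLING, COMPLETE };

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cancel_done = PTHREAD_COND_INITIALIZER;
static enum state st = IDLE;

/* Submit(): Idle -> Active, then the IRP would be sent down. */
static void submit(void)
{
    pthread_mutex_lock(&lock);
    if (st == IDLE)
        st = ACTIVE;
    pthread_mutex_unlock(&lock);
    /* IoSetCompletionRoutine + IoCallDriver would go here. */
}

/* CompletionRoutine(): runs when the "lower driver" completes. */
static void completion_routine(void)
{
    pthread_mutex_lock(&lock);
    if (st == ACTIVE) {
        st = COMPLETE;               /* normal completion; process data */
        pthread_mutex_unlock(&lock);
    } else if (st == ACTIVE_CANCELLING) {
        st = IDLE;                   /* a canceller is waiting on us */
        pthread_cond_signal(&cancel_done);
        pthread_mutex_unlock(&lock);
    } else {
        pthread_mutex_unlock(&lock);
    }
}

/* cancel(): Active -> ActiveCancelling, then wait for the completion
 * routine.  (pthread_cond_wait releases the mutex while waiting, so
 * unlike the kernel sketch there is no explicit drop before the wait.) */
static void cancel(void)
{
    pthread_mutex_lock(&lock);
    if (st == ACTIVE) {
        st = ACTIVE_CANCELLING;
        /* IoCancelIrp(irp) would go here. */
        while (st == ACTIVE_CANCELLING)
            pthread_cond_wait(&cancel_done, &lock);
    }
    pthread_mutex_unlock(&lock);
}
```

[The IoFreeIrp call would go after cancel() returns, which is exactly the "you can safely call IoFreeIrp, but not before" rule.]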
>>
>>– arlie
>>
>>
>>-----Original Message-----
>>From: xxxxx@lists.osr.com
>>[mailto:xxxxx@lists.osr.com] On Behalf Of L3sT4Rd
>>Sent: Tuesday, February 28, 2006 3:38 PM
>>To: Windows System Software Devs Interest List
>>Subject: [ntdev] forwarding cancelled IRPs ?
>>
>>Hi,
>>
>>i think i already know the answer to my question but i’d like to be
>>sure…
>>
>>is it a failure to send a driver-allocated IRP that has already been
>>cancelled down to another driver? (in other words, will a set Cancel
>>flag survive an IoCallDriver call, and will the lower driver
>>(hopefully) recognize the Cancel flag after setting up a cancel
>>routine and immediately return?)
>>
>>of course that’s only a very seldom race condition, in case my
>>original IRP gets cancelled immediately after i dropped a
>>driver-supplied spinlock that protects the original IRP’s
>>CancelRoutine and just before i can call IoCallDriver…
>>(i cannot hold the spinlock while calling IoCallDriver cause this
>>could deadlock with the completion routine)
>>
>>and also, is it correct that i must not call IoFreeIrp (in its
>>CompletionRoutine) for a driver-allocated IRP after calling
>>IoCancelIrp on it (think someone told me so, but i can’t find
>>anything about this in the documentation)?
>>
>>regards,
>>daniel.
>>
>>
>>—
>>Questions? First check the Kernel Driver FAQ at
>>http://www.osronline.com/article.cfm?id=256
>>
>>To unsubscribe, visit the List Server section of OSR Online at
>>http://www.osronline.com/page.cfm?name=ListServer
>>

L3sT4Rd wrote:

>
> ok, i think i have to read a bit more about spinlocks…
> (what i meant with “scheduling” was, thread A acquires a spinlock and
> thread B also tries to acquire the spinlock; now thread B spins
> forever, and as the scheduler cannot operate while another thread is
> running at DISPATCH_LEVEL, i thought the spinlock itself must somehow
> switch back to thread A, cause otherwise a spinlock would always
> deadlock when spinning. but while writing this text, i remembered
> that it may spin on a multiprocessor platform and only raises irql on
> a single processor machine.
> so now all this makes a lot more sense to me… sorry for this stupid
> assumption!)

Right. That’s the key point. On a single processor machine, the system
will never switch away from thread A while it has the spinlock, so
thread B cannot try to acquire it.
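[For what it’s worth, the “they spin, they don’t sleep” behaviour can be modelled in user mode with a C11 atomic_flag. This sketch only illustrates the spinning itself; the IRQL rules Tim describes have no user-mode equivalent.]

```c
/* Minimal user-mode spinlock built on a C11 atomic_flag, illustrating
 * what "spinning" means: a contending thread loops on test-and-set
 * instead of going to sleep.  Kernel spinlocks additionally raise
 * IRQL to DISPATCH_LEVEL, which is not modelled here. */
#include <pthread.h>
#include <stdatomic.h>

static atomic_flag slock = ATOMIC_FLAG_INIT;
static long counter = 0;

static void spin_acquire(void)
{
    while (atomic_flag_test_and_set_explicit(&slock, memory_order_acquire))
        ;                           /* busy-wait; the thread never blocks */
}

static void spin_release(void)
{
    atomic_flag_clear_explicit(&slock, memory_order_release);
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        spin_acquire();
        counter++;                  /* increment protected by the lock */
        spin_release();
    }
    return NULL;
}
```

[On a uniprocessor kernel the raise to DISPATCH_LEVEL makes the busy-wait unnecessary, which is exactly the point Tim makes above.]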


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Now that you’ve figured it out, I would recommend using cancel-safe
queues. They are very easy to implement, and the DDK states that they
can be used on Windows 2000 too by linking csq.lib into your driver.
They handle cancellation for you, save a lot of work, and are well
tested.

I recently went through the work of hardening the cancellation handling
in one of our drivers, and then threw it all away when I learned about
cancel-safe queues. It is still a good idea to understand cancellation,
though.

Jonathan

“L3sT4Rd” wrote in message news:xxxxx@ntdev…
>
>>Scheduling at dispatch level?
>>
>>There is no scheduling at dispatch level. While you may be interrupted
>>by a higher-priority interrupt, no other code at the same or lower IRQL
>>(including passive-level) will run on that processor while you’re in a
>>DPC.
>>
>>They are called spinlocks because they spin. They don’t yield the
>>processor until they’re acquired. They may be interrupted, but they
>>won’t go to sleep.
>>
>>
> ok, i think i have to read a bit more about spinlocks…
> (what i meant with “scheduling” was, thread A acquires a spinlock and
> thread B also tries to acquire the spinlock, now thread B spins forever
> and as scheduler cannot operate while another thread is running at
> DISPATCH_LEVEL, i thought the spinlock itself must somehow
> switch back to thread A, cause otherwise a spinlock would always deadlock
> when spinning. but while writing this text, i remembered that it may
> spin on a multiprocessor platform and only raises irql on a single
> processor machine.
> so now all this makes a lot more sense to me…sorry for this stupid
> assumption!)
>
>>Remember that your driver code has to be re-entrant. Your completion
>>routine might call something which triggers another completion, which
>>causes your completion routine to be re-entered again before the first
>>call has exited. If you do this while holding a spinlock, then the
>>subsequent attempt to acquire the spinlock will cause a deadlock. For
>>example if you send an IRP to a lower driver while holding a lock, that
>>might trigger something in the lower driver which completes a previous
>>IRP.
>>
>>Even when you’re running at passive level, reentrancy can cause these
>>same sorts of problems. If thread A acquires a lock (say a
>>SynchronizationEvent) twice, there’s nothing the scheduler can do to
>>free the lock up so it can be acquired again. Short of a resource
>>manager that tries to break deadlocks, you’re going to hang forever.
>>
>>-p
>>
>>-----Original Message-----
>>From: xxxxx@lists.osr.com
>>[mailto:xxxxx@lists.osr.com] On Behalf Of L3sT4Rd
>>Sent: Tuesday, February 28, 2006 7:13 PM
>>To: Windows System Software Devs Interest List
>>Subject: Re: [ntdev] forwarding cancelled IRPs ?
>>
>>
>>
>>>2) deadlocks, by doing this. For more info, read a basic text on
>>>synchronization.
>>>
>>>
>>
>>well… i hoped that i was already kinda familiar with
>>synchronization…
>>
>>one of the four required conditions for a deadlock (“hold & wait”) is,
>>that each “process” already holds a resource while waiting for another
>>one.
>>that’s not the case here, the completion routine only holds one spinlock
>>at a time. even if there’re 2 compl. routines running at the same time,
>>i can’t see a problem:
>> -both compl. routines hold a spinlock => the one holding SL2 can
>>continue
>> -only one of both holds a spinlock => same situation as if there was
>>only one routine
>> -neither of both routines holds a spinlock => no problem
>>
>>two cancel routines simultaneous:
>> -both hold a spin lock => no prob., the one holding SL2 can continue
>> (afterwards either one cancel routine holds both SLs (no prob) or
>> one cancel r. still holds a SL and additionally one compl. r. =>
>>same as
>> if there was only one instance of each routine, no prob.)
>> -one cancel routine holds both SLs: no prob, it can finish
>>
>>the only thing i’m wondering about (and this might be a problem here)
>>is: who does the scheduling when some code running at DISPATCH_LEVEL
>>calls KeAcquireSpinLock and blocks?
>>(but that’d also be a problem for a single spinlock at DISPATCH_LEVEL)
>>so i guess KeAcquireSpinLock either calls the scheduler after
>>recognizing that someone else is already holding the spinlock, or it
>>looks at who is actually holding the spinlock and tells the
>>dispatcher which code to try next?
>>
>>or did i miss something else here?
>>
>>regards,
>>daniel.
>>

And the manual queuing mechanisms in KMDF are even better. I’m in the
process of taking CSQ to WDF.

Gary G. Little

-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of Jonathan Ludwig
Sent: Wednesday, March 01, 2006 6:41 PM
To: Windows System Software Devs Interest List
Subject: Re:[ntdev] forwarding cancelled IRPs ?

Now that you’ve figured it out, I would recommend using cancel safe queues.
They are very easy to implement and the DDK states that they can be used on
Windows 2000 too by linking csq.lib to your driver. They handle cancelation

for you. They save a lot of work and are well tested

I recently went through the work of hardening the cancelation handling in
one of our drivers and then threw it all away when I learned about cancel
safe queues. It is still a good idea to understand cancelation though.

Jonathan

“L3sT4Rd” wrote in message news:xxxxx@ntdev…
>
>>Scheduling at dispatch level?
>>
>>There is no scheduling at dispatch level. While you may be interrupted
>>by a higher-priority interrupt, no other code at the same or lower IRQL
>>(incuding passive-level) will run on that processor while you’re in a
>>DPC.
>>
>>They are called spinlocks because they spin. They don’t yield the
>>processor until they’re acquired. They may be interrupted, but they
>>won’t go to sleep.
>>
>>
> ok, i think i have to read a bit more about spinlocks…
> (what i meant with “scheduling” was, thread A acquires a spinlock and
> thread B also tries to acquire the spinlock, now thread B spins forever
> and as scheduler cannot operate while another thread is running at
> DISPATCH_LEVEL, i thought the spinlock itself must somehow
> switch back to thread A, cause otherwise a spinlock would always deadlock
> when spinning. but while writing this text, i remembered that it may spin
> on
> a multi processor plattform and only raises irql on single processor
> machine.
> so now all this makes a lot more sense to me…sorry for this stupid
> assumption!)
>
>>Remember that your driver code has to be re-entrant. Your completion
>>routine might call something which triggers another completion, which
>>causes your completion routine to be re-enterred again before the first
>>call has exited. If you do this while holding a spinlock, then the
>>subsequent attempt to acquire the spinlock will cause a deadlock. For
>>example if you send an IRP to a lower driver while holding a lock, that
>>might trigger something in the lower driver which completes a previous
>>IRP.
>>
>>Even when you’re running at passive-level reentrancy can cause these
>>same sorts of problems. If thread A acquires a lock (say a
>>SynchronizationEvent) twice, there’s nothing the scheduler can do to
>>free the lock up so it can be acquired again. Short of a resource
>>manager that tries to break deadlocks, you’re going to hang forever.
>>
>>-p
>>
>>-----Original Message-----
>>From: xxxxx@lists.osr.com
>>[mailto:xxxxx@lists.osr.com] On Behalf Of L3sT4Rd
>>Sent: Tuesday, February 28, 2006 7:13 PM
>>To: Windows System Software Devs Interest List
>>Subject: Re: [ntdev] forwarding cancelled IRPs ?
>>
>>
>>
>>>2) deadlocks, by doing this. For more info, read a basic text on
>>>synchronization.
>>>
>>>
>>
>>well…i hoped that i am already kinda fimiliar with
>>synchronization…
>>
>>one of the four required conditions for a deadlock (“hold & wait”) is,
>>that each “process” already holds a resource while waiting for another
>>one.
>>that’s not the case here, the completion routine only holds one spinlock
>>at a time. even if there’re 2 compl. routines running at the same time,
>>i can’t see a problem:
>> -both compl. routines hold a spinlock => the one holding SL2 can
>>continue
>> -only one of both holds a spinlock => same situation as if there was
>>only one routine
>> -neither of both routines holds a spinlock => no problem
>>
>>two cancel routines simultaneous:
>> -both hold a spin lock => no prob., the one holding SL2 can continue
>> (afterwards either one cancel routine holds both SLs (no prob) or
>> one cancel r. still holds a SL and additionally one compl. r. =>
>>same as
>> if there was only one instance of each routine, no prob.)
>> -one cancel routine holds both SLs: no prob, it can finish
>>
>>only thing i’m wondering about (and this might be a problem here) is:
>>who does the scheduling when some code running at DISPATCH_LEVEL is
>>calling KeAcquireSpinLock and blocked?
>>(but that’d be also a problem for a single SpinLock at DISPATCH_LEVEL)
>>so i guess, KeAcquireSpinLock does either call the scheduler after
>>recognizing that someone else is already holding the spinlock, or it
>>does look who else is actually holding the spin lock and tells the
>>dispatcher which code to try next?
>>
>>or did i miss something else here?
>>
>>regards,
>>daniel.
>>
>>
>>Arlie Davis wrote:
>>
>>
>>>>your solution makes sense to me, except that it looks like your calling
>>>>KeWaitForSingleObject from DISPATCH_LEVEL…!?
>>>>or did you queue a workitem (or similiar) from within the cancel
>>>>
>>routine?
>>>>
>>>>
>>>Sorry, I guess my example was a little too brief. I am most definitely
>>>not suggesting that you call KeWaitForSingleObject from DISPATCH_LEVEL
>>>or from a cancel routine. That’s a recipe for a blue screen.
>>>
>>>My “Cancel()” routine was not a cancel routine in the sense of something
>>>you pass to IoSetCancelRoutine. It was a hypothetical routine that you
>>>would call during paths such as IRP_MN_REMOVE_DEVICE or
>>>IRP_MN_SURPRISE_REMOVAL.
>>>In other words, it is a routine that handles safely canceling an IRP,
>>>and waiting for the IRP’s completion routine to run. After that routine
>>>completes, you can safely call IoFreeIrp, but not before. Therefore, my
>>>“Cancel()” example routine really should have been called
>>>“StopAndWaitForIo” or something more accurate.
>>>
>>>Also, please note that this:
>>>
>>> KeAcquireSpinLockAtDpcLevel(&lock2)
>>> KeReleaseSpinLockFromDpcLevel(&lock1);
>>>
>>>is a recipe for disaster. This is almost never what you really want to
>>>do.
>>>
>>>Until you have really mastered async I/O and spinlocks, you should use
>>>a ground rule that you never hold more than one spinlock at a time. And
>>>the only legitimate locking protocols that do involve holding more than
>>>one spinlock at a time do NOT include sequences such as: acquire 1,
>>>acquire 2, release 1, release 2. It is extremely easy to write a driver
>>>that either 1) is not safe, or 2) deadlocks, by doing this. For more
>>>info, read a basic text on synchronization.
>>>
>>>– arlie
>>>
>>>
>>>-----Original Message-----
>>>From: xxxxx@lists.osr.com
>>>[mailto:xxxxx@lists.osr.com] On Behalf Of L3sT4Rd
>>>Sent: Tuesday, February 28, 2006 6:45 PM
>>>To: Windows System Software Devs Interest List
>>>Subject: Re: [ntdev] forwarding cancelled IRPs ?
>>>
>>>Doron and Arlie, thanks for the fast reply :-)
>>>
>>>believe me, i’ve been reading everything i could find about
>>>synchronisation of cancellation for the whole last 2 days… now i know
>>>that thinking “…and at last quickly implement cancellation…” was quite
>>>stupid ;-)
>>>
>>>my problem is that no example met my requirements yet, because i:
>>> -am setting up a cancel routine for each IRP i receive
>>> -but set a completion routine for a different IRP that i create in
>>>  the dispatch routine of the original IRP
>>> -don’t have a StartIO routine
>>> -and also don’t have an internal IRP queue, because i only create a
>>>  new IRP and immediately pass it down (the lower driver does queuing
>>>  for me. my only concern here is that some docs discourage using the
>>>  pIrp parameter passed to a cancel routine [but nobody clearly
>>>  explained why; only data loss on application termination was
>>>  mentioned in some NT Insider article, but how can a UM app access an
>>>  I/O Manager-allocated IRP struct? maybe this only referred to the
>>>  user buffers for DIRECT_IO?] and also whether there’s a need for
>>>  canceling IRPs on SURPRISE_REMOVAL, but as this is a pure virtual
>>>  device, i assumed a SURPRISE_REMOVAL will never occur…)
>>>
>>>as far as i can see the only problem left, is the one pointed out by
>>>
>>doron:
>>
>>>
>>>
>>>>#1 Completion routine(), processing irp, drops lock
>>>>#2 grabs lock, finds irp to cancel, drops lock
>>>>#1 calls IoCallDriver, lower driver completes request
>>>>#1 your completion routine runs, frees the irp
>>>>#2 calls IoCancelIrp on the irp that was just freed
>>>>
>>>>
>>>>
>>>Arlie, the problem you stated should be quite the same, just vice versa,
>>>cause i free the IRP from the completion routine of the lower IRP and
>>>not from the cancel routine of the higher one…
>>>your solution makes sense to me, except that it looks like you’re
>>>calling KeWaitForSingleObject from DISPATCH_LEVEL…!? or did you queue
>>>a workitem (or similar) from within the cancel routine?
>>>
>>>Doron, i already looked at Walter Oney’s book, but i could not find
>>>anything related to 2 different IRPs and synchronizing processing of
>>>IRP1, cancellation of IRP1, and creation & completion of IRP2, or
>>>anything similar…
>>>
>>>what about the following solution (simplified):
>>>
>>>Cancel(…, PIRP pHigherIrp):
>>>[…]
>>> KeAcquireSpinLockAtDpcLevel(&lock1);
>>>
>>> pLowerIrp = (PIRP)pHigherIrp->Tail.Overlay.DriverContext[0];
>>> pHigherIrp->Tail.Overlay.DriverContext[0] = NULL;
>>>
>>> KeAcquireSpinLockAtDpcLevel(&lock2)
>>> KeReleaseSpinLockFromDpcLevel(&lock1);
>>>
>>> if (NULL != pLowerIrp)
>>> IoCancelIrp(pLowerIrp);
>>> KeReleaseSpinLockFromDpcLevel(&lock2);
>>>
>>>Completion(…, PIRP pLowerIrp):
>>> KeAcquireSpinLock(&lock1, &oldIrql);
>>> oldCancelRoutine = IoSetCancelRoutine(pHigherIrp, NULL);
>>> pHigherIrp->Tail.Overlay.DriverContext[0] = NULL;
>>> KeReleaseSpinLock(&lock1, oldIrql);
>>> if (NULL == oldCancelRoutine) //IRP was cancelled
>>> {
>>> KeAcquireSpinLock(&lock2, &oldIrql);
>>> IoFreeIrp(pLowerIrp);
>>> KeReleaseSpinLock(&lock2, oldIrql);
>>> […]
>>> } else
>>> IoFreeIrp(pLowerIrp);
>>>
>>>regards,
>>>daniel.
>>>
>>>Arlie Davis wrote:
>>>
>>>
>>>
>>>>You are safe in this situation. You cannot hold a spinlock when you
>>>>call IoCallDriver. But another thread could immediately call
>>>>IoCancelIrp on the IRP that you are submitting – which is legal. The
>>>>lower driver is responsible for checking whether the IRP has been
>>>>canceled, and if the lower driver queues the IRP, it is also
>>>>responsible for arming it for cancellation, by setting a cancel
>>>>routine. In all cases, this is safe.
>>>>
>>>>However, about your second question, you are always responsible for
>>>>freeing an IRP that you allocate, whether it is canceled or not.
>>>>IoCancelIrp does NOT free your IRP – it only informs the current
>>>>processor of the IRP (whoever that might be) that the IRP should be
>>>>canceled.
>>>
>>>
>>>>“Canceled” just means “no longer processed, and completed with
>>>>STATUS_CANCELLED”, but the IRP is STILL completed. Your completion
>>>>routine is still called.
>>>>
>>>>When you call IoCancelIrp, you are NOT guaranteed that your completion
>>>>routine has run before IoCancelIrp returns. It is completely legal for
>>>>a driver not to have a cancel routine installed (the driver may be
>>>>executing your IRP when you call IoCancelIrp), or the driver may not
>>>>be able to return the IRP to the issuer in the scope of the cancel
>>>>routine handler. For example, the IRP may reference memory, and the
>>>>memory may have been submitted to a device as part of a DMA request;
>>>>the driver cannot return the IRP to the caller while the DMA request
>>>>is active.
>>
>>>>Whoever told you that IoCancelIrp frees the IRP is misinformed, and
>>>>should be told so.
>>>>
>>>>IRP cancellation is very tricky. There are many subtleties, most of
>>>>which take the form of race conditions, most of which cause your
>>>>driver to explode. So if you are going to allocate, submit, and
>>>>possibly cancel IRPs, you should spend the time to read up on these
>>>>nuances. I believe OSR has some decent online articles on IRP
>>>>cancellation. There are subtleties in implementing cancellable IRPs,
>>>>and there are different (though, of course, related) subtleties in
>>>>issuing and cancelling IRPs.
>>>>
>>>>Most importantly, you need to have a state machine that adequately
>>>>models all of the potential states that an IRP can be in. Typically, I
>>>>use these
>>>>states: Idle, Active, ActiveCancelling, Complete, all protected by some
>>>>implicit spinlock.
>>>>
>>>>
>>>>state:
>>>> KEVENT CancelEvent;
>>>> KSPIN_LOCK SpinLock;
>>>> PIRP Irp;
>>>>
>>>>Submit() {
>>>> Acquire spinlock
>>>> Verify that state is Idle
>>>> Set state to Active
>>>> Release spinlock
>>>> IoSetCompletionRoutine
>>>> IoCallDriver
>>>>}
>>>>
>>>>CompletionRoutine() {
>>>> Acquire spinlock
>>>> if (state == Active) {
>>>> Release spinlock
>>>> Process data
>>>> } else if (state == ActiveCancelling) {
>>>> state = Idle
>>>> KeSetEvent(&CancelEvent);
>>>> Release spinlock
>>>> } else {
>>>> Release spinlock
>>>> }
>>>>}
>>>>
>>>>Cancel() {
>>>> Acquire spinlock
>>>> if (state == Active) {
>>>> state = ActiveCancelling
>>>> Release spinlock
>>>> IoCancelIrp(irp)
>>>> KeWaitForSingleObject(&CancelEvent);
>>>> } else {
>>>> // not cancelable
>>>> Release spinlock
>>>> }
>>>>}
>>>>
>>>>
>>>>Or something similar.
>>>>
>>>>– arlie
>>>>
>>>>
>>>>-----Original Message-----
>>>>From: xxxxx@lists.osr.com
>>>>[mailto:xxxxx@lists.osr.com] On Behalf Of L3sT4Rd
>>>>Sent: Tuesday, February 28, 2006 3:38 PM
>>>>To: Windows System Software Devs Interest List
>>>>Subject: [ntdev] forwarding cancelled IRPs ?
>>>>
>>>>Hi,
>>>>
>>>>i think i already know the answer to my question but i’d like to be
>>>>sure…
>>>>is it a failure to send a driver-allocated IRP that has already been
>>>>cancelled down to another driver? (in other words, will a set Cancel
>>>>flag survive an IoCallDriver call, and will the lower driver
>>>>(hopefully) recognize the Cancel flag after setting up a cancel
>>>>routine and immediately return?)
>>>>
>>>>of course that’s only a very seldom race condition, in case my
>>>>original IRP gets cancelled immediately after i dropped a
>>>>driver-supplied spinlock that protects the original IRP’s
>>>>CancelRoutine and just before i can call IoCallDriver…
>>>>(i cannot hold the spinlock while calling IoCallDriver cause this could
>>>>deadlock with the completion routine)
>>>>
>>>>and also, is it correct that i must not call IoFreeIrp (in its
>>>>CompletionRoutine) for a driver-allocated IRP after calling
>>>>IoCancelIrp on it? (i think someone told me so, but i can’t find
>>>>anything about this in the documentation)
>>>
>>>
>>>>regards,
>>>>daniel.
>>>>
>>>>
>>>>—
>>>>Questions? First check the Kernel Driver FAQ at
>>>>http://www.osronline.com/article.cfm?id=256
>>>>
>>>>To unsubscribe, visit the List Server section of OSR Online at
>>>>http://www.osronline.com/page.cfm?name=ListServer
>>>>
>>>>
>>>
>>
>>
>
>




CSQs work for the driver that is *pending* the request. In this case,
he is the sender of the request and must handle touching the irp w/out a
lock and it completing out from underneath him as well.

d

-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of Jonathan Ludwig
Sent: Wednesday, March 01, 2006 4:41 PM
To: Windows System Software Devs Interest List
Subject: Re:[ntdev] forwarding cancelled IRPs ?

Now that you’ve figured it out, I would recommend using cancel-safe
queues. They are very easy to implement, and the DDK states that they
can be used on Windows 2000 too by linking csq.lib to your driver. They
handle cancellation for you. They save a lot of work and are well
tested.

I recently went through the work of hardening the cancellation handling
in one of our drivers and then threw it all away when I learned about
cancel-safe queues. It is still a good idea to understand cancellation,
though.

Jonathan

“L3sT4Rd” wrote in message news:xxxxx@ntdev…
>
>>Scheduling at dispatch level?
>>
>>There is no scheduling at dispatch level. While you may be interrupted
>>by a higher-priority interrupt, no other code at the same or lower IRQL
>>(including passive-level) will run on that processor while you’re in a
>>DPC.
>>
>>They are called spinlocks because they spin. They don’t yield the
>>processor until they’re acquired. They may be interrupted, but they
>>won’t go to sleep.
>>
>>
> ok, i think i have to read a bit more about spinlocks…
> (what i meant with “scheduling” was: thread A acquires a spinlock and
> thread B also tries to acquire the spinlock; now thread B spins forever,
> and as the scheduler cannot operate while another thread is running at
> DISPATCH_LEVEL, i thought the spinlock itself must somehow switch back
> to thread A, cause otherwise a spinlock would always deadlock when
> spinning. but while writing this text, i remembered that it may spin on
> a multiprocessor platform and only raises irql on a single-processor
> machine. so now all this makes a lot more sense to me… sorry for this
> stupid assumption!)
>
>>Remember that your driver code has to be re-entrant. Your completion
>>routine might call something which triggers another completion, which
>>causes your completion routine to be re-entered again before the first
>>call has exited. If you do this while holding a spinlock, then the
>>subsequent attempt to acquire the spinlock will cause a deadlock. For
>>example, if you send an IRP to a lower driver while holding a lock,
>>that might trigger something in the lower driver which completes a
>>previous IRP.
>>
>>Even when you’re running at passive level, reentrancy can cause these
>>same sorts of problems. If thread A acquires a lock (say a
>>SynchronizationEvent) twice, there’s nothing the scheduler can do to
>>free the lock up so it can be acquired again. Short of a resource
>>manager that tries to break deadlocks, you’re going to hang forever.
>>
>>-p
>>
>>-----Original Message-----
>>From: xxxxx@lists.osr.com
>>[mailto:xxxxx@lists.osr.com] On Behalf Of L3sT4Rd
>>Sent: Tuesday, February 28, 2006 7:13 PM
>>To: Windows System Software Devs Interest List
>>Subject: Re: [ntdev] forwarding cancelled IRPs ?
>>
>>[snip: the remainder of this quoted message repeats, verbatim, the
>>messages already quoted in full above]

hi again,

as i now know what i must not do, i still can’t figure out a suitable
solution…

how can i synchronize the dispatch and cancel routines of an IRP that i
am pending together with the completion of another IRP that a lower
driver is pending?

the pseudo code is like this:

xxxDispatch(pIrp)
{
    pLowerIrp = IoAllocateIrp(…) / IoBuildAsynchronousFsdRequest(…)
    IoSetCompletionRoutine(pLowerIrp, INVOKE_ALWAYS)
    IoSetCancelRoutine(pIrp, xxxCancel)
    if (AlreadyCancelled(pIrp))
    {
        IoFreeIrp(pLowerIrp)
        IoCompleteRequest(pIrp)
        return STATUS_CANCELLED
    }
    IoCallDriver(pLowerIrp)
    return STATUS_PENDING
}

xxxCancel(pIrp)
{
    IoCancelIrp(pLowerIrp)
}

xxxCompletion(pLowerIrp)
{
    IoFreeIrp(pLowerIrp)
    IoCompleteRequest(pIrp)
    return STATUS_MORE_PROCESSING_REQUIRED
}

does anyone know any examples / docs / howtos about this?
all 3 routines (dispatch / cancel / complete) may be called at
IRQL == DISPATCH_LEVEL.
everything i found either handles only cancelling of pending IRPs
or completion of IRPs the lower driver is pending but not both…

thanks,
daniel.

Doron Holan wrote:

CSQs work for the driver that is *pending* the request. In this case,
he is the sender of the request and must handle touching the irp w/out a
lock and it completing out from underneath him as well.

d

-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of Jonathan Ludwig
Sent: Wednesday, March 01, 2006 4:41 PM
To: Windows System Software Devs Interest List
Subject: Re:[ntdev] forwarding cancelled IRPs ?

Now that you’ve figured it out, I would recommend using cancel safe
queues.
They are very easy to implement and the DDK states that they can be used
on
Windows 2000 too by linking csq.lib to your driver. They handle
cancelation
for you. They save a lot of work and are well tested

I recently went through the work of hardening the cancelation handling
in
one of our drivers and then threw it all away when I learned about
cancel
safe queues. It is still a good idea to understand cancelation though.

Jonathan

“L3sT4Rd” wrote in message news:xxxxx@ntdev…
>
>
>>>Scheduling at dispatch level?
>>>
>>>There is no scheduling at dispatch level. While you may be
>>>
>>>
>interrupted
>
>
>>>by a higher-priority interrupt, no other code at the same or lower
>>>
>>>
>IRQL
>
>
>>>(incuding passive-level) will run on that processor while you’re in a
>>>DPC.
>>>
>>>They are called spinlocks because they spin. They don’t yield the
>>>processor until they’re acquired. They may be interrupted, but they
>>>won’t go to sleep.
>>>
>>>
>>>
>>>
>>ok, i think i have to read a bit more about spinlocks…
>>(what i meant with “scheduling” was, thread A acquires a spinlock and
>>thread B also tries to acquire the spinlock, now thread B spins
>>
>>
>forever
>
>
>>and as scheduler cannot operate while another thread is running at
>>DISPATCH_LEVEL, i thought the spinlock itself must somehow
>>switch back to thread A, cause otherwise a spinlock would always
>>
>>
>deadlock
>
>
>>when spinning. but while writing this text, i remembered that it may
>>
>>
>spin
>
>
>>on
>>a multi processor plattform and only raises irql on single processor
>>machine.
>>so now all this makes a lot more sense to me…sorry for this stupid
>>assumption!)
>>
>>
>>
>>>Remember that your driver code has to be re-entrant. Your completion
>>>routine might call something which triggers another completion, which
>>>causes your completion routine to be re-entered again before the first
>>>call has exited. If you do this while holding a spinlock, then the
>>>subsequent attempt to acquire the spinlock will cause a deadlock. For
>>>example, if you send an IRP to a lower driver while holding a lock,
>>>that might trigger something in the lower driver which completes a
>>>previous IRP.
>>>
>>>Even when you’re running at passive level, reentrancy can cause these
>>>same sorts of problems. If thread A acquires a lock (say a
>>>SynchronizationEvent) twice, there’s nothing the scheduler can do to
>>>free the lock up so it can be acquired again. Short of a resource
>>>manager that tries to break deadlocks, you’re going to hang forever.
>>>
>>>-p
>>>
>>>-----Original Message-----
>>>From: xxxxx@lists.osr.com
>>>[mailto:xxxxx@lists.osr.com] On Behalf Of L3sT4Rd
>>>Sent: Tuesday, February 28, 2006 7:13 PM
>>>To: Windows System Software Devs Interest List
>>>Subject: Re: [ntdev] forwarding cancelled IRPs ?
>>>
>>>
>>>
>>>
>>>
>>>>2) deadlocks, by doing this. For more info, read a basic text on
>>>>synchronization.
>>>>
>>>>
>>>>
>>>>
>>>well… i hoped that i am already kinda familiar with
>>>synchronization…
>>>
>>>one of the four required conditions for a deadlock (“hold & wait”) is
>>>that each “process” already holds a resource while waiting for another
>>>one. that’s not the case here; the completion routine only holds one
>>>spinlock at a time. even if there are 2 completion routines running at
>>>the same time, i can’t see a problem:
>>> -both completion routines hold a spinlock => the one holding SL2 can
>>>continue
>>> -only one of both holds a spinlock => same situation as if there was
>>>only one routine
>>> -neither of both routines holds a spinlock => no problem
>>>
>>>two cancel routines simultaneously:
>>> -both hold a spinlock => no problem, the one holding SL2 can continue
>>>  (afterwards either one cancel routine holds both SLs (no problem) or
>>>  one cancel routine still holds a SL and additionally one completion
>>>  routine => same as if there was only one instance of each routine,
>>>  no problem)
>>> -one cancel routine holds both SLs: no problem, it can finish
>>>
>>>only thing i’m wondering about (and this might be a problem here) is:
>>>who does the scheduling when some code running at DISPATCH_LEVEL is
>>>calling KeAcquireSpinLock and blocked?
>>>(but that’d be also a problem for a single spinlock at DISPATCH_LEVEL)
>>>so i guess KeAcquireSpinLock does either call the scheduler after
>>>recognizing that someone else is already holding the spinlock, or it
>>>does look who else is actually holding the spinlock and tells the
>>>dispatcher which code to try next?
>>>
>>>or did i miss something else here?
>>>
>>>regards,
>>>daniel.
>>>
>>>
>>>Arlie Davis wrote:
>>>
>>>
>>>
>>>
>>>>>your solution makes sense to me, except that it looks like you’re
>>>>>calling KeWaitForSingleObject from DISPATCH_LEVEL…!?
>>>>>or did you queue a workitem (or similar) from within the cancel
>>>>>routine?
>>>>
>>>>Sorry, I guess my example was a little too brief. I am most definitely
>>>>not suggesting that you call KeWaitForSingleObject from DISPATCH_LEVEL
>>>>or from a cancel routine. That’s a recipe for a blue screen.
>>>>
>>>>My “Cancel()” routine was not a cancel routine in the sense of
>>>>something you pass to IoSetCancelRoutine. It was a hypothetical routine
>>>>that you would call during paths such as IRP_MN_REMOVE_DEVICE
>>>>or IRP_MN_SURPRISE_REMOVAL.
>>>>In other words, it is a routine that handles safely canceling an IRP,
>>>>and waiting for the IRP’s completion routine to run. After that routine
>>>>completes, you can safely call IoFreeIrp, but not before. Therefore, my
>>>>“Cancel()” example routine really should have been called
>>>>“StopAndWaitForIo” or something more accurate.
>>>>
>>>>Also, please note that this:
>>>>
>>>>    KeAcquireSpinLockAtDpcLevel(&lock2);
>>>>    KeReleaseSpinLockFromDpcLevel(&lock1);
>>>>
>>>>is a recipe for disaster. This is almost never what you really want to
>>>>do. Until you have really mastered async I/O and spinlocks, you should
>>>>use a ground rule that you never hold more than one spinlock at a time.
>>>>And the only legitimate locking protocols that do involve holding more
>>>>than one spinlock at a time do NOT include sequences such as: acquire 1,
>>>>acquire 2, release 1, release 2. It is extremely easy to write a driver
>>>>that is either 1) not safe, or 2) deadlocks, by doing this. For more
>>>>info, read a basic text on synchronization.
>>>>
>>>>– arlie
>>>>
>>>>
>>>>-----Original Message-----
>>>>From: xxxxx@lists.osr.com
>>>>[mailto:xxxxx@lists.osr.com] On Behalf Of L3sT4Rd
>>>>Sent: Tuesday, February 28, 2006 6:45 PM
>>>>To: Windows System Software Devs Interest List
>>>>Subject: Re: [ntdev] forwarding cancelled IRPs ?
>>>>
>>>>Doron and Arlie, thanks for the fast reply :)
>>>>
>>>>believe me, i’ve been reading anything i could find about
>>>>synchronisation of cancellation the whole last 2 days… now i know that
>>>>thinking “…and at last quickly implement cancellation…” was quite
>>>>stupid ;)
>>>>
>>>>my problem is that no example met my requirements yet, because i:
>>>> -am setting up a cancel routine for each IRP i receive
>>>> -but set a completion routine for a different IRP that i create in
>>>>the dispatch routine of the original IRP
>>>> -don’t have a StartIO routine
>>>> -and also don’t have an internal IRP queue, because i only create a
>>>>new IRP and immediately pass it down
>>>>  (the lower driver does the queuing for me. my only concern here is
>>>>that some docs discourage using the pIrp parameter passed to a cancel
>>>>routine [but nobody did clearly explain why; only data loss on
>>>>application termination was mentioned in some NT Insider article, but
>>>>how can a UM app access an I/O Manager-allocated IRP struct? maybe
>>>>this only referred to the user buffers for DIRECT_IO?]
>>>>  and also whether there’s a need for canceling IRPs in
>>>>SURPRISE_REMOVAL, but as this is a pure virtual device, i assumed a
>>>>SURPRISE_REMOVAL will never occur…)
>>>>
>>>>as far as i can see, the only problem left is the one pointed out by
>>>>doron:
>>>>
>>>>>#1 Completion routine(), processing irp, drops lock
>>>>>#2 grabs lock, finds irp to cancel, drops lock
>>>>>#1 calls IoCallDriver, lower driver completes request
>>>>>#1 your completion routine runs, frees the irp
>>>>>#2 calls IoCancelIrp on the irp that was just freed
>>>>
>>>>Arlie, the problem you stated should be quite the same, just vice
>>>>versa, cause i free the IRP from the completion routine of the lower
>>>>IRP and not from the cancel routine of the higher one…
>>>>your solution makes sense to me, except that it looks like you’re
>>>>calling KeWaitForSingleObject from DISPATCH_LEVEL…!? or did you queue
>>>>a workitem (or similar) from within the cancel routine?
>>>>
>>>>Doron, i already looked at Walter Oney’s book, but i could not find
>>>>anything related to 2 different IRPs and synchronizing processing of
>>>>IRP1, cancellation of IRP1, and creation & completion of IRP2, or
>>>>anything similar…
>>>>
>>>>what about the following solution (simplified):
>>>>
>>>>Cancel(…, PIRP pHigherIrp):
>>>>[…]
>>>>    KeAcquireSpinLockAtDpcLevel(&lock1);
>>>>
>>>>    pLowerIrp = (PIRP)pHigherIrp->Tail.Overlay.DriverContext[0];
>>>>    pHigherIrp->Tail.Overlay.DriverContext[0] = NULL;
>>>>
>>>>    KeAcquireSpinLockAtDpcLevel(&lock2);
>>>>    KeReleaseSpinLockFromDpcLevel(&lock1);
>>>>
>>>>    if (NULL != pLowerIrp)
>>>>        IoCancelIrp(pLowerIrp);
>>>>    KeReleaseSpinLockFromDpcLevel(&lock2);
>>>>
>>>>Completion(…, PIRP pLowerIrp):
>>>>    KeAcquireSpinLock(&lock1, &oldIrql);
>>>>    oldCancelRoutine = IoSetCancelRoutine(pHigherIrp, NULL);
>>>>    pHigherIrp->Tail.Overlay.DriverContext[0] = NULL;
>>>>    KeReleaseSpinLock(&lock1, oldIrql);
>>>>    if (NULL == oldCancelRoutine)  // IRP was cancelled
>>>>    {
>>>>        KeAcquireSpinLock(&lock2, &oldIrql);
>>>>        IoFreeIrp(pLowerIrp);
>>>>        KeReleaseSpinLock(&lock2, oldIrql);
>>>>        […]
>>>>    } else
>>>>        IoFreeIrp(pLowerIrp);
>>>>
>>>>regards,
>>>>daniel.
>>>>
>>>>Arlie Davis wrote:
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>>You are safe in this situation. You cannot hold a spinlock when you
>>>>>call IoCallDriver. But another thread could immediately call
>>>>>IoCancelIrp on the IRP that you are submitting – which is legal. The
>>>>>lower driver is responsible for checking whether the IRP has been
>>>>>canceled, and if the lower driver queues the IRP, it is also
>>>>>responsible for arming it for cancellation, by setting a cancel
>>>>>routine. In all cases, this is safe.
>>>>>
>>>>>However, about your second question, you are always responsible for
>>>>>freeing an IRP that you allocate, whether it is canceled or not.
>>>>>IoCancelIrp does NOT free your IRP – it only informs the current
>>>>>processor of the IRP (whomever that might be) that the IRP should be
>>>>>canceled. “Canceled” just means “no longer processed, and completed
>>>>>with STATUS_CANCELLED”, but the IRP is STILL completed. Your
>>>>>completion routine is still called.
>>>>>
>>>>>When you call IoCancelIrp, you are NOT guaranteed that your completion
>>>>>routine has run before IoCancelIrp returns. It is completely legal for
>>>>>a driver not to have a cancel routine installed (the driver may be
>>>>>executing your IRP when you call IoCancelIrp), or the driver may not
>>>>>be able to return the IRP to the issuer in the scope of the cancel
>>>>>routine handler. For example, the IRP may reference memory, and the
>>>>>memory may have been submitted to a device as part of a DMA request;
>>>>>the driver cannot return the IRP to the caller while the DMA request
>>>>>is active.
>>>>>
>>>>>Whoever told you that IoCancelIrp frees the IRP is misinformed, and
>>>>>should be told so.
>>>>>
>>>>>IRP cancellation is very tricky. There are many subtleties, most of
>>>>>which take the form of race conditions, most of which cause your
>>>>>driver to explode. So if you are going to allocate, submit, and
>>>>>possibly cancel IRPs, you should spend the time to read up on these
>>>>>nuances. I believe OSR has some decent online articles on IRP
>>>>>cancellation. There are subtleties in implementing cancellable IRPs,
>>>>>and there are different (though, of course, related) subtleties in
>>>>>issuing and cancelling IRPs.
>>>>>
>>>>>Most importantly, you need to have a state machine that adequately
>>>>>models all of the potential states that an IRP can be in. Typically,
>>>>>I use these states: Idle, Active, ActiveCancelling, Complete, all
>>>>>protected by some implicit spinlock.
>>>>>
>>>>>state:
>>>>>KEVENT CancelEvent;
>>>>>KSPIN_LOCK SpinLock;
>>>>>PIRP Irp;
>>>>>
>>>>>Submit() {
>>>>>    Acquire spinlock
>>>>>    Verify that state is Idle
>>>>>    Set state to Active
>>>>>    Release spinlock
>>>>>    IoSetCompletionRoutine
>>>>>    IoCallDriver
>>>>>}
>>>>>
>>>>>CompletionRoutine() {
>>>>>    Acquire spinlock
>>>>>    if (state == Active) {
>>>>>        Release spinlock
>>>>>        Process data
>>>>>    } else if (state == ActiveCancelling) {
>>>>>        state = Idle
>>>>>        KeSetEvent(&CancelEvent);
>>>>>        Release spinlock
>>>>>    } else {
>>>>>        Release spinlock
>>>>>    }
>>>>>}
>>>>>
>>>>>Cancel() {
>>>>>    Acquire spinlock
>>>>>    if (state == Active) {
>>>>>        state = ActiveCancelling
>>>>>        Release spinlock
>>>>>        IoCancelIrp(irp)
>>>>>        KeWaitForSingleObject(&CancelEvent);
>>>>>    } else {
>>>>>        // not cancelable
>>>>>        Release spinlock
>>>>>    }
>>>>>}
>>>>>
>>>>>Or something similar.
>>>>>
>>>>>– arlie
>>>>>
>>>>>
>>>>>-----Original Message-----
>>>>>From: xxxxx@lists.osr.com
>>>>>[mailto:xxxxx@lists.osr.com] On Behalf Of L3sT4Rd
>>>>>Sent: Tuesday, February 28, 2006 3:38 PM
>>>>>To: Windows System Software Devs Interest List
>>>>>Subject: [ntdev] forwarding cancelled IRPs ?
>>>>>
>>>>>Hi,
>>>>>
>>>>>i think i already know the answer to my question but i’d like to be
>>>>>sure…
>>>>>
>>>>>is it a failure to send a driver-allocated IRP that has already been
>>>>>cancelled down to another driver? (in other words, will a set Cancel
>>>>>flag survive an IoCallDriver call, and will the lower driver
>>>>>(hopefully) recognize the Cancel flag after setting up a cancel
>>>>>routine and immediately return?)
>>>>>
>>>>>of course that’s only a very seldom race condition, in case my
>>>>>original IRP gets cancelled immediately after i dropped a
>>>>>driver-supplied spinlock that protects the original IRP’s
>>>>>CancelRoutine and just before i can call IoCallDriver…
>>>>>(i cannot hold the spinlock while calling IoCallDriver cause this
>>>>>could deadlock with the completion routine)
>>>>>
>>>>>and also, is it correct that i must not call IoFreeIrp (in its
>>>>>CompletionRoutine) for a driver-allocated IRP after calling
>>>>>IoCancelIrp on it (think someone told me so, but i can’t find
>>>>>anything about this in documentation)?
>>>>>
>>>>>regards,
>>>>>daniel.
>>>>>
>>>>>
>>>>>—
>>>>>Questions? First check the Kernel Driver FAQ at
>>>>>http://www.osronline.com/article.cfm?id=256
>>>>>
>>>>>To unsubscribe, visit the List Server section of OSR Online at
>>>>>http://www.osronline.com/page.cfm?name=ListServer

> -and also don’t have an internal IRP queue, because i only create a
> new IRP and immediately pass it

Then you do not need any cancel routines.

Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com

> Also, please note that this:
>
>     KeAcquireSpinLockAtDpcLevel(&lock2);
>     KeReleaseSpinLockFromDpcLevel(&lock1);
>
> is a recipe for disaster.

To avoid disasters here, the coder must define a “lesser-greater” relationship
between the locks, and never acquire the “greater” lock while holding the
“lesser” lock.

Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com

>> -and also don’t have an internal IRP queue, because i only create a
>> new IRP and immediately pass it down
>
> Then you do not need any cancel routines.

what if an application issued an asynchronous read request, but no data
is available, and the user tries to close the application?
i did already send down an IRP which is still pending, thus i can’t
complete the original IRP (which i am pending), can’t dereference the
pointer to the lower fileobject, and the calling process cannot close the
handle to my driver and thus cannot terminate…
or am i wrong here??

>> Also, please note that this:
>>
>>     KeAcquireSpinLockAtDpcLevel(&lock2);
>>     KeReleaseSpinLockFromDpcLevel(&lock1);
>>
>> is a recipe for disaster.
>
> To avoid disasters here, the coder must define a “lesser-greater”
> relationship between the locks, and never acquire the “greater” lock
> while holding the “lesser” lock.

i already discarded the idea of nested spinlocks ;)
(for reasons other list members pointed out)

This is Drivers 101. That is why any IRP pended in a driver that can take
longer than a few seconds to complete must have a cancel routine, so it can
be completed. If a cancel routine doesn’t clean up the IRP when told to do
so, the OS protects the system by keeping the required memory locked to your
almost-terminated process until the IRP finally completes or the system
reboots.

“L3sT4Rd” wrote in message news:xxxxx@ntdev…
>
>>> -and also don’t have an internal IRP queue, because i only create a
>>> new IRP and immediately pass it down
>>
>> Then you do not need any cancel routines.
>
> what if an application issued an asynchronous read request, but no data
> is available, and the user tries to close the application?
> i did already send down an IRP which is still pending, thus i can’t
> complete the original IRP (which i am pending), can’t dereference the
> pointer to the lower fileobject, and the calling process cannot close
> the handle to my driver and thus cannot terminate…
> or am i wrong here??
>
>