Virtual SCSI miniports + request completion…

Hi everyone,

I think everyone just hates this subject, so let’s kill this beast once and
for all. I’m talking about virtual SCSI miniports that complete requests
w/o use of either HwTimer or a call to ScsiPortNotification(
RequestTimerCall, … ).

I’ll describe the sequence of actions the SCSI miniport driver performs to
complete a request, and you tell me where (you think) it can block and why.
I still cannot see why it should block, and I cannot make it block either…

  1. The SCSI miniport registers itself with SCSIPORT, with
    MultipleRequestPerLu set to TRUE in the HW_INITIALIZATION_DATA structure
    it passes to ScsiPortInitialize( … ). Why this must be done is described
    below.

  2. The SCSI miniport is called at its HwStartIo entry point with some SRB
    carrying some SCSI opcode to execute (inquiry, read, write, mode sense,
    etc.).

SCSIPORT holds spinlocks…

  3. The HwStartIo routine creates a work item for this SRB (allocates
    memory, sets back pointers to the SRB and to the SCSI miniport itself,
    etc.), asks SCSIPORT for a new request by calling
    ScsiPortNotification( NextRequest, <target address> ), and RETURNS from
    HwStartIo with the SRB left pending, i.e. with ->SrbStatus set to
    SRB_STATUS_PENDING. (A code sketch of steps 3-5 follows the list below.)

We’re not in SCSIPORT any more and SCSIPORT does not hold any spinlocks.

  4. The work item callback now gets called by the system at PASSIVE_LEVEL
    and in system context. It does all the required manipulations (calls Zw*
    code to read/write the files, calls the TDI client code to send/receive
    data over the TCP connection, etc.) to process the SCSI opcode in the
    SRB. Then it constructs a new SRB with a special vendor SCSI opcode (let
    it be 0xFE), sets the new SRB’s data buffer pointer to the address of
    the old “real” SRB (stored in the work item memory at step 3), and calls
    the SCSI miniport through its pointers, stored in the work item (step 3)
    as well.

  5. The SCSI miniport is called at its HwStartIo entry with the SRB
    constructed at step 4, carrying opcode 0xFE and the old SRB to complete
    in the SRB’s data buffer pointer. The miniport distinguishes opcode 0xFE
    from the others and just calls ScsiPortNotification( RequestComplete, … )
    for the old SRB passed in the data buffer pointer, calls
    ScsiPortNotification( RequestComplete, … ) again for the SRB it was
    actually called with (constructed at step 4 with the 0xFE opcode), and
    calls ScsiPortNotification( NextRequest, <target address> ) to ask
    SCSIPORT for new requests.

Attention! That’s why we enabled multiple requests per logical unit at
step 1. If we did not, we would deadlock at step 5, because the SCSI
miniport would not be able to process the passed SRB until it had completed
the previous one. It just would not get it.

As the call into the SCSI miniport to complete the previous request is made
from the work item, and not from the SCSI miniport context (when SCSIPORT
spinlocks are held), I see no problem at all. On an SMP machine, an MPP
machine, or a uniprocessor alike.

Maybe you’ll point me to where the problem is?

Again, there is no problem in completing multiple SRBs in HwStartIo, as
we’ve declared that we can process multiple requests per LU, and it’s our
business how many we’ve enqueued inside the miniport.

Any comments, ideas and remarks are welcome!!!

P.S. No LU queue length comments, OK? There can be tons of workarounds as
well. A separate LU to complete requests for every LU. Nobody will touch
your private LU responding as a “processor device” to SCSI inquiry, except
you in your driver. No need to worry.

With respect,
Anton Kolomyeytsev



I believe your work-queue processing is NOT within the context of ScsiPort.
Certainly I find no ScsiPort function call that permits you to build a
worker thread within ScsiPort and then schedule a work item to that
thread.

Given that ScsiPort does not support worker threads, you must be
initializing the worker thread outside of ScsiPort and calling a function
in a source module compiled with ntddk.h to schedule that thread. When your
work-item callback function is entered, you are outside of ScsiPort,
considered foreign, and as a consequence ignored by any call you may make
to ScsiPort. You can get into ScsiPort context by using a RequestTimerCall
and processing an SRB queue in the eventual call to the ScsiPort DPC, or by
hooking an interrupt and again processing a queue of SRBs in the ScsiPort
DPC. By far the easiest is to use the RequestTimerCall.
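For reference, a rough sketch of that RequestTimerCall route. The
MINIPORT_EXT layout and the handoff helpers are hypothetical; only the
ScsiPortNotification calls are real DDK API, and the polling interval is
the latency cost of this route.

typedef struct _MINIPORT_EXT {
    LONG OutstandingCount;  // SRBs handed to the worker, not yet completed
    // handoff queue shared with the worker thread - fields elided
} MINIPORT_EXT;

// Hypothetical helpers: plain shared memory may be touched from anywhere;
// only ScsiPortXxx calls need ScsiPort context.
VOID                HandOffToWorker( MINIPORT_EXT *Ext, PSCSI_REQUEST_BLOCK Srb );
PSCSI_REQUEST_BLOCK PopFinishedSrb( MINIPORT_EXT *Ext );
BOOLEAN             HwScsiTimer( PVOID DeviceExtension );

// HwStartIo pends the SRB, hands it to the worker, and arms the timer.
BOOLEAN HwStartIo( PVOID DeviceExtension, PSCSI_REQUEST_BLOCK Srb )
{
    MINIPORT_EXT *Ext = (MINIPORT_EXT *)DeviceExtension;

    Srb->SrbStatus = SRB_STATUS_PENDING;
    InterlockedIncrement( &Ext->OutstandingCount );
    HandOffToWorker( Ext, Srb );

    ScsiPortNotification( RequestTimerCall, Ext, HwScsiTimer, 250 ); // usec
    ScsiPortNotification( NextRequest, Ext );
    return TRUE;
}

// Called back by SCSIPORT in its own synchronized context, so
// RequestComplete is honored here.
BOOLEAN HwScsiTimer( PVOID DeviceExtension )
{
    MINIPORT_EXT *Ext = (MINIPORT_EXT *)DeviceExtension;
    PSCSI_REQUEST_BLOCK Done;

    while ((Done = PopFinishedSrb( Ext )) != NULL) {
        ScsiPortNotification( RequestComplete, Ext, Done );
        InterlockedDecrement( &Ext->OutstandingCount );
    }
    if (Ext->OutstandingCount != 0) {
        // Not done yet - keep polling.
        ScsiPortNotification( RequestTimerCall, Ext, HwScsiTimer, 250 );
    }
    return TRUE;
}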

Gary G. Little
Staff Engineer
Broadband Storage, Inc.
xxxxx@broadstor.com


> I believe your work-queue processing is NOT within the context of ScsiPort.
> Certainly I find no ScsiPort function call that permits you to build a
> worker thread within ScsiPort and then schedule a work item to that
> thread.

Yes, of course. I just use ExQueueWorkItem(). A virtual SCSI miniport is not
a real one, so it imports code not only from SCSIPORT.SYS but from other
kernel-mode DLLs as well. This is not a problem.

> Given that ScsiPort does not support worker threads, you must be
> initializing the worker thread outside of ScsiPort and calling a function
> in a source module compiled with ntddk.h to schedule that thread.

Yes!

> When your work-item callback function is entered, you are outside of
> ScsiPort, considered foreign, and as a consequence ignored by any call you
> may make to ScsiPort.

Hmm… The work item just builds an IRP with the SRB and passes it to the
miniport. No direct call. The SRB comes to the miniport the ordinary way,
through SCSIPORT.SYS. As IOCTL_SCSI_MINIPORT, for example.

> You can get into ScsiPort context by using a RequestTimerCall and
> processing an SRB queue in the eventual call to the ScsiPort DPC, or by
> hooking an interrupt and again processing a queue of SRBs in the ScsiPort
> DPC. By far the easiest is to use the RequestTimerCall.

You do not understand… The whole idea is to avoid using timer calls. To
avoid their latency. That’s the point.



Oh, but I do understand… I’m working on my own virtual SCSI miniport
layered on top of my own bus driver, and have much the same problem. You
cannot call a ScsiPort function from a foreign function. You must be within
the context of ScsiPort to call a ScsiPort function. Therefore, your work
item can be triggered by your miniport, and it can then process an SRB or a
queue of SRBs, but it cannot complete an SRB. Calling ScsiPortCompleteRequest
completes an SRB, and you can only call that function from within the
context of ScsiPort. Well, you could complete the IRP surrounding the SRB,
but I doubt you really want to do that.

By context I mean that ScsiPort itself called the function that called you
directly. Your worker thread, by definition, cannot be called directly by
ScsiPort and is treated as foreign.

From your original post:

> As the call into the SCSI miniport to complete the previous request is
> made from the work item, and not from the SCSI miniport context (when
> SCSIPORT spinlocks are held), I see no problem at all. On an SMP machine,
> an MPP machine, or a uniprocessor alike.

The call to complete the previous request was made from the work item,
outside the context of ScsiPort. ScsiPort ignored it and did not complete
the SRB you thought it completed.

As to building an IRP with an SRB… now we really have a problem, since the
standard device queue, if that’s what ScsiPort uses, generally only permits
one entry on the queue at a time. If ScsiPort manages its own queues, then
where are the queues, and where do you insert your IRP into that queue? I do
believe the docs say monkeying with ScsiPort queues is verboten.

Gary G. Little
Staff Engineer
Broadband Storage, Inc.
xxxxx@broadstor.com


> Attention! That’s why we enabled multiple requests per logical unit at
> step 1. If we did not, we would deadlock at step 5, because the SCSI
> miniport would not be able to process the passed SRB until it had
> completed the previous one. It just would not get it.

Even in this case SCSIPORT will not allow 2 SRBs for the same LUN with the
same QueueTag value to go through the miniport. Thus the number of
concurrent SRBs per LUN is limited. This can cause your code to deadlock.

> Nobody will touch your private LU responding as a “processor device” to
> SCSI inquiry, except you in your driver.

For now - maybe yes, though different OSes can have different rules for such
stuff.

Max



> As to building an IRP with an SRB… now we really have a problem, since
> the standard device queue, if that’s what ScsiPort uses, generally only
> permits one entry on the queue at a time. If ScsiPort manages its own
> queues, then where are the queues, and where do you insert your IRP into
> that queue? I do believe the docs say monkeying with ScsiPort queues is
> verboten.

It has two queues: the first is the per-LUN queue, the second is the
per-miniport device queue.

Max



> Oh, but I do understand… I’m working on my own virtual SCSI miniport
> layered on top of my own bus driver, and have much the same problem.

Right! That’s what I did some time ago. My own virtual SCSI port + my own
miniports. But I do not have the context problem, as I use an absolutely
different driver/minidriver architecture. Not the MS-style miniport model.
Maybe less flexible, but much simpler to write. If you’re interested,
let me know. If it worked for me, it can work for you and others too.

What I’m trying to do right now is to make a SCSI miniport that works
with MS SCSIPORT and completes requests w/o use of HwTimer or
ScsiPortNotification( RequestTimerCall, … ).

> You cannot call a ScsiPort function from a foreign function. You must be
> within the context of ScsiPort to call a ScsiPort function. Therefore,
> your work item can be triggered by your miniport, and it can then process
> an SRB or a queue of SRBs, but it cannot complete an SRB. Calling
> ScsiPortCompleteRequest completes an SRB, and you can only call that
> function from within the context of ScsiPort.

To be more correct, you can call ScsiPortXxx from contexts other than
SCSIPORT, but these calls will be ignored (that’s what I’ve seen). Maybe
there could be even bigger problems. But I do not care, as I really do not
do this.

> Well, you could complete the IRP surrounding the SRB, but I doubt you
> really want to do that.

No, I do not want to do this. The IRP will be completed and deallocated by
SCSIPORT itself.

> By context I mean that ScsiPort itself called the function that called
> you directly. Your worker thread, by definition, cannot be called directly
> by ScsiPort and is treated as foreign.

I understand what you mean. See below…

> The call to complete the previous request was made from the work item,
> outside the context of ScsiPort. ScsiPort ignored it and did not complete
> the SRB you thought it completed.

I used the wrong word. My fault… When I was talking about the IRP passed to
SCSIPORT, I thought you would guess that I was not calling ScsiPortXxx
directly from the work item callback. What I did in the work item is:

SCSI_REQUEST_BLOCK Srb;
KEVENT Event;
IO_STATUS_BLOCK IoStatus;

KeInitializeEvent( &Event, NotificationEvent, FALSE );
RtlZeroMemory( &Srb, sizeof( Srb ) );

PIRP pIrp = IoBuildDeviceIoControlRequest( IOCTL_SCSI_MINIPORT,
    pDeviceObject, …, FALSE, &Event, &IoStatus ); // buffer arguments elided
PIO_STACK_LOCATION pIoStackLocation = IoGetNextIrpStackLocation( pIrp );
pIoStackLocation->Parameters.Scsi.Srb = &Srb; // stack location takes a pointer

// SCSI address of the other virtual target used for completion,
// or of the real target that pScsiRequestBlock was issued to.
// Depends on the flags we’ll set in the SRB.

Srb.PathId = 0;
Srb.TargetId = 0;
Srb.Lun = 0;

Srb.DataBuffer = pScsiRequestBlock; // address of the SRB I want to complete

Srb.SrbFlags |= SRB_FLAGS_BYPASS_FROZEN_QUEUE; // ONLY if the target is the same

// other SRB fields set here

IoCallDriver( pDeviceObject, pIrp ); // miniport’s device object, then the IRP

KeWaitForSingleObject( &Event, Executive, KernelMode, FALSE, NULL );

So, in English and not in C: I allocate an IRP with the IOCTL_SCSI_MINIPORT
control code (now it can pass through to the miniport), set the SCSI address
either to another SCSI target used only for completion of requests, or to
the real target, and call the miniport (through SCSIPORT, of course), so
SCSIPORT will get this request and forward it to the miniport. You see, I do
not call ScsiPortXxx from the work item callback directly.

> As to building an IRP with an SRB… now we really have a problem, since
> the standard device queue, if that’s what ScsiPort uses, generally only
> permits one entry on the queue at a time.

I’m not sure I understand you. SCSIPORT keeps a queue of requests for each
LUN it found on the SCSI bus and submits SRBs to the miniport after the
miniport calls ScsiPortNotification( Next[Lu]Request, … ). What do you mean
by “one entry”? If you declare at miniport registration that you support
multiple requests per logical unit, you can call
ScsiPortNotification( NextLuRequest, … ) and enqueue SRBs internally in the
miniport. SCSIPORT keeps count and will not allow you to have more than
254 (???) SRBs pending. It just will not call the miniport’s HwStartIo
entry until you call ScsiPortNotification( RequestComplete, … ) for some of
the SRBs you are currently processing. But there will definitely be more
than one SRB in progress. Please point out where I’m wrong.

> If ScsiPort manages its own queues, then where are the queues, and where
> do you insert your IRP into that queue? I do believe the docs say
> monkeying with ScsiPort queues is verboten.

See, I do not touch the SCSIPORT queue at all if I submit requests to the
other target (not to the one the SRB to complete came from). The request
comes from the work item (so we do not hold any SCSIPORT spinlocks), and
since it’s an ordinary IOCTL_SCSI_MINIPORT call that could come from any
user-mode app in the system, or from any driver as well, I do not see any
problems with the queue. I do not deal with it directly. SCSIPORT manages
it on its own.

You’ll really have to set SRB_FLAGS_BYPASS_FROZEN_QUEUE if you want to use
the same target for completion requests, but I do not think it’s a good
idea. I’m not even going to try it.

Summary: no spinlocks held in SCSIPORT (the call is initiated from the work
item), no problems with calling ScsiPortXxx from other contexts (as the
call will end up in the miniport, in SCSIPORT context). No problem with the
queues, as you submit the IRPs to another target that does not delay
processing them and just calls ScsiPortNotification( RequestComplete, … )
on the fly. I mean, w/o leaving SCSIPORT context.
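For completeness, the miniport end of this is tiny. A minimal sketch,
assuming the completion request shows up at HwStartIo as
SRB_FUNCTION_IO_CONTROL aimed at the dedicated “completion” target, with
the SRB to finish stashed in DataBuffer as in the fragment above:

// Inside HwStartIo, on the "completion" target only:
if (Srb->Function == SRB_FUNCTION_IO_CONTROL) {
    PSCSI_REQUEST_BLOCK Old = (PSCSI_REQUEST_BLOCK)Srb->DataBuffer;

    Old->SrbStatus = SRB_STATUS_SUCCESS;
    ScsiPortNotification( RequestComplete, DeviceExtension, Old );

    Srb->SrbStatus = SRB_STATUS_SUCCESS;
    ScsiPortNotification( RequestComplete, DeviceExtension, Srb );

    // This target never delays anything, so immediately ask for more.
    ScsiPortNotification( NextLuRequest, DeviceExtension,
                          Srb->PathId, Srb->TargetId, Srb->Lun );
    return TRUE;
}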

What will you say about this?



I specifically asked not to touch queue length, as this is just a question
of the implementation… But if you want to talk about it, let’s do it.

> Even in this case, SCSIPORT will not allow 2 SRBs for the same LUN with
> the same QueueTag value to go through the miniport.

  1. SCSIPORT pays attention to QueueTag only if you’ve registered the
    miniport as supporting tagged command queuing (set TaggedQueuing to TRUE
    in the PORT_CONFIGURATION_INFORMATION passed to the ScsiPortInitialize
    call). You’re not forced to do this.

  2. If you want to write tagged command support but wish to avoid tagging
    for some SRB, you can construct that SRB with QueueTag set to
    SP_UNTAGGED, and it will be treated by SCSIPORT as untagged (see the
    sketch after this list).

  3. I do not think IOCTL_SCSI_MINIPORT (SRB_FUNCTION_IO_CONTROL) uses tags
    at all. IMHO.
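For reference, a sketch of where these knobs live, against the NT4/W2K DDK
structures (field spellings per srb.h; treat the exact placement as an
assumption):

// DriverEntry: declare the queuing support at registration time (item 1).
HW_INITIALIZATION_DATA hwInitData;

RtlZeroMemory( &hwInitData, sizeof( hwInitData ) );
hwInitData.HwInitializationDataSize = sizeof( hwInitData );
hwInitData.TaggedQueuing        = TRUE;
hwInitData.MultipleRequestPerLu = TRUE;
// ... entry points, extension sizes, adapter interface type, etc. ...
ScsiPortInitialize( DriverObject, RegistryPath, &hwInitData, NULL );

// Item 2: an SRB built to go through untagged even on a tagged-queue LU.
Srb.QueueTag  = SP_UNTAGGED;
Srb.SrbFlags &= ~SRB_FLAGS_QUEUE_ACTION_ENABLE;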

> Thus the number of concurrent SRBs per LUN is limited. This can cause
> your code to deadlock.

Right. But there can be tons of workarounds.

  1. You call ScsiPortNotification( NextLuRequest, … ) in your miniport but
    do not accept “real” commands until the command to complete the previous
    SRB comes in. For this you just complete these “temporarily invalid”
    requests with SRB_STATUS_BUSY, so SCSIPORT will resubmit them to you
    later. To make SCSIPORT pass your SRB with the embedded SRB to complete,
    you’ll have to set SRB_FLAGS_BYPASS_FROZEN_QUEUE in this SRB. In this
    case SCSIPORT jumps over IoStartNextPacketByKey and goes with
    IoStartNextPacket for the IRP you’ve constructed. But I do not think
    it’s a very good idea… I just do not like it.

  2. You can send the SRBs with embedded SRBs to complete not to the same
    target but to another target on the same miniport. This target will
    just accept your SRB, complete both SRBs (the SRB you constructed and
    the SRB embedded in it that you wish to complete), and call
    ScsiPortNotification( NextLuRequest, … ) to ask SCSIPORT for more
    requests to this “completing” target. So: two calls to complete SRBs
    and one call to ask for new ones, for a target that does not delay SRB
    completion. Other targets, which do delay completion, do not call the
    SRB-completion code at all; they just ask for new SRBs. I personally
    like this solution, if you care.

> > Nobody will touch your private LU responding as a “processor device” to
> > SCSI inquiry, except you in your driver.
>
> For now - maybe yes, though different OSes can have different rules for
> such stuff.

OK. Let’s assume you have one more virtual device on the bus. A virtual
CDROM drive. It responds to SCSIOP_INQUIRY and SCSIOP_START_STOP_UNIT. Any
other SCSI command sent to it, it just aborts with CHECK CONDITION status
and “no media inserted” sense/additional sense data. This is an emulated
CDROM that pretends it has no media and never will have any. So it will be
claimed and controlled by the CDROM class driver, and you can forecast its
behavior. You’ll not need to rewrite your driver, as this kind of device
can hardly change its behavior in the next few years. And this virtual
CDROM drive (target) responds to SRB_FUNCTION_IO_CONTROL in addition to
SRB_FUNCTION_EXECUTE_SCSI, and for this I/O control it just finishes the
requests for other targets that need some work done in a context other
than SCSIPORT’s. And this virtual “completion” CDROM does not need to pass
its work to anyone (away from SCSIPORT context). That’s all…

What do you think about all this? And would you please say something about
the deadlock in SCSIPORT you’ve spent so many beautiful words on before? I
still cannot determine where you see it…



Anton,

I think your design works. You appear to have done your homework. If I
understand correctly, your work-item IOCTL_SCSI_MINIPORT IRPs are targeted
at a ‘special’ dummy LU that exists for the sole purpose of initiating the
completion of other SRBs. This works for me, and I think there is no major
design issue, other than that you are working outside the boundaries of the
scsiport/miniport framework. You might want to consider a worker
thread/work queue/event model rather than queueing a work item per SRB, but
that is a performance, rather than a design, issue.


Mark,

> I think your design works. You appear to have done your homework.

“Homework”. This is really the best name for this task -)

> If I understand correctly, your work-item IOCTL_SCSI_MINIPORT IRPs are
> targeted at a ‘special’ dummy LU that exists for the sole purpose of
> initiating the completion of other SRBs.

Exactly. This is the second variant of the “homework” solution.

> This works for me, and I think there is no major design issue, other than
> that you are working outside the boundaries of the scsiport/miniport
> framework.

You see… This is a virtual driver. It does not attach to any real
hardware. At the same time it does “something”. And most of
this “something” (like calling Zw* file I/O code, calling TDI, etc.)
must be done at PASSIVE_LEVEL and in a known (preferably system)
context. So I’ll just have to work outside the boundaries of
SCSIPORT/miniport if I want my driver to do some work. There is
no overhead. Worker threads/items are tools that will stay with
virtual drivers. It does not matter whether these drivers are full port
or only miniport drivers, or whether they use a timer callback to
complete SRBs or some other approach - worker threads/items will
be with them. Using them to complete requests in SCSIPORT
miniports is just another task for them. One more. But not the only
one -)

> You might want to consider a worker thread/work queue/event model rather
> than queueing a work item per SRB, but that is a performance, rather than
> a design, issue.

Right. Optimization comes next… But “will it work, or will it
crash because of ideological reasons?” - that was the question. And I’m
not much interested in optimization, as I’m going to use my own SCSI
port driver, not MS SCSIPORT miniports.

And do you want to know the most interesting thing? I’ve tested the
first variant of the solution - the one where the LU sends SRBs to
itself to complete its own requests. See, my SCSI miniport was written
in a way that did not use MultipleRequestPerLu, so it was harder
for me to test the “civil” solution. That’s why I went first with
the real “self-call” one, where I had to set pScsiRequestBlock->
SrbFlags = ( SRB_FLAGS_DATA_IN | SRB_FLAGS_NO_QUEUE_FREEZE |
SRB_FLAGS_BYPASS_FROZEN_QUEUE ) and call SCSIPORT with IRP_MJ_SCSI
and some vendor-specific SCSI opcode to complete the request.
This works as well! And it does not require an extra logical unit or
significant miniport code modification (when someone’s miniport
does not support multiple requests per logical unit). So I think
this solution to my “homework” has a right to live too. I’ve
checked the source code of the disk class driver, and it seems to
utilize explicit queue flags (the disk class driver tells SCSIPORT how
to queue the IRPs) just as I do (well, very close to what I do).
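For the record, the work item side of that “self-call” variant looked
roughly like this (a sketch: the completion routine, the remaining SRB
fields and all error handling are omitted, and IRP_MJ_SCSI is just the
storage alias of IRP_MJ_INTERNAL_DEVICE_CONTROL):

SCSI_REQUEST_BLOCK Srb;
PIRP pIrp;
PIO_STACK_LOCATION pSp;

RtlZeroMemory( &Srb, sizeof( Srb ) );
Srb.Length     = sizeof( Srb );
Srb.Function   = SRB_FUNCTION_EXECUTE_SCSI;
Srb.CdbLength  = 6;
Srb.Cdb[0]     = 0xFE;               // vendor-specific "complete" opcode
Srb.DataBuffer = pScsiRequestBlock;  // the SRB we want completed
Srb.SrbFlags   = SRB_FLAGS_DATA_IN |
                 SRB_FLAGS_NO_QUEUE_FREEZE |
                 SRB_FLAGS_BYPASS_FROZEN_QUEUE;  // jump the LU queue

pIrp = IoAllocateIrp( pDeviceObject->StackSize, FALSE );
Srb.OriginalRequest = pIrp;

pSp = IoGetNextIrpStackLocation( pIrp );
pSp->MajorFunction       = IRP_MJ_SCSI;
pSp->Parameters.Scsi.Srb = &Srb;

// ... IoSetCompletionRoutine / event wait as in the earlier fragment ...
IoCallDriver( pDeviceObject, pIrp );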

With respect,
Anton Kolomyeytsev



I gotta study this, and right now I have another conflagration that I must
stomp out. You just might have a solution I have been looking for in my
Windows 2000 driver.

Gary G. Little
Staff Engineer
Broadband Storage, Inc.
xxxxx@broadstor.com


Gary,

It would be just great if you have some time to spend checking this code.

With respect,
Anton Kolomyeytsev

> I gotta study this, and right now I have another conflagration that I
> must stomp out. You just might have a solution I have been looking for in
> my Windows 2000 driver.



> 1) SCSIPORT pays attention to QueueTag only if you’ve registered the
> miniport as supporting tagged command queuing (set TaggedQueuing to TRUE
> in the PORT_CONFIGURATION_INFORMATION passed to the ScsiPortInitialize
> call). You’re not forced to do this.

Wrong.
First of all, TaggedQueuing and MultipleRequestPerLu in
PORT_CONFIGURATION_INFORMATION mean the same thing from SCSIPORT’s point of
view.
Second, SCSIPORT invents the queue tags, not the upper class driver. So
SCSIPORT does not pay attention to the QueueTag of incoming SRBs.

> 2) If you want to write tagged command support but wish to avoid tagging
> for some SRB, you can construct that SRB with QueueTag set to SP_UNTAGGED,
> and it will be treated by SCSIPORT as untagged.

This SRB will block all other SRBs going to this LUN until all the other
SRBs have completed.

> 3) I do not think IOCTL_SCSI_MINIPORT (SRB_FUNCTION_IO_CONTROL) uses tags
> at all. IMHO.

I’m afraid it is treated as SP_UNTAGGED and thus cannot run concurrently
with any other SRBs to the same LUN.
BTW - IOCTL_SCSI_MINIPORT always goes to the first existing LUN.

> … do not accept “real” commands until the command to complete the
> previous SRB comes in. For this you just complete these “temporarily
> invalid” requests with SRB_STATUS_BUSY, so SCSIPORT will resubmit them to
> you later.

After a second passes?

> To make SCSIPORT pass your SRB with the embedded SRB to complete, you’ll
> have to set SRB_FLAGS_BYPASS_FROZEN_QUEUE in this SRB. In this case
> SCSIPORT jumps over IoStartNextPacketByKey and goes with IoStartNextPacket
> for the IRP you’ve constructed. But I do not think it’s a very good idea…

Yes, and this seems to really work, though it can possibly introduce
nasty problems with w2k’s power management.

> 2) You can send the SRBs with embedded SRBs to complete not to the same
> target but to another target on the same miniport. This target will just…

Making a special virtual target in the miniport for “internal use only”? Is
that good?

Max



> You might want to consider a worker thread/work queue/event model rather
> than queueing a work item per SRB, but that is a performance, rather than
> a design, issue.

This is a must, in fact.
Using Ex work items (used internally by the FSDs and Cc) in code which is
below the filesystem is a bad idea; the number of Ex worker threads is
limited, so you can get a deadlock.

Max



> > 1) SCSIPORT pays attention to QueueTag only if you’ve registered the
> > miniport as supporting tagged command queuing (set TaggedQueuing to TRUE
> > in the PORT_CONFIGURATION_INFORMATION passed to the ScsiPortInitialize
> > call). You’re not forced to do this.
>
> Wrong.
> First of all, TaggedQueuing and MultipleRequestPerLu in
> PORT_CONFIGURATION_INFORMATION mean the same thing from SCSIPORT’s point
> of view.

Thank you for pointing that out. But this does not change anything.

> Second, SCSIPORT invents the queue tags, not the upper class driver. So
> SCSIPORT does not pay attention to the QueueTag of incoming SRBs.

Now I see we’re talking about different things.

> > 2) If you want to write tagged command support but wish to avoid
> > tagging for some SRB, you can construct that SRB with QueueTag set to
> > SP_UNTAGGED, and it will be treated by SCSIPORT as untagged.
>
> This SRB will block all other SRBs going to this LUN until all the other
> SRBs have completed.

You can make SCSIPORT ignore all the requests it has already queued. I’ve
already written how to do it.

> > 3) I do not think IOCTL_SCSI_MINIPORT (SRB_FUNCTION_IO_CONTROL) uses
> > tags at all. IMHO.
>
> I’m afraid it is treated as SP_UNTAGGED and thus cannot run concurrently
> with any other SRBs to the same LUN.
> BTW - IOCTL_SCSI_MINIPORT always goes to the first existing LUN.

That’s even better, as you can make the “completion” target have address
0:0:0, so all IOCTL_SCSI_MINIPORT calls will go to this target.

> > … For this you just complete these “temporarily invalid” requests with
> > SRB_STATUS_BUSY, so SCSIPORT will resubmit them to you later.
>
> After a second passes?

Right. But I do not like this.

> > To make SCSIPORT pass your SRB with the embedded SRB to complete,
> > you’ll have to set SRB_FLAGS_BYPASS_FROZEN_QUEUE in this SRB. In this
> > case SCSIPORT jumps over IoStartNextPacketByKey and goes with
> > IoStartNextPacket for the IRP you’ve constructed. But I do not think
> > it’s a very good idea…
>
> Yes, and this seems to really work, though it can possibly introduce
> nasty problems with w2k’s power management.

Why? Do you have problems with power management in your virtual targets?
And as I said, you can skip this “completion” target and just make SCSIPORT
ignore its own queueing.

> > 2) You can send the SRBs with embedded SRBs to complete not to the same
> > target but to another target on the same miniport. This target will
> > just…
>
> Making a special virtual target in the miniport for “internal use only”?
> Is that good?

Well… I do not know. Mylex uses one to communicate with the arrays on their
RAID controllers, and the HPT366 IDE RAID vendors have an extra virtual
target as well. Maybe I can have one too? And I’ll say it a third time: you
can skip using another target for completion. That’s just one variant of my
“homework” solution.



> This is a must, in fact.
> Using Ex work items (used internally by the FSDs and Cc) in code which is
> below the filesystem is a bad idea; the number of Ex worker threads is
> limited, so you can get a deadlock.

You can count your requests and limit the number of concurrent ones (in
fact, you’ll never have more outstanding requests than SCSI targets if
you’re not queueing requests internally in the miniport), or you can have
one worker thread and use your own serialization mechanism to avoid using
work items.

I thought that if there are no free system worker threads, my work item
callback would just be called later; I never thought it could deadlock the
system. Where did you get this information?

How do you call Zw* stuff w/o use of threads and work items in your
virtual drivers?



We developed a driver in the past that used ExXxxWorkItems. The
driver was a disk filter. If the FSD has used all of the work items to
process FSD requests, then when you get to the lower driver there are no
work items left. So you get a deadlock: the FSD is waiting for the
disk, and the disk is waiting for a work item, which will not become
available until the FSD releases one. DEADLOCK. I have seen it several
times in several drivers; my own and other clients’ drivers.

It is far better to write your own work item code using
PsCreateSystemThread(). We have such code, and it is guaranteed to be
deadlock-free.

If you want a copy, send me a private email.

Jamey
xxxxx@storagecraft.com


> We developed a driver in the past that used ExXxxWorkItems. The
> driver was a disk filter. If the FSD has used all of the work items to
> process FSD requests, then when you get to the lower driver there are no
> work items left. So you get a deadlock: the FSD is waiting for the
> disk, and the disk is waiting for a work item, which will not become
> available until the FSD releases one. DEADLOCK. I have seen it several
> times in several drivers; my own and other clients’ drivers.

Now I understand. I’d imagined something like this… The only problem I
see: if the system does not crash in your driver (because the FSD holds
all the work items, I mean all the system threads are blocked in work item
callbacks), it will crash on the very first call to ExQueueWorkItem() (or
the IoXxx work item code). So whether it is your driver or not, the system
will crash in any case. Because of low resources… Am I correct?

> It is far better to write your own work item code using
> PsCreateSystemThread(). We have such code, and it is guaranteed to be
> deadlock-free.

And use KeAttachProcess()/KeDetachProcess() in your own thread?

> If you want a copy, send me a private email.

Thank you very much for your code.



If you look at FASTFAT, it uses an internal counter and will not post
more than two (IIRC) work items at a time. This is to keep the FSD from
deadlocking on itself, or with anyone else using the work item
components, I suspect.

In my first disk filter driver, I was using a work item for the read
path and a work item for the write path. As soon as the disk activity
got high, I would deadlock. I reworked the code to use two separate
threads. No more deadlocks.

In our more recent work item code, we create and destroy threads
dynamically as required to prevent deadlock.

Jamey
xxxxx@storagecraft.com



> Now I understand. I’d imagined something like this… The only problem I
> see: if the system does not crash in your driver (because the FSD holds
> all the work items, I mean all the system threads are blocked in work
> item callbacks), it will crash on the very first call to
> ExQueueWorkItem() (or the IoXxx work item code). So whether it is your
> driver or not, the system will crash in any case. Because of low
> resources… Am I correct?

Not exactly. There is no problem with low resources (such as memory). The
problem occurs when all the worker threads are blocked and something which
would wake them up is queued and waits for a free worker thread. No crash,
just deadlock. Typically, you can move the mouse and use already started
programs, but every operation which calls the FSD blocks. I also saw this
problem several times under quite different circumstances, for example when
an NDIS IM driver was started. The standard way to avoid it is to use your
own worker thread, as Jamey mentioned.

There is also a workaround - you can raise the number of system worker
threads. Go to HKLM\SYSTEM\CCS\Control\Session Manager\Executive. There are
two DWORDs, AdditionalCriticalWorkerThreads and
AdditionalDelayedWorkerThreads, typically set to zero. You can raise them,
and the system will create more threads on the next start. I don’t think
it is a good general solution, just a workaround. Once I had to use it when
I installed some commercial software that included a driver which used
worker threads incorrectly and locked up during installation. The worst
problem with worker thread deadlocks is unpredictability. Sometimes the
deadlock occurs on one machine and all the others have no problem. This was
the case with the previously mentioned application.

> > It is far better to write your own work item code using
> > PsCreateSystemThread(). We have such code, and it is guaranteed to be
> > deadlock-free.
>
> And use KeAttachProcess()/KeDetachProcess() in your own thread?

No. Instead, build your own technique similar to ExQueueWorkItem for your
thread. The thread can wait for a semaphore which is signaled when a work
item is queued; it dequeues the item, processes it, and waits again.
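A minimal sketch of such a private work queue (the names are mine; the
thread creation via PsCreateSystemThread and all shutdown handling are
omitted):

typedef struct _MY_WORK_QUEUE {
    LIST_ENTRY Head;       // InitializeListHead()
    KSPIN_LOCK Lock;       // KeInitializeSpinLock()
    KSEMAPHORE Semaphore;  // KeInitializeSemaphore( ..., 0, MAXLONG )
} MY_WORK_QUEUE;

typedef struct _MY_WORK_ITEM {
    LIST_ENTRY Entry;
    VOID     (*Callback)( PVOID Context );
    PVOID      Context;
} MY_WORK_ITEM;

// Callable from anywhere at IRQL <= DISPATCH_LEVEL.
VOID MyQueueWorkItem( MY_WORK_QUEUE *Queue, MY_WORK_ITEM *Item )
{
    ExInterlockedInsertTailList( &Queue->Head, &Item->Entry, &Queue->Lock );
    KeReleaseSemaphore( &Queue->Semaphore, IO_NO_INCREMENT, 1, FALSE );
}

// The thread body, started once with PsCreateSystemThread(); it runs at
// PASSIVE_LEVEL in system context, so Zw* and TDI calls are safe here.
VOID MyWorkerThread( PVOID StartContext )
{
    MY_WORK_QUEUE *Queue = (MY_WORK_QUEUE *)StartContext;

    for (;;) {
        PLIST_ENTRY   Entry;
        MY_WORK_ITEM *Item;

        KeWaitForSingleObject( &Queue->Semaphore, Executive,
                               KernelMode, FALSE, NULL );
        Entry = ExInterlockedRemoveHeadList( &Queue->Head, &Queue->Lock );
        Item  = CONTAINING_RECORD( Entry, MY_WORK_ITEM, Entry );
        Item->Callback( Item->Context );
    }
}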

Best regards,

Michal Vodicka
Veridicom
(RKK - Skytale)
[WWW: http://www.veridicom.com , http://www.skytale.com]

