AVSTREAM, usbvideo.sys upper transform filter

Hi,

I am trying to make an upper filter for usbvideo.sys.
It won’t do much, just get and transform incoming frames.
For example (webcam apps, Skype): the user can choose this “virtual” cam and will see RGB stripes in the middle of the captured images, etc.
I chose AVStream because I read that it is a bit simpler to implement and attach to usbvideo.sys. I made sink and source pins, but when I try to connect
my filter with the webcam device, KsStudio says:

“Querying sink pin for data intersection with source pin dataranges…
Failed to get DataIntersection on pin 0
…Failed.
Failed to get DataIntersection from target pin”

I use standard video formats (like in the AVStream pin-centric sample),
and my intersect handler is the same too. I’ve tried to return just
STATUS_SUCCESS at the beginning, but that didn’t work either.

Isn’t it enough to return STATUS_SUCCESS?
Or can I forward this request to the DirectShow client, or to the device below mine?
What can the problem be?
Since my driver will just transform incoming data, the settings are not my task, I think…
I’m not an expert in Kernel Streaming (KS).

Are there any more things I have to implement to make it work ?
Can anybody help me?
Thanks in advance.

Here are my intersect handler and pin descriptors:

/*******************************************/
// My intersect handler
/********************************************/

NTSTATUS IN_VideoPin::IntersectHandler(
PKSFILTER Filter,
PIRP Irp,
PKSP_PIN PinInstance,
PKSDATARANGE CallerDataRange,
PKSDATARANGE DescriptorDataRange,
ULONG BufferSize,
PVOID Data,
PULONG DataSize
){

return STATUS_SUCCESS;

}

/*******************************************/
// My pin descriptors
/********************************************/

const KSPIN_DESCRIPTOR_EX PinDescriptorArray [2] = {
/**
* INPUT PIN DESCRIPTOR
*/
{
&IN_VideoCapturePinDispatch,// Dispatch table
NULL, // Automation Table
/**
* KSPIN_DESCRIPTOR structure
* See the Microsoft documentation.
*/
{
DEFINE_KSPIN_DEFAULT_INTERFACES, // Interfaces
DEFINE_KSPIN_DEFAULT_MEDIUMS,
SUPPORTED_VIDEO_FORMATS_NUM,
VideoFormatDataRanges,
KSPIN_DATAFLOW_IN,
KSPIN_COMMUNICATION_BOTH,
&KSCATEGORY_CAPTURE,
&IN_guid_StaticPinnameVidCap,
0
},
KSPIN_FLAG_DISPATCH_LEVEL_PROCESSING |
KSPIN_FLAG_PROCESS_IN_RUN_STATE_ONLY |
KSPIN_FLAG_DO_NOT_INITIATE_PROCESSING,
1, // Instances Possible
1, // Instances Necessary
&IN_VideoCapturePinAllocatorFraming, // Allocator Framing
reinterpret_cast<PFNKSINTERSECTHANDLEREX> (IN_VideoPin::IntersectHandler)
} ,
/**
* OUTPUT PIN DESCRIPTOR
*/
{
&OUT_VideoCapturePinDispatch,// Dispatch table
NULL, // Automation Table
/**
* KSPIN_DESCRIPTOR structure
* See the Microsoft documentation.
*/
{
DEFINE_KSPIN_DEFAULT_INTERFACES, // Interfaces
DEFINE_KSPIN_DEFAULT_MEDIUMS,
SUPPORTED_VIDEO_FORMATS_NUM,
VideoFormatDataRanges,
KSPIN_DATAFLOW_OUT,
KSPIN_COMMUNICATION_BOTH,
&KSCATEGORY_VIDEO,
&OUT_guid_StaticPinnameVidCap,
0
},
KSPIN_FLAG_DISPATCH_LEVEL_PROCESSING |
KSPIN_FLAG_PROCESS_IN_RUN_STATE_ONLY |
KSPIN_FLAG_DO_NOT_INITIATE_PROCESSING,
1, // Instances Possible
1, // Instances Necessary
&OUT_VideoCapturePinAllocatorFraming, // Allocator Framing
reinterpret_cast<PFNKSINTERSECTHANDLEREX> (OUT_VideoPin::IntersectHandler)
}

};

xxxxx@gmail.com wrote:

> I am trying to make an upper filter for usbvideo.sys.
> It won’t do much, just get and transform incoming frames.
> For example (webcam apps, Skype): the user can choose this “virtual” cam and will see RGB stripes in the middle of the captured images, etc.
> I chose AVStream because I read that it is a bit simpler to implement and attach to usbvideo.sys.

The word “filter” is overloaded in the driver world. One meaning is the
DirectShow sense of an entity in a streaming graph with pins on which
data flows. The other meaning is of a PnP filter driver, as in “upper
filter” and “lower filter”.

AVStream drivers create filters in the DirectShow sense, but you cannot
use AVStream to create a PnP filter. It won’t work. AVStream drivers
are a port/miniport pair with ks.sys, and ks.sys assumes it is a
function driver. The dispatching isn’t going to work.

Is that what you meant? Or was the phrase “upper filter” in your
message a mistake? Did you simply create an AVStream transform filter,
which you are adding to a graph and connecting to your video camera? If
so, THAT will work, but it is simpler by far to write a user-mode
DirectShow transform filter. Never, never, never run anything in
kernel-mode that does not absolutely, positively have to be in kernel.
User-mode transform filters are just a few lines of code, with no kernel
magic at all, and they can do anything that an AVStream transform filter
can do.
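
To give a sense of what that means, something along these lines is the whole kernel-free equivalent of what you are building. This is a rough, untested sketch using the CTransInPlaceFilter base class from the DirectShow “baseclasses” sample; the class name, the CLSID and the stripe logic are made up for illustration, and the COM factory/registration boilerplate is omitted.

#include <streams.h>     // DirectShow base classes ("baseclasses" sample in the SDK)
#include <initguid.h>

// Placeholder CLSID for the example filter.
DEFINE_GUID(CLSID_StripeTransform,
    0xb6c0f2a1, 0x0000, 0x4000, 0x80, 0x00, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0x01);

class CStripeTransform : public CTransInPlaceFilter
{
public:
    CStripeTransform(LPUNKNOWN pUnk, HRESULT *phr)
        : CTransInPlaceFilter(NAME("Stripe Transform"), pUnk, CLSID_StripeTransform, phr) {}

    // Accept only RGB24 video, so Transform() knows exactly what layout it is touching.
    HRESULT CheckInputType(const CMediaType *mtIn)
    {
        if (*mtIn->Type() != MEDIATYPE_Video || *mtIn->Subtype() != MEDIASUBTYPE_RGB24)
            return VFW_E_TYPE_NOT_ACCEPTED;
        return S_OK;
    }

    // Scribble a crude stripe across the middle of every frame, in place.
    HRESULT Transform(IMediaSample *pSample)
    {
        BYTE *pData = NULL;
        HRESULT hr = pSample->GetPointer(&pData);
        if (FAILED(hr))
            return hr;

        long len = pSample->GetActualDataLength();
        for (long i = len / 2; i < len / 2 + 3 * 320 && i + 2 < len; i += 3) {
            pData[i]     = 0;    // blue
            pData[i + 1] = 0;    // green
            pData[i + 2] = 255;  // red
        }
        return S_OK;
    }
};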

> I made sink and source pins, but when I try to connect my filter with the webcam device, KsStudio says:
>
> “Querying sink pin for data intersection with source pin dataranges…
> Failed to get DataIntersection on pin 0
> …Failed.
> Failed to get DataIntersection from target pin”

> I use standard video formats (like in the AVStream pin-centric sample),
> and my intersect handler is the same too.

Did you modify the sample? The data ranges in avshws match 320x240
RGB24 and 320x240 YUY2. That’s it. No other sizes, no other formats.
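
For reference, the 320x240 RGB24 range looks roughly like this. The values below are modeled on avshws and are purely illustrative, but every field in it is part of the contract that your intersect and SetFormat handlers get measured against.

#include <ntddk.h>
#include <windef.h>
#include <ks.h>
#include <ksmedia.h>

#define D_X 320
#define D_Y 240

const KS_DATARANGE_VIDEO FormatRGB24_320x240 =
{
    // KSDATARANGE
    {
        sizeof (KS_DATARANGE_VIDEO),                     // FormatSize
        0,                                               // Flags
        D_X * D_Y * 3,                                   // SampleSize
        0,                                               // Reserved
        STATICGUIDOF (KSDATAFORMAT_TYPE_VIDEO),          // MajorFormat
        STATICGUIDOF (KSDATAFORMAT_SUBTYPE_RGB24),       // SubFormat
        STATICGUIDOF (KSDATAFORMAT_SPECIFIER_VIDEOINFO)  // Specifier
    },
    TRUE,                       // bFixedSizeSamples
    TRUE,                       // bTemporalCompression
    0,                          // StreamDescriptionFlags
    0,                          // MemoryAllocationFlags
    // KS_VIDEO_STREAM_CONFIG_CAPS
    {
        STATICGUIDOF (KSDATAFORMAT_SPECIFIER_VIDEOINFO),
        KS_AnalogVideo_None,    // VideoStandard
        { D_X, D_Y },           // InputSize
        { D_X, D_Y },           // MinCroppingSize
        { D_X, D_Y },           // MaxCroppingSize
        1, 1,                   // CropGranularityX/Y
        1, 1,                   // CropAlignX/Y
        { D_X, D_Y },           // MinOutputSize
        { D_X, D_Y },           // MaxOutputSize
        1, 1,                   // OutputGranularityX/Y
        0, 0, 0, 0,             // Stretch/ShrinkTaps
        333333,                 // MinFrameInterval (~30 fps, 100 ns units)
        640000000,              // MaxFrameInterval (illustrative upper bound)
        D_X * D_Y * 3 * 8,      // MinBitsPerSecond
        D_X * D_Y * 3 * 8 * 30  // MaxBitsPerSecond
    },
    // KS_VIDEOINFOHEADER
    {
        { 0, 0, 0, 0 },         // rcSource
        { 0, 0, 0, 0 },         // rcTarget
        D_X * D_Y * 3 * 8 * 30, // dwBitRate
        0,                      // dwBitErrorRate
        333333,                 // AvgTimePerFrame
        {
            sizeof (KS_BITMAPINFOHEADER),
            D_X,                // biWidth
            D_Y,                // biHeight
            1,                  // biPlanes
            24,                 // biBitCount
            KS_BI_RGB,          // biCompression
            D_X * D_Y * 3,      // biSizeImage
            0, 0, 0, 0          // remaining bitmap fields unused
        }
    }
};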

> I’ve tried to return just STATUS_SUCCESS at the beginning, but that didn’t work either.

The DataIntersection and DataSetFormat negotiation in AVStream may be
the most complicated thing about an AVStream driver, and in my view the
samples don’t do a particularly good job of cutting through the
confusion. (I’m not sure I could do any better, however; I’ve done
AVStream for 14 years, and there are still things about that exchange I
don’t understand.) There is a complicated interaction between the
KSDATARANGE structures in your pin descriptors and the Intersection and
SetFormat handlers. If you left the sample code as is, the graph might
be calling DataIntersection to make sure that you are still prepared to
handle 320x240 RGB24, but if the upstream filter isn’t prepared to send
you that format, the connection will fail.
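
For comparison, here is the shape of a handler that actually negotiates, instead of returning STATUS_SUCCESS unconditionally. It is a rough, untested sketch that follows the avshws pattern (the function name is arbitrary): report "no match" for ranges it doesn't understand, report the size of the format it would produce, and build a KS_DATAFORMAT_VIDEOINFOHEADER when a buffer is supplied. The frame-size clipping against ConfigCaps that a real driver needs is omitted.

#include <ntddk.h>
#include <windef.h>
#include <ks.h>
#include <ksmedia.h>

NTSTATUS
IntersectHandlerSketch (
    PKSFILTER Filter,
    PIRP Irp,
    PKSP_PIN PinInstance,
    PKSDATARANGE CallerDataRange,
    PKSDATARANGE DescriptorDataRange,
    ULONG BufferSize,
    PVOID Data,
    PULONG DataSize
    )
{
    const GUID VideoInfoSpecifier = { STATICGUIDOF (KSDATAFORMAT_SPECIFIER_VIDEOINFO) };

    UNREFERENCED_PARAMETER (Filter);
    UNREFERENCED_PARAMETER (Irp);
    UNREFERENCED_PARAMETER (PinInstance);

    // Only consider ranges that carry a KS_VIDEOINFOHEADER; anything else is "no match",
    // which tells KS to move on to the next pair of ranges.
    if (!IsEqualGUID (CallerDataRange->Specifier, VideoInfoSpecifier) ||
        CallerDataRange->FormatSize < sizeof (KS_DATARANGE_VIDEO)) {
        return STATUS_NO_MATCH;
    }

    PKS_DATARANGE_VIDEO myRange = (PKS_DATARANGE_VIDEO) DescriptorDataRange;

    // The format we would hand back is a KSDATAFORMAT followed by the video header.
    ULONG formatSize = sizeof (KSDATAFORMAT) +
                       KS_SIZE_VIDEOHEADER (&myRange->VideoInfoHeader);

    *DataSize = formatSize;

    if (BufferSize == 0) {
        // First pass: the caller only wants to know how big the format will be.
        return STATUS_BUFFER_OVERFLOW;
    }
    if (BufferSize < formatSize) {
        return STATUS_BUFFER_TOO_SMALL;
    }

    PKS_DATAFORMAT_VIDEOINFOHEADER format = (PKS_DATAFORMAT_VIDEOINFOHEADER) Data;

    // Turn our KSDATARANGE header into a concrete KSDATAFORMAT...
    RtlCopyMemory (&format->DataFormat, DescriptorDataRange, sizeof (KSDATAFORMAT));
    format->DataFormat.FormatSize = formatSize;

    // ...and describe the frames with our own video header. A real handler would
    // clip the caller's requested size and frame rate against ConfigCaps here.
    RtlCopyMemory (&format->VideoInfoHeader, &myRange->VideoInfoHeader,
                   KS_SIZE_VIDEOHEADER (&myRange->VideoInfoHeader));
    format->DataFormat.SampleSize = format->VideoInfoHeader.bmiHeader.biSizeImage;

    return STATUS_SUCCESS;
}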

> Or can I forward this request to the DirectShow client, or to the device below mine?

Every filter has to be able to tell the system what formats it is
prepared to accept. It is quite possible to use a wildcard (null GUID)
to say “I’m prepared to accept anything”, but there are few filters
where that makes sense.
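
If you really wanted that, the wildcard is literally a data range whose three GUIDs are the null wildcard GUIDs, e.g.:

// A pin data range that claims to match any format. As noted above, this rarely
// makes sense for a transform pin.
const KSDATARANGE WildcardDataRange =
{
    sizeof (KSDATARANGE),                            // FormatSize
    0,                                               // Flags
    0,                                               // SampleSize (unknown/variable)
    0,                                               // Reserved
    STATICGUIDOF (KSDATAFORMAT_TYPE_WILDCARD),       // MajorFormat
    STATICGUIDOF (KSDATAFORMAT_SUBTYPE_WILDCARD),    // SubFormat
    STATICGUIDOF (KSDATAFORMAT_SPECIFIER_WILDCARD)   // Specifier
};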

> What can the problem be?
> Since my driver will just transform incoming data, the settings are not my task, I think…

Of course they are. You can only transform the data if it is in a
format you know how to handle. If some smart aleck source filter tried
to send you CMYK data, your algorithm wouldn’t know how to handle it.
Thus, you MUST advertise the formats you are prepared to handle.

Why are you using KSPIN_FLAG_DO_NOT_INITIATE_PROCESSING?


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

> The word “filter” is overloaded in the driver world. One meaning is the DirectShow sense of an entity in a streaming graph with pins on which data flows. The other meaning is of a PnP filter driver, as in “upper filter” and “lower filter”.
>
> AVStream drivers create filters in the DirectShow sense, but you cannot use AVStream to create a PnP filter. It won’t work. AVStream drivers are a port/miniport pair with ks.sys, and ks.sys assumes it is a function driver. The dispatching isn’t going to work. Is that what you meant? Or was the phrase “upper filter” in your message a mistake?

I see. I meant that, because I’d read a .doc about a WDM USB Video Streaming Upper Filter Driver (putting my filter device object between the functional device object and the user application…), and I thought that might work somehow with AVStream too.

> Did you simply create an AVStream transform filter, which you are adding to a graph and connecting to your video camera? If so, THAT will work, but it is simpler by far to write a user-mode DirectShow transform filter. Never, never, never run anything in kernel-mode that does not absolutely, positively have to be in kernel. User-mode transform filters are just a few lines of code, with no kernel magic at all, and they can do anything that an AVStream transform filter can do.
>
> Did you modify the sample? The data ranges in avshws match 320x240 RGB24 and 320x240 YUY2. That’s it. No other sizes, no other formats.

I did, but it doesn’t work. I tried to connect to my web camera, but somehow the pins (my filter’s sink pin and the camera filter’s source pin) weren’t able to connect. I modified the data ranges to support more data formats, but nothing changed. Can there be anything else that would stop my pins from connecting to each other? (The problem is only with my filter’s sink pin… the source pin works correctly.)
I see. I’ll try to keep that in mind, but for now I think it’s easier to finish this work than to switch to another approach.

> Of course they are. You can only transform the data if it is in a format you know how to handle. If some smart aleck source filter tried to send you CMYK data, your algorithm wouldn’t know how to handle it. Thus, you MUST advertise the formats you are prepared to handle.

But if I get, for example, CMYK (which I cannot handle), then I’d like to pass it through without transformation.

> Why are you using KSPIN_FLAG_DO_NOT_INITIATE_PROCESSING?

Sorry, that was my mistake. I was really unhappy to see it doesn’t work… so I did some
“shotgun debugging” with the flags…

Thank you very much for your reply !

xxxxx@gmail.com wrote:

> I see. I meant that, because I’d read a .doc about a WDM USB Video Streaming Upper Filter Driver (putting my filter device object between the functional device object and the user application…), and I thought that might work somehow with AVStream too.

Nope, that is an entirely different use of the word “filter”. It IS
possible to write a PnP upper filter driver for usbvideo.sys, and many
people have done so, but those are WDM or KMDF drivers, not AVStream.

There is an important difference here. With a PnP filter driver, the
filter driver gets loaded automatically when the function driver is
loaded. With an AVStream filter, the client application has to load
your driver specifically. You don’t get loaded automatically into every
graph. Because of that, I hope it’s clear there is no reason to write
it in kernel mode. Just write a user-mode DirectShow transform filter.

> I did, but it doesn’t work. I tried to connect to my web camera, but somehow the pins (my filter’s sink pin and the camera filter’s source pin) weren’t able to connect. I modified the data ranges to support more data formats, but nothing changed. Can there be anything else that would stop my pins from connecting to each other?

But are you sure you have added a format that your web camera supports?
If you load your filter in graphedt and look at the input pin
properties, do the formats it shows match your expectations?

> But if I get, for example, CMYK (which I cannot handle), then I’d like to pass it through without transformation.

That’s not really the right way to handle it. If you just expose the
formats you support, then DirectShow will simply not hook you into the
graph.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

> Nope, that is an entirely different use of the word “filter”. It IS possible to write a PnP upper filter driver for usbvideo.sys, and many people have done so, but those are WDM or KMDF drivers, not AVStream. There is an important difference here. With a PnP filter driver, the filter driver gets loaded automatically when the function driver is loaded. With an AVStream filter, the client application has to load your driver specifically. You don’t get loaded automatically into every graph. Because of that, I hope it’s clear there is no reason to write it in kernel mode. Just write a user-mode DirectShow transform filter.

There was some confusion in my head about this; now it’s all clear. Thank you :) .
So if I write a user-mode webcam filter, will it work with Skype and any other program that uses DirectShow (like a virtual webcam)? How much work is that with almost zero knowledge of DirectShow filters? (hours/days)

> But are you sure you have added a format that your web camera supports? If you load your filter in graphedt and look at the input pin properties, do the formats it shows match your expectations?

I wasn’t, so I tried to simulate that I handle everything, with no success.
It’s my first work with AVStream, so I tried to eliminate the “unnecessary format checks” (which I don’t understand yet) from the code to see whether it really works… because I don’t have much time.

xxxxx@gmail.com wrote:

> There was some confusion in my head about this; now it’s all clear. Thank you :) .
> So if I write a user-mode webcam filter, will it work with Skype and any other program that uses DirectShow (like a virtual webcam)?

No, but neither will the solution you have proposed. That’s one of the
things I tried to point out in my earlier message.

Consider the typical DirectShow video capture app. The typical
application enumerates the list of video capture devices, picks one, and
adds it to the graph. It then picks a renderer, and adds that to the
graph. It then asks DirectShow to connect the filter’s output pin to
the renderer’s input pin. DirectShow then takes the responsibility of
filling in any additional conversion filters that may be required.
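
Here is a skeleton of that flow (illustrative user-mode code; it assumes COM is already initialized and the camera filter has already been bound from the capture-device enumeration). Notice that nothing in it ever names a third-party transform filter.

#include <dshow.h>   // link with strmiids.lib for the GUIDs

HRESULT RenderFirstCamera(IBaseFilter *pCameraFilter)
{
    IGraphBuilder *pGraph = NULL;
    ICaptureGraphBuilder2 *pBuilder = NULL;

    HRESULT hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IGraphBuilder, (void **)&pGraph);
    if (SUCCEEDED(hr))
        hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC_SERVER,
                              IID_ICaptureGraphBuilder2, (void **)&pBuilder);
    if (SUCCEEDED(hr))
        hr = pBuilder->SetFiltergraph(pGraph);
    if (SUCCEEDED(hr))
        hr = pGraph->AddFilter(pCameraFilter, L"Capture");
    if (SUCCEEDED(hr))
        // Connect the capture pin straight to a default renderer; DirectShow inserts
        // any color-space converters it needs on its own.
        hr = pBuilder->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                                    pCameraFilter, NULL, NULL);

    if (pBuilder) pBuilder->Release();
    if (pGraph) pGraph->Release();
    return hr;
}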

In that scenario, DirectShow is never going to load your filter. You
aren’t necessary. In order for your transform filter (either user-mode
or AVStream) to get involved, the application would have to know about
your filter, so that it could specifically ask for your filter to be
loaded into the graph and hooked up. Standard capture applications
don’t do that.

Your thread here is an absolutely classic case of what Peter calls
“gluing wings on a pig”. (See
http://www.osronline.com/downloads/pp_asking.pdf ). You have a problem
to solve, and rather than ask us about the problem, you asked us a
question about a very specific part of what YOU think the solution has
to be. In this case, you have chosen the wrong solution. Had you told
us from the start that your problem was to add special effects to a
video stream, we could have saved some time and pointed you in the right
direction.

There are really two paths. The simpler path by far is to write a
“virtual camera” source filter. Your filter can then create its own
very simple DirectShow graph to pull frames from the live camera, modify
them, and then pass them off to the original graph. As long as you
register it properly, your virtual camera will be enumerated by all
standard capture applications. There is a sample capture source filter by Vivek at
http://tmhare.mvps.org/downloads.htm that shows how to do that registration.
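
The registration itself boils down to putting your filter in the video input device category, roughly like this (illustrative; the CLSID and friendly name are whatever your filter uses, and Vivek's sample does the same thing with more ceremony):

#include <dshow.h>   // link with strmiids.lib

// Category registration only; the usual COM server registration is still needed.
HRESULT RegisterVirtualCamera(REFCLSID clsidFilter, LPCWSTR friendlyName)
{
    IFilterMapper2 *pFM2 = NULL;
    HRESULT hr = CoCreateInstance(CLSID_FilterMapper2, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IFilterMapper2, (void **)&pFM2);
    if (FAILED(hr))
        return hr;

    REGFILTER2 rf2;
    rf2.dwVersion = 1;
    rf2.dwMerit = MERIT_DO_NOT_USE;   // keep intelligent connect from auto-inserting it
    rf2.cPins = 0;
    rf2.rgPins = NULL;

    // Registering under CLSID_VideoInputDeviceCategory is what makes the filter show
    // up as a "camera" when applications enumerate capture devices.
    hr = pFM2->RegisterFilter(clsidFilter, friendlyName, NULL,
                              &CLSID_VideoInputDeviceCategory, NULL, &rf2);
    pFM2->Release();
    return hr;
}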

The other path is to write a PnP upper filter driver for usbvideo.sys.
This is NOT an AVStream driver. The problem with this solution is that
you have to get deeply involved in the kernel streaming ioctls. You
don’t have AVStream converting things to convenient callbacks with the
leading edge stream pointer abstraction. You’re dealing with the
nitty-gritty. It’s not pretty.
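
To give you a taste, even the "do nothing" version of that path looks like this (a rough sketch; the device extension is hypothetical, and all the completion-routine and stream-header plumbing you would actually need is omitted):

#include <ntddk.h>
#include <ks.h>

// Hypothetical device extension for the sketch; a real filter has far more state.
typedef struct _FILTER_EXTENSION {
    PDEVICE_OBJECT LowerDeviceObject;
} FILTER_EXTENSION, *PFILTER_EXTENSION;

NTSTATUS FilterDispatchDeviceControl(PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    PIO_STACK_LOCATION stack = IoGetCurrentIrpStackLocation(Irp);
    PFILTER_EXTENSION ext = (PFILTER_EXTENSION) DeviceObject->DeviceExtension;

    if (stack->Parameters.DeviceIoControl.IoControlCode == IOCTL_KS_READ_STREAM) {
        // Captured frames come back as KSSTREAM_HEADER blocks when this IRP completes.
        // A real filter would copy the stack location and attach a completion routine
        // here to touch the pixel data on its way back up the stack.
    }

    // Everything else (and, in this sketch, the read as well) just passes through.
    IoSkipCurrentIrpStackLocation(Irp);
    return IoCallDriver(ext->LowerDeviceObject, Irp);
}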

> How much work is that with almost zero knowledge of DirectShow filters? (hours/days)

Neither of those is going to be hours or days. You’re talking many
weeks, perhaps months. DirectShow is powerful, and I’m a huge fan, but
it’s complicated. Everything in DirectShow is done through COM, so
unless you are comfortable with COM, you’re going to have learning curve
time with it before you ever start into the guts of your project.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Then I can only say sorry.
Thank you for your guidance and your patience.
I’ll start learning, then.