AVSTREAM minidriver wire between input and output pins

Hello,
I have implemented a render (input) pin in the avshws sample to make a virtual webcam driver, and now I have some questions that are very important to me:
How can I make the render (input) pin in the AVStream avshws minidriver send video data to the capture (output) pin?
Does a way to do this exist?
And which approach is optimal for supporting video resizing between these input and output pins inside the minidriver? Thanks.

xxxxx@sibmail.com wrote:

I have implemented a render (input) pin in the avshws sample to make a virtual webcam driver, and now I have some questions that are very important to me:
How can I make the render (input) pin in the AVStream avshws minidriver send video data to the capture (output) pin?
Does a way to do this exist?

A single filter can certainly have both input pins and output pins.
That’s a “transform filter” in DirectShow terms. There is no sample of
such a filter, but it’s not too hard to figure it out. You’d add
another pin with KSPIN_DATAFLOW_IN, set its data format, and add a class
to handle the callbacks.
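A minimal sketch of what that extra class and its dispatch table might look like. CInputPin and the callback names are hypothetical; the KSPIN_DISPATCH field order follows ks.h:

```cpp
// Sketch only: a class to receive the input pin's callbacks, plus the
// dispatch table wired into the new KSPIN_DESCRIPTOR_EX entry.
class CInputPin {
public:
    // Called when the graph creates an instance of the input pin.
    static NTSTATUS Create(IN PKSPIN Pin, IN PIRP Irp);
    // Called when frames arrive on the input pin; consume them here.
    static NTSTATUS Process(IN PKSPIN Pin);
};

extern const KSPIN_DISPATCH InputPinDispatch = {
    CInputPin::Create,   // Pin Create
    NULL,                // Pin Close
    CInputPin::Process,  // Pin Process
    NULL,                // Pin Reset
    NULL,                // Pin Set Data Format
    NULL,                // Pin Set Device State
    NULL,                // Connect
    NULL,                // Disconnect
    NULL,                // Clock Dispatch
    NULL                 // Allocator Dispatch
};
```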

And which approach is optimal for supporting video resizing between these input and output pins inside the minidriver?

Code should only be put in the kernel if there is no user-mode
alternative. In this case, the right way to do resizing is as a
user-mode DirectShow transform filter, NOT as an AVStream filter.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Thank you for your directions.
But now there are a few things I can't figure out.
1) How can a user-mode Windows application send video to the minidriver's input pin? Can IKsControl::KsProperty do it? And which property should I use? Are there perhaps some code samples?

2) Now I have a transform filter in the avshws sample minidriver. I removed YUY2 support from the pins, so now they support only RGB, and I disabled the m_ImageSynth->SynthesizeBars() call in the FakeHardware function. As a result, the output pin carries only the static image that was at the input pin at the start moment (tested with GraphStudio). That's great, but I need a live video stream equivalent to the input, not a stream consisting only of the first input frame. What should I do, and in which functions?
Here is part of my code…

filter.cpp

const
KSPIN_DESCRIPTOR_EX
CaptureFilterPinDescriptors [CAPTURE_FILTER_PIN_COUNT] = {
    //
    // Capture Output Pin
    //
    {
        &CapturePinDispatch,
        NULL,
        {
            0,                                    // Interfaces (NULL, 0 == default)
            NULL,
            0,                                    // Mediums (NULL, 0 == default)
            NULL,
            SIZEOF_ARRAY(CapturePinDataRanges),   // Range Count
            CapturePinDataRanges,                 // Ranges
            KSPIN_DATAFLOW_OUT,                   // Dataflow
            KSPIN_COMMUNICATION_BOTH,             // Communication
            &PIN_CATEGORY_CAPTURE,                // Category from uuids.h
            &g_PINNAME_VIDEO_CAPTURE,             // Name from ksmedia.h
            0                                     // Reserved
        },
#ifdef X86
        KSPIN_FLAG_GENERATE_MAPPINGS |            // Pin Flags
#endif
        KSPIN_FLAG_SPLITTER |
        KSPIN_FLAG_PROCESS_IN_RUN_STATE_ONLY,
        KSINSTANCE_INDETERMINATE,                 // Instances Possible
        1,                                        // Instances Necessary
        &CapturePinAllocatorFraming,              // Allocator Framing
        reinterpret_cast<PFNKSINTERSECTHANDLEREX>(CCapturePin::IntersectHandler)
    },
    //
    // Capture Input Pin
    //
    {
        &InputPinDispatch,
        NULL,
        {
            0,                                    // Interfaces (NULL, 0 == default)
            NULL,
            0,                                    // Mediums (NULL, 0 == default)
            NULL,
            SIZEOF_ARRAY(CaptureInPinDataRanges), // Range Count
            CaptureInPinDataRanges,               // Ranges
            KSPIN_DATAFLOW_IN,                    // Dataflow
            KSPIN_COMMUNICATION_BOTH,             // Communication
            &CLSID_VideoInputDeviceCategory,      // Category
            NULL,                                 // Name
            0                                     // Reserved
        },
        KSPIN_FLAG_DO_NOT_USE_STANDARD_TRANSPORT | // Pin Flags
        KSPIN_FLAG_RENDERER |
        KSPIN_FLAG_FRAMES_NOT_REQUIRED_FOR_PROCESSING,
        1,                                        // Instances Possible
        1,                                        // Instances Necessary
        &CapturePinAllocatorFraming,              // Allocator Framing
        NULL
    }
};

3)
Code should only be put in the kernel if there is no user-mode
alternative. In this case, the right way to do resizing is as a
user-mode DirectShow transform filter, NOT as an AVStream filter.

I need to support many different output resolutions in the minidriver, and consequently the same set of resolutions at the minidriver's input. How should avshws handle such situations? For example, suppose one capture application first connects to the minidriver's output at 640x480 resolution, and then another application simultaneously needs a different resolution (perhaps 320x240 or 1280x1024). What are the programming ways to support this resizing?

Thank you 10x!

xxxxx@sibmail.com wrote:

1) How can a user-mode Windows application send video to the minidriver's input pin? Can IKsControl::KsProperty do it? And which property should I use? Are there perhaps some code samples?

I will answer this question, but I don’t think you’re asking the
questions you really want to ask.

You use an AVStream driver in user-mode by adding it to a DirectShow
graph. The ksproxy wrapper will create a DirectShow filter with input
and output pins, and you talk to those pins the same way you talk to any
DirectShow filter. You have your source filter (that is, whatever is
creating the data), and you connect its output pin to the input pin of
the ksproxy wrapper for your driver.
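A rough user-mode sketch of that graph-building step, assuming the driver shows up under the standard video input category. FindFirstPin is a hypothetical helper, and all error handling is omitted:

```cpp
// Sketch: feed a source filter's output into the ksproxy wrapper
// that DirectShow creates for the driver. Not production code.
#include <dshow.h>

// Hypothetical helper: returns the first pin with the given direction.
IPin *FindFirstPin(IBaseFilter *pFilter, PIN_DIRECTION dir);

HRESULT ConnectSourceToDriver(IGraphBuilder *pGraph, IBaseFilter *pSource)
{
    // Enumerate video input devices; ksproxy exposes the driver here.
    ICreateDevEnum *pDevEnum = NULL;
    CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                     IID_ICreateDevEnum, (void **)&pDevEnum);

    IEnumMoniker *pEnum = NULL;
    pDevEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory,
                                    &pEnum, 0);

    // Bind the first device to its DirectShow filter. A real application
    // would match on the device's friendly name instead.
    IMoniker *pMoniker = NULL;
    IBaseFilter *pDriver = NULL;
    while (pEnum->Next(1, &pMoniker, NULL) == S_OK && pDriver == NULL) {
        pMoniker->BindToObject(NULL, NULL, IID_IBaseFilter,
                               (void **)&pDriver);
        pMoniker->Release();
    }

    pGraph->AddFilter(pSource, L"Source");
    pGraph->AddFilter(pDriver, L"Driver");

    // Connect source output pin -> driver input pin.
    IPin *pOut = FindFirstPin(pSource, PINDIR_OUTPUT);
    IPin *pIn  = FindFirstPin(pDriver, PINDIR_INPUT);
    return pGraph->Connect(pOut, pIn);
}
```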

However, I’m not sure this is the direction you should be going. For us
to offer you productive advice, you need to tell us a lot more about
what you are trying to build. Right now, I suspect you are wasting time
by heading off in the wrong direction. Quoting from later in your message:

I need to support many different output resolutions in the minidriver, and consequently the same set of resolutions at the minidriver's input. How should avshws handle such situations? For example, suppose one capture application first connects to the minidriver's output at 640x480 resolution, and then another application simultaneously needs a different resolution (perhaps 320x240 or 1280x1024). What are the programming ways to support this resizing?

This is the first time you have mentioned a “capture application.” If
you create an AVStream transform filter, standard applications (like
AMCap) are not going to see it as a capture filter. It can only be used
by custom applications that are creating their own DirectShow graphs –
applications that know about your filter and how to hook it up.

Are you trying to create a virtual camera that can be used by normal
capture applications as if it were a webcam (for example), but one where
you can feed data from user mode? If so, then you are using entirely
the wrong approach. You do NOT want a filter with input pins.

The RIGHT way to do that, as I said in my last message, is to forget
about kernel mode. Create a normal user-mode DirectShow source filter,
and register it in the registry as a video capture device. Applications
like AMCap and Messenger will find it when they go looking for cameras.
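That registration step can be sketched with IFilterMapper2. CLSID_MyVirtualCam and the friendly name are placeholders for values you would define:

```cpp
// Sketch: register a user-mode source filter under the video capture
// category so capture applications enumerate it like a camera.
#include <dshow.h>

extern const CLSID CLSID_MyVirtualCam;  // your filter's CLSID (assumed)

HRESULT RegisterAsCamera()
{
    IFilterMapper2 *pMapper = NULL;
    HRESULT hr = CoCreateInstance(CLSID_FilterMapper2, NULL,
                                  CLSCTX_INPROC_SERVER, IID_IFilterMapper2,
                                  (void **)&pMapper);
    if (FAILED(hr))
        return hr;

    // Version-1 REGFILTER2: merit, no pin descriptions registered here.
    REGFILTER2 rf2 = { 1, MERIT_DO_NOT_USE, 0, NULL };
    hr = pMapper->RegisterFilter(
        CLSID_MyVirtualCam,               // filter CLSID (placeholder)
        L"My Virtual Camera",             // friendly name (placeholder)
        NULL,                             // device moniker (optional)
        &CLSID_VideoInputDeviceCategory,  // register as a capture device
        NULL,                             // instance name
        &rf2);
    pMapper->Release();
    return hr;
}
```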

By the way, if your driver is being used in three different
applications, then there will be three different instances of the
filter. They will all be separate. Each one can run at whatever
resolution is required without affecting the others.

2) Now I have a transform filter in the avshws sample minidriver. I removed YUY2 support from the pins, so now they support only RGB, and I disabled the m_ImageSynth->SynthesizeBars() call in the FakeHardware function. As a result, the output pin carries only the static image that was at the input pin at the start moment (tested with GraphStudio). That's great, but I need a live video stream equivalent to the input, not a stream consisting only of the first input frame. What should I do, and in which functions?

What do you mean by “equivalent video stream”? The frames are generated
by the capture pin. CCapturePin::Process gets called every time a free
buffer is available. You copy your frame data into the leading edge
(either immediately or in some later callback, which is what avshws
does). When you have filled a frame, you advance the leading edge using
KsStreamPointerAdvance. That frees the buffer to move on to the next
filter.
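The loop Tim describes might look roughly like this. m_Pin, m_FrameData, and m_FrameSize are hypothetical members, and the real avshws code advances by mappings in a DPC rather than copying inline:

```cpp
// Sketch: fill each free output buffer at the leading edge with the
// current frame, then advance to release the buffer downstream.
NTSTATUS CCapturePin::Process()
{
    PKSSTREAM_POINTER Leading = KsPinGetLeadingEdgeStreamPointer(
        m_Pin, KSSTREAM_POINTER_STATE_LOCKED);

    while (Leading != NULL) {
        // Copy the latest frame into the buffer at the leading edge.
        ULONG Bytes = (Leading->OffsetOut.Remaining < m_FrameSize)
                          ? Leading->OffsetOut.Remaining
                          : m_FrameSize;
        RtlCopyMemory(Leading->OffsetOut.Data, m_FrameData, Bytes);
        Leading->StreamHeader->DataUsed = Bytes;

        // Advancing completes this frame and frees the buffer to move
        // on to the next filter; failure means no more free frames now.
        if (!NT_SUCCESS(KsStreamPointerAdvance(Leading)))
            break;
    }
    return STATUS_SUCCESS;
}
```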


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

I have definitely chosen this way to develop a virtual webcam driver, because one of the goals is to support applications such as Windows Media Center and CameraWare, which can only receive a video stream from Windows drivers.
I have now developed a source application that has the video stream and is ready to send it to the virtual webcam driver (avshws), but I do not know how, because the API between the source application and the driver is not yet clear to me.
For the virtual webcam driver I have two requirements:
1. Different media sink applications should be able to receive video streams from my driver simultaneously at different resolutions. That is why I need to support different output resolutions for the driver, and the same resolutions for the driver's input, so that the driver does not do unnecessary resizing work when I have only one media sink application at a given moment.

By the way, if your driver is being used in three different
applications, then there will be three different instances of the
filter. They will all be separate. Each one can run at whatever
resolution is required without affecting the others.
Should I send video from my source application to each of these filters, or only to one of them? And how should I send video from the source application, and how does the driver receive it? Do any code samples exist?

2. Does AVStream give my source application the ability to manage the video driver's output resolution? For example, to choose just one specific resolution for output from the driver's list.

Best regards,
Alex

xxxxx@sibmail.com wrote:

I have definitely chosen this way to develop a virtual webcam driver, because one of the goals is to support applications such as Windows Media Center and CameraWare, which can only receive a video stream from Windows drivers.

That’s not true. Windows Media Center will accept video streams from a
user-mode capture source filter. I’ve done it. It does seem to require
a kernel-mode tuner, for reasons that have never been clear to me, but
that tuner can be a fake driver that simply forwards the tune request to
your user-mode filter. That’s what we did.

I have now developed a source application that has the video stream and is ready to send it to the virtual webcam driver (avshws), but I do not know how, because the API between the source application and the driver is not yet clear to me.

You have a couple of choices. One way is to use the AVStream features,
and send your data using custom KS property requests. Another way is to
intercept the ioctl handler before passing it on to AVStream; then you
can call DeviceIoControl directly. The two options have virtually the
same performance.
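The first option, a custom KS property request, might be sketched like this from the user-mode side. PROPSETID_VirtualCam and KSPROPERTY_VCAM_FRAME are placeholder GUID/ID values you would define and handle in the driver's property table:

```cpp
// Sketch: push one frame to the driver via a private KS property.
// The driver's automation table must implement the matching SET handler.
#include <windows.h>
#include <ks.h>

extern const GUID PROPSETID_VirtualCam;      // private property set (assumed)
#define KSPROPERTY_VCAM_FRAME 0              // private property id (assumed)

BOOL SendFrame(HANDLE hDriverPin, const BYTE *frame, DWORD frameSize)
{
    KSPROPERTY prop;
    prop.Set   = PROPSETID_VirtualCam;
    prop.Id    = KSPROPERTY_VCAM_FRAME;
    prop.Flags = KSPROPERTY_TYPE_SET;

    // The frame bytes travel as the property "value" buffer.
    DWORD returned = 0;
    return DeviceIoControl(hDriverPin, IOCTL_KS_PROPERTY,
                           &prop, sizeof(prop),
                           (LPVOID)frame, frameSize,
                           &returned, NULL);
}
```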

Also, PLEASE be professional enough to change the name of the driver.
You should never release a driver named “avshws” or “avssamp”.

For virtual webcam driver i have two requirements:
1. Different media sink applications should be able to receive video streams from my driver simultaneously at different resolutions. That is why I need to support different output resolutions for the driver, and the same resolutions for the driver's input, so that the driver does not do unnecessary resizing work when I have only one media sink application at a given moment.

Different applications will be using different instances of your
driver. Those instances will all be separate from each other. If you
need to have several applications reading from a single video stream
(why?), then you will have to manage that in a global way.

Ordinarily, a camera has a single plug-and-play ID, which loads a single
driver instance, which creates a single AVStream filter. So, if you
wanted 4 video sources available, you would have 4 PnP IDs, which means
4 instances, and 4 filters (although your driver is only loaded once).
Those would all be separate.

If you only want one plug-and-play ID, then things get more
complicated. You would probably want to have multiple AVStream
filters. (You can have multiple filters in a single driver.) Those
filters would all share a single CCaptureDevice instance, which is where
your global state would go.

That’s a more advanced topic, and there are no samples. You would have
to do careful reading of the AVStream documentation, and do a lot of
experiments.

Should I send video from my source application to each of these filters, or only to one of them?

Does each application need to get exactly the same video? If so, then
you only need to send the video once. I don’t understand what good that
would be, however.

2. Does AVStream give my source application the ability to manage the video driver's output resolution? For example, to choose just one specific resolution for output from the driver's list.

An AVStream filter has data structures that define every resolution it
is capable of handling. The Intersect and SetFormat handlers negotiate
which of those formats are possible at the current time, and which of
them the application would like to use.
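As a rough illustration of the resolution part of that negotiation, assuming the pin's ranges use KS_DATARANGE_VIDEO as in avshws. Only the comparison is shown; a real handler must also validate buffer sizes and copy the negotiated format into the output:

```cpp
// Sketch: reject a data-range match when the requested resolution is
// not one this pin advertises; DirectShow then tries the next range.
NTSTATUS CCapturePin::IntersectHandler(
    IN PKSFILTER Filter,
    IN PIRP Irp,
    IN PKSP_PIN PinInstance,
    IN PKSDATARANGE CallerDataRange,
    IN PKSDATARANGE DescriptorDataRange,
    IN ULONG BufferSize,
    OUT PVOID Data OPTIONAL,
    OUT PULONG DataSize)
{
    PKS_DATARANGE_VIDEO Caller     = (PKS_DATARANGE_VIDEO)CallerDataRange;
    PKS_DATARANGE_VIDEO Descriptor = (PKS_DATARANGE_VIDEO)DescriptorDataRange;

    if (Caller->VideoInfoHeader.bmiHeader.biWidth !=
            Descriptor->VideoInfoHeader.bmiHeader.biWidth ||
        Caller->VideoInfoHeader.bmiHeader.biHeight !=
            Descriptor->VideoInfoHeader.bmiHeader.biHeight) {
        return STATUS_NO_MATCH;
    }

    // ... size checks and copy of KS_DATAFORMAT_VIDEOINFOHEADER into
    // Data belong here, as in the avshws sample ...
    *DataSize = sizeof(KS_DATAFORMAT_VIDEOINFOHEADER);
    return STATUS_SUCCESS;
}
```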


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.