Question on AVStream based USB video capture driver

I am implementing an AVStream-based video capture driver for hardware that is not UVC-compliant. I am following the avshws sample in the DDK. So far I have successfully added logic to parse the configuration descriptor and set the configuration on the device.
I am facing difficulty with the data transfer portion. Since I am dealing with actual hardware, I do not think I need the DMA and other fake-hardware portions in my code.

I clone the leading edge pointer when my Process callback gets called. After that, though, the avshws example implements fake DMA logic and a scatter/gather queue to process the request. Since I have to deal with real hardware, how do I move forward from here? Is it suitable to create an IRP inside the Process callback for each request, submit it to the lower stack, and do the necessary work in a completion routine? Please guide me on how to handle the data transfer interface in AVStream.

xxxxx@slscorp.com wrote:

I am implementing an AVStream-based video capture driver for hardware that is not UVC-compliant. I am following the avshws sample in the DDK. So far I have successfully added logic to parse the configuration descriptor and set the configuration on the device.
I am facing difficulty with the data transfer portion. Since I am dealing with actual hardware, I do not think I need the DMA and other fake-hardware portions in my code.

No, you don’t.

I clone the leading edge pointer when my Process callback gets called. After that, though, the avshws example implements fake DMA logic and a scatter/gather queue to process the request. Since I have to deal with real hardware, how do I move forward from here? Is it suitable to create an IRP inside the Process callback for each request, submit it to the lower stack, and do the necessary work in a completion routine? Please guide me on how to handle the data transfer interface in AVStream.

There are a couple of ways to do this. What I chose to do is
essentially create my own “continuous reader”. I allocate a set of my
own buffers and URBs. I submit them all as soon as we transition to
KSSTATE_RUN, and I keep them all circulating forever. In the capture
pin Process callback, all I do is keep a count of how many buffers I
got. Just one InterlockedIncrement – that’s it.

Then, in the completion routine for my own buffers, I do all the real
work. At that point, I KNOW I have real data. I do
KsPinGetLeadingEdgeStreamPointer to fetch the next empty buffer, copy
from my buffer to theirs, and update the number of bytes written. If I
fill up the frame, I call KsStreamPointerUnlock( … TRUE) to eject.
Otherwise, I call KsStreamPointerUnlock( … FALSE ) to leave the
partially filled buffer in the stream.

It’s possible to keep using the clone scheme, and submit the user’s
actual buffers to the hardware, but that has its own set of issues with
synchronizing the two.
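
For illustration, the completion-routine half of that scheme might look like the sketch below. This is not code from the sample: the routine name, HardwareBuffer, and BytesReceived are placeholders, error handling, cancellation, and locking are omitted, and the exact KsStreamPointerAdvanceOffsets usage should be verified against the ks.h documentation.

VOID
CompleteOneUsbBuffer(
    PKSPIN Pin,
    PUCHAR HardwareBuffer,
    ULONG BytesReceived
    )
{
    PKSSTREAM_POINTER Leading =
        KsPinGetLeadingEdgeStreamPointer(Pin, KSSTREAM_POINTER_STATE_LOCKED);
    if (Leading == NULL) {
        return; // No empty frame available right now; drop or stash the data.
    }

    ULONG ToCopy = min(BytesReceived, Leading->OffsetOut.Remaining);
    RtlCopyMemory(Leading->OffsetOut.Data, HardwareBuffer, ToCopy);
    Leading->StreamHeader->DataUsed += ToCopy;

    // Advance our write position within the current frame.
    KsStreamPointerAdvanceOffsets(Leading, 0, ToCopy, FALSE);

    if (Leading->OffsetOut.Remaining == 0) {
        KsStreamPointerUnlock(Leading, TRUE);   // Frame full: eject it downstream.
    } else {
        KsStreamPointerUnlock(Leading, FALSE);  // Keep the partial frame in place.
    }
}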


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Is it required to set the KSPIN_FLAG_GENERATE_MAPPINGS flag in KSPIN_DESCRIPTOR_EX for USB? Since we are not dealing with DMA, what is the use of it? Can I remove this flag?

xxxxx@slscorp.com wrote:

Is it required to set the KSPIN_FLAG_GENERATE_MAPPINGS flag in KSPIN_DESCRIPTOR_EX for USB? Since we are not dealing with DMA, what is the use of it? Can I remove this flag?

Of course you can remove it. Why would you think otherwise? You should
only include the flags that apply to you. And wouldn’t it have been
quicker just to try it?


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Tim,

How can I add a still pin? The avshws example contains only a video capture pin. I know that to add a pin, I have to add a member of type KSPIN_DESCRIPTOR_EX to the CaptureFilterPinDescriptors structure. Can I copy and paste the same source that I used for the video capture pin, or do I need to pass some still-pin-specific attributes in this descriptor?

If I expose another pin identical to the capture pin, how will DirectShow know that it is a still pin? I am confused about this.

Does anybody have an example of this?

Got one point: inside KSPIN_DESCRIPTOR, the Category field needs to be PIN_CATEGORY_STILL. Apart from this, are there any other requirements?

xxxxx@slscorp.com wrote:

How can I add a still pin? The avshws example contains only a video capture pin. I know that to add a pin, I have to add a member of type KSPIN_DESCRIPTOR_EX to the CaptureFilterPinDescriptors structure. Can I copy and paste the same source that I used for the video capture pin, or do I need to pass some still-pin-specific attributes in this descriptor?

If I expose another pin identical to the capture pin, how will DirectShow know that it is a still pin? I am confused about this.

DirectShow doesn’t care what your pins are. Your filter will have two
pins. It’s entirely up to the applications to decide what to do with
those two pins, if anything. Many capture applications just grab the
first pin and render it, without knowing or caring whether there are
other pins.

You will, presumably, have your own demo application that connects to
both pins and decides how to use them.

There are some tricky things to consider with a still pin. Will you
allow the two pins to have different formats? If so, you need to force
that in your intersect handler. When connecting the still pin, you have
to reject any format that doesn’t match the already-connected capture
pin. If they can be different, then you have to be prepared to ask the
hardware for the other format.

How will you trigger a still? Do you have a hardware button, or are you
going to rely on the KSPROPERTY_VIDCAP_VIDEOCONTROL property set to
trigger a still from software?

Got one point: inside KSPIN_DESCRIPTOR, the Category field needs to be PIN_CATEGORY_STILL. Apart from this, are there any other requirements?

Assuming you want to allow applications to connect to just the capture
pin and ignore the still pin, you need to make sure InstancesNecessary is 0.
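
In KSPIN_DESCRIPTOR_EX terms, the tail of the still pin's entry would then carry counts like these (a sketch; the comments are mine, and StillPinAllocatorFraming is whatever framing structure the still pin uses):

    1,                              // InstancesPossible: at most one still stream
    0,                              // InstancesNecessary: graph can run without it
    &StillPinAllocatorFraming,      // Allocator framing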


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Is it possible to get 60 fps with DirectShow? My camera continuously sends 60 fps of 1080p uncompressed YUY2 frames, but when I capture them, I get only 41 fps at most and all other frames are dropped.
I measured the USB traffic, and the data rate corresponds to 60 fps, but that frame rate is not reflected in the end-user application.
Is it possible that these frames are being dropped between the driver and the end application (inside DirectShow)?

Another problem I am facing right now: in my completion routine, when I call the following function pattern, I get a NULL return value from KsPinGetLeadingEdgeStreamPointer().

KsPinGetLeadingEdgeStreamPointer()
// copy from my buffer to the stream buffer
KsStreamPointerUnlock(…, TRUE) if the frame is complete, or KsStreamPointerUnlock(…, FALSE) if not.

KsPinGetLeadingEdgeStreamPointer() gives a valid stream pointer only the first time. When I call KsPinGetLeadingEdgeStreamPointer() again after KsStreamPointerUnlock(…, TRUE), it returns NULL. Before unlocking the stream pointer, I update its Leading->StreamHeader->DataUsed and Leading->OffsetOut.Remaining with correct values. Is there anything missing apart from this?

xxxxx@slscorp.com wrote:

Is it possible to get 60 fps with DirectShow?

Of course. I have a camera that does 320x240 at 120Hz.

My camera continuously sends 60 fps of 1080p uncompressed YUY2 frames, but when I capture them, I get only 41 fps at most and all other frames are dropped.
I measured the USB traffic, and the data rate corresponds to 60 fps, but that frame rate is not reflected in the end-user application.
Is it possible that these frames are being dropped between the driver and the end application (inside DirectShow)?

How did you measure the USB traffic? You’re talking about 250MB/s, so
you must be using USB 3.0. Is this your own driver? Are you
timestamping the frames? Does the frame rate go up if you do not
timestamp? What application are you using to do the capture? At that
data rate, you can’t do an awful lot of processing on those frames.
You’d have to use the Enhanced Video Renderer, so you can draw into an
overlay or texture surface – you couldn’t preview the frames with GDI.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

xxxxx@slscorp.com wrote:

Another problem I am facing right now: in my completion routine, when I call the following function pattern, I get a NULL return value from KsPinGetLeadingEdgeStreamPointer().

KsPinGetLeadingEdgeStreamPointer()
// copy from my buffer to the stream buffer
KsStreamPointerUnlock(…, TRUE) if the frame is complete, or KsStreamPointerUnlock(…, FALSE) if not.

KsPinGetLeadingEdgeStreamPointer() gives a valid stream pointer only the first time. When I call KsPinGetLeadingEdgeStreamPointer() again after KsStreamPointerUnlock(…, TRUE), it returns NULL. Before unlocking the stream pointer, I update its Leading->StreamHeader->DataUsed and Leading->OffsetOut.Remaining with correct values. Is there anything missing apart from this?

When the LeadingEdge is null, that means you have run out of empty
buffers. In your allocator framing structure, how many empty buffers
are you requesting?
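
For reference, avshws declares its framing with the DECLARE_SIMPLE_FRAMING_EX macro; the fourth argument is the frame count, i.e. the number of empty buffers kept in rotation. Raising it gives the pipeline slack to absorb jitter. A sketch only; the count and FRAME_SIZE here are hypothetical values, not the sample's:

DECLARE_SIMPLE_FRAMING_EX (
    CapturePinAllocatorFraming,
    STATICGUIDOF (KSMEMORY_TYPE_KERNEL_NONPAGED),
    KSALLOCATOR_REQUIREMENTF_SYSTEM_MEMORY |
        KSALLOCATOR_REQUIREMENTF_PREFERENCES_ONLY,
    8,                              // Frames: empty buffers in rotation
    0,                              // Alignment
    FRAME_SIZE,                     // MinFrameSize (e.g. 1920 * 1080 * 2 for YUY2)
    FRAME_SIZE                      // MaxFrameSize
    );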


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Fail case: I get a valid stream pointer on the first call only.

KsPinGetLeadingEdgeStreamPointer()
|
V
// partially fill the stream buffer from the hardware buffer and call KsStreamPointerUnlock(…, FALSE)
|
V
// after filling the whole frame, call KsStreamPointerUnlock(…, TRUE)
|
V
Call KsPinGetLeadingEdgeStreamPointer() again -> this call returns NULL every time

Working case:

KsPinGetLeadingEdgeStreamPointer()
|
V
// accumulate hardware buffers until the received data equals the frame size
|
V
// after getting data equal to the frame size, copy it to the stream buffer and call KsStreamPointerUnlock(…, TRUE)
|
V
Call KsPinGetLeadingEdgeStreamPointer() again -> this call returns a valid stream pointer

Is there anything I am missing in the fail case scenario?

xxxxx@slscorp.com wrote:

Fail case: I get a valid stream pointer on the first call only.

KsPinGetLeadingEdgeStreamPointer()
V
// partially fill the stream buffer from the hardware buffer and call KsStreamPointerUnlock(…, FALSE)
V
// after filling the whole frame, call KsStreamPointerUnlock(…, TRUE)
V
Call KsPinGetLeadingEdgeStreamPointer() again -> this call returns NULL every time

Working case:

KsPinGetLeadingEdgeStreamPointer()
V
// accumulate hardware buffers until the received data equals the frame size
V
// after getting data equal to the frame size, copy it to the stream buffer and call KsStreamPointerUnlock(…, TRUE)
V
Call KsPinGetLeadingEdgeStreamPointer() again -> this call returns a valid stream pointer

What does your allocator framing structure look like? How many buffers
have you requested? DirectShow pays attention to your numbers. If you
tell it there should only be 1 buffer in rotation, then there will only
be 1 buffer. Have you put debug messages in your Process callback, so
you know how many empty buffers you have been given? Are you creating
any clones that might be preventing the leading edge from ejecting?

My USB capture driver uses this exact scheme, and it works fine. My
Process callback does exactly one thing: increment the “available buffer
count” for debug tracking.
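
A minimal sketch of a Process callback in that style follows; m_AvailableBufferCount is a hypothetical LONG member used for tracking, not a name from the sample:

NTSTATUS
CCapturePin::DispatchProcess(
    IN PKSPIN Pin
    )
{
    CCapturePin *CapPin = reinterpret_cast<CCapturePin *>(Pin->Context);

    // Just note that another empty buffer has arrived; the URB completion
    // routine does all of the copying.
    InterlockedIncrement(&CapPin->m_AvailableBufferCount);

    return STATUS_SUCCESS;
}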


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

I am trying to add a still pin to the avshws sample and, per Tim's suggestion, I am also implementing the KSPROPERTY_VIDCAP_VIDEOCONTROL property set on the still pin for a software trigger. I have created an automation table and supplied it to the still pin's KSPIN_DESCRIPTOR_EX. The problem is that I have registered callbacks for the KSPROPERTY_VIDEOCONTROL_MODE_S property item, but they are not getting called.

This is my code snippet for implementing this task:

DEFINE_KSPROPERTY_TABLE(FrameRateProperties)
{
DEFINE_KSPROPERTY_ITEM
(
KSPROPERTY_VIDEOCONTROL_MODE,
KStrGetPropertyHandler, // GetSupported or Handler
sizeof(KSPROPERTY_VIDEOCONTROL_MODE_S), // MinProperty
sizeof(KSPROPERTY_VIDEOCONTROL_MODE_S), // MinData
KStrSetPropertyHandler, // SetSupported or Handler
NULL, // Values
0, // RelationsCount
NULL, // Relations
NULL, // SupportHandler
0 // SerializedSize
),

};
DEFINE_KSPROPERTY_SET_TABLE(VideoStreamProperties)
{
DEFINE_KSPROPERTY_SET
(
&PROPSETID_VIDCAP_VIDEOCONTROL, // Set
SIZEOF_ARRAY(FrameRateProperties), // PropertiesCount
FrameRateProperties, // PropertyItem
0, // FastIoCount
NULL // FastIoTable
),
};

//-
//----------------------------------------------

DEFINE_KSAUTOMATION_TABLE (StillPinAutomationTable)
{
DEFINE_KSAUTOMATION_PROPERTIES (VideoStreamProperties),
DEFINE_KSAUTOMATION_METHODS_NULL,
DEFINE_KSAUTOMATION_EVENTS_NULL
};

I am supplying this StillPinAutomationTable to still pin descriptor as below
//
// still Pin
//
{
&StillPinDispatch,
&StillPinAutomationTable,

My callbacks KStrGetPropertyHandler and KStrSetPropertyHandler are not getting called when I view the still pin's properties. Am I missing something apart from this?

xxxxx@slscorp.com wrote:

I am trying to add a still pin to the avshws sample and, per Tim's suggestion, I am also implementing the KSPROPERTY_VIDCAP_VIDEOCONTROL property set on the still pin for a software trigger. I have created an automation table and supplied it to the still pin's KSPIN_DESCRIPTOR_EX. The problem is that I have registered callbacks for the KSPROPERTY_VIDEOCONTROL_MODE_S property item, but they are not getting called.

This is my code snippet for implementing this task:

DEFINE_KSPROPERTY_TABLE(FrameRateProperties)
{
DEFINE_KSPROPERTY_ITEM
(
KSPROPERTY_VIDEOCONTROL_MODE,

You also need to implement KSPROPERTY_VIDEOCONTROL_CAPS, so you can
report which mode bits you support. Right now, when DirectShow tries to
find out which modes can be adjusted, your driver says “none of them”.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Thanks, Tim.

I tried your suggestion and added the following to my code, but it is still not reflected in the property page. Please look at the code below.

DEFINE_KSPROPERTY_TABLE(FrameRateProperties)
{
DEFINE_KSPROPERTY_ITEM
(
KSPROPERTY_VIDEOCONTROL_CAPS,
(KStrGetPropertyHandler), // GetSupported or Handler
sizeof(KSPROPERTY_VIDEOCONTROL_CAPS_S), // MinProperty
sizeof(KSPROPERTY_VIDEOCONTROL_CAPS_S), // MinData
FALSE, // SetSupported or Handler
NULL, // Values
0, // RelationsCount
NULL, // Relations
NULL, // SupportHandler
0 // SerializedSize
),
DEFINE_KSPROPERTY_ITEM
(
KSPROPERTY_VIDEOCONTROL_MODE,
KStrGetPropertyHandler, // GetSupported or Handler
sizeof(KSPROPERTY_VIDEOCONTROL_MODE_S), // MinProperty
sizeof(KSPROPERTY_VIDEOCONTROL_MODE_S), // MinData
KStrSetPropertyHandler, // SetSupported or Handler
NULL, // Values
0, // RelationsCount
NULL, // Relations
NULL, // SupportHandler
0 // SerializedSize
),

};

I also tried giving default values for KSPROPERTY_VIDEOCONTROL_MODE and KSPROPERTY_VIDEOCONTROL_CAPS, but that also failed to make an impact. Is there anything I am missing apart from this in my code?

xxxxx@slscorp.com wrote:

Thanks, Tim.

I tried your suggestion and added the following to my code, but it is still not reflected in the property page. Please look at the code below.

I also tried giving default values for KSPROPERTY_VIDEOCONTROL_MODE and KSPROPERTY_VIDEOCONTROL_CAPS, but that also failed to make an impact. Is there anything I am missing apart from this in my code?

You haven’t shown the code for KStrGetPropertyHandler where you actually
HANDLE these requests. When you get the KSPROPERTY_VIDEOCONTROL_CAPS
request, you have to tell it which video control features you handle,
and of course you have to tell it how many bytes you are returning. I
know this works. My cameras do horizontal and vertical flipping, and if
I set those bits, the checkboxes appear on the property page.
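
For illustration, a get handler that reports the caps and the returned byte count might look like this sketch. The flag choice is an example only, and the cast of Data to the _S structure follows the convention used elsewhere in this thread; verify the exact layout against ksmedia.h:

NTSTATUS
KStrGetPropertyHandler(
    _In_ PIRP Irp,
    _In_ PKSIDENTIFIER Request,
    _Inout_ PVOID Data
    )
{
    PKSPROPERTY pProperty = (PKSPROPERTY) Request;

    if (pProperty->Id == KSPROPERTY_VIDEOCONTROL_CAPS) {
        PKSPROPERTY_VIDEOCONTROL_CAPS_S pCaps =
            (PKSPROPERTY_VIDEOCONTROL_CAPS_S) Data;

        // Report which video control features we handle.
        pCaps->VideoControlCaps = KS_VideoControlFlag_Trigger;

        // Tell KS how many bytes we are returning.
        Irp->IoStatus.Information = sizeof(KSPROPERTY_VIDEOCONTROL_CAPS_S);
        return STATUS_SUCCESS;
    }

    return STATUS_NOT_SUPPORTED;
}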


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

I am trying to implement your suggestions, but the main problem is that my KStrGetPropertyHandler function is not getting called, so how can I know what I have to return? At least my callback function should be triggered first; then I can decide what to do next.

Currently my KStrGetPropertyHandler function is empty and just returns STATUS_SUCCESS.

I am missing something, and that is why my callback function is not getting called. I need to find out what.

Here is my code to support the property in AVStream. Note that my callbacks KStrGetPropertyHandler and KStrSetPropertyHandler are not getting called.

NTSTATUS
KStrGetPropertyHandler(
    _In_ PIRP Irp,
    _In_ PKSIDENTIFIER Request,
    _Inout_ PVOID Data
    )
{
    return STATUS_SUCCESS;
}

NTSTATUS
KStrSetPropertyHandler(
    _In_ PIRP Irp,
    _In_ PKSIDENTIFIER Request,
    _Inout_ PVOID Data
    )
{
    PKSPROPERTY_VIDEOCONTROL_CAPS_S pS = (PKSPROPERTY_VIDEOCONTROL_CAPS_S) Request; // pointer to the input data
    PKSPROPERTY_VIDEOCONTROL_CAPS_S pOutputData = (PKSPROPERTY_VIDEOCONTROL_CAPS_S) Data;

    RtlZeroMemory(pOutputData, sizeof(KSPROPERTY_VIDEOCONTROL_CAPS_S));

    pOutputData->VideoControlCaps =
        KS_VideoControlFlag_ExternalTriggerEnable
        | KS_VideoControlFlag_Trigger
        ;

    return STATUS_SUCCESS;
}

/**************************************************************************

DESCRIPTOR AND DISPATCH LAYOUT

**************************************************************************/

GUID g_PINNAME_VIDEO_CAPTURE = {STATIC_PINNAME_VIDEO_CAPTURE};
GUID g_PINNAME_VIDEO_PREVIEW = {STATIC_PINNAME_VIDEO_PREVIEW};
GUID g_PINNAME_VIDEO_STILL = {STATIC_PINNAME_VIDEO_STILL};
//
// CaptureFilterCategories:
//
// The list of category GUIDs for the capture filter.
//
const
GUID
CaptureFilterCategories [CAPTURE_FILTER_CATEGORIES_COUNT] = {
STATICGUIDOF (KSCATEGORY_VIDEO),
STATICGUIDOF (KSCATEGORY_CAPTURE)
};

// ------------------------------------------------------------------------
// Array of all of the property sets supported by video streams
// ------------------------------------------------------------------------

// ------------------------------------------------------------------------
// Property sets for all video capture streams
// ------------------------------------------------------------------------

DEFINE_KSPROPERTY_TABLE(FrameRateProperties)
{

DEFINE_KSPROPERTY_ITEM
(
KSPROPERTY_VIDEOCONTROL_CAPS,
KStrGetPropertyHandler, // GetSupported or Handler
sizeof(KSPROPERTY_VIDEOCONTROL_CAPS_S), // MinProperty
sizeof(KSPROPERTY_VIDEOCONTROL_CAPS_S), // MinData
KStrSetPropertyHandler, // SetSupported or Handler
NULL, // Values
0, // RelationsCount
NULL, // Relations
NULL, // SupportHandler
0 // SerializedSize
),

DEFINE_KSPROPERTY_ITEM
(
KSPROPERTY_VIDEOCONTROL_MODE,
KStrGetPropertyHandler, // GetSupported or Handler
sizeof(KSPROPERTY_VIDEOCONTROL_MODE_S), // MinProperty
sizeof(KSPROPERTY_VIDEOCONTROL_MODE_S), // MinData
KStrSetPropertyHandler, // SetSupported or Handler
NULL, // Values
0, // RelationsCount
NULL, // Relations
NULL, // SupportHandler
0 // SerializedSize
),

};

DEFINE_KSPROPERTY_SET_TABLE(VideoStreamProperties)
{
DEFINE_KSPROPERTY_SET
(
&PROPSETID_VIDCAP_VIDEOCONTROL, // Set
SIZEOF_ARRAY(FrameRateProperties), // PropertiesCount
FrameRateProperties, // PropertyItem
0, // FastIoCount
NULL // FastIoTable
),
};

//-
//----------------------------------------------

DEFINE_KSAUTOMATION_TABLE (StillPinAutomationTable)
{
DEFINE_KSAUTOMATION_PROPERTIES (VideoStreamProperties),
DEFINE_KSAUTOMATION_METHODS_NULL,
DEFINE_KSAUTOMATION_EVENTS_NULL
};
//---------------------------------------------

//
// CaptureFilterPinDescriptors:
//
// The list of pin descriptors on the capture filter.
//
const
KSPIN_DESCRIPTOR_EX
CaptureFilterPinDescriptors [CAPTURE_FILTER_PIN_COUNT] = {
//
// Video Capture Pin
//
{
&CapturePinDispatch,
NULL,
{
0, // Interfaces (NULL, 0 == default)
NULL,
0, // Mediums (NULL, 0 == default)
NULL,
SIZEOF_ARRAY(CapturePinDataRanges),// Range Count
CapturePinDataRanges, // Ranges
KSPIN_DATAFLOW_OUT, // Dataflow
KSPIN_COMMUNICATION_BOTH, // Communication
&PIN_CATEGORY_CAPTURE, // Category
&g_PINNAME_VIDEO_CAPTURE, // Name
0 // Reserved
},
/*KSPIN_FLAG_DISPATCH_LEVEL_PROCESSING|*/
KSPIN_FLAG_PROCESS_IN_RUN_STATE_ONLY,
1, // Instances Possible
0, // Instances Necessary
&CapturePinAllocatorFraming, // Allocator Framing
reinterpret_cast<PFNKSINTERSECTHANDLEREX>
(CCapturePin::IntersectHandler)
},
//
// still Capture Pin
//
{
&StillPinDispatch,
&StillPinAutomationTable,
{
0, // Interfaces (NULL, 0 == default)
NULL,
0, // Mediums (NULL, 0 == default)
NULL,
SIZEOF_ARRAY(StillPinDataRanges),// Range Count
StillPinDataRanges, // Ranges
KSPIN_DATAFLOW_OUT, // Dataflow
KSPIN_COMMUNICATION_BOTH, // Communication
&PIN_CATEGORY_STILL, // Category
&g_PINNAME_VIDEO_STILL, // Name
0 // Reserved
},
//KSPIN_FLAG_DISPATCH_LEVEL_PROCESSING|
KSPIN_FLAG_PROCESS_IN_RUN_STATE_ONLY,
1, // Instances Possible
0, // Instances Necessary
&StillPinAllocatorFraming, // Allocator Framing
reinterpret_cast<PFNKSINTERSECTHANDLEREX>
(CStillPin::IntersectHandler)
}
};

const
KSFILTER_DISPATCH
CaptureFilterDispatch = {
CCaptureFilter::DispatchCreate, // Filter Create
NULL, // Filter Close
NULL, // Filter Process
NULL // Filter Reset
};

//
// CaptureFilterDescription:
//
// The descriptor for the capture filter. We don’t specify any topology
// since there’s only one pin on the filter. Realistically, there would
// be some topological relationships here because there would be input
// pins from crossbars and the like.
//
const
KSFILTER_DESCRIPTOR
CaptureFilterDescriptor = {
&CaptureFilterDispatch, // Dispatch Table
NULL, // Automation Table
KSFILTER_DESCRIPTOR_VERSION, // Version
0, // Flags
&KSNAME_Filter, // Reference GUID
DEFINE_KSFILTER_PIN_DESCRIPTORS (CaptureFilterPinDescriptors),
DEFINE_KSFILTER_CATEGORIES (CaptureFilterCategories),
0,
sizeof (KSNODE_DESCRIPTOR),
NULL,
0,
NULL,
NULL // Component ID
};

xxxxx@slscorp.com wrote:

Here is my code to support the property in AVStream. Note that my callbacks KStrGetPropertyHandler and KStrSetPropertyHandler are not getting called.

And you’re sure you’re checking the still pin? In your setup, these
properties are only supported on the still pin, not on the capture pin.

DEFINE_KSPROPERTY_TABLE(FrameRateProperties)
{

DEFINE_KSPROPERTY_ITEM
(
KSPROPERTY_VIDEOCONTROL_CAPS,
KStrGetPropertyHandler, // GetSupported or Handler
sizeof(KSPROPERTY_VIDEOCONTROL_CAPS_S), // MinProperty
sizeof(KSPROPERTY_VIDEOCONTROL_CAPS_S), // MinData
KStrSetPropertyHandler, // SetSupported or Handler
NULL, // Values
0, // RelationsCount
NULL, // Relations
NULL, // SupportHandler
0 // SerializedSize
),

DEFINE_KSPROPERTY_ITEM
(
KSPROPERTY_VIDEOCONTROL_MODE,
KStrGetPropertyHandler, // GetSupported or Handler
sizeof(KSPROPERTY_VIDEOCONTROL_MODE_S), // MinProperty
sizeof(KSPROPERTY_VIDEOCONTROL_MODE_S), // MinData
KStrSetPropertyHandler, // SetSupported or Handler
NULL, // Values
0, // RelationsCount
NULL, // Relations
NULL, // SupportHandler
0 // SerializedSize
),

};

I have MinProperty as sizeof(KSPROPERTY) in both cases. I see what the
documentation says, but in this case the documentation isn’t sensible.
The KSPROPERTY is all that’s needed to describe the property.
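
That is, in the property item the MinProperty column would read as in this sketch:

DEFINE_KSPROPERTY_ITEM
(
    KSPROPERTY_VIDEOCONTROL_CAPS,
    KStrGetPropertyHandler,                    // GetSupported or Handler
    sizeof(KSPROPERTY),                        // MinProperty: identifier only
    sizeof(KSPROPERTY_VIDEOCONTROL_CAPS_S),    // MinData: full _S structure
    NULL,                                      // SetSupported or Handler
    NULL,                                      // Values
    0,                                         // RelationsCount
    NULL,                                      // Relations
    NULL,                                      // SupportHandler
    0                                          // SerializedSize
),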


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.