Virtual camera driver, modifying image in capture buffer?

I'm working on a camera driver based on the AVSHWS sample from MSDN. It reads a synthesized image in RAM and sends it through the pin.

Unfortunately, what I really want to do is read the image in the camera's capture buffer, add a decal to it (a watermark), and send that to the pin.

I can't find a way or a sample showing how to do this. I haven't even found a way to modify a single pixel coming from the capture buffer.

Is this possible? I want my driver to be able to watermark the image coming from the camera before it's sent to the pin.

Also, if possible: if for some reason there's no image coming from the camera, I want to be able to send a default image through the driver.

xxxxx@gmail.com wrote:

I'm working on a camera driver based on the AVSHWS sample from MSDN. It reads a synthesized image in RAM and sends it through the pin.

Unfortunately, what I really want to do is read the image in the camera's capture buffer, add a decal to it (a watermark), and send that to the pin.

This is not the right way to do this. As a general rule, never do
anything in kernel mode that doesn’t absolutely have to be in kernel.
Image processing, in particular, belongs in user mode.

I can't find a way or a sample showing how to do this. I haven't even found a way to modify a single pixel coming from the capture buffer.

Is this possible? I want my driver to be able to watermark the image coming from the camera before it's sent to the pin.

Of course it’s possible, even though it’s not advisable. Your driver is
fetching the images from the camera hardware. Your driver is the one
copying those bits to the leading edge of the stream, but that leading
edge is just memory. Only you know where that happens. Also, only you
know what format the bits are in. If the camera is producing MJPEG,
it’s not going to be very easy for you to do a watermark.
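
(For illustration only, and not code from avshws or from this thread: if
the frames were uncompressed RGB24, stamping a watermark at that copy
point could look roughly like the sketch below. Every name in it is
hypothetical.)

// Sketch only: blend a small RGB24 watermark into a captured RGB24
// frame at the point where the driver copies bits to the leading edge.
// All names here are made up; a compressed format such as MJPEG would
// have to be decoded first.
void StampWatermark(
    unsigned char *FrameBuffer,       // destination frame, RGB24
    unsigned long FrameStride,        // bytes per frame row
    const unsigned char *Watermark,   // watermark bitmap, RGB24
    unsigned long WmWidth,            // watermark width in pixels
    unsigned long WmHeight,           // watermark height in rows
    unsigned long OffsetX,            // placement in the frame, pixels
    unsigned long OffsetY)
{
    for (unsigned long y = 0; y < WmHeight; y++) {
        unsigned char *dst = FrameBuffer
            + (OffsetY + y) * FrameStride + OffsetX * 3;
        const unsigned char *src = Watermark + y * WmWidth * 3;
        for (unsigned long x = 0; x < WmWidth * 3; x++) {
            // 50/50 blend of the watermark over the captured pixel.
            dst[x] = (unsigned char)((dst[x] + src[x]) / 2);
        }
    }
}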

Also, if possible: if for some reason there's no image coming from the camera, I want to be able to send a default image through the driver.

Again, this is the wrong way to do this. If the camera is not producing
frames, then you should not return any frames. It is completely up to
the application to decide what to do in that case. YOU should not be
establishing policy on that.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Thank you, I get your point. But let's suppose I do want to do this, just as a learning experiment: where should I start? What would I need to do something like this?

YIKES!

Standard response #8


Oh sorry, that was just off the top of my head; I didn't mean to sound so predictable.

Look, I know this is really not the best way to do this, since the capture buffer is supposed to be read-only, at least at this stage. But I'm curious whether it's possible to modify those values from the kernel before they are sent to the pin. There are several applications that I'm thinking could be implemented this way.

Of course, I might be completely wrong, and what I need is a part of the driver that runs in user mode, or even a high-level application that is somehow attached.

The problem is I have no idea how to start working on such an app; all I have as resources are the WDK samples and a working high-level Windows camera capture app I already made. (Unfortunately that app is just for testing purposes; I want the virtual camera driver to be usable from any Windows app.)


Gregory G. Dyess wrote:

YIKES!

Standard response #8

I think you have a point here, I shouldn’t be doing this in kernel mode.

Is there a way to attach a high-level, user-mode image processing app to
the driver?

How could I attach the driver, or part of the driver, in user mode so the
image processing happens there?

My goal would be to have the result available in any camera capture app,
like a Metro app or Skype.

Thank you so much for your answers.


On Jun 3, 2015, at 1:08 PM, xxxxx@gmail.com wrote:

Look, I know this is really not the best way to do this, since the capture buffer is supposed to be read-only, at least at this stage.

Where did you get this idea? There’s nothing “read only” in the capture path. The driver reads from the hardware into perfectly ordinary memory.

But I'm curious whether it's possible to modify those values from the kernel before they are sent to the pin. There are several applications that I'm thinking could be implemented this way.

See, here’s the problem. If YOU think it’s a cute idea to apply humorous backgrounds or strange video effects to ingested video, then a bunch of other people have the same idea. Pretty soon, you have 8 layers of filters, and there isn’t enough CPU left to be reactive. This is EXACTLY what led Microsoft to redesign the audio stack in Vista.

If you want to do magic stuff to a camera that you are building, you can have the camera advertise a custom video format, then implement your magic filtering in a DMO codec filter in user-mode. But if you are thinking about creating a general-purpose filter that will apply to ANY camera in ANY application, then you have a big problem. You CAN write an upper filter driver, but that means you have to understand all of the Kernel Streaming ioctls so you can, for example, figure out what format is being used. You also have to understand that cameras use many different formats, and you won’t have any control over that.
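
(Purely as an illustrative aside, not from this thread: registering such
a user-mode DMO under a decoder category might look roughly like this.
The CLSID and the input subtype are placeholders; the real DMO would
also have to implement IMediaObject, which is omitted here.)

#include <windows.h>
#include <initguid.h>   // define the GUIDs declared in the headers below
#include <dmoreg.h>     // DMORegister, DMOCATEGORY_VIDEO_DECODER
#include <uuids.h>      // MEDIATYPE_Video, MEDIASUBTYPE_RGB24
#pragma comment(lib, "msdmo.lib")

// Placeholder CLSID for the hypothetical DMO -- not a real identifier.
static const GUID CLSID_MyDecoderDmo =
    { 0x12345678, 0x1234, 0x1234,
      { 0x12, 0x34, 0x12, 0x34, 0x12, 0x34, 0x56, 0x78 } };

HRESULT RegisterMyDmo()
{
    // One custom input type and RGB24 output. MEDIASUBTYPE_None stands
    // in for whatever custom subtype the camera driver would advertise.
    DMO_PARTIAL_MEDIATYPE in  = { MEDIATYPE_Video, MEDIASUBTYPE_None };
    DMO_PARTIAL_MEDIATYPE out = { MEDIATYPE_Video, MEDIASUBTYPE_RGB24 };

    return DMORegister(L"My Decoder DMO", CLSID_MyDecoderDmo,
                       DMOCATEGORY_VIDEO_DECODER, 0,
                       1, &in, 1, &out);
}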

Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Oh, OK, I understand your concern; however, this is intended for a specific
hardware and driver set.

“Where did you get this idea? There’s nothing “read only” in the capture
path. The driver reads from the hardware into perfectly ordinary memory.”

I got this idea because the capture buffer I get from the graphics driver
(AVGFX) is apparently a locked surface. (I'm not entirely sure whether
it's DX or GDI.)

“If you want to do magic stuff to a camera that you are building, you can
have the camera advertise a custom video format, then implement your magic
filtering in a DMO codec filter in user-mode.”

Yes, this is basically what I'm trying to do, although I haven't found a
sample DMO or MFT for camera input. I have found one for DirectShow
filters, but Windows Store apps won't recognize them.


German Cons wrote:

Oh, OK, I understand your concern; however, this is intended for a
specific hardware and driver set.

“Where did you get this idea? There’s nothing “read only” in the
capture path. The driver reads from the hardware into perfectly
ordinary memory.”

I got this idea because the capture buffer I get from the graphics
driver (AVGFX) is apparently a locked surface. (I'm not entirely sure
whether it's DX or GDI.)

The source of the memory is irrelevant. I would hope it is obvious that
a capture driver cannot possibly operate if it is handed read-only memory.

How do you know the capture buffer is coming from a graphics driver? In
a normal AVStream driver, you are handed buffers in your Process
callback, and you have no idea where they came from. I get the feeling
this is one of those cases where you’ve told us about 5% of the details
we need to give intelligent advice.

“If you want to do magic stuff to a camera that you are building, you
can have the camera advertise a custom video format, then implement
your magic filtering in a DMO codec filter in user-mode.”

Yes, this is basically what I'm trying to do, although I haven't found a
sample DMO or MFT for camera input. I have found one for DirectShow
filters, but Windows Store apps won't recognize them.

Googling for “dmo codec sample” provides a full page of useful hits.

What kinds of Windows Store apps do you need to support? The camera
component in WinRT is idiotically limited; I doubt you’ll be able to do
anything to affect that.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

“The source of the memory is irrelevant. I would hope it is obvious that
a capture driver cannot possibly operate if it is handed read-only memory.”

OK, that makes a lot of sense. I should have thought of that.

I think I have found the root of the problem in avshws. What I want to do
is change the SynthesisBuffer from this memory pool, created here:

m_SynthesisBuffer = reinterpret_cast<PUCHAR>(
    ExAllocatePoolWithTag(
        NonPagedPool,
        m_ImageSize,
        AVSHWS_POOLTAG
        )
    );

if (!m_SynthesisBuffer) {
    Status = STATUS_INSUFFICIENT_RESOURCES;
}

What I'm trying to do is change that buffer to point into the graphics
driver's capture buffer.

Is this possible?


German Cons wrote:

I think I have found the root of the problem in avshws. What I want to
do is change the SynthesisBuffer from this memory pool, created here:

What I'm trying to do is change that buffer to point into the graphics
driver's capture buffer.

Is this possible?

For what purpose? I have a suspicion that you are hacking away at a
driver here without having a clear sense of where your driver fits. An
AVStream capture driver gets video frames from somewhere, and copies
them to the buffers that are handed to it in the stream pointer. So,
where do your frames come from? Is there camera hardware somewhere?
How does a graphics driver come into play? What do you expect to do
with those capture buffers?

Let me give you an example. Let’s say your USB camera exposes a format
that is handled directly by a texture surface in the graphics driver.
In that case, your driver has no contact with the graphics driver. It
is entirely up to the graph manager (either DirectShow or Media
Foundation) to connect up your capture filter with a renderer filter.
The renderer filter will allocate buffers in graphics memory and send
them to the graph. Your driver will receive those buffers in the stream
pointer in its Process callback, and you will copy frames into those
buffers, but you won’t have any idea that the memory lives in graphics
memory. That’s all abstracted away.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

OK, I have to level with you: I'm not able to provide a lot of detail,
and I understand that's a problem, but let me try to explain.

Remember I mentioned I have a set of drivers? Well, here is the thing:
I have a graphics driver that needs to interact with the camera, and as
you explained, the current model used by AVSHWS bypasses that graphics
driver.

What I need to do is make the emulator send the information via the
capture buffer, so I can tell whether the graphics driver is being
called in that case.


German Cons wrote:

OK, I have to level with you: I'm not able to provide a lot of detail,
and I understand that's a problem, but let me try to explain.

Remember I mentioned I have a set of drivers? Well, here is the thing:
I have a graphics driver that needs to interact with the camera, and as
you explained, the current model used by AVSHWS bypasses that graphics
driver.

HOW do you expect the graphics driver to interact with the camera? What
function will the graphics driver serve?

What I need to do is make the emulator send the information via the
capture buffer, so I can tell whether the graphics driver is being
called in that case.

But you’re throwing out a lot of terms here without connecting any of
them together. You have a real camera, you have an emulator, you have a
graphics driver, and presumably there’s an application somewhere that
hopes to consume the frames. What is the data flow between them? When
a frame arrives from a hardware camera, what’s going to happen to that
frame – step by step – before it gets into the application’s hands?


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

“HOW do you expect the graphics driver to interact with the camera? What
function will the graphics driver serve?”

Oh, OK. Basically, all I want is to be able to call the GetCaptureBuffer
method from the driver, so I can determine whether the driver is working.

“But you’re throwing out a lot of terms here without connecting any of
them together. You have a real camera, you have an emulator, you have a
graphics driver, and presumably there’s an application somewhere that
hopes to consume the frames. What is the data flow between them? When
a frame arrives from a hardware camera, what’s going to happen to that
frame – step by step – before it gets into the application’s hands?”

Actually, it's pretty straightforward. Right now I'm able to access the
capture buffer and process the data in it, sending the result to the pin.

However, if there is NO camera attached, the data is empty.

So I want the AVSHWS image synthesizer to create an image in the capture
buffer, so I have data to process.

I don't really need this data to be very complex, like the color bars;
anything would work, even a single-color image.

I think one way to do this would be to let the image synthesizer create
the image in RAM (which it already does) and then copy it to the capture
buffer. But I don't know how to do that.

I know how to READ the capture buffer, but I don't know how to WRITE to it.

So basically, I want the driver to emulate the camera (when hardware is
not present) and use the capture buffer, so I can test whether the
graphics capture method is working correctly.


" if the graphics capture method is working correcty."

Sorry I meant, “the graphics Driver GetCaptureBuffer Method is working”


German Cons wrote:

Oh, OK. Basically, all I want is to be able to call the GetCaptureBuffer
method from the driver, so I can determine whether the driver is working.
Actually, it's pretty straightforward. Right now I'm able to access the
capture buffer and process the data in it, sending the result to the pin.

However, if there is NO camera attached, the data is empty.

I am simply amazed that I have not been able to ask my questions in a
way that actually induces you to answer them. None of what you have
said explains your data flow.

Here’s what I am interpreting, by reading between the lines. I
shouldn’t have to read between the lines, but you haven’t given us the
details.

I’m guessing that your AVStream driver doesn’t actually talk to hardware
at all. Instead, you have some other driver that talks to the camera
(you keep calling it “the graphics driver”, but I don’t think that’s
really what you mean; graphics drivers don’t talk to cameras). You are
calling a function called GetCaptureBuffer in that other driver, and
copying that to the stream pointer leading edge, where it can be
consumed as a captured image. Is that correct?

If that’s really the architecture, then you can’t induce another driver
to generate data for you. However, you can certainly generate an all-blue
image in a static memory buffer, then at the point where you get
the frame, you do:

PBYTE pImage = NULL;
if (real camera is present)   // pseudocode condition
{
    pImage = GetCaptureBuffer();
}
else
{
    pImage = staticBlueImage;
}
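
(A sketch of generating that static blue image, assuming RGB24 frames;
AllocateBlueFrame and the pool tag are made up for illustration.)

// Sketch only: build a static all-blue RGB24 frame once, for example
// when the pin is created.
PUCHAR AllocateBlueFrame(ULONG Width, ULONG Height)
{
    ULONG size = Width * Height * 3;   // RGB24: 3 bytes per pixel
    PUCHAR buf = reinterpret_cast<PUCHAR>(
        ExAllocatePoolWithTag(NonPagedPool, size, 'eulB'));
    if (!buf) {
        return NULL;
    }
    for (ULONG i = 0; i < size; i += 3) {
        buf[i + 0] = 0xFF;   // blue (RGB24 is stored B, G, R)
        buf[i + 1] = 0x00;   // green
        buf[i + 2] = 0x00;   // red
    }
    return buf;
}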

I think one way to do this would be to let the image synthesizer
create the image in RAM (which it already does) and then copy it
to the capture buffer. But I don't know how to do that.

Why? You don’t really need to modify the capture buffer, you need to
change what you return. You can do that using something like the above.

So basically, I want the driver to emulate the camera (when hardware
is not present) and use the capture buffer, so I can test whether the
graphics capture method is working correctly.

If the “graphics capture method” returns nothing when there is no
camera, then you can’t really do this without modifying the capture driver.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

“I am simply amazed that I have not been able to ask my questions in a
way that actually induces you to answer them. None of what you have
said explains your data flow.”

Really sorry about that. Like I said earlier, I'm not able to provide
many details.

" You are calling a function called GetCaptureBuffer in that other driver,
and
copying that to the stream pointer leading edge, where it can be
consumed as a captured image. Is that correct?"

Yes, exactly.

"If that’s really the architecture, then you can’t induce another driver
to generate data for you. However, you can certainly generate an all
blue image in a static memory buffer, then at the point where you get
the frame, you do:

PBYTE pImage = NULL;
if( real camera is present )
{
pImage = GetCaptureBuffer();
}
else
{
pImage = staticBlueImage;
}"

That's great! Yes, I hadn't considered simply not doing anything to the
capture buffer and just using the synthesized image I already have
instead, if no camera is detected.

Now here is the question I have to do that, right now what I'm doing to add
CaptureBuffer support to AVSHWS
is to add these property methods in the table:

DEFINE_KSPROPERTY_TABLE(PinCaptureProperties)
{
    DEFINE_KSPROPERTY_ITEM_DISPLAY_ADAPTER_GUID(
        CCapturePin::PropertyGetDisplayAdapterGUID),
        // THIS WILL WORK IF THIS PROPERTY IS NOT SET
    DEFINE_KSPROPERTY_PREFERRED_CAPTURE_SURFACE(
        CCapturePin::PropertyGetPreferredCaptureSurface),
    DEFINE_KSPROPERTY_CURRENT_CAPTURE_SURFACE(
        CCapturePin::PropertyGetCurrentCaptureSurface,
        CCapturePin::PropertySetCurrentCaptureSurface),
    DEFINE_KSPROPERTY_MAP_CAPTURE_HANDLE_TO_VRAM_ADDRESS(
        CCapturePin::PropertyGetMapCaptureHandleToVramAddress),
};

(And the methods too, of course; they're too large to publish here.)

Using that scheme, how can I now change the capture surface in real time,
so it uses my synthesized image (from AVSHWS) instead of the capture
buffer?

BTW: CCapturePin::PropertyGetMapCaptureHandleToVramAddress returns the
capture buffer.


Thanks again for your attention to this thread. I'm not able to publish a
lot of the code, due to NDA and regulations; I really appreciate the
effort to understand the problem despite that limitation.

"Now here is the question I have to do that, right now what I'm doing to add
CaptureBuffer support to AVSHWS
is to add these property methods in the table:"

Ouch. Please allow me to rephrase that; it sounded much better in my
head.

"Now here is what I would like to ask, this is the way Im modifying AVSHWS
in order to incorporate my driver Capture Buffer surface according to msdn
documentation I need to add these properties to the table


Now, based on that scheme, it seems I can't just call pin->setCaptureSurface
to change the capture surface back to RAM. So how could I do that? How can
I change the capture surface back to the original RAM-synthesized image?

I've been looking for an example of how to do that, but I haven't found any.


German Cons wrote:

" You are calling a function called GetCaptureBuffer in that other
driver, and
copying that to the stream pointer leading edge, where it can be
consumed as a captured image. Is that correct?"

Yes, exactly.

Apparently not “exactly”, because the rest of this message describes a
somewhat different architecture.

You are talking about implementing KSPROPSETID_VramCapture. That
property set is used when a capture driver wants to deliver its frames via
a graphics adapter’s VRAM instead of returning them in system memory.
This might be used, for example, to implement hardware-assisted decoding
of video frames. This property set tells DirectShow “you don’t need to
allocate any memory for buffers – instead, I’ll tell you where in this
display adapter’s VRAM that I will return the images.”

So, when you implement those properties, KS will hand you the
VRAM_SURFACE_INFO structure when you fetch the stream leading edge
pointer, and you are expected to copy your next frame into that buffer.
It isn’t really designed to be used when the SOURCE of the frames is VRAM.

Remember that, like most drivers, an AVStream driver has two interfaces
to the outside world: a bottom side that gets closer to hardware, and a
top side that gets closer to applications. Ordinarily, an AVStream
driver pulls frames from some piece of hardware, and copies the frames
into system memory, where they can be viewed on screen or copied to
file. It’s just plumbing. I thought you were saying that your camera
is connected to the graphics card, so that the incoming frames will
arrive in graphics memory. Are you also going to DELIVER those frames
to your clients in the graphics memory? How will the frames be used?

Using that scheme Now how can I change in realtime the capture surface
so it tries to use my synthetized image (from AVSHWS) instead of the
capture buffer?

If you can’t get VRAM from your graphics device when the camera is not
connected, then you can’t use KSPROPSETID_VramCapture. You’ll have to
fail those requests. KS will then allocate system memory like normal,
and hand THAT to you in the stream pointer.
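
(Illustrative sketch only, not the sample's code: the general shape of
failing such a request when there is no camera, assuming a hypothetical
g_CameraPresent flag and the standard KS property-handler signature.)

#include <ntddk.h>
#include <ks.h>

// Hypothetical flag, set by whatever code detects the camera hardware.
extern BOOLEAN g_CameraPresent;

NTSTATUS
PropertyGetPreferredCaptureSurface(
    IN PIRP Irp,
    IN PKSPROPERTY Request,
    IN OUT PVOID Data
    )
{
    UNREFERENCED_PARAMETER(Irp);
    UNREFERENCED_PARAMETER(Request);
    UNREFERENCED_PARAMETER(Data);

    if (!g_CameraPresent) {
        // Failing the VRAM property request makes KS fall back to
        // handing the pin ordinary system-memory buffers.
        return STATUS_NOT_SUPPORTED;
    }

    // A real handler would describe the preferred VRAM surface here.
    return STATUS_SUCCESS;
}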


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

OK, now I understand what is going on.

"Are you also going to DELIVER those frames
to your clients in the graphics memory? How will the frames be used?"

The frames are supposed to be processed and sent to a pin later.

So let me see if I understand: I need to set the properties from the
beginning, whether I'm using VRAM or not. And it's supposed to switch
automatically if no camera is found, since there will be no VRAM capture
buffer if no capture hardware is found.

Correct?

Unfortunately, it's not doing that; I'm getting a black screen instead.

Let me dig into the code to see what is going on.
