Re: Why can't D3DFMT_A8R8G8B8 work together with D3DFMT_D16?

Moreira, Alberto writes:
> I’m not too sure about D3D, I’m an OpenGL guy, but you could be
> running out of video memory, how big is your card’s memory ? Note
> that A8R8G8B8 needs 4 bytes per pixel, so, 1280x1024 for example,
> requires about 5.2Mb for the front buffer plus 5.2Mb for the back
> buffer. A 16-bit depth buffer requires 1280x1024x2 = 2.6Mb. Now if
> you add one more surface in offscreen memory, that’s 5.2Mb more,
> so, you see, you run out of memory pretty fast, and I’m not even
> talking about textures yet. I’ll be surprised, however, if D3D ties
> the number of bits in the depth buffer to the bit depth of the
> color buffer, these should be independent.

I’m coming at this from the DirectDraw 2D driver side - hence some
confusion and misleading information on my part!

The D3D 8.0 calls CheckDeviceFormat and CheckDepthStencilMatch can be
used to determine whether a given depth format is compatible with a
given render target colour format, but I was wrong to assume a strict
dependence of one upon the other - which you correctly point out. It
appears that their compatibility depends on the capabilities reported
by the device/driver.
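
For the application side, a minimal sketch of that check (untested,
and assuming the default HAL adapter - the adapter format parameter
should really come from GetAdapterDisplayMode):

#include <d3d8.h>

// Returns TRUE if the runtime/driver report depthFmt as usable with
// targetFmt on the default HAL device.
BOOL DepthFormatWorksWithTarget(IDirect3D8 *pD3D,
                                D3DFORMAT adapterFmt,  // current display mode format
                                D3DFORMAT targetFmt,   // e.g. D3DFMT_A8R8G8B8
                                D3DFORMAT depthFmt)    // e.g. D3DFMT_D16
{
    // 1. Is the depth format supported as a depth/stencil surface at all?
    if (FAILED(pD3D->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                       adapterFmt, D3DUSAGE_DEPTHSTENCIL,
                                       D3DRTYPE_SURFACE, depthFmt)))
        return FALSE;

    // 2. Does the driver report it as compatible with this render target?
    return SUCCEEDED(pD3D->CheckDepthStencilMatch(D3DADAPTER_DEFAULT,
                                                  D3DDEVTYPE_HAL,
                                                  adapterFmt,
                                                  targetFmt,
                                                  depthFmt));
}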

My impression from the original post was that D3D itself was failing
the call. It will do this if the parameters passed are incompatible
with the (reported) capabilities of the device or are inconsistent
with each other (which is what I originally and incorrectly assumed).
In other words, it is possible that the driver is not reporting its
capabilities correctly. The DirectX Caps Viewer from the DirectX 8.0
SDK samples can be used to examine device capabilities and might help
explain why things are failing.
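
As a rough programmatic equivalent of browsing the Caps Viewer, you
can also ask the runtime which common depth formats it will pair with
A8R8G8B8 on the current display mode (again only a sketch, reusing the
helper above; the candidate list is just an example):

#include <stdio.h>

void DumpDepthSupport(IDirect3D8 *pD3D)
{
    D3DDISPLAYMODE mode;
    if (FAILED(pD3D->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &mode)))
        return;

    // A few common depth/stencil formats to test against A8R8G8B8.
    const D3DFORMAT depthFmts[4] = { D3DFMT_D16, D3DFMT_D24S8,
                                     D3DFMT_D24X8, D3DFMT_D32 };
    for (int i = 0; i < 4; ++i)
        printf("depth format %d with A8R8G8B8: %s\n", (int)depthFmts[i],
               DepthFormatWorksWithTarget(pD3D, mode.Format,
                                          D3DFMT_A8R8G8B8,
                                          depthFmts[i]) ? "OK" : "rejected");
}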

(And, of course, running out of video memory can happen and will
certainly cause calls to fail, but you’ll see the failure to obtain
video memory in the driver itself, since it’s the driver that does the
allocation.)

Gordon

>>> I’m developing a display driver on w2k supporting Direct3D.
>>> However, when the application creates a 3D device with an
>>> offscreen plain surface of D3DFMT_A8R8G8B8 and a depth
>>> buffer of D3DFMT_D16, DX always reports that these two formats
>>> can’t work together on this card. In fact, any type of
>>> surface that has alpha pixels can’t work with D3DFMT_D16. Why
>>> so? Thank you
>>
>> An educated guess says the D3DFMT_A8R8G8B8 colour format needs 32
>> bits - 8 bits each for R, G, B and Alpha. However _D16 indicates
>> a 16 bit deep surface - hence the incompatibility. I could be
>> wrong but from looking at the header it looks as if
>> D3DFMT_A4R4G4B4 (or D3DFMT_A1R5G5B5) would be the correct format
>> with alpha for a 16 bit (D3DFMT_D16) surface. Also, if you want
>> D3DFMT_A8R8G8B8 you need D3DFMT_D32.

