Don't stress on my account. It wasn't meant as criticism; I just wondered whether there were some technical problems associated with it, since AFAIK the old Warp3D doesn't support it.
The XviD codec might be worth considering, but only if the RGB-to-YUV conversion could be done on the GPU. The others are just a waste of time.
You'll be able to do this once I get bitmap-as-texture implemented in Warp3D Nova.
I noticed that bitmap-as-texture is now supported in the latest Warp3D Nova. However, the W3DNBMFmtInfo example reports all the YUV pixel formats as unsupported, so I assume I still can't use it to render into a YUV bitmap?
Edit: I guess what I could do is allocate a PIXF_ALPHA8 bitmap for the render target that is 1.5x as high as the source bitmap and then render the Y, U and V planes to it in different passes.
AFAIK, the hardware doesn't support YUV render targets. You could easily render YUV444 to a standard ARGB bitmap. However, I assume you want YUV420p/410p, which are planar formats, in which case your idea is the way to go: Quote:
I guess what I could do is allocate a PIXF_ALPHA8 bitmap for the render target that is 1.5x as high as the source bitmap and then render the Y, U and V planes to it in different passes.
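To make the layout concrete, the three passes would carve up the taller bitmap something like this (a rough sketch; render_plane() is made-up shorthand for however you set up and issue each pass):

```c
/* Sketch of packing YUV420p planes into one PIXF_ALPHA8 bitmap of size
 * width x (height * 3 / 2). render_plane() is hypothetical shorthand for
 * "configure the pass and draw into this rectangle".
 *
 *   +-----------------------+  row 0
 *   |        Y plane        |
 *   |    width x height     |
 *   +-----------+-----------+  row height
 *   |  U plane  |  V plane  |
 *   | w/2 x h/2 | w/2 x h/2 |
 *   +-----------+-----------+  row height + height / 2
 */
uint32 w = srcWidth, h = srcHeight;   /* assumed even */

render_plane(yuvBM, 0,     0, w,     h,     PASS_Y);  /* full-res luma  */
render_plane(yuvBM, 0,     h, w / 2, h / 2, PASS_U);  /* quarter-res Cb */
render_plane(yuvBM, w / 2, h, w / 2, h / 2, PASS_V);  /* quarter-res Cr */

/* Caveat: the chroma rectangles sit side by side, so copying them out into
 * contiguous U/V planes means walking w/2 bytes per row using the bitmap's
 * full row stride. */
```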
I added some more error checking (apparently I had missed it on some functions) and now I get an error from FBBindBufferTags().
It's "Error 9: unsupported bitmap/texture format", but the bitmap is PIXF_ALPHA8, which W3DNBMFmtInfo lists as "can be texture, can be rendered to, max width: 16384, max height: 16384". It's also nowhere near the maximum width and height.
Are you using W3DNTag_Texture instead of W3DNTag_BitMap by any chance? W3DNTag_Texture is for Warp3D Nova textures, whereas W3DNTag_BitMap is for graphics.library bitmaps.
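i.e., something along these lines (parameter list written from memory, so double-check it against the SDK autodocs; the point is which tag to use):

```c
/* Wrong: W3DNTag_Texture expects a W3DN_Texture object, so handing it a
 * struct BitMap* fails with "unsupported bitmap/texture format".
 *
 * errCode = context->FBBindBufferTags(context, frameBuffer, 0,
 *     W3DNTag_Texture, (uint32)bitMap, TAG_DONE);
 */

/* Right: W3DNTag_BitMap is the tag for a graphics.library bitmap.
 * (The "0" attachment-point argument is assumed here; check the autodoc.) */
W3DN_ErrorCode errCode = context->FBBindBufferTags(context, frameBuffer, 0,
    W3DNTag_BitMap, (uint32)bitMap, TAG_DONE);
```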
This is where I wish we had a debug layer that would catch such things and tell you what you're doing wrong.
Maybe you could add a tag to force treating a PIXF_CLUT bitmap as PIXF_ALPHA8?
Other than that, GetBitMapAttr() could be modified to return PIXF_ALPHA8, but the question is whether that would have repercussions, like breaking something else.
I'll just silently treat PIXF_CLUT as an alpha map, and also fix GetBitMapAttr(). The deprecated p96GetBitMapAttr() actually returns the correct value.
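Conceptually, the Nova-side change amounts to something like this (illustrative only, not actual driver code):

```c
/* Illustrative pseudologic for the planned fix: an 8-bit bitmap that
 * reports PIXF_CLUT gets bound as if it were PIXF_ALPHA8. */
uint32 fmt = GetBitMapAttr(bitMap, BMA_PIXELFORMAT);
if (fmt == PIXF_CLUT) {
    fmt = PIXF_ALPHA8;  /* silently treat CLUT bitmaps as alpha maps */
}
```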
However, that does mean waiting for updated versions to be released...
Now that GetBitMapAttr() returns PIXF_ALPHA8, I've managed to get my shader-based RGB-to-YUV420P conversion code working.
BTW, do you know what package I need to install to get glslangValidator on my Ubuntu system, or where I can get the source code so I can compile it myself?
Compared with a similar CPU-based RGB-to-YUV420P conversion routine, the difference is dramatic: the CPU-based one takes about 177 ms for one 1920x1080 32-bit bitmap, while the shader-based one takes only 2.5 ms (roughly a 70x speedup).
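For reference, the per-pixel math boils down to a 3x3 matrix multiply. A minimal CPU version using the common BT.601 limited-range integer coefficients might look roughly like this (an assumption; use whatever matrix your encoder expects):

```c
#include <stdint.h>

/* Reference RGB -> YUV420P converter using the common BT.601 limited-range
 * integer coefficients (an assumption -- your actual matrix may differ).
 * Chroma is taken from the top-left pixel of each 2x2 block to keep the
 * sketch short; width and height are assumed even, and an arithmetic
 * right shift of negative values is assumed. */
static void rgb_to_yuv420p(const uint8_t *argb, uint32_t width, uint32_t height,
                           uint8_t *yPlane, uint8_t *uPlane, uint8_t *vPlane)
{
    for (uint32_t j = 0; j < height; j++) {
        for (uint32_t i = 0; i < width; i++) {
            const uint8_t *p = argb + (j * width + i) * 4;
            int32_t r = p[1], g = p[2], b = p[3];   /* big-endian ARGB */

            yPlane[j * width + i] =
                (uint8_t)(16 + ((66 * r + 129 * g + 25 * b + 128) >> 8));

            if ((i & 1) == 0 && (j & 1) == 0) {     /* once per 2x2 block */
                uint32_t c = (j / 2) * (width / 2) + (i / 2);
                uPlane[c] = (uint8_t)(128 + ((-38 * r - 74 * g + 112 * b + 128) >> 8));
                vPlane[c] = (uint8_t)(128 + ((112 * r - 94 * g - 18 * b + 128) >> 8));
            }
        }
    }
}
```

Every output byte is independent of the others, which is exactly why this maps so well onto a fragment shader and why the GPU version is so much faster.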