@pavel
I cannot program GPUs and do not desire a mythological protagonistic role :D Take my boost tho
@pavel No, it's
RGRGRG
GBGBGB
You lose meaningful data if you ignore half of the green pixels.
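The RGGB layout above can be sketched in NumPy (an illustration with color labels standing in for actual samples; not code from the thread):

```python
import numpy as np

# A hypothetical RGGB Bayer mosaic: even rows alternate R,G and odd rows G,B.
h, w = 4, 4
pattern = np.empty((h, w), dtype="U1")
pattern[0::2, 0::2] = "R"  # red on even rows, even columns
pattern[0::2, 1::2] = "G"  # green on even rows, odd columns
pattern[1::2, 0::2] = "G"  # green on odd rows, even columns
pattern[1::2, 1::2] = "B"  # blue on odd rows, odd columns

# Green samples occupy half of all photosites, so dropping one of the two
# green planes discards a quarter of the sensor's data.
green_fraction = (pattern == "G").mean()
print(green_fraction)  # 0.5
```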
I see no reason why it couldn't be done. Just take care not to introduce needless copies in your processing path. dmabufs are your friends.
@pavel Since I assume you're going to want to pass the rendered image into some kind of video encoder, you may want to make sure that you match stride and alignment requirements with your target buffer so etnaviv will be able to perform linear rendering rather than de-tile it afterwards (though IIRC it's currently gated behind ETNA_MESA_DEBUG).
@dos @pavel
Adding to that: what data type is the image data (float, int, ...?), and what data type is expected to come out?
Instead of trying to outsource this to the GPU, have you considered SIMD? (I assume the Librem 5 and PinePhone support NEON.)
If the GPU is better suited, another question is whether there's support for compute shaders on the respective GPUs (and what OpenGL version is supported, assuming there is no Vulkan support on these devices).
@tizilogic @pavel It's either 8-bit int, or 10-bit int stored as 16-bit.
GC7000L supports compute shaders, but etnaviv isn't there yet.
Naive debayering is easy, but for good picture quality you need much more than that.
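The 10-bit-in-16-bit case mentioned above can be normalized like this (a sketch, assuming the samples sit in the low 10 bits of each uint16):

```python
import numpy as np

# Hypothetical raw samples: 10-bit values stored in the low bits of uint16.
raw = np.array([0, 512, 1023], dtype=np.uint16)

# Normalize to [0, 1]; full scale for a 10-bit sample is 2**10 - 1 = 1023.
norm = raw.astype(np.float32) / 1023.0
print(norm)
```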
@pavel do you have a single frame of raw pixel data? What is the target API (OpenGL, -ES, Vulkan)?
@pavel It would be great to have some actual frame data from the camera sensor, or some test data, that I can load into a texture and write a shader to do the conversion. With OpenGL ES (which is what you have), the trick is to load the pixels into an RG texture that is twice as wide and half as high as the original frame, so that "upstairs"/"downstairs" neighbor pixels in consecutive rows are of the same primitive color; this avoids issues with arithmetic and texel-addressing precision.
This is an OpenGL ES 2.0 solution:
https://github.com/rasmus25/debayer-rpi
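The row-pairing layout described above can be demonstrated with a plain reshape in NumPy (an illustration of the indexing, not the actual texture upload):

```python
import numpy as np

# A tiny 4x4 RGGB mosaic, labeled by color for illustration.
frame = np.array([
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
])

# Reshape to half the height and twice the width: each output row now
# holds two consecutive input rows side by side.
wide = frame.reshape(2, 8)

# In this layout, vertically adjacent samples come from original rows
# that are two apart, so they always share the same primitive color.
same_color = (wide[0] == wide[1]).all()
print(same_color)  # True
```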
There's also support for a software ISP in libcamera. I think I've seen some mentions of GPU-backed debayering too.
@pavel I'm confused. V4L lets you stream to a CMA dmabuf which should be importable as GL_TEXTURE_EXTERNAL_OES, right? Or am I missing something?
@pavel On 9f076a5, I'm getting 88MB/s with one green channel, 82MB/s with two and 105MB/s with nothing but static gl_FragColor. The three copies it does could be eliminated and I believe texelFetch could make it slightly faster on the GPU side too.
@pavel Megapixels is not an example of how to do things in the most performant way :) OpenGL operates in a VRAM-centric model, it's very copy-heavy. We don't need to copy things around, as our GPUs operate on the exact same memory CPUs do.
See GL_OES_EGL_image_external and https://docs.kernel.org/userspace-api/media/v4l/dmabuf.html
@pavel After eliminating glReadPixels and having the output buffer mmaped instead: "18.9 MB in 0.08s = 244.4 MB/s"
After putting glTexImage2D out of the loop to emulate zero-copy import from V4L as well:
"18.9 MB in 0.05s = 400.1 MB/s"
@pavel Not only did you have copies into and out of the GLES context there, but those copies were sequential, and your benchmark waited until things were copied before proceeding with the next frame, so it was pretty much useless for assessing GPU performance. In practice, GStreamer can happily encode the previous frame while the GPU is busy with the current one, all while the CSI controller is already receiving the next one.
@pavel Also, it gets faster when you increase the buffer size, because rendering is so fast you're mostly measuring API overhead 😁
With full 13MP frames: 315.1 MB in 0.62s = 511.3 MB/s
@pavel Are you limited to OpenGL ES 2.0, or can you use a more modern version? ES 2.0 is very bare-bones in its image formats and shader capabilities, and efficiently converting 10 bpp will be a PITA due to the lack of a texelFetch function.
Anyway, I spent the day finding a nice polynomial to linearize the sensor values. (LUTs should be avoided if possible; memory access has latency and costs energy. If you can calculate it in a few instructions, prefer that.)
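The LUT-versus-polynomial trade-off can be sketched like this (the cubic coefficients here are made up for illustration, not the actual fitted polynomial):

```python
import numpy as np

# Made-up coefficients for a cubic linearization polynomial
# p(x) = c0 + c1*x + c2*x^2 + c3*x^3, as would be fitted offline
# to the sensor's response curve.
coeffs = (0.0, 0.9, 0.05, 0.05)

def linearize(x):
    """Evaluate the polynomial with Horner's method: three multiply-adds
    and no memory lookups (cheap in a shader, unlike a LUT fetch)."""
    c0, c1, c2, c3 = coeffs
    return ((c3 * x + c2) * x + c1) * x + c0

# The equivalent LUT approach precomputes the same values and indexes
# into them at runtime.
x = np.linspace(0.0, 1.0, 1024, dtype=np.float32)
lut = linearize(x)

# Both paths produce the same result for a sample on the LUT grid.
print(linearize(x[512]) == lut[512])  # True
```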
@pavel @datenwolf Current Mesa can do a bunch of GLES3 stuff already, including texelFetch, once you force it with MESA_GLES_VERSION_OVERRIDE.
I still have some issues with the linearization LUT. But if you want to get the basic gist of how I approach the whole de-Bayering, here's the code.