Conversation

for right now.

Computers need to convert those Bayer or YUV streams to RGB quickly to present them!
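For context, the per-pixel YUV→RGB math looks roughly like this. This is a minimal sketch assuming BT.601 full-range coefficients; real streams may use BT.709 or limited-range encoding instead, so the constants are an assumption:

```python
def yuv_to_rgb(y, u, v):
    """Convert one 8-bit BT.601 full-range YUV sample to 8-bit RGB.
    Chroma (u, v) is centered on 128; results are clamped to 0..255."""
    d = u - 128
    e = v - 128
    r = y + 1.402 * e
    g = y - 0.344136 * d - 0.714136 * e
    b = y + 1.772 * d
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(r), clamp(g), clamp(b)
```

Doing this per pixel on the CPU is exactly the kind of embarrassingly parallel work that a fragment shader handles well.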

I'm just converting my old code, but it was enough to remind me why I hate the OpenGL + GLSL combo. APIs with implicit global state can get lost.

Thankfully, I'm nearly done and can go back to shaders and algorithms :-) but using GLES 1.2 (the L5 can't do anything better ;_; )


@dcz I think you meant GLES 2.0 🤔


@dos I actually meant GLSL 1.2, but yes, I didn't check GLES version :)
Either way, a world of pain because tutorials for stuff that old are disappearing from the 'net and understanding it from the reference and first principles is a nightmare.

@dcz Interested! :-). What I wanted to know... does the L5 have enough bandwidth to debayer at ~1 Mpix resolution and pass the RGB data back to the CPU? That would a) be a useful example for libcamera and b) maybe enable recording of long videos.

@pavel We did it on the CPU entirely for the preview and got real-time FPS out of it (I can record a video tomorrow) but I recall the problem with storing video was the eMMC speed and video encoding overhead.

Granted, video recording was done with hacky scripts, so maybe there's more power than expected.

Anyway, I'm going to publish what I have this week - so far, UVC cameras. I still need to write the YUV decoder and connect it to the demo app.


@pavel Also, you get bonus points if you figure out how to sum pixels in a GLES 2.0 shader. That would be useful in the future for a statistics shader.

So far, I haven't found anything.

@dcz I'm not really a shader expert. But you could sum... let's say 16 adjacent pixels in the shader, and then compute the final sum on the CPU, no?

@pavel I am coming to that same conclusion. With 4096 pixels at a time, I can do it in 2 runs for a 15MPix image.
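Since GLES 2.0 has no compute shaders or atomics, this multi-pass reduction is the usual workaround: each pass renders to a smaller texture where every output pixel sums a block of input texels. The arithmetic can be sketched in plain Python (not shader code; the 4096 block size is taken from the post above):

```python
def reduce_sum(values, block):
    """One GPU-style reduction pass: each output element is the sum of
    one `block`-sized chunk of the input, like a fragment shader where
    each output pixel accumulates `block` input texels."""
    return [sum(values[i:i + block]) for i in range(0, len(values), block)]

# With block = 4096, a 15 Mpix image shrinks to
# ceil(15_000_000 / 4096) = 3663 partial sums after pass 1,
# and pass 2 sums those 3663 values within a single block.
# Small demonstration with block = 4:
data = list(range(10))
partial = reduce_sum(data, 4)        # [0+1+2+3, 4+5+6+7, 8+9]
total = reduce_sum(partial, 4096)[0]
```

In an actual shader each "chunk" would be a 2-D tile fetched with texture reads, but the pass count works out the same way.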
