@pavel the GPU stuff is entirely separate from the DNG stuff. for video recording the best way is most likely to just feed the debayered preview feed into either the hardware h264 encoder or a software one.
In theory the optimal path is setting the sensor to a YUV 4:2:0 mode and then feeding the frames from the sensor straight into the h264 hardware encoder using the DMA magic in the chip. I don't have a way to also display that feed so you can see what you're recording, though.
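A rough sketch of what that second path could look like from userspace, using ffmpeg's V4L2 memory-to-memory encoder. The device node, resolution, and pixel format here are assumptions; the real nodes and supported formats depend entirely on the platform and driver.

```python
# Hypothetical sketch: capture 4:2:0 frames from a V4L2 sensor node and
# hand them to the kernel's stateful V4L2 mem2mem h264 encoder via ffmpeg.
# /dev/video0, nv12, and 720p30 are assumed values, not a known-good setup.
capture_cmd = [
    "ffmpeg",
    "-f", "v4l2",                # V4L2 capture input
    "-input_format", "nv12",     # a common 4:2:0 layout encoders accept
    "-video_size", "1280x720",
    "-framerate", "30",
    "-i", "/dev/video0",         # assumed sensor node
    "-c:v", "h264_v4l2m2m",      # hardware h264 via the V4L2 mem2mem API
    "-b:v", "4M",
    "recording.mp4",
]
print(" ".join(capture_cmd))
```

The `h264_v4l2m2m` encoder is what ffmpeg exposes for the kernel's stateful encoder interface; whether the frames actually travel zero-copy via DMA depends on the driver wiring, not on this command line.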
@pavel the hw encoding is available in practically all HW; the whole Linux driver ecosystem is a whole other story though :(
I think the lens shading correction etc. would depend entirely on the ISP features available in the platform. To be honest, just having any video recording at all would be an improvement; that stuff would be a later worry
@pavel yeah the major issue here is that the CPU is not really fast enough to do any realtime video encoding, even without the load of actually dealing with the video capture. making the GPU do more work will make it worse, at least on the PinePhone, because it's all memory-bandwidth constrained anyway.
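Some back-of-envelope arithmetic on why the bandwidth matters (the figures are assumed: 720p YUYV at 30 fps, which matches the stream mentioned below):

```python
# One 720p YUYV frame is 1280*720 pixels at 2 bytes per pixel.
width, height, bytes_per_pixel, fps = 1280, 720, 2, 30
frame_bytes = width * height * bytes_per_pixel        # 1,843,200 bytes
stream_mb_s = frame_bytes * fps / 1e6                 # raw stream rate
print(f"{stream_mb_s:.1f} MB/s per pass over the frames")  # ~55.3 MB/s

# Every extra pass over the frames (debayer, GPU upload, colour
# correction, encoder input) costs roughly another 55 MB/s of the same
# shared DRAM bandwidth that the CPU, GPU and display are fighting over.
```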
@pavel yep, I've done that some years ago. did a livestream to YouTube from a PinePhone devkit at 720p on ultrafast and it could barely manage that.
But that's pumping YUYV frames straight from the kernel through ffmpeg to the web. That leaves zero CPU power left to actually run a preview on screen or correct anything
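A rough reconstruction of that kind of pipeline; the device node and ingest URL are placeholders, not the actual setup used back then:

```python
# Sketch: raw YUYV frames from the V4L2 driver, software x264 on the
# fastest preset, straight out as an FLV stream. All paths/URLs assumed.
stream_cmd = [
    "ffmpeg",
    "-f", "v4l2",
    "-input_format", "yuyv422",   # packed YUYV straight from the kernel
    "-video_size", "1280x720",
    "-i", "/dev/video0",          # assumed capture node
    "-c:v", "libx264",
    "-preset", "ultrafast",       # the only preset the CPU can keep up with
    "-pix_fmt", "yuv420p",        # x264 wants planar 4:2:0, adds a convert pass
    "-f", "flv",
    "rtmp://example.invalid/live/streamkey",  # placeholder ingest URL
]
print(" ".join(stream_cmd))
```

Note the `yuyv422` to `yuv420p` conversion is itself another full pass over every frame, which is part of why nothing is left over for a preview.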
@pavel hmm, I have no idea. That's the kind of kernel voodoo I'd normally ask megi about