@pavel @datenwolf @dcz @martijnbraam
On the OnePlus 6 we have a faster CPU (sdm845).
Rear camera max resolution: 2560x1600
CPU debayering works fine, but for now we can't use hardware-accelerated video encoding.
As a workaround, we downscale video capture to 640x. Then video encoding works fine (check my Mastodon for examples).
I've made a custom camera app with video downscaling and QR code detection via the ZBar library.
@pavel @datenwolf @dcz @martijnbraam Yeah, Megapixels is a great app, but I want to improve PipeWire integration. Btw, some of the performance decrease is due to PipeWire (because of a buffer copy, I guess).
@pavel @datenwolf @dcz @martijnbraam @NekoCWD I played with it over the last few days and I already have 526x390 30 FPS encoding with live viewfinder, perfectly synced audio, color correction, lens shading correction, tone mapping, AWB and AE - consuming about 40% CPU. It still needs chromatic shading correction and AF, and I started experimenting with enabling PDAF.
I can also make the sensor output 60 FPS and RAW10. Patches incoming ;) Still only 2 MIPI lanes though, so no 13MP 30FPS yet.
BTW, recording for 30 minutes with the screen off ate about 15% of the battery, so 3 hours of recording should be possible.
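The battery estimate above can be sanity-checked with quick back-of-the-envelope arithmetic (numbers from the post; the linear-drain extrapolation is an assumption):

```python
# 30 minutes of recording used ~15% of the battery (measured above).
drain_per_minute = 15 / 30            # percent per minute
total_minutes = 100 / drain_per_minute
print(total_minutes / 60)             # ≈ 3.3 hours, assuming linear drain
```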
I think both left- and right-masked PDAF pixels are accessible in 1:2 mode with binning disabled, though I haven't done the math yet to be 100% sure. Enabling/disabling it on demand will be somewhat clunky though. I can also read calibration data from OTP, but AFAIK there are no kernel APIs to expose that to userspace :(
https://source.puri.sm/Librem5/linux/-/issues/411#note_285719
Note that the picture is shifted down-left a bit because the driver was misconfiguring the output parameters to include dummy pixels - fixed that too, along with several out-of-range clocks 👻
@pavel @datenwolf @dcz @martijnbraam @NekoCWD It's 526x390, but properly binned (each channel's sample is the average of 4 raw pixels), which reduces noise. The shader got heavy though; it only does ~35 FPS at this resolution, but there should be room for optimization. I've been more concerned with its correctness than its performance so far.
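To illustrate what "properly binned" means here, a naive pure-Python sketch (my own hypothetical helper, not the actual shader): each output Bayer sample averages the 4 nearest same-color raw pixels, halving the resolution while reducing noise.

```python
def bin2x2_bayer(raw, width, height):
    """2x2-bin a flat Bayer mosaic (e.g. RGGB) while keeping the CFA pattern.

    Each output sample is the average of the 4 nearest same-color input
    pixels; same-color pixels sit 2 apart in the mosaic.
    """
    out_w, out_h = width // 2, height // 2
    out = [0] * (out_w * out_h)
    for oy in range(out_h):
        py = oy % 2            # color-row parity within the 2x2 CFA tile
        cy = oy // 2           # position within that color plane
        for ox in range(out_w):
            px = ox % 2
            cx = ox // 2
            acc = 0
            for dy in (0, 1):
                for dx in (0, 1):
                    ix = 2 * (2 * cx + dx) + px
                    iy = 2 * (2 * cy + dy) + py
                    acc += raw[iy * width + ix]
            out[oy * out_w + ox] = acc // 4
    return out
```

Averaging 4 samples roughly halves the random noise amplitude, which is why binned 526x390 looks cleaner than a plain decimated crop would.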
Stats counting on the CPU with NEON is fast enough for full frame with some subsampling.
I'm giving it some finishing touches and will then publish it of course 😁
@pavel @datenwolf @dcz @martijnbraam @NekoCWD I'd expect moving binning to a separate pre-pass to make it faster, we'll see.
Also, my stats are center-weighted. Millipixels annoyed me with its reluctance to overexpose the sky 😄
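For illustration, a minimal pure-Python sketch of what center-weighted stats with subsampling might look like (a hypothetical helper, not the actual NEON implementation; the triangular falloff is my own assumption):

```python
def center_weighted_mean(frame, width, height, step=4):
    """Mean brightness of a flat grayscale frame, weighted toward the center.

    Samples the frame on a sparse grid (every `step` pixels) and weights
    each sample by a triangular falloff: 1.0 at the center, ~0 at the
    corners, so the middle of the frame dominates the exposure metric.
    """
    cx, cy = width / 2, height / 2
    total = weight_sum = 0.0
    for y in range(0, height, step):
        for x in range(0, width, step):
            w = (1.0 - abs(x - cx) / cx) * (1.0 - abs(y - cy) / cy)
            total += w * frame[y * width + x]
            weight_sum += w
    return total / weight_sum
```

With weighting like this, a bright sky at the top of the frame barely moves the metric, so AE happily overexposes it in favor of a well-lit subject in the center.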
@dos @pavel @datenwolf @dcz @martijnbraam @NekoCWD I'm an absolute noob at all this, but I have a very naive question: how come older Android smartphones can do the same thing at bigger resolutions on older chips? If I compare with an old Samsung Galaxy S3, it did all this very easily. Is there some secret proprietary sauce to it, with specialized closed-source firmware? Or does the Librem 5 just have exotic hardware?
@lord @datenwolf @dcz @martijnbraam @NekoCWD @pavel Specialized hardware. Phones usually don't have to do image processing or video encoding on the CPU or GPU at all; they have hardware ISPs and encoders. The L5 does not.
On other hardware there's also the matter of whether there's driver and software support for it, so Android may be able to use it, but pmOS not necessarily.
@dos @datenwolf @dcz @martijnbraam @NekoCWD @pavel OK, so in the case of the L5 it really is due to some "missing" hardware, and not just closed-source firmware. Good to know :-)
@pavel @datenwolf @dcz @martijnbraam @NekoCWD I'm just using waylandsink at this point, but it could be passed anywhere else. That's literally the least interesting part of the thing 😂
@pavel @datenwolf @dcz @martijnbraam @NekoCWD By the way, the actual frame rate limits (verified to work) are 120 FPS at 1052x780, 60 FPS at 2104x1560, and 20 FPS (8-bit) or 15 FPS (10-bit) at 4208x3120. Full res could go up to 30 FPS, but only once we get the 4-lane configuration to work.
I can even produce 120 FPS video, although right now it only keeps up without skipping frames once downscaled to 127x93 😂
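The lane-count limitation mentioned earlier can be illustrated with quick bandwidth arithmetic (the per-lane rate below is an assumed typical MIPI D-PHY figure, not a measured one, and protocol overhead is ignored):

```python
# Raw sensor bandwidth needed for full-resolution 30 FPS RAW10 capture:
width, height, fps, bits = 4208, 3120, 30, 10
needed = width * height * fps * bits      # bits per second
print(needed / 1e9)                       # ≈ 3.94 Gbit/s total

# Split across CSI lanes:
per_lane_2 = needed / 2 / 1e9             # ≈ 1.97 Gbit/s per lane
per_lane_4 = needed / 4 / 1e9             # ≈ 0.98 Gbit/s per lane
# Assuming a typical ~1.5 Gbit/s D-PHY lane rate, 2 lanes fall short
# while 4 lanes have headroom - hence no 13MP 30FPS until 4-lane works.
```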
@pavel There's plenty of apps that embed GStreamer's output to look at, and you can even skip it completely and simply import the V4L buffer into SDL's GL context without creating your own at all. This is just gluing things together at this point.
@pavel Of course. The driver doesn't handle that right now, but it's just some straightforward plumbing. Good entry level task.
@pavel Doing it via GStreamer makes buffer queue management easier, but of course it can be done either way. With SDL you already get a GL context, so you just do it the way I showed you some time ago - skip the context creation part and you're done.