Conversation
To build Megapixels on #postmarketos, do sudo apk add mg g++ libconfig-dev git meson ninja-build libraw-dev libexif-dev

sudo apk add opencv-dev
# sudo apk add scdoc -- not really needed

sudo apk add gtk4.0-dev
sudo apk install feedbackd-dev
sudo apk add zbar-dev
sudo apk add pulseaudio-dev

...then proceed as usual.

@pavel "apk install" is not a valid command, and you can combine all "apk add" invocations into one. Also, megapixels is packaged already, so why not just "apk add megapixels"?

@bart Yep, thanks for noticing.

I need to build megapixels from source, because I'm trying to get it working on the OnePlus 6. So far, I got ./megapixels-getframe to crash the kernel; hopefully it did not damage the git repositories this time.
@bart Plus, I really need Megapixels 2, and AFAICT the packaged version is 1.x.

@pavel Ah yes, 2.0 is still in beta it seems; it'll appear in our repos once it reaches a stable state.

@bart @martijnbraam Could we get megapixels 2, pretty please? :-). Changes from v1 are pretty significant, and I have some work that would be best done post-release...

@pavel @bart well, the main issue with megapixels 2 is that if you build it for Alpine it won't work, because you can't make an OpenGL context in that GTK version the way megapixels is making it; otherwise it's pretty much ready

@martijnbraam @bart postmarketOS still ships gtk-3; could we simply use that?

@pavel @bart no, not unless megapixels is rewritten again in gtk3...

I'd rather strip out the gtk part altogether, since I barely use any of the GTK stuff in it

@martijnbraam @bart I assume gtk-3 is quite similar to gtk-4? But you are right, something like SDL would also make sense.

But that also will not be a small change, so perhaps it should not be done between alpha and release? Maybe release version 2 with "either use old gtk-4 with this or avoid it on Librem 5 etc.", and then start another rewrite in megapixels-3?

https://blog.brixit.nl/megapixels-2-0-progress/

@pavel @bart yeah, I already have a project folder called gopixels and have written some bindings. I guess there's no real reason not to tag an actual release; it's still up to distributions to figure out if/how/when to distribute it

@martijnbraam @bart Yes, I believe doing a release now with known limitations is the best solution.

Can I somehow convince you to mv gopixels rustpixels? :-)

@pavel @bart nah, I rather enjoy writing it in Go actually :P The goroutines are also perfect to replace all the threading stuff that's now done in the app


@pavel @bart Megapixels 2.0.0 is out now.

and libdng 0.2.2 and libmegapixels 0.2.3

@martijnbraam @bart

If you are doing a big redesign... here's how to do video recording.

# Design

ARM has some limitations with DMA, which result in the data in the DMABUF
being uncached. That is a problem in both the V4L->CPU and GPU->CPU
directions, but we can still get good video recording on existing
hardware. Here is how to do it.

Put the sensor in a 4 MPix or so mode, and capture to a DMABUF.

Reading the whole DMABUF from the CPU is pretty slow, but you can sample
the data to do AF/AE. You can also set a buffer aside for future reading,
and thus take high-resolution raw photos during video recording.

The GPU can work with uncached memory without penalty, and debayer with
downscale is a pretty simple operation. It also provides better results
than downscaling in the sensor followed by debayering.

The GPU can multiply by a color conversion matrix easily, and conversion
to a YUV format is simple to do. Using a subsampled color mode such as
YUY2 enables faster movie encoding and reduces the amount of data
transferred GPU->CPU. As that path is uncached and slow, YUV is a big win.

Getting data from GPU->CPU is slow and tricky. You probably want the GPU
to compute on frame n while a CPU thread is copying frame n-1 to CPU
memory.

The rest is simple. You can use gstreamer to both encode the movie and
provide the preview, or you can do another GPU pass for the preview data.
The above can get 0.8 MPix, 30 fps recording on the Librem 5.

@pavel @bart why have gstreamer in the preview pipeline in any case? just use the data that's already in gpu memory and display it at that point without going back to the CPU.

The separate thread for copying is how megapixels already gets frames from v4l at the moment, and with Go it will be quite simple to do that everywhere.

@martijnbraam @bart gstreamer is optional for preview. But you really need it for encoding, and so using it for preview is easy.

The main message from that document was that frames should be copied not from v4l but from the GPU (after downscale and probably YUV conversion), because that's the way to do movie recording.