Linux kernel hacker and maintainer etc.

OpenPGP: 3AB05486C7752FE1
@BryanBennett Also, anything deployed to the cloud has this separation. So you can sort of scientifically/formally say that 99% of the time unit tests are run on a contaminated test subject. It is not really an opinion but the actual reality.
0
0
0
I haven't really made anything in my professional career that does not require a separate host and target device.

I don't work in the IoT space either; it was just an example.
1
0
0
@rolle I'll probably give Bluesky a try if I ever get an invite. I'd try Threads just for fun if it didn't require an Instagram account :-)
1
0
1
@BryanBennett @orva To put this in perspective, all of the testing methodology was invented in a world where embedded systems were essentially something like an Intel 8051 with a custom piece of control code written in assembly and no operating system whatsoever.
0
0
0

Jarkko Sakkinen

Edited 1 year ago
@BryanBennett @orva Speaking of full-stack, consider some IoT device, let's say a smart light bulb with a Bluetooth speaker and a webcam for the sake of example.

We can run integration tests designed for the product right now, but essentially a major portion of all possible unit tests written for the components that make up its software stack have zero value. Sure, you can run them on your development system, but that has much less value than running them on the hardware where they are deployed. If unit tests were detachable and deployable, you could run (a) your integration test suite and (b) all possible tests written for each individual component. That would be a measurable improvement e.g. to the product's security metrics.
1
0
0
@BryanBennett @orva All I'm saying is that it is a repeated by-design mistake not to make all layers of tests detachable. It was persistent in autotools, and the same bad design is repeated e.g. by cargo.

E.g. the kernel's test suite, kselftest, is a counter-example of this: it can be run in-tree but can also be packaged and deployed to the target device. So it is not a law of nature but more like a bad engineering practice that is common enough to have become a tradition. Just criticising the obvious I guess...

Usually in everything I do there is a cross-compiler involved, so even for unit tests there is a hard border between the world where testing happens and the host system; it is more like a law of nature.
1
0
0
@orva How do you build a BuildRoot/Yocto (or whatever) image without Rust support and deploy the test suite as a package? I.e. you are testing on target and do not have the Rust environment there at all.

E.g. just because of this pattern I sometimes need to put GNU make into a test image, because test frameworks tend to rely on toolchains.
2
0
0

Jarkko Sakkinen

Edited 1 year ago
This is not only a #rustlang problem with Rust bindings but also an issue with e.g. #Python bindings: you cannot really use them to do any QA for the upstream project, unless they are maintained by that project.

The reason being that they are not guaranteed to be in sync with upstream changes.
0
0
0

Jarkko Sakkinen

Edited 1 year ago

Most #autotools-based open source software is sort of an anti-pattern for QA/CI because the test suite is hard-bound to the source project. This is the reason why I rarely (or almost never) run the TPM2 TSS test suite.

I wonder if #rustlang continues to follow this anti-pattern, or is there something like cargo install for the tests?

It is the sort of thing that has always been bad for anything with a disjoint host and target system, but is part of the “craftsmanship” because things have been done that way for long enough :-)

1
0
0

Jarkko Sakkinen

Edited 1 year ago

I think I will aim at building an OS image per CI cycle for keyutils. This guarantees a kernel with the configuration options that provide maximum coverage.

For .gitlab-ci.yml I guess it makes sense to then just limit it to the master branch (roughly as sketched below), i.e. review is manual but red flags will rise if the reviewer was sloppy :-)
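A minimal sketch of what that could look like in .gitlab-ci.yml; the job name and the image build script are assumptions, not from an actual config:

build-image:
  stage: build
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
  script:
    - ./build-os-image.sh   # hypothetical script that produces the OS image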

0
0
0

Jarkko Sakkinen

Edited 1 year ago

For both the integration tests of my #ZMODEM crate and for the keyutils GitLab #CI I’ve been looking for a solution to implement transparent serial file transfer.

#QEMU trivially allows converting the serial port to a UNIX domain socket. That is not natively supported by sz, but with a little bit of socat magic it can apparently be converted back to a PTY quite easily:

socat -d UNIX-CONNECT:output/images/serial.sock  PTY,raw,echo=0,link=output/images/ptyC0

This allows dropping SSH support completely from the BuildRoot config, which makes it much more appealing for automated CI.
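For the record, a rough sketch of how the actual transfer could then be driven over that PTY, assuming rz is already waiting on the guest console (the file name is just an example):

sz -b some-file < output/images/ptyC0 > output/images/ptyC0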

0
0
0
I actually ended up putting both serial and monitor as sockets along the lines of:
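Roughly like the following; the exact flags are my reconstruction rather than a quote from the real invocation:

qemu-system-x86_64 ... \
    -serial unix:output/images/serial.sock,server,nowait \
    -monitor unix:output/images/monitor.sock,server,nowait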
0
0
0

Jarkko Sakkinen

Edited 1 year ago

For using #QEMU in #CI, generating an ephemeral #SSH key pair is one option, but after playing with that option for a while I realised that you can create named pipes:

mkfifo tty.{in,out}

And then pass -serial pipe:tty to QEMU. After this, commands can be emitted to tty.in and the output can be read from tty.out.

I think this is quite a good strategy when having to orchestrate headless QEMU instances without any high-level infrastructure, such as libvirt.
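A minimal end-to-end sketch of the idea after the mkfifo above; the kernel, rootfs and console arguments are placeholders:

qemu-system-x86_64 -display none -serial pipe:tty \
    -kernel bzImage -initrd rootfs.cpio.gz -append 'console=ttyS0' &
cat tty.out &               # stream guest console output
echo 'uname -r' > tty.in    # emit a command to the guest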

1
0
4

@peterkorsgaard The benefit of all this is sort of niche but still important: most of the testing of kernel patches in linux-integrity could then be done with the upstream BuildRoot QEMU and UEFI targets, only changing an option or few in the config and sometimes using LINUX_OVERRIDE_SRCDIR for in-development stuff.
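E.g. in BuildRoot’s local.mk, with a placeholder path to the kernel tree:

LINUX_OVERRIDE_SRCDIR = $(HOME)/src/linux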

0
1
1

Jarkko Sakkinen

Edited 1 year ago

@peterkorsgaard Or actually they could also be there when swtpm is installed as a system package (via a command -v swtpm check).
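Something along these lines; the fallback path to the packaged swtpm is an assumption:

if command -v swtpm >/dev/null 2>&1; then
    SWTPM=swtpm
else
    SWTPM="$(dirname "$0")/../host/bin/swtpm"
fi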

1
0
0

@peterkorsgaard When upstreaming I’ll probably also want to update start-qemu.sh.in to use getopt, for the sake of having easy-to-comprehend --tpm-version=<1,2> --tpm-device <tis,crb> parameters (when swtpm is enabled for the host).
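A rough sketch of that kind of getopt-based parsing; only the option names come from the above, the rest is an assumption:

opts=$(getopt -o '' --long tpm-version:,tpm-device: -- "$@") || exit 1
eval set -- "$opts"
while true; do
    case "$1" in
        --tpm-version) tpm_version="$2"; shift 2 ;;
        --tpm-device)  tpm_device="$2"; shift 2 ;;
        --) shift; break ;;
    esac
done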

1
0
0
@peterkorsgaard Yes, eventually! I'll just let it mature a bit in here before doing that :-)
1
0
0

The GIF animation was generated with asciinema and agg.

1
0
0

Jarkko Sakkinen

Edited 1 year ago

I packaged swtpm for the #QEMU build so it does not have to be installed on the system:

https://github.com/jarkkojs/tpmdd-buildroot-external

start-qemu.sh will automatically set up the shenanigans so that swtpm works as the TPM emulation host for QEMU.

After the build there are three options:

  1. TPM2 TIS/FIFO: output/build/images/start-qemu.sh
  2. TPM2 TIS/CRB: output/build/images/start-qemu.sh --tpm-crb
  3. TPM1 TIS/FIFO: output/build/images/start-qemu.sh --tpm1

Right, and QEMU does not need to be installed on the host either. I’m trying to construct this in a way that it becomes as CI-friendly as possible, so that it could additionally be used as a CI workload for keyutils.
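For reference, the usual way to wire swtpm into QEMU, which is roughly what the script automates (paths are placeholders and the script’s exact flags may differ):

swtpm socket --tpm2 --tpmstate dir=tpmstate \
    --ctrl type=unixio,path=swtpm.sock &
qemu-system-x86_64 ... \
    -chardev socket,id=chrtpm,path=swtpm.sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -device tpm-tis,tpmdev=tpm0     # or tpm-crb with --tpm-crb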

#BuildRoot #linux #kernel #tpm

1
1
2