#teardown and #bootstrap gpg-agent and pcscd to get a working configuration:
#!/usr/bin/env bash
# Copyright (c) Jarkko Sakkinen 2024
# Bootstrap gpg-agent and pcscd for Yubikey use.
# NOTE: bash (not plain sh) is required because of the array below.
GPG_AGENT_SOCKETS=(gpg-agent-browser.socket
                   gpg-agent-extra.socket
                   gpg-agent-ssh.socket
                   gpg-agent.socket)
# Tear down all gpg-agent sockets, the agent itself and pcscd.
systemctl --user disable --now "${GPG_AGENT_SOCKETS[@]}"
gpgconf --kill gpg-agent
sudo systemctl disable --now pcscd.socket
# Bring back only what is needed for Yubikey use.
systemctl --user enable --now gpg-agent.socket gpg-agent-ssh.socket
sudo systemctl enable --now pcscd.socket
Why does Curve25519 use EdDSA for signing, while secp256r1 and secp256k1 use ECDSA?
It’s the scale. The Curve25519 field fits within 255 bits (p = 2^255 − 19), while the other two use primes that fill the full 256 bits (secp256r1: p = 2^256 − 2^224 + 2^192 + 2^96 − 1; secp256k1: p = 2^256 − 2^32 − 977).
So it follows: there is formal backing for this, but by pure common sense it is exactly like “if you lose some, you must gain some” ;-)
I realized that I have something profoundly broken in my asymmetric TPM2 key series: TPM2-specific keys should only sign, not verify.
struct public_key, which is the central structure used for built-in, vendor and machine owner keys, should be able to verify a signature even when the TPM chip is removed.
As a consequence, I will delete all the verification code from the key type(s) and set supported_ops to KEYCTL_SUPPORTS_SIGN, instead of the previous KEYCTL_SUPPORTS_SIGN | KEYCTL_SUPPORTS_VERIFY.
I’ll also rework the tests to have two asymmetric keys: one signing inside the chip and another outside it, holding only the public key. That should also verify more thoroughly that the feature works correctly.
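For a concrete picture of what that change means for userspace, here is a minimal sketch that queries a key’s supported_ops through the raw keyctl(2) syscall and checks that signing is advertised while verification is not. The key description "tpm2_test_key" is a hypothetical placeholder, and the key is assumed to already sit on the user keyring.

/* Minimal sketch: query supported_ops of an asymmetric kernel key.
 * Assumes a hypothetical key named "tpm2_test_key" has already been
 * loaded onto the user keyring. */
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/keyctl.h>

int main(void)
{
    struct keyctl_pkey_query q = { 0 };
    long key;

    /* Find the key by type and description on the user keyring. */
    key = syscall(__NR_keyctl, KEYCTL_SEARCH, KEY_SPEC_USER_KEYRING,
                  "asymmetric", "tpm2_test_key", 0);
    if (key < 0) {
        perror("KEYCTL_SEARCH");
        return 1;
    }

    /* KEYCTL_PKEY_QUERY: arg3 must be 0, arg4 is an info string. */
    if (syscall(__NR_keyctl, KEYCTL_PKEY_QUERY, key, 0, "", &q) < 0) {
        perror("KEYCTL_PKEY_QUERY");
        return 1;
    }

    printf("supported_ops: %#x\n", q.supported_ops);
    printf("sign:   %s\n", (q.supported_ops & KEYCTL_SUPPORTS_SIGN) ? "yes" : "no");
    printf("verify: %s\n", (q.supported_ops & KEYCTL_SUPPORTS_VERIFY) ? "yes" : "no");
    return 0;
}

With the series reworked as described, this should report sign: yes, verify: no for a TPM2-backed key.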
Right. I should take my TPM2 signing code, merge it into struct public_key, use only TPM2_Sign and ditch TPM2_RSA_Decrypt.
Just hit me out of the void. Then e.g. the builtin/secondary/machine keyrings, X.509 certificates etc. are also at the finish line once this feature lands :-)
Three weeks clocked on the development so far, so I think this is progressing at a good pace.
I’ll start a new series (because it is not the exact same feature as before).
Condolences.
Mike Karels of Berkeley Unix/BSDi died of a heart attack on his way home from BSDCan.
Karels was responsible for the TCP/IP implementation in BSD, whose design and sockets API live on in practically every network stack, Linux included. Since you're reading this, you are benefiting directly from his work.
RIP, Mike. We won't forget you.
Trying to deploy #systemd to a Buildroot build:
Filesystem found in kernel header but not in filesystems-gperf.gperf: BCACHEFS_SUPER_MAGIC
Filesystem found in kernel header but not in filesystems-gperf.gperf: PID_FS_MAGIC
I think I might know how to fix these, though, so it should not be an issue.
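If it is what I think it is, the fix is mechanical: systemd keeps its own guarded copies of the superblock magics plus a gperf lookup table. A sketch under that assumption; the constants below are the values in the upstream kernel's linux/magic.h, and the gperf entry format is copied from the neighboring entries in that file:

/* src/basic/missing_magic.h: guard the constants so that builds against
 * older kernel headers (like Buildroot's) still compile. Values are
 * from the upstream <linux/magic.h>. */
#ifndef BCACHEFS_SUPER_MAGIC
#define BCACHEFS_SUPER_MAGIC 0xca451a4e
#endif

#ifndef PID_FS_MAGIC
#define PID_FS_MAGIC 0x50494446
#endif

/* src/basic/filesystems-gperf.gperf: add lookup entries so the
 * filesystem-name mapping knows the new magics:
 *
 *     bcachefs, {BCACHEFS_SUPER_MAGIC}
 *     pidfs,    {PID_FS_MAGIC}
 */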
I had a QEMU-style build. I’m repealing and replacing that with a build that produces a 2 GB ESP/UEFI-compatible disk image. That can then be supplied to qemu/libvirt, or burned to a stick and booted on real hardware.
I wonder what the policy is on putting something written in #Rust into scripts/ (not into vmlinux), i.e. a build-time utility? Just curious.
And actually, since bindgen is installed from crates.io, not from the kernel tree, should such a utility actually be submitted there, and not to the kernel tree?
The kernel documentation gives a pretty weak rationale for bindgen coming from Cargo: “The bindings to the C side of the kernel are generated at build time using the bindgen tool. A particular version is required.” I’m sure there are good reasons to install it using cargo, but the documentation should list those reasons, no matter how obvious they might be to some.
So I guess I’ll put my build-time tool on crates.io: first because it is an experiment, and second because bindgen is managed like this. But even this does not conclude the story fully. I have no idea what license an out-of-tree build-time utility pulled at build time is expected to have. It is not documented, or at least I cannot find it documented anywhere.
Another thing that puzzles me about #Ethereum and #Swarm is why waste bandwidth and CPU cycles on #JSON when you could #ASN1 the transaction like:
Root ::= SEQUENCE {
    from     INTEGER,
    to       INTEGER,
    value    INTEGER,
    gas      INTEGER,
    gasPrice INTEGER,
    nonce    INTEGER,
    data     OCTET STRING,
    chainId  INTEGER
}
A pretty trivial scalability optimization IMHO. Maybe I’ll submit another talk just to say: hey, use ASN.1.
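To put a rough number on the bandwidth claim, here is a minimal sketch that hand-encodes that Root SEQUENCE as DER and compares the result against a JSON rendering of the same toy transaction. All field values are made up, and the addresses are truncated toy integers; a real implementation would use an ASN.1 compiler such as asn1c rather than hand-rolled encoding.

/* Minimal sketch: hand-rolled DER encoding of the Root SEQUENCE above,
 * just to compare encoded size against JSON. Toy values only. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Append a DER INTEGER (non-negative, 64-bit) at buf+off; return new off. */
static size_t der_uint(uint8_t *buf, size_t off, uint64_t v)
{
    uint8_t be[9];
    int n = 0;

    do {
        be[n++] = (uint8_t)(v & 0xff);
        v >>= 8;
    } while (v);
    if (be[n - 1] & 0x80)   /* keep it positive: prepend 0x00 if MSB set */
        be[n++] = 0x00;

    buf[off++] = 0x02;      /* INTEGER tag */
    buf[off++] = (uint8_t)n;
    while (n--)
        buf[off++] = be[n]; /* big-endian content octets */
    return off;
}

/* Append a DER OCTET STRING (short-form length, len < 128). */
static size_t der_octets(uint8_t *buf, size_t off, const uint8_t *p, size_t len)
{
    buf[off++] = 0x04;      /* OCTET STRING tag */
    buf[off++] = (uint8_t)len;
    memcpy(buf + off, p, len);
    return off + len;
}

int main(void)
{
    const uint8_t calldata[4] = { 0xde, 0xad, 0xbe, 0xef };
    uint8_t body[128], msg[130];
    size_t off = 0;

    /* Encode the fields of Root in order. */
    off = der_uint(body, off, 0x1234);    /* from (toy address) */
    off = der_uint(body, off, 0x5678);    /* to (toy address) */
    off = der_uint(body, off, 1000000);   /* value */
    off = der_uint(body, off, 21000);     /* gas */
    off = der_uint(body, off, 50);        /* gasPrice */
    off = der_uint(body, off, 7);         /* nonce */
    off = der_octets(body, off, calldata, sizeof(calldata)); /* data */
    off = der_uint(body, off, 1);         /* chainId */

    /* Wrap in the outer SEQUENCE (short-form length, contents < 128). */
    msg[0] = 0x30;
    msg[1] = (uint8_t)off;
    memcpy(msg + 2, body, off);

    const char *json =
        "{\"from\":\"0x1234\",\"to\":\"0x5678\",\"value\":1000000,"
        "\"gas\":21000,\"gasPrice\":50,\"nonce\":7,"
        "\"data\":\"0xdeadbeef\",\"chainId\":1}";

    printf("DER:  %zu bytes\n", off + 2);
    printf("JSON: %zu bytes\n", strlen(json));
    return 0;
}

On these toy values the DER blob comes out at roughly a third of the JSON string, and the gap only widens once real 160-bit addresses and 256-bit values are hex-encoded into JSON.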