James Bottomley posted a new version of the #HMAC encryption patches for #TPM2: https://lore.kernel.org/linux-integrity/20231127190854.13310-1-James.Bottomley@HansenPartnership.com/T/#t
I spent some time refactoring the tpm_buf changes, because they were the major sticking point for me in the earlier versions, and those patches are now included in this series, which is of course great. The series should be ready for mainline inclusion sooner rather than later.
This builds on TPM2-sealed hard drive encryption by substantially mitigating bus interposers. An interposer is an actor that intercepts traffic between the CPU and a discrete TPM chip (i.e. not a firmware TPM).
A bus interposer can reset a TPM and replay PCRs, since the chip returns to its initial state on reset. To mitigate this, the kernel creates an HMAC session for each TPM transaction and derives the session key from the so-called null hierarchy, which essentially provides a new random seed per TPM reset.
This blunts the interposer's ability to reset the TPM: after a reset the kernel can no longer communicate with the TPM, so the malicious act can be detected indirectly, with far better odds than ever before.
IMHO, this fits quite nicely with the work that #OpenSUSE and #Ubuntu have been doing lately.
I switched to the #helix editor because, for me, its advantages outweigh the disadvantage of having to unlearn #vim shortcuts: no more init.lua (and that big pile of plugins). So for the price of a few weeks of inconvenience I can stop spending time on text editor configuration and on figuring out how to install it.
I used #vim, and later on Neovim, from 1998 to 2023, starting even before I used Linux: I switched to vim in MS-DOS from a text editor called #QEDIT :-)
I think I can fix up the lookup tables for escaping #zmodem traffic by intercepting traffic from lrzsz running with de facto parameters, i.e. by sending a large chunk of uniformly distributed random data. That way I can be sure that I have all the cases, including those not in the specification from 1988.
Later on I can use that as a CI test for my crate, a sort of fuzzing approach. I just need to record every byte that follows ASCII 24, and that builds my table.
embedded-io and embedded-io-async seem to be something I would want to use also in crates meant for std, just to future-proof portability. It is great that there is finally a solution to the std::io issue: for me, not being able to use std::io::Error in no_std code has been a huge turn-off. And since these crates are provided by the Rust HAL team, there is some trust in their long-term lifespan. Depending on third-party solutions for the basic error types and BufRead is a maintenance risk IMHO. This is good enough as far as I'm concerned.
So I made a pull request to the hex crate:
https://github.com/KokaKiwi/rust-hex/pull/83
This sort of scenario is not too uncommon in data flows, especially when you want to squeeze every bit out. E.g. for something like #ZMODEM you'd prefer the implementation to scale from a microcontroller up to an Intel Xeon.
Usage example in my crate:
if encoding == Encoding::ZHEX {
    hex::decode_in_slice(&mut out).or::<io::Error>(Err(ErrorKind::InvalidData.into()))?;
    out.truncate(out.len() / 2);
}
One thing that I had to dig up from previous #Enarx work was core::marker::PhantomData. It is not so well known, but it is a pretty important concept in Rust.
PhantomData is a zero-sized struct that merely acts as a lifetime indicator for another type parameter, usually a raw pointer in the structs where it is applied. It is used to implement many of the core structs, such as Rc, to name one instance.
It is a pretty good lesson in how lifetime parameters interact with the Rust compiler.
I'd even say that if you understand PhantomData, then you have a basic understanding of Rust, and if not, you still have a bit to learn. It is a building block that the whole core library rests on, after all.
All the crates that #Google has published for #Rust seem to be exactly the stuff I've been looking for to get better control of memory.
Especially zerocopy is a time saver, as it covers all the conceivable cases where I have previously reached for core::slice::from_raw_parts and spent a lot of time thinking through all the possible safety scenarios, such as this recent one:
impl<'a> From<&'a Header> for &'a [u8] {
    fn from(value: &'a Header) -> Self {
        // SAFETY: out-of-bounds access is not possible, given that the size
        // constraint exists in the struct definition. The explicit 'a on the
        // parameter links the lifetime of the header reference to the slice.
        unsafe { from_raw_parts((value as *const Header) as *const u8, size_of::<Header>()) }
    }
}
Previously I've had to do similar analysis in the #Enarx project. You can do these by hand, but it is nice to have a common crate that is tested by many people against these risky scenarios.
Another crate from Google worth mentioning is tinyvec, which I'm going to use in zmodem2 to remove internal heap usage.
Developing #zmodem2 is a nice history lesson:
style: cleanup and fix cosmetic stuff
1. This inherits from original `zmodem` crate: "ZLDE" is in-fact ZDLE,
an acronym of "ZMODEM Data Link Escape" character.
2. Fine-tune use statements.
Link: https://wiki.synchro.net/ref:zmodem
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@iki.fi>
That link in the commit message is a great source of information on #zmodem.
Converted the legacy hard-coded frame test cases to rstest in the #zmodem2 crate:
#[cfg(test)]
mod tests {
    use crate::frame::*;

    #[rstest::rstest]
    #[case(Encoding::ZBIN, Type::ZRQINIT, &[ZPAD, ZLDE, Encoding::ZBIN as u8, 0, 0, 0, 0, 0, 0, 0])]
    #[case(Encoding::ZBIN32, Type::ZRQINIT, &[ZPAD, ZLDE, Encoding::ZBIN32 as u8, 0, 0, 0, 0, 0, 29, 247, 34, 198])]
    fn test_header(
        #[case] encoding: Encoding,
        #[case] frame_type: Type,
        #[case] expected: &[u8],
    ) {
        let header = Header::new(encoding, frame_type);
        let mut packet = vec![];
        new_frame(&header, &mut packet);
        assert_eq!(packet, expected);
    }

    #[rstest::rstest]
    #[case(Encoding::ZBIN, Type::ZRQINIT, &[1, 1, 1, 1], &[ZPAD, ZLDE, Encoding::ZBIN as u8, 0, 1, 1, 1, 1, 98, 148])]
    #[case(Encoding::ZHEX, Type::ZRQINIT, &[1, 1, 1, 1], &[ZPAD, ZPAD, ZLDE, Encoding::ZHEX as u8, b'0', b'0', b'0', b'1', b'0', b'1', b'0', b'1', b'0', b'1', 54, 50, 57, 52, b'\r', b'\n', XON])]
    fn test_header_with_flags(
        #[case] encoding: Encoding,
        #[case] frame_type: Type,
        #[case] flags: &[u8; 4],
        #[case] expected: &[u8],
    ) {
        let header = Header::new(encoding, frame_type).flags(flags);
        let mut packet = vec![];
        new_frame(&header, &mut packet);
        assert_eq!(packet, expected);
    }
}
Refactoring the legacy code should be easier now, as there is less raw code in the tests that could be affected.
I hope I got this right (the safety property), i.e. that the references are enforced to have equal lifetimes:
impl<'a> From<&'a Header> for &'a [u8] {
    fn from(value: &'a Header) -> Self {
        // SAFETY: out-of-bounds access is not possible, given that the size
        // constraint exists in the struct definition. The explicit 'a on the
        // parameter links the lifetime of the header reference to the slice.
        unsafe { from_raw_parts((value as *const Header) as *const u8, size_of::<Header>()) }
    }
}