Posts: 4417
Following: 315
Followers: 470
Linux kernel hacker and maintainer etc.

OpenPGP: 3AB05486C7752FE1
@raggi I'm just trying to make sense of what is needed to even theoretically get something made with Rust into a defconfig. I don't specifically vote for ISO, but the current model seems broken too. Perhaps the Rust Foundation could host repositories just for the language standard and make the process email-based. That could also work.
0
0
0
@raggi Thanks for the ignorant comment ;-) I really learned from this.
1
0
0
@raggi E.g. C++ has a mailing list for submitting proposals for the standard and the associated discussion: https://lists.isocpp.org/mailman/listinfo.cgi/std-proposals, and a full site associated with the standard. It's also easy to backtrack language-related discussions thanks to the mailing list archives.

So it would be up to the Rust Foundation in the end how open they want the process to be.

I don't mind rustc being on GitHub, as it would be just an implementation of a language standard.
0
0
0

Jarkko Sakkinen

Edited 1 year ago
@raggi Well, nothing is perfect, but at least it is a company-agnostic entity with representatives from each country. GitHub is a single-vendor proprietary entity. And not being able to even submit a bug without having to create an account on a proprietary service is a problem. Finally, it would be weird if any kernel arch defconfig acquired a Rust feature before both gccrs and rustc can compile the kernel. I don't mind either GitHub or ISO, as long as there is a well-working way to keep the toolchains in sync.

Right now it does not matter that much, as gccrs is still heavily under development, but it would be good to consider this topic early on. Last time I checked gccrs, it was still lacking e.g. inline assembly...
1
0
0
@matzipan I want to raise red flags at least on topics that would not make sense for defconfig, so that we don't make wrong decisions in the kernel :-)

I might be confusing terminology, but the matter of keeping gccrs and rustc in sync is still relevant.
0
0
0
@matzipan Sorry, but I think I'll pass without reading :-) And beyond the kernel I don't really care about this issue, nor do I care what the Rust community does or doesn't do. I'm not part of that community. I'm looking at Rust as a tool that we are using.

I just listed the things that I think are absolutely required for defconfig maturity. It is pretty hard to imagine that Rust would ever be widely accepted in the kernel if gccrs does not catch up with rustc, and if that cross-compatibility is not maintained somehow.

And GitHub requires an account on a proprietary service, which makes the whole implementation of the process closed and proprietary.
1
0
0
Just one example of "semi-opensource": you have to create an account on a proprietary service (GitHub) to report a compiler bug. This means that if you don't want to use GitHub, you are excluded from the project, including reporting legitimate bugs.

If a language standard was in place, only rustc would be subject to the proprietary development process, but you would have the choice of not participating in it while still being involved, e.g. as part of the ISO process or by contributing to gccrs.
0
0
1

Jarkko Sakkinen

For #kernel it is critical to have gccrs features on par with rustc.

Until then, Rust-on-Linux is a toy feature at most.

IMHO, the language spec should be an ISO/IEC standard and not a "GitHub standard". This way the two toolchains would be easier to keep on par.

With the current infrastructure, Rust should really be renamed MS Rust ;-) It is a semi-open-source project controlled by MS infrastructure and the LLVM toolchain. An ISO standard would fix a lot here.

#rustlang #rust
3
2
4
@ethorsoe I use this when I have some possibly even unstaged changes and I don't want to go to the trouble of stashing them. Before, I used to copy the whole directory...
0
0
0
The most motivating situation is when you get to work on something you had zero clue about before, and there is a real schedule to bring enough pressure to create movement. Actually, sometimes you can move faster than someone with deep domain expertise, because you don't waste as much time considering various options...
0
0
0

Jarkko Sakkinen

Edited 1 year ago

When you find a bug while working on a feature branch and want to quickly do a fix without too much context switching, a worktree is useful:

git worktree add ~/work/linux-tpmdd-master master
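
In between, a minimal sketch of the flow (the fix branch name is just an example):

cd ~/work/linux-tpmdd-master
git checkout -b fix-branch    # hypothetical branch for the quick fix
# edit, build, test, then:
git commit -as
cd -                          # back to the feature branch worktree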

Then later:

git worktree remove linux-tpmdd-master 
1
0
0

Jarkko Sakkinen

Anyone tried out GNU Poke?
0
0
0

Jarkko Sakkinen

Edited 1 year ago
Have a few possible job options post-September, so things are looking quite good. Obviously nothing is closed given the 4-month window, but I think it was a good idea to knock on some doors now to raise awareness.

I guess my priority when picking a job is to get to do something outside the sec space, but otherwise, as long as it is kernel, anything works for me, because everything in that space is (still) interesting.

My first touch of Rust in the kernel is not to write code myself but to help get the existing ASN.1 code integrated behind an ASN1_RUST flag. I think learning the testing/QA process is the first thing to focus on in any area of the kernel, not writing code. Once you have edit-compile-run in place, everything comes so much easier...
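
For the edit-compile-run part, a minimal sketch of the loop, assuming an LLVM/Clang toolchain and a kernel tree with Rust support:

make LLVM=1 rustavailable     # sanity-check the Rust toolchain
make LLVM=1 defconfig
./scripts/config -e RUST      # enable CONFIG_RUST on top of defconfig
make LLVM=1 olddefconfig
make LLVM=1 -j$(nproc)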
1
0
3

Jarkko Sakkinen

Next version of #TPM2 asymmetric keys will also have ECDSA signatures. Almost got it ready during the weekend :-)

Should provide pretty good first coverage for https://datatracker.ietf.org/doc/draft-woodhouse-cert-best-practice/.

#linux #kernel #tpm #keys
0
0
0
Signal actually still defines the best possible framework for something that claims to be truly confidential, despite not implementing it fully:

1. Create a legal barrier with AGPL; this guarantees that the source code is unmodified.
2. Create a run-time barrier with SGX/SNP/TDX; this guarantees that the run-time is unmodified. Attestation needs to have an expiration time, somewhat like you need to expire a shared session key.

Signal implements (1) but lacks (2).
0
1
0

@moritz @malte @katexochen

Actually the value of remote attestation, and the price you pay for it, are related to how much control you have over the machines where you run your software.

If you run software on your local hardware or in a data center you control, then TPM2 by practical means does all you need for remote attestation.
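
As a concrete illustration, a minimal sketch of that TPM2 quote flow with tpm2-tools (the key file names, nonce and PCR selection are just examples):

tpm2_createek -c ek.ctx -G rsa -u ek.pub
tpm2_createak -C ek.ctx -c ak.ctx -u ak.pub -n ak.name
tpm2_quote -c ak.ctx -l sha256:0,2,4,7 -q abc123 -m quote.msg -s quote.sig -o quote.pcrs -g sha256
# verifier side: check the quote against the AK public key and the nonce
tpm2_checkquote -u ak.pub -m quote.msg -s quote.sig -f quote.pcrs -q abc123 -g sha256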

Confidential computing becomes beneficial when you run in the cloud and need to attest that, while the deployment is out of your control, it still runs unmodified and does the expected computation.

One corner-case example of this is Signal's contact discovery, which is claimed to be sealed by Intel SGX. This is a false marketing claim because:

  1. Signal controls its own data centers, so 3rd parties are not a high risk.
  2. Signal's source is kept unmodified by legal enforcement, given AGPLv3.
  3. Signal does not deliver the CPU attestation to the Signal app so that the app could verify it against the Intel CA. This should be done periodically.

This means that Signal can hold to AGPLv3, but they could still just emulate the SGX opcodes and do nothing at all. So objectively we can conclude that Signal does false marketing with SGX.

Remote attestation is worthless if:

  1. You don't need it.
  2. You spend money on using the wrong type of attestation in the wrong place without a proper risk analysis.

Confidential computing is literally broken because there are no developers. I still use a NUC7 from 2018 with a Celeron CPU equipped with SGX2. In that sense all remote attestation in that arena is broken, because there is no low-barrier way to develop anything on top of it…

1
1
0
Nothing makes high waves without a low-barrier developer ecosystem, including local machines that can run the payloads...
1
0
0

In the current state of the art, containers are the worst part of Linux. I always use VMs with libvirt instead, because I don't understand how the security boundary is defined.

And e.g. Docker exists as a commercial product mostly because of a failed container design in the kernel.
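
For reference, a minimal sketch of spinning up such a VM with libvirt's virt-install (the name, sizes and ISO path are just examples):

virt-install --name devbox --memory 4096 --vcpus 4 \
    --disk size=20 --cdrom ~/isos/distro.iso \
    --osinfo detect=on,require=off

This gives you a KVM guest whose security boundary is the hypervisor, rather than namespaces on a shared kernel.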
0
0
0