Conversation

So, kids, what's the moral of the XZ story?

If you're going to backdoor something, make sure that your changes don't impact its performance. Nobody cares about security - but if your backdoor makes the thing half a second slower, some nerd is going to dig it up.

@bontchev

I'm face-palming that we didn't dig into the weird valgrind errors more. In hindsight, everything around 5.6.1 should have set off alarm bells.

@leeloo @ixs @mattdm @bontchev

This. Some problems will be hard to solve, such as maintainer burnout, or reviewing code under a state-actor attack. But some fixes should be super easy and immediate - like making sure that the content of a tarball is identical to the content of the git tag, or not patching sshd and introducing extra dependencies, or not giving maintainer permissions to someone we haven't seen face to face...
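The tarball-vs-tag check above is mechanical enough to sketch. The following is an editor's illustration, not an existing tool: it compares per-file SHA-256 digests of a release tarball against a checkout of the corresponding git tag. The function names and the stripped leading directory are assumptions about the usual `project-x.y.z/` tarball layout.

```python
import hashlib
import os
import tarfile

def tree_digests(root):
    """Map relative path -> SHA-256 digest for every regular file under root."""
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as fh:
                digests[rel] = hashlib.sha256(fh.read()).hexdigest()
    return digests

def tarball_digests(tar_path, strip_components=1):
    """Same mapping for a release tarball, stripping the leading
    'project-x.y.z/' directory that upstream tarballs usually carry."""
    digests = {}
    with tarfile.open(tar_path) as tar:
        for member in tar:
            if not member.isfile():
                continue
            parts = member.name.split("/")[strip_components:]
            if not parts:
                continue
            data = tar.extractfile(member).read()
            digests[os.path.join(*parts)] = hashlib.sha256(data).hexdigest()
    return digests

def tarball_matches_tree(tar_path, checkout_dir):
    """True iff the tarball and the tag checkout ship identical file contents."""
    return tarball_digests(tar_path) == tree_digests(checkout_dir)
```

Because it compares whole mappings, a file present only in the tarball (a smuggled `build-to-host.m4`, say - the actual xz vector) makes the check fail, not just a modified file.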

@chebra @leeloo @ixs @bontchev

> like making sure that the content of a tarball is identical to the content of the git tag

It is often the case that released tarballs are not identical to the content of a git tag or commit.

Distros have long treated the official tarballs produced by upstreams as "golden".

Doing otherwise introduces a tension — upstream developers may feel we're not shipping their "finished" meant-for-release code. (See the whole "stop packaging my software" thing.)

@systemalias @ixs

the "upstream maintainer" thing wouldn't have helped here, though....

@mattdm @bontchev You're right, but you're talking with hindsight.
The question is not how you would react with the knowledge you have now; the question is whether things looked weird with the knowledge you had right then and there.

Were the valgrind errors weird? Sure they were.

But there was a responsive upstream, the code was in a "weird" place such as an ifunc resolver, compiler optimizations were seemingly involved, etc.
The maintainer was responsive, on top of things, etc.

The insidious thing is, this is exactly how you need the open source community to react for the whole thing to work.
If we are super suspicious about every contribution, need four people to sign off on every commit etc., would we really have that landscape of amazing open source stuff we have?

I think assuming good intentions is important and should not be given up, even with the experience of being betrayed by a bad actor. Imagine how bad the original maintainer of xz must feel in that situation.

Instead of beating us up over not seeing things that were not obvious, we should approach this better:

Were there clear signs that we missed?

Were there things that gave us a hunch of what was going on? Why did we ignore them?

And on the technical side, I guess we could invest some effort in validating that the tarballs we build from actually correspond to the source code we're using.
Fedora already does a lot of things right, e.g. we're not using the shipped configure but we're running aclocal and all the other stuff ourselves.
Improving on that is probably going to get us further than starting to question contributors or contributions. We might not only catch a few backdoor attempts but also a few accidental maintainer fuckups. And that would benefit the distro in every case.

@ixs @mattdm @bontchev
"If we are super suspicious about every contribution, need four people to sign off on every commit etc., would we really have that landscape of amazing open source stuff we have?"

We could be super suspicious for security sensitive stuff and not for everything else.

In fact, that is how things were supposed to work; the OpenBSD people (who develop OpenSSH) are exactly like that. And this problem only happened because some distros went ahead and applied a patch that never went through the audit process.

@ixs @mattdm @bontchev

It seems upstream maintainers should follow, even more strictly, the guidance I give to my teams...

If you don't understand a change - even one in a dependency - don't merge it.

Trouble is, most just assume their lib dependencies are well tested by upstream before release, and this should theoretically be sufficient.

Heuristic testing by measuring the call stack depth and process tree depth might help.
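The stack-depth part of that heuristic can at least be illustrated. This is a toy Python sketch, not a real detector (a real one would watch native stacks and process trees, and `depth_guard` is a made-up name for illustration):

```python
import sys

def stack_depth():
    """Count the frames currently on the Python call stack."""
    depth = 0
    frame = sys._getframe()
    while frame is not None:
        depth += 1
        frame = frame.f_back
    return depth

def depth_guard(limit):
    """Decorator that flags any call made from a suspiciously deep stack."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if stack_depth() > limit:
                raise RuntimeError(
                    f"{fn.__name__} called at stack depth > {limit}")
            return fn(*args, **kwargs)
        return inner
    return wrap
```

Picking `limit` is the hard part: too low and legitimate recursion trips it, too high and it sees nothing - which is exactly the false-positive question raised later in this thread.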

@leeloo @mattdm @bontchev Good advice. Being super critical about all security relevant things is a really smart idea.

In theory.

In practice, it does not work.

The specific backdoor we're dealing with here went out of its way to ensure that it was *not* placed in security-relevant code.
While the security-conscious developers were suspiciously eyeing every commit that went into ssh or openssl, the exploitation happened on the other side of the room, in some random compression library nobody cares about.

Suddenly, that unimportant library has become security critical and nobody noticed.

So if you want to be super suspicious for security sensitive stuff, you need to be super suspicious for every commit. Literally *EVERY* *SINGLE* commit *EVERYWHERE*.

That is not feasible.

And as an aside: OpenBSD has an impressive security track record, no doubt. I think everyone would agree, hands down.
But the reality is that OpenBSD has not enjoyed the success in the marketplace that Linux has. Probably in large part due to the much lower velocity...

So no, I do not see your suggestion fly at all. And I say that as someone working in the infosec area...

@ixs @mattdm @bontchev
You have it backwards. Someone wrote a patch that pulled in a random compression library and merged it - that's when that code became security critical and should have been treated as such. And it would have been, if they had gone through the people who care about security (or the patch would have been flat out rejected).

Then someone noticed that a random library had become security critical without any oversight and decided to use the already existing security lapse to introduce a backdoor.

The problem was entirely preventable. In fact it did not affect any OS or distro that does not merge random patches with no security audit.

@leeloo @mattdm @bontchev I would recommend you to re-read https://www.openwall.com/lists/oss-security/2024/03/29/4 and follow the exploit chain.

Nobody added xz to opensshd.

Multiple distros added libsystemd to opensshd for sd_notify() support.

libsystemd uses xz for totally legit reasons.

And that's the connection: how a malicious actor managed to backdoor opensshd indirectly.

And here's the sad thing: libsystemd was unnecessary. This could have been done in a better way by just implementing the sd_notify() wire protocol. But from what I heard, there are multiple other ways xz can be pulled into opensshd indirectly.
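For reference, the sd_notify() wire protocol mentioned here really is tiny: a datagram containing lines like "READY=1", sent to the unix socket named in $NOTIFY_SOCKET. A minimal sketch (error handling and the less common status fields omitted):

```python
import os
import socket

def sd_notify(state):
    """Send a notification string such as "READY=1" to the service
    manager's socket named in $NOTIFY_SOCKET, as systemd expects.
    Returns False when no notification socket is configured."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr.startswith("@"):
        # A leading '@' denotes a Linux abstract-namespace socket.
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(state.encode(), addr)
    return True
```

A daemon could carry roughly this much code in-tree instead of linking all of libsystemd (and, transitively, liblzma) for one readiness message.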

So that point is moot and my original point persists: EVERYTHING is security critical. And the idea of being suspicious does not scale unfortunately.

@ixs @mattdm @bontchev
"Nobody added xz to opensshd"

And how did opensshd with patched-in libsystemd compile without xz? And how did the malicious code in xz even run if nobody added it?

That's not how any of this works. Someone added libsystemd AND xz without attention to the security implications.

@systemalias @ixs

What would they look like? When would you run them? How would you prevent so many false positives that they're ignored?

@bontchev What if your changes improve performance?

@geert @bontchev What if your backdoor improves performance? Well... that happened around the Pentium Pro era. And we still don't know what to do with it. It's called "caches" and "speculation", and it leads to very hard-to-fix information leaks, aka Spectre and Meltdown (and others yet to be discovered) :-(. Too bad it probably was not an intentional backdoor.