Keep it in mind! First one is a kind of personal mantra

@KernelRecipes The continuity across upstream messaging has been clear since (probably before) 2017. Same observations then too: https://youtu.be/RKadXpQLmPU#t=2796
"If you are not using a stable / long-term kernel, your machine is insecure" - @gregkh

@KernelRecipes Followed up by my more nihilistic take:
https://youtu.be/b2_HAH2kX04#t=373
"Your machine is insecure."

Bottom line remains the same: we have to eliminate bug classes. I'm really excited by all the work that continues on this front between fixing the C language itself and the adoption of Rust. We continue to make steady progress, but can always use more help. :)
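A tiny illustration of what "eliminating a bug class" means in practice (my own sketch, not kernel code): in safe Rust, an out-of-bounds read is not a silent memory disclosure but either a deterministic panic or, with the `get` API, an explicit `None` the caller must handle.

```rust
// Sketch: the classic C bug class "read past the end of a buffer"
// is unrepresentable in safe Rust. Indexing is bounds-checked, and
// `get` forces the caller to handle the miss explicitly.
fn read_byte(buf: &[u8], idx: usize) -> Option<u8> {
    // Returns None instead of reading past the end of `buf`.
    buf.get(idx).copied()
}

fn main() {
    let buf = [0x10u8, 0x20, 0x30];
    assert_eq!(read_byte(&buf, 1), Some(0x20));
    // In C, buf[7] would be undefined behavior; here it is just None.
    assert_eq!(read_byte(&buf, 7), None);
    println!("no out-of-bounds reads possible in safe code");
}
```

The same access pattern in C would compile happily and read whatever sits past the buffer; fixing the class at the language level removes every future instance, not just the one that got a CVE.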

@KernelRecipes Sometimes people need reminding that CVEs are just a stand-in for the real goal: fixing vulnerabilities. The point of "the deployment cannot have any CVEs" isn't an arbitrary check list. The goal is to get as close as possible to "the deployment cannot have any vulnerabilities".

The Linux Kernel CNA solves the "tons of false negatives" problem (but creates the "a few false positives" problem), but the result is a more accurate mapping from vulnerabilities to CVEs.

@KernelRecipes So the conclusion from this is that anyone saying "we can't keep up with all the CVEs" is admitting that they can't keep up with all the current (and past!) vulnerabilities present in the kernel.

Either they don't have a threat model, can't triage patches against their threat model, or can't keep up with stable releases due to whatever deployment testing gaps they have.

There are very few deployments I'm aware of that can, honestly. This is hardly new, but now it is more visible.

@KernelRecipes But this is why I've been saying for more than a decade, and others have said for way longer, that the solution is eliminating classes of flaws.

Bailing water out of the boat is Sisyphean without also patching the holes in the hull. But since we're already in the water, we have to do both. And the more we can fix the cause of the flaws the less bailing we need to do; so more Rust, safer C.

I look forward to finding design issue vulns instead of the flood of memory safety issues.

@kees @KernelRecipes @Di4na I think it boils down to this more simply: do we want to fix bugs that are the cause of one vulnerability, or bugs that are the cause of many vulnerabilities? Fixing bugs in frameworks and languages fixes many vulnerabilities. Fixing bugs at the surface that resulted in one vulnerability is an unfortunate waste of time, but, like you said, because we're in the water we've got to plug the holes. The real long-term answer is to do things like invent a bilge pump.

@kees @KernelRecipes Greg & company are introducing so many false positives into the CVE system that CVEs are now completely useless for the kernel. Good job! :-( (And calling it "a few false positives" is not really a good sign.)
@kees @KernelRecipes Some people need reminding that marking every bugfix as a CVE will not help with security. It will just make people ignore the CVEs.
@KernelRecipes "If you are using the latest stable / longterm kernel, your system is insecure, too" -- me. Plus, you are running code that is more experimental than mainline releases, as a bonus (!).

@kurtseifried @kees @KernelRecipes i mean that or... We can also reclassify the problem from "a bug/programming mistake" to "being forced to use an unergonomic tool".

I refuse to let a tablesaw without sawstop technology in my workshop. Why should we not analyze our tools the same way?

@Di4na @kees @KernelRecipes So that's a good example: a tablesaw can be relatively safe with a SawStop, and the downside of not being able to cut wet wood is generally OK. You should be drying your lumber before you work with it anyway, because if it's that wet it's going to shrink a whole bunch and mess up your project.

However, there are a lot of tools that can't be made safe and can't really be replaced by a safer option: things like wood chisels or a cordless wood planer. I suspect this is also true of a lot of software. Some of it can definitely be improved or replaced, and some of it can't. At least not at this time.

@kurtseifried @kees @KernelRecipes Sure, but we are not talking about that. We are not even at the "wait, the tools themselves could be dangerous" step.

@kurtseifried @kees @KernelRecipes Also note that the use cases for planers and wood chisels have been massively reduced in modern "industrial" woodworking, by changing the kinds of joints we make, the materials we use, and the target end products.

@Di4na @kees @KernelRecipes All my doors, and the doors at my parents' house, that used to stick no longer do, because I got a terrifyingly dangerous cordless planer and used it. There's no other sane way to fix those doors.

@pavel @KernelRecipes At the LPC CVE BoF, in a room filled with people who care deeply about this topic, there appeared to be consensus that the CNA has traded many false negatives for a few false positives. (I.e. we are now closer to the imagined objective reality of a 1:1 mapping between fixes and CVEs.)

In the past, with distros and researchers mostly causing the CVE assignments, the implied threat model was that of a distro, and didn't represent other models. (But still missed fixes.)

@pavel @KernelRecipes I think of the CNA as doing a first pass at CVEs, and then each deployment can continue triage based on their threat model. This is how it's always been; it's just that severity triage has been moved closer to where it is needed: with those that have a threat model to apply. What has changed is that there isn't yet a place for common threat models to share triage. This used to be the CVEs themselves, but that left out all the other threat models and missed tons of fixes.
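To make the "CNA first pass, then per-deployment triage" idea concrete, here is a minimal sketch in Rust. All names and CVE IDs are hypothetical placeholders of my own invention; the real CNA feed is JSON with a much richer schema. The CNA emits one record per fix, and each deployment filters against its own threat model before doing severity triage.

```rust
// Hypothetical shape of one CNA announcement; the real feed carries
// commit ranges, affected versions, and more.
#[derive(Debug, PartialEq)]
struct CveRecord {
    id: &'static str,
    subsystem: &'static str, // e.g. the top-level directory of the fix
}

/// First pass: keep only CVEs touching subsystems this deployment ships.
/// Severity triage then happens only on what survives the filter.
fn triage<'a>(feed: &'a [CveRecord], threat_model: &[&str]) -> Vec<&'a CveRecord> {
    feed.iter()
        .filter(|cve| threat_model.contains(&cve.subsystem))
        .collect()
}

fn main() {
    // Placeholder records, not real CVE assignments.
    let feed = [
        CveRecord { id: "CVE-XXXX-0001", subsystem: "net" },
        CveRecord { id: "CVE-XXXX-0002", subsystem: "drivers/gpu" },
        CveRecord { id: "CVE-XXXX-0003", subsystem: "fs" },
    ];
    // A headless server deployment may not care about GPU drivers at all.
    let relevant = triage(&feed, &["net", "fs"]);
    for cve in &relevant {
        println!("{} needs severity triage here", cve.id);
    }
    assert_eq!(relevant.len(), 2);
}
```

The point of the sketch is only the shape of the pipeline: the CNA's job ends at "one CVE per fix", and everything threat-model-specific happens downstream, per deployment.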

@pavel @KernelRecipes Deployments always had an obligation to evaluate vulnerabilities and fix them, but now it has become unavoidable and threat model mismatches are glaringly obvious.

Yes, it is possible that for a given threat model, there are now a ton of CVEs that will need to have their severity labeled as "don't care". But this was always true; it's just that no one triaged fixes, they triaged against the prior CVEs, which were a small subset of the distro threat model. Lots of fixes got missed.

@kees @KernelRecipes
I think many people agree, the difficulty is to get there: it isn't really realistic to freeze C code while replacing it with Rust, which means the bindings will keep breaking while Rust code is developed, causing a lot of pain to whoever maintains the bindings (making subsystem maintainers say "not me")

This, as far as I can see on the mailing list I follow, creates some tension, and I predict this will continue to do so in the coming years, because there is no simple way to convert such a large active codebase with so many users

@jacen @kees @KernelRecipes

As an interested outsider the state of Linux CI is bizarre. It appears largely left to consumers and developers.

When the cycle time for feedback on a commit is days or weeks, the developer of that commit has moved on to new work. It is difficult to get that person to care about your problem.

I see the KernelCI project; however, it is not clear whether build failures there are cause for backing out the commit.

@kbrosnan @kees @KernelRecipes
I think you are right: Linux CI is left to consumers and developers. The thing is, there isn't really anyone else. There is no enterprise owning and hosting the kernel. I think only 2 developers are paid by the Linux Foundation, with no infrastructure to do CI on every subsystem tree.

On the media side, I see 2 CI running tests, one from Google, the other from Intel, with someone from Cisco in copy.

That being said, you should expect days before getting feedback on a commit you submit for review (by people not paid to do so), and reasonably weeks (or months for a significant change, or 20 years for PREEMPT_RT) before it gets merged into the subsystem.

The main issue with the kernel is maintaining it. So, I guess, for many maintainers: if you can't maintain your code, maybe it's best to leave it out of the kernel tree.

@kbrosnan @jacen @kees @KernelRecipes CI, while it provides amazing value when set up properly, is expensive to run. Even on smaller projects such as Mesa3D, it takes a team of maintainers to develop it and keep it running, not to mention the teams who care for the physical farms. Just as people agree on one mainline, I think it would be wise for all the companies involved to join forces and work on pre-merge solutions instead of each doing their own post-merge.
