In February this year, the Linux Kernel project was made an official CVE Numbering Authority (CNA) with exclusive rights to issue CVE identifiers for the Linux kernel.
While initially this looked like good news, almost three months later, this has turned into a complete and utter disaster.
Over the past few months, the Linux Kernel team has issued thousands of CVE identifiers, the vast majority of them for trivial bug fixes rather than genuine security flaws.
In May alone, the Linux team issued over 1,100 CVEs, according to Cisco's Jerry Gamblin, a number that easily beats out the professional bug bounty programs and platforms run by the likes of Trend Micro ZDI, Wordfence, and Patchstack.
@dangoodin I think that's overly alarmist. @gregkh has talked about this in public. IMO a lot of the "the world is ending" talk is from folks that already had completely unsustainable vuln management practices based around a broken CVE assignment and scoring system. It's not because the kernel security folks are doing something wrong; it's because the way people update and assess software is broken.
@dangoodin One man's bug is another man's security flaw, especially in kernel space.
I recommend this blog post talking about how the traditional way CVE was working isn't going to cut it any more, not because of what the kernel team does, but despite it: https://opensourcesecurity.io/2024/06/03/why-are-vulnerabilities-out-of-control-in-2024/
And if you are working on a custom kernel build, you should be assessing all these fixes for security anyway. And if you aren't, you shouldn't care; pay your vendor to handle it.
@sheogorath @dangoodin also, if that relatively small growth (it's not like it doubled…) is breaking everything, what happens if the thousands of vulns documented in the GitHub Security Advisory repo all get CVEs from now on? Or if 1% of open source projects start doing CVEs?
Everything is on fire and ready to break; it was just a matter of time.
@dangoodin also you can get it directly from @gregkh, who spoke about it recently on the #osspodcast https://opensourcesecurity.io/2024/02/25/episode-417-linux-kernel-security-with-greg-k-h/ TL;DR: the Linux kernel is playing the CVE game by the rules. If that's a problem, maybe we need to change the game, not blame the people playing by its rules.
@camdoncady @dangoodin @gregkh @joshbressers @Di4na also, the CVE system is starting to break down as well; basically, most of what #infosec does is the same as it was 20 years ago. Good thing the world hasn't fundamentally changed in terms of software delivered as a service, or open source growing into millions of projects that took over the world…
Oh wait a minute… uh oh.
@camdoncady @dangoodin @gregkh @kurtseifried @Di4na I cover that and many other things in this blog post
https://opensourcesecurity.io/2024/06/03/why-are-vulnerabilities-out-of-control-in-2024/
@camdoncady @dangoodin @gregkh @joshbressers @kurtseifried There's a non-vuln-management problem, which is that CVEs are used to inform a lot of academic work; the Linux kernel team using different issuance criteria than everyone else is going to break a lot of academic research.
@adamshostack @camdoncady @dangoodin @gregkh @joshbressers can you give examples of which Linux kernel CVEs should not have been assigned? If so they should be REJECTed. If they are valid, well, then what’s the problem exactly?
@kurtseifried @camdoncady @dangoodin @gregkh @joshbressers I haven't dug in. I'm assuming the researcher Dan quoted is being reasonable in saying the LK is over-assigning, and I'm responding to Camdon's claim that it doesn't matter. If the researcher is wrong, my point isn't relevant.
That said, if two CNAs are making different issuance decisions with similar facts, then that seems worrisome to me.
@adamshostack @kurtseifried @camdoncady @dangoodin @gregkh I guarantee different CNAs make different decisions all the time
The definition of a vulnerability isn't nearly as concrete as it should be. There's a ton of subjectivity in all this data
@joshbressers @kurtseifried @camdoncady @dangoodin @gregkh The definition of vuln is exceptionally concrete... compared to the definition of exposure. 😇
More seriously, at the level of a few it matters much less than at the level of hundreds.
@adamshostack @kurtseifried @camdoncady @dangoodin @gregkh
It's really not though
I've been in many meetings where the definition has been twisted around to call something just a bug
You can see this all the time when a security researcher submits a finding and whoever they are reporting it to declares it not a vulnerability. I guarantee every researcher has many stories like this
@adamshostack @kurtseifried @camdoncady @gregkh @joshbressers
I just updated my toot to note the author is @campuscodi
@adamshostack @camdoncady @dangoodin @gregkh @joshbressers this is already well documented. For example, some CNAs will issue a CVE for any security vulnerability, whether it's externally found, internally found, or whatever. Others will only issue a CVE for externally found vulnerabilities. So that right there means you have dozens or hundreds of vulnerabilities being fixed silently by various vendors, mostly closed source, however, so the argument often is that it doesn't matter, because you have to pick up their packages anyway for the other vulnerabilities that are publicly known and have CVEs.
And that's the literal tip of the iceberg. This and some other related issues are part of the reason I resigned from the CVE board. It's basically turned into the scoring system from Whose Line Is It Anyway?
@kurtseifried @adamshostack @camdoncady @dangoodin @gregkh @joshbressers I don't think the Linux team is performing any kind of malicious compliance. This is just a good-faith effort to actually document every possible vulnerability, and it is exposing weaknesses in the process that stayed hidden because other orgs instinctively moderate how they assign CVE IDs and get away with lip service, like your example of only assigning when external parties already know of the issue.
@kurtseifried @adamshostack @camdoncady @dangoodin @gregkh @joshbressers the Linux consensus was already that they have a ton of churn and code that is public knowledge and may be publicly used, so they would have a huge number of trackable items if they got serious about documentation, which is predictably what happened.
This is an opportunity to review CVE and the reality of the Linux kernel more than anything else
@adamshostack @kurtseifried @dangoodin @gregkh @joshbressers I'm not sure that "it doesn't matter" is an accurate characterization of my original position. I said that "complete and utter disaster" was "overly alarmist". I think this does matter; I just happen to think that the good outweighs the harm, and we'd still be on a bad path when it comes to CVEs even if the kernel team had become a CNA and then never issued a CVE number.
I admit I hadn't thought much about the academic research angle when I originally responded to Dan above. Having given it some more thought, I don't think it changes my position. I've long been of the opinion that kernel development really stands alone as a process and a product, and that using the kernel as the lens to assess the open-source community in general isn't a great idea. For most academic research, you'd want to treat the kernel separately. If you are doing research on Linux kernel vulnerabilities, then CVEs are probably not the best proxy for "total vulnerabilities" anyway. There's research (which Greg utilizes when deciding what commits to pull into the stable branch as security-relevant) showing that different subsystems don't assess the security implications of bugs the same way, so they use ML ("classic ML"; you don't need an LLM for this, I don't think) to classify commits. I think it's linked on the OSS podcast page for Greg's episode. So, if you're doing kernel security research, you need to be looking at more than just CVE counts.
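For what it's worth, here's a minimal sketch of what a "classic ML" commit classifier like that could look like; the pipeline, features, and toy labels below are my own invention for illustration, not the actual model used for stable-branch triage:

# Hypothetical "classic ML" commit classifier: TF-IDF over commit messages
# plus logistic regression. The labeled examples are toy data, not the real
# training set used for stable-branch triage.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

commits = [
    ("fix use-after-free in nfsd4 xdr decode path", 1),      # security-relevant
    ("avoid NULL pointer dereference on malformed input", 1),
    ("clean up whitespace in Kconfig help text", 0),          # ordinary churn
    ("update MAINTAINERS entry for watchdog drivers", 0),
]
messages = [text for text, _ in commits]
labels = [label for _, label in commits]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score a new commit message; anything above a chosen threshold would get
# flagged for human review as a stable/security backport candidate.
print(model.predict_proba(["fix out-of-bounds read in packet parser"])[0][1])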
I'll give you my opinion on why folks are so bent out of shape about having more CVEs assigned by the kernel maintainers: because for years, commercial companies have been grabbing open-source dependencies by the truckload in order to quickly ship features (read: make money), without having any sort of plan for how they might contribute back or how they would meet their obligations to consumers for keeping their commercial products updated and secure. OSS isn't a "supply chain", it's a natural resource being strip-mined. For years, these companies have been doing the bare minimum in the form of grudgingly patching critical vulnerabilities (frequently only when a POC was published against their product, otherwise they are happy to claim "not reachable in *my* code path" faster than anyone could do a thorough analysis). These same companies then have the temerity to go back to the OSS maintainers and ask them to assess the impact to the *downstream product* - and when they don't like the answer, they accuse unpaid volunteers of "not doing their job."
Microsoft and Red Hat have trained a generation of managers (and devs, honestly) that you can just write software and then run it, without maintenance, for decades. In a world as connected and as full of malicious actors as ours, that doesn't work. At the same time, the propagation of cyber security standards (RMF, SOC, PCI, etc.) has created a so-called security culture that revolves around compliance rather than an honest assessment of risk. When you combine those with the growth of CVEs (which was already happening before the kernel went down this path; also, I've seen no credible argument that the kernel team contributed to the NVD falling flat on its face), you get a ton of gnashing of teeth and wailing.
CC: @campuscodi since I'm really responding to him and not to Dan
@pavel @camdoncady @dangoodin @joshbressers @kurtseifried @gregkh spamming or using it as it should be used?
@Di4na @pavel @camdoncady @dangoodin @kurtseifried @gregkh
One person's spam is another person's email :P
@pavel @joshbressers @camdoncady @dangoodin @gregkh @Di4na so you're saying the CVEs need to be rejected because they are not vulnerabilities? If so, there is a process for that, and you should use it.
@adamshostack @camdoncady @dangoodin @gregkh @joshbressers
So I can speak from authority here. I tried for many years to get the CNA community to actually abide by the counting rules that used to be in effect:
https://cve.mitre.org/cve/list_rules_and_guidance/correcting_counting_issues.html
Please note that the above rules are no longer used and were entirely removed, e.g. the CNA Rules no longer include them:
https://www.cve.org/Resources/Roles/Cnas/CNA_Rules_v3.0.pdf
also still missing in the new version coming out in 2 months: https://www.cve.org/Resources/Roles/Cnas/CNA_Rules_v4.0.pdf
So if academics are using CVE as some sort of authoritative data source, they have fundamentally failed at academic research (understanding what a data source is and what it actually represents is academia 101).
Also, CVE clearly defines what they think a vulnerability is; it's about two pages long, and as best I can tell the Linux Kernel is 100% abiding by it (excepting the occasional REJECTed CVE, but an error rate is to be expected; I can't even recall how many CVEs I issued and had to reject, but it was at least dozens, ironically often due to wanting to label things quickly with a CVE in order to help people get the security ball rolling).
@kurtseifried @camdoncady @dangoodin @gregkh @joshbressers I'll defer to you on the CNA rules issue. On the academic side, I think that "all models are wrong and some models are useful." It's not only ok for an academic to say "We'll use X to represent this and focus on another research question," but possibly preferable for them to not try to redefine what a vuln is.
For example, if I'm doing static analysis research, then I can either "Use CVEs" as a reference dataset, or I can define some other reference. I shudder to think what that other would end up being.
@adamshostack @camdoncady @dangoodin @gregkh @joshbressers Actually, I just went into the data. The full reply will probably need to be a blog post with charts, but here are some data points, then a question, then more data, and then the answer:
GitHub did 1771 CVEs last year and is on track to do slightly more this year (913 so far). Why is nobody complaining about this?
But wait, it gets better: Wordfence is even more extreme, they did:
2021: 154
2022: 169
2023: 880
2024: 1509 (so on track for 3000-3500)
Oh and also VulDB:
2022: 854
2023: 1931
2024: 1394 (so on track for 3000)
and Patchstack:
2021: 41
2022: 370
2023: 2146
2024: 1733 (so on track for 3500-4000)
and ZDI:
2020: 193
2021: 218
2022: 298
2023: 259
2024: 869 (so on track for 1800)
In other words: most people are ignoring most CVEs... and along comes a CNA they can't ignore, and it all blows up.
This growing problem was also hidden by a lot of CNAs that people have to pay attention to (like Red Hat, which peaked at 1,072 in 2022 and is on track for 400 this year) doing drastically fewer CVEs over the last few years.
@kurtseifried @camdoncady @dangoodin @gregkh @joshbressers I dunno how you can ignore wordfence until you move to a static site generator. 😜
@kurtseifried @camdoncady @dangoodin @gregkh @joshbressers More seriously, that's fascinating data!
@adamshostack @camdoncady @dangoodin @gregkh @joshbressers So, it's the same argument the Linux Kernel people have: enable automatic updates, a.k.a. run CURRENT, and you can largely ignore the CVEs. If you're already upgrading as quickly as you can, there's not really anything to be gained by tracking CVEs and... upgrading faster? Or implementing a temporary compensating control (which, if you can, why not do it in the first place and leave it there forever?).
The reality is most infosec programs are still at the "oh my god, it's on fire (AGAIN), what do we do?" stage, and they never think to update their building codes or build a fire department. They just keep running around and buying more buckets to throw water at the fire. That's not a sustainable solution.
@adamshostack @camdoncady @dangoodin @gregkh @joshbressers If I can do this in ten minutes, any academic that's actually doing academic work should have been able to. I mean.. if you put this in a line graph it's really obvious (100 is a lot smaller than 400, and a huge chunk of the "major" CNAs dip in the last year or two pretty significantly).
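For anyone who wants to reproduce that ten-minute exercise, here's a rough sketch of the tallying step. It assumes a local clone of the CVE JSON 5.x records (for example the CVEProject/cvelistV5 repository) and uses the metadata fields I believe those records carry (assignerShortName, datePublished, state); double-check the layout against your copy.

# Tally published CVEs per assigner per year from a local clone of the
# CVE JSON 5.x records. Field names follow the CVE Record Format 5.x.
import json
from collections import Counter
from pathlib import Path

counts = Counter()
for path in Path("cvelistV5/cves").rglob("CVE-*.json"):
    meta = json.loads(path.read_text()).get("cveMetadata", {})
    if meta.get("state") != "PUBLISHED":
        continue
    assigner = meta.get("assignerShortName", "unknown")
    year = (meta.get("datePublished") or "")[:4]
    counts[(assigner, year)] += 1

# Print a per-year tally for one CNA; feed the same numbers into a line
# chart and the dips and spikes described above jump out immediately.
for (assigner, year), n in sorted(counts.items()):
    if assigner == "Linux":
        print(year, n)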
@kurtseifried @adamshostack @camdoncady @dangoodin @gregkh @joshbressers to be honest, if every security issue under the sun were given a proper CVE ID (which, to the amusement and surprise of people, is not the case right now: most folks in this list would be very much aware of this IMHO), no one would have time to deal with all of them. Since one CNA is actually responsibly putting CVE details out, people have issues.
Wordfence, Patchstack, and the like are mostly WordPress bugs. I know from my own research there is a lot more where that came from, and it's just the tip of an iceberg: every time I have looked at that ecosystem, more issues are identified. Should we not assign CVEs because an obscure plugin is not important to everyone, or should we focus on better CVE details so that people have some semblance of what to keep in mind and what to ignore?
@kurtseifried @camdoncady @dangoodin @gregkh @joshbressers How does that help with a static analysis project? My mental model here is "I wrote a new tool that takes source code and emits vulns, and I use CVEs to see if my tool finds all of them"
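A tiny sketch of that mental model, with entirely invented CVE IDs and tool output, just to show how the reference set drives the result:

# Toy evaluation of a static analysis tool against a CVE reference set.
# The CVE IDs and tool findings below are made up for illustration.
reference_cves = {"CVE-2024-0001", "CVE-2024-0002", "CVE-2024-0003"}
tool_findings = {"CVE-2024-0001", "CVE-2024-0003", "CVE-2024-9999"}

true_positives = reference_cves & tool_findings
recall = len(true_positives) / len(reference_cves)
precision = len(true_positives) / len(tool_findings)

print(f"recall={recall:.2f} precision={precision:.2f}")
# If the reference set itself is skewed (one CNA assigning very differently
# from the rest), both numbers shift without the tool changing at all.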
@anant @kurtseifried @adamshostack @camdoncady @dangoodin @gregkh I used to be on the side of more IDs, because we should be tracking everything
But I think we need to clean up the data first. We already can’t actually handle the volume of IDs we have. More won’t solve any problems
But proper and correct data would solve many problems
@adamshostack @camdoncady @dangoodin @gregkh @joshbressers create your own dataset of vulns? If you rely on CVE, you are ignoring a few thousand documented vulns that GitHub Security covers, for example.
@joshbressers @kurtseifried @adamshostack @camdoncady @dangoodin @gregkh are they mutually exclusive? Can't we aim for more and better tracking as well as more detailed data?
@anant @joshbressers @adamshostack @camdoncady @dangoodin @gregkh I find this wonderfully ironic since the Linux Kernel CVEs are basically some of the highest quality CVEs available for one simple reason:
"affected": [
{
"product": "Linux",
"vendor": "Linux",
"defaultStatus": "unaffected",
"repo": "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git",
"programFiles": [
"fs/nfsd/nfs4xdr.c"
],
"versions": [
{
"version": "83ab8678ad0c",
"lessThan": "6a7b07689af6",
"status": "affected",
"versionType": "git"
},
{
"version": "83ab8678ad0c",
"lessThan": "18180a4550d0",
"status": "affected",
"versionType": "git"
}
]
},
{
"product": "Linux",
"vendor": "Linux",
"defaultStatus": "affected",
"repo": "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git",
"programFiles": [
"fs/nfsd/nfs4xdr.c"
],
"versions": [
{
"version": "6.7",
"status": "affected"
},
{
"version": "0",
"lessThan": "6.7",
"status": "unaffected",
"versionType": "custom"
},
{
"version": "6.8.10",
"lessThanOrEqual": "6.8.*",
"status": "unaffected",
"versionType": "custom"
},
{
"version": "6.9",
"lessThanOrEqual": "*",
"status": "unaffected",
"versionType": "original_commit_for_fix"
}
]
}
],
"references": [
{
"url": "https://git.kernel.org/stable/c/6a7b07689af6e4e023404bf69b1230f43b2a15bc"
},
{
"url": "https://git.kernel.org/stable/c/18180a4550d08be4eb0387fe83f02f703f92d4e7"
}
],
Show me another CNA with such consistently detailed data on affected versions and links to the actual problem/solution.
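To make the point concrete, here is a minimal sketch of consuming that structure. It assumes the excerpt above has been saved locally as a parseable JSON object (I'm calling the file cve_record.json; that name is mine, not part of any tooling) and uses only the fields shown in the excerpt.

# Pull the useful bits out of a kernel CVE record excerpt like the one
# above: the files involved, the affected git ranges, and the fix commits.
import json

with open("cve_record.json") as fh:  # hypothetical local copy of the record
    record = json.load(fh)

for entry in record["affected"]:
    for path in entry.get("programFiles", []):
        print("source file:", path)
    for rng in entry.get("versions", []):
        if rng.get("versionType") == "git" and rng.get("status") == "affected":
            print(f"affected from commit {rng['version']} until fixed in {rng['lessThan']}")

for ref in record.get("references", []):
    print("fix commit:", ref["url"].rsplit("/", 1)[-1])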
@pavel @Di4na @camdoncady @dangoodin @joshbressers @gregkh Speaking of good faith effort...
I'm going to push back and say that if you think a CVE should be REJECTed, it's on you to go do that process, not just complain on Mastodon and expect someone else to do the work.
The CVE system has a process for REJECTing CVEs, which you need to work with, as does the Linux Kernel. Good news: I know for a fact the Linux Kernel is making a good-faith effort here, because gregkh@social.kernel.org has indeed REJECTed Linux Kernel CVEs.
And people wonder why I left the CVE ecosystem.... Everyone wants it fixed, but beyond complaining on Twitter/Mastodon they won't do the work.
@adamshostack @camdoncady @dangoodin @gregkh @joshbressers Also as an example of counting CVEs changing:
Schneider Electric:
2013: 1
2017: 16
2018: 84
2019: 83
2020: 118
2021: 86
2022: 106
2023: 94
2024: 9
so basically 100 CVEs per year for 5 years and then... looks like they'll do 20 this year? I double-checked the data (e.g., what if they do most of their security work in the last half of the year?) and the distribution of CVEs they assigned by month over their entire history is:
01: 59
02: 72
03: 51
04: 68
05: 76
06: 60
07: 55
08: 14
09: 34
10: 14
11: 56
12: 38
so if anything they do most of their work in the first half of the year typically...
So did Schneider Electric suddenly make more secure software and hardware, or did they stop fixing things, or did they stop counting CVEs the same way they used to? My suspicion would be #3, but they might also have just stopped fixing things. I wouldn't bet any money on "it all got magically secure and they lived happily ever after"
@anant @kurtseifried @adamshostack @camdoncady @dangoodin @gregkh they might be mutually exclusive. More ids without better data will probably have unexpected results
Better data has no obvious solution today
@joshbressers @anant @adamshostack @camdoncady @dangoodin @gregkh Ultimately, with better data we can do something; without good data, everyone gets to keep reinventing the wheel and trying to fix the data themselves (which also leads to the obvious question: if you find a problem in a CVE and have the correct information, why can't it easily be fixed? It's literally easier to fix a security flaw in software than a data error in a CVE).
@kurtseifried @pavel @camdoncady @dangoodin @joshbressers @gregkh that is because they cannot.
This is what the "spam" is revealing. Not that they are bad CVEs, but that no one had adequate resources for it.
So once it is used as it should be, everyone panics. Work-as-imagined vs Work-as-designed vs Work-as-done
@pavel @camdoncady @dangoodin @joshbressers @kurtseifried @gregkh except this is taking the POV that he can know whether that bug is a vulnerability or not.
It happens that this is not easy, and the CVE process is indeed to assign one first and then revoke it if it turns out not to be a vulnerability. This is how it was designed.
@pavel @Di4na @camdoncady @dangoodin @joshbressers @gregkh so the Linux kernel already did the work and assigned a CVE. If you disagree with this, you need to do the work of getting it rejected. If you're going to make good-faith arguments and then refuse to do anything, just complain, and put it on everybody else, you're clearly not arguing in good faith. As such, I'm going to block you. Good day.
@Di4na @pavel@social.kernel.org @camdoncady @dangoodin @joshbressers @gregkh
This comment is regarding open-source software; some of it may also apply to closed-source software, but probably not as much, due to contracts and payments.
That's a data set I would love to see. I have assigned many thousands of CVEs, and some were obviously easily classified as vulnerabilities. For example, I just informed a project of a minor security flaw where the password reset will tell you whether the account name is valid or not. On the flip side, the majority of CVEs I've assigned are probably vulnerabilities, but for the project, building a reproducer, especially a non-weaponized one, is going to take a lot of effort versus just fixing the bug and moving on. Now, this is what downstream users have to deal with.
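To illustrate that password-reset flaw (the handler names and code below are hypothetical, not from the project in question): the leaky version answers differently for known and unknown accounts, which lets anyone enumerate valid usernames; the fix is to answer identically either way.

# Hypothetical illustration of account enumeration via password reset.

def reset_password_leaky(username, known_users):
    # Vulnerable: the response reveals whether the account exists.
    if username in known_users:
        return "Reset link sent to your email."
    return "No account with that name exists."

def reset_password_uniform(username, known_users):
    # Fixed: identical response either way; the reset email (if any) goes
    # out-of-band, so nothing about account validity leaks to the caller.
    if username in known_users:
        pass  # queue the reset email here
    return "If that account exists, a reset link has been sent."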
I think that's too bad for them, and if they don't like it, they don't have to use the software. It's also why companies like Red Hat exist. We literally took on the burden of eating the churn and producing updates that were tested and could generally safely be installed, except for the few times they couldn't be.
If someone really wants to prove that a vulnerability is not a vulnerability, they are free to do so. But I don't see it as being on the project to do this. As for people making demands on open-source projects and saying everybody else should do it the way they want it done, I think the era of listening to them should probably end. I know I'm going to block people like that from now on.