Conversation
"Findings by static analyzers in Fedora 43" == "nonsense findings that someone wants someone else to wade through to weed out the obvious false-positives in their broken 'security' tool"

Someone needs to seriously reconsider this.

And yes, the tool is obviously broken. I looked at the first 3 "issues" found and just laughed, thinking this was a joke, but it seemed to actually be real, which is sad on so many levels...

{sigh}

@aho I wish, that might actually have spit out something useful...

What would you consider an acceptable ratio of false positives?

@gregkh @aho Nah, I've seen those in curl issues and on the musl mailing list; pure waste of CPU and human time.
An LLM as a fuzzer, on the other hand, might actually be useful.

@kasperd None.

Have someone actually verify that they are real issues before "reporting" them to a project.

But better yet, submit a patch fixing them if you have found and determined that they are real issues; that's the only way for anyone to take reports like this seriously.

Otherwise, it's just noise.

I strongly disagree with that. It's probably impossible to create a static analysis tool which never produces any false positives. So if you insist there can be no false positives, you simply cannot use any static analysis tool, and that means you lose all of the benefits it could provide.

Too high a ratio of false positives can be harmful in a couple of ways. The most obvious problem is that a person has to spend time reviewing each of them, so false positives are a waste of time. But if that were the only problem, we could tolerate quite a high ratio of false positives.

Suppose you got 5 reports from a static analysis tool, of which 4 were false positives and one was a security vulnerability. Then the time spent analyzing the false positives would be well justified, since it is what allowed you to find the security vulnerability. From this perspective alone, a false positive ratio as high as 80% would still be acceptable.

But another problem with false positives is how they influence the thinking of the person doing the reviews. Too high a ratio of false positives could make them assume findings are false positives until proven otherwise. That means that during review they might fail to identify the true findings, which in turn reinforces the belief that the ratio of false positives is too high.

How much of a problem this becomes will certainly depend on the person reviewing the findings. My guess is that a false positive ratio of around 30% would be low enough that most people would not start assuming false positives. If a person sees 70% true findings and 30% false positives, I would expect them to keep taking the findings seriously and not dismiss them without proper review.

So even though 30% false positives sounds high, I still think a static analysis tool with that ratio is better than not using static analysis at all.

But where the threshold should lie is certainly a matter of personal preference, which is why I asked the question in the first place.
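
To make the arithmetic behind this argument concrete, here is a minimal sketch comparing the expected value of triaging a batch of findings against the triage cost. Every number in it (batch size, review time per finding, value of catching one real issue) is a made-up assumption; only the shape of the trade-off is the point.

```python
# Back-of-the-envelope model: is triaging a batch of static analysis
# findings worth the reviewer's time at a given false positive ratio?
# All constants are hypothetical, chosen only to illustrate the trade-off.

def review_payoff(n_findings, fp_ratio, hours_per_finding=0.5,
                  value_of_true_finding_hours=40.0):
    """Expected net value, in reviewer-hours, of triaging one batch."""
    true_findings = n_findings * (1.0 - fp_ratio)
    cost = n_findings * hours_per_finding                   # time spent on every finding
    benefit = true_findings * value_of_true_finding_hours   # e.g. avoided incident response
    return benefit - cost

for fp_ratio in (0.30, 0.80, 0.99):
    net = review_payoff(n_findings=100, fp_ratio=fp_ratio)
    print(f"FP ratio {fp_ratio:.0%}: expected net value {net:+.0f} reviewer-hours")
```

Under these assumptions even 80% false positives still pays for itself and only the 99% case goes negative, which is the 80%-is-acceptable argument above in numeric form; the model deliberately ignores the reviewer-fatigue effect, which is the harder part to quantify.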

Sounds like your objection is not so much about the static analysis tool itself, but rather about who should review the findings to identify which are real and which are false positives.

But regardless of that I don't think requiring no false positives is the right metric.

When you say none, you are in effect saying that 1% false positives would be unacceptable. But if these reports were actually 99% real security holes and 1% false positives, then the report would most certainly be valuable information, and quite devastating.

The reports probably contain a lot more than 1% false positives; I have not tried to judge what the actual percentage in that report is.

@kasperd @gregkh

1% is a reasonable number.

The reality is (and this has been proven a bunch of times) that "fixing" a false positive gives you between a 5% and 10% chance of INTRODUCING a security issue.

So why 1% and not 5%? Most real issues found by static analysis are not very severe, while the newly introduced ones are usually on the worse end of the spectrum.
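
The same arithmetic can be made concrete for the regression risk. The sketch below assumes a hypothetical batch of 100 findings in which every false positive gets "fixed" rather than rooted out, and applies the 5-10% per-fix risk quoted above; the batch size and the false positive ratios are made up.

```python
# Expected severe regressions introduced if false positives are "fixed"
# blindly, using the 5-10% per-fix risk quoted above. The batch size and
# the false positive ratios are hypothetical.
BATCH = 100

for fp_ratio in (0.01, 0.05, 0.30):
    false_positives = BATCH * fp_ratio
    low, high = false_positives * 0.05, false_positives * 0.10
    real_issues = BATCH * (1 - fp_ratio)
    print(f"FP ratio {fp_ratio:.0%}: {real_issues:.0f} real (mostly minor) issues fixed, "
          f"{low:.2f}-{high:.2f} severe regressions expected")
```

Whether, say, 1.5-3 expected severe regressions outweigh 70 mostly minor fixes is exactly the severity judgment being argued here, and it is why the tolerated false positive ratio ends up near 1% rather than 5%.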

@kasperd @gregkh

(and yes absolutely, one must manually root out any false positives before posting/bugging upstream maintainers)

@gregkh This "obviously broken" tool has been used inside Red Hat to scan releases of RHEL for more than a decade. If you have ever seen fixes coming from Red Hat for issues identified through static analysis tools , it is very likely those issues were identified through OpenScanHub.

@gregkh The report was sent upstream at the suggestion of the Fedora package maintainer. It is unfortunate that the quality of open source analyzers is relatively low, and that's what showed up in the report.

There are definitely situations where you should not "fix" the code pointed out by a false positive from a static analysis tool.

However, I do recall one situation where I looked at code flagged by a static analysis tool and concluded that it was not a security vulnerability, but that it was bad quality code which should be rewritten regardless. Would that count as a false positive or not?

The risk of introducing a vulnerability when trying to fix a false positive is certainly real, and the 5-10% risk you mention sounds plausible (I don't have any concrete data on that myself).

I think CVE-2008-0166 is a good example of how badly it can go wrong (even though that one resulted from a runtime analysis tool rather than static analysis).

However, I think the maintainer will likely be more qualified to identify false positives than the reporter, so even if the reporter does a good job of identifying false positives, there will likely be some left for the maintainer to notice.
