@gregkh Some people simply lack the skill, but they'd like to add “contribution to the Linux kernel” to their CVs.
Disclaimer: I've no idea if that was actually the case here.
@gregkh I think it goes like this: Fame and fortune await whoever actually reports a security problem in the Linux kernel. There is no cost to the reporter in making an attempt, even if the attempt doesn't succeed. So using an LLM to generate what looks superficially like a good report means they have a chance of benefiting, and there's no downside in trying.
Same problem as with email spam, that is.
@gregkh so you're implying that those people actually "think" before submitting such reports...
that is very generous of you
@liw @gregkh there is more. You are coming from a different worldview than the reporter. The reporter, in a lot of cases, *genuinely believes that these tools are super powerful*. They are the AI from the movies. It is a belief I have seen everywhere in my circles of friends. If the AI "discovers a bug", then it has to be real and exist.
Validating does not even come to their mind, because "who am I to doubt the powerful machine". In their mind, they are the inferior, and just the messenger.
They cannot even *imagine* that it could be that wrong, or that validating it is even possible.
@gregkh I don't understand why these people are submitting garbage AI reports.
What's the goal of it?
@gregkh you know the adage that as soon as a measure becomes a target it stops being a useful measure? I think something like that has happened with bugs and bounties
@gregkh I kind of doubt that they are capable of even testing it, or else they wouldn't use the lying machine in the first place.
@gregkh Yeah, I was trying to be funny in a sarcastic manner, again, and failed, again.
@gregkh
We've spent billions of dollars on AI! You MUST use it, and believe its every pronouncement!
@gregkh I wonder if LLMs are going to cause more problems under authoritarian regimes, where people are conditioned to do what they're told without question. Seems like perfect conditions for modern "AI" to cause all sorts of havoc, with all of it being excusable with "the computer told me to".
@gregkh Yes, let's promote staging (again)! Sounds like a good plan to me.
@gregkh Maybe "Days" should be changed to "Hours"?
@gregkh fwiw, curl just bans these types of people https://hackerone.com/reports/3230082
@gregkh linters literally do their job better than a speculation machine
@winload_exe @gregkh Almost, but not quite, as if linters and other tools were carefully designed to do a particular job, and thus do it well.
@gregkh Fun story... One month, it was my job to run Klocwork (static code analysis) against our own code, because somebody in management had decided it was important to fix all "vulnerabilities" that an automated tool can find. An expensive tool, mind you.
Two senior engineers and lots of build resources, for a month, and we changed hundreds of thousands of lines of code (some by script).
1/x
@gregkh After all that, the Product Manager did not want to merge it into production/main, because "too many lines of code changes".
I learned a lesson - when tasked, always ask "if I do this, will you ship it" of your Product Manager. Or just take the money to waste time...
But for fun, I ran Klocwork against the Linux kernel source (we were cross-compiling an ARM kernel and a rootfs/dist of our own) and the "violations" were voluminous.
But somehow nobody was worried about that.
2/x
@gregkh I don't think of LLM-based coding "assistants" any differently - in the hands of experts, probably useful. In the hands of ignorant, lazy people seeking quick solutions, they produce dangerously untrustworthy results nobody wants.
And a distraction from efforts that could really improve your software or service.
3/3