It has now been 0 days since an AI-hallucinated "security report" was sent to the kernel security team.

Right now we seem to be averaging about one per week, which is probably not bad compared to other projects.

To be fair, a real security bug was recently found with an "AI tool", but the authors of that report at least took the time to verify it was real before sending it to us, and they provided a patch, so not all is doom and gloom.

@gregkh We got a bunch raised on our issue tracker where it looked like generative AI was involved. They did turn out to be real, although outside our normal security boundary. I did appreciate that they came with working reproducers that demonstrated the problem.

These tools do have their uses, but we need to keep the slop at bay.

@gregkh I would love to see all those reports collected somewhere, together with other projects' reports. Maybe a mailing list: LLM-Fails-ML. On the other hand, let's not waste even more time on those.

@gregkh One per week seems very manageable for a project the size of Linux. I assume the reason it's not one a day is that there's nothing here to pervert the incentives.

@hanfi @gregkh Making them public would help people avoid wasting time in the future: add the authors' email addresses to AI-slop filters / PLONK them. Or add their whole domain, because sometimes that represents a company approach.
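
Something like the sketch below would do the filtering, assuming a local Maildir setup; the blocklist file name, folder paths, and matching rules are just examples, not any standard tool.

#!/usr/bin/env python3
# Minimal "plonk" filter sketch: move mail from blocklisted senders or
# domains into a slop folder. Assumes a local Maildir; the blocklist
# file name and folder paths below are examples, not anything standard.
import email.utils
import mailbox
import os

BLOCKLIST = os.path.expanduser("~/.ai-slop-plonk")  # one address or @domain per line

def load_blocklist(path):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def is_plonked(addr, blocklist):
    addr = addr.lower()
    domain = "@" + addr.rpartition("@")[2]  # "@example.com" plonks the whole domain
    return addr in blocklist or domain in blocklist

def filter_maildir(inbox_path, slop_path):
    blocklist = load_blocklist(BLOCKLIST)
    inbox = mailbox.Maildir(inbox_path)
    slop = mailbox.Maildir(slop_path, create=True)
    for key, msg in list(inbox.items()):  # copy the items: we mutate the inbox below
        _, addr = email.utils.parseaddr(msg.get("From", ""))
        if addr and is_plonked(addr, blocklist):
            slop.add(msg)
            inbox.remove(key)

if __name__ == "__main__":
    filter_maildir(os.path.expanduser("~/Maildir"),
                   os.path.expanduser("~/Maildir/.slop"))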