Looks like exactly the kind of AI abuse I feared could happen in the kernel is happening.
Now you can see why I pushed back so hard on the automated tooling docs to make it clear we should reject this crap out of hand, despite being attacked in various publications online for saying so.
Wonder if they will talk about this or not?
And yes I told you so.
https://lore.kernel.org/linux-mm/cbd0aafa-bd45-4f4d-a2dd-440473657dba@lucifer.local/
@ljs Yes, it is clearly an AI Agent. The way it replies to stuff with "I now understand your concerns.." and so on..
@ffmancera he's either heavily using it or it's just straight up openclaw or something yeah.
@ljs This is so enraging & idiotic. I've been saying the same thing to people for around a year now, and they just call me names (luddite, etc.). Honestly, before this I had a semblance of belief that the "open-source" folks are different, they will not buy into the same bullshit as the corporate techbro dudes. I was completely wrong.
@divyaranjan I mean I don't throw the baby out with the bathwater, AI tooling has a place and can be a huge force multiplier if:
a. you have an expert in the loop, i.e. a human with actual experience checking things
b. you accept and deal with the limitations of the tooling
c. everybody is honest
Really, for code generation for patches like this I don't see much value; the code it generates is never that great.
But obviously it opens the door to abuse very easily sadly...
AI more useful for things like automated review, checking for slop submission (lol), finding bugs, tracking down and analysing issues, etc.
@ljs I am skeptical of having LLMs do code reviews. Maybe as a very basic first pass, sure, but I cannot trust an LLM to thoroughly analyze the idiomaticity of the code and whether it actually does what it's supposed to do.
And if at any point it hallucinates something in the review, it only introduces more work for the human in the loop. I think we should have better static analysis tools to help us with first passes.