@corbet It will be interesting to see how the use of the Assisted-by tag develops. I'm sure not everyone uses it. And if it changes the reception of a patch to be negative, surely people will be less forthcoming about LLM usage too. And, of course, a contribution based on a lie is not a great way to build trust either.
I also see trivial patches with Assisted-by that make me think: why? Couldn't you have done this yourself and learned something in the process?
@corbet It could be one more facet of our looming neo-feudal moment. Bespoke, tested code for the elites, slop-upon-slop code for the peasants.
I suppose some people will surely turn to LLMs to help navigate the social "are you serious" gauntlet.
@corbet yes, it raises new questions, like: what happens when they run out of tokens?
@corbet it bodes well for proponents of outright banning LLM contributions to the kernel. just ban that shit, it's not that hard.
some people will disregard the policy. upon being caught out, ban them from all kernel spaces. simple as
@corbet
PR submitter blacklists are coming, if not here already.
@WesternInfidels @corbet Mail servers' use of "greylisting" (initially denying suspect connections and then letting the retry through) helps avoid this "drive-by" kind of traffic. Maybe we need something similar for PRs?
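For what it's worth, a greylisting-style gate for PRs could be sketched roughly like the mail-server version: defer the first contact from an unknown submitter, and only let a retry through after a waiting period. This is purely illustrative; all names and the delay value are hypothetical.

```python
import time

# Hypothetical greylisting sketch for PR submissions, by analogy with
# SMTP greylisting: the first attempt from an unknown submitter is
# deferred, and only a retry after the delay reaches reviewers.
GREYLIST_DELAY = 24 * 3600  # seconds an unknown submitter must wait

seen = {}  # submitter -> timestamp of their first attempt

def greylist(submitter, now=None):
    """Return 'defer' on first contact, 'accept' once the submitter
    has waited out the delay and tried again."""
    now = time.time() if now is None else now
    first = seen.setdefault(submitter, now)
    if now - first < GREYLIST_DELAY:
        return "defer"
    return "accept"
```

The point of the mechanism is cheapness: a drive-by submitter who never follows up costs the maintainer nothing, while a persistent one pays only a delay.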
@corbet Can we stop using the propaganda language "machine-generated code" for this? It's copyright-laundered code of unknown origin.
@corbet when someone new turns up with lots of new patches, it makes you wonder. Next you might see people pushing and demanding the same changes. Gives me an XZ utils vibe.
@corbet Like @pluralistic said, we are filling our walls with asbestos.
@corbet "Unwilling to defend" is a good test for the motivation of the submitter. And, unfortunately, motivation is a good test of the cost of accepting the submission.
If the motivation is self-aggrandisement (aka boosting your brand) then you know you are inheriting instant technical debt.
You can simply ignore stuff that looks a bit shifty for a while and see what happens.
I don't think you have any formal contractual obligations or SLAs to anyone at all when it comes to Linux. You might have some sort of community obligation (whatever that might mean).
You are our documenter-in-chief and other unlikely sobriquets, and there has to be a point where you can say: "Please bugger off and go and boil your head", although I have noted you tend toward a more conciliatory style.
The frontline troops at LWN have always managed to get the tone just right in the harshest of discussions and defuse them suitably.
@corbet In my eyes, this is just a continuation of an older trend of submitting patches with no maintainability consideration.
I’ve seen it many times: a discussion of a feature is underway, someone creates a patch with no tests and a barely implemented happy path, someone else asks a few months later why it’s not merged…
It’s automated now.
@corbet sounds like you should use an LLM to generate an initial response to any PR, and then gauge if there's investment on the other side.
Another option - simply refuse to review large PRs.
Thank you. This puts well into words one of my main concerns about LLM-generated or assisted code or documentation.
@corbet The kernel is kind of unique though in that it can offer life-changing opportunities to contributors.
The vast bulk of other FOSS projects can't. The idea that contributors will stick around or take responsibility for maintaining their work is meaningless.
For these less mighty projects, contributors are entirely drive-by and it only happens during the short window the project is on their radar.
The project maintainer has to take on responsibility for whatever was offered, AI or not.
@corbet I remember in my FOSS past that failure to communicate was a show-stopper. As you say, you need someone responsible.
I personally treat everyone who doesn't own their own code as not responsible. I don't care what you do to get started, but own it. Simple as that.
@corbet @WesternInfidels I definitely have (not in the kernel, but still). It's such an uncanny-valley feeling when you thought you were talking to a human but then realize you weren't really.
@corbet I fear the precedent accepting this kind of thing will establish.
Not only in terms of asymmetric levels of AI slop vs. maintainer resource, but also concerns around social engineering from bad actors (not this doc patch but other recent, clearly AI, series).
Sadly mm is being run in a very chaotic fashion where, without pushback from sub-maintainers, we would just take this kind of thing.
It's very tiring to have to constantly be on alert for that, and I hope at some point the culture will change and we can move away from default-take-everything mode.
Another issue here is that it's often hard to really call out AI even if it really seems like it - if somebody denies it, you don't want it to turn into a witch trial.
As always, many social issues at play :)
@corbet this is the core problem with LLM generated anything. There is no personal or emotional bond with the generated output.
I love to code and learn from my mistakes. I am highly invested.
I cannot imagine anyone using LLMs doing or feeling the same.
@corbet @untitaker I moderate a fading Discord server with almost 2,000 members but only a handful of active chatters. The server is anti-LLM-based-AI, anti-cryptocurrency.
In late 2025 we published a couple of data-centric apps and then things petered out.
But every couple of weeks or months someone shows up to promote a new framework for our "funding problem" since we are entirely volunteer run, and not a nonprofit. And it is almost always a framework obviously created in "AI". It has the pattern of deluging us with new concepts and buzzwords. And almost always relies on some blockchain implementation to "handle" funding in a money laundering sense.
So we do the due diligence, ask clarifying questions, and after 0-1 rounds they ghost us. Every single time. They lose interest or find the challenge of answering pointed questions insurmountable or their pet AI's memory runs out. It's turning into a meme. I agree.
Kick them out. They and their temporary machines won't be able to maintain the effort.