Conversation

Yet more AI slop doc changes being sent.

You might think I'm harsh in how I reply, but it's so disrespectful to:

a. lie about having done the work yourself, and
b. waste our time with this crap.

If we tolerate it even slightly, the floodgates will open, and maintainers already have very little time.

Now that upstream work is moving towards being a hobby for me, that goes even more so...!


Just think about the level of fucking contempt you have to have to get AI to generate some shit you don't understand, possibly even get it to send the patch mail, and then expect a pat on the head.

It's unreal really...


@ljs we had someone come in with a fully AI-generated "solution" for virtualizing macOS on M4, untested and obviously non-functional ofc, who then got pissed and started insulting us when we told them it was useless :/


@sven yeah, the correct approach is to VERY firmly say NO NO NO right away. Then ideally block.

These people are either good faith but very, very misled about what 'helps', AI nutters who think it can magically do things it can't, or - the vast majority, I think - bad faith people who are trolling, lazy, or after kudos they didn't earn.

In any of those cases a short, sharp shock saying 'go away' very clearly is needed.

It's all a bit parental really...


@ljs yeah, it baffles me that people think we can’t use LLMs ourselves if we wanted to and would rather use them through a slow meat-based proxy without background knowledge or any useful skills instead :/

It’s partially why we have a very strict "no AI at all" policy right now.


@sven exactly...

I am actually in favour of AI for things like code review, finding bugs, when used in conjunction with people who _understand the thing_.

For the kernel it still generates useless, god-awful code, so that side's not helpful, but the other stuff - yes.

But AI slop is never wanted, ever.


@ljs Yes, I understand how you feel. It sucks. I had a similar situation with someone recently who moved on to reviewing random stuff in staging drivers... now @gregkh is dealing with them.

Also, this person somehow got a linux.dev account, which is a common pattern among AI users. I guess they believe the linux.dev email gives them some authority or something, I don't know.

Btw, reporting them to linux.dev does not help at all :(


@ljs stinks you have to deal with that. I use LLM-assisted stuff at work, but it's not a public-consumption thing, and a lot of the time it's a single-use tool.

Open source folks having to deal with this low effort crud filling up the pipes is sad. So much about all this is sad.


@christoff using LLMs is totally fine, there's loads of ways in which it's useful.

Sending wholly-generated AI slop you have no understanding of, let alone the ability to check, is NOT fine :)

@ljs It's pretty foreseeable that we'll be seeing a lot more of this! In the AI era, the barrier to submitting patches has basically disappeared. Anyone can just ask Codex or Claude to spot some minor issues and whip up a few patches. I honestly feel like we might soon reach a point where we just can't keep up with reviewing all of them. AI coding has only really been a thing for two years now. I can't even begin to imagine what it'll be like in another two!

@haoli yup, this is why the 'just another tool' take is so very wrong.

We need to have better processes for banning/dismissing people/slop basically.

Maybe an anti-LLM slop bot (itself an LLM)?

1
0
1
@ljs exactly - for example, email could have a trust score, kind of like spam detection. Or patches could get the same treatment as essays: run them through a detector to see how likely it is they were written by AI :P
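
A minimal sketch of what such a spam-style trust score might look like: every signal, threshold, weight, and name below is invented purely for illustration, not taken from any real tooling.

```python
# Hypothetical sketch of the "trust score" idea above: combine a few
# spam-filter-style signals about a patch submission into one number.
# All signals, weights, and thresholds here are made up for illustration.

from dataclasses import dataclass

@dataclass
class Submission:
    sender_prior_patches: int   # previously accepted patches from this sender
    touches_files: int          # number of files the patch touches
    builds_cleanly: bool        # did an automated build of the patch succeed
    ai_detector_score: float    # 0.0 (human-looking) .. 1.0 (AI-looking)

def trust_score(s: Submission) -> float:
    """Return a score in [0, 1]; higher means more trustworthy."""
    score = 0.5
    # Established contributors earn trust, capped at +0.3.
    score += min(s.sender_prior_patches, 30) / 100
    # A patch that does not even build is a strong negative signal.
    if not s.builds_cleanly:
        score -= 0.4
    # Sprawling patches from first-time senders are suspicious.
    if s.sender_prior_patches == 0 and s.touches_files > 5:
        score -= 0.2
    # Weight the (noisy) AI-detector output lightly, as one signal of many.
    score -= 0.2 * s.ai_detector_score
    return max(0.0, min(1.0, score))

# A first-time sender with a large, non-building, AI-flagged patch scores
# at the floor, while an established sender with a small clean patch does not:
slop = Submission(sender_prior_patches=0, touches_files=12,
                  builds_cleanly=False, ai_detector_score=0.9)
ok = Submission(sender_prior_patches=40, touches_files=2,
                builds_cleanly=True, ai_detector_score=0.1)
print(trust_score(slop), trust_score(ok))  # prints 0.0 0.78
```

In this toy version the detector output is deliberately underweighted, reflecting the thread's point that AI detectors are unreliable on their own; the build and track-record signals do most of the work.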