Conversation

The AI slop in security reports has evolved slightly over time. There are fewer mind-numbingly stupid reports now, but instead almost *everyone* writes their reports with AI, so they still get overly long and complicated to plow through. And every follow-up question gets answered with another minor essay discussing pros and cons, with bullet points and references to multiple specifications.

Exhausting nonetheless.

@bagder whenever I ask an LLM something, I prefix my question with “briefly:” or “be concise”.

I’m curious if this would work, considering that many may copy-paste your response into an LLM.

@whynothugo I would prefer if they tried using their brains

@bagder I thought you said “exhausting nothingness” for a moment and I think that also fits generally.
@Thewolfofallstreetbiz360 I miss them just about every day these days... 🤔

@bagder Same, I’m just staying within the domain of what’s feasible.

@bagder Maybe you should start using AI to generate those follow-up questions… just to help them understand why you don’t like that. 🤔

You can all see this in the recently disclosed curl issues on HackerOne: https://hackerone.com/curl/hacktivity?type=team

@bagder Have you ever considered that by posting these comments online you're helping train them to bypass your objections?

@oneloop you mean because of my few complaints they would start doing 20-line responses instead of 400-line ones? First: why would that be bad? But then: I'm absolutely sure they don't adapt to my few comments in a world drowning in content.

@bagder On the first point: I imagine that an LLM improving in style is way way easier than improving in substance. So if LLMs read your posts and improve in style, you'll get the same substance-slop but harder for you to detect. That's bad for you.

On the second point: I'm not so sure about that. I'll come back to this point.

@oneloop I'm not fighting the use of AI. I'm fighting users making my life harder. If an AI can explain their issue in 20 lines and the issue is legit, that's a pretty big win.

@bagder Back on the second point. Here's a link [1]

This article calls this "poisoning" because they're talking about malicious attacks.

[1] https://www.anthropic.com/research/small-samples-poison

@bagder Here's the passage I wanted to show you:

> poisoning attacks require a near-constant number of documents regardless of model and training data size. This finding challenges the existing assumption that larger models require proportionally more poisoned data. Specifically, we demonstrate that by injecting just 250 malicious documents into pretraining data, adversaries can successfully backdoor LLMs ranging from 600M to 13B parameters.

@bagder To me this suggests that an LLM could indeed learn from one specific individual if it deems that individual "important" enough.

In that article they talk about triggering on a "specific phrase", but an LLM has its own internal representation of the text, so in principle it might trigger on different things, like a specific phrase plus the name of the user on a certain website, or something.

@oneloop so back to: that sounds like it would be good for me

@bagder I understand, so I have a follow up question for you: annoying formatting style aside, are you seeing that AI actually finds bugs?

@bagder I'm not sure people understand how cognitively tiring these things are until they have had to deal with them themselves. When they first started appearing as bug reports, I wasted a lot of time trying to figure out where an issue was, or even if there was an issue at all. But now if I see an LLM-generated report, my brain just skips over it, which isn't always the right thing to do either.

@chillicampari yeah, I've come to do that as well and I skip straight to asking follow-up / clarification questions directly instead of trying to read that wall of text

@bagder
Happy 2026, and thanks for continuing to report on how this affects one of the most widely integrated software programs.

@bagder just curious, has anyone reported that you can pipe curl output into a shell, which will lead to RCE? :blobcatthisisfine2:
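For anyone who misses the joke: it refers to the long-running `curl | sh` install pattern, where the "remote code execution" is the user deliberately executing whatever the server sends. A minimal offline sketch of the pattern (`printf` stands in for `curl` here, and the URL in the comment is purely illustrative):

```shell
# In the wild this looks like:
#   curl -fsSL https://example.com/install.sh | sh
# i.e. the fetched script runs immediately with your shell's privileges.
# Offline stand-in, with printf playing the role of curl:
printf 'echo hello from the piped script\n' | sh
```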
