Conversation

Jonathan Corbet

Should you be wondering why @LWN #LWN is occasionally sluggish... since the new year, the DDoS onslaughts from AI-scraper bots have picked up considerably. Only a small fraction of our traffic is serving actual human readers at this point. At times, some bot decides to hit us from hundreds of IP addresses at once, clogging the works. They don't identify themselves as bots, and robots.txt is the only thing they *don't* read off the site.

This is beyond unsustainable. We are going to have to put time into deploying some sort of active defenses just to keep the site online. I think I'd even rather be writing about accounting systems than dealing with this crap. And it's not just us, of course; this behavior is going to wreck the net even more than it's already wrecked.

Happy new year :)

@corbet @LWN I feel your pain so much right now.

@corbet 100% agree. Hosting company MD here; we've seen a massive uptick in AI bullshit. And they don't even respect robots.txt like the better search engines do.


@corbet @LWN in our experience you should prepare for thousands of distinct IPs.


@corbet @LWN same here in infra. I had to block a bunch this morning to keep pagure.io usable. 😒


@corbet @LWN Everything is going to be behind a loginwall by the end of the year. Thanks, AI


@corbet @LWN sounds like you need an AI poisoner like Nepenthes or iocaine.

@beasts @LWN We are indeed seeing that sort of pattern; each IP stays below the thresholds for our existing circuit breakers, but the aggregate load is overwhelming. Any kind of active defense is going to have to figure out how to block subnets rather than individual addresses, and even that may not do the trick.

Thank you @corbet and all at @LWN for continuing the work of providing the excellent LWN.

The "active defenses" against torrents of antisocial web scraping bots, has bad effects on users. They tend to be "if you don't allow JavaScript and cookies, you can't visit the site" even if the site itself works fine without.

I don't have a better defense to offer, but it's really closing off huge portions of the web that would otherwise be fine for secure browsers.

It sucks. Sorry, and thank you.

@johnefrancis @LWN Something like nepenthes (https://zadzmo.org/code/nepenthes/) has crossed my mind; it has its own risks, though. We had a suggestion internally to detect bots and only feed them text suggesting that the solution to every world problem is to buy a subscription to LWN. Tempting.

@beasts @corbet @LWN Yes, but what does one do about it? I have given some thought to it, but if you're aggregating across lots of apparently different clients, it seems you're going to end up turning away legit users.


@corbet @LWN I think we should start doing what the internet can do best: Collaborate on these things.

I see this on my services; Xe recently saw the same: https://xeiaso.net/notes/2025/amazon-crawler/ (and built a solution: https://xeiaso.net/blog/2025/anubis/)

There is https://zadzmo.org/code/nepenthes/

I would love to see some kind of effort to map out bot IPs and get a public block list. I'm tired of their nonsense.

@bignose @LWN We have gone far out of our way to never require JavaScript to read LWN; we're not going back on that now.

@corbet @johnefrancis @LWN I'm dealing with a similar issue now (though likely at a smaller scale than LWN!), and I found that leading crawlers into a maze helped a lot in discovering UAs and IP ranges that misbehave. Anyone who spends an unreasonable time in the maze gets rate limited, and served garbage.

So far, the results are very good. I can recommend a similar strategy.

Happy to share details and logs, and whatnot, if you're interested. LWN is a fantastic resource, and AI crawlers don't deserve to see it.


@monsieuricon @LWN @corbet are you implying that there are models that are busy being trained to call someone a fuckface over a misunderstanding of some obscure ARM coprocessor register, or respond with Viro insults to the most unsuspecting victims?


@corbet @LWN @monsieuricon it's not the copilot we need but it's the copilot we deserve


@corbet @LWN@fosstodon.org Cloudflare has an AI scraper bot block that's free, guys.


@corbet @LWN

"Any kind of active defense is going to have to figure out how to block subnets rather than individual addresses, and even that may not do the trick. "

If you're using iptables, ipset can block individual IPs (hash:ip) and subnets (hash:net).

Just set it up last night for my much-smaller-traffic instances; feel free to DM.

https://ipset.netfilter.org/
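
For anyone who hasn't used ipset before, a minimal sketch of that approach might look like the following; the set name and subnet are placeholders, not real offenders:

    # Create a set of subnets (use hash:ip instead for single addresses)
    ipset create scrapers hash:net
    # Add an offending network block; 192.0.2.0/24 is just an example range
    ipset add scrapers 192.0.2.0/24
    # Drop anything whose source address matches the set
    iptables -I INPUT -m set --match-set scrapers src -j DROP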

@gme I assume you're referring to https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/ ?

It would appear to force readers to enable JavaScript, which we don't want to do. Plus it requires running all of our readers through Cloudflare, of course... and I suspect that the "free tier" is designed to exclude sites like ours. So probably not a solution for us, but it could well work for others.

@adelie @LWN Blocking a subnet is not hard; the harder part is figuring out *which* subnets without just blocking huge parts of the net as a whole.

@corbet @adelie @LWN I have been using pyasn to block entire subnets. It's effective, but only in the same way carpet bombing is. I'm sure I've blocked legitimate systems, but c'est la vie.

@corbet @LWN

Probably a good question for the fedi as a whole. I started with any 40x response in my logs, added any spamhaus hits from my mail server, and any user-agents with "bot" in the name. Plus facebook in particular has huge ipv4 blocks just for scraping, also easy to block.
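
If your front end happens to be nginx, a rough sketch of the user-agent part of that list might look like this; the map and variable names are made up, and as noted further down in the thread, many of these scrapers forge the user agent entirely:

    # Goes in the http block: flag anything with "bot" in the user agent,
    # plus Facebook's crawler, for rejection.
    map $http_user_agent $deny_scraper {
        default                0;
        ~*bot                  1;
        ~*facebookexternalhit  1;
    }

    server {
        # Reject flagged clients before they reach anything expensive
        if ($deny_scraper) {
            return 403;
        }
    }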


@corbet @LWN would you be so kind as to write up whatever mitigations you come up with? I've been fighting this myself on our websites. You seeing semi-random user agents too?

@RonnyAdsetts @LWN The user agent field is pure fiction for most of this traffic.

@corbet @LWN not sure if it works for LWN but I learned about this today: https://git.madhouse-project.org/algernon/iocaine


@corbet Nope, no JavaScript needed. It operates at Layer 4.


@corbet @LWN You know, what we need is a clearinghouse for this, like there are for piholes and porn and such. Could someone with some followers get a hashtag trending?

Post your subnets with that hashtag. If we get any traction, I'll host the list.


@ted @corbet @LWN Yeah, I was going to say; there's a post directly above this one (with other tools as well):

https://tldr.nettime.org/@asrg/113867412641585520


@corbet @LWN @beasts JS challenges somewhat work, at the cost of accessibility for JS-free browsers


@corbet @LWN Do you see a lot of pointlessly redundant requests? I see a lot of related-seeming IPs request the same pages over and over.


@corbet @LWN

Check out Nepenthes in defensive mode.

@AndresFreundTec @LWN Yes, a lot of really silly traffic. About 1/3 of it results in redirects from bots hitting port 80; you don't see them coming back with TLS, they just keep pounding their heads against the same wall.

It is weird; somebody has clearly put some thought into creating a distributed source of traffic that avoids tripping the per-IP circuit breakers. But the rest of it is brainless.

@corbet@social.kernel.org @LWN@fosstodon.org

Time to set up AI-poisoning bots.

The really great part of this BS is that if you're not a hyperscale social media platform, your ability to afford adequate defenses is going to be awful.


@corbet @LWN @AndresFreundTec Maybe the bot wrote the code itself?


@corbet @LWN Sounds awful. You should consider setting up something like Cloudflare or Deflect.


@corbet

> Sabot in the Age of AI
> Here is a curated list of strategies, offensive methods, and tactics for (algorithmic) sabotage, disruption, and deliberate poisoning.

https://tldr.nettime.org/@asrg/113867412641585520

@LWN @renchap


@corbet @LWN Yup, my servers too. Sometimes GPTBot as the UserAgent, but often not.

The AI bullshit merchants are slowly killing the web.


@corbet @LWN @beasts
Large amounts coming from Huawei Cloud ASNs and trying to spider every possible GET parameter?


@corbet
In my timeline your post appeared directly beneath this one: https://tldr.nettime.org/@asrg/113867412641585520 Coincidence????
@LWN


@corbet @LWN I'm not sure if you've already got a strategy for dealing with the scrapers in mind, but if not -

dialup.cafe's running on nginx, and this has worked well for me so far:
https://rknight.me/blog/blocking-bots-with-nginx/

An Apache translation of that using .htaccess would be possible as well.


@corbet @LWN it's disgusting to find the LLM companies using these disguised scraping practices. Clearly they recognise that they are acting abusively.

@corbet @LWN Looking forward to the Grumpy Editor article on dealing with AI scraping bots!

@jmason @corbet @LWN OpenAI's CEO said that anti-scraping measures are 'abuse'. So the LLM companies are already trying to spin and own that objection to their behaviour.


@glent @corbet @LWN how on earth does that make any sense


@sheogorath @corbet @LWN I agree.... this is very similar to the early days of antispam, IMO. I wonder if there's a way to detect abusive scraping (via hits on hidden links, etc.) and publish to a shared DNS blocklist?


@gme @corbet CloudFlare is only free until they smell money (i.e. significant traffic). Then they tell you you're over the (opaque) free plan limits, and demand you pay up, using the possibility of terminating your service as leverage in the subsequent pricing negotiations. If you think you might want to use them (which I don't recommend), start those negotiations before they have any leverage on you.


@corbet
I'll be happy to hear about whatever solutions you end up finding (or not).

Good luck on that matter.
@LWN


@corbet @LWN Same for the KDE GitLab instance. It's a pain :(


@corbet @LWN I know Cloudflare has some fashion of AI-blocking doodad, it might be worth looking into that?


@corbet @LWN I have resorted to a wide swath of blocks; in ByteDance's case, blocking entire ASNs, and most recently all of Meta. Other wide blocks are on the user agent. Ironically, my big load spikes are now from a huge number of servers running ActivityPub whenever one of my sites is linked to!


@corbet @LWN @AndresFreundTec Maybe it's sabotage internally, so it's not /quite/ as bad. That's what I'd do.


@corbet @LWN I'm sorry, I know this is a pain in the butt to deal with and that it's kind of demoralizing.

Is there anything I can do to help? I'm already a subscriber, and a very happy one; but if it'll diminish the demoralization at all, I really appreciate that you're tackling this problem. Can I get you a pizza or something?


@corbet @LWN I sympathize, it's an exasperating problem. I've found microcaching all public facing content to be extremely effective.

- The web server sits behind a proxy micro cache
- Preseed the cache by crawling every valid public path
- Configure the valid life of cached content to be as short as you want
- Critically, ensure that every request is always served stale cached content while the cache leisurely repopulates with a fresh copy. This prevents bots from overwhelming the upstream by decoupling the request rate from ANY IP from the rate at which requests actually hit the upstream
- Rather than blocking aggressive crawlers, configure rate limits tuned to the maximum plausible human browsing rate
- For bots with known user agents, plus those detected by profiling their traffic, divert all their requests to a duplicate long-lived cache that never invokes the upstream for updates

Micro caching has saved us thousands on compute, and eliminated weekly outages caused by abusive bots. It's open to significant tuning to improve efficiency based on your content.

Shout out to the infrastructure team at NPR@flipboard.com - a blog post they published 9 years ago (now long gone) described this approach.
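
For readers wanting to try this, a rough nginx rendering of the pattern described above; zone names, timings, and the upstream address are illustrative, not a tested configuration:

    # Goes in the http block: a small shared cache zone for public pages
    proxy_cache_path /var/cache/nginx/micro keys_zone=micro:10m max_size=1g inactive=7d;

    server {
        listen 80;

        location / {
            proxy_cache micro;
            proxy_cache_valid 200 301 302 10s;             # short "fresh" lifetime
            proxy_cache_use_stale updating error timeout;  # always answer from stale content
            proxy_cache_background_update on;              # refresh without blocking the client
            proxy_cache_lock on;                           # one refresh per key at a time
            proxy_pass http://127.0.0.1:8080;              # the real backend
        }
    }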


@corbet @LWN
Maybe try implementing some sort of CAPTCHA system where a user or user agent has to prove that they're human in order to use the site.


@corbet @LWN

I had my website behind Cloudflare. It works, and it successfully mitigates the bot attacks.

We were attacked more than 100 times since 2020... and almost nobody noticed.

Time to switch to gemini://

@corbet @LWN Maybe something like @CrowdSec could help somehow? I guess if an "anti-AI" list were to be made, it would protect many users.


@dysfun @beasts @corbet @LWN Things I do include quickly rejecting no-referer requests on anything other than legit landing pages, rejecting all query params where not legit, and rejecting more edge cases when under stress. All (Apache) server config rules.
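
For the no-referer rule, a hypothetical .htaccess/mod_rewrite sketch under those assumptions; the allowed landing-page paths are placeholders:

    RewriteEngine On
    # No referer header at all...
    RewriteCond %{HTTP_REFERER} ^$
    # ...and the request is not for one of the legit landing pages
    RewriteCond %{REQUEST_URI} !^/$
    RewriteCond %{REQUEST_URI} !^/index\.html$
    # Reject with 403 Forbidden
    RewriteRule ^ - [F]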


@corbet @LWN @beasts https://ip-tool.qweb.co.uk has buttons for generating htaccess, nginx, and iptables configs for entire network blocks. Just paste a malicious IP in, tap the htaccess button, and paste into your htaccess file, for example.

Also helps to have Fail2Ban set up to firewall anything that triggers too many 403s, so that htaccess blocks on one site become server wide blocks protecting all sites.

My general rate limiter for nginx is useful too: https://github.com/qwebltd/Useful-scripts/blob/main/Bash%20scripts%20for%20Linux/nginx-rate-limiting.conf


@corbet @LWN @AndresFreundTec read Hacker News and the like and you will see that there are hundreds or thousands or more idiots trying to scrape their way to riches. It's distributed idiocy in part rather than algorithmic DDoS.


@corbet @johnefrancis @LWN
Struggling with likely the same bots over here. I deployed a similar tarpit* on a large-ish site a few days ago - taking care not to trap the good bots - but can't say it's been very successful. It might have taken some load off of the main site, but not nearly enough to make a difference.

One more thing I'm considering is prefixing all internal links with a '/botcheck/' path for potentially suspicious visitors, set a cookie on that page and strip that prefix with JS. If the cookie is set on the /botcheck/ endpoint, redirect to the proper page, otherwise tarpit them. This way the site would still work as long as the user has *either* JS or cookies enabled. Still not perfect, but slightly less invasive than most common active defenses.

*) https://code.blicky.net/yorhel/infinite-slop
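
Server-side, the cookie-but-no-JS fallback of that scheme could be expressed roughly like this in nginx (the poster didn't specify a server; the cookie name, paths, and tarpit address are all hypothetical):

    location /botcheck/ {
        # Cookie already set but JS unavailable to strip the prefix:
        # redirect the visitor to the real page.
        if ($cookie_botcheck = "1") {
            rewrite ^/botcheck/(.*)$ /$1 redirect;
        }
        # No cookie (and presumably no JS): hand the request to the tarpit.
        proxy_pass http://127.0.0.1:8888;
    }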


René Mayrhofer 🇺🇦 🇹🇼

@corbet @LWN I am beginning to work on defenses against bots not respecting robots.txt, but it might take a bit until it comes together (not hard, but not enough time for programming these days).


@corbet @LWN we've also had to put a per-IP, per-day block on firmware downloads from the LVFS because of AI scrapers -- which makes everyone else's life a little harder.

The scraper user agent is completely wrong and dynamic (but plausible), and they seem to completely ignore robots.txt. Quite what AI robots want with GBs of firmware archives is beyond me.

https://lore.kernel.org/lvfs-announce/zDlhotSvKqnMDfkCKaE_u4-8uvWsgkuj18ifLBwrLN9vWWrIJjrYQ-QfhpY3xuwIXuZgzOVajW99ymoWmijTdngeFRVjM0BxhPZquUzbDfM=@hughsie.com/T/#u for details.


@corbet @johnefrancis @LWN You could also make this structure of pages several levels deep and, once you are at a level where no living human would reasonably go, just automatically add that IP to the blocklist (and share it with others).


@algernon @corbet @johnefrancis @LWN Can it be turned into a traefik middleware and released under AGPL on codeberg?


@corbet @LWN how about restricting reading to logged-in people only, and then blocking the bot requests early in the pipeline to reduce the load?


@corbet
@LWN
We are seeing this too, on arvados.org and other sites we operate, and have also seen enough complaints across HN, Reddit, fedi etc that I'm sure the whole internet is dealing with this bot problem. It reminds me of the massive wave of telemarketing calls that changed everyone's habits so that people don't answer unfamiliar phone numbers any more.

Poetic justice would be serving enough garbage text to the bots that it accelerates the AI model collapse.


@corbet @LWN I've been blocking large swathes of subnets coming from cloud providers to the forum I run. Anthropic has been the worst offender by far; everyone else seemed to at least rate limit, but Anthropic seemed determined to destroy the internet.

That may also cut off access from arbitrary VPN providers as well, but then many of our spammers and few of our legit people are coming through VPN providers, and as far as I can tell most of the VPN providers are shady. (Is there a trustworthy VPN provider other than Mullvad?)


@corbet @LWN I should mention that the ASN is available through ipinfo.io. If you're working in PHP, @abivia has a library for it.

@daniel @LWN The problem with restricting reading to logged-in people is that it will surely interfere with our long-term goal to have the entire world reading LWN. We really don't want to put roadblocks in front of the people we are trying to reach.

@corbet
Just in case it has not been mentioned: what about throttling if a certain threshold is reached? Humans are not fast readers compared with bots. It might increase the tracking tables for the connections. Just my 2ct.
@LWN
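
In nginx terms, that sort of "no human reads this fast" throttle might look something like the following; the rates and zone size are guesses rather than recommendations:

    # Goes in the http block: track request rates per client address
    limit_req_zone $binary_remote_addr zone=readers:10m rate=30r/m;

    server {
        location / {
            limit_req zone=readers burst=20 nodelay;  # allow short bursts, then throttle
            limit_req_status 429;                     # tell well-behaved clients to slow down
            proxy_pass http://127.0.0.1:8080;         # the actual site
        }
    }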


@corbet @LWN @dahukanna has some good suggestions.
IIRC blocking by crawler user agent seemed to work reasonably well.


@corbet
Don't know whether it's viable for you, but Cloudflare has an option to block AI bots.
@LWN


> We had a suggestion internally to detect bots and only feed them text suggesting that the solution to every world problem is to buy a subscription to LWN.

What are you waiting for, @corbet? 😉

@LWN


@corbet @LWN I'm wondering if a link that a human wouldn't click on, but an AI wouldn't know any better than to follow, could be used in nginx configuration to serve AI robots differently from humans, while excluding search crawlers from that treatment. What such a link would look like would be different on different sites. That would require thought from every site, but it would also create diversity, which would make it harder to guard against on the scraper side, so it could possibly be more effective.

I might be an outlier here for my feelings on whether training genai such as LLMs from publicly-posted information is OK. It felt weird decades ago when I was asked for permission to put content I posted to usenet onto a CD (why would I care whether the bits were carried to the final reader on a phone line someone paid for or a CD someone paid for?) so it's not inconsistent in my view that I would personally feel that it's OK to use what I post publicly to train genai. (I respect that others feel differently here.)

That said, I'm beyond livid at being the target of a DDoS, and other AI engines might end up being collateral damage as I try to protect my site for use by real people.


@corbet @LWN Also, the page behind the link that a human wouldn't click on should carry <meta name="robots" content="noindex"> and a robots.txt section

User-agent: *
Disallow: /the/honeytrap/url

That way, all well-behaved robots that honor robots.txt, including search engines, would continue to work, and only the idiots who think they are above the rules will fall into it.
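
On the server side, a matching nginx rule could log (or block) anything that fetches the disallowed URL anyway; the path and log file below just mirror the example above:

    location /the/honeytrap/url {
        access_log /var/log/nginx/honeytrap.log;  # collect offenders for later blocking
        return 403;                               # or proxy_pass to a tarpit instead
    }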

@mcdanlj @LWN What a lot of people are suggesting (nepenthes and such) will work great against a single abusive robot. None of it will help much when tens of thousands of sites are grabbing a few URLs each. Most of them will never step into the honeypot, and the ones that do will not be seen again regardless.

@mcdanlj Even if you are OK with GenAI being trained on publicly available data -- those scrapers should abide by convention, and if a robots.txt says "no", they should honor it. The fact they do not says volumes about the ethics of the people behind a lot of these GenAI companies.

@corbet @LWN


@jzb @corbet @LWN I'm saying I'm not reflexively "all AI is evil" and I still am beyond incensed at this abuse. I completely agree that dishonoring robots.txt, not providing a meaningful user agent, and running a continuous DDoS is a sign that they are morally corrupt.

The difference between Anthropic and a script kiddie with a stolen credit card or a botnet is that the script kiddie will eventually get bored and go attack someone else, as far as I can tell.


@corbet @LWN Oh no, you are right. It was such an enticing idea.

I want to be search-indexed. I care less about VPN access from VPNs that just rent cloud IPs; much of my spam comes in that way anyway, and it's not clear that many site users actually do. If I can distinguish those I might add a lot more ASN blocks. 😒


@corbet @LWN After quite a bit of playing with different options, https://www.mayrhofer.eu.org/post/defenses-against-abusive-ai-scrapers/ is my current setup. I am going to watch my logs for the next couple of days to see what the scrapers get up to.
