@corbet 100% agree. Hosting company MD here, we've seen a massive uptick in AI bullshit. And they don't even respect robots.txt like the better search engines do.
Thank you @corbet and all at @LWN for continuing the work of providing the excellent #LWN.
The "active defenses" against torrents of antisocial web scraping bots, has bad effects on users. They tend to be "if you don't allow JavaScript and cookies, you can't visit the site" even if the site itself works fine without.
I don't have a better defense to offer, but it's really closing off huge portions of the web that would otherwise be fine for secure browsers.
It sucks. Sorry, and thank you.
@corbet @LWN I think we should start doing what the internet can do best: Collaborate on these things.
I see this on my services; Xe recently saw the same: https://xeiaso.net/notes/2025/amazon-crawler/ (and built a solution: https://xeiaso.net/blog/2025/anubis/)
There is https://zadzmo.org/code/nepenthes/
I would love to see some kind of effort to map out bot IPs and get a public block list. I'm tired of their nonsense.
@corbet @johnefrancis @LWN I'm dealing with a similar issue now (though likely at a smaller scale than LWN!), and I found that leading crawlers into a maze helped a lot in discovering UAs and IP ranges that misbehave. Anyone who spends an unreasonable time in the maze gets rate limited, and served garbage.
So far, the results are very good. I can recommend a similar strategy.
Happy to share details and logs, and whatnot, if you're interested. LWN is a fantastic resource, and AI crawlers don't deserve to see it.
@monsieuricon @LWN @corbet are you implying that there are models busy being trained to call someone a fuckface over a misunderstanding of some obscure ARM coprocessor register, or to respond with Viro insults to the most unsuspecting victims?
@corbet @LWN @monsieuricon it's not the copilot we need but it's the copilot we deserve
@corbet @LWN@fosstodon.org Cloudflare has an AI scraper bot block that's free, guys.
"Any kind of active defense is going to have to figure out how to block subnets rather than individual addresses, and even that may not do the trick. "
if you're using iptables, ipset can block individual IPs (hash:ip) and subnets (hash:net).
Just set it up last night for my much-smaller-traffic instances, feel free to DM
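A rough sketch of that setup in Python, in case it saves anyone some typing (the set names and sample addresses are placeholders; it assumes the ipset and iptables binaries are on the box and the script runs as root):

```python
# Feed abusive IPs/subnets into ipset sets that a single iptables
# DROP rule references, so the rule set stays tiny no matter how
# many addresses you block.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# "-exist" makes create/add idempotent, so re-running is safe.
run("ipset", "-exist", "create", "bot_ips", "hash:ip")
run("ipset", "-exist", "create", "bot_nets", "hash:net")

# Block an individual address and a whole subnet (sample values).
run("ipset", "-exist", "add", "bot_ips", "203.0.113.42")
run("ipset", "-exist", "add", "bot_nets", "198.51.100.0/24")

# Hook both sets into the INPUT chain if they aren't there yet.
for setname in ("bot_ips", "bot_nets"):
    check = subprocess.run(
        ["iptables", "-C", "INPUT", "-m", "set",
         "--match-set", setname, "src", "-j", "DROP"],
        capture_output=True)
    if check.returncode != 0:  # rule not present yet
        run("iptables", "-I", "INPUT", "-m", "set",
            "--match-set", setname, "src", "-j", "DROP")
```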
Also tarpits! And nepenthes and nepenthes-adjacent tech!
https://tldr.nettime.org/@asrg/113867412641585520
https://gist.github.com/flaviovs/103a0dbf62c67ff371ff75fc62fdded3
@corbet @LWN not sure if it works for LWN but I learned about this today: https://git.madhouse-project.org/algernon/iocaine
@corbet Nope, no Javascript needed. It operates at Layer 4.
@corbet @LWN You know, what we need is a clearinghouse for this like there are for piholes and porn and such. Could someone with some followers get #AIblacklist trending?
Post your subnets with that hashtag. If we get any traction, I'll host the list.
@corbet @LWN I was just reading about https://git.madhouse-project.org/algernon/iocaine
@corbet@social.kernel.org @LWN@fosstodon.org
Time to set up AI-poisoning bots.
Really great part of this BS is that if you're not a hyperscale social media platform, your ability to afford adequate defenses is going to be awful.
@corbet @LWN @AndresFreundTec Maybe the bot wrote the code itself?
> Sabot in the Age of AI
> Here is a curated list of strategies, offensive methods, and tactics for (algorithmic) sabotage, disruption, and deliberate poisoning.
@corbet
In my timeline your post appeared directly beneath this one: https://tldr.nettime.org/@asrg/113867412641585520 Coincidence????
@LWN
@corbet @LWN i'm not sure if you've already got a strategy for dealing with the scrapers in mind, but if not -
dialup.cafe's running on nginx, and this has worked well for me so far:
https://rknight.me/blog/blocking-bots-with-nginx/
an apache translation of that using .htaccess would be possible as well.
@sheogorath @corbet @LWN I agree.... this is very similar to the early days of antispam, IMO. I wonder if there's a way to detect abusive scraping (via hits on hidden links, etc.) and publish to a shared DNS blocklist?
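A sketch of what the lookup side of such a list could look like, using the same reversed-octet convention mail DNSBLs use (the zone name is entirely hypothetical):

```python
# Check whether an address is on a hypothetical shared scraper
# blocklist published over DNS, DNSBL-style: reverse the octets and
# query them under the blocklist zone; any A record means "listed".
import socket

def is_listed(ip: str, zone: str = "scrapers.bl.example.org") -> bool:
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)   # resolves -> listed
        return True
    except socket.gaierror:           # NXDOMAIN -> not listed
        return False

if __name__ == "__main__":
    print(is_listed("203.0.113.42"))
```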
@gme @corbet CloudFlare is only free until they smell money (i.e. significant traffic). Then they tell you you're over the (opaque) free plan limits, and demand you pay up, using the possibility of terminating your service as leverage in the subsequent pricing negotiations. If you think you might want to use them (which I don't recommend), start those negotiations before they have any leverage on you.
@corbet
This just came into my timeline, in case it helps : https://tldr.nettime.org/@asrg/113867412641585520
@LWN
@corbet @LWN @AndresFreundTec Maybe it's sabotage internally, so it's not /quite/ as bad. That's what I'd do.
@corbet @LWN I'm sorry, I know this is a pain in the butt to deal with and that it's kind of demoralizing.
Is there anything I can do to help? I'm already a subscriber, and a very happy one; but if it'll diminish the demoralization at all, I really appreciate that you're tackling this problem. Can I get you a pizza or something?
@algernon @corbet @johnefrancis @LWN thank you for offering to protect a thing I love
@corbet @LWN I sympathize, it's an exasperating problem. I've found microcaching all public facing content to be extremely effective.
- The web server sits behind a proxy micro cache
- Preseed the cache by crawling every valid public path
- Configure the valid life of cached content to be as short as you want
- Critically, ensure that every request is always served stale cached content while the cache leisurely repopulates with a fresh copy. This eliminates bot overwhelm by decoupling the incoming request rate, from ANY IP, from the request rate hitting the upstream
- Rather than blocking aggressive crawlers, configure rate-limiting customized by profiling max predicted human rate
- For bots with known user agents, plus those detected by profiling their traffic, divert all their requests to a duplicate long lived cache that never invokes the upstream for updates
Micro caching has saved us thousands on compute, and eliminated weekly outages caused by abusive bots. It's open to significant tuning to improve efficiency based on your content.
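For anyone who wants to picture the serve-stale behaviour, here's a toy sketch of it in Python; a real deployment would do this in the proxy layer itself (e.g. nginx's proxy_cache with stale serving) rather than in application code, and the upstream address below is a placeholder:

```python
# Toy stale-while-revalidate cache: a cached copy is always returned
# immediately, and an expired entry triggers at most one background
# refresh of the upstream, so client request rate never translates
# into upstream request rate.
import threading, time, urllib.request

TTL = 5                               # seconds a copy counts as fresh
UPSTREAM = "http://127.0.0.1:8080"    # placeholder origin server

_cache = {}          # path -> (timestamp, body)
_refreshing = set()  # paths currently being refreshed
_lock = threading.Lock()

def _fetch(path):
    body = urllib.request.urlopen(UPSTREAM + path).read()
    with _lock:
        _cache[path] = (time.time(), body)
        _refreshing.discard(path)

def get(path):
    with _lock:
        entry = _cache.get(path)
        if entry is not None:
            ts, body = entry
            if time.time() - ts > TTL and path not in _refreshing:
                # Stale: hand back the old copy right away and
                # repopulate in the background.
                _refreshing.add(path)
                threading.Thread(target=_fetch, args=(path,),
                                 daemon=True).start()
            return body
    # Cold cache -- ideally avoided by preseeding with a crawl.
    _fetch(path)
    return _cache[path][1]
```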
Shout out to the infrastructure team at NPR@flipboard.com - a blog post they published 9 years ago (now long gone) described this approach.
@corbet @LWN @beasts https://ip-tool.qweb.co.uk has buttons for generating htaccess, nginx, and iptables configs for entire network blocks. Just paste a malicious IP in, tap the htaccess button, and paste into your htaccess file, for example.
Also helps to have Fail2Ban set up to firewall anything that triggers too many 403s, so that htaccess blocks on one site become server wide blocks protecting all sites.
My general rate limiter for nginx is useful too: https://github.com/qwebltd/Useful-scripts/blob/main/Bash%20scripts%20for%20Linux/nginx-rate-limiting.conf
@corbet @LWN @AndresFreundTec read Hacker News and the like and you will see that there are hundreds or thousands or more idiots trying to scrape their way to riches. It's distributed idiocy in part rather than algorithmic DDoS.
@corbet @johnefrancis @LWN
Struggling with likely the same bots over here. I deployed a similar tarpit* on a large-ish site a few days ago - taking care not to trap the good bots - but can't say it's been very successful. It might have taken some load off of the main site, but not nearly enough to make a difference.
One more thing I'm considering is prefixing all internal links with a '/botcheck/' path for potentially suspicious visitors, set a cookie on that page and strip that prefix with JS. If the cookie is set on the /botcheck/ endpoint, redirect to the proper page, otherwise tarpit them. This way the site would still work as long as the user has *either* JS or cookies enabled. Still not perfect, but slightly less invasive than most common active defenses.
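Roughly, the /botcheck/ endpoint could look like this (a Flask sketch; the route, cookie name, and the tarpit handling for repeat offenders are placeholders or left out):

```python
# Sketch of the /botcheck/ idea: a cookie gets you redirected
# server-side, and the JS shim strips the prefix client-side, so
# either cookies *or* JS is enough to reach the real page.
from flask import Flask, make_response, redirect, request

app = Flask(__name__)

CHALLENGE_HTML = """<html><body>
<script>
  /* Browsers with JS enabled strip the prefix client-side. */
  document.cookie = "humanish=1; path=/";
  location.replace(location.pathname.replace(/^\\/botcheck/, ""));
</script>
<noscript>Loading... (enable JavaScript or cookies to continue)</noscript>
</body></html>"""

@app.route("/botcheck/<path:target>")
def botcheck(target):
    if request.cookies.get("humanish") == "1":
        # Cookies work: send them straight to the real page.
        return redirect("/" + target)
    # Otherwise serve the JS shim and set the cookie as a fallback.
    # (Tarpitting clients that keep coming back with neither is
    # left out of this sketch.)
    resp = make_response(CHALLENGE_HTML)
    resp.set_cookie("humanish", "1")
    return resp
```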
@corbet @LWN we've also had to put a per-day IP block on firmware downloads from the #LVFS because of AI scrapers -- which makes everyone else's life a little harder.
The scraper user agent is completely wrong and dynamic (but plausible), and they seem to completely ignore robots.txt. What AI robots want with GBs of firmware archives is quite beyond me.
@corbet @johnefrancis @LWN You could also make this structure of pages several levels deep and, once a visitor is at a level where no living human would reasonably go, automatically add that IP to the blocklist (and share it with others).
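Something like this, as a rough Python sketch (the depth threshold and helper name are made up):

```python
# Track how deep into the maze each client has wandered and flag its
# address once it passes a depth no human would reach.
from collections import defaultdict

MAX_HUMAN_DEPTH = 4          # deeper than this -> almost certainly a bot
_depth_seen = defaultdict(int)

def maze_hit(ip: str, path: str) -> bool:
    """Record a hit on a maze URL like /maze/a/b/c/...; return True
    if the IP should be added to the (shared) blocklist."""
    depth = len([p for p in path.split("/") if p]) - 1  # segments past /maze
    _depth_seen[ip] = max(_depth_seen[ip], depth)
    return _depth_seen[ip] > MAX_HUMAN_DEPTH

# e.g. on each maze request (append_to_blocklist is hypothetical):
#   if maze_hit(request.remote_addr, request.path):
#       append_to_blocklist(request.remote_addr)
```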
@algernon @corbet @johnefrancis @LWN Can it be turned into a traefik middleware and released under AGPL on codeberg?
@corbet
@LWN
We are seeing this too, on arvados.org and other sites we operate, and have also seen enough complaints across HN, Reddit, fedi etc that I'm sure the whole internet is dealing with this bot problem. It reminds me of the massive wave of telemarketing calls that changed everyone's habits so that people don't answer unfamiliar phone numbers any more.
Poetic justice would be serving enough garbage text to the bots that it accelerates the AI model collapse.
@corbet @LWN I've been blocking large swathes of subnets coming from cloud providers to the forum I run. Anthropic has been the worst offender by far; everyone else seemed to at least rate limit, but Anthropic seemed determined to destroy the internet.
That may also cut off access from arbitrary VPN providers as well, but then many of our spammers and few of our legit people are coming through VPN providers, and as far as I can tell most of the VPN providers are shady. (Is there a trustworthy VPN provider other than Mullvad?)
@corbet @LWN @dahukanna has some good suggestions
IIRC blocking by crawler user agent seemed to work reasonably well
@monsieuricon @LWN @corbet Is this why lore is so slow today?
@corbet @LWN uhm, someone could, for a friend, check this out? https://git.madhouse-project.org/algernon/iocaine#iocaine
@corbet @LWN I'm wondering if a link that a human wouldn't click on, but an AI crawler wouldn't know any better than to follow, could be used in the nginx configuration to serve AI robots differently from humans, while excluding search crawlers from that treatment. What such a link would look like would differ from site to site. That would require thought from every site, but it would also create diversity, which would make it harder to guard against on the scraper side, so it could be more effective.
I might be an outlier here for my feelings on whether training genai such as LLMs from publicly-posted information is OK. It felt weird decades ago when I was asked for permission to put content I posted to usenet onto a CD (why would I care whether the bits were carried to the final reader on a phone line someone paid for or a CD someone paid for?) so it's not inconsistent in my view that I would personally feel that it's OK to use what I post publicly to train genai. (I respect that others feel differently here.)
That said, I'm beyond livid at being the target of a DDoS, and other AI engines might end up being collateral damage as I try to protect my site for use by real people.
@corbet @LWN Also, the page behind the link that a human wouldn't click on should carry <meta name="robots" content="noindex">
and robots.txt should have a section:
User-agent: *
Disallow: /the/honeytrap/url
That way, all well-behaved robots that honor robots.txt, including search engines, would continue to work, and only the idiots who think they are above the rules will fall into it.
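If anyone wants to harvest the offenders afterwards, here's a small sketch that pulls them out of a combined-format access log (the log path is a placeholder, and the trap URL is the example one above):

```python
# Collect the addresses that ignored the robots.txt Disallow and
# fetched the honeytrap URL anyway, from an nginx/Apache
# combined-format access log.
import re

LOG = "/var/log/nginx/access.log"   # placeholder path
TRAP = "/the/honeytrap/url"

def offenders(logfile: str = LOG) -> set[str]:
    ips = set()
    # combined format: IP ident user [date] "METHOD /path HTTP/x.y" ...
    line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|HEAD) (\S+)')
    with open(logfile) as fh:
        for line in fh:
            m = line_re.match(line)
            if m and m.group(2).startswith(TRAP):
                ips.add(m.group(1))
    return ips

if __name__ == "__main__":
    for ip in sorted(offenders()):
        print(ip)   # feed these into ipset, fail2ban, a shared list, ...
```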
@jzb @corbet @LWN I'm saying I'm not reflexively "all AI is evil" and I still am beyond incensed at this abuse. I completely agree that dishonoring robots.txt, not providing a meaningful user agent, and running a continuous DDoS is a sign that they are morally corrupt.
The difference between Anthropic and a script kiddie with a stolen credit card or a botnet is that the script kiddie will eventually get bored and go attack someone else, as far as I can tell.
@corbet @LWN Oh no, you are right. It was such an enticing idea.
I want to be search-indexed. I care less about VPN access from VPNs that just rent cloud IPs; much of my spam comes in that way anyway, and it's not clear that many site users actually do. If I can distinguish those I might add a lot more ASN blocks.
@corbet @LWN After quite a bit of playing with different options, https://www.mayrhofer.eu.org/post/defenses-against-abusive-ai-scrapers/ is my current setup. I am going to watch my logs for the next couple of days to see what the scrapers get up to.