"If Linux can be maintained by sending patches to an email mailing list, 'doesn’t work at scale' arguments are skill issues."
https://dbushell.com/2026/04/29/github-is-sinking/
Typical ML argument: "If I can read something legally, why can't I train an LLM on it?"
Humans are capable of reading things and later writing a similar thing that is still a copyright violation. If I go and write a book that follows the plot line of Star Wars, that's still a copyright violation, even if no text is literally the same. If I play the melody to a song on my piano and release it without the appropriate mechanical cover license, that's also a copyright violation.
The reason this does not happen often is that, as humans, we are aware that it's plagiarism and that there are rules. Sometimes it happens by accident, and people still get sued and lose.
LLMs have no such awareness and routinely output things which are blatant copyright violations when appropriately prompted. That means the model weights encode that work and are therefore themselves a derivative work.
Your brain encodes a massive amount of copyrighted information. You are not a walking copyright violation because humans aren't data, can't be copied and distributed en masse, have human rights, etc. This is why "mind reading machines" are a classic dystopian plot point (monetizing your thoughts etc).
An LLM is not a human, and it has neither human rights nor human privileges. It is data, and if it encodes copyrighted information, that's a derivative work. If you aren't following the license of the training data, that's a copyright violation.
A lot of people are apparently happily running a script clearly marked as a root exploit from some random website using curl | bash
Some do inspect the script, but then still run it using curl | bash anyway.
Incidentally, this very relevant blogpost about detecting curl | bash and serving different scripts based on that is almost exactly a decade old:
https://web.archive.org/web/20230318063325/https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/
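For anyone who hasn't read it: the trick in that post is that a server can often tell whether its output is being piped straight into a shell, because bash pauses the download while it executes each line it has already received. Below is a rough, hypothetical C sketch of that idea, not the blog post's actual code; the port, padding size, and 1-second threshold are made-up values, and real detection has to account for pipe and TCP buffering on both ends.

```c
/* toy_detect.c - hypothetical sketch of timing-based "curl | bash" detection.
 * Not the blog post's code; the threshold and padding size are guesses.
 * Build: cc -o toy_detect toy_detect.c
 * Try:   curl -s http://localhost:8080/ | bash
 *  vs.   curl -s http://localhost:8080/ -o /dev/null
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 1);

    int cli = accept(srv, NULL, NULL);

    /* Read and ignore the HTTP request; shrink the send buffer so a
     * stalled reader makes our own send() calls block sooner. */
    char req[4096];
    recv(cli, req, sizeof req, 0);
    int sndbuf = 4096;
    setsockopt(cli, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof sndbuf);

    /* Bait: a harmless command that makes a piped bash pause for a while. */
    const char *bait =
        "HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\n"
        "sleep 3\n";
    send(cli, bait, strlen(bait), 0);

    /* Push a few MB of shell comments and time it.  A plain download keeps
     * reading and returns almost instantly; a piped bash is stuck in `sleep`,
     * so once the pipe and TCP buffers fill, our send() calls stall.  The
     * padding must be larger than all the buffering along the path. */
    char pad[4096];
    memset(pad, '#', sizeof pad);
    pad[sizeof pad - 1] = '\n';

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 1024; i++)          /* ~4 MB of padding */
        send(cli, pad, sizeof pad, 0);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;

    /* Serve a different tail depending on what the timing suggests. */
    const char *tail = secs > 1.0
        ? "echo 'you appear to be piping this into a shell'\n"
        : "echo 'you appear to have downloaded this first'\n";
    send(cli, tail, strlen(tail), 0);

    close(cli);
    close(srv);
    return 0;
}
```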
Once again, my professional recommendation in response to the latest Linux kernel vulnerability in the news is that you should gather up all your electronic devices, cast them into the sea, and retreat to the woods.
Each night, gather your children and tell them tales of the Before Times when the hubris of humanity grew so large that we made idols of sand and spoke to them as equals. Remind them that the sand, of course, did not speak or think, but we imagined it could, and let it guide us to folly.
Should a stranger ever come to your village with a glowing rectangle, encourage the youth to beat them with sticks.
I was explaining how we built #bluefin with buildstream and bootc to a coworker and he goes:
"So you made Gentoo but cloud native."
And now I am never going to shut up about it lol.
We now require proof of work before you can submit a #curl security report.
Like mowing @bagder's lawn or washing his car. 😌
The #Linux 6.19.y series is now end of life:
""This is the LAST 6.19.y kernel to be released, this branch is now end-of-life. Please move to the 7.0.y kernel branch at this point in time.""
https://lore.kernel.org/all/2026042220-coastline-flirt-ad3c@gregkh/
"During one of my presentations at Open Source Summit Japan🇯🇵 the past year, I talked about a bug I found while addressing -Wflex-array-member-not-at-end issues in the Linux kernel. [...]
[...] not-at-end FAMs are a compiler extension that may cause undefined behavior, and compilers don't handle the sizes of objects containing them consistently. For this reason, they are now deprecated. [...]"🐧
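For context: a flexible array member (FAM) is the C99 trailing `type name[];` construct, and "not at end" refers to the GCC extension where a struct containing a FAM gets embedded somewhere other than the end of the enclosing object. A minimal made-up illustration follows; it's my own example, not the kernel code from the talk, and the "fixed" layout is just one common approach, not necessarily how the kernel patches did it. It compiles with GCC as an extension.

```c
#include <stdio.h>

/* A C99 struct whose last member is a flexible array member (FAM). */
struct header {
    int len;
    unsigned char payload[];
};

/* The GCC extension that -Wflex-array-member-not-at-end warns about:
 * a FAM-containing struct embedded where it is NOT the last member of
 * the enclosing object.  Writes through hdr.payload land on `trailer`,
 * and compilers disagree on the size of objects laid out like this. */
struct message_bad {
    struct header hdr;   /* FAM not at the true end */
    int trailer;
};

/* One way out: keep a FAM-free struct for embedding, and put the
 * flexible array at the actual end of the enclosing object. */
struct header_fields {
    int len;
};

struct message_good {
    struct header_fields hdr;
    int trailer;
    unsigned char payload[];
};

int main(void) {
    /* The "bad" layout is where sizes become compiler-dependent, which
     * is part of why the extension is being deprecated. */
    printf("sizeof(struct message_bad)  = %zu\n", sizeof(struct message_bad));
    printf("sizeof(struct message_good) = %zu\n", sizeof(struct message_good));
    return 0;
}
```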
There are virtually **no** AI slop security reports submitted about #curl anymore. They don't seem to happen any longer.
Almost everyone still uses AI though.
1. GenAI is probably going to impact us but how? Nobody knows.
2. The worst thing about GenAI isn't the technology, it's the shitty people: https://karlbode.com/the-problem-with-ai-is-shitty-human-beings [<must-read]
3. We can’t have a grown-up conversation on the subject because the trillion-dollar bet’s fear+greed pressure crowds out truth.
4. When the bubble pops, the shitty people will melt away. Then we can maybe figure it out.
5. We so *SO* need that bubble to pop. Next week would be ideal.
you ever write code so inefficient they have to update the whole power grid
„By Wednesday morning, Anthropic representatives had used a copyright takedown request to force the removal of more than 8,000 copies and adaptations of the raw Claude Code instructions—known as source code—that developers had shared on programming platform GitHub.“
Because if there’s one thing GenAI companies absolutely don’t take lightly, it’s copyright.
https://www.wsj.com/tech/ai/anthropic-races-to-contain-leak-of-code-behind-claude-ai-agent-4bc5acc7