"We will ban you and ridicule you in public if you waste our time on crap reports."
These two @lwn articles are prime examples of why good journalism matters and why you should pay money to make sure it thrives:
* GPLv2 and installation requirements – https://lwn.net/Articles/1052842/
* SFC v. VIZIO: who can enforce the GPL? – https://lwn.net/Articles/1052734/
Both look beyond the shiny statements from the parties involved and from outside commentators (@torvalds in this case), and lay out how things actually stand from a mostly neutral[1] point of view, so that you can make your own judgments.
[1] We are human, and even if we try, we are never completely neutral – and a publication like #LWN that targets the FLOSS community will naturally look at things somewhat from the perspective of its target audience.
This post by Bruce Schneier contains so many thoughtful soundbites:
> The question is not simply whether copyright law applies to AI. It is why the law appears to operate so differently depending on who is doing the extracting and for what purpose.
> Like the early internet, AI is often described as a democratizing force. But also like the internet, AI’s current trajectory suggests something closer to consolidation.
https://www.schneier.com/blog/archives/2026/01/ai-and-the-corporate-capture-of-knowledge.html
I talked for more than two hours (135 mins to be precise) about upstream Linux kernel hardening at Okayama University this afternoon. 🐧👨🏽💻🎙
I just uploaded my slides here: https://embeddedor.com/blog/presentations/#Enhancing_spatial_safety_Better_array-bounds_checking_in_C_and_Linux_Okayama_University_%E2%80%93Guest_talk
I really enjoyed the session. The students were amazing. They were well prepared and asked a lot of questions. 👏🏼👏🏼
#Linux #OpenSource #Education #Mentoring #Okayama #OkayamaUniversity
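For readers wondering what "better array-bounds checking in C" looks like in practice, here is a minimal illustrative sketch (my own, not taken from the slides, and assuming a compiler with counted_by support, i.e. GCC 15+ or Clang 18+): it ties a flexible array member to the field that holds its element count, the annotation the kernel spells __counted_by().

```c
#include <stdlib.h>

/* Mirror the kernel's __counted_by() spelling; fall back to a no-op on
 * compilers that lack the attribute. */
#if defined(__has_attribute)
# if __has_attribute(counted_by)
#  define __counted_by(member) __attribute__((counted_by(member)))
# endif
#endif
#ifndef __counted_by
# define __counted_by(member)
#endif

/*
 * The flexible array items[] is bounded by nr_items. With the attribute,
 * __builtin_dynamic_object_size() and UBSAN's array-bounds check can flag
 * out-of-bounds accesses at run time instead of letting them silently
 * corrupt memory.
 */
struct record {
	unsigned int nr_items;
	unsigned int items[] __counted_by(nr_items);
};

struct record *record_alloc(unsigned int n)
{
	struct record *r = malloc(sizeof(*r) + n * sizeof(r->items[0]));

	if (r)
		r->nr_items = n;	/* set the count before touching items[] */
	return r;
}
```

In the kernel itself, the typical hardening patch is often just that one-line struct annotation; allocation helpers like struct_size() and the FORTIFY_SOURCE/UBSAN machinery then pick up the bound automatically.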
A thought that popped into my head when I woke up at 4 am and couldn’t get back to sleep…
Imagine that AI/LLM tools were being marketed to workers as a way to do the same work more quickly and work fewer hours without telling their employers.
“Use ChatGPT to write your TPS reports, go home at lunchtime. Spend more time with your kids!” “Use Claude to write your code, turn 60-hour weeks into four-day weekends!” “Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”
Imagine if AI/LLM tools were not shareholder catnip, but a grassroots movement of tooling that workers shared with each other to work less. Same quality of output, but adopted from the bottom up to empower people to work less and “cheat” employers, rather than being pushed from the top down.
Imagine if unions were arguing for the right of workers to use LLMs as labor saving devices, instead of trying to protect members from their damage.
CEOs would be screaming bloody murder. There’d be an overnight industry in AI-detection tools and immediate bans on AI in the workplace. Instead of Microsoft Copilot 365, Satya would be out promoting Microsoft SlopGuard: add-ons that detect LLM tools running on Windows and prevent AI scrapers from harvesting your company’s valuable content for training.
The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output. Maybe they’d still call them “hallucinations,” but it’d be in the terrified tone of 80s anti-drug PSAs.
What I’m trying to say in my sleep-deprived state is that you shouldn’t ignore the intent and ill effects of these tools. If they were good for you, shareholders would hate them.
You should understand that they’re anti-worker and anti-human. TPTB would be fighting them tooth and nail if their benefits were reversed. It doesn’t matter how good they get, or how interesting they are: the ultimate purpose of the industry behind them is to create less demand for labor and concentrate more wealth in fewer hands.
Unless you happen to be in a very very small club of ultra-wealthy tech bros, they’re not for you, they’re against you. #AI #LLMs #claude #chatgpt
I've been trying to quit Google for years, and I finally did it: https://jimmunroe.net/writing/divestment-december.html
Anger at the techno-fascists wasn't enough on its own:
I got a big boost of inspiration and mutual aid from the brilliant community at @yunohost, who make it possible to install and maintain digital services like forums, cloud services, and media streaming apps with very little technical knowledge. Check them out at https://YunoHost.org !
The kernel CNA assigned its 10,000th CVE last week: CVE-2025-68750.
So far the “stats” look like this (“A+R” is Assigned plus Rejected; the Total column also adds in Reserved and Returned):
Year     Reserved  Assigned  Rejected    A+R  Returned  Total
2019:           0         2         1      3        47     50
2020:           0        17         0     17        33     50
2021:           0       732        24    756        16    772
2022:           3      2041        47   2088         0   2091
2023:           1      1464        47   1511         0   1512
2024:           6      3069        96   3165         0   3171
2025:          73      2421        39   2460         0   2533
Total:         83      9746       254  10000        96  10179
Note, the “year” is the year the bug was fixed in the kernel tree, NOT the year the CVE was applied for/assigned.
@trashheap The “argument” by the SFC is complete garbage, and always has been. There has been no question about the license, and I have made it very clear over the years. And the SFC knows that.
So when they argue their incorrect reading of the GPLv2 in court, they are absolutely not doing GPLv2 enforcement. They are trying to further an agenda that is invalid, and always has been, and is explicitly against the wishes of the actual copyright holders.
So the SFC is just pure trash.
If they want to “protect” some project, let them protect a project that asks for it - not one that is known to not want their kind of protection.
Because what they are doing is a racket, plain and simple.
Rare footage of @gregkh signing an autograph with the phrase "do not use old kernels!" at Open Source Summit Korea 2025, after one of his sessions.