Dr. WiFi. Linux kernel hacker at Red Hat. Networking, XDP, etc. He/Him.

Again I stand by what I said.

"I can't believe I even have to talk about these people. That's how ridiculous it is."

We have the equivalent of the scientologists running the field of AI with untold amounts of money and power, and we're the ones who have to do point-by-point rebuttals of their eugenicist dreams.

https://www.rollingstone.com/culture/culture-commentary/billionaires-psychology-tech-politics-1235358129/


I went to a talk lately that was mostly about something else, but the speaker came out with:

“If you only remember one thing from this talk, remember this. Everyone in this room who likes helping people, raise your hand.”

Every hand, or nearly every hand, went up.

“If you like asking other people for help, keep your hand up.”

Almost every hand went back down.

“As you can see, people like helping you. When you ask for help, you’re making them feel good, even if you don’t like asking.”

I’ve genuinely forgotten the rest of the presentation but I won’t forget that.


You don't have to put on the red light.


David Chisnall (*Now with 50% more sarcasm!*)


A few people have asked me recently questions along the lines of ‘how mature is CHERI as a technology?’ The analogy that I usually use is the 386’s memory management unit (MMU). This shipped 40 years ago, at a time when most desktops did not do memory protection, though larger systems usually did. Similarly, most systems today do not do object-granularity memory safety.

The 386 shipped after the 286 had tried a very different model for the same goal and had failed to provide abstractions that were usable and performant. Similarly, things like Intel MPX have failed to provide the memory-safety guarantees of CHERI, and things like Arm’s POE2 have failed to provide the kind of usable programming model for fine-grained compartmentalisation that CHERI enables, yet both technologies have shown that these are important problems to solve.

The 386’s MMU had a bunch of features that you’d recognise today. Page tables were radix trees, for example, just as they are on a modern system. It wasn’t perfect, but it was enough for Linux and Windows NT to reach Bill Gates’ goal of ‘a computer on every desk’, at least in wealthy countries (and the cost of the MMU was not the blocker elsewhere). For over 20 years, operating systems were able to use it to provide useful abstractions with strong security properties.

It was not perfect. Few people thought, in 1985, that PCs would run VMs because they barely had enough memory to run two programs, let alone two operating systems. But the design was able to evolve. It grew to support more than 4 GiB of physical memory with PAE, to support nested paging for virtualisation with VT-x, and so on. It made some mistakes (read permission implied execute, for example) but these were fixed in later systems that were able to provide programmers with the same abstractions.

And this is why I’m excited about the progress towards the Y (CHERI) base architecture in RISC-V, and why I believe now is the right time for it. There’s a lot of CHERI that’s very stable. Most of the things from our 2015 paper are still there. A few newer things are also now well tested and useful. These are an excellent foundation for a base architecture that I’m confident we can standardise and that software and hardware vendors can support for at least the next 20 years.

At the same time, in CHERIoT (and some other projects) we have a few extensions that we know are validated in software and very useful in some domains (in our case, for microcontrollers) but not as useful elsewhere. Some of the things we’ve done in CHERIoT would be terrible ideas in large out-of-order machines, but would be great to standardise as extensions because they are useful on microcontrollers, and some might be useful on some accelerators. It would be great if these could share toolchain bits.

There are also some exciting research ideas. I’d be much less excited by CHERI if I thought we were finished. 40 years after MMUs became mainstream, people are still finding exciting new use cases for them and new ways to extend them to enable more software models and that’s what a new foundational piece of architecture should provide. I firmly believe CHERI has the same headroom. After Y is standardised, I expect to see decades more vendor and standard extensions that are doing things no one has thought of yet, but which would be impossible without CHERI as a foundation.


Toke Høiland-Jørgensen

Video of my DevConf.cz talk, "Beware of the kernel RTNL mutex" is available in the room stream, starting at 53:20

https://m.youtube.com/watch?v=BBwN-fzEtAs&t=3220s
computer thoughts, rambling, terrible analogy

Posts from @amy and @neil have helped me put into words a feeling that’s been rumbling around in my head for a while.

My personal computers are like my house. The OS is like a butler or head house maid. Sure they control what goes on, who gets in, etc. but I own the house and can fire and replace them if I need/want to. Other applications are guests in my house, a band that I’ve hired to entertain me, a scribe to write my documents, an opponent to play board games with. But ultimately it’s my house, I choose what happens.

My phone feels more like a hotel. Yes, I can do what I want, within reason, but if I want to change the wallpaper or install a new manager I can’t. I can move to another hotel, sure, but it’s a pain in the butt, I have to carefully pack up all my luggage, and things work a bit differently. If the hotel wants to play death metal during breakfast, that’s their business; the only choice I have is to not eat breakfast. If my phone wants to serve me ads, I can just not use my phone.

Most people prefer to live in their own house. Yet we use our phones as much as, if not more than, our computers. Why do we accept this lack of ownership over one of the most important computers in our lives?

On software projects, we have an acute sense of "this project has technical debt", which we feel every time we're working on something else in the project other than paying off that technical debt. And generally, developers tend to have a lot of respect for taking time paying off technical debt to improve maintainability.

As individuals, we also have technical debt. Things like needing to fix our working environment. We feel that every time we're working on something else. We should have the same respect for working on our own technical debt as we do for working on a project's. Don't feel guilt for spending time improving your setup or your work, if it eliminates a source of stress or annoyance or slowdown for you.

When we throw up our hands and say none of it matters, we're doing the fascists’ work for them. They don't need to hide their corruption if they can convince us it's pointless to look. They don't need to silence truth-tellers if we've already decided truth is meaningless.

https://www.citationneeded.news/it-matters-i-care/


Proposal: A “yes but only if there’s an agenda” option for meeting invitations.

The calendar equivalent of “come back with a warrant”.


"Part of our task in the face of generative AI is to make an argument for the value of thinking – laboured, painful, frustrating thinking."

"[W]e also need to hold our institutions accountable. [...] university administrators are highly susceptible to the temptations of technology-driven downsizing, big tech donations, and the appearance of being on the cutting edge."

https://activehistory.ca/blog/2025/06/11/on-generative-ai-in-the-classroom-give-up-give-in-or-stand-up/


Since I left my last job, I've been thinking about the guy who used me as an alternative to ChatGPT whenever he hit a problem at work that he couldn't vibe code his way out of.

He basically rotted his own brain by compulsively using ChatGPT in lieu of actually thinking on most of the projects he was working on. Instead of taking the time to read through code in our framework, look up documentation, or do any sort of debugging, he just begged and pleaded with ChatGPT to try and get somewhere because "it was faster." Basically just really hammering his brain with the Programmer's Slot Machine. (@davidgerard wrote a really good article about this specific gambling-addiction angle. I highly, highly recommend reading it or watching the corresponding YouTube video:
https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/ )

Back to the story: when that wasn't working, which was a significant portion of the time, he'd turn and use me as a "more informed alternative" to ChatGPT.

I worked fully remote, and the majority of our interactions were via a Teams chat, which apparently crossed some wires in his monkey brain and made him start just... basically verbally barraging me like he would the company ChatGPT instance. No thoughts at all, just an immediate process of:

- Ask vague question
- Get guess for an answer with a request for more details
- Try applying the guess blindly without thinking if it's applicable at all
- Have it not work and just report back that it didn't work.
- No follow-up details, no further explanation of what was going on or what he's trying to do. Nothing added past the original vague situation
- If lucky, I might get a screenshot of part of the error, meticulously cropped just before the useful part of the output, because he'd stopped reading error output entirely and made no attempt to understand it. (Why would he? ChatGPT can do that part!)
- Rinse and Repeat until I get fed up and get into a call with him
- Fix the thing in less than a minute, pointing out that he should have been able to tell what was wrong almost immediately if he actually dropped a break-point and debugged the code at *literally any point* along the way
- Fuck off immediately after getting his fix, no thank you or anything
- Start the process anew the following day when he vibe coded himself into a corner all over again

I literally had to go to leadership and make them have a talk with him and get him to leave me the fuck alone at work, after repeated attempts to establish boundaries about it, because of how much time it sucked away from my other projects. It effectively doubled my workload and slammed me with burnout right at the start of the year, for absolutely no reason other than his belligerent insistence on just Not Doing His Job Without His Hand Being Held By A Chat Window.

It rapidly went from "He sometimes asks informed questions that I can answer and help him with; I enjoy working with him" to "The dude isn't even trying in the slightest and is now basically offloading his work onto me because he broke his capacity to actually do work independently of an external chat window. I fucking hate him and I hope he gets in a car wreck so I can get a break from the bleakness of dealing with him every goddamn morning."

ChatGPT has basically been an absolute blight for me since its inception. The team went from being generally pro-crypto to intensely pro-genAI/LLM because their favorite scammers (er... I mean YouTubers) had them hooked on a fantasy of someday making it Big by jumping from one hype cycle to the next. I sincerely came very close to finding an entirely different career path altogether because of just how incredibly shitty it was working with that team on just about anything, but I lacked the job experience on my resume to land someplace else.

Nobody wanted to be an actual expert; nobody really wanted to learn anything. They had their degree and ChatGPT, which meant they'd learned all they would ever need... while working in an industry that tends to reinvent itself every half decade or so, and half-assing solutions with an outsourced bullshit generator. 🫠

All in the name of "Well, it got me from point A to point B faster," and leaving it at that, despite it taking significantly longer than it should have from the get-go.

I've seen and lived what an AI Fueled future looks like:
Mediocre men harassing their talented and likely autistic peers until their peers just up and fuckin leave to a different organization out of frustration and exhaustion.

I think down the road, we'll be able to measure the negative impact using LLMs has on people's cognitive faculties by comparing it to horse kicks to the head, and only be exaggerating it by a little bit.


Lokjo - a European online map


UPDATE 2: We're carefully back online again, with a few exceptions:

https://mapstodon.space/@lokjo/114657821935053517

Can we have a lill boost please? The fediverse is the only place we're on.

We're a replacement for googlemaps.

European, non-commercial, pro-local.

https://www.lokjo.com

Thanks for sharing 😊


Toke Høiland-Jørgensen

New blog post: "Slow travel"

Reflections on the feeling of travelling slowly, enjoying the soothing feeling of actually crossing the intervening distance between two places.

https://blog.tohojo.dk/2025/06/slow-travel.html

Fabulousness from Rose Ann Prevec. ;)


Why Bell Labs worked so well, and could innovate so much, while today’s innovation, in spite of the huge private funding, goes in hype-and-fizzle cycles that leave relatively little behind, is a question I’ve been asking myself a lot in the past years.

And I think that the author of this article has hit the nail on the head on most of the reasons, but he didn’t take the last step of identifying the root cause.

What Bell Labs achieved within a few decades is probably unprecedented in human history:

  • They employed folks like Nyquist and Shannon, who laid the foundations of modern information theory and electronic engineering while they were employees at Bell.

  • They discovered the first evidence of the black hole at the center of our galaxy in the 1930s while analyzing static noise on shortwave transmissions.

  • They developed in 1937 the first speech codec and the first speech synthesizer.

  • They developed the photovoltaic cell in the 1940s, and the first solar cell in the 1950s.

  • They built the first transistor in 1947.

  • They built some of the first large-scale digital computers (from the relay-based Model I in 1939 to the Model VI in 1949).

  • They employed Karnaugh in the 1950s, who worked on the Karnaugh maps that we still study in engineering while he was an employee at Bell.

  • They contributed in 1956 (together with AT&T and the British and Canadian telephone companies) to the first transatlantic communications cable.

  • They developed the first electronic music program in 1957.

  • They employed Kernighan, Thompson and Ritchie, who created UNIX and the C programming language while they were Bell employees.

And then their rate of innovation suddenly fizzled out after the 1980s.

I often hear that Bell could do what they did because they had plenty of funding. But I don’t think that’s the main reason. The author rightly points out that Google, Microsoft and Apple have already made much more profit than Bell has ever seen in its entire history. Yet, despite being awash with money, none of them has been as impactful as Bell. Nowadays those companies don’t even innovate much besides providing you with a new version of Android, of Windows or the iPhone every now and then. And they jump on the next hype wagon (social media, AR/VR, Blockchain, AI…) just to deliver half-baked products that (especially in Google’s case) are abandoned as soon as the hype bubble bursts.

Let alone singlehandedly spearhead innovation that can revolutionize an entire industry, or make groundbreaking discoveries that engineers will still study a century later.

So what was Bell’s recipe that Google and Apple, despite having much more money and talented people, can’t replicate? And what killed that magic?

Well, first of all Bell and Kelly had an innate talent in spotting the “geekiest” among us. They would often recruit from pools of enthusiasts that had built their own home-made radio transmitters for fun, rather than recruiting from the top business schools, or among those who can solve some very abstract and very standardized HackerRank problems.

And they knew how to manage those people. According to Kelly’s golden rule:

How do you manage genius? You don’t

Bell specifically recruited people who had that strange urge to tinker and solve big problems, gave them their own lab and all the funding they needed, and let them work in peace. Often it took years before Kelly asked them how their work was progressing.

Compare that to a Ph.D. student today, who needs to struggle for funding, needs to produce papers that get accepted at conferences regardless of their quality, and must spend much more time on paperwork than on actual research.

Or to an engineer in a big tech company, who has to provide daily updates on their progress, survive the next round of layoffs, go through endless loops of compliance, permissions and corporate bureaucracy to get anything done, have their performance evaluated every 3 months, and doesn't even have control over what gets shipped: that control has been taken away from engineers and given to PMs and MBA folks.

Compare that way of working with today’s backlogs, metrics, micromanaging and struggle for a dignified salary or a stable job.

We can’t have the new Nyquist, Shannon or Ritchie today simply because, in science and engineering, we’ve moved all the controls away from the passionate technical folks that care about the long-term impact of their work, and handed them to greedy business folks who only care about short-term returns for their investors.

So we ended up with a culture that believes talent must be managed, even micromanaged, or else talented people will start slacking off and spend their days on TikTok.

But, as Kelly eloquently put it:

“What stops a gifted mind from just slacking off?” is the wrong question to ask. The right question is, “Why would you expect information theory from someone who needs a babysitter?”

Or, as Peter Higgs (the Higgs boson guy) put it:

It’s difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964… Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.

Or, as Shannon himself put it:

I’ve always pursued my interests without much regard for final value or value to the world. I’ve spent lots of time on totally useless things.

So basically the most brilliant minds of the 20th century would be considered lazy slackers today and be put on a PIP because they don’t deliver enough code or write enough papers.

So the article is spot on in identifying why Bell could invent, within a few years, all it did, while Apple, despite having much more money, hasn’t really done anything new in the past decade. MBAs, deadlines, pseudo-objective metrics and short-termism killed scientific inquiry and engineering ingenuity.

But the author doesn’t go one step further and identify the root cause.

The article correctly spots the business and organizational issues that exist in managing talent today, but it doesn’t go deeper into their economic roots.

You see, MBA graduates and CEOs didn’t destroy the spirit of scientific and engineering ingenuity spurred by the Industrial Revolution just because they’re evil. Sure, someone who has climbed the whole corporate ladder is more likely to be a sociopath than someone picked randomly off the street, but not to the point of willingly taming and screwing up the most talented minds of their generation, and squeezing them into a Jira board or a metric that counts commits, out of pure sadism.

They did so because the financial incentives have drastically changed since the times of Bell Labs.

Bell Labs was basically publicly funded. AT&T operated the telephone lines in the US, paid for by everyone who used a telephone, and reinvested a 1% tax into R&D (Bell Labs). And nobody expected a single dime of profit to come out of Bell Labs.

And by the way, R&D was real R&D with no strings attached at the time. In theory my employer also does R&D today, but we just end up treating whatever narrow, iterative feature some random PM requests as “research and development”.

And at the time, the idea of people paying taxes so that talented people in their country could focus on inventing the computer, the Internet, or putting someone on the moon, without the pressure of VCs asking for their dividends, wasn’t seen as a socialist dystopia. That was before the neoliberal sociopaths of the Chicago school screwed up everything.

And, since nobody was expecting a dime back, nobody put deadlines on talented people, nobody hired unqualified and arrogant business specialists to micromanage them, and nobody put them on a performance improvement plan for being late to their daily standups or not committing enough lines of code in the previous quarter. So they had time to focus on solving some of the most complex problems humans have ever faced.

So they could invent the transistor, the programming infrastructure still used to this day, and lay the foundations of what engineers study today.

The most brilliant minds of our age don’t have this luxury, so they can’t revolutionize our world like those of the 20th century did. Somebody else sets their priorities and their deadlines. They can’t think of moonshots because they’re forced to work on the next stupid mobile app that the next stupid VC wants to rush to market to get insanely rich. They have to worry about companies trying to replace them with AI bots, and about business managers wanting to ship products themselves by “vibe coding”, only to ask those smart people to clean up the mess they’ve made. They are seen as a cost, not as a resource.

Then of course they can’t invent the next transistor, or bring the next breakthrough in information theory.

Then of course all you get, after one year of the most brilliant minds of our generation working at the richest company that has ever existed, is just a new iPhone.

https://links.fabiomanganiello.com/share/683ee70d0409e6.66273547


There is a sad and frustrating repetitiveness to my cartoons about Gaza, but still I think it's important to keep drawing them, just as it's important to keep sharing the images from Gaza.

Today's cartoon for Trouw: https://www.trouw.nl/cartoons/tjeerd-royaards~bcb45712/


Toke Høiland-Jørgensen

Gave a talk at the Lund Linux Conference today honouring the work and life of Dave Taht. It was a mix of personal and technical, talking about Dave and his approach to improving the internet, combined with a whirlwind tour of the history of the #bufferbloat project and the innovations in #Linux that it has led to.

Unfortunately it wasn't recorded, but the slides are available here: https://github.com/xdp-project/xdp-project/blob/main/conference/LLC2025/honouring-the-life-of-dave-taht.pdf

"A man is not dead while his name is still spoken."

-Going Postal, Chapter 4 prologue

What I really don't like about this topic here in Europe is how it, again, just like with The Cloud, falls into the trap of nationalism instead of EU-wide cooperation. I see national groups and lobbyists running around claiming that it must have borders like countries do. The German solution here, the Dutch solution there, the French doing something completely different. Interoperability ignored. People, let's not fall into that simplistic way of thinking again.
