Honestly, at this point, my horror at the "The computers are totally thinking!" types is shifting.
For a long time, I considered it mostly a mark of how gullible some folks are.
But as "progress" marches forward, I realize the horror is actually how small they imagine other people to be.
To the sufficiently advanced booster, we're all just p-zombies.
Every consensus position paper I read from software research about AI right now:
- AI should provide assistance
- but also make sure people don't use assistance
- should be a command center
- but not make people "managers"
- should synthesize data based on patterns
- but never reify patterns and ignore outliers
- "critical thinking"*
*which is what exactly
If I had to identify a list of skills in high impact engineers, it would include:
- ecological awe
- intellectual humility
- respect for the complexity of unfamiliar problems
- cross functional communication
- resilience engineering
- marketing and sales
(“Technical skills” aren’t in my top ten)
Hello #portfolioday,
I'm Bison a woodcarver from France ✌️
I take commissions, and would love to work on team projects.
🐙 https://bisonrimant.fr
✉️ bisonrimant@gmail.com
"I used AI. It worked. I hated it." by @mttaggart https://taggart-tech.com/reckoning/
This is a really good blogpost. And I'm sure it'll make some people unhappy to read, whether they're pro or anti genAI. What's good about @mttaggart's blogpost is that he talks honestly about how using Claude Code did actually solve the problem he set out to solve. It needed various guardrails, but they were possible to set up, and the project worked. But the post is also completely clear and honest about how miserable it was:
- It removed the joy from the process
- If you aim to do the right thing and carefully evaluate the output, your job eventually becomes "tapping the Y key"
- Ramifications for how people learn things
- Plenty of other ethical analysis
- And the nagging wonder about whether to use it next time, despite how miserable it was.
I think this is important, because it *is* true that these tools are getting to the point where they can accomplish a lot of tasks, but the caveat space is very large (cotd)
I freaking love my #SyncThing setup! It makes my workflow across multiple devices on a local network (Linux and Windows desktops, an Android phone, occasionally even a Steam Deck) so easy and seamless. There's an initial learning/setup curve, and at one point Windows threw a curveball by arbitrarily switching folders around. But once you get used to the basic concepts, things make sense, troubleshooting isn't too much trouble, and it's a breeze to flexibly configure folders for different uses, like one-way backups (e.g. sending phone pics and footage to desktop) and two-way synchronization (e.g. keeping folders synced between desktops or between a desktop and phone).
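For anyone curious what the one-way vs. two-way setup looks like under the hood: Syncthing stores folder definitions in its config.xml, and the folder `type` is what distinguishes them. This is just an illustrative sketch with made-up IDs and paths, and I've trimmed the many other attributes Syncthing actually writes; you'd normally set all of this through the web GUI rather than by hand.

```xml
<!-- On the desktop: receives phone pics one-way (phone side would be sendonly) -->
<folder id="phone-pics" label="Phone Pictures" path="/home/me/PhonePics" type="receiveonly">
    <device id="PHONE-DEVICE-ID"></device>
</folder>

<!-- Between two desktops: full two-way synchronization -->
<folder id="shared-docs" label="Documents" path="/home/me/Documents" type="sendreceive">
    <device id="OTHER-DESKTOP-DEVICE-ID"></device>
</folder>
```

The three folder types Syncthing supports are `sendreceive` (the default two-way sync), `sendonly`, and `receiveonly`; pairing a send-only folder on the phone with a receive-only one on the desktop gives you the one-way backup behavior.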
Them: if you set aside all the ethical concerns…
Me: this is what evil is. This is how evil talks.
I’m not trans. As far as I know, my kids aren’t at the moment. Nobody in my family is. None of my good friends are. I don’t prescribe medication for gender affirming care. I don’t specifically treat folks for gender dysphoria. So why do I give a fuck?
You don’t punch down. There’s no transgender machine producing trans judiciary and trans politicians and trans PACs to come for my cisgender. There are trans folks being discriminated against, outed, beaten, jailed, and killed. I hate bullies.
Ok, today’s xkcd is glorious!
[edit] I should probably mention you need to visit the site for the full effect, it’s more than just a comic today!
For no reason at all, please give me your favourite cow-related figures of speech! (Stuff like "No use crying over spilled milk" or "until the cows come home", puns extremely welcome)
Dr. Chuck Tingle putting difficult things into words as always 🔥
We can remove strncpy() from the Linux kernel finally! I did the last 6 instances, and dropped all the implementations:
https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git/log/?h=dev/v7.0-rc2/strncpy
Over the last 6 years working on this, there were 362 commits by 70 contributors. The folks with more than 1 commit were:
211 Justin Stitt <justinstitt@google.com>
22 Xu Panda <xu.panda@zte.com.cn>
21 Kees Cook <kees@kernel.org>
17 Thorsten Blum <thorsten.blum@linux.dev>
12 Arnd Bergmann <arnd@arndb.de>
4 Pranav Tyagi <pranav.tyagi03@gmail.com>
4 Lee Jones <lee@kernel.org>
2 Steven Rostedt <rostedt@goodmis.org>
2 Sam Ravnborg <sam@ravnborg.org>
2 Marcelo Moreira <marcelomoreira1905@gmail.com>
2 Krzysztof Kozlowski <krzk@kernel.org>
2 Kalle Valo <kvalo@kernel.org>
2 Jaroslav Kysela <perex@perex.cz>
2 Daniel Thompson <danielt@kernel.org>
2 Andrew Lunn <andrew@lunn.ch>
Thank you to all of you! (And especially to Justin Stitt who took on the brunt of the work.)
I've been thinking about the FCC's insane new ban on foreign-made routers. Note the end of the BBC story at https://www.bbc.com/news/articles/c74787w149zo:
"One exception to the general absence of US-made routers is the newer Starlink WiFi router. Starlink is part of Elon Musk's company SpaceX.
"The company says the Starlink routers are made in Texas."
And per the FCC's FAQ (https://www.fcc.gov/faqs-recent-updates-fcc-covered-list-regarding-routers-produced-foreign-countries), even US-written software (or, I assume, open source software like OpenWRT) won't exempt foreign-made routers from the ban.
I'm looking for full-time work!
I work at the intersection of social and technical systems, and specialize in building up people, programs, partnerships, and organizations around open source.
I have a deep track record in complex community relations, am fluent in the nuts and bolts of many technologies, and have spanned governance, org development, nonprofit and people management, comms, marketing, events, and beyond.
Let's fly! 
My #Wikipedia request for comment just closed, finally banning #AI content in articles! "The use of LLMs to generate or rewrite article content is prohibited"
Kudos to all who participated in writing the guideline (especially Kowal2701) and the whole WikiProject AI Cleanup team, this was very much a group effort!
https://en.wikipedia.org/wiki/Wikipedia:Writing_articles_with_large_language_models/RfC
It really bums me out that I keep seeing blog posts from technical people like "putting aside the obvious moral and ethical implications of LLMs, I'm interested in evaluating whether they can be useful for my work."
Like "putting aside the obvious moral and ethical concerns of breaking into my neighbours' houses, I'm interested in evaluating whether this can be useful for acquiring other people's valuables."
"And some people say 'These people, they're not even trying to be British. They come here, they bring their own culture, their food, speak their own language, and try to take over the whole bloody place.'
Dunno. Sounds pretty British to me."
~Trevor Noah
😂
The goal is to make corporate data less profitable.
Even stuff as simple as setting your birthdate to 1970-01-01 everywhere, or adding [TEST] or [DELETED] as your name or in account notes anywhere they don't need to know your name.
Using plugins like AdNauseam to poison ad trackers (and cost them marketing dollars).
Using VPNs set to different locations.
Signing into data broker sites to "correct" outdated info (they'll often let you do that with little-to-no proof of identity, but will require your passport or state ID in order to delete your info). Bonus points if you correct it to someone else's info on their site that's similar to yours.
Only fill in required fields when you sign up for anything, and only provide correct info if it matters for you to use the service; otherwise provide plausible, but incorrect, data.
If you use LLMs anywhere, use the free tier and always vote thumbs up for bad answers and down for good ones. It wastes their resources and drives up their costs while making their training data worse.
Every single PR that is extruded or summarized by an AI product weakens exit strategies by undermining parallel tooling. Our choice to adopt AI, or even to insufficiently oppose its adoption, means we are that much more vulnerable to *infrastructure* becoming enclosed.
That's true in the obvious way: in the most generous interpretation of AI, if you're renting your brain, someone else can jack the prices on you or turn off projects they don't like.