Posts
2489
Following
350
Followers
461
Linux kernel developer, focusing on memory management, slab.git maintainer. Works at SUSE Labs.

Vlastimil Babka

Why am I going to Ostrava? To cause @ljs some massive FOMO, of course!
2
1
8

It’s absolutely wild that we’ve allowed a CCP-controlled fully-algorithmic social network on virtually all phones while our apps are illegal there. That’s just as absurd as having allowed Soviet propaganda channels on cable TV in the Cold War. It’s even wilder that they found enough useful idiots to defend them using Soviet-style whataboutisms. Since Uyghur concentration camps aren’t enough, guess we have to wait for them to invade Taiwan for ppl to realize they’ve been played. (1/2)

1
2
2

“This machine kills AI.”

4
22
2

LLMs are the new memory-safety bugs.

There's a good reason that everyone (even the White House!) hates memory safety bugs. Unlike most other code errors, a memory-safety bug allows an attacker to step outside of the abstract machine. When you write C, or any higher-level language, you have a model that has things like structured programming for control flow, object-level abstractions, and so on. A memory-safety bug is, by definition, one that steps outside of this model. A pointer accesses an object that, according to the abstract machine, it should not be able to reach (and which may not even exist in the abstract machine). This can make control flow jump anywhere, tamper with any bit of the program, and so on.

In the last stack of CVEs I reviewed for a project that I'm using, 80% were memory safety but, more importantly, every single one of the ones that ended with arbitrary-code execution started with violating memory safety. Most other bug classes let you explore flows in the program that maybe shouldn't be there, but at least can be reasoned about at the source level.

This is why people get so annoyed by all of the 'look, Rust didn't prevent this vulnerability' posts that are cropping up. Yes, Rust is not a magical thing that prevents all bugs, but most of the security bugs that people are finding in Rust programs have behaviour that you can reason about in Rust at the source-code level. In contrast, a memory-safety bug in one component may be exploited in a totally unrelated component that, at the source level, shares no common data or control flow with the component that introduced the bug.

That behaviour is exactly what you get with LLMs. It is impossible to articulate the set of behaviours that an LLM may have, other than that it will consume a sequence of tokens and produce a sequence of tokens. LLMs, like the systems that most engineers abandoned in the 1990s, use in-band signalling and do not separate control and data lines. Both untrusted data and trusted prompts are fed into the same inputs and both have the ability to influence the output. This may be fairly benign if a human is consuming the output (wildly inaccurate or offensive, perhaps), but it's dangerous if a machine is consuming the output and performing actions based on it. As with a memory-safety bug, you must assume that an attacker targeting the LLM can do anything that the LLM is able to do.

The Chrome team popularised the Rule of two (no, not that one): Any program may be no more than two out of: written in an unsafe language, consuming untrusted data, running outside of a sandbox.

I would suggest that anything that incorporates an LLM is treated in exactly the same way as things written in unsafe languages. If it touches untrusted data (e.g. reading your emails, or consuming documents that you did not author) then it must be assumed to be under the control of the attacker and sandboxed. If it's not sandboxed, it must consume only trusted inputs (even then, the output shouldn't be trusted, but it's no more untrusted than any other buggy bit of code).

1
8
2

Democracy dies in silence. Let Kharkiv live ❤️‍🩹

The 17th largest city in Europe is under heavy russian strikes every day. However, this does not seem to be even the 17th highest priority for the world media. must become the center of attention. And then the darkness will be afraid to step out into the light.

Read the Opinion on war.ukraine.ua, and share information about the situation in Kharkiv 📎 https://war.ukraine.ua/articles/democracy-dies-in-silence-let-kharkiv-live/

Stand with

0
4
2

Drove by this morning and had to take a picture

4
23
4

Lorenzo Stoakes

RUNNING OUT OF MEMORY?

TRY THIS ONE AMAZING TRICK 'THEY' DON'T WANT YOU TO KNOW ABOUT!

echo f | sudo tee /proc/sysrq-trigger

GUARANTEED TO FREE UP YOUR MEMORY (i.e. chrome) QUICK!

YOU HAVE BEEN LIED TO! MEMORY IS ABUNDANT AND FREE!
3
5
12

When a business or executive is described as laser-focused, your visual model should be that of a cat chasing the meaningless dot of a laser pointer until exhausted.

3
6
1

What if universities responded to AI hype with confidence in their core mission rather than FOMO?

https://buttondown.email/maiht3k/archive/more-collegiate-fomo/

0
6
1

Vlastimil Babka

cz shitpost
Show content
Tak at si to uzije.
1
1
4
Edited 16 days ago

so you know how nvidia has their own variable refresh rate technology called g-sync? well the modern industry standard for variable refresh rate on computer monitors (freesync/hdmi vrr/vesa adaptive-sync) is really great and it’s royalty free and works really well and is cheap to implement even on $200 monitors. obviously nvidia didn’t like this because they want their proprietary vrr solution to be the Number 1, and they didn’t want amd to be able to put all the freesync branding on the majority of monitors

so what did nvidia do? they started supporting freesync on their graphics cards, but instead of saying that geforce cards can support freesync (because that would give amd way too much credit!), they called it something stupid like “g-sync compatible”, which makes people think that the royalty free industry standard for vrr is bad and that the best way to get good variable refresh rate is by putting a REAL g-sync module in the back of the monitor. which is, yknow, a $300 proprietary fpga board from nvidia that needs to be actively cooled and casually sucks back 15 to 25w when the monitor is powered off and only works with select geforce cards

i’ve seen enough people say a monitor is “only g-sync compatible“ like it’s a bad thing to get pissed off by this. a g-sync module is worse in every way to the standard modern vrr implementation

i fucking hate nvidia

1
6
2

Vlastimil Babka

Here's a replica of a machine for cutting semi-precious stone slabs (a slab allocator, if you will) used to decorate chapels during Charles IV reign. Naturally those stones come practically from my home town.
1
1
4

Jarkko Sakkinen

Edited 15 days ago
Looking for job by the end of Sep as my contract ends by then at the university. Contact me for queries, resume etc. Just needs to scale to kernel and remote work, not looking for "best offer", "best benefits" or anything like that (I have paid already my mortgage loan and do not have any other debt). I just need a job just fills the above requirements.
1
12
1

if the purpose of a system is what it does, I propose that computers are a system for amplifying the effects of misunderstandings

0
2
2

What's your favourite recursive acronym? I think "WINE is not an emulator" is up there but nothing can beat "IRC's really cool" for me.

4
1
1
Show older