AI agents can gain extensive access to user data and may even write or execute arbitrary code.
OpenAI Codex CLI uses #Landlock sandboxing to reduce the risk of buggy or malicious commands: https://github.com/openai/codex/pull/763
For now, the sandbox only blocks arbitrary file changes, but there’s room to strengthen the protections further, and the ongoing rewrite in #Rust will help: https://github.com/openai/codex/pull/629
Landlock is designed for exactly this kind of use case, providing unprivileged and flexible access control.
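To give a rough feel for what that looks like from Rust, here’s a minimal sketch using the `landlock` crate, not the actual Codex CLI code; the allowed paths are placeholders. The idea: declare the filesystem rights to handle, then explicitly allow read-only access to system directories and full access only under a working directory.

```rust
// Minimal Landlock sketch, assuming the `landlock` crate as a dependency.
// Paths are placeholders; this is illustrative, not the Codex implementation.
use landlock::{
    path_beneath_rules, Access, AccessFs, Ruleset, RulesetAttr, RulesetCreatedAttr,
    RulesetStatus, ABI,
};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let abi = ABI::V2;
    let status = Ruleset::default()
        // Every filesystem right handled here is denied unless allowed below.
        .handle_access(AccessFs::from_all(abi))?
        .create()?
        // Read-only access to system directories.
        .add_rules(path_beneath_rules(&["/usr", "/etc"], AccessFs::from_read(abi)))?
        // Full access only beneath the working directory (placeholder path).
        .add_rules(path_beneath_rules(&["/tmp/workspace"], AccessFs::from_all(abi)))?
        // Restrict this process; spawned children inherit the sandbox.
        .restrict_self()?;

    match status.ruleset {
        RulesetStatus::FullyEnforced => println!("sandbox fully enforced"),
        RulesetStatus::PartiallyEnforced => println!("sandbox partially enforced (older kernel ABI)"),
        RulesetStatus::NotEnforced => eprintln!("kernel without Landlock support: running unsandboxed"),
        _ => {}
    }
    Ok(())
}
```

Because Landlock is unprivileged, a process can apply this to itself before running untrusted commands, with no root or setuid helper required.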
Excellent news yesterday: the #CHERIoT RTOS paper was accepted at SOSP!
Huge thanks to @hle, who led the rewrite of the rejected submission and made numerous improvements to the implementation.
We now have CHERIoT papers in top architecture and OS venues; I guess security and networking are the next places to aim for!