Compare with mine and steal any ideas you like. I've got a semi-decent curl|bash pattern covered, and also added network filtering via pasta (which may be more robust than rolling your own). https://github.com/reubenfirmin/bubblewrap-tui
Ohh! Thanks for sharing this. You're using a DNS proxy, which is interesting and useful when a process doesn't respect the HTTPS_PROXY/HTTP_PROXY/etc. env vars that I'm injecting. I will take a look, very interesting.
This looks really good - the CLI interface design is solid, and I especially like the secrets / network proxy pattern - but the thing it needs most is copiously detailed documentation about exactly how the sandbox mechanism works - and how it was tested.
There are dozens of projects like this emerging right now. They all share the same challenge: establishing credibility.
I'm loath to spend time evaluating them unless I've seen robust evidence that the architecture is well thought through and the tool has been extensively tested already.
My ideal sandbox is one that's been used by hundreds of people in a high-stakes environment already. That's a tall order, but if I'm going to spend time evaluating one the next best thing is documentation that teaches me something about sandboxing and demonstrates to me how competent and thorough the process of building this one has been.
UPDATE: On further inspection there's a lot that I like about this one. The CLI design is neat, it builds on a strong underlying library (the OpenAI Codex implementation) and the features it does add - mainly the network proxy being able to modify headers to inject secrets - are genuinely great ideas.
> There are dozens of projects like this emerging right now. They all share the same challenge: establishing credibility.
Care to elaborate on the kind of "credibility" to be established here? All these bazillion sandboxing tools use the same underlying frameworks for isolation (e.g., ebpf, landlock, VMs, cgroups, namespaces) that are already credible.
The problem is that those underlying frameworks can very easily be misconfigured. I need to know that the higher level sandboxing tools were written by people with a deep understanding of the primitives that they are building on, and a very robust approach to testing that their assumptions hold and they don't have any bugs in their layer that affect the security of the overall system.
Most people are building on top of Apple's sandbox-exec which is itself almost entirely undocumented!
the credential injection via MITM proxy is the most interesting part to me. the standard approach for agents is environment variables, which means the agent process can read them directly. having the sandbox intercept network calls and swap in credentials at the proxy layer means the agent code has a placeholder and never sees the real value -- useful when running less-trusted agent code or third-party tools.
the deny-by-default network policy also matters specifically for agent use: without it there is nothing stopping a tool call from exfiltrating context window contents to an arbitrary endpoint. most sandboxes focus on filesystem isolation and treat network as an afterthought.
Thanks and agreed! Zerobox uses the Deno sandboxing policy and also the same pattern for cred injection (placeholders as env vars, replaced at network call time).
Real secrets are never readable by any process inside the sandbox.
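For anyone curious, the shape of it is roughly this (a toy sketch, not the actual Zerobox/Codex proxy code; the placeholder, host and key names are all made up):

```rust
// Minimal sketch of proxy-side credential injection (illustrative only).
// Inside the sandbox the agent only ever sees e.g. OPENAI_API_KEY=ZB_PLACEHOLDER_OPENAI;
// the proxy rewrites the placeholder to the real secret just before the request
// leaves the machine, and only for the host that secret is scoped to.

struct SecretRule {
    placeholder: &'static str, // what the sandboxed process sees
    host_suffix: &'static str, // which upstream host the secret may go to
    real_value: &'static str,  // the real secret, known only to the proxy
}

fn rewrite_outgoing_header(host: &str, header_value: &str, rules: &[SecretRule]) -> String {
    let mut out = header_value.to_string();
    for rule in rules {
        if host.ends_with(rule.host_suffix) {
            out = out.replace(rule.placeholder, rule.real_value);
        }
    }
    out
}

fn main() {
    let rules = [SecretRule {
        placeholder: "ZB_PLACEHOLDER_OPENAI",
        host_suffix: "openai.com",
        real_value: "sk-real-key",
    }];

    // Request to the allowed host: the placeholder gets swapped for the real key.
    assert_eq!(
        rewrite_outgoing_header("api.openai.com", "Bearer ZB_PLACEHOLDER_OPENAI", &rules),
        "Bearer sk-real-key"
    );

    // Exfiltration attempt to any other host: the placeholder goes out unchanged.
    assert_eq!(
        rewrite_outgoing_header("evil.example", "Bearer ZB_PLACEHOLDER_OPENAI", &rules),
        "Bearer ZB_PLACEHOLDER_OPENAI"
    );
}
```

The nice side effect is that anything leaked to a non-allowlisted host only carries the placeholder, which is worthless.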
Do you know if there's a widely shared name for this pattern? I've been collecting examples of it recently - it's a really good idea - but I'm not sure if there's good terminology. "Credential injection" is one option I've seen floating around.
simonw, I have been seeing "credential injection" and "credential tokenizing" (a la tokenizer: https://github.com/superfly/tokenizer). I'm also seeing credential "surrogates" mentioned.
I am currently working on a mitm proxy for use with devcontainers to try to implement this pattern, but I'm certainly not the only one!
Not sure. I took this idea from the Deno sandboxing docs. They do the exact same thing, with a different sandboxing mechanism though (I think Deno has its own way of sandboxing subprocesses).
Thanks for sharing that. Zerobox _does_ use the native OS sandboxing mechanisms (e.g. seatbelt) under the hood. I'm not trying to reinvent the wheel when it comes to sandboxing.
Re the URLs, I agree, that's why I added wildcard support, e.g. `*.openai.com` for secret injection as well as network call filtering.
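Conceptually the filtering side is just deny-by-default matching against those patterns, something like this (a simplified sketch, not the exact matching code Zerobox uses):

```rust
// Simplified sketch of deny-by-default host filtering with wildcard patterns.

/// Returns true when `host` matches `pattern`, where a leading "*." matches
/// the domain itself or any subdomain (e.g. "api.openai.com" matches "*.openai.com").
fn host_matches(pattern: &str, host: &str) -> bool {
    match pattern.strip_prefix("*.") {
        Some(suffix) => host == suffix || host.ends_with(&format!(".{suffix}")),
        None => host == pattern,
    }
}

/// Deny by default: a request is only allowed if some pattern in the allowlist matches.
fn is_allowed(allowlist: &[&str], host: &str) -> bool {
    allowlist.iter().any(|p| host_matches(p, host))
}

fn main() {
    let allowlist = ["*.openai.com", "api.anthropic.com"];
    assert!(is_allowed(&allowlist, "api.openai.com"));
    assert!(is_allowed(&allowlist, "api.anthropic.com"));
    assert!(!is_allowed(&allowlist, "attacker.example")); // everything else is blocked
}
```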
You know, the thing is that it is super easy to create such tools with AI nowadays. And if you create your own, you can avoid these unnecessary abstractions. You get exactly what you want.
Zerobox creates a cert in `~/.zerobox/cert` on the first proxy run and reuses it after that. The MITM process uses that cert to make the calls, inject secrets, etc. This is actually done by the underlying Codex crate.
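On the spawning side the wiring is roughly this (a simplified sketch; the proxy port and the exact set of env vars here are illustrative conventions, not necessarily what Zerobox injects):

```rust
use std::process::Command;

// Rough sketch: point the sandboxed child at the local MITM proxy and tell
// common TLS clients to trust its CA. These env vars are widely respected
// conventions, but not every program honors them.
fn spawn_through_proxy(cmd: &str, args: &[&str]) -> std::io::Result<std::process::Child> {
    let home = std::env::var("HOME").unwrap_or_default();
    let ca_cert = format!("{home}/.zerobox/cert"); // the CA generated on first run

    Command::new(cmd)
        .args(args)
        // Route HTTP(S) traffic through the local proxy (port is made up here)...
        .env("HTTP_PROXY", "http://127.0.0.1:8080")
        .env("HTTPS_PROXY", "http://127.0.0.1:8080")
        // ...and have common clients trust the proxy's CA certificate.
        .env("SSL_CERT_FILE", &ca_cert)       // OpenSSL-based clients
        .env("REQUESTS_CA_BUNDLE", &ca_cert)  // Python requests
        .env("NODE_EXTRA_CA_CERTS", &ca_cert) // Node.js
        .spawn()
}

fn main() -> std::io::Result<()> {
    // e.g. run curl inside the (hypothetical) wiring above
    spawn_through_proxy("curl", &["https://api.openai.com/v1/models"])?.wait()?;
    Ok(())
}
```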
Yeah, but how does the sandboxed process “know” that it has to go through the proxy? How does it trust your certificate? Is the proxy fully transparent?
Very interesting. I just started researching this topic yesterday to build something for adjacent use cases (sandboxing LLM authored programs). My initial prototype is using a wasm based sandbox, but I want something more robust and flexible.
Some of my use cases are very latency sensitive. What sort of overhead are you seeing?
Wasm sandboxes are fast for pure compute but get painful the moment LLM code needs filesystem access or subprocess spawning. And it will, constantly. Containers with seccomp filters give you near-native speed and way broader syscall support — overhead is basically startup time (~2s cold, sub-second warm). For anything IO-heavy it's not even close. We're doing throwaway containers at https://cyqle.in if anyone's curious.
Again, it’s blacklisting, so it's kind of impossible to get right. I’ve looked at this many times, but in order for things to work properly, you have to create a huge, huge sandbox file.
Especially if your application uses any kind of Apple framework.
I'd feel safer with default-deny on reads as well, but I know from past experience that this gets tricky fast - tools like Node.js and uv and Python all have a bunch of files they need to be able to read that you might not predict in advance.
Might still be possible to do that in a DX-friendly way though, if you make it easy to manually approve reads the first time and use that to build a profile that can be reused on subsequent command invocations.
That being said, what should the default DX be? Which paths should be denied by default? That's something I've been thinking about and I'd love to hear your thoughts.
That's a really tough question. I always worry about credentials that are tucked away in ~/.folders in my home directory like in ~/.aws - but you HAVE to provide access to some of those like ~/.claude because otherwise Claude Code won't work.
That's why rather than a default set I'm interested in an option where I get to approve things on first run - maybe something like this:
zerobox --build-profile claude-profile.txt -- claude
The above command would create an empty claude-profile.txt file and then give me a bunch of interactive prompts every time Claude tried to access a file, maybe something like:
claude wants to read ~/.claude/config.txt
A) allow that file, D) allow full ~/.claude directory, X) exit
You would then clatter through a bunch of those the first time you run Claude and your decisions would be written to claude-profile.txt - then once that file exists you can start Claude in the future like this:
zerobox --profile claude-profile.txt -- claude
(This is literally the first design I came up with after 30s of thought, I'm certain you could do much better.)
Fantastic! I like that idea. I'm also exploring an option to define profiles, but also to have predefined profiles that ship with the binary (e.g. a Claude profile, blocking all `.env` reads, etc.)
I agree to some extent. I'm using the OpenAI Codex crates for sandboxing though, which I think are properly tested? They launched last year and have iterated many times. I will add a note though, thanks!
I like tools like this, but they all seem to share the same underlying shape: take an arbitrary process and try to restrict it with OS primitives + some policy layer (flags, proxies, etc).
That works, but it also means correctness depends heavily on configuration, i.e. you’re starting with a lot of ambient authority and trying to subtract from it, and enforcement ends up split across multiple layers (kernel, wrapper, proxy).
An alternative model is to flip it: Instead of sandboxing arbitrary programs, run workflows in an environment where there is no general network/filesystem access at all, and every external interaction has to go through explicit capabilities.
In that setup, there’s nothing to "block" because the dangerous primitives aren’t exposed; execution can be deterministic/replayable, so you can actually audit behavior; and secrets don’t enter the execution context, they’re only used at the boundary.
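A toy sketch of the shape I mean (illustrative Rust, not any particular framework):

```rust
// Capability-style sketch: the workflow has no ambient filesystem or network
// access; it can only act through the handles the host passes in.

trait HttpCapability {
    /// Only hosts the policy grants are reachable; everything else simply
    /// doesn't exist from the workflow's point of view.
    fn get(&self, url: &str) -> Result<String, String>;
}

trait StorageCapability {
    fn put(&self, key: &str, value: &str);
}

/// The workflow receives exactly the capabilities it was granted, nothing more.
/// There is nothing to "block" because no raw socket or fs handle is ever handed in.
fn workflow(http: &dyn HttpCapability, store: &dyn StorageCapability) -> Result<(), String> {
    let body = http.get("https://api.openai.com/v1/models")?;
    store.put("models", &body);
    Ok(())
}

// Host-side implementation: this boundary is where policy is enforced and
// where real secrets would be attached to outgoing requests.
struct AllowlistedHttp {
    allowed_hosts: Vec<String>,
}

impl HttpCapability for AllowlistedHttp {
    fn get(&self, url: &str) -> Result<String, String> {
        if self.allowed_hosts.iter().any(|h| url.contains(h.as_str())) {
            Ok(format!("response from {url}")) // a real impl would perform the request
        } else {
            Err(format!("no capability for {url}"))
        }
    }
}

struct InMemoryStore;
impl StorageCapability for InMemoryStore {
    fn put(&self, key: &str, value: &str) {
        println!("stored {key} ({} bytes)", value.len());
    }
}

fn main() {
    let http = AllowlistedHttp { allowed_hosts: vec!["api.openai.com".to_string()] };
    workflow(&http, &InMemoryStore).unwrap();
}
```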
It feels closer to capability-based systems than traditional sandboxing. Curious how people here think about that tradeoff vs OS-level sandbox + proxy approaches.
Zerobox uses the same kernel mechanisms (namespaces + seccomp) but with no daemon, no root, and a cold start of ~10ms (Docker is much worse in that regard).
Docker gives you full filesystem isolation and resource limits. Zerobox gives you granular file/network/credential controls with near-zero overhead. You can in fact use Zerobox _inside_ Docker (e.g. for secret management).
Forgot about that, was mostly thinking about how AI agents with unrestricted permissions would ideally have some external logging and monitoring, so there would be a record of what it touched. A trace has all of the raw information, so some kind of wrapper around that would be useful.
This is more a criticism of codex's linux-sandboxing, which you're just wrapping, but it's the first I've ever looked at it. I don't see how it makes sense to invoke bwrap as a forked subprocess. Bubblewrap can't do anything beyond what you can do with unshare directly, which you can simply invoke as a system call without needing to spawn a subprocess or requiring the user to have bwrap installed. It kind of reeks of amateur hour when developers effectively just translate shell scripts into compiled languages by using whatever variant of "system" is available to make the same command invocations you would make through a shell, as opposed to actually using the system call API. Especially when the invocation is crafted from user input, there's a long history of exploits arising from stuff like this. Writing it in Rust does nothing for you when you're just using Rust to call a different CLI tool that isn't written in Rust.
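For what it's worth, the in-process version is roughly this (a sketch using the nix crate with the "sched" and "mount" features; uid_map/gid_map setup, pivot_root, seccomp and real error handling are all omitted, and the target directories must already exist):

```rust
// Rough sketch of doing the namespace setup in-process instead of shelling out to bwrap.
use nix::mount::{mount, MsFlags};
use nix::sched::{unshare, CloneFlags};

fn enter_sandbox() -> nix::Result<()> {
    // New user, mount and network namespaces for the current process:
    // no extra binary, no argv string to get quoting wrong.
    unshare(CloneFlags::CLONE_NEWUSER | CloneFlags::CLONE_NEWNS | CloneFlags::CLONE_NEWNET)?;

    // Stop mount changes from propagating back to the host...
    mount(
        None::<&str>,
        "/",
        None::<&str>,
        MsFlags::MS_REC | MsFlags::MS_PRIVATE,
        None::<&str>,
    )?;

    // ...then bind-mount only what the policy allows (illustrative path)...
    mount(
        Some("/usr"),
        "/tmp/sandbox-root/usr",
        None::<&str>,
        MsFlags::MS_BIND,
        None::<&str>,
    )?;

    // ...and make the bind read-only (bind mounts need a separate remount for flags to apply).
    mount(
        None::<&str>,
        "/tmp/sandbox-root/usr",
        None::<&str>,
        MsFlags::MS_REMOUNT | MsFlags::MS_BIND | MsFlags::MS_RDONLY,
        None::<&str>,
    )?;
    Ok(())
}

fn main() -> nix::Result<()> {
    enter_sandbox()?;
    // exec the agent process here, already confined by the namespaces above
    Ok(())
}
```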
Thanks for sharing this, I read your comment multiple times. It is true that the program being written in Rust doesn't solve the problem of spawning subprocesses, but what would the alternative be in that case?
I'd love to learn more please. I'm interested in sandboxing AI tools/agents regardless of the underlying mechanism (I explored Firecracker VMs briefly as well, terrible cross platform support though).