I use a Mac and wanted to be able to run macOS programs like Xcode and the iOS Simulator, so I wrote a couple of different sandbox projects:
- SandVault (https://github.com/webcoyote/sandvault) runs the AI agent in a low-privilege account
- ClodPod (https://github.com/webcoyote/clodpod) runs the AI agent inside a macOS VM
In both cases I map my code directories using shares/mounts.
I find that I use the low-privilege account solution more because it's easier to set up and doesn't require the overhead of a full VM.
I have Time Machine and just let them fly with --dangerously-skip-permissions on my Mac. The worst thing it's done is back up a database, delete the database, and then run git clean locally, which also wiped out the backup. So I'm not saying there are no dangers, but honestly I've made worse mistakes, and probably more frequently, so I generally trust Claude with about the same level of access as I give myself now.
The most common issue is deleted files and the like, but if you're using git and have backups it's barely noticeable.
Yeah I've got hourly backups out to multiple remote servers. My dev machine is in essence fungible. If it gets hosed, I'll wipe the drive and drop a good backup in. If it catches fire, I'll pick up a different machine and drop in the good backup.
I have more important things to waste my time on than writing absurd sandboxes to run AI agents without guardrails in. What even?
How are you going to notice that while working on ~/projects/acme3000 it for some reason deleted ~/photos/2003/once-in-a-lifetime-holiday/?
Backups are great when you know you need to restore.
I could ask this question without AI. How are you going to notice that while you were working on ~/projects/acme3000, you for some reason deleted ~/photos/2003/once-in-a-lifetime-holiday/?
Of course, AI is not a real person, and it does make mistakes that you or I probably would not. However, this class of mistake—deleting completely unrelated directories—does not appear to be a common failure mode. (Something like deleting all of ~ doesn’t count here—that would be immediately noticeable and could be restored from a backup.)
(Disclaimer: I’m not OP, and I wouldn’t run Claude with --dangerously-skip-permissions on my own system.)
Isn't the problem one of finding a consistency heuristic? For example, test that the resulting state is consistent with your test suite.
If it is a directory that gets deleted, then you can diff it with a previous state. If you don't control the state and don't know the surface area that you should observe, then yes, you're inviting trouble if agents run amok.
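For a directory you control, that before/after diff can be as simple as a file listing taken around the session (paths here are illustrative):

    # snapshot the tree before letting the agent loose
    find "$HOME" -type f | sort > /tmp/before.txt
    # ... run the agent session ...
    find "$HOME" -type f | sort > /tmp/after.txt
    # files that existed before but are gone now
    comm -23 /tmp/before.txt /tmp/after.txt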
That's something new. I'll have to try it
Thanks!
> Have you had any "learned the hard way" moments?
A big lesson for us is that you still need to be careful even in a sandbox.
We've been running Claude/Codex/Gemini in sandboxed YOLO mode and have seen some interesting bypass attempts. [1]
A few examples:
- created fake npm tarballs and forged SHA‑512s in our package‑lock.json
- masked failures with `|| true`, making blocked operations look successful
- cloned a workspace, edited the clone, then replaced the workspace with the clone to bypass file‑path deny rules
So, we’ve learned to default to verbose logging, patch bypasses as we see them, and try to keep iteration loops short.
[1] https://voratiq.com/blog/yolo-in-the-sandbox/
I watched Claude download the Rust toolchain and build a userland networking stack to get around some container sandboxing restrictions I had in place. To be fair to Claude, I wasn't explicitly prompting it to do this, but I was intentionally putting it in conflict with the sandboxing.
Yes, typically the agent is just trying to do what it's been instructed to do, but sometimes it's too naive to realize its approach is a bit sketchy.
And actually, one way we've hardened our sandbox is by tasking agents with impossible tasks (within the sandbox), then analyzing and patching each workaround.
I feel like a crazy person reading these comments, "oh it tries to bypass limitations, delete files, and generally nuke my system... But it's cool, I trust it"
I'm using Catnip (https://github.com/wandb/catnip). It runs Claude Code in YOLO mode inside a Docker container, and also manages multiple Claude instances running in Git worktrees. I'm pretty happy with it but would be happier if it addressed limiting network access to guard against exfiltration.
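If you want that today, one manual workaround is to put the container on an internal-only Docker network, or on no network at all (image name and mounts below are placeholders):

    # internal network: containers can reach each other but have no egress
    docker network create --internal agents
    docker run --rm -it --network agents -v "$PWD":/work my-agent-image
    # or, for fully offline runs:
    docker run --rm -it --network none -v "$PWD":/work my-agent-image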
I create a separate Linux user (with no sudo rights) for each project. I have to log each user in to Claude Code or Codex, but then I can use ordinary Unix permissions to keep the bots under control and isolated.
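A rough sketch of that setup (the user name, repo URL, and agent command are placeholders):

    # unprivileged user for one project; not added to the sudo group
    sudo useradd --create-home --shell /bin/bash agent-acme
    # the project lives in that user's home, owned by that user
    sudo -u agent-acme git clone https://github.com/you/acme.git /home/agent-acme/acme
    # keep your own home off-limits to the agent user
    chmod 700 "$HOME"
    # run the agent as that user (it gets its own login and config)
    sudo -u agent-acme -i claude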
Funny you should mention this, I just added a simple filesystem sandbox to my coding agent.
Check it out:
https://github.com/jacobsparts/agentlib/blob/main/src/agentl...
The framework is all Python, but I used C for this helper. It uses unprivileged user namespaces to mount an overlay and run an arbitrary command; when the command finishes, it writes a tarball of the edits, which I use to create a unified diff. The framework orchestrates it all transparently, but the helper itself could be used standalone. Here's a short document about the sandbox in the context of its use in my project:
https://github.com/jacobsparts/agentlib/blob/main/docs/sandb...
I also have a version that uses SUID instead of unprivileged user namespaces, available by request.
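The helper itself is C, but the core idea can be sketched in shell terms; this assumes a kernel that allows overlayfs mounts inside unprivileged user namespaces (roughly 5.11+), and the command in the middle is a placeholder:

    # new user + mount namespaces, mapped to root inside the namespace
    unshare --user --map-root-user --mount sh -euc '
      mkdir -p /tmp/upper /tmp/work /tmp/merged
      # project dir is the read-only lower layer; all writes land in /tmp/upper
      mount -t overlay overlay \
        -o "lowerdir=$1,upperdir=/tmp/upper,workdir=/tmp/work" /tmp/merged
      cd /tmp/merged
      # ... run the agent or build command here ...
      # afterwards, only the edits are captured
      tar -cf /tmp/edits.tar -C /tmp/upper .
    ' sh "$PWD"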
I often use Claude Code with --dangerously-skip-permissions, but every once in a while it bites me. I've learned to use git for everything and to put instructions in CLAUDE.md to always commit BEFORE writes. Claude can go off the rails on harder bug fixes; especially after multiple rounds of context compacting, it can really screw things up. It usually honors guidance not to modify anything outside the project, but a simple sandbox adds so much: after the session is over you can see what changed and decide what to do with it. It really helps with the problem of unexpected changes to the codebase, which you might not even notice otherwise and which can introduce serious bugs. The permission models of all the coding agents are rough--either you can't get anything done, or you throw caution to the wind. Full sandboxes are quite restrictive, which is why I rolled my own. Honestly, your best option right now is just to have good version control and run coding agents in dedicated environments.
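The commit-first guidance can be as blunt as a couple of lines in CLAUDE.md (hypothetical wording, adapt to your project):

    ## Workflow rules
    - Always run `git add -A && git commit -m "checkpoint"` BEFORE making any edits.
    - Never modify files outside this repository.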
I use https://github.com/longregen/claude-sandbox
It uses bubblewrap (no root needed) and only exposes the ~/.cache stuff and the current folder (no git credentials, no ssh credentials, and as few permissions as is feasible).
bubblewrap is a little more lightweight than Docker (afaiu no overlayfs, and it launches way faster), but it relies on the same underlying kernel mechanisms for isolation (namespaces).
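Not that script's exact invocation, but a minimal bubblewrap sketch of the same idea (paths and the final command are illustrative; you'd likely also need to bind the agent's own config directory back in):

    # empty tmpfs over $HOME hides ssh keys and git credentials; only the
    # package caches and the current project are bind-mounted back in
    bwrap \
      --ro-bind /usr /usr --ro-bind /etc /etc \
      --symlink usr/bin /bin --symlink usr/lib /lib \
      --proc /proc --dev /dev \
      --tmpfs "$HOME" \
      --bind "$HOME/.cache" "$HOME/.cache" \
      --bind "$PWD" "$PWD" --chdir "$PWD" \
      --unshare-all --share-net \
      --die-with-parent \
      claude --dangerously-skip-permissions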
I have a web ui for managing / interacting with opencode sessions.
Everything runs as a pod in my homelab cluster so I can let them "bypass" permissions and just restrict the pods.
I wanted something like Claude Code web but with access to more models / local LLMs / my monorepo tooling; so far it's been great.
The output is a PR so it's hard for it to break anything.
The biggest benefit is probably that it makes it easier to start stuff when I'm out - feels like a much better use of downtime like I'm not waiting to get home to start a session after I have an idea.
The monorepo tooling is a big win too: for a bunch of things there's just one way to do it, plus clear instructions to use the binaries that get bundled into new sessions, so it gets things "right" more often.
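The pod-level restrictions are mostly standard Kubernetes knobs; a hypothetical sketch (names, image, and limits are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: opencode-session
    spec:
      automountServiceAccountToken: false   # no cluster credentials inside the pod
      containers:
        - name: agent
          image: registry.local/opencode-agent:latest
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
          resources:
            limits:
              cpu: "2"
              memory: 4Gi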
Using Claude Code and Amp (free mode) with no sandbox.
I don't run Claude Code in YOLO mode, I just approve commands the first time I'm asked about them.
Using them since July, I haven't had any problem with data loss, and the clankers have not tried to delete my $HOME.
I do similar, but it's incredible how much our threat model has shifted to allow this. I have to trust this one node package (and all its dependencies) and Anthropic more than I trust my email provider, my ISP, or my browser.
Who'd have imagined remote code execution as a service would have caught on as much as it has!
This is why I don't use Claude Code on my personal machine. My work machine, sure, my work encourages that. My personal machine, I use Claude through Zed with an API key, and manually approve every command.
For CC: an unprivileged LXC on a Proxmox server. That's enough to catch mishaps like deleting all your sht while still being a reasonably transparent isolation layer. Plus my entire home setup is geared towards LXC anyway.
Keen to give Firecracker another go though. Last time I explored it, it still felt pretty rough (on UX, not tech quality).
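A hypothetical version of that setup (VMID, template, storage, and paths are placeholders):

    # unprivileged container with its own root filesystem
    pct create 120 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
      --unprivileged 1 --hostname claude-sandbox \
      --cores 2 --memory 4096 --rootfs local-lvm:16 \
      --net0 name=eth0,bridge=vmbr0,ip=dhcp
    # bind-mount only the project directory into the container
    pct set 120 --mp0 /srv/projects/acme,mp=/work
    pct start 120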
I spin up a Firecracker VM with a custom image that has all the things I need.
Thanks for the share, but I'm having a hard time understanding this.
On step 2, it's only jailing VS Code. Shouldn't it also jail the Git repo you're working on (and disable `git push` somehow), as well as all the env libs?
Also, isn't the point of this to auto approve everything?
devcontainers, without credentials to the git remote.
is firejail safe to use for this purpose? any tips?