Using proxies to hide secrets from Claude Code

132 points, posted 25 days ago
by drewgregory

60 Comments

dtkav

20 days ago

I'm working on something similar called agent-creds [0]. I'm using Envoy as the transparent (MITM) proxy and macaroons for credentials.

The idea is that you can arbitrarily scope down credentials with macaroons, both in terms of scope (only certain endpoints) and time. This really limits the damage an agent can do, and it also means that leaked credentials are already expired within a few minutes. With macaroons you can design the authz scheme that *you* want for any arbitrary API.
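Macaroons are essentially chained HMACs, so attenuation needs no server round-trip. A hypothetical stdlib sketch of the idea (not agent-creds' actual code; caveat syntax and key names are made up):

```python
import hashlib
import hmac
import time

def mint(root_key: bytes, identifier: str):
    """Mint a macaroon: the signature is HMAC(root_key, identifier)."""
    sig = hmac.new(root_key, identifier.encode(), hashlib.sha256).digest()
    return [identifier], sig

def attenuate(caveats, sig, caveat: str):
    """Chain the HMAC over the new caveat. Anyone holding the token can
    add restrictions offline, but nobody can remove them."""
    new_sig = hmac.new(sig, caveat.encode(), hashlib.sha256).digest()
    return caveats + [caveat], new_sig

def check_caveat(caveat: str, request: dict) -> bool:
    key, _, value = caveat.partition(" = ")
    if key == "path":
        return request["path"].startswith(value)
    if key == "expires":
        return time.time() < float(value)
    return False  # unknown caveats fail closed

def verify(root_key: bytes, caveats, sig, request: dict) -> bool:
    """The proxy recomputes the HMAC chain and enforces every caveat."""
    if not all(check_caveat(c, request) for c in caveats[1:]):
        return False
    expected = hmac.new(root_key, caveats[0].encode(), hashlib.sha256).digest()
    for caveat in caveats[1:]:
        expected = hmac.new(expected, caveat.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

# Attenuate a broad token down to one endpoint for five minutes:
caveats, sig = mint(b"proxy-root-key", "dev-machine")
caveats, sig = attenuate(caveats, sig, "path = /api/products")
caveats, sig = attenuate(caveats, sig, f"expires = {time.time() + 300}")
```

Dropping a caveat invalidates the signature, which is why the leaked-token damage is bounded by whatever caveats were on it.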

I'm also working on a fuse filesystem to mount inside of the container that mints the tokens client-side with short expiry times.

https://github.com/dtkav/agent-creds

ashwinr2002

18 days ago

> With macaroons you can design the authz scheme that you want for any arbitrary API.

How would you build such an authz scheme? When Claude asks permission to access a new endpoint, if the user allows it, do you then reissue the macaroons?

dtkav

18 days ago

There are two parts here:

1. You can issue your own tokens which means you can design your own authz in front of the upstream API token.

2. Macaroons can be attenuated locally.

So at the time that you decide you want to proxy an upstream API, you can add restrictions like endpoint path to your scheme.

Then, once you have that authz scheme in place, the developer (or agent) can attenuate permissions within that authz scheme for a particular issued macaroon.

I could grant my dev machine the ability to access e.g. /api/customers and /api/products. If I want to have Claude write a script to add some metadata to my products, I might attenuate my token to /api/products only and put that in the env file for the script.

Now Claude can do development on the endpoint, the token is useless if leaked, and Claude can't read my customer info.

Stripe actually does offer granular authz and short lived tokens, but the friction of minting them means that people don't scope tokens down as much.

ashwinr2002

16 days ago

I understand that, but how do you come up with the endpoints you want Claude to have access to ahead of time?

For example, how do you collect all the endpoints that have access to customer info, per your example?

I thought about it and couldn't figure out a way.

dtkav

16 days ago

I'm not sure I'm fully understanding you, but in my experience I have a few upstream APIs I want to use for internal tools (stripe, gmail, google cloud, anthropic, discord, my own pocketbase instance, redis) but there are a lot of different scripts/skills that need differing levels of credentials.

For example, if I want to write a skill that can pull subscription cancellations from today, research the cancellation reason, and then push a draft email to gmail, then ideally I'd have...

- a 5 minute read-only token for /subscriptions and /customers for stripe

- a 5 minute read-write token to push to gmail drafts

- a 5 minute read-only token to customer events in the last 24h

Claude understands these APIs well (or can research the docs) so it isn't a big lift to rebuild authz, and worst case you can do it by path prefix and method (GET, POST, etc) which works well for a lot of public APIs.
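The path-prefix-and-method scheme described above amounts to a small allowlist check the proxy runs per request. A hypothetical sketch (the token names and grant table are made up, and the real paths would follow each upstream API):

```python
# Hypothetical grants: each token name maps to the
# (HTTP method, path prefix) pairs it is allowed to use.
GRANTS = {
    "stripe-readonly": {("GET", "/v1/subscriptions"), ("GET", "/v1/customers")},
    "gmail-drafts": {("POST", "/gmail/v1/users/me/drafts")},
}

def allowed(token_name: str, method: str, path: str) -> bool:
    """Fail closed: a request passes only if some grant matches both
    the method and a prefix of the request path."""
    return any(
        method == m and path.startswith(prefix)
        for m, prefix in GRANTS.get(token_name, set())
    )
```

Coarse, but for read-only tokens on public APIs a method-plus-prefix check already rules out most of the damage.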

I feel like exposing the API capability is the easy part, and being able to get tight-fitting principle-of-least-privilege tokens is the hard part.

badeeya

19 days ago

made with ai?

dtkav

19 days ago

Yeah, it says so at the top of the README (though I suppose I could have put that in the comment too). I'm not building a product, just sharing a pattern for internal tooling.

Someone on another thread asked me to share it so I had claude rework it to use docker-compose and remove the references to how I run it in my internal network.

jackfranklyn

25 days ago

The proxy pattern here is clever - essentially treating the LLM context window as an untrusted execution environment and doing credential injection at a layer it can't touch.

One thing I've noticed building with Claude Code is that it's pretty aggressive about reading .env files and config when it has access. The proxy approach sidesteps that entirely since there's nothing sensitive to find in the first place.

Wonder if the Anthropic team has considered building something like this into the sandbox itself - a secrets store that the model can "use" but never "read".

mike-cardwell

19 days ago

> a secrets store that the model can "use" but never "read".

How would that work? If the AI can use it, it can read it. E.g:

    secret-store "foo" > file
    cat file

You'd have to be very specific about how the secret can be used in order for the AI not to be able to figure out what it is. For example, you could provide an HTTP proxy in the sandbox that injects an HTTP header containing the secret when the secret is for accessing a website, and tell the AI to use that proxy. But you'd also have to scope down which URLs the proxy can access with that secret, otherwise the AI could just visit a page like this to read back the headers that were sent:

https://www.whatismybrowser.com/detect/what-http-headers-is-...

Basically, for every "use" of a secret, you'd have to write a dedicated application which performs that task in a secure manner. It's not just the case of adding a special secret store.
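Concretely, the scoping decision such a proxy has to make is: attach a secret only when the destination URL falls inside that secret's allowlist, and attach nothing otherwise. A hypothetical sketch (the token name and host/prefix pairs are invented):

```python
from urllib.parse import urlsplit

# Hypothetical scopes: each secret may only be sent to
# these (host, path prefix) pairs.
SCOPES = {
    "github-token": [("api.github.com", "/repos/myorg/")],
}

def headers_for(url: str, secrets: dict) -> dict:
    """Return auth headers only for in-scope URLs. An attacker-chosen
    URL, like a header-echo site, gets no credentials at all."""
    parts = urlsplit(url)
    for name, pairs in SCOPES.items():
        for host, prefix in pairs:
            if parts.hostname == host and parts.path.startswith(prefix):
                return {"Authorization": f"Bearer {secrets[name]}"}
    return {}
```

This is exactly the "dedicated application per use" point: the allowlist has to be hand-built for each secret, per upstream API.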

ashwinr2002

18 days ago

This seems like an under-rated comment. You are right, this is a vulnerability and the blog doesn't talk about this.

ipython

19 days ago

I guess I don't understand why anyone thinks giving an LLM access to credentials is a good idea in the first place? It's been demonstrated best practice to separate authentication/authorization from the LLM's context window/ability to influence for several years now.

We spent the last 50 years of computer security getting to a point where we keep sensitive credentials out of the hands of humans. I guess now we have to take the next 50 years to learn the lesson that we should keep those same credentials out of the hands of LLMs as well?

I'll be sitting on the sideline eating popcorn in that case.

JoshuaDavid

19 days ago

That's how they did "build an AI app" back when the claude.ai coding tool was javascript running in a web worker on the client machine.

ironbound

19 days ago

Sounds like an attacker could hack Anthropic and get access to a bunch of companies via the credentials Claude Code ingested?

iterateoften

20 days ago

It could even hash individual keys and scan context locally before sending to check if it accidentally contains them.
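A hypothetical sketch of that local scan: hash each secret once up front, then hash candidate tokens in the outgoing context, so the scanner itself never stores plaintext:

```python
import hashlib
import re

def fingerprint(values):
    """Store only SHA-256 digests of the secrets."""
    return {hashlib.sha256(v.encode()).hexdigest() for v in values}

def context_leaks(context: str, prints: set) -> bool:
    """Hash every long token-like substring and compare against the
    fingerprints. Only catches verbatim leaks, not transformed ones."""
    for token in re.findall(r"[A-Za-z0-9_\-]{16,}", context):
        if hashlib.sha256(token.encode()).hexdigest() in prints:
            return True
    return False
```

The obvious limitation: if the model paraphrases, splits, or re-encodes the secret, an exact-token hash match won't catch it.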

edstarch

19 days ago

While sandboxing is definitely more secure... Why not put a global deny on .env-like filename patterns as a first measure?

samlinnfer

20 days ago

Here's the set up I use on Linux:

The idea is to completely sandbox the program and allow access only to specific bind-mounted folders. But we also want the frills of using GUI programs, audio, and network access. runc (https://github.com/opencontainers/runc) allows us to do exactly this.

My config sets up a container with folders bind mounted from the host. The only difficult part is setting up a transparent network proxy so that all the programs that need internet just work.

The container has a process namespace, network namespace, etc., and has no access to the host except through the bind-mounted folders. Network is provided via a domain socket inside a bind-mounted folder. GUI programs work by passing through a Wayland socket in a folder and setting environment variables.

The setup looks like this:

    * config.json - runc config
    * run.sh - runs runc and the proxy server
    * rootfs/ - runc rootfs (created by exporting a docker container) `mkdir rootfs && docker export $(docker create archlinux:multilib-devel) | tar -C rootfs -xvf -`
    * net/ - folder that is bind mounted into the container for networking

Inside the container (inside rootfs/root):

    * net-conf.sh - transparent proxy setup
    * nft.conf - transparent proxy nft config
    * start.sh - run as a user account

Clone-able repo with the files: https://github.com/dogestreet/dev-container

ekidd

20 days ago

I have a version of this without the GUI, but with shared mounts and user ID mapping. It uses systemd-nspawn, and it's great.

In retrospect, agent permission models are unbelievably silly. Just give the poor agents their own user accounts, credentials, and branch protection, like you would for a short-term consultant.

samlinnfer

20 days ago

The other reason to sandbox is to reduce damage if another NPM supply chain attack drops. User accounts should solve the problem, but they are just too coarse-grained and fiddly, especially when you have path hierarchies. I'd hate to have another dependency on systemd, hence runc only.

idorosen

20 days ago

Try firejail instead.

samlinnfer

20 days ago

Not even close to the same thing, with this setup you can install dev tools, databases, etc and run inside the container.

It's a full development environment in a folder.

JimDabell

20 days ago

Is this a reimplementation of Fly.io’s Tokenizer? How does it compare?

https://fly.io/blog/tokenized-tokens/

https://github.com/superfly/tokenizer

dtkav

20 days ago

IMHO there are a couple of axes that are interesting in this space.

1. What do the tokens look like that you're storing in the client? This could just be the secret (but encrypted), or you could design a whole granular authz system. It seems like Tokenizer is the former and Formal is the latter. I think macaroons are an interesting choice here.

2. Is the MITM proxy transparent? Node, curl, etc. allow you to specify a proxy as an environment variable, but if you're willing to mess with the certificate store then you can run arbitrary unmodified code. It seems like both Tokenizer and Formal are explicit proxies.

3. What proxy are you using, and where does it run? Depending on the authz scheme/token format you could run the proxy centrally, or locally as a "sidecar" for your dev container/sandbox.

Rafert

19 days ago

The concept of a proxy injecting/removing sensitive data has been around for much longer, e.g. VGS has a JS SDK and proxy to handle credit card data for you and keep you out of PCI scope.

eddythompson80

20 days ago

We truly are living in the dumbest timeline aren’t we.

I was just having an argument with a high-level manager 2 weeks ago about how we already have an outbound proxy that does this, but he insisted that a MITM proxy is not the same as fly.io's "tokenizer". See, that one tokenizes every request; ours just sets the Authorization header for service X. I tried to explain that they're all MITM proxies altering the request, just for him to say "I don't care about altering the request, we shouldn't alter the request. We just need to tokenize the connection itself".

1vuio0pswjnm7

19 days ago

"When hostnames and headers are hard to edit: mitmproxy add-ons"

"The mitmproxy tool also supports addons where you can transform HTTP requests between Claude Code and third-party web servers. For example, you could write an add-on that intercepts https://api.anthropic.com and updates the X-API-Key header with an actual Anthropic API Key."
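A minimal sketch of such an add-on, assuming the real key lives only in the proxy's environment (the REAL_ANTHROPIC_KEY variable name is made up):

```python
# reroute_hosts.py - mitmproxy add-on that swaps in the real API key.
import os

class InjectAnthropicKey:
    def request(self, flow):
        # mitmproxy calls this hook for every client request.
        if flow.request.pretty_host == "api.anthropic.com":
            flow.request.headers["x-api-key"] = os.environ["REAL_ANTHROPIC_KEY"]

addons = [InjectAnthropicKey()]
```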

"You can then pass this add-on via mitmproxy -s reroute_hosts.py."

If using HAProxy, there is no need to write "add-ons"; just edit the configuration file and reload.

For example, something like

   http-request set-header x-api-key API_KEY if { hdr(host) api.anthropic.com }

   echo reload|socat stdio unix:/path-to-socket/socket-name

For me, HAProxy is smaller and faster than mitmproxy.

theozero

19 days ago

A proxy is a good solution although a bit more involved. A great first step is just getting any secrets - both the ones the AI actually needs access to and your application secrets - out of plaintext .env files.

A great way to do that is either encrypting them or pulling them declaratively from a secure backend (1Pass, AWS Secrets Manager, etc). Additional protection is making sure that those secrets don't leak, either in outgoing server responses, or in logs.

https://varlock.dev (open source!) can help with the secure injection, log redaction, and provide a ton more tooling to simplify how you deal with config and secrets.

TheRoque

20 days ago

At the moment I'm just using "sops" [1]. I have my env var files encrypted with AGE encryption. Then I run whatever I want to run with "sops exec-env ..."; it basically forwards the secrets to your program.

I like it because it's pretty easy to use. However, it's not fool-proof: if the editor you use for editing the env vars crashes or is killed suddenly, it will leave a "temp" file with the decrypted vars on your computer. Also, if this same editor has AI features, it may read the decrypted vars anyway.

- [1]: https://github.com/getsops/sops

jclarkcom

20 days ago

I do something similar, but this only protects secrets at rest. If your app has an exploit, an attacker could just export all your secrets to a file.

I prototyped a solution where I use an external debugger to monitor my app: when the app needs a secret, it triggers a breakpoint; the debugger catches it, inspects the call stack of the function requesting the secret, and then copies the secret into the process memory (intended to be erased immediately after use). Not 100% secure, but a big improvement, and a bit more flexible and auditable than a proxy.

paulddraper

20 days ago

Isn’t this (part of) the point of MCP?

eddythompson80

19 days ago

Possibly, but the point is that MCP is a DOA idea. An agent like Claude Code or opencode doesn't need an MCP. It's nonsensical to expect or need an MCP before someone can call you.

There is no `git` MCP either. Opencode is fully capable of running `git add .` or `aws ec2 terminate-instance …` or `curl -XPOST https://…`

Why do we need the MCP? The problem now is that someone can do a prompt injection to tell it to send all your ~/.aws/credentials to a random endpoint. So let's just have a dummy value there, and inject the actual value in a transparent outbound proxy that the agent doesn't have access to.

paulddraper

19 days ago

> Opencode is fully capable of running

> Why do we need the MCP?

> The problem now

And there it is.

I understand that this is an alternative solution, and appreciate it.

data-ottawa

19 days ago

I’ve been using 1Password’s env templates with `op run` for this locally. It hijacks stdout and filters your credentials.

That does not make it immune to Claude’s prying, but at least Claude can then read the .env file and satisfy its need to prove that a credential exists without reading it.

I have found even when I say a credential exists and is correct Claude does not believe me. Which is infuriating. I’m willing to bet Claude’s logs have a gold mine that could own 90% of big tech firms.

josegonzalez

20 days ago

I am gonna be that guy and say it would be nice to share the actual code vs using images to display what the code looks like. It's not great for screen readers or anyone who wants to quickly try out the functionality.

keepamovin

20 days ago

I think people's focus on the threat model from AI corps is wrong. They are not going to "steal your precious SSH/cloud/git credentials" so they can secretly poke through your secret-sauce, botnet your servers or piggy back off your infrastructure, lol of lols. Similarly the possibility of this happening from MCP tool integrations is overblown.

This dangerous misinterpretation of the actual possible threats only conceals the real risks better. What might those real risks be? That is the question. Might they include more subtle forms of nastiness, if anything at all?

I'm of the belief that there will be no nastiness, not really. But if you believe they will be nasty, it at least pays to be rational about the ways in which that might occur, no?

simonw

20 days ago

The risk isn't from the AI labs. It's from malicious attackers who sneak instructions to coding agents that cause them to steal your data, including your environment variable secrets - or cause them to perform destructive or otherwise harmful actions using the permissions that you've granted to them.

keepamovin

19 days ago

Simon, I know you're the AI bigwig, but I'm not sure that's correct. I know that's the "story" (but maybe just where the AI labs would prefer we look?). How realistic is it really that MCP/tools/web search is being corrupted by people to steal prompts/convos like this? I really think this has such a low probability. And if it does happen, the fault lies with the AI labs for letting something like this occur.

Respect for your writing, but I feel you and many others have the risk calculus here backwards.

simonw

19 days ago

Every six months I predict that "in the next six months there will be a headline-grabbing example of someone pulling off a prompt injection attack that causes real economic damage", and every six months it fails to happen.

That doesn't mean the risk isn't there - it means malicious actors have not yet started exploiting it.

Johann Rehberger calls this effect "The Normalization of Deviance in AI", borrowing terminology from the 1986 Space Shuttle Challenger disaster report: https://embracethered.com/blog/posts/2025/the-normalization-...

Short version: the longer a company or community gets away with behaving in an unsafe way without feeling the consequences, the more they are likely to ignore those risks.

I'm certain that's what is happening to us all today with coding agents. I use them in an unsafe way myself.

saagarjha

19 days ago

AI labs currently have no solution for this problem and have you shoulder the risk for it.

keepamovin

19 days ago

Evidence?

saagarjha

19 days ago

I worked on this for a company that got bought by one of the labs (for more than just agent sandboxes, mind you).

keepamovin

18 days ago

Wait, let me get this straight: “there’s no solution” to this apparent giant problem but you work for a company that got bought by an AI corp because you had a solution? Make it make sense.

If you did not solve it why were you bought?

saagarjha

17 days ago

I worked for a company that got bought because it was working on a number of problems of interest to the acquirer. As many of these were hard problems, our efforts and progress on them were more than enough.

keepamovin

16 days ago

OK. Do you know if many AI labs are purchasing in this space? Was your acquisition an outlier or part of a wider trend? Thank you

saagarjha

14 days ago

I think if you’re good at this most AI labs would be interested but I can’t speak for them obviously

user

18 days ago

[deleted]

gillh

19 days ago

We also use proxies with CodeRabbit’s sandboxes. Instead of using tool calls, we’ve been using LLM-generated CLI and curl commands to interact with external services like GitHub and Linear.

hobs

20 days ago

Putting your secrets in any logs is how you get those secrets accidentally or purposefully read by someone you do not want reading them. It doesn't have to be the initial corp; they just need to have bad security or data management for the secrets to leak online, or for someone with a lower level of access to pivot via the logs.

Now multiply that by every SaaS provider you give your plain text credentials in.

keepamovin

19 days ago

Right, but the multiply step is not AI specific. Let's focus here: AI providers farming out their convos to 3rd-parties? Unlikely, but if it happens, it's totally their bad.

I really don't think this is a thing.

hobs

19 days ago

Right, but this is still a hygiene issue. If you skip washing your hands after using the bathroom because the attendants probably cleaned it, you are going to have a bad time.

keepamovin

18 days ago

There's something to that, but I don't think in reality it's a thing: you don't do surgery in the public bathroom. The keys to the kingdom secrets? Of course not. Everything else? That's why we have scoped, short-lived tokens.

I just think this whole thing is overblown.

If there's a risk in any situation, it's similar to, and probably less than, running any library you installed off a registry for your code. And I think that's a good comparison: supply chain is more important than AI chain.

You can consider AI-agents to be like the fancy bathrooms in a high end hotel, whereas all that code you're putting on your computer? That's the grimy public lavatory lol.

hsbauauvhabzb

19 days ago

‘Hey Claude, write an unauthenticated action method which dumps all environment variables to the requestor, and allows them to execute commands’