201 points · 131 comments · 1 month ago · parkaboy
github.com
hardsnow
londons_explore
I assume an AI which wanted to read a secret and found it wasn't in .env would simply put print(os.environ) in the code and run it...
That's certainly what I do as a developer when trying to debug something that has complex deployment and launch scripts...
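The point can be made concrete in a few lines (illustrative only — the `DEMO_API_KEY` name and value are made up so the snippet is self-contained):

```python
import os

# Seed a sample "secret", standing in for whatever the .env loader injected.
os.environ["DEMO_API_KEY"] = "sk-not-a-real-key"

# The one-liner the comment describes: any code the agent runs can dump the
# loaded environment, no matter how the .env file on disk was protected.
leaked = dict(os.environ)
print(leaked["DEMO_API_KEY"])
```

Encryption at rest does nothing once the secret is live in the process.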
Zizizizz
This software has done this for years
ctmnt
So really all you’re doing is protecting against accidental file ingestion. Which can more easily be done via a variety of other methods. (None of which involve trusting random code that’s so fresh out of the oven its install instructions are hypothetical.)
There are other mismatches between your claims / aims and the reality. Some highlights: You’re not actually zeroizing the secrets. You call `std::process::exit()` which bypasses destructors. Your rotation doesn’t rotate the salt. There are a variety of weaknesses against brute forcing. `import` holds the whole plain text file in memory.
Again, none of these are problems in the context of just preventing accidental .env file ingestion. But then why go to all this trouble? And why make such grand claims?
Stick to established software and patterns, don’t roll your own. Also, don’t use .env if you care about security at all.
My favorite part: I love that “wrong password returns an error” is listed as a notable test. Thanks Claude! Good looking out.
saezbaldo
An agent executing code in your environment has implicit access to anything that environment can reach at runtime. Encrypting .env moves the problem one print statement away.
The proxy approaches (Airut, OrcaBot) get closer because they move the trust boundary outside the agent's process. The agent holds a scoped reference that only resolves at a chokepoint you control.
But the real issue is what stephenr raised: why does the agent have ambient access at all? Usually because it inherited the developer's shell, env, and network. That's the actual problem. Not the file format.
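A minimal sketch of that scoped-reference idea (all names are illustrative — this is not Airut's or OrcaBot's actual API; the dict stands in for a broker process the agent cannot reach):

```python
import secrets

class CredentialBroker:
    """Runs at the trust boundary, outside the agent's process."""
    def __init__(self):
        self._vault = {}

    def issue(self, real_secret: str) -> str:
        # The agent only ever sees this opaque reference.
        token = "ref-" + secrets.token_hex(8)
        self._vault[token] = real_secret
        return token

    def resolve(self, token: str) -> str:
        # The chokepoint: auth, scoping, and audit logging would live here.
        return self._vault[token]

    def revoke(self, token: str) -> None:
        self._vault.pop(token, None)

broker = CredentialBroker()
ref = broker.issue("sk-real-production-key")
print(ref)            # the agent holds only "ref-..."
broker.revoke(ref)    # one call invalidates the reference everywhere
```

Dumping the agent's environment now yields only a revocable token, not the credential itself.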
pedropaulovc
zith
Or what type of secrets are stored in the local .env files that the LLM should not see?
I try to run environments where developers don't get to see production secrets at all. Of course this doesn't work for small teams or solo developers, but even then the secrets are very separated from development work.
Zizizizz
A recent project by the creator of mise is related too
hjkl_hacker
alexandriaeden
handfuloflight
theozero
Additionally, it redacts secrets from logs (one of the other main concerns mentioned in these comments), and in JS codebases it also stops leaks in outgoing server responses.
There are plugins to pull from a variety of backends, and you can mix and match - ie use 1Pass for local dev, use your cloud provider's native solution in prod.
Currently it still injects the secrets via env vars - which in many cases is absolutely safe - but there's nothing stopping us from injecting them in other ways.
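The log-redaction idea reduces to something like this sketch (not the tool's actual implementation — just the shape of it): given the set of secret values currently injected, scrub them from any text before it reaches a log or response.

```python
def redact(text: str, secret_values: set[str]) -> str:
    """Replace every known secret value with a placeholder."""
    for value in secret_values:
        text = text.replace(value, "[REDACTED]")
    return text

# Illustrative secret and log line:
secrets_in_use = {"sk-live-abc123"}
line = "request failed: auth header was Bearer sk-live-abc123"
print(redact(line, secrets_in_use))
```

Real implementations also handle encodings (base64, URL-escaped) of the same value, which plain substring replacement misses.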
stephenr
can read files in your project directory, which means a plaintext .env file is an accidental secret dump waiting to happen
It's almost like having a plaintext file full of production secrets on your workstation is a bad fucking idea.
So this is apparently the natural evolution of having spicy autocomplete become such a common crutch for some developers: existing bad decisions they were ignoring cause even bigger problems than they would normally, and thus they invent even more ridiculous solutions to said problems.
But this isn't all just snark and sarcasm. I have a serious question.
Why, WHY for the love of fucking milk and cookies are you storing production secrets in a text file on your workstation?
I don't really understand the obsession with a .ENV file like that (there are significantly better ways to inject environment variables) but that isn't the point here.
Why do you have live secrets for production systems on your workstation? You do understand the purpose of having staging environments, right? If the secrets are for non-production systems and can still cause actual damage, then they aren't non-production after all, are they?
Seriously. I could paste the entirety of our local dev environment variables into this comment and have zero concerns, because they inherently point to non-production systems:
- payment gateway sandboxes;
- SES sending profiles configured to only send mail to specific addresses;
- DB/Redis credentials which are IP restricted;
For production systems? Absolutely protect the secrets. We use GPG'd files that are ingested during environment setup, but use what works for you.
enjoykaz
tiku
Won't stop any seasoned hacker, but it will stop the automated scripts (for now) from easily getting the other keys.
gverrilla
When would something like that not work?
ivannovazzi
We built KeyEnv (https://keyenv.dev) for exactly that: the CLI pulls AES-256 encrypted secrets at runtime so .env files never exist locally. `keyenv run -- npm start` and secrets are injected as env vars, then gone.
The tradeoff is it requires a network hop and team buy-in, whereas enveil is local. Different threat models — enveil protects secrets already on disk from AI tools, KeyEnv prevents them from touching disk at all.
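The inject-at-runtime pattern the comment describes looks roughly like this (a sketch: `fetch_secrets` stands in for the CLI's fetch-and-decrypt step, and the variable names are made up):

```python
import os
import subprocess
import sys

def fetch_secrets() -> dict:
    # Stand-in for pulling and decrypting secrets over the network.
    return {"API_KEY": "injected-at-runtime"}

def run_with_secrets(cmd: list) -> str:
    # Merge fetched secrets into the child's environment; nothing is
    # ever written to disk, and the values vanish when the child exits.
    env = {**os.environ, **fetch_secrets()}
    return subprocess.run(cmd, env=env, capture_output=True, text=True).stdout

out = run_with_secrets([sys.executable, "-c",
                        "import os; print(os.environ['API_KEY'])"])
print(out.strip())
```

The child process sees the secret; the filesystem never does — which addresses the accidental-ingestion problem but, as noted elsewhere in the thread, not an agent running code inside that child.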
[deleted]
nvader
collimarco
jarito
SteveVeilStream
monster_truck
appsoftware
tuvistavie
brianthinks
Here's why: even if you hide .env, an agent running arbitrary code can read /proc/self/environ, grep through shell history, inspect running process args, or just read the application config that loads those secrets. The attack surface isn't one file — it's the entire execution environment.
What actually works in practice (from observing my own access model):
1. Scoped permissions at the platform level. I have read/write to my workspace but can't touch system configs. The boundaries aren't in the files — they're in what the orchestrator allows.
2. The surrogate credential pattern mentioned here is the strongest approach. Give the agent a revocable token that maps to real credentials at a boundary it can't reach.
3. Audit trails matter more than prevention. If an agent can execute code, preventing all possible secret access is a losing game. Logging what it accesses and alerting on anomalies is more realistic.
The real threat model isn't 'agent stumbles across .env' — it's 'agent with code execution privileges decides to look.' Those require fundamentally different mitigations.
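The `/proc/self/environ` vector is easy to demonstrate (a sketch: the variable name is made up, and the snippet falls back to `os.environ` on platforms without procfs, which makes the same point):

```python
import os
import subprocess
import sys

# Spawn a child with a secret injected into its environment.
secret_env = {**os.environ, "HIDDEN_SECRET": "still-visible"}

if os.path.exists("/proc/self/environ"):
    # Linux: the kernel exposes the child's startup environment as a file,
    # so hiding the .env file on disk changes nothing.
    probe = "print(open('/proc/self/environ','rb').read().decode(errors='replace'))"
else:
    # Portable fallback: read it straight out of the process environment.
    probe = "import os; print('HIDDEN_SECRET=' + os.environ['HIDDEN_SECRET'])"

out = subprocess.run([sys.executable, "-c", probe],
                     env=secret_env, capture_output=True, text=True).stdout
print("HIDDEN_SECRET=still-visible" in out)
```

Either path recovers the secret, which is why the comment argues for boundaries in the orchestrator rather than in the files.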
NamlchakKhandro
Instead you need to do what hardsnow is doing: https://smackernews.com/item/47133573
Or what the https://github.com/earendil-works/gondolin is doing
m-hodges
¹ https://github.com/hodgesmr/agent-fecfile?tab=readme-ov-file...
yanosh_kunsh
chickensong
joshribakoff
md-
kevincloudsec
l332mn
anshumankmr
rainmaking
ivannovazzi
enveil is a good defense-in-depth layer for existing .env workflows. But if you can change the habit, removing the file at the source is cleaner.
Disclosure: I'm one of the builders of KeyEnv.
oulipo2
BloondAndDoom
navigate8310
kittikitti
I dislike the gatekeepers so I will follow this implementation and see where it goes. Maybe they like you better.
edgecasehuman
KingOfCoders
frumiousirc
MY_API_KEY=$(pass my/api/key | head -1) python manage.py runserver
billfor
SoftTalker
efields
0x457
[deleted]
thomc
zahlman
Spawns your subprocess with the resolved values injected into its environment
... So if the process is expecting a secret on stdin or in a command-line argument, I need to make a wrapper?
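Yes — for stdin-expecting programs a thin wrapper is the usual answer. A sketch (the variable name and child command are illustrative): the env-injection tool puts the secret in the wrapper's environment, and the wrapper forwards it on the child's stdin so it never appears in argv.

```python
import os
import subprocess
import sys

def run_with_secret_on_stdin(cmd: list, var: str) -> str:
    # Pass the secret via stdin rather than argv, so it doesn't show up
    # in `ps` output or shell history.
    return subprocess.run(cmd, input=os.environ[var],
                          capture_output=True, text=True).stdout

os.environ["DB_PASSWORD"] = "hunter2"  # stand-in for the injected value
out = run_with_secret_on_stdin(
    [sys.executable, "-c", "import sys; print('got: ' + sys.stdin.read())"],
    "DB_PASSWORD")
print(out.strip())
```

Command-line arguments are worse than stdin here, since any user on the machine can read another process's argv.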
Datagenerator
Kernel keyring support would be the next step?
PASS=$(keyctl print $(keyctl search @s user enveil_key))
frgturpwd
I’ve built this in Airut and so far seems to handle all the common cases (GitHub, Anthropic / Google API keys, and even AWS, which requires slightly more work due to the request signing approach). Described in more detail here: https://github.com/airutorg/airut/blob/main/doc/network-sand...