I built an IAM-style firewall for AI agents after Claude read my .env
Source: DEV Community
I've been using Claude Code to build stuff for a while now. It's fast, it writes decent code, and it saves me hours. But a few days ago I had a moment that made me stop and think.

The moment that got me was when Claude grabbed my .env file on its own while trying to push a package. PyPI token sitting right there in the chat. No warning, no confirmation, nothing. If that had been my Stripe key or a database URL, it would have been the same story.

And that's the problem. These AI agents have real access to your filesystem, your shell, your git history, your secrets. They don't have bad intentions, they just don't have boundaries. If you ask one to rm -rf something, it will. If you ask it to force push to main, it will. If it decides the fastest path to completing a task involves reading a credentials file, it's going to read it.

So I built agsec.

What it is

agsec is a policy engine that sits between the AI agent and your system. Before the agent can do anything (run a command, read a file, make
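To make the idea concrete, here is a minimal sketch of the kind of deny-by-default check such a policy engine performs before letting an agent act. This is illustrative only, not agsec's actual API: the rule format and the function names (`check_file_read`, `check_command`) are hypothetical, invented for this example.

```python
import fnmatch

# Hypothetical policy rules: deny by default, allow only what a rule permits.
DENY_PATH_PATTERNS = ["*.env", "*/.env", "*.pem", "*credentials*"]
ALLOWED_COMMANDS = {"ls", "cat", "git"}
BLOCKED_ARG_PAIRS = {("git", "push"), ("rm", "-rf")}

def check_file_read(path: str) -> bool:
    """Return True if the agent may read this path (no secret-like pattern matches)."""
    return not any(fnmatch.fnmatch(path, pat) for pat in DENY_PATH_PATTERNS)

def check_command(argv: list[str]) -> bool:
    """Return True if the agent may run this command (allow-listed, no blocked arg)."""
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        return False
    return all((argv[0], arg) not in BLOCKED_ARG_PAIRS for arg in argv[1:])

print(check_file_read("project/.env"))            # → False: secrets blocked
print(check_command(["git", "push", "--force"]))  # → False: push blocked
print(check_command(["ls", "-la"]))               # → True: harmless command allowed
```

The point of the design is that the agent never touches the filesystem or shell directly; every action passes through a check like this first, so a .env read fails closed instead of leaking a token into the chat.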