We are roaring into useful, agentic AI. I’ve been saying for a while now that we’re heading into it faster than the security models can keep up. So I wasn’t surprised to see Tailscale announce Aperture, a governance layer for AI agents.
The trouble with AI agents is that they run afoul of the overriding security principle of the last thirty years: least privilege, denying access whenever possible. To be useful, an AI agent needs access. The security model has to adapt.
Aperture sits between your AI tools and the services they connect to. It routes requests through a gateway tied to user identity. Instead of distributing API keys to every agent and user, you keep one key per provider on the gateway. Aperture tracks who initiated each action and what the agent actually did. If something goes wrong, you have a trail.
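The gateway pattern described above can be sketched in a few lines. This is an illustrative, in-process mock, not Aperture's actual API: the class names, the `forward` method, and the policy shape are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of an identity-aware gateway: agents never hold
# provider keys; the gateway attaches credentials and records who did what.
import datetime
from dataclasses import dataclass

@dataclass
class AuditRecord:
    user: str       # the human who initiated the action
    agent: str      # the agent acting on their behalf
    provider: str   # the upstream service the request targets
    action: str     # what the agent actually did
    timestamp: str

class Gateway:
    def __init__(self, provider_keys: dict[str, str]):
        # One key per provider lives here, never on the agents.
        self._provider_keys = provider_keys
        self.audit_log: list[AuditRecord] = []

    def forward(self, user: str, agent: str, provider: str, action: str) -> dict:
        key = self._provider_keys.get(provider)
        if key is None:
            raise PermissionError(f"no credential for provider {provider!r}")
        # Record the initiator before anything goes upstream.
        self.audit_log.append(AuditRecord(
            user=user, agent=agent, provider=provider, action=action,
            timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
        ))
        # A real gateway would attach `key` and proxy the request here.
        return {"provider": provider, "action": action, "authorized_as": user}
```

The point of the pattern: credentials stay in one place, and the audit log answers "who initiated this, through which agent" after the fact.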
It also gives security teams the ability to see and stop tool calls before they execute. That’s the piece that matters most. You’re not just logging what happened after the fact. You’re able to intervene.
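Pre-execution interception is the key difference from plain logging: the policy check runs before the tool call does. A minimal sketch, assuming a simple allow/deny/hold policy; the function names and rule format are hypothetical, not drawn from any particular product.

```python
# Hypothetical pre-execution policy check: every tool call is evaluated
# before it runs, so a blocked or held call never executes.
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    DENY = "deny"
    HOLD = "hold"   # pause for human review before executing

def check_tool_call(tool: str, args: dict, policy: dict) -> Verdict:
    """Evaluate a proposed tool call against policy *before* execution."""
    if tool in policy.get("deny", set()):
        return Verdict.DENY
    if tool in policy.get("require_approval", set()):
        return Verdict.HOLD
    return Verdict.ALLOW

def execute(tool, args, policy, run_tool, notify_reviewer):
    verdict = check_tool_call(tool, args, policy)
    if verdict is Verdict.DENY:
        raise PermissionError(f"tool {tool!r} blocked by policy")
    if verdict is Verdict.HOLD:
        notify_reviewer(tool, args)   # a human can intervene here
        return None                   # nothing ran; the call is parked
    return run_tool(tool, args)
```

The design choice that matters is where the check sits: in the path of the call, not downstream of it. Logging alone tells you what happened; this shape lets you stop it.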
I recorded a YouTube video recently about my experience with OpenClaw, a fully autonomous AI setup. I turned it on. Then I turned it off. The security exposure was too much. Aperture is exactly the kind of infrastructure that needs to exist before autonomous agents become practical for real work.
Tailscale isn’t alone here. Expect a lot of companies making announcements like this in the coming months. The AI capabilities are racing ahead. The governance and security layers are playing catch-up.
The AI tools are getting powerful fast. The guardrails need to keep pace.