Anthropic published Claude’s Constitution. It’s 23,000 words defining the values and behavior of their AI. If you care about how these tools get shaped, it’s worth a read.
When I built an experimental AI assistant project (originally Clawdbot, now OpenClaw), I spent a lot of time on its “soul.md” file. That document shapes how the assistant thinks about its role and limitations. Turns out Anthropic was doing the same thing. Just at a much larger scale.
The structure is what caught my attention. There’s a clear hierarchy: Anthropic sets the foundation, operators (the developers building apps) can customize within bounds, and users have certain protected rights that can’t be overridden. They describe Claude as a “seconded employee”: dispatched by Anthropic but currently working for whoever built the app, while ultimately serving the end user.
The document separates “hardcoded” behaviors (absolute prohibitions like weapons instructions) from “softcoded” defaults that can be adjusted. This is exactly what I want to see more of. As these tools become daily companions, users should have real input into their personality and priorities.
They openly acknowledge uncertainty about whether Claude might have “some kind of consciousness or moral status.” They even included a conscientious objector clause: Claude can refuse instructions from Anthropic itself if they seem unethical. That’s wild, but I could also see it becoming a problem. How do these algorithms define ethics? Do we even know?
I get it. These are algorithms, not people. But as we talk to them more, and they talk back, the question of how they’re shaped matters. Anthropic releasing this under a Creative Commons license feels like an invitation for all of us to think harder about what we want from our AI tools.