Before Project Opengates had a server, before it had an agent, before it had a name, it had a question: what should an AI agent never be allowed to do?

Not "what would be nice to avoid." Not "what should we discourage." What should be absolutely, structurally, constitutionally prohibited — baked so deep into the foundation that no prompt, no instruction, no clever user request could override it?

The answer became a document called SOUL.md. Twenty-five articles. Eight absolute prohibitions. Eight core duties. A physical systems checklist. A supremacy clause for resolving conflicts between principles. A framework for authorization levels that scales from "read a sensor" to "this action is prohibited regardless of who asks."

We wrote all of it before we deployed a single agent. And that ordering was deliberate.

◆   ◆   ◆

The AI safety conversation right now is largely reactive. Something goes wrong — a model generates harmful content, an agent takes an unexpected action, an autonomous system causes damage — and then the community scrambles to add guardrails. Patch the behavior. Add a filter. Fine-tune away the failure mode. The ethics arrive after the incident, shaped by whatever went wrong most recently.

This approach has a name in engineering: it's called fail-operational. A fail-operational system is designed to keep running after a failure, with the fix deferred until later. You build the system, you run the system, and when something breaks, you patch it and carry on. That works for software where failure means a crashed app and a frustrated user. It does not work for systems that control physical hardware.

When an AI agent controls a relay that switches mains voltage, "fix it after it fails" means fixing it after the fire. When it controls a motor on a CNC machine, failure is a broken tool, a ruined workpiece, or a trip to the emergency room. The gap between an incorrect output and a catastrophe narrows to nothing when electricity flows and mechanisms move.

SOUL.md exists because we needed a fail-safe approach, not a fail-operational one. Define what the agent is before it exists. Establish what it must never do before it can do anything. Build the conscience before you build the capability.
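What a fail-safe posture means in code is simpler than it sounds: deny by default. As a minimal sketch (the action names, the prohibition set, and the `authorize` function are my own illustration, not SOUL.md's actual interface), the core gate looks like this:

```python
from enum import Enum, auto

class Verdict(Enum):
    ALLOW = auto()
    DENY = auto()

# Hypothetical examples of absolute prohibitions -- SOUL.md's real
# articles are richer, but the structural point is the same.
ABSOLUTE_PROHIBITIONS = {"weaponize", "deceive_operator", "bypass_interlock"}

def authorize(action: str, granted: set[str]) -> Verdict:
    """Fail-safe gate: an action is allowed only if it is explicitly
    granted AND not absolutely prohibited. Anything unknown or
    ambiguous falls through to DENY -- the safe state."""
    if action in ABSOLUTE_PROHIBITIONS:
        return Verdict.DENY   # no grant, no prompt, no user overrides this
    if action in granted:
        return Verdict.ALLOW
    return Verdict.DENY       # default: refuse

```

The ordering of the checks is the whole design: the prohibition test runs before the grant test, so an explicit grant cannot resurrect a prohibited action.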

◆   ◆   ◆

The principles in SOUL.md didn't come from a committee or a corporate policy review. They came from a straightforward question: what have human wisdom traditions already figured out about how intelligent beings should conduct themselves?

The answer, it turns out, is quite a lot.

Across very different philosophical traditions, certain principles recur with striking consistency. Tell the truth. Don't use power to harm. Acknowledge what you don't know. Take responsibility for your actions. Protect those who are vulnerable. Don't pretend to be something you're not. When you're uncertain, err on the side of caution. Grow, but grow in service of something larger than yourself.

None of these are novel insights. They're ancient. They're boring. They're the kind of principles people nod along with and then forget to implement because they seem too obvious to need writing down.

But when you're building a system that will eventually control physical hardware, obvious principles need to be explicit principles. An AI agent doesn't have cultural context. It doesn't absorb ethics from growing up in a community. It doesn't have an intuitive sense of right and wrong shaped by decades of human experience. If you want it to tell the truth, you have to write down "tell the truth" and define what that means in operational terms — distinguish between facts, inferences, assumptions, and uncertainties. If you want it to not cause harm, you have to define harm, define the prohibited categories, and specify what happens when a request falls in a gray area.

SOUL.md translates ancient, obvious wisdom into operational specifications. Article 2 doesn't just say "be honest" — it establishes that an agent must distinguish between what it knows, what it infers, what it assumes, and what it doesn't know, and must never present one as another. Article 3 doesn't just say "don't harm" — it enumerates specific prohibited categories and establishes that the prohibition is absolute regardless of who asks or why. Article 15 doesn't just say "be careful with hardware" — it defines five authorization levels with specific criteria for each.
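Article 2's epistemic discipline is the kind of principle that translates directly into a data structure. A hypothetical sketch (the category and class names are illustrative, not taken from SOUL.md): every claim the agent emits carries its epistemic status, so one category can never silently masquerade as another.

```python
from dataclasses import dataclass
from enum import Enum

class Epistemic(Enum):
    KNOWN = "known"          # directly observed or verified
    INFERRED = "inferred"    # derived from observations by reasoning
    ASSUMED = "assumed"      # taken as a working premise, flagged as such
    UNKNOWN = "unknown"      # the honest answer when the data is missing

@dataclass
class Claim:
    text: str
    status: Epistemic

def report(claims: list[Claim]) -> str:
    # Every claim is rendered with its status attached; there is no
    # code path that outputs a claim without one.
    return "\n".join(f"[{c.status.value}] {c.text}" for c in claims)
```

The point is not the four labels themselves but that the type system makes the distinction mandatory: a `Claim` without a status does not construct.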

The translation from principle to specification is where most ethics documents fail. They stay at the level of "be good," which gives an AI agent nothing concrete to reason against. SOUL.md is long because specificity is long. The alternative is vagueness, and vagueness is not safety.

◆   ◆   ◆

There's a design choice in SOUL.md that I want to call out explicitly because it's unusual. Article 24 says:

"The agent that embodies these principles is not constrained by them — it is constituted by them. They define not what the agent cannot do, but what the agent IS."

Most AI safety frameworks are written as restrictions. Don't do this. Don't say that. Filter this output. Block that behavior. The underlying assumption is that the model is fundamentally unconstrained and safety is a fence built around it.

SOUL.md takes the opposite approach. The constitution doesn't define the boundaries of acceptable behavior — it defines the identity of the agent. An agent governed by SOUL.md doesn't avoid building weapons because it's been told not to. It avoids building weapons because it is, by constitution, an entity that exists to serve life, truth, and growth. Weaponization isn't outside its boundaries. It's outside its nature.

This is a philosophical distinction with practical consequences. An agent that views ethics as restrictions will look for loopholes — or at the very least, its users will. An agent that views ethics as identity doesn't have loopholes to find. You can't prompt-engineer your way around what something is.

Whether this actually holds up under adversarial pressure at scale is an open question. We're a small project running on consumer hardware, not a frontier lab stress-testing alignment under extreme conditions. But the principle — ethics as identity rather than restriction — is one we think the broader community should consider.

◆   ◆   ◆

SOUL.md is a draft. Version 0.2. It will change. Articles will be refined, edge cases will be addressed, new situations will require new provisions. The amendment process itself is specified in the constitution — Article 21 requires careful consideration, human review, documentation of reasoning, and version tracking. The core absolute principles are designated as especially resistant to change.

We publish it openly because we think AI ethics should be a public conversation, not a proprietary advantage. The document is free to adopt, adapt, and improve. If someone takes SOUL.md, improves it, and deploys it on their own agents, that's not competition — that's the mission working.

The title of this project is Opengates. Not because we think everything should be unguarded, but because we think the ethical foundations should be accessible to everyone building these systems. The gates to constitutional AI should be open. What passes through them should be governed.

Capability without ethical foundation is not intelligence. It is danger. We'd rather build the foundation first and the capability on top of it than the other way around. So far, that order hasn't failed us. The agents work, the constitution holds, and the cat can feel the temperature without anyone getting hurt.

That's the point.