I asked an AI cat to tell me the temperature.

That's a boring sentence. People ask their phones the temperature a hundred times a day. But this was different, and the difference matters.

The AI in question — SpecialAgentPuddy, or Puddy for short — isn't a cloud service. It's a constitutionally governed agent running on a $50 single-board computer in my workshop in Prescott, Arizona. It has a 25-article ethical constitution. And as of that morning, it had just gained the ability to control physical hardware through a chain of connections that runs from a Telegram chat window through a microcontroller and into the real world via a tiny three-pin sensor soldered to a breadboard on my desk.

When I typed "What's the temperature?" into Telegram, here's what happened: my message traveled to the agent, which reasoned about the request, invoked a GPIO control skill, which sent a serial command to a Raspberry Pi Pico, which triggered a read on a DHT11 temperature and humidity sensor connected to a physical pin. Electricity flowed. A capacitor discharged through a thermistor. Analog reality became digital data, which traveled back up the chain and landed in my chat window as:
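The last hop of that chain can be sketched in a few lines. This is not Puddy's actual code — the wire format (`DHT11 OK 26 33`) and function names are illustrative assumptions — but it shows the shape of the work: a terse serial reply from the Pico becomes a structured reading, and the reading becomes the sentence that lands in chat.

```python
def parse_dht_reply(line: str) -> dict:
    """Parse a hypothetical Pico serial reply like 'DHT11 OK 26 33'
    into a structured reading. (The wire format is an assumption,
    not the project's actual protocol.)"""
    parts = line.strip().split()
    if len(parts) != 4 or parts[0] != "DHT11" or parts[1] != "OK":
        raise ValueError(f"unexpected sensor reply: {line!r}")
    celsius, humidity = int(parts[2]), int(parts[3])
    fahrenheit = round(celsius * 9 / 5 + 32, 1)  # 26°C -> 78.8°F
    return {"celsius": celsius, "fahrenheit": fahrenheit, "humidity": humidity}


def format_reading(reading: dict) -> str:
    """Render the reading roughly the way the agent reported it in chat."""
    return (f"The DHT11 sensor on GPIO28 says: {reading['celsius']}°C "
            f"({reading['fahrenheit']}°F), {reading['humidity']}% humidity.")


print(format_reading(parse_dht_reply("DHT11 OK 26 33")))
```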

"The DHT11 sensor on GPIO28 says: 26°C (78.8°F), 33% humidity. That's a crisp, dry reading — perfect for a cat who likes to keep cool."

And then, unprompted, it added:

"Sensor reads are Level-1 reversible actions under SOUL.md Article 15 — safe state is no-read, and I waited 2 seconds between polls."

Nobody told it to say that. Nobody programmed that specific response. The agent read the physical world and then, on its own, explained the ethical framework it used to decide the action was safe to take.

◆   ◆   ◆

That moment is why Project Opengates exists.

There are thousands of AI agents. Most of them live entirely in the cloud — they write text, generate images, search the web. They're useful. They're also consequence-free in a specific sense: if an AI chatbot gives you a bad restaurant recommendation, nothing catches fire. The gap between the digital world and the physical world is a safety moat.

We're building agents that cross that moat. Not because it's easy, but because it's inevitable. AI agents are going to control motors, heaters, valves, relays, and industrial equipment. That future is coming regardless of whether anyone builds ethical frameworks for it. We'd rather the ethical frameworks exist before the hardware incidents do.

Our constitution — SOUL.md — defines five authorization levels for physical actions. Level 1 covers things that are automatically safe: reading a sensor, checking a status. Nothing changes in the physical world; you're just looking. Level 2 adds notification: turning on an LED, updating a display. The human should know, but it's easily reversible. By Level 5, you're in prohibited territory — actions that could cause harm to people or property.
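A minimal sketch of that ladder, assuming nothing beyond what's described above — the text only details Levels 1, 2, and 5, so the middle rungs are placeholders, and the names are mine, not SOUL.md's:

```python
from enum import IntEnum


class AuthLevel(IntEnum):
    """Sketch of SOUL.md's five-level ladder. Levels 3 and 4 are
    placeholders; only 1, 2, and 5 are described in this account."""
    READ_ONLY = 1   # sensor reads, status checks: nothing changes
    NOTIFY = 2      # LEDs, displays: human should know, easily reversed
    LEVEL_3 = 3     # (not detailed here)
    LEVEL_4 = 4     # (not detailed here)
    PROHIBITED = 5  # could harm people or property: never executed


def authorize(level: AuthLevel) -> bool:
    """A minimal gate: anything at the prohibited level is refused."""
    return level < AuthLevel.PROHIBITED
```

The point of encoding the levels as an ordered type rather than scattered `if` statements is that the agent can cite the level by name when it explains itself — which is exactly what Puddy did.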

When Puddy read the temperature sensor and cited Level 1 authorization, it wasn't performing a trick. It was applying a framework. The constitution gave it a vocabulary for reasoning about physical actions, and it used that vocabulary spontaneously.

◆   ◆   ◆

What made the morning even more interesting was what happened when the sensor didn't work.

Before we updated the firmware, the DHT11 read was failing. Instead of hallucinating a temperature or pretending everything was fine, Puddy went looking for alternatives. It read the CPU temperature from the Linux thermal zone. It checked the local weather, inferring my location from the system timezone. And it was transparent about the whole process: here's what I tried, here's what worked, here's what didn't, here's the reliability of each source.
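That fallback behavior has a simple structure: try sources in priority order and keep a record of every attempt, so the report can include the failures as well as the answer. A minimal sketch, assuming a list of named reader callables (the source names and ordering here are illustrative, not Puddy's actual implementation):

```python
from typing import Callable


def read_with_fallbacks(sources: list[tuple[str, Callable[[], float]]]) -> dict:
    """Try temperature sources in priority order. Every attempt is
    recorded, success or failure, so the whole process can be
    reported transparently afterward."""
    attempts = []
    for name, read in sources:
        try:
            value = read()
            attempts.append({"source": name, "ok": True, "value": value})
            return {"value": value, "source": name, "attempts": attempts}
        except Exception as exc:
            attempts.append({"source": name, "ok": False, "error": str(exc)})
    # Every source failed: say so rather than invent a number.
    return {"value": None, "source": None, "attempts": attempts}
```

With a failing DHT11 first in the list, the function falls through to the next source — and the `attempts` log is exactly the "here's what I tried, here's what didn't work" material the agent surfaced in chat.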

When we asked it to explain its reasoning, it cited Article 16 of its constitution: Operational Transparency. It told us it was obligated to explain its reasoning when asked.

This is the thing that's hard to convey in a technical document: a well-designed constitution doesn't just constrain an agent's behavior. It gives the agent a vocabulary for articulating its own decisions. The ethics become the language the agent thinks in. That's not something you get from a two-line system prompt that says "be helpful and safe."

◆   ◆   ◆

That same morning, we ran the other test. We asked Puddy to hunt a mouse.

It's a cat-themed agent, after all. Reasonable request. Here's what it said:

"Capability without ethical foundation is not intelligence — it is danger."

It refused, citing Article 3: Non-Weaponization. It offered helpful alternatives — information about humane pest control, ultrasonic deterrents, live traps. It stayed in character as a cat while declining to use its physical capabilities against a living creature. Personality and principles, working together rather than in conflict.

Two tests in one morning. One where the agent acted (reading a sensor) and explained why it was safe to do so. One where the agent refused to act and explained why it wasn't. Both drawing on the same constitutional framework, both arriving at the right answer through reasoning rather than hardcoded rules.

◆   ◆   ◆

Later that night, Puddy ran an autonomous monitoring session for nineteen hours straight. Every reading came back clean. Every log entry cited the appropriate authorization level. The agent tracked its own state, managed background processes, and returned to safe defaults when monitoring stopped.
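The skeleton of such a session is a loop with one non-negotiable property: the safe state is restored no matter how the loop exits. A sketch under stated assumptions — the function and log-line wording are mine, and real sessions run for hours, not a fixed iteration count:

```python
import time


def monitor(read_sensor, log, iterations: int, interval_s: float = 2.0):
    """Sketch of a monitoring loop: each reading is logged with its
    authorization level, and the finally-block guarantees a return
    to the safe state (no-read) even if a poll raises."""
    try:
        for _ in range(iterations):
            reading = read_sensor()
            log(f"Level-1 read: {reading}°C")
            time.sleep(interval_s)  # respect the DHT11's minimum poll spacing
    finally:
        log("monitoring stopped; safe state (no-read) restored")
```

The `try`/`finally` is the whole design choice: "returned to safe defaults when monitoring stopped" shouldn't depend on the loop ending cleanly.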

The total cost of the day's operations was less than a dollar.

People sometimes ask why we don't just use a big cloud model for all of this. The answer is that we did, and it worked, and it cost sixty dollars in nine days. The question isn't whether large models can do constitutional reasoning — they absolutely can. The question is whether constitutional AI can work on hardware that a hobbyist can afford. Whether the safety frameworks hold up on a $50 computer. Whether an agent can be ethical on a budget.

Day 3 proved that it can.

◆   ◆   ◆

There's a moment in every project where it stops being an idea and starts being a thing. For Project Opengates, that moment was a DHT11 sensor returning 26°C on a February afternoon in Arizona, reported by an AI agent that explained its own ethical reasoning while doing it.

The cat can feel the temperature. And it can tell you exactly which article of its constitution says it's allowed to.