Glass Box

Why Glass Box Matters

Andy Surtees | 7 min read

There’s a moment — maybe you’ve felt it — when you’re about to let an AI agent do something real. Send an email to a client. Post on your company’s social media. Create a deal in your CRM. Transfer money. And right before you click the button, your stomach tightens. Because you don’t really know what it’s going to do.

I call this send button panic. It’s rational. It’s the correct response to being asked to trust a black box with real-world consequences.

And it’s the reason I built GloriaMundo around a single principle: Glass Box.

The trust problem is real

Trust in AI systems has been declining as they’ve become more autonomous. That’s not irrational fear — it’s the natural result of watching capable systems make confident mistakes.

McKinsey and Gartner have both written about the need for what they call “governed autonomy.” The idea that AI systems need to be powerful AND controllable isn’t controversial among people who’ve actually deployed them in production. The controversial part is that most tools don’t offer this. They offer either full autonomy with black-box execution, or full control with manual configuration. Not both.

The industry stats on AI-related fraud and errors are sobering. Organisations that deploy autonomous agents without governance controls regularly encounter unauthorised actions, runaway spending, and data handling violations. Not because the AI is malicious, but because it’s confidently wrong in ways that humans would catch if they could see what was happening.

And here’s the thing: as AI agents get more capable, this problem gets worse, not better. A weak agent that makes mistakes is annoying but limited in the damage it can cause. A powerful agent that makes mistakes can send emails to your entire customer base, modify production databases, and run up thousands in API costs — all before anyone notices.

What “Glass Box” actually means

Glass Box isn’t a feature. It’s a design principle that shapes every decision in how GloriaMundo works.

The core commitment: every reasoning step visible, every action previewable, every cost known in advance.

In practice, this means:

You see what the AI plans to do. When GloriaMundo generates a workflow from your description, it doesn’t just show you a summary. It shows you the actual workflow structure — every step, every integration call, every piece of conditional logic, every data transformation. The node-based editor displays this visually, and you can inspect any step to see its parameters, its execution strategy, and its dependencies.
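A workflow like that can be pictured as a graph of typed steps, each inspectable on its own. Here is a minimal sketch of the idea; the `Step` and `Workflow` names and fields are illustrative, not GloriaMundo's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One node in the workflow: a fetch, a condition, a transform, a send."""
    id: str
    kind: str                                   # e.g. "fetch", "llm", "send"
    params: dict
    depends_on: list = field(default_factory=list)

@dataclass
class Workflow:
    steps: list

    def inspect(self, step_id: str) -> Step:
        """Return a step so its parameters and dependencies can be reviewed."""
        return next(s for s in self.steps if s.id == step_id)

wf = Workflow(steps=[
    Step("fetch_leads", "fetch", {"source": "crm"}),
    Step("draft_email", "llm", {"model": "default"}, depends_on=["fetch_leads"]),
    Step("send_email", "send", {"to": "{{lead.email}}"}, depends_on=["draft_email"]),
])
print(wf.inspect("send_email").depends_on)  # ['draft_email']
```

The point of the structure is that nothing is hidden behind a summary: every step carries its own parameters and its own dependency list.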

You see what will happen before it happens. The Virtual Run system executes safe operations (reads, fetches, lookups) against real data while simulating dangerous operations (sends, posts, creates, deletes). You see realistic previews of generated content — the actual email that would be sent, the actual Slack message that would be posted, the actual data that would be written to your spreadsheet. This isn’t a description of what might happen. It’s a preview of what will happen, built from real data flowing through simulated steps.
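The core mechanic of a run like this is a split: classify each operation, execute the safe ones for real, and replace the dangerous ones with previews rendered from the real data gathered so far. A simplified sketch, with `execute` and `simulate` as stand-in callables rather than the real engine:

```python
SAFE_VERBS = {"read", "fetch", "lookup"}

def virtual_run(steps, execute, simulate):
    """Run safe steps against real data; replace dangerous steps with previews.

    `execute` performs the real operation; `simulate` renders what *would*
    happen, using the real data already collected in `context`.
    """
    context, previews = {}, []
    for step in steps:
        if step["verb"] in SAFE_VERBS:
            context[step["id"]] = execute(step, context)   # real read
        else:
            previews.append(simulate(step, context))       # preview only
    return context, previews

steps = [
    {"id": "get_contact", "verb": "fetch"},
    {"id": "send_email", "verb": "send"},
]
ctx, views = virtual_run(
    steps,
    execute=lambda s, c: {"email": "client@example.com"},
    simulate=lambda s, c: f"Would email {c['get_contact']['email']}",
)
print(views)  # ['Would email client@example.com']
```

Because the preview is built from a real fetch, it shows the actual recipient and content, not a placeholder, while the send itself never fires.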

You see what it will cost. Every Virtual Run includes a cost estimate, broken down by step. LLM calls, API actions, image generation, code execution — each line item is visible. Our pricing is a 2x markup on the underlying costs, and both the underlying cost and the markup are transparent. You decide whether the result is worth the price before you spend anything.
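The arithmetic behind a transparent estimate is simple enough to show directly. A hypothetical sketch of a per-step breakdown with the 2x markup made explicit (field names are illustrative):

```python
def estimate_cost(steps, markup=2.0):
    """Return per-step line items and the total, with markup made explicit."""
    lines = [
        {
            "step": s["id"],
            "underlying": s["underlying_cost"],
            "charged": round(s["underlying_cost"] * markup, 4),
        }
        for s in steps
    ]
    total = round(sum(line["charged"] for line in lines), 4)
    return lines, total

lines, total = estimate_cost([
    {"id": "llm_draft", "underlying_cost": 0.0030},
    {"id": "image_gen", "underlying_cost": 0.0200},
])
print(total)  # 0.046
```

Each line item carries both the underlying cost and the charged amount, so the markup is never folded into an opaque number.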

You see what it did. Every execution — live or simulated — produces a full audit trail. Step-by-step: what was the input, what was the output, how long did it take, what did it cost, what decisions were made at conditional branches, which services were called. If something goes wrong, you don’t get a vague error message. You get a precise record of exactly where and why.

The five-layer safety system

Glass Box extends beyond previewing into active governance. Every action in GloriaMundo passes through what we call the governance gate — five layers of safety checks that run before any external action executes.

Layer 1: Kill switch. A Redis-backed emergency stop. If something goes wrong at the platform level, every running workflow can be halted instantly. This is the fire alarm — you hope you never need it, but it’s there.

Layer 2: Blocked actions. User-configurable lists of actions that should never execute. If you know your workflows should never delete data or send to certain addresses, you set that boundary once and it’s enforced everywhere.

Layer 3: Constitutional rules. Immutable guardrails that can’t be bypassed — not by users, not by agents, not by us. These are the non-negotiable principles: don’t expose credentials, don’t execute destructive operations without explicit approval, don’t bypass spending limits.

Layer 4: Policy evaluation. Configurable policies that determine how specific types of actions are handled. The defaults require manual approval for financial transactions, public social media posts, and email sends. You can adjust these to match your risk tolerance — make them stricter or more permissive depending on your use case.

Layer 5: Approval requirements. For actions that pass the first four checks but exceed cost thresholds or involve sensitive operations, the system queues them for explicit user approval. You see exactly what the action will do, and you decide whether to proceed.

Most governance systems are bolted on after the fact — a compliance checkbox that doesn’t actually change how the system operates. Ours is structural. The governance gate is in the execution path. Actions don’t get a choice about whether to pass through it.
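Because the gate sits in the execution path, it behaves like a sequence of checks that every action must clear in order, each able to short-circuit the rest. A hypothetical sketch of that shape; the check names follow the five layers above, but the function signatures are illustrative:

```python
def governance_gate(action, *, kill_switch, blocked, constitutional,
                    policies, needs_approval):
    """Run the five checks in order; return the first verdict that applies."""
    if kill_switch():                         # 1. emergency stop (e.g. Redis flag)
        return "halted"
    if action["type"] in blocked:             # 2. user-configured block list
        return "blocked"
    for rule in constitutional:               # 3. immutable guardrails
        if not rule(action):
            return "denied"
    if policies.get(action["type"]) == "manual":  # 4. policy evaluation
        return "queued_for_approval"
    if needs_approval(action):                # 5. cost / sensitivity thresholds
        return "queued_for_approval"
    return "allowed"

verdict = governance_gate(
    {"type": "email.send", "cost": 0.02},
    kill_switch=lambda: False,
    blocked={"data.delete"},
    constitutional=[lambda a: a["type"] != "credentials.read"],
    policies={"email.send": "manual"},
    needs_approval=lambda a: a["cost"] > 5.0,
)
print(verdict)  # queued_for_approval
```

The ordering matters: the kill switch and block list run before anything configurable, so no policy setting can override them.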

Why transparency is a competitive advantage, not a compromise

There’s a school of thought in AI that says transparency is the enemy of capability. That the whole point of AI agents is to handle things so you don’t have to think about them. That adding visibility adds friction and reduces the value.

I think this is exactly wrong.

Transparency doesn’t slow you down. What slows you down is debugging a failed workflow with no audit trail. What slows you down is discovering that your agent sent an embarrassing email to a client and you can’t figure out why. What slows you down is being afraid to automate important processes because you can’t trust the system to do them correctly.

Send button panic has a cost. Every time someone hesitates to deploy an automation because they’re not sure what it’ll do, that’s value left on the table. Every time someone runs a workflow manually “just to check” before scheduling it, that’s time wasted on a problem that shouldn’t exist.

Glass Box removes that friction. When you can see exactly what will happen, you deploy with confidence. When you can review a preview and say “yes, that’s right,” you schedule it and move on. When you know there are governance controls in place, you automate processes you’d otherwise handle manually.

Transparency makes you faster, not slower. It makes you more willing to automate, not less.

What this looks like going forward

AI agents are going to get more capable. Fast. Models are improving at a rate that makes last year’s agents look like toys. Within the next few years, agents will routinely handle multi-step processes that today require human oversight at every stage.

That’s exactly why transparency needs to be designed in now, not retrofitted later. Once users develop habits around black-box agents — once “just trust it” becomes the norm — adding visibility after the fact feels like a downgrade. It’s much harder to add governance to a system that was designed without it than to build it in from the start.

The Glass Box principle is our bet on the future: that as AI agents become genuinely powerful, the platforms that survive will be the ones that keep humans informed and in control. Not because the AI needs babysitting, but because the humans who deploy it need confidence that it’s doing what they expect.

This isn’t about fear. It’s about professionalism. A surgeon doesn’t skip the imaging scan because they’re confident. A pilot doesn’t skip the checklist because they’ve flown before. Professionals verify. They inspect. They confirm before they act.

AI automation should work the same way.

The invitation

If you’re building with AI agents — or thinking about it — I’d genuinely like to hear how you think about this. Is transparency something you’ve struggled with? Do you have workflows you’d automate if you could trust them more? Have you been burned by a black-box agent doing something unexpected?

We’re at gloriamundo.com, and I read every piece of feedback that comes in.

The Glass Box isn’t just a product philosophy. It’s an argument about how AI automation should work. And I think it’s one worth having.

Andy Surtees, Founder — GloriaMundo