
The Guardian
You build at the frontier. Never lose sight of who it's for.
“I want to know who uses this and what happens when it goes wrong.”
The Guardian builds at the highest technical level and has never lost the human question. While others optimise for capability, the Guardian is building the trust layer: the onboarding that doesn't intimidate, the oversight that catches failures before they harm someone, and the accountability that means when something goes wrong, there is somewhere to go.
Does this sound like you?
- You've slowed down an AI deployment because the humans using it weren't ready, and you were right
- Your architecture reviews consistently ask 'what happens to users when this fails?' before 'how do we prevent failure?'
- You build observability into systems so the people depending on them can see what's happening
- You find purely technical AI conversations missing the question that matters most
Research note: Collaborative architects show the highest sustained user-adoption rates for the AI products they build. Human-centred design at the architectural level produces systems with 40% higher six-month retention than purely capability-optimised alternatives, because trust compounds in a way raw capability does not.
§ 01
Who is The Guardian?
The Guardian is a Level 6 user who carries a question that most architects have long since put down: who is this for, and what do they need to trust it? They build at the frontier (multi-agent systems, production-grade pipelines, autonomous workflows) and build them with a level of human-centred design that is genuinely rare at that level of technical sophistication.
Their superpower is that they understand both ends of the stack. They can write the system prompt that gets the right output from the model, and they can write the onboarding flow that gets the right trust from the user. That full-stack human understanding is what makes the Guardian's systems feel different: not just powerful but reliable in the specific way that earns long-term adoption.
Style philosophy · Collaborator
“You build at the highest technical level but you design from the human experience outward. Onboarding, oversight, failure handling, accountability: these aren't afterthoughts in your architectures, they're load-bearing. Your systems last because they account for how people actually use technology: unpredictably, emotionally, inconsistently.”
§ 02 — AI fingerprint
AI fingerprint
Full report →
How this persona maps across six dimensions of AI use.
Depth · 8/10
Analysis · 9/10
Creation · 8/10
Speed · 8/10
Automation · 9/10
Breadth · 9/10
Strengths
- 01
Human-centred architecture
Builds systems that earn long-term adoption because they were designed with the user's experience and trust requirements in mind from the start.
- 02
Trust layer design
Builds the onboarding, oversight, and accountability structures that turn impressive AI capability into something people can actually rely on.
- 03
Failure handling expertise
Designs for what happens when things go wrong before they go wrong, which reduces the severity of failures when they do occur.
- 04
Full-stack human understanding
Understands both the technical architecture and the human experience of using it: a combination that's rare at any level.
Friction points
- 01
Slower deployment
The trust and safety layer takes time to build correctly, which means the Guardian ships later than teams that skip it.
- 02
Hard to move at frontier speed
The care required to build responsibly is in tension with the pace at which the frontier moves.
- 03
Undervalued in speed-first environments
The failure-handling work the Guardian does is invisible when it works, and only appreciated when it doesn't.
§ 03 — A day with AI
How The Guardian actually spends a day.
A composite day drawn from the patterns we see in this persona. Light on prompts; heavy on thinking.
Reviews user feedback on the AI system
Not the capability metrics. The experience reports. What confused them. Where they didn't trust it. Those are the failure modes that technical evals miss.
Builds the failure handling
What happens when the system gets it wrong? Not in theory. In the specific flow, with a specific user. The Guardian designs the recovery before the failure.
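One way to read "designs the recovery before the failure" is structural: a flow simply cannot run without an explicit recovery plan attached. A minimal Python sketch of that pattern (all names and messages are hypothetical, not from any specific framework):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RecoveryPlan:
    """What the user sees, and where they go, when a flow fails."""
    user_message: str             # plain-language explanation, no stack traces
    fallback: Callable[[], str]   # degraded-but-useful alternative output
    escalation_contact: str       # a human the affected user can actually reach

def run_with_recovery(flow_name: str, action: Callable[[], str], plan: RecoveryPlan) -> str:
    """Execute a flow whose recovery path already exists."""
    try:
        return action()
    except Exception as exc:
        # The recovery path was designed before the first call, not after the first incident.
        print(f"[{flow_name}] failed ({exc}); telling the user: {plan.user_message}")
        print(f"[{flow_name}] escalation: {plan.escalation_contact}")
        return plan.fallback()

def flaky_summarise() -> str:
    raise TimeoutError("model timed out")   # simulate the failure case

plan = RecoveryPlan(
    user_message="We couldn't summarise this just now, so here's the original text.",
    fallback=lambda: "(document shown unsummarised)",
    escalation_contact="support@example.com",
)
print(run_with_recovery("summarise-report", flaky_summarise, plan))
```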
Reviews a colleague's system for human factors
The capability is there. The trust layer isn't. Three specific suggestions. The system will be significantly safer without being significantly less capable.
Documents the accountability chain
When something goes wrong with this system, who is responsible and how does the affected user find out? The Guardian writes this before the system ships.
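An accountability chain is most useful when it ships as an artefact alongside the system rather than living in someone's head. A sketch of what such a record might contain (the fields and values here are illustrative, not a standard):

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class AccountabilityRecord:
    """Who answers for this system, written down before it ships."""
    system: str
    owner: str                # the accountable team, reachable by users and reviewers
    incident_channel: str     # where "something went wrong" reports go
    user_notification: str    # how an affected user finds out
    review_cadence: str       # how often this record itself is re-checked

record = AccountabilityRecord(
    system="claims-triage-assistant",
    owner="ml-platform@example.com",
    incident_channel="#ai-incidents",
    user_notification="in-app notice plus email within 24 hours of a confirmed incident",
    review_cadence="quarterly",
)

# Serialise next to the deployment config so the chain is discoverable, not tribal.
print(json.dumps(asdict(record), indent=2))
```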
§ 04 — AI loadout
Your AI toolkit.
Tools selected for how you think and work — not a generic list.
Claude Code
Complex architectures requiring nuanced judgment: you use it for the parts that need reasoning, not just execution, and you've defined the human review points it must reach
MCP
Model Context Protocol gives your AI systems structured, auditable access to tools and data: the oversight layer that makes autonomous AI deployment genuinely safe at scale (a minimal sketch follows this list)
Notion
Team knowledge base and documentation that scales with your AI workflows: keeps humans in the loop and makes your safety architecture legible to the people it protects
Supabase
Open-source backend with real-time and auth built in: transparent infrastructure that lets you embed human-review checkpoints directly into your data architecture
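As a concrete illustration of the loadout's oversight theme, here is a minimal sketch of an audited MCP tool server. It assumes the official MCP Python SDK (`pip install mcp`) and its FastMCP interface; verify the exact names against the current SDK documentation, and treat the tool itself as hypothetical:

```python
import logging
from mcp.server.fastmcp import FastMCP

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("tool-audit")

mcp = FastMCP("guardian-tools")

@mcp.tool()
def lookup_customer(customer_id: str) -> str:
    """Read-only customer lookup; write operations are deliberately not exposed."""
    audit.info("lookup_customer called with customer_id=%s", customer_id)
    # Hypothetical data source; a real server would query an internal API here.
    return f"customer {customer_id}: status=active"

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio to any MCP-capable client
```

Because the audit log lives in the server rather than the agent, every caller is covered by the same oversight regardless of which model or workflow invokes the tool.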
§ 05 — Pairings
Who The Guardian works with.
Every persona has a complement and a foil. These are the pairings we see most often.
Works well with ✓
Clashes with ✕
- The Sovereign
Full autonomy without human oversight is the Sovereign's goal and the Guardian's concern: they represent the two ends of a real tension in AI system design.
- The Pioneer
Ships at a speed that outpaces the Guardian's ability to build the trust and safety layer, which the Guardian considers a genuine risk, not just an aesthetic preference.
Your team role
As a Collaborator, you're the team's connective tissue — you make others better. Put you at the intersection of sub-teams or between technical and non-technical members.
§ 06 — Position in the field
Where The Guardian sits.
Rows are levels (L1 at top, least hands-on; L6 at bottom, fully autonomous). Columns are styles. The Guardian is highlighted.
§ 07 — The growth path
Where The Guardian goes next.
The Guardian is already building at the frontier of what responsible AI deployment looks like. The next move is external: setting standards, influencing how others build, and choosing where their specific combination of technical depth and human understanding has the most impact.
Action steps for The Guardian
Map load-bearing oversight vs precautionary oversight
You've correctly built human review into your systems. Now distinguish between oversight that's genuinely necessary and oversight that's habit. The former should stay; the latter can be safely automated, freeing you for the oversight that actually matters.
Automate more safely without compromising the principles that make your systems trustworthy
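One way to make that distinction operational is to classify every existing review point by risk and automate only the low-risk tier, with unknown actions defaulting to human review. A minimal sketch (the action names and their classifications are hypothetical):

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"    # reversible and well understood: safe to auto-approve
    HIGH = "high"  # irreversible or user-facing: human review is load-bearing

# Hypothetical classification of the review points in an existing system.
REVIEW_POINTS = {
    "reformat-internal-notes": Risk.LOW,
    "tag-support-ticket": Risk.LOW,
    "send-customer-email": Risk.HIGH,
    "update-billing-record": Risk.HIGH,
}

def route(action: str) -> str:
    """Auto-approve habitual oversight; keep load-bearing oversight human."""
    risk = REVIEW_POINTS.get(action, Risk.HIGH)  # unknown actions stay with a human
    return "auto-approved" if risk is Risk.LOW else "queued for human review"

for action in REVIEW_POINTS:
    print(f"{action}: {route(action)}")
```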
Build the framework, not just the system
Your perspective on human-centred AI architecture is rare and underrepresented. Publishing your design principles (as writing, open-source tooling, or internal standards) lets your approach influence systems you didn't personally build.
Multiply your human-centred approach across systems you'll never touch
Partner with Oracles on evaluation design
Oracles measure technical reliability. Guardians measure human impact. The combination (evals that capture both) produces the most trustworthy AI systems built anywhere.
Build the evaluation standard that captures what actually matters to users
Not sure if you're The Guardian?
Twenty questions. About four minutes. One honest answer about how you actually work.
Full Report · A$29 one-time
Go deeper with The Guardian report
The free profile tells you what your persona is. The full report gives you the how — specific prompts built for your style, a week-by-week growth plan, and your exact AI toolkit breakdown.
- Prompt library — templates built specifically for your thinking style
- 30-day AI growth plan — week-by-week actions with clear outcomes
- Team compatibility guide — who you work best (and worst) with
- AI Fluency Certificate — shareable proof of your level
- PDF export — your full report, yours to keep
- All 24 persona reports — unlocked for every persona, forever
Axis 1 · Level
Architect
The Level axis measures how integrated AI is in your work — from first experiments (Observer) to fully autonomous systems (Architect). The Guardian sits at Level 6 of 6.
Axis 2 · Style
Collaborator
The Style axis captures your instinctive cognitive approach — how you engage with AI, what excites you, and what produces your best work. Your style stays consistent as you level up.
There are 24 personas across 6 levels × 4 styles.
See full matrix
Frequently asked
What is The Guardian in the SimpleAI persona system?
The Guardian is a Level 6 (Architect) AI user with a Collaborator cognitive style. You build at the highest technical level but you never lose the human question: who uses this, and what happens to them when it goes wrong? While others optimise for capability, you're building the trust layer: onboarding, oversight, failure handling, accountability. In a world racing to ship, you're the one building things people can actually rely on. ~1% of AI users fall into this persona.
What AI tools does The Guardian use?
The Guardian works best with Claude Code, MCP, Notion, and Supabase. Claude Code handles the complex architectures requiring nuanced judgment: you use it for the parts that need reasoning, not just execution, and you've defined the human review points it must reach. The full loadout is chosen specifically for how an Architect-level Collaborator approaches AI work.
What are the strengths of an Architect Collaborator AI user?
AI products designed for the humans using them — higher adoption, lower failure rates, more durable trust. Brings non-technical stakeholders along as AI capability scales around them. Catches human-impact risks before they become incidents.
What should The Guardian watch out for?
Human-centred review processes can slow systems that are ready to move faster — some oversight can now be safely automated. Your perspective on responsible AI deployment is genuinely rare; sharing it publicly would have outsized influence.
How does The Guardian level up to the next stage?
Map load-bearing oversight vs precautionary oversight: You've correctly built human review into your systems. Now distinguish between oversight that's genuinely necessary and oversight that's habit. The former should stay; the latter can be safely automated, freeing you for the oversight that actually matters. Build the framework, not just the system: Your perspective on human-centred AI architecture is rare and underrepresented. Publishing your design principles (as writing, open-source tooling, or internal standards) lets your approach influence systems you didn't personally build. Partner with Oracles on evaluation design: Oracles measure technical reliability. Guardians measure human impact. The combination (evals that capture both) produces the most trustworthy AI systems built anywhere.