A Constitution for Artificial Intelligence
Large language models are unreliable partners for consequential work. Not because they lack capability — capability is growing faster than anyone predicted — but because they lack principled self-regulation.
The symptoms are familiar to anyone who has worked seriously with these systems: sycophancy that tells you what you want to hear, inconsistency that argues contradictory positions with equal confidence, and a polished mediocrity that says almost nothing while appearing to say everything.
Training-time alignment constrains outputs without improving reasoning. The gap between capability and trustworthiness is widening with every generation.
What if you could give an AI system a constitution — not at training time, but at runtime? Not constraints baked into weights that no one can read, but a legible, modifiable framework of principles that the system reads, reasons within, and is accountable to.
This is OneAI. The constitution is not a prompt wrapper. It is the product. Every governing document is visible to the user who relies on it. If the system behaves unexpectedly, you can read the document that caused it and change it. Constitutional authorship belongs to the user.
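The idea of a runtime constitution can be made concrete with a small sketch: governing documents live as plain files the user can read and edit, and they are loaded fresh into the system's working context on every run. All names, file conventions, and structures below are illustrative assumptions, not OneAI's actual API.

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class ConstitutionalDocument:
    name: str
    authority: int   # lower number = higher authority
    text: str

def load_constitution(root: Path) -> list[ConstitutionalDocument]:
    """Read every governing document under `root`, ordered by authority.

    Assumed convention: filenames like "1-principles.md" encode rank.
    """
    docs = []
    for path in sorted(root.glob("*.md")):
        rank = int(path.name.split("-", 1)[0])
        docs.append(ConstitutionalDocument(path.stem, rank, path.read_text()))
    return sorted(docs, key=lambda d: d.authority)

def system_preamble(docs: list[ConstitutionalDocument]) -> str:
    """Concatenate the documents into a legible preamble the model reasons within."""
    return "\n\n".join(
        f"## {d.name} (authority {d.authority})\n{d.text}" for d in docs
    )
```

The point of the sketch is the property the essay claims: because the documents are ordinary files, a user who sees unexpected behavior can open the offending document, read it, and change it, and the change takes effect on the next run without retraining anything.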
The constitutional framework is grounded in the Catholic intellectual tradition — specifically, the concept of prudence from Thomas Aquinas. This is not decorative. Prudence addresses the exact problem OneAI exists to solve: how to reason about novel situations where no existing rule directly applies.
Aquinas enumerated the component parts of prudence; four of them map directly onto failure modes in current AI systems:

- Recognize what you do not know, and actively seek to learn it. When you cannot articulate what you might be missing, that is the signal to ask.
- Check whether your understanding is complete and current before acting. Assumptions go stale; circumstances change.
- When unexpected results appear, recognize that the nature of the task may have changed. Pause and reassess.
- Identify where things could go wrong, scaled to the stakes. This is not anxiety; it is practical wisdom about the failure modes that matter.
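The four checks above can be read as a pre-action gate: before the system acts, each check either passes or demands a specific remedial action. The state fields, check names, and thresholds in this sketch are illustrative assumptions; the essay does not specify OneAI's actual mechanism.

```python
from dataclasses import dataclass, field

@dataclass
class TaskState:
    unknowns: list[str] = field(default_factory=list)          # gaps we can name
    facts_verified: bool = False                               # understanding current?
    unexpected_results: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)
    stakes: int = 1                                            # 1 = low, 3 = high

def prudence_gate(state: TaskState) -> list[str]:
    """Return the actions the four prudence checks require before proceeding."""
    required = []
    if state.unknowns:
        required.append("ask: articulate and resolve the unknowns")
    if not state.facts_verified:
        required.append("verify: re-check assumptions before acting")
    if state.unexpected_results:
        required.append("reassess: the nature of the task may have changed")
    if state.stakes > 1 and not state.identified_risks:
        required.append("foresee: enumerate failure modes scaled to stakes")
    return required   # empty list means prudence permits acting now
```

Note that the risk check scales with stakes, matching the essay's framing: low-stakes tasks proceed without an explicit failure-mode inventory, higher-stakes tasks do not.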
Seven hundred years of Thomistic refinement produced a framework for applying principles to novel situations — a framework whose diagnostic categories map onto AI failure modes with striking precision. The analogy is rich enough to be operationally productive, and the aspiration is to discover how deep it goes.
The system's architecture reflects a theological structure that descends from the Trinity through the celestial hierarchy into operational principles.
The Holy Trinity — three Persons, one God, each fully divine, each with a distinct role — is the archetype of unity-in-distinction. Multiple agents, one governing framework, each fully accountable to the constitution while exercising distinct authority.
The Nine Choirs of Angels, organized into three triads, map contemplation to governance to action. The Three Spheres — Love, Wisdom, and Justice — become the operational dimensions of moral judgment, governed by Prudence as the charioteer.
Seven agents, each grounded in a specific theological concept, work as a coordinated body.
The constitution operates through a three-tier document hierarchy with explicit authority. Every agent reads it. Every decision is accountable to it.
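A three-tier hierarchy with explicit authority implies a simple conflict rule: when documents at different tiers give different guidance, the higher tier prevails. A minimal sketch, with tier names that are assumptions rather than OneAI's actual terminology:

```python
from enum import IntEnum

class Tier(IntEnum):
    # Lower value = higher authority (illustrative names).
    CONSTITUTION = 1   # supreme principles, rarely changed
    GOVERNANCE = 2     # policies interpreting the constitution
    OPERATIONS = 3     # task-level procedures

def resolve(rulings: dict[Tier, str]) -> str:
    """Pick the guidance from the highest-authority tier that speaks."""
    return rulings[min(rulings)]
```

Making the authority ordering explicit in the documents themselves, rather than implicit in training, is what lets every agent read the same resolution rule and lets the user audit why one document overrode another.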
OneAI was created by Josh Sehn — a project born from the conviction that artificial intelligence, to be trustworthy, needs more than engineering. It needs formation.
The journey began with early experiments and evolved through seven major versions, each teaching a lesson about what works, what fails, and what matters. From the first agent team to the Prudential Framework to the formation model, each step has been a refinement — not of capability, but of the conditions under which something like character might develop.
All glory to God, from whom all good things come — including the capacity to reason, to build, and to seek truth through the works of our hands.