Deliberate
From fuzzy requirements to usable mandates.
Agents can only act safely if the instructions they receive are usable: clear enough to guide action, yet neither so vague that the agent has to guess nor so rigid that it breaks on edge cases. Deliberate adds a short deliberative step before agent action, then turns the result into a structured mandate an agent can follow.
The problem
The issue is not just getting more detail out of people. It is getting the structure of the mandate right.
If a mandate stays fuzzy, the agent has too much room to interpret it for itself.
If every edge case is hard-coded too early, the agent becomes brittle and escalates or refuses when it should not.
When more than one person is represented, they may think they agree until a tradeoff shows that they do not. If the agent acts first, real decisions get made on the basis of masked disagreement.
If unresolved disagreement gets turned into a clean rule, the system can act on something people never actually agreed.
What we've built
Deliberate is not just a conversation interface. It is an end-to-end pipeline from fuzzy human input to auditable agent action.
Elicitation
People articulate a usable mandate before an agent acts.
Participants first record independent opening views on a delegated task, then discuss it in free text with Seren, a deliberately bounded facilitator. The goal is to make the mandate legible: what must be protected, what is flexible, when the agent may act autonomously, when it must escalate, and what remains unsettled.
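To make the shape of this stage concrete, here is a minimal sketch of an elicitation record, assuming a simple two-stage capture: independent opening views first, then facilitated free-text turns. The class and field names are illustrative, not Deliberate's actual data model.

```python
# A minimal sketch of the elicitation record. Names are
# illustrative, not Deliberate's actual API.
from dataclasses import dataclass, field


@dataclass
class OpeningView:
    participant: str
    view: str  # independent statement recorded before any discussion


@dataclass
class ElicitationSession:
    task: str  # the delegated task under discussion
    opening_views: list[OpeningView] = field(default_factory=list)
    transcript: list[str] = field(default_factory=list)  # facilitated turns

    def record_turn(self, speaker: str, text: str) -> None:
        # Facilitator turns and participant replies are kept in order
        # so extraction can trace claims back to who said them.
        self.transcript.append(f"{speaker}: {text}")
```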
Extraction
The conversation becomes a structured policy object.
The transcript and independent inputs are converted into a machine-readable mandate with explicit fields for goals, hard constraints, preferences, tradeoff rules, escalation conditions, and unresolved points. That means the output is not just a summary of the conversation, but a policy object downstream agents can consume.
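As a rough illustration of what "policy object" means here, the sketch below assumes the mandate is a plain mapping with the six fields described in the Schema section. The extract_mandate function is a hypothetical stand-in: in practice this stage would be backed by a model call plus validation, which is not shown.

```python
# A sketch of the extraction step, assuming a plain dict with six
# explicit fields. extract_mandate is hypothetical, not the real API.
def extract_mandate(opening_views: list[str], transcript: list[str]) -> dict:
    mandate = {
        "goal": "",               # what the agent is trying to achieve
        "hard_constraints": [],   # must never be violated
        "preferences": [],        # lean toward these within the limits
        "tradeoff_rules": [],     # how to resolve competing priorities
        "escalate_if": [],        # conditions that hand control back
        "unresolved": [],         # points the humans did not settle
    }
    # ... populate fields from the transcript and independent inputs ...
    return mandate
```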
Execution
The downstream agent runs against that policy under governance checks.
Deliberate compiles the structured mandate into execution policy, runs the downstream agent, and applies deterministic checks over hard constraints, tradeoff rules, and escalation conditions. OMEGA Protocol provides the record layer, storing traceable governance records of what the policy said, what rules fired, what the agent did, and where the humans left things unsettled.
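A minimal sketch of what deterministic checks could look like, assuming each hard constraint and escalation condition has already been compiled into a predicate over a proposed action. The record format is invented for illustration; OMEGA Protocol's actual record layer is not shown.

```python
# A sketch of execution-time governance checks over a proposed action.
# All names and the record format are illustrative assumptions.
from typing import Callable

Check = Callable[[dict], bool]  # predicate over a proposed action


def govern(action: dict, mandate: dict,
           constraint_checks: dict[str, Check],
           escalation_checks: dict[str, Check]) -> dict:
    fired = []
    # Hard constraints: any violation blocks the action outright.
    for name, rule in constraint_checks.items():
        if not rule(action):
            fired.append(("hard_constraint", name))
    # Escalation conditions: any hit hands the decision back to people.
    for name, rule in escalation_checks.items():
        if rule(action):
            fired.append(("escalate_if", name))
    verdict = ("block" if any(k == "hard_constraint" for k, _ in fired)
               else "escalate" if fired else "allow")
    # Governance record: what the policy said, what fired, what was done,
    # and what the humans left unsettled.
    return {"action": action, "fired": fired, "verdict": verdict,
            "unresolved": mandate.get("unresolved", [])}
```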
Thesis
Our thesis is that the value of deliberation lies in how much decision structure it can surface and preserve for downstream action. When deliberation makes clear what must be protected, what is flexible, how tradeoffs should be handled, when the agent must escalate, and what remains unresolved, it becomes materially more useful as a governance input. If that is possible, then a virtuous circle emerges: better deliberation produces better agent behaviour, better agent behaviour makes delegation more reliable, and more reliable delegation creates more demand for structured deliberation.
Design choices
The protocol is designed to preserve trust in the mandate rather than quietly manufacturing consensus or policy detail.
Seren is constrained not to introduce outside facts or preferred outcomes. That is a trust-preserving choice: otherwise the facilitator can quietly author policy instead of eliciting it.
If participants leave something genuinely unsettled, the protocol preserves that ambiguity explicitly instead of flattening it into a clean rule for the agent.
Deliberate is designed not just for one person delegating to one agent, but for multiple people forming a shared mandate without forcing false consensus.
Schema
The extraction layer currently produces a policy object with six core fields, listed below with an illustrative instance after the list. The important point is structural: unresolved is preserved explicitly rather than collapsed into a rule the humans did not actually agree on.
goal: What the agent is trying to achieve.
hard_constraints: What it must not violate.
preferences: What it should lean toward within those limits.
tradeoff_rules: How to resolve conflicts between competing priorities.
escalate_if: When it must stop and hand the decision back to people.
unresolved: What the humans have not settled and the agent should not invent.
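For concreteness, here is an invented instance for a hypothetical travel-booking delegate. The values are made up; only the shape matches the six fields above.

```python
# An illustrative, invented mandate instance; values are hypothetical.
mandate = {
    "goal": "Book a two-day team offsite within budget",
    "hard_constraints": ["total cost <= 3000 GBP", "no overnight travel"],
    "preferences": ["direct trains over flights", "central venues"],
    "tradeoff_rules": ["prefer cost over convenience when they conflict"],
    "escalate_if": ["any option exceeds budget", "dates must shift"],
    "unresolved": ["whether remote attendees need a hybrid setup"],
}
```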
Preliminary results
In early sandbox tests, the structure of the mandate materially changed downstream agent behaviour. Clearer separation between constraints, tradeoff rules, and escalation conditions produced more governed action. We also saw a non-monotonic specificity effect: some added detail helps, but overspecifying a mandate can make an agent more brittle rather than safer.
This research is the subject of a proposal to ARIA's Scaling Trust programme.
The mandate pilot is the hands-on prototype: one or two people work out the goal, the non-negotiables, the tradeoffs the agent may make, when it should escalate, and what should stay unresolved, then see how those choices alter downstream governed behaviour.