A system that captures personal reasoning must distinguish between what an agent can decide and what requires human judgment. This is not binary — it is a spectrum with four zones, each carrying different authority levels and different accountability expectations.
The spectrum applies to any agent operating on behalf of a garden’s owner: an LLM, a human delegate, an automated pipeline. The zones define what moves each agent may make.
| Zone | Authority | Examples |
|---|---|---|
| Autonomous | Agent acts without asking | Traverse graph, synthesize glosses from sources, follow existing principles |
| Propose-and-approve | Agent drafts, human approves | New typed relations, new references, extracted principles, pattern drafts, inquiry hypotheses |
| Human-only | Agent may not decide | Modify principles/values/convictions, resolve principle tensions, change boundaries |
| Group-deliberative | Requires collective process | Amend governance boundaries, establish new protocols, publish shared artifacts, resolve inquiry questions marked with directed_at:: |
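The zone table can be read as a policy lookup: each move type maps to a zone, and only autonomous moves may proceed without a human in the loop. A minimal sketch, assuming hypothetical move names and a conservative default (an unrecognized move is treated as human-only); none of these identifiers come from the source:

```python
from enum import Enum

class Zone(Enum):
    AUTONOMOUS = "autonomous"
    PROPOSE_AND_APPROVE = "propose-and-approve"
    HUMAN_ONLY = "human-only"
    GROUP_DELIBERATIVE = "group-deliberative"

# Hypothetical mapping of agent moves to zones, following the table above.
MOVE_ZONES = {
    "traverse_graph": Zone.AUTONOMOUS,
    "synthesize_gloss": Zone.AUTONOMOUS,
    "add_typed_relation": Zone.PROPOSE_AND_APPROVE,
    "extract_principle": Zone.PROPOSE_AND_APPROVE,
    "modify_principle": Zone.HUMAN_ONLY,
    "amend_boundary": Zone.GROUP_DELIBERATIVE,
}

def may_act_unilaterally(move: str) -> bool:
    """Only autonomous moves proceed without human involvement.

    Unknown moves default to human-only: failing closed keeps an
    agent from acting where the boundary is unspecified.
    """
    return MOVE_ZONES.get(move, Zone.HUMAN_ONLY) is Zone.AUTONOMOUS
```

The fail-closed default mirrors the spirit of the table: authority is conferred explicitly, never assumed.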
When an agent encounters a decision it may not make, it should stop, surface the decision to the party whose judgment it requires along with supporting context, and wait rather than act.
The directed_at:: predicate on inquiry forms explicitly flags questions requiring a specific person’s or group’s judgment. An agent encountering directed_at:: treats this as a hard boundary — it can frame the question and provide supporting context, but it cannot answer on behalf of the named party.
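The hard-boundary behavior can be sketched as a guard on the agent's response path: when `directed_at::` is present, the agent returns framing and context but no answer. The inquiry and response shapes below are hypothetical, not part of the source:

```python
def respond_to_inquiry(inquiry: dict, framing: str) -> dict:
    """Handle an inquiry form, honoring directed_at:: as a hard boundary.

    The agent may frame the question and supply supporting context,
    but it never answers on behalf of the named party.
    """
    directed_at = inquiry.get("directed_at")
    if directed_at:
        # Hard boundary: defer the answer to the named person or group.
        return {
            "status": "deferred",
            "awaiting": directed_at,
            "framing": framing,
        }
    # No directed_at:: predicate: the agent may proceed per its zone.
    return {"status": "answerable", "framing": framing}
```

The point of the guard is that deferral is structural, not a matter of agent judgment: the predicate's presence alone decides the branch.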
Boundaries themselves are forms in the graph, connected to the values and principles that justify them. They are amendable through deliberative process, never unilaterally by an agent.
The four-zone model above describes a single principal and single agent. Peter Kaminski raised the question of what happens when the chain extends: Human → Agent → Agent → Human (HAAH). In multi-agent workflows, a human principal confers authority to an AI agent, which then delegates sub-tasks to other agents, whose outputs eventually reach another human for review or use.
The HAAH spectrum introduces compounding authority questions at each handoff: every delegation passes on some portion of the originally conferred authority, and that conferral scope must be tracked across hops rather than assumed to survive intact.
The three conditions for meaningful authority (legibility, boundaries, override) must hold at each boundary in the chain, not only at the endpoints. A chain where the first and last humans have authority but intermediate agents operate opaquely is nominally accountable but structurally unauditable.
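The per-hop requirement can be expressed as a simple audit over the delegation chain: all three conditions must hold at every handoff, so a single opaque intermediate hop fails the whole chain. The boolean-flag representation is an assumption for illustration:

```python
def chain_is_auditable(hops: list[dict]) -> bool:
    """True only if every hop in the delegation chain satisfies all
    three conditions for meaningful authority: legibility, boundaries,
    and override. One failing hop breaks auditability for the chain.
    """
    return all(
        hop.get("legibility") and hop.get("boundaries") and hop.get("override")
        for hop in hops
    )

# A HAAH chain whose middle hop operates opaquely: the endpoints are
# accountable, but the chain as a whole is structurally unauditable.
haah_chain = [
    {"legibility": True, "boundaries": True, "override": True},   # Human -> Agent
    {"legibility": False, "boundaries": True, "override": True},  # Agent -> Agent
    {"legibility": True, "boundaries": True, "override": True},   # Agent -> Human
]
```

This is why endpoint accountability alone is insufficient: `all()` quantifies over hops, not endpoints.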
This extension remains exploratory. The four-zone model is stable for single-principal single-agent relationships. Multi-agent chains need further analysis of how conferral scope compounds across hops.
Extracted from the “Decision Boundaries” section of the deep context architecture.