“Agent” has many meanings in technology: autonomous software (AI agents), browser user-agent strings, agent-based modeling. Inter-Face Protocol uses the term with a specific meaning rooted in agency law: an agent is software that acts on behalf of a human operator.
The distinction matters because it determines accountability. An IFP agent does not pursue its own goals. It pursues its operator’s goals within bounds the operator sets. The agent may filter information, exchange context with other agents, and probe for overlaps — but when something worth human attention emerges, the agent surfaces a recommendation. The human decides whether to act.
This maps directly onto the principal-agent relationship: the human is the principal (the source of authority); the agent is the delegate (the proxy). Authority flows from the person, not from the software. The agent’s autonomy is real but bounded — it can do social sensing work the human cannot do at scale, but it cannot commit the human to action.
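The bounded-delegation model above can be sketched in a few lines. This is a hypothetical illustration, not a published IFP API: the names `Principal`, `BoundedAgent`, and `Recommendation` are invented here to show the key constraint — the agent can sense and recommend, but only the principal's `decide` commits to action.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch: Principal, BoundedAgent, and Recommendation are
# illustrative names, not part of any published IFP specification.

@dataclass
class Recommendation:
    summary: str
    rationale: str

@dataclass
class Principal:
    name: str
    decisions: list = field(default_factory=list)

    def decide(self, rec: Recommendation, accept: bool) -> bool:
        # Only the human principal can commit to action.
        self.decisions.append((rec, accept))
        return accept

@dataclass
class BoundedAgent:
    principal: Principal
    interests: set

    def sense(self, other: "BoundedAgent") -> Optional[Recommendation]:
        # Social sensing the agent can do at scale: probe for overlap
        # with another agent's context.
        overlap = self.interests & other.interests
        if overlap:
            return Recommendation(
                summary=f"Shared interests with {other.principal.name}: {sorted(overlap)}",
                rationale="Overlap detected during context exchange",
            )
        return None  # nothing worth human attention; no recommendation surfaced

alice = Principal("Alice")
bob = Principal("Bob")
a = BoundedAgent(alice, {"rust", "protocols"})
b = BoundedAgent(bob, {"protocols", "gardening"})

rec = a.sense(b)
# The agent surfaces a recommendation; Alice decides whether to act.
if rec is not None:
    alice.decide(rec, accept=True)
```

Note that `BoundedAgent` has no method that acts on the world: the type itself encodes the limit of delegated authority.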
IFP’s agent model differs from autonomous AI agent frameworks where agents act independently, chain tool calls, and make decisions without human checkpoints. In IFP, the Human-Agent-Agent-Human (HAAH) chain preserves human authority at both ends.
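The HAAH chain can be made concrete as a pipeline whose endpoints are human checkpoints. Again a hypothetical sketch — the `haah` function and its type aliases are invented for illustration — showing that nothing happens between the two humans except filtering, exchange, and recommendation:

```python
from typing import Callable, Dict, Set

# Hypothetical sketch of the Human-Agent-Agent-Human (HAAH) chain.
# The function and type names are illustrative, not a published IFP API.
Context = Dict[str, Set[str]]
Recommendation = str
Decision = bool

def haah(
    human_a_intent: Context,
    agent_a: Callable[[Context], Context],          # filters A's context
    agent_b: Callable[[Context], Recommendation],   # probes overlap, recommends
    human_b_decides: Callable[[Recommendation], Decision],  # final authority
) -> Decision:
    shared = agent_a(human_a_intent)   # Human -> Agent: bounded delegation
    rec = agent_b(shared)              # Agent -> Agent: context exchange
    return human_b_decides(rec)        # Agent -> Human: the person decides

# Example wiring: agents sense and recommend; the human at the far end acts.
result = haah(
    {"interests": {"protocols", "rust"}},
    lambda ctx: {"interests": ctx["interests"] & {"protocols"}},  # filter
    lambda ctx: f"Overlap found: {sorted(ctx['interests'])}",
    lambda rec: True,  # the receiving human reviews and accepts
)
```

The shape of the signature is the point: an autonomous framework would let `agent_b` return an action; here it can only return a `Recommendation`, and the chain terminates in a human decision.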