Commitment-Carrying Agent State

A Missing Primitive for Persistent AI Agency

Chaitanya Mishra

Independent Researcher · April 2026

Abstract

AI agents remain brittle over long horizons not chiefly because they lack more memory, more planning depth, or more tools, but because they lack a stable decision state. Existing systems replay transcripts, retrieve memories, or maintain ad hoc summaries, yet they routinely collapse observations, assumptions, obligations, hypotheses, plans, and delegated promises into undifferentiated text. I call this failure semantic state collapse. It causes dormant constraints to vanish until the instant they matter, self-authored hypotheses to return later as evidence, parent objectives to be lost during delegation, and stale plans to survive world changes.

This paper argues that persistent agency requires a missing systems primitive: commitment-carrying agent state. In commitment-carrying agent state, every future-relevant semantic item is represented as a typed commitment with support provenance, validity guards, discharge conditions, ownership, priority, and refinement links. The agent becomes a state transformer over this object rather than a policy that repeatedly reconstructs intent from raw dialogue history. I formalize open-world agent execution as reasoning over event histories containing external evidence, self-authored artifacts, actions, and messages; define support-sensitive task families on which untyped state must fail; and prove two core results: first, action invariance under commitment-sufficient compression, and second, delegation safety under contract refinement.

The paper also introduces a concrete runtime architecture, a dual-language state schema, an action authorization boundary, an epistemic firewall preventing self-generated text from silently becoming evidence, and a decision-state bottleneck objective for learning compact but commitment-sufficient state abstractions. Because no experiments are run here, the empirical contribution is a rigorous evaluation program: benchmark families for dormant constraint activation, provenance-sensitive reasoning, revalidation under world drift, and contract-preserving delegation, together with metrics and ablations targeted at the theory's failure signatures.

The central thesis is simple. Long-horizon agency is not just a memory problem. It is a state problem. A transcript is a log, not a control state. If AI agents are to become reliable autonomous systems rather than eloquent stateless emulations, they need a semantics-preserving substrate for what they are committed to, why, and under what conditions those commitments remain live.
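To make the proposed primitive concrete, the typed-commitment record described in the abstract can be sketched as a minimal data structure. This is an illustrative sketch, not the paper's formal schema: all names (`Commitment`, `CommitmentKind`, `authorize`) and field choices are assumptions drawn from the fields the abstract enumerates (type, support provenance, validity guard, discharge condition, ownership, priority, refinement link).

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class CommitmentKind(Enum):
    # The abstract distinguishes observations, assumptions, obligations,
    # hypotheses, plans, and delegated promises; these names are illustrative.
    OBSERVATION = auto()
    ASSUMPTION = auto()
    OBLIGATION = auto()
    HYPOTHESIS = auto()
    PLAN = auto()
    PROMISE = auto()

@dataclass
class Commitment:
    kind: CommitmentKind
    content: str                   # the semantic item itself
    support: list[str]             # provenance: ids of events or commitments that justify it
    validity_guard: str            # condition under which the commitment remains live
    discharge_condition: str       # condition under which it is fulfilled or retired
    owner: str                     # which agent (or delegate) holds it
    priority: int = 0
    refines: Optional[str] = None  # link to a parent commitment this one refines

def authorize(action_support: list[Commitment]) -> bool:
    """Sketch of the epistemic firewall at the action authorization boundary:
    an action may not rest solely on self-authored hypotheses treated as
    evidence; at least one supporting item must be an external observation."""
    return any(c.kind is CommitmentKind.OBSERVATION for c in action_support)
```

A state so structured lets the runtime ask typed questions ("which obligations are live?", "which hypotheses lack external support?") that a flat transcript cannot answer without reconstruction.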

Keywords

AI agents · persistent state · commitment-carrying state · semantic state collapse · delegation · provenance · long-horizon agency · decision-state bottleneck

Citation

Mishra, Chaitanya. April 2026. Commitment-Carrying Agent State: A Missing Primitive for Persistent AI Agency.