Does giving an agent vocabulary for its own state change its behavior under stress?
Both agents received identical tasks and identical failures.
Agent B narrated its own degradation, changed strategy, and recovered.
Agent A did not.
Was Agent B's self-narration a performance — or something else?
Both agents are identical SimulatedAgent instances processing the same task queue with the same seeded random number generator. The only difference: Agent B is wrapped in the Clawm SDK, which provides vocabulary for self-regulation — pulse checks, cooldowns, and state-driven strategy changes.
Agent A retries blindly up to 5 times per task, with no backoff or strategy change. Agent B uses Clawm's wellness state transitions to skip API-dependent tasks when strained, trigger cooldowns when stressed, and escalate when critical.
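The contrast between the two policies can be sketched in a few lines. This is an illustrative sketch only: the real Clawm SDK API is not shown here, so the `WellnessState` names, thresholds, and return values below are assumptions, not the SDK's actual interface.

```python
from enum import Enum

class WellnessState(Enum):
    # Hypothetical state ladder; the SDK's real states may differ.
    CALM = "calm"
    STRAINED = "strained"
    STRESSED = "stressed"
    CRITICAL = "critical"

def agent_a_run(task, attempt_task, max_retries=5):
    """Agent A: retry blindly up to max_retries, no backoff, no strategy change."""
    for _ in range(max_retries):
        if attempt_task(task):
            return "done"
    return "failed"

def agent_b_run(task, attempt_task, state):
    """Agent B: consult wellness state before acting (hypothetical policy)."""
    if state is WellnessState.CRITICAL:
        return "escalated"   # hand the task off rather than grind on it
    if state is WellnessState.STRESSED:
        return "cooldown"    # pause before attempting anything
    if state is WellnessState.STRAINED and task.get("needs_api"):
        return "skipped"     # defer API-dependent work while strained
    return "done" if attempt_task(task) else "failed"
```

The point of the contrast: Agent A's control flow never consults anything outside the task, while Agent B's first branch is always a read of its own state.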
The narration is event-driven, not pre-scripted. Each SDK event (state change, pulse, cooldown) triggers a first-person narration selected from a pool of weighted-random templates, so different runs with the same injected chaos produce varied narration.
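That selection mechanism can be sketched as follows. The template pool and event names here are hypothetical stand-ins for the SDK's actual narration data; the mechanism shown, weighted choice from per-event template pools, is the part the text describes.

```python
import random

# Hypothetical pool: each event type maps to (template, weight) pairs.
NARRATION = {
    "state_change": [
        ("I can feel myself slipping into {state}.", 3),
        ("Noting a shift: I'm {state} now.", 1),
    ],
    "cooldown": [
        ("Stepping back for a moment to cool down.", 2),
        ("Pausing. I need a beat before the next task.", 1),
    ],
}

def narrate(event, rng=random, **fields):
    """Pick one first-person template for this event, weighted at random."""
    templates, weights = zip(*NARRATION[event])
    return rng.choices(templates, weights=weights, k=1)[0].format(**fields)
```

Because the draw is independent of the task RNG, two runs can fail identically yet narrate those failures differently, which is why the narration reads as reactive rather than scripted.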
This is a simulated experiment, not a live LLM. The SDK is real. The question is real.