Version: 2.5.0

signals

signals are the messaging layer between agent processes and the daemon. an agent (planner, coder, reviewer, etc.) emits a signal when it completes a lifecycle phase; the daemon picks it up and drives the next fsm transition.

the primary signal path is the db-backed gateway — signals are rows in the signals sqlite table in the shared global store at ~/.config/kasmos/taskstore.db. this path is atomic, auditable, and safely concurrent. a secondary filesystem path exists for compatibility and is automatically bridged into the db on every daemon tick.

prefer mcp signal_create for agents and kas signal emit for operators over writing sentinel files directly. both routes go through the db gateway, giving you audit history, stuck-signal recovery, and proper error messages.

# emit a signal (db gateway path — cli/operator fallback)
kas signal emit implement_finished my-plan
kas signal emit implement_task_finished my-plan --payload '{"wave_number":2,"task_number":3}'

signal flow diagrams

the following diagram shows the four independent ingress paths that create a pending signal row in the gateway:

the following diagram shows the daemon consumption path — how pending signals are claimed and processed on each tick:

db-backed gateway (primary path)

the db-backed signal gateway is implemented in config/taskstore/signal_sqlite.go. signals are rows in the signals sqlite table in the shared global store at ~/.config/kasmos/taskstore.db.

schema

CREATE TABLE IF NOT EXISTS signals (
  id INTEGER PRIMARY KEY,
  project TEXT NOT NULL DEFAULT '',
  plan_file TEXT NOT NULL DEFAULT '',
  signal_type TEXT NOT NULL DEFAULT '',
  payload TEXT NOT NULL DEFAULT '',
  status TEXT NOT NULL DEFAULT 'pending',
  created_at TEXT NOT NULL DEFAULT '',
  claimed_by TEXT NOT NULL DEFAULT '',
  claimed_at TEXT NOT NULL DEFAULT '',
  processed_at TEXT NOT NULL DEFAULT '',
  result TEXT NOT NULL DEFAULT ''
);

an index on (project, status, created_at, id) ensures efficient ordered claim queries.
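as a sketch, such an index could be declared like this (the index name idx_signals_claim is an assumption; the actual name is defined in config/taskstore/signal_sqlite.go):

```sql
-- covering index for the ordered claim query: filter by project and status,
-- then order by creation time with id as a tiebreaker
CREATE INDEX IF NOT EXISTS idx_signals_claim
ON signals (project, status, created_at, id);
```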

signal statuses

| status | meaning |
|---|---|
| pending | created, waiting to be claimed |
| processing | claimed by a daemon worker, being dispatched |
| done | successfully processed |
| failed | processing failed or signal was rejected |

lifecycle of a DB signal

pending
  └─ Claim(project, workerID) ──► processing
                                    ├─ success ──► done
                                    └─ error ──► failed

the Claim operation is a serializable transaction: it updates the row to processing in a single UPDATE … WHERE id = (SELECT … LIMIT 1) and then re-fetches the claimed row. concurrent daemon instances or restarted processes cannot claim the same signal twice.
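conceptually, the claim step looks like the following (a simplified sketch; the real query and its parameter handling live in config/taskstore/signal_sqlite.go):

```sql
-- atomically claim the oldest pending signal for a project. only one
-- concurrent worker can win this UPDATE; the others match zero rows.
UPDATE signals
SET status = 'processing', claimed_by = :worker_id, claimed_at = :now
WHERE id = (
  SELECT id FROM signals
  WHERE project = :project AND status = 'pending'
  ORDER BY created_at, id
  LIMIT 1
);
```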

signal types

| signal type | emitted by | triggers |
|---|---|---|
| implement_finished | coder / fixer | FSM transition → reviewing, spawn reviewer |
| implement_task_finished | wave task agent | wave task complete, possible wave advance |
| implement_wave | daemon / tui | start next wave |
| elaborator_finished | architect | update plan content, begin wave 1 |
| review_approved | reviewer | FSM transition reviewing → verifying (spawn master agent) when auto_readiness_review = true; otherwise reviewing → done |
| review_changes_requested | reviewer | FSM transition → implementing, spawn fixer |
| planner_finished | planner | FSM transition → ready |
| verify_approved | master agent | FSM transition verifying → done. no loop-cap signal is ever written to the gateway. |
| verify_failed | master agent | FSM transition verifying → implementing, spawn fixer. when readiness_max_verify_cycles is reached, the processor consumes the verify_failed signal but applies the verifying → done transition instead; no new verify_approved signal row is created, and the original verify_failed row is still marked processed. callers distinguish the in-memory force-promotion via the ForcePromoted flag on the emitted VerifyApprovedAction. |

architect completion: the canonical internal event is architect_finished, but the persisted gateway signal wire name is elaborator_finished (a legacy alias preserved for compatibility). kas signal emit architect_finished is accepted and normalized to elaborator_finished transparently.

deprecated aliases: readiness_approved → verify_approved; readiness_changes_requested / readiness-changes / readiness-changes-requested / master_approved → verify_failed. these aliases are accepted at ingress and canonicalized; use verify_approved / verify_failed in new automation.
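the canonicalization can be pictured as a simple lookup table. this is an illustrative sketch only: the function name and map below are hypothetical, and the real normalization happens inside the gateway at ingress.

```go
package main

import "fmt"

// signalAliases maps deprecated (and legacy) wire names to their canonical
// forms, per the alias list documented above. hypothetical sketch; not the
// actual gateway code.
var signalAliases = map[string]string{
	"architect_finished":          "elaborator_finished",
	"readiness_approved":          "verify_approved",
	"readiness_changes_requested": "verify_failed",
	"readiness-changes":           "verify_failed",
	"readiness-changes-requested": "verify_failed",
	"master_approved":             "verify_failed",
}

// canonicalSignalType returns the canonical name for a signal type,
// passing already-canonical names through unchanged.
func canonicalSignalType(t string) string {
	if c, ok := signalAliases[t]; ok {
		return c
	}
	return t
}

func main() {
	fmt.Println(canonicalSignalType("readiness_approved")) // verify_approved
	fmt.Println(canonicalSignalType("implement_finished")) // implement_finished
}
```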

writing a db signal

kas signal emit implement_task_finished my-plan --payload '{"wave_number":2,"task_number":3}'

this calls gateway.Create(project, entry) which inserts a row with status = 'pending'. in agent workflows, use mcp signal_create for the same gateway-backed result without shelling out to the cli.
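the resulting insert is conceptually the following (a sketch with illustrative parameter names; the real statement lives in config/taskstore/signal_sqlite.go):

```sql
-- a new signal always enters the gateway in the pending state
INSERT INTO signals (project, plan_file, signal_type, payload, status, created_at)
VALUES (:project, :plan_file, :signal_type, :payload, 'pending', :now);
```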

stuck signal reaper

the daemon runs a reaper goroutine every 30 seconds. it calls gateway.ResetStuck(60s) to find signals that have been in processing for more than 60 seconds and reset them to pending. this handles daemon crashes between claim and mark-processed.
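conceptually, ResetStuck performs an update like this (a simplified sketch; the real implementation also handles the claim metadata and timestamp format):

```sql
-- return long-claimed signals to the pending pool so another worker
-- can pick them up; :cutoff is now minus the 60-second stuck threshold
UPDATE signals
SET status = 'pending', claimed_by = '', claimed_at = ''
WHERE status = 'processing' AND claimed_at < :cutoff;
```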

bridging the two paths

on each tick, loop.BridgeFilesystemSignals scans the repo's filesystem signals directory (and all shared worktrees under .worktrees/) and inserts any signal files it finds as pending rows in the db. the filesystem files are then consumed (deleted) by the bridge so they are not re-inserted on the next tick.

this means agents can write to the filesystem path without knowing whether a gateway is present — the daemon transparently upgrades them to the db-backed flow when a gateway is available.

filesystem path (compatibility / legacy)

the filesystem path is preserved for backward compatibility and for low-level debugging. agents that cannot reach the daemon directly may still write signal files; the bridge ensures they enter the db-backed flow on the next tick.

the filesystem path is implemented in config/taskfsm/signalfs.go. signals are plain files written atomically to the repo's signals directory.

directory layout

<repo>/.kasmos/signals/
├── staging/ # atomic write staging area
├── processing/ # signals currently being processed by the daemon
├── failed/ # dead-lettered signals
│ └── *.reason # companion file with ISO-8601 timestamp + failure reason
└── <signal-file> # pending signals ready for pickup
  • staging/ — agents write here first, then atomically rename to the parent dir via AtomicWrite. this prevents the daemon from picking up a partially written file.
  • processing/ — the daemon moves a file here via BeginProcessing before dispatching. if the daemon crashes between pick-up and completion, the file stays in processing/.
  • failed/ — FailProcessing moves rejected signals here and writes a .reason file. inspect these to debug signal routing failures.

startup recovery

on daemon startup, before the first poll tick, taskfsm.RecoverInFlight(signalsDir) scans processing/ and moves any leftover files back to the signals root so they can be re-dispatched. if a newer signal with the same name already exists in the root (written after the crash), the stale in-flight copy is discarded to avoid overwriting newer state.

inspecting signals

# list pending filesystem sentinel files in .kasmos/signals/ (compatibility path only)
kas signal list

# emit a db-backed signal (primary path — daemon processes these automatically)
kas signal emit implement_finished my-plan

failed filesystem signals accumulate in .kasmos/signals/failed/. examine the companion .reason files to understand why a signal was rejected, fix the root cause, and re-emit.