Version: latest

Multi-repo

The kasmos daemon can manage any number of repositories simultaneously. Each repo runs its own poll loop. The task store and signal gateway are shared globally (~/.config/kasmos/taskstore.db); project namespacing inside the db prevents collisions between repos. The signal directory (<repo>/.kasmos/signals/) and the wave orchestrator (loop.Processor) remain per-repo.

Registration

At startup

Repos listed in ~/.config/kasmos/daemon.toml are registered before the daemon enters its poll loop:

repos = [
  "/home/user/project-a",
  "/home/user/project-b",
  "/home/user/project-c",
]

At runtime

Use kas daemon add while the daemon is running:

kas daemon add /home/user/new-project

This sends POST /v1/repos to the control socket. The repo is registered immediately and included in the next tick.

To remove a repo:

kas daemon remove /home/user/old-project

This sends DELETE /v1/repos/<project> and closes the repo's task store.

Project naming

The project name is always filepath.Base(path), the last component of the absolute path.

/home/user/kasmos → project: "kasmos"
/work/my-lib → project: "my-lib"

Duplicate basename rejection. If you try to add /work/foo when /archive/foo is already registered, the daemon rejects it:

repo with basename "foo" already registered (path: /archive/foo);
rename one of the directories or use distinct names

This constraint exists because the project name is the primary key for signal routing, task store queries, and the control API. Two repos with the same basename would cause signal misrouting.

Per-repo resources

When a repo is registered, RepoManager.Add opens (or reuses) the shared global db and records the following per-repo state:

| resource | location | shared? | purpose |
| --- | --- | --- | --- |
| task store | ~/.config/kasmos/taskstore.db | global (namespaced by project) | sqlite task/subtask/signal storage |
| signal gateway | same global db file | global (namespaced by project) | atomic signal claim/mark-processed |
| signals directory | <repo>/.kasmos/signals/ | per-repo | filesystem sentinel files, bridged into the gateway each tick |

The RepoManager.Add call lazy-opens the global db once; subsequent Add calls for other repos reuse the same open Store and SignalGateway handles. Project namespacing inside the db ensures that task and signal rows from different repos never collide, even though they share the same sqlite file.

Opening the global db can fail (e.g. ~/.config/kasmos/ does not exist, or the db is locked). This is non-fatal: the repo is registered with Store: nil and SignalGateway: nil. The daemon logs a warning and skips the repo on each tick rather than crashing.

A repo registered with a nil store becomes fully functional once the db is accessible and the repo is removed and re-added (or the daemon is restarted).

Per-repo processor

Each repo also gets a dedicated loop.Processor instance:

proc := loop.NewProcessor(loop.ProcessorConfig{
    AutoReviewFix:      m.autoReviewFix,
    Store:              store,
    Project:            project,
    MaxReviewFixCycles: m.maxReviewFixCycles,
    Hooks:              hooks,
})

The processor is created once and reused across all tick cycles. This matters because loop.Processor holds in-memory WaveOrchestrator state for active wave plans; if the processor were recreated on each tick, in-progress wave state would be lost.

Shared worktrees

On each tick, loop.BridgeFilesystemSignals scans not only <repo>/.kasmos/signals/ but also all directories under <repo>/.worktrees/:

<repo>/.worktrees/
├── plan-feature-x/
│   └── .kasmos/signals/   ← also scanned
└── plan-bugfix-y/
    └── .kasmos/signals/   ← also scanned

Any filesystem sentinel files found are inserted as pending rows in the shared signal gateway. This is compatibility plumbing: it lets older agents or tooling that writes sentinel files continue to work without changes. New agent workflows should emit signals directly via mcp signal_create; kas signal emit remains the operator and CLI fallback, writing to the same gateway without touching the filesystem.

Hooks

When a repo is registered, config.LoadHooksForRepo(path) reads its .kasmos/config.toml for any [[hooks]] entries:

[[hooks]]
type = "webhook"
url = "https://example.com/notify"
events = ["review_approved", "implement_finished"]

Hooks fire after FSM transitions and are scoped to the repo where they are configured. Each repo has its own taskfsm.HookRegistry; hooks from one repo never fire for another.

Viewing registered repos

kas daemon status

This prints all registered repos, their project names, and the count of currently active plans per project:

daemon running (uptime: 3h14m22s)
repos (2):
  kasmos   /home/user/kasmos   active: 1
  my-lib   /home/user/my-lib   active: 0

Example: two-project setup

# ~/.config/kasmos/daemon.toml
repos = [
  "/home/user/backend",
  "/home/user/frontend",
]
auto_advance_waves = true
auto_review_fix = true
max_review_fix_cycles = 2

Both projects (backend, frontend) share the global task store at ~/.config/kasmos/taskstore.db. Each gets its own SignalsDir (<project>/.kasmos/signals/) and its own loop.Processor. Project namespacing inside the shared db ensures that plans, waves, and signals from one project never interfere with the other.