When To Decompose
Decompose when at least one of these is true:
- The request spans multiple features or surfaces.
- One focused worker would likely exceed about 3 backlog items.
- Workstreams can be isolated by file/domain with low merge risk.
When Not To Decompose
Keep a single worker when:
- The task is a tiny fix (single-file edit, typo, one-function bug).
- The work is mostly sequential and parallel workers would idle.
- Coupling is high enough that splitting increases integration risk.
Limits and Naming
Hard constraints from the decomposition protocol:
- Maximum 5 SubTurtles per user request.
- Each SubTurtle backlog should have 3-7 items.
- Use lowercase hyphenated names: `<project>-<feature>`.
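These constraints are mechanical enough to check in code. A minimal sketch, assuming a plan shaped as a name-to-backlog mapping (the constant names and the `validate_plan` helper are illustrative, not part of the protocol):

```python
import re

MAX_SUBTURTLES = 5           # hard cap per user request
BACKLOG_RANGE = range(3, 8)  # each backlog should hold 3-7 items

# lowercase hyphenated <project>-<feature>, e.g. "dashboard-search"
NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)+$")

def validate_plan(plan):
    """Return a list of violations for a {name: backlog_items} plan."""
    problems = []
    if len(plan) > MAX_SUBTURTLES:
        problems.append(f"too many SubTurtles: {len(plan)} > {MAX_SUBTURTLES}")
    for name, backlog in plan.items():
        if not NAME_RE.match(name):
            problems.append(f"bad name: {name!r}")
        if len(backlog) not in BACKLOG_RANGE:
            problems.append(f"{name}: backlog size {len(backlog)} outside 3-7")
    return problems
```

A plan that passes returns an empty list; each violation is reported separately so an over-large plan can be merged or simplified item by item.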
Decomposition Workflow
1. Extract capabilities: parse the user request into concrete deliverables (UI, API, tests, deployment, etc.).
2. Group into workstreams: create independent SubTurtle candidates by feature boundary or ownership boundary.
3. Partition running vs queued: start only unblocked streams now; queue blocked streams until dependencies complete.
4. Enforce limits: keep at most 5 active streams and 3-7 backlog items per stream by merging or simplifying.
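The steps above can be sketched as a single pass over the extracted deliverables. This is a hypothetical sketch: the `(name, feature, blocked_by)` tuple shape and the `plan_streams` helper are assumptions for illustration, not a prescribed API.

```python
from collections import defaultdict

def plan_streams(deliverables, max_active=5):
    """deliverables: list of (name, feature, blocked_by) tuples.
    Groups deliverables by feature boundary, then partitions the
    resulting streams into running vs queued under the active cap."""
    streams = defaultdict(list)
    blockers = {}
    for name, feature, blocked_by in deliverables:
        streams[feature].append(name)       # group into workstreams
        if blocked_by:
            blockers[feature] = blocked_by  # remember what blocks it
    running, queued = [], []
    for feature in streams:
        if feature in blockers:
            queued.append(feature)          # blocked: wait for dependency
        elif len(running) < max_active:
            running.append(feature)         # unblocked and under the cap
        else:
            queued.append(feature)          # over the cap: defer
    return running, queued
```

Running the dashboard request through this sketch yields two running streams and one queued stream, matching the worked pattern later in this document.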
Dependency Handling
If `B` depends on `A`, this is required behavior:
- Spawn `A` first.
- Record `B` as queued.
- Spawn `B` immediately after `A` finishes.
This keeps autonomous progress predictable and avoids wasted loops on tasks that are waiting for contracts, schemas, or shared interfaces.
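One way to implement the required ordering, assuming streams are described as a name-to-dependency-set map and `spawn` is whatever actually launches a worker (both names are hypothetical):

```python
def spawn_in_order(streams, spawn):
    """streams: {name: set_of_dependency_names}. Calls spawn(name)
    only once all of a stream's dependencies have been spawned."""
    done, queued = set(), dict(streams)
    while queued:
        # a stream is ready when every dependency is already done
        ready = [n for n, deps in queued.items() if deps <= done]
        if not ready:
            raise RuntimeError(f"dependency cycle among: {sorted(queued)}")
        for n in ready:
            spawn(n)        # A is always spawned before anything depending on it
            done.add(n)
            del queued[n]
    return done
```

The cycle check matters: a loop of mutually blocked streams would otherwise spin forever, which is exactly the kind of wasted autonomous loop the protocol is meant to avoid.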
Multi-SubTurtle Spawn Reliability
For 2+ spawns in one request, `META_SHARED.md` requires a reliability protocol:
- Prefer Bash here-doc + stdin state seeding.
- Use each spawn's built-in `ctl list` output to verify runtime success.
- If a stream stalls, check the current running list and spawn only the missing workers.
- Report exact outcome: running names plus skipped/failed names and reasons.
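A sketch of the stall-recovery step, assuming `ctl list` prints one worker name per line (that output format and the `reconcile` helper are assumptions for illustration):

```python
def reconcile(desired, ctl_list_output):
    """Compare desired worker names against the runtime's current list
    (parsed from assumed one-name-per-line `ctl list` output) and report
    exactly what is already running and what still needs spawning."""
    running = set(ctl_list_output.split())
    missing = [name for name in desired if name not in running]
    return {
        "running": sorted(set(desired) & running),
        "to_spawn": missing,  # spawn only these; never respawn live workers
    }
```

The returned dict doubles as the exact-outcome report the protocol asks for: running names on one side, missing names to act on (or explain) on the other.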
Worked Patterns
- Frontend
- API
- Full Stack
Request: “Build a dashboard with search, filters, export.”

Decomposition:
- `dashboard-search` (running)
- `dashboard-filters` (running)
- `dashboard-export` (queued, depends on search query contract)
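The same decomposition can be held as plain data, which makes the queued-to-running promotion explicit. The dict shape and the `promote` helper are illustrative assumptions:

```python
decomposition = {
    "dashboard-search":  {"status": "running", "depends_on": None},
    "dashboard-filters": {"status": "running", "depends_on": None},
    "dashboard-export":  {"status": "queued",
                          "depends_on": "dashboard-search"},  # query contract
}

def promote(finished, plan):
    """List queued streams whose dependency just finished."""
    return [name for name, s in plan.items()
            if s["status"] == "queued" and s["depends_on"] == finished]

# once dashboard-search finishes, export becomes spawnable
promote("dashboard-search", decomposition)  # -> ["dashboard-export"]
```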
