@libar-dev/delivery-process v1.0.0-pre
The architectural memory engine for AI-assisted codebases.
AI agents scrape stale Markdown, hallucinate APIs, and exhaust context windows before
reaching the right files. When context compresses mid-session, prose breaks — agents
lose continuity and repeat mistakes. delivery-process makes your codebase
machine-queryable: annotate TypeScript and Gherkin with structured tags, query exact
context with a single CLI call, and get specs that survive compaction where prose doesn’t.
$ npm install @libar-dev/delivery-process@pre

# Hallucination risk: High
$ cat ROADMAP.md src/core/*.ts | head -500
$ grep -r "EventStore" --include="*.ts"
# 40+ files, ~8,000 tokens, still incomplete

# Hallucination risk: Zero
$ pnpm process:query -- \
    context TransformDataset \
    --session implement
# 1 call, ~200 tokens, complete & accurate

Across an 8.8M-line monorepo — 43,949 files, 422 executable specs.
3 production sessions, 5x output. Context stayed at 65% during heavy editing — and decreased as work progressed.
Structured specs use 50–65% context versus 100% for prose prompts. After context compaction: zero impact — structured specs maintain full continuity where prose fails.
106 custom rules existed to catch AI architecture violations. Structured specs eliminate the root cause — agents generate correct patterns from the start.
src/config/
Preset selection, tag registry init, path resolution. Supports generic, libar-generic, and DDD-ES-CQRS presets.

src/scanner/
File discovery, AST parsing, opt-in detection. Processes TypeScript JSDoc and Gherkin feature files independently.

src/extractor/
Pattern extraction, shape resolution, and cross-source merging with conflict detection into ExtractedPattern[].

src/generators/pipeline/
Builds MasterDataset in a single O(n) pass — pre-computed views, relationship index, and architecture data.

src/renderable/
20 Zod codecs transform MasterDataset into RenderableDocument, then to Markdown with progressive disclosure.
TypeScript owns runtime structure. Gherkin owns planning and behavior. Each source flows through
the four-stage pipeline — Scanner → Extractor → Transformer → Codec —
into a single queryable MasterDataset. Neither source duplicates the other. The
agent queries truth directly instead of reading both.
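The four-stage flow described above can be pictured as plain function composition. The type and function names below follow the terms used in this document (SourceFile, ExtractedPattern, MasterDataset, RenderableDocument), but the signatures and bodies are an illustrative sketch, not the package's actual API.

```typescript
// Illustrative shapes only; the real schemas live inside the package.
type SourceFile = { path: string; kind: 'typescript' | 'gherkin'; text: string };
type ExtractedPattern = { name: string; tags: Record<string, string> };
type MasterDataset = { patterns: ExtractedPattern[] };
type RenderableDocument = { markdown: string };

// Each stage is a pure function; the pipeline is their composition.
const scan = (paths: string[]): SourceFile[] =>
  paths.map((path) => ({
    path,
    kind: path.endsWith('.feature') ? 'gherkin' : 'typescript',
    text: '',
  }));

const extract = (files: SourceFile[]): ExtractedPattern[] =>
  files.map((f) => ({ name: f.path, tags: { source: f.kind } }));

const transform = (patterns: ExtractedPattern[]): MasterDataset => ({ patterns });

const encode = (dataset: MasterDataset): RenderableDocument => ({
  markdown: dataset.patterns.map((p) => `## ${p.name}`).join('\n'),
});

// Scanner -> Extractor -> Transformer -> Codec
const doc = encode(transform(extract(scan(['src/append.ts', 'durability.feature']))));
console.log(doc.markdown);
```

Because both sources pass through the same final stages, a TypeScript file and a Gherkin feature end up as entries in one dataset rather than two parallel documents.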
/**
 * @libar-docs
 * @libar-docs-status roadmap
 * @libar-docs-uses EventStoreFoundation, Workpool
 * @libar-docs-used-by SagaEngine, CommandOrchestrator
 */
export function durableAppend(
  event: DomainEvent
): Promise<Result> {
  throw new Error('Not implemented');
}
@libar-docs
@libar-docs-pattern:EventStoreDurability
@libar-docs-status:roadmap
@libar-docs-phase:18
@libar-docs-depends-on:EventStoreFoundation
Feature: Event Store Durability

  Rule: Appends must survive process crashes

    Given a DomainEvent is ready to commit
    When the process crashes mid-write
    Then the append retries to completion
# Validate scope before starting a session
$ pnpm process:query -- scope-validate EventStoreDurability implement

# Pull exact context — zero hallucination, minimal tokens
$ pnpm process:query -- context EventStoreDurability --session implement
AI agents parse stale generated docs and hallucinate context. The Data API queries annotated source directly — 26 typed methods, real-time, zero extra tokens.
$ pnpm process:query -- overview
=== PROGRESS ===
318 patterns
224 completed · 47 active · 47 planned
70%

=== ACTIVE PHASES ===
Phase 24: ProcessStateAPIRelationshipQueries
Phase 25: DataAPIStubIntegration

=== BLOCKING ===
StepLintExtendedRules ← StepLintVitestCucumber
$ pnpm process:query -- scope-validate ProcessStateAPI implement
=== CHECKLIST ===
[PASS] Dependencies completed
[BLOCKED] Deliverables defined: No deliverables found
[BLOCKED] FSM transition: active → active is not valid

=== VERDICT ===
BLOCKED
2 blocker(s) prevent implement session
$ pnpm process:query -- context ProcessStateAPI --session design
=== PATTERN: ProcessStateAPI ===
Status: active | Category: core
File: src/api/process-state.ts

=== DEPENDENCIES ===
[completed] MasterDataset src/validation-schemas/master-dataset.ts
[active] FSMValidator src/validation/fsm/validator.ts

=== CONSUMERS (13) ===
ContextAssemblerImpl (active)
HandoffGeneratorImpl (completed)
ScopeValidatorImpl (completed)
$ pnpm process:query -- handoff --pattern DataAPIDesignSessionSupport
=== HANDOFF: DataAPIDesignSessionSupport ===
Date: 2026-02-21 | Status: completed

=== COMPLETED ===
[x] Scope validation logic src/api/scope-validator.ts
[x] Handoff document generator src/api/handoff-generator.ts
[x] scope-validate subcommand src/cli/process-api.ts

=== BLOCKERS ===
None
↑ actual output · delivery-process querying its own 318-pattern source
Spec before code
Turn a pattern brief into a structured Gherkin spec. Define deliverables, set the phase number, declare dependencies. No implementation — just a locked roadmap spec ready to drive the next session.
Decisions before implementation
Make architectural choices with options documented as Decision Specs. Create typed TypeScript stubs — interfaces and function signatures that compile but throw. The contract that implementation must fulfill.
Code from spec
The spec is the contract. Transition to active, implement each deliverable, run Process Guard at every commit. When all deliverables are complete, transition to completed — hard-locked forever.
AI-Native Data API
Traditional AI tools ingest entire repos. process-api
feeds curated JSON or text directly to the agent. One call for scope validation, one for a
curated context bundle — dependency trees, architectural impact, deliverables. Zero
hallucination. 5x fewer tokens.
FSM-Enforced Process Guard
Documentation guidelines are ignored; state machines are enforced. A rigid FSM
(roadmap → active
→ completed) lives in your codebase, validated by a
Decider-pattern pre-commit hook. Active specs are scope-locked; completed features cannot
be modified without an auditable unlock reason.
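The lifecycle FSM can be sketched as a Decider: a pure function that takes the current status and a proposed transition and returns an allow/deny decision. Everything below is a hypothetical sketch, assuming the transition table and unlock behavior described above; the real Decider lives in the package's pre-commit hook.

```typescript
// Hypothetical sketch of the roadmap -> active -> completed lifecycle guard.
type Status = 'roadmap' | 'active' | 'completed';

const VALID: Record<Status, Status[]> = {
  roadmap: ['active'],
  active: ['completed'],
  completed: [], // hard-locked: no transitions without an explicit unlock
};

type Decision =
  | { allowed: true }
  | { allowed: false; reason: string };

function decide(from: Status, to: Status, unlockReason?: string): Decision {
  if (from === 'completed' && unlockReason) {
    return { allowed: true }; // audited unlock path
  }
  if (VALID[from].includes(to)) return { allowed: true };
  return { allowed: false, reason: `${from} -> ${to} is not a valid transition` };
}

console.log(decide('roadmap', 'active')); // { allowed: true }
console.log(decide('active', 'active').allowed); // false
```

A pure decide function like this is what makes the guard enforceable at commit time: the hook only needs the current status and the proposed one, with no repository-wide scan.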
Four-Stage Pipeline
TypeScript owns runtime relationships — uses,
used-by, category.
Gherkin owns planning constraints — status,
phase, depends-on.
Neither duplicates the other. The four-stage pipeline merges both into a unified
MasterDataset, and anti-pattern validators ensure the boundary is never crossed.
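One of those anti-pattern validators can be sketched as a simple ownership check: each source may only declare the tags it owns. The tag lists below follow the prose above and are illustrative; the package's actual validators and tag registry may differ.

```typescript
// Illustrative ownership check for the TypeScript/Gherkin boundary.
const TS_OWNED = new Set(['uses', 'used-by', 'category']);
const GHERKIN_OWNED = new Set(['status', 'phase', 'depends-on']);

type Source = 'typescript' | 'gherkin';

// Returns the tags a file declares that belong to the other source.
function boundaryViolations(source: Source, tags: string[]): string[] {
  const forbidden = source === 'typescript' ? GHERKIN_OWNED : TS_OWNED;
  return tags.filter((t) => forbidden.has(t));
}

// A TypeScript file declaring a planning tag crosses the boundary:
console.log(boundaryViolations('typescript', ['uses', 'phase'])); // [ 'phase' ]
```

Running a check like this during extraction is what keeps the "neither duplicates the other" guarantee from eroding as annotations accumulate.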
Progressive Disclosure & Rich Relationships
20 Zod-validated codecs transform one annotation set into rich Markdown for humans and
token-efficient modules for AI. Relationship tags —
uses, used-by,
implements, depends-on
— materialise into live Mermaid architecture diagrams scoped by
arch-context or
arch-layer. Six diagram types, auto-extracted TypeScript
shapes, Definition of Done traceability matrices. One source. Two audiences. Zero
duplication.
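A single codec stage can be pictured as a validate-then-render pair. The sketch below is dependency-free and hand-rolled for illustration; the package uses Zod schemas for the validation half, and the PatternView shape and renderPattern function here are assumptions, not its real types.

```typescript
// Stand-in for one codec: validate unknown input, then render with
// progressive disclosure (summary line first, detail only when it exists).
type PatternView = { name: string; status: string; usedBy: string[] };

function isPatternView(value: unknown): value is PatternView {
  const v = value as PatternView;
  return (
    typeof v?.name === 'string' &&
    typeof v?.status === 'string' &&
    Array.isArray(v?.usedBy)
  );
}

function renderPattern(value: unknown): string {
  if (!isPatternView(value)) throw new Error('codec input failed validation');
  const header = `### ${value.name} (${value.status})`;
  return value.usedBy.length === 0
    ? header
    : `${header}\nUsed by: ${value.usedBy.join(', ')}`;
}

console.log(
  renderPattern({ name: 'EventStoreDurability', status: 'roadmap', usedBy: ['SagaEngine'] })
);
```

Validating at the codec boundary means a malformed dataset fails loudly at generation time instead of silently producing wrong documentation for both audiences.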
Expose your entire delivery process state — dependency graphs, roadmap, pattern registry — natively to Cursor and Claude via Anthropic’s MCP standard. No CLI install. No context setup. Just perfect architectural memory.