# Audit Continue

`/audit-continue` (Running Audits): Resume an in-progress audit. Finds the most recent incomplete audit and continues from where you left off.
You are resuming an in-progress audit.
## Auto-Detection

- Look for `.audit-state.yaml` files in `audits/*/`
- Find the most recent incomplete audit
- If multiple, ask which to continue
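The detection step above can be sketched in shell. This is a minimal sketch, not the command's actual implementation: it assumes a finished audit's state file contains a literal `phase: complete` line (field name taken from this document) and uses modification time to pick the most recent.

```shell
# Sketch: find the most recently touched incomplete audit state file.
# Assumes "phase: complete" marks a finished audit (hypothetical marker).
latest_incomplete() {
  grep -L 'phase: complete' audits/*/.audit-state.yaml 2>/dev/null \
    | xargs -r ls -t 2>/dev/null \
    | head -n 1
}
```

`grep -L` lists files that do NOT contain the marker, and `ls -t` orders them newest first.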
## Resume Flow

```
Resuming audit: [project-name]
Started: [date]
Progress: X/Y items (Z%)
Last completed: [ITEM-ID] - [Title]
Next up: [ITEM-ID] - [Title]

Ready to continue? (y/n)
```
## State Recovery

Read from `.audit-state.yaml`:

- `phase` (auto-check / interactive / complete)
- Flow preference
- Remaining items
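As a shape reference only, a state file might look like the sketch below. The field names are inferred from those referenced in this document (`phase`, `current_item`, `items_completed`, `items_remaining`); all values are illustrative and the actual schema is whatever `/audit-start` writes.

```yaml
# Illustrative sketch only - real schema is defined by /audit-start
project: example-project      # hypothetical
phase: interactive            # auto-check / interactive / complete
flow: guided                  # flow preference (hypothetical value)
current_item: ITEM-014
items_completed: 12
items_remaining:
  - ITEM-015
  - ITEM-016
```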
### If phase is auto-check (interrupted parallel run)

```
⚠ Previous auto-check was interrupted. Checking which items completed before interruption...
```

- Diff `items_remaining` against existing result files on disk (`audits/<project>/<date>/<ITEM-ID>.md`)
- Items WITH result files → completed (remove from `items_remaining`)
- Items WITHOUT result files → need re-checking
- Update `items_completed` count from result file count
- Re-launch parallel auto-check on truly remaining items (proceeds to auto-check phase below)
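The reconciliation step above can be sketched as a small shell helper. It is a sketch under the assumption that result files are named `<ITEM-ID>.md` inside the audit's date directory, as described in this document.

```shell
# Sketch: print only the items that still lack a result file on disk.
# $1 is the results directory (audits/<project>/<date>); the rest are item IDs.
still_remaining() {
  results_dir="$1"
  shift
  for item in "$@"; do
    # An item with an existing <ITEM-ID>.md is treated as completed.
    [ -f "$results_dir/$item.md" ] || printf '%s\n' "$item"
  done
}
```

Items it prints are the ones to re-check; the completed count is simply the number of `*.md` files present.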
### If phase is interactive

```
Resuming interactive review...
Last item: [current_item]
Remaining: [count] items needing review
```

Continue from `current_item` in the interactive workflow (skip to Interactive Item Workflow below).
### If phase is complete

This audit is already complete. Start a new audit with `/audit-start`.
Continue using the same flow as when started.
## Autonomous Evidence Gathering

Before starting the first item, read the project config (`projects/<name>.yaml`) and clone the repo:
```shell
CLONE_DIR="/tmp/audit-$(date +%s)"
git clone git@github.com:<owner>/<repo>.git "$CLONE_DIR"
# Fall back to HTTPS if SSH fails
```
Reuse this clone for all items. Do NOT ask the user for evidence you can gather yourself.
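The SSH-then-HTTPS fallback mentioned in the comment can be sketched as a generic helper. The two-URL interface is an illustrative assumption; the real command would build the URLs from the project config.

```shell
# Sketch: try the primary clone URL, fall back to a second one on failure.
# Relies only on git's exit status; URLs are supplied by the caller.
clone_with_fallback() {
  primary="$1"
  fallback="$2"
  dest="$3"
  git clone "$primary" "$dest" 2>/dev/null || git clone "$fallback" "$dest"
}
```

For example, `clone_with_fallback "git@host:o/r.git" "https://host/o/r.git" "$CLONE_DIR"` tries SSH first, then HTTPS.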
## Parallel Auto-Check on Remaining Items

Before walking through items interactively, run the parallel auto-check phase on the remaining items:

- Diff `items_remaining` against existing result files — skip items that already have result files on disk (`audits/<project>/<date>/<ITEM-ID>.md`)
- Group the truly remaining items by section
- Launch parallel subagents (up to 8 concurrent) — one per section, using the same subagent prompt template as `/audit-start`
- Present a batch summary of auto-check results
- User reviews — accept all, review failures, or drill into specifics
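The fan-out pattern above (one worker per section, capped at 8 concurrent) can be sketched in plain shell with background jobs. `check_section` is a hypothetical stand-in for launching a subagent on one section.

```shell
# Sketch: run one check per section with at most 8 concurrent jobs.
check_section() {
  # Hypothetical worker; the real flow launches a subagent per section.
  echo "checking $1"
}

max=8
count=0
for section in auth crypto logging deps; do   # illustrative section names
  check_section "$section" &
  count=$((count + 1))
  # Once the cap is reached, wait for the batch to drain before continuing.
  if [ "$count" -ge "$max" ]; then
    wait
    count=0
  fi
done
wait
```

This batches crudely (it waits for a full group rather than refilling slots), which is enough to illustrate the concurrency cap.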
## Interactive Item Workflow

For items marked needs-review by subagents (or items the user wants to revisit):

- Present the item - Show ID, title, severity, section, description
- Show the guide - Extract from `checklist/checklist/[section]/guide.md`
- Run auto-checks - Run checks against the clone; don't ask the user for evidence
- Ask follow-up questions - Only if you genuinely cannot determine the answer from the codebase
- Determine status - Pass / Fail / Partial / Skip / Not Applicable / Blocked
- Capture notes - Optional user notes
- Write result file - Create `audits/[project]/[date]/[ITEM-ID].md` per `checklist/schema/audit-result.schema.yaml` (`item_id` not `id`, lowercase status, always include `## Summary`, required headings per status)
- Validate result file - Run `npx tsx checklist/schema/validate.ts <result-file-path> --fix` and fix any errors before continuing
- Update state - Update `.audit-state.yaml` and move to the next item
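For orientation only, a result file satisfying the constraints called out above (`item_id` rather than `id`, lowercase status, a `## Summary` heading) might look like the sketch below. The YAML front-matter layout and every value are assumptions; the authoritative shape is `checklist/schema/audit-result.schema.yaml`, and the validator is the source of truth.

```markdown
---
item_id: ITEM-014        # hypothetical item; schema requires item_id, not id
status: pass             # lowercase status
---

## Summary

Illustrative one-line finding, backed by evidence from the cloned repo.
```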
## Regression Awareness

When auditing items that passed in a previous audit, note:

```
This item passed in your last audit ([date]). Let's verify it still passes.
```
## If No Audit Found

No in-progress audit found. Start a new audit with `/audit-start <project>`.

Available projects:

- [list from projects/]