# Getting Started

Run your first audit: 42 sections and 252+ checks run by AI agents inside Claude Code. When a check needs a human call, it asks.
## How it works

1. Configure your org context and register the projects you want to audit.
2. AI agents verify items in parallel: cloning repos, checking configs, and calling APIs.
3. You review the 5-10% of items that need human judgment. Everything else is handled automatically.
## Prerequisites

- ✓ Claude Code, the CLI that runs the audit skills (`npm install -g @anthropic-ai/claude-code`)
- ✓ GitHub CLI, used to check branch protections, CI status, and repo settings (`gh auth login`)
- ✓ Git, since repos are cloned and analyzed during the audit
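Before starting, you can confirm the required CLIs are on your `PATH`. A minimal preflight sketch (the tool names match the prerequisites above; the helper function is our own, not part of the audit tooling):

```shell
#!/bin/sh
# Preflight: report whether each named CLI is installed.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool"
    else
      echo "missing: $tool"
    fi
  done
}

# Check the audit prerequisites.
check_tools claude gh git
```

If anything prints `missing:`, install it from the list above before continuing.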
## Step-by-step
### 1. Create your audit workspace

The checklist lives as a git submodule inside your own workspace. Your org config, project definitions, and audit results are kept separate from the checklist itself.

```
mkdir my-company-audits && cd my-company-audits
git init
git submodule add https://github.com/rodricCTO/ultimate-cto-checklist checklist
```

### 2. Initialize your org
Run `/audit-init` in Claude Code. It asks about your cloud providers, tools, and infrastructure, then generates an `org.yaml` and supporting docs. Takes about 5 minutes.

```
cd my-company-audits
claude   # launch Claude Code
> /audit-init
```

This creates `org.yaml`, a `STATUS.md` dashboard, and a `docs/` directory with your org context.
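For illustration only, an `org.yaml` might capture context like this. The field names below are assumptions, not the documented schema; the real file is generated for you by `/audit-init`:

```yaml
# Hypothetical org.yaml sketch; the generated schema may differ.
org:
  name: my-company
  cloud: aws
  ci: github-actions
  tools:
    - terraform
    - datadog
```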
### 3. Add a project

Register each codebase you want to audit. The system auto-detects the tech stack from your repo.

```
> /audit-add-project
```

Creates `projects/your-project.yaml` with repo URL, stack, environments, and scope.
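As a sketch of what a project definition might contain (field names are hypothetical; the actual fields come from `/audit-add-project`):

```yaml
# Hypothetical projects/my-api.yaml sketch; actual format may differ.
name: my-api
repo: https://github.com/my-company/my-api
stack: [typescript, postgres]
environments: [staging, production]
scope: full
```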
### 4. Run the audit

Start the audit and pick a flow. The system clones your repo, spins up parallel agents, and auto-checks every item it can. You'll only be asked about items that need judgment.

```
> /audit-start my-api
```

- **Sequential**: sections 1 through 42 in order. Best for your first audit.
- **Priority**: all critical items first, then recommended. Good for quick wins.
- **Section**: pick one section at a time. Great for focused work.
- **Free-form**: jump around freely. The system tracks what's done.
### 5. Review results

Generate a report, track improvements over time, and handle exceptions.

```
> /audit-summary           # full report with scores and action items
> /audit-diff              # compare against previous audit
> /audit-waiver ITEM-001   # exempt items that don't apply
```

## All audit commands
These are Claude Code slash commands. Type them at the Claude Code prompt after launching `claude` in your audit workspace.
### Setup

| Command | Description |
| --- | --- |
| `/audit-tutorial` | Interactive first-time walkthrough. Detects your setup state and explains concepts before you start. |
| `/audit-init` | One-time org setup. Asks about cloud providers, tooling, and infrastructure, then generates `org.yaml` and supporting docs. |
| `/audit-add-project` | Register a project. Creates `projects/name.yaml` with repo URL, tech stack, environments, and scope. |
### Running audits

| Command | Description |
| --- | --- |
| `/audit-start` | Begin a new audit. Pick a flow (sequential, priority, section, or free-form), then auto-check runs in parallel. |
| `/audit-continue` | Resume an interrupted audit. Recovers state from `.audit-state.yaml` and picks up where you left off. |
| `/audit-status` | Check progress at any time: items completed, pass rate, blockers, and what's remaining. |
| `/audit-section` | Focus on a specific section by number. Auto-checks all items in that section in parallel. |
| `/audit-item` | Jump to a specific item by ID (e.g. `GIT-001`). Re-audit a single item standalone or within an active audit. |
### Results & reporting

| Command | Description |
| --- | --- |
| `/audit-summary` | Generate a full report: overall score, section breakdown, action items, and regressions from the previous audit. |
| `/audit-diff` | Compare two audits side by side. Highlights improvements, regressions, and items still failing. |
| `/audit-history` | View all past audits for a project with dates, pass rates, and trends over time. |
### Managing findings

| Command | Description |
| --- | --- |
| `/audit-fix` | Work through failed and partial items interactively. Gather better evidence, resolve findings, or create waivers. |
| `/audit-skip` | Skip an item with documented reasoning. Marked for later revisit, not permanently excluded. |
| `/audit-waiver` | Permanently exempt an item that doesn't apply. Stored in `waivers/` with a review date and excluded from future audits. |
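Waivers live as files in `waivers/`. As an illustrative sketch only (the field names are assumptions, not the documented schema), a waiver entry might record the item, the reasoning, and the review date:

```yaml
# Hypothetical waivers/ITEM-001.yaml sketch; actual format may differ.
item: ITEM-001
reason: "No mobile app; mobile release checks do not apply."
created: 2025-01-15
review_date: 2026-01-15
```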
## Severity levels

- **Critical**: Non-negotiable items that affect security, data integrity, or production stability. These should be fixed before shipping.
- **Recommended**: Best practices that improve reliability, developer experience, and operational maturity. Address these as you grow.
## How items are scored

Each item gets one of these statuses after verification:

- **Pass**: verified as compliant.
- **Fail**: verified as non-compliant; surfaces as an action item.
- **Partial**: some evidence found, but not enough to fully pass.
- **Skipped**: set aside with documented reasoning, marked for later revisit.
- **Waived**: permanently exempted; excluded from future audits.
## What to expect

- Setup: ~5 minutes for org init and project registration
- Auto-check: ~5-10 minutes while AI agents verify items in parallel
- Your review: ~30-60 minutes for the items that need judgment
- Repeat: subsequent audits are faster, as waivers and context carry over