FLOW-002 (Recommended): PR process
PRs reviewed by AI agent + human
AI review tooling configured alongside human reviewers
Question to ask
"Who caught the last bug before it hit production?"
What to check
- ☐ AI review bot present (CodeRabbit, Copilot, etc.)
- ☐ Human reviewers also participate
- ☐ AI feedback addressed, not ignored
Verification guide
Severity: Recommended
Check automatically:
Check for AI review bot in recent PR comments:
```shell
# List recent merged PRs
gh pr list --state merged --limit 5 --json number

# For each PR, check reviewers and commenters
gh pr view <number> --json reviews,comments --jq '{reviews: [.reviews[].author.login], comments: [.comments[].author.login]}'
```

Look for common AI review bots:

```shell
# Check for bot reviewers (common patterns)
gh pr view <number> --json reviews,comments --jq '.reviews[].author.login, .comments[].author.login' | grep -iE "coderabbit|copilot|sweep|sourcery|codeclimate|sonar"
```

Check for GitHub App installations:

```shell
# List installed apps (requires admin access)
gh api repos/{owner}/{repo}/installations 2>/dev/null
```
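The bot-detection step can be factored into a small helper so the pattern match is reusable across PRs. This is a sketch: the `detect_ai_review` function name is hypothetical, and the bot-name pattern is illustrative rather than exhaustive.

```shell
# Sketch: classify a comma-separated list of PR participant logins.
# detect_ai_review and its bot-name pattern are illustrative assumptions.
detect_ai_review() {
  local authors="$1"
  if printf '%s' "$authors" | grep -qiE 'coderabbit|copilot|sweep|sourcery|codeclimate|sonar'; then
    echo "ai-review-present"
  else
    echo "no-ai-review"
  fi
}

# Example wiring over the last five merged PRs (requires an authenticated `gh`):
#   for pr in $(gh pr list --state merged --limit 5 --json number --jq '.[].number'); do
#     authors=$(gh pr view "$pr" --json reviews,comments \
#       --jq '[.reviews[].author.login, .comments[].author.login] | join(",")')
#     echo "PR #$pr: $(detect_ai_review "$authors")"
#   done
```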
Cross-reference with:
- GIT-006 (PRs require approval) - covers the human approval requirement
- FLOW-001 (PR quality) - AI review complements human review
Pass criteria:
- AI review bot configured and actively commenting on PRs
- Human reviewers also present (not AI-only reviews)
- AI feedback appears to be addressed (not ignored)
Fail criteria:
- No AI review tooling
- AI reviews present but ignored
- Only AI reviews, no human involvement
If no AI bot found, ask user: "No AI review bot detected. Are you using an AI code review tool (CodeRabbit, GitHub Copilot, etc.)? This is recommended but not critical."
Evidence to capture:
- AI review bot name (if found)
- Sample PR showing AI + human review
- Ratio of PRs with AI review participation
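The participation ratio can be reported as a single percentage. A minimal sketch, assuming the counts come from the `gh` queries in the verification guide; `ai_review_ratio` is a hypothetical helper, not part of `gh`.

```shell
# Sketch: percentage of sampled PRs that had AI review participation.
# ai_review_ratio is a hypothetical helper name.
ai_review_ratio() {
  local with_ai="$1" total="$2"
  # Guard against a zero total so the evidence line never divides by zero.
  awk -v a="$with_ai" -v t="$total" 'BEGIN { printf "%.0f%%\n", (t ? 100 * a / t : 0) }'
}

# e.g. if 3 of the last 5 merged PRs had AI bot comments:
#   ai_review_ratio 3 5   # prints "60%"
```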