ERR-007 (Recommended): AI-Driven Error Handling
Auto-create PRs for fixes
AI identifies fixable errors and creates labeled PRs requiring human review
Question to ask
"How many fixable errors have sat in Sentry for months?"
Verification guide
Severity: Recommended
When AI identifies a fixable error, it should be able to propose a fix as a PR, not just report the problem.
Check automatically:
- Check for automated PR creation workflows:
# PR creation in scripts
grep -riE "create.*pr|create.*pull|gh pr create|octokit.*pull" --include="*.ts" --include="*.js" --include="*.py" --include="*.yml" scripts/ jobs/ .github/workflows/ 2>/dev/null | head -10
- Check for Sentry → GitHub integration:
# Error to PR pipeline
grep -riE "sentry.*github|sentry.*pr|error.*fix.*pr|auto.*fix" --include="*.ts" --include="*.js" --include="*.py" . 2>/dev/null | grep -v node_modules | head -10
- Check for AI-assisted fix workflows:
# Claude/AI fix automation
grep -riE "claude.*fix|ai.*pr|automated.*fix|auto.*patch" .github/workflows/*.yml scripts/ CLAUDE.md 2>/dev/null
If not found in code, ask user:
- "When Claude identifies a fixable error, does it create a PR automatically?"
- "Is there a workflow from 'error detected' → 'fix proposed' → 'PR opened'?"
- "What's the human approval step before AI-generated fixes merge?"
Cross-reference with:
- ERR-006 (AI review identifies what to fix)
- ERR-008 (triage determines which errors warrant auto-fix)
- FLOW-002 (AI + human PR review)
Pass criteria:
- Workflow exists: Sentry error → AI analysis → PR created
- PRs are clearly labeled as AI-generated
- Human review required before merge (not fully autonomous)
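The human-review criterion can be spot-checked against branch protection. This is a sketch: `OWNER/REPO` is a placeholder, `gh` must be authenticated, and the protected branch may not be `main` in every repo:

```shell
# Does merging to main require at least one approving review?
# OWNER/REPO is a placeholder; adjust the branch name as needed.
gh api repos/OWNER/REPO/branches/main/protection \
  --jq '.required_pull_request_reviews.required_approving_review_count'
# A value >= 1 means AI-generated PRs cannot merge without human approval.
```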
Fail criteria:
- No automated PR creation from errors
- Fixes are always manual (human writes code after seeing error)
Partial (acceptable):
- AI suggests fix in Slack/issue but doesn't create PR (human opens PR)
- PR creation exists but not connected to Sentry (manual trigger)
Evidence to capture:
- PR creation workflow
- Labeling/tagging for AI-generated PRs
- Human review requirement
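If a labeling convention exists, the evidence above can be pulled directly with `gh`. The `ai-generated` label name is an assumption; substitute whatever label the repo actually uses:

```shell
# Recent AI-labeled PRs, as evidence that the pipeline runs, labels its
# output, and goes through review. "ai-generated" is an assumed label name.
gh pr list --state all --label "ai-generated" \
  --json number,title,reviewDecision --limit 10
```

A non-empty result with `reviewDecision` values like `APPROVED` covers all three evidence items at once.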