ARCH-001 (Recommended): SOLID Principles Audited Regularly
Code follows SOLID principles and is regularly audited for compliance, preferably by AI agents
Question to ask
"When did you last audit for god classes?"
What to check
- ☐ AI code review integration (CLAUDE.md, GitHub Actions)
- ☐ Architecture documentation (ADRs, coding standards)
- ☐ Proxy metrics (file sizes, import counts, god classes)
Pass criteria
- ✓ AI agent is part of code review process
- ✓ Architecture principles documented
- ✓ No major SOLID violations in critical paths
- ✓ Evidence of recent architecture review
Verification guide
Severity: Recommended
Code should follow SOLID principles (Single Responsibility, Open-Closed, Liskov Substitution, Interface Segregation, Dependency Inversion) and be regularly audited for compliance. AI agents are well-suited to this assessment.
Check automatically:
- Check for AI code review integration:
```shell
# Check for Claude/AI agent configuration
ls -la CLAUDE.md AGENTS.md .claude* 2>/dev/null

# Check for AI reviewer in GitHub Actions
grep -rE "claude|anthropic|openai|ai-review|code-review" .github/workflows/ 2>/dev/null

# Check for architecture guidelines in agent config
grep -iE "solid|architecture|single responsibility|dependency injection" CLAUDE.md AGENTS.md 2>/dev/null
```
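These checks can be bundled into a single gate script. A minimal sketch follows; the file names (`CLAUDE.md`, `AGENTS.md`, `.claude/settings.json`) and the workflow keyword list are assumptions to adjust for your setup:

```shell
# Sketch of a repository gate: report whether any AI code-review
# integration is configured. File names below are assumptions.
# In CI you might `exit 1` on a WARN instead of just reporting.
found=0
for f in CLAUDE.md AGENTS.md .claude/settings.json; do
  if [ -e "$f" ]; then
    found=1
    echo "AI agent config found: $f"
  fi
done
# -q: only the exit status matters here
if grep -rqiE "claude|anthropic|openai|ai-review" .github/workflows/ 2>/dev/null; then
  found=1
  echo "AI reviewer referenced in GitHub Actions"
fi
[ "$found" -eq 1 ] && echo "PASS: AI review integration present" \
  || echo "WARN: no AI code-review integration detected"
```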
- Check for architecture documentation:
```shell
# Look for architecture decision records (ADRs).
# Note: without \( \) grouping, -type d would bind only to the first -name.
find . -type d \( -iname "adr*" -o -iname "decisions" \) 2>/dev/null
find . -type f -name "*.md" -print0 2>/dev/null | xargs -0 grep -liE "architecture|solid|design principle" 2>/dev/null | head -10

# Check for documented coding standards
ls -la CONTRIBUTING.md docs/architecture* docs/coding-standards* 2>/dev/null
```
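To capture evidence of a recent architecture review (one of the pass criteria), git history can date the last ADR change. A sketch, assuming a `docs/adr` directory (the path is an assumption):

```shell
# Hypothetical recency check: when was the ADR directory last touched?
adr_dir=docs/adr
if [ -d "$adr_dir" ] && git rev-parse --git-dir >/dev/null 2>&1; then
  # %cs = committer date, YYYY-MM-DD (git >= 2.21)
  last=$(git log -1 --format=%cs -- "$adr_dir")
  echo "Last ADR change: ${last:-unknown}"
else
  echo "No $adr_dir directory (or not a git repo); check other locations"
fi
```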
- Proxy metrics (heuristics, not definitive):
```shell
# Very large files may indicate SRP violations (>500 lines)
find src -type f \( -name "*.ts" -o -name "*.js" -o -name "*.py" \) 2>/dev/null | xargs wc -l 2>/dev/null | sort -n | tail -20

# Files with many imports may have coupling issues (>20 imports).
# Note: grep -c prints 0 even when it exits nonzero, so no "|| echo 0" fallback —
# that would emit a second 0 and break the numeric comparison.
for f in $(find src -name "*.ts" 2>/dev/null | head -20); do
  count=$(grep -c "^import" "$f" 2>/dev/null)
  if [ "${count:-0}" -gt 20 ]; then echo "$f: $count imports"; fi
done

# God classes - files with many exported functions/classes
grep -rE "^export (function|class|const)" src --include="*.ts" 2>/dev/null | cut -d: -f1 | sort | uniq -c | sort -n | tail -10
```
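As one example, the large-file heuristic above can be wrapped into a script that counts flagged files, giving a single number to track between audits. A sketch; the 500-line threshold and the `src` default are arbitrary assumptions:

```shell
# Sketch of a proxy-metric scan: count files over an arbitrary
# size threshold. A heuristic, not a definitive SOLID judgment.
SRC=${SRC:-src}
flagged=0
while IFS= read -r f; do
  [ -f "$f" ] || continue   # skip blank lines when find output is empty
  lines=$(wc -l < "$f")
  if [ "$lines" -gt 500 ]; then
    echo "LARGE  $f ($lines lines)"
    flagged=$((flagged + 1))
  fi
done <<EOF
$(find "$SRC" -type f \( -name "*.ts" -o -name "*.js" -o -name "*.py" \) 2>/dev/null)
EOF
echo "Flagged files: $flagged"
```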
AI agent assessment (the core verification):
- AI agent should review codebase for SOLID violations during PR review
- Periodic architecture audits (monthly/quarterly) by AI agent
- Agent checks for: god classes, interface bloat, concrete dependencies, inheritance misuse
- Agent flags issues with specific recommendations
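A periodic audit needs a concrete prompt for the agent. The following is a hypothetical example; the file name and wording are illustrative, not a required format:

```shell
# Write a sample architecture-audit prompt for an AI agent.
# File name and wording are illustrative assumptions.
cat > audit-prompt.md <<'EOF'
Review this codebase for SOLID violations. For each finding, report:
- file and symbol
- principle violated (SRP, OCP, LSP, ISP, DIP)
- why it is a violation
- a concrete, minimal refactoring recommendation
Prioritize god classes, interface bloat, concrete dependencies in
business logic, and inheritance used where composition fits better.
EOF
echo "Wrote audit-prompt.md"
```

Running this monthly or quarterly (e.g. from a scheduled CI job) and filing the agent's findings satisfies the "evidence of recent architecture review" criterion.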
Ask user:
- "Is AI code review part of your PR process?"
- "How are architecture decisions reviewed?"
- "When was the last architecture audit?"
Cross-reference with:
- ARCH-002 (code complexity - related health metric)
- TEST-005 (CRAP score)
- FLOW-002 (AI + human review in PR process)
Pass criteria:
- AI agent is part of code review process and checks for architecture issues
- Architecture principles documented (in CLAUDE.md, ADRs, or similar)
- No major SOLID violations in critical paths (as assessed by AI review)
- Evidence of recent architecture review
Fail criteria:
- No AI-assisted code review
- No documented architecture standards
- Known god classes or tightly coupled code ignored
- "We don't do architecture reviews"
Evidence to capture:
- How SOLID compliance is assessed (AI agent, manual review, both)
- Location of architecture guidelines
- Recent examples of AI catching architecture issues
- Date of last architecture audit