AI-DLC: AI-Augmented Development Life Cycle
Traditional SDLC approaches fail to leverage AI capabilities at critical decision points, leading to incomplete requirements, missed edge cases, and knowledge silos. AI-DLC introduces AI as an active participant in collaborative elaboration sessions alongside PM, RD, UX, and Architect.
Key Insight: Requirements quality improves dramatically when diverse perspectives challenge assumptions simultaneously. AI adds a tireless participant that can instantly surface edge cases, check for inconsistencies, and maintain comprehensive documentation.
Problem Statement
Current State: Sequential Knowledge Transfer
Pain Points from Current Approach
| Role | Blind Spot | Consequence |
|---|---|---|
| PM | Technical constraints, API limitations | Unrealistic requirements |
| RD | UX implications, user journey gaps | Poor user experience |
| UX | Technical feasibility, data availability | Designs that can't be built |
| Architect | Business context, priority trade-offs | Over-engineered solutions |
| All | Edge cases, error scenarios | Production bugs |
Real-World Examples
Incomplete Edge Case Coverage:
PM writes "User can upload profile photo." No one asks: What file types? Maximum size? What if the upload fails mid-way? What about users on slow connections? These questions surface only during QA or in production.
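To make the gap concrete, here is the kind of constraint set a Mob Elaboration session would pin down before implementation. This is a minimal sketch in Python; the allowed types, size cap, and resume budget are illustrative assumptions, not decisions taken from the example.

```python
# Illustrative constraints only: these values stand in for decisions the
# original one-line requirement never made explicit.
ALLOWED_TYPES = {"image/jpeg", "image/png", "image/webp"}
MAX_BYTES = 5 * 1024 * 1024      # assumed 5 MB cap
MAX_RESUME_ATTEMPTS = 3          # what "fails mid-way" turns into (resume flow not shown)

def validate_profile_photo(content_type: str, size_bytes: int) -> list[str]:
    """Return spec violations for an upload request; an empty list means accept."""
    problems: list[str] = []
    if content_type not in ALLOWED_TYPES:
        problems.append(f"unsupported type: {content_type}")
    if size_bytes > MAX_BYTES:
        problems.append(f"file too large: {size_bytes} bytes > {MAX_BYTES}")
    return problems

print(validate_profile_photo("image/gif", 8_000_000))
# -> ['unsupported type: image/gif', 'file too large: 8000000 bytes > 5242880']
```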
Technical-Business Misalignment:
Architect designs microservices for "scalability" without knowing this feature serves 100 users total. Over-engineering wastes months.
Late UX Discovery:
UX designs a beautiful multi-step wizard. RD implements it. Testing reveals that mobile users abandon at step 3 due to a complex camera-permissions flow nobody considered.
Proposal: Mob Elaboration with AI
Target State
What is Mob Elaboration?
Mob Elaboration is a time-boxed collaborative session where all stakeholders (PM, RD, UX, Architect, AI) work together to transform a requirement seed into a complete, implementation-ready specification.
Role Responsibilities in Mob Elaboration
| Role | Primary Contribution | AI Augmentation |
|---|---|---|
| PM | Business context, priorities, success criteria | AI challenges assumptions, suggests metrics |
| RD | Technical feasibility, effort estimates, API design | AI generates API drafts, identifies integration points |
| UX | User flows, interaction patterns, accessibility | AI surfaces edge cases, generates error states |
| Architect | System impact, scalability, security | AI checks consistency with existing architecture |
| AI | Documentation, pattern recognition, completeness checking | Real-time synthesis, scenario generation |
Mob Elaboration Session Structure
Pre-Session (Async, 30 min)
## Requirement Seed Template
### Feature Name
[One-line description]
### Business Context
- Why now?
- Who requested?
- What problem does it solve?
### Initial Scope
- Must have:
- Nice to have:
- Out of scope:
### Known Constraints
- Timeline:
- Budget:
- Technical:
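If the seed is also captured as structured data, the AI participant can flag blank fields before the session rather than during it. A minimal sketch, assuming Python tooling; the `RequirementSeed` class and its fields simply mirror the template above and are not an existing artifact.

```python
from dataclasses import dataclass, field, fields

@dataclass
class RequirementSeed:
    """Machine-readable mirror of the Requirement Seed Template (hypothetical)."""
    feature_name: str = ""
    business_context: str = ""
    must_have: list[str] = field(default_factory=list)
    nice_to_have: list[str] = field(default_factory=list)
    out_of_scope: list[str] = field(default_factory=list)
    constraints: dict[str, str] = field(default_factory=dict)  # timeline, budget, technical

    def missing_fields(self) -> list[str]:
        """Names of fields still empty, for the AI to raise before the session."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

seed = RequirementSeed(feature_name="Profile photo upload")
print(seed.missing_fields())  # -> ['business_context', 'must_have', ...]
```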
Session Phases (60-90 minutes total)
Phase 1: Context Alignment (10 min)
PM presents requirement seed. AI summarizes and asks clarifying questions.
## AI Clarification Checklist
- [ ] User persona defined?
- [ ] Success metrics specified?
- [ ] Integration points identified?
- [ ] Data sources confirmed?
- [ ] Error handling discussed?
Phase 2: Multi-Perspective Challenge (30 min)
Each role challenges the requirement from their perspective. AI facilitates and documents.
AI Prompts During Challenge (a prompt-template sketch follows this list):
- "What happens if [X] fails?"
- "How does this interact with existing [Y]?"
- "What's the user's next step after [Z]?"
- "Is this consistent with how we handle [similar feature]?"
Phase 3: Specification Generation (20 min)
AI generates a draft specification from the discussion; the team reviews and refines it. A sketch of how the draft could be assembled follows the structure below.
## AI-Generated Spec Draft Structure
### Feature Overview
[AI synthesizes from discussion]
### User Stories
[AI extracts from UX discussion]
### API Contract
[AI drafts from RD discussion]
### Architecture Notes
[AI captures from Architect input]
### Edge Cases & Error Handling
[AI compiles from all perspectives]
### Test Scenarios
[AI generates from requirements]
### Open Questions
[AI tracks unresolved items]
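One way to produce that draft is to have the AI bucket its notes under the headings above and render them in a single pass. A minimal sketch, assuming the notes have already been grouped per section; `render_spec_draft` is a hypothetical helper, not existing tooling.

```python
# Section names mirror the AI-Generated Spec Draft Structure above.
SPEC_SECTIONS = [
    "Feature Overview",
    "User Stories",
    "API Contract",
    "Architecture Notes",
    "Edge Cases & Error Handling",
    "Test Scenarios",
    "Open Questions",
]

def render_spec_draft(notes: dict[str, list[str]]) -> str:
    """Assemble a Markdown spec draft; unfilled sections stay visible as TODOs."""
    parts: list[str] = []
    for section in SPEC_SECTIONS:
        parts.append(f"### {section}")
        for line in notes.get(section) or ["TODO: not covered in session"]:
            parts.append(f"- {line}")
        parts.append("")
    return "\n".join(parts)

print(render_spec_draft({"Open Questions": ["Who owns image moderation?"]}))
```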
Phase 4: Commitment & Next Steps (10 min)
Team confirms scope, assigns owners, and schedules follow-up if needed.
AI Agent Capabilities Required
During Session
| Capability | Purpose | Example |
|---|---|---|
| Real-time Transcription | Capture all discussion | Meeting notes with speaker attribution |
| Pattern Matching | Surface related features | "This is similar to how we handle X in Y" |
| Consistency Checking | Flag contradictions | "Earlier PM said A, but RD mentioned B" |
| Scenario Generation | Surface edge cases | "What if user does X while Y is happening?" |
| Spec Drafting | Generate documentation | API contracts, user stories, test cases |
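Most of these capabilities are LLM work (transcription, drafting, pattern matching), but scenario generation can also be seeded mechanically before the model elaborates. A minimal sketch of the "what if X while Y" pattern, assuming the session yields lists of user actions and concurrent events; both example lists are illustrative.

```python
from itertools import product

def seed_scenarios(actions: list[str], events: list[str]) -> list[str]:
    """Cross user actions with concurrent events to prompt edge-case discussion."""
    return [f"What if the user {a} while {e}?" for a, e in product(actions, events)]

print(seed_scenarios(
    actions=["uploads a photo", "cancels the upload"],
    events=["the connection drops", "their session expires"],
))
```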
Knowledge Required
Implementation Roadmap
Phase 1: Pilot (Week 1-2)
Goal: Run 2-3 Mob Elaboration sessions with volunteer teams.
Deliverables:
- [ ] Session facilitation guide
- [ ] AI prompt templates for each phase
- [ ] Feedback collection template
- [ ] Session recording and analysis
Phase 2: Tooling (Week 3-4)
Goal: Establish tooling for effective AI participation.
Deliverables:
- [ ] AI agent configuration for Mob Elaboration
- [ ] Real-time transcription integration
- [ ] Spec template generation
- [ ] Session artifact storage
Phase 3: Process Integration (Week 5-6)
Goal: Integrate Mob Elaboration into standard development workflow.
Deliverables:
- [ ] Updated PRD process requiring Mob Elaboration for complex features
- [ ] Scheduling templates and cadence
- [ ] Success metrics dashboard
- [ ] Training materials for all roles
Phase 4: Continuous Improvement (Ongoing)
Goal: Iterate based on outcomes.
Deliverables:
- [ ] Monthly retrospective on session effectiveness
- [ ] AI prompt refinement based on feedback
- [ ] Pattern library from successful sessions
- [ ] Cross-team knowledge sharing
CLAUDE.md Integration
Add to project CLAUDE.md:
## Mob Elaboration
### Session Participation
When participating in Mob Elaboration sessions:
- **Context**: Load relevant CLAUDE.md, existing specs, and API docs
- **Role**: Facilitate discussion, surface gaps, generate documentation
- **Output**: Complete spec draft with scenarios and test cases
### Facilitation Prompts
During elaboration, actively probe:
- Edge cases: "What if [X] fails/times out/returns empty?"
- Consistency: "How does this align with existing [Y] feature?"
- Completeness: "What happens after the user completes [Z]?"
- Data: "Where does [data point] come from?"
### Spec Generation
After elaboration, generate:
1. Feature overview synthesized from PM input
2. User stories from UX discussion
3. API contract draft from RD discussion
4. Architecture notes from Architect input
5. Edge cases compiled from all perspectives
6. Test scenarios covering happy path and errors
### Knowledge Sources
For Mob Elaboration context, load:
- `CLAUDE.md` - Project conventions
- `docs/specs/` - Existing feature specs
- `docs/api/` - API documentation
- Related feature implementations in codebase
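A small helper could bundle those sources into the AI's session context up front. A minimal sketch, assuming the repository layout above; `load_session_context` is a hypothetical helper and covers only the Markdown sources, not related implementations in the codebase.

```python
from pathlib import Path

SOURCES = ["CLAUDE.md", "docs/specs", "docs/api"]  # per the Knowledge Sources list above

def load_session_context(repo_root: str) -> dict[str, str]:
    """Collect Markdown knowledge sources so the AI joins the session with project context."""
    root = Path(repo_root)
    context: dict[str, str] = {}
    for source in SOURCES:
        path = root / source
        if path.is_file():
            context[source] = path.read_text(encoding="utf-8")
        elif path.is_dir():
            for doc in sorted(path.glob("**/*.md")):
                context[str(doc.relative_to(root))] = doc.read_text(encoding="utf-8")
    return context
```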
Success Metrics
| Metric | Before | Target | How to Measure |
|---|---|---|---|
| Requirements completeness | ~60% | >90% | Audit specs for missing edge cases |
| Late-stage scope changes | High | -50% | Track scope changes after dev starts |
| Requirement-related bugs | ~30% of bugs | <10% | Tag bugs by root cause |
| Time to first implementation | Days | Hours | Measure spec-to-code gap |
| Cross-role alignment | Low | High | Post-session survey |
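The bug and scope-change rows assume issues are tagged by root cause once the process is in place. A minimal sketch of how the dashboard figure for requirement-related bugs could be computed from such tags; the field names are illustrative.

```python
def requirement_bug_rate(bugs: list[dict]) -> float:
    """Share of bugs whose root cause was tagged as a requirements gap."""
    if not bugs:
        return 0.0
    related = sum(1 for bug in bugs if bug.get("root_cause") == "requirements")
    return related / len(bugs)

bugs = [
    {"id": 101, "root_cause": "requirements"},
    {"id": 102, "root_cause": "implementation"},
    {"id": 103, "root_cause": "requirements"},
]
print(f"{requirement_bug_rate(bugs):.0%}")  # -> 67%
```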
Frequently Asked Questions
When should we use Mob Elaboration?
Use for:
- New features with multiple stakeholders
- Features touching multiple systems
- High-risk or high-visibility features
- Features with unclear requirements
Skip for:
- Simple bug fixes
- Well-defined small enhancements
- Purely technical refactoring
- Single-owner features
How do we handle remote participants?
- Use video conferencing with screen sharing
- AI transcribes in real-time
- Dedicated note-taker for non-verbal cues
- Async pre-read required for all participants
What if stakeholders disagree during session?
- AI documents both positions
- PM makes final call on business priority
- Architect makes final call on technical approach
- Unresolved items flagged for escalation
- Session continues with documented disagreement
How does this differ from traditional meetings?
| Traditional Meeting | Mob Elaboration |
|---|---|
| Sequential speaking | Collaborative building |
| Notes taken after | Real-time AI documentation |
| Single-perspective output | Multi-perspective synthesis |
| Follow-up required | Complete spec in session |
| Knowledge silos | Shared understanding |
Anti-Patterns to Avoid
1. AI as Decision Maker
Problem: Deferring decisions to AI instead of human judgment.
Solution: AI provides options and surfaces tradeoffs. Humans decide.
2. Session Without Preparation
Problem: Stakeholders arrive without reading requirement seed.
Solution: Required async pre-read. AI quizzes understanding at session start.
3. Missing Perspectives
Problem: Running session without all roles represented.
Solution: Minimum attendance requirements. Reschedule if key roles missing.
4. Spec as Final Word
Problem: Treating generated spec as unchangeable.
Solution: Spec is living document. Regular review and update cycles.
Related Proposals
- Continuous Context Cleanup - Maintaining clean context for AI
- Ubiquitous Language - Shared terminology for effective communication
- Agent-Friendly Knowledge Base - Knowledge access for AI agents
Related Principles
- E1: Design-Centric Work - Design as primary human activity
- C3: AI First - AI-native development
- G3: Centralized Requirements Management - Requirements flow
References
- AI-Driven Development Life Cycle (AWS DevOps Blog) - AWS's approach to AI-augmented SDLC
- AI-DLC Introduction (iThome) - Practical introduction to AI-DLC concepts
- Claude Code Documentation - Official documentation for CLAUDE.md AI guidance files
- Mob Programming - The foundational practice that inspired Mob Elaboration