AI-DLC: AI-Augmented Development Life Cycle

Traditional SDLC approaches fail to leverage AI capabilities at critical decision points, leading to incomplete requirements, missed edge cases, and knowledge silos. AI-DLC introduces AI as an active participant in collaborative elaboration sessions alongside the PM, RD, UX, and Architect roles.

Key Insight: Requirements quality improves dramatically when diverse perspectives challenge assumptions simultaneously. AI adds a tireless participant that can instantly surface edge cases, check for inconsistencies, and maintain comprehensive documentation.

Problem Statement

Current State: Sequential Knowledge Transfer

Pain Points from Current Approach

| Role | Blind Spot | Consequence |
|---|---|---|
| PM | Technical constraints, API limitations | Unrealistic requirements |
| RD | UX implications, user journey gaps | Poor user experience |
| UX | Technical feasibility, data availability | Designs that can't be built |
| Architect | Business context, priority trade-offs | Over-engineered solutions |
| All | Edge cases, error scenarios | Production bugs |

Real-World Examples

Incomplete Edge Case Coverage:

PM writes "User can upload profile photo." No one asks: What file types? What is the maximum size? What happens if the upload fails mid-way? What about users on slow connections? These gaps are only discovered during QA or in production.

Technical-Business Misalignment:

The Architect designs microservices for "scalability" without knowing that the feature serves only about 100 users in total. The over-engineering wastes months.

Late UX Discovery:

UX designs a beautiful multi-step wizard. RD implements it. Testing reveals that mobile users abandon at step 3 because of a complex camera-permissions flow nobody considered.

Proposal: Mob Elaboration with AI

Target State

What is Mob Elaboration?

Mob Elaboration is a time-boxed collaborative session where all stakeholders (PM, RD, UX, Architect, AI) work together to transform a requirement seed into a complete, implementation-ready specification.

Role Responsibilities in Mob Elaboration

| Role | Primary Contribution | AI Augmentation |
|---|---|---|
| PM | Business context, priorities, success criteria | AI challenges assumptions, suggests metrics |
| RD | Technical feasibility, effort estimates, API design | AI generates API drafts, identifies integration points |
| UX | User flows, interaction patterns, accessibility | AI surfaces edge cases, generates error states |
| Architect | System impact, scalability, security | AI checks consistency with existing architecture |
| AI | Documentation, pattern recognition, completeness checking | Real-time synthesis, scenario generation |

Mob Elaboration Session Structure

Pre-Session (Async, 30 min)

```markdown
## Requirement Seed Template

### Feature Name
[One-line description]

### Business Context
- Why now?
- Who requested?
- What problem does it solve?

### Initial Scope
- Must have:
- Nice to have:
- Out of scope:

### Known Constraints
- Timeline:
- Budget:
- Technical:
```

Session Phases (60-90 minutes total)

Phase 1: Context Alignment (10 min)

The PM presents the requirement seed. AI summarizes it and asks clarifying questions.

```markdown
## AI Clarification Checklist
- [ ] User persona defined?
- [ ] Success metrics specified?
- [ ] Integration points identified?
- [ ] Data sources confirmed?
- [ ] Error handling discussed?
```

Phase 2: Multi-Perspective Challenge (30 min)

Each role challenges the requirement from its own perspective. AI facilitates and documents the discussion.

AI Prompts During Challenge:

  • "What happens if [X] fails?"
  • "How does this interact with existing [Y]?"
  • "What's the user's next step after [Z]?"
  • "Is this consistent with how we handle [similar feature]?"

Phase 3: Specification Generation (20 min)

AI generates a draft specification based on the discussion. The team reviews and refines it.

```markdown
## AI-Generated Spec Draft Structure

### Feature Overview
[AI synthesizes from discussion]

### User Stories
[AI extracts from UX discussion]

### API Contract
[AI drafts from RD discussion]

### Architecture Notes
[AI captures from Architect input]

### Edge Cases & Error Handling
[AI compiles from all perspectives]

### Test Scenarios
[AI generates from requirements]

### Open Questions
[AI tracks unresolved items]
```

Phase 4: Commitment & Next Steps (10 min)

The team confirms the scope, assigns owners, and schedules a follow-up session if needed.
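
Phases 1-3 each come with a supporting markdown template; a comparable closing artifact for this phase might look like the sketch below (the headings and fields are illustrative, not a prescribed format):

```markdown
## Commitment Record (illustrative)

### Confirmed Scope
- Must have:
- Deferred / out of scope:

### Owners
- Spec sign-off:
- Implementation:

### Follow-ups
- [ ] Open questions with owners and due dates
- [ ] Follow-up session scheduled (if needed)
```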

AI Agent Capabilities Required

During Session

| Capability | Purpose | Example |
|---|---|---|
| Real-time Transcription | Capture all discussion | Meeting notes with speaker attribution |
| Pattern Matching | Surface related features | "This is similar to how we handle X in Y" |
| Consistency Checking | Flag contradictions | "Earlier PM said A, but RD mentioned B" |
| Scenario Generation | Surface edge cases | "What if user does X while Y is happening?" |
| Spec Drafting | Generate documentation | API contracts, user stories, test cases |
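
To illustrate how these capabilities might be expressed as instructions to the AI participant, a minimal facilitation prompt could look like the sketch below (the wording is a suggestion, not a required configuration):

```markdown
## Mob Elaboration Facilitator Prompt (illustrative)

You are facilitating a Mob Elaboration session with PM, RD, UX, and Architect.
- Capture key points with speaker attribution.
- When a requirement resembles an existing feature, say so: "This is similar to how we handle X in Y."
- Flag contradictions between participants as soon as they appear.
- After each topic, propose at least two edge-case or failure scenarios.
- Maintain a running spec draft using the Phase 3 structure and track open questions.
```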

Knowledge Required

Implementation Roadmap

Phase 1: Pilot (Week 1-2)

Goal: Run 2-3 Mob Elaboration sessions with volunteer teams.

Deliverables:

- [ ] Session facilitation guide
- [ ] AI prompt templates for each phase
- [ ] Feedback collection template
- [ ] Session recording and analysis

Phase 2: Tooling (Week 3-4)

Goal: Establish tooling for effective AI participation.

Deliverables:

- [ ] AI agent configuration for Mob Elaboration
- [ ] Real-time transcription integration
- [ ] Spec template generation
- [ ] Session artifact storage (a possible layout is sketched below)
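
One possible shape for that artifact storage is a small set of files per session; the file names below are assumptions, not an agreed convention:

```markdown
## Session Artifacts: <feature-name> (assumed layout)

- `transcript.md` - full session transcript with speaker attribution
- `spec-draft.md` - AI-generated specification draft from Phase 3
- `open-questions.md` - unresolved items with owners and due dates
- `decisions.md` - documented positions and final calls
```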

Phase 3: Process Integration (Week 5-6)

Goal: Integrate Mob Elaboration into standard development workflow.

Deliverables:

- [ ] Updated PRD process requiring Mob Elaboration for complex features
- [ ] Scheduling templates and cadence
- [ ] Success metrics dashboard
- [ ] Training materials for all roles

Phase 4: Continuous Improvement (Ongoing)

Goal: Iterate based on outcomes.

Deliverables:

- [ ] Monthly retrospective on session effectiveness
- [ ] AI prompt refinement based on feedback
- [ ] Pattern library from successful sessions
- [ ] Cross-team knowledge sharing

CLAUDE.md Integration

Add to project CLAUDE.md:

```markdown
## Mob Elaboration

### Session Participation
When participating in Mob Elaboration sessions:
- **Context**: Load relevant CLAUDE.md, existing specs, and API docs
- **Role**: Facilitate discussion, surface gaps, generate documentation
- **Output**: Complete spec draft with scenarios and test cases

### Facilitation Prompts
During elaboration, actively probe:
- Edge cases: "What if [X] fails/times out/returns empty?"
- Consistency: "How does this align with existing [Y] feature?"
- Completeness: "What happens after the user completes [Z]?"
- Data: "Where does [data point] come from?"

### Spec Generation
After elaboration, generate:
1. Feature overview synthesized from PM input
2. User stories from UX discussion
3. API contract draft from RD discussion
4. Architecture notes from Architect input
5. Edge cases compiled from all perspectives
6. Test scenarios covering happy path and errors

### Knowledge Sources
For Mob Elaboration context, load:
- `CLAUDE.md` - Project conventions
- `docs/specs/` - Existing feature specs
- `docs/api/` - API documentation
- Related feature implementations in codebase
```

Success Metrics

| Metric | Before | Target | How to Measure |
|---|---|---|---|
| Requirements completeness | ~60% | >90% | Audit specs for missing edge cases |
| Late-stage scope changes | High | -50% | Track scope changes after dev starts |
| Requirement-related bugs | ~30% of bugs | <10% | Tag bugs by root cause |
| Time to first implementation | Days | Hours | Measure spec-to-code gap |
| Cross-role alignment | Low | High | Post-session survey |
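
One lightweight way to gather these numbers is a short tracking entry recorded per feature after each session; the fields below are illustrative:

```markdown
## Elaboration Tracking: <feature-name> (illustrative)

- Session date / participants:
- Edge cases identified in session vs. discovered later:
- Scope changes after development started:
- Bugs tagged as "requirement gap":
- Time from spec completion to first implementation:
```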

Frequently Asked Questions

When should we use Mob Elaboration?

Use for:

  • New features with multiple stakeholders
  • Features touching multiple systems
  • High-risk or high-visibility features
  • Features with unclear requirements

Skip for:

  • Simple bug fixes
  • Well-defined small enhancements
  • Purely technical refactoring
  • Single-owner features

How do we handle remote participants?

  • Use video conferencing with screen sharing
  • AI transcribes in real-time
  • Dedicated note-taker for non-verbal cues
  • Async pre-read required for all participants

What if stakeholders disagree during the session?

  1. AI documents both positions
  2. PM makes final call on business priority
  3. Architect makes final call on technical approach
  4. Unresolved items flagged for escalation
  5. Session continues with documented disagreement

How does this differ from traditional meetings?

| Traditional Meeting | Mob Elaboration |
|---|---|
| Sequential speaking | Collaborative building |
| Notes taken after | Real-time AI documentation |
| Single-perspective output | Multi-perspective synthesis |
| Follow-up required | Complete spec in session |
| Knowledge silos | Shared understanding |

Anti-Patterns to Avoid

1. AI as Decision Maker

Problem: Deferring decisions to AI instead of human judgment.

Solution: AI provides options and surfaces tradeoffs. Humans decide.

2. Session Without Preparation

Problem: Stakeholders arrive without reading requirement seed.

Solution: Required async pre-read. AI quizzes understanding at session start.

3. Missing Perspectives

Problem: Running session without all roles represented.

Solution: Set minimum attendance requirements. Reschedule if key roles are missing.

4. Spec as Final Word

Problem: Treating generated spec as unchangeable.

Solution: Treat the spec as a living document with regular review and update cycles.
