Disclaimer
This framework is designed as a cohesive system. Partial adoption may produce unexpected or counterproductive results.
SCOPE OF PRACTICE
The processes and proposals in this document have been only partially validated, through real-world implementation in a single product. The author's team has no authority to make organizational changes or changes to other products, so recommendations in those areas remain theoretical.
MANAGEMENT DECISION REQUIRED
This whitepaper identifies 22 management decision points that require executive-level attention and commitment. If management finds these recommendations difficult to implement yet does not consult with teams on long-term planning, or unilaterally declines these changes while provisioning only AI tool accounts, we recommend not adopting AI workflows at all and maintaining current practices instead.
Management Decisions Overview
The following decisions require management review and planning before AI workflow adoption:
1. Process Changes (4 decisions)
- AI-DLC - Mob Elaboration sessions replacing sequential handoffs
- Continuous Context Cleanup - Five-phase context maintenance cycle
- AI-First Decision Making - Spec-driven decision processes
- Spec Extraction - Extract specs from legacy systems
2. Resource Allocation & Investment (4 decisions)
- C2: Continuous Learning - Allocate 10-20% sprint capacity for learning
- Design System - Unified component library investment
- Agent Knowledge Base - Knowledge base migration investment
- Global Requirement Store - Centralized requirements infrastructure
3. Policy Decisions (3 decisions)
- C1: Context Engineering Competency - Mandatory for all roles
- E4: Rigorous Verification - Review quality standards
- G1: Single Source of Truth - Documentation governance
4. Training & Skill Development (2 decisions)
- C2: Continuous Learning - Learning requirements for all roles
- C1: Context Engineering Competency - Training paths
5. Tool & Infrastructure (6 decisions)
- Design System - shadcn/ui adoption with MCP integration
- Multi-Product Spec Management - Hierarchical specification system
- Agent Knowledge Base - Git-based Markdown knowledge system
- Global Requirement Store - Product-centric requirement organization
- Tech Radar & Roadmaps - Technology governance system
- Ubiquitous Language - Standardized terminology
6. Cultural Changes (3 decisions)
- C3: Best Practice Alignment - Adopt established patterns
- C4: No Reinventing Standards - Defer to authoritative sources
- C5: Coordinated AI Feedback Loop - Share knowledge generously
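To make one of these decisions more concrete: the Git-based Markdown knowledge system (category 5 above) could be organized as an ordinary repository. The layout below is a hypothetical sketch only; the directory names and files are illustrative assumptions, not a structure prescribed by this framework:

```
agent-knowledge-base/
├── glossary/            # Ubiquitous Language terms, one file per domain
│   └── billing.md
├── specs/               # extracted and current specifications
│   ├── product-a/
│   └── product-b/
├── decisions/           # architecture decision records
│   └── 0001-example-decision.md
└── guidance/            # agent guidance files and review standards
```

Keeping this content in Git rather than a wiki means the same review process that governs code (decision 5 in the table below is one way to define it) can also govern the knowledge AI agents consume.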
Minimum Viable Adoption Guide
The 22 Decisions Don't All Need to Be Made on Day 1
These decisions are distributed throughout the transformation journey. This guide helps you understand when to make which decisions. If full adoption is not feasible, consult with your teams first to assess which parts are suitable for your organization's current situation.
Essential Decisions (8 items)
If you can commit to only eight things, these are the essential decisions for launching AI workflows:
| # | Decision | Decision Maker | Timing |
|---|---|---|---|
| 1 | Executive Commitment - Confirm organization's willingness to invest resources and time in transformation | CEO/VP | Day 0 |
| 2 | Select Pilot Team - Designate 1-2 teams for initial pilot | VP Engineering | Day 0 |
| 3 | Allocate Learning Time - Decide sprint percentage for learning (recommended 10-20%) | Department Head | Phase 1 |
| 4 | Establish CLAUDE.md Standard - Define format and maintenance responsibility for project-level AI guidance files | Tech Lead | Phase 1 |
| 5 | Define Review Process - Establish review standards and workflow for AI-generated code | Tech Lead | Phase 1 |
| 6 | Establish Feedback Collection Mechanism - Regularly collect issues and improvement suggestions from pilot teams | Project Lead | Phase 1 |
| 7 | Select Design System Direction - Decide whether to adopt shadcn/ui or other solutions | Frontend Lead | Phase 2 |
| 8 | Build Ubiquitous Language Glossary - Begin organizing cross-team terminology standards | PM + Tech Lead | Phase 2 |
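Decision 4 leaves the CLAUDE.md format to the tech lead. As a minimal sketch of what such a project-level guidance file might contain (the section names and commands below are hypothetical examples, not a prescribed standard):

```markdown
# CLAUDE.md

## Project overview
One-paragraph description of what this service does and who owns it.

## Commands (illustrative - replace with your project's own)
- Build: `npm run build`
- Test: `npm test`
- Lint: `npm run lint`

## Conventions
- Use terms from the team's Ubiquitous Language glossary.
- Reuse existing Design System components; do not create duplicates.

## Maintenance
- Owner: tech lead; reviewed at the end of each sprint.
```

Whatever format is chosen, assigning an explicit owner and review cadence keeps the file from going stale, which connects this decision to the Continuous Context Cleanup process.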
Phased Decision Timeline
| Phase | Timing | Decision Categories | Count |
|---|---|---|---|
| Day 0 | Before adoption decision | Executive commitment, pilot selection | 2 |
| Phase 1 | Q1 Foundation | Training policy, review process, tool standards | 4 |
| Phase 2 | Q2 Pilot | Design system, ubiquitous language | 2 |
| Phase 3 | Q3+ Scale | Organizational structure, role transformation, cultural change | Remaining |
Decision Dependencies
Phase Gate Checklist
Before advancing to the next phase, confirm these questions have answers:
Day 0 → Phase 1:
- [ ] Has executive leadership explicitly committed to supporting the transformation?
- [ ] Has the pilot team been selected, and has it agreed to participate?
- [ ] Is there a dedicated person responsible for driving this initiative?
Phase 1 → Phase 2:
- [ ] Has the team completed foundational training (context engineering, spec writing)?
- [ ] Has the review process been established, and is it operating?
- [ ] Has the CLAUDE.md standard been defined?
Phase 2 → Phase 3:
- [ ] Has the pilot produced concrete results?
- [ ] Has team feedback been collected and analyzed?
Risks of Partial Adoption
The proposals and principles in this whitepaper are interconnected by design. They address different aspects of the same underlying challenge: enabling effective human-AI collaboration in software development. Adopting isolated pieces without their supporting elements can create new problems rather than solving existing ones.
Failure Scenarios
Scenario 1: AI Tools Without Governance
What happens: Team adopts AI coding assistants without establishing Guiding Principles or proper review mechanisms.
Result:
- AI-generated code of inconsistent quality enters production
- Technical debt accumulates faster than before
- Team loses trust in AI tools entirely
Scenario 2: Spec-Driven Development Without Context Engineering
What happens: Organization mandates specification documents without implementing context engineering practices or Ubiquitous Language.
Result:
- Specs become an administrative burden rather than AI-consumable context
- AI agents cannot effectively use poorly-structured specifications
- Teams revert to ad-hoc communication, and specs go stale
Scenario 3: Organizational Flattening Without Process Changes
What happens: Management reduces organizational layers without corresponding Workflow Framework.
Result:
- Middle management is removed with no replacement mechanism, leaving it unclear who makes decisions
- Knowledge silos persist without horizontal coordination mechanisms
- Teams feel unsupported rather than empowered
Scenario 4: Design System Without AI Optimization
What happens: Team builds a Design System following traditional patterns without considering AI discoverability.
Result:
- AI or humans continue creating duplicate components
- Design system becomes another artifact AI cannot find
- Resources spent on design system, but AI-generated code quality unchanged
Warning Signs
| Symptom | Likely Cause |
|---|---|
| "AI is making things worse" | Missing quality controls or review mechanisms |
| "Specs are just overhead" | Missing context engineering or AI-friendly formatting |
| "Nobody follows the process" | Missing organizational alignment or incentives |
| "AI keeps making the same mistakes" | Missing continuous context cleanup or guidance documents |
| "Teams are more siloed than before" | Missing horizontal coordination structures |
Consequences of Ignoring Decisions
If an organization only provisions AI tool accounts without addressing these decisions:
- Context Pollution accelerates - AI generates inconsistent outputs from scattered documentation
- Quality Degradation - AI-generated code without proper review introduces defects
- Technical Debt Acceleration - AI generates code faster than the organization can review/maintain
- Role Confusion - Traditional responsibilities become unclear when AI handles execution
- Team Frustration - Staff struggle without proper training, support, and clear expectations
The AI tools themselves are not the transformation. The organizational changes are.
Conclusion
This framework reflects the author's years of observation about what enables effective human-AI collaboration. The interconnections are not accidental - they reflect real dependencies discovered through practice.
Before dismissing any element as unnecessary, ask:
- What problem does this element solve?
- What other elements depend on it?
- What will fill the gap if we skip it?
Partial adoption is possible, but must be deliberate and informed. Understand what you're choosing not to adopt and prepare alternatives for the gaps you're creating.