AI-Assisted Development - Phase 2: Prompt Engineering for Production Code
Transform your documentation into phase-based prompts that make AI agents build features correctly the first time.
Part 2 of 3 in the AI-Assisted Development Workflow series.
The Missing Bridge
You've created comprehensive documentation. You have AI coding tools. But there's a critical gap between them—the translation layer that transforms static documentation into dynamic AI instructions.
This phase builds that bridge. You'll transform your documentation into prompt engineering infrastructure that consistently guides AI toward production-quality code that fits your system.
Without structured prompts, AI assistance provides incremental gains—perhaps 20% faster development with significant rework overhead. With systematic prompt engineering, teams typically see 200-300% velocity improvements with minimal refactoring. This isn't about marginal productivity gains; it's about fundamental workflow transformation.
The Context Problem
AI coding assistants face the same challenge as any new developer joining your team: they need to understand your system's architecture, established patterns, and domain-specific constraints. The critical difference is that AI can't ask clarifying questions during standup, can't observe patterns through code review, and can't gradually absorb implicit knowledge through team osmosis.
The three dimensions of context AI requires:
Architectural context encompasses how your system is structured, which patterns govern different layers, and why specific architectural decisions were made. Without this, AI generates technically correct code that violates your architectural boundaries.
Historical context covers how you've solved similar problems previously, what patterns you've established across features, and what conventions your team has adopted. This prevents AI from reinventing solutions that already exist in your codebase.
Relational context defines where each feature fits in the broader system, what it depends on, and what will depend on it in the future. This ensures AI generates code that integrates cleanly rather than creating isolated islands.
Prompt engineering is the discipline of packaging these three context dimensions into formats AI agents can consume and apply consistently.
Organizing Work into Development Phases
The most effective prompt engineering strategy mirrors actual software development workflows: organizing work into progressive phases where each phase builds on previous foundations.
Why phase-based organization works:
- Progressive complexity - Each phase assumes completion of previous phases, allowing AI to reference established patterns and infrastructure
- Natural checkpoints - Phase boundaries provide clear points for validation, testing, and architectural review
- Context accumulation - Later phases benefit from all context and patterns established in earlier phases
- Scope management - Phases create natural boundaries that prevent scope creep and maintain focus
Phase characteristics:
- Single prompt file per phase - Keeps all related features together, preventing context fragmentation
- Clear goal statement - Defines what success looks like for the entire phase, not just individual features
- Explicit dependencies - Documents what must exist from previous phases before this phase can begin
- Manageable timeline - Typically 1-3 weeks of work; longer loses coherence, shorter creates artificial divisions
Keeping all features for a phase in a single prompt file prevents context fragmentation. When AI sees what came before and what comes next, it maintains consistency naturally. Splitting features across multiple files forces you to constantly re-establish context.
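As a concrete illustration, the top of a phase prompt file might look like this (the goals and feature names are hypothetical placeholders, not a prescribed set):

```markdown
# Phase 1: MVP

Goal: Users can register, log in, and manage their profile end to end.

Depends on: Phase 0 complete (project scaffolding, database migrations, CI pipeline).

Features in this phase:
1.1 User registration
1.2 Authentication (JWT)
1.3 Profile management

Out of scope: password reset, OAuth providers (deferred to Phase 2).
```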
Creating Your Context Repository
Your documentation provides the foundation, but AI agents need a persistent workspace where they can access this context reliably across multiple conversations and development sessions.
Repository structure for prompt engineering:
```
project-root/
├── docs/
│   ├── prompts/
│   │   ├── phase-0-setup.md
│   │   ├── phase-1-mvp.md
│   │   ├── phase-2-advanced.md
│   │   └── phase-3-polish.md
│   ├── architecture/
│   │   └── [your Phase 1 docs]
│   └── project-instructions.md
├── .cursorrules (or equivalent)
└── src/
```
Essential components:
Project instructions file - Provides permanent context that applies to every AI interaction:
- Your technology stack and architectural patterns
- Code style conventions and naming standards
- Testing requirements and quality expectations
- References to detailed documentation
- Current development phase and focus
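A minimal project-instructions.md covering these points might look like the following (the stack and conventions shown are placeholders for your own):

```markdown
# Project Instructions

Stack: TypeScript, Express, PostgreSQL, React.
Architecture: layered (repository, service, API, UI); details in docs/architecture/.
Conventions: camelCase functions, PascalCase components, one component per file.
Testing: unit tests required for all business logic; 80%+ coverage.
Current focus: Phase 1 (MVP); see docs/prompts/phase-1-mvp.md.
```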
Phase prompt files - Contain complete specifications for all features in each phase:
- Phase goals and success criteria
- Feature definitions with acceptance criteria
- Implementation guidance organized by layers
- Known limitations and future considerations
- Testing requirements for validation
IDE configuration files - Establish permanent rules for AI behavior in your development environment (e.g., .cursorrules, a configuration file in the Cursor IDE that establishes project-specific patterns, conventions, and constraints for AI code generation and acts as permanent context for every AI interaction):
- Project-specific patterns and conventions
- Common gotchas and anti-patterns to avoid
- Security requirements and validation rules
- File organization and naming standards
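A sketch of what such a rules file might contain (the entries are illustrative, not a recommended set):

```
# .cursorrules (illustrative excerpt)
- Follow the repository pattern for all data access; never query the database from route handlers.
- Validate all user input at the API boundary before it reaches business logic.
- Never hardcode secrets or configuration values; read them from environment variables.
- Place new components under src/components/<Feature>/ and export them via index.ts.
```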
This structure creates a permanent knowledge base that AI agents reference throughout development, ensuring consistency across features and development sessions.
From Roadmap to Executable Prompts
Your release roadmap from Phase 1 documentation defines what you're building. Prompt engineering transforms that into how AI should build it.
The transformation process involves five steps:
Phase extraction - Group related features into logical development phases based on dependencies, complexity, and architectural layers. MVP features form Phase 1, advanced capabilities become Phase 2, optimization and polish constitute Phase 3.
Feature breakdown - Decompose each phase into discrete features, each with clear boundaries and well-defined interfaces with other features. Features should be independently implementable and testable.
Layer definition - For each feature, identify the architectural layers that need implementation. Most features span multiple layers: data persistence, business logic, API contracts, and user interface.
Acceptance criteria specification - Define testable conditions that unambiguously determine when a feature is complete. Good criteria are specific, measurable, achievable, relevant, and testable.
Implementation guidance - Provide layer-by-layer implementation instructions that reference your architectural patterns, existing code examples, and quality expectations.
Feature template structure:
```
Feature X.Y: [Feature Name]

Goal: What this accomplishes and why it matters

Architecture: How this fits your system design
- Reference relevant architecture documentation sections
- Explain integration points with existing features
- Identify architectural patterns to apply

Implementation Layers:
- Data layer - Schema, repositories, data access patterns
- Logic layer - Business rules, workflows, validations
- API layer - Endpoints, contracts, error handling
- Client layer - API consumption, state management
- UI layer - Components, interactions, user feedback

Acceptance Criteria:
- Specific, testable condition 1
- Specific, testable condition 2
- Test coverage meets quality standards

Testing Requirements:
- Unit tests for business logic
- Integration tests for workflows
- End-to-end tests for user journeys

Known Limitations: What you're explicitly NOT implementing and why
```
This structure provides AI with complete context for implementation while maintaining focus on one feature at a time.
The Special Case of Infrastructure Setup
Project initialization presents unique challenges for AI-assisted development. Unlike feature implementation where AI follows established patterns, infrastructure setup creates those patterns from scratch.
Why infrastructure is different:
AI agents excel at pattern following but struggle with pattern creation. Infrastructure decisions have cascading implications—choosing one state management approach over another affects every subsequent feature. Getting these foundational decisions wrong creates technical debt that compounds throughout the project.
Recommended infrastructure workflow:
Step 1: Get guidance, not generation - Ask AI for step-by-step commands and explanations. Don't request generated configuration files yet. AI provides command-line instructions; you execute them manually and verify results.
Step 2: Manual execution and verification - Run each setup command yourself. Check output for errors. Verify folder structure matches expectations. Validate that dependencies installed correctly.
Step 3: Structural confirmation - Show AI your actual results. Paste your folder tree. List installed packages. Ask if anything is missing or misconfigured.
Step 4: Configuration generation - Only after manual verification and confirmation, ask AI to generate configuration files. Now it has concrete context about your actual setup rather than theoretical assumptions.
Step 5: Pattern documentation - Document the patterns you've established during setup. These become the foundation that guides all subsequent feature development.
Manual infrastructure setup prevents architectural inconsistencies that compound through your entire project. The few hours spent on careful initialization save weeks of refactoring later.
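For Step 1, the difference is entirely in how you phrase the request. A hedged example prompt (the tooling named is a placeholder for your own stack):

```markdown
Don't generate any config files yet. Give me the step-by-step shell commands
to initialize this project (package manager, TypeScript, linting, test runner),
with a one-line explanation of what each command does and what output I should
expect. I will run them myself and report back the resulting folder structure.
```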
Implementing Features Layer by Layer
When AI implements an entire feature simultaneously, it juggles multiple concerns across different architectural layers. The code works but lacks cohesion. Layer-by-layer implementation produces more maintainable results.
Benefits of layered implementation:
- Focused context - AI concentrates on one architectural layer's concerns at a time, producing more coherent code
- Natural validation - Each layer can be tested before moving to the next, catching issues early
- Context accumulation - Each completed layer provides concrete context for the next layer
- Pattern consistency - AI follows established patterns within each layer rather than inventing hybrid approaches
The layer progression:
Data schema first - Define entities, relationships, constraints, and indexes. Everything else depends on how you structure persistent data.
Data access second - Establish repository or data access patterns. Create the boundary between storage mechanisms and business logic.
Business logic third - Implement domain rules, workflows, and validations. This layer contains your system's unique intelligence.
API contracts fourth - Expose business logic through well-defined interfaces. Handle routing, validation, transformation, and error formatting.
Client integration fifth - Consume APIs with type-safe methods that handle errors and transformations appropriately.
State management sixth - Define how application state flows through your UI layer, ensuring consistency and predictability.
UI components seventh - Implement user-facing interfaces using your design system and established component patterns.
Testing throughout - Validate each layer immediately after implementation, preventing cascading failures.
Context accumulation in action:
- Data access layer sees actual schema → uses correct column names and types
- Business logic sees data access interface → delegates to actual repository methods
- API layer sees business service contracts → calls actual service methods
- UI components see API client → invoke actual endpoints with proper types
This progressive refinement produces better integrated code than generating everything simultaneously.
Prompt Engineering Best Practices
Effective prompts share common characteristics that maximize AI comprehension and output quality.
Specificity eliminates ambiguity:
- ❌ "Create a user authentication system"
- ✅ "Implement JWT-based authentication with email/password login, following the specification in phase-1-mvp.md section 1.1, using patterns established in existing AuthService class"
Context prevents invention:
- ❌ Asking AI to implement features without referencing existing patterns
- ✅ Attaching 2-3 relevant files showing established patterns AI should follow
Layered instructions maintain focus:
- ❌ "Build the entire user management feature"
- ✅ "Implement user data schema as specified, including validation constraints and indexes for common queries"
Explicit acceptance criteria define success:
- ❌ "Make sure authentication works properly"
- ✅ Specific testable conditions: "User can register with valid email/password, receives JWT token with 24-hour expiration, invalid credentials return 401 with error message"
Progressive complexity builds naturally:
- ❌ Jumping to advanced features before core functionality exists
- ✅ Following phase sequence where each feature assumes previous features are complete
Quality expectations prevent technical debt:
Always include:
- Input validation for all user-provided data
- Error handling for all failure scenarios
- Unit tests with 80%+ coverage
- Integration tests for critical paths
- Security checks for authentication/authorization
Anti-pattern awareness guides away from common mistakes:
Never:
- Hardcode configuration values
- Skip input validation
- Ignore error handling
- Generate code without tests
- Create functions longer than 50 lines
These patterns, when established in your project instructions and phase prompts, create consistent expectations that AI can reliably meet.
The Implementation Workflow
When you're ready to implement features using your prepared prompts, follow this systematic approach that maximizes context effectiveness.
Context loading strategy:
Step-by-step process:
1. Start with permanent context - Your IDE rules file and project instructions establish baseline understanding that applies to every interaction.
2. Load phase-specific context - Reference the current phase prompt file containing all features you're implementing in this development cycle.
3. Attach relevant documentation - Include 2-3 architecture documentation sections directly related to the current feature. More context isn't better—relevant context is better.
4. Show pattern examples - Attach 1-2 existing files that demonstrate patterns AI should follow. These examples are more powerful than written descriptions.
5. Provide specific instruction - Give clear, focused direction about the current task, the specific layer you're implementing, and the acceptance criteria that define success.
The principle of relevant attachment:
Attach 3-5 files maximum per AI interaction. Beyond this number, context becomes noise rather than signal. Focus on:
- The current phase prompt (always relevant)
- Architecture sections for the current feature area (not the entire architecture doc)
- One or two existing files showing the exact patterns you want AI to follow
- Test files showing your testing standards and approaches
What not to attach:
- Entire documentation folders (too broad, dilutes focus)
- Unrelated features from other phases (introduces irrelevant patterns)
- Infrastructure details when implementing UI features (wrong abstraction level)
- Multiple examples of the same pattern (creates confusion about which to follow)
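Putting these steps together, a single well-scoped request might read as follows (the file names and acceptance criteria are hypothetical):

```markdown
Attached: phase-1-mvp.md, architecture/auth.md, src/services/AuthService.ts,
src/services/__tests__/AuthService.test.ts

Implement the data layer for Feature 1.2 (Authentication) as specified in
phase-1-mvp.md section 1.2. Follow the repository pattern used in
AuthService.ts and the test structure in AuthService.test.ts. Acceptance
criteria: sessions table with an expiry index; repository methods create,
findByToken, and revoke; unit tests for each method.
```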
Validation and Quality Control
AI-generated code requires systematic validation before integration. This isn't about mistrust—it's about maintaining quality standards and catching issues early.
Validation checklist:
Specification compliance
- All acceptance criteria implemented
- Error cases handled appropriately
- Edge cases considered and addressed
- Integration points implemented correctly

Pattern consistency
- Code structure matches established patterns
- Naming conventions followed throughout
- Error handling approach aligns with existing code
- Testing approach matches project standards

Architectural alignment
- Component responsibilities respected
- Layer boundaries maintained
- Dependencies flow in correct direction
- No circular dependencies introduced

Security posture
- All inputs validated
- Authentication/authorization checked where required
- Sensitive data handled appropriately
- No security vulnerabilities introduced

Performance characteristics
- No obvious performance anti-patterns
- Database queries optimized appropriately
- Resource cleanup handled correctly
- Caching used where beneficial

Test coverage
- Unit tests cover business logic
- Integration tests validate workflows
- Edge cases have test coverage
- Tests are maintainable and clear
When validation fails:
Provide specific, actionable feedback rather than vague criticism. AI responds well to concrete corrections:
- ❌ "The error handling isn't right"
- ✅ "Add validation for empty email, malformed email format, and password length < 8 characters. Return 400 status with descriptive error messages for each case."
Iterative refinement is expected and normal. First attempts rarely achieve perfection. The goal is steady improvement toward acceptance criteria, not flawless initial output.
Measuring Prompt Engineering Effectiveness
Track these metrics to assess whether your prompt engineering approach is working: first-try success rate (features accepted without revision), revision cycles per feature, pattern consistency across generated code, test coverage of AI-generated code, and integration issue frequency.
Using metrics for improvement:
- Low first-try success? Your prompts likely lack sufficient context. Add more architectural details and pattern examples.
- High revision cycles? Instructions are probably too vague. Increase specificity in acceptance criteria and implementation guidance.
- Inconsistent patterns? Your rules file needs strengthening. Document anti-patterns to avoid and provide more pattern examples.
- Poor test coverage? Testing requirements aren't explicit enough. Specify exactly what test scenarios each feature requires.
- Frequent integration issues? Dependencies aren't well-documented. Strengthen relational context in your prompts.
These metrics provide objective feedback about prompt quality, allowing continuous improvement of your engineering approach.
Common Pitfalls and How to Avoid Them
Vague instructions that leave too much to AI interpretation:
Teams often provide high-level goals without sufficient detail. AI fills gaps with assumptions that rarely align with your needs. Solution: Be specific about expected behavior, error handling, validation rules, and integration points.
Context overload that dilutes relevant information:
Attaching every documentation file seems helpful but creates noise. AI struggles to identify what's relevant when buried in irrelevant details. Solution: Curate context carefully—attach only what's directly applicable to the current task.
Missing roadmap visibility that creates disconnected features:
Implementing features in isolation without understanding how they relate creates integration challenges. AI can't anticipate future needs without roadmap context. Solution: Keep entire phase prompts in single files so AI sees what came before and what comes next.
Insufficient pattern examples leading to reinvention:
Describing patterns in prose is less effective than showing concrete examples. AI learns better from code than from written descriptions. Solution: Always attach 1-2 files demonstrating the exact patterns you want AI to follow.
Absent quality expectations that allow technical debt:
Without explicit quality standards, AI optimizes for functionality alone. Security, performance, and maintainability require explicit requirements. Solution: Document quality expectations in project rules and reference them in every feature prompt.
Over-reliance on AI without human validation:
Accepting AI output without review leads to accumulated issues that compound over time. AI assistance should augment human judgment, not replace it. Solution: Treat AI as a highly skilled junior developer who needs code review.
Key Principles for Effective Prompt Engineering
After working through hundreds of features with AI assistance, certain principles consistently separate effective from ineffective approaches:
Vague prompts without context produce generic code that doesn't fit your system. AI makes assumptions, generates inconsistent patterns, and requires multiple revision cycles. Features work in isolation but don't integrate cleanly.
Structured prompts with architectural context, pattern examples, and explicit acceptance criteria produce production-ready code that fits your system. First-try success rates reach 70%+, pattern consistency exceeds 95%, and integration issues stay below 10%.
Context is everything - AI generates dramatically better code when it understands your complete system. Invest in providing that context upfront.
Specificity eliminates ambiguity - Vague prompts produce vague results. Precise prompts with concrete acceptance criteria produce precise implementations.
Organization enables scale - Phase-based structure maintains coherence as projects grow. Without organization, context fragments and quality degrades.
Patterns over descriptions - Showing AI examples of established patterns is more effective than describing those patterns in prose.
Iteration is normal - First attempts rarely achieve perfection. Expect refinement cycles and view them as part of the process.
Validation is non-negotiable - AI assistance accelerates development but doesn't replace human judgment. Every AI-generated artifact requires review.
Documentation drives quality - The quality of your prompts directly correlates with the quality of your documentation. Weak documentation produces weak prompts.
Consistency beats cleverness - Simple, consistent approaches that work reliably are better than sophisticated approaches that work sometimes.
What You've Accomplished
By completing Phase 2, you've established the complete prompt engineering infrastructure needed for systematic AI-assisted development:
Central context repository containing all documentation, accessible consistently across development sessions.
Phase-based prompt organization that maintains feature coherence and provides natural validation checkpoints.
Project configuration establishing permanent context and rules that apply to every AI interaction.
Systematic workflow for loading context, implementing features, and validating results.
Quality framework ensuring AI-generated code meets your architectural and security standards.
Measurement approach allowing continuous improvement of your prompt engineering effectiveness.
This infrastructure transforms Phase 3—actual development execution—from ad-hoc AI assistance into systematic, production-quality development acceleration.
Next: Putting Prompts Into Practice
Phase 3: Development Execution covers the actual implementation workflow:
- Configuring AI development environments with your rules and prompts
- Layer-by-layer implementation strategies that maintain code quality
- Testing integration approaches that catch issues early
- Quality gates that prevent AI-generated technical debt from accumulating
- Release preparation processes that ensure production readiness
The prompt engineering work you've completed provides the foundation. Phase 3 shows you how to execute against that foundation to deliver production-quality features at accelerated velocity.