# Extract Patterns from Iterations
Analyze generated iterations to extract successful patterns for the pattern library.
## Usage

```
/project:extract-patterns <iterations_dir> <pattern_library_path> [analysis_depth]
```
## Arguments

- `iterations_dir` - Directory containing generated iterations to analyze
- `pattern_library_path` - Path where the pattern library JSON will be saved
- `analysis_depth` - Optional: `"quick"` (top 3 patterns per category) or `"deep"` (top 5 patterns per category, default)
## Examples

```bash
# Extract patterns from the output directory
/project:extract-patterns output pattern_library/patterns.json

# Quick extraction (3 patterns per category)
/project:extract-patterns output pattern_library/patterns.json quick

# Deep analysis (5 patterns per category)
/project:extract-patterns output pattern_library/patterns.json deep
```
## How It Works
This command implements pattern recognition inspired by multi-shot prompting principles:
- Example Collection: Gather all iterations as potential examples
- Quality Scoring: Evaluate each iteration across multiple dimensions
- Pattern Identification: Extract successful approaches and techniques
- Example Selection: Choose 3-5 most exemplary and diverse patterns
- Library Update: Save patterns in structured format for future use
## Implementation Steps
You are the pattern extraction agent. Follow this workflow:
### Step 1: Load and Inventory Iterations
```
# List all files in the iterations directory
Bash: find <iterations_dir> -type f | sort

# Read each iteration file
For each file:
- Read the file
- Store its content
- Note the file path and metadata
```
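A minimal sketch of this inventory step in Python (assuming the iterations are UTF-8 text files, as the HTML examples below suggest):

```python
from pathlib import Path

def load_iterations(iterations_dir: str) -> list[dict]:
    """Inventory and read every file under the iterations directory."""
    iterations = []
    for path in sorted(Path(iterations_dir).rglob("*")):
        if path.is_file():
            iterations.append({
                "path": str(path),
                "content": path.read_text(encoding="utf-8", errors="replace"),
                "size_bytes": path.stat().st_size,
            })
    return iterations
```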
### Step 2: Analyze Structural Patterns
Extract patterns related to file organization and architecture:
```
For each iteration:
  Analyze:
  - File structure and organization
  - Naming conventions used
  - Code/content architecture
  - Module organization (if applicable)
  - Separation of concerns

  Score based on:
  - Clarity and consistency
  - Scalability of approach
  - Adherence to best practices
  - Innovation in structure
```
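One way to turn these scoring dimensions into a comparable number is a weighted average. A sketch, with illustrative dimension names and weights (neither is prescribed by this command):

```python
STRUCTURAL_WEIGHTS = {
    "clarity": 0.30,
    "scalability": 0.25,
    "best_practices": 0.25,
    "innovation": 0.20,
}

def structural_score(dimension_scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores, each on a 0-10 scale."""
    return sum(
        weight * dimension_scores.get(dim, 0.0)
        for dim, weight in STRUCTURAL_WEIGHTS.items()
    )
```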
Identify top 3-5 structural patterns:
```json
{
  "name": "Modular Three-Layer Architecture",
  "description": "Separates data, logic, and presentation into distinct sections",
  "example_file": "output/iteration_7.html",
  "key_characteristics": [
    "Clear section boundaries with comments",
    "Data defined separately from rendering logic",
    "Reusable component structure",
    "Self-documenting organization"
  ],
  "success_metrics": "High readability score (95%), easy to extend, follows separation of concerns",
  "code_snippet": "<!-- Example of clear section separation -->\n<!-- DATA LAYER -->\n...\n<!-- LOGIC LAYER -->\n...\n<!-- PRESENTATION LAYER -->\n..."
}
```
### Step 3: Analyze Content Quality Patterns
Extract patterns related to content excellence:
```
For each iteration:
  Analyze:
  - Documentation quality and completeness
  - Code/content clarity and readability
  - Comment quality and usefulness
  - Error handling approaches
  - User experience considerations

  Score based on:
  - Comprehensiveness of documentation
  - Clarity of explanations
  - Thoughtfulness of implementation
  - Attention to edge cases
```
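Some of these dimensions can be approximated mechanically. As one hypothetical heuristic (not part of the command itself), comment density gives a rough signal of documentation coverage:

```python
def comment_density(content: str) -> float:
    """Fraction of non-empty lines that start with an HTML/JS comment marker."""
    lines = [ln.strip() for ln in content.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    markers = ("<!--", "//", "/*", "*")
    commented = sum(1 for ln in lines if ln.startswith(markers))
    return commented / len(lines)
```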
Identify top 3-5 content quality patterns:
```json
{
  "name": "Progressive Disclosure Documentation",
  "description": "Layers documentation from overview to deep technical details",
  "example_file": "output/iteration_12.html",
  "key_characteristics": [
    "High-level summary at top",
    "Inline comments for complex logic",
    "Detailed API documentation in separate section",
    "Examples embedded with explanations"
  ],
  "success_metrics": "Easy for beginners and experts alike, 100% of functions documented",
  "code_snippet": "/**\n * HIGH-LEVEL: This function renders...\n * \n * TECHNICAL: Uses D3.js force simulation...\n * \n * EXAMPLE: renderGraph(data) -> visual output\n */"
}
```
### Step 4: Analyze Innovation Patterns
Extract creative and novel approaches:
```
For each iteration:
  Analyze:
  - Unique problem-solving approaches
  - Creative implementations
  - Novel feature combinations
  - Innovative UX/DX decisions
  - Unexpected but effective solutions

  Score based on:
  - Originality compared to other iterations
  - Effectiveness of the innovation
  - Replicability in other contexts
  - Impact on quality or functionality
```
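Originality relative to the other iterations can be approximated by pairwise similarity: the lower an iteration's maximum similarity to any peer, the more novel it is. A sketch using token-set Jaccard similarity (an assumed metric, chosen for simplicity):

```python
def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two documents."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if (ta or tb) else 0.0

def novelty_scores(contents: list[str]) -> list[float]:
    """Novelty = 1 - max similarity to any other iteration."""
    return [
        1.0 - max(
            (jaccard(doc, other) for j, other in enumerate(contents) if j != i),
            default=0.0,
        )
        for i, doc in enumerate(contents)
    ]
```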
Identify top 3-5 innovation patterns:
```json
{
  "name": "Self-Validating Data Pipeline",
  "description": "Data includes validation logic that runs automatically",
  "example_file": "output/iteration_15.html",
  "key_characteristics": [
    "Data objects include .validate() method",
    "Automatic validation before rendering",
    "Clear error messages for invalid data",
    "Self-documenting data requirements"
  ],
  "success_metrics": "Zero runtime errors due to data issues, excellent developer experience",
  "code_snippet": "const dataPoint = {\n  value: 42,\n  validate() {\n    if (this.value < 0) throw new Error('...');\n    return true;\n  }\n};"
}
```
### Step 5: Analyze Quality & Testing Patterns
Extract patterns for ensuring quality:
```
For each iteration:
  Analyze:
  - Testing approaches (if present)
  - Validation strategies
  - Error handling patterns
  - Defensive programming techniques
  - Quality assurance methods

  Score based on:
  - Robustness of error handling
  - Thoroughness of validation
  - Testability of implementation
  - Resilience to edge cases
```
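A coarse, automatable proxy for robustness is to count defensive constructs in the source. The patterns below are illustrative only, tuned for the JavaScript/HTML iterations shown in this document:

```python
import re

DEFENSIVE_PATTERNS = [
    r"\btry\b", r"\bcatch\b", r"\bthrow\b",  # explicit error handling
    r"\bif\s*\(\s*!",                        # guard clauses
    r"\?\?|\|\|",                            # fallback defaults
]

def defensiveness_score(content: str) -> int:
    """Count occurrences of common defensive-programming constructs."""
    return sum(len(re.findall(p, content)) for p in DEFENSIVE_PATTERNS)
```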
Identify top 3-5 quality patterns:
```json
{
  "name": "Guard Clause Pattern with Fallbacks",
  "description": "Early validation with graceful degradation for missing data",
  "example_file": "output/iteration_9.html",
  "key_characteristics": [
    "Input validation at function entry",
    "Specific error messages for each validation",
    "Fallback defaults for optional parameters",
    "Never crashes, always renders something"
  ],
  "success_metrics": "100% uptime even with malformed data, excellent error messages",
  "code_snippet": "function render(data) {\n  if (!data) return renderEmpty();\n  if (!Array.isArray(data)) data = [data];\n  if (data.length === 0) return renderNoData();\n  // ... continue with rendering\n}"
}
```
### Step 6: Build Pattern Library JSON
Construct the complete pattern library:
```json
{
  "version": "1.2",
  "last_updated": "2025-10-10T14:30:00Z",
  "total_iterations_analyzed": 15,
  "analysis_depth": "deep",
  "patterns": {
    "structural": [
      { "name": "...", "description": "...", ... },
      { "name": "...", "description": "...", ... },
      { "name": "...", "description": "...", ... }
    ],
    "content": [
      { "name": "...", "description": "...", ... },
      { "name": "...", "description": "...", ... },
      { "name": "...", "description": "...", ... }
    ],
    "innovation": [
      { "name": "...", "description": "...", ... },
      { "name": "...", "description": "...", ... },
      { "name": "...", "description": "...", ... }
    ],
    "quality": [
      { "name": "...", "description": "...", ... },
      { "name": "...", "description": "...", ... },
      { "name": "...", "description": "...", ... }
    ]
  },
  "metadata": {
    "extraction_date": "2025-10-10T14:30:00Z",
    "source_directory": "output/",
    "iterations_count": 15,
    "patterns_extracted": 12,
    "avg_quality_score": 8.4,
    "most_common_theme": "Modular architecture with clear separation"
  }
}
```
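A sketch of assembling this structure in Python, assuming the per-category pattern lists from Steps 2-5 are already built (the remaining metadata fields follow the same pattern):

```python
from datetime import datetime, timezone

def build_library(patterns_by_category: dict[str, list[dict]],
                  source_dir: str, iterations_count: int,
                  version: str = "1.0") -> dict:
    """Assemble a pattern library dict matching the schema above."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "version": version,
        "last_updated": now,
        "total_iterations_analyzed": iterations_count,
        "analysis_depth": "deep",
        "patterns": patterns_by_category,
        "metadata": {
            "extraction_date": now,
            "source_directory": source_dir,
            "iterations_count": iterations_count,
            "patterns_extracted": sum(len(v) for v in patterns_by_category.values()),
        },
    }
```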
### Step 7: Save and Report
```
# Write the pattern library to the JSON file
Write <pattern_library_path> with the JSON content

# Generate an extraction report
Create a summary showing:
- Patterns extracted per category
- Quality score distribution
- Most innovative iteration
- Most structurally sound iteration
- Recommended patterns for the next wave
```
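A minimal sketch of the save step (the report itself would follow the template in the Output Report section below):

```python
import json
from pathlib import Path

def save_library(library: dict, pattern_library_path: str) -> None:
    """Write the pattern library JSON and print a per-category summary."""
    out = Path(pattern_library_path)
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(library, indent=2), encoding="utf-8")
    for category, patterns in library["patterns"].items():
        print(f"{category}: {len(patterns)} pattern(s) extracted")
```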
## Pattern Selection Criteria

When choosing which patterns to include (3-5 per category), apply these criteria; a selection sketch follows the list:
- Diversity: Select patterns that represent different approaches
- Clarity: Choose patterns that are easy to understand and replicate
- Effectiveness: Prioritize patterns with demonstrated success
- Transferability: Pick patterns applicable to various contexts
- Exemplary Quality: Select from top 20% of iterations only
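One way to implement these criteria, assuming each candidate pattern carries a numeric `score` field (an assumption, not part of the schema above): filter to the top 20% by quality, then greedily skip patterns whose names overlap an already-chosen one as a crude diversity check.

```python
def select_patterns(candidates: list[dict], limit: int = 5) -> list[dict]:
    """Pick up to `limit` high-scoring, name-diverse patterns."""
    ranked = sorted(candidates, key=lambda p: p["score"], reverse=True)
    pool = ranked[: max(limit, len(ranked) // 5)]  # at least `limit`, top ~20%
    chosen: list[dict] = []
    for pattern in pool:
        seen = {word for p in chosen for word in p["name"].lower().split()}
        if seen.isdisjoint(pattern["name"].lower().split()):
            chosen.append(pattern)
        if len(chosen) == limit:
            break
    return chosen
```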
## Multi-Shot Prompting Principles Applied
This extraction process implements key multi-shot prompting concepts:
- Example Quality: Only the top 20% of iterations become examples (a high bar)
- Diversity: 3-5 patterns per category prevent overfitting to a single approach
- Relevance: Patterns are categorized for targeted application
- Edge Cases: Innovation category captures unusual but effective approaches
- Uniform Structure: All patterns follow consistent JSON schema
## Update Strategy

If the pattern library already exists (a merge sketch follows this list):

1. Load the existing library
2. Extract patterns from NEW iterations only
3. Merge with the existing patterns:
   - Keep patterns with the highest success metrics
   - Remove duplicates (similar patterns)
   - Maintain the 3-5 patterns-per-category limit
4. Increment the version number
5. Update the metadata
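A merge sketch under the same hypothetical `score`-field assumption as the selection sketch above:

```python
def merge_libraries(existing: dict, new_patterns: dict[str, list[dict]],
                    limit: int = 5) -> dict:
    """Merge new patterns into an existing library, deduplicating by name,
    keeping the highest-scoring entry, and capping each category at `limit`."""
    merged = {**existing, "patterns": dict(existing["patterns"])}
    for category, incoming in new_patterns.items():
        combined = merged["patterns"].get(category, []) + incoming
        best_by_name: dict[str, dict] = {}
        for p in combined:  # dedupe by name, keep the best score
            prev = best_by_name.get(p["name"])
            if prev is None or p.get("score", 0) > prev.get("score", 0):
                best_by_name[p["name"]] = p
        ranked = sorted(best_by_name.values(),
                        key=lambda p: p.get("score", 0), reverse=True)
        merged["patterns"][category] = ranked[:limit]
    major, minor = merged["version"].split(".")
    merged["version"] = f"{major}.{int(minor) + 1}"  # bump the minor version
    return merged
```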
## Validation

Before saving the pattern library, validate that (a checker sketch follows the list):
- JSON is well-formed
- Each pattern has all required fields
- Code snippets are valid (if applicable)
- Success metrics are specific and measurable
- Examples are diverse within each category
- Version number is incremented correctly
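A checker sketch for the structural checks (the semantic ones, such as measurable metrics and in-category diversity, still require judgment):

```python
REQUIRED_FIELDS = {"name", "description", "example_file",
                   "key_characteristics", "success_metrics", "code_snippet"}

def validate_library(library: dict) -> list[str]:
    """Return a list of structural problems; an empty list means valid."""
    problems = []
    if not library.get("version"):
        problems.append("missing version")
    for category, patterns in library.get("patterns", {}).items():
        for p in patterns:
            missing = REQUIRED_FIELDS - p.keys()
            if missing:
                problems.append(
                    f"{category}/{p.get('name', '?')}: missing {sorted(missing)}"
                )
    return problems
```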
## Output Report

Generate a summary report:

```markdown
# Pattern Extraction Report

## Analysis Summary
- Iterations analyzed: {count}
- Analysis depth: {quick|deep}
- Patterns extracted: {total}

## Patterns by Category

### Structural Patterns ({count})
1. {pattern_name}: {brief_description}
2. {pattern_name}: {brief_description}
...

### Content Quality Patterns ({count})
1. {pattern_name}: {brief_description}
2. {pattern_name}: {brief_description}
...

### Innovation Patterns ({count})
1. {pattern_name}: {brief_description}
2. {pattern_name}: {brief_description}
...

### Quality & Testing Patterns ({count})
1. {pattern_name}: {brief_description}
2. {pattern_name}: {brief_description}
...

## Exemplary Iterations
- Best structural: {file_path}
- Best content: {file_path}
- Most innovative: {file_path}
- Highest quality: {file_path}

## Pattern Library Saved
Location: {pattern_library_path}
Version: {version}

## Recommendations
- Use {pattern_name} for structural consistency
- Apply {pattern_name} for content quality
- Consider {pattern_name} for innovation
- Implement {pattern_name} for robustness
```
## Notes
- Pattern extraction is automatic but can be manually refined
- Library grows with each wave but maintains size limit (3-5 per category)
- Patterns serve as multi-shot examples for future iterations
- Quality bar rises naturally as better patterns are discovered
- Pattern library is spec-agnostic and can be reused across projects
## Related Commands

- `/project:infinite-synthesis` - Main loop that consumes the pattern library
- `/project:analyze-patterns` - Analyze pattern library effectiveness