# Architecture Documentation

Technical architecture of the Cross-Iteration Pattern Synthesis System.

## System Overview
```
┌─────────────────────────────────────────────────────────────────┐
│                      ORCHESTRATOR AGENT                         │
│                   (infinite-synthesis.md)                       │
└─────────────────────────────────────────────────────────────────┘
      │
      ├─── Wave 1: Cold Start
      │      │
      │      ├─> Sub-Agent 1 ─> Iteration 1
      │      ├─> Sub-Agent 2 ─> Iteration 2
      │      ├─> Sub-Agent 3 ─> Iteration 3
      │      ├─> Sub-Agent 4 ─> Iteration 4
      │      └─> Sub-Agent 5 ─> Iteration 5
      │
      ├─── Pattern Extraction
      │      │
      │      └─> Extract Patterns Agent
      │             └─> Pattern Library v1.0
      │
      ├─── Wave 2: Pattern-Guided
      │      │
      │      ├─> Sub-Agent 6 (+ patterns) ─> Iteration 6
      │      ├─> Sub-Agent 7 (+ patterns) ─> Iteration 7
      │      ├─> Sub-Agent 8 (+ patterns) ─> Iteration 8
      │      ├─> Sub-Agent 9 (+ patterns) ─> Iteration 9
      │      └─> Sub-Agent 10 (+ patterns) ─> Iteration 10
      │
      ├─── Pattern Refinement
      │      │
      │      └─> Extract Patterns Agent
      │             └─> Pattern Library v1.1
      │
      └─── Wave 3+ (Continuous Learning)
             └─> ... (repeat until count reached)
```
## Core Components

### 1. Orchestrator Agent

File: `.claude/commands/infinite-synthesis.md`
Responsibilities:
- Parse command arguments (spec, output dir, count, pattern library path)
- Calculate wave parameters (number of waves, iterations per wave)
- Coordinate wave execution
- Trigger pattern extraction between waves
- Manage context budget
- Generate final report
State Management:

```json
{
  "total_count": 20,
  "waves": 4,
  "wave_size": 5,
  "current_wave": 1,
  "pattern_library_version": "1.0",
  "iterations_generated": [],
  "quality_metrics": []
}
```
Key Algorithms:

```python
import math

# Wave calculation: returns (number_of_waves, iterations_per_wave)
def calculate_waves(count):
    if count == "infinite":
        return float("inf"), 5  # sentinel: run until a stop condition fires
    elif count <= 5:
        return 1, count
    elif count <= 15:
        return 2, math.ceil(count / 2)  # round up so waves cover the full count
    else:
        return math.ceil(count / 5), 5

# Pattern extraction trigger
def should_extract_patterns(current_wave, total_waves):
    # Extract after every wave except the last
    return current_wave < total_waves
```
### 2. Sub-Agent System

Created via: Task tool

Context Provided:

```
SPECIFICATION:
{Full spec content}

EXISTING ITERATIONS:
{List of already generated files}

PATTERN LIBRARY (Wave 2+ only):
{3-5 most relevant patterns}

REQUIREMENTS:
- Generate unique iteration
- Follow specification
- Incorporate patterns (if provided)
- Add novel innovation
- Maintain quality standards

OUTPUT:
Save to: {output_path}
```
Execution Model:
- Parallel execution (5 sub-agents at a time)
- Independent context (each agent has full spec + patterns)
- Synchronization point: All agents complete before pattern extraction
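The execution model above can be sketched with a thread pool. This is a sketch only: `run_sub_agent` is a stand-in for the real Task tool invocation, and all names here are illustrative, not the actual API.

```python
# Sketch of the wave execution model. run_sub_agent is a placeholder
# for the real Task tool call; the names here are illustrative only.
from concurrent.futures import ThreadPoolExecutor

def run_sub_agent(agent_id, spec, patterns=None):
    # Placeholder: a real implementation would dispatch a sub-agent
    # via the Task tool and return the path of the saved iteration.
    return f"iteration_{agent_id}.html"

def execute_wave(agent_ids, spec, patterns=None):
    # Launch all sub-agents in parallel, then block until every one
    # finishes -- the synchronization point before pattern extraction.
    with ThreadPoolExecutor(max_workers=len(agent_ids)) as pool:
        futures = [pool.submit(run_sub_agent, i, spec, patterns)
                   for i in agent_ids]
        return [f.result() for f in futures]
```

Collecting `f.result()` in submit order keeps outputs aligned with agent IDs while still overlapping the agents' execution.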
### 3. Pattern Extraction Agent

File: `.claude/commands/extract-patterns.md`
Responsibilities:
- Read all iteration files
- Score iterations across dimensions (functionality, quality, innovation, etc.)
- Identify top 20% per category
- Extract patterns with examples
- Build/update pattern library JSON
- Validate library structure
- Generate extraction report
Scoring Dimensions:

```
{
  functionality: 0-10,   // Does it work as specified?
  visual_appeal: 0-10,   // Aesthetics and UX
  code_quality:  0-10,   // Readability, organization
  innovation:    0-10,   // Novel ideas and creativity
  documentation: 0-10,   // Comments and explanations
  robustness:    0-10    // Error handling, edge cases
}

overall_score = average(dimensions)
```
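Since the overall score is an unweighted mean, the computation is a one-liner; a minimal sketch with example dimension values:

```python
def overall_score(dimensions):
    # Unweighted mean across the scoring dimensions (each 0-10).
    return sum(dimensions.values()) / len(dimensions)

example = {"functionality": 8, "visual_appeal": 7, "code_quality": 9,
           "innovation": 6, "documentation": 8, "robustness": 7}
# overall_score(example) -> 7.5
```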
Pattern Selection Algorithm:

```python
def extract_patterns(iterations, category, count=5):
    # 1. Score all iterations for this category
    scored = [(iteration, score_for_category(iteration, category))
              for iteration in iterations]

    # 2. Sort by score (descending)
    scored.sort(key=lambda x: x[1], reverse=True)

    # 3. Take top 20% (at least one)
    top_20_percent = scored[:max(1, len(scored) // 5)]

    # 4. Select diverse patterns
    patterns = []
    for iteration, score in top_20_percent:
        pattern = extract_pattern_from(iteration, category)
        if is_diverse_from(pattern, patterns):
            patterns.append(pattern)
        if len(patterns) >= count:
            break
    return patterns
```
### 4. Pattern Library

File: `pattern_library/patterns.json`
Schema:

```
{
  "version": "semver",
  "last_updated": "ISO 8601 timestamp",
  "total_iterations_analyzed": "integer",
  "analysis_depth": "quick|deep",
  "patterns": {
    "structural": [/* 3-5 pattern objects */],
    "content": [/* 3-5 pattern objects */],
    "innovation": [/* 3-5 pattern objects */],
    "quality": [/* 3-5 pattern objects */]
  },
  "metadata": {
    "extraction_date": "ISO 8601",
    "source_directory": "path",
    "patterns_extracted": "count",
    "avg_quality_score": "float"
  }
}
```
Pattern Object Schema:

```json
{
  "name": "string (short, descriptive)",
  "description": "string (1-2 sentences)",
  "example_file": "string (path to exemplary iteration)",
  "key_characteristics": ["array", "of", "defining", "traits"],
  "success_metrics": "string (specific, measurable)",
  "code_snippet": "string (5-15 lines representative code)"
}
```
Update Strategy:

```python
def update_pattern_library(old_library, new_iterations):
    # Extract patterns from new iterations only
    new_patterns = extract_all_patterns(new_iterations)

    # Merge with existing patterns
    for category in categories:
        # Combine old and new patterns
        all_patterns = old_library["patterns"][category] + new_patterns[category]
        # Rank by effectiveness
        ranked = rank_patterns(all_patterns)
        # Keep top 5 (or 3 for quick mode)
        old_library["patterns"][category] = ranked[:5]

    # Increment version
    old_library["version"] = increment_version(old_library["version"])
    return old_library
```
### 5. Analysis Agent

File: `.claude/commands/analyze-patterns.md`
Responsibilities:
- Load pattern library
- Categorize iterations (pre-pattern vs post-pattern)
- Calculate adoption rate
- Compare quality metrics
- Rank pattern effectiveness
- Generate analysis report
Metrics Calculated:

```
{
  // Adoption metrics
  pattern_adoption_rate: percent,
  avg_patterns_per_iteration: float,
  most_adopted_pattern: pattern_name,
  least_adopted_pattern: pattern_name,

  // Quality metrics
  pre_pattern_quality: float,
  post_pattern_quality: float,
  quality_improvement: percent,
  consistency_improvement: percent,

  // Innovation metrics
  pre_pattern_innovations: count,
  post_pattern_innovations: count,
  innovation_preservation: percent,

  // Pattern effectiveness
  pattern_rankings: [
    {pattern: name, adoption: percent, impact: float}
  ]
}
```
### 6. Validation System

File: `validators/check_patterns.sh`
Validations Performed:

```
# 1. JSON Syntax
jq empty pattern_library.json

# 2. Required Fields
for field in version last_updated patterns metadata:
    check_exists(field)

# 3. Pattern Categories
for category in structural content innovation quality:
    check_exists(patterns[category])
    check_count(patterns[category], 3-5)

# 4. Pattern Objects
for pattern in all_patterns:
    check_fields(name, description, example_file,
                 key_characteristics, success_metrics, code_snippet)

# 5. Pattern Quality
calculate_snippet_coverage()
calculate_metrics_coverage()

# 6. Consistency Checks
check_no_duplicate_names()
check_version_incremented()
```
## Data Flow

### Wave 1: Cold Start Generation

```
User Command
  │
  ├─> Parse Arguments
  │     └─> spec_file, output_dir, count=5
  │
  ├─> Read Specification
  │     └─> Load spec content
  │
  ├─> Create Sub-Agents (x5)
  │     │
  │     ├─> Sub-Agent 1: {spec, existing_iterations=[]}
  │     ├─> Sub-Agent 2: {spec, existing_iterations=[iter_1]}
  │     ├─> Sub-Agent 3: {spec, existing_iterations=[iter_1, iter_2]}
  │     ├─> Sub-Agent 4: {spec, existing_iterations=[iter_1..3]}
  │     └─> Sub-Agent 5: {spec, existing_iterations=[iter_1..4]}
  │
  ├─> Execute in Parallel
  │     └─> Wait for all to complete
  │
  ├─> Collect Outputs
  │     └─> [iteration_1..5.html]
  │
  └─> Trigger Pattern Extraction
        └─> See Pattern Extraction Flow
```
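The growing `existing_iterations` list in the diagram can be built mechanically. A sketch, with illustrative field names only:

```python
def wave_contexts(spec, wave_size=5):
    # Each sub-agent receives the names of iterations assigned before it
    # (names only, not content), so later agents can aim for uniqueness.
    contexts, existing = [], []
    for i in range(1, wave_size + 1):
        contexts.append({"agent": i, "spec": spec,
                         "existing_iterations": list(existing)})
        existing.append(f"iter_{i}")
    return contexts
```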
### Pattern Extraction Flow

```
Extract Patterns Command
  │
  ├─> Read All Iterations
  │     └─> [iteration_1..5.html]
  │
  ├─> Score Each Iteration
  │     │
  │     ├─> Structural Score
  │     ├─> Content Score
  │     ├─> Innovation Score
  │     └─> Quality Score
  │
  ├─> Identify Top 20% per Category
  │     │
  │     ├─> Structural: [iter_3, iter_5]
  │     ├─> Content: [iter_2, iter_5]
  │     ├─> Innovation: [iter_1, iter_4]
  │     └─> Quality: [iter_3, iter_4]
  │
  ├─> Extract Pattern Objects
  │     │
  │     ├─> For each top iteration:
  │     │     ├─> Analyze code structure
  │     │     ├─> Extract key characteristics
  │     │     ├─> Capture code snippet
  │     │     └─> Document success metrics
  │     │
  │     └─> Select 3-5 most diverse patterns per category
  │
  ├─> Build Pattern Library JSON
  │     │
  │     └─> {
  │           version: "1.0",
  │           patterns: {
  │             structural: [pattern1, pattern2, pattern3],
  │             content: [pattern1, pattern2, pattern3],
  │             ...
  │           }
  │         }
  │
  ├─> Validate Pattern Library
  │     └─> Run check_patterns.sh
  │
  ├─> Save to File
  │     └─> pattern_library/patterns.json
  │
  └─> Generate Report
        └─> Pattern extraction summary
```
### Wave 2+: Pattern-Guided Generation

```
Continue Generation (Wave 2)
  │
  ├─> Load Pattern Library
  │     └─> pattern_library/patterns.json v1.0
  │
  ├─> Create Sub-Agents (x5)
  │     │
  │     ├─> Sub-Agent 6:
  │     │     ├─> spec
  │     │     ├─> existing_iterations=[iter_1..5]
  │     │     └─> relevant_patterns=[
  │     │           structural_pattern_1,
  │     │           content_pattern_1,
  │     │           quality_pattern_1
  │     │         ]
  │     │
  │     ├─> Sub-Agent 7: (similar context + patterns)
  │     └─> ... (Sub-Agents 8-10)
  │
  ├─> Execute in Parallel
  │     └─> Sub-agents incorporate pattern examples
  │
  ├─> Collect Outputs
  │     └─> [iteration_6..10.html]
  │
  ├─> Extract Patterns from ALL iterations
  │     │
  │     ├─> Analyze [iteration_1..10.html]
  │     ├─> Extract new patterns from iterations 6-10
  │     ├─> Merge with existing patterns
  │     ├─> Keep top 5 per category
  │     └─> Increment version to v1.1
  │
  └─> Continue to Wave 3 if count allows
```
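Selecting the `relevant_patterns` for each sub-agent might look like the sketch below; the particular category mix (structural, content, quality) follows the diagram above, but the function name and signature are assumptions:

```python
def select_relevant_patterns(library,
                             categories=("structural", "content", "quality"),
                             per_category=1):
    # Send each sub-agent a few top-ranked patterns from selected
    # categories rather than the whole library, keeping context small.
    selected = []
    for cat in categories:
        selected.extend(library["patterns"].get(cat, [])[:per_category])
    return selected
```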
## Multi-Shot Prompting Integration

### How Patterns Serve as Examples
When a sub-agent receives pattern context:
PATTERN CONTEXT PROVIDED:

### Structural Pattern: Modular Three-Layer Architecture

**Description**: Separates data, rendering logic, and interaction handlers
**Why This Works**: Readability 9.5/10, easy to test, modifications don't cascade
**Example Code**:

```javascript
// DATA LAYER
const dataset = {
  values: [...],
  validate() { return this.values.length > 0; }
};

// VIEW LAYER
const renderer = {
  render(data) { /* D3 rendering */ }
};

// CONTROLLER LAYER
const controller = {
  onNodeClick(e) { /* interaction logic */ }
};
```

**Key Characteristics**:
- Clear layer boundaries with comments
- Data validation methods on data objects
- Pure rendering functions (no business logic)
- Event handlers isolated in controller

[2-4 more patterns provided...]

YOUR TASK: Study these patterns. Understand WHY they work (success metrics). Apply their principles to your iteration. Add your own innovation beyond these examples.
### Pattern as Multi-Shot Example
This is textbook multi-shot prompting:
1. **Concrete Example**: Actual code, not just description
2. **Success Context**: "Why This Works" explains effectiveness
3. **Multiple Examples**: 3-5 patterns provide diversity
4. **Clear Structure**: Consistent format makes patterns easy to parse
5. **Transferable**: Characteristics list shows how to adapt
Few-shot prompting research suggests that this approach (3-5 concrete examples with success context) maximizes consistency while preserving creativity.
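Assembling the multi-shot context from pattern objects is mechanical. A sketch using the field names from the pattern object schema (`build_multishot_context` itself is a hypothetical helper):

```python
def format_pattern(p):
    # Render one pattern object in the prompt format shown above.
    return (f"### Pattern: {p['name']}\n"
            f"**Description**: {p['description']}\n"
            f"**Why This Works**: {p['success_metrics']}\n"
            f"**Example Code**:\n{p['code_snippet']}")

def build_multishot_context(patterns):
    # 3-5 patterns, each a concrete example with success context.
    return "\n\n".join(format_pattern(p) for p in patterns)
```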
## Context Budget Management
### Context Allocation
Total Context Budget: ~200K tokens
Allocation per Wave:

```
├─ Specification: ~2K tokens
├─ Pattern Library: ~3K tokens (grows slightly over time)
├─ Sub-Agent Context (x5): ~15K tokens total
│   ├─ Spec: 2K
│   ├─ Patterns: 3K
│   ├─ Existing iterations list: 500 tokens
│   └─ Task instructions: 1K
├─ Pattern Extraction: ~5K tokens
└─ Orchestrator Logic: ~2K tokens
```
Per Wave Total: ~27K tokens
Maximum Waves: 200K / 27K ≈ 7 waves (35 iterations)
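The budget arithmetic above, spelled out (constant names are illustrative):

```python
CONTEXT_BUDGET = 200_000  # tokens

PER_WAVE = {
    "specification": 2_000,
    "pattern_library": 3_000,
    "sub_agent_context": 15_000,  # 5 sub-agents combined
    "pattern_extraction": 5_000,
    "orchestrator_logic": 2_000,
}

wave_cost = sum(PER_WAVE.values())       # 27K tokens per wave
max_waves = CONTEXT_BUDGET // wave_cost  # about 7 waves (35 iterations)
```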
### Context Optimization Strategies
1. **Pattern Library Size Cap**: Max 5 patterns per category (3 for "quick" mode)
2. **Iteration List Compression**: Only file names, not content
3. **Selective Pattern Provision**: Provide 3-5 most relevant patterns, not all
4. **Summary vs Full Content**: Pattern extraction works with summaries
5. **Garbage Collection**: Remove obsolete patterns as better ones emerge
### Infinite Mode Termination
```python
def should_continue_infinite(context_usage):
    # Stop if context usage exceeds 80% of budget
    if context_usage > 0.8 * CONTEXT_BUDGET:
        return False, "Context budget limit approaching"

    # Stop if pattern library isn't improving
    if library_unchanged_for_N_waves(3):
        return False, "Pattern library converged"

    # Stop if quality plateaued
    if quality_unchanged_for_N_waves(5):
        return False, "Quality plateau reached"

    return True, "Continue generation"
```
## Error Handling

### Orchestrator Level

```python
try:
    # Execute wave
    iterations = execute_wave(wave_num)
except SubAgentFailure as e:
    # Log error, continue with successful iterations
    log_error(f"Sub-agent {e.agent_id} failed: {e.message}")
    # Optionally retry failed iteration
    if should_retry(e):
        retry_iteration(e.iteration_num)
```
### Pattern Extraction Level

```python
try:
    # Extract patterns
    patterns = extract_patterns(iterations)
except ExtractionFailure as e:
    # Log warning, use previous pattern library
    log_warning(f"Pattern extraction failed: {e.message}")
    log_info("Continuing with existing pattern library")
    patterns = load_previous_library()
```
### Sub-Agent Level

```python
try:
    # Generate iteration
    output = generate_iteration(spec, patterns)
    validate_output(output)
except GenerationFailure as e:
    # Report to orchestrator
    return Error(f"Failed to generate iteration: {e.message}")
```
### Validation Level

```bash
# Validator returns non-zero exit code on failure
if ! ./validators/check_patterns.sh "$PATTERN_LIB"; then
    echo "Pattern library validation failed"
    echo "Fix errors before continuing"
    exit 1
fi
```
## Performance Considerations

### Parallel Execution

Sub-agents execute in parallel:

```
Wave of 5 iterations:

Traditional Sequential:
Agent 1 ────> (2 min)
              Agent 2 ────> (2 min)
                            Agent 3 ────> (2 min)
                                          Agent 4 ────> (2 min)
                                                        Agent 5 ────> (2 min)
Total: 10 minutes

Parallel Execution:
Agent 1 ────> (2 min)
Agent 2 ────> (2 min)
Agent 3 ────> (2 min)
Agent 4 ────> (2 min)
Agent 5 ────> (2 min)
Total: 2 minutes (5x speedup)
```
### Pattern Extraction Optimization

```python
# Quick mode (3 patterns/category): ~30 seconds
# Deep mode (5 patterns/category): ~60 seconds

# Optimization: cache iteration scores
scores_cache = {}

def score_iteration(iteration, category):
    cache_key = f"{iteration.id}_{category}"
    if cache_key not in scores_cache:
        scores_cache[cache_key] = compute_score(iteration, category)
    return scores_cache[cache_key]
```
### I/O Optimization

```python
# Read all iterations once, keep in memory
iterations = [read_file(f) for f in iteration_files]

# Avoid repeated file I/O
for category in categories:
    extract_patterns(iterations, category)  # Uses in-memory data
```
## Extension Points

### Custom Pattern Categories

Add new pattern categories by:

1. Updating `pattern_library_template.json`:

   ```
   {
     "patterns": {
       "structural": [...],
       "content": [...],
       "innovation": [...],
       "quality": [...],
       "performance": [...]  // NEW CATEGORY
     }
   }
   ```

2. Updating the extraction logic in `extract-patterns.md`
3. Updating the validator to check the new category
4. Updating the analysis to track adoption of the new category
### Custom Scoring Dimensions

Add new scoring dimensions:

```python
def score_iteration(iteration):
    return {
        "functionality": score_functionality(iteration),
        "code_quality": score_code_quality(iteration),
        "innovation": score_innovation(iteration),
        "accessibility": score_accessibility(iteration),  # NEW
        "performance": score_performance(iteration),      # NEW
    }
```
### Custom Pattern Selection

Override default selection algorithm:

```python
def extract_patterns_custom(iterations, category, count=5):
    # Custom logic: prefer patterns from recent iterations
    recent_iterations = iterations[-10:]
    return extract_patterns(recent_iterations, category, count)
```
## Security Considerations

### File System Access
- Validators only read pattern library (no writes)
- Sub-agents write only to designated output directory
- Pattern extraction reads only from output directory
- No execution of generated code during pattern extraction
### JSON Injection

- Pattern library validated with `jq` before use
- Malformed JSON fails gracefully
- No `eval()` or code execution from JSON
### Resource Limits
- Context budget prevents infinite loops
- Wave size capped (max 10 iterations per wave)
- Pattern library size capped (max 5 per category)
- File size limits on generated iterations (spec-dependent)
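The wave-size and library-size caps are easy to enforce mechanically. A sketch; the constant names are assumptions, not part of the existing code:

```python
MAX_WAVE_SIZE = 10
MAX_PATTERNS_PER_CATEGORY = 5

def clamp_wave_size(requested):
    # Never run more than MAX_WAVE_SIZE iterations in one wave.
    return min(requested, MAX_WAVE_SIZE)

def cap_library(library):
    # Truncate every category to the configured maximum.
    for cat, patterns in library["patterns"].items():
        library["patterns"][cat] = patterns[:MAX_PATTERNS_PER_CATEGORY]
    return library
```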
## Testing Architecture

### Unit Testing Pattern Extraction

```bash
# Create test iterations
mkdir test_iterations
echo "test content" > test_iterations/test_1.html

# Run extraction
/project:extract-patterns test_iterations test_patterns.json

# Validate output
./validators/check_patterns.sh test_patterns.json
```
### Integration Testing Full Loop

```bash
# Generate 10 iterations
/project:infinite-synthesis specs/example_spec.md test_output 10

# Verify outputs
ls test_output/*.html | wc -l  # Should be 10

# Verify pattern library created
test -f pattern_library/patterns.json

# Verify pattern library valid
./validators/check_patterns.sh pattern_library/patterns.json
```
### Regression Testing

```bash
# Known-good pattern library
cp pattern_library/patterns.json pattern_library/baseline.json

# Generate with baseline
/project:infinite-synthesis specs/example_spec.md output_baseline 5 pattern_library/baseline.json

# Compare quality
/project:analyze-patterns pattern_library/baseline.json output_baseline
```
## Future Architecture Enhancements

### Planned Improvements

1. **Pattern Confidence Scores**
   - Track success rate of each pattern
   - Prioritize high-confidence patterns
   - Deprecate low-confidence patterns

2. **Pattern Genealogy**
   - Track which iteration created which pattern
   - Visualize pattern evolution over waves
   - Credit most influential iterations

3. **Cross-Spec Pattern Sharing**
   - Export patterns for reuse across projects
   - Import patterns from external sources
   - Pattern library marketplace

4. **Adaptive Wave Sizing**
   - Adjust wave size based on pattern stability
   - Larger waves when patterns are stable
   - Smaller waves during exploration phases

5. **Real-Time Quality Monitoring**
   - Stream quality metrics during generation
   - Early stopping if quality degrades
   - Dynamic pattern injection
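One possible shape for pattern confidence scores, sketched below; the `adoption` and `impact` fields are assumptions about what "success rate" would mean here, and are not part of the current schema:

```python
def confidence(stats):
    # adoption: fraction of post-pattern iterations using the pattern;
    # impact: average quality delta when adopted. Both fields are
    # hypothetical -- the current library does not track them yet.
    return stats["adoption"] * max(stats["impact"], 0.0)

def triage(patterns, deprecate_below=0.1):
    # Drop low-confidence patterns, rank the rest highest-first.
    keep = [p for p in patterns if confidence(p) >= deprecate_below]
    return sorted(keep, key=confidence, reverse=True)
```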
### Research Opportunities
- Optimal Pattern Count: Is 3-5 truly optimal? A/B test different counts
- Pattern Decay: Do patterns become less effective over time?
- Transfer Learning: Can patterns from one domain help another?
- Human-in-the-Loop: Manual pattern curation vs automatic extraction
- Pattern Combinations: Identify synergistic pattern pairs
---

**Last Updated**: 2025-10-10
**Version**: 1.0
**Architecture Stability**: Stable (no breaking changes planned)