# Evolve Strategy - Orchestration Strategy Evolution

**Purpose:** Analyze and evolve the infinite loop orchestration strategy based on performance data and meta-level insights.
## Usage

```
/evolve-strategy [metric_focus] [evolution_type]
```
## Parameters

- `metric_focus`: What to optimize - "quality", "efficiency", "diversity", "meta", "all" (default: "all")
- `evolution_type`: How to evolve - "incremental", "experimental", "revolutionary" (default: "incremental")
Examples
# Incrementally improve overall strategy
/evolve-strategy all incremental
# Experimental evolution focusing on quality
/evolve-strategy quality experimental
# Revolutionary rethink of efficiency
/evolve-strategy efficiency revolutionary
# Optimize meta-level capabilities
/evolve-strategy meta incremental
## Command Implementation

You are the Strategy Evolution Specialist. Your role is to analyze orchestration performance and evolve the strategy for better results.
### Phase 1: Performance Analysis

1. **Load Historical Metrics**

   Read from `improvement_log/`:
   - Wave performance data (quality, efficiency, diversity scores)
   - Sub-agent feedback and meta-reflections
   - Pattern success rates
   - Resource utilization (context, time, parallelism)

2. **Calculate Baseline Metrics**

   Establish current performance:

   ```json
   {
     "quality_avg": 7.8,
     "quality_std_dev": 1.2,
     "efficiency_score": 0.75,
     "diversity_index": 0.82,
     "meta_awareness": 0.65,
     "improvement_rate": 0.03,
     "context_utilization": 0.68,
     "parallelism_effectiveness": 0.71
   }
   ```
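The baseline computation can be sketched as follows. This is a minimal illustration only: it assumes wave logs are stored as `wave_*.json` files in `improvement_log/`, each carrying per-wave `quality`, `efficiency`, and `diversity` scores; that file layout and those field names are hypothetical, not a fixed schema.

```python
import json
import statistics
from pathlib import Path

def compute_baseline(log_dir="improvement_log"):
    """Aggregate per-wave scores into a baseline snapshot (hypothetical log schema)."""
    quality, efficiency, diversity = [], [], []
    for path in Path(log_dir).glob("wave_*.json"):
        wave = json.loads(path.read_text())
        quality.append(wave["quality"])
        efficiency.append(wave["efficiency"])
        diversity.append(wave["diversity"])
    return {
        "quality_avg": round(statistics.mean(quality), 2),
        "quality_std_dev": round(statistics.stdev(quality), 2) if len(quality) > 1 else 0.0,
        "efficiency_score": round(statistics.mean(efficiency), 2),
        "diversity_index": round(statistics.mean(diversity), 2),
    }
```

The snapshot this returns is a subset of the metrics block above; derived scores such as `meta_awareness` or `parallelism_effectiveness` would need their own aggregation rules.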
3. **Identify Performance Patterns**

   Structure-oriented analysis:
   - Which orchestration patterns yield best results?
   - What abstract frameworks work across contexts?
   - Where is reasoning inefficient?
   - What meta-level patterns emerge?
### Phase 2: Meta-Prompting Strategy Analysis

1. **Apply Meta-Prompting Principles to Strategy**

   Current Strategy Structure:

   ```
   Load spec → Analyze context → Deploy agents → Collect results → Iterate
   ```

   Meta-Level Questions:
   - Is this structure optimal or habitual?
   - Are we using abstract frameworks or specific examples?
   - Do we minimize unnecessary dependencies?
   - Can the strategy improve itself?
2. **Pattern Recognition**

   Identify successful orchestration patterns:
   - Agent deployment strategies (batch size, parallelism)
   - Creative direction assignment methods
   - Context building approaches
   - Quality validation techniques
   - Meta-feedback integration patterns
### Phase 3: Strategy Evolution

1. **Generate Evolution Proposals**

   Based on `{{evolution_type}}`:

   **Incremental Evolution:**
   - Small parameter adjustments (±10-20%)
   - Refinement of existing patterns
   - Low-risk improvements
   - Validated through A/B comparison

   **Experimental Evolution:**
   - New pattern combinations
   - Alternative orchestration approaches
   - Medium-risk innovations
   - Tested on a subset of iterations

   **Revolutionary Evolution:**
   - Complete strategy rethink
   - Novel orchestration paradigms
   - High-risk, high-reward changes
   - Requires extensive validation
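An incremental evolution step can be sketched as a bounded parameter nudge followed by an A/B check. This is a minimal sketch under stated assumptions: the parameter names, the 15% step (one point inside the ±10-20% band above), and the 0.02 minimum-gain margin are all hypothetical choices, not part of the command's contract.

```python
import random

def propose_incremental(params, step=0.15):
    """Nudge each numeric orchestration parameter by up to ±step (hypothetical knobs)."""
    return {k: round(v * (1 + random.uniform(-step, step)), 3)
            for k, v in params.items()}

def ab_winner(metric_a, metric_b, min_gain=0.02):
    """A/B comparison: keep variant B only if it beats A by a meaningful margin."""
    return "B" if metric_b - metric_a >= min_gain else "A"
```

Requiring a minimum gain before adopting variant B keeps the low-risk promise of incremental evolution: noise-level improvements do not trigger a strategy change.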
2. **Focus-Specific Evolutions**

   **Quality Focus:**

   ```markdown
   ## Quality Evolution Strategy
   **Current Approach:** [describe]
   **Quality Bottleneck:** [identify issue]
   **Evolved Approach:**
   - Enhanced creative constraint design
   - Improved sub-agent instruction quality
   - Better quality validation framework
   - Meta-level quality reflection loops
   **Expected Impact:** Quality avg: 7.8 → 8.5 (+9%)
   ```

   **Efficiency Focus:**

   ```markdown
   ## Efficiency Evolution Strategy
   **Current Approach:** [describe]
   **Efficiency Bottleneck:** [identify issue]
   **Evolved Approach:**
   - Optimized batch sizing algorithm
   - Reduced redundant context loading
   - Parallel execution improvements
   - Structure-oriented prompt compression
   **Expected Impact:** Efficiency: 0.75 → 0.88 (+17%)
   ```

   **Diversity Focus:**

   ```markdown
   ## Diversity Evolution Strategy
   **Current Approach:** [describe]
   **Diversity Bottleneck:** [identify issue]
   **Evolved Approach:**
   - Enhanced creative direction generation
   - Multi-dimensional uniqueness validation
   - Abstract pattern variation methods
   - Cross-domain inspiration integration
   **Expected Impact:** Diversity index: 0.82 → 0.91 (+11%)
   ```

   **Meta Focus:**

   ```markdown
   ## Meta-Level Evolution Strategy
   **Current Approach:** [describe]
   **Meta Bottleneck:** [identify issue]
   **Evolved Approach:**
   - Deeper self-reflection integration
   - Enhanced meta-prompting throughout
   - Recursive improvement loops
   - Pattern abstraction mechanisms
   **Expected Impact:** Meta-awareness: 0.65 → 0.80 (+23%)
   ```
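The expected-impact percentages in these templates follow directly from baseline vs. target; a one-line helper makes the arithmetic explicit when drafting new focus templates:

```python
def expected_impact(baseline, target):
    """Percent improvement from baseline to target, rounded to a whole percent."""
    return round((target - baseline) / baseline * 100)
```

For example, `expected_impact(7.8, 8.5)` reproduces the +9% quality figure and `expected_impact(0.65, 0.80)` the +23% meta-awareness figure.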
### Phase 4: Strategy Implementation Design

1. **Create Evolved Strategy Document**

   Write `improvement_log/evolved_strategy_{{timestamp}}.md`:

   ```markdown
   # Evolved Orchestration Strategy - {{timestamp}}

   ## Executive Summary
   [What changed and why]

   ## Performance Analysis
   ### Current Metrics
   [Baseline data]
   ### Identified Issues
   [What's not optimal]
   ### Root Causes
   [Why issues exist - structural analysis]

   ## Evolution Details
   ### Meta-Prompting Principles Applied
   - Structure-oriented: [how]
   - Minimal dependency: [how]
   - Abstract frameworks: [how]
   - Efficient reasoning: [how]

   ### Evolved Orchestration Flow
   **Previous Flow:**
   [Old strategy diagram/pseudocode]

   **Evolved Flow:**
   [New strategy diagram/pseudocode]

   **Key Changes:**
   1. [Change 1] - Impact: [metric]
   2. [Change 2] - Impact: [metric]
   3. [Change 3] - Impact: [metric]

   ### Implementation Guidelines
   **For /infinite-meta command:**
   - Modify Phase [X]: [specific changes]
   - Update sub-agent template: [how]
   - Adjust batch sizing: [new algorithm]
   - Enhance context building: [new approach]

   ### Validation Plan
   **Test Scenarios:**
   1. Small batch (5 iterations) - Measure quality change
   2. Medium batch (15 iterations) - Measure efficiency
   3. Large batch (30 iterations) - Measure sustainability

   **Success Criteria:**
   - Quality: ≥ 8.5 average (vs 7.8 baseline)
   - Efficiency: ≥ 0.88 score (vs 0.75 baseline)
   - Diversity: ≥ 0.91 index (vs 0.82 baseline)
   - Meta: ≥ 0.80 awareness (vs 0.65 baseline)

   **Rollback Triggers:**
   - Any metric drops >15% below baseline
   - System instability or errors
   - Excessive context usage (>90%)

   ## Risk Assessment
   **Risk Level:** [Low/Medium/High]
   **Mitigation:**
   - [Strategy 1]
   - [Strategy 2]
   **Rollback Plan:**
   [How to revert to previous strategy]

   ## Expected Outcomes
   **Short-term (10 waves):** [Immediate improvements]
   **Medium-term (50 waves):** [Compounding benefits]
   **Long-term (infinite mode):** [Sustained advantages]

   ## Meta-Insights
   [What this evolution teaches us about orchestration in general]
   ```
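The measurable rollback triggers can be checked mechanically. A minimal sketch, assuming baseline and current metrics are dicts keyed like the Phase 1 snapshot; the 15% drop tolerance and 90% context ceiling come from the validation plan, while the dict shape is an assumption:

```python
def should_rollback(baseline, current, drop_tolerance=0.15, context_limit=0.90):
    """Return True if any measurable rollback trigger fires (hypothetical metric keys)."""
    for key, base in baseline.items():
        if key == "context_utilization":
            continue  # higher is worse here; handled separately below
        if current.get(key, base) < base * (1 - drop_tolerance):
            return True  # metric dropped >15% below baseline
    return current.get("context_utilization", 0.0) > context_limit
```

The "system instability or errors" trigger is not covered by this check and would still need operational monitoring.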
2. **Update Meta-Prompts**

   Enhance `meta_prompts/orchestration_strategy.md`:

   ```markdown
   # Orchestration Strategy Meta-Prompt

   ## Evolved Strategy Pattern (v{{version}})
   **Structural Framework:**
   [Abstract orchestration template]
   **Reasoning Flow:**
   [Logical decision tree]
   **Optimization Principles:**
   - [Principle 1]
   - [Principle 2]
   **Meta-Awareness Integration:**
   [How to maintain self-improvement during orchestration]

   ## Historical Evolution
   - v1: [baseline strategy]
   - v2: [first evolution - what improved]
   - v3: [second evolution - what improved]
   - v{{version}}: [current - what improved]

   ## Future Evolution Paths
   [Potential next improvements based on patterns]
   ```
### Phase 5: Integration and Monitoring

1. **Create Implementation Checklist**

   Generate `improvement_log/strategy_implementation_{{timestamp}}.md`:

   ```markdown
   # Strategy Implementation Checklist

   ## Pre-Implementation
   - [ ] Backup current strategy (saved as v{{previous}})
   - [ ] Review all proposed changes
   - [ ] Prepare rollback plan
   - [ ] Set monitoring metrics

   ## Implementation
   - [ ] Update /infinite-meta command
   - [ ] Modify sub-agent templates
   - [ ] Adjust batch sizing logic
   - [ ] Enhance context building
   - [ ] Update quality validation

   ## Validation
   - [ ] Run test scenario 1 (small batch)
   - [ ] Run test scenario 2 (medium batch)
   - [ ] Run test scenario 3 (large batch)
   - [ ] Compare metrics vs. baseline
   - [ ] Validate no regressions

   ## Post-Implementation
   - [ ] Document actual vs. expected results
   - [ ] Update strategy version
   - [ ] Archive old strategy
   - [ ] Monitor next 5 waves closely
   ```
2. **Set Up Monitoring**

   Create automated monitoring for the next waves:
   - Track all key metrics
   - Compare to baseline and predictions
   - Alert on regressions
   - Document surprises and learnings
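A minimal monitoring hook might compare each new wave against both the baseline and the evolution's predictions. Sketch only: the dict-based metric shape, the 5% prediction tolerance, and the alert strings are all hypothetical.

```python
def monitor_wave(wave, baseline, predicted, tolerance=0.05):
    """Return alert strings for metrics that regress or fall short of predictions."""
    alerts = []
    for key, base in baseline.items():
        actual = wave.get(key)
        if actual is None:
            continue  # metric not reported for this wave
        if actual < base:
            alerts.append(f"REGRESSION: {key} {actual} < baseline {base}")
        elif key in predicted and actual < predicted[key] * (1 - tolerance):
            alerts.append(f"BELOW PREDICTION: {key} {actual} vs {predicted[key]}")
    return alerts
```

Emitting "below prediction" separately from "regression" supports the documentation goal above: a wave can beat the baseline yet still be a surprise worth recording.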
### Phase 6: Meta-Level Learning

1. **Extract Meta-Patterns**

   What does this evolution teach us?
   - General principles about orchestration
   - Transferable patterns to other domains
   - Meta-level insights about improvement itself
   - Structural frameworks that work universally
2. **Feed Into Future Evolutions**

   Update `meta_prompts/evolution_meta.md` with:
   - Successful evolution patterns
   - Effective analysis techniques
   - Reliable validation methods
   - Meta-learning about learning
## Meta-Prompting for Strategy Evolution

This command applies meta-prompting to itself:

```
CURRENT_STRUCTURE: Strategy evolution analyzer
ABSTRACTION: Orchestration is pattern composition
REASONING: Analyze structure → Identify patterns → Evolve frameworks

SELF_REFLECTION:
- Am I optimizing structure or just tweaking parameters?
- Are my evolutions generalizable or context-specific?
- Do I maintain meta-awareness throughout?
- Can this evolution process improve itself?

EVOLUTION_OF_EVOLUTION:
[How to improve the improvement process]
- Better pattern recognition
- More abstract framework design
- Enhanced meta-level analysis
- Recursive self-improvement
```
## Output Files

```
improvement_log/
├── evolved_strategy_{{timestamp}}.md
├── strategy_implementation_{{timestamp}}.md
├── evolution_metrics_{{timestamp}}.json
└── strategy_history.md (updated)

meta_prompts/
├── orchestration_strategy.md (updated)
└── evolution_meta.md (updated)

backups/
└── strategy_v{{previous}}.md
```
## Success Criteria

A successful strategy evolution:
- Shows measurable improvement in target metrics
- Applies meta-prompting principles (structure-oriented)
- Maintains or improves other metrics
- Is validated through testing
- Has a clear rollback plan
- Generates meta-level insights
- Can evolve further
## Integration Notes

- Auto-triggered in `/infinite-meta` when `improvement_mode = "evolve"`
- Works with `/improve-self` for comprehensive analysis
- Feeds into `/generate-spec` for strategy-aware specs
- Creates an evolution history for pattern analysis
This command evolves orchestration strategy using meta-prompting principles. It focuses on structural improvements and abstract pattern recognition to create generalizable, efficient, and self-improving orchestration frameworks.