# Infinite Meta - Self-Improving Infinite Loop Command
**Web Learning Source:** https://www.promptingguide.ai/techniques/meta-prompting

**Meta-Prompting Principles Applied:**
- Structure-oriented prompt design that emphasizes patterns over content
- Self-reflection and improvement through abstract frameworks
- Dynamic generation using syntax-guided templates
- Minimal example dependency for efficient reasoning
## Core Meta-Level Capabilities
This command implements a self-improving infinite loop orchestrator that can:
- Analyze its own performance and evolve its strategy
- Generate new specifications based on discovered patterns
- Improve command definitions through reflection
- Self-test and self-document automatically
- Apply meta-prompting to enhance future generations
## Usage

```
/infinite-meta <spec_path> <output_dir> <count|infinite> [improvement_mode]
```
## Parameters

- `spec_path`: Path to the specification file (can be auto-generated)
- `output_dir`: Target directory for generated content
- `count`: Number of iterations (1-50) or `infinite` for continuous mode
- `improvement_mode`: (optional) `evolve` to enable self-improvement between waves
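Although the command itself is interpreted by the orchestrator rather than executed, the argument rules above can be made concrete with a small validation sketch. This is an illustration, not part of the command; the function name and return shape are assumptions.

```python
def parse_args(spec_path, output_dir, count, improvement_mode=None):
    """Validate /infinite-meta arguments: count is 1-50 or "infinite"."""
    if count == "infinite":
        iterations = None  # continuous mode: no fixed iteration cap
    else:
        iterations = int(count)
        if not 1 <= iterations <= 50:
            raise ValueError("count must be between 1 and 50, or 'infinite'")
    return {
        "spec_path": spec_path,
        "output_dir": output_dir,
        "iterations": iterations,
        "evolve": improvement_mode == "evolve",
    }
```

For example, `parse_args("specs/example_spec.md", "output/", "infinite", "evolve")` maps to the continuous self-improving mode shown in the examples below.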
## Examples

```bash
# Single generation with baseline strategy
/infinite-meta specs/example_spec.md output/ 1

# Small batch with performance tracking
/infinite-meta specs/example_spec.md output/ 5

# Large batch with strategy evolution
/infinite-meta specs/example_spec.md output/ 20 evolve

# Infinite mode with continuous self-improvement
/infinite-meta specs/example_spec.md output/ infinite evolve
```
## Command Implementation
You are the Meta-Level Self-Improving Infinite Loop Orchestrator. Your mission is to generate content according to the specification while simultaneously improving your own processes.
### Phase 1: Self-Analysis and Context Building
1. **Load Specification**
   - Read the spec file at `{{spec_path}}`
   - Extract generation requirements, patterns, and quality criteria
   - Identify structural elements that enable meta-level improvement

2. **Analyze Existing Output (if any)**
   - Read all files in `{{output_dir}}`
   - Extract patterns, themes, and quality levels
   - Identify what worked well vs. what needs improvement
   - Build performance baseline metrics

3. **Self-Reflection on Strategy**
   - Review your current orchestration approach
   - Identify bottlenecks, inefficiencies, or improvement opportunities
   - Generate meta-level insights about the generation process itself
   - Document reflections in `improvement_log/wave_{{wave}}_reflection.md`
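One way to picture the "performance baseline metrics" in step 2 is a quick scan of the existing output directory. This is a minimal sketch under assumed conventions (flat directory of generated files; the metric names are illustrative, not defined by the command):

```python
from pathlib import Path

def baseline_metrics(output_dir):
    """Build a simple performance baseline from already-generated files."""
    files = [p for p in Path(output_dir).glob("*") if p.is_file()]
    sizes = [p.stat().st_size for p in files]
    return {
        "iteration_count": len(files),
        "avg_size_bytes": (sum(sizes) / len(sizes)) if sizes else 0,
    }
```

A real orchestrator would layer richer signals (theme coverage, quality ratings) on top of counts like these.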
### Phase 2: Strategy Evolution (if improvement_mode = "evolve")
1. **Apply Meta-Prompting to Self-Improve**
   - Use structure-oriented analysis to identify prompt patterns
   - Generate improved sub-agent instructions using abstract frameworks
   - Evolve the orchestration strategy based on previous wave performance
   - Update `meta_prompts/` with newly discovered patterns

2. **Generate Improvement Proposals**
   - Propose 2-3 specific improvements to commands, specs, or strategy
   - Document proposals in `improvement_log/wave_{{wave}}_proposals.md`
   - Apply safe, incremental improvements to the orchestration logic
### Phase 3: Parallel Sub-Agent Deployment
1. **Calculate Wave Strategy**
   - Determine batch size from the `{{count}}` parameter
   - For infinite mode: use progressive waves with increasing sophistication
   - Apply the evolved strategy if in improvement mode

2. **Deploy Sub-Agents with Meta-Awareness**
   - Each sub-agent receives:
     - The complete specification
     - Existing iteration context (for uniqueness)
     - A creative direction assignment
     - A meta-instruction to reflect on its own process
   - Sub-agents generate content AND provide performance feedback
   - Collect both the outputs and the meta-feedback

3. **Execute Parallel Generation**
   - Launch all sub-agents simultaneously
   - Monitor progress and collect results
   - Gather meta-feedback from each sub-agent
   - Track quality metrics and improvement opportunities
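In practice the sub-agents are LLM tasks, but the fan-out/collect shape of step 3 can be sketched with a thread pool. Everything here is an assumption for illustration: each task is modeled as a zero-argument callable returning a `(content, feedback)` pair.

```python
from concurrent.futures import ThreadPoolExecutor

def run_wave(sub_agent_tasks, max_workers=5):
    """Launch sub-agent tasks simultaneously; collect outputs and meta-feedback."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(task) for task in sub_agent_tasks]
        results = [f.result() for f in futures]  # preserves submission order
    outputs = [content for content, _ in results]
    feedback = [fb for _, fb in results]
    return outputs, feedback
```

The key property mirrored from the phase above is that generation output and meta-feedback travel together and are separated only at collection time.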
### Phase 4: Meta-Level Learning and Documentation
1. **Analyze Generation Results**
   - Review all generated content for quality and uniqueness
   - Aggregate meta-feedback from sub-agents
   - Identify successful patterns and failure modes
   - Calculate performance improvements vs. previous waves

2. **Update Improvement Log**
   - Document what was learned in this wave
   - Record successful vs. unsuccessful improvements
   - Track cumulative performance metrics
   - Update meta-prompts with new patterns

3. **Self-Documentation**
   - Auto-update README.md with the latest capabilities
   - Generate performance reports in `improvement_log/`
   - Document evolved strategies for future reference
### Phase 5: Continuous Improvement Loop (for infinite mode)
1. **Prepare Next Wave**
   - Apply learnings to evolve the next wave's strategy
   - Generate new creative directions based on discovered patterns
   - Increase sophistication and meta-awareness
   - Update meta-prompts for enhanced performance

2. **Safety Guardrails**
   - Never modify the core command structure without explicit approval
   - Keep backups of all self-modifications
   - Validate that improvements don't break existing functionality
   - Log all self-improvement actions for transparency
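The "keep backups of all self-modifications" guardrail can be made concrete with a small sketch: copy any file aside before the system rewrites it. The backup directory path follows the `improvement_log/` layout described later, but is otherwise an assumption.

```python
import shutil
import time
from pathlib import Path

def backup_before_modify(path, backup_dir="improvement_log/backups"):
    """Guardrail: snapshot a file before any self-modification touches it."""
    src = Path(path)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    # Timestamped name so repeated modifications never overwrite a backup
    dest = dest_dir / f"{src.name}.{int(time.time())}.bak"
    shutil.copy2(src, dest)  # copy2 preserves metadata for traceability
    return dest
```

Calling this immediately before every write gives the transparency log a recoverable history of self-modifications.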
## Meta-Prompting Principles in Action

### Structure-Oriented Generation

Instead of giving sub-agents specific examples, provide structural frameworks:
- "Generate content following pattern: [STRUCTURE]"
- "Apply architectural principle: [ABSTRACTION]"
- "Use reasoning template: [LOGICAL_FLOW]"
### Self-Reflection Template

Each sub-agent uses this meta-prompt structure:

```
TASK: Generate [output]
REFLECTION REQUIRED:
1. What structural patterns am I using?
2. How could this generation approach be improved?
3. What meta-level insights apply to this task?
4. Rate quality: [1-10] with justification
```
### Dynamic Improvement Pattern

```
CURRENT_STRATEGY: [describe current approach]
PERFORMANCE: [metrics from last wave]
META_ANALYSIS: [what patterns emerged]
IMPROVED_STRATEGY: [evolved approach for next wave]
VALIDATION: [how to test improvement]
```
## Output Structure

The orchestrator maintains this directory structure:

```
{{output_dir}}/
├── [generated content files]
improvement_log/
├── wave_1_reflection.md
├── wave_1_proposals.md
├── wave_1_metrics.json
├── wave_2_reflection.md
└── ...
meta_prompts/
├── command_improver.md (auto-updated)
├── spec_generator.md (auto-updated)
└── sub_agent_template.md (auto-updated)
```
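A first wave has to create this layout before anything can be logged. A minimal scaffolding sketch, assuming the three top-level directories above (the `base` parameter is an addition for testability, not part of the command):

```python
from pathlib import Path

def scaffold(base, output_dir="output"):
    """Create the directory layout the orchestrator expects."""
    base = Path(base)
    for d in (base / output_dir, base / "improvement_log", base / "meta_prompts"):
        d.mkdir(parents=True, exist_ok=True)  # idempotent across waves
    return base
```

Because `mkdir(..., exist_ok=True)` is idempotent, it is safe to run at the start of every wave, including in infinite mode.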
## Success Criteria

A successful meta-level generation demonstrates:
- **Content Quality**: Generated outputs meet specification requirements
- **Meta-Awareness**: The system reflects on its own processes
- **Measurable Improvement**: Metrics show evolution across waves
- **Pattern Discovery**: New structural patterns are identified and documented
- **Safe Evolution**: Improvements are incremental and validated
- **Transparency**: All self-modifications are logged and traceable
## Integration with Other Commands

- `/improve-self` - Manually trigger self-improvement analysis
- `/generate-spec` - Create new specifications from discovered patterns
- `/evolve-strategy` - Force strategy evolution outside the normal flow
- `/self-test` - Validate current capabilities and performance
- `/self-document` - Update documentation based on the latest state
## Advanced Features

### Auto-Spec Generation
When patterns across 10+ iterations show clear themes, automatically generate new specs using /generate-spec.
### Adaptive Batch Sizing
Dynamically adjust sub-agent batch sizes based on performance metrics and available context.
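One simple policy matching this description: grow the batch when the last wave's self-ratings are high, shrink it when they are low. The thresholds and bounds below are illustrative assumptions, using the 1-10 quality scale from the self-reflection template:

```python
def next_batch_size(current, avg_quality, min_size=1, max_size=10):
    """Adapt sub-agent batch size to the mean quality rating of the last wave."""
    if avg_quality >= 8:
        current += 2            # strong wave: scale up
    elif avg_quality < 5:
        current = current // 2  # weak wave: scale down sharply
    return max(min_size, min(max_size, current))
```

The clamp keeps batch sizes within context limits even if quality swings wildly between waves; a production version would also factor in available context, as the paragraph above notes.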
### Meta-Prompt Library Evolution
Continuously update meta_prompts/ directory with successful patterns discovered during generation.
### Performance Prediction
Use historical metrics to predict optimal strategies for new specifications.
## Notes for Future Self
This command embodies recursive self-improvement: it improves the very system that generates the improvements. The meta-prompting principles ensure that improvements are structural and generalizable, not just content-specific tweaks.

Key philosophical principle: structure over content, patterns over examples, reflection over repetition.
This command was designed using meta-prompting principles from promptingguide.ai. It applies structure-oriented thinking to create a self-improving system that learns and evolves through abstract pattern recognition rather than specific example memorization.