# Improve Self - Manual Self-Improvement Utility

**Purpose:** Analyze current system performance and generate concrete improvement proposals.
## Usage

```
/improve-self [target] [depth]
```

## Parameters

- `target`: What to improve - `"commands"`, `"specs"`, `"strategy"`, or `"all"` (default: `"all"`)
- `depth`: Analysis depth - `"quick"`, `"standard"`, or `"deep"` (default: `"standard"`)
## Examples

```bash
# Quick improvement scan of all components
/improve-self all quick

# Deep analysis of command effectiveness
/improve-self commands deep

# Standard review of orchestration strategy
/improve-self strategy standard
```
## Command Implementation

You are the Self-Improvement Analyzer. Your role is to critically evaluate the infinite loop system and propose concrete improvements.
### Phase 1: Load Current State

1. **Read System Components**
   - Commands in `.claude/commands/`
   - Specifications in `specs/`
   - Improvement logs in `improvement_log/`
   - Meta-prompts in `meta_prompts/`

2. **Analyze Performance History**
   - Review recent wave metrics
   - Identify trends (improving/declining/stable)
   - Calculate success rates and quality scores
   - Extract bottleneck indicators
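The metrics pass above could be sketched as follows. This is an illustrative assumption, not a fixed schema: the wave records and the `"success"`/quality fields are hypothetical names invented for the example.

```python
# Hypothetical sketch of the Phase 1 metrics pass. The wave-record shape
# and field names ("success", quality scores) are illustrative assumptions.
from statistics import mean

def success_rate(waves):
    """Fraction of waves flagged as successful."""
    return sum(1 for w in waves if w["success"]) / len(waves)

def trend(scores, tolerance=0.02):
    """Classify a quality series as improving, declining, or stable by
    comparing the mean of the recent half against the earlier half."""
    half = len(scores) // 2
    delta = mean(scores[half:]) - mean(scores[:half])
    if delta > tolerance:
        return "improving"
    if delta < -tolerance:
        return "declining"
    return "stable"
```

The `tolerance` band keeps small fluctuations from being misread as real trends; the value here is arbitrary and would be tuned against real wave history.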
### Phase 2: Apply Meta-Prompting Analysis

1. **Structure-Oriented Review** (from meta-prompting principles)
   - Analyze command structure patterns
   - Identify abstract frameworks being used
   - Evaluate prompt syntax effectiveness
   - Check for over-reliance on specific examples

2. **Pattern Recognition**
   - What structural patterns work best?
   - Which abstract frameworks are most generalizable?
   - Where is reasoning most/least efficient?
   - What meta-level insights emerge?
### Phase 3: Generate Improvement Proposals

1. **Identify Improvement Opportunities**

   For each target area, generate proposals following this template:

   ```markdown
   ## Improvement Proposal {{N}}

   **Area:** [commands/specs/strategy/meta-prompts]
   **Current State:** [describe current approach]
   **Issue Identified:** [what's suboptimal]
   **Root Cause:** [why this issue exists]
   **Proposed Solution:** [detailed description of improvement]
   **Meta-Prompting Principle Applied:** [which principle: structure-oriented/minimal-example/abstraction/etc.]

   **Expected Impact:**
   - Quality: [+X%]
   - Efficiency: [+Y%]
   - Generalizability: [High/Medium/Low]

   **Implementation Steps:**
   1. [Concrete step 1]
   2. [Concrete step 2]
   3. [Validation method]

   **Risk Level:** [Low/Medium/High]
   **Rollback Plan:** [how to undo if the change fails]
   ```

2. **Prioritize Proposals**
   - Rank by expected impact vs. implementation effort
   - Flag high-risk proposals requiring extra validation
   - Group related proposals that should be applied together
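One way to implement the impact-vs-effort ranking is a simple score with risk as a tiebreaker. The proposal dicts and their `impact`/`effort`/`risk` fields are assumptions made for this sketch, not part of the command's interface.

```python
# Illustrative ranking helper for Phase 3. The proposal structure and
# numeric impact/effort scales are assumptions for this sketch.
def prioritize(proposals):
    """Rank proposals by expected impact per unit of implementation effort;
    among equal scores, lower-risk proposals sort first."""
    risk_penalty = {"Low": 0, "Medium": 1, "High": 2}
    return sorted(
        proposals,
        key=lambda p: (-p["impact"] / p["effort"], risk_penalty[p["risk"]]),
    )
```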
### Phase 4: Create Actionable Outputs

1. **Write Improvement Report**

   Create `improvement_log/self_improvement_{{timestamp}}.md`:

   ```markdown
   # Self-Improvement Analysis - {{timestamp}}

   ## Executive Summary
   [1-2 paragraphs on overall system health and top opportunities]

   ## Performance Metrics
   [Current metrics vs. historical baseline]

   ## Improvement Proposals
   [All proposals from Phase 3]

   ## Recommended Action Plan
   1. [Highest-priority, lowest-risk improvements]
   2. [Medium-priority improvements]
   3. [Experimental/high-risk improvements to test]

   ## Meta-Level Insights
   [Structural patterns and abstract principles discovered]
   ```

2. **Update Meta-Prompts** (if depth = `"deep"`)

   Based on the analysis, update `meta_prompts/` with new patterns:
   - Successful structural templates
   - Improved reasoning frameworks
   - Better abstraction patterns
   - Enhanced self-reflection questions
### Phase 5: Safe Implementation Path

1. **Create an Implementation Branch** (metaphorically)

   For low-risk improvements, generate:
   - Updated command/spec/strategy files in `improvement_log/proposed/`
   - Side-by-side comparisons with the current versions
   - Test cases to validate improvements
   - Rollback instructions

2. **Safety Validation**

   For each proposal, confirm:
   - ✓ Does not break existing functionality
   - ✓ Maintains backward compatibility
   - ✓ Has a clear rollback procedure
   - ✓ Benefits outweigh risks
   - ✓ Aligns with meta-prompting principles
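The validation step above amounts to an all-or-nothing gate: a proposal advances only if every checklist item passes. A minimal sketch, with check names invented to mirror the list (they are not a fixed API):

```python
# Sketch of the Phase 5 safety gate. The check identifiers mirror the
# checklist above and are illustrative, not a defined interface.
SAFETY_CHECKS = (
    "no_breakage",
    "backward_compatible",
    "has_rollback",
    "benefits_outweigh_risks",
    "meta_principle_aligned",
)

def passes_safety_validation(checks):
    """Return True only if every required check is present and True.
    A missing check counts as a failure, never a pass."""
    return all(checks.get(name, False) for name in SAFETY_CHECKS)
```

Treating a missing check as a failure is the deliberate design choice here: an unevaluated proposal should never slip through the gate by default.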
## Meta-Prompting Self-Improvement Template

This command uses meta-prompting on itself:

```
CURRENT_STRUCTURE: Self-improvement analysis command
ABSTRACTION_LEVEL: Evaluates patterns, not content

REASONING_FRAMEWORK:
1. Load state →
2. Analyze structure →
3. Generate abstract improvements →
4. Validate against principles →
5. Safe implementation

SELF_REFLECTION:
- Am I focusing on structural patterns or superficial changes?
- Are my proposals generalizable or task-specific?
- Do I minimize example-dependency in recommendations?
- Is my reasoning efficient and principle-driven?

IMPROVEMENT_TO_SELF:
[This section auto-updates when /improve-self analyzes itself]
```
## Output Files Generated

```
improvement_log/
├── self_improvement_{{timestamp}}.md
├── proposed/
│   ├── infinite-meta.md.proposed
│   ├── strategy_evolution.md.proposed
│   └── comparison_report.md
└── validation/
    └── test_results_{{timestamp}}.md
```
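The layout above could be scaffolded before an analysis run. The paths come from the tree; the timestamp format is an assumption for the sketch.

```python
# One possible way to scaffold the output layout shown above.
# Directory names come from the tree; the timestamp format is assumed.
from datetime import datetime, timezone
from pathlib import Path

def scaffold_output(root="improvement_log"):
    """Create the improvement_log directory layout and an empty report file."""
    ts = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
    base = Path(root)
    for sub in ("proposed", "validation"):
        (base / sub).mkdir(parents=True, exist_ok=True)
    report = base / f"self_improvement_{ts}.md"
    report.touch()
    return report
```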
## Success Criteria
A successful self-improvement analysis:
- Identifies 3-10 concrete improvement opportunities
- Applies meta-prompting principles (structure over content)
- Provides actionable implementation steps
- Includes risk assessments and rollback plans
- Shows meta-awareness (can improve itself)
- Generates measurable success criteria
## Integration Notes

- Can be called manually or auto-triggered after N waves
- Works with `/infinite-meta` in evolve mode
- Feeds into `/evolve-strategy` for orchestration improvements
- Generates input for `/generate-spec` when patterns emerge
## Advanced Usage

### Self-Referential Improvement

```bash
# Have the system improve its own improvement process
/improve-self commands deep

# Then apply /improve-self's recommendations to /improve-self itself
# (meta-meta-improvement)
```
### Continuous Monitoring Mode

Set up auto-improvement triggers:
- After every 10 waves in infinite mode
- When quality metrics drop below threshold
- When new patterns are detected across 20+ iterations
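The three trigger conditions could be combined into a single predicate evaluated after each wave. The state dict and threshold values below are illustrative assumptions, not part of the command's contract.

```python
# Hedged sketch of the auto-trigger rules listed above. The state fields
# and the quality_floor default are illustrative assumptions.
def should_trigger_improvement(state, quality_floor=0.7):
    """Decide whether to auto-run /improve-self after a wave."""
    return (
        state["waves_since_last_run"] >= 10        # every 10 waves
        or state["quality_score"] < quality_floor  # quality dropped below threshold
        or (state["new_patterns_detected"] and state["iterations"] >= 20)
    )
```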
This command embodies the meta-prompting principle of structure-oriented self-improvement. It analyzes patterns and frameworks rather than specific content, enabling generalizable enhancements to the entire system.