# START HERE - Infinite Loop Variant 4

## Quick Overview

This is **Infinite Loop Variant 4: Quality Evaluation & Ranking System**.

**What it does**: Generates iterations with automated quality evaluation, ranking, and continuous improvement using the ReAct pattern (Reasoning + Acting + Observation).

**Key Innovation**: Every iteration is scored across 3 dimensions (Technical, Creativity, Compliance), then ranked and analyzed to drive quality improvement in subsequent waves.

## 5-Minute Quick Start

### 1. Understand What You Have

Read these files in order:

1. **README.md** (10 min) - Complete system overview
2. **CLAUDE.md** (5 min) - How to use with Claude Code
3. **WEB_RESEARCH_INTEGRATION.md** (5 min) - How the ReAct pattern was applied

### 2. Try a Simple Command

```bash
/project:infinite-quality specs/example_spec.md output/ 5
```

This will:

- Generate 5 iterations
- Evaluate each on Technical, Creativity, Compliance
- Rank them by composite score
- Generate a quality report

### 3. Review the Output

Check `output/quality_reports/` for:

- `evaluations/*.json` - Individual iteration scores
- `rankings/ranking_report.md` - Complete rankings
- `reports/wave_1_report.md` - Comprehensive quality analysis

## Directory Guide

```
.
├── README.md                      ← Start here for full documentation
├── CLAUDE.md                      ← Claude Code usage instructions
├── WEB_RESEARCH_INTEGRATION.md    ← How the ReAct pattern was applied
├── DELIVERABLE_CHECKLIST.md       ← Verification of completeness
├── START_HERE.md                  ← You are here
│
├── .claude/commands/              ← All commands
│   ├── infinite-quality.md        ← Main command
│   ├── evaluate.md                ← Evaluation utility
│   ├── rank.md                    ← Ranking utility
│   └── quality-report.md          ← Report generation
│
├── specs/                         ← Specifications & standards
│   ├── example_spec.md            ← Example spec with quality criteria
│   └── quality_standards.md       ← Default evaluation standards
│
├── evaluators/                    ← Evaluation logic
│   ├── technical_quality.md       ← How to score technical quality
│   ├── creativity_score.md        ← How to score creativity
│   └── spec_compliance.md         ← How to check compliance
│
├── templates/                     ← Report templates
│   └── quality_report.md          ← Quality report structure
│
└── config/                        ← Configuration
    └── scoring_weights.json       ← Customize scoring weights
```

## What Makes This Variant Special?

### 1. ReAct Pattern (Reasoning + Acting + Observation)

Every evaluation follows a cycle:

- **THOUGHT**: Reason about quality before scoring
- **ACTION**: Systematically evaluate with evidence
- **OBSERVATION**: Analyze results to inform next actions

This makes evaluation transparent, fair, and continuously improving.

### 2. Multi-Dimensional Quality

Iterations are scored on 3 dimensions:

- **Technical Quality** (35%): Code, architecture, performance, robustness
- **Creativity Score** (35%): Originality, innovation, uniqueness, aesthetic
- **Spec Compliance** (30%): Requirements, naming, structure, standards

Excellence requires balance, not just one dimension.
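
As a sketch of how the 35/35/30 weighting combines into a single composite score, a weighted sum is the simplest model (the function and key names here are illustrative, not taken from the system's actual evaluator code):

```python
# Default dimension weights, mirroring the 35/35/30 split above
# (customizable via config/scoring_weights.json).
WEIGHTS = {"technical": 0.35, "creativity": 0.35, "compliance": 0.30}

def composite_score(scores):
    """Weighted sum of the three dimension scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

# An iteration strong on creativity but weaker on compliance:
print(round(composite_score({"technical": 85, "creativity": 90, "compliance": 80}), 2))  # 85.25
```

Because no single dimension exceeds 35% of the total, an iteration that maxes out one dimension but neglects the others cannot outrank a balanced one.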

### 3. Continuous Improvement (Infinite Mode)

Each wave learns from previous waves:

- Top performers reveal success patterns
- Quality gaps drive creative directions
- Rankings identify improvement opportunities
- Strategy adapts based on observations

## Common Commands

### Generate with Quality Evaluation

```bash
# Small batch (5 iterations)
/project:infinite-quality specs/example_spec.md output/ 5

# Medium batch (20 iterations)
/project:infinite-quality specs/example_spec.md output/ 20

# Infinite mode (continuous improvement)
/project:infinite-quality specs/example_spec.md output/ infinite

# With custom scoring weights
/project:infinite-quality specs/example_spec.md output/ 10 config/scoring_weights.json
```
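
The authoritative schema for custom weights lives in `config/scoring_weights.json` itself; as a minimal sketch, a weights file matching the default 35/35/30 split might look like this (field names are illustrative assumptions, and the three values should sum to 1.0):

```json
{
  "technical_quality": 0.35,
  "creativity_score": 0.35,
  "spec_compliance": 0.30
}
```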

### Evaluate Single Iteration

```bash
# Evaluate all dimensions
/evaluate all output/iteration_001.html specs/example_spec.md

# Evaluate a specific dimension
/evaluate technical output/iteration_001.html
/evaluate creativity output/iteration_001.html
/evaluate compliance output/iteration_001.html specs/example_spec.md
```

### Rank All Iterations

```bash
# Rank by composite score
/rank output/

# Rank by a specific dimension
/rank output/ technical
/rank output/ creativity
```

### Generate Quality Report

```bash
# Report for all iterations
/quality-report output/

# Report for a specific wave (infinite mode)
/quality-report output/ 3
```

## Key Files to Read

### For Users

1. **README.md** - Complete documentation
2. **specs/example_spec.md** - See what a quality-focused spec looks like
3. **specs/quality_standards.md** - Understand the evaluation criteria

### For Developers/Customizers

1. **CLAUDE.md** - How Claude Code should use this system
2. **evaluators/*.md** - Evaluation logic details
3. **config/scoring_weights.json** - Customize scoring
4. **templates/quality_report.md** - Report structure

### For Understanding ReAct Integration

1. **WEB_RESEARCH_INTEGRATION.md** - Complete analysis of the ReAct application
2. **.claude/commands/infinite-quality.md** - See the THOUGHT-ACTION-OBSERVATION structure
3. **evaluators/technical_quality.md** - See reasoning in action

## Example Output

After running `/project:infinite-quality specs/example_spec.md output/ 5`:

```
output/
├── iteration_001.html
├── iteration_002.html
├── iteration_003.html
├── iteration_004.html
├── iteration_005.html
└── quality_reports/
    ├── evaluations/
    │   ├── iteration_001_evaluation.json
    │   ├── iteration_002_evaluation.json
    │   ├── iteration_003_evaluation.json
    │   ├── iteration_004_evaluation.json
    │   └── iteration_005_evaluation.json
    ├── rankings/
    │   ├── ranking_report.md    ← Read this for rankings
    │   └── ranking_data.json
    └── reports/
        └── wave_1_report.md     ← Read this for insights
```
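
The exact contents of each `*_evaluation.json` file are determined by the evaluators in `evaluators/`; purely as a sketch of the kind of record to expect (every field name and value here is an illustrative assumption, not the system's actual output format):

```json
{
  "iteration": "iteration_001.html",
  "scores": {
    "technical_quality": 85,
    "creativity_score": 90,
    "spec_compliance": 80
  },
  "composite_score": 85.25
}
```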

## What You'll Learn

By studying this variant, you'll learn:

1. **ReAct Pattern**: How to interleave reasoning, acting, and observation
2. **Quality Assessment**: How to evaluate multi-dimensional quality
3. **Continuous Improvement**: How observations drive strategy adaptation
4. **Evidence-Based Evaluation**: How to ground scores in concrete evidence
5. **Multi-Agent Coordination**: How to orchestrate evaluation across agents

## Next Steps

1. ✅ Read README.md for a complete overview
2. ✅ Run the example command to see the system in action
3. ✅ Review the generated quality reports
4. ✅ Read WEB_RESEARCH_INTEGRATION.md to understand ReAct
5. ✅ Customize scoring weights for your needs
6. ✅ Create your own specs with quality criteria

## Questions?

- **How is this different from the original infinite loop?**
  → Adds automated quality evaluation, ranking, and ReAct-driven improvement

- **What is ReAct?**
  → A Reasoning + Acting pattern that interleaves thought and action cycles
  → See WEB_RESEARCH_INTEGRATION.md for details

- **Can I customize evaluation criteria?**
  → Yes! Edit `specs/quality_standards.md` and `config/scoring_weights.json`

- **What's infinite mode?**
  → Continuous generation with quality-driven improvement across waves
  → Each wave learns from the previous wave's observations

- **Is this production-ready?**
  → Yes! All commands are documented and ready to use
  → An example spec is provided to get you started

## Credits

**Pattern**: Infinite Agentic Loop + ReAct Reasoning

**Web Research**: https://www.promptingguide.ai/techniques/react

**Created**: 2025-10-10

**Iteration**: 4 of the infinite loop variant progressive series

---

**Ready to start?** Open **README.md** for the full guide!