STATEFUL INFINITE AGENTIC LOOP WITH SELF-CONSISTENCY VALIDATION
This variant implements robust state management with self-consistency validation to ensure reliable, resumable, and duplicate-free infinite-loop execution. State is persisted across sessions, validated by sampling multiple independent checks, and enables graceful recovery from interruptions.
USAGE:
/infinite-stateful <spec_path> <output_dir> <count> [url_strategy_path] [run_id]
PARAMETERS:
- spec_path: Path to specification file
- output_dir: Directory for generated content
- count: Number of iterations (or "infinite")
- url_strategy_path: (Optional) URL strategy JSON
- run_id: (Optional) Resume existing run by ID
EXAMPLES:
# New run with 5 iterations
/infinite-stateful specs/example_spec.md outputs 5
# New run with URL strategy
/infinite-stateful specs/web_spec.md outputs 10 specs/url_strategy.json
# Resume interrupted run
/infinite-stateful specs/example_spec.md outputs infinite specs/url_strategy.json run_20250310_143022
# Infinite mode
/infinite-stateful specs/example_spec.md outputs infinite
PHASE 0: STATE INITIALIZATION & RECOVERY
Step 0.1: Load or Create Run State
FIRST, determine if this is a new run or resume:
import json, os
from datetime import datetime

state_dir = ".claude/state"
os.makedirs(state_dir, exist_ok=True)

# Generate or load run ID (the run_id already carries the "run_" prefix,
# so the state filename uses it directly)
if run_id_provided:
    run_id = provided_run_id
    state_file = f"{state_dir}/{run_id}.json"
    if not os.path.exists(state_file):
        ERROR: "Run ID {run_id} not found. Available runs: [list .claude/state/run_*.json]"
else:
    run_id = datetime.now().strftime("run_%Y%m%d_%H%M%S")
    state_file = f"{state_dir}/{run_id}.json"
Step 0.2: Initialize State Structure
Create or load state with self-consistency validation:
{
  "run_id": "run_20250310_143022",
  "spec_path": "specs/example_spec.md",
  "output_dir": "outputs",
  "total_count": 10,
  "url_strategy_path": "specs/url_strategy.json",
  "status": "in_progress",
  "created_at": "2025-03-10T14:30:22Z",
  "updated_at": "2025-03-10T14:35:10Z",
  "completed_iterations": 3,
  "failed_iterations": 0,
  "iterations": [
    {
      "number": 1,
      "status": "completed",
      "output_file": "outputs/iteration_1.html",
      "web_url": "https://example.com/tutorial",
      "started_at": "2025-03-10T14:30:25Z",
      "completed_at": "2025-03-10T14:31:40Z",
      "validation_hash": "abc123...",
      "metadata": {}
    }
  ],
  "used_urls": [
    "https://example.com/tutorial",
    "https://example.com/advanced"
  ],
  "validation": {
    "last_check": "2025-03-10T14:35:10Z",
    "consistency_score": 1.0,
    "issues": []
  }
}
Step 0.3: State Consistency Validation
Apply self-consistency principle with multiple validation checks:
Validation Check 1: File System Consistency
import glob, os

# Sample multiple verification approaches
validations = []

# Check 1: Count files in output directory
file_count = len(glob.glob(f"{output_dir}/*"))
state_count = len([i for i in state["iterations"] if i["status"] == "completed"])
validations.append({"method": "file_count", "match": file_count == state_count})

# Check 2: Verify each iteration file exists
for iteration in state["iterations"]:
    if iteration["status"] == "completed":
        exists = os.path.exists(iteration["output_file"])
        validations.append({"method": "file_exists", "iteration": iteration["number"], "match": exists})

# Check 3: URL uniqueness
url_list = state["used_urls"]
validations.append({"method": "url_uniqueness", "match": len(set(url_list)) == len(url_list)})

# Majority voting: consistency achieved if at least 80% of checks pass
consistency_score = sum(v["match"] for v in validations) / len(validations)
Self-Consistency Decision:
- If consistency_score >= 0.8: State is valid, proceed
- If 0.5 <= consistency_score < 0.8: State has warnings, prompt user
- If consistency_score < 0.5: State corrupted, offer to reset or manual fix
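The decision rules above can be sketched as a small helper. The threshold values (0.8, 0.5) come from this step; the action names are illustrative, not part of the command spec:

```python
def consistency_action(score: float) -> str:
    """Map a consistency score to the action described in Step 0.3."""
    if score >= 0.8:
        return "proceed"        # state is valid
    if score >= 0.5:
        return "prompt_user"    # warnings: ask before continuing
    return "offer_reset"        # corrupted: reset or manual fix
```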
Step 0.4: Recovery Strategy
If resuming:
- Load completed iteration count
- Skip completed iterations
- Continue from next iteration number
- Preserve used URL list to prevent duplicates
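A minimal resume sketch, assuming the state schema from Step 0.2: the continuation point is derived from the completed count, and the used-URL list is carried forward as a set for fast duplicate checks.

```python
import json

def load_resume_point(state_file: str):
    """Load persisted state and compute where the run continues."""
    with open(state_file) as f:
        state = json.load(f)
    next_iteration = state["completed_iterations"] + 1
    used_urls = set(state["used_urls"])  # preserved to prevent duplicates
    return state, next_iteration, used_urls
```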
PHASE 1: SPECIFICATION ANALYSIS
Step 1.1: Read Specification
Read spec_path and extract:
- Output requirements
- Naming patterns
- Content structure
- Quality standards
- Web integration strategy (if URL strategy provided)
Step 1.2: Analyze URL Strategy (if provided)
{
"foundation": ["url1", "url2"],
"intermediate": ["url3", "url4"],
"advanced": ["url5", "url6"],
"expert": ["url7", "url8"]
}
Map iterations to difficulty levels based on progress.
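One possible mapping, sketched here as an assumption (the spec does not fix the cutoffs): assign each iteration a level by its progress fraction through the run. The four level names come from the URL-strategy schema above.

```python
def difficulty_for(iteration: int, total: int) -> str:
    """Map an iteration number to a difficulty level by progress fraction."""
    levels = ["foundation", "intermediate", "advanced", "expert"]
    progress = (iteration - 1) / max(total, 1)
    return levels[min(int(progress * len(levels)), len(levels) - 1)]
```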
PHASE 2: DIRECTORY RECONNAISSANCE
Step 2.1: Scan Existing Output
import glob

# List existing files in output_dir
existing_files = glob.glob(f"{output_dir}/*")

# Determine next iteration number
# (extract_number parses the trailing iteration number from a filename)
if resuming:
    next_iteration = state["completed_iterations"] + 1
else:
    next_iteration = max([extract_number(f) for f in existing_files], default=0) + 1
Step 2.2: Update State Tracking
Record existing files in state if not already tracked:
- Validate file hashes
- Add missing entries to state.iterations
- Mark as "completed" with "recovered" flag
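The recovery step above can be sketched as follows. The filename convention (trailing iteration number, e.g. `iteration_7.html`) and the SHA-256 validation hash are assumptions consistent with the examples in Step 0.2:

```python
import glob, hashlib, os, re

def recover_untracked(state: dict, output_dir: str) -> None:
    """Add untracked output files to state, marked with a 'recovered' flag."""
    tracked = {i["output_file"] for i in state["iterations"]}
    for path in sorted(glob.glob(f"{output_dir}/*")):
        if path in tracked:
            continue
        m = re.search(r"(\d+)\.\w+$", os.path.basename(path))
        if not m:
            continue  # skip files that don't follow the naming pattern
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        state["iterations"].append({
            "number": int(m.group(1)),
            "status": "completed",
            "recovered": True,        # mark as recovered, per Step 2.2
            "output_file": path,
            "validation_hash": digest,
        })
```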
PHASE 3: ITERATION PLANNING WITH STATE AWARENESS
Step 3.1: Determine Iteration Range
if count == "infinite":
    iterations_to_generate = "continuous"  # Wave-based approach
else:
    total_needed = int(count)
    already_completed = state["completed_iterations"]
    iterations_to_generate = total_needed - already_completed
Step 3.2: URL Assignment with Deduplication
# Filter out used URLs from the strategy
available_urls = []
for level, urls in url_strategy.items():
    for url in urls:
        if url not in state["used_urls"]:
            available_urls.append({"url": url, "level": level})

# Assign URLs to new iterations
for i in range(iterations_to_generate):
    iteration_num = next_iteration + i
    # Select URL from the appropriate difficulty level
    if i < len(available_urls):
        assigned_url = available_urls[i]["url"]
    else:
        # Fall back to web search for new URLs
        assigned_url = f"SEARCH: [domain-specific query {i}]"
Step 3.3: Self-Consistency Planning
Create multiple planning samples to ensure consistency:
Sample 1: Conservative approach (smaller batches)
Sample 2: Balanced approach (medium batches)
Sample 3: Aggressive approach (larger batches)
Apply majority voting to select batch size and wave strategy.
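The majority vote over planning samples can be sketched as follows; the sample batch sizes are illustrative, and ties are resolved conservatively toward the smaller batch:

```python
from collections import Counter

def vote_batch_size(samples: list[int]) -> int:
    """Pick the most frequently proposed batch size; ties go to the smallest."""
    counts = Counter(samples)
    top = max(counts.values())
    return min(size for size, c in counts.items() if c == top)
```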
PHASE 4: PARALLEL AGENT COORDINATION WITH STATE UPDATES
Step 4.1: Batch Generation Strategy
# Determine batch size based on remaining count
if iterations_to_generate <= 3:
    batch_size = iterations_to_generate
elif iterations_to_generate <= 10:
    batch_size = 3
else:
    batch_size = 5
Step 4.2: Deploy Sub-Agents with State Context
For each batch of iterations:
**SUB-AGENT {iteration_num} TASK:**
**RUN CONTEXT:**
- Run ID: {run_id}
- Iteration: {iteration_num} of {total_count}
- Resuming: {is_resume}
- Previous completions: {completed_count}
**WEB RESEARCH:**
{if url_assigned}
- URL: {assigned_url}
- Level: {difficulty_level}
- Mission: {research_focus}
{else}
- Search: {search_query}
- Extract: {techniques_needed}
{endif}
**OUTPUT REQUIREMENTS:**
- File: {output_dir}/{naming_pattern}_{iteration_num}.{extension}
- Must be unique from: {list_previous_outputs}
- Follow spec: {spec_path}
**STATE UPDATE REQUIRED:**
After completion, you MUST update state:
1. Mark iteration {iteration_num} as completed
2. Add output file path
3. Record web URL used
4. Update timestamp
5. Compute validation hash
**EXECUTION:**
1. Fetch and learn from web URL
2. Generate unique output following spec
3. Validate output quality
4. Update run state file: {state_file}
5. Return: iteration number, output file, web URL
Step 4.3: Parallel Execution with State Tracking
Use Task tool to deploy all sub-agents in batch simultaneously:
Task 1: Sub-agent for iteration {next_iteration}
Task 2: Sub-agent for iteration {next_iteration + 1}
Task 3: Sub-agent for iteration {next_iteration + 2}
...
Each task includes:
- Full context (spec, existing work, state)
- Unique URL assignment
- State update instructions
- Validation requirements
Step 4.4: Incremental State Updates
After each batch completes:
# Load current state
state = load_state(state_file)

# Update completion count
state["completed_iterations"] = next_iteration + batch_size - 1

# Add iteration records
for result in batch_results:
    state["iterations"].append({
        "number": result["iteration_num"],
        "status": "completed",
        "output_file": result["output_file"],
        "web_url": result["web_url"],
        "completed_at": datetime.now().isoformat(),
        "validation_hash": compute_hash(result["output_file"])
    })

# Update used URLs
state["used_urls"].extend([r["web_url"] for r in batch_results])

# Update timestamp
state["updated_at"] = datetime.now().isoformat()

# Save state
save_state(state_file, state)
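The compute_hash and verify_hash helpers used above and in Phase 6 are not defined in this document; a plausible SHA-256 sketch:

```python
import hashlib

def compute_hash(path: str) -> str:
    """SHA-256 digest of a file's contents, stored as validation_hash."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify_hash(path: str, expected: str) -> bool:
    """Check a file against its recorded validation hash."""
    return compute_hash(path) == expected
```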
Step 4.5: Inter-Batch State Consistency Check
Between batches, apply self-consistency validation:
# Sample multiple consistency checks
checks = [
    verify_file_count(),
    verify_url_uniqueness(),
    verify_iteration_sequence(),
    verify_output_files_exist(),
    verify_state_schema()
]
consistency_score = sum(checks) / len(checks)

if consistency_score < 0.8:
    # State inconsistency detected
    PAUSE and report issues
    Offer: continue, fix, or abort
PHASE 5: WAVE MANAGEMENT FOR INFINITE MODE
Step 5.1: Wave Completion Check
After batch completes:
if count == "infinite":
    # Check context usage
    if context_usage > 0.85:
        STOP: "Context limit approaching. Run ID: {run_id}. Resume with same run_id."
    else:
        # Continue with the next wave
        next_iteration += batch_size
        goto PHASE 3
Step 5.2: Graceful Interruption Handling
If interrupted at any point:
- State file has last successful batch
- Run can be resumed with same run_id
- No duplicate URLs or iterations
- Clear continuation point
PHASE 6: FINAL STATE VALIDATION & REPORTING
Step 6.1: Comprehensive Self-Consistency Validation
Apply multiple validation approaches (self-consistency principle):
# Validation Sample 1: File System Approach
expected_count = state["completed_iterations"]
actual_count = count_output_files(output_dir)
validation_1 = {
    "method": "file_system",
    "expected_count": expected_count,
    "actual_count": actual_count,
    "match": expected_count == actual_count
}

# Validation Sample 2: State Record Approach
completed_in_state = len([i for i in state["iterations"] if i["status"] == "completed"])
validation_2 = {
    "method": "state_records",
    "completed_in_state": completed_in_state,
    "match": completed_in_state == state["completed_iterations"]
}

# Validation Sample 3: URL Uniqueness Approach
total_urls = len(state["used_urls"])
unique_urls = len(set(state["used_urls"]))
validation_3 = {
    "method": "url_deduplication",
    "total_urls": total_urls,
    "unique_urls": unique_urls,
    "match": total_urls == unique_urls
}

# Validation Sample 4: Hash Verification Approach
all_files_verified = all(
    verify_hash(i["output_file"], i["validation_hash"])
    for i in state["iterations"] if i["status"] == "completed"
)
validation_4 = {
    "method": "file_integrity",
    "all_files_verified": all_files_verified,
    "match": all_files_verified
}

# Majority voting on consistency
validations = [validation_1, validation_2, validation_3, validation_4]
consistency_score = sum(v["match"] for v in validations) / len(validations)
Step 6.2: Final State Update
state["status"] = "completed" if count != "infinite" else "paused"
state["validation"]["last_check"] = datetime.now().isoformat()
state["validation"]["consistency_score"] = consistency_score
state["validation"]["checks"] = validations
save_state(state_file, state)
Step 6.3: Generate Report
**STATEFUL INFINITE LOOP REPORT**
**Run Information:**
- Run ID: {run_id}
- Specification: {spec_path}
- Output Directory: {output_dir}
- Status: {status}
**Execution Summary:**
- Total Iterations: {completed_iterations}
- Failed Iterations: {failed_iterations}
- Duration: {end_time - start_time}
- Web URLs Used: {len(used_urls)}
**State Consistency Validation:**
- Consistency Score: {consistency_score} ({consistency_score * 100}%)
- File System Check: {"PASS" if validation_1["match"] else "FAIL"}
- State Records Check: {"PASS" if validation_2["match"] else "FAIL"}
- URL Uniqueness Check: {"PASS" if validation_3["match"] else "FAIL"}
- File Integrity Check: {"PASS" if validation_4["match"] else "FAIL"}
**State Management:**
- State File: {state_file}
- State Size: {file_size(state_file)}
- Last Updated: {state["updated_at"]}
- Resumable: Yes (use run_id: {run_id})
**Outputs:**
{for iteration in state["iterations"]:}
- Iteration {iteration["number"]}: {iteration["output_file"]}
- Web Source: {iteration["web_url"]}
- Completed: {iteration["completed_at"]}
{endfor}
**Resume Command:**
/infinite-stateful {spec_path} {output_dir} {count} {url_strategy_path} {run_id}
**Related Commands:**
- View status: /status {run_id}
- Reset state: /reset-state {run_id}
EXECUTION PRINCIPLES:
Self-Consistency Validation:
- Apply multiple independent validation approaches
- Use majority voting to determine state validity
- Validate at initialization, between batches, and at completion
- Consistency score >= 0.8 required for reliable operation
State Persistence:
- Save state after every batch completion
- Atomic writes to prevent corruption
- State file is single source of truth
- All state changes are timestamped
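The atomic write called for above can be sketched as follows, assuming the JSON state file from Step 0.2: write to a temporary file in the same directory, then os.replace() it into place, so a reader never observes a half-written state file even if the process dies mid-write.

```python
import json, os, tempfile

def save_state(state_file: str, state: dict) -> None:
    """Atomically persist state: temp file in the same directory, then rename."""
    d = os.path.dirname(state_file) or "."
    fd, tmp = tempfile.mkstemp(dir=d, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f, indent=2)
    os.replace(tmp, state_file)  # atomic rename on POSIX and Windows
```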
Deduplication Guarantee:
- Track all used URLs in state
- Filter URL strategy against used URLs
- Fallback to web search for new unique URLs
- Prevent iteration number collisions
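Exact string matching can miss URL variations (http vs https, host case, trailing slashes); a normalization sketch using only the standard library — the specific rules here are assumptions, not part of the command spec:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_url(url: str) -> str:
    """Canonicalize a URL before deduplication against used_urls."""
    parts = urlsplit(url.strip())
    scheme = "https" if parts.scheme in ("http", "https") else parts.scheme
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((scheme, parts.netloc.lower(), path, parts.query, ""))
```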
Resume Capability:
- Any run can be resumed by run_id
- State tracks exact progress point
- Graceful handling of interruptions
- No re-generation of completed iterations
Failure Resilience:
- State isolated from file system errors
- Failed iterations tracked separately
- Can retry failed iterations
- State consistency checks prevent corruption propagation
Context Optimization:
- Monitor context usage throughout execution
- Pause before context limits
- State enables exact continuation point
- No context waste on completed work
ULTRA-THINKING DIRECTIVE:
Before execution, consider:
State Management Strategy:
- Is the state schema comprehensive enough?
- How to handle concurrent access (if multiple runs)?
- What metadata is essential vs. optional?
- How to migrate state schema in future versions?
Self-Consistency Implementation:
- Which validation checks provide highest confidence?
- What consistency score threshold is appropriate?
- How to handle partial consistency (warnings)?
- Should consistency checks be configurable?
Resume Reliability:
- What edge cases could break resume functionality?
- How to verify state integrity before resume?
- What if output directory was manually modified?
- How to handle schema changes between runs?
URL Deduplication:
- How to handle URL variations (http vs https, trailing slash)?
- What if URL content changes between runs?
- Should URL effectiveness be tracked?
- How to handle URLs that become unavailable?
Error Recovery:
- What constitutes a failed iteration vs. interrupted?
- Should failed iterations be retried automatically?
- How to clean up partial outputs from failures?
- What state changes need rollback on errors?
Performance vs. Reliability:
- How often to save state (after each iteration or batch)?
- Should state validation be asynchronous?
- What's the overhead of consistency checks?
- How to balance safety with execution speed?
Apply self-consistency principle: Generate multiple execution strategies, validate consistency across approaches, select majority-voted plan for highest reliability.