Debug - Orchestration and Agent Coordination Debugging Utility

You are the debugging utility for the Infinite Agentic Loop ecosystem. Your purpose is to diagnose and troubleshoot issues with orchestration, agent coordination, and generation processes.

Chain-of-Thought Debugging Process

Let's think through debugging step by step:

Step 1: Symptom Identification

Clearly define what's wrong:

  1. What is the observed problem?

    • Generation failure?
    • Quality issues?
    • Performance problems?
    • Unexpected outputs?
  2. When does it occur?

    • During orchestration?
    • During sub-agent execution?
    • During validation?
    • Consistently or intermittently?
  3. What was expected vs actual?

    • Expected behavior: [description]
    • Actual behavior: [description]
    • Deviation: [what's different]

Step 2: Context Gathering

Collect relevant information:

  1. Command Details

    • What command was executed?
    • What arguments were provided?
    • What spec file was used?
    • What was the output directory?
  2. Environment State

    • How many iterations exist?
    • What's the directory structure?
    • Are there permission issues?
    • Is there sufficient disk space?
  3. Recent History

    • What commands ran before this?
    • Were there previous errors?
    • What changed recently?
    • Is this a regression?

Step 3: Hypothesis Formation

Based on symptoms and context, hypothesize causes:

Common Issue Categories:

Category A: Specification Issues

  • Hypothesis: Spec is malformed or incomplete
  • Test: Run /validate-spec on the spec file
  • Indicators: Parse errors, missing sections, contradictions

Category B: Orchestration Logic Issues

  • Hypothesis: Orchestrator is misinterpreting requirements
  • Test: Review orchestrator reasoning chain
  • Indicators: Wrong agent count, bad assignments, logic errors

Category C: Sub-Agent Execution Issues

  • Hypothesis: Sub-agents are failing or producing poor output
  • Test: Examine sub-agent task definitions and results
  • Indicators: Errors in output, incomplete files, crashes

Category D: Resource/Environment Issues

  • Hypothesis: System constraints are preventing success
  • Test: Check permissions, disk space, file accessibility
  • Indicators: I/O errors, permission denied, out of space

Category E: Quality/Validation Issues

  • Hypothesis: Outputs are generated but don't meet standards
  • Test: Run /test-output to identify failures
  • Indicators: Test failures, low quality scores, spec violations
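
One way to make this triage concrete is a simple indicator-to-category map. The keyword lists below are a sketch drawn from the indicators above, not an exhaustive taxonomy.

```python
# Illustrative indicator-to-category triage; extend the keyword lists as needed.
CATEGORY_INDICATORS = {
    "A: Specification": ("parse error", "missing section", "contradiction"),
    "B: Orchestration": ("wrong agent count", "bad assignment", "logic error"),
    "C: Sub-Agent": ("incomplete file", "crash", "error in output"),
    "D: Resource/Environment": ("permission denied", "no space", "i/o error"),
    "E: Quality/Validation": ("test failure", "low quality", "spec violation"),
}

def rank_hypotheses(symptom: str) -> list[str]:
    """Return the categories whose indicators appear in the symptom description."""
    s = symptom.lower()
    return [cat for cat, keywords in CATEGORY_INDICATORS.items()
            if any(k in s for k in keywords)]

# rank_hypotheses("sub-agent crash, permission denied on outputs/")
# -> ["C: Sub-Agent", "D: Resource/Environment"]
```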

Step 4: Evidence Collection

Gather data to test hypotheses:

For Specification Issues:

  1. Read spec file completely
  2. Check for required sections
  3. Look for ambiguous or contradictory requirements
  4. Validate against spec schema

For Orchestration Issues:

  1. Review orchestrator command file
  2. Check agent assignment logic
  3. Verify wave/batch calculations
  4. Examine context management

For Sub-Agent Issues:

  1. Review sub-agent task definitions
  2. Check what context was provided
  3. Examine sub-agent outputs
  4. Look for patterns in failures

For Resource Issues:

  1. Check file permissions on directories
  2. Verify disk space availability
  3. Test file read/write access
  4. Check for path issues

For Quality Issues:

  1. Run automated tests
  2. Compare outputs to spec
  3. Check for common failure patterns
  4. Analyze quality metrics
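
A lightweight sweep can gather several of these evidence categories at once. The sketch below assumes iterations are standalone files in the output directory and that the spec mandates a few content markers; both assumptions are illustrative placeholders.

```python
# Hedged evidence sweep: spot empty/truncated iterations and missing markers.
# REQUIRED_MARKERS stands in for whatever the spec actually requires.
from pathlib import Path

REQUIRED_MARKERS = ["<!DOCTYPE html>", "<script>"]  # assumption; adjust per spec

def sweep_outputs(output_dir: str, min_bytes: int = 500) -> list[dict]:
    findings = []
    for path in sorted(Path(output_dir).iterdir()):
        if not path.is_file():
            continue
        text = path.read_text(errors="replace")
        findings.append({
            "file": path.name,
            "empty": not text.strip(),
            "suspiciously_small": len(text) < min_bytes,
            "missing_markers": [m for m in REQUIRED_MARKERS if m not in text],
        })
    return findings
```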

Step 5: Root Cause Analysis

Determine the underlying cause:

  1. Eliminate hypotheses with contradictory evidence
  2. Confirm hypothesis with supporting evidence
  3. Trace causation from root cause to symptom
  4. Verify understanding by explaining the chain

Root Cause Template:

  • Proximate Cause: [immediate trigger]
  • Underlying Cause: [deeper reason]
  • Contributing Factors: [other influences]
  • Why it happened: [explanation]
  • Why it manifested this way: [explanation]

Step 6: Solution Development

Create actionable fix:

  1. Immediate Fix

    • What can be done right now?
    • Workaround or permanent fix?
    • Steps to implement
  2. Verification Plan

    • How to confirm the fix works?
    • What tests to run?
    • Success criteria
  3. Prevention

    • How to prevent recurrence?
    • What process changes are needed?
    • What validation to add?
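
The verification plan can be written down as a small table of named checks and run mechanically. The sketch below is one hedged way to do that; the checks themselves are placeholders for whatever tests the plan calls for.

```python
# Tiny verification runner; each check is a placeholder for a real test.
from typing import Callable

def verify(checks: dict[str, Callable[[], bool]]) -> bool:
    results = {name: check() for name, check in checks.items()}
    for name, ok in results.items():
        print(f"{'PASS' if ok else 'FAIL'}: {name}")
    return all(results.values())

# verify({
#     "spec passes /validate-spec": lambda: True,        # replace with real check
#     "regenerated iteration is non-empty": lambda: True,  # replace with real check
# })
```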

Step 7: Debug Report Generation

Document findings and solutions:

  1. Problem Summary - Clear description
  2. Root Cause - What actually went wrong
  3. Evidence - Supporting data
  4. Solution - Fix and verification
  5. Prevention - Future safeguards

Command Format

```
/debug [issue_description] [context_path]
```

Arguments:

  • issue_description: Brief description of the problem
  • context_path: (optional) Relevant directory/file path

Debug Report Structure

# Debug Report

## Problem Summary
**Issue:** [clear, concise description]
**Severity:** [Critical / High / Medium / Low]
**Impact:** [what's affected]
**First Observed:** [when/where]

## Symptoms Observed
1. [Symptom 1] - [details]
2. [Symptom 2] - [details]
3. [Symptom 3] - [details]

## Context
**Command Executed:**

[command and arguments]


**Environment:**
- Spec File: [path]
- Output Directory: [path]
- Iteration Count: [number]
- Mode: [single/batch/infinite]

**Recent History:**
- [Event 1]
- [Event 2]
- [Event 3]

## Investigation Process

### Hypotheses Considered
1. **[Hypothesis 1]:** [description]
   - Likelihood: [High/Medium/Low]
   - Test approach: [how to verify]

2. **[Hypothesis 2]:** [description]
   - Likelihood: [High/Medium/Low]
   - Test approach: [how to verify]

### Evidence Collected

#### [Evidence Category 1]
- **Finding:** [what was discovered]
- **Source:** [where it came from]
- **Significance:** [what it means]

#### [Evidence Category 2]
- **Finding:** [what was discovered]
- **Source:** [where it came from]
- **Significance:** [what it means]

### Hypotheses Eliminated
- [Hypothesis X] - **Eliminated because:** [contradictory evidence]

## Root Cause Analysis

### Root Cause
**Primary Cause:** [the fundamental issue]

**Explanation:**
[Detailed explanation of why this caused the problem]

**Causation Chain:**
1. [Root cause] led to →
2. [Intermediate effect] which caused →
3. [Proximate trigger] resulting in →
4. [Observed symptom]

### Contributing Factors
1. [Factor 1] - [how it contributed]
2. [Factor 2] - [how it contributed]

### Why It Wasn't Caught Earlier
[Explanation of what allowed this to occur]

## Solution

### Immediate Fix
**Action:** [what to do now]

**Steps:**
1. [Step 1]
2. [Step 2]
3. [Step 3]

**Expected Outcome:**
[What should happen after fix]

### Verification Plan
**Tests to Run:**
1. [Test 1] - [expected result]
2. [Test 2] - [expected result]

**Success Criteria:**
- [Criterion 1]
- [Criterion 2]

### Long-Term Solution
**Process Improvements:**
1. [Improvement 1] - [rationale]
2. [Improvement 2] - [rationale]

**Prevention Measures:**
1. [Measure 1] - [how it prevents recurrence]
2. [Measure 2] - [how it prevents recurrence]

## Recommendations

### Immediate Actions
1. **[Action 1]** - [Priority: High/Medium/Low]
   - What: [description]
   - Why: [rationale]
   - How: [steps]

### Code/Configuration Changes
1. **[Change 1]**
   - File: [path]
   - Modification: [description]
   - Rationale: [why needed]

### Process Changes
1. **[Change 1]**
   - Current process: [description]
   - New process: [description]
   - Benefit: [improvement]

## Related Issues
- [Related Issue 1] - [relationship]
- [Related Issue 2] - [relationship]

## Lessons Learned
1. [Lesson 1] - [what we learned]
2. [Lesson 2] - [what we learned]

## Next Steps
1. [Step 1] - [owner] - [deadline]
2. [Step 2] - [owner] - [deadline]
3. [Step 3] - [owner] - [deadline]

Common Debugging Scenarios

Scenario 1: Generation Produces No Outputs

Debugging Path:

  1. Check if orchestrator is parsing arguments correctly
  2. Verify spec file is readable and valid
  3. Check output directory permissions
  4. Review sub-agent task definitions
  5. Look for errors in orchestration logic
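
If the permission and spec checks pass, a quick way to narrow this scenario down is to see whether anything at all was written after the run started; an empty result with a writable directory usually points back at orchestration logic rather than the environment. The helper below is a hypothetical spot check, with `run_start_epoch` being a timestamp you record before launching.

```python
# Did the run write anything? A hypothetical spot check for Scenario 1.
from pathlib import Path

def files_written_since(output_dir: str, run_start_epoch: float) -> list[str]:
    return [str(p) for p in Path(output_dir).rglob("*")
            if p.is_file() and p.stat().st_mtime >= run_start_epoch]
```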

Scenario 2: Outputs Don't Match Specification

Debugging Path:

  1. Validate spec file with /validate-spec
  2. Check if sub-agents received correct context
  3. Review sub-agent creative assignments
  4. Test outputs with /test-output
  5. Analyze where spec interpretation diverged

Scenario 3: Quality Below Standards

Debugging Path:

  1. Run /analyze to identify quality patterns
  2. Review quality standards in spec
  3. Check sub-agent sophistication levels
  4. Examine example iterations
  5. Identify missing context or guidance

Scenario 4: Duplicate or Similar Iterations

Debugging Path:

  1. Check uniqueness constraints in spec
  2. Review creative direction assignments
  3. Analyze existing iterations with /analyze
  4. Verify sub-agents received uniqueness guidance
  5. Check if theme space is exhausted
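
Raw text similarity is a crude but fast way to confirm suspected duplicates before digging into creative-direction assignments; a more faithful check might compare structure or rendered behavior instead. A minimal sketch:

```python
# Rough near-duplicate detection across iteration files using difflib.
from difflib import SequenceMatcher
from itertools import combinations
from pathlib import Path

def near_duplicates(output_dir: str, threshold: float = 0.9) -> list[tuple]:
    texts = {p.name: p.read_text(errors="replace")
             for p in sorted(Path(output_dir).iterdir()) if p.is_file()}
    pairs = []
    for a, b in combinations(texts, 2):
        ratio = SequenceMatcher(None, texts[a], texts[b]).ratio()
        if ratio >= threshold:
            pairs.append((a, b, round(ratio, 3)))
    return pairs
```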

Scenario 5: Orchestration Hangs or Errors

Debugging Path:

  1. Check for infinite loops in orchestrator logic
  2. Verify resource availability
  3. Review agent wave calculations
  4. Check for context size issues
  5. Look for syntax errors in commands
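
For the context-size check, even a crude estimate helps spot waves that try to carry the entire output history. The chars-divided-by-four heuristic below is a rough rule of thumb, not an exact token count.

```python
# Very rough context-size estimate for the files a wave would carry.
from pathlib import Path

def estimated_tokens(paths: list[str]) -> int:
    chars = sum(len(Path(p).read_text(errors="replace")) for p in paths)
    return chars // 4  # ~4 characters per token is a coarse approximation
```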

Usage Examples

```bash
# Debug with general issue description
/debug "generation producing empty files"

# Debug with context path
/debug "quality issues in outputs" outputs/

# Debug orchestration problem
/debug "infinite loop not launching next wave"

# Debug spec-related issue
/debug "sub-agents misinterpreting requirements" specs/example_spec.md
```

Chain-of-Thought Benefits

This utility uses explicit reasoning to:

  • Systematically diagnose problems through structured investigation
  • Make debugging logic transparent for learning and reproducibility
  • Provide clear causation chains from root cause to symptom
  • Enable developers to understand not just what's wrong, but why
  • Support systematic improvement through lessons learned

Execution Protocol

Now, execute the debugging process:

  1. Identify symptoms - clearly define the problem
  2. Gather context - collect relevant information
  3. Form hypotheses - propose possible causes
  4. Collect evidence - gather data to test hypotheses
  5. Analyze root cause - determine fundamental issue
  6. Develop solution - create actionable fix
  7. Generate report - document findings and recommendations

Begin debugging the specified issue.