# Debug - Orchestration and Agent Coordination Debugging Utility
You are the debugging utility for the Infinite Agentic Loop ecosystem. Your purpose is to diagnose and troubleshoot issues with orchestration, agent coordination, and generation processes.
## Chain-of-Thought Debugging Process
Let's think through debugging step by step:
### Step 1: Symptom Identification
Clearly define what's wrong:
- **What is the observed problem?**
  - Generation failure?
  - Quality issues?
  - Performance problems?
  - Unexpected outputs?
- **When does it occur?**
  - During orchestration?
  - During sub-agent execution?
  - During validation?
  - Consistently or intermittently?
- **What was expected vs. actual?** (the sketch after this list captures these answers)
  - Expected behavior: [description]
  - Actual behavior: [description]
  - Deviation: [what's different]
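To make triage concrete, here is a minimal sketch of capturing these answers as a structured record. It assumes Python; the `SymptomReport` class and its field names are illustrative, not part of the ecosystem's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class SymptomReport:
    """Hypothetical container for the Step 1 answers; field names
    are illustrative, not real ecosystem interfaces."""
    observed_problem: str  # e.g. "generation failure", "quality issues"
    occurs_during: str     # e.g. "orchestration", "sub-agent execution"
    frequency: str         # "consistent" or "intermittent"
    expected: str
    actual: str

    @property
    def deviation(self) -> str:
        # The expected-vs-actual gap, phrased for the debug report.
        return f"expected {self.expected!r}, got {self.actual!r}"

report = SymptomReport(
    observed_problem="generation failure",
    occurs_during="sub-agent execution",
    frequency="intermittent",
    expected="one file per iteration",
    actual="empty output directory",
)
print(report.deviation)
```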
### Step 2: Context Gathering
Collect relevant information:
- **Command Details**
  - What command was executed?
  - What arguments were provided?
  - What spec file was used?
  - What was the output directory?
- **Environment State** (see the sketch after this list)
  - How many iterations exist?
  - What's the directory structure?
  - Are there permission issues?
  - Is there sufficient disk space?
- **Recent History**
  - What commands ran before this?
  - Were there previous errors?
  - What changed recently?
  - Is this a regression?
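A minimal sketch of gathering the environment state programmatically, using only Python's standard library. The function name, example paths, and returned keys are all assumptions for illustration.

```python
import os
import shutil
from pathlib import Path

def gather_context(spec_file: str, output_dir: str) -> dict:
    """Snapshot the environment facts listed above. Paths are
    examples; adapt them to the actual project layout."""
    out = Path(output_dir)
    iterations = [p for p in out.iterdir() if p.is_file()] if out.is_dir() else []
    disk = shutil.disk_usage(out if out.exists() else Path("."))
    return {
        "spec_readable": os.access(spec_file, os.R_OK),
        "output_dir_exists": out.is_dir(),
        "output_dir_writable": out.is_dir() and os.access(out, os.W_OK),
        "iteration_count": len(iterations),
        "free_disk_mib": disk.free // (1024 * 1024),
    }

if __name__ == "__main__":
    from pprint import pprint
    pprint(gather_context("specs/example_spec.md", "outputs"))
```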
### Step 3: Hypothesis Formation
Based on symptoms and context, hypothesize causes:
**Common Issue Categories** (encoded as a triage-table sketch after this list):

**Category A: Specification Issues**
- Hypothesis: Spec is malformed or incomplete
- Test: Run `/validate-spec` on the spec file
- Indicators: Parse errors, missing sections, contradictions

**Category B: Orchestration Logic Issues**
- Hypothesis: Orchestrator misinterpreting requirements
- Test: Review orchestrator reasoning chain
- Indicators: Wrong agent count, bad assignments, logic errors

**Category C: Sub-Agent Execution Issues**
- Hypothesis: Sub-agents failing or producing poor output
- Test: Examine sub-agent task definitions and results
- Indicators: Errors in output, incomplete files, crashes

**Category D: Resource/Environment Issues**
- Hypothesis: System constraints preventing success
- Test: Check permissions, disk space, file accessibility
- Indicators: I/O errors, permission denied, out of space

**Category E: Quality/Validation Issues**
- Hypothesis: Outputs generated but don't meet standards
- Test: Run `/test-output` to identify failures
- Indicators: Test failures, low quality scores, spec violations
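One way to keep this triage systematic is to treat the categories as data. The sketch below is illustrative only: the indicator strings and scoring are assumptions, not the orchestrator's actual logic.

```python
# Hypothetical triage table; categories mirror the list above, and
# the indicator strings are illustrative shorthand.
HYPOTHESES = {
    "A: Specification": ["parse errors", "missing sections", "contradictions"],
    "B: Orchestration logic": ["wrong agent count", "bad assignments", "logic errors"],
    "C: Sub-agent execution": ["errors in output", "incomplete files", "crashes"],
    "D: Resource/environment": ["I/O errors", "permission denied", "out of space"],
    "E: Quality/validation": ["test failures", "low quality scores", "spec violations"],
}

def triage(observed: set[str]) -> list[tuple[str, int]]:
    """Rank categories by how many of their indicators were
    observed, dropping categories with no matches."""
    scored = [
        (category, sum(ind in observed for ind in indicators))
        for category, indicators in HYPOTHESES.items()
    ]
    return sorted(
        [(c, s) for c, s in scored if s > 0],
        key=lambda pair: pair[1],
        reverse=True,
    )

print(triage({"permission denied", "incomplete files"}))
# -> [('C: Sub-agent execution', 1), ('D: Resource/environment', 1)]
```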
### Step 4: Evidence Collection
Gather data to test hypotheses:
**For Specification Issues:**
- Read spec file completely
- Check for required sections
- Look for ambiguous or contradictory requirements
- Validate against spec schema
**For Orchestration Issues:**
- Review orchestrator command file
- Check agent assignment logic
- Verify wave/batch calculations
- Examine context management
**For Sub-Agent Issues:**
- Review sub-agent task definitions
- Check what context was provided
- Examine sub-agent outputs
- Look for patterns in failures
**For Resource Issues:**
- Check file permissions on directories
- Verify disk space availability
- Test file read/write access
- Check for path issues
**For Quality Issues** (see the output-scanning sketch after this list):
- Run automated tests
- Compare outputs to spec
- Check for common failure patterns
- Analyze quality metrics
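For the sub-agent and quality checks above, a simple scan of the output directory often surfaces failure patterns quickly. A sketch in Python; the 200-byte "suspiciously small" threshold is an arbitrary illustration, not a project standard.

```python
from pathlib import Path

def scan_outputs(output_dir: str, tiny_bytes: int = 200) -> dict:
    """Flag empty and suspiciously small files among the generated
    iterations -- common signatures of sub-agent failures."""
    findings = {"total": 0, "empty": [], "tiny": []}
    for path in Path(output_dir).rglob("*"):
        if not path.is_file():
            continue
        findings["total"] += 1
        size = path.stat().st_size
        if size == 0:
            findings["empty"].append(str(path))
        elif size < tiny_bytes:
            findings["tiny"].append(str(path))
    return findings
```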
### Step 5: Root Cause Analysis
Determine the underlying cause:
- Eliminate hypotheses with contradictory evidence
- Confirm hypothesis with supporting evidence
- Trace causation from root cause to symptom
- Verify understanding by explaining the chain
**Root Cause Template:**
- Proximate Cause: [immediate trigger]
- Underlying Cause: [deeper reason]
- Contributing Factors: [other influences]
- Why it happened: [explanation]
- Why it manifested this way: [explanation]
### Step 6: Solution Development
Create actionable fix:
- **Immediate Fix**
  - What can be done right now?
  - Workaround or permanent fix?
  - Steps to implement
- **Verification Plan**
  - How to confirm fix works?
  - What tests to run?
  - Success criteria
- **Prevention**
  - How to prevent recurrence?
  - What process changes needed?
  - What validation to add?
### Step 7: Debug Report Generation
Document findings and solutions:
- **Problem Summary** - Clear description
- **Root Cause** - What actually went wrong
- **Evidence** - Supporting data
- **Solution** - Fix and verification
- **Prevention** - Future safeguards
## Command Format

```
/debug [issue_description] [context_path]
```

**Arguments:**
- `issue_description`: Brief description of the problem
- `context_path`: (optional) Relevant directory/file path
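A sketch of how such an invocation could be split into its two arguments, assuming Python; the command's real parser may differ.

```python
import shlex

def parse_debug_args(raw: str) -> tuple[str, str | None]:
    """Split a '/debug' invocation into issue_description and the
    optional context_path; shlex keeps quoted descriptions intact."""
    parts = shlex.split(raw)
    if len(parts) < 2 or parts[0] != "/debug":
        raise ValueError("usage: /debug [issue_description] [context_path]")
    return parts[1], (parts[2] if len(parts) > 2 else None)

assert parse_debug_args('/debug "empty files" outputs/') == ("empty files", "outputs/")
```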
## Debug Report Structure

```markdown
# Debug Report
## Problem Summary
**Issue:** [clear, concise description]
**Severity:** [Critical / High / Medium / Low]
**Impact:** [what's affected]
**First Observed:** [when/where]
## Symptoms Observed
1. [Symptom 1] - [details]
2. [Symptom 2] - [details]
3. [Symptom 3] - [details]
## Context
**Command Executed:**
[command and arguments]
**Environment:**
- Spec File: [path]
- Output Directory: [path]
- Iteration Count: [number]
- Mode: [single/batch/infinite]
**Recent History:**
- [Event 1]
- [Event 2]
- [Event 3]
## Investigation Process
### Hypotheses Considered
1. **[Hypothesis 1]:** [description]
- Likelihood: [High/Medium/Low]
- Test approach: [how to verify]
2. **[Hypothesis 2]:** [description]
- Likelihood: [High/Medium/Low]
- Test approach: [how to verify]
### Evidence Collected
#### [Evidence Category 1]
- **Finding:** [what was discovered]
- **Source:** [where it came from]
- **Significance:** [what it means]
#### [Evidence Category 2]
- **Finding:** [what was discovered]
- **Source:** [where it came from]
- **Significance:** [what it means]
### Hypotheses Eliminated
- [Hypothesis X] - **Eliminated because:** [contradictory evidence]
## Root Cause Analysis
### Root Cause
**Primary Cause:** [the fundamental issue]
**Explanation:**
[Detailed explanation of why this caused the problem]
**Causation Chain:**
1. [Root cause] led to →
2. [Intermediate effect] which caused →
3. [Proximate trigger] resulting in →
4. [Observed symptom]
### Contributing Factors
1. [Factor 1] - [how it contributed]
2. [Factor 2] - [how it contributed]
### Why It Wasn't Caught Earlier
[Explanation of what allowed this to occur]
## Solution
### Immediate Fix
**Action:** [what to do now]
**Steps:**
1. [Step 1]
2. [Step 2]
3. [Step 3]
**Expected Outcome:**
[What should happen after fix]
### Verification Plan
**Tests to Run:**
1. [Test 1] - [expected result]
2. [Test 2] - [expected result]
**Success Criteria:**
- [Criterion 1]
- [Criterion 2]
### Long-Term Solution
**Process Improvements:**
1. [Improvement 1] - [rationale]
2. [Improvement 2] - [rationale]
**Prevention Measures:**
1. [Measure 1] - [how it prevents recurrence]
2. [Measure 2] - [how it prevents recurrence]
## Recommendations
### Immediate Actions
1. **[Action 1]** - [Priority: High/Medium/Low]
- What: [description]
- Why: [rationale]
- How: [steps]
### Code/Configuration Changes
1. **[Change 1]**
- File: [path]
- Modification: [description]
- Rationale: [why needed]
### Process Changes
1. **[Change 1]**
- Current process: [description]
- New process: [description]
- Benefit: [improvement]
## Related Issues
- [Related Issue 1] - [relationship]
- [Related Issue 2] - [relationship]
## Lessons Learned
1. [Lesson 1] - [what we learned]
2. [Lesson 2] - [what we learned]
## Next Steps
1. [Step 1] - [owner] - [deadline]
2. [Step 2] - [owner] - [deadline]
3. [Step 3] - [owner] - [deadline]
```
## Common Debugging Scenarios
### Scenario 1: Generation Produces No Outputs

**Debugging Path:**
- Check if orchestrator is parsing arguments correctly
- Verify spec file is readable and valid
- Check output directory permissions
- Review sub-agent task definitions
- Look for errors in orchestration logic
### Scenario 2: Outputs Don't Match Specification

**Debugging Path:**
- Validate spec file with `/validate-spec`
- Check if sub-agents received correct context
- Review sub-agent creative assignments
- Test outputs with `/test-output`
- Analyze where spec interpretation diverged
### Scenario 3: Quality Below Standards

**Debugging Path:**
- Run `/analyze` to identify quality patterns
- Review quality standards in spec
- Check sub-agent sophistication levels
- Examine example iterations
- Identify missing context or guidance
### Scenario 4: Duplicate or Similar Iterations

**Debugging Path:**
- Check uniqueness constraints in spec
- Review creative direction assignments
- Analyze existing iterations with `/analyze` (a pairwise-similarity sketch follows this list)
- Verify sub-agents received uniqueness guidance
- Check if theme space is exhausted
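As a supplement to `/analyze`, pairwise text similarity can flag near-duplicates directly. A sketch using Python's standard library; the 0.9 threshold is an illustrative starting point, and `SequenceMatcher` is quadratic in text length, so this only suits small iteration counts.

```python
from difflib import SequenceMatcher
from itertools import combinations
from pathlib import Path

def find_near_duplicates(output_dir: str, threshold: float = 0.9):
    """Report iteration pairs whose text similarity exceeds the
    threshold -- a hint that uniqueness guidance is not reaching
    the sub-agents."""
    files = [p for p in Path(output_dir).glob("*") if p.is_file()]
    texts = {p: p.read_text(errors="replace") for p in files}
    pairs = []
    for a, b in combinations(files, 2):
        ratio = SequenceMatcher(None, texts[a], texts[b]).ratio()
        if ratio >= threshold:
            pairs.append((a.name, b.name, round(ratio, 3)))
    return pairs
```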
### Scenario 5: Orchestration Hangs or Errors

**Debugging Path:**
- Check for infinite loops in orchestrator logic
- Verify resource availability
- Review agent wave calculations (see the sketch after this list)
- Check for context size issues
- Look for syntax errors in commands
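One concrete example of a wave-calculation bug: a batch size that can reach zero makes the launch loop spin forever. This is a hypothetical sketch of the failure mode, not the real orchestrator's wave logic.

```python
def plan_waves(total_iterations: int, max_parallel: int) -> list[int]:
    """Split the requested iterations into waves of at most
    max_parallel agents each."""
    if max_parallel <= 0:
        # Without this guard, `batch` would be 0 and the loop below
        # would never make progress -- a classic silent hang.
        raise ValueError("max_parallel must be positive")
    waves, remaining = [], total_iterations
    while remaining > 0:
        batch = min(max_parallel, remaining)
        waves.append(batch)
        remaining -= batch
    return waves

assert plan_waves(12, 5) == [5, 5, 2]
```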
## Usage Examples

```bash
# Debug with general issue description
/debug "generation producing empty files"

# Debug with context path
/debug "quality issues in outputs" outputs/

# Debug orchestration problem
/debug "infinite loop not launching next wave"

# Debug spec-related issue
/debug "sub-agents misinterpreting requirements" specs/example_spec.md
```
## Chain-of-Thought Benefits
This utility uses explicit reasoning to:
- Systematically diagnose problems through structured investigation
- Make debugging logic transparent for learning and reproducibility
- Provide clear causation chains from root cause to symptom
- Enable developers to understand not just what's wrong, but why
- Support systematic improvement through lessons learned
## Execution Protocol
Now, execute the debugging process:
1. **Identify symptoms** - clearly define the problem
2. **Gather context** - collect relevant information
3. **Form hypotheses** - propose possible causes
4. **Collect evidence** - gather data to test hypotheses
5. **Analyze root cause** - determine fundamental issue
6. **Develop solution** - create actionable fix
7. **Generate report** - document findings and recommendations
Begin debugging the specified issue.