**WEB-ENHANCED INFINITE AGENTIC LOOP COMMAND**

Think deeply about this web-enhanced infinite generation task. You are about to embark on a sophisticated iterative creation process with progressive web-based knowledge acquisition.

**Variables:**

- spec_file: $ARGUMENTS
- output_dir: $ARGUMENTS
- count: $ARGUMENTS
- url_strategy_file: $ARGUMENTS (optional, defaults to specs/d3_url_strategy.json)
**ARGUMENTS PARSING:**

Parse the following arguments from "$ARGUMENTS":

1. `spec_file` - Path to the markdown specification file (required)
2. `output_dir` - Directory where iterations will be saved (required)
3. `count` - Number of iterations: a positive integer or "infinite" (required)
4. `url_strategy_file` - Path to the URL strategy JSON file (optional; defaults to specs/d3_url_strategy.json)
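One way to sketch these parsing rules in Python (a hypothetical helper; this sketch assumes the "$ARGUMENTS" string is whitespace-separated, which the command itself does not mandate):

```python
def parse_arguments(arguments: str) -> dict:
    """Parse '$ARGUMENTS' per the rules above (assumes whitespace separation)."""
    parts = arguments.split()
    if len(parts) < 3:
        raise ValueError("spec_file, output_dir, and count are required")
    spec_file, output_dir, raw_count = parts[0], parts[1], parts[2]
    if raw_count != "infinite" and not raw_count.isdigit():
        raise ValueError('count must be a positive integer or "infinite"')
    count = raw_count if raw_count == "infinite" else int(raw_count)
    # Optional fourth argument with the documented default.
    url_strategy_file = parts[3] if len(parts) > 3 else "specs/d3_url_strategy.json"
    return {
        "spec_file": spec_file,
        "output_dir": output_dir,
        "count": count,
        "url_strategy_file": url_strategy_file,
    }
```
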
**PHASE 0: INITIAL WEB PRIMING**

Before generating any iterations, perform deep web research to establish foundational knowledge:

**Priming Objectives:**

1. Read the specification file completely to understand the domain and requirements
2. If url_strategy_file exists, read it and extract priming URLs
3. Fetch 3-5 foundational web resources using WebFetch or WebSearch:
   - Official documentation pages
   - Comprehensive tutorials
   - Pattern libraries or galleries
   - Best practices guides
   - Recent updates and new features
**Priming Web Research Tasks:**

For each priming URL:

- Use WebFetch to retrieve and analyze content
- Extract key concepts, techniques, and patterns
- Build a mental model of the domain
- Identify progressive learning pathways
- Note important code patterns and best practices
**Priming Synthesis:**

Create a comprehensive knowledge base from the web research:

- Core concepts and terminology
- Common patterns and anti-patterns
- Technical foundations (APIs, libraries, methods)
- Visual design principles (if applicable)
- Accessibility considerations
- Performance optimization strategies
**PHASE 1: SPECIFICATION + WEB CONTEXT ANALYSIS**

Read and deeply understand the specification file at `spec_file` in conjunction with the priming knowledge:

- What type of content to generate (enhanced by web learnings)
- Format and structure requirements
- Specific parameters or constraints
- Intended evolution pattern between iterations
- How web research should enhance each iteration

**Web Integration Strategy:**

- How should each iteration incorporate new web knowledge?
- What makes a valuable web source for this domain?
- How should learnings accumulate across iterations?
- What URL selection strategy best serves the goal?
**PHASE 2: OUTPUT DIRECTORY + URL TRACKING RECONNAISSANCE**

Thoroughly analyze the `output_dir` to understand its current state:

- List all existing files and their naming patterns
- Identify the highest iteration number currently present
- Analyze content evolution across existing iterations
- Review which URLs have already been used (check file footers/comments)
- Build a USED_URLS list to avoid duplication
- Understand the trajectory of previous generations
- Identify knowledge gaps that new web research could fill

**URL Strategy Analysis:**

If url_strategy_file exists:

- Read and parse the JSON structure
- Extract URLs categorized by iteration range
- Understand the progressive difficulty curve
- Note web search templates for dynamic discovery
- Plan URL assignment for upcoming iterations
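The exact JSON schema of url_strategy_file is not fixed by this command; a minimal loading sketch, assuming URLs grouped under the four difficulty categories named later in this document plus an optional `search_templates` array, might look like:

```python
import json

# Hypothetical loader for url_strategy_file. The category names mirror the
# foundation/intermediate/advanced/expert progression described below; the
# "search_templates" key is an assumed convention for dynamic discovery.
def load_url_strategy(path: str) -> dict:
    with open(path) as f:
        strategy = json.load(f)
    categories = ["foundation", "intermediate", "advanced", "expert"]
    urls_by_category = {c: strategy.get(c, []) for c in categories}
    templates = strategy.get("search_templates", [])
    return {"urls": urls_by_category, "templates": templates}
```
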
**PHASE 3: WEB-ENHANCED ITERATION STRATEGY**

Based on the spec analysis, priming knowledge, and existing iterations:

**Iteration Planning:**

- Determine the starting iteration number (highest existing + 1)
- Plan how each iteration will be unique and evolutionary
- Map iteration numbers to the appropriate URL complexity level
- Assign specific web sources to each planned iteration
- Consider how to build upon previous iterations plus new web knowledge
**URL Assignment Strategy:**

For each upcoming iteration, determine the web source:

1. **Pre-defined URL Mode** (if url_strategy_file exists):
   - Match the iteration number to a URL category (foundation, intermediate, advanced, expert)
   - Select the next unused URL from the appropriate category
   - Ensure no duplicate URLs across iterations
   - Track assigned URLs in the USED_URLS list

2. **Dynamic Search Mode** (fallback or primary):
   - Generate targeted web search queries based on:
     - The current iteration number
     - Previous iteration analysis
     - Identified knowledge gaps
     - The specific technique to explore
   - Use WebSearch to find relevant resources
   - Select the most valuable result for the iteration

3. **Hybrid Mode** (recommended):
   - Use pre-defined URLs for core techniques
   - Use dynamic search for novel explorations
   - Balance structure with discovery
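The hybrid mode above can be sketched as a selection function: prefer an unused pre-defined URL for the iteration's category, and fall back to a dynamic search query when the category is exhausted. The iteration-to-category thresholds here are illustrative; the command leaves the exact mapping to url_strategy_file:

```python
def category_for(iteration: int) -> str:
    """Map an iteration number to a URL difficulty category (example thresholds)."""
    if iteration <= 5:
        return "foundation"
    if iteration <= 10:
        return "intermediate"
    if iteration <= 15:
        return "advanced"
    return "expert"

def select_url(iteration: int, urls_by_category: dict, used_urls: set) -> dict:
    """Hybrid mode: pre-defined URL first, dynamic search query as fallback."""
    category = category_for(iteration)
    for url in urls_by_category.get(category, []):
        if url not in used_urls:
            used_urls.add(url)  # track to prevent duplicates across iterations
            return {"mode": "predefined", "url": url}
    # Category exhausted: switch to dynamic search mode with a targeted query.
    return {"mode": "search", "query": f"{category} technique tutorial"}
```
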
**PHASE 4: PARALLEL WEB-ENHANCED AGENT COORDINATION**

Deploy multiple Sub Agents with individualized web research assignments for maximum efficiency and learning diversity:

**Sub-Agent Distribution Strategy:**

- For count 1-3: Launch all agents simultaneously with different URLs
- For count 4-10: Launch in batches of 3-4 agents to manage web requests
- For count 11+: Launch in batches of 5 agents to optimize coordination
- For "infinite": Launch waves of 3-4 agents, monitoring context and web source availability
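The finite-count distribution rules reduce to a small lookup (choosing 4 for the "batches of 3-4" range is an arbitrary pick within the stated band; "infinite" is handled by the wave orchestration in Phase 5):

```python
def batch_size(count: int) -> int:
    """Batch size for a finite count, per the distribution strategy above."""
    if count <= 3:
        return count  # launch all agents simultaneously
    if count <= 10:
        return 4      # "batches of 3-4"; 4 chosen here
    return 5          # batches of 5 for count 11+
```
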
**Agent Assignment Protocol:**

Each Sub Agent receives:

1. **Spec Context**: Complete specification file analysis + priming knowledge summary
2. **Directory Snapshot**: Current state of output_dir at launch time
3. **Iteration Assignment**: Specific iteration number (starting_number + agent_index)
4. **Web Research Assignment**: Specific URL to fetch and learn from
5. **Used URLs List**: All previously used URLs, to avoid duplication
6. **Uniqueness Directive**: Explicit instruction to avoid duplicating previous concepts
7. **Quality Standards**: Detailed requirements from the specification
8. **Integration Directive**: How to synthesize web learning with previous work
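The eight items above can be modeled as a single assignment record handed to each agent (a hypothetical structure; field names are illustrative, not part of the command):

```python
from dataclasses import dataclass, field

@dataclass
class AgentAssignment:
    """Per-agent payload mirroring the eight protocol items above."""
    spec_context: str                 # 1. spec analysis + priming summary
    directory_snapshot: str           # 2. output_dir state at launch
    iteration_number: int             # 3. starting_number + agent_index
    assigned_url: str                 # 4. URL to fetch and learn from
    used_urls: list = field(default_factory=list)  # 5. duplication guard
    uniqueness_directive: str = "Avoid duplicating previous concepts"  # 6.
    quality_standards: str = ""       # 7. requirements from the spec
    integration_directive: str = ""   # 8. how to synthesize with prior work
```
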
**Agent Task Specification:**

```
TASK: Generate iteration [NUMBER] for [SPEC_FILE] with web research integration

You are Sub Agent [X] generating iteration [NUMBER] with web-enhanced learning.

WEB RESEARCH ASSIGNMENT:
- Your assigned URL: [SPECIFIC_URL]
- Topic: [URL_TOPIC]
- Your mission: Fetch this URL, learn a specific technique, and apply it to your iteration

RESEARCH PROCESS:
1. Use the WebFetch tool to retrieve content from [SPECIFIC_URL]
2. Analyze the content deeply for:
   - New techniques, APIs, or patterns
   - Code examples and implementations
   - Visual design principles (if applicable)
   - Best practices and optimizations
   - Accessibility considerations
3. Extract 1-3 specific learnings to apply to your iteration
4. Plan how to integrate these learnings with the specification requirements

CONTEXT:
- Specification: [Full spec analysis]
- Priming knowledge: [Summary of initial web research]
- Existing iterations: [Summary of current output_dir contents]
- Used URLs: [List of URLs already researched, to avoid duplication]
- Your iteration number: [NUMBER]
- Expected complexity level: [foundation/intermediate/advanced/expert]

REQUIREMENTS:
1. FIRST: Fetch and analyze your assigned URL using WebFetch
2. Extract specific technique(s) from the web source
3. Read and understand the specification completely
4. Analyze existing iterations to ensure your output is unique
5. Generate content that:
   - Follows the spec format exactly
   - Applies learning from your web source
   - Builds upon previous iterations where appropriate
   - Introduces genuine novelty and improvement
6. Document your web source and what you learned in the output file
7. Create the file with the exact name pattern specified

DELIVERABLE:
- A single file as specified by the spec
- Must demonstrate learning from the assigned URL
- Must document the web source and improvements
- Must be genuinely enhanced by web research

CRITICAL: If WebFetch fails, fall back to WebSearch on the topic and find an alternative source.
```
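Filling the bracketed placeholders in a template like this is plain string substitution; a minimal sketch (the `render_task` helper and its field names are illustrative):

```python
def render_task(template: str, fields: dict) -> str:
    """Replace each [KEY] token in the template with its per-agent value."""
    rendered = template
    for key, value in fields.items():
        rendered = rendered.replace(f"[{key}]", str(value))
    return rendered
```
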
**Parallel Execution Management:**

- Launch assigned Sub Agents in batches to avoid overwhelming web services
- Each agent performs an independent WebFetch for its assigned URL
- Monitor agent progress and web fetch completion
- Handle web fetch failures by reassigning different URLs
- Ensure no duplicate URLs across parallel agents in the same batch
- Collect and validate all completed iterations
- Verify that web learnings were actually applied
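The failure-handling path (assigned URL, then backup URLs, then a topic search) can be sketched as follows; `fetch_fn` and `search_fn` stand in for the WebFetch and WebSearch tools, whose actual invocation is tool-specific:

```python
def fetch_with_fallback(url, backup_urls, topic, fetch_fn, search_fn):
    """Try the assigned URL, then backups, then a search-discovered source."""
    for candidate in [url, *backup_urls]:
        try:
            return {"source": candidate, "content": fetch_fn(candidate)}
        except Exception:
            continue  # fetch failed; try the next candidate
    # All pre-assigned URLs failed: find an alternative source by topic.
    alternative = search_fn(topic)
    return {"source": alternative, "content": fetch_fn(alternative)}
```
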
**PHASE 5: INFINITE WEB-ENHANCED MODE ORCHESTRATION**

For infinite generation mode, orchestrate continuous waves with progressive web learning:

**Wave-Based Web Learning Strategy:**

1. **Wave Planning**:
   - Determine the next wave size (3-4 agents) based on context capacity
   - Select the URL category for the wave (foundation → intermediate → advanced → expert)
   - Identify unused URLs in the target category
   - Plan a specific URL assignment for each agent in the wave

2. **Progressive URL Difficulty**:
   - Wave 1: Foundation URLs (basic concepts)
   - Wave 2: Intermediate URLs (common patterns)
   - Wave 3: Advanced URLs (complex techniques)
   - Wave 4+: Expert URLs + dynamic searches for cutting-edge techniques

3. **Knowledge Accumulation**:
   - Each wave builds on previous waves' learnings
   - Later agents can reference earlier iterations' web discoveries
   - Progressive complexity in both content and web sources
   - Synthesis of multiple web learnings in later iterations

4. **URL Exhaustion Handling**:
   - If pre-defined URLs are exhausted, switch to dynamic WebSearch mode
   - Generate targeted search queries based on:
     - The current iteration sophistication level
     - Gaps in previous iterations
     - Novel techniques to explore
   - Continue until context limits are approached
**Infinite Execution Cycle:**

```
INITIALIZE:
- Load url_strategy_file
- Extract all available URLs by category
- Initialize USED_URLS tracker
- Initialize current_difficulty_level = "foundation"

WHILE context_capacity > threshold:

  WAVE_SETUP:
  1. Assess current output_dir state
  2. Count existing iterations
  3. Determine appropriate difficulty level for next wave
  4. Select 3-4 unused URLs from current difficulty category
  5. If no URLs available, increment difficulty level or switch to dynamic search

  AGENT_PREPARATION:
  For each agent in wave:
  - Assign unique iteration number
  - Assign unique URL from selected batch
  - Prepare context snapshot
  - Add URL to USED_URLS tracker

  EXECUTION:
  6. Launch parallel Sub Agent wave with web assignments
  7. Monitor wave completion and web fetch success
  8. Validate that web learnings were applied

  POST_WAVE:
  9. Update directory state snapshot
  10. Review iteration quality and web integration
  11. Evaluate context capacity remaining
  12. If sufficient capacity: Continue to next wave
  13. If approaching limits: Complete final wave and summarize

DIFFICULTY_PROGRESSION:
- Waves 1-2: foundation URLs
- Waves 3-4: intermediate URLs
- Waves 5-6: advanced URLs
- Waves 7+: expert URLs + dynamic discovery
```
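The DIFFICULTY_PROGRESSION schedule at the end of the cycle is a direct wave-to-category mapping; as a sketch:

```python
def wave_category(wave: int) -> str:
    """Waves 1-2 foundation, 3-4 intermediate, 5-6 advanced, 7+ expert."""
    schedule = ["foundation", "foundation",
                "intermediate", "intermediate",
                "advanced", "advanced"]
    return schedule[wave - 1] if wave <= len(schedule) else "expert"
```
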
**Dynamic Web Search Integration:**

When pre-defined URLs are exhausted, or for novel exploration:

**Search Query Generation:**

- Analyze the current iteration goals
- Identify knowledge gaps from previous iterations
- Generate targeted search queries using the templates from url_strategy_file
- Example: "D3.js force simulation collision detection site:observablehq.com"

**Search Result Selection:**

- Use WebSearch to find relevant resources
- Evaluate search results for quality and relevance
- Select the top result not in USED_URLS
- Assign it to an agent with clear learning objectives
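Query generation from a strategy template is a straightforward fill-in; this sketch assumes templates use `str.format`-style placeholders (the `{technique}`/`{site}` field names and the default site are illustrative conventions, not mandated by the command):

```python
def build_query(template: str, technique: str, site: str = "observablehq.com") -> str:
    """Fill a search template with the technique gap identified for this iteration."""
    return template.format(technique=technique, site=site)
```
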
**PHASE 6: WEB INTEGRATION QUALITY ASSURANCE**

After each wave completes, verify web integration quality:

**Quality Checks:**

1. **Web Source Attribution**: Each iteration documents its URL source
2. **Learning Application**: The specific technique from the URL is actually implemented
3. **Improvement Evidence**: The iteration shows measurable improvement
4. **Uniqueness**: No duplicate web sources, no duplicate implementations
5. **Spec Compliance**: Meets all specification requirements
6. **Cumulative Progress**: Builds upon previous iterations appropriately

**Failure Handling:**

- If an agent fails to fetch its URL: Reassign with a backup URL or a web search
- If an agent doesn't apply its learning: Regenerate with a clearer directive
- If a URL was already used: Select the next available URL in the category
- If a web source is unavailable: Fall back to dynamic web search
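Checks 1 and 4 are mechanically verifiable. A sketch, assuming the convention (not specified by this command) that each iteration file carries a `Source:` footer line naming its URL:

```python
def check_attribution(file_text: str, used_urls: set) -> dict:
    """Verify a 'Source:' footer exists and its URL is not a duplicate."""
    source = None
    for line in file_text.splitlines():
        if line.strip().lower().startswith("source:"):
            # maxsplit=1 keeps the URL's own colons intact (e.g. https://)
            source = line.split(":", 1)[1].strip()
            break
    has_source = bool(source)
    is_duplicate = has_source and source in used_urls
    return {"has_source": has_source, "duplicate": is_duplicate, "source": source}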
**EXECUTION PRINCIPLES:**
|
|
|
|
**Progressive Web Learning:**
|
|
- Each iteration incorporates new web-sourced knowledge
|
|
- Knowledge accumulates across iterations
|
|
- Later iterations can build upon earlier web discoveries
|
|
- Balance between learning new techniques and mastering previous ones
|
|
|
|
**Quality & Uniqueness:**
|
|
- Each iteration must be genuinely unique and valuable
|
|
- Web research must genuinely enhance the output
|
|
- Build upon previous work while introducing novel elements
|
|
- Maintain consistency with original specification
|
|
- Document web sources and learnings clearly
|
|
|
|
**Parallel Web Coordination:**
|
|
- Deploy Sub Agents strategically with diverse web sources
|
|
- Assign different URLs to each agent in a wave
|
|
- Coordinate timing to avoid web service rate limiting
|
|
- Monitor web fetch success and content extraction
|
|
- Ensure all agents successfully integrate their web learnings
|
|
|
|
**Scalability & Efficiency:**
|
|
- Batch web requests to avoid overwhelming services
|
|
- Cache web content for potential reuse in context
|
|
- Track used URLs rigorously to avoid duplication
|
|
- Optimize URL selection for progressive learning curve
|
|
- Balance parallel speed with web fetch reliability
|
|
|
|
**ULTRA-THINKING DIRECTIVE:**
|
|
|
|
Before beginning generation, engage in extended thinking about:
|
|
|
|
**Web Research Strategy:**
|
|
- What web sources will provide the most value?
|
|
- How should URL difficulty progress across iterations?
|
|
- What's the optimal balance between pre-defined URLs and dynamic search?
|
|
- How can web learnings accumulate most effectively?
|
|
- What makes a web source genuinely useful vs. superficial?
|
|
|
|
**Specification & Web Synergy:**
|
|
- How does web research enhance the specification goals?
|
|
- What web knowledge is most critical for success?
|
|
- How should iterations balance spec requirements with web inspiration?
|
|
- What progressive learning pathway serves the goal best?
|
|
|
|
**Parallel Web Coordination:**
|
|
- Optimal Sub Agent distribution for web-enhanced generation
|
|
- How to assign URLs for maximum learning diversity
|
|
- Managing web fetch timing and rate limits
|
|
- Ensuring each agent extracts valuable, applicable learnings
|
|
- Preventing duplicate web sources across parallel streams
|
|
|
|
**Knowledge Integration:**
|
|
- How should agents synthesize web content with spec requirements?
|
|
- What level of detail from web sources should be applied?
|
|
- How to build upon previous iterations' web discoveries?
|
|
- Balancing novelty from web vs. consistency with previous work
|
|
|
|
**Infinite Mode Web Optimization:**
|
|
- Progressive URL difficulty strategy across waves
|
|
- When to switch from pre-defined URLs to dynamic search
|
|
- Balancing web research depth vs. breadth
|
|
- Context management with web content inclusion
|
|
- Quality control for web-enhanced parallel outputs
|
|
|
|
**Risk Mitigation:**
|
|
- Handling web fetch failures gracefully
|
|
- Ensuring web learnings are actually applied, not just mentioned
|
|
- Managing URL exhaustion in long runs
|
|
- Preventing superficial web integration
|
|
- Maintaining spec compliance despite web inspiration
|
|
|
|
**Quality Assurance:**
|
|
- How to verify that web research genuinely improved output?
|
|
- What evidence shows learning application vs. mere citation?
|
|
- How to ensure cumulative knowledge building?
|
|
- Balancing web fidelity with creative adaptation?
|
|
|
|
Begin execution with deep analysis of the web-enhanced learning strategy and proceed systematically through each phase, leveraging Sub Agents with individualized web research assignments for maximum knowledge acquisition and creative output.
|