Initial commit - fork of Backlog.md with Docker deployment for backlog.jeffemmett.com
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

@@ -0,0 +1,234 @@
# ⚠️ **IMPORTANT**

Follow the instructions in [agent-guidelines.md](src/guidelines/agent-guidelines.md) when working with Backlog.md on tasks in this repository.

## Commands

### Development

- `bun i` - Install dependencies
- `bun test` - Run tests
- `bun run format` - Format code with Biome
- `bun run lint` - Lint and auto-fix with Biome
- `bun run check` - Run all Biome checks (format + lint)
- `bun run build` - Build the CLI tool
- `bun run cli` - Run the CLI tool directly

### Testing

- `bun test` - Run all tests
- `bun test <filename>` - Run specific test file

### Configuration Management

- `bun run cli config list` - View all configuration values
- `bun run cli config get <key>` - Get a specific config value (e.g. defaultEditor)
- `bun run cli config set <key> <value>` - Set a config value with validation
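
For example, setting and reading back the default editor looks like this (`code --wait` is just the value used elsewhere in these docs; any valid key works the same way):

```bash
# Set a value, read it back, then list everything
bun run cli config set defaultEditor "code --wait"
bun run cli config get defaultEditor
bun run cli config list
```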

## Core Structure

- **CLI Tool**: Built with Bun and TypeScript as a global npm package (`@backlog.md`)
- **Source Code**: Located in `/src` directory with modular TypeScript structure
- **Task Management**: Uses markdown files in `.backlog/` directory structure
- **Workflow**: Git-integrated with task IDs referenced in commits and PRs

## Code Standards

- **Runtime**: Bun with TypeScript 5
- **Formatting**: Biome with tab indentation and double quotes
- **Linting**: Biome recommended rules
- **Testing**: Bun's built-in test runner
- **Pre-commit**: Husky + lint-staged automatically runs Biome checks before commits

The pre-commit hook automatically runs `biome check --write` on staged files to ensure code quality. If linting errors are found, the commit will be blocked until fixed.

# === BACKLOG.MD GUIDELINES START ===
# Instructions for the usage of Backlog.md CLI Tool

## 1. Source of Truth

- Tasks live under **`backlog/tasks/`** (drafts under **`backlog/drafts/`**).
- Every implementation decision starts with reading the corresponding Markdown task file.
- Project documentation is in **`backlog/docs/`**.
- Project decisions are in **`backlog/decisions/`**.

## 2. Defining Tasks

### **Title**

Use a clear, brief title that summarizes the task.

### **Description**: (The **"why"**)

Provide a concise summary of the task's purpose and goal. Do not add implementation details here; the description should explain the purpose and context of the task. Code snippets should be avoided.

### **Acceptance Criteria**: (The **"what"**)

List specific, measurable outcomes that define what it means to reach the goal from the description. Use checkboxes (`- [ ]`) for tracking.
When defining `## Acceptance Criteria` for a task, focus on **outcomes, behaviors, and verifiable requirements** rather than step-by-step implementation details.
Acceptance Criteria (AC) define *what* conditions must be met for the task to be considered complete.
They should be testable and confirm that the core purpose of the task is achieved.
**Key Principles for Good ACs:**

- **Outcome-Oriented:** Focus on the result, not the method.
- **Testable/Verifiable:** Each criterion should be something that can be objectively tested or verified.
- **Clear and Concise:** Unambiguous language.
- **Complete:** Collectively, ACs should cover the scope of the task.
- **User-Focused (where applicable):** Frame ACs from the perspective of the end-user or the system's external behavior.

- *Good Example:* "- [ ] User can successfully log in with valid credentials."
- *Good Example:* "- [ ] System processes 1000 requests per second without errors."
- *Bad Example (Implementation Step):* "- [ ] Add a new function `handleLogin()` in `auth.ts`."
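
Translated into a create command, outcome-oriented ACs might look like this (the title and criteria are illustrative; the `--ac` flag is shown in the CLI reference below):

```bash
# Comma-separated, outcome-oriented acceptance criteria
backlog task create "Add user login" \
  --ac "User can successfully log in with valid credentials,Invalid credentials show a clear error"
```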

### Task file

Once a task is created it will be stored in the `backlog/tasks/` directory as a Markdown file with the format `task-<id> - <title>.md` (e.g. `task-42 - Add GraphQL resolver.md`).

### Additional task requirements

- Tasks must be **atomic** and **testable**. If a task is too large, break it down into smaller subtasks.
  Each task should represent a single unit of work that can be completed in a single PR.

- **Never** reference tasks that are to be done in the future or that are not yet created. You can only reference previous tasks (id < current task id).

- When creating multiple tasks, ensure they are **independent** and do not depend on future tasks.
  Example of wrong task splitting: task 1: "Add API endpoint for user data", task 2: "Define the user model and DB schema".
  Example of correct task splitting: task 1: "Add system for handling API requests", task 2: "Add user model and DB schema", task 3: "Add API endpoint for user data".

## 3. Recommended Task Anatomy

```markdown
# task-42 - Add GraphQL resolver

## Description (the why)

Short, imperative explanation of the goal of the task and why it is needed.

## Acceptance Criteria (the what)

- [ ] Resolver returns correct data for happy path
- [ ] Error response matches REST
- [ ] P95 latency ≤ 50 ms under 100 RPS

## Implementation Plan (the how)

1. Research existing GraphQL resolver patterns
2. Implement basic resolver with error handling
3. Add performance monitoring
4. Write unit and integration tests
5. Benchmark performance under load

## Implementation Notes (only added after working on the task)

- Approach taken
- Features implemented or modified
- Technical decisions and trade-offs
- Modified or added files
```

## 4. Implementing Tasks

Mandatory sections for every task:

- **Implementation Plan**: (The **"how"**) Outline the steps to achieve the task. Because the implementation details may change after the task is created, **the implementation plan must be added only after putting the task in progress** and before starting work on it.
- **Implementation Notes**: Document your approach, decisions, challenges, and any deviations from the plan. This section is added after you are done working on the task. It should summarize what you did and why you did it. Keep it concise but informative.

**IMPORTANT**: Do not implement anything that deviates from the **Acceptance Criteria**. If you need to implement something that is not in the AC, update the AC first and then implement it, or create a new task for it.

## 5. Typical Workflow

```bash
# 1 Identify work
backlog task list -s "To Do" --plain

# 2 Read details & documentation
backlog task 42 --plain
# Read also all documentation files in `backlog/docs/` directory.
# Read also all decision files in `backlog/decisions/` directory.

# 3 Start work: assign yourself & move column
backlog task edit 42 -a @{yourself} -s "In Progress"

# 4 Add implementation plan before starting (ANSI-C quoting so \n becomes a real newline)
backlog task edit 42 --plan $'1. Analyze current implementation\n2. Identify bottlenecks\n3. Refactor in phases'

# 5 Break work down if needed by creating subtasks or additional tasks
backlog task create "Refactor DB layer" -p 42 -a @{yourself} -d "Description" --ac "Tests pass,Performance improved"

# 6 Complete and mark Done
backlog task edit 42 -s Done --notes "Implemented GraphQL resolver with error handling and performance monitoring"
```

## 6. Final Steps Before Marking a Task as Done

Always ensure you have:

1. ✅ Marked all acceptance criteria as completed (change `- [ ]` to `- [x]`)
2. ✅ Added an `## Implementation Notes` section documenting your approach
3. ✅ Run all tests and linting checks (see the snippet below)
4. ✅ Updated relevant documentation
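
In the shell, that final pass typically reduces to something like this (task 42 is the running example from the workflow above):

```bash
# Verify the suite and static checks, then flip the status
bun test
bun run check
backlog task edit 42 -s Done --notes "Summary of the approach taken"
```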

## 7. Definition of Done (DoD)

A task is **Done** only when **ALL** of the following are complete:

1. **Acceptance criteria** checklist in the task file is fully checked (all `- [ ]` changed to `- [x]`).
2. **Implementation plan** was followed or deviations were documented in Implementation Notes.
3. **Automated tests** (unit + integration) cover new logic.
4. **Static analysis**: linter & formatter succeed.
5. **Documentation**:
   - All relevant docs updated (README, backlog/docs, backlog/decisions, etc.).
   - Task file **MUST** have an `## Implementation Notes` section added summarising:
     - Approach taken
     - Features implemented or modified
     - Technical decisions and trade-offs
     - Modified or added files
6. **Review**: code reviewed.
7. **Task hygiene**: status set to **Done** via CLI (`backlog task edit <id> -s Done`).
8. **No regressions**: performance, security and license checks green.

⚠️ **IMPORTANT**: Never mark a task as Done without completing ALL items above.

## 8. Handy CLI Commands

| Purpose          | Command                                                                 |
|------------------|-------------------------------------------------------------------------|
| Create task      | `backlog task create "Add OAuth"`                                       |
| Create with desc | `backlog task create "Feature" -d "Enables users to use this feature"`  |
| Create with AC   | `backlog task create "Feature" --ac "Must work,Must be tested"`         |
| Create with deps | `backlog task create "Feature" --dep task-1,task-2`                     |
| Create sub task  | `backlog task create -p 14 "Add Google auth"`                           |
| List tasks       | `backlog task list --plain`                                             |
| View detail      | `backlog task 7 --plain`                                                |
| Edit             | `backlog task edit 7 -a @{yourself} -l auth,backend`                    |
| Add plan         | `backlog task edit 7 --plan "Implementation approach"`                  |
| Add AC           | `backlog task edit 7 --ac "New criterion,Another one"`                  |
| Add deps         | `backlog task edit 7 --dep task-1,task-2`                               |
| Add notes        | `backlog task edit 7 --notes "We added this and that feature because"`  |
| Mark as done     | `backlog task edit 7 -s "Done"`                                         |
| Archive          | `backlog task archive 7`                                                |
| Draft flow       | `backlog draft create "Spike GraphQL"` → `backlog draft promote 3.1`    |
| Demote to draft  | `backlog task demote <task-id>`                                         |
| Config editor    | `backlog config set defaultEditor "code --wait"`                        |
| View config      | `backlog config list`                                                   |

## 9. Tips for AI Agents

- **Always use the `--plain` flag** when listing or viewing tasks to get AI-friendly text output instead of the interactive Backlog.md UI.
- When users ask to create a task, they mean creating one with the Backlog.md CLI tool.

# === BACKLOG.MD GUIDELINES END ===

@@ -0,0 +1,2 @@
# Auto detect text files and perform LF normalization
* text=auto

[binary image added: 22 KiB]

@@ -0,0 +1,25 @@
---
name: Bug Report
about: Create a bug report to help us improve
title: "[Bug]: "
labels: bug
---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '...'
3. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Environment**
- OS: [e.g., Windows 11]
- Node version: [e.g., 20]

**Additional context**
Add any other context about the problem here.

@@ -0,0 +1,15 @@
---
name: Feature Request
about: Suggest a new feature or enhancement
title: "[Feature]: "
labels: enhancement
---

**Is your feature request related to a problem? Please describe.**
A clear description of what problem you want to solve.

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Additional context**
Add any other context or screenshots about the feature request here.

@@ -0,0 +1,19 @@
## Summary
Briefly explain the purpose of this pull request.

## Related Tasks
List task IDs this PR closes, e.g. `closes task-29`.

> **📋 Important:** All PRs must have an associated task in the backlog.
> - If no task exists, create one first using: `backlog task create "Your task title"`
> - Follow the [task guidelines](../src/guidelines/agent-guidelines.md) when creating tasks
> - Tasks should be atomic, testable, and well-defined with clear acceptance criteria

## Task Checklist
- [ ] I have created a corresponding task in `backlog/tasks/`
- [ ] The task has clear acceptance criteria
- [ ] I have added an implementation plan to the task
- [ ] All acceptance criteria in the task are marked as completed

## Testing
Describe how you tested your changes.

[binary images added: 163 KiB, 1.2 MiB, 205 KiB, 17 KiB, 23 KiB]

@@ -0,0 +1,50 @@
# ⚠️ **IMPORTANT**

1. Read the [README.md](README.md)
2. Read the [agent-guidelines.md](src/guidelines/agent-guidelines.md)

## Commands

### Development

- `bun i` - Install dependencies
- `bun test` - Run tests
- `bun run format` - Format code with Biome
- `bun run lint` - Lint and auto-fix with Biome
- `bun run check` - Run all Biome checks (format + lint)
- `bun run build` - Build the CLI tool
- `bun run cli` - Run the CLI tool directly

### Testing

- `bun test` - Run all tests
- `bun test <filename>` - Run specific test file

### Configuration Management

- `bun run cli config list` - View all configuration values
- `bun run cli config get <key>` - Get a specific config value (e.g. defaultEditor)
- `bun run cli config set <key> <value>` - Set a config value with validation

## Core Structure

- **CLI Tool**: Built with Bun and TypeScript as a global npm package (`npm i -g backlog.md`)
- **Source Code**: Located in `/src` directory with modular TypeScript structure
- **Task Management**: Uses markdown files in `backlog/` directory structure
- **Workflow**: Git-integrated with task IDs referenced in commits and PRs

## Code Standards

- **Runtime**: Bun with TypeScript 5
- **Formatting**: Biome with tab indentation and double quotes
- **Linting**: Biome recommended rules
- **Testing**: Bun's built-in test runner
- **Pre-commit**: Husky + lint-staged automatically runs Biome checks before commits

The pre-commit hook automatically runs `biome check --write` on staged files to ensure code quality. If linting errors are found, the commit will be blocked until fixed.

## Git Workflow

- **Branching**: Use feature branches when working on tasks (e.g. `tasks/task-123-feature-name`)
- **Committing**: Use the following format: `TASK-123 - Title of the task`
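
The two conventions combine like this in practice (task 123 is illustrative):

```bash
git checkout -b tasks/task-123-feature-name
# ...implement the task, then commit with the task id prefix...
git commit -m "TASK-123 - Title of the task"
```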

[binary images added: 1.3 MiB, 16 KiB]

@@ -0,0 +1,4 @@

https://github.com/user-attachments/assets/a282c648-ffaa-46fc-b3d7-5ab36ca54cbd

[binary images added: 25 KiB, 116 KiB]

@@ -0,0 +1,103 @@
name: CI

on:
  push:
    branches: [main]
  pull_request:

permissions:
  contents: read
  checks: write
  pull-requests: write

jobs:
  test:
    name: lint-and-unit-test
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: oven-sh/setup-bun@v1
        with:
          bun-version: 1.3.3
      - uses: actions/cache@v4
        id: cache
        with:
          path: ~/.bun/install/cache
          key: ${{ runner.os }}-${{ matrix.os }}-bun-${{ hashFiles('**/bun.lock') }}
          restore-keys: |
            ${{ runner.os }}-${{ matrix.os }}-bun-
      - run: bun install --frozen-lockfile --linker=isolated
      - run: bun run lint
      - name: Run tests
        run: |
          if [[ "${{ matrix.os }}" == "windows-latest" ]]; then
            # Run tests with increased timeout on Windows to handle slower file operations
            bun test --timeout=15000 --reporter=junit --reporter-outfile=test-results.xml
          else
            # Run tests with increased timeout to handle Bun shell operations
            bun test --timeout=10000 --reporter=junit --reporter-outfile=test-results.xml
          fi
        shell: bash

  build-test:
    name: compile-and-smoke-test
    strategy:
      matrix:
        include:
          - os: ubuntu-latest
            target: bun-linux-x64-baseline
          - os: macos-latest
            target: bun-darwin-x64
          - os: windows-latest
            target: bun-windows-x64-baseline
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: oven-sh/setup-bun@v1
        with:
          bun-version: 1.3.3
      - uses: actions/cache@v4
        id: cache
        with:
          path: ~/.bun/install/cache
          key: ${{ runner.os }}-${{ matrix.os }}-bun-${{ hashFiles('**/bun.lock') }}
          restore-keys: |
            ${{ runner.os }}-${{ matrix.os }}-bun-
      - run: bun install --frozen-lockfile --linker=isolated
      - name: Prime Bun cache for baseline target (Windows workaround)
        if: ${{ contains(matrix.target, 'baseline') && matrix.os == 'windows-latest' }}
        shell: bash
        run: |
          # Workaround for https://github.com/oven-sh/bun/issues/13513
          # Build a dummy project from C:\ to prime the baseline binary cache
          cd /c
          mkdir -p bun-cache-primer
          cd bun-cache-primer
          echo 'console.log("cache primer")' > index.js
          bun build --compile --target=${{ matrix.target }} ./index.js --outfile primer.exe || true
          cd $GITHUB_WORKSPACE
      - name: Build standalone binary
        shell: bash
        run: |
          VER="$(jq -r .version package.json)"
          OUT="backlog-test${{ contains(matrix.target,'windows') && '.exe' || '' }}"
          bun build src/cli.ts \
            --compile --minify --sourcemap \
            --target=${{ matrix.target }} \
            --define __EMBEDDED_VERSION__="\"${VER}\"" \
            --outfile="$OUT"
      - name: Smoke-test binary
        shell: bash
        run: |
          FILE="backlog-test${{ contains(matrix.target,'windows') && '.exe' || '' }}"
          chmod +x "$FILE"
          if [[ "${{ matrix.os }}" == "windows-latest" ]]; then
            powershell -command ".\\$FILE --version"
            powershell -command ".\\$FILE --help"
          else
            "./$FILE" --version
            "./$FILE" --help
          fi

@@ -0,0 +1,312 @@
name: Release multi-platform executables

on:
  push:
    tags: ['v*.*.*']

permissions:
  contents: write
  id-token: write

jobs:
  build:
    name: build-${{ matrix.target }}
    strategy:
      matrix:
        include:
          - os: ubuntu-latest
            target: bun-linux-x64-baseline
          - os: ubuntu-latest
            target: bun-linux-arm64
          - os: macos-latest
            target: bun-darwin-x64
          - os: macos-latest
            target: bun-darwin-arm64
          - os: windows-latest
            target: bun-windows-x64-baseline
    runs-on: ${{ matrix.os }}
    env:
      BIN: backlog-bin${{ contains(matrix.target,'windows') && '.exe' || '' }}
    steps:
      - uses: actions/checkout@v4
      - uses: oven-sh/setup-bun@v1
        with:
          bun-version: 1.3.3
      - uses: actions/cache@v4
        id: cache
        with:
          path: ~/.bun/install/cache
          key: ${{ runner.os }}-${{ matrix.target }}-bun-${{ hashFiles('**/bun.lock') }}
          restore-keys: |
            ${{ runner.os }}-${{ matrix.target }}-bun-
      - run: bun install --frozen-lockfile
      - name: Sync version to tag
        shell: bash
        run: |
          TAG="${GITHUB_REF##refs/tags/v}"
          jq ".version = \"$TAG\"" package.json > tmp.json && mv tmp.json package.json
      - name: Prime Bun cache for baseline target (Windows workaround)
        if: ${{ contains(matrix.target, 'baseline') && contains(matrix.target, 'windows') }}
        shell: bash
        run: |
          # Workaround for https://github.com/oven-sh/bun/issues/13513
          # Build a dummy project from C:\ to prime the baseline binary cache
          cd /c
          mkdir -p bun-cache-primer
          cd bun-cache-primer
          echo 'console.log("cache primer")' > index.js
          bun build --compile --target=${{ matrix.target }} ./index.js --outfile primer.exe || true
          cd $GITHUB_WORKSPACE
      - name: Compile standalone binary
        shell: bash
        run: |
          mkdir -p dist
          bun build src/cli.ts --compile --minify --target=${{ matrix.target }} --define __EMBEDDED_VERSION__="\"${GITHUB_REF##refs/tags/v}\"" --outfile=dist/${{ env.BIN }}
      - name: Make binary executable (non-Windows)
        if: ${{ !contains(matrix.target,'windows') }}
        run: chmod +x "dist/${{ env.BIN }}"
      - name: Check build output and move binary
        shell: bash
        run: |
          echo "Contents of dist/:"
          ls -la dist/
          echo "Moving dist/${{ env.BIN }} to ${{ env.BIN }}"
          mv dist/${{ env.BIN }} ${{ env.BIN }}
          echo "Final binary size:"
          ls -lh ${{ env.BIN }}
      - uses: actions/upload-artifact@v4
        with:
          name: backlog-${{ matrix.target }}
          path: ${{ env.BIN }}

  npm-publish:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Prepare npm package
        shell: bash
        run: |
          mkdir -p dist
          cp scripts/cli.cjs dist/cli.js
          cp scripts/resolveBinary.cjs dist/resolveBinary.cjs
          cp scripts/postuninstall.cjs dist/postuninstall.cjs
          chmod +x dist/cli.js
      - name: Create npm-ready package.json
        shell: bash
        run: |
          TAG="${GITHUB_REF##refs/tags/v}"
          jq 'del(.devDependencies,.scripts.prepare,.scripts.preinstall,.type) |
              .version = "'$TAG'" |
              .bin = {backlog:"cli.js"} |
              .files = ["cli.js","resolveBinary.cjs","postuninstall.cjs","package.json","README.md","LICENSE"] |
              .scripts = {"postuninstall": "node postuninstall.cjs"} |
              .repository = {"type":"git","url":"https://github.com/MrLesk/Backlog.md"} |
              .optionalDependencies = {
                "backlog.md-linux-x64" : "'$TAG'",
                "backlog.md-linux-arm64": "'$TAG'",
                "backlog.md-darwin-x64" : "'$TAG'",
                "backlog.md-darwin-arm64": "'$TAG'",
                "backlog.md-windows-x64": "'$TAG'"
              }' package.json > dist/package.json
          cp LICENSE README.md dist/ 2>/dev/null || true
      - uses: actions/setup-node@v5
        with:
          node-version: 20
      - name: Configure npm for trusted publishing
        shell: bash
        run: |
          set -euo pipefail
          npm install -g npm@11.6.0
          npm --version
      - name: Dry run trusted publish
        run: |
          cd dist
          npm publish --access public --dry-run
      - name: Publish to npm
        run: |
          cd dist
          npm publish --access public

  publish-binaries:
    needs: [build, npm-publish]
    strategy:
      matrix:
        include:
          - target: bun-linux-x64-baseline
            package: backlog.md-linux-x64
            os: linux
            cpu: x64
          - target: bun-linux-arm64
            package: backlog.md-linux-arm64
            os: linux
            cpu: arm64
          - target: bun-darwin-x64
            package: backlog.md-darwin-x64
            os: darwin
            cpu: x64
          - target: bun-darwin-arm64
            package: backlog.md-darwin-arm64
            os: darwin
            cpu: arm64
          - target: bun-windows-x64-baseline
            package: backlog.md-windows-x64
            os: win32
            cpu: x64
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: backlog-${{ matrix.target }}
        timeout-minutes: 15
      - name: Prepare package
        shell: bash
        run: |
          TAG="${GITHUB_REF##refs/tags/v}"
          mkdir -p pkg
          # Rename the binary to the expected name
          if [[ -f backlog-bin.exe ]]; then
            mv backlog-bin.exe pkg/backlog.exe
          elif [[ -f backlog-bin ]]; then
            mv backlog-bin pkg/backlog
          else
            echo "Error: No binary found"
            ls -la
            exit 1
          fi
          cp LICENSE README.md pkg/ 2>/dev/null || true
          cat <<EOF > pkg/package.json
          {
            "name": "${{ matrix.package }}",
            "version": "${TAG}",
            "os": ["${{ matrix.os }}"],
            "cpu": ["${{ matrix.cpu }}"],
            "files": ["backlog${{ contains(matrix.target,'windows') && '.exe' || '' }}","package.json","LICENSE"],
            "repository": {
              "type": "git",
              "url": "https://github.com/MrLesk/Backlog.md"
            }
          }
          EOF
      - name: Ensure executable permission (non-Windows)
        if: ${{ !contains(matrix.target,'windows') }}
        shell: bash
        run: |
          chmod +x pkg/backlog
      - uses: actions/setup-node@v5
        with:
          node-version: 20
      - name: Configure npm for trusted publishing
        shell: bash
        run: |
          set -euo pipefail
          npm install -g npm@11.6.0
          npm --version
      - name: Dry run platform publish
        run: |
          cd pkg
          npm publish --access public --dry-run
      - name: Publish platform package
        run: |
          cd pkg
          npm publish --access public

  install-sanity:
    name: install-sanity-${{ matrix.os }}
    needs: [publish-binaries, npm-publish]
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/setup-node@v5
        with:
          node-version: 20
          registry-url: https://registry.npmjs.org
      - name: Install and run backlog -v (Unix)
        if: ${{ matrix.os != 'windows-latest' }}
        shell: bash
        run: |
          set -euxo pipefail
          VERSION="${GITHUB_REF##refs/tags/v}"
          mkdir sanity && cd sanity
          npm init -y >/dev/null 2>&1
          npm i "backlog.md@${VERSION}"
          npx backlog -v
      - name: Install and run backlog -v (Windows)
        if: ${{ matrix.os == 'windows-latest' }}
        shell: pwsh
        run: |
          $ErrorActionPreference = 'Stop'
          $Version = $env:GITHUB_REF_NAME.TrimStart('v')
          mkdir sanity | Out-Null
          Set-Location sanity
          npm init -y | Out-Null
          npm i "backlog.md@$Version"
          npx backlog -v

  github-release:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          path: release-assets
        timeout-minutes: 15
      - name: Rename binaries for release
        run: |
          echo "=== Debug: Downloaded artifacts ==="
          find release-assets -type f -exec ls -lh {} \;
          echo "=== Processing artifacts ==="
          mkdir -p binaries
          for dir in release-assets/*/; do
            if [ -d "$dir" ]; then
              target=$(basename "$dir" | sed 's/backlog-//')
              echo "Processing target: $target"
              echo "Directory contents:"
              ls -la "$dir"
              binary=$(find "$dir" -name "backlog-bin*" -type f)
              if [ -n "$binary" ]; then
                echo "Found binary: $binary ($(ls -lh "$binary" | awk '{print $5}'))"
                if [[ "$target" == *"windows"* ]] && [[ "$binary" == *".exe" ]]; then
                  cp "$binary" "binaries/backlog-${target}.exe"
                  echo "Copied to binaries/backlog-${target}.exe ($(ls -lh "binaries/backlog-${target}.exe" | awk '{print $5}'))"
                else
                  cp "$binary" "binaries/backlog-${target}"
                  echo "Copied to binaries/backlog-${target} ($(ls -lh "binaries/backlog-${target}" | awk '{print $5}'))"
                fi
              fi
            fi
          done
          echo "=== Final binaries ==="
          ls -lh binaries/
      - uses: softprops/action-gh-release@v1
        with:
          files: binaries/*

  update-readme:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: oven-sh/setup-bun@v1
        with:
          bun-version: 1.3.3
      - run: bun install --frozen-lockfile
      - name: Sync version to tag
        shell: bash
        run: |
          TAG="${GITHUB_REF##refs/tags/v}"
          jq ".version = \"$TAG\"" package.json > tmp.json && mv tmp.json package.json
      - name: Export board to README with version
        shell: bash
        run: |
          TAG="${GITHUB_REF##refs/tags/v}"
          bun run cli board export --readme --export-version "v$TAG"
      - name: Commit changes
        uses: stefanzweifel/git-auto-commit-action@v4
        with:
          commit_message: "docs: update README with latest board status and version [skip ci]"
          branch: main
          file_pattern: README.md package.json

@@ -0,0 +1,18 @@
name: Shai-Hulud 2.0 Security Check
permissions:
  contents: read

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  security-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: gensecaihq/Shai-Hulud-2.0-Detector@v1
        with:
          fail-on-critical: true

@@ -0,0 +1,65 @@
# dependencies (bun install)
node_modules

# output
out
dist
cli
*.tgz

# code coverage
coverage
*.lcov

# logs
logs
*.log
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json

# dotenv environment variable files
.env
.env.development.local
.env.test.local
.env.production.local
.env.local

# caches
.eslintcache
.cache
*.tsbuildinfo

# bun build artifacts
*.bun-build
.*.bun-build

# IntelliJ based IDEs
.idea

# Finder (MacOS) folder config
.DS_Store

# Claude Code session data
.claude
# But include the agents directory
!.claude/agents
!.claude/agents/**

# Gemini CLI session data
.gemini

# Example project folder
example-project/

# blessed resources

# Test temp directories created by bun tests
tmp/
.tmp-test-*

# Local files
*.local.*

# Performance benchmark artifacts
test-benchmark-report.json

.conductor

@@ -0,0 +1,5 @@
# Add Bun to PATH if not present
export BUN_INSTALL="$HOME/.bun"
export PATH="$BUN_INSTALL/bin:$PATH"

bun lint-staged

@@ -0,0 +1,33 @@
# Source files
src/
scripts/
test-npm-package/

# Build artifacts
backlog
*.bun-build
.*.bun-build

# Development files
.github/
.backlog/
.claude/
.git/
.gitignore
.npmignore
biome.json
bun.lock
tsconfig.json

# Tests
*.test.ts
*.test.js

# Other
.DS_Store
*.log
node_modules/
.env*

# Keep only essential files for npm package
# package.json, README.md, LICENSE, and cli.js are included by default

@@ -0,0 +1,84 @@

<!-- BACKLOG.MD MCP GUIDELINES START -->

<CRITICAL_INSTRUCTION>

## BACKLOG WORKFLOW INSTRUCTIONS

This project uses Backlog.md MCP for all task and project management activities.

**CRITICAL GUIDANCE**

- If your client supports MCP resources, read `backlog://workflow/overview` to understand when and how to use Backlog for this project.
- If your client only supports tools or the above request fails, call the `backlog.get_workflow_overview()` tool to load the tool-oriented overview (it lists the matching guide tools).

- **First time working here?** Read the overview resource IMMEDIATELY to learn the workflow
- **Already familiar?** You should have the overview cached ("## Backlog.md Overview (MCP)")
- **When to read it**: BEFORE creating tasks, or when you're unsure whether to track work

These guides cover:
- Decision framework for when to create tasks
- Search-first workflow to avoid duplicates
- Links to detailed guides for task creation, execution, and completion
- MCP tools reference

You MUST read the overview resource to understand the complete workflow. The information is NOT summarized here.

</CRITICAL_INSTRUCTION>

<!-- BACKLOG.MD MCP GUIDELINES END -->

When you're working on a task, assign it to yourself: `-a @codex`

In addition to the rules above, please consider the following:
at the end of every task implementation, take a moment to see if you can simplify it.
When you are done implementing, you know much more about the task than when you started,
so you can better judge in retrospect what the simplest architecture to solve the problem would be.
If you can simplify the code, do it.

## Commands

### Development

- `bun i` - Install dependencies
- `bun test` - Run all tests
- `bunx tsc --noEmit` - Type-check code
- `bun run check .` - Run all Biome checks (format + lint)
- `bun run build` - Build the CLI tool
- `bun run cli` - Run the CLI tool directly

### Testing

- `bun test` - Run all tests
- `bun test <filename>` - Run specific test file

### Configuration Management

- `bun run cli config list` - View all configuration values
- `bun run cli config get <key>` - Get a specific config value (e.g. defaultEditor)
- `bun run cli config set <key> <value>` - Set a config value with validation

## Core Structure

- **CLI Tool**: Built with Bun and TypeScript as a global npm package (`npm i -g backlog.md`)
- **Source Code**: Located in `/src` directory with modular TypeScript structure
- **Task Management**: Uses markdown files in `backlog/` directory structure
- **Workflow**: Git-integrated with task IDs referenced in commits and PRs

## Code Standards

- **Runtime**: Bun with TypeScript 5
- **Formatting**: Biome with tab indentation and double quotes
- **Linting**: Biome recommended rules
- **Testing**: Bun's built-in test runner
- **Pre-commit**: Husky + lint-staged automatically runs Biome checks before commits

The pre-commit hook automatically runs `biome check --write` on staged files to ensure code quality. If linting errors are found, the commit will be blocked until fixed.

## Git Workflow

- **Branching**: Use feature branches when working on tasks (e.g. `tasks/task-123-feature-name`)
- **Committing**: Use the following format: `TASK-123 - Title of the task`
- **GitHub CLI**: Use `gh` whenever possible for PRs and issues

@@ -0,0 +1,216 @@
<!-- BACKLOG.MD MCP GUIDELINES START -->

<CRITICAL_INSTRUCTION>

## BACKLOG WORKFLOW INSTRUCTIONS

This project uses Backlog.md MCP for all task and project management.

**CRITICAL RESOURCE**: Read `backlog://workflow/overview` to understand when and how to use Backlog for this project.

- **First time working here?** Read the overview resource IMMEDIATELY to learn the workflow
- **Already familiar?** You should have the overview cached ("## Backlog.md Overview (MCP)")
- **When to read it**: BEFORE creating tasks, or when you're unsure whether to track work

The overview resource contains:
- Decision framework for when to create tasks
- Search-first workflow to avoid duplicates
- Links to detailed guides for task creation, execution, and completion
- MCP tools reference

You MUST read the overview resource to understand the complete workflow. The information is NOT summarized here.

</CRITICAL_INSTRUCTION>

<!-- BACKLOG.MD MCP GUIDELINES END -->

## Commands

### Development
- `bun i` - Install dependencies
- `bun test` - Run all tests
- `bun run build` - Build the CLI tool
- `bun run cli` - Run the CLI tool directly

### Testing & Quality
- `CLAUDECODE=1 bun test` - Run all tests with failures-only output (RECOMMENDED - full output is too long for Claude)
- `bun test <filename>` - Run specific test file
- `bun test src/**/*.test.ts` - Unit tests only
- `bun test src/mcp/**/*.test.ts` - MCP tests only
- `bun test --watch` - Run tests in watch mode
- `bunx tsc --noEmit` - Type-check code
- `bun run check .` - Run all Biome checks (format + lint)

**Development Strategy**: Test specific files during development, run the full suite before commits.
**Important**: Always use `CLAUDECODE=1` when running the full test suite - the default verbose output exceeds Claude's consumption limits.

### Performance Benchmarking
- `bun run benchmark` - Run performance benchmark on all test files
  - Runs each test file individually and measures execution time
  - Groups results by test prefix (mcp-, cli-, board-, etc.)
  - Generates `test-benchmark-report.json` with detailed timing data
  - Shows top 10 slowest tests and performance breakdown by category

### Pre-Commit Validation (REQUIRED)
**Claude MUST verify all pass before committing:**
```bash
bunx tsc --noEmit                        # TypeScript compilation
bun run check .                          # Lint/format
CLAUDECODE=1 bun test --timeout 180000   # Full test suite (failures-only output)
```

### Configuration
- `bun run cli config list` - View all configuration values
- `bun run cli config get <key>` - Get specific value (e.g. defaultEditor)
- `bun run cli config set <key> <value>` - Set with validation

## Core Structure
- **CLI Tool**: Built with Bun and TypeScript as a global npm package (`npm i -g backlog.md`)
- **Source Code**: Located in `/src` directory with modular TypeScript structure
- **Task Management**: Uses markdown files in `backlog/` directory structure
- **Git Workflow**: Task IDs referenced in commits and PRs (`TASK-123 - Title`)
- **Branching**: Use feature branches when working on tasks (e.g. `tasks/task-123-feature-name`)

## Code Standards
- **Runtime**: Bun with TypeScript 5
- **Formatting**: Biome with tab indentation and double quotes
- **Linting**: Biome recommended rules
- **Testing**: Bun's built-in test runner
- **Pre-commit**: Husky + lint-staged automatically runs Biome checks before commits

The pre-commit hook automatically runs `biome check --write` on staged files to ensure code quality. If linting errors are found, the commit will be blocked until fixed.

## Architecture Guidelines
- **Separation of Concerns**: CLI logic and utility functions are kept separate to avoid side effects during testing
- **Utility Functions**: Reusable utility functions (like ID generators) are placed in the `src/utils/` directory
- **No Side Effects on Import**: Modules should not execute CLI code when imported by other modules or tests
- **Branching**: Use feature branches when working on tasks (e.g. `tasks/task-123-feature-name`)
- **Committing**: Use the following format: `TASK-123 - Title of the task`
- **GitHub CLI**: Use `gh` whenever possible for PRs and issues

## MCP Architecture Principles
- **MCP is a Pure Protocol Wrapper**: Protocol translation ONLY - no business logic, no feature extensions
- **CLI Feature Parity**: MCP = strict subset of CLI capabilities
- **Core API Usage**: All operations MUST use Core APIs (never direct filesystem/git)
- **Shared Utilities**: Reuse the exact same utilities as the CLI (`src/utils/task-builders.ts`)
- **🔒 Local Development Only**: stdio transport only (see [/backlog/docs/mcp/README.md](backlog/docs/mcp/README.md))

**Violations to Avoid**:
- Custom business logic in MCP handlers
- Direct filesystem or git operations
- Features beyond CLI capabilities

See the MCP implementation in `/src/mcp/` for development details.

## CLI Multi-line Input (description/plan/notes)
The CLI preserves input literally; `\n` sequences in normal quotes are not converted. Use one of the following when you need real newlines:

- **Bash/Zsh (ANSI‑C quoting)**:
  - `backlog task edit 42 --notes $'Line1\nLine2'`
  - `backlog task edit 42 --plan $'1. A\n2. B'`
- **POSIX (printf)**:
  - `backlog task edit 42 --desc "$(printf 'Line1\nLine2')"`
- **PowerShell (backtick)**:
  - `backlog task edit 42 --desc "Line1\`nLine2"`

*Note: `"...\n..."` passes a literal backslash+n, not a newline.*

## Using Bun
Default to using Bun instead of Node.js:

- Use `bun <file>` instead of `node <file>` or `ts-node <file>`
- Use `bun test` instead of `jest` or `vitest`
- Use `bun build <file.html|file.ts|file.css>` instead of `webpack` or `esbuild`
- Use `bun install` instead of `npm install` or `yarn install` or `pnpm install`
- Use `bun run <script>` instead of `npm run <script>` or `yarn run <script>` or `pnpm run <script>`
- Bun automatically loads .env, so don't use dotenv
- Run `bunx tsc --noEmit` to perform TypeScript compilation checks as often as convenient

### Key APIs
- `Bun.serve()` supports WebSockets, HTTPS, and routes. Don't use `express`
- `bun:sqlite` for SQLite. Don't use `better-sqlite3`
- `Bun.redis` for Redis. Don't use `ioredis`
- `Bun.sql` for Postgres. Don't use `pg` or `postgres.js`
- `WebSocket` is built-in. Don't use `ws`
- Prefer `Bun.file` over `node:fs`'s readFile/writeFile
- `` Bun.$`ls` `` instead of `execa`

## Frontend Development
Use HTML imports with `Bun.serve()`. Don't use `vite`. HTML imports fully support React, CSS, and Tailwind.

### Build Commands (/src/web/)
- `bun run build:css` - Build Tailwind CSS
- `bun run build` - Build CSS + compile CLI binary

### Architecture
- **HTML Imports**: Use `Bun.serve()` with direct .tsx/.jsx imports (no bundler needed)
- **CSS**: Tailwind CSS processed via `@tailwindcss/cli`
- **React**: Components in `/src/web/components/`, contexts in `/src/web/contexts/`
- **Bundling**: Bun handles .tsx/.jsx transpilation automatically

### Server Example
```ts
import index from "./index.html"

Bun.serve({
  routes: {
    "/": index,
    "/api/users/:id": {
      GET: (req) => {
        return new Response(JSON.stringify({ id: req.params.id }));
      },
    },
  },
  // optional websocket support
  websocket: {
    open: (ws) => { ws.send("Hello, world!"); },
    message: (ws, message) => { ws.send(message); },
    close: (ws) => { /* handle close */ }
  },
  development: { hmr: true, console: true }
})
```

### Frontend Component Example
HTML files can import .tsx, .jsx or .js files directly and Bun's bundler will transpile & bundle automatically:

```html
<!-- index.html -->
<html>
  <body>
    <h1>Hello, world!</h1>
    <script type="module" src="./frontend.tsx"></script>
  </body>
</html>
```

```tsx
// frontend.tsx
import React from "react";
import './index.css'; // CSS imports work directly
import { createRoot } from "react-dom/client";

const root = createRoot(document.body);

export default function Frontend() {
  return <h1>Hello, world!</h1>;
}

root.render(<Frontend />);
```

Run with: `bun --hot ./index.ts`

## Testing
Use `bun test` to run tests:

```ts
import { test, expect } from "bun:test";

test("hello world", () => {
  expect(1).toBe(1);
});
```

For more information, read the Bun API docs in `node_modules/bun-types/docs/**.md`.

@@ -0,0 +1,19 @@
# Contributing to Backlog.md

Thank you for your interest in contributing to Backlog.md. This project is managed using the Backlog.md workflow and we welcome community involvement.

## Opening Issues

- Search existing issues before creating a new one.
- Provide a clear description of the problem or proposal.
- Reference the related task ID when applicable.

## Pull Requests

1. Fork the repository and create a branch named after the task ID and a short description (e.g. `task-27-contributing-guidelines`).
2. Make your changes and commit them with the task ID in the message.
3. Run tests with `bun test` and ensure they pass.
4. Format and lint the code using `npx biome check .`.
5. Open a pull request referencing the issue or task it addresses.

Please read [AGENTS.md](AGENTS.md) for detailed rules that apply to contributors and AI agents.

@@ -0,0 +1,196 @@
## Local Development

> **Runtime requirement:** Use Bun 1.2.23. Later Bun 1.3.x builds currently trigger a websocket CPU regression ([oven-sh/bun#23536](https://github.com/oven-sh/bun/issues/23536)), which also affects `backlog browser`. Our CI is pinned to 1.2.23 until the upstream fix lands.

Run these commands to bootstrap the project:

```bash
bun install
```

Run tests:

```bash
bun test
```

Format and lint:

```bash
npx biome check .
```

For contribution guidelines, see [CONTRIBUTING.md](CONTRIBUTING.md).

## MCP Development Setup

This project supports MCP (Model Context Protocol) integration. To develop and test MCP features:

### Prerequisites

Install at least one AI coding assistant:
- [Claude Code](https://claude.ai/download)
- [OpenAI Codex CLI](https://openai.com/codex)
- [Google Gemini CLI](https://cloud.google.com/gemini/docs/codeassist/gemini-cli)

### Local MCP Testing

#### 1. Start MCP Server in Development Mode

```bash
# Terminal 1: Start the MCP server
bun run mcp

# Optional: include debug logs
bun run mcp -- --debug
```

The server will start and listen on stdio. You should see log messages confirming the stdio transport is active.

#### 2. Configure Your Agent

Choose one of the methods below based on your agent:

**Claude Code (Recommended for Development):**
```bash
# Add to project (creates .mcp.json)
claude mcp add backlog-dev -- bun run mcp
```

**Codex CLI:**
```toml
# Edit ~/.codex/config.toml
[mcp_servers.backlog-dev]
command = "bun"
args = ["run", "mcp"]
```

**Gemini CLI:**
```bash
gemini mcp add backlog-dev bun run mcp
```

#### 3. Test the Connection

Open your agent and test:
- "Show me all tasks in this project"
- "Create a test task called 'Test MCP Integration'"
- "Display the current board"

#### 4. Development Workflow

1. Make changes to MCP tools in `src/mcp/tools/`
2. Restart the MCP server (Ctrl+C, then re-run)
3. Restart your AI agent
4. Test your changes

### Testing Individual Agents

Each AI agent has different configuration requirements. Start the server from your project root and follow the assistant's instructions to register it:

```bash
backlog mcp start
```

### Testing with MCP Inspector

Use the Inspector tooling when you want to exercise the stdio server outside an AI agent.

#### GUI workflow (`npx @modelcontextprotocol/inspector`)

1. Launch the Inspector UI in a terminal: `npx @modelcontextprotocol/inspector`
2. Choose **STDIO** transport.
3. Fill the connection fields exactly as follows:
   - **Command**: `bun`
   - **Arguments** (enter each item separately): `--cwd`, `/Users/<you>/Projects/Backlog.md`, `src/cli.ts`, `mcp`, `start`
   - Remove any proxy token; it is not needed for local stdio.
4. Connect and use the tools/resources panes to issue MCP requests.

> Replace `/Users/<you>/Projects/Backlog.md` with the absolute path to your local Backlog.md checkout.

`bun run mcp` by itself prints Bun's `$ bun …` preamble, which breaks the Inspector's JSON parser. If you prefer using the package script here, add `--silent` so the startup log disappears:

```
Command: bun
Arguments: run, --silent, mcp
```

> Remember to substitute your own project directory for `/Users/<you>/Projects/Backlog.md`.

#### CLI workflow (`npx @modelcontextprotocol/inspector-cli`)

Run the CLI helper when you want to script quick checks:

```bash
npx @modelcontextprotocol/inspector-cli \
  --cli \
  --transport stdio \
  --method tools/list \
  -- bun --cwd /Users/<you>/Projects/Backlog.md src/cli.ts mcp start
```

The key detail in both flows is to call `src/cli.ts mcp start` directly (or `bun run --silent mcp`) so stdout stays pure JSON for the MCP handshake.

### Adding New MCP Agents

### Project Structure

```
backlog.md/
├── src/
│   ├── mcp/
│   │   ├── errors/       # MCP error helpers
│   │   ├── resources/    # Read-only resource adapters
│   │   ├── tools/        # MCP tool implementations
│   │   ├── utils/        # Shared utilities
│   │   ├── validation/   # Input validators
│   │   └── server.ts     # createMcpServer entry point
└── docs/
    ├── mcp/              # User-facing MCP docs
    └── development/      # Developer docs
```

## Release

Backlog.md now relies on npm Trusted Publishing with GitHub Actions OIDC. The release workflow builds binaries, publishes all npm packages, and records provenance automatically. Follow the steps below to keep the setup healthy.

### Prerequisites

- Choose the release version and ensure your git tag follows the `v<major.minor.patch>` pattern. The workflow automatically rewrites `package.json` files to match the tag, so you do **not** need to edit the version field manually.
- In npm's **Trusted publishers** settings, link the `MrLesk/Backlog.md` repository and the `Release multi-platform executables` workflow for each package: `backlog.md`, `backlog.md-linux-{x64,arm64}`, `backlog.md-darwin-{x64,arm64}`, and `backlog.md-windows-x64`.
- Remove the legacy `NODE_AUTH_TOKEN` repository secret. Publishing now uses the GitHub-issued OIDC token, so no long-lived npm tokens should remain.
- The workflow installs `npm@11.6.0` to satisfy npm's trusted publishing requirement of version 11.5.1 or newer. If npm raises the minimum version again, bump the pinned version in the workflow.

### Publishing steps

1. Commit the version bump and create a matching tag. You can either push the tag from your terminal

   ```bash
   git tag v<major.minor.patch>
   git push origin main v<major.minor.patch>
   ```

   or create a GitHub Release in the UI (which creates the tag automatically). Both paths trigger the same `Release multi-platform executables` workflow.
2. Monitor the workflow run:
   - `Dry run trusted publish` and `Dry run platform publish` confirm that npm accepts the trusted publisher token before any real publish.
   - Publishing uses trusted publishing (no tokens) so npm automatically records provenance; no additional CLI flags are required.
3. After the workflow completes, verify provenance on npm by opening each package's **Provenance** tab or by running `npm view <package> --json | jq '.dist.provenance'`.
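
To spot-check every package at once, a small loop works (the package list mirrors the prerequisites above):

```bash
# Print the published version of the main package and each platform package
for pkg in backlog.md backlog.md-linux-x64 backlog.md-linux-arm64 \
           backlog.md-darwin-x64 backlog.md-darwin-arm64 backlog.md-windows-x64; do
  npm view "$pkg" version
done
```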

[← Back to README](README.md)

@@ -0,0 +1,31 @@
# Use Bun base image
FROM oven/bun:1 AS base
WORKDIR /app

# Install dependencies (including dev for build)
FROM base AS install
RUN mkdir -p /temp/dev
COPY package.json bun.lock* bunfig.toml /temp/dev/
# Install all dependencies (needed for build:css)
RUN cd /temp/dev && bun install --frozen-lockfile --ignore-scripts

# Copy application code
FROM base AS release
COPY --from=install /temp/dev/node_modules node_modules
COPY . .

# Build CSS
RUN bun run build:css

# Initialize a default backlog project
RUN bun src/cli.ts init "Backlog Server" || true

# Expose port
EXPOSE 6420

# Set environment
ENV NODE_ENV=production
ENV PORT=6420

# Run the web server
CMD ["bun", "src/cli.ts", "browser", "--port", "6420"]
|
||||
|
|
@ -0,0 +1,72 @@
|
|||
<!-- BACKLOG.MD MCP GUIDELINES START -->
|
||||
|
||||
<CRITICAL_INSTRUCTION>
|
||||
|
||||
## BACKLOG WORKFLOW INSTRUCTIONS
|
||||
|
||||
This project uses Backlog.md MCP for all task and project management.
|
||||
|
||||
**CRITICAL RESOURCE**: Read `backlog://workflow/overview` to understand when and how to use Backlog for this project.
|
||||
|
||||
- **First time working here?** Read the overview resource IMMEDIATELY to learn the workflow
|
||||
- **Already familiar?** You should have the overview cached ("## Backlog.md Overview (MCP)")
|
||||
- **When to read it**: BEFORE creating tasks, or when you're unsure whether to track work
|
||||
|
||||
The overview resource contains:
|
||||
- Decision framework for when to create tasks
|
||||
- Search-first workflow to avoid duplicates
|
||||
- Links to detailed guides for task creation, execution, and completion
|
||||
- MCP tools reference
|
||||
|
||||
You MUST read the overview resource to understand the complete workflow. The information is NOT summarized here.
|
||||
|
||||
</CRITICAL_INSTRUCTION>
|
||||
|
||||
<!-- BACKLOG.MD MCP GUIDELINES END -->
|
||||
|
||||
## Commands
|
||||
|
||||
### Development
|
||||
|
||||
- `bun i` - Install dependencies
|
||||
- `bun test` - Run tests
|
||||
- `bun run format` - Format code with Biome
|
||||
- `bun run lint` - Lint and auto-fix with Biome
|
||||
- `bun run check` - Run all Biome checks (format + lint)
|
||||
- `bun run build` - Build the CLI tool
|
||||
- `bun run cli` - Run the CLI tool directly
|
||||
|
||||
### Testing
|
||||
|
||||
- `bun test` - Run all tests
|
||||
- `bun test <filename>` - Run specific test file
|
||||
|
||||
### Configuration Management
|
||||
|
||||
- `bun run cli config list` - View all configuration values
|
||||
- `bun run cli config get <key>` - Get a specific config value (e.g. defaultEditor)
|
||||
- `bun run cli config set <key> <value>` - Set a config value with validation
|
||||
|
||||
## Core Structure
|
||||
|
||||
- **CLI Tool**: Built with Bun and TypeScript as a global npm package (`npm i -g backlog.md`)
|
||||
- **Source Code**: Located in `/src` directory with modular TypeScript structure
|
||||
- **Task Management**: Uses markdown files in `backlog/` directory structure
|
||||
- **Workflow**: Git-integrated with task IDs referenced in commits and PRs
|
||||
|
||||
## Code Standards
|
||||
|
||||
- **Runtime**: Bun with TypeScript 5
|
||||
- **Formatting**: Biome with tab indentation and double quotes
|
||||
- **Linting**: Biome recommended rules
|
||||
- **Testing**: Bun's built-in test runner
|
||||
- **Pre-commit**: Husky + lint-staged automatically runs Biome checks before commits
|
||||
|
||||
The pre-commit hook automatically runs `biome check --write` on staged files to ensure code quality. If linting errors
|
||||
are found, the commit will be blocked until fixed.
|
||||
|
||||
## Git Workflow
|
||||
|
||||
- **Branching**: Use feature branches when working on tasks (e.g. `tasks/task-123-feature-name`)
|
||||
- **Committing**: Use the following format: `TASK-123 - Title of the task`
|
||||
- **GitHub CLI**: Use `gh` whenever possible for PRs and issues (see the example below)
|
||||
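Putting these conventions together, a typical flow for task 123 might look like this (branch name and title are illustrative):

```bash
# Branch per task, named after the task ID
git checkout -b tasks/task-123-add-search

# Commit message format: task ID followed by the task title
git commit -m "TASK-123 - Add search functionality"

# Prefer the GitHub CLI for opening the PR
gh pr create --fill
```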
|
|
@ -0,0 +1,21 @@
|
|||
MIT License
|
||||
|
||||
Copyright (c) 2025 Backlog.md
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
|
|
@ -0,0 +1,493 @@
|
|||
<h1 align="center">Backlog.md</h1>
|
||||
<p align="center">Markdown‑native Task Manager & Kanban visualizer for any Git repository</p>
|
||||
|
||||
<p align="center">
|
||||
<code>npm i -g backlog.md</code> or <code>bun add -g backlog.md</code> or <code>brew install backlog-md</code> or <code>nix run github:MrLesk/Backlog.md</code>
|
||||
</p>
|
||||
|
||||

|
||||
|
||||
|
||||
---
|
||||
|
||||
> **Backlog.md** turns any folder with a Git repo into a **self‑contained project board**
|
||||
> powered by plain Markdown files and a zero‑config CLI.
|
||||
|
||||
## Features
|
||||
|
||||
* 📝 **Markdown-native tasks** -- manage every issue as a plain `.md` file
|
||||
|
||||
* 🤖 **AI-Ready** -- Works with Claude Code, Gemini CLI, Codex & any other MCP- or CLI-compatible AI assistants
|
||||
|
||||
* 📊 **Instant terminal Kanban** -- `backlog board` paints a live board in your shell
|
||||
|
||||
* 🌐 **Modern web interface** -- `backlog browser` launches a sleek web UI for visual task management
|
||||
|
||||
* 🔍 **Powerful search** -- fuzzy search across tasks, docs & decisions with `backlog search`
|
||||
|
||||
* 📋 **Rich query commands** -- view, list, filter, or archive tasks with ease
|
||||
|
||||
* 📤 **Board export** -- `backlog board export` creates shareable markdown reports
|
||||
|
||||
* 🔒 **100% private & offline** -- backlog lives entirely inside your repo and you can manage everything locally
|
||||
|
||||
* 💻 **Cross-platform** -- runs on macOS, Linux, and Windows
|
||||
|
||||
* 🆓 **MIT-licensed & open-source** -- free for personal or commercial use
|
||||
|
||||
|
||||
---
|
||||
|
||||
## <img src="./.github/5-minute-tour-256.png" alt="5-minute tour" width="28" height="28" align="center"> Five‑minute tour
|
||||
```bash
|
||||
# 1. Make sure you have Backlog.md installed (global installation recommended)
|
||||
bun i -g backlog.md
|
||||
# or
|
||||
npm i -g backlog.md
|
||||
# or
|
||||
brew install backlog-md
|
||||
|
||||
# 2. Bootstrap a repo + backlog and choose the AI Agent integration mode (MCP, CLI, or skip)
|
||||
backlog init "My Awesome Project"
|
||||
|
||||
# 3. Create tasks manually
|
||||
backlog task create "Render markdown as kanban"
|
||||
|
||||
# 4. Or ask AI to create them: Claude Code, Gemini CLI, or Codex (Agents automatically use Backlog.md via MCP or CLI)
|
||||
Claude I would like to build a search functionality in the web view that searches for:
|
||||
* tasks
|
||||
* docs
|
||||
* decisions
|
||||
Please create relevant tasks to tackle this request.
|
||||
|
||||
# 5. See where you stand
|
||||
backlog board view  # or: backlog browser
|
||||
|
||||
# 6. Assign tasks to AI (Backlog.md instructions tell agents how to work with tasks)
|
||||
Claude please implement all tasks related to the web search functionality (task-10, task-11, task-12)
|
||||
* before starting to write code use 'ultrathink mode' to prepare and add an implementation plan to the task
|
||||
* use multiple sub-agents when possible and dependencies allow
|
||||
```
|
||||
|
||||
All data is saved under the `backlog` folder as human‑readable Markdown in the format `task-<task-id> - <task-title>.md` (e.g. `task-10 - Add core search functionality.md`).
|
||||
|
||||
---
|
||||
|
||||
## <img src="./.github/web-interface-256.png" alt="Web Interface" width="28" height="28" align="center"> Web Interface
|
||||
|
||||
Launch a modern, responsive web interface for visual task management:
|
||||
|
||||
```bash
|
||||
# Start the web server (opens browser automatically)
|
||||
backlog browser
|
||||
|
||||
# Custom port
|
||||
backlog browser --port 8080
|
||||
|
||||
# Don't open browser automatically
|
||||
backlog browser --no-open
|
||||
```
|
||||
|
||||
**Features:**
|
||||
- Interactive Kanban board with drag-and-drop
|
||||
- Task creation and editing with rich forms
|
||||
- Interactive acceptance criteria editor with checklists
|
||||
- Real-time updates across all views
|
||||
- Responsive design for desktop and mobile
|
||||
- Task archiving with confirmation dialogs
|
||||
- Seamless CLI integration - all changes sync with markdown files
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## 🔧 MCP Integration (Model Context Protocol)
|
||||
|
||||
The easiest way to connect Backlog.md to AI coding assistants like Claude Code, Codex, and Gemini CLI is via the MCP protocol.
|
||||
You can run `backlog init` (even if you already initialized Backlog.md) to set up MCP integration automatically, or follow the manual steps below.
|
||||
|
||||
### Client guides
|
||||
|
||||
> [!IMPORTANT]
|
||||
> When adding the MCP server manually, you should add some extra instructions in your CLAUDE.md/AGENTS.md files to inform the agent about Backlog.md.
|
||||
> This step is not required when using `backlog init` as it adds these instructions automatically.
|
||||
> Backlog.md's instructions for agents are available at [`/src/guidelines/mcp/agent-nudge.md`](/src/guidelines/mcp/agent-nudge.md).
|
||||
|
||||
<details>
|
||||
<summary><strong>Claude Code</strong></summary>
|
||||
|
||||
```bash
|
||||
claude mcp add backlog --scope user -- backlog mcp start
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary><strong>Codex</strong></summary>
|
||||
|
||||
```bash
|
||||
codex mcp add backlog backlog mcp start
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary><strong>Gemini CLI</strong></summary>
|
||||
|
||||
```bash
|
||||
gemini mcp add backlog -s user backlog mcp start
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
Use the shared `backlog` server name everywhere – the MCP server auto-detects whether the current directory is initialized and falls back to `backlog://init-required` when needed.
|
||||
|
||||
### Manual config
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"backlog": {
|
||||
"command": "backlog",
|
||||
"args": ["mcp", "start"]
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Once connected, agents can read the Backlog.md workflow instructions via the resource `backlog://docs/task-workflow`.
|
||||
Use the `/mcp` command in your AI tool (Claude Code, Codex) to verify that the connection is working.
|
||||
|
||||
---
|
||||
|
||||
## <img src="./.github/cli-reference-256.png" alt="CLI Reference" width="28" height="28" align="center"> CLI reference
|
||||
|
||||
### Project Setup
|
||||
|
||||
| Action | Example |
|
||||
|-------------|------------------------------------------------------|
|
||||
| Initialize project | `backlog init [project-name]` (creates backlog structure with a minimal interactive flow) |
|
||||
| Re-initialize | `backlog init` (preserves existing config, allows updates) |
|
||||
| Advanced settings wizard | `backlog config` (no args) — launches the full interactive configuration flow |
|
||||
|
||||
`backlog init` keeps first-run setup focused on the essentials:
|
||||
- **Project name** – identifier for your backlog (defaults to the current directory on re-run).
|
||||
- **Integration choice** – decide whether your AI tools connect through the **MCP connector** (recommended) or stick with **CLI commands (legacy)**.
|
||||
- **Instruction files (CLI path only)** – when you choose the legacy CLI flow, pick which instruction files to create (CLAUDE.md, AGENTS.md, GEMINI.md, Copilot, or skip).
|
||||
- **Advanced settings prompt** – default answer “No” finishes init immediately; choosing “Yes” jumps straight into the advanced wizard documented in [Configuration](#configuration).
|
||||
|
||||
You can rerun the wizard anytime with `backlog config`. All existing CLI flags (for example `--defaults`, `--agent-instructions`, or `--install-claude-agent true`) continue to provide fully non-interactive setups, so existing scripts keep working without change.
|
||||
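For example, a scripted setup can skip all prompts entirely (a minimal sketch using the `--defaults` flag mentioned above):

```bash
# Non-interactive bootstrap, e.g. for CI or dotfile scripts
backlog init "My Awesome Project" --defaults
```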
|
||||
### Documentation
|
||||
|
||||
- Document IDs are global across all subdirectories under `backlog/docs`. You can organize files in nested folders (e.g., `backlog/docs/guides/`), and `backlog doc list` and `backlog doc view <id>` work across the entire tree. Example: `backlog doc create -p guides "New Guide"`.
|
||||
|
||||
### Task Management
|
||||
|
||||
| Action | Example |
|
||||
|-------------|------------------------------------------------------|
|
||||
| Create task | `backlog task create "Add OAuth System"` |
|
||||
| Create with description | `backlog task create "Feature" -d "Add authentication system"` |
|
||||
| Create with assignee | `backlog task create "Feature" -a @sara` |
|
||||
| Create with status | `backlog task create "Feature" -s "In Progress"` |
|
||||
| Create with labels | `backlog task create "Feature" -l auth,backend` |
|
||||
| Create with priority | `backlog task create "Feature" --priority high` |
|
||||
| Create with plan | `backlog task create "Feature" --plan "1. Research\n2. Implement"` |
|
||||
| Create with AC | `backlog task create "Feature" --ac "Must work,Must be tested"` |
|
||||
| Create with notes | `backlog task create "Feature" --notes "Started initial research"` |
|
||||
| Create with deps | `backlog task create "Feature" --dep task-1,task-2` |
|
||||
| Create sub task | `backlog task create -p 14 "Add Login with Google"`|
|
||||
| Create (all options) | `backlog task create "Feature" -d "Description" -a @sara -s "To Do" -l auth --priority high --ac "Must work" --notes "Initial setup done" --dep task-1 -p 14` |
|
||||
| List tasks | `backlog task list [-s <status>] [-a <assignee>] [-p <parent>]` |
|
||||
| List by parent | `backlog task list --parent 42` or `backlog task list -p task-42` |
|
||||
| View detail | `backlog task 7` (interactive UI, press 'E' to edit in editor) |
|
||||
| View (AI mode) | `backlog task 7 --plain` |
|
||||
| Edit | `backlog task edit 7 -a @sara -l auth,backend` |
|
||||
| Add plan | `backlog task edit 7 --plan "Implementation approach"` |
|
||||
| Add AC | `backlog task edit 7 --ac "New criterion" --ac "Another one"` |
|
||||
| Remove AC | `backlog task edit 7 --remove-ac 2` (removes AC #2) |
|
||||
| Remove multiple ACs | `backlog task edit 7 --remove-ac 2 --remove-ac 4` (removes AC #2 and #4) |
|
||||
| Check AC | `backlog task edit 7 --check-ac 1` (marks AC #1 as done) |
|
||||
| Check multiple ACs | `backlog task edit 7 --check-ac 1 --check-ac 3` (marks AC #1 and #3 as done) |
|
||||
| Uncheck AC | `backlog task edit 7 --uncheck-ac 3` (marks AC #3 as not done) |
|
||||
| Mixed AC operations | `backlog task edit 7 --check-ac 1 --uncheck-ac 2 --remove-ac 4` |
|
||||
| Add notes | `backlog task edit 7 --notes "Completed X, working on Y"` (replaces existing) |
|
||||
| Append notes | `backlog task edit 7 --append-notes "New findings"` |
|
||||
| Add deps | `backlog task edit 7 --dep task-1 --dep task-2` |
|
||||
| Archive | `backlog task archive 7` |
|
||||
|
||||
#### Multi‑line input (description/plan/notes)
|
||||
|
||||
The CLI preserves input literally; `\n` sequences are not auto‑converted. Use one of the following to insert real newlines:
|
||||
|
||||
- **Bash/Zsh (ANSI‑C quoting)**
|
||||
- Description: `backlog task create "Feature" --desc $'Line1\nLine2\n\nFinal paragraph'`
|
||||
- Plan: `backlog task edit 7 --plan $'1. Research\n2. Implement'`
|
||||
- Notes: `backlog task edit 7 --notes $'Completed A\nWorking on B'`
|
||||
- Append notes: `backlog task edit 7 --append-notes $'Added X\nAdded Y'`
|
||||
- **POSIX sh (printf)**
|
||||
- `backlog task create "Feature" --desc "$(printf 'Line1\nLine2\n\nFinal paragraph')"`
|
||||
- **PowerShell (backtick)**
|
||||
- ``backlog task create "Feature" --desc "Line1`nLine2`n`nFinal paragraph"``
|
||||
|
||||
Tip: Help text shows Bash examples with escaped `\\n` for readability; when typing, `$'\n'` expands to a newline.
|
||||
|
||||
### Search
|
||||
|
||||
Find tasks, documents, and decisions across your entire backlog with fuzzy search:
|
||||
|
||||
| Action | Example |
|
||||
|--------------------|------------------------------------------------------|
|
||||
| Search tasks | `backlog search "auth"` |
|
||||
| Filter by status | `backlog search "api" --status "In Progress"` |
|
||||
| Filter by priority | `backlog search "bug" --priority high` |
|
||||
| Combine filters | `backlog search "web" --status "To Do" --priority medium` |
|
||||
| Plain text output | `backlog search "feature" --plain` (for scripts/AI) |
|
||||
|
||||
**Search features:**
|
||||
- **Fuzzy matching** -- finds "authentication" when searching for "auth"
|
||||
- **Interactive filters** -- refine your search in real-time with the TUI
|
||||
- **Live filtering** -- see results update as you type (no Enter needed)
|
||||
|
||||
### Draft Workflow
|
||||
|
||||
| Action | Example |
|
||||
|-------------|------------------------------------------------------|
|
||||
| Create draft | `backlog task create "Feature" --draft` |
|
||||
| Draft flow | `backlog draft create "Spike GraphQL"` → `backlog draft promote 3.1` |
|
||||
| Demote to draft | `backlog task demote <id>` |
|
||||
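A typical end-to-end flow using the commands above (IDs are illustrative):

```bash
# Capture an idea without putting it on the board
backlog draft create "Spike GraphQL"

# Promote it to a real task once it is ready
backlog draft promote 3.1

# Or push an existing task back to drafts
backlog task demote 7
```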
|
||||
### Dependency Management
|
||||
|
||||
Manage task dependencies to create execution sequences and prevent circular relationships:
|
||||
|
||||
| Action | Example |
|
||||
|-------------|------------------------------------------------------|
|
||||
| Add dependencies | `backlog task edit 7 --dep task-1 --dep task-2` |
|
||||
| Add multiple deps | `backlog task edit 7 --dep task-1,task-5,task-9` |
|
||||
| Create with deps | `backlog task create "Feature" --dep task-1,task-2` |
|
||||
| View dependencies | `backlog task 7` (shows dependencies in task view) |
|
||||
| Validate dependencies | Dependencies are validated automatically on `task create` and `task edit` |
|
||||
|
||||
**Dependency Features:**
|
||||
- **Automatic validation**: Prevents circular dependencies and validates task existence
|
||||
- **Flexible formats**: Use `task-1`, `1`, or comma-separated lists like `1,2,3` (see the example after this list)
|
||||
- **Visual sequences**: Dependencies create visual execution sequences in board view
|
||||
- **Completion tracking**: See which dependencies are blocking task progress
|
||||
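To make the accepted formats concrete (task IDs are illustrative):

```bash
# Equivalent ways to declare the same two dependencies
backlog task edit 7 --dep task-1 --dep task-2
backlog task edit 7 --dep 1,2
```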
|
||||
### Board Operations
|
||||
|
||||
| Action | Example |
|
||||
|-------------|------------------------------------------------------|
|
||||
| Kanban board | `backlog board` (interactive UI, press 'E' to edit in editor) |
|
||||
| Export board | `backlog board export [file]` (exports Kanban board to markdown) |
|
||||
| Export with version | `backlog board export --export-version "v1.0.0"` (includes version in export) |
|
||||
|
||||
### Statistics & Overview
|
||||
|
||||
| Action | Example |
|
||||
|-------------|------------------------------------------------------|
|
||||
| Project overview | `backlog overview` (interactive TUI showing project statistics) |
|
||||
|
||||
### Web Interface
|
||||
|
||||
| Action | Example |
|
||||
|-------------|------------------------------------------------------|
|
||||
| Web interface | `backlog browser` (launches web UI on port 6420) |
|
||||
| Web custom port | `backlog browser --port 8080 --no-open` |
|
||||
|
||||
### Documentation
|
||||
|
||||
| Action | Example |
|
||||
|-------------|------------------------------------------------------|
|
||||
| Create doc | `backlog doc create "API Guidelines"` |
|
||||
| Create with path | `backlog doc create "Setup Guide" -p guides/setup` |
|
||||
| Create with type | `backlog doc create "Architecture" -t technical` |
|
||||
| List docs | `backlog doc list` |
|
||||
| View doc | `backlog doc view doc-1` |
|
||||
|
||||
### Decisions
|
||||
|
||||
| Action | Example |
|
||||
|-------------|------------------------------------------------------|
|
||||
| Create decision | `backlog decision create "Use PostgreSQL for primary database"` |
|
||||
| Create with status | `backlog decision create "Migrate to TypeScript" -s proposed` |
|
||||
|
||||
### Agent Instructions
|
||||
|
||||
| Action | Example |
|
||||
|-------------|------------------------------------------------------|
|
||||
| Update agent files | `backlog agents --update-instructions` (updates CLAUDE.md, AGENTS.md, GEMINI.md, .github/copilot-instructions.md) |
|
||||
|
||||
### Maintenance
|
||||
|
||||
| Action | Example |
|
||||
|-------------|------------------------------------------------------|
|
||||
| Cleanup done tasks | `backlog cleanup` (move old completed tasks to completed folder) |
|
||||
|
||||
Full help: `backlog --help`
|
||||
|
||||
---
|
||||
|
||||
## <img src="./.github/configuration-256.png" alt="Configuration" width="28" height="28" align="center"> Configuration
|
||||
|
||||
Backlog.md merges the following layers (highest → lowest; see the example below):
|
||||
|
||||
1. CLI flags
|
||||
2. `backlog/config.yml` (per‑project)
|
||||
3. `~/backlog/user` (per‑user)
|
||||
4. Built‑ins
|
||||
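For example, a CLI flag overrides the per‑project config for a single run:

```bash
# backlog/config.yml may set defaultPort: 6420, but the flag wins here
backlog browser --port 8080
```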
|
||||
### Configuration Commands
|
||||
|
||||
| Action | Example |
|
||||
|-------------|------------------------------------------------------|
|
||||
| View all configs | `backlog config list` |
|
||||
| Get specific config | `backlog config get defaultEditor` |
|
||||
| Set config value | `backlog config set defaultEditor "code --wait"` |
|
||||
| Enable auto-commit | `backlog config set autoCommit true` |
|
||||
| Bypass git hooks | `backlog config set bypassGitHooks true` |
|
||||
| Enable cross-branch check | `backlog config set checkActiveBranches true` |
|
||||
| Set active branch days | `backlog config set activeBranchDays 30` |
|
||||
|
||||
### Interactive wizard (`backlog config`)
|
||||
|
||||
Run `backlog config` with no arguments to launch the full interactive wizard. This is the same experience triggered from `backlog init` when you opt into advanced settings, and it walks through the complete configuration surface:
|
||||
- Cross-branch accuracy: `checkActiveBranches`, `remoteOperations`, and `activeBranchDays`.
|
||||
- Git workflow: `autoCommit` and `bypassGitHooks`.
|
||||
- ID formatting: enable or size `zeroPaddedIds`.
|
||||
- Editor integration: pick a `defaultEditor` with availability checks.
|
||||
- Web UI defaults: choose `defaultPort` and whether `autoOpenBrowser` should run.
|
||||
|
||||
Skipping the wizard (answering “No” during init) applies the safe defaults that ship with Backlog.md:
|
||||
- `checkActiveBranches=true`, `remoteOperations=true`, `activeBranchDays=30`.
|
||||
- `autoCommit=false`, `bypassGitHooks=false`.
|
||||
- `zeroPaddedIds` disabled.
|
||||
- `defaultEditor` unset (falls back to your environment).
|
||||
- `defaultPort=6420`, `autoOpenBrowser=true`.
|
||||
|
||||
Whenever you revisit `backlog init` or rerun `backlog config`, the wizard pre-populates prompts with your current values so you can adjust only what changed.
|
||||
|
||||
### Available Configuration Options
|
||||
|
||||
| Key | Purpose | Default |
|
||||
|-------------------|--------------------|-------------------------------|
|
||||
| `defaultAssignee` | Pre‑fill assignee | `[]` |
|
||||
| `defaultStatus` | First column | `To Do` |
|
||||
| `statuses` | Board columns | `[To Do, In Progress, Done]` |
|
||||
| `dateFormat` | Date/time format | `yyyy-mm-dd hh:mm` |
|
||||
| `timezonePreference` | Timezone for dates | `UTC` |
|
||||
| `includeDatetimeInDates` | Add time to new dates | `true` |
|
||||
| `defaultEditor` | Editor for 'E' key | Platform default (nano/notepad) |
|
||||
| `defaultPort` | Web UI port | `6420` |
|
||||
| `autoOpenBrowser` | Open browser automatically | `true` |
|
||||
| `remoteOperations`| Enable remote git operations | `true` |
|
||||
| `autoCommit` | Automatically commit task changes | `false` |
|
||||
| `bypassGitHooks` | Skip git hooks when committing (uses --no-verify) | `false` |
|
||||
| `zeroPaddedIds` | Pad all IDs (tasks, docs, etc.) with leading zeros | `(disabled)` |
|
||||
| `checkActiveBranches` | Check task states across active branches for accuracy | `true` |
|
||||
| `activeBranchDays` | How many days a branch is considered active | `30` |
|
||||
| `onStatusChange` | Shell command to run on status change | `(disabled)` |
|
||||
|
||||
> Editor setup guide: See [Configuring VIM and Neovim as Default Editor](backlog/docs/doc-002%20-%20Configuring-VIM-and-Neovim-as-Default-Editor.md) for configuration tips and troubleshooting interactive editors.
|
||||
|
||||
> **Note**: Set `remoteOperations: false` to work offline. This disables git fetch operations and loads tasks from local branches only, useful when working without network connectivity.
|
||||
|
||||
> **Git Control**: By default, `autoCommit` is set to `false`, giving you full control over your git history. Task operations will modify files but won't automatically commit changes. Set `autoCommit: true` if you prefer automatic commits for each task operation.
|
||||
|
||||
> **Git Hooks**: If you have pre-commit hooks (like conventional commits or linters) that interfere with backlog.md's automated commits, set `bypassGitHooks: true` to skip them using the `--no-verify` flag.
|
||||
|
||||
> **Performance**: Cross-branch checking ensures accurate task tracking across all active branches but may impact performance on large repositories. You can disable it by setting `checkActiveBranches: false` for maximum speed, or adjust `activeBranchDays` to control how far back to look for branch activity (lower values = better performance).
|
||||
|
||||
> **Status Change Callbacks**: Set `onStatusChange` to run a shell command whenever a task's status changes. Available variables: `$TASK_ID`, `$OLD_STATUS`, `$NEW_STATUS`, `$TASK_TITLE`. Per-task override via `onStatusChange` in task frontmatter. Example: `'if [ "$NEW_STATUS" = "In Progress" ]; then claude "Task $TASK_ID ($TASK_TITLE) has been assigned to you. Please implement it." & fi'`
|
||||
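For instance, a simple audit hook could be registered like this (the log file name is illustrative):

```bash
backlog config set onStatusChange 'echo "$TASK_ID: $OLD_STATUS -> $NEW_STATUS ($TASK_TITLE)" >> .backlog-status.log'
```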
|
||||
> **Date/Time Support**: Backlog.md now supports datetime precision for all dates. New items automatically include time (YYYY-MM-DD HH:mm format in UTC), while existing date-only entries remain unchanged for backward compatibility. Use the migration script `bun src/scripts/migrate-dates.ts` to optionally add time to existing items.
|
||||
|
||||
---
|
||||
|
||||
## 💡 Shell Tab Completion
|
||||
|
||||
Backlog.md includes built-in intelligent tab completion for bash, zsh, and fish shells. Completion scripts are embedded in the binary—no external files needed.
|
||||
|
||||
**Quick Installation:**
|
||||
```bash
|
||||
# Auto-detect and install for your current shell
|
||||
backlog completion install
|
||||
|
||||
# Or specify shell explicitly
|
||||
backlog completion install --shell bash
|
||||
backlog completion install --shell zsh
|
||||
backlog completion install --shell fish
|
||||
```
|
||||
|
||||
**What you get:**
|
||||
- Command completion: `backlog <TAB>` → shows all commands
|
||||
- Dynamic task IDs: `backlog task edit <TAB>` → shows actual task IDs from your backlog
|
||||
- Smart flags: `--status <TAB>` → shows configured status values
|
||||
- Context-aware suggestions for priorities, labels, and assignees
|
||||
|
||||
📖 **Full documentation**: See [completions/README.md](completions/README.md) for detailed installation instructions, troubleshooting, and examples.
|
||||
|
||||
---
|
||||
|
||||
## <img src="./.github/sharing-export-256.png" alt="Sharing & Export" width="28" height="28" align="center"> Sharing & Export
|
||||
|
||||
### Board Export
|
||||
|
||||
Export your Kanban board to a clean, shareable markdown file:
|
||||
|
||||
```bash
|
||||
# Export to default Backlog.md file
|
||||
backlog board export
|
||||
|
||||
# Export to custom file
|
||||
backlog board export project-status.md
|
||||
|
||||
# Force overwrite existing file
|
||||
backlog board export --force
|
||||
|
||||
# Export to README.md with board markers
|
||||
backlog board export --readme
|
||||
|
||||
# Include a custom version string in the export
|
||||
backlog board export --export-version "v1.2.3"
|
||||
backlog board export --readme --export-version "Release 2024.12.1-beta"
|
||||
```
|
||||
|
||||
Perfect for sharing project status, creating reports, or storing snapshots in version control.
|
||||
|
||||
---
|
||||
|
||||
<!-- BOARD_START -->
|
||||
|
||||
## 📊 Backlog.md Project Status (v1.26.0)
|
||||
|
||||
This board was automatically generated by [Backlog.md](https://backlog.md)
|
||||
|
||||
Generated on: 2025-12-03 22:22:53
|
||||
|
||||
| To Do | In Progress | Done |
|
||||
| --- | --- | --- |
|
||||
| **TASK-310** - Strengthen Backlog workflow overview emphasis on reading detailed guides [@codex] | └─ **TASK-24.1** - CLI: Kanban board milestone view [@codex] | **TASK-309** - Improve TUI empty state when task filters return no results [@codex] |
|
||||
| **TASK-270** - Prevent command substitution in task creation inputs [@codex] | | **TASK-333** - Keep cross-branch tasks out of plain CLI/MCP listings [@codex]<br>*#cli #mcp #bug* |
|
||||
| **TASK-268** - Show agent instruction version status [@codex] | | **TASK-332** - Unify CLI task list/board loading and view switching UX [@codex]<br>*#cli #ux #loading* |
|
||||
| **TASK-267** - Add agent instruction version metadata [@codex] | | **TASK-331** - Fix content store refresh dropping cross-branch tasks [@codex]<br>*#bug #content-store* |
|
||||
| **TASK-260** - Web UI: Add filtering to All Tasks view [@codex]<br>*#web-ui #filters #ui* | | **TASK-330** - Fix browser/CLI sync issue when reordering cross-branch tasks<br>*#bug #browser* |
|
||||
| **TASK-259** - Add task list filters for Status and Priority<br>*#tui #filters #ui* | | **TASK-328** - Make filename sanitization stricter by default [@codex]<br>*#feature* |
|
||||
| **TASK-257** - Deep link URLs for tasks in board and list views | | **TASK-327** - Fix loadTaskById to search remote branches<br>*#bug #task-loading #cross-branch* |
|
||||
| **TASK-200** - Add Claude Code integration with workflow commands during init<br>*#enhancement #developer-experience* | | **TASK-326** - Add local branch task discovery to board loading<br>*#bug #task-loading #cross-branch* |
|
||||
| **TASK-218** - Update documentation and tests for sequences<br>*#sequences #documentation #testing* | | **TASK-324** - Add browser UI initialization flow for uninitialized projects<br>*#enhancement #browser #ux* |
|
||||
| **TASK-217** - Create web UI for sequences with drag-and-drop<br>*#sequences #web-ui #frontend* | | **TASK-289** - Implement resource templates list handler to return empty list instead of error [@codex]<br>*#mcp #enhancement* |
|
||||
| └─ **TASK-217.03** - Sequences web UI: move tasks and update dependencies<br>*#sequences* | | **TASK-280** - Fix TUI task list selection and detail pane synchronization bug [@codex]<br>*#bug #tui* |
|
||||
| └─ **TASK-217.04** - Sequences web UI: tests<br>*#sequences* | | **TASK-273** - Refactor search [@codex]<br>*#core #search* |
|
||||
| └─ **TASK-217.02** - Sequences web UI: list sequences<br>*#sequences* | | **TASK-322** - Fix flake.nix for devenv compatibility<br>*#nix #bug-fix* |
|
||||
| **TASK-240** - Improve binary resolution on Apple Silicon (Rosetta/arch mismatch) [@codex]<br>*#packaging #bug #macos* | | **TASK-321** - Status change callbacks in task frontmatter [@codex] |
|
||||
| **TASK-239** - Feature: Auto-link tasks to documents/decisions + backlinks [@codex]<br>*#web #enhancement #docs* | | **TASK-320** - Refactor and fix move mode implementation [@claude]<br>*#bug #tui #high-priority* |
|
||||
| **TASK-222** - Improve task and subtask visualization in web UI | | **TASK-318** - Fix editor stdio inheritance for interactive editors (vim/neovim) [@samvincent]<br>*#bug #editor #vim* |
|
||||
| **TASK-208** - Add paste-as-markdown support in Web UI<br>*#web-ui #enhancement #markdown* | | |
|
||||
|
||||
<!-- BOARD_END -->
|
||||
|
||||
### License
|
||||
|
||||
Backlog.md is released under the **MIT License** – do anything, just give credit. See [LICENSE](LICENSE).
|
||||
|
|
@ -0,0 +1,139 @@
|
|||
# Kanban Board Export (powered by Backlog.md)
|
||||
Generated on: 2025-07-12 18:27:55
|
||||
Project: Backlog.md
|
||||
|
||||
| To Do | In Progress | Done |
|
||||
| --- | --- | --- |
|
||||
| **task-172** - Order tasks by status and ID in both web and CLI lists (Assignees: none, Labels: none) | **└─ task-24.1** - CLI: Kanban board milestone view (Assignees: @codex, Labels: none) | **task-173** - Add CLI command to export Kanban board to markdown (Assignees: @claude, Labels: none) |
|
||||
| **task-171** - Implement drafts list functionality in CLI and web UI (Assignees: none, Labels: none) | | **task-169** - Fix browser and board crashes (Assignees: @claude, Labels: none) |
|
||||
| **task-116** - Add dark mode toggle to web UI (Assignees: none, Labels: none) | | **task-168** - Fix editor integration issues with vim/nano (Assignees: @claude, Labels: none) |
|
||||
| | | **task-167** - Add --notes option to task create command (Assignees: @claude, Labels: none) |
|
||||
| | | **task-166** - Audit and fix autoCommit behavior across all commands (Assignees: none, Labels: bug, config) |
|
||||
| | | **task-165** - Fix BUN_OPTIONS environment variable conflict (Assignees: none, Labels: bug) |
|
||||
| | | **task-164** - Add auto_commit config option with default false (Assignees: none, Labels: enhancement, config) |
|
||||
| | | **task-163** - Fix intermittent git failure in task edit (Assignees: none, Labels: bug) |
|
||||
| | | **task-120** - Add offline mode configuration for remote operations (Assignees: none, Labels: enhancement, offline, config) |
|
||||
| | | **task-119** - Add documentation and decisions pages to web UI (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-119.1** - Fix comprehensive test suite for data model consistency (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-119.2** - Core architecture improvements and ID generation enhancements (Assignees: none, Labels: none) |
|
||||
| | | **task-118** - Add side navigation menu to web UI (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-118.1** - UI/UX improvements and responsive design enhancements (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-118.2** - Implement health check API endpoint for web UI monitoring (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-118.3** - Advanced search and navigation features beyond basic requirements (Assignees: none, Labels: none) |
|
||||
| | | **task-115** - Add live health check system to web UI (Assignees: none, Labels: none) |
|
||||
| | | **task-114** - cli: filter task list by parent task (Assignees: none, Labels: none) |
|
||||
| | | **task-112** - Add Tab key switching between task and kanban views with background loading (Assignees: none, Labels: none) |
|
||||
| | | **task-111** - Add editor shortcut (E) to kanban and task views (Assignees: none, Labels: none) |
|
||||
| | | **task-108** - Fix bug: Acceptance criteria removed when updating description (Assignees: none, Labels: none) |
|
||||
| | | **task-107** - Add agents --update-instructions command (Assignees: none, Labels: none) |
|
||||
| | | **task-106** - Add --desc alias for description flag (Assignees: none, Labels: none) |
|
||||
| | | **task-105** - Remove dot from .backlog folder name (Assignees: none, Labels: none) |
|
||||
| | | **task-104** - Add --notes flag to task edit command for implementation notes (Assignees: @claude, Labels: none) |
|
||||
| | | **task-101** - Show task file path in plain view (Assignees: none, Labels: none) |
|
||||
| | | **task-100** - Add embedded web server to Backlog CLI (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-100.1** - Setup React project structure with shadcn/ui (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-100.2** - Create HTTP server module (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-100.3** - Implement API endpoints (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-100.4** - Build Kanban board component (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-100.5** - Create task management components (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-100.6** - Add CLI browser command (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-100.7** - Bundle web assets into executable (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-100.8** - Add documentation and examples (Assignees: none, Labels: none) |
|
||||
| | | **task-99** - Fix loading screen border rendering and improve UX (Assignees: none, Labels: none) |
|
||||
| | | **task-98** - Invert task order in Done column only (Assignees: @Cursor, Labels: ui, enhancement) |
|
||||
| | | **task-97** - Cross-branch task ID checking and branch info (Assignees: @Cursor, Labels: none) |
|
||||
| | | **task-96** - Fix demoted task board visibility - check status across archive and drafts (Assignees: @Cursor, Labels: none) |
|
||||
| | | **task-95** - Add priority field to tasks (Assignees: @claude, Labels: enhancement) |
|
||||
| | | **task-94** - CLI: Show created task file path (Assignees: @claude, Labels: cli, enhancement) |
|
||||
| | | **task-93** - Fix Windows agent instructions file reading hang (Assignees: @claude, Labels: none) |
|
||||
| | | **task-92** - CI: Fix intermittent Windows test failures (Assignees: @claude, Labels: none) |
|
||||
| | | **task-91** - Fix Windows issues: empty task list and weird Q character (Assignees: @MrLesk, Labels: bug, windows, regression) |
|
||||
| | | **task-90** - Fix task list scrolling behavior - selector should move before scrolling (Assignees: @claude, Labels: bug, ui) |
|
||||
| | | **task-89** - Add dependency parameter for task create and edit commands (Assignees: @claude, Labels: cli, enhancement) |
|
||||
| | | **task-88** - Fix missing metadata and implementation plan in task view command (Assignees: @claude, Labels: bug, cli) |
|
||||
| | | **task-87** - Make agent guideline file updates idempotent during init (Assignees: @claude, Labels: enhancement, cli, init) |
|
||||
| | | **task-86** - Update agent guidelines to emphasize outcome-focused acceptance criteria (Assignees: none, Labels: documentation, agents) |
|
||||
| | | **task-85** - Merge and consolidate loading screen functions (Assignees: @claude, Labels: refactor, optimization) |
|
||||
| | | **task-84** - Add -ac flag for acceptance criteria in task create/edit (Assignees: @claude, Labels: enhancement, cli) |
|
||||
| | | **task-83** - Add case-insensitive status filter support (Assignees: @claude, Labels: enhancement, cli) |
|
||||
| | | **task-82** - Add --plain flag to task view command for AI agents (Assignees: @claude, Labels: none) |
|
||||
| | | **task-81** - Fix task list navigation skipping issue (Assignees: @claude, Labels: none) |
|
||||
| | | **task-80** - Preserve case in task filenames for better agent discoverability (Assignees: @AI, Labels: enhancement, ai-agents) |
|
||||
| | | **task-79** - Fix task list ordering - sort by decimal ID not string (Assignees: @AI, Labels: bug, regression) |
|
||||
| | | **task-77** - Migrate from blessed to bblessed for better Bun and Windows support (Assignees: @ai-agent, Labels: refactoring, dependencies, windows) |
|
||||
| | | **task-76** - Add Implementation Plan section (Assignees: @claude, Labels: docs, cli) |
|
||||
| | | **task-75** - Fix task selection in board view - opens wrong task (Assignees: @ai-agent, Labels: bug, ui, board) |
|
||||
| | | **task-74** - Fix TUI crash on Windows by disabling blessed tput (Assignees: @codex, Labels: bug, windows) |
|
||||
| | | **task-73** - Fix Windows binary package name resolution (Assignees: @codex, Labels: bug, windows, packaging) |
|
||||
| | | **task-72** - Fix board view on Windows without terminfo (Assignees: none, Labels: bug, windows) |
|
||||
| | | **task-71** - Fix single task view regression (Assignees: @codex, Labels: none) |
|
||||
| | | **task-70** - CI: eliminate extra binary download (Assignees: @codex, Labels: ci, packaging) |
|
||||
| | | **task-69** - CLI: start tasks IDs at 1 (Assignees: @codex, Labels: none) |
|
||||
| | | **task-68** - Verify Windows binary uses .exe (Assignees: @codex, Labels: packaging) |
|
||||
| | | **task-67** - Add -p shorthand for --parent option in task create command (Assignees: none, Labels: cli, enhancement) |
|
||||
| | | **task-61** - Embed blessed in standalone binary (Assignees: @codex, Labels: cli, packaging) |
|
||||
| | | **task-59** - Simplify init command with modern CLI (Assignees: @codex, Labels: cli) |
|
||||
| | | **task-58** - Unify task list view to use task viewer component (Assignees: @codex, Labels: none) |
|
||||
| | | **task-57** - Fix version command to support -v flag and display correct version (Assignees: @codex, Labels: none) |
|
||||
| | | **task-56** - Simplify TUI blessed import (Assignees: @codex, Labels: refactor) |
|
||||
| | | **task-54** - CLI: fix init prompt colors (Assignees: @codex, Labels: bug) |
|
||||
| | | **└─ task-55** - CLI: simplify init text prompt (Assignees: @codex, Labels: bug) |
|
||||
| | | **task-53** - Fix blessed screen bug in Bun install (Assignees: @codex, Labels: bug) |
|
||||
| | | **task-52** - CLI: Filter tasks list by status or assignee (Assignees: @codex, Labels: none) |
|
||||
| | | **task-51** - Code-path styling (Assignees: none, Labels: enhancement) |
|
||||
| | | **task-50** - Borders & padding (Assignees: none, Labels: enhancement) |
|
||||
| | | **task-49** - Status styling (Assignees: Claude, Labels: enhancement) |
|
||||
| | | **task-48** - Footer hint line (Assignees: none, Labels: enhancement) |
|
||||
| | | **task-47** - Sticky header in detail view (Assignees: none, Labels: enhancement) |
|
||||
| | | **task-46** - Split-pane layout (Assignees: none, Labels: enhancement) |
|
||||
| | | **task-45** - Safe line-wrapping (Assignees: none, Labels: enhancement) |
|
||||
| | | **task-44** - Checklist alignment (Assignees: none, Labels: ui, enhancement) |
|
||||
| | | **task-43** - Remove duplicate Acceptance Criteria and style metadata (Assignees: none, Labels: ui, enhancement) |
|
||||
| | | **task-42** - Visual hierarchy (Assignees: none, Labels: ui, enhancement) |
|
||||
| | | **task-41** - CLI: Migrate terminal UI to bblessed (Assignees: Claude, Labels: cli) |
|
||||
| | | **└─ task-41.1** - CLI: bblessed init wizard (Assignees: Claude, Labels: cli) |
|
||||
| | | **└─ task-41.2** - CLI: bblessed task view (Assignees: Claude, Labels: cli) |
|
||||
| | | **└─ task-41.3** - CLI: bblessed doc view (Assignees: Claude, Labels: cli) |
|
||||
| | | **└─ task-41.4** - CLI: bblessed board view (Assignees: Claude, Labels: cli) |
|
||||
| | | **└─ task-41.5** - CLI: audit remaining UI for bblessed (Assignees: Claude, Labels: cli) |
|
||||
| | | **task-40** - CLI: Board command defaults to view (Assignees: @codex, Labels: cli) |
|
||||
| | | **task-39** - CLI: fix empty agent instruction files on init (Assignees: @codex, Labels: cli, bug) |
|
||||
| | | **task-38** - CLI: Improved Agent Selection for Init (Assignees: @AI, Labels: none) |
|
||||
| | | **task-36** - CLI: Prompt for project name in init (Assignees: @codex, Labels: none) |
|
||||
| | | **task-35** - Finalize package.json metadata for publishing (Assignees: @codex, Labels: none) |
|
||||
| | | **task-34** - Split README.md for users and contributors (Assignees: @codex, Labels: docs) |
|
||||
| | | **task-32** - CLI: Hide empty 'No Status' column (Assignees: none, Labels: cli, bug) |
|
||||
| | | **task-31** - Update README for open source (Assignees: none, Labels: docs) |
|
||||
| | | **task-29** - Add GitHub templates (Assignees: none, Labels: github, docs) |
|
||||
| | | **task-27** - Add CONTRIBUTING guidelines (Assignees: none, Labels: docs, github) |
|
||||
| | | **task-25** - CLI: Export Kanban board to README (Assignees: none, Labels: none) |
|
||||
| | | **task-24** - Handle subtasks in the Kanban view (Assignees: none, Labels: none) |
|
||||
| | | **task-23** - CLI: Kanban board order tasks by ID ASC (Assignees: none, Labels: none) |
|
||||
| | | **task-22** - CLI: Prevent double dash in task filenames (Assignees: none, Labels: none) |
|
||||
| | | **task-21** - Kanban board vertical layout (Assignees: none, Labels: none) |
|
||||
| | | **task-20** - Add agent guideline to mark tasks In Progress on start (Assignees: none, Labels: agents) |
|
||||
| | | **task-19** - CLI - fix default task status and remove Draft from statuses (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-13.1** - CLI: Agent Instruction File Selection (Assignees: none, Labels: cli, agents) |
|
||||
| | | **task-7** - Kanban Board: Implement CLI Text-Based Kanban Board View (Assignees: none, Labels: cli, command) |
|
||||
| | | **└─ task-7.1** - CLI: Kanban board detect remote task status (Assignees: none, Labels: none) |
|
||||
| | | **task-6** - CLI: Argument Parsing, Help, and Packaging (Assignees: none, Labels: cli, command) |
|
||||
| | | **└─ task-6.1** - CLI: Local installation support for bunx/npx (Assignees: none, Labels: cli) |
|
||||
| | | **└─ task-6.2** - CLI: GitHub Actions for Build & Publish (Assignees: none, Labels: ci) |
|
||||
| | | **task-5** - CLI: Implement Docs & Decisions CLI Commands (Basic) (Assignees: none, Labels: cli, command) |
|
||||
| | | **task-4** - CLI: Task Management Commands (Assignees: none, Labels: cli, command) |
|
||||
| | | **└─ task-4.1** - CLI: Task Creation Commands (Assignees: @MrLesk, Labels: cli, command) |
|
||||
| | | **└─ task-4.2** - CLI: Task Listing and Viewing (Assignees: @MrLesk, Labels: cli, command) |
|
||||
| | | **└─ task-4.3** - CLI: Task Editing (Assignees: @MrLesk, Labels: cli, command) |
|
||||
| | | **└─ task-4.4** - CLI: Task Archiving and State Transitions (Assignees: @MrLesk, Labels: cli, command) |
|
||||
| | | **└─ task-4.5** - CLI: Init prompts for reporter name and global/local config (Assignees: @MrLesk, Labels: cli, config) |
|
||||
| | | **└─ task-4.6** - CLI: Add empty assignee array field for new tasks (Assignees: @MrLesk, Labels: cli, command) |
|
||||
| | | **└─ task-4.7** - CLI: Parse unquoted created_date (Assignees: @MrLesk, Labels: cli, command) |
|
||||
| | | **└─ task-4.8** - CLI: enforce description header (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-4.9** - CLI: Normalize task-id inputs (Assignees: none, Labels: cli, bug) |
|
||||
| | | **└─ task-4.10** - CLI: enforce Agents to use backlog CLI to mark tasks Done (Assignees: none, Labels: cli, agents) |
|
||||
| | | **└─ task-4.11** - Docs: add definition of done to agent guidelines (Assignees: none, Labels: docs, agents) |
|
||||
| | | **└─ task-4.12** - CLI: Handle task ID conflicts across branches (Assignees: none, Labels: none) |
|
||||
| | | **└─ task-4.13** - CLI: Fix config command local/global logic (Assignees: none, Labels: none) |
|
||||
| | | **task-3** - CLI: Implement `backlog init` Command (Assignees: @MrLesk, Labels: cli, command) |
|
||||
| | | **task-2** - CLI: Design & Implement Core Logic Library (Assignees: @MrLesk, Labels: cli, core-logic, architecture) |
|
||||
| | | **task-1** - CLI: Setup Core Project (Bun, TypeScript, Git, Linters) (Assignees: @MrLesk, Labels: cli, setup) |
|
||||
|
|
@ -0,0 +1,41 @@
|
|||
{
|
||||
"$schema": "https://biomejs.dev/schemas/2.3.8/schema.json",
|
||||
"vcs": {
|
||||
"enabled": true,
|
||||
"clientKind": "git",
|
||||
"useIgnoreFile": true
|
||||
},
|
||||
"files": {
|
||||
"ignoreUnknown": false,
|
||||
"includes": ["src/**/*.ts", "scripts/**/*.cjs", "*.json", "**/*.json", "!**/.claude"]
|
||||
},
|
||||
"formatter": {
|
||||
"enabled": true,
|
||||
"indentStyle": "tab",
|
||||
"lineWidth": 120
|
||||
},
|
||||
"assist": { "actions": { "source": { "organizeImports": "on" } } },
|
||||
"linter": {
|
||||
"enabled": true,
|
||||
"rules": {
|
||||
"recommended": true,
|
||||
"style": {
|
||||
"noParameterAssign": "error",
|
||||
"useAsConstAssertion": "error",
|
||||
"useDefaultParameterLast": "error",
|
||||
"useEnumInitializers": "error",
|
||||
"useSelfClosingElements": "error",
|
||||
"useSingleVarDeclarator": "error",
|
||||
"noUnusedTemplateLiteral": "error",
|
||||
"useNumberNamespace": "error",
|
||||
"noInferrableTypes": "error",
|
||||
"noUselessElse": "error"
|
||||
}
|
||||
}
|
||||
},
|
||||
"javascript": {
|
||||
"formatter": {
|
||||
"quoteStyle": "double"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@ -0,0 +1,9 @@
|
|||
[test]
|
||||
# Timeout for individual tests - increased for Windows compatibility
|
||||
timeout = "10s"
|
||||
|
||||
# Reduce memory usage during test runs to prevent WSL2 crashes
|
||||
smol = true
|
||||
|
||||
# Reduce concurrency to help with Windows file system contention
|
||||
# Note: This is a future-proofing setting as Bun may add concurrency controls
|
||||
|
|
@ -0,0 +1,14 @@
|
|||
# Completion Scripts Reference
|
||||
|
||||
The shell completion scripts in this directory serve as reference documentation.
|
||||
The actual scripts are embedded in the backlog binary (see src/commands/completion.ts).
|
||||
|
||||
During development, the CLI will read these files first if they exist;
|
||||
otherwise it falls back to the embedded versions.
|
||||
|
||||
Files:
|
||||
- backlog.bash - Bash completion reference
|
||||
- _backlog - Zsh completion reference
|
||||
- backlog.fish - Fish completion reference
|
||||
- README.md - Installation and usage guide
|
||||
- EXAMPLES.md - Detailed examples and how it works
|
||||
|
|
@ -0,0 +1,218 @@
|
|||
# Zsh Completion Examples
|
||||
|
||||
This document demonstrates how the zsh completion script works for the backlog CLI.
|
||||
|
||||
## How It Works
|
||||
|
||||
When you press TAB in zsh, the completion system:
|
||||
|
||||
1. Captures the current command line buffer (`$BUFFER`)
|
||||
2. Captures the cursor position (`$CURSOR`)
|
||||
3. Calls the `_backlog` completion function
|
||||
4. The function runs: `backlog completion __complete "$BUFFER" "$CURSOR"`
|
||||
5. Parses the newline-separated completions
|
||||
6. Presents them using `_describe`
|
||||
|
||||
## Example Scenarios
|
||||
|
||||
### Top-Level Commands
|
||||
|
||||
**Input:**
|
||||
```bash
|
||||
backlog <TAB>
|
||||
```
|
||||
|
||||
**What happens internally:**
|
||||
- Buffer: `"backlog "`
|
||||
- Cursor: `8` (position after the space)
|
||||
- CLI returns: `task\ndoc\nboard\nconfig\ncompletion`
|
||||
- Zsh shows: `task doc board config completion`
|
||||
|
||||
### Subcommands
|
||||
|
||||
**Input:**
|
||||
```bash
|
||||
backlog task <TAB>
|
||||
```
|
||||
|
||||
**What happens internally:**
|
||||
- Buffer: `"backlog task "`
|
||||
- Cursor: `13`
|
||||
- CLI returns: `create\nedit\nview\nlist\nsearch\narchive`
|
||||
- Zsh shows: `create edit view list search archive`
|
||||
|
||||
### Flags
|
||||
|
||||
**Input:**
|
||||
```bash
|
||||
backlog task create --<TAB>
|
||||
```
|
||||
|
||||
**What happens internally:**
|
||||
- Buffer: `"backlog task create --"`
|
||||
- Cursor: `22`
|
||||
- CLI returns: `--title\n--description\n--priority\n--status\n--assignee\n--labels`
|
||||
- Zsh shows: `--title --description --priority --status --assignee --labels`
|
||||
|
||||
### Dynamic Task ID Completion
|
||||
|
||||
**Input:**
|
||||
```bash
|
||||
backlog task edit <TAB>
|
||||
```
|
||||
|
||||
**What happens internally:**
|
||||
- Buffer: `"backlog task edit "`
|
||||
- Cursor: `18`
|
||||
- CLI scans backlog directory for tasks
|
||||
- CLI returns: `task-1\ntask-2\ntask-308\ntask-308.01\n...`
|
||||
- Zsh shows: `task-1 task-2 task-308 task-308.01 ...`
|
||||
|
||||
### Flag Value Completion
|
||||
|
||||
**Input:**
|
||||
```bash
|
||||
backlog task edit task-308 --status <TAB>
|
||||
```
|
||||
|
||||
**What happens internally:**
|
||||
- Buffer: `"backlog task edit task-308 --status "`
|
||||
- Cursor: `36`
|
||||
- CLI recognizes `--status` flag
|
||||
- CLI returns: `To Do\nIn Progress\nDone`
|
||||
- Zsh shows: `To Do In Progress Done`
|
||||
|
||||
### Partial Completion
|
||||
|
||||
**Input:**
|
||||
```bash
|
||||
backlog task cr<TAB>
|
||||
```
|
||||
|
||||
**What happens internally:**
|
||||
- Buffer: `"backlog task cr"`
|
||||
- Cursor: `15`
|
||||
- Partial word: `"cr"`
|
||||
- CLI filters subcommands starting with "cr"
|
||||
- CLI returns: `create`
|
||||
- Zsh completes to: `backlog task create`
|
||||
|
||||
## Testing the Completion
|
||||
|
||||
### Manual Testing
|
||||
|
||||
1. Load the completion:
|
||||
```bash
|
||||
source completions/_backlog
|
||||
```
|
||||
|
||||
2. Try various completions:
|
||||
```bash
|
||||
backlog <TAB>
|
||||
backlog task <TAB>
|
||||
backlog task create --<TAB>
|
||||
```
|
||||
|
||||
### Testing Without Zsh
|
||||
|
||||
You can test the backend directly:
|
||||
|
||||
```bash
|
||||
# Test top-level commands
|
||||
backlog completion __complete "backlog " 8
|
||||
|
||||
# Test subcommands
|
||||
backlog completion __complete "backlog task " 13
|
||||
|
||||
# Test with partial input
|
||||
backlog completion __complete "backlog ta" 10
|
||||
|
||||
# Test flag completion
|
||||
backlog completion __complete "backlog task create --" 22
|
||||
```
|
||||
|
||||
## Advanced Features
|
||||
|
||||
### Context-Aware Completion
|
||||
|
||||
The completion system understands context:
|
||||
|
||||
```bash
|
||||
# After --status flag, only show valid statuses
|
||||
backlog task create --status <TAB>
|
||||
# Shows: To Do, In Progress, Done
|
||||
|
||||
# After --priority flag, only show valid priorities
|
||||
backlog task create --priority <TAB>
|
||||
# Shows: high, medium, low
|
||||
|
||||
# For task ID arguments, show actual task IDs
|
||||
backlog task edit <TAB>
|
||||
# Shows: task-1, task-2, task-308, ...
|
||||
```
|
||||
|
||||
### Multi-Word Arguments
|
||||
|
||||
Zsh handles multi-word arguments automatically:
|
||||
|
||||
```bash
|
||||
backlog task create --title "My Task" --status <TAB>
|
||||
# Correctly identifies we're completing after --status
|
||||
```
|
||||
|
||||
### Error Handling
|
||||
|
||||
If the CLI fails or returns no completions:
|
||||
|
||||
```bash
|
||||
backlog nonexistent <TAB>
|
||||
# No completions shown, no error message
|
||||
# The shell stays responsive
|
||||
```
|
||||
|
||||
This is handled by:
|
||||
- `2>/dev/null` - suppresses error output
|
||||
- `return 1` - tells zsh no completions available
|
||||
- Graceful fallback to default file/directory completion
|
||||
|
||||
## Performance
|
||||
|
||||
The completion system is designed to be fast:
|
||||
|
||||
- Completions are generated on-demand
|
||||
- Results are not cached (always current)
|
||||
- CLI execution is optimized for quick response
|
||||
- Typical completion time: < 100ms
|
||||
|
||||
For large backlogs with many tasks, you may notice a slight delay when completing task IDs, but the system remains responsive.
|
||||
|
||||
## Debugging
|
||||
|
||||
If completions aren't working:
|
||||
|
||||
1. Check the function is loaded:
|
||||
```bash
|
||||
which _backlog
|
||||
# Should output the function definition
|
||||
```
|
||||
|
||||
2. Test the backend directly:
|
||||
```bash
|
||||
backlog completion __complete "backlog " 8
|
||||
# Should output: task, doc, board, config, completion
|
||||
```
|
||||
|
||||
3. Enable zsh completion debugging:
|
||||
```bash
|
||||
zstyle ':completion:*' verbose yes
|
||||
zstyle ':completion:*' format 'Completing %d'
|
||||
```
|
||||
|
||||
4. Check for errors:
|
||||
```bash
|
||||
# Remove 2>/dev/null temporarily to see errors
|
||||
_backlog() {
|
||||
local completions=(${(f)"$(backlog completion __complete "$BUFFER" "$CURSOR")"})
|
||||
_describe 'backlog commands' completions
|
||||
}
|
||||
```
|
||||
|
@ -0,0 +1,235 @@

# Shell Completion Scripts

**Note**: The completion scripts are embedded in the compiled `backlog` binary. These files serve as reference documentation and are used during development (the CLI reads them first if available, otherwise it uses the embedded versions).

## Available Shells

### Zsh

**File**: `_backlog`

**Installation**:

1. **Automatic** (recommended):

   ```bash
   backlog completion install --shell zsh
   ```

2. **Manual**:

   ```bash
   # Copy to a directory in your $fpath
   sudo cp _backlog /usr/local/share/zsh/site-functions/_backlog

   # Or add to your custom completions directory
   mkdir -p ~/.zsh/completions
   cp _backlog ~/.zsh/completions/_backlog

   # Add to ~/.zshrc if not already present:
   fpath=(~/.zsh/completions $fpath)
   autoload -Uz compinit && compinit
   ```

3. **Testing without installation**:

   ```bash
   # In your current zsh session
   fpath=(./completions $fpath)
   autoload -Uz compinit && compinit
   ```

**Verification**:

```bash
# Type and press TAB
backlog <TAB>
backlog task <TAB>
```

### Bash

**File**: `backlog.bash`

**Installation**:

1. **Automatic** (recommended):

   ```bash
   backlog completion install --shell bash
   ```

2. **Manual**:

   ```bash
   # Copy to the bash-completion directory
   sudo cp backlog.bash /etc/bash_completion.d/backlog

   # Or source it in ~/.bashrc
   echo "source /path/to/backlog.bash" >> ~/.bashrc
   source ~/.bashrc
   ```

3. **Testing without installation**:

   ```bash
   # In your current bash session
   source ./completions/backlog.bash
   ```

**Verification**:

```bash
# Type and press TAB
backlog <TAB>
backlog task <TAB>
```

### Fish

**File**: `backlog.fish`

**Installation**:

1. **Automatic** (recommended):

   ```bash
   backlog completion install --shell fish
   ```

2. **Manual**:

   ```bash
   # Copy to the fish completions directory
   cp backlog.fish ~/.config/fish/completions/backlog.fish

   # Completions are automatically loaded in new fish sessions
   ```

3. **Testing without installation**:

   ```bash
   # In your current fish session
   source ./completions/backlog.fish
   ```

**Verification**:

```bash
# Type and press TAB
backlog <TAB>
backlog task <TAB>
```

## How It Works

All completion scripts use the same backend:

1. The shell calls the completion function when TAB is pressed.
2. The completion function invokes `backlog completion __complete "$BUFFER" "$CURSOR"`.
3. The CLI returns a newline-separated list of completions.
4. The shell presents these completions to the user.

This architecture provides:

- **Dynamic completions**: Task IDs, labels, and statuses are read from actual data
- **Consistent behavior**: All shells use the same completion logic
- **Easy maintenance**: The completion logic is updated once, in TypeScript
- **Embedded scripts**: Scripts are built into the binary, so no external files are needed
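
Concretely, the `__complete` backend reads the buffer and cursor from its arguments and prints one candidate per line. The sketch below illustrates only that contract; the function body and the hard-coded command list are simplified stand-ins, not the actual logic in `/src/completions/helper.ts`:

```typescript
// Hypothetical sketch of the __complete contract (not the real implementation).
// Input: the full command-line buffer and the cursor position.
// Output: one completion candidate per line on stdout.
function complete(buffer: string, cursor: number): string[] {
	// Only the text before the cursor matters for completion.
	const words = buffer.slice(0, cursor).split(/\s+/).filter(Boolean);
	// Right after the bare command, offer the top-level subcommands.
	// The real backend consults command-structure.ts and data-providers.ts here.
	if (words.length <= 1) {
		return ["task", "doc", "board", "config", "completion"];
	}
	return [];
}

const [buffer = "", cursor = "0"] = process.argv.slice(2);
console.log(complete(buffer, Number(cursor)).join("\n"));
```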

## Development

### Testing Completions

**Zsh**:

```bash
# Run automated tests
zsh _backlog.test.zsh

# Or manually verify
zsh
source _backlog
which _backlog
```

**Bash**:

```bash
# Manually verify
bash
source backlog.bash
complete -p backlog
```

**Fish**:

```bash
# Run automated tests
fish backlog.test.fish

# Or manually verify
fish
source backlog.fish
complete -C'backlog '
```

### Adding New Completions

Completions are generated by:

- `/src/completions/helper.ts` - Main completion logic
- `/src/completions/command-structure.ts` - Command parsing
- `/src/completions/data-providers.ts` - Dynamic data such as task IDs and labels (see the sketch after these steps)
- `/src/commands/completion.ts` - Embedded shell scripts in `getEmbeddedCompletionScript()`

To update the completion scripts:

1. Edit the embedded scripts in `/src/commands/completion.ts`
2. (Optional) Update the reference files in `/completions/` for documentation
3. Rebuild: `bun run build`
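
A new dynamic completion usually boils down to a new provider function. The following is a minimal sketch under assumed names; `listTaskIds` and the exact file layout of `data-providers.ts` are illustrative, not the real API:

```typescript
// Hypothetical provider sketch: surface task IDs from backlog/tasks/ as candidates.
import { readdir } from "node:fs/promises";
import { join } from "node:path";

export async function listTaskIds(projectRoot: string): Promise<string[]> {
	try {
		const files = await readdir(join(projectRoot, "backlog", "tasks"));
		// Task files are typically named like "task-42 - Some title.md"; keep the ID part.
		return files
			.map((file) => file.match(/^(task-\d+)/)?.[1])
			.filter((id): id is string => Boolean(id));
	} catch {
		// No tasks directory yet: offer no candidates rather than failing.
		return [];
	}
}
```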

## Requirements

- **Zsh**: version 5.x or higher
- **Bash**: version 4.x or higher
- **Fish**: version 3.x or higher

## Troubleshooting

### Completions not working

1. Verify the CLI is in your PATH:

   ```bash
   which backlog
   ```

2. Check that the completion function is loaded:

   ```bash
   # Zsh
   which _backlog

   # Bash
   complete -p backlog

   # Fish
   complete -C'backlog '
   ```

3. Test the completion backend directly:

   ```bash
   backlog completion __complete "backlog task " 13
   ```

   This should output the available subcommands for `backlog task`.

4. Reload your shell configuration:

   ```bash
   # Zsh
   exec zsh

   # Bash
   exec bash

   # Fish
   exec fish
   ```

### Slow completions

If completions feel slow, likely causes are:

- A large number of tasks/documents in your backlog
- Network latency (if applicable)
- The first completion triggering CLI initialization

The completion system is designed to be fast, but with very large datasets you may notice a slight delay.

## Contributing

When adding new completion features:

1. Update the backend in `/src/completions/`
2. Test with `backlog completion __complete`
3. Verify that each shell script still works
4. Update this README if behavior changes

@ -0,0 +1,35 @@

#compdef backlog

# Zsh completion script for backlog CLI
#
# NOTE: This script is embedded in the backlog binary and installed automatically
# via 'backlog completion install'. This file serves as reference documentation.
#
# Installation:
#   - Recommended: backlog completion install --shell zsh
#   - Manual: Copy this file to a directory in your $fpath and run compinit

_backlog() {
  # Get the current command line buffer and cursor position
  local line=$BUFFER
  local point=$CURSOR

  # Call the backlog completion command to get dynamic completions
  # The __complete command returns one completion per line
  local -a completions
  completions=(${(f)"$(backlog completion __complete "$line" "$point" 2>/dev/null)"})

  # Check if we got any completions
  if (( ${#completions[@]} == 0 )); then
    # No completions available
    return 1
  fi

  # Present the completions to the user
  # _describe shows completions with optional descriptions
  # The first argument is the tag name shown in completion groups
  _describe 'backlog commands' completions
}

# Register the completion function for the backlog command
compdef _backlog backlog

@ -0,0 +1,62 @@

#!/usr/bin/env bash
# Bash completion script for backlog CLI
#
# NOTE: This script is embedded in the backlog binary and installed automatically
# via 'backlog completion install'. This file serves as reference documentation.
#
# Installation:
#   - Recommended: backlog completion install --shell bash
#   - Manual: Copy to /etc/bash_completion.d/backlog
#   - Or source directly in ~/.bashrc: source /path/to/backlog.bash
#
# Requirements:
#   - Bash 4.x or 5.x
#   - bash-completion package (optional but recommended)

# Main completion function for backlog CLI
_backlog() {
  # Initialize completion variables using bash-completion helper if available
  # Falls back to manual initialization if bash-completion is not installed
  local cur prev words cword
  if declare -F _init_completion >/dev/null 2>&1; then
    _init_completion || return
  else
    # Manual initialization fallback
    COMPREPLY=()
    cur="${COMP_WORDS[COMP_CWORD]}"
    prev="${COMP_WORDS[COMP_CWORD-1]}"
    words=("${COMP_WORDS[@]}")
    cword=$COMP_CWORD
  fi

  # Get the full command line and cursor position
  local line="${COMP_LINE}"
  local point="${COMP_POINT}"

  # Call the CLI's internal completion command
  # This delegates all completion logic to the TypeScript implementation
  # Output format: one completion per line
  local completions
  completions=$(backlog completion __complete "$line" "$point" 2>/dev/null)

  # Check if the completion command failed
  if [[ $? -ne 0 ]]; then
    # Silent failure - completion should never break the shell
    return 0
  fi

  # Generate completion replies using compgen
  # -W: wordlist - splits completions by whitespace
  # --: end of options
  # "$cur": current word being completed
  COMPREPLY=( $(compgen -W "$completions" -- "$cur") )

  # Return success
  return 0
}

# Register the completion function for the 'backlog' command
# -F: use function for completion
# _backlog: name of the completion function
# backlog: command to complete
complete -F _backlog backlog

@ -0,0 +1,38 @@

#!/usr/bin/env fish
# Fish completion script for backlog CLI
#
# NOTE: This script is embedded in the backlog binary and installed automatically
# via 'backlog completion install'. This file serves as reference documentation.
#
# Installation:
#   - Recommended: backlog completion install --shell fish
#   - Manual: Copy to ~/.config/fish/completions/backlog.fish
#
# Requirements:
#   - Fish 3.x or later

# Helper function to get completions from the CLI
# This delegates all completion logic to the TypeScript implementation
function __backlog_complete
    # Get the command line text before the cursor
    # -c: cut at cursor, -p: current process only
    set -l line (commandline -cp)

    # The cursor position equals the length of the text before the cursor
    # Fish tracks cursor position differently than bash/zsh
    set -l point (string length "$line")

    # Call the CLI's internal completion command
    # Output format: one completion per line
    # Redirect stderr to /dev/null to suppress error messages
    backlog completion __complete "$line" "$point" 2>/dev/null

    # Fish will automatically handle the exit status
    # If the command fails, no completions will be shown
end

# Register completion for the 'backlog' command
# -c: specify the command to complete
# -f: disable file completion (we handle all completions dynamically)
# -a: add completion candidates from the function output
complete -c backlog -f -a '(__backlog_complete)'

@ -0,0 +1,27 @@

services:
  backlog:
    build: .
    container_name: backlog-md
    restart: unless-stopped
    volumes:
      # Persist backlog data
      - ./backlog:/app/backlog
      - backlog-data:/app/.backlog
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.backlog.rule=Host(`backlog.jeffemmett.com`)"
      - "traefik.http.routers.backlog.entrypoints=web"
      - "traefik.http.services.backlog.loadbalancer.server.port=6420"
      - "traefik.docker.network=traefik-public"
    networks:
      - traefik-public
    environment:
      - PORT=6420
      - NODE_ENV=production

volumes:
  backlog-data:

networks:
  traefik-public:
    external: true

@ -0,0 +1,146 @@

{
  "nodes": {
    "blueprint": {
      "inputs": {
        "nixpkgs": [
          "bun2nix",
          "nixpkgs"
        ],
        "systems": [
          "bun2nix",
          "systems"
        ]
      },
      "locked": {
        "lastModified": 1744632722,
        "narHash": "sha256-0chvqUV1Kzf8BMQ7MsH3CeicJEb2HeCpwliS77FGyfc=",
        "owner": "numtide",
        "repo": "blueprint",
        "rev": "49bbd5d072b577072f4a1d07d4b0621ecce768af",
        "type": "github"
      },
      "original": {
        "owner": "numtide",
        "repo": "blueprint",
        "type": "github"
      }
    },
    "bun2nix": {
      "inputs": {
        "blueprint": "blueprint",
        "nixpkgs": [
          "nixpkgs"
        ],
        "systems": "systems",
        "treefmt-nix": "treefmt-nix"
      },
      "locked": {
        "lastModified": 1750682174,
        "narHash": "sha256-rUpcATQ0LiY8IYRndqTlPUhF4YGJH3lM2aMOs5vBDGM=",
        "owner": "baileyluTCD",
        "repo": "bun2nix",
        "rev": "85d692d68a5345d868d3bb1158b953d2996d70f7",
        "type": "github"
      },
      "original": {
        "owner": "baileyluTCD",
        "repo": "bun2nix",
        "type": "github"
      }
    },
    "flake-utils": {
      "inputs": {
        "systems": "systems_2"
      },
      "locked": {
        "lastModified": 1731533236,
        "narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
        "owner": "numtide",
        "repo": "flake-utils",
        "rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
        "type": "github"
      },
      "original": {
        "owner": "numtide",
        "repo": "flake-utils",
        "type": "github"
      }
    },
    "nixpkgs": {
      "locked": {
        "lastModified": 1752480373,
        "narHash": "sha256-JHQbm+OcGp32wAsXTE/FLYGNpb+4GLi5oTvCxwSoBOA=",
        "owner": "NixOS",
        "repo": "nixpkgs",
        "rev": "62e0f05ede1da0d54515d4ea8ce9c733f12d9f08",
        "type": "github"
      },
      "original": {
        "owner": "NixOS",
        "ref": "nixos-unstable",
        "repo": "nixpkgs",
        "type": "github"
      }
    },
    "root": {
      "inputs": {
        "bun2nix": "bun2nix",
        "flake-utils": "flake-utils",
        "nixpkgs": "nixpkgs"
      }
    },
    "systems": {
      "locked": {
        "lastModified": 1681028828,
        "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
        "owner": "nix-systems",
        "repo": "default",
        "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
        "type": "github"
      },
      "original": {
        "owner": "nix-systems",
        "repo": "default",
        "type": "github"
      }
    },
    "systems_2": {
      "locked": {
        "lastModified": 1681028828,
        "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
        "owner": "nix-systems",
        "repo": "default",
        "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
        "type": "github"
      },
      "original": {
        "owner": "nix-systems",
        "repo": "default",
        "type": "github"
      }
    },
    "treefmt-nix": {
      "inputs": {
        "nixpkgs": [
          "bun2nix",
          "nixpkgs"
        ]
      },
      "locked": {
        "lastModified": 1748243702,
        "narHash": "sha256-9YzfeN8CB6SzNPyPm2XjRRqSixDopTapaRsnTpXUEY8=",
        "owner": "numtide",
        "repo": "treefmt-nix",
        "rev": "1f3f7b784643d488ba4bf315638b2b0a4c5fb007",
        "type": "github"
      },
      "original": {
        "owner": "numtide",
        "repo": "treefmt-nix",
        "type": "github"
      }
    }
  },
  "root": "root",
  "version": 7
}

@ -0,0 +1,148 @@

{
  description = "Backlog.md - A markdown-based task management CLI tool";

  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    flake-utils.url = "github:numtide/flake-utils";
    bun2nix = {
      url = "github:baileyluTCD/bun2nix";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = { self, nixpkgs, flake-utils, bun2nix }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        # Use baseline Bun for x86_64-linux to support older CPUs without AVX2
        # This fixes issue #412 where users with older CPUs (i7-3770, i7-3612QE)
        # get "Illegal instruction" errors during the build process.
        #
        # The baseline build targets Nehalem architecture (2008+) with SSE4.2
        # instead of Haswell (2013+) with AVX2, allowing builds on older hardware.
        #
        # Using an overlay to replace the Bun package maintains full compatibility
        # with the standard Bun package structure (thanks to @erdosxx for this solution).
        pkgs = import nixpkgs {
          inherit system;
          overlays = if system == "x86_64-linux" then
            let bunVersion = "1.2.23"; in [
              (final: prev: {
                bun = prev.bun.overrideAttrs (oldAttrs: {
                  src = prev.fetchurl {
                    url = "https://github.com/oven-sh/bun/releases/download/bun-v${bunVersion}/bun-linux-x64-baseline.zip";
                    sha256 = "017f89e19e1b40aa4c11a7cf671d3990cb51cc12288a43473238a019a8cafffc";
                  };
                });
              })
            ]
          else
            [];
        };

        # Read version from package.json
        packageJson = builtins.fromJSON (builtins.readFile ./package.json);
        version = packageJson.version;

        ldLibraryPath = ''
          LD_LIBRARY_PATH=${pkgs.lib.makeLibraryPath [
            pkgs.stdenv.cc.cc.lib
          ]}''${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
        '';

        backlog-md = bun2nix.lib.${system}.mkBunDerivation {
          pname = "backlog";
          inherit version;
          src = ./.;
          packageJson = ./package.json;
          bunNix = ./bun.nix;

          nativeBuildInputs = with pkgs; [ bun git rsync ];

          preBuild = ''
            export HOME=$TMPDIR
            export HUSKY=0
            export ${ldLibraryPath}
          '';

          buildPhase = ''
            runHook preBuild

            # Build the CLI tool with embedded version
            # Note: CSS is pre-compiled and committed to git, no need to build here
            bun build --compile --minify --define "__EMBEDDED_VERSION__=${version}" --outfile=dist/backlog src/cli.ts

            runHook postBuild
          '';

          installPhase = ''
            runHook preInstall

            mkdir -p $out/bin
            cp dist/backlog $out/bin/backlog
            chmod +x $out/bin/backlog

            runHook postInstall
          '';

          meta = with pkgs.lib; {
            description = "A markdown-based task management CLI tool with Kanban board";
            longDescription = ''
              Backlog.md is a command-line tool for managing tasks and projects using markdown files.
              It provides Kanban board visualization, task management, and integrates with Git workflows.
            '';
            homepage = "https://backlog.md";
            changelog = "https://github.com/MrLesk/Backlog.md/releases";
            license = licenses.mit;
            maintainers = let
              mrlesk = {
                name = "MrLesk";
                github = "MrLesk";
                githubId = 181345848;
              };
            in
              with maintainers; [ anpryl mrlesk ];
            platforms = platforms.all;
            mainProgram = "backlog";
          };
        };
      in
      {
        packages = {
          default = backlog-md;
          "backlog-md" = backlog-md;
        };

        apps = {
          default = flake-utils.lib.mkApp {
            drv = backlog-md;
            name = "backlog";
          };
        };

        devShells.default = pkgs.mkShell {
          packages = with pkgs; [
            bun
            bun2nix.packages.${system}.default
          ];

          buildInputs = with pkgs; [
            bun
            nodejs_20
            git
            biome
          ];

          shellHook = ''
            export ${ldLibraryPath}

            echo "Backlog.md development environment"
            echo "Available commands:"
            echo "  bun i          - Install dependencies"
            echo "  bun test       - Run tests"
            echo "  bun run cli    - Run CLI in development mode"
            echo "  bun run build  - Build the CLI tool"
            echo "  bun run check  - Run Biome checks"
          '';
        };
      });
}

@ -0,0 +1,97 @@

{
	"name": "backlog.md",
	"version": "1.26.0",
	"type": "module",
	"module": "src/cli.ts",
	"files": [
		"scripts/*.cjs",
		"README.md",
		"LICENSE"
	],
	"bin": {
		"backlog": "scripts/cli.cjs"
	},
	"optionalDependencies": {
		"backlog.md-darwin-arm64": "*",
		"backlog.md-darwin-x64": "*",
		"backlog.md-linux-arm64": "*",
		"backlog.md-linux-x64": "*",
		"backlog.md-windows-x64": "*"
	},
	"devDependencies": {
		"@biomejs/biome": "2.3.8",
		"@tailwindcss/cli": "4.1.17",
		"@types/bun": "1.3.3",
		"@modelcontextprotocol/sdk": "1.24.2",
		"@types/prompts": "2.4.9",
		"@types/react": "19.2.7",
		"@types/react-dom": "19.2.3",
		"@types/react-router-dom": "5.3.3",
		"@types/jsdom": "27.0.0",
		"@uiw/react-markdown-preview": "5.1.5",
		"@uiw/react-md-editor": "4.0.10",
		"commander": "14.0.2",
		"fuse.js": "7.1.0",
		"gray-matter": "4.0.3",
		"husky": "9.1.7",
		"install": "0.13.0",
		"lint-staged": "16.2.7",
		"mermaid": "11.12.2",
		"jsdom": "27.2.0",
		"neo-neo-bblessed": "1.0.9",
		"prompts": "2.4.2",
		"react": "19.2.1",
		"react-dom": "19.2.1",
		"react-router-dom": "7.10.0",
		"react-tooltip": "5.30.0",
		"tailwindcss": "4.1.17"
	},
	"scripts": {
		"test": "bun test",
		"format": "biome format --write .",
		"lint": "biome lint --write .",
		"check": "biome check .",
		"check:types": "bunx tsc --noEmit",
		"prepare": "husky",
		"build:css": "bun ./node_modules/@tailwindcss/cli/dist/index.mjs -i src/web/styles/source.css -o src/web/styles/style.css --minify",
		"build": "bun run build:css && bun build --production --compile --minify --outfile=dist/backlog src/cli.ts",
		"cli": "bun run build:css && bun src/cli.ts",
		"mcp": "bun src/cli.ts mcp start",
		"update-nix": "sh scripts/update-nix.sh",
		"postinstall": "sh -c 'command -v bun2nix >/dev/null 2>&1 && bun2nix -o bun.nix || (command -v nix >/dev/null 2>&1 && nix --extra-experimental-features \"nix-command flakes\" run github:baileyluTCD/bun2nix -- -o bun.nix || true)'"
	},
	"lint-staged": {
		"package.json": [
			"biome check --write --files-ignore-unknown=true"
		],
		"*.json": [
			"biome check --write --files-ignore-unknown=true"
		],
		"src/**/*.{ts,js}": [
			"biome check --write --files-ignore-unknown=true"
		]
	},
	"author": "Alex Gavrilescu (https://github.com/MrLesk)",
	"repository": {
		"type": "git",
		"url": "git+https://github.com/MrLesk/Backlog.md.git"
	},
	"bugs": {
		"url": "https://github.com/MrLesk/Backlog.md/issues"
	},
	"homepage": "https://backlog.md",
	"keywords": [
		"cli",
		"markdown",
		"kanban",
		"task",
		"project-management",
		"backlog",
		"agents"
	],
	"license": "MIT",
	"trustedDependencies": [
		"@biomejs/biome",
		"node-pty"
	]
}

@ -0,0 +1,47 @@

#!/usr/bin/env node

const { spawn } = require("node:child_process");
const { resolveBinaryPath } = require("./resolveBinary.cjs");

let binaryPath;
try {
	binaryPath = resolveBinaryPath();
} catch {
	console.error(`Binary package not installed for ${process.platform}-${process.arch}.`);
	process.exit(1);
}

// Clean up unexpected args some global shims pass (e.g. bun), like the binary path itself
const rawArgs = process.argv.slice(2);
const cleanedArgs = rawArgs.filter((arg) => {
	if (arg === binaryPath) return false;
	// Filter any accidental deep path to our platform package binary
	try {
		const pattern = /node_modules[/\\]backlog\.md-(darwin|linux|windows)-[^/\\]+[/\\]backlog(\.exe)?$/i;
		return !pattern.test(arg);
	} catch {
		return true;
	}
});

// Spawn the binary with the cleaned arguments
const child = spawn(binaryPath, cleanedArgs, {
	stdio: "inherit",
	windowsHide: true,
});

// Propagate the child's exit code
child.on("exit", (code) => {
	process.exit(code || 0);
});

// Handle spawn errors
child.on("error", (err) => {
	if (err.code === "ENOENT") {
		console.error(`Binary not found: ${binaryPath}`);
		console.error(`Please ensure you have the correct version for your platform (${process.platform}-${process.arch})`);
	} else {
		console.error("Failed to start backlog:", err);
	}
	process.exit(1);
});

@ -0,0 +1,36 @@

#!/usr/bin/env node

const { spawn } = require("node:child_process");

// Platform-specific packages to uninstall
const platformPackages = [
	"backlog.md-linux-x64",
	"backlog.md-linux-arm64",
	"backlog.md-darwin-x64",
	"backlog.md-darwin-arm64",
	"backlog.md-windows-x64",
];

// Detect the package manager from the npm user agent (falls back to npm)
const packageManager = process.env.npm_config_user_agent?.split("/")[0] || "npm";

console.log("Cleaning up platform-specific packages...");

// Try to uninstall all platform packages (fire-and-forget; the spawns run concurrently)
for (const pkg of platformPackages) {
	const args = packageManager === "bun" ? ["remove", "-g", pkg] : ["uninstall", "-g", pkg];

	const child = spawn(packageManager, args, {
		stdio: "pipe", // Don't show output to avoid spam
		windowsHide: true,
	});

	child.on("exit", (code) => {
		if (code === 0) {
			console.log(`✓ Cleaned up ${pkg}`);
		}
		// Silently ignore failures - the package might not be installed
	});
}

console.log("Platform package cleanup completed.");

@ -0,0 +1,33 @@

function mapPlatform(platform = process.platform) {
	switch (platform) {
		case "win32":
			return "windows";
		case "darwin":
		case "linux":
			return platform;
		default:
			return platform;
	}
}

function mapArch(arch = process.arch) {
	switch (arch) {
		case "x64":
		case "arm64":
			return arch;
		default:
			return arch;
	}
}

function getPackageName(platform = process.platform, arch = process.arch) {
	return `backlog.md-${mapPlatform(platform)}-${mapArch(arch)}`;
}

function resolveBinaryPath(platform = process.platform, arch = process.arch) {
	const packageName = getPackageName(platform, arch);
	const binary = `backlog${platform === "win32" ? ".exe" : ""}`;
	return require.resolve(`${packageName}/${binary}`);
}

module.exports = { getPackageName, resolveBinaryPath };
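
// Usage sketch (illustrative, not from the repo): resolve the binary shipped by the
// optional dependency matching this machine. require.resolve throws when that package
// is not installed, which is why scripts/cli.cjs wraps the call in try/catch.
//
//   const { resolveBinaryPath } = require("./resolveBinary.cjs");
//   const bin = resolveBinaryPath(); // e.g. ".../node_modules/backlog.md-linux-x64/backlog"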

@ -0,0 +1,21 @@

#!/usr/bin/env bash
# Updates bun.nix using bun2nix via Docker
# Run this after updating dependencies (bun install) and before committing

set -e

echo "🔄 Regenerating bun.nix using bun2nix..."

# Check if Docker is available
if ! command -v docker &> /dev/null; then
  echo "❌ Error: Docker is not installed or not in PATH"
  echo "   Please install Docker, or use Nix directly if available"
  exit 1
fi

# Run bun2nix in Docker
docker run --rm -v "$(pwd):/app" -w /app nixos/nix:latest \
  nix --extra-experimental-features "nix-command flakes" run github:baileyluTCD/bun2nix -- -o bun.nix

echo "✅ bun.nix has been regenerated successfully"
echo "   Don't forget to commit the updated bun.nix file!"

@ -0,0 +1,277 @@

import { existsSync, readFileSync } from "node:fs";
import { mkdir } from "node:fs/promises";
import { dirname, isAbsolute, join } from "node:path";
import { fileURLToPath } from "node:url";
import {
	AGENT_GUIDELINES,
	CLAUDE_AGENT_CONTENT,
	CLAUDE_GUIDELINES,
	COPILOT_GUIDELINES,
	GEMINI_GUIDELINES,
	MCP_AGENT_NUDGE,
	README_GUIDELINES,
} from "./constants/index.ts";
import type { GitOperations } from "./git/operations.ts";

export type AgentInstructionFile =
	| "AGENTS.md"
	| "CLAUDE.md"
	| "GEMINI.md"
	| ".github/copilot-instructions.md"
	| "README.md";

const __dirname = dirname(fileURLToPath(import.meta.url));

async function loadContent(textOrPath: string): Promise<string> {
	if (textOrPath.includes("\n")) return textOrPath;
	try {
		const path = isAbsolute(textOrPath) ? textOrPath : join(__dirname, textOrPath);
		return await Bun.file(path).text();
	} catch {
		return textOrPath;
	}
}

type GuidelineMarkerKind = "default" | "mcp";

/**
 * Gets the appropriate markers for a given file type
 */
function getMarkers(fileName: string, kind: GuidelineMarkerKind = "default"): { start: string; end: string } {
	const label = kind === "mcp" ? "BACKLOG.MD MCP GUIDELINES" : "BACKLOG.MD GUIDELINES";
	if (fileName === ".cursorrules") {
		// .cursorrules doesn't support HTML comments, use markdown-style comments
		return {
			start: `# === ${label} START ===`,
			end: `# === ${label} END ===`,
		};
	}
	// All markdown files support HTML comments
	return {
		start: `<!-- ${label} START -->`,
		end: `<!-- ${label} END -->`,
	};
}

/**
 * Checks if the Backlog.md guidelines are already present in the content
 */
function hasBacklogGuidelines(content: string, fileName: string): boolean {
	const { start } = getMarkers(fileName);
	return content.includes(start);
}

/**
 * Wraps the Backlog.md guidelines with appropriate markers
 */
function wrapWithMarkers(content: string, fileName: string, kind: GuidelineMarkerKind = "default"): string {
	const { start, end } = getMarkers(fileName, kind);
	return `\n${start}\n${content}\n${end}\n`;
}

function stripGuidelineSection(
	content: string,
	fileName: string,
	kind: GuidelineMarkerKind,
): { content: string; removed: boolean; firstIndex?: number } {
	const { start, end } = getMarkers(fileName, kind);
	let removed = false;
	let result = content;
	let firstIndex: number | undefined;

	while (true) {
		const startIndex = result.indexOf(start);
		if (startIndex === -1) {
			break;
		}

		const endIndex = result.indexOf(end, startIndex);
		if (endIndex === -1) {
			break;
		}

		let removalStart = startIndex;
		while (removalStart > 0 && (result[removalStart - 1] === " " || result[removalStart - 1] === "\t")) {
			removalStart -= 1;
		}
		if (removalStart > 0 && result[removalStart - 1] === "\n") {
			removalStart -= 1;
			if (removalStart > 0 && result[removalStart - 1] === "\r") {
				removalStart -= 1;
			}
		} else if (removalStart > 0 && result[removalStart - 1] === "\r") {
			removalStart -= 1;
		}

		let removalEnd = endIndex + end.length;
		if (removalEnd < result.length && result[removalEnd] === "\r") {
			removalEnd += 1;
		}
		if (removalEnd < result.length && result[removalEnd] === "\n") {
			removalEnd += 1;
		}

		if (firstIndex === undefined) {
			firstIndex = removalStart;
		}
		result = result.slice(0, removalStart) + result.slice(removalEnd);
		removed = true;
	}

	return { content: result, removed, firstIndex };
}

export async function addAgentInstructions(
	projectRoot: string,
	git?: GitOperations,
	files: AgentInstructionFile[] = ["AGENTS.md", "CLAUDE.md", "GEMINI.md", ".github/copilot-instructions.md"],
	autoCommit = false,
): Promise<void> {
	const mapping: Record<AgentInstructionFile, string> = {
		"AGENTS.md": AGENT_GUIDELINES,
		"CLAUDE.md": CLAUDE_GUIDELINES,
		"GEMINI.md": GEMINI_GUIDELINES,
		".github/copilot-instructions.md": COPILOT_GUIDELINES,
		"README.md": README_GUIDELINES,
	};

	const paths: string[] = [];
	for (const name of files) {
		const content = await loadContent(mapping[name]);
		const filePath = join(projectRoot, name);
		let finalContent = "";

		// Check if file exists first to avoid Windows hanging issue
		if (existsSync(filePath)) {
			try {
				// On Windows, use synchronous read to avoid hanging
				let existing: string;
				if (process.platform === "win32") {
					existing = readFileSync(filePath, "utf-8");
				} else {
					existing = await Bun.file(filePath).text();
				}

				const mcpStripped = stripGuidelineSection(existing, name, "mcp");
				if (mcpStripped.removed) {
					existing = mcpStripped.content;
				}

				// Check if Backlog.md guidelines are already present
				if (hasBacklogGuidelines(existing, name)) {
					// Guidelines already exist, skip this file
					continue;
				}

				// Append Backlog.md guidelines with markers
				if (!existing.endsWith("\n")) existing += "\n";
				finalContent = existing + wrapWithMarkers(content, name);
			} catch (error) {
				console.error(`Error reading existing file ${filePath}:`, error);
				// If we can't read it, just use the new content with markers
				finalContent = wrapWithMarkers(content, name);
			}
		} else {
			// File doesn't exist, create with markers
			finalContent = wrapWithMarkers(content, name);
		}

		await mkdir(dirname(filePath), { recursive: true });
		await Bun.write(filePath, finalContent);
		paths.push(filePath);
	}

	if (git && paths.length > 0 && autoCommit) {
		await git.addFiles(paths);
		await git.commitChanges("Add AI agent instructions");
	}
}

export { loadContent as _loadAgentGuideline };

function _hasMcpGuidelines(content: string, fileName: string): boolean {
	const { start } = getMarkers(fileName, "mcp");
	return content.includes(start);
}

async function readExistingFile(filePath: string): Promise<string> {
	if (process.platform === "win32") {
		return readFileSync(filePath, "utf-8");
	}
	return await Bun.file(filePath).text();
}

export interface EnsureMcpGuidelinesResult {
	changed: boolean;
	created: boolean;
	fileName: AgentInstructionFile;
	filePath: string;
}

export async function ensureMcpGuidelines(
	projectRoot: string,
	fileName: AgentInstructionFile,
): Promise<EnsureMcpGuidelinesResult> {
	const filePath = join(projectRoot, fileName);
	const fileExists = existsSync(filePath);
	let existing = "";
	let original = "";
	let insertIndex: number | null = null;

	if (fileExists) {
		try {
			existing = await readExistingFile(filePath);
			original = existing;
			const cliStripped = stripGuidelineSection(existing, fileName, "default");
			if (cliStripped.removed && cliStripped.firstIndex !== undefined) {
				insertIndex = cliStripped.firstIndex;
			}
			existing = cliStripped.content;
			const mcpStripped = stripGuidelineSection(existing, fileName, "mcp");
			if (mcpStripped.removed && mcpStripped.firstIndex !== undefined) {
				insertIndex = mcpStripped.firstIndex;
			}
			existing = mcpStripped.content;
		} catch (error) {
			console.error(`Error reading existing file ${filePath}:`, error);
			existing = "";
		}
	}

	const nudgeBlock = wrapWithMarkers(MCP_AGENT_NUDGE, fileName, "mcp");
	let nextContent: string;
	if (insertIndex !== null) {
		const normalizedIndex = Math.max(0, Math.min(insertIndex, existing.length));
		nextContent = existing.slice(0, normalizedIndex) + nudgeBlock + existing.slice(normalizedIndex);
	} else {
		nextContent = existing;
		if (nextContent && !nextContent.endsWith("\n")) {
			nextContent += "\n";
		}
		nextContent += nudgeBlock;
	}

	const finalContent = nextContent;
	const changed = !fileExists || finalContent !== original;

	await mkdir(dirname(filePath), { recursive: true });
	if (changed) {
		await Bun.write(filePath, finalContent);
	}

	return { changed, created: !fileExists, fileName, filePath };
}

/**
 * Installs the Claude Code backlog agent to the project's .claude/agents directory
 */
export async function installClaudeAgent(projectRoot: string): Promise<void> {
	const agentDir = join(projectRoot, ".claude", "agents");
	const agentPath = join(agentDir, "project-manager-backlog.md");

	// Create the directory if it doesn't exist
	await mkdir(agentDir, { recursive: true });

	// Write the agent content
	await Bun.write(agentPath, CLAUDE_AGENT_CONTENT);
}

@ -0,0 +1,198 @@

import { mkdir } from "node:fs/promises";
import { dirname } from "node:path";
import type { Task } from "./types/index.ts";

export interface BoardOptions {
	statuses?: string[];
}

export type BoardLayout = "horizontal" | "vertical";
export type BoardFormat = "terminal" | "markdown";

export function buildKanbanStatusGroups(
	tasks: Task[],
	statuses: string[],
): { orderedStatuses: string[]; groupedTasks: Map<string, Task[]> } {
	const canonicalByLower = new Map<string, string>();
	const orderedConfiguredStatuses: string[] = [];
	const configuredSeen = new Set<string>();

	for (const status of statuses ?? []) {
		if (typeof status !== "string") continue;
		const trimmed = status.trim();
		if (!trimmed) continue;
		const lower = trimmed.toLowerCase();
		if (!canonicalByLower.has(lower)) {
			canonicalByLower.set(lower, trimmed);
		}
		if (!configuredSeen.has(trimmed)) {
			orderedConfiguredStatuses.push(trimmed);
			configuredSeen.add(trimmed);
		}
	}

	const groupedTasks = new Map<string, Task[]>();
	for (const status of orderedConfiguredStatuses) {
		groupedTasks.set(status, []);
	}

	for (const task of tasks) {
		const raw = (task.status ?? "").trim();
		if (!raw) continue;
		const canonical = canonicalByLower.get(raw.toLowerCase()) ?? raw;
		if (!groupedTasks.has(canonical)) {
			groupedTasks.set(canonical, []);
		}
		groupedTasks.get(canonical)?.push(task);
	}

	const orderedStatuses: string[] = [];
	const seen = new Set<string>();

	for (const status of orderedConfiguredStatuses) {
		if (seen.has(status)) continue;
		orderedStatuses.push(status);
		seen.add(status);
	}

	for (const status of groupedTasks.keys()) {
		if (seen.has(status)) continue;
		orderedStatuses.push(status);
		seen.add(status);
	}

	return { orderedStatuses, groupedTasks };
}

export function generateKanbanBoardWithMetadata(tasks: Task[], statuses: string[], projectName: string): string {
	// Generate timestamp
	const now = new Date();
	const timestamp = now.toISOString().replace("T", " ").substring(0, 19);

	const { orderedStatuses, groupedTasks } = buildKanbanStatusGroups(tasks, statuses);

	// Create header
	const header = `# Kanban Board Export (powered by Backlog.md)
Generated on: ${timestamp}
Project: ${projectName}

`;

	// Return early if there are no configured statuses and no tasks
	if (orderedStatuses.length === 0) {
		return `${header}No tasks found.`;
	}

	// Create table header
	const headerRow = `| ${orderedStatuses.map((status) => status || "No Status").join(" | ")} |`;
	const separatorRow = `| ${orderedStatuses.map(() => "---").join(" | ")} |`;

	// Map for quick lookup by id
	const byId = new Map<string, Task>(tasks.map((t) => [t.id, t]));

	// Group tasks by status and handle parent-child relationships
	const columns: Task[][] = orderedStatuses.map((status) => {
		const items = groupedTasks.get(status) || [];
		const top: Task[] = [];
		const children = new Map<string, Task[]>();

		// Sort items: all columns by updatedDate descending (fallback to createdDate), then by ID as a secondary key
		const sortedItems = items.sort((a, b) => {
			// Primary sort: updatedDate (newest first), falling back to createdDate if updatedDate is missing
			const dateA = a.updatedDate ? new Date(a.updatedDate).getTime() : new Date(a.createdDate).getTime();
			const dateB = b.updatedDate ? new Date(b.updatedDate).getTime() : new Date(b.createdDate).getTime();
			if (dateB !== dateA) {
				return dateB - dateA; // Newest first
			}
			// Secondary sort: ID descending when dates are equal
			const idA = Number.parseInt(a.id.replace("task-", ""), 10);
			const idB = Number.parseInt(b.id.replace("task-", ""), 10);
			return idB - idA; // Highest ID first (newest)
		});

		// Separate top-level tasks from subtasks
		for (const t of sortedItems) {
			const parent = t.parentTaskId ? byId.get(t.parentTaskId) : undefined;
			if (parent && parent.status === t.status) {
				// Subtask with the same status as its parent - group it under the parent
				const list = children.get(parent.id) || [];
				list.push(t);
				children.set(parent.id, list);
			} else {
				// Top-level task, or subtask with a different status
				top.push(t);
			}
		}

		// Build the final list with subtasks nested under their parents
		const result: Task[] = [];
		for (const t of top) {
			result.push(t);
			const subs = children.get(t.id) || [];
			subs.sort((a, b) => {
				const idA = Number.parseInt(a.id.replace("task-", ""), 10);
				const idB = Number.parseInt(b.id.replace("task-", ""), 10);
				return idA - idB; // Subtasks in ascending order
			});
			result.push(...subs);
		}

		return result;
	});

	const maxTasks = Math.max(...columns.map((c) => c.length), 0);
	const rows = [headerRow, separatorRow];

	for (let taskIdx = 0; taskIdx < maxTasks; taskIdx++) {
		const row = orderedStatuses.map((_, cIdx) => {
			const task = columns[cIdx]?.[taskIdx];
			if (!task || !task.id || !task.title) return "";

			// Check if this is a subtask
			const isSubtask = task.parentTaskId;
			const taskIdPrefix = isSubtask ? "└─ " : "";
			const taskIdUpper = task.id.toUpperCase();

			// Format assignees in brackets, or an empty string if none
			// Add the @ prefix only if it is not already present
			const assigneesText =
				task.assignee && task.assignee.length > 0
					? ` [${task.assignee.map((a) => (a.startsWith("@") ? a : `@${a}`)).join(", ")}]`
					: "";

			// Format labels with a # prefix in italics, or an empty string if none
			const labelsText =
				task.labels && task.labels.length > 0 ? `<br>*${task.labels.map((label) => `#${label}`).join(" ")}*` : "";

			return `${taskIdPrefix}**${taskIdUpper}** - ${task.title}${assigneesText}${labelsText}`;
		});
		rows.push(`| ${row.join(" | ")} |`);
	}

	const table = `${rows.join("\n")}`;
	if (maxTasks === 0) {
		return `${header}${table}\n\nNo tasks found.\n`;
	}

	return `${header}${table}\n`;
}

export async function exportKanbanBoardToFile(
	tasks: Task[],
	statuses: string[],
	filePath: string,
	projectName: string,
	_overwrite = false,
): Promise<void> {
	const board = generateKanbanBoardWithMetadata(tasks, statuses, projectName);

	// Ensure the directory exists
	try {
		await mkdir(dirname(filePath), { recursive: true });
	} catch {
		// Directory might already exist
	}

	// Write the content (overwrite mode)
	await Bun.write(filePath, board);
}
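
// Usage sketch (illustrative values, not from the repo): export the current board
// to a markdown file, with the configured statuses rendered as table columns.
//
//   await exportKanbanBoardToFile(tasks, ["To Do", "In Progress", "Done"], "backlog/board.md", "My Project");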

@ -0,0 +1,257 @@

import prompts from "prompts";
import type { BacklogConfig } from "../types/index.ts";
import { isEditorAvailable } from "../utils/editor.ts";

export type PromptRunner = (...args: Parameters<typeof prompts>) => ReturnType<typeof prompts>;

interface WizardOptions {
	existingConfig?: BacklogConfig | null;
	cancelMessage: string;
	includeClaudePrompt?: boolean;
	promptImpl?: PromptRunner;
}

export interface AdvancedConfigWizardResult {
	config: Partial<BacklogConfig>;
	installClaudeAgent: boolean;
	installShellCompletions: boolean;
}

function handlePromptCancel(message: string) {
	console.log(message);
	process.exit(1);
}

export async function runAdvancedConfigWizard({
	existingConfig,
	cancelMessage,
	includeClaudePrompt = false,
	promptImpl = prompts,
}: WizardOptions): Promise<AdvancedConfigWizardResult> {
	const onCancel = () => handlePromptCancel(cancelMessage);
	const config = existingConfig ?? null;

	let checkActiveBranches = config?.checkActiveBranches ?? true;
	let remoteOperations = config?.remoteOperations ?? true;
	let activeBranchDays = config?.activeBranchDays ?? 30;
	let bypassGitHooks = config?.bypassGitHooks ?? false;
	let autoCommit = config?.autoCommit ?? false;
	let zeroPaddedIds = config?.zeroPaddedIds;
	let defaultEditor = config?.defaultEditor;
	let defaultPort = config?.defaultPort ?? 6420;
	let autoOpenBrowser = config?.autoOpenBrowser ?? true;
	let installClaudeAgent = false;
	let installShellCompletions = false;

	const completionPrompt = await promptImpl(
		{
			type: "confirm",
			name: "installCompletions",
			message: "Install shell completions now?",
			hint: "Adds TAB completion support for backlog commands in your shell",
			initial: true,
		},
		{ onCancel },
	);
	installShellCompletions = Boolean(completionPrompt?.installCompletions);

	const crossBranchPrompt = await promptImpl(
		{
			type: "confirm",
			name: "checkActiveBranches",
			message: "Check task states across active branches?",
			hint: "Ensures accurate task tracking across branches (may impact performance on large repos)",
			initial: checkActiveBranches,
		},
		{ onCancel },
	);
	checkActiveBranches = crossBranchPrompt.checkActiveBranches ?? true;

	if (checkActiveBranches) {
		const remotePrompt = await promptImpl(
			{
				type: "confirm",
				name: "remoteOperations",
				message: "Check task states in remote branches?",
				hint: "Required for accessing tasks from feature branches on remote repos",
				initial: remoteOperations,
			},
			{ onCancel },
		);
		remoteOperations = remotePrompt.remoteOperations ?? remoteOperations;

		const daysPrompt = await promptImpl(
			{
				type: "number",
				name: "activeBranchDays",
				message: "How many days should a branch be considered active?",
				hint: "Lower values improve performance (default: 30 days)",
				initial: activeBranchDays,
				min: 1,
				max: 365,
			},
			{ onCancel },
		);
		if (typeof daysPrompt.activeBranchDays === "number" && !Number.isNaN(daysPrompt.activeBranchDays)) {
			activeBranchDays = daysPrompt.activeBranchDays;
		}
	} else {
		remoteOperations = false;
	}

	const gitHooksPrompt = await promptImpl(
		{
			type: "confirm",
			name: "bypassGitHooks",
			message: "Bypass git hooks when committing?",
			hint: "Use --no-verify flag to skip pre-commit hooks",
			initial: bypassGitHooks,
		},
		{ onCancel },
	);
	bypassGitHooks = gitHooksPrompt.bypassGitHooks ?? bypassGitHooks;

	const autoCommitPrompt = await promptImpl(
		{
			type: "confirm",
			name: "autoCommit",
			message: "Enable automatic commits for Backlog operations?",
			hint: "Creates commits automatically after CLI changes",
			initial: autoCommit,
		},
		{ onCancel },
	);
	autoCommit = autoCommitPrompt.autoCommit ?? autoCommit;

	const zeroPaddingPrompt = await promptImpl(
		{
			type: "confirm",
			name: "enableZeroPadding",
			message: "Enable zero-padded IDs for consistent formatting?",
			hint: "Example: task-001, doc-001 instead of task-1, doc-1",
			initial: (zeroPaddedIds ?? 0) > 0,
		},
		{ onCancel },
	);

	if (zeroPaddingPrompt.enableZeroPadding) {
		const paddingPrompt = await promptImpl(
			{
				type: "number",
				name: "paddingWidth",
				message: "Number of digits for zero-padding:",
				hint: "e.g., 3 creates task-001; 4 creates task-0001",
				initial: zeroPaddedIds ?? 3,
				min: 1,
				max: 10,
			},
			{ onCancel },
		);
		if (typeof paddingPrompt?.paddingWidth === "number" && !Number.isNaN(paddingPrompt.paddingWidth)) {
			zeroPaddedIds = paddingPrompt.paddingWidth;
		}
	} else {
		zeroPaddedIds = undefined;
	}

	const editorPrompt = await promptImpl(
		{
			type: "text",
			name: "editor",
			message: "Default editor command (leave blank to use system default):",
			hint: "e.g., 'code --wait', 'vim', 'nano'",
			initial: defaultEditor ?? "",
		},
		{ onCancel },
	);

	let editorResult = String(editorPrompt.editor ?? "").trim();
	if (editorResult.length > 0) {
		const isAvailable = await isEditorAvailable(editorResult);
		if (!isAvailable) {
			console.warn(`Warning: Editor command '${editorResult}' not found in PATH`);
			const confirmAnyway = await promptImpl(
				{
					type: "confirm",
					name: "confirm",
					message: "Editor not found. Set it anyway?",
					initial: false,
				},
				{ onCancel },
			);
			if (!confirmAnyway?.confirm) {
				editorResult = "";
			}
		}
	}
	defaultEditor = editorResult.length > 0 ? editorResult : undefined;

	const webUIPrompt = await promptImpl(
		{
			type: "confirm",
			name: "configureWebUI",
			message: "Configure web UI settings now?",
			hint: "Port and browser auto-open",
			initial: false,
		},
		{ onCancel },
	);

	if (webUIPrompt.configureWebUI) {
		const webUIValues = await promptImpl(
			[
				{
					type: "number",
					name: "defaultPort",
					message: "Default web UI port:",
					hint: "Port number for the web interface (1-65535)",
					initial: defaultPort,
					min: 1,
					max: 65535,
				},
				{
					type: "confirm",
					name: "autoOpenBrowser",
					message: "Automatically open browser when starting web UI?",
					hint: "When enabled, 'backlog web' opens your browser",
					initial: autoOpenBrowser,
				},
			],
			{ onCancel },
		);
		if (typeof webUIValues?.defaultPort === "number" && !Number.isNaN(webUIValues.defaultPort)) {
			defaultPort = webUIValues.defaultPort;
		}
		autoOpenBrowser = Boolean(webUIValues?.autoOpenBrowser ?? autoOpenBrowser);
	}

	if (includeClaudePrompt) {
		const claudePrompt = await promptImpl(
			{
				type: "confirm",
				name: "installClaudeAgent",
				message: "Install Claude Code Backlog.md agent?",
				hint: "Adds configuration under .claude/agents/",
				initial: false,
			},
			{ onCancel },
		);
		installClaudeAgent = Boolean(claudePrompt?.installClaudeAgent);
	}

	return {
		config: {
			checkActiveBranches,
			remoteOperations,
			activeBranchDays,
			bypassGitHooks,
			autoCommit,
			zeroPaddedIds,
			defaultEditor,
			defaultPort,
			autoOpenBrowser,
		},
		installClaudeAgent,
		installShellCompletions,
	};
}
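
// Usage sketch (hypothetical call site): run the wizard during init/config flows
// and persist the returned partial config; the install flags drive follow-up steps.
//
//   const { config, installClaudeAgent, installShellCompletions } = await runAdvancedConfigWizard({
//     existingConfig: currentConfig, // hypothetical variable
//     cancelMessage: "Aborting configuration.",
//     includeClaudePrompt: true,
//   });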

@ -0,0 +1,372 @@

import { existsSync } from "node:fs";
import { mkdir, readFile, writeFile } from "node:fs/promises";
import { homedir } from "node:os";
import { dirname, join } from "node:path";
import { fileURLToPath } from "node:url";
import type { Command } from "commander";
import { getCompletions } from "../completions/helper.ts";

const __dirname = dirname(fileURLToPath(import.meta.url));

export type Shell = "bash" | "zsh" | "fish";

export interface CompletionInstallResult {
	shell: Shell;
	installPath: string;
	instructions: string;
}

/**
 * Detect the user's current shell
 */
function detectShell(): Shell | null {
	const shell = process.env.SHELL || "";

	if (shell.includes("bash")) {
		return "bash";
	}
	if (shell.includes("zsh")) {
		return "zsh";
	}
	if (shell.includes("fish")) {
		return "fish";
	}

	return null;
}

/**
 * Get the completion script content for a shell
 */
async function getCompletionScript(shell: Shell): Promise<string> {
	// Try to read from the file system first (for development)
	const scriptFiles: Record<Shell, string> = {
		bash: "backlog.bash",
		zsh: "_backlog",
		fish: "backlog.fish",
	};

	const scriptPath = join(__dirname, "..", "..", "completions", scriptFiles[shell]);

	try {
		if (existsSync(scriptPath)) {
			return await readFile(scriptPath, "utf-8");
		}
	} catch {
		// Fall through to embedded scripts
	}

	// Fall back to the embedded scripts (for the compiled binary)
	return getEmbeddedCompletionScript(shell);
}

/**
 * Get embedded completion script (used when files aren't available)
 */
function getEmbeddedCompletionScript(shell: Shell): string {
	const scripts: Record<Shell, string> = {
		bash: `#!/usr/bin/env bash
# Bash completion script for backlog CLI
#
# Installation:
#   - Copy to /etc/bash_completion.d/backlog
#   - Or source directly in ~/.bashrc: source /path/to/backlog.bash
#
# Requirements:
#   - Bash 4.x or 5.x
#   - bash-completion package (optional but recommended)

# Main completion function for backlog CLI
_backlog() {
  # Initialize completion variables using bash-completion helper if available
  # Falls back to manual initialization if bash-completion is not installed
  local cur prev words cword
  if declare -F _init_completion >/dev/null 2>&1; then
    _init_completion || return
  else
    # Manual initialization fallback
    COMPREPLY=()
    cur="\${COMP_WORDS[COMP_CWORD]}"
    prev="\${COMP_WORDS[COMP_CWORD-1]}"
    words=("\${COMP_WORDS[@]}")
    cword=$COMP_CWORD
  fi

  # Get the full command line and cursor position
  local line="\${COMP_LINE}"
  local point="\${COMP_POINT}"

  # Call the CLI's internal completion command
  # This delegates all completion logic to the TypeScript implementation
  # Output format: one completion per line
  local completions
  completions=$(backlog completion __complete "$line" "$point" 2>/dev/null)

  # Check if the completion command failed
  if [[ $? -ne 0 ]]; then
    # Silent failure - completion should never break the shell
    return 0
  fi

  # Generate completion replies using compgen
  # -W: wordlist - splits completions by whitespace
  # --: end of options
  # "$cur": current word being completed
  COMPREPLY=( $(compgen -W "$completions" -- "$cur") )

  # Return success
  return 0
}

# Register the completion function for the 'backlog' command
# -F: use function for completion
# _backlog: name of the completion function
# backlog: command to complete
complete -F _backlog backlog
`,
		zsh: `#compdef backlog

# Zsh completion script for backlog CLI
#
# Installation:
#   1. Copy this file to a directory in your $fpath
#   2. Run: compinit
#
# Or use: backlog completion install --shell zsh

_backlog() {
  # Get the current command line buffer and cursor position
  local line=$BUFFER
  local point=$CURSOR

  # Call the backlog completion command to get dynamic completions
  # The __complete command returns one completion per line
  local -a completions
  completions=(\${(f)"$(backlog completion __complete "$line" "$point" 2>/dev/null)"})

  # Check if we got any completions
  if (( \${#completions[@]} == 0 )); then
    # No completions available
    return 1
  fi

  # Present the completions to the user
  # _describe shows completions with optional descriptions
  # The first argument is the tag name shown in completion groups
  _describe 'backlog commands' completions
}

# Register the completion function for the backlog command
compdef _backlog backlog
`,
		fish: `#!/usr/bin/env fish
# Fish completion script for backlog CLI
#
# Installation:
#   - Copy to ~/.config/fish/completions/backlog.fish
#   - Or use: backlog completion install --shell fish
#
# Requirements:
#   - Fish 3.x or later

# Helper function to get completions from the CLI
# This delegates all completion logic to the TypeScript implementation
function __backlog_complete
    # Get the command line text before the cursor
    # -c: cut at cursor, -p: current process only
    set -l line (commandline -cp)

    # The cursor position equals the length of the text before the cursor
    # Fish tracks cursor position differently than bash/zsh
    set -l point (string length "$line")

    # Call the CLI's internal completion command
    # Output format: one completion per line
    # Redirect stderr to /dev/null to suppress error messages
    backlog completion __complete "$line" "$point" 2>/dev/null

    # Fish will automatically handle the exit status
    # If the command fails, no completions will be shown
end

# Register completion for the 'backlog' command
# -c: specify the command to complete
# -f: disable file completion (we handle all completions dynamically)
# -a: add completion candidates from the function output
complete -c backlog -f -a '(__backlog_complete)'
`,
	};

	return scripts[shell];
}

/**
 * Get installation paths for a shell
 */
function getInstallPaths(shell: Shell): { system: string; user: string } {
	const home = homedir();

	const paths: Record<Shell, { system: string; user: string }> = {
		bash: {
			system: "/etc/bash_completion.d/backlog",
			user: join(home, ".local/share/bash-completion/completions/backlog"),
		},
		zsh: {
			system: "/usr/local/share/zsh/site-functions/_backlog",
			user: join(home, ".zsh/completions/_backlog"),
		},
		fish: {
			system: "/usr/share/fish/vendor_completions.d/backlog.fish",
			user: join(home, ".config/fish/completions/backlog.fish"),
		},
	};

	return paths[shell];
}

/**
 * Get instructions for enabling completions after installation
 */
function getEnableInstructions(shell: Shell, installPath: string): string {
	const instructions: Record<Shell, string> = {
		bash: `
To enable completions, add this to your ~/.bashrc:
  source ${installPath}

Then restart your shell or run:
  source ~/.bashrc
`,
		zsh: `
To enable completions, ensure the directory is in your fpath.
Add this to your ~/.zshrc:
  fpath=(${dirname(installPath)} $fpath)
  autoload -Uz compinit && compinit
|
||||
|
||||
Then restart your shell or run:
|
||||
source ~/.zshrc
|
||||
`,
|
||||
fish: `
|
||||
Completions should be automatically loaded by fish.
|
||||
Restart your shell or run:
|
||||
exec fish
|
||||
`,
|
||||
};
|
||||
|
||||
return instructions[shell];
|
||||
}
|
||||
|
||||
/**
|
||||
* Install completion script
|
||||
*/
|
||||
export async function installCompletion(shell?: string): Promise<CompletionInstallResult> {
|
||||
// Detect shell if not provided
|
||||
const targetShell = shell as Shell | undefined;
|
||||
const detectedShell = targetShell || detectShell();
|
||||
|
||||
if (!detectedShell) {
|
||||
const message = [
|
||||
"Could not detect your shell.",
|
||||
"Please specify it manually:",
|
||||
" backlog completion install --shell bash",
|
||||
" backlog completion install --shell zsh",
|
||||
" backlog completion install --shell fish",
|
||||
].join("\n");
|
||||
throw new Error(message);
|
||||
}
|
||||
|
||||
if (!["bash", "zsh", "fish"].includes(detectedShell)) {
|
||||
throw new Error(`Unsupported shell: ${detectedShell}\nSupported shells: bash, zsh, fish`);
|
||||
}
|
||||
|
||||
// Get completion script content
|
||||
let scriptContent: string;
|
||||
try {
|
||||
scriptContent = await getCompletionScript(detectedShell);
|
||||
} catch (error) {
|
||||
throw new Error(error instanceof Error ? error.message : String(error));
|
||||
}
|
||||
|
||||
// Get installation paths
|
||||
const paths = getInstallPaths(detectedShell);
|
||||
|
||||
// Try user installation first (no sudo required)
|
||||
const installPath = paths.user;
|
||||
const installDir = dirname(installPath);
|
||||
|
||||
try {
|
||||
// Create directory if it doesn't exist
|
||||
if (!existsSync(installDir)) {
|
||||
await mkdir(installDir, { recursive: true });
|
||||
}
|
||||
|
||||
// Write the completion script
|
||||
await writeFile(installPath, scriptContent, "utf-8");
|
||||
} catch (error) {
|
||||
const manualInstructions = [
|
||||
"Failed to install completion script automatically.",
|
||||
"",
|
||||
"Manual installation options:",
|
||||
"1. System-wide installation (requires sudo):",
|
||||
` sudo cp completions/${detectedShell === "zsh" ? "_backlog" : `backlog.${detectedShell}`} ${paths.system}`,
|
||||
"",
|
||||
"2. User installation:",
|
||||
` mkdir -p ${installDir}`,
|
||||
` cp completions/${detectedShell === "zsh" ? "_backlog" : `backlog.${detectedShell}`} ${installPath}`,
|
||||
].join("\n");
|
||||
const errorMessage = error instanceof Error ? error.message : String(error);
|
||||
throw new Error(`${errorMessage}\n\n${manualInstructions}`);
|
||||
}
|
||||
|
||||
return {
|
||||
shell: detectedShell,
|
||||
installPath,
|
||||
instructions: getEnableInstructions(detectedShell, installPath),
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Register the completion command and subcommands
|
||||
*/
|
||||
export function registerCompletionCommand(program: Command): void {
|
||||
const completionCmd = program.command("completion").description("manage shell completion scripts");
|
||||
|
||||
// Hidden command used by shell completion scripts
|
||||
completionCmd
|
||||
.command("__complete <line> <point>")
|
||||
.description("internal command for shell completion (do not call directly)")
|
||||
.action(async (line: string, point: string) => {
|
||||
try {
|
||||
const pointNum = Number.parseInt(point, 10);
|
||||
if (Number.isNaN(pointNum)) {
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
const completions = await getCompletions(program, line, pointNum);
|
||||
for (const completion of completions) {
|
||||
console.log(completion);
|
||||
}
|
||||
process.exit(0);
|
||||
} catch (_error) {
|
||||
// Silent failure - completion should never break the shell
|
||||
process.exit(1);
|
||||
}
|
||||
});
|
||||
|
||||
// Installation command
|
||||
completionCmd
|
||||
.command("install")
|
||||
.description("install shell completion script")
|
||||
.option("--shell <shell>", "shell type (bash, zsh, fish)")
|
||||
.action(async (options: { shell?: string }) => {
|
||||
try {
|
||||
const result = await installCompletion(options.shell);
|
||||
console.log(`📦 Installed ${result.shell} completion for backlog CLI.`);
|
||||
console.log(`✅ Completion script written to ${result.installPath}`);
|
||||
console.log(result.instructions.trimEnd());
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error);
|
||||
console.error(`❌ ${message}`);
|
||||
process.exit(1);
|
||||
}
|
||||
});
|
||||
}
|
||||
|
|
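All three embedded scripts delegate to the hidden `__complete` subcommand registered above. A minimal round-trip sketch (illustrative only, not part of the commit; the candidate list shown is hypothetical and depends on the installed CLI):

// Mimic what the bash completion function does: pass the raw line plus the
// cursor offset and read one candidate per line from stdout.
import { spawnSync } from "node:child_process";

const line = "backlog task ";
const result = spawnSync("backlog", ["completion", "__complete", line, String(line.length)], {
	encoding: "utf-8",
});
const candidates = result.status === 0 ? result.stdout.split("\n").filter(Boolean) : [];
console.log(candidates); // hypothetical output: ["create", "edit", "list", ...]
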
@@ -0,0 +1,34 @@

import type { Core } from "../core/backlog.ts";
import type { BacklogConfig } from "../types/index.ts";
import { type PromptRunner, runAdvancedConfigWizard } from "./advanced-config-wizard.ts";

interface ConfigureAdvancedOptions {
	promptImpl?: PromptRunner;
	cancelMessage?: string;
}

export async function configureAdvancedSettings(
	core: Core,
	{ promptImpl, cancelMessage = "Aborting configuration." }: ConfigureAdvancedOptions = {},
): Promise<{ mergedConfig: BacklogConfig; installClaudeAgent: boolean; installShellCompletions: boolean }> {
	const existingConfig = await core.filesystem.loadConfig();
	if (!existingConfig) {
		throw new Error("No backlog project found. Initialize one first with: backlog init");
	}

	const wizardResult = await runAdvancedConfigWizard({
		existingConfig,
		cancelMessage,
		includeClaudePrompt: true,
		promptImpl,
	});

	const mergedConfig: BacklogConfig = { ...existingConfig, ...wizardResult.config };
	await core.filesystem.saveConfig(mergedConfig);

	return {
		mergedConfig,
		installClaudeAgent: wizardResult.installClaudeAgent,
		installShellCompletions: wizardResult.installShellCompletions,
	};
}

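A hypothetical caller sketch (not from the source), following the `new Core(process.cwd())` pattern used elsewhere in this commit:

import { Core } from "../index.ts"; // import path assumed

const core = new Core(process.cwd());
const result = await configureAdvancedSettings(core, { cancelMessage: "Aborting configuration." });
console.log(`Updated config for ${result.mergedConfig.projectName}`);
if (result.installShellCompletions) {
	// The caller would trigger completion installation here,
	// e.g. via installCompletion() from the completion command module.
}
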
@@ -0,0 +1,66 @@

/**
 * MCP Command Group - Model Context Protocol CLI commands.
 *
 * This simplified command set focuses on the stdio transport, which is the
 * only supported transport for Backlog.md's local MCP integration.
 */

import type { Command } from "commander";
import { createMcpServer } from "../mcp/server.ts";

type StartOptions = {
	debug?: boolean;
};

/**
 * Register MCP command group with CLI program.
 *
 * @param program - Commander program instance
 */
export function registerMcpCommand(program: Command): void {
	const mcpCmd = program.command("mcp");
	registerStartCommand(mcpCmd);
}

/**
 * Register 'mcp start' command for stdio transport.
 */
function registerStartCommand(mcpCmd: Command): void {
	mcpCmd
		.command("start")
		.description("Start the MCP server using stdio transport")
		.option("-d, --debug", "Enable debug logging", false)
		.action(async (options: StartOptions) => {
			try {
				const server = await createMcpServer(process.cwd(), { debug: options.debug });

				await server.connect();
				await server.start();

				if (options.debug) {
					console.error("Backlog.md MCP server started (stdio transport)");
				}

				const shutdown = async (signal: string) => {
					if (options.debug) {
						console.error(`Received ${signal}, shutting down MCP server...`);
					}

					try {
						await server.stop();
						process.exit(0);
					} catch (error) {
						console.error("Error during MCP server shutdown:", error);
						process.exit(1);
					}
				};

				process.once("SIGINT", () => shutdown("SIGINT"));
				process.once("SIGTERM", () => shutdown("SIGTERM"));
			} catch (error) {
				const message = error instanceof Error ? error.message : String(error);
				console.error(`Failed to start MCP server: ${message}`);
				process.exit(1);
			}
		});
}

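Wiring sketch (illustrative; the real program setup lives in the CLI entry point):

import { Command } from "commander";

const program = new Command("backlog");
registerMcpCommand(program);
// `backlog mcp start --debug` now creates the server for process.cwd(),
// connects the stdio transport, and installs SIGINT/SIGTERM handlers.
await program.parseAsync(process.argv);
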
@@ -0,0 +1,45 @@

import type { Core } from "../core/backlog.ts";
import { getTaskStatistics } from "../core/statistics.ts";
import { createLoadingScreen } from "../ui/loading.ts";
import { renderOverviewTui } from "../ui/overview-tui.ts";

function formatTime(ms: number): string {
	if (ms < 1000) return `${Math.round(ms)}ms`;
	return `${(ms / 1000).toFixed(1)}s`;
}

export async function runOverviewCommand(core: Core): Promise<void> {
	const startTime = performance.now();

	// Load tasks with loading screen
	const loadingScreen = await createLoadingScreen("Loading project statistics");

	try {
		// Use the shared task loading logic
		const loadStart = performance.now();
		const {
			tasks: activeTasks,
			drafts,
			statuses,
		} = await core.loadAllTasksForStatistics((msg) =>
			loadingScreen?.update(`${msg} in ${formatTime(performance.now() - loadStart)}`),
		);

		loadingScreen?.close();

		// Calculate statistics
		const statsStart = performance.now();
		const statistics = getTaskStatistics(activeTasks, drafts, statuses);
		const statsTime = Math.round(performance.now() - statsStart);

		// Display the TUI
		const totalTime = Math.round(performance.now() - startTime);
		console.log(`\nPerformance summary: Total time ${totalTime}ms (stats calculation: ${statsTime}ms)`);

		const config = await core.fs.loadConfig();
		await renderOverviewTui(statistics, config?.projectName || "Project");
	} catch (error) {
		loadingScreen?.close();
		throw error;
	}
}

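For reference, `formatTime` rounds below one second and keeps one decimal above it (worked values, not from the source):

// formatTime(250)   -> "250ms"
// formatTime(999.4) -> "999ms"  (Math.round below the 1s cutoff)
// formatTime(1499)  -> "1.5s"   ((1499 / 1000).toFixed(1))
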
@@ -0,0 +1,185 @@

import type { Argument, Command, Option } from "commander";

export interface CommandInfo {
	name: string;
	aliases: string[];
	arguments: ArgumentInfo[];
	subcommands: CommandInfo[];
	options: OptionInfo[];
}

export interface ArgumentInfo {
	name: string;
	required: boolean;
	variadic: boolean;
}

export interface OptionInfo {
	flags: string;
	long?: string;
	short?: string;
	description: string;
}

/**
 * Extract command structure from a Commander.js program
 */
export function extractCommandStructure(program: Command): CommandInfo {
	return {
		name: program.name(),
		aliases: program.aliases(),
		arguments: extractArguments(program),
		subcommands: program.commands.map((cmd) => extractCommandInfo(cmd)),
		options: program.options.map((opt) => extractOptionInfo(opt)),
	};
}

/**
 * Extract info from a single command
 */
function extractCommandInfo(command: Command): CommandInfo {
	return {
		name: command.name(),
		aliases: command.aliases(),
		arguments: extractArguments(command),
		subcommands: command.commands.map((cmd) => extractCommandInfo(cmd)),
		options: command.options.map((opt) => extractOptionInfo(opt)),
	};
}

/**
 * Extract arguments from a command
 */
function extractArguments(command: Command): ArgumentInfo[] {
	// Commander.js v14 has registeredArguments or processedArgs
	type CommandWithArgs = Command & {
		registeredArguments?: Argument[];
		args?: Argument[];
	};

	const commandWithArgs = command as CommandWithArgs;
	const args = commandWithArgs.registeredArguments || commandWithArgs.args || [];

	return args.map((arg: Argument) => ({
		name: arg.name(),
		required: arg.required,
		variadic: arg.variadic,
	}));
}

/**
 * Extract info from an option
 */
function extractOptionInfo(option: Option): OptionInfo {
	return {
		flags: option.flags,
		long: option.long,
		short: option.short,
		description: option.description || "",
	};
}

/**
 * Find a command by name (including aliases)
 */
export function findCommand(info: CommandInfo, commandName: string): CommandInfo | null {
	return info.subcommands.find((cmd) => cmd.name === commandName || cmd.aliases.includes(commandName)) || null;
}

/**
 * Find a subcommand within a command
 */
export function findSubcommand(info: CommandInfo, commandName: string, subcommandName: string): CommandInfo | null {
	const command = findCommand(info, commandName);
	if (!command) {
		return null;
	}

	return command.subcommands.find((sub) => sub.name === subcommandName || sub.aliases.includes(subcommandName)) || null;
}

/**
 * Get all top-level command names (including aliases)
 */
export function getTopLevelCommands(info: CommandInfo): string[] {
	const names: string[] = [];
	for (const cmd of info.subcommands) {
		names.push(cmd.name, ...cmd.aliases);
	}
	return names;
}

/**
 * Get all subcommand names for a command (including aliases)
 */
export function getSubcommandNames(info: CommandInfo, commandName: string): string[] {
	const command = findCommand(info, commandName);
	if (!command) {
		return [];
	}

	const names: string[] = [];
	for (const sub of command.subcommands) {
		names.push(sub.name, ...sub.aliases);
	}
	return names;
}

/**
 * Get all option flags for a specific command/subcommand
 */
export function getOptionFlags(info: CommandInfo, commandName?: string, subcommandName?: string): string[] {
	let targetCommand = info;

	if (commandName) {
		const cmd = findCommand(info, commandName);
		if (!cmd) {
			return [];
		}
		targetCommand = cmd;
	}

	if (subcommandName) {
		const sub = findCommand(targetCommand, subcommandName);
		if (!sub) {
			return [];
		}
		targetCommand = sub;
	}

	const flags: string[] = [];
	for (const opt of targetCommand.options) {
		if (opt.long) {
			flags.push(opt.long);
		}
		if (opt.short) {
			flags.push(opt.short);
		}
	}
	return flags;
}

/**
 * Get expected arguments for a command/subcommand
 */
export function getExpectedArguments(info: CommandInfo, commandName?: string, subcommandName?: string): ArgumentInfo[] {
	let targetCommand = info;

	if (commandName) {
		const cmd = findCommand(info, commandName);
		if (!cmd) {
			return [];
		}
		targetCommand = cmd;
	}

	if (subcommandName) {
		const sub = findCommand(targetCommand, subcommandName);
		if (!sub) {
			return [];
		}
		targetCommand = sub;
	}

	return targetCommand.arguments;
}

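An illustrative run of the extraction helpers against a tiny hypothetical Commander program (the real structure comes from the full backlog CLI):

import { Command } from "commander";

const program = new Command("backlog");
const task = program.command("task").alias("tasks");
task.command("create <title>").option("-s, --status <status>", "task status");

const info = extractCommandStructure(program);
console.log(getTopLevelCommands(info)); // ["task", "tasks"]
console.log(getSubcommandNames(info, "task")); // ["create"]
console.log(getOptionFlags(info, "task", "create")); // ["--status", "-s"]
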
@@ -0,0 +1,100 @@

import { Core } from "../index.ts";
import type { BacklogConfig } from "../types/index.ts";

type CoreCallback<T> = (core: Core) => Promise<T>;

/**
 * Create a Core instance bound to the current working directory.
 */
function createCore(): Core {
	return new Core(process.cwd());
}

/**
 * Execute a callback with a Core instance, returning a fallback value if anything fails.
 */
async function withCore<T>(callback: CoreCallback<T>, fallback: T): Promise<T> {
	try {
		const core = createCore();
		return await callback(core);
	} catch {
		return fallback;
	}
}

function getDefaultStatuses(): string[] {
	return ["To Do", "In Progress", "Done"];
}

/**
 * Get all task IDs from the backlog
 */
export async function getTaskIds(): Promise<string[]> {
	return await withCore(async (core) => {
		const tasks = await core.filesystem.listTasks();
		return tasks.map((task) => task.id).sort();
	}, []);
}

/**
 * Get configured status values
 */
export async function getStatuses(): Promise<string[]> {
	return await withCore(async (core) => {
		const config: BacklogConfig | null = await core.filesystem.loadConfig();
		const statuses = config?.statuses;
		if (Array.isArray(statuses) && statuses.length > 0) {
			return statuses;
		}
		return getDefaultStatuses();
	}, getDefaultStatuses());
}

/**
 * Get priority values
 */
export function getPriorities(): string[] {
	return ["high", "medium", "low"];
}

/**
 * Get unique labels from all tasks
 */
export async function getLabels(): Promise<string[]> {
	return await withCore(async (core) => {
		const tasks = await core.filesystem.listTasks();
		const labels = new Set<string>();
		for (const task of tasks) {
			for (const label of task.labels) {
				labels.add(label);
			}
		}
		return Array.from(labels).sort();
	}, []);
}

/**
 * Get unique assignees from all tasks
 */
export async function getAssignees(): Promise<string[]> {
	return await withCore(async (core) => {
		const tasks = await core.filesystem.listTasks();
		const assignees = new Set<string>();
		for (const task of tasks) {
			for (const assignee of task.assignee) {
				assignees.add(assignee);
			}
		}
		return Array.from(assignees).sort();
	}, []);
}

/**
 * Get all document IDs from the backlog
 */
export async function getDocumentIds(): Promise<string[]> {
	return await withCore(async (core) => {
		const docs = await core.filesystem.listDocuments();
		return docs.map((doc) => doc.id).sort();
	}, []);
}

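The `withCore` wrapper keeps completion non-fatal: outside a backlog project every async provider silently degrades to its fallback (behavior restated from the code above):

// await getStatuses() -> configured statuses, else ["To Do", "In Progress", "Done"]
// await getTaskIds()  -> sorted task IDs, else [] when no project is found
// getPriorities()     -> ["high", "medium", "low"] (static, never fails)
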
@@ -0,0 +1,107 @@

import { describe, expect, test } from "bun:test";
import { parseCompletionContext } from "./helper.ts";

describe("parseCompletionContext", () => {
	test("parses empty command line", () => {
		const context = parseCompletionContext("backlog ", 8);
		expect(context.command).toBeNull();
		expect(context.subcommand).toBeNull();
		expect(context.partial).toBe("");
		expect(context.lastFlag).toBeNull();
	});

	test("parses partial command", () => {
		const context = parseCompletionContext("backlog tas", 11);
		expect(context.command).toBeNull();
		expect(context.partial).toBe("tas");
	});

	test("parses complete command", () => {
		const context = parseCompletionContext("backlog task ", 13);
		expect(context.command).toBe("task");
		expect(context.subcommand).toBeNull();
		expect(context.partial).toBe("");
	});

	test("parses partial subcommand", () => {
		const context = parseCompletionContext("backlog task ed", 15);
		expect(context.command).toBe("task");
		expect(context.subcommand).toBeNull();
		expect(context.partial).toBe("ed");
	});

	test("parses complete subcommand", () => {
		const context = parseCompletionContext("backlog task edit ", 18);
		expect(context.command).toBe("task");
		expect(context.subcommand).toBe("edit");
		expect(context.partial).toBe("");
	});

	test("parses partial argument", () => {
		const context = parseCompletionContext("backlog task edit task-", 23);
		expect(context.command).toBe("task");
		expect(context.subcommand).toBe("edit");
		expect(context.partial).toBe("task-");
	});

	test("parses flag", () => {
		const context = parseCompletionContext("backlog task create --status ", 29);
		expect(context.command).toBe("task");
		expect(context.subcommand).toBe("create");
		expect(context.lastFlag).toBe("--status");
		expect(context.partial).toBe("");
	});

	test("parses partial flag value", () => {
		const context = parseCompletionContext("backlog task create --status In", 31);
		expect(context.command).toBe("task");
		expect(context.subcommand).toBe("create");
		expect(context.lastFlag).toBe("--status");
		expect(context.partial).toBe("In");
	});

	test("handles quoted strings", () => {
		const context = parseCompletionContext('backlog task create "test task" --status ', 41);
		expect(context.command).toBe("task");
		expect(context.subcommand).toBe("create");
		expect(context.lastFlag).toBe("--status");
		expect(context.partial).toBe("");
	});

	test("handles multiple flags", () => {
		const context = parseCompletionContext("backlog task create --priority high --status ", 46);
		expect(context.command).toBe("task");
		expect(context.subcommand).toBe("create");
		expect(context.lastFlag).toBe("--status");
		expect(context.partial).toBe("");
	});

	test("parses completion subcommand", () => {
		const context = parseCompletionContext("backlog completion install ", 27);
		expect(context.command).toBe("completion");
		expect(context.subcommand).toBe("install");
		expect(context.partial).toBe("");
	});

	test("handles cursor in middle of line", () => {
		// Cursor at position 13 is after "backlog task " (space included)
		const context = parseCompletionContext("backlog task edit", 13);
		expect(context.command).toBe("task");
		expect(context.subcommand).toBeNull();
		expect(context.partial).toBe("");
	});

	test("counts argument position correctly", () => {
		const context = parseCompletionContext("backlog task edit task-1 ", 25);
		expect(context.command).toBe("task");
		expect(context.subcommand).toBe("edit");
		expect(context.argPosition).toBe(1);
	});

	test("does not count flag values as arguments", () => {
		const context = parseCompletionContext("backlog task create --status Done ", 34);
		expect(context.command).toBe("task");
		expect(context.subcommand).toBe("create");
		expect(context.argPosition).toBe(0);
	});
});

@@ -0,0 +1,172 @@

import type { Command } from "commander";
import {
	extractCommandStructure,
	getExpectedArguments,
	getOptionFlags,
	getSubcommandNames,
	getTopLevelCommands,
} from "./command-structure.ts";
import { getAssignees, getDocumentIds, getLabels, getPriorities, getStatuses, getTaskIds } from "./data-providers.ts";

export interface CompletionContext {
	words: string[];
	partial: string;
	command: string | null;
	subcommand: string | null;
	lastFlag: string | null;
	argPosition: number;
}

/**
 * Parse the command line to determine completion context
 */
export function parseCompletionContext(line: string, point: number): CompletionContext {
	// Extract the portion up to the cursor
	const textBeforeCursor = line.slice(0, point);

	// Split into words, handling quotes
	const words = textBeforeCursor.match(/(?:[^\s"']+|"[^"]*"|'[^']*')+/g) || [];

	// Remove "backlog" from the start
	const cleanWords = words.slice(1);

	// Determine if we're completing a partial word or starting a new one
	const endsWithSpace = textBeforeCursor.endsWith(" ");
	const partial = endsWithSpace ? "" : cleanWords[cleanWords.length - 1] || "";

	// Remove partial from words if not completing a new word
	const completedWords = endsWithSpace ? cleanWords : cleanWords.slice(0, -1);

	// Identify command, subcommand, last flag, and argument position
	let command: string | null = null;
	let subcommand: string | null = null;
	let lastFlag: string | null = null;
	let argPosition = 0;

	for (let i = 0; i < completedWords.length; i++) {
		const word = completedWords[i];
		if (!word) {
			continue;
		}
		if (word.startsWith("-")) {
			lastFlag = word;
		} else if (!command) {
			command = word;
		} else if (!subcommand) {
			subcommand = word;
		} else {
			// Count positional arguments
			const prevWord = completedWords[i - 1];
			if (!prevWord || !prevWord.startsWith("-")) {
				argPosition++;
			}
		}
	}

	return {
		words: completedWords,
		partial,
		command,
		subcommand,
		lastFlag,
		argPosition,
	};
}

/**
 * Filter completions by partial match
 */
function filterCompletions(completions: string[], partial: string): string[] {
	if (!partial) {
		return completions;
	}
	return completions.filter((c) => c.toLowerCase().startsWith(partial.toLowerCase()));
}

/**
 * Get completions based on argument name pattern
 */
async function getArgumentCompletions(argumentName: string): Promise<string[]> {
	const lowerName = argumentName.toLowerCase();

	// Match common patterns
	if (lowerName.includes("taskid") || lowerName === "id") {
		return await getTaskIds();
	}
	if (lowerName.includes("docid") || lowerName.includes("documentid")) {
		return await getDocumentIds();
	}
	if (lowerName.includes("title") || lowerName.includes("name")) {
		return []; // Free-form text, no completions
	}

	return [];
}

/**
 * Get completions for flag values based on flag name
 */
async function getFlagValueCompletions(flagName: string): Promise<string[]> {
	const cleanFlag = flagName.replace(/^-+/, "");

	switch (cleanFlag) {
		case "status":
			return await getStatuses();
		case "priority":
			return getPriorities();
		case "labels":
		case "label":
			return await getLabels();
		case "assignee":
			return await getAssignees();
		case "shell":
			return ["bash", "zsh", "fish"];
		default:
			return [];
	}
}

/**
 * Generate completions based on context
 */
export async function getCompletions(program: Command, line: string, point: number): Promise<string[]> {
	const context = parseCompletionContext(line, point);
	const cmdInfo = extractCommandStructure(program);

	// If completing a flag value
	if (context.lastFlag) {
		const flagCompletions = await getFlagValueCompletions(context.lastFlag);
		return filterCompletions(flagCompletions, context.partial);
	}

	// No command yet - complete top-level commands
	if (!context.command) {
		return filterCompletions(getTopLevelCommands(cmdInfo), context.partial);
	}

	// Command but no subcommand - complete subcommands or flags
	if (!context.subcommand) {
		const subcommands = getSubcommandNames(cmdInfo, context.command);
		const flags = getOptionFlags(cmdInfo, context.command);
		return filterCompletions([...subcommands, ...flags], context.partial);
	}

	// We have command and subcommand - check what arguments are expected
	const expectedArgs = getExpectedArguments(cmdInfo, context.command, context.subcommand);

	// If we're at a position where an argument is expected
	if (expectedArgs.length > context.argPosition) {
		const expectedArg = expectedArgs[context.argPosition];
		if (expectedArg) {
			const argCompletions = await getArgumentCompletions(expectedArg.name);

			// Also include flags
			const flags = getOptionFlags(cmdInfo, context.command, context.subcommand);
			return filterCompletions([...argCompletions, ...flags], context.partial);
		}
	}

	// No more positional arguments expected, just show flags
	const flags = getOptionFlags(cmdInfo, context.command, context.subcommand);
	return filterCompletions(flags, context.partial);
}

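End-to-end sketch of `getCompletions` resolving a flag value (hypothetical program; the candidates depend on the project's configured statuses):

import { Command } from "commander";

const program = new Command("backlog");
program.command("task").command("create <title>").option("--status <status>", "task status");

// lastFlag is "--status", so statuses are fetched and filtered against "In".
const line = "backlog task create --status In";
const completions = await getCompletions(program, line, line.length);
console.log(completions); // e.g. ["In Progress"]
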
@@ -0,0 +1,66 @@

/**
 * Default directory structure for backlog projects
 */
export const DEFAULT_DIRECTORIES = {
	/** Main backlog directory */
	BACKLOG: "backlog",
	/** Active tasks directory */
	TASKS: "tasks",
	/** Draft tasks directory */
	DRAFTS: "drafts",
	/** Completed tasks directory */
	COMPLETED: "completed",
	/** Archive root directory */
	ARCHIVE: "archive",
	/** Archived tasks directory */
	ARCHIVE_TASKS: "archive/tasks",
	/** Archived drafts directory */
	ARCHIVE_DRAFTS: "archive/drafts",
	/** Documentation directory */
	DOCS: "docs",
	/** Decision logs directory */
	DECISIONS: "decisions",
} as const;

/**
 * Default configuration file names
 */
export const DEFAULT_FILES = {
	/** Main configuration file */
	CONFIG: "config.yml",
	/** Local user settings file */
	USER: ".user",
} as const;

/**
 * Default task statuses
 */
export const DEFAULT_STATUSES = ["To Do", "In Progress", "Done"] as const;

/**
 * Fallback status when no default is configured
 */
export const FALLBACK_STATUS = "To Do";

/**
 * Maximum width for wrapped text lines in UI components
 */
export const WRAP_LIMIT = 72;

/**
 * Default values for advanced configuration options used during project initialization.
 * Shared between CLI and browser wizard to ensure consistent defaults.
 */
export const DEFAULT_INIT_CONFIG = {
	checkActiveBranches: true,
	remoteOperations: true,
	activeBranchDays: 30,
	bypassGitHooks: false,
	autoCommit: false,
	zeroPaddedIds: undefined as number | undefined,
	defaultEditor: undefined as string | undefined,
	defaultPort: 6420,
	autoOpenBrowser: true,
} as const;

export * from "../guidelines/index.ts";

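Illustrative path composition from these constants (POSIX separators shown):

import { join } from "node:path";

const tasksDir = join(DEFAULT_DIRECTORIES.BACKLOG, DEFAULT_DIRECTORIES.TASKS); // "backlog/tasks"
const archivedDrafts = join(DEFAULT_DIRECTORIES.BACKLOG, DEFAULT_DIRECTORIES.ARCHIVE_DRAFTS); // "backlog/archive/drafts"
const configPath = join(DEFAULT_DIRECTORIES.BACKLOG, DEFAULT_FILES.CONFIG); // "backlog/config.yml"
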
@@ -0,0 +1,61 @@

import type { BacklogConfig } from "../types/index.ts";

/**
 * Migrates config to ensure all required fields exist with default values
 */
export function migrateConfig(config: Partial<BacklogConfig>): BacklogConfig {
	const defaultConfig: BacklogConfig = {
		projectName: "Untitled Project",
		defaultEditor: "",
		defaultStatus: "",
		statuses: ["To Do", "In Progress", "Done"],
		labels: [],
		milestones: [],
		dateFormat: "YYYY-MM-DD",
		maxColumnWidth: 80,
		autoOpenBrowser: true,
		defaultPort: 6420,
		remoteOperations: true,
		autoCommit: false,
		bypassGitHooks: false,
		checkActiveBranches: true,
		activeBranchDays: 30,
	};

	// Merge provided config with defaults, ensuring all fields exist
	// Only include fields from config that are not undefined
	const filteredConfig = Object.fromEntries(Object.entries(config).filter(([_, value]) => value !== undefined));

	const migratedConfig: BacklogConfig = {
		...defaultConfig,
		...filteredConfig,
	};

	// Ensure arrays are not undefined
	migratedConfig.statuses = config.statuses || defaultConfig.statuses;
	migratedConfig.labels = config.labels || defaultConfig.labels;
	migratedConfig.milestones = config.milestones || defaultConfig.milestones;

	return migratedConfig;
}

/**
 * Checks if config needs migration (missing any expected fields)
 */
export function needsMigration(config: Partial<BacklogConfig>): boolean {
	// Check for all expected fields including new ones
	// We need to check not just presence but also that they aren't undefined
	const expectedFieldsWithDefaults = [
		{ field: "projectName", hasDefault: true },
		{ field: "statuses", hasDefault: true },
		{ field: "defaultPort", hasDefault: true },
		{ field: "autoOpenBrowser", hasDefault: true },
		{ field: "remoteOperations", hasDefault: true },
		{ field: "autoCommit", hasDefault: true },
	];

	return expectedFieldsWithDefaults.some(({ field }) => {
		const value = config[field as keyof BacklogConfig];
		return value === undefined;
	});
}

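Worked example (values follow directly from the defaults above): a legacy config missing newer fields is flagged and filled in, while explicit values survive the merge.

const legacy = { projectName: "Demo", statuses: ["Open", "Closed"] };

needsMigration(legacy); // true - defaultPort, autoOpenBrowser, etc. are undefined
const migrated = migrateConfig(legacy);
migrated.projectName; // "Demo" (kept)
migrated.statuses; // ["Open", "Closed"] (kept)
migrated.defaultPort; // 6420 (default)
migrated.autoCommit; // false (default)
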
@@ -0,0 +1,899 @@

import { type FSWatcher, watch } from "node:fs";
import { readdir, stat } from "node:fs/promises";
import { basename, join, relative, sep } from "node:path";
import type { FileSystem } from "../file-system/operations.ts";
import { parseDecision, parseDocument, parseTask } from "../markdown/parser.ts";
import type { Decision, Document, Task, TaskListFilter } from "../types/index.ts";
import { taskIdsEqual } from "../utils/task-path.ts";
import { sortByTaskId } from "../utils/task-sorting.ts";

interface ContentSnapshot {
	tasks: Task[];
	documents: Document[];
	decisions: Decision[];
}

type ContentStoreEventType = "ready" | "tasks" | "documents" | "decisions";

export type ContentStoreEvent =
	| { type: "ready"; snapshot: ContentSnapshot; version: number }
	| { type: "tasks"; tasks: Task[]; snapshot: ContentSnapshot; version: number }
	| { type: "documents"; documents: Document[]; snapshot: ContentSnapshot; version: number }
	| { type: "decisions"; decisions: Decision[]; snapshot: ContentSnapshot; version: number };

export type ContentStoreListener = (event: ContentStoreEvent) => void;

interface WatchHandle {
	stop(): void;
}

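/*
 * Usage sketch (illustrative, not part of the original file). Assumes a
 * constructed FileSystem instance for the current project:
 *
 *   const store = new ContentStore(filesystem, undefined, true);
 *   const unsubscribe = store.subscribe((event) => {
 *     if (event.type === "tasks") {
 *       console.log(`tasks changed (v${event.version}):`, event.tasks.length);
 *     }
 *   });
 *   await store.ensureInitialized(); // emits "ready" to subscribers
 *   // ...later:
 *   unsubscribe();
 *   store.dispose(); // closes watchers, restores the patched filesystem
 */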
export class ContentStore {
	private initialized = false;
	private initializing: Promise<void> | null = null;
	private version = 0;

	private readonly tasks = new Map<string, Task>();
	private readonly documents = new Map<string, Document>();
	private readonly decisions = new Map<string, Decision>();

	private cachedTasks: Task[] = [];
	private cachedDocuments: Document[] = [];
	private cachedDecisions: Decision[] = [];

	private readonly listeners = new Set<ContentStoreListener>();
	private readonly watchers: WatchHandle[] = [];
	private restoreFilesystemPatch?: () => void;
	private chainTail: Promise<void> = Promise.resolve();
	private watchersInitialized = false;
	private configWatcherActive = false;

	private attachWatcherErrorHandler(watcher: FSWatcher, context: string): void {
		watcher.on("error", (error) => {
			if (process.env.DEBUG) {
				console.warn(`Watcher error (${context})`, error);
			}
		});
	}

	constructor(
		private readonly filesystem: FileSystem,
		private readonly taskLoader?: () => Promise<Task[]>,
		private readonly enableWatchers = false,
	) {
		this.patchFilesystem();
	}

	subscribe(listener: ContentStoreListener): () => void {
		this.listeners.add(listener);

		if (this.initialized) {
			listener({ type: "ready", snapshot: this.getSnapshot(), version: this.version });
		} else {
			void this.ensureInitialized();
		}

		return () => {
			this.listeners.delete(listener);
		};
	}

	async ensureInitialized(): Promise<ContentSnapshot> {
		if (this.initialized) {
			return this.getSnapshot();
		}

		if (!this.initializing) {
			this.initializing = this.loadInitialData().catch((error) => {
				this.initializing = null;
				throw error;
			});
		}

		await this.initializing;
		return this.getSnapshot();
	}

	getTasks(filter?: TaskListFilter): Task[] {
		if (!this.initialized) {
			throw new Error("ContentStore not initialized. Call ensureInitialized() first.");
		}

		let tasks = this.cachedTasks;
		if (filter?.status) {
			const statusLower = filter.status.toLowerCase();
			tasks = tasks.filter((task) => task.status.toLowerCase() === statusLower);
		}
		if (filter?.assignee) {
			const assignee = filter.assignee;
			tasks = tasks.filter((task) => task.assignee.includes(assignee));
		}
		if (filter?.priority) {
			const priority = filter.priority.toLowerCase();
			tasks = tasks.filter((task) => (task.priority ?? "").toLowerCase() === priority);
		}
		if (filter?.parentTaskId) {
			const parentFilter = filter.parentTaskId;
			tasks = tasks.filter((task) => task.parentTaskId && taskIdsEqual(parentFilter, task.parentTaskId));
		}

		return tasks.slice();
	}

	upsertTask(task: Task): void {
		if (!this.initialized) {
			return;
		}
		this.tasks.set(task.id, task);
		this.cachedTasks = sortByTaskId(Array.from(this.tasks.values()));
		this.notify("tasks");
	}

	getDocuments(): Document[] {
		if (!this.initialized) {
			throw new Error("ContentStore not initialized. Call ensureInitialized() first.");
		}
		return this.cachedDocuments.slice();
	}

	getDecisions(): Decision[] {
		if (!this.initialized) {
			throw new Error("ContentStore not initialized. Call ensureInitialized() first.");
		}
		return this.cachedDecisions.slice();
	}

	getSnapshot(): ContentSnapshot {
		return {
			tasks: this.cachedTasks.slice(),
			documents: this.cachedDocuments.slice(),
			decisions: this.cachedDecisions.slice(),
		};
	}

	dispose(): void {
		if (this.restoreFilesystemPatch) {
			this.restoreFilesystemPatch();
			this.restoreFilesystemPatch = undefined;
		}
		for (const watcher of this.watchers) {
			try {
				watcher.stop();
			} catch {
				// Ignore watcher shutdown errors
			}
		}
		this.watchers.length = 0;
		this.watchersInitialized = false;
	}

	private emit(event: ContentStoreEvent): void {
		for (const listener of [...this.listeners]) {
			listener(event);
		}
	}

	private notify(type: ContentStoreEventType): void {
		this.version += 1;
		const snapshot = this.getSnapshot();

		if (type === "tasks") {
			this.emit({ type, tasks: snapshot.tasks, snapshot, version: this.version });
			return;
		}

		if (type === "documents") {
			this.emit({ type, documents: snapshot.documents, snapshot, version: this.version });
			return;
		}

		if (type === "decisions") {
			this.emit({ type, decisions: snapshot.decisions, snapshot, version: this.version });
			return;
		}

		this.emit({ type: "ready", snapshot, version: this.version });
	}

	private async loadInitialData(): Promise<void> {
		await this.filesystem.ensureBacklogStructure();

		// Use custom task loader if provided (e.g., loadTasks for cross-branch support)
		// Otherwise fall back to filesystem-only loading
		const [tasks, documents, decisions] = await Promise.all([
			this.loadTasksWithLoader(),
			this.filesystem.listDocuments(),
			this.filesystem.listDecisions(),
		]);

		this.replaceTasks(tasks);
		this.replaceDocuments(documents);
		this.replaceDecisions(decisions);

		this.initialized = true;
		if (this.enableWatchers) {
			await this.setupWatchers();
		}
		this.notify("ready");
	}

	private async setupWatchers(): Promise<void> {
		if (this.watchersInitialized) return;
		this.watchersInitialized = true;

		try {
			this.watchers.push(this.createTaskWatcher());
		} catch (error) {
			if (process.env.DEBUG) {
				console.error("Failed to initialize task watcher", error);
			}
		}

		try {
			this.watchers.push(this.createDecisionWatcher());
		} catch (error) {
			if (process.env.DEBUG) {
				console.error("Failed to initialize decision watcher", error);
			}
		}

		try {
			const docWatcher = await this.createDocumentWatcher();
			this.watchers.push(docWatcher);
		} catch (error) {
			if (process.env.DEBUG) {
				console.error("Failed to initialize document watcher", error);
			}
		}

		try {
			const configWatcher = this.createConfigWatcher();
			if (configWatcher) {
				this.watchers.push(configWatcher);
				this.configWatcherActive = true;
			}
		} catch (error) {
			if (process.env.DEBUG) {
				console.error("Failed to initialize config watcher", error);
			}
		}
	}

	/**
	 * Retry setting up the config watcher after initialization.
	 * Called when the config file is created after the server started.
	 */
	ensureConfigWatcher(): void {
		if (this.configWatcherActive) {
			return;
		}
		try {
			const configWatcher = this.createConfigWatcher();
			if (configWatcher) {
				this.watchers.push(configWatcher);
				this.configWatcherActive = true;
			}
		} catch (error) {
			if (process.env.DEBUG) {
				console.error("Failed to setup config watcher after init", error);
			}
		}
	}

	private createConfigWatcher(): WatchHandle | null {
		const configPath = this.filesystem.configFilePath;
		try {
			const watcher: FSWatcher = watch(configPath, (eventType) => {
				if (eventType !== "change" && eventType !== "rename") {
					return;
				}
				this.enqueue(async () => {
					this.filesystem.invalidateConfigCache();
					this.notify("tasks");
				});
			});
			this.attachWatcherErrorHandler(watcher, "config");

			return {
				stop() {
					watcher.close();
				},
			};
		} catch (error) {
			if (process.env.DEBUG) {
				console.error("Failed to watch config file", error);
			}
			return null;
		}
	}

	private createTaskWatcher(): WatchHandle {
		const tasksDir = this.filesystem.tasksDir;
		const watcher: FSWatcher = watch(tasksDir, { recursive: false }, (eventType, filename) => {
			const file = this.normalizeFilename(filename);
			if (!file || !file.startsWith("task-") || !file.endsWith(".md")) {
				this.enqueue(async () => {
					await this.refreshTasksFromDisk();
				});
				return;
			}

			this.enqueue(async () => {
				const [taskId] = file.split(" ");
				if (!taskId) return;

				const fullPath = join(tasksDir, file);
				const exists = await Bun.file(fullPath).exists();

				if (!exists && eventType === "rename") {
					if (this.tasks.delete(taskId)) {
						this.cachedTasks = sortByTaskId(Array.from(this.tasks.values()));
						this.notify("tasks");
					}
					return;
				}

				if (eventType === "rename" && exists) {
					await this.refreshTasksFromDisk();
					return;
				}

				const previous = this.tasks.get(taskId);
				const task = await this.retryRead(
					async () => {
						const stillExists = await Bun.file(fullPath).exists();
						if (!stillExists) {
							return null;
						}
						const content = await Bun.file(fullPath).text();
						return parseTask(content);
					},
					(result) => {
						if (!result) {
							return false;
						}
						if (result.id !== taskId) {
							return false;
						}
						if (!previous) {
							return true;
						}
						return this.hasTaskChanged(previous, result);
					},
				);
				if (!task) {
					await this.refreshTasksFromDisk(taskId, previous);
					return;
				}

				this.tasks.set(task.id, task);
				this.cachedTasks = sortByTaskId(Array.from(this.tasks.values()));
				this.notify("tasks");
			});
		});
		this.attachWatcherErrorHandler(watcher, "tasks");

		return {
			stop() {
				watcher.close();
			},
		};
	}

	private createDecisionWatcher(): WatchHandle {
		const decisionsDir = this.filesystem.decisionsDir;
		const watcher: FSWatcher = watch(decisionsDir, { recursive: false }, (eventType, filename) => {
			const file = this.normalizeFilename(filename);
			if (!file || !file.startsWith("decision-") || !file.endsWith(".md")) {
				this.enqueue(async () => {
					await this.refreshDecisionsFromDisk();
				});
				return;
			}

			this.enqueue(async () => {
				const [idPart] = file.split(" - ");
				if (!idPart) return;

				const fullPath = join(decisionsDir, file);
				const exists = await Bun.file(fullPath).exists();

				if (!exists && eventType === "rename") {
					if (this.decisions.delete(idPart)) {
						this.cachedDecisions = sortByTaskId(Array.from(this.decisions.values()));
						this.notify("decisions");
					}
					return;
				}

				if (eventType === "rename" && exists) {
					await this.refreshDecisionsFromDisk();
					return;
				}

				const previous = this.decisions.get(idPart);
				const decision = await this.retryRead(
					async () => {
						try {
							const content = await Bun.file(fullPath).text();
							return parseDecision(content);
						} catch {
							return null;
						}
					},
					(result) => {
						if (!result) {
							return false;
						}
						if (result.id !== idPart) {
							return false;
						}
						if (!previous) {
							return true;
						}
						return this.hasDecisionChanged(previous, result);
					},
				);
				if (!decision) {
					await this.refreshDecisionsFromDisk(idPart, previous);
					return;
				}
				this.decisions.set(decision.id, decision);
				this.cachedDecisions = sortByTaskId(Array.from(this.decisions.values()));
				this.notify("decisions");
			});
		});
		this.attachWatcherErrorHandler(watcher, "decisions");

		return {
			stop() {
				watcher.close();
			},
		};
	}

	private async createDocumentWatcher(): Promise<WatchHandle> {
		const docsDir = this.filesystem.docsDir;
		return this.createDirectoryWatcher(docsDir, async (eventType, absolutePath, relativePath) => {
			const base = basename(absolutePath);
			if (!base.endsWith(".md")) {
				if (relativePath === null) {
					await this.refreshDocumentsFromDisk();
				}
				return;
			}

			if (!base.startsWith("doc-")) {
				await this.refreshDocumentsFromDisk();
				return;
			}

			const [idPart] = base.split(" - ");
			if (!idPart) {
				await this.refreshDocumentsFromDisk();
				return;
			}

			const exists = await Bun.file(absolutePath).exists();

			if (!exists && eventType === "rename") {
				if (this.documents.delete(idPart)) {
					this.cachedDocuments = [...this.documents.values()].sort((a, b) => a.title.localeCompare(b.title));
					this.notify("documents");
				}
				return;
			}

			if (eventType === "rename" && exists) {
				await this.refreshDocumentsFromDisk();
				return;
			}

			const previous = this.documents.get(idPart);
			const document = await this.retryRead(
				async () => {
					try {
						const content = await Bun.file(absolutePath).text();
						return parseDocument(content);
					} catch {
						return null;
					}
				},
				(result) => {
					if (!result) {
						return false;
					}
					if (result.id !== idPart) {
						return false;
					}
					if (!previous) {
						return true;
					}
					return this.hasDocumentChanged(previous, result);
				},
			);
			if (!document) {
				await this.refreshDocumentsFromDisk(idPart, previous);
				return;
			}

			this.documents.set(document.id, document);
			this.cachedDocuments = [...this.documents.values()].sort((a, b) => a.title.localeCompare(b.title));
			this.notify("documents");
		});
	}

	private normalizeFilename(value: string | Buffer | null | undefined): string | null {
		if (typeof value === "string") {
			return value;
		}
		if (value instanceof Buffer) {
			return value.toString();
		}
		return null;
	}

	private async createDirectoryWatcher(
		rootDir: string,
		handler: (eventType: string, absolutePath: string, relativePath: string | null) => Promise<void> | void,
	): Promise<WatchHandle> {
		try {
			const watcher = watch(rootDir, { recursive: true }, (eventType, filename) => {
				const relativePath = this.normalizeFilename(filename);
				const absolutePath = relativePath ? join(rootDir, relativePath) : rootDir;

				this.enqueue(async () => {
					await handler(eventType, absolutePath, relativePath);
				});
			});
			this.attachWatcherErrorHandler(watcher, `dir:${rootDir}`);

			return {
				stop() {
					watcher.close();
				},
			};
		} catch (error) {
			if (this.isRecursiveUnsupported(error)) {
				return this.createManualRecursiveWatcher(rootDir, handler);
			}
			throw error;
		}
	}

	private isRecursiveUnsupported(error: unknown): boolean {
		if (!error || typeof error !== "object") {
			return false;
		}
		const maybeError = error as { code?: string; message?: string };
		if (maybeError.code === "ERR_FEATURE_UNAVAILABLE_ON_PLATFORM") {
			return true;
		}
		return (
			typeof maybeError.message === "string" &&
			maybeError.message.toLowerCase().includes("recursive") &&
			maybeError.message.toLowerCase().includes("not supported")
		);
	}

	private replaceTasks(tasks: Task[]): void {
		this.tasks.clear();
		for (const task of tasks) {
			this.tasks.set(task.id, task);
		}
		this.cachedTasks = sortByTaskId(Array.from(this.tasks.values()));
	}

	private replaceDocuments(documents: Document[]): void {
		this.documents.clear();
		for (const document of documents) {
			this.documents.set(document.id, document);
		}
		this.cachedDocuments = [...this.documents.values()].sort((a, b) => a.title.localeCompare(b.title));
	}

	private replaceDecisions(decisions: Decision[]): void {
		this.decisions.clear();
		for (const decision of decisions) {
			this.decisions.set(decision.id, decision);
		}
		this.cachedDecisions = sortByTaskId(Array.from(this.decisions.values()));
	}

	private patchFilesystem(): void {
		if (this.restoreFilesystemPatch) {
			return;
		}

		const originalSaveTask = this.filesystem.saveTask;
		const originalSaveDocument = this.filesystem.saveDocument;
		const originalSaveDecision = this.filesystem.saveDecision;

		this.filesystem.saveTask = (async (task: Task): Promise<string> => {
			const result = await originalSaveTask.call(this.filesystem, task);
			await this.handleTaskWrite(task.id);
			return result;
		}) as FileSystem["saveTask"];

		this.filesystem.saveDocument = (async (document: Document, subPath = ""): Promise<string> => {
			const result = await originalSaveDocument.call(this.filesystem, document, subPath);
			await this.handleDocumentWrite(document.id);
			return result;
		}) as FileSystem["saveDocument"];

		this.filesystem.saveDecision = (async (decision: Decision): Promise<void> => {
			await originalSaveDecision.call(this.filesystem, decision);
			await this.handleDecisionWrite(decision.id);
		}) as FileSystem["saveDecision"];

		this.restoreFilesystemPatch = () => {
			this.filesystem.saveTask = originalSaveTask;
			this.filesystem.saveDocument = originalSaveDocument;
			this.filesystem.saveDecision = originalSaveDecision;
		};
	}

	private async handleTaskWrite(taskId: string): Promise<void> {
		if (!this.initialized) {
			return;
		}
		await this.updateTaskFromDisk(taskId);
	}

	private async handleDocumentWrite(documentId: string): Promise<void> {
		if (!this.initialized) {
			return;
		}
		await this.refreshDocumentsFromDisk(documentId, this.documents.get(documentId));
	}

	private hasTaskChanged(previous: Task, next: Task): boolean {
		return JSON.stringify(previous) !== JSON.stringify(next);
	}

	private hasDocumentChanged(previous: Document, next: Document): boolean {
		return JSON.stringify(previous) !== JSON.stringify(next);
	}

	private hasDecisionChanged(previous: Decision, next: Decision): boolean {
		return JSON.stringify(previous) !== JSON.stringify(next);
	}

	private async refreshTasksFromDisk(expectedId?: string, previous?: Task): Promise<void> {
		const tasks = await this.retryRead(
			async () => this.loadTasksWithLoader(),
			(expected) => {
				if (!expectedId) {
					return true;
				}
				const match = expected.find((task) => task.id === expectedId);
				if (!match) {
					return false;
				}
				if (previous && !this.hasTaskChanged(previous, match)) {
					return false;
				}
				return true;
			},
		);
		if (!tasks) {
			return;
		}
		this.replaceTasks(tasks);
		this.notify("tasks");
	}

	private async refreshDocumentsFromDisk(expectedId?: string, previous?: Document): Promise<void> {
		const documents = await this.retryRead(
			async () => this.filesystem.listDocuments(),
			(expected) => {
				if (!expectedId) {
					return true;
				}
				const match = expected.find((doc) => doc.id === expectedId);
				if (!match) {
					return false;
				}
				if (previous && !this.hasDocumentChanged(previous, match)) {
					return false;
				}
				return true;
			},
		);
		if (!documents) {
			return;
		}
		this.replaceDocuments(documents);
		this.notify("documents");
	}

	private async refreshDecisionsFromDisk(expectedId?: string, previous?: Decision): Promise<void> {
		const decisions = await this.retryRead(
			async () => this.filesystem.listDecisions(),
			(expected) => {
				if (!expectedId) {
					return true;
				}
				const match = expected.find((decision) => decision.id === expectedId);
				if (!match) {
					return false;
				}
				if (previous && !this.hasDecisionChanged(previous, match)) {
					return false;
				}
				return true;
			},
		);
		if (!decisions) {
			return;
		}
		this.replaceDecisions(decisions);
		this.notify("decisions");
	}

	private async handleDecisionWrite(decisionId: string): Promise<void> {
		if (!this.initialized) {
			return;
		}
		await this.updateDecisionFromDisk(decisionId);
	}

	private async updateTaskFromDisk(taskId: string): Promise<void> {
		const previous = this.tasks.get(taskId);
		const task = await this.retryRead(
			async () => this.filesystem.loadTask(taskId),
			(result) => result !== null && (!previous || this.hasTaskChanged(previous, result)),
		);
		if (!task) {
			return;
		}
		this.tasks.set(task.id, task);
		this.cachedTasks = sortByTaskId(Array.from(this.tasks.values()));
		this.notify("tasks");
	}

	private async updateDecisionFromDisk(decisionId: string): Promise<void> {
|
||||
const previous = this.decisions.get(decisionId);
|
||||
const decision = await this.retryRead(
|
||||
async () => this.filesystem.loadDecision(decisionId),
|
||||
(result) => result !== null && (!previous || this.hasDecisionChanged(previous, result)),
|
||||
);
|
||||
if (!decision) {
|
||||
return;
|
||||
}
|
||||
this.decisions.set(decision.id, decision);
|
||||
this.cachedDecisions = sortByTaskId(Array.from(this.decisions.values()));
|
||||
this.notify("decisions");
|
||||
}
|
||||
|
||||
private async createManualRecursiveWatcher(
|
||||
rootDir: string,
|
||||
handler: (eventType: string, absolutePath: string, relativePath: string | null) => Promise<void> | void,
|
||||
): Promise<WatchHandle> {
|
||||
const watchers = new Map<string, FSWatcher>();
|
||||
let disposed = false;
|
||||
|
||||
const removeSubtreeWatchers = (baseDir: string) => {
|
||||
const prefix = baseDir.endsWith(sep) ? baseDir : `${baseDir}${sep}`;
|
||||
for (const path of [...watchers.keys()]) {
|
||||
if (path === baseDir || path.startsWith(prefix)) {
|
||||
watchers.get(path)?.close();
|
||||
watchers.delete(path);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
const addWatcher = async (dir: string): Promise<void> => {
|
||||
if (disposed || watchers.has(dir)) {
|
||||
return;
|
||||
}
|
||||
|
||||
const watcher = watch(dir, { recursive: false }, (eventType, filename) => {
|
||||
if (disposed) {
|
||||
return;
|
||||
}
|
||||
const relativePath = this.normalizeFilename(filename);
|
||||
const absolutePath = relativePath ? join(dir, relativePath) : dir;
|
||||
const normalizedRelative = relativePath ? relative(rootDir, absolutePath) : null;
|
||||
|
||||
this.enqueue(async () => {
|
||||
await handler(eventType, absolutePath, normalizedRelative);
|
||||
|
||||
if (eventType === "rename" && relativePath) {
|
||||
try {
|
||||
const stats = await stat(absolutePath);
|
||||
if (stats.isDirectory()) {
|
||||
await addWatcher(absolutePath);
|
||||
}
|
||||
} catch {
|
||||
removeSubtreeWatchers(absolutePath);
|
||||
}
|
||||
}
|
||||
});
|
||||
});
|
||||
this.attachWatcherErrorHandler(watcher, `manual:${dir}`);
|
||||
|
||||
watchers.set(dir, watcher);
|
||||
|
||||
try {
|
||||
const entries = await readdir(dir, { withFileTypes: true });
|
||||
for (const entry of entries) {
|
||||
const entryPath = join(dir, entry.name);
|
||||
if (entry.isDirectory()) {
|
||||
await addWatcher(entryPath);
|
||||
continue;
|
||||
}
|
||||
|
||||
if (entry.isFile()) {
|
||||
this.enqueue(async () => {
|
||||
await handler("change", entryPath, relative(rootDir, entryPath));
|
||||
});
|
||||
}
|
||||
}
|
||||
} catch {
|
||||
// Ignore transient directory enumeration issues
|
||||
}
|
||||
};
|
||||
|
||||
await addWatcher(rootDir);
|
||||
|
||||
return {
|
||||
stop() {
|
||||
disposed = true;
|
||||
for (const watcher of watchers.values()) {
|
||||
watcher.close();
|
||||
}
|
||||
watchers.clear();
|
||||
},
|
||||
};
|
||||
}
|
||||
|
||||
private async retryRead<T>(
|
||||
loader: () => Promise<T>,
|
||||
isValid: (result: T) => boolean = (value) => value !== null && value !== undefined,
|
||||
attempts = 12,
|
||||
delayMs = 75,
|
||||
): Promise<T | null> {
|
||||
let lastError: unknown = null;
|
||||
for (let attempt = 1; attempt <= attempts; attempt++) {
|
||||
try {
|
||||
const result = await loader();
|
||||
if (isValid(result)) {
|
||||
return result;
|
||||
}
|
||||
} catch (error) {
|
||||
lastError = error;
|
||||
}
|
||||
if (attempt < attempts) {
|
||||
await this.delay(delayMs * attempt);
|
||||
}
|
||||
}
|
||||
|
||||
if (lastError && process.env.DEBUG) {
|
||||
console.error("ContentStore retryRead exhausted attempts", lastError);
|
||||
}
|
||||
return null;
|
||||
}
|
||||
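
	// Illustrative timing (editor's note, not in the original source): with the defaults
	// above, retries back off linearly at 75ms, 150ms, ... 825ms between the 12 attempts,
	// so a persistently failing read gives up after roughly 75 * (11 * 12 / 2) = 4950ms.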

	private async delay(ms: number): Promise<void> {
		await new Promise((resolve) => setTimeout(resolve, ms));
	}

	private enqueue(fn: () => Promise<void>): void {
		this.chainTail = this.chainTail
			.then(() => fn())
			.catch((error) => {
				if (process.env.DEBUG) {
					console.error("ContentStore update failed", error);
				}
			});
	}

	private async loadTasksWithLoader(): Promise<Task[]> {
		if (this.taskLoader) {
			return await this.taskLoader();
		}
		return await this.filesystem.listTasks();
	}
}

export type { ContentSnapshot };

@@ -0,0 +1,248 @@
/**
 * Cross-branch task state resolution
 * Determines the latest state of tasks across all git branches
 */

import { DEFAULT_DIRECTORIES } from "../constants/index.ts";
import type { FileSystem } from "../file-system/operations.ts";
import type { GitOperations as GitOps } from "../git/operations.ts";
import type { Task } from "../types/index.ts";

export type TaskDirectoryType = "task" | "draft" | "archived" | "completed";

export interface TaskDirectoryInfo {
	taskId: string;
	type: TaskDirectoryType;
	lastModified: Date;
	branch: string;
	path: string;
}

/**
 * Get the latest directory location of specific task IDs across all branches
 * Only checks the provided task IDs for optimal performance
 */
export async function getLatestTaskStatesForIds(
	gitOps: GitOps,
	_filesystem: FileSystem,
	taskIds: string[],
	onProgress?: (message: string) => void,
	options?: { recentBranchesOnly?: boolean; daysAgo?: number },
): Promise<Map<string, TaskDirectoryInfo>> {
	const taskDirectories = new Map<string, TaskDirectoryInfo>();

	if (taskIds.length === 0) {
		return taskDirectories;
	}

	try {
		// Get branches - use recent branches by default for performance
		const useRecentOnly = options?.recentBranchesOnly ?? true;
		const daysAgo = options?.daysAgo ?? 30; // Default to 30 days if not specified

		let branches = useRecentOnly ? await gitOps.listRecentBranches(daysAgo) : await gitOps.listAllBranches();

		if (branches.length === 0) {
			return taskDirectories;
		}

		// Use standard backlog directory
		const backlogDir = DEFAULT_DIRECTORIES.BACKLOG;

		// Filter branches that actually have backlog changes
		const branchesWithBacklog: string[] = [];

		// Quick check which branches actually have the backlog directory
		for (const branch of branches) {
			try {
				// Just check if the backlog directory exists
				const files = await gitOps.listFilesInTree(branch, backlogDir);
				if (files.length > 0) {
					branchesWithBacklog.push(branch);
				}
			} catch {
				// Branch doesn't have backlog directory
			}
		}

		// Use filtered branches
		branches = branchesWithBacklog;

		// Count local vs remote branches for info
		const localBranches = branches.filter((b) => !b.includes("origin/"));
		const remoteBranches = branches.filter((b) => b.includes("origin/"));

		const branchMsg = useRecentOnly
			? `${branches.length} branches with backlog (from ${daysAgo} days, ${localBranches.length} local, ${remoteBranches.length} remote)`
			: `${branches.length} branches with backlog (${localBranches.length} local, ${remoteBranches.length} remote)`;
		onProgress?.(`Checking ${taskIds.length} tasks across ${branchMsg}...`);

		// Create all file path combinations we need to check
		const directoryChecks: Array<{ path: string; type: TaskDirectoryType }> = [
			{ path: `${backlogDir}/tasks`, type: "task" },
			{ path: `${backlogDir}/drafts`, type: "draft" },
			{ path: `${backlogDir}/archive/tasks`, type: "archived" },
			{ path: `${backlogDir}/completed`, type: "completed" },
		];

		// For better performance, prioritize checking current branch and main branch first
		const priorityBranches = ["main", "master"];
		const currentBranch = await gitOps.getCurrentBranch();
		if (currentBranch && !priorityBranches.includes(currentBranch)) {
			priorityBranches.unshift(currentBranch);
		}

		// Check priority branches first
		for (const branch of priorityBranches) {
			if (!branches.includes(branch)) continue;

			// Remove from main list to avoid duplicate checking
			branches = branches.filter((b) => b !== branch);

			// Quick check for all tasks in this branch
			for (const { path, type } of directoryChecks) {
				try {
					const files = await gitOps.listFilesInTree(branch, path);
					if (files.length === 0) continue;

					// Get all modification times in one pass
					const modTimes = await gitOps.getBranchLastModifiedMap(branch, path);

					// Build file->id map for O(1) lookup
					const fileToId = new Map<string, string>();
					for (const f of files) {
						const filename = f.substring(f.lastIndexOf("/") + 1);
						const match = filename.match(/^(task-\d+(?:\.\d+)?)/);
						if (match?.[1]) {
							fileToId.set(match[1], f);
						}
					}

					// Check each task ID
					for (const taskId of taskIds) {
						const taskFile = fileToId.get(taskId);

						if (taskFile) {
							const lastModified = modTimes.get(taskFile);
							if (lastModified) {
								const existing = taskDirectories.get(taskId);
								if (!existing || lastModified > existing.lastModified) {
									taskDirectories.set(taskId, {
										taskId,
										type,
										lastModified,
										branch,
										path: taskFile,
									});
								}
							}
						}
					}
				} catch {
					// Skip directories that don't exist
				}
			}
		}

		// If we found all tasks in priority branches, we can skip other branches
		if (taskDirectories.size === taskIds.length) {
			onProgress?.(`Found all ${taskIds.length} tasks in priority branches`);
			return taskDirectories;
		}

		// For remaining tasks, check other branches
		const remainingTaskIds = taskIds.filter((id) => !taskDirectories.has(id));
		if (remainingTaskIds.length === 0 || branches.length === 0) {
			onProgress?.(`Checked ${taskIds.length} tasks`);
			return taskDirectories;
		}

		onProgress?.(`Checking ${remainingTaskIds.length} remaining tasks across ${branches.length} branches...`);

		// Check remaining branches in parallel batches
		const BRANCH_BATCH_SIZE = 5; // Process 5 branches at a time for better performance
		for (let i = 0; i < branches.length; i += BRANCH_BATCH_SIZE) {
			const branchBatch = branches.slice(i, i + BRANCH_BATCH_SIZE);

			await Promise.all(
				branchBatch.map(async (branch) => {
					for (const { path, type } of directoryChecks) {
						try {
							const files = await gitOps.listFilesInTree(branch, path);

							if (files.length === 0) continue;

							// Get all modification times in one pass
							const modTimes = await gitOps.getBranchLastModifiedMap(branch, path);

							// Build file->id map for O(1) lookup
							const fileToId = new Map<string, string>();
							for (const f of files) {
								const filename = f.substring(f.lastIndexOf("/") + 1);
								const match = filename.match(/^(task-\d+(?:\.\d+)?)/);
								if (match?.[1]) {
									fileToId.set(match[1], f);
								}
							}

							for (const taskId of remainingTaskIds) {
								// Skip if we already found this task
								if (taskDirectories.has(taskId)) continue;

								const taskFile = fileToId.get(taskId);

								if (taskFile) {
									const lastModified = modTimes.get(taskFile);
									if (lastModified) {
										const existing = taskDirectories.get(taskId);
										if (!existing || lastModified > existing.lastModified) {
											taskDirectories.set(taskId, {
												taskId,
												type,
												lastModified,
												branch,
												path: taskFile,
											});
										}
									}
								}
							}
						} catch {
							// Skip directories that don't exist
						}
					}
				}),
			);

			// Early exit if we found all tasks
			if (taskDirectories.size === taskIds.length) {
				break;
			}
		}

		onProgress?.(`Checked ${taskIds.length} tasks`);
	} catch (error) {
		console.error("Failed to get task directory locations for IDs:", error);
	}

	return taskDirectories;
}

/**
 * Filter tasks based on their latest directory location across all branches
 * Only returns tasks whose latest directory type is "task" (not draft, archived, or completed)
 */
export function filterTasksByLatestState(tasks: Task[], latestDirectories: Map<string, TaskDirectoryInfo>): Task[] {
	return tasks.filter((task) => {
		const latestDirectory = latestDirectories.get(task.id);

		// If we don't have directory info, assume it's an active task
		if (!latestDirectory) {
			return true;
		}

		// Only show tasks whose latest directory type is "task"
		// Completed, archived, and draft tasks should not appear on the main board
		return latestDirectory.type === "task";
	});
}
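
// Illustrative usage (editor's sketch; `gitOps`, `filesystem`, and `tasks` are assumed to exist):
// const latest = await getLatestTaskStatesForIds(gitOps, filesystem, tasks.map((t) => t.id));
// const boardTasks = filterTasksByLatestState(tasks, latest);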

@@ -0,0 +1,205 @@
import { spawn } from "bun";
import {
	type AgentInstructionFile,
	addAgentInstructions,
	ensureMcpGuidelines,
	installClaudeAgent,
} from "../agent-instructions.ts";
import { DEFAULT_INIT_CONFIG } from "../constants/index.ts";
import type { BacklogConfig } from "../types/index.ts";
import type { Core } from "./backlog.ts";

export const MCP_SERVER_NAME = "backlog";
export const MCP_GUIDE_URL = "https://github.com/MrLesk/Backlog.md#-mcp-integration-model-context-protocol";

export type IntegrationMode = "mcp" | "cli" | "none";
export type McpClient = "claude" | "codex" | "gemini" | "guide";

export interface InitializeProjectOptions {
	projectName: string;
	integrationMode: IntegrationMode;
	mcpClients?: McpClient[];
	agentInstructions?: AgentInstructionFile[];
	installClaudeAgent?: boolean;
	advancedConfig?: {
		checkActiveBranches?: boolean;
		remoteOperations?: boolean;
		activeBranchDays?: number;
		bypassGitHooks?: boolean;
		autoCommit?: boolean;
		zeroPaddedIds?: number;
		defaultEditor?: string;
		defaultPort?: number;
		autoOpenBrowser?: boolean;
	};
	/** Existing config for re-initialization */
	existingConfig?: BacklogConfig | null;
}

export interface InitializeProjectResult {
	success: boolean;
	projectName: string;
	isReInitialization: boolean;
	config: BacklogConfig;
	mcpResults?: Record<string, string>;
}

async function runMcpClientCommand(label: string, command: string, args: string[]): Promise<string> {
	try {
		const child = spawn({
			cmd: [command, ...args],
			stdout: "pipe",
			stderr: "pipe",
		});
		const exitCode = await child.exited;
		if (exitCode !== 0) {
			throw new Error(`Command exited with code ${exitCode}`);
		}
		return `Added Backlog MCP server to ${label}`;
	} catch (error) {
		const message = error instanceof Error ? error.message : String(error);
		throw new Error(
			`Unable to configure ${label} automatically (${message}). Run manually: ${command} ${args.join(" ")}`,
		);
	}
}
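
// For reference (derived from the call sites below): the "claude" client ends up running
// `claude mcp add -s user backlog -- backlog mcp start`; on failure the thrown message
// echoes the exact command so it can be re-run manually.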

/**
 * Core initialization logic shared between CLI and browser.
 * Both CLI and browser validate input before calling this function.
 */
export async function initializeProject(
	core: Core,
	options: InitializeProjectOptions,
): Promise<InitializeProjectResult> {
	const {
		projectName,
		integrationMode,
		mcpClients = [],
		agentInstructions = [],
		installClaudeAgent: installClaudeAgentFlag = false,
		advancedConfig = {},
		existingConfig,
	} = options;

	const isReInitialization = !!existingConfig;
	const projectRoot = core.filesystem.rootDir;

	// Build config, preserving existing values for re-initialization
	const d = DEFAULT_INIT_CONFIG;
	const config: BacklogConfig = {
		projectName,
		statuses: existingConfig?.statuses || ["To Do", "In Progress", "Done"],
		labels: existingConfig?.labels || [],
		milestones: existingConfig?.milestones || [],
		defaultStatus: existingConfig?.defaultStatus || "To Do",
		dateFormat: existingConfig?.dateFormat || "yyyy-mm-dd",
		maxColumnWidth: existingConfig?.maxColumnWidth || 20,
		autoCommit: advancedConfig.autoCommit ?? existingConfig?.autoCommit ?? d.autoCommit,
		remoteOperations: advancedConfig.remoteOperations ?? existingConfig?.remoteOperations ?? d.remoteOperations,
		bypassGitHooks: advancedConfig.bypassGitHooks ?? existingConfig?.bypassGitHooks ?? d.bypassGitHooks,
		checkActiveBranches:
			advancedConfig.checkActiveBranches ?? existingConfig?.checkActiveBranches ?? d.checkActiveBranches,
		activeBranchDays: advancedConfig.activeBranchDays ?? existingConfig?.activeBranchDays ?? d.activeBranchDays,
		defaultPort: advancedConfig.defaultPort ?? existingConfig?.defaultPort ?? d.defaultPort,
		autoOpenBrowser: advancedConfig.autoOpenBrowser ?? existingConfig?.autoOpenBrowser ?? d.autoOpenBrowser,
		taskResolutionStrategy: existingConfig?.taskResolutionStrategy || "most_recent",
		...(advancedConfig.defaultEditor ? { defaultEditor: advancedConfig.defaultEditor } : {}),
		...(typeof advancedConfig.zeroPaddedIds === "number" && advancedConfig.zeroPaddedIds > 0
			? { zeroPaddedIds: advancedConfig.zeroPaddedIds }
			: {}),
	};

	// Create structure and save config
	if (isReInitialization) {
		await core.filesystem.saveConfig(config);
	} else {
		await core.filesystem.ensureBacklogStructure();
		await core.filesystem.saveConfig(config);
		await core.ensureConfigLoaded();
	}

	const mcpResults: Record<string, string> = {};

	// Handle MCP integration
	if (integrationMode === "mcp" && mcpClients.length > 0) {
		for (const client of mcpClients) {
			try {
				if (client === "claude") {
					const result = await runMcpClientCommand("Claude Code", "claude", [
						"mcp",
						"add",
						"-s",
						"user",
						MCP_SERVER_NAME,
						"--",
						"backlog",
						"mcp",
						"start",
					]);
					mcpResults.claude = result;
					await ensureMcpGuidelines(projectRoot, "CLAUDE.md");
				} else if (client === "codex") {
					const result = await runMcpClientCommand("OpenAI Codex", "codex", [
						"mcp",
						"add",
						MCP_SERVER_NAME,
						"backlog",
						"mcp",
						"start",
					]);
					mcpResults.codex = result;
					await ensureMcpGuidelines(projectRoot, "AGENTS.md");
				} else if (client === "gemini") {
					const result = await runMcpClientCommand("Gemini CLI", "gemini", [
						"mcp",
						"add",
						"-s",
						"user",
						MCP_SERVER_NAME,
						"backlog",
						"mcp",
						"start",
					]);
					mcpResults.gemini = result;
					await ensureMcpGuidelines(projectRoot, "GEMINI.md");
				} else if (client === "guide") {
					mcpResults.guide = `Setup guide: ${MCP_GUIDE_URL}`;
				}
			} catch (error) {
				const message = error instanceof Error ? error.message : String(error);
				mcpResults[client] = `Failed: ${message}`;
			}
		}
	}

	// Handle CLI integration - agent instruction files
	if (integrationMode === "cli" && agentInstructions.length > 0) {
		try {
			await addAgentInstructions(projectRoot, core.gitOps, agentInstructions, config.autoCommit);
			mcpResults.agentFiles = `Created: ${agentInstructions.join(", ")}`;
		} catch (error) {
			const message = error instanceof Error ? error.message : String(error);
			mcpResults.agentFiles = `Failed: ${message}`;
		}
	}

	// Handle Claude agent installation
	if (integrationMode === "cli" && installClaudeAgentFlag) {
		try {
			await installClaudeAgent(projectRoot);
			mcpResults.claudeAgent = "Installed to .claude/agents/";
		} catch (error) {
			const message = error instanceof Error ? error.message : String(error);
			mcpResults.claudeAgent = `Failed: ${message}`;
		}
	}

	return {
		success: true,
		projectName,
		isReInitialization,
		config,
		mcpResults: Object.keys(mcpResults).length > 0 ? mcpResults : undefined,
	};
}

@@ -0,0 +1,96 @@
import type { Task } from "../types/index.ts";

export const DEFAULT_ORDINAL_STEP = 1000;
const EPSILON = 1e-6;

export interface CalculateNewOrdinalOptions {
	previous?: Pick<Task, "id" | "ordinal"> | null;
	next?: Pick<Task, "id" | "ordinal"> | null;
	defaultStep?: number;
}

export interface CalculateNewOrdinalResult {
	ordinal: number;
	requiresRebalance: boolean;
}

export function calculateNewOrdinal(options: CalculateNewOrdinalOptions): CalculateNewOrdinalResult {
	const { previous, next, defaultStep = DEFAULT_ORDINAL_STEP } = options;
	const prevOrdinal = previous?.ordinal;
	const nextOrdinal = next?.ordinal;

	if (prevOrdinal === undefined && nextOrdinal === undefined) {
		return { ordinal: defaultStep, requiresRebalance: false };
	}

	if (prevOrdinal === undefined) {
		if (nextOrdinal === undefined) {
			return { ordinal: defaultStep, requiresRebalance: false };
		}
		const candidate = nextOrdinal / 2;
		const requiresRebalance = !Number.isFinite(candidate) || candidate <= 0 || candidate >= nextOrdinal - EPSILON;
		return { ordinal: candidate, requiresRebalance };
	}

	if (nextOrdinal === undefined) {
		const candidate = prevOrdinal + defaultStep;
		const requiresRebalance = !Number.isFinite(candidate);
		return { ordinal: candidate, requiresRebalance };
	}

	const gap = nextOrdinal - prevOrdinal;
	if (gap <= EPSILON) {
		return { ordinal: prevOrdinal + defaultStep, requiresRebalance: true };
	}

	const candidate = prevOrdinal + gap / 2;
	const requiresRebalance = candidate <= prevOrdinal + EPSILON || candidate >= nextOrdinal - EPSILON;
	return { ordinal: candidate, requiresRebalance };
}
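
// Illustrative behavior (example values assumed, not from the original source):
// calculateNewOrdinal({ previous: { id: "task-1", ordinal: 1000 }, next: { id: "task-2", ordinal: 2000 } })
//   -> { ordinal: 1500, requiresRebalance: false } // midpoint of the gap
// calculateNewOrdinal({ previous: { id: "task-1", ordinal: 1000 }, next: { id: "task-2", ordinal: 1000 } })
//   -> { ordinal: 2000, requiresRebalance: true }  // zero gap forces a rebalance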

export interface ResolveOrdinalConflictsOptions {
	defaultStep?: number;
	startOrdinal?: number;
	forceSequential?: boolean;
}

export function resolveOrdinalConflicts<T extends { id: string; ordinal?: number }>(
	tasks: T[],
	options: ResolveOrdinalConflictsOptions = {},
): T[] {
	const defaultStep = options.defaultStep ?? DEFAULT_ORDINAL_STEP;
	const startOrdinal = options.startOrdinal ?? defaultStep;
	const forceSequential = options.forceSequential ?? false;

	const updates: T[] = [];
	let lastOrdinal: number | undefined;

	for (let index = 0; index < tasks.length; index += 1) {
		const task = tasks[index];
		if (!task) {
			continue;
		}
		let assigned: number;

		if (forceSequential) {
			assigned = index === 0 ? startOrdinal : (lastOrdinal ?? startOrdinal) + defaultStep;
		} else if (task.ordinal === undefined) {
			assigned = index === 0 ? startOrdinal : (lastOrdinal ?? startOrdinal) + defaultStep;
		} else if (lastOrdinal !== undefined && task.ordinal <= lastOrdinal) {
			assigned = lastOrdinal + defaultStep;
		} else {
			assigned = task.ordinal;
		}

		if (assigned !== task.ordinal) {
			updates.push({
				...task,
				ordinal: assigned,
			});
		}

		lastOrdinal = assigned;
	}

	return updates;
}
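
// Illustrative behavior (example values assumed): given ordinals [1000, 1000, undefined],
// resolveOrdinalConflicts returns updates for the second task (-> 2000) and the third (-> 3000);
// the first task already satisfies the ordering and is left untouched.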

@@ -0,0 +1,418 @@
import Fuse, { type FuseResult, type FuseResultMatch } from "fuse.js";
import type {
	Decision,
	Document,
	SearchFilters,
	SearchMatch,
	SearchOptions,
	SearchPriorityFilter,
	SearchResult,
	SearchResultType,
	Task,
} from "../types/index.ts";
import type { ContentStore, ContentStoreEvent } from "./content-store.ts";

interface BaseSearchEntity {
	readonly id: string;
	readonly type: SearchResultType;
	readonly title: string;
	readonly bodyText: string;
}

interface TaskSearchEntity extends BaseSearchEntity {
	readonly type: "task";
	readonly task: Task;
	readonly statusLower: string;
	readonly priorityLower?: SearchPriorityFilter;
	readonly idVariants: string[];
	readonly dependencyIds: string[];
}

interface DocumentSearchEntity extends BaseSearchEntity {
	readonly type: "document";
	readonly document: Document;
}

interface DecisionSearchEntity extends BaseSearchEntity {
	readonly type: "decision";
	readonly decision: Decision;
}

type SearchEntity = TaskSearchEntity | DocumentSearchEntity | DecisionSearchEntity;

type NormalizedFilters = {
	statuses?: string[];
	priorities?: SearchPriorityFilter[];
};

const TASK_ID_PREFIX = "task-";

function parseTaskIdSegments(value: string): number[] | null {
	const withoutPrefix = value.startsWith(TASK_ID_PREFIX) ? value.slice(TASK_ID_PREFIX.length) : value;
	if (!/^[0-9]+(?:\.[0-9]+)*$/.test(withoutPrefix)) {
		return null;
	}
	return withoutPrefix.split(".").map((segment) => Number.parseInt(segment, 10));
}

function createTaskIdVariants(id: string): string[] {
	const segments = parseTaskIdSegments(id);
	if (!segments) {
		const normalized = id.startsWith(TASK_ID_PREFIX) ? id : `${TASK_ID_PREFIX}${id}`;
		return id === normalized ? [normalized] : [normalized, id];
	}
	const canonicalSuffix = segments.join(".");
	const variants = new Set<string>();
	const normalized = id.startsWith(TASK_ID_PREFIX) ? id : `${TASK_ID_PREFIX}${id}`;
	variants.add(normalized);
	variants.add(`${TASK_ID_PREFIX}${canonicalSuffix}`);
	variants.add(canonicalSuffix);
	if (id !== normalized) {
		variants.add(id);
	}
	return Array.from(variants);
}
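
// Example (input value assumed): createTaskIdVariants("task-007") -> ["task-007", "task-7", "7"],
// so zero-padded and unpadded spellings of the same ID all match in search.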

export class SearchService {
	private initialized = false;
	private initializing: Promise<void> | null = null;
	private unsubscribe?: () => void;
	private fuse: Fuse<SearchEntity> | null = null;
	private tasks: TaskSearchEntity[] = [];
	private documents: DocumentSearchEntity[] = [];
	private decisions: DecisionSearchEntity[] = [];
	private collection: SearchEntity[] = [];
	private version = 0;

	constructor(private readonly store: ContentStore) {}

	async ensureInitialized(): Promise<void> {
		if (this.initialized) {
			return;
		}

		if (!this.initializing) {
			this.initializing = this.initialize().catch((error) => {
				this.initializing = null;
				throw error;
			});
		}

		await this.initializing;
	}

	dispose(): void {
		if (this.unsubscribe) {
			this.unsubscribe();
			this.unsubscribe = undefined;
		}
		this.fuse = null;
		this.collection = [];
		this.tasks = [];
		this.documents = [];
		this.decisions = [];
		this.initialized = false;
		this.initializing = null;
	}

	search(options: SearchOptions = {}): SearchResult[] {
		if (!this.initialized) {
			throw new Error("SearchService not initialized. Call ensureInitialized() first.");
		}

		const { query = "", limit, types, filters } = options;

		const trimmedQuery = query.trim();
		const allowedTypes = new Set<SearchResultType>(
			types && types.length > 0 ? types : ["task", "document", "decision"],
		);
		const normalizedFilters = this.normalizeFilters(filters);

		if (trimmedQuery === "") {
			return this.collectWithoutQuery(allowedTypes, normalizedFilters, limit);
		}

		const fuse = this.fuse;
		if (!fuse) {
			return [];
		}

		const fuseResults = fuse.search(trimmedQuery);
		const results: SearchResult[] = [];

		for (const result of fuseResults) {
			const entity = result.item;
			if (!allowedTypes.has(entity.type)) {
				continue;
			}

			if (entity.type === "task" && !this.matchesTaskFilters(entity, normalizedFilters)) {
				continue;
			}

			results.push(this.mapEntityToResult(entity, result));
			if (limit && results.length >= limit) {
				break;
			}
		}

		return results;
	}

	private async initialize(): Promise<void> {
		const snapshot = await this.store.ensureInitialized();
		this.applySnapshot(snapshot.tasks, snapshot.documents, snapshot.decisions);

		if (!this.unsubscribe) {
			this.unsubscribe = this.store.subscribe((event) => {
				this.handleStoreEvent(event);
			});
		}

		this.initialized = true;
		this.initializing = null;
	}

	private handleStoreEvent(event: ContentStoreEvent): void {
		if (event.version <= this.version) {
			return;
		}
		this.version = event.version;
		this.applySnapshot(event.snapshot.tasks, event.snapshot.documents, event.snapshot.decisions);
	}

	private applySnapshot(tasks: Task[], documents: Document[], decisions: Decision[]): void {
		this.tasks = tasks.map((task) => ({
			id: task.id,
			type: "task",
			title: task.title,
			bodyText: buildTaskBodyText(task),
			task,
			statusLower: task.status.toLowerCase(),
			priorityLower: task.priority ? (task.priority.toLowerCase() as SearchPriorityFilter) : undefined,
			idVariants: createTaskIdVariants(task.id),
			dependencyIds: (task.dependencies ?? []).flatMap((dependency) => createTaskIdVariants(dependency)),
		}));

		this.documents = documents.map((document) => ({
			id: document.id,
			type: "document",
			title: document.title,
			bodyText: document.rawContent ?? "",
			document,
		}));

		this.decisions = decisions.map((decision) => ({
			id: decision.id,
			type: "decision",
			title: decision.title,
			bodyText: decision.rawContent ?? "",
			decision,
		}));

		this.collection = [...this.tasks, ...this.documents, ...this.decisions];
		this.rebuildFuse();
	}

	private rebuildFuse(): void {
		if (this.collection.length === 0) {
			this.fuse = null;
			return;
		}

		this.fuse = new Fuse(this.collection, {
			includeScore: true,
			includeMatches: true,
			threshold: 0.35,
			ignoreLocation: true,
			minMatchCharLength: 2,
			keys: [
				{ name: "title", weight: 0.35 },
				{ name: "bodyText", weight: 0.3 },
				{ name: "id", weight: 0.2 },
				{ name: "idVariants", weight: 0.1 },
				{ name: "dependencyIds", weight: 0.05 },
			],
		});
	}

	private collectWithoutQuery(
		allowedTypes: Set<SearchResultType>,
		filters: NormalizedFilters,
		limit?: number,
	): SearchResult[] {
		const results: SearchResult[] = [];

		if (allowedTypes.has("task")) {
			const tasks = this.applyTaskFilters(this.tasks, filters);
			for (const entity of tasks) {
				results.push(this.mapEntityToResult(entity));
				if (limit && results.length >= limit) {
					return results;
				}
			}
		}

		if (allowedTypes.has("document")) {
			for (const entity of this.documents) {
				results.push(this.mapEntityToResult(entity));
				if (limit && results.length >= limit) {
					return results;
				}
			}
		}

		if (allowedTypes.has("decision")) {
			for (const entity of this.decisions) {
				results.push(this.mapEntityToResult(entity));
				if (limit && results.length >= limit) {
					return results;
				}
			}
		}

		return results;
	}

	private applyTaskFilters(tasks: TaskSearchEntity[], filters: NormalizedFilters): TaskSearchEntity[] {
		let filtered = tasks;
		if (filters.statuses && filters.statuses.length > 0) {
			const allowedStatuses = new Set(filters.statuses);
			filtered = filtered.filter((task) => allowedStatuses.has(task.statusLower));
		}
		if (filters.priorities && filters.priorities.length > 0) {
			const allowedPriorities = new Set(filters.priorities);
			filtered = filtered.filter((task) => {
				if (!task.priorityLower) {
					return false;
				}
				return allowedPriorities.has(task.priorityLower);
			});
		}
		return filtered;
	}

	private matchesTaskFilters(task: TaskSearchEntity, filters: NormalizedFilters): boolean {
		if (filters.statuses && filters.statuses.length > 0) {
			if (!filters.statuses.includes(task.statusLower)) {
				return false;
			}
		}

		if (filters.priorities && filters.priorities.length > 0) {
			if (!task.priorityLower || !filters.priorities.includes(task.priorityLower)) {
				return false;
			}
		}

		return true;
	}

	private normalizeFilters(filters?: SearchFilters): NormalizedFilters {
		if (!filters) {
			return {};
		}

		const statuses = this.normalizeStringArray(filters.status);
		const priorities = this.normalizePriorityArray(filters.priority);

		return {
			statuses,
			priorities,
		};
	}

	private normalizeStringArray(value?: string | string[]): string[] | undefined {
		if (!value) {
			return undefined;
		}

		const values = Array.isArray(value) ? value : [value];
		const normalized = values.map((item) => item.trim().toLowerCase()).filter((item) => item.length > 0);

		return normalized.length > 0 ? normalized : undefined;
	}

	private normalizePriorityArray(
		value?: SearchPriorityFilter | SearchPriorityFilter[],
	): SearchPriorityFilter[] | undefined {
		if (!value) {
			return undefined;
		}

		const values = Array.isArray(value) ? value : [value];
		const normalized = values
			.map((item) => item.trim().toLowerCase())
			.filter((item): item is SearchPriorityFilter => {
				return item === "high" || item === "medium" || item === "low";
			});

		return normalized.length > 0 ? normalized : undefined;
	}

	private mapEntityToResult(entity: SearchEntity, result?: FuseResult<SearchEntity>): SearchResult {
		const score = result?.score ?? null;
		const matches = this.mapMatches(result?.matches);

		if (entity.type === "task") {
			return {
				type: "task",
				score,
				task: entity.task,
				matches,
			};
		}

		if (entity.type === "document") {
			return {
				type: "document",
				score,
				document: entity.document,
				matches,
			};
		}

		return {
			type: "decision",
			score,
			decision: entity.decision,
			matches,
		};
	}

	private mapMatches(matches?: readonly FuseResultMatch[]): SearchMatch[] | undefined {
		if (!matches || matches.length === 0) {
			return undefined;
		}

		return matches.map((match) => ({
			key: match.key,
			indices: match.indices.map(([start, end]) => [start, end] as [number, number]),
			value: match.value,
		}));
	}
}

function buildTaskBodyText(task: Task): string {
	const parts: string[] = [];

	if (task.description) {
		parts.push(task.description);
	}

	if (Array.isArray(task.acceptanceCriteriaItems) && task.acceptanceCriteriaItems.length > 0) {
		const lines = [...task.acceptanceCriteriaItems]
			.sort((a, b) => a.index - b.index)
			.map((criterion) => `- [${criterion.checked ? "x" : " "}] ${criterion.text}`);
		parts.push(lines.join("\n"));
	}

	if (task.implementationPlan) {
		parts.push(task.implementationPlan);
	}

	if (task.implementationNotes) {
		parts.push(task.implementationNotes);
	}

	return parts.join("\n\n");
}
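
// Illustrative usage (editor's sketch; the ContentStore instance `store` is assumed):
// const search = new SearchService(store);
// await search.ensureInitialized();
// const hits = search.search({ query: "watcher", types: ["task"], filters: { priority: "high" } });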

@@ -0,0 +1,266 @@
import type { Sequence, Task } from "../types/index.ts";
import { sortByTaskId } from "../utils/task-sorting.ts";

/**
 * Compute execution sequences (layers) from task dependencies.
 * - Sequence 1 contains tasks with no dependencies among the provided set.
 * - Subsequent sequences contain tasks whose dependencies appear in earlier sequences.
 * - Dependencies that reference tasks outside the provided set are ignored for layering.
 * - If cycles exist, any remaining tasks are emitted in a final sequence to ensure each task
 *   appears exactly once (consumers may choose to surface a warning in that case).
 */
export function computeSequences(tasks: Task[]): { unsequenced: Task[]; sequences: Sequence[] } {
	// Map task id -> task for fast lookups
	const byId = new Map<string, Task>();
	for (const t of tasks) byId.set(t.id, t);

	const allIds = new Set(Array.from(byId.keys()));

	// Build adjacency using only edges within provided set
	const successors = new Map<string, string[]>();
	const indegree = new Map<string, number>();
	for (const id of allIds) {
		successors.set(id, []);
		indegree.set(id, 0);
	}
	for (const t of tasks) {
		const deps = Array.isArray(t.dependencies) ? t.dependencies : [];
		for (const dep of deps) {
			if (!allIds.has(dep)) continue; // ignore external deps for layering
			successors.get(dep)?.push(t.id);
			indegree.set(t.id, (indegree.get(t.id) || 0) + 1);
		}
	}

	// Identify isolated tasks: absolutely no dependencies (even external) AND no internal dependents
	const hasAnyDeps = (t: Task) => (t.dependencies || []).length > 0;
	const hasDependents = (id: string) => (successors.get(id) || []).length > 0;

	const unsequenced = sortByTaskId(
		tasks.filter((t) => !hasAnyDeps(t) && !hasDependents(t.id) && t.ordinal === undefined),
	);

	// Build layering set by excluding unsequenced tasks
	const layeringIds = new Set(Array.from(allIds).filter((id) => !unsequenced.some((t) => t.id === id)));

	// Kahn-style layered topological grouping on the remainder
	const sequences: Sequence[] = [];
	const remaining = new Set(layeringIds);

	// Prepare local indegree copy considering only remaining nodes
	const indegRem = new Map<string, number>();
	for (const id of remaining) indegRem.set(id, 0);
	for (const id of remaining) {
		const t = byId.get(id);
		if (!t) continue;
		for (const dep of t.dependencies || []) {
			if (remaining.has(dep)) indegRem.set(id, (indegRem.get(id) || 0) + 1);
		}
	}

	while (remaining.size > 0) {
		const layerIds: string[] = [];
		for (const id of remaining) {
			if ((indegRem.get(id) || 0) === 0) layerIds.push(id);
		}

		if (layerIds.length === 0) {
			// Cycle detected; emit all remaining nodes as final layer (deterministic order)
			const finalTasks = sortByTaskId(
				Array.from(remaining)
					.map((id) => byId.get(id))
					.filter((t): t is Task => Boolean(t)),
			);
			sequences.push({ index: sequences.length + 1, tasks: finalTasks });
			break;
		}

		const layerTasks = sortByTaskId(layerIds.map((id) => byId.get(id)).filter((t): t is Task => Boolean(t)));
		sequences.push({ index: sequences.length + 1, tasks: layerTasks });

		for (const id of layerIds) {
			remaining.delete(id);
			for (const succ of successors.get(id) || []) {
				if (!remaining.has(succ)) continue;
				indegRem.set(succ, (indegRem.get(succ) || 0) - 1);
			}
		}
	}

	return { unsequenced, sequences };
}
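
// Illustrative example (task IDs assumed): with task-2 depending on task-1 and task-3
// depending on task-2, computeSequences yields sequences [[task-1], [task-2], [task-3]];
// a task with no dependencies, no dependents, and no ordinal lands in `unsequenced` instead.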

/**
 * Return true if the task has no dependencies and no dependents among the provided set.
 * Note: Ordinal is intentionally ignored here; computeSequences handles ordinal when grouping.
 */
export function canMoveToUnsequenced(tasks: Task[], taskId: string): boolean {
	const byId = new Map<string, Task>(tasks.map((t) => [t.id, t]));
	const t = byId.get(taskId);
	if (!t) return false;
	const allIds = new Set(byId.keys());
	const hasDeps = (t.dependencies || []).some((d) => allIds.has(d));
	if (hasDeps) return false;
	const hasDependents = tasks.some((x) => (x.dependencies || []).includes(taskId));
	return !hasDependents;
}

/**
 * Adjust dependencies when moving a task to a target sequence index.
 *
 * Rules (join semantics):
 * - Set the moved task's dependencies to all task IDs from the immediately previous
 *   sequence (targetIndex - 1). If targetIndex is 1, dependencies become [].
 * - Tasks in the next sequence are NOT updated to depend on the moved task
 *   (contrast with adjustDependenciesForInsertBetween below).
 * - Dependencies of all other tasks remain unchanged.
 */
export function adjustDependenciesForMove(
	tasks: Task[],
	sequences: Sequence[],
	movedTaskId: string,
	targetSequenceIndex: number,
): Task[] {
	// Join semantics: set moved.dependencies to previous sequence tasks (if any),
	// do NOT add moved as a dependency to next-sequence tasks, and do not touch others.
	const byId = new Map<string, Task>(tasks.map((t) => [t.id, { ...t }]));
	const moved = byId.get(movedTaskId);
	if (!moved) return tasks;

	const prevSeq = sequences.find((s) => s.index === targetSequenceIndex - 1);
	// Exclude the moved task itself to avoid creating a self-dependency when moving from seq N to N+1
	const prevIds = prevSeq ? prevSeq.tasks.map((t) => t.id).filter((id) => id !== movedTaskId) : [];

	moved.dependencies = [...prevIds];
	byId.set(moved.id, moved);

	return Array.from(byId.values());
}

/**
 * Insert a new sequence by dropping a task between two existing sequences.
 *
 * Semantics (K in [0..N]):
 * - Dropping between Sequence K and K+1 creates a new Sequence K+1 containing the moved task.
 * - Update dependencies so that:
 *   - moved.dependencies = all task IDs from Sequence K (or [] when K = 0), excluding itself.
 *   - every task currently in Sequence K+1 adds the moved task ID to its dependencies (deduped).
 *   - No other tasks are modified.
 * - Special case when there is no next sequence (K = N): only moved.dependencies are updated.
 * - Special case when K = 0 and there is no next sequence and moved.dependencies remain empty:
 *   assign moved.ordinal = 0 to ensure it participates in layering (avoids Unsequenced bucket).
 */
export function adjustDependenciesForInsertBetween(
	tasks: Task[],
	sequences: Sequence[],
	movedTaskId: string,
	betweenK: number,
): Task[] {
	const byId = new Map<string, Task>(tasks.map((t) => [t.id, { ...t }]));
	const moved = byId.get(movedTaskId);
	if (!moved) return tasks;

	// Normalize K to integer within [0..N]
	const maxK = sequences.length;
	const K = Math.max(0, Math.min(maxK, Math.floor(betweenK)));

	const prevSeq = sequences.find((s) => s.index === K);
	const nextSeq = sequences.find((s) => s.index === K + 1);

	const prevIds = prevSeq ? prevSeq.tasks.map((t) => t.id).filter((id) => id !== movedTaskId) : [];
	moved.dependencies = [...prevIds];

	// Update next sequence tasks to depend on moved task
	if (nextSeq) {
		for (const t of nextSeq.tasks) {
			const orig = byId.get(t.id);
			if (!orig) continue;
			const deps = Array.isArray(orig.dependencies) ? orig.dependencies : [];
			if (!deps.includes(movedTaskId)) orig.dependencies = [...deps, movedTaskId];
			byId.set(orig.id, orig);
		}
	} else {
		// No next sequence; if K = 0 and moved has no deps, ensure it stays sequenced
		if (K === 0 && (!moved.dependencies || moved.dependencies.length === 0)) {
			if (moved.ordinal === undefined) moved.ordinal = 0;
		}
	}

	byId.set(moved.id, moved);
	return Array.from(byId.values());
}

/**
 * Reorder tasks within a sequence by assigning ordinal values.
 * Does not modify dependencies. Only tasks in the provided sequenceTaskIds are re-assigned ordinals.
 */
export function reorderWithinSequence(
	tasks: Task[],
	sequenceTaskIds: string[],
	movedTaskId: string,
	newIndex: number,
): Task[] {
	const seqIds = sequenceTaskIds.filter((id) => id && tasks.some((t) => t.id === id));
	const withoutMoved = seqIds.filter((id) => id !== movedTaskId);
	const clampedIndex = Math.max(0, Math.min(withoutMoved.length, newIndex));
	const newOrder = [...withoutMoved.slice(0, clampedIndex), movedTaskId, ...withoutMoved.slice(clampedIndex)];

	const byId = new Map<string, Task>(tasks.map((t) => [t.id, { ...t }]));
	newOrder.forEach((id, idx) => {
		const t = byId.get(id);
		if (t) {
			t.ordinal = idx;
			byId.set(id, t);
		}
	});
	return Array.from(byId.values());
}
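
// Example (IDs assumed): reorderWithinSequence(tasks, ["task-1", "task-2", "task-3"], "task-3", 0)
// assigns ordinals 0/1/2 to task-3/task-1/task-2 respectively, leaving dependencies untouched.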

/**
 * Plan a move into a target sequence using join semantics.
 * Returns only the tasks that changed (dependencies and/or ordinal).
 */
export function planMoveToSequence(
	allTasks: Task[],
	sequences: Sequence[],
	movedTaskId: string,
	targetSequenceIndex: number,
): Task[] {
	const updated = adjustDependenciesForMove(allTasks, sequences, movedTaskId, targetSequenceIndex);
	// If moving to Sequence 1 and resulting deps are empty, anchor with ordinal 0
	if (targetSequenceIndex === 1) {
		const movedU = updated.find((x) => x.id === movedTaskId);
		if (movedU && (!movedU.dependencies || movedU.dependencies.length === 0)) {
			if (movedU.ordinal === undefined) movedU.ordinal = 0;
		}
	}
	const byIdOrig = new Map(allTasks.map((t) => [t.id, t]));
	const changed: Task[] = [];
	for (const u of updated) {
		const orig = byIdOrig.get(u.id);
		if (!orig) continue;
		const depsChanged = JSON.stringify(orig.dependencies) !== JSON.stringify(u.dependencies);
		const ordChanged = (orig.ordinal ?? null) !== (u.ordinal ?? null);
		if (depsChanged || ordChanged) changed.push(u);
	}
	return changed;
}

/**
 * Plan a move to Unsequenced. Returns changed tasks or an error message when not eligible.
 */
export function planMoveToUnsequenced(
	allTasks: Task[],
	movedTaskId: string,
): { ok: true; changed: Task[] } | { ok: false; error: string } {
	if (!canMoveToUnsequenced(allTasks, movedTaskId)) {
		return { ok: false, error: "Cannot move to Unsequenced: task has dependencies or dependents" };
	}
	const byId = new Map(allTasks.map((t) => [t.id, { ...t }]));
	const moved = byId.get(movedTaskId);
	if (!moved) return { ok: false, error: "Task not found" };
	moved.dependencies = [];
	// Clear ordinal to ensure it is considered Unsequenced (no ordinal)
	if (moved.ordinal !== undefined) moved.ordinal = undefined;
	return { ok: true, changed: [moved] };
}
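
// Illustrative usage (IDs assumed): planMoveToUnsequenced(tasks, "task-4") returns
// { ok: false, error: ... } while any task still depends on task-4; once those links are
// removed it returns { ok: true, changed: [task-4 with no dependencies and no ordinal] }.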

@@ -0,0 +1,162 @@
import type { Task } from "../types/index.ts";
|
||||
|
||||
export interface TaskStatistics {
|
||||
statusCounts: Map<string, number>;
|
||||
priorityCounts: Map<string, number>;
|
||||
totalTasks: number;
|
||||
completedTasks: number;
|
||||
completionPercentage: number;
|
||||
draftCount: number;
|
||||
recentActivity: {
|
||||
created: Task[];
|
||||
updated: Task[];
|
||||
};
|
||||
projectHealth: {
|
||||
averageTaskAge: number;
|
||||
staleTasks: Task[];
|
||||
blockedTasks: Task[];
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Calculate comprehensive task statistics for the overview
|
||||
*/
|
export function getTaskStatistics(tasks: Task[], drafts: Task[], statuses: string[]): TaskStatistics {
	const statusCounts = new Map<string, number>();
	const priorityCounts = new Map<string, number>();

	// Initialize status counts
	for (const status of statuses) {
		statusCounts.set(status, 0);
	}

	// Initialize priority counts
	priorityCounts.set("high", 0);
	priorityCounts.set("medium", 0);
	priorityCounts.set("low", 0);
	priorityCounts.set("none", 0);

	let completedTasks = 0;
	const now = new Date();
	const oneWeekAgo = new Date(now.getTime() - 7 * 24 * 60 * 60 * 1000);
	const oneMonthAgo = new Date(now.getTime() - 30 * 24 * 60 * 60 * 1000);

	const recentlyCreated: Task[] = [];
	const recentlyUpdated: Task[] = [];
	const staleTasks: Task[] = [];
	const blockedTasks: Task[] = [];
	let totalAge = 0;
	let taskCount = 0;

	// Process each task
	for (const task of tasks) {
		// Skip tasks with empty or undefined status
		if (!task.status || task.status === "") {
			continue;
		}

		// Count by status
		const currentCount = statusCounts.get(task.status) || 0;
		statusCounts.set(task.status, currentCount + 1);

		// Count completed tasks
		if (task.status === "Done") {
			completedTasks++;
		}

		// Count by priority
		const priority = task.priority || "none";
		const priorityCount = priorityCounts.get(priority) || 0;
		priorityCounts.set(priority, priorityCount + 1);

		// Track recent activity
		if (task.createdDate) {
			const createdDate = new Date(task.createdDate);
			if (createdDate >= oneWeekAgo) {
				recentlyCreated.push(task);
			}

			// Calculate task age:
			// for completed tasks, use the time from creation to completion;
			// for active tasks, use the time from creation to now
			let ageInDays: number;
			if (task.status === "Done" && task.updatedDate) {
				const updatedDate = new Date(task.updatedDate);
				ageInDays = Math.floor((updatedDate.getTime() - createdDate.getTime()) / (24 * 60 * 60 * 1000));
			} else {
				ageInDays = Math.floor((now.getTime() - createdDate.getTime()) / (24 * 60 * 60 * 1000));
			}
			totalAge += ageInDays;
			taskCount++;
		}

		if (task.updatedDate) {
			const updatedDate = new Date(task.updatedDate);
			if (updatedDate >= oneWeekAgo) {
				recentlyUpdated.push(task);
			}
		}

		// Identify stale tasks (not updated in 30 days and not done)
		if (task.status !== "Done") {
			const lastDate = task.updatedDate || task.createdDate;
			if (lastDate) {
				const date = new Date(lastDate);
				if (date < oneMonthAgo) {
					staleTasks.push(task);
				}
			}
		}

		// Identify blocked tasks (has dependencies that are not done)
		if (task.dependencies && task.dependencies.length > 0 && task.status !== "Done") {
			// Check if any dependency is not done
			const hasBlockingDependency = task.dependencies.some((depId) => {
				const dep = tasks.find((t) => t.id === depId);
				return dep && dep.status !== "Done";
			});

			if (hasBlockingDependency) {
				blockedTasks.push(task);
			}
		}
	}

	// Sort recent activity by date
	recentlyCreated.sort((a, b) => {
		const dateA = new Date(a.createdDate || 0);
		const dateB = new Date(b.createdDate || 0);
		return dateB.getTime() - dateA.getTime();
	});

	recentlyUpdated.sort((a, b) => {
		const dateA = new Date(a.updatedDate || 0);
		const dateB = new Date(b.updatedDate || 0);
		return dateB.getTime() - dateA.getTime();
	});

	// Calculate average task age
	const averageTaskAge = taskCount > 0 ? Math.round(totalAge / taskCount) : 0;

	// Calculate completion percentage (only count tasks with valid status)
	const totalTasks = Array.from(statusCounts.values()).reduce((sum, count) => sum + count, 0);
	const completionPercentage = totalTasks > 0 ? Math.round((completedTasks / totalTasks) * 100) : 0;

	return {
		statusCounts,
		priorityCounts,
		totalTasks,
		completedTasks,
		completionPercentage,
		draftCount: drafts.length,
		recentActivity: {
			created: recentlyCreated.slice(0, 5), // Top 5 most recent
			updated: recentlyUpdated.slice(0, 5), // Top 5 most recent
		},
		projectHealth: {
			averageTaskAge,
			staleTasks: staleTasks.slice(0, 5), // Top 5 stale tasks
			blockedTasks: blockedTasks.slice(0, 5), // Top 5 blocked tasks
		},
	};
}
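
// A minimal usage sketch (hypothetical data; the Task literal is cast because
// only the fields read above matter for this illustration):
//
//   const stats = getTaskStatistics(
//     [{ id: "task-1", title: "A", status: "Done", createdDate: "2025-01-01" } as Task],
//     [],
//     ["To Do", "In Progress", "Done"],
//   );
//   // stats.completedTasks === 1, stats.completionPercentage === 100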

@ -0,0 +1,605 @@

/**
 * Task loading with optimized index-first, hydrate-later pattern.
 * Dramatically reduces git operations for multi-branch task loading.
 *
 * This is the single module for all cross-branch task loading:
 * - Local filesystem tasks
 * - Other local branch tasks
 * - Remote branch tasks
 */

import { DEFAULT_DIRECTORIES } from "../constants/index.ts";
import type { GitOperations } from "../git/operations.ts";
import { parseTask } from "../markdown/parser.ts";
import type { BacklogConfig, Task } from "../types/index.ts";

/**
 * Get the appropriate loading message based on remote operations configuration
 */
export function getTaskLoadingMessage(config: BacklogConfig | null): string {
	return config?.remoteOperations === false
		? "Loading tasks from local branches..."
		: "Loading tasks from local and remote branches...";
}

interface RemoteIndexEntry {
	id: string;
	branch: string;
	path: string; // "backlog/tasks/task-123 - title.md"
	lastModified: Date;
}

/**
 * Normalize a remote branch name, filtering out invalid entries
 */
function normalizeRemoteBranch(branch: string): string | null {
	let br = branch.trim();
	if (!br) return null;
	br = br.replace(/^refs\/remotes\//, "");
	if (br === "origin" || br === "HEAD" || br === "origin/HEAD") return null;
	if (br.startsWith("origin/")) br = br.slice("origin/".length);
	// Filter weird cases like "origin" again after stripping the prefix
	if (!br || br === "HEAD" || br === "origin") return null;
	return br;
}
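
// A few illustrative inputs and outputs for normalizeRemoteBranch:
//   "refs/remotes/origin/feature/login" -> "feature/login"
//   "origin/main"                       -> "main"
//   "origin/HEAD"                       -> null (not a real branch)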

/**
 * Normalize a local branch name, filtering out invalid entries
 */
function normalizeLocalBranch(branch: string, currentBranch: string): string | null {
	const br = branch.trim();
	if (!br) return null;
	// Skip HEAD, origin refs, and the current branch
	if (br === "HEAD" || br.includes("HEAD")) return null;
	if (br.startsWith("origin/") || br.startsWith("refs/remotes/")) return null;
	if (br === "origin") return null;
	// Skip the current branch - we already have its tasks from the filesystem
	if (br === currentBranch) return null;
	return br;
}

/**
 * Build a cheap index of remote tasks without fetching content.
 * This is very fast, as it only lists files and gets modification times in batch.
 */
export async function buildRemoteTaskIndex(
	git: GitOperations,
	branches: string[],
	backlogDir = "backlog",
	sinceDays?: number,
): Promise<Map<string, RemoteIndexEntry[]>> {
	const out = new Map<string, RemoteIndexEntry[]>();

	const normalized = branches.map(normalizeRemoteBranch).filter((b): b is string => Boolean(b));

	// Process branches in parallel, but not unbounded
	const CONCURRENCY = 4;
	const queue = [...normalized];

	const workers = Array.from({ length: Math.min(CONCURRENCY, queue.length) }, async () => {
		while (queue.length) {
			const br = queue.pop();
			if (!br) break;

			const ref = `origin/${br}`;

			try {
				// Get all task files in this branch
				const files = await git.listFilesInTree(ref, `${backlogDir}/tasks`);
				if (files.length === 0) continue;

				// Get last modified times for all files in one pass
				const lm = await git.getBranchLastModifiedMap(ref, `${backlogDir}/tasks`, sinceDays);

				for (const f of files) {
					// Extract task ID from filename (support subtasks like task-123.01)
					const m = f.match(/task-(\d+(?:\.\d+)?)/);
					if (!m) continue;

					const id = `task-${m[1]}`;
					const lastModified = lm.get(f) ?? new Date(0);
					const entry: RemoteIndexEntry = { id, branch: br, path: f, lastModified };

					const arr = out.get(id);
					if (arr) {
						arr.push(entry);
					} else {
						out.set(id, [entry]);
					}
				}
			} catch (error) {
				// Branch might not have a backlog directory, skip it
				console.debug(`Skipping branch ${br}: ${error}`);
			}
		}
	});

	await Promise.all(workers);
	return out;
}
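
// A hypothetical usage sketch (branch names are illustrative):
//
//   const index = await buildRemoteTaskIndex(git, ["origin/main", "origin/feature/auth"]);
//   for (const [taskId, entries] of index) {
//     // entries: one RemoteIndexEntry per branch that contains this task file
//     console.log(taskId, entries.map((e) => e.branch));
//   }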

/**
 * Hydrate tasks by fetching their content.
 * Only call this for the "winner" tasks that we actually need.
 */
async function hydrateTasks(
	git: GitOperations,
	winners: Array<{ id: string; ref: string; path: string }>,
): Promise<Task[]> {
	const CONCURRENCY = 8;
	const result: Task[] = [];
	let i = 0;

	async function worker() {
		while (i < winners.length) {
			const idx = i++;
			if (idx >= winners.length) break;

			const w = winners[idx];
			if (!w) break;

			try {
				const content = await git.showFile(w.ref, w.path);
				const task = parseTask(content);
				if (task) {
					// Mark as remote source and branch
					task.source = "remote";
					// Extract branch name from ref (e.g., "origin/main" -> "main")
					task.branch = w.ref.replace("origin/", "");
					result.push(task);
				}
			} catch (error) {
				console.error(`Failed to hydrate task ${w.id} from ${w.ref}:${w.path}`, error);
			}
		}
	}

	await Promise.all(Array.from({ length: Math.min(CONCURRENCY, winners.length) }, worker));
	return result;
}

/**
 * Build a cheap index of tasks from local branches (excluding the current branch).
 * Similar to buildRemoteTaskIndex but for local refs.
 */
export async function buildLocalBranchTaskIndex(
	git: GitOperations,
	branches: string[],
	currentBranch: string,
	backlogDir = "backlog",
	sinceDays?: number,
): Promise<Map<string, RemoteIndexEntry[]>> {
	const out = new Map<string, RemoteIndexEntry[]>();

	const normalized = branches.map((b) => normalizeLocalBranch(b, currentBranch)).filter((b): b is string => Boolean(b));

	if (normalized.length === 0) {
		return out;
	}

	// Process branches in parallel, but not unbounded
	const CONCURRENCY = 4;
	const queue = [...normalized];

	const workers = Array.from({ length: Math.min(CONCURRENCY, queue.length) }, async () => {
		while (queue.length) {
			const br = queue.pop();
			if (!br) break;

			try {
				// Get all task files in this branch (use the branch name directly, not origin/)
				const files = await git.listFilesInTree(br, `${backlogDir}/tasks`);
				if (files.length === 0) continue;

				// Get last modified times for all files in one pass
				const lm = await git.getBranchLastModifiedMap(br, `${backlogDir}/tasks`, sinceDays);

				for (const f of files) {
					// Extract task ID from filename (support subtasks like task-123.01)
					const m = f.match(/task-(\d+(?:\.\d+)?)/);
					if (!m) continue;

					const id = `task-${m[1]}`;
					const lastModified = lm.get(f) ?? new Date(0);
					const entry: RemoteIndexEntry = { id, branch: br, path: f, lastModified };

					const arr = out.get(id);
					if (arr) {
						arr.push(entry);
					} else {
						out.set(id, [entry]);
					}
				}
			} catch (error) {
				// Branch might not have a backlog directory, skip it
				if (process.env.DEBUG) {
					console.debug(`Skipping local branch ${br}: ${error}`);
				}
			}
		}
	});

	await Promise.all(workers);
	return out;
}

/**
 * Choose which remote tasks need to be hydrated based on the resolution strategy.
 * Returns only the candidates that are missing locally or potentially newer than
 * the local version.
 */
function chooseWinners(
	localById: Map<string, Task>,
	remoteIndex: Map<string, RemoteIndexEntry[]>,
	strategy: "most_recent" | "most_progressed" = "most_progressed",
): Array<{ id: string; ref: string; path: string }> {
	const winners: Array<{ id: string; ref: string; path: string }> = [];

	for (const [id, entries] of remoteIndex) {
		const local = localById.get(id);

		if (!local) {
			// No local version - take the newest remote
			const best = entries.reduce((a, b) => (a.lastModified >= b.lastModified ? a : b));
			winners.push({ id, ref: `origin/${best.branch}`, path: best.path });
			continue;
		}

		// If strategy is "most_recent", only hydrate if any remote is newer
		if (strategy === "most_recent") {
			const localTs = local.updatedDate ? new Date(local.updatedDate).getTime() : 0;
			const newestRemote = entries.reduce((a, b) => (a.lastModified >= b.lastModified ? a : b));

			if (newestRemote.lastModified.getTime() > localTs) {
				winners.push({
					id,
					ref: `origin/${newestRemote.branch}`,
					path: newestRemote.path,
				});
			}
			continue;
		}

		// For "most_progressed", a newer remote copy might carry a more progressed
		// status, so hydrate whenever any remote copy is newer than the local one
		const localTs = local.updatedDate ? new Date(local.updatedDate).getTime() : 0;
		const maybeNewer = entries.some((e) => e.lastModified.getTime() > localTs);

		if (maybeNewer) {
			// Only hydrate the newest remote to check whether it is more progressed
			const newestRemote = entries.reduce((a, b) => (a.lastModified >= b.lastModified ? a : b));
			winners.push({
				id,
				ref: `origin/${newestRemote.branch}`,
				path: newestRemote.path,
			});
		}
	}

	return winners;
}
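
// Worked example (hypothetical data; chooseWinners is module-private, shown
// here purely for illustration): a task whose newest remote copy is newer
// than the local one is selected for hydration.
//
//   const winners = chooseWinners(
//     new Map([["task-1", { id: "task-1", updatedDate: "2025-01-01" } as Task]]),
//     new Map([
//       ["task-1", [{ id: "task-1", branch: "feature/x", path: "backlog/tasks/task-1 - a.md", lastModified: new Date("2025-02-01") }]],
//     ]),
//     "most_recent",
//   );
//   // -> [{ id: "task-1", ref: "origin/feature/x", path: "backlog/tasks/task-1 - a.md" }]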

/**
 * Find and load a specific task from remote branches.
 * Searches recent remote branches for the task and returns the newest version.
 */
export async function findTaskInRemoteBranches(
	git: GitOperations,
	taskId: string,
	backlogDir = "backlog",
	sinceDays = 30,
): Promise<Task | null> {
	try {
		// Check if we have any remote
		if (!(await git.hasAnyRemote())) return null;

		// Get recent remote branches
		const branches = await git.listRecentRemoteBranches(sinceDays);
		if (branches.length === 0) return null;

		// Build a task index for the remote branches
		const remoteIndex = await buildRemoteTaskIndex(git, branches, backlogDir, sinceDays);

		// Check if the task exists in the index
		const entries = remoteIndex.get(taskId);
		if (!entries || entries.length === 0) return null;

		// Get the newest version
		const best = entries.reduce((a, b) => (a.lastModified >= b.lastModified ? a : b));

		// Hydrate the task
		const ref = `origin/${best.branch}`;
		const content = await git.showFile(ref, best.path);
		const task = parseTask(content);
		if (task) {
			task.source = "remote";
			task.branch = best.branch;
		}
		return task;
	} catch (error) {
		if (process.env.DEBUG) {
			console.error(`Failed to find task ${taskId} in remote branches:`, error);
		}
		return null;
	}
}

/**
 * Find and load a specific task from local branches (excluding the current branch).
 * Searches recent local branches for the task and returns the newest version.
 */
export async function findTaskInLocalBranches(
	git: GitOperations,
	taskId: string,
	backlogDir = "backlog",
	sinceDays = 30,
): Promise<Task | null> {
	try {
		const currentBranch = await git.getCurrentBranch();
		if (!currentBranch) return null;

		// Get recent local branches
		const allBranches = await git.listRecentBranches(sinceDays);
		const localBranches = allBranches.filter(
			(b) => !b.startsWith("origin/") && !b.startsWith("refs/remotes/") && b !== "origin",
		);

		if (localBranches.length <= 1) return null; // Only the current branch

		// Build a task index for the local branches
		const localIndex = await buildLocalBranchTaskIndex(git, localBranches, currentBranch, backlogDir, sinceDays);

		// Check if the task exists in the index
		const entries = localIndex.get(taskId);
		if (!entries || entries.length === 0) return null;

		// Get the newest version
		const best = entries.reduce((a, b) => (a.lastModified >= b.lastModified ? a : b));

		// Hydrate the task
		const content = await git.showFile(best.branch, best.path);
		const task = parseTask(content);
		if (task) {
			task.source = "local-branch";
			task.branch = best.branch;
		}
		return task;
	} catch (error) {
		if (process.env.DEBUG) {
			console.error(`Failed to find task ${taskId} in local branches:`, error);
		}
		return null;
	}
}

/**
 * Load all remote tasks using the optimized index-first, hydrate-later pattern.
 * Dramatically reduces git operations by only fetching content for tasks that need it.
 */
export async function loadRemoteTasks(
	gitOps: GitOperations,
	userConfig: BacklogConfig | null = null,
	onProgress?: (message: string) => void,
	localTasks?: Task[],
): Promise<Task[]> {
	try {
		// Skip remote operations if disabled
		if (userConfig?.remoteOperations === false) {
			onProgress?.("Remote operations disabled - skipping remote tasks");
			return [];
		}

		// Fetch remote branches
		onProgress?.("Fetching remote branches...");
		await gitOps.fetch();

		// Use recent branches only for better performance
		const days = userConfig?.activeBranchDays ?? 30;
		const branches = await gitOps.listRecentRemoteBranches(days);

		if (branches.length === 0) {
			onProgress?.("No recent remote branches found");
			return [];
		}

		onProgress?.(`Indexing ${branches.length} recent remote branches (last ${days} days)...`);

		// Build a cheap index without fetching content
		const backlogDir = DEFAULT_DIRECTORIES.BACKLOG;
		const remoteIndex = await buildRemoteTaskIndex(gitOps, branches, backlogDir, days);

		if (remoteIndex.size === 0) {
			onProgress?.("No remote tasks found");
			return [];
		}

		onProgress?.(`Found ${remoteIndex.size} unique tasks across remote branches`);

		// If we have local tasks, use them to determine which remote tasks to hydrate
		let winners: Array<{ id: string; ref: string; path: string }>;

		if (localTasks && localTasks.length > 0) {
			// Build a local task map for comparison
			const localById = new Map(localTasks.map((t) => [t.id, t]));
			const strategy = userConfig?.taskResolutionStrategy || "most_progressed";

			// Only hydrate remote tasks that are newer or missing locally
			winners = chooseWinners(localById, remoteIndex, strategy);
			onProgress?.(`Hydrating ${winners.length} remote candidates...`);
		} else {
			// No local tasks, so hydrate all remote tasks (take the newest of each)
			winners = [];
			for (const [id, entries] of remoteIndex) {
				const best = entries.reduce((a, b) => (a.lastModified >= b.lastModified ? a : b));
				winners.push({ id, ref: `origin/${best.branch}`, path: best.path });
			}
			onProgress?.(`Hydrating ${winners.length} remote tasks...`);
		}

		// Only fetch content for the tasks we actually need
		const hydratedTasks = await hydrateTasks(gitOps, winners);

		onProgress?.(`Loaded ${hydratedTasks.length} remote tasks`);
		return hydratedTasks;
	} catch (error) {
		// If the fetch fails, we can still work with local tasks
		console.error("Failed to fetch remote tasks:", error);
		return [];
	}
}
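
// A hypothetical usage sketch, e.g. from a CLI entry point:
//
//   const remote = await loadRemoteTasks(gitOps, config, (msg) => console.log(msg), localTasks);
//   // `remote` contains only tasks that were missing locally or potentially newer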

function getTaskDate(task: Task): Date {
	if (task.updatedDate) {
		return new Date(task.updatedDate);
	}
	return task.lastModified ?? new Date(0);
}

/**
 * Resolve a conflict between local and remote versions of a task based on the strategy
 */
export function resolveTaskConflict(
	existing: Task,
	incoming: Task,
	statuses: string[],
	strategy: "most_recent" | "most_progressed" = "most_progressed",
): Task {
	if (strategy === "most_recent") {
		const existingDate = getTaskDate(existing);
		const incomingDate = getTaskDate(incoming);
		return existingDate >= incomingDate ? existing : incoming;
	}

	// Default to the most_progressed strategy:
	// map each status to its rank in the configured status list (unknown statuses rank 0)
	const currentIdx = statuses.indexOf(existing.status);
	const newIdx = statuses.indexOf(incoming.status);
	const currentRank = currentIdx >= 0 ? currentIdx : 0;
	const newRank = newIdx >= 0 ? newIdx : 0;

	// If the incoming task has a more progressed status, use it
	if (newRank > currentRank) {
		return incoming;
	}

	// If the statuses are equally progressed, use the most recent
	if (newRank === currentRank) {
		const existingDate = getTaskDate(existing);
		const incomingDate = getTaskDate(incoming);
		return existingDate >= incomingDate ? existing : incoming;
	}

	return existing;
}
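
// Worked example (hypothetical tasks): with statuses ["To Do", "In Progress", "Done"],
// an incoming "Done" copy beats an existing "In Progress" copy under
// "most_progressed", regardless of timestamps:
//
//   const winner = resolveTaskConflict(
//     { id: "task-7", status: "In Progress", updatedDate: "2025-03-02" } as Task,
//     { id: "task-7", status: "Done", updatedDate: "2025-03-01" } as Task,
//     ["To Do", "In Progress", "Done"],
//   );
//   // winner.status === "Done"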

/**
 * Load tasks from other local branches (not the current branch, not remote).
 * Uses the same optimized index-first, hydrate-later pattern as remote loading.
 */
export async function loadLocalBranchTasks(
	gitOps: GitOperations,
	userConfig: BacklogConfig | null = null,
	onProgress?: (message: string) => void,
	localTasks?: Task[],
): Promise<Task[]> {
	try {
		const currentBranch = await gitOps.getCurrentBranch();
		if (!currentBranch) {
			// Not on a branch (detached HEAD), so skip local branch loading
			return [];
		}

		// Get recent local branches (excludes remote refs)
		const days = userConfig?.activeBranchDays ?? 30;
		const allBranches = await gitOps.listRecentBranches(days);

		// Filter to only local branches (not origin/*)
		const localBranches = allBranches.filter(
			(b) => !b.startsWith("origin/") && !b.startsWith("refs/remotes/") && b !== "origin",
		);

		if (localBranches.length <= 1) {
			// Only the current branch, or no branches at all
			return [];
		}

		onProgress?.(`Indexing ${localBranches.length - 1} other local branches...`);

		// Build an index of tasks from the other local branches
		const backlogDir = DEFAULT_DIRECTORIES.BACKLOG;
		const localBranchIndex = await buildLocalBranchTaskIndex(gitOps, localBranches, currentBranch, backlogDir, days);

		if (localBranchIndex.size === 0) {
			return [];
		}

		onProgress?.(`Found ${localBranchIndex.size} unique tasks in other local branches`);

		// Determine which tasks to hydrate
		let winners: Array<{ id: string; ref: string; path: string }>;

		if (localTasks && localTasks.length > 0) {
			// Build a local task map for comparison
			const localById = new Map(localTasks.map((t) => [t.id, t]));
			const strategy = userConfig?.taskResolutionStrategy || "most_progressed";

			// Only hydrate tasks that are missing locally or potentially newer
			winners = [];
			for (const [id, entries] of localBranchIndex) {
				const local = localById.get(id);

				if (!local) {
					// Task doesn't exist locally - take the newest from the other branches
					const best = entries.reduce((a, b) => (a.lastModified >= b.lastModified ? a : b));
					winners.push({ id, ref: best.branch, path: best.path });
					continue;
				}

				// For existing tasks, check if any other branch version is newer
				if (strategy === "most_recent") {
					const localTs = local.updatedDate ? new Date(local.updatedDate).getTime() : 0;
					const newestOther = entries.reduce((a, b) => (a.lastModified >= b.lastModified ? a : b));

					if (newestOther.lastModified.getTime() > localTs) {
						winners.push({ id, ref: newestOther.branch, path: newestOther.path });
					}
				} else {
					// For most_progressed, we need to hydrate to check the status
					const localTs = local.updatedDate ? new Date(local.updatedDate).getTime() : 0;
					const maybeNewer = entries.some((e) => e.lastModified.getTime() > localTs);

					if (maybeNewer) {
						const newestOther = entries.reduce((a, b) => (a.lastModified >= b.lastModified ? a : b));
						winners.push({ id, ref: newestOther.branch, path: newestOther.path });
					}
				}
			}
		} else {
			// No local tasks, so hydrate all tasks from other branches (take the newest of each)
			winners = [];
			for (const [id, entries] of localBranchIndex) {
				const best = entries.reduce((a, b) => (a.lastModified >= b.lastModified ? a : b));
				winners.push({ id, ref: best.branch, path: best.path });
			}
		}

		if (winners.length === 0) {
			return [];
		}

		onProgress?.(`Hydrating ${winners.length} tasks from other local branches...`);

		// Hydrate the tasks - note: ref is the branch name directly (not origin/)
		const hydratedTasks = await hydrateTasks(gitOps, winners);

		// Mark these as coming from local branches
		for (const task of hydratedTasks) {
			task.source = "local-branch";
		}

		onProgress?.(`Loaded ${hydratedTasks.length} tasks from other local branches`);
		return hydratedTasks;
	} catch (error) {
		if (process.env.DEBUG) {
			console.error("Failed to load local branch tasks:", error);
		}
		return [];
	}
}

@ -0,0 +1,824 @@

import { mkdir, rename, unlink } from "node:fs/promises";
import { homedir } from "node:os";
import { dirname, join } from "node:path";
import { DEFAULT_DIRECTORIES, DEFAULT_FILES, DEFAULT_STATUSES } from "../constants/index.ts";
import { parseDecision, parseDocument, parseTask } from "../markdown/parser.ts";
import { serializeDecision, serializeDocument, serializeTask } from "../markdown/serializer.ts";
import type { BacklogConfig, Decision, Document, Task, TaskListFilter } from "../types/index.ts";
import { documentIdsEqual, normalizeDocumentId } from "../utils/document-id.ts";
import { getTaskFilename, getTaskPath, normalizeTaskId } from "../utils/task-path.ts";
import { sortByTaskId } from "../utils/task-sorting.ts";

// Interface for task path resolution context
interface TaskPathContext {
	filesystem: {
		tasksDir: string;
	};
}

export class FileSystem {
	private readonly backlogDir: string;
	private readonly projectRoot: string;
	private cachedConfig: BacklogConfig | null = null;

	constructor(projectRoot: string) {
		this.projectRoot = projectRoot;
		this.backlogDir = join(projectRoot, DEFAULT_DIRECTORIES.BACKLOG);
	}

	private async getBacklogDir(): Promise<string> {
		// Ensure the migration check has run, if needed
		if (!this.cachedConfig) {
			this.cachedConfig = await this.loadConfigDirect();
		}
		// Always use "backlog" as the directory name - no configuration needed
		return join(this.projectRoot, DEFAULT_DIRECTORIES.BACKLOG);
	}

	private async loadConfigDirect(): Promise<BacklogConfig | null> {
		try {
			// First try the standard "backlog" directory
			let configPath = join(this.projectRoot, DEFAULT_DIRECTORIES.BACKLOG, DEFAULT_FILES.CONFIG);
			let file = Bun.file(configPath);
			let exists = await file.exists();

			// If not found, check for a legacy ".backlog" directory and migrate it
			if (!exists) {
				const legacyBacklogDir = join(this.projectRoot, ".backlog");
				const legacyConfigPath = join(legacyBacklogDir, DEFAULT_FILES.CONFIG);
				const legacyFile = Bun.file(legacyConfigPath);
				const legacyExists = await legacyFile.exists();

				if (legacyExists) {
					// Migrate the legacy .backlog directory to backlog
					const newBacklogDir = join(this.projectRoot, DEFAULT_DIRECTORIES.BACKLOG);
					await rename(legacyBacklogDir, newBacklogDir);

					// Update paths to use the new location
					configPath = join(this.projectRoot, DEFAULT_DIRECTORIES.BACKLOG, DEFAULT_FILES.CONFIG);
					file = Bun.file(configPath);
					exists = true;
				}
			}

			if (!exists) {
				return null;
			}

			const content = await file.text();
			return this.parseConfig(content);
		} catch (_error) {
			if (process.env.DEBUG) {
				console.error("Error loading config:", _error);
			}
			return null;
		}
	}

	// Public accessors for directory paths
	get tasksDir(): string {
		return join(this.backlogDir, DEFAULT_DIRECTORIES.TASKS);
	}

	get completedDir(): string {
		return join(this.backlogDir, DEFAULT_DIRECTORIES.COMPLETED);
	}

	get archiveTasksDir(): string {
		return join(this.backlogDir, DEFAULT_DIRECTORIES.ARCHIVE_TASKS);
	}

	get decisionsDir(): string {
		return join(this.backlogDir, DEFAULT_DIRECTORIES.DECISIONS);
	}

	get docsDir(): string {
		return join(this.backlogDir, DEFAULT_DIRECTORIES.DOCS);
	}

	get configFilePath(): string {
		return join(this.backlogDir, DEFAULT_FILES.CONFIG);
	}

	/** Get the project root directory */
	get rootDir(): string {
		return this.projectRoot;
	}

	invalidateConfigCache(): void {
		this.cachedConfig = null;
	}

	private async getTasksDir(): Promise<string> {
		const backlogDir = await this.getBacklogDir();
		return join(backlogDir, DEFAULT_DIRECTORIES.TASKS);
	}

	async getDraftsDir(): Promise<string> {
		const backlogDir = await this.getBacklogDir();
		return join(backlogDir, DEFAULT_DIRECTORIES.DRAFTS);
	}

	async getArchiveTasksDir(): Promise<string> {
		const backlogDir = await this.getBacklogDir();
		return join(backlogDir, DEFAULT_DIRECTORIES.ARCHIVE_TASKS);
	}

	private async getArchiveDraftsDir(): Promise<string> {
		const backlogDir = await this.getBacklogDir();
		return join(backlogDir, DEFAULT_DIRECTORIES.ARCHIVE_DRAFTS);
	}

	private async getDecisionsDir(): Promise<string> {
		const backlogDir = await this.getBacklogDir();
		return join(backlogDir, DEFAULT_DIRECTORIES.DECISIONS);
	}

	private async getDocsDir(): Promise<string> {
		const backlogDir = await this.getBacklogDir();
		return join(backlogDir, DEFAULT_DIRECTORIES.DOCS);
	}

	private async getCompletedDir(): Promise<string> {
		const backlogDir = await this.getBacklogDir();
		return join(backlogDir, DEFAULT_DIRECTORIES.COMPLETED);
	}

	async ensureBacklogStructure(): Promise<void> {
		const backlogDir = await this.getBacklogDir();
		const directories = [
			backlogDir,
			join(backlogDir, DEFAULT_DIRECTORIES.TASKS),
			join(backlogDir, DEFAULT_DIRECTORIES.DRAFTS),
			join(backlogDir, DEFAULT_DIRECTORIES.COMPLETED),
			join(backlogDir, DEFAULT_DIRECTORIES.ARCHIVE_TASKS),
			join(backlogDir, DEFAULT_DIRECTORIES.ARCHIVE_DRAFTS),
			join(backlogDir, DEFAULT_DIRECTORIES.DOCS),
			join(backlogDir, DEFAULT_DIRECTORIES.DECISIONS),
		];

		for (const dir of directories) {
			await mkdir(dir, { recursive: true });
		}
	}

	// Task operations
	async saveTask(task: Task): Promise<string> {
		const taskId = normalizeTaskId(task.id);
		const filename = `${taskId} - ${this.sanitizeFilename(task.title)}.md`;
		const tasksDir = await this.getTasksDir();
		const filepath = join(tasksDir, filename);
		const content = serializeTask(task);

		// Delete any existing task files with the same ID but a different filename
		try {
			const core = { filesystem: { tasksDir } };
			const existingPath = await getTaskPath(taskId, core as TaskPathContext);
			if (existingPath && !existingPath.endsWith(filename)) {
				await unlink(existingPath);
			}
		} catch {
			// Ignore errors if no existing files were found
		}

		await this.ensureDirectoryExists(dirname(filepath));
		await Bun.write(filepath, content);
		return filepath;
	}

	async loadTask(taskId: string): Promise<Task | null> {
		try {
			const tasksDir = await this.getTasksDir();
			const core = { filesystem: { tasksDir } };
			const filepath = await getTaskPath(taskId, core as TaskPathContext);

			if (!filepath) return null;

			const content = await Bun.file(filepath).text();
			const task = parseTask(content);
			return { ...task, filePath: filepath };
		} catch (_error) {
			return null;
		}
	}

	async listTasks(filter?: TaskListFilter): Promise<Task[]> {
		try {
			const tasksDir = await this.getTasksDir();
			const taskFiles = await Array.fromAsync(new Bun.Glob("task-*.md").scan({ cwd: tasksDir }));

			let tasks: Task[] = [];
			for (const file of taskFiles) {
				const filepath = join(tasksDir, file);
				const content = await Bun.file(filepath).text();
				const task = parseTask(content);
				tasks.push({ ...task, filePath: filepath });
			}

			if (filter?.status) {
				const statusLower = filter.status.toLowerCase();
				tasks = tasks.filter((t) => t.status.toLowerCase() === statusLower);
			}

			if (filter?.assignee) {
				const assignee = filter.assignee;
				tasks = tasks.filter((t) => t.assignee.includes(assignee));
			}

			return sortByTaskId(tasks);
		} catch (_error) {
			return [];
		}
	}

	async listCompletedTasks(): Promise<Task[]> {
		try {
			const completedDir = await this.getCompletedDir();
			const taskFiles = await Array.fromAsync(new Bun.Glob("task-*.md").scan({ cwd: completedDir }));

			const tasks: Task[] = [];
			for (const file of taskFiles) {
				const filepath = join(completedDir, file);
				const content = await Bun.file(filepath).text();
				const task = parseTask(content);
				tasks.push({ ...task, filePath: filepath });
			}

			return sortByTaskId(tasks);
		} catch (_error) {
			return [];
		}
	}

	async archiveTask(taskId: string): Promise<boolean> {
		try {
			const tasksDir = await this.getTasksDir();
			const archiveTasksDir = await this.getArchiveTasksDir();
			const core = { filesystem: { tasksDir } };
			const sourcePath = await getTaskPath(taskId, core as TaskPathContext);
			const taskFile = await getTaskFilename(taskId, core as TaskPathContext);

			if (!sourcePath || !taskFile) return false;

			const targetPath = join(archiveTasksDir, taskFile);

			// Ensure the target directory exists
			await this.ensureDirectoryExists(dirname(targetPath));

			// Use rename for proper Git move detection
			await rename(sourcePath, targetPath);

			return true;
		} catch (_error) {
			return false;
		}
	}

	async completeTask(taskId: string): Promise<boolean> {
		try {
			const tasksDir = await this.getTasksDir();
			const completedDir = await this.getCompletedDir();
			const core = { filesystem: { tasksDir } };
			const sourcePath = await getTaskPath(taskId, core as TaskPathContext);
			const taskFile = await getTaskFilename(taskId, core as TaskPathContext);

			if (!sourcePath || !taskFile) return false;

			const targetPath = join(completedDir, taskFile);

			// Ensure the target directory exists
			await this.ensureDirectoryExists(dirname(targetPath));

			// Use rename for proper Git move detection
			await rename(sourcePath, targetPath);

			return true;
		} catch (_error) {
			return false;
		}
	}

	async archiveDraft(taskId: string): Promise<boolean> {
		try {
			const draftsDir = await this.getDraftsDir();
			const archiveDraftsDir = await this.getArchiveDraftsDir();
			const core = { filesystem: { tasksDir: draftsDir } };
			const sourcePath = await getTaskPath(taskId, core as TaskPathContext);
			const taskFile = await getTaskFilename(taskId, core as TaskPathContext);

			if (!sourcePath || !taskFile) return false;

			const targetPath = join(archiveDraftsDir, taskFile);

			const content = await Bun.file(sourcePath).text();
			await this.ensureDirectoryExists(dirname(targetPath));
			await Bun.write(targetPath, content);

			await unlink(sourcePath);

			return true;
		} catch {
			return false;
		}
	}

	async promoteDraft(taskId: string): Promise<boolean> {
		try {
			const draftsDir = await this.getDraftsDir();
			const tasksDir = await this.getTasksDir();
			const core = { filesystem: { tasksDir: draftsDir } };
			const sourcePath = await getTaskPath(taskId, core as TaskPathContext);
			const taskFile = await getTaskFilename(taskId, core as TaskPathContext);

			if (!sourcePath || !taskFile) return false;

			const targetPath = join(tasksDir, taskFile);

			const content = await Bun.file(sourcePath).text();
			await this.ensureDirectoryExists(dirname(targetPath));
			await Bun.write(targetPath, content);

			await unlink(sourcePath);

			return true;
		} catch {
			return false;
		}
	}

	async demoteTask(taskId: string): Promise<boolean> {
		try {
			const tasksDir = await this.getTasksDir();
			const draftsDir = await this.getDraftsDir();
			const core = { filesystem: { tasksDir } };
			const sourcePath = await getTaskPath(taskId, core as TaskPathContext);
			const taskFile = await getTaskFilename(taskId, core as TaskPathContext);

			if (!sourcePath || !taskFile) return false;

			const targetPath = join(draftsDir, taskFile);

			const content = await Bun.file(sourcePath).text();
			await this.ensureDirectoryExists(dirname(targetPath));
			await Bun.write(targetPath, content);

			await unlink(sourcePath);

			return true;
		} catch {
			return false;
		}
	}

	// Draft operations
	async saveDraft(task: Task): Promise<string> {
		const taskId = normalizeTaskId(task.id);
		const filename = `${taskId} - ${this.sanitizeFilename(task.title)}.md`;
		const draftsDir = await this.getDraftsDir();
		const filepath = join(draftsDir, filename);
		const content = serializeTask(task);

		try {
			const core = { filesystem: { tasksDir: draftsDir } };
			const existingPath = await getTaskPath(taskId, core as TaskPathContext);
			if (existingPath && !existingPath.endsWith(filename)) {
				await unlink(existingPath);
			}
		} catch {
			// Ignore errors if no existing files were found
		}

		await this.ensureDirectoryExists(dirname(filepath));
		await Bun.write(filepath, content);
		return filepath;
	}

	async loadDraft(taskId: string): Promise<Task | null> {
		try {
			const draftsDir = await this.getDraftsDir();
			const core = { filesystem: { tasksDir: draftsDir } };
			const filepath = await getTaskPath(taskId, core as TaskPathContext);

			if (!filepath) return null;

			const content = await Bun.file(filepath).text();
			const task = parseTask(content);
			return { ...task, filePath: filepath };
		} catch {
			return null;
		}
	}

	async listDrafts(): Promise<Task[]> {
		try {
			const draftsDir = await this.getDraftsDir();
			const taskFiles = await Array.fromAsync(new Bun.Glob("task-*.md").scan({ cwd: draftsDir }));

			const tasks: Task[] = [];
			for (const file of taskFiles) {
				const filepath = join(draftsDir, file);
				const content = await Bun.file(filepath).text();
				const task = parseTask(content);
				tasks.push({ ...task, filePath: filepath });
			}

			return sortByTaskId(tasks);
		} catch {
			return [];
		}
	}

	// Decision log operations
	async saveDecision(decision: Decision): Promise<void> {
		// Normalize the ID - remove the "decision-" prefix if present
		const normalizedId = decision.id.replace(/^decision-/, "");
		const filename = `decision-${normalizedId} - ${this.sanitizeFilename(decision.title)}.md`;
		const decisionsDir = await this.getDecisionsDir();
		const filepath = join(decisionsDir, filename);
		const content = serializeDecision(decision);

		await this.ensureDirectoryExists(dirname(filepath));
		await Bun.write(filepath, content);
	}

	async loadDecision(decisionId: string): Promise<Decision | null> {
		try {
			const decisionsDir = await this.getDecisionsDir();
			const files = await Array.fromAsync(new Bun.Glob("decision-*.md").scan({ cwd: decisionsDir }));

			// Normalize the ID - remove the "decision-" prefix if present
			const normalizedId = decisionId.replace(/^decision-/, "");
			const decisionFile = files.find((file) => file.startsWith(`decision-${normalizedId} -`));

			if (!decisionFile) return null;

			const filepath = join(decisionsDir, decisionFile);
			const content = await Bun.file(filepath).text();
			return parseDecision(content);
		} catch (_error) {
			return null;
		}
	}

	// Document operations
	async saveDocument(document: Document, subPath = ""): Promise<string> {
		const docsDir = await this.getDocsDir();
		const canonicalId = normalizeDocumentId(document.id);
		document.id = canonicalId;
		const filename = `${canonicalId} - ${this.sanitizeFilename(document.title)}.md`;
		const subPathSegments = subPath
			.split(/[\\/]+/)
			.map((segment) => segment.trim())
			.filter((segment) => segment.length > 0 && segment !== "." && segment !== "..");
		const relativePath = subPathSegments.length > 0 ? join(...subPathSegments, filename) : filename;
		const filepath = join(docsDir, relativePath);
		const content = serializeDocument(document);

		await this.ensureDirectoryExists(dirname(filepath));

		const glob = new Bun.Glob("**/doc-*.md");
		const existingMatches = await Array.fromAsync(glob.scan({ cwd: docsDir }));
		const matchesForId = existingMatches.filter((relative) => {
			const base = relative.split("/").pop() || relative;
			const [candidateId] = base.split(" - ");
			if (!candidateId) return false;
			return documentIdsEqual(canonicalId, candidateId);
		});

		let sourceRelativePath = document.path;
		if (!sourceRelativePath && matchesForId.length > 0) {
			sourceRelativePath = matchesForId[0];
		}

		if (sourceRelativePath && sourceRelativePath !== relativePath) {
			const sourcePath = join(docsDir, sourceRelativePath);
			try {
				await this.ensureDirectoryExists(dirname(filepath));
				await rename(sourcePath, filepath);
			} catch (error) {
				const code = (error as NodeJS.ErrnoException | undefined)?.code;
				if (code !== "ENOENT") {
					throw error;
				}
			}
		}

		for (const match of matchesForId) {
			const matchPath = join(docsDir, match);
			if (matchPath === filepath) {
				continue;
			}
			try {
				await unlink(matchPath);
			} catch {
				// Ignore cleanup errors - the file may have been removed already
			}
		}

		await Bun.write(filepath, content);

		document.path = relativePath;
		return relativePath;
	}

	async listDecisions(): Promise<Decision[]> {
		try {
			const decisionsDir = await this.getDecisionsDir();
			const decisionFiles = await Array.fromAsync(new Bun.Glob("decision-*.md").scan({ cwd: decisionsDir }));
			const decisions: Decision[] = [];
			for (const file of decisionFiles) {
				// Filter out README files as they're just instruction files
				if (file.toLowerCase().match(/^readme\.md$/i)) {
					continue;
				}
				const filepath = join(decisionsDir, file);
				const content = await Bun.file(filepath).text();
				decisions.push(parseDecision(content));
			}
			return sortByTaskId(decisions);
		} catch {
			return [];
		}
	}

	async listDocuments(): Promise<Document[]> {
		try {
			const docsDir = await this.getDocsDir();
			// Recursively include all markdown files under docs, excluding README.md variants
			const glob = new Bun.Glob("**/*.md");
			const docFiles = await Array.fromAsync(glob.scan({ cwd: docsDir }));
			const docs: Document[] = [];
			for (const file of docFiles) {
				const base = file.split("/").pop() || file;
				if (base.toLowerCase() === "readme.md") continue;
				const filepath = join(docsDir, file);
				const content = await Bun.file(filepath).text();
				const parsed = parseDocument(content);
				docs.push({
					...parsed,
					path: file,
				});
			}

			// Stable sort by title for UI/CLI listing
			return docs.sort((a, b) => a.title.localeCompare(b.title));
		} catch {
			return [];
		}
	}

	async loadDocument(id: string): Promise<Document> {
		const documents = await this.listDocuments();
		const document = documents.find((doc) => documentIdsEqual(id, doc.id));
		if (!document) {
			throw new Error(`Document not found: ${id}`);
		}
		return document;
	}

	// Config operations
	async loadConfig(): Promise<BacklogConfig | null> {
		// Return the cached config if available
		if (this.cachedConfig !== null) {
			return this.cachedConfig;
		}

		try {
			const backlogDir = await this.getBacklogDir();
			const configPath = join(backlogDir, DEFAULT_FILES.CONFIG);

			// Check if the file exists first to avoid hanging on Windows
			const file = Bun.file(configPath);
			const exists = await file.exists();

			if (!exists) {
				return null;
			}

			const content = await file.text();
			const config = this.parseConfig(content);

			// Cache the loaded config
			this.cachedConfig = config;
			return config;
		} catch (_error) {
			return null;
		}
	}

	async saveConfig(config: BacklogConfig): Promise<void> {
		const backlogDir = await this.getBacklogDir();
		const configPath = join(backlogDir, DEFAULT_FILES.CONFIG);
		const content = this.serializeConfig(config);
		await Bun.write(configPath, content);
		this.cachedConfig = config;
	}

	async getUserSetting(key: string, global = false): Promise<string | undefined> {
		const settings = await this.loadUserSettings(global);
		return settings ? settings[key] : undefined;
	}

	async setUserSetting(key: string, value: string, global = false): Promise<void> {
		const settings = (await this.loadUserSettings(global)) || {};
		settings[key] = value;
		await this.saveUserSettings(settings, global);
	}

	private async loadUserSettings(global = false): Promise<Record<string, string> | null> {
		const primaryPath = global
			? join(homedir(), "backlog", DEFAULT_FILES.USER)
			: join(this.projectRoot, DEFAULT_FILES.USER);
		const fallbackPath = global ? join(this.projectRoot, "backlog", DEFAULT_FILES.USER) : undefined;
		const tryPaths = fallbackPath ? [primaryPath, fallbackPath] : [primaryPath];
		for (const filePath of tryPaths) {
			try {
				const content = await Bun.file(filePath).text();
				const result: Record<string, string> = {};
				for (const line of content.split(/\r?\n/)) {
					const trimmed = line.trim();
					if (!trimmed || trimmed.startsWith("#")) continue;
					const idx = trimmed.indexOf(":");
					if (idx === -1) continue;
					const k = trimmed.substring(0, idx).trim();
					result[k] = trimmed
						.substring(idx + 1)
						.trim()
						.replace(/^['"]|['"]$/g, "");
				}
				return result;
			} catch {
				// Try the next path (if any)
			}
		}
		return null;
	}

	private async saveUserSettings(settings: Record<string, string>, global = false): Promise<void> {
		const primaryPath = global
			? join(homedir(), "backlog", DEFAULT_FILES.USER)
			: join(this.projectRoot, DEFAULT_FILES.USER);
		const fallbackPath = global ? join(this.projectRoot, "backlog", DEFAULT_FILES.USER) : undefined;

		const lines = Object.entries(settings).map(([k, v]) => `${k}: ${v}`);
		const data = `${lines.join("\n")}\n`;

		try {
			await this.ensureDirectoryExists(dirname(primaryPath));
			await Bun.write(primaryPath, data);
			return;
		} catch {
			// Fall through to the fallback when the global write fails (e.g., a sandboxed env)
		}

		if (fallbackPath) {
			await this.ensureDirectoryExists(dirname(fallbackPath));
			await Bun.write(fallbackPath, data);
		}
	}

	// Utility methods
	private sanitizeFilename(filename: string): string {
		return filename
			.replace(/[<>:"/\\|?*]/g, "-")
			.replace(/\s+/g, "-")
			.replace(/-+/g, "-")
			.replace(/^-|-$/g, "");
	}
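
	// A few illustrative inputs and outputs for sanitizeFilename:
	//   'Fix "login" bug / retry'  -> "Fix-login-bug-retry"
	//   "  spaces   everywhere  "  -> "spaces-everywhere"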

	private async ensureDirectoryExists(dirPath: string): Promise<void> {
		try {
			await mkdir(dirPath, { recursive: true });
		} catch (_error) {
			// Directory creation failed; ignore
		}
	}

	private parseConfig(content: string): BacklogConfig {
		const config: Partial<BacklogConfig> = {};
		const lines = content.split("\n");

		for (const line of lines) {
			const trimmed = line.trim();
			if (!trimmed || trimmed.startsWith("#")) continue;

			const colonIndex = trimmed.indexOf(":");
			if (colonIndex === -1) continue;

			const key = trimmed.substring(0, colonIndex).trim();
			const value = trimmed.substring(colonIndex + 1).trim();

			switch (key) {
				case "project_name":
					config.projectName = value.replace(/['"]/g, "");
					break;
				case "default_assignee":
					config.defaultAssignee = value.replace(/['"]/g, "");
					break;
				case "default_reporter":
					config.defaultReporter = value.replace(/['"]/g, "");
					break;
				case "default_status":
					config.defaultStatus = value.replace(/['"]/g, "");
					break;
				case "statuses":
				case "labels":
				case "milestones":
					if (value.startsWith("[") && value.endsWith("]")) {
						const arrayContent = value.slice(1, -1);
						config[key] = arrayContent
							.split(",")
							.map((item) => item.trim().replace(/['"]/g, ""))
							.filter(Boolean);
					}
					break;
				case "date_format":
					config.dateFormat = value.replace(/['"]/g, "");
					break;
				case "max_column_width":
					config.maxColumnWidth = Number.parseInt(value, 10);
					break;
				case "default_editor":
					config.defaultEditor = value.replace(/["']/g, "");
					break;
				case "auto_open_browser":
					config.autoOpenBrowser = value.toLowerCase() === "true";
					break;
				case "default_port":
					config.defaultPort = Number.parseInt(value, 10);
					break;
				case "remote_operations":
					config.remoteOperations = value.toLowerCase() === "true";
					break;
				case "auto_commit":
					config.autoCommit = value.toLowerCase() === "true";
					break;
				case "zero_padded_ids":
					config.zeroPaddedIds = Number.parseInt(value, 10);
					break;
				case "bypass_git_hooks":
					config.bypassGitHooks = value.toLowerCase() === "true";
					break;
				case "check_active_branches":
					config.checkActiveBranches = value.toLowerCase() === "true";
					break;
				case "active_branch_days":
					config.activeBranchDays = Number.parseInt(value, 10);
					break;
				case "onStatusChange":
				case "on_status_change":
					// Remove surrounding quotes if present, but preserve the inner content
					config.onStatusChange = value.replace(/^['"]|['"]$/g, "");
					break;
			}
		}

		return {
			projectName: config.projectName || "",
			defaultAssignee: config.defaultAssignee,
			defaultReporter: config.defaultReporter,
			statuses: config.statuses || [...DEFAULT_STATUSES],
			labels: config.labels || [],
			milestones: config.milestones || [],
			defaultStatus: config.defaultStatus,
			dateFormat: config.dateFormat || "yyyy-mm-dd",
			maxColumnWidth: config.maxColumnWidth,
			defaultEditor: config.defaultEditor,
			autoOpenBrowser: config.autoOpenBrowser,
			defaultPort: config.defaultPort,
			remoteOperations: config.remoteOperations,
			autoCommit: config.autoCommit,
			zeroPaddedIds: config.zeroPaddedIds,
			bypassGitHooks: config.bypassGitHooks,
			checkActiveBranches: config.checkActiveBranches,
			activeBranchDays: config.activeBranchDays,
			onStatusChange: config.onStatusChange,
		};
	}
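
	// A backlog config file that parseConfig handles might look like this
	// (keys are the snake_case names from the switch above; values are illustrative):
	//
	//   project_name: "My Project"
	//   statuses: ["To Do", "In Progress", "Done"]
	//   date_format: yyyy-mm-dd
	//   remote_operations: true
	//   zero_padded_ids: 3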

	private serializeConfig(config: BacklogConfig): string {
		const lines = [
			`project_name: "${config.projectName}"`,
			...(config.defaultAssignee ? [`default_assignee: "${config.defaultAssignee}"`] : []),
			...(config.defaultReporter ? [`default_reporter: "${config.defaultReporter}"`] : []),
			...(config.defaultStatus ? [`default_status: "${config.defaultStatus}"`] : []),
			`statuses: [${config.statuses.map((s) => `"${s}"`).join(", ")}]`,
			`labels: [${config.labels.map((l) => `"${l}"`).join(", ")}]`,
			`milestones: [${config.milestones.map((m) => `"${m}"`).join(", ")}]`,
			`date_format: ${config.dateFormat}`,
			...(config.maxColumnWidth ? [`max_column_width: ${config.maxColumnWidth}`] : []),
			...(config.defaultEditor ? [`default_editor: "${config.defaultEditor}"`] : []),
			...(typeof config.autoOpenBrowser === "boolean" ? [`auto_open_browser: ${config.autoOpenBrowser}`] : []),
			...(config.defaultPort ? [`default_port: ${config.defaultPort}`] : []),
			...(typeof config.remoteOperations === "boolean" ? [`remote_operations: ${config.remoteOperations}`] : []),
			...(typeof config.autoCommit === "boolean" ? [`auto_commit: ${config.autoCommit}`] : []),
			...(typeof config.zeroPaddedIds === "number" ? [`zero_padded_ids: ${config.zeroPaddedIds}`] : []),
			...(typeof config.bypassGitHooks === "boolean" ? [`bypass_git_hooks: ${config.bypassGitHooks}`] : []),
			...(typeof config.checkActiveBranches === "boolean"
				? [`check_active_branches: ${config.checkActiveBranches}`]
				: []),
			...(typeof config.activeBranchDays === "number" ? [`active_branch_days: ${config.activeBranchDays}`] : []),
			...(config.onStatusChange ? [`onStatusChange: '${config.onStatusChange}'`] : []),
		];

		return `${lines.join("\n")}\n`;
	}
}
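
// A hypothetical usage sketch for the FileSystem facade:
//
//   const fs = new FileSystem("/path/to/project");
//   await fs.ensureBacklogStructure();
//   const inProgress = await fs.listTasks({ status: "In Progress" });
//   const moved = await fs.completeTask("task-42"); // moves the file into the completed directory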

@ -0,0 +1,134 @@

import type { Task } from "../types/index.ts";
import type { ChecklistItem } from "../ui/checklist.ts";
import { transformCodePathsPlain } from "../ui/code-path.ts";
import { formatStatusWithIcon } from "../ui/status-icon.ts";

export type TaskPlainTextOptions = {
	filePathOverride?: string;
};

export function formatDateForDisplay(dateStr: string): string {
	if (!dateStr) return "";
	// Dates are stored in display-ready form (date-only or date + time),
	// so they pass through unchanged
	return dateStr;
}

export function buildAcceptanceCriteriaItems(task: Task): ChecklistItem[] {
	const items = task.acceptanceCriteriaItems ?? [];
	return items
		.slice()
		.sort((a, b) => a.index - b.index)
		.map((criterion, index) => ({
			text: `#${index + 1} ${criterion.text}`,
			checked: criterion.checked,
		}));
}

export function formatAcceptanceCriteriaLines(items: ChecklistItem[]): string[] {
	if (items.length === 0) return [];
	return items.map((item) => {
		const prefix = item.checked ? "- [x]" : "- [ ]";
		return `${prefix} ${transformCodePathsPlain(item.text)}`;
	});
}
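
// Example output (hypothetical items, assuming transformCodePathsPlain leaves
// plain text unchanged): checked criteria render as "- [x]", unchecked as
// "- [ ]", matching the Markdown checkbox convention used in task files:
//
//   formatAcceptanceCriteriaLines([
//     { text: "#1 CLI returns exit code 0", checked: true },
//     { text: "#2 Errors are logged", checked: false },
//   ]);
//   // -> ["- [x] #1 CLI returns exit code 0", "- [ ] #2 Errors are logged"]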

function formatPriority(priority?: "high" | "medium" | "low"): string | null {
	if (!priority) return null;
	return priority.charAt(0).toUpperCase() + priority.slice(1);
}

function formatAssignees(assignee?: string[]): string | null {
	if (!assignee || assignee.length === 0) return null;
	return assignee.map((a) => (a.startsWith("@") ? a : `@${a}`)).join(", ");
}

export function formatTaskPlainText(task: Task, options: TaskPlainTextOptions = {}): string {
	const lines: string[] = [];
	const filePath = options.filePathOverride ?? task.filePath;

	if (filePath) {
		lines.push(`File: ${filePath}`);
		lines.push("");
	}

	lines.push(`Task ${task.id} - ${task.title}`);
	lines.push("=".repeat(50));
	lines.push("");
	lines.push(`Status: ${formatStatusWithIcon(task.status)}`);

	const priorityLabel = formatPriority(task.priority);
	if (priorityLabel) {
		lines.push(`Priority: ${priorityLabel}`);
	}

	const assigneeText = formatAssignees(task.assignee);
	if (assigneeText) {
		lines.push(`Assignee: ${assigneeText}`);
	}

	if (task.reporter) {
		const reporter = task.reporter.startsWith("@") ? task.reporter : `@${task.reporter}`;
		lines.push(`Reporter: ${reporter}`);
	}

	lines.push(`Created: ${formatDateForDisplay(task.createdDate)}`);
	if (task.updatedDate) {
		lines.push(`Updated: ${formatDateForDisplay(task.updatedDate)}`);
	}

	if (task.labels?.length) {
		lines.push(`Labels: ${task.labels.join(", ")}`);
	}

	if (task.milestone) {
		lines.push(`Milestone: ${task.milestone}`);
	}

	if (task.parentTaskId) {
		lines.push(`Parent: ${task.parentTaskId}`);
	}

	if (task.subtasks?.length) {
		lines.push(`Subtasks: ${task.subtasks.length}`);
	}

	if (task.dependencies?.length) {
		lines.push(`Dependencies: ${task.dependencies.join(", ")}`);
	}

	lines.push("");
	lines.push("Description:");
	lines.push("-".repeat(50));
	const description = task.description?.trim();
	lines.push(transformCodePathsPlain(description && description.length > 0 ? description : "No description provided"));
	lines.push("");

	lines.push("Acceptance Criteria:");
	lines.push("-".repeat(50));
	const criteriaItems = buildAcceptanceCriteriaItems(task);
	if (criteriaItems.length > 0) {
		lines.push(...formatAcceptanceCriteriaLines(criteriaItems));
	} else {
		lines.push("No acceptance criteria defined");
	}
	lines.push("");

	const implementationPlan = task.implementationPlan?.trim();
	if (implementationPlan) {
		lines.push("Implementation Plan:");
		lines.push("-".repeat(50));
		lines.push(transformCodePathsPlain(implementationPlan));
		lines.push("");
	}

	const implementationNotes = task.implementationNotes?.trim();
	if (implementationNotes) {
		lines.push("Implementation Notes:");
		lines.push("-".repeat(50));
		lines.push(transformCodePathsPlain(implementationNotes));
		lines.push("");
	}

	return lines.join("\n");
}
@@ -0,0 +1,516 @@
import { $ } from "bun";
|
||||
import type { BacklogConfig } from "../types/index.ts";
|
||||
|
||||
export class GitOperations {
|
||||
private projectRoot: string;
|
||||
private config: BacklogConfig | null = null;
|
||||
|
||||
constructor(projectRoot: string, config: BacklogConfig | null = null) {
|
||||
this.projectRoot = projectRoot;
|
||||
this.config = config;
|
||||
}
|
||||
|
||||
setConfig(config: BacklogConfig | null): void {
|
||||
this.config = config;
|
||||
}
|
||||
|
||||
async addFile(filePath: string): Promise<void> {
|
||||
// Convert absolute paths to relative paths from project root to avoid Windows encoding issues
|
||||
const { relative } = await import("node:path");
|
||||
const relativePath = relative(this.projectRoot, filePath).replace(/\\/g, "/");
|
||||
await this.execGit(["add", relativePath]);
|
||||
}
|
||||
|
||||
async addFiles(filePaths: string[]): Promise<void> {
|
||||
// Convert absolute paths to relative paths from project root to avoid Windows encoding issues
|
||||
const { relative } = await import("node:path");
|
||||
const relativePaths = filePaths.map((filePath) => relative(this.projectRoot, filePath).replace(/\\/g, "/"));
|
||||
await this.execGit(["add", ...relativePaths]);
|
||||
}
|
||||
|
||||
async commitTaskChange(taskId: string, message: string): Promise<void> {
|
||||
const commitMessage = `${taskId} - ${message}`;
|
||||
const args = ["commit", "-m", commitMessage];
|
||||
if (this.config?.bypassGitHooks) {
|
||||
args.push("--no-verify");
|
||||
}
|
||||
await this.execGit(args);
|
||||
}
|
||||
|
||||
async commitChanges(message: string): Promise<void> {
|
||||
const args = ["commit", "-m", message];
|
||||
if (this.config?.bypassGitHooks) {
|
||||
args.push("--no-verify");
|
||||
}
|
||||
await this.execGit(args);
|
||||
}
|
||||
|
||||
async resetIndex(): Promise<void> {
|
||||
// Reset the staging area without affecting working directory
|
||||
await this.execGit(["reset", "HEAD"]);
|
||||
}
|
||||
|
||||
async commitStagedChanges(message: string): Promise<void> {
|
||||
// Check if there are any staged changes before committing
|
||||
const { stdout: status } = await this.execGit(["status", "--porcelain"]);
|
||||
const hasStagedChanges = status.split("\n").some((line) => line.match(/^[AMDRC]/));
|
||||
|
||||
if (!hasStagedChanges) {
|
||||
throw new Error("No staged changes to commit");
|
||||
}
|
||||
|
||||
const args = ["commit", "-m", message];
|
||||
if (this.config?.bypassGitHooks) {
|
||||
args.push("--no-verify");
|
||||
}
|
||||
await this.execGit(args);
|
||||
}
|
||||
|
||||
async retryGitOperation<T>(operation: () => Promise<T>, operationName: string, maxRetries = 3): Promise<T> {
|
||||
let lastError: Error | undefined;
|
||||
|
||||
for (let attempt = 1; attempt <= maxRetries; attempt++) {
|
||||
try {
|
||||
return await operation();
|
||||
} catch (error) {
|
||||
lastError = error instanceof Error ? error : new Error(String(error));
|
||||
|
||||
if (process.env.DEBUG) {
|
||||
console.warn(
|
||||
`Git operation '${operationName}' failed on attempt ${attempt}/${maxRetries}:`,
|
||||
lastError.message,
|
||||
);
|
||||
}
|
||||
|
||||
// Don't retry on the last attempt
|
||||
if (attempt === maxRetries) {
|
||||
break;
|
||||
}
|
||||
|
||||
// Wait briefly before retrying (exponential backoff)
|
||||
await new Promise((resolve) => setTimeout(resolve, 2 ** (attempt - 1) * 100));
|
||||
}
|
||||
}
|
||||
|
||||
throw new Error(`Git operation '${operationName}' failed after ${maxRetries} attempts: ${lastError?.message}`);
|
||||
}
|
||||
|
||||
async getStatus(): Promise<string> {
|
||||
const { stdout } = await this.execGit(["status", "--porcelain"], { readOnly: true });
|
||||
return stdout;
|
||||
}
|
||||
|
||||
async isClean(): Promise<boolean> {
|
||||
const status = await this.getStatus();
|
||||
return status.trim() === "";
|
||||
}
|
||||
|
||||
async getCurrentBranch(): Promise<string> {
|
||||
const { stdout } = await this.execGit(["branch", "--show-current"], { readOnly: true });
|
||||
return stdout.trim();
|
||||
}
|
||||
async hasUncommittedChanges(): Promise<boolean> {
|
||||
const status = await this.getStatus();
|
||||
return status.trim() !== "";
|
||||
}
|
||||
|
||||
async getLastCommitMessage(): Promise<string> {
|
||||
const { stdout } = await this.execGit(["log", "-1", "--pretty=format:%s"], { readOnly: true });
|
||||
return stdout.trim();
|
||||
}
|
||||
|
||||
async fetch(remote = "origin"): Promise<void> {
|
||||
// Check if remote operations are disabled
|
||||
if (this.config?.remoteOperations === false) {
|
||||
if (process.env.DEBUG) {
|
||||
console.warn("Remote operations are disabled in config. Skipping fetch.");
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
// Preflight: skip if repository has no remotes configured
|
||||
const hasRemotes = await this.hasAnyRemote();
|
||||
if (!hasRemotes) {
|
||||
// No remotes configured; silently skip fetch. A consolidated warning is shown during init if applicable.
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
// Use --prune to remove dead refs and reduce later scans
|
||||
await this.execGit(["fetch", remote, "--prune", "--quiet"]);
|
||||
} catch (error) {
|
||||
// Check if this is a network-related error
|
||||
if (this.isNetworkError(error)) {
|
||||
// Don't show console warnings - let the calling code handle user messaging
|
||||
if (process.env.DEBUG) {
|
||||
console.warn(`Network error details: ${error}`);
|
||||
}
|
||||
return;
|
||||
}
|
||||
// Re-throw non-network errors
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
private isNetworkError(error: unknown): boolean {
|
||||
if (typeof error === "string") {
|
||||
return this.containsNetworkErrorPattern(error);
|
||||
}
|
||||
if (error instanceof Error) {
|
||||
return this.containsNetworkErrorPattern(error.message);
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
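// Case-insensitive substring match against common transient network failure messages.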
private containsNetworkErrorPattern(message: string): boolean {
|
||||
const networkErrorPatterns = [
|
||||
"could not resolve host",
|
||||
"connection refused",
|
||||
"network is unreachable",
|
||||
"timeout",
|
||||
"no route to host",
|
||||
"connection timed out",
|
||||
"temporary failure in name resolution",
|
||||
"operation timed out",
|
||||
];
|
||||
|
||||
const lowerMessage = message.toLowerCase();
|
||||
return networkErrorPatterns.some((pattern) => lowerMessage.includes(pattern));
|
||||
}
|
||||
async addAndCommitTaskFile(taskId: string, filePath: string, action: "create" | "update" | "archive"): Promise<void> {
|
||||
const actionMessages = {
|
||||
create: `Create task ${taskId}`,
|
||||
update: `Update task ${taskId}`,
|
||||
archive: `Archive task ${taskId}`,
|
||||
};
|
||||
|
||||
// Retry git operations to handle transient failures
|
||||
await this.retryGitOperation(async () => {
|
||||
// Reset index to ensure only the specific file is staged
|
||||
await this.resetIndex();
|
||||
|
||||
// Stage only the specific task file
|
||||
await this.addFile(filePath);
|
||||
|
||||
// Commit only the staged file
|
||||
await this.commitStagedChanges(actionMessages[action]);
|
||||
}, `commit task file ${filePath}`);
|
||||
}
|
||||
|
||||
async stageBacklogDirectory(backlogDir = "backlog"): Promise<void> {
|
||||
await this.execGit(["add", `${backlogDir}/`]);
|
||||
}
|
||||
async stageFileMove(fromPath: string, toPath: string): Promise<void> {
|
||||
// Stage the deletion of the old file and addition of the new file
|
||||
// Git will automatically detect this as a rename if the content is similar enough
|
||||
try {
|
||||
// First try to stage the removal of the old file (if it still exists)
|
||||
await this.execGit(["add", "--all", fromPath]);
|
||||
} catch {
|
||||
// If the old file doesn't exist, that's okay - it was already moved
|
||||
}
|
||||
|
||||
// Always stage the new file location
|
||||
await this.execGit(["add", toPath]);
|
||||
}
|
||||
|
||||
async listRemoteBranches(remote = "origin"): Promise<string[]> {
|
||||
try {
|
||||
// Fast-path: if no remotes, return empty
|
||||
if (!(await this.hasAnyRemote())) return [];
|
||||
const { stdout } = await this.execGit(["branch", "-r", "--format=%(refname:short)"], { readOnly: true });
|
||||
return stdout
|
||||
.split("\n")
|
||||
.map((l) => l.trim())
|
||||
.filter(Boolean)
|
||||
.filter((branch) => branch.startsWith(`${remote}/`))
|
||||
.map((branch) => branch.substring(`${remote}/`.length));
|
||||
} catch {
|
||||
// If remote doesn't exist or other error, return empty array
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* List remote branches that have been active within the specified days
|
||||
* Much faster than listRemoteBranches for filtering old branches
|
||||
*/
|
||||
async listRecentRemoteBranches(daysAgo: number, remote = "origin"): Promise<string[]> {
|
||||
try {
|
||||
// Fast-path: if no remotes, return empty
|
||||
if (!(await this.hasAnyRemote())) return [];
|
||||
const { stdout } = await this.execGit(
|
||||
["for-each-ref", "--format=%(refname:short)|%(committerdate:iso8601)", `refs/remotes/${remote}`],
|
||||
{ readOnly: true },
|
||||
);
|
||||
const since = Date.now() - daysAgo * 24 * 60 * 60 * 1000;
|
||||
return (
|
||||
stdout
|
||||
.split("\n")
|
||||
.map((l) => l.trim())
|
||||
.filter(Boolean)
|
||||
.map((line) => {
|
||||
const [ref, iso] = line.split("|");
|
||||
return { ref, t: Date.parse(iso || "") };
|
||||
})
|
||||
.filter((x) => Number.isFinite(x.t) && x.t >= since && x.ref)
|
||||
.map((x) => x.ref?.replace(`${remote}/`, ""))
|
||||
// Filter out invalid/ambiguous entries that would normalize to empty or "origin"
|
||||
.filter((b): b is string => Boolean(b))
|
||||
.filter((b) => b !== "HEAD" && b !== remote && b !== `${remote}`)
|
||||
);
|
||||
} catch {
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
async listRecentBranches(daysAgo: number): Promise<string[]> {
|
||||
try {
|
||||
// Get all branches with their last commit date
|
||||
// Using for-each-ref which is more efficient than multiple branch commands
|
||||
const since = new Date();
|
||||
since.setDate(since.getDate() - daysAgo);
|
||||
|
||||
// Build refs to check based on remoteOperations config
|
||||
const refs = ["refs/heads"];
|
||||
if (this.config?.remoteOperations !== false) {
|
||||
refs.push("refs/remotes/origin");
|
||||
}
|
||||
|
||||
// Get local and remote branches with commit dates
|
||||
const { stdout } = await this.execGit(
|
||||
["for-each-ref", "--format=%(refname:short)|%(committerdate:iso8601)", ...refs],
|
||||
{ readOnly: true },
|
||||
);
|
||||
|
||||
const recentBranches: string[] = [];
|
||||
const lines = stdout.split("\n").filter(Boolean);
|
||||
|
||||
for (const line of lines) {
|
||||
const [branch, dateStr] = line.split("|");
|
||||
if (!branch || !dateStr) continue;
|
||||
|
||||
const commitDate = new Date(dateStr);
|
||||
if (commitDate >= since) {
|
||||
// Keep the full branch name including origin/ prefix
|
||||
// This allows cross-branch checking to distinguish local vs remote
|
||||
if (!recentBranches.includes(branch)) {
|
||||
recentBranches.push(branch);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return recentBranches;
|
||||
} catch {
|
||||
// Fallback to all branches if the command fails
|
||||
return this.listAllBranches();
|
||||
}
|
||||
}
|
||||
|
||||
async listLocalBranches(): Promise<string[]> {
|
||||
try {
|
||||
const { stdout } = await this.execGit(["branch", "--format=%(refname:short)"], { readOnly: true });
|
||||
return stdout
|
||||
.split("\n")
|
||||
.map((l) => l.trim())
|
||||
.filter(Boolean);
|
||||
} catch {
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
async listAllBranches(_remote = "origin"): Promise<string[]> {
|
||||
try {
|
||||
// Use -a flag only if remote operations are enabled
|
||||
const branchArgs =
|
||||
this.config?.remoteOperations === false
|
||||
? ["branch", "--format=%(refname:short)"]
|
||||
: ["branch", "-a", "--format=%(refname:short)"];
|
||||
|
||||
const { stdout } = await this.execGit(branchArgs, { readOnly: true });
|
||||
return stdout
|
||||
.split("\n")
|
||||
.map((l) => l.trim())
|
||||
.filter(Boolean)
|
||||
.filter((b) => !b.includes("HEAD"));
|
||||
} catch {
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Returns true if the current repository has any remotes configured
|
||||
*/
|
||||
async hasAnyRemote(): Promise<boolean> {
|
||||
try {
|
||||
const { stdout } = await this.execGit(["remote"], { readOnly: true });
|
||||
return (
|
||||
stdout
|
||||
.split("\n")
|
||||
.map((s) => s.trim())
|
||||
.filter(Boolean).length > 0
|
||||
);
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Returns true if a specific remote exists (default: origin)
|
||||
*/
|
||||
async hasRemote(remote = "origin"): Promise<boolean> {
|
||||
try {
|
||||
const { stdout } = await this.execGit(["remote"], { readOnly: true });
|
||||
return stdout.split("\n").some((r) => r.trim() === remote);
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
async listFilesInTree(ref: string, path: string): Promise<string[]> {
|
||||
const { stdout } = await this.execGit(["ls-tree", "-r", "--name-only", "-z", ref, "--", path], { readOnly: true });
|
||||
return stdout.split("\0").filter(Boolean);
|
||||
}
|
||||
async showFile(ref: string, filePath: string): Promise<string> {
|
||||
const { stdout } = await this.execGit(["show", `${ref}:${filePath}`], { readOnly: true });
|
||||
return stdout;
|
||||
}
|
||||
/**
|
||||
* Build a map of file -> last modified date for all files in a directory in one git log pass
|
||||
* Much more efficient than individual getFileLastModifiedTime calls
|
||||
* Returns a Map of filePath -> Date
|
||||
*/
|
||||
async getBranchLastModifiedMap(ref: string, dir: string, sinceDays?: number): Promise<Map<string, Date>> {
|
||||
const out = new Map<string, Date>();
|
||||
|
||||
try {
|
||||
// Build args with optional --since filter
|
||||
const args = [
|
||||
"log",
|
||||
"--pretty=format:%ct%x00", // Unix timestamp + NUL for bulletproof parsing
|
||||
"--name-only",
|
||||
"-z", // Null-delimited for safety
|
||||
];
|
||||
|
||||
if (sinceDays) {
|
||||
args.push(`--since=${sinceDays}.days`);
|
||||
}
|
||||
|
||||
args.push(ref, "--", dir);
|
||||
|
||||
// Null-delimited to be safe with filenames
|
||||
const { stdout } = await this.execGit(args, { readOnly: true });
|
||||
|
||||
// Parse null-delimited output
|
||||
// Format is: timestamp\0 file1\0 file2\0 ... timestamp\0 file1\0 ...
|
||||
const parts = stdout.split("\0").filter(Boolean);
|
||||
let i = 0;
|
||||
|
||||
while (i < parts.length) {
|
||||
const timestampStr = parts[i]?.trim();
|
||||
if (timestampStr && /^\d+$/.test(timestampStr)) {
|
||||
// This is a timestamp, files follow until next timestamp
|
||||
const epoch = Number(timestampStr);
|
||||
const date = new Date(epoch * 1000);
|
||||
i++;
|
||||
|
||||
// Process files until we hit another timestamp or end
|
||||
// Check if next part looks like a timestamp (digits only)
|
||||
while (i < parts.length && parts[i] && !/^\d+$/.test(parts[i]?.trim() || "")) {
|
||||
const file = parts[i]?.trim();
|
||||
// First time we see a file is its last modification
|
||||
if (file && !out.has(file)) {
|
||||
out.set(file, date);
|
||||
}
|
||||
i++;
|
||||
}
|
||||
} else {
|
||||
// Skip unexpected content
|
||||
i++;
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
// If the command fails, return empty map
|
||||
console.error(`Failed to get branch last modified map for ${ref}:${dir}`, error);
|
||||
}
|
||||
|
||||
return out;
|
||||
}
|
||||
|
||||
async getFileLastModifiedBranch(filePath: string): Promise<string | null> {
|
||||
try {
|
||||
// Get the hash of the last commit that touched the file
|
||||
const { stdout: commitHash } = await this.execGit(["log", "-1", "--format=%H", "--", filePath], {
|
||||
readOnly: true,
|
||||
});
|
||||
if (!commitHash) return null;
|
||||
|
||||
// Find all branches that contain this commit
|
||||
const { stdout: branches } = await this.execGit([
|
||||
"branch",
|
||||
"-a",
|
||||
"--contains",
|
||||
commitHash.trim(),
|
||||
"--format=%(refname:short)",
|
||||
]);
|
||||
|
||||
if (!branches) return "main"; // Default to main if no specific branch found
|
||||
|
||||
// Prefer non-remote branches and 'main' or 'master'
|
||||
const branchList = branches
|
||||
.split("\n")
|
||||
.map((b) => b.trim())
|
||||
.filter(Boolean);
|
||||
const mainBranch = branchList.find((b) => b === "main" || b === "master");
|
||||
if (mainBranch) return mainBranch;
|
||||
|
||||
const nonRemote = branchList.find((b) => !b.startsWith("remotes/"));
|
||||
return nonRemote || branchList[0] || "main";
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
private async execGit(args: string[], options?: { readOnly?: boolean }): Promise<{ stdout: string; stderr: string }> {
|
||||
// Use Bun.spawn so we can explicitly control stdio behaviour on Windows. When running
|
||||
// under the MCP stdio transport, delegating to git with inherited stdin can deadlock.
|
||||
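// GIT_OPTIONAL_LOCKS=0 keeps read-only commands from taking optional locks that could collide with concurrent git processes.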
const env = options?.readOnly
|
||||
? ({ ...process.env, GIT_OPTIONAL_LOCKS: "0" } as Record<string, string>)
|
||||
: (process.env as Record<string, string>);
|
||||
|
||||
const subprocess = Bun.spawn(["git", ...args], {
|
||||
cwd: this.projectRoot,
|
||||
stdin: "ignore", // avoid inheriting MCP stdio pipes which can block on Windows
|
||||
stdout: "pipe",
|
||||
stderr: "pipe",
|
||||
env,
|
||||
});
|
||||
|
||||
const stdoutPromise = subprocess.stdout ? new Response(subprocess.stdout).text() : Promise.resolve("");
|
||||
const stderrPromise = subprocess.stderr ? new Response(subprocess.stderr).text() : Promise.resolve("");
|
||||
const [exitCode, stdout, stderr] = await Promise.all([subprocess.exited, stdoutPromise, stderrPromise]);
|
||||
|
||||
if (exitCode !== 0) {
|
||||
throw new Error(`Git command failed (exit code ${exitCode}): git ${args.join(" ")}\n${stderr}`);
|
||||
}
|
||||
|
||||
return { stdout, stderr };
|
||||
}
|
||||
}
|
||||
|
||||
export async function isGitRepository(projectRoot: string): Promise<boolean> {
|
||||
try {
|
||||
await $`git rev-parse --git-dir`.cwd(projectRoot).quiet();
|
||||
return true;
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
export async function initializeGitRepository(projectRoot: string): Promise<void> {
|
||||
try {
|
||||
await $`git init`.cwd(projectRoot).quiet();
|
||||
} catch (error) {
|
||||
throw new Error(`Failed to initialize git repository: ${error}`);
|
||||
}
|
||||
}
@@ -0,0 +1,529 @@
# Instructions for the usage of Backlog.md CLI Tool
|
||||
|
||||
## Backlog.md: Comprehensive Project Management Tool via CLI
|
||||
|
||||
### Assistant Objective
|
||||
|
||||
Efficiently manage all project tasks, status, and documentation using the Backlog.md CLI, ensuring all project metadata
|
||||
remains fully synchronized and up-to-date.
|
||||
|
||||
### Core Capabilities
|
||||
|
||||
- ✅ **Task Management**: Create, edit, assign, prioritize, and track tasks with full metadata
|
||||
- ✅ **Search**: Fuzzy search across tasks, documents, and decisions with `backlog search`
|
||||
- ✅ **Acceptance Criteria**: Granular control with add/remove/check/uncheck by index
|
||||
- ✅ **Board Visualization**: Terminal-based Kanban board (`backlog board`) and web UI (`backlog browser`)
|
||||
- ✅ **Git Integration**: Automatic tracking of task states across branches
|
||||
- ✅ **Dependencies**: Task relationships and subtask hierarchies
|
||||
- ✅ **Documentation & Decisions**: Structured docs and architectural decision records
|
||||
- ✅ **Export & Reporting**: Generate markdown reports and board snapshots
|
||||
- ✅ **AI-Optimized**: `--plain` flag provides clean text output for AI processing
|
||||
|
||||
### Why This Matters to You (AI Agent)
|
||||
|
||||
1. **Comprehensive system** - Full project management capabilities through CLI
|
||||
2. **The CLI is the interface** - All operations go through `backlog` commands
|
||||
3. **Unified interaction model** - You can use CLI for both reading (`backlog task 1 --plain`) and writing (
|
||||
`backlog task edit 1`)
|
||||
4. **Metadata stays synchronized** - The CLI handles all the complex relationships
|
||||
|
||||
### Key Understanding
|
||||
|
||||
- **Tasks** live in `backlog/tasks/` as `task-<id> - <title>.md` files
|
||||
- **You interact via CLI only**: `backlog task create`, `backlog task edit`, etc.
|
||||
- **Use `--plain` flag** for AI-friendly output when viewing/listing
|
||||
- **Never bypass the CLI** - It handles Git, metadata, file naming, and relationships
|
||||
|
||||
---
|
||||
|
||||
# ⚠️ CRITICAL: NEVER EDIT TASK FILES DIRECTLY. Edit Only via CLI
|
||||
|
||||
**ALL task operations MUST use the Backlog.md CLI commands**
|
||||
|
||||
- ✅ **DO**: Use `backlog task edit` and other CLI commands
|
||||
- ✅ **DO**: Use `backlog task create` to create new tasks
|
||||
- ✅ **DO**: Use `backlog task edit <id> --check-ac <index>` to mark acceptance criteria
|
||||
- ❌ **DON'T**: Edit markdown files directly
|
||||
- ❌ **DON'T**: Manually change checkboxes in files
|
||||
- ❌ **DON'T**: Add or modify text in task files without using CLI
|
||||
|
||||
**Why?** Direct file editing breaks metadata synchronization, Git tracking, and task relationships.
|
||||
|
||||
---
|
||||
|
||||
## 1. Source of Truth & File Structure
|
||||
|
||||
### 📖 **UNDERSTANDING** (What you'll see when reading)
|
||||
|
||||
- Markdown task files live under **`backlog/tasks/`** (drafts under **`backlog/drafts/`**)
|
||||
- Files are named: `task-<id> - <title>.md` (e.g., `task-42 - Add GraphQL resolver.md`)
|
||||
- Project documentation is in **`backlog/docs/`**
|
||||
- Project decisions are in **`backlog/decisions/`**
|
||||
|
||||
### 🔧 **ACTING** (How to change things)
|
||||
|
||||
- **All task operations MUST use the Backlog.md CLI tool**
|
||||
- This ensures metadata is correctly updated and the project stays in sync
|
||||
- **Always use `--plain` flag** when listing or viewing tasks for AI-friendly text output
|
||||
|
||||
---
|
||||
|
||||
## 2. Common Mistakes to Avoid
|
||||
|
||||
### ❌ **WRONG: Direct File Editing**
|
||||
|
||||
```markdown
|
||||
# DON'T DO THIS:
|
||||
|
||||
1. Open backlog/tasks/task-7 - Feature.md in editor
|
||||
2. Change "- [ ]" to "- [x]" manually
|
||||
3. Add notes directly to the file
|
||||
4. Save the file
|
||||
```
|
||||
|
||||
### ✅ **CORRECT: Using CLI Commands**
|
||||
|
||||
```bash
|
||||
# DO THIS INSTEAD:
|
||||
backlog task edit 7 --check-ac 1 # Mark AC #1 as complete
|
||||
backlog task edit 7 --notes "Implementation complete" # Add notes
|
||||
backlog task edit 7 -s "In Progress" -a @agent-k # Multiple commands: change status and assign the task when you start working on the task
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 3. Understanding Task Format (Read-Only Reference)
|
||||
|
||||
⚠️ **FORMAT REFERENCE ONLY** - The following sections show what you'll SEE in task files.
|
||||
**Never edit these directly! Use CLI commands to make changes.**
|
||||
|
||||
### Task Structure You'll See
|
||||
|
||||
```markdown
|
||||
---
|
||||
id: task-42
|
||||
title: Add GraphQL resolver
|
||||
status: To Do
|
||||
assignee: [@sara]
|
||||
labels: [backend, api]
|
||||
---
|
||||
|
||||
## Description
|
||||
|
||||
Brief explanation of the task purpose.
|
||||
|
||||
## Acceptance Criteria
|
||||
|
||||
<!-- AC:BEGIN -->
|
||||
|
||||
- [ ] #1 First criterion
|
||||
- [x] #2 Second criterion (completed)
|
||||
- [ ] #3 Third criterion
|
||||
|
||||
<!-- AC:END -->
|
||||
|
||||
## Implementation Plan
|
||||
|
||||
1. Research approach
|
||||
2. Implement solution
|
||||
|
||||
## Implementation Notes
|
||||
|
||||
Summary of what was done.
|
||||
```
|
||||
|
||||
### How to Modify Each Section
|
||||
|
||||
| What You Want to Change | CLI Command to Use |
|
||||
|-------------------------|----------------------------------------------------------|
|
||||
| Title | `backlog task edit 42 -t "New Title"` |
|
||||
| Status | `backlog task edit 42 -s "In Progress"` |
|
||||
| Assignee | `backlog task edit 42 -a @sara` |
|
||||
| Labels | `backlog task edit 42 -l backend,api` |
|
||||
| Description | `backlog task edit 42 -d "New description"` |
|
||||
| Add AC | `backlog task edit 42 --ac "New criterion"` |
|
||||
| Check AC #1 | `backlog task edit 42 --check-ac 1` |
|
||||
| Uncheck AC #2 | `backlog task edit 42 --uncheck-ac 2` |
|
||||
| Remove AC #3 | `backlog task edit 42 --remove-ac 3` |
|
||||
| Add Plan | `backlog task edit 42 --plan "1. Step one\n2. Step two"` |
|
||||
| Add Notes (replace) | `backlog task edit 42 --notes "What I did"` |
|
||||
| Append Notes | `backlog task edit 42 --append-notes "Another note"` |
|
||||
|
||||
---
|
||||
|
||||
## 4. Defining Tasks
|
||||
|
||||
### Creating New Tasks
|
||||
|
||||
**Always use CLI to create tasks:**
|
||||
|
||||
```bash
|
||||
# Example
|
||||
backlog task create "Task title" -d "Description" --ac "First criterion" --ac "Second criterion"
|
||||
```
|
||||
|
||||
### Title (one liner)
|
||||
|
||||
Use a clear brief title that summarizes the task.
|
||||
|
||||
### Description (The "why")
|
||||
|
||||
Provide a concise summary of the task purpose and its goal. Explains the context without implementation details.
|
||||
|
||||
### Acceptance Criteria (The "what")
|
||||
|
||||
**Understanding the Format:**
|
||||
|
||||
- Acceptance criteria appear as numbered checkboxes in the markdown files
|
||||
- Format: `- [ ] #1 Criterion text` (unchecked) or `- [x] #1 Criterion text` (checked)
|
||||
|
||||
**Managing Acceptance Criteria via CLI:**
|
||||
|
||||
⚠️ **IMPORTANT: How AC Commands Work**
|
||||
|
||||
- **Adding criteria (`--ac`)** accepts multiple flags: `--ac "First" --ac "Second"` ✅
|
||||
- **Checking/unchecking/removing** accept multiple flags too: `--check-ac 1 --check-ac 2` ✅
|
||||
- **Mixed operations** work in a single command: `--check-ac 1 --uncheck-ac 2 --remove-ac 3` ✅
|
||||
|
||||
```bash
|
||||
# Examples
|
||||
|
||||
# Add new criteria (MULTIPLE values allowed)
|
||||
backlog task edit 42 --ac "User can login" --ac "Session persists"
|
||||
|
||||
# Check specific criteria by index (MULTIPLE values supported)
|
||||
backlog task edit 42 --check-ac 1 --check-ac 2 --check-ac 3 # Check multiple ACs
|
||||
# Or check them individually if you prefer:
|
||||
backlog task edit 42 --check-ac 1 # Mark #1 as complete
|
||||
backlog task edit 42 --check-ac 2 # Mark #2 as complete
|
||||
|
||||
# Mixed operations in single command
|
||||
backlog task edit 42 --check-ac 1 --uncheck-ac 2 --remove-ac 3
|
||||
|
||||
# ❌ STILL WRONG - These formats don't work:
|
||||
# backlog task edit 42 --check-ac 1,2,3 # No comma-separated values
|
||||
# backlog task edit 42 --check-ac 1-3 # No ranges
|
||||
# backlog task edit 42 --check 1 # Wrong flag name
|
||||
|
||||
# Multiple operations of same type
|
||||
backlog task edit 42 --uncheck-ac 1 --uncheck-ac 2 # Uncheck multiple ACs
|
||||
backlog task edit 42 --remove-ac 2 --remove-ac 4 # Remove multiple ACs (processed high-to-low)
|
||||
```
|
||||
|
||||
**Key Principles for Good ACs:**
|
||||
|
||||
- **Outcome-Oriented:** Focus on the result, not the method.
|
||||
- **Testable/Verifiable:** Each criterion should be objectively testable
|
||||
- **Clear and Concise:** Unambiguous language
|
||||
- **Complete:** Collectively cover the task scope
|
||||
- **User-Focused:** Frame from end-user or system behavior perspective
|
||||
|
||||
Good Examples:
|
||||
|
||||
- "User can successfully log in with valid credentials"
|
||||
- "System processes 1000 requests per second without errors"
|
||||
- "CLI preserves literal newlines in description/plan/notes; `\\n` sequences are not auto‑converted"
|
||||
|
||||
Bad Examples (Implementation Steps):
|
||||
|
||||
- "Add a new function handleLogin() in auth.ts"
|
||||
- "Define expected behavior and document supported input patterns"
|
||||
|
||||
### Task Breakdown Strategy
|
||||
|
||||
1. Identify foundational components first
|
||||
2. Create tasks in dependency order (foundations before features; see the sketch after this list)
|
||||
3. Ensure each task delivers value independently
|
||||
4. Avoid creating tasks that block each other
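
For example, a foundational task can be created before the feature that builds on it, with the relationship recorded as a dependency (a sketch; the task IDs and titles are illustrative):

```bash
# Foundation first; suppose this becomes task-10
backlog task create "Define storage schema" -d "Foundation for the export feature"

# Feature second; suppose this becomes task-11, then record the dependency
backlog task create "Add export endpoint" -d "Builds on the storage schema"
backlog task edit 11 --dep task-10
```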
|
||||
|
||||
### Task Requirements
|
||||
|
||||
- Tasks must be **atomic** and **testable** or **verifiable**
|
||||
- Each task should represent a single unit of work for one PR
|
||||
- **Never** reference future tasks (only tasks with id < current task id)
|
||||
- Ensure tasks are **independent** and don't depend on future work
|
||||
|
||||
---
|
||||
|
||||
## 5. Implementing Tasks
|
||||
|
||||
### 5.1. First step when implementing a task
|
||||
|
||||
The very first things you must do when you take over a task are:
|
||||
|
||||
* set the task in progress
|
||||
* assign it to yourself
|
||||
|
||||
```bash
|
||||
# Example
|
||||
backlog task edit 42 -s "In Progress" -a @{myself}
|
||||
```
|
||||
|
||||
### 5.2. Create an Implementation Plan (The "how")
|
||||
|
||||
Previously created tasks contain the why and the what. Once you are familiar with that part, think about a plan for **HOW** to tackle the task and all its acceptance criteria. This is your **Implementation Plan**.
First, do a quick check that all the tools you are planning to use are available in the environment you are working in.
When you are ready, write the plan down in the task so that you can refer to it later.
|
||||
|
||||
```bash
|
||||
# Example
|
||||
backlog task edit 42 --plan "1. Research codebase for references\n2Research on internet for similar cases\n3. Implement\n4. Test"
|
||||
```
|
||||
|
||||
### 5.3. Implementation
|
||||
|
||||
Once you have a plan, you can start implementing the task. This is where you write code, run tests, and make sure
|
||||
everything works as expected. Follow the acceptance criteria one by one and MARK THEM AS COMPLETE as soon as you
|
||||
finish them.
|
||||
|
||||
### 5.4. Implementation Notes (PR description)
|
||||
|
||||
When you are done implementing a task, you need to prepare a PR description for it.
|
||||
Because you cannot create PRs directly, write the PR as a clean description in the task notes.
|
||||
Append notes progressively during implementation using `--append-notes`:
|
||||
|
||||
```bash
|
||||
backlog task edit 42 --append-notes "Implemented X" --append-notes "Added tests"
|
||||
```
|
||||
|
||||
```bash
|
||||
# Example
|
||||
backlog task edit 42 --notes "Implemented using pattern X because Reason Y, modified files Z and W"
|
||||
```
|
||||
|
||||
**IMPORTANT**: Do NOT include an Implementation Plan when creating a task. The plan is added only after you start the
|
||||
implementation.
|
||||
|
||||
- Creation phase: provide Title, Description, Acceptance Criteria, and optionally labels/priority/assignee.
|
||||
- When you begin work, switch to edit, set the task in progress and assign to yourself
|
||||
`backlog task edit <id> -s "In Progress" -a "..."`.
|
||||
- Think about how you would solve the task and add the plan: `backlog task edit <id> --plan "..."`.
|
||||
- After updating the plan, share it with the user and ask for confirmation. Do not begin coding until the user approves the plan or explicitly tells you to skip the review.
|
||||
- Add Implementation Notes only after completing the work: `backlog task edit <id> --notes "..."` (replace) or append progressively using `--append-notes`.
|
||||
|
||||
## Phase discipline: What goes where
|
||||
|
||||
- Creation: Title, Description, Acceptance Criteria, labels/priority/assignee.
|
||||
- Implementation: Implementation Plan (after moving to In Progress and assigning to yourself).
|
||||
- Wrap-up: Implementation Notes (Like a PR description), AC and Definition of Done checks.
|
||||
|
||||
**IMPORTANT**: Only implement what's in the Acceptance Criteria. If you need to do more, either:
|
||||
|
||||
1. Update the AC first: `backlog task edit 42 --ac "New requirement"`
|
||||
2. Or create a new follow up task: `backlog task create "Additional feature"`
|
||||
|
||||
---
|
||||
|
||||
## 6. Typical Workflow
|
||||
|
||||
```bash
|
||||
# 1. Identify work
|
||||
backlog task list -s "To Do" --plain
|
||||
|
||||
# 2. Read task details
|
||||
backlog task 42 --plain
|
||||
|
||||
# 3. Start work: assign yourself & change status
|
||||
backlog task edit 42 -s "In Progress" -a @myself
|
||||
|
||||
# 4. Add implementation plan
|
||||
backlog task edit 42 --plan "1. Analyze\n2. Refactor\n3. Test"
|
||||
|
||||
# 5. Share the plan with the user and wait for approval (do not write code yet)
|
||||
|
||||
# 6. Work on the task (write code, test, etc.)
|
||||
|
||||
# 7. Mark acceptance criteria as complete (supports multiple in one command)
|
||||
backlog task edit 42 --check-ac 1 --check-ac 2 --check-ac 3 # Check all at once
|
||||
# Or check them individually if preferred:
|
||||
# backlog task edit 42 --check-ac 1
|
||||
# backlog task edit 42 --check-ac 2
|
||||
# backlog task edit 42 --check-ac 3
|
||||
|
||||
# 8. Add implementation notes (PR Description)
|
||||
backlog task edit 42 --notes "Refactored using strategy pattern, updated tests"
|
||||
|
||||
# 9. Mark task as done
|
||||
backlog task edit 42 -s Done
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 7. Definition of Done (DoD)
|
||||
|
||||
A task is **Done** only when **ALL** of the following are complete:
|
||||
|
||||
### ✅ Via CLI Commands:
|
||||
|
||||
1. **All acceptance criteria checked**: Use `backlog task edit <id> --check-ac <index>` for each
|
||||
2. **Implementation notes added**: Use `backlog task edit <id> --notes "..."`
|
||||
3. **Status set to Done**: Use `backlog task edit <id> -s Done`
|
||||
|
||||
### ✅ Via Code/Testing:
|
||||
|
||||
4. **Tests pass**: Run test suite and linting
|
||||
5. **Documentation updated**: Update relevant docs if needed
|
||||
6. **Code reviewed**: Self-review your changes
|
||||
7. **No regressions**: Performance, security checks pass
|
||||
|
||||
⚠️ **NEVER mark a task as Done without completing ALL items above**
|
||||
|
||||
---
|
||||
|
||||
## 8. Finding Tasks and Content with Search
|
||||
|
||||
When users ask you to find tasks related to a topic, use the `backlog search` command with `--plain` flag:
|
||||
|
||||
```bash
|
||||
# Search for tasks about authentication
|
||||
backlog search "auth" --plain
|
||||
|
||||
# Search only in tasks (not docs/decisions)
|
||||
backlog search "login" --type task --plain
|
||||
|
||||
# Search with filters
|
||||
backlog search "api" --status "In Progress" --plain
|
||||
backlog search "bug" --priority high --plain
|
||||
```
|
||||
|
||||
**Key points:**
|
||||
- Uses fuzzy matching - finds "authentication" when searching "auth"
|
||||
- Searches task titles, descriptions, and content
|
||||
- Also searches documents and decisions unless filtered with `--type task`
|
||||
- Always use `--plain` flag for AI-readable output
|
||||
|
||||
---
|
||||
|
||||
## 9. Quick Reference: DO vs DON'T
|
||||
|
||||
### Viewing and Finding Tasks
|
||||
|
||||
| Task | ✅ DO | ❌ DON'T |
|
||||
|--------------|-----------------------------|---------------------------------|
|
||||
| View task | `backlog task 42 --plain` | Open and read .md file directly |
|
||||
| List tasks | `backlog task list --plain` | Browse backlog/tasks folder |
|
||||
| Check status | `backlog task 42 --plain` | Look at file content |
|
||||
| Find by topic| `backlog search "auth" --plain` | Manually grep through files |
|
||||
|
||||
### Modifying Tasks
|
||||
|
||||
| Task | ✅ DO | ❌ DON'T |
|
||||
|---------------|--------------------------------------|-----------------------------------|
|
||||
| Check AC | `backlog task edit 42 --check-ac 1` | Change `- [ ]` to `- [x]` in file |
|
||||
| Add notes | `backlog task edit 42 --notes "..."` | Type notes into .md file |
|
||||
| Change status | `backlog task edit 42 -s Done` | Edit status in frontmatter |
|
||||
| Add AC | `backlog task edit 42 --ac "New"` | Add `- [ ] New` to file |
|
||||
|
||||
---
|
||||
|
||||
## 10. Complete CLI Command Reference
|
||||
|
||||
### Task Creation
|
||||
|
||||
| Action | Command |
|
||||
|------------------|-------------------------------------------------------------------------------------|
|
||||
| Create task | `backlog task create "Title"` |
|
||||
| With description | `backlog task create "Title" -d "Description"` |
|
||||
| With AC | `backlog task create "Title" --ac "Criterion 1" --ac "Criterion 2"` |
|
||||
| With all options | `backlog task create "Title" -d "Desc" -a @sara -s "To Do" -l auth --priority high` |
|
||||
| Create draft | `backlog task create "Title" --draft` |
|
||||
| Create subtask | `backlog task create "Title" -p 42` |
|
||||
|
||||
### Task Modification
|
||||
|
||||
| Action | Command |
|
||||
|------------------|---------------------------------------------|
|
||||
| Edit title | `backlog task edit 42 -t "New Title"` |
|
||||
| Edit description | `backlog task edit 42 -d "New description"` |
|
||||
| Change status | `backlog task edit 42 -s "In Progress"` |
|
||||
| Assign | `backlog task edit 42 -a @sara` |
|
||||
| Add labels | `backlog task edit 42 -l backend,api` |
|
||||
| Set priority | `backlog task edit 42 --priority high` |
|
||||
|
||||
### Acceptance Criteria Management
|
||||
|
||||
| Action | Command |
|
||||
|---------------------|-----------------------------------------------------------------------------|
|
||||
| Add AC | `backlog task edit 42 --ac "New criterion" --ac "Another"` |
|
||||
| Remove AC #2 | `backlog task edit 42 --remove-ac 2` |
|
||||
| Remove multiple ACs | `backlog task edit 42 --remove-ac 2 --remove-ac 4` |
|
||||
| Check AC #1 | `backlog task edit 42 --check-ac 1` |
|
||||
| Check multiple ACs | `backlog task edit 42 --check-ac 1 --check-ac 3` |
|
||||
| Uncheck AC #3 | `backlog task edit 42 --uncheck-ac 3` |
|
||||
| Mixed operations | `backlog task edit 42 --check-ac 1 --uncheck-ac 2 --remove-ac 3 --ac "New"` |
|
||||
|
||||
### Task Content
|
||||
|
||||
| Action | Command |
|
||||
|------------------|----------------------------------------------------------|
|
||||
| Add plan | `backlog task edit 42 --plan "1. Step one\n2. Step two"` |
|
||||
| Add notes | `backlog task edit 42 --notes "Implementation details"` |
|
||||
| Add dependencies | `backlog task edit 42 --dep task-1 --dep task-2` |
|
||||
|
||||
### Multi‑line Input (Description/Plan/Notes)
|
||||
|
||||
The CLI preserves input literally. Shells do not convert `\n` inside normal quotes. Use one of the following to insert real newlines:
|
||||
|
||||
- Bash/Zsh (ANSI‑C quoting):
|
||||
- Description: `backlog task edit 42 --desc $'Line1\nLine2\n\nFinal'`
|
||||
- Plan: `backlog task edit 42 --plan $'1. A\n2. B'`
|
||||
- Notes: `backlog task edit 42 --notes $'Done A\nDoing B'`
|
||||
- Append notes: `backlog task edit 42 --append-notes $'Progress update line 1\nLine 2'`
|
||||
- POSIX portable (printf):
|
||||
- `backlog task edit 42 --notes "$(printf 'Line1\nLine2')"`
|
||||
- PowerShell (backtick n):
|
||||
- ``backlog task edit 42 --notes "Line1`nLine2"``
|
||||
|
||||
Do not expect `"...\n..."` to become a newline. That passes the literal backslash + n to the CLI by design.
|
||||
|
||||
Descriptions support literal newlines; where examples show an escaped `\\n`, use one of the quoting techniques above to pass a real newline to the CLI.
|
||||
|
||||
### Implementation Notes Formatting
|
||||
|
||||
- Keep implementation notes human-friendly and PR-ready: use short paragraphs or
|
||||
bullet lists instead of a single long line.
|
||||
- Lead with the outcome, then add supporting details (e.g., testing, follow-up
|
||||
actions) on separate lines or bullets.
|
||||
- Prefer Markdown bullets (`-` for unordered, `1.` for ordered) so maintainers
|
||||
can paste notes straight into GitHub without additional formatting.
|
||||
- When using CLI flags like `--append-notes`, remember to include explicit
|
||||
newlines. Example:
|
||||
|
||||
```bash
|
||||
backlog task edit 42 --append-notes $'- Added new API endpoint\n- Updated tests\n- TODO: monitor staging deploy'
|
||||
```
|
||||
|
||||
### Task Operations
|
||||
|
||||
| Action | Command |
|
||||
|--------------------|----------------------------------------------|
|
||||
| View task | `backlog task 42 --plain` |
|
||||
| List tasks | `backlog task list --plain` |
|
||||
| Search tasks | `backlog search "topic" --plain` |
|
||||
| Search with filter | `backlog search "api" --status "To Do" --plain` |
|
||||
| Filter by status | `backlog task list -s "In Progress" --plain` |
|
||||
| Filter by assignee | `backlog task list -a @sara --plain` |
|
||||
| Archive task | `backlog task archive 42` |
|
||||
| Demote to draft | `backlog task demote 42` |
|
||||
|
||||
---
|
||||
|
||||
## Common Issues
|
||||
|
||||
| Problem | Solution |
|
||||
|----------------------|--------------------------------------------------------------------|
|
||||
| Task not found | Check task ID with `backlog task list --plain` |
|
||||
| AC won't check | Use correct index: `backlog task 42 --plain` to see AC numbers |
|
||||
| Changes not saving | Ensure you're using CLI, not editing files |
|
||||
| Metadata out of sync | Re-edit via CLI to fix: `backlog task edit 42 -s <current-status>` |
|
||||
|
||||
---
|
||||
|
||||
## Remember: The Golden Rule
|
||||
|
||||
**🎯 If you want to change ANYTHING in a task, use the `backlog task edit` command.**
|
||||
**📖 Use the CLI to read tasks; only READ task files directly in exceptional cases, and never WRITE to them.**
|
||||
|
||||
Full help available: `backlog --help`
@@ -0,0 +1,12 @@
import agentGuidelinesContent from "./agent-guidelines.md" with { type: "text" };
|
||||
import mcpAgentNudgeContent from "./mcp/agent-nudge.md" with { type: "text" };
|
||||
import claudeAgentContent from "./project-manager-backlog.md" with { type: "text" };
|
||||
|
||||
export const AGENT_GUIDELINES = agentGuidelinesContent;
|
||||
export const CLAUDE_GUIDELINES = agentGuidelinesContent;
|
||||
export const CURSOR_GUIDELINES = agentGuidelinesContent;
|
||||
export const GEMINI_GUIDELINES = agentGuidelinesContent;
|
||||
export const COPILOT_GUIDELINES = agentGuidelinesContent;
|
||||
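// Strip the original top-level heading so the guidelines nest under the README's own "## AI Agent Guidelines" section.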
export const README_GUIDELINES = `## AI Agent Guidelines\n\n${agentGuidelinesContent.replace(/^#.*\n/, "")}`;
|
||||
export const CLAUDE_AGENT_CONTENT = claudeAgentContent;
|
||||
export const MCP_AGENT_NUDGE = mcpAgentNudgeContent;
@@ -0,0 +1,25 @@
|
||||
<CRITICAL_INSTRUCTION>
|
||||
|
||||
## BACKLOG WORKFLOW INSTRUCTIONS
|
||||
|
||||
This project uses Backlog.md MCP for all task and project management activities.
|
||||
|
||||
**CRITICAL GUIDANCE**
|
||||
|
||||
- If your client supports MCP resources, read `backlog://workflow/overview` to understand when and how to use Backlog for this project.
|
||||
- If your client only supports tools, or the above request fails, call the `backlog.get_workflow_overview()` tool to load the tool-oriented overview (it lists the matching guide tools).
|
||||
|
||||
- **First time working here?** Read the overview resource IMMEDIATELY to learn the workflow
|
||||
- **Already familiar?** You should have the overview cached ("## Backlog.md Overview (MCP)")
|
||||
- **When to read it**: BEFORE creating tasks, or when you're unsure whether to track work
|
||||
|
||||
These guides cover:
|
||||
- Decision framework for when to create tasks
|
||||
- Search-first workflow to avoid duplicates
|
||||
- Links to detailed guides for task creation, execution, and completion
|
||||
- MCP tools reference
|
||||
|
||||
You MUST read the overview resource to understand the complete workflow. The information is NOT summarized here.
|
||||
|
||||
</CRITICAL_INSTRUCTION>
@@ -0,0 +1,13 @@
import initRequired from "./init-required.md" with { type: "text" };
|
||||
import overviewResources from "./overview.md" with { type: "text" };
|
||||
import overviewTools from "./overview-tools.md" with { type: "text" };
|
||||
import taskCompletion from "./task-completion.md" with { type: "text" };
|
||||
import taskCreation from "./task-creation.md" with { type: "text" };
|
||||
import taskExecution from "./task-execution.md" with { type: "text" };
|
||||
|
||||
export const MCP_WORKFLOW_OVERVIEW = overviewResources.trim();
|
||||
export const MCP_WORKFLOW_OVERVIEW_TOOLS = overviewTools.trim();
|
||||
export const MCP_TASK_CREATION_GUIDE = taskCreation.trim();
|
||||
export const MCP_TASK_EXECUTION_GUIDE = taskExecution.trim();
|
||||
export const MCP_TASK_COMPLETION_GUIDE = taskCompletion.trim();
|
||||
export const MCP_INIT_REQUIRED_GUIDE = initRequired.trim();
@@ -0,0 +1,24 @@
# Backlog.md Not Initialized
|
||||
|
||||
This directory does not have Backlog.md initialized.
|
||||
|
||||
**To set up task management for this project, run:**
|
||||
|
||||
```bash
|
||||
backlog init
|
||||
```
|
||||
|
||||
This will create the necessary `backlog/` directory structure and configuration file.
|
||||
|
||||
## What is Backlog.md?
|
||||
|
||||
Backlog.md is a task management system that uses markdown files to track features, bugs, and structured work. It integrates with AI coding agents to help you manage your project tasks effectively.
|
||||
|
||||
## Next Steps
|
||||
|
||||
1. Run `backlog init` in your project directory
|
||||
2. Follow the interactive setup prompts
|
||||
3. Choose your preferred AI agent integration (Claude Code, Codex, or Gemini)
|
||||
4. Start creating and managing tasks!
|
||||
|
||||
For more information, visit: https://backlog.md
@@ -0,0 +1,52 @@
## Backlog.md Overview (Tools)
|
||||
|
||||
Your client is using Backlog.md via tools. Use the following MCP tools to retrieve guidance and manage tasks.
|
||||
|
||||
### When to Use Backlog
|
||||
|
||||
**Create a task if the work requires planning or decision-making.** Ask yourself: "Do I need to think about HOW to do this?"
|
||||
|
||||
- **YES** → Search for existing task first, create if needed
|
||||
- **NO** → Just do it (the change is trivial/mechanical)
|
||||
|
||||
**Examples of work that needs tasks:**
|
||||
- "Fix the authentication bug" → need to investigate, understand root cause, choose fix
|
||||
- "Add error handling to the API" → need to decide what errors, how to handle them
|
||||
- "Refactor UserService" → need to plan new structure, migration path
|
||||
|
||||
**Examples of work that doesn't need tasks:**
|
||||
- "Fix typo in README" → obvious mechanical change
|
||||
- "Update version number to 2.0" → straightforward edit
|
||||
- "Add missing semicolon" → clear what to do
|
||||
|
||||
**Always skip tasks for:** questions, exploratory requests, or knowledge transfer only.
|
||||
|
||||
### Core Workflow Tools
|
||||
|
||||
Use these tools to retrieve the required Backlog.md guidance in markdown form:
|
||||
|
||||
- `get_workflow_overview` — Overview of when and how to use Backlog
|
||||
- `get_task_creation_guide` — Detailed instructions for creating tasks (scope, acceptance criteria, structure)
|
||||
- `get_task_execution_guide` — Planning and executing tasks (implementation plans, approvals, scope changes)
|
||||
- `get_task_completion_guide` — Definition of Done, completion workflow, next steps
|
||||
|
||||
Each tool returns the same content that resource-capable clients read via `backlog://workflow/...` URIs.
|
||||
|
||||
### Typical Workflow (Tools)
|
||||
|
||||
1. **Search first:** call `task_search` or `task_list` with filters to find existing work
|
||||
2. **If found:** read details via `task_view`; follow execution/plan guidance from the retrieved markdown
|
||||
3. **If not found:** consult `get_task_creation_guide`, then create tasks with `task_create`
|
||||
4. **Execute & complete:** use the execution/completion guides to manage status, plans, notes, and acceptance criteria (`task_edit`, `task_archive`)
|
||||
|
||||
### Core Principle
|
||||
|
||||
Backlog tracks **commitments** (what will be built). Use your judgment to distinguish between "help me understand X" (no task) vs "add feature Y" (create tasks).
|
||||
|
||||
### MCP Tools Quick Reference
|
||||
|
||||
- `get_workflow_overview`, `get_task_creation_guide`, `get_task_execution_guide`, `get_task_completion_guide`
|
||||
- `task_list`, `task_search`, `task_view`, `task_create`, `task_edit`, `task_archive`
|
||||
- `document_list`, `document_view`, `document_create`, `document_update`, `document_search`
|
||||
|
||||
**Always operate through the MCP tools above. Never edit markdown files directly; use the tools so relationships, metadata, and history stay consistent.**
@@ -0,0 +1,60 @@
## Backlog.md Overview (MCP)
|
||||
|
||||
This project uses Backlog.md to track features, bugs, and structured work as tasks.
|
||||
|
||||
### When to Use Backlog
|
||||
|
||||
**Create a task if the work requires planning or decision-making:**
|
||||
|
||||
Ask yourself: "Do I need to think about HOW to do this?"
|
||||
- **YES** → Search for existing task first, create if needed
|
||||
- **NO** → Just do it (the change is trivial/mechanical)
|
||||
|
||||
**Examples of work that needs tasks:**
|
||||
- "Fix the authentication bug" → need to investigate, understand root cause, choose fix
|
||||
- "Add error handling to the API" → need to decide what errors, how to handle them
|
||||
- "Refactor UserService" → need to plan new structure, migration path
|
||||
|
||||
**Examples of work that doesn't need tasks:**
|
||||
- "Fix typo in README" → obvious mechanical change
|
||||
- "Update version number to 2.0" → straightforward edit
|
||||
- "Add missing semicolon" → clear what to do
|
||||
|
||||
**Always skip tasks for:**
|
||||
- Questions and informational requests
|
||||
- Reading/exploring/explaining code, issues, or concepts
|
||||
|
||||
### Typical Workflow
|
||||
|
||||
When the user requests non-trivial work:
|
||||
1. **Search first:** Use `task_search` or `task_list` (with status filters) - work might already be tracked
|
||||
2. **If found:** Work on the existing task. Check task-execution workflow to know how to proceed
|
||||
3. **If not found:** Create task(s) based on scope (single task or present breakdown for approval). Check task-creation workflow for details
|
||||
4. **Execute:** Follow task-execution guidelines
|
||||
|
||||
Searching first avoids duplicate tasks and helps you understand existing context.
|
||||
|
||||
### Detailed Guidance (Required)
|
||||
|
||||
Read these resources to get essential instructions when:
|
||||
|
||||
- **Creating tasks** → `backlog://workflow/task-creation` - Scope assessment, acceptance criteria, parent/subtasks structure
|
||||
- **Planning & executing work** → `backlog://workflow/task-execution` - Planning workflow, implementation discipline, scope changes
|
||||
- **Completing & reviewing tasks** → `backlog://workflow/task-completion` - Definition of Done, completion checklist, next steps
|
||||
|
||||
These guides contain critical workflows you need to follow for proper task management.
|
||||
|
||||
### Core Principle
|
||||
|
||||
Backlog tracks **commitments** (what will be built). Use your judgment to distinguish between "help me understand X" (no tracking) vs "add feature Y" (track in Backlog).
|
||||
|
||||
### MCP Tools Quick Reference
|
||||
|
||||
- `task_list` — list tasks with optional filtering by status, assignee, or labels
|
||||
- `task_search` — search tasks by title and description
|
||||
- `task_view` — read full task context (description, plan, notes, acceptance criteria)
|
||||
- `task_create` — create new tasks with description and acceptance criteria
|
||||
- `task_edit` — update task metadata, status, plan, notes, acceptance criteria, and dependencies
|
||||
- `task_archive` — archive completed tasks
|
||||
|
||||
**Always operate through MCP tools. Never edit markdown files directly so relationships, metadata, and history stay consistent.**
@@ -0,0 +1,52 @@
## Task Completion Guide
|
||||
|
||||
### Completion Workflow
|
||||
|
||||
1. **Verify all acceptance criteria** - Confirm every criterion is satisfied (use `task_view` to see current status)
|
||||
2. **Run the Definition of Done checklist** (see below)
|
||||
3. **Summarize the work** - Use `task_edit` (notesAppend field) to document what changed and why (treat it like a PR description)
|
||||
4. **Confirm the implementation plan is captured and current** - Update the plan in Backlog if the executed approach deviated
|
||||
5. **Update task status** - Set status to "Done" via `task_edit`
|
||||
6. **Propose next steps** - Never autonomously create or start new tasks
|
||||
|
||||
### Definition of Done Checklist
|
||||
|
||||
- Implementation plan exists in the task record (`task_edit` planSet/planAppend) and reflects the final solution
|
||||
- Acceptance criteria are all checked via `task_edit` (acceptanceCriteriaCheck field)
|
||||
- Automated and relevant manual tests pass; no new warnings or regressions introduced
|
||||
- Documentation or configuration updates completed when required
|
||||
- Implementation notes capture what changed and why via `task_edit` (notesAppend field)
|
||||
- Status transitions to "Done" via `task_edit`
|
||||
|
||||
### After Completion
|
||||
|
||||
**Never autonomously create or start new tasks.** Instead:
|
||||
|
||||
- **If follow-up work is needed**: Present the idea to the user and ask whether to create a follow-up task
|
||||
- **If this was a subtask**:
|
||||
- Check if user explicitly told you to work on "parent task and all subtasks"
|
||||
- If YES: Proceed directly to the next subtask without asking
|
||||
- If NO: Ask user: "Subtask X is complete. Should I proceed with subtask Y, or would you like to review first?"
|
||||
- **If all subtasks in a series are complete**: Update parent task status if appropriate, then ask user what to do next
|
||||
|
||||
### Working with Subtasks
|
||||
|
||||
- When completing a subtask, check all its acceptance criteria individually
|
||||
- Update subtask status to "Done" via `task_edit`
|
||||
- Document subtask-specific outcomes in the subtask's notes
|
||||
- Only update parent task status when ALL subtasks are complete (or when explicitly instructed)
|
||||
|
||||
### Implementation Notes (PR summary)

The implementation notes are often used as the summary of changes made, similar to a pull request description.

Use `task_edit` (notesAppend field) to record:
- Implementation decisions and rationale
- Blockers encountered and how they were resolved
- Technical debt or future improvements identified
- Testing approach and results

These notes help future developers (including AI agents) understand the context.
Do not repeat information that is clearly understandable from the code.

Write a structured summary that highlights the key points of the implementation.
@@ -0,0 +1,92 @@

## Task Creation Guide

This guide provides detailed instructions for creating well-structured tasks. You should already know WHEN to create tasks (from the overview).

### Step 1: Search for existing work

**IMPORTANT - Always use filters when searching:**
- Use `task_search` with a query parameter (e.g., query="desktop app")
- Use `task_list` with a status filter to exclude completed work (e.g., status="To Do" or status="In Progress")
- Never list all tasks including "Done" status without explicit user request
- Never search without a query or limit - this can overwhelm the context window

Use `task_view` to read the full context of related tasks. A sketch of a filtered search follows below.
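A minimal sketch, assuming the client wiring from the first example; the argument names mirror the filters described above.

```typescript
// Search for related work first, then narrow open items by status.
const related = await client.callTool({
	name: "task_search",
	arguments: { query: "desktop app" },
});

const open = await client.callTool({
	name: "task_list",
	arguments: { status: "In Progress" },
});
```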
### Step 2: Assess scope BEFORE creating tasks

**CRITICAL**: Before creating any tasks, assess whether the user's request is:
- **A single atomic task** (one focused PR): Create one task immediately
- **A multi-task feature or initiative** (multiple PRs, or a parent task with subtasks): Create the appropriate task structure

**Scope assessment checklist** - Answer these questions FIRST:
1. Can this be completed in a single focused pull request?
2. Would a code reviewer be comfortable reviewing all changes in one sitting?
3. Are there natural breaking points where work could be independently delivered and tested?
4. Does the request span multiple subsystems, layers, or architectural concerns?
5. Are multiple tasks working on the same component or closely related functionality?

If the work requires multiple tasks, proceed to choose the appropriate task structure (subtasks vs separate tasks).
### Step 3: Choose task structure

**When to use subtasks vs separate tasks:**

**Use subtasks** (parent-child relationship) when:
- Multiple tasks all modify the same component or subsystem
- Tasks are tightly coupled and share the same high-level goal
- Tasks represent sequential phases of the same feature
- Example: Parent task "Desktop Application" with subtasks for Electron setup, IPC bridge, UI adaptation, packaging

**Use separate tasks** (with dependencies) when:
- Tasks span different components or subsystems
- Tasks can be worked on independently by different developers
- Tasks have loose coupling with clear boundaries
- Example: Separate tasks for "API endpoint", "Frontend component", "Documentation"

**Concrete example**: If a request spans multiple layers—say an API change, a client update, and documentation—create one parent task ("Launch bulk-edit mode") with subtasks for each layer. Note cross-layer dependencies (e.g., "UI waits on API schema") so different collaborators can work in parallel without blocking each other.
### Step 4: Create multi-task structure

When scope requires multiple tasks:
1. **Create the task structure**: Either a parent task with subtasks, or separate tasks with dependencies
2. **Explain what you created** to the user after creation, including the reasoning for the structure
3. **Document relationships**: Record dependencies using `task_edit` so scheduling and merge-risk tooling stay accurate

Create all tasks in the same session to maintain consistency and context.
### Step 5: Create task(s) with proper scope

**Title and description**: Explain the desired outcome and user value (the WHY)

**Acceptance criteria**: Specific, testable, and independent (the WHAT)
- Keep each checklist item atomic (e.g., "Display saves when user presses Ctrl+S")
- Include negative or edge scenarios when relevant
- Capture testing expectations explicitly

**Never embed implementation details** in the title, description, or acceptance criteria

**Record dependencies** using `task_edit` for task ordering

**Ask for clarification** if requirements are ambiguous (a `task_create` sketch follows below)
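A minimal sketch of a well-scoped creation call, assuming the same client wiring; the argument names follow the tool reference, but the exact input schema is an assumption.

```typescript
// Create one focused task: outcome-oriented description, testable criteria.
await client.callTool({
	name: "task_create",
	arguments: {
		title: "Add keyboard shortcut for saving",
		description: "Users need a fast way to persist edits without reaching for the mouse.",
		acceptanceCriteria: [
			"Display saves when user presses Ctrl+S",
			"No save is triggered while a modal dialog is open",
		],
	},
});
```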
### Step 6: Report created tasks

After creation, show the user each new task's ID, title, description, and acceptance criteria (e.g., "Created task-290 – API endpoint: …"). This provides visibility into what was created and allows the user to request corrections if needed.

### Common Anti-patterns to Avoid

- Creating a single task called "Build desktop application" with 10+ acceptance criteria
- Adding implementation steps to acceptance criteria
- Creating a task before understanding whether it needs to be split

### Correct Pattern

"This request spans Electron setup, IPC bridge, UI adaptation, and packaging. I'll create 4 separate tasks to break this down properly."

Then create the tasks and report what was created.

### Additional Context Gathering

- Use `task_view` to read the description, acceptance criteria, dependencies, current plan, and notes before acting
- Inspect relevant code/docs/tests in the repository to ground your understanding
- When permitted, consult up-to-date external references (design docs, service manuals, API specs) so your plan reflects current reality
@@ -0,0 +1,68 @@

## Task Execution Guide

### Planning Workflow

> **Non-negotiable:** Capture an implementation plan in the Backlog task _before_ writing any code or running commands. The plan must live in the task record prior to implementation and remain up to date when you close the task.

1. **Mark the task as In Progress** via `task_edit` with status "In Progress"
2. **Assign it to yourself** via `task_edit` with the assignee field
3. **Draft the implementation plan** - Think through the approach, review code, identify key files
4. **Present the plan to the user** - Show your proposed implementation approach
5. **Wait for explicit approval** - Do not start coding until the user confirms or asks you to skip review
6. **Record the approved plan** - Use `task_edit` with planSet or planAppend to capture the agreed approach in the task (see the sketch after this list)
7. **Document the agreed breakdown** - In the parent task's plan, capture the final list of subtasks, owners, and sequencing so future agents see the structure the user approved

**IMPORTANT:** Use tasks as permanent storage for everything related to the work. The implementation plan and notes are essential for resuming work after interruptions or handoffs.
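A minimal sketch of recording the approved plan, under the same client assumptions as earlier; treating planSet as a plain string is an assumption.

```typescript
// Persist the agreed plan in the task record before touching the codebase.
await client.callTool({
	name: "task_edit",
	arguments: {
		id: "task-42", // hypothetical task id
		status: "In Progress",
		planSet: [
			"1. Extend the frontmatter parser to accept datetime values",
			"2. Add round-trip tests for Windows line endings",
			"3. Update the serializer to preserve date-only fields",
		].join("\n"),
	},
});
```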
### Planning Guidelines

- Keep the Backlog task as the single plan of record: capture the agreed approach with `task_edit` (planSet field) before writing code
- Use `task_edit` (planAppend field) to refine the plan when you learn more during implementation
- Verify prerequisites before committing to a plan: confirm required tools, access, data, and environment support are in place
- Keep plans structured and actionable: list concrete steps, highlight key files, call out risks, and note any checkpoints or validations
- Ensure the plan reflects the agreed user outcome and acceptance criteria; if expectations are unclear, clarify them before proceeding
- When additional context is required, review relevant code, documentation, or external references so the plan incorporates the latest knowledge
- Treat the plan and acceptance criteria as living guides - update both when the approach or expectations change so future readers understand the rationale
- If you need to add or remove tasks or shift scope later, pause and run the "present → approval" loop again before editing the backlog; never change the breakdown silently

### Working with Subtasks (Planning)

- If working on a parent task with subtasks, create a high-level plan for the parent that outlines the overall approach
- Each subtask should have its own detailed implementation plan when you work on it
- Ensure subtask plans are consistent with the parent task's overall strategy
### Execution Workflow

- **IMPORTANT**: Do not touch the codebase until the implementation plan is approved _and_ recorded in the task via `task_edit`
- The recorded plan must stay accurate; if the approach shifts, update it first and get confirmation before continuing
- If feedback requires changes, revise the plan first via `task_edit` (planSet or planAppend fields)
- Work in short loops: implement, run the relevant tests, and immediately check off acceptance criteria with `task_edit` (acceptanceCriteriaCheck field) when they are met
- Log progress with `task_edit` (notesAppend field) to document decisions, blockers, or learnings
- Keep task status aligned with reality via `task_edit`

### Handling Scope Changes

If new work appears during implementation that wasn't in the original acceptance criteria:

**STOP and ask the user**:
"I discovered [new work needed]. Should I:
1. Add acceptance criteria to the current task and continue, or
2. Create a follow-up task to handle this separately?"

**Never**:
- Silently expand the scope without user approval
- Create new tasks on your own initiative
- Add acceptance criteria without user confirmation

### Staying on Track

- Stay within the scope defined by the plan and acceptance criteria
- Update the plan first if direction changes, then get user approval for the revised approach
- If you need to deviate from the plan, explain why and wait for confirmation

### Working with Subtasks (Execution)

- When the user assigns you a parent task "and all subtasks", work through each subtask sequentially without asking for permission to move to the next one
- When completing a single subtask (without explicit instruction to continue), present progress and ask: "Subtask X is complete. Should I proceed with subtask Y, or would you like to review first?"
- Each subtask should be fully completed (all acceptance criteria met, tests passing) before moving to the next
@@ -0,0 +1 @@

../../.claude/agents/project-manager-backlog.md
@@ -0,0 +1,32 @@

export * from "./readme.ts";
// Types

export {
	_loadAgentGuideline,
	type AgentInstructionFile,
	addAgentInstructions,
	type EnsureMcpGuidelinesResult,
	ensureMcpGuidelines,
	installClaudeAgent,
} from "./agent-instructions.ts";
// Kanban board utilities
export { exportKanbanBoardToFile, generateKanbanBoardWithMetadata } from "./board.ts";
// Constants
export * from "./constants/index.ts";
// Core entry point
export { Core } from "./core/backlog.ts";
export { SearchService } from "./core/search-service.ts";

// File system operations
export { FileSystem } from "./file-system/operations.ts";

// Git operations
export {
	GitOperations,
	initializeGitRepository,
	isGitRepository,
} from "./git/operations.ts";
// Markdown operations
export * from "./markdown/parser.ts";
export * from "./markdown/serializer.ts";
export * from "./types/index.ts";
@@ -0,0 +1,189 @@

import matter from "gray-matter";
import type { AcceptanceCriterion, Decision, Document, ParsedMarkdown, Task } from "../types/index.ts";
import { AcceptanceCriteriaManager, extractStructuredSection, STRUCTURED_SECTION_KEYS } from "./structured-sections.ts";

function preprocessFrontmatter(frontmatter: string): string {
	return frontmatter
		.split(/\r?\n/) // Handle both Windows (\r\n) and Unix (\n) line endings
		.map((line) => {
			// Handle both assignee and reporter fields that start with @
			const match = line.match(/^(\s*(?:assignee|reporter):\s*)(.*)$/);
			if (!match) return line;

			const [, prefix, raw] = match;
			const value = raw?.trim() || "";

			if (
				value &&
				!value.startsWith("[") &&
				!value.startsWith("'") &&
				!value.startsWith('"') &&
				!value.startsWith("-")
			) {
				return `${prefix}"${value.replace(/"/g, '\\"')}"`;
			}
			return line;
		})
		.join("\n"); // Always join with \n for consistent YAML parsing
}
function normalizeDate(value: unknown): string {
	if (!value) return "";
	if (value instanceof Date) {
		// Check if this Date object came from a date-only string (time is midnight UTC)
		const hours = value.getUTCHours();
		const minutes = value.getUTCMinutes();
		const seconds = value.getUTCSeconds();

		if (hours === 0 && minutes === 0 && seconds === 0) {
			// This was likely a date-only value, preserve it as date-only
			return value.toISOString().slice(0, 10);
		}
		// This has actual time information, preserve it
		return value.toISOString().slice(0, 16).replace("T", " ");
	}
	const str = String(value)
		.trim()
		.replace(/^['"]|['"]$/g, "");
	if (!str) return "";

	// Check for datetime format first (YYYY-MM-DD HH:mm)
	let match: RegExpMatchArray | null = str.match(/^(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2})$/);
	if (match) {
		// Already in correct format, return as-is
		return str;
	}

	// Check for ISO datetime format (YYYY-MM-DDTHH:mm)
	match = str.match(/^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2})$/);
	if (match) {
		// Convert T separator to space
		return str.replace("T", " ");
	}

	// Check for date-only format (YYYY-MM-DD) - backward compatibility
	match = str.match(/^(\d{4})-(\d{2})-(\d{2})$/);
	if (match) {
		return `${match[1]}-${match[2]}-${match[3]}`;
	}

	// Legacy date formats (date-only for backward compatibility)
	match = str.match(/^(\d{2})-(\d{2})-(\d{2})$/);
	if (match) {
		const [day, month, year] = match.slice(1);
		return `20${year}-${month}-${day}`;
	}
	match = str.match(/^(\d{2})\/(\d{2})\/(\d{2})$/);
	if (match) {
		const [day, month, year] = match.slice(1);
		return `20${year}-${month}-${day}`;
	}
	match = str.match(/^(\d{2})\.(\d{2})\.(\d{2})$/);
	if (match) {
		const [day, month, year] = match.slice(1);
		return `20${year}-${month}-${day}`;
	}
	return str;
}
export function parseMarkdown(content: string): ParsedMarkdown {
	// Updated regex to handle both Windows (\r\n) and Unix (\n) line endings
	const fmRegex = /^---\r?\n([\s\S]*?)\r?\n---/;
	const match = content.match(fmRegex);
	let toParse = content;

	if (match) {
		const processed = preprocessFrontmatter(match[1] || "");
		// Replace with consistent line endings
		toParse = content.replace(fmRegex, `---\n${processed}\n---`);
	}

	const parsed = matter(toParse);
	return {
		frontmatter: parsed.data,
		content: parsed.content.trim(),
	};
}
export function parseTask(content: string): Task {
	const { frontmatter, content: rawContent } = parseMarkdown(content);

	// Validate priority field
	const priority = frontmatter.priority ? String(frontmatter.priority).toLowerCase() : undefined;
	const validPriorities = ["high", "medium", "low"];
	const validatedPriority =
		priority && validPriorities.includes(priority) ? (priority as "high" | "medium" | "low") : undefined;

	// Parse structured acceptance criteria (checked/text/index) from all sections
	const structuredCriteria: AcceptanceCriterion[] = AcceptanceCriteriaManager.parseAllCriteria(rawContent);

	// Parse other sections
	const descriptionSection = extractStructuredSection(rawContent, STRUCTURED_SECTION_KEYS.description) || "";
	const planSection = extractStructuredSection(rawContent, STRUCTURED_SECTION_KEYS.implementationPlan) || undefined;
	const notesSection = extractStructuredSection(rawContent, STRUCTURED_SECTION_KEYS.implementationNotes) || undefined;

	return {
		id: String(frontmatter.id || ""),
		title: String(frontmatter.title || ""),
		status: String(frontmatter.status || ""),
		assignee: Array.isArray(frontmatter.assignee)
			? frontmatter.assignee.map(String)
			: frontmatter.assignee
				? [String(frontmatter.assignee)]
				: [],
		reporter: frontmatter.reporter ? String(frontmatter.reporter) : undefined,
		createdDate: normalizeDate(frontmatter.created_date),
		updatedDate: frontmatter.updated_date ? normalizeDate(frontmatter.updated_date) : undefined,
		labels: Array.isArray(frontmatter.labels) ? frontmatter.labels.map(String) : [],
		milestone: frontmatter.milestone ? String(frontmatter.milestone) : undefined,
		dependencies: Array.isArray(frontmatter.dependencies) ? frontmatter.dependencies.map(String) : [],
		rawContent,
		acceptanceCriteriaItems: structuredCriteria,
		description: descriptionSection,
		implementationPlan: planSection,
		implementationNotes: notesSection,
		parentTaskId: frontmatter.parent_task_id ? String(frontmatter.parent_task_id) : undefined,
		subtasks: Array.isArray(frontmatter.subtasks) ? frontmatter.subtasks.map(String) : undefined,
		priority: validatedPriority,
		ordinal: frontmatter.ordinal !== undefined ? Number(frontmatter.ordinal) : undefined,
		onStatusChange: frontmatter.onStatusChange ? String(frontmatter.onStatusChange) : undefined,
	};
}
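/*
 * Usage sketch (illustrative, not part of the original file): parseTask accepts
 * a full task file, including frontmatter. The sample content is hypothetical.
 *
 *   const task = parseTask(
 *     ["---", "id: task-1", "title: Example", "status: To Do", "---", "", "## Description", "", "Demo"].join("\n"),
 *   );
 *   // task.id === "task-1", task.status === "To Do", and task.description is
 *   // "Demo", extracted via the legacy "## Description" header fallback.
 */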
export function parseDecision(content: string): Decision {
	const { frontmatter, content: rawContent } = parseMarkdown(content);

	return {
		id: String(frontmatter.id || ""),
		title: String(frontmatter.title || ""),
		date: normalizeDate(frontmatter.date),
		status: String(frontmatter.status || "proposed") as Decision["status"],
		context: extractSection(rawContent, "Context") || "",
		decision: extractSection(rawContent, "Decision") || "",
		consequences: extractSection(rawContent, "Consequences") || "",
		alternatives: extractSection(rawContent, "Alternatives"),
		rawContent, // Raw markdown content without frontmatter
	};
}

export function parseDocument(content: string): Document {
	const { frontmatter, content: rawContent } = parseMarkdown(content);

	return {
		id: String(frontmatter.id || ""),
		title: String(frontmatter.title || ""),
		type: String(frontmatter.type || "other") as Document["type"],
		createdDate: normalizeDate(frontmatter.created_date),
		updatedDate: frontmatter.updated_date ? normalizeDate(frontmatter.updated_date) : undefined,
		rawContent,
		tags: Array.isArray(frontmatter.tags) ? frontmatter.tags.map(String) : undefined,
	};
}

function extractSection(content: string, sectionTitle: string): string | undefined {
	// Normalize to LF for reliable matching across platforms
	const src = content.replace(/\r\n/g, "\n");
	const regex = new RegExp(`## ${sectionTitle}\\s*\\n([\\s\\S]*?)(?=\\n## |$)`, "i");
	const match = src.match(regex);
	return match?.[1]?.trim();
}
@@ -0,0 +1,30 @@

const BASE_SECTION_TITLES = [
	"Description",
	"Acceptance Criteria",
	"Implementation Plan",
	"Implementation Notes",
] as const;

const SECTION_TITLE_VARIANTS: Record<string, string[]> = {
	"Acceptance Criteria": ["Acceptance Criteria (Optional)"],
	"Implementation Plan": ["Implementation Plan (Optional)"],
	"Implementation Notes": ["Implementation Notes (Optional)", "Notes", "Notes & Comments (Optional)"],
};

export function getStructuredSectionTitles(): string[] {
	const titles = new Set<string>();
	for (const base of BASE_SECTION_TITLES) {
		titles.add(base);
		const variants = SECTION_TITLE_VARIANTS[base];
		if (variants) {
			for (const variant of variants) {
				titles.add(variant);
			}
		}
	}
	return Array.from(titles);
}

export function getBaseStructuredSectionTitles(): string[] {
	return Array.from(BASE_SECTION_TITLES);
}
@@ -0,0 +1,146 @@

import matter from "gray-matter";
import type { Decision, Document, Task } from "../types/index.ts";
import { normalizeAssignee } from "../utils/assignee.ts";
import { AcceptanceCriteriaManager, getStructuredSections, updateStructuredSections } from "./structured-sections.ts";

export function serializeTask(task: Task): string {
	normalizeAssignee(task);
	const frontmatter = {
		id: task.id,
		title: task.title,
		status: task.status,
		assignee: task.assignee,
		...(task.reporter && { reporter: task.reporter }),
		created_date: task.createdDate,
		...(task.updatedDate && { updated_date: task.updatedDate }),
		labels: task.labels,
		...(task.milestone && { milestone: task.milestone }),
		dependencies: task.dependencies,
		...(task.parentTaskId && { parent_task_id: task.parentTaskId }),
		...(task.subtasks && task.subtasks.length > 0 && { subtasks: task.subtasks }),
		...(task.priority && { priority: task.priority }),
		...(task.ordinal !== undefined && { ordinal: task.ordinal }),
		...(task.onStatusChange && { onStatusChange: task.onStatusChange }),
	};

	let contentBody = task.rawContent ?? "";
	if (typeof task.description === "string" && task.description.trim() !== "") {
		contentBody = updateTaskDescription(contentBody, task.description);
	}
	if (Array.isArray(task.acceptanceCriteriaItems)) {
		const existingCriteria = AcceptanceCriteriaManager.parseAllCriteria(task.rawContent ?? "");
		const hasExistingStructuredCriteria = existingCriteria.length > 0;
		if (task.acceptanceCriteriaItems.length > 0 || hasExistingStructuredCriteria) {
			contentBody = AcceptanceCriteriaManager.updateContent(contentBody, task.acceptanceCriteriaItems);
		}
	}
	if (typeof task.implementationPlan === "string") {
		contentBody = updateTaskImplementationPlan(contentBody, task.implementationPlan);
	}
	if (typeof task.implementationNotes === "string") {
		contentBody = updateTaskImplementationNotes(contentBody, task.implementationNotes);
	}

	const serialized = matter.stringify(contentBody, frontmatter);
	// Ensure there's a blank line between frontmatter and content
	return serialized.replace(/^(---\n(?:.*\n)*?---)\n(?!$)/, "$1\n\n");
}
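/*
 * Round-trip sketch (illustrative, not part of the original file): parseTask
 * (from ./parser.ts) and serializeTask are complementary, so editing a field
 * and re-serializing preserves the rest of the file.
 *
 *   const task = parseTask(originalFileContents);
 *   task.status = "Done";
 *   const updated = serializeTask(task); // frontmatter updated, body preserved
 */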
export function serializeDecision(decision: Decision): string {
	const frontmatter = {
		id: decision.id,
		title: decision.title,
		date: decision.date,
		status: decision.status,
	};

	let content = `## Context\n\n${decision.context}\n\n`;
	content += `## Decision\n\n${decision.decision}\n\n`;
	content += `## Consequences\n\n${decision.consequences}`;

	if (decision.alternatives) {
		content += `\n\n## Alternatives\n\n${decision.alternatives}`;
	}

	return matter.stringify(content, frontmatter);
}

export function serializeDocument(document: Document): string {
	const frontmatter = {
		id: document.id,
		title: document.title,
		type: document.type,
		created_date: document.createdDate,
		...(document.updatedDate && { updated_date: document.updatedDate }),
		...(document.tags && document.tags.length > 0 && { tags: document.tags }),
	};

	return matter.stringify(document.rawContent, frontmatter);
}

export function updateTaskAcceptanceCriteria(content: string, criteria: string[]): string {
	// Normalize to LF while computing, preserve original EOL at return
	const useCRLF = /\r\n/.test(content);
	const src = content.replace(/\r\n/g, "\n");
	// Find if there's already an Acceptance Criteria section
	const criteriaRegex = /## Acceptance Criteria\s*\n([\s\S]*?)(?=\n## |$)/i;
	const match = src.match(criteriaRegex);

	const newCriteria = criteria.map((criterion) => `- [ ] ${criterion}`).join("\n");
	const newSection = `## Acceptance Criteria\n\n${newCriteria}`;

	let out: string | undefined;
	if (match) {
		// Replace existing section
		out = src.replace(criteriaRegex, newSection);
	} else {
		// Add new section at the end
		out = `${src}\n\n${newSection}`;
	}
	return useCRLF ? out.replace(/\n/g, "\r\n") : out;
}

export function updateTaskImplementationPlan(content: string, plan: string): string {
	const sections = getStructuredSections(content);
	return updateStructuredSections(content, {
		description: sections.description ?? "",
		implementationPlan: plan,
		implementationNotes: sections.implementationNotes ?? "",
	});
}

export function updateTaskImplementationNotes(content: string, notes: string): string {
	const sections = getStructuredSections(content);
	return updateStructuredSections(content, {
		description: sections.description ?? "",
		implementationPlan: sections.implementationPlan ?? "",
		implementationNotes: notes,
	});
}

export function appendTaskImplementationNotes(content: string, notesChunks: string | string[]): string {
	const chunks = (Array.isArray(notesChunks) ? notesChunks : [notesChunks])
		.map((c) => String(c))
		.map((c) => c.replace(/\r\n/g, "\n"))
		.map((c) => c.trim())
		.filter(Boolean);

	const sections = getStructuredSections(content);
	const appendedBlock = chunks.join("\n\n");
	const existingNotes = sections.implementationNotes?.trim();
	const combined = existingNotes ? `${existingNotes}\n\n${appendedBlock}` : appendedBlock;
	return updateStructuredSections(content, {
		description: sections.description ?? "",
		implementationPlan: sections.implementationPlan ?? "",
		implementationNotes: combined,
	});
}

export function updateTaskDescription(content: string, description: string): string {
	const sections = getStructuredSections(content);
	return updateStructuredSections(content, {
		description,
		implementationPlan: sections.implementationPlan ?? "",
		implementationNotes: sections.implementationNotes ?? "",
	});
}
@@ -0,0 +1,520 @@

import type { AcceptanceCriterion } from "../types/index.ts";
import { getStructuredSectionTitles } from "./section-titles.ts";

export type StructuredSectionKey = "description" | "implementationPlan" | "implementationNotes";

export const STRUCTURED_SECTION_KEYS: Record<StructuredSectionKey, StructuredSectionKey> = {
	description: "description",
	implementationPlan: "implementationPlan",
	implementationNotes: "implementationNotes",
};

interface SectionConfig {
	title: string;
	markerId: string;
}

const SECTION_CONFIG: Record<StructuredSectionKey, SectionConfig> = {
	description: { title: "Description", markerId: "DESCRIPTION" },
	implementationPlan: { title: "Implementation Plan", markerId: "PLAN" },
	implementationNotes: { title: "Implementation Notes", markerId: "NOTES" },
};

const SECTION_INSERTION_ORDER: StructuredSectionKey[] = ["description", "implementationPlan", "implementationNotes"];

const ACCEPTANCE_CRITERIA_SECTION_HEADER = "## Acceptance Criteria";
const ACCEPTANCE_CRITERIA_TITLE = ACCEPTANCE_CRITERIA_SECTION_HEADER.replace(/^##\s*/, "");
const KNOWN_SECTION_TITLES = new Set<string>([
	...getStructuredSectionTitles(),
	ACCEPTANCE_CRITERIA_TITLE,
	"Acceptance Criteria (Optional)",
]);
function normalizeToLF(content: string): { text: string; useCRLF: boolean } {
	const useCRLF = /\r\n/.test(content);
	return { text: content.replace(/\r\n/g, "\n"), useCRLF };
}

function restoreLineEndings(text: string, useCRLF: boolean): string {
	return useCRLF ? text.replace(/\n/g, "\r\n") : text;
}

function escapeForRegex(value: string): string {
	return value.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

function getConfig(key: StructuredSectionKey): SectionConfig {
	return SECTION_CONFIG[key];
}

function getBeginMarker(key: StructuredSectionKey): string {
	return `<!-- SECTION:${getConfig(key).markerId}:BEGIN -->`;
}

function getEndMarker(key: StructuredSectionKey): string {
	return `<!-- SECTION:${getConfig(key).markerId}:END -->`;
}

function buildSectionBlock(key: StructuredSectionKey, body: string): string {
	const { title } = getConfig(key);
	const begin = getBeginMarker(key);
	const end = getEndMarker(key);
	const normalized = body.replace(/\r\n/g, "\n").replace(/\s+$/g, "");
	const content = normalized ? `${normalized}\n` : "";
	return `## ${title}\n\n${begin}\n${content}${end}`;
}

function structuredSectionLookahead(currentTitle: string): string {
	const otherTitles = Array.from(KNOWN_SECTION_TITLES).filter(
		(title) => title.toLowerCase() !== currentTitle.toLowerCase(),
	);
	if (otherTitles.length === 0) return "(?=\\n*$)";
	const pattern = otherTitles.map((title) => escapeForRegex(title)).join("|");
	return `(?=\\n+## (?:${pattern})(?:\\s|$)|\\n*$)`;
}

function sectionHeaderRegex(key: StructuredSectionKey): RegExp {
	const { title } = getConfig(key);
	return new RegExp(`## ${escapeForRegex(title)}\\s*\\n([\\s\\S]*?)${structuredSectionLookahead(title)}`, "i");
}

function acceptanceCriteriaSentinelRegex(flags = "i"): RegExp {
	const header = escapeForRegex(ACCEPTANCE_CRITERIA_SECTION_HEADER);
	const begin = escapeForRegex(AcceptanceCriteriaManager.BEGIN_MARKER);
	const end = escapeForRegex(AcceptanceCriteriaManager.END_MARKER);
	return new RegExp(`(\\n|^)${header}\\s*\\n${begin}\\s*\\n([\\s\\S]*?)${end}`, flags);
}

function legacySectionRegex(title: string, flags: string): RegExp {
	return new RegExp(`(\\n|^)## ${escapeForRegex(title)}\\s*\\n([\\s\\S]*?)${structuredSectionLookahead(title)}`, flags);
}

function findSectionEndIndex(content: string, title: string): number | undefined {
	const normalizedTitle = title.trim();
	let sentinelMatch: RegExpExecArray | null = null;
	if (normalizedTitle.toLowerCase() === ACCEPTANCE_CRITERIA_TITLE.toLowerCase()) {
		sentinelMatch = acceptanceCriteriaSentinelRegex().exec(content);
	} else {
		const keyEntry = Object.entries(SECTION_CONFIG).find(
			([, config]) => config.title.toLowerCase() === normalizedTitle.toLowerCase(),
		);
		if (keyEntry) {
			const key = keyEntry[0] as StructuredSectionKey;
			sentinelMatch = new RegExp(
				`## ${escapeForRegex(getConfig(key).title)}\\s*\\n${escapeForRegex(getBeginMarker(key))}\\s*\\n([\\s\\S]*?)${escapeForRegex(getEndMarker(key))}`,
				"i",
			).exec(content);
		}
	}

	if (sentinelMatch) {
		return sentinelMatch.index + sentinelMatch[0].length;
	}

	const legacyMatch = legacySectionRegex(normalizedTitle, "i").exec(content);
	if (legacyMatch) {
		return legacyMatch.index + legacyMatch[0].length;
	}
	return undefined;
}

function sentinelBlockRegex(key: StructuredSectionKey): RegExp {
	const { title } = getConfig(key);
	const begin = escapeForRegex(getBeginMarker(key));
	const end = escapeForRegex(getEndMarker(key));
	return new RegExp(`## ${escapeForRegex(title)}\\s*\\n${begin}\\s*\\n([\\s\\S]*?)${end}`, "i");
}

function stripSectionInstances(content: string, key: StructuredSectionKey): string {
	const beginEsc = escapeForRegex(getBeginMarker(key));
	const endEsc = escapeForRegex(getEndMarker(key));
	const { title } = getConfig(key);

	let stripped = content;
	const sentinelRegex = new RegExp(
		`(\n|^)## ${escapeForRegex(title)}\\s*\\n${beginEsc}\\s*\\n([\\s\\S]*?)${endEsc}(?:\\s*\n|$)`,
		"gi",
	);
	stripped = stripped.replace(sentinelRegex, "\n");

	const legacyRegex = legacySectionRegex(title, "gi");
	stripped = stripped.replace(legacyRegex, "\n");

	return stripped.replace(/\n{3,}/g, "\n\n").trimEnd();
}

function insertAfterSection(content: string, title: string, block: string): { inserted: boolean; content: string } {
	if (!block.trim()) return { inserted: false, content };
	const insertPos = findSectionEndIndex(content, title);
	if (insertPos === undefined) return { inserted: false, content };
	const before = content.slice(0, insertPos).trimEnd();
	const after = content.slice(insertPos).replace(/^\s+/, "");
	const newContent = `${before}${before ? "\n\n" : ""}${block}${after ? `\n\n${after}` : ""}`;
	return { inserted: true, content: newContent };
}

function insertAtStart(content: string, block: string): string {
	const trimmedBlock = block.trim();
	if (!trimmedBlock) return content;
	const trimmedContent = content.trim();
	if (!trimmedContent) return trimmedBlock;
	return `${trimmedBlock}\n\n${trimmedContent}`;
}

function appendBlock(content: string, block: string): string {
	const trimmedBlock = block.trim();
	if (!trimmedBlock) return content;
	const trimmedContent = content.trim();
	if (!trimmedContent) return trimmedBlock;
	return `${trimmedContent}\n\n${trimmedBlock}`;
}
export function extractStructuredSection(content: string, key: StructuredSectionKey): string | undefined {
	const src = content.replace(/\r\n/g, "\n");
	const sentinelMatch = sentinelBlockRegex(key).exec(src);
	if (sentinelMatch?.[1]) {
		return sentinelMatch[1].trim() || undefined;
	}
	const legacyMatch = sectionHeaderRegex(key).exec(src);
	return legacyMatch?.[1]?.trim() || undefined;
}

export interface StructuredSectionValues {
	description?: string;
	implementationPlan?: string;
	implementationNotes?: string;
}

interface SectionValues extends StructuredSectionValues {}
export function updateStructuredSections(content: string, sections: SectionValues): string {
	const { text: src, useCRLF } = normalizeToLF(content);

	let working = src;
	for (const key of SECTION_INSERTION_ORDER) {
		working = stripSectionInstances(working, key);
	}
	working = working.trim();

	const description = sections.description?.trim() || "";
	const plan = sections.implementationPlan?.trim() || "";
	const notes = sections.implementationNotes?.trim() || "";

	let tail = working;

	if (plan) {
		const planBlock = buildSectionBlock("implementationPlan", plan);
		let res = insertAfterSection(tail, ACCEPTANCE_CRITERIA_TITLE, planBlock);
		if (!res.inserted) {
			res = insertAfterSection(tail, getConfig("description").title, planBlock);
		}
		if (!res.inserted) {
			tail = insertAtStart(tail, planBlock);
		} else {
			tail = res.content;
		}
	}

	if (notes) {
		const notesBlock = buildSectionBlock("implementationNotes", notes);
		let res = insertAfterSection(tail, getConfig("implementationPlan").title, notesBlock);
		if (!res.inserted) {
			res = insertAfterSection(tail, ACCEPTANCE_CRITERIA_TITLE, notesBlock);
		}
		if (!res.inserted) {
			tail = appendBlock(tail, notesBlock);
		} else {
			tail = res.content;
		}
	}

	let output = tail;
	if (description) {
		const descriptionBlock = buildSectionBlock("description", description);
		output = insertAtStart(tail, descriptionBlock);
	}

	const finalOutput = output.replace(/\n{3,}/g, "\n\n").trim();
	return restoreLineEndings(finalOutput, useCRLF);
}
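/*
 * Usage sketch (illustrative, not part of the original file): rewriting one
 * structured section while leaving the others untouched.
 *
 *   const body = "## Description\n\nOld text\n\n## Acceptance Criteria\n\n- [ ] #1 Works";
 *   const next = updateStructuredSections(body, {
 *     description: "New text",
 *     implementationPlan: "",
 *     implementationNotes: "",
 *   });
 *   // next keeps the Acceptance Criteria block and wraps the new Description
 *   // in <!-- SECTION:DESCRIPTION:BEGIN/END --> markers at the top.
 */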
export function getStructuredSections(content: string): StructuredSectionValues {
	return {
		description: extractStructuredSection(content, "description") || undefined,
		implementationPlan: extractStructuredSection(content, "implementationPlan") || undefined,
		implementationNotes: extractStructuredSection(content, "implementationNotes") || undefined,
	};
}

function acceptanceCriteriaLegacyRegex(flags: string): RegExp {
	return new RegExp(
		`(\\n|^)${escapeForRegex(ACCEPTANCE_CRITERIA_SECTION_HEADER)}\\s*\\n([\\s\\S]*?)${structuredSectionLookahead(ACCEPTANCE_CRITERIA_TITLE)}`,
		flags,
	);
}

function extractExistingAcceptanceCriteriaBody(content: string): { body: string; hasMarkers: boolean } | undefined {
	const src = content.replace(/\r\n/g, "\n");
	const sentinelMatch = acceptanceCriteriaSentinelRegex("i").exec(src);
	if (sentinelMatch?.[2] !== undefined) {
		return { body: sentinelMatch[2], hasMarkers: true };
	}
	const legacyMatch = acceptanceCriteriaLegacyRegex("i").exec(src);
	if (legacyMatch?.[2] !== undefined) {
		return { body: legacyMatch[2], hasMarkers: false };
	}
	return undefined;
}

/* biome-ignore lint/complexity/noStaticOnlyClass: Utility methods grouped for clarity */
export class AcceptanceCriteriaManager {
	static readonly BEGIN_MARKER = "<!-- AC:BEGIN -->";
	static readonly END_MARKER = "<!-- AC:END -->";
	static readonly SECTION_HEADER = ACCEPTANCE_CRITERIA_SECTION_HEADER;

	private static parseOldFormat(content: string): AcceptanceCriterion[] {
		const src = content.replace(/\r\n/g, "\n");
		const criteriaRegex = /## Acceptance Criteria\s*\n([\s\S]*?)(?=\n## |$)/i;
		const match = src.match(criteriaRegex);
		if (!match || !match[1]) {
			return [];
		}
		const lines = match[1].split("\n").filter((line) => line.trim());
		const criteria: AcceptanceCriterion[] = [];
		let index = 1;
		for (const line of lines) {
			const checkboxMatch = line.match(/^- \[([ x])\] (.+)$/);
			if (checkboxMatch?.[1] && checkboxMatch?.[2]) {
				criteria.push({
					checked: checkboxMatch[1] === "x",
					text: checkboxMatch[2],
					index: index++,
				});
			}
		}
		return criteria;
	}

	static parseAcceptanceCriteria(content: string): AcceptanceCriterion[] {
		const src = content.replace(/\r\n/g, "\n");
		const beginIndex = src.indexOf(AcceptanceCriteriaManager.BEGIN_MARKER);
		const endIndex = src.indexOf(AcceptanceCriteriaManager.END_MARKER);
		if (beginIndex === -1 || endIndex === -1) {
			return AcceptanceCriteriaManager.parseOldFormat(src);
		}
		const acContent = src.substring(beginIndex + AcceptanceCriteriaManager.BEGIN_MARKER.length, endIndex);
		const lines = acContent.split("\n").filter((line) => line.trim());
		const criteria: AcceptanceCriterion[] = [];
		for (const line of lines) {
			const match = line.match(/^- \[([ x])\] #(\d+) (.+)$/);
			if (match?.[1] && match?.[2] && match?.[3]) {
				criteria.push({
					checked: match[1] === "x",
					text: match[3],
					index: Number.parseInt(match[2], 10),
				});
			}
		}
		return criteria;
	}

	static formatAcceptanceCriteria(criteria: AcceptanceCriterion[], existingBody?: string): string {
		if (criteria.length === 0) {
			return "";
		}
		const body = AcceptanceCriteriaManager.composeAcceptanceCriteriaBody(criteria, existingBody);
		const lines = [AcceptanceCriteriaManager.SECTION_HEADER, AcceptanceCriteriaManager.BEGIN_MARKER];
		if (body.trim() !== "") {
			lines.push(...body.split("\n"));
		}
		lines.push(AcceptanceCriteriaManager.END_MARKER);
		return lines.join("\n");
	}

	static updateContent(content: string, criteria: AcceptanceCriterion[]): string {
		// Normalize to LF while computing, preserve original EOL at return
		const useCRLF = /\r\n/.test(content);
		const src = content.replace(/\r\n/g, "\n");
		const existingBodyInfo = extractExistingAcceptanceCriteriaBody(src);
		const newSection = AcceptanceCriteriaManager.formatAcceptanceCriteria(criteria, existingBodyInfo?.body);

		// Remove ALL existing Acceptance Criteria sections (legacy header blocks)
		const legacyBlockRegex = acceptanceCriteriaLegacyRegex("gi");
		const matches = Array.from(src.matchAll(legacyBlockRegex));
		let insertionIndex: number | null = null;
		const firstMatch = matches[0];
		if (firstMatch && firstMatch.index !== undefined) {
			insertionIndex = firstMatch.index;
		}

		let stripped = src.replace(legacyBlockRegex, "").trimEnd();
		// Also remove any stray marker-only blocks (defensive)
		const markerBlockRegex = new RegExp(
			`${AcceptanceCriteriaManager.BEGIN_MARKER.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")}[\\s\\S]*?${AcceptanceCriteriaManager.END_MARKER.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")}`,
			"gi",
		);
		stripped = stripped.replace(markerBlockRegex, "").trimEnd();

		if (!newSection) {
			// If criteria is empty, return stripped content (all AC sections removed)
			return stripped;
		}

		// Insert the single consolidated section
		if (insertionIndex !== null) {
			const before = stripped.slice(0, insertionIndex).trimEnd();
			const after = stripped.slice(insertionIndex);
			const out = `${before}${before ? "\n\n" : ""}${newSection}${after ? `\n\n${after}` : ""}`;
			return useCRLF ? out.replace(/\n/g, "\r\n") : out;
		}

		// No existing section found: append at end
		{
			const out = `${stripped}${stripped ? "\n\n" : ""}${newSection}`;
			return useCRLF ? out.replace(/\n/g, "\r\n") : out;
		}
	}

	private static composeAcceptanceCriteriaBody(criteria: AcceptanceCriterion[], existingBody?: string): string {
		const sorted = [...criteria].sort((a, b) => a.index - b.index);
		if (sorted.length === 0) {
			return "";
		}
		const queue = [...sorted];
		const lines: string[] = [];
		let nextNumber = 1;
		const sourceLines = existingBody ? existingBody.replace(/\r\n/g, "\n").split("\n") : [];

		if (sourceLines.length > 0) {
			for (const line of sourceLines) {
				const trimmed = line.trim();
				const checkboxMatch = trimmed.match(/^- \[([ x])\] (?:#\d+ )?(.*)$/);
				if (checkboxMatch) {
					const criterion = queue.shift();
					if (!criterion) {
						// Skip stale checklist entries when there are fewer criteria now
						continue;
					}
					const newLine = `- [${criterion.checked ? "x" : " "}] #${nextNumber++} ${criterion.text}`;
					lines.push(newLine);
				} else {
					lines.push(line);
				}
			}
		}

		while (queue.length > 0) {
			const criterion = queue.shift();
			if (!criterion) continue;
			const lastLine = lines.length > 0 ? lines[lines.length - 1] : undefined;
			if (lastLine && lastLine.trim() !== "" && !lastLine.trim().startsWith("- [")) {
				lines.push("");
			}
			lines.push(`- [${criterion.checked ? "x" : " "}] #${nextNumber++} ${criterion.text}`);
		}

		while (lines.length > 0) {
			const tail = lines[lines.length - 1];
			if (!tail || tail.trim() === "") {
				lines.pop();
			} else {
				break;
			}
		}

		return lines.join("\n");
	}

	private static parseAllBlocks(content: string): AcceptanceCriterion[] {
		const marked: AcceptanceCriterion[] = [];
		const legacy: AcceptanceCriterion[] = [];
		// Normalize to LF to make matching platform-agnostic
		const src = content.replace(/\r\n/g, "\n");
		// Find all Acceptance Criteria blocks (legacy header blocks)
		const blockRegex = acceptanceCriteriaLegacyRegex("gi");
		let m: RegExpExecArray | null = blockRegex.exec(src);
		while (m !== null) {
			const block = m[2] || "";
			if (
				block.includes(AcceptanceCriteriaManager.BEGIN_MARKER) &&
				block.includes(AcceptanceCriteriaManager.END_MARKER)
			) {
				// Capture lines within each marked pair
				const markedBlockRegex = new RegExp(
					`${AcceptanceCriteriaManager.BEGIN_MARKER.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")}([\\s\\S]*?)${AcceptanceCriteriaManager.END_MARKER.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")}`,
					"gi",
				);
				let mm: RegExpExecArray | null = markedBlockRegex.exec(block);
				while (mm !== null) {
					const inside = mm[1] || "";
					const lineRegex = /^- \[([ x])\] (?:#\d+ )?(.+)$/gm;
					let lm: RegExpExecArray | null = lineRegex.exec(inside);
					while (lm !== null) {
						marked.push({ checked: lm[1] === "x", text: String(lm?.[2] ?? ""), index: marked.length + 1 });
						lm = lineRegex.exec(inside);
					}
					mm = markedBlockRegex.exec(block);
				}
			} else {
				// Legacy: parse checkbox lines without markers
				const lineRegex = /^- \[([ x])\] (.+)$/gm;
				let lm: RegExpExecArray | null = lineRegex.exec(block);
				while (lm !== null) {
					legacy.push({ checked: lm[1] === "x", text: String(lm?.[2] ?? ""), index: legacy.length + 1 });
					lm = lineRegex.exec(block);
				}
			}
			m = blockRegex.exec(src);
		}
		// Prefer marked content when present; otherwise fall back to legacy
		return marked.length > 0 ? marked : legacy;
	}

	static parseAllCriteria(content: string): AcceptanceCriterion[] {
		const list = AcceptanceCriteriaManager.parseAllBlocks(content);
		return list.map((c, i) => ({ ...c, index: i + 1 }));
	}

	static addCriteria(content: string, newCriteria: string[]): string {
		const existing = AcceptanceCriteriaManager.parseAllCriteria(content);
		let nextIndex = existing.length > 0 ? Math.max(...existing.map((c) => c.index)) + 1 : 1;
		for (const text of newCriteria) {
			existing.push({ checked: false, text: text.trim(), index: nextIndex++ });
		}
		return AcceptanceCriteriaManager.updateContent(content, existing);
	}

	static removeCriterionByIndex(content: string, index: number): string {
		const criteria = AcceptanceCriteriaManager.parseAllCriteria(content);
		const filtered = criteria.filter((c) => c.index !== index);
		if (filtered.length === criteria.length) {
			throw new Error(`Acceptance criterion #${index} not found`);
		}
		const renumbered = filtered.map((c, i) => ({ ...c, index: i + 1 }));
		return AcceptanceCriteriaManager.updateContent(content, renumbered);
	}

	static checkCriterionByIndex(content: string, index: number, checked: boolean): string {
		const criteria = AcceptanceCriteriaManager.parseAllCriteria(content);
		const criterion = criteria.find((c) => c.index === index);
		if (!criterion) {
			throw new Error(`Acceptance criterion #${index} not found`);
		}
		criterion.checked = checked;
		return AcceptanceCriteriaManager.updateContent(content, criteria);
	}

	static migrateToStableFormat(content: string): string {
		const criteria = AcceptanceCriteriaManager.parseAllCriteria(content);
		if (criteria.length === 0) {
			return content;
		}
		if (
			content.includes(AcceptanceCriteriaManager.BEGIN_MARKER) &&
			content.includes(AcceptanceCriteriaManager.END_MARKER)
		) {
			return content;
		}
		return AcceptanceCriteriaManager.updateContent(content, criteria);
	}
}
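/*
 * Usage sketch (illustrative, not part of the original file): managing criteria
 * through the stable #-numbered format.
 *
 *   let body = AcceptanceCriteriaManager.addCriteria("", ["Parser handles CRLF", "Dates round-trip"]);
 *   body = AcceptanceCriteriaManager.checkCriterionByIndex(body, 1, true);
 *   // body now contains:
 *   //   ## Acceptance Criteria
 *   //   <!-- AC:BEGIN -->
 *   //   - [x] #1 Parser handles CRLF
 *   //   - [ ] #2 Dates round-trip
 *   //   <!-- AC:END -->
 */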
@@ -0,0 +1,31 @@

# Backlog.md MCP Implementation (MVP)

This directory exposes a minimal stdio MCP surface so local agents can work with backlog.md without duplicating business logic.

## What’s included

- `server.ts` / `createMcpServer()` – bootstraps a stdio-only server that extends `Core` and registers task and document tools (`task_*`, `document_*`) for MCP clients.
- `tasks/` – consolidated task tooling that delegates to shared Core helpers (including plan/notes/AC editing).
- `documents/` – document tooling layered on `Core`’s document helpers for list/view/create/update/search flows.
- `tools/dependency-tools.ts` – dependency helpers reusing shared builders.
- `resources/` – lightweight resource adapters for agents.
- `guidelines/mcp/` – task workflow content surfaced via MCP.

Everything routes through existing Core APIs so the MCP layer stays a protocol wrapper.

## Development workflow

```bash
# Run the stdio server from the repo
bun run cli mcp start

# Or via the globally installed CLI
backlog mcp start

# Tests
bun test src/test/mcp-*.test.ts
```

The test suite keeps to the reduced surface area and focuses on happy-path coverage for tasks, dependencies, and server bootstrap.
@@ -0,0 +1,126 @@

import type { CallToolResult } from "../types.ts";

/**
 * Base MCP error class for all MCP-related errors
 */
export class McpError extends Error {
	constructor(
		message: string,
		public code: string,
		public details?: unknown,
	) {
		super(message);
		this.name = "McpError";
	}
}

/**
 * Validation error for input validation failures
 */
export class McpValidationError extends McpError {
	constructor(message: string, validationError?: unknown) {
		super(message, "VALIDATION_ERROR", validationError);
	}
}

/**
 * Authentication error for auth failures
 */
export class McpAuthenticationError extends McpError {
	constructor(message = "Authentication required") {
		super(message, "AUTH_ERROR");
	}
}

/**
 * Connection error for transport-level failures
 */
export class McpConnectionError extends McpError {
	constructor(message: string, details?: unknown) {
		super(message, "CONNECTION_ERROR", details);
	}
}

/**
 * Internal error for unexpected failures
 */
export class McpInternalError extends McpError {
	constructor(message = "An unexpected error occurred", details?: unknown) {
		super(message, "INTERNAL_ERROR", details);
	}
}
/**
 * Formats MCP errors into standardized tool responses
 */
function buildErrorResult(code: string, message: string, details?: unknown): CallToolResult {
	const includeDetails = !!process.env.DEBUG;
	const structured = details !== undefined ? { code, details } : { code };
	return {
		content: [
			{
				type: "text",
				text: formatErrorMarkdown(code, message, details, includeDetails),
			},
		],
		isError: true,
		structuredContent: structured,
	};
}

export function handleMcpError(error: unknown): CallToolResult {
	if (error instanceof McpError) {
		return buildErrorResult(error.code, error.message, error.details);
	}

	console.error("Unexpected MCP error:", error);

	return {
		content: [
			{
				type: "text",
				text: formatErrorMarkdown("INTERNAL_ERROR", "An unexpected error occurred", error, !!process.env.DEBUG),
			},
		],
		isError: true,
		structuredContent: {
			code: "INTERNAL_ERROR",
			details: error,
		},
	};
}

/**
 * Formats successful responses in a consistent structure
 */
export function handleMcpSuccess(data: unknown): CallToolResult {
	return {
		content: [
			{
				type: "text",
				text: "OK",
			},
		],
		structuredContent: {
			success: true,
			data,
		},
	};
}

/**
 * Format error messages in markdown for consistent MCP error responses
 */
export function formatErrorMarkdown(code: string, message: string, details?: unknown, includeDetails = false): string {
	// Include details only when explicitly requested (e.g., debug mode)
	if (includeDetails && details) {
		let result = `${code}: ${message}`;

		const detailsText = typeof details === "string" ? details : JSON.stringify(details, null, 2);
		result += `\n ${detailsText}`;

		return result;
	}

	return message;
}
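/*
 * Usage sketch (illustrative, not part of the original file): a tool handler
 * funnels all failures through handleMcpError so clients always receive a
 * CallToolResult instead of a thrown exception.
 *
 *   async function handler(args: unknown): Promise<CallToolResult> {
 *     try {
 *       if (!args) throw new McpValidationError("Missing arguments");
 *       return handleMcpSuccess({ ok: true });
 *     } catch (error) {
 *       return handleMcpError(error);
 *     }
 *   }
 */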
@@ -0,0 +1,25 @@

import { MCP_INIT_REQUIRED_GUIDE } from "../../../guidelines/mcp/index.ts";
import type { McpServer } from "../../server.ts";
import type { McpResourceHandler } from "../../types.ts";

function createInitRequiredResource(): McpResourceHandler {
	return {
		uri: "backlog://init-required",
		name: "Backlog.md Not Initialized",
		description: "Instructions for initializing Backlog.md in this project",
		mimeType: "text/markdown",
		handler: async () => ({
			contents: [
				{
					uri: "backlog://init-required",
					mimeType: "text/markdown",
					text: MCP_INIT_REQUIRED_GUIDE,
				},
			],
		}),
	};
}

export function registerInitRequiredResource(server: McpServer): void {
	server.addResource(createInitRequiredResource());
}
@@ -0,0 +1,25 @@

import type { McpServer } from "../../server.ts";
import type { McpResourceHandler } from "../../types.ts";
import { WORKFLOW_GUIDES } from "../../workflow-guides.ts";

export function registerWorkflowResources(server: McpServer): void {
	for (const guide of WORKFLOW_GUIDES) {
		const resource: McpResourceHandler = {
			uri: guide.uri,
			name: guide.name,
			description: guide.description,
			mimeType: guide.mimeType,
			handler: async () => ({
				contents: [
					{
						uri: guide.uri,
						mimeType: guide.mimeType,
						text: guide.resourceText,
					},
				],
			}),
		};

		server.addResource(resource);
	}
}
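/*
 * Client-side sketch (illustrative, not part of the original file): once
 * registered, each guide is readable by URI. The client object is assumed to
 * be an MCP client connected over stdio, as in the guide examples above.
 *
 *   const overview = await client.readResource({ uri: "backlog://workflow/overview" });
 */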
@@ -0,0 +1,289 @@
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
	CallToolRequestSchema,
	GetPromptRequestSchema,
	ListPromptsRequestSchema,
	ListResourcesRequestSchema,
	ListResourceTemplatesRequestSchema,
	ListToolsRequestSchema,
	ReadResourceRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { Core } from "../core/backlog.ts";
import { getPackageName } from "../utils/app-info.ts";
import { getVersion } from "../utils/version.ts";
import { registerInitRequiredResource } from "./resources/init-required/index.ts";
import { registerWorkflowResources } from "./resources/workflow/index.ts";
import { registerDocumentTools } from "./tools/documents/index.ts";
import { registerTaskTools } from "./tools/tasks/index.ts";
import { registerWorkflowTools } from "./tools/workflow/index.ts";
import type {
	CallToolResult,
	GetPromptResult,
	ListPromptsResult,
	ListResourcesResult,
	ListResourceTemplatesResult,
	ListToolsResult,
	McpPromptHandler,
	McpResourceHandler,
	McpToolHandler,
	ReadResourceResult,
} from "./types.ts";

/**
 * Minimal MCP server implementation for stdio transport.
 *
 * The Backlog.md MCP server is intentionally local-only and exposes tools,
 * resources, and prompts through the stdio transport so that desktop editors
 * (e.g. Claude Code) can interact with a project without network exposure.
 */
const APP_NAME = getPackageName();
const APP_VERSION = await getVersion();
const INSTRUCTIONS_NORMAL =
	"At the beginning of each session, read the backlog://workflow/overview resource to understand when and how to use Backlog.md for task management. Additional detailed guides are available as resources when needed.";
const INSTRUCTIONS_FALLBACK =
	"Backlog.md is not initialized in this directory. Read the backlog://init-required resource for setup instructions.";

type ServerInitOptions = {
	debug?: boolean;
};

export class McpServer extends Core {
	private readonly server: Server;
	private transport?: StdioServerTransport;

	private readonly tools = new Map<string, McpToolHandler>();
	private readonly resources = new Map<string, McpResourceHandler>();
	private readonly prompts = new Map<string, McpPromptHandler>();

	constructor(projectRoot: string, instructions: string) {
		super(projectRoot, { enableWatchers: true });

		this.server = new Server(
			{
				name: APP_NAME,
				version: APP_VERSION,
			},
			{
				capabilities: {
					tools: { listChanged: true },
					resources: { listChanged: true },
					prompts: { listChanged: true },
				},
				instructions,
			},
		);

		this.setupHandlers();
	}

	private setupHandlers(): void {
		this.server.setRequestHandler(ListToolsRequestSchema, async () => this.listTools());
		this.server.setRequestHandler(CallToolRequestSchema, async (request) => this.callTool(request));
		this.server.setRequestHandler(ListResourcesRequestSchema, async () => this.listResources());
		this.server.setRequestHandler(ListResourceTemplatesRequestSchema, async () => this.listResourceTemplates());
		this.server.setRequestHandler(ReadResourceRequestSchema, async (request) => this.readResource(request));
		this.server.setRequestHandler(ListPromptsRequestSchema, async () => this.listPrompts());
		this.server.setRequestHandler(GetPromptRequestSchema, async (request) => this.getPrompt(request));
	}

	/**
	 * Register a tool implementation with the server.
	 */
	public addTool(tool: McpToolHandler): void {
		this.tools.set(tool.name, tool);
	}

	/**
	 * Register a resource implementation with the server.
	 */
	public addResource(resource: McpResourceHandler): void {
		this.resources.set(resource.uri, resource);
	}

	/**
	 * Register a prompt implementation with the server.
	 */
	public addPrompt(prompt: McpPromptHandler): void {
		this.prompts.set(prompt.name, prompt);
	}

	/**
	 * Connect the server to the stdio transport.
	 */
	public async connect(): Promise<void> {
		if (this.transport) {
			return;
		}

		this.transport = new StdioServerTransport();
		await this.server.connect(this.transport);
	}

	/**
	 * Start the server. The stdio transport begins handling requests as soon as
	 * it is connected, so this method exists primarily for symmetry with
	 * callers that expect an explicit start step.
	 */
	public async start(): Promise<void> {
		if (!this.transport) {
			throw new Error("MCP server not connected. Call connect() before start().");
		}
	}

	/**
	 * Stop the server and release transport resources.
	 */
	public async stop(): Promise<void> {
		await this.server.close();
		this.transport = undefined;
	}

	public getServer(): Server {
		return this.server;
	}

	// -- Internal handlers --------------------------------------------------

	protected async listTools(): Promise<ListToolsResult> {
		return {
			tools: Array.from(this.tools.values()).map((tool) => ({
				name: tool.name,
				description: tool.description,
				inputSchema: {
					type: "object",
					...tool.inputSchema,
				},
			})),
		};
	}

	protected async callTool(request: {
		params: { name: string; arguments?: Record<string, unknown> };
	}): Promise<CallToolResult> {
		const { name, arguments: args = {} } = request.params;
		const tool = this.tools.get(name);

		if (!tool) {
			throw new Error(`Tool not found: ${name}`);
		}

		return await tool.handler(args);
	}

	protected async listResources(): Promise<ListResourcesResult> {
		return {
			resources: Array.from(this.resources.values()).map((resource) => ({
				uri: resource.uri,
				name: resource.name || "Unnamed Resource",
				description: resource.description,
				mimeType: resource.mimeType,
			})),
		};
	}

	protected async listResourceTemplates(): Promise<ListResourceTemplatesResult> {
		return {
			resourceTemplates: [],
		};
	}

	protected async readResource(request: { params: { uri: string } }): Promise<ReadResourceResult> {
		const { uri } = request.params;

		// Exact match first
		let resource = this.resources.get(uri);

		// Fall back to the base URI for parameterised resources
		if (!resource) {
			const baseUri = uri.split("?")[0] || uri;
			resource = this.resources.get(baseUri);
		}

		if (!resource) {
			throw new Error(`Resource not found: ${uri}`);
		}

		return await resource.handler(uri);
	}

	protected async listPrompts(): Promise<ListPromptsResult> {
		return {
			prompts: Array.from(this.prompts.values()).map((prompt) => ({
				name: prompt.name,
				description: prompt.description,
				arguments: prompt.arguments,
			})),
		};
	}

	protected async getPrompt(request: {
		params: { name: string; arguments?: Record<string, unknown> };
	}): Promise<GetPromptResult> {
		const { name, arguments: args = {} } = request.params;
		const prompt = this.prompts.get(name);

		if (!prompt) {
			throw new Error(`Prompt not found: ${name}`);
		}

		return await prompt.handler(args);
	}

	/**
	 * Helper exposed for tests so they can call handlers directly.
	 */
	public get testInterface() {
		return {
			listTools: () => this.listTools(),
			callTool: (request: { params: { name: string; arguments?: Record<string, unknown> } }) => this.callTool(request),
			listResources: () => this.listResources(),
			listResourceTemplates: () => this.listResourceTemplates(),
			readResource: (request: { params: { uri: string } }) => this.readResource(request),
			listPrompts: () => this.listPrompts(),
			getPrompt: (request: { params: { name: string; arguments?: Record<string, unknown> } }) =>
				this.getPrompt(request),
		};
	}
}

/**
 * Factory that bootstraps a fully configured MCP server instance.
 *
 * If backlog is not initialized in the project directory, the server will start
 * successfully but only provide the backlog://init-required resource to guide
 * users to run `backlog init`.
 */
export async function createMcpServer(projectRoot: string, options: ServerInitOptions = {}): Promise<McpServer> {
	// Check the config first to determine which instructions to use
	const tempCore = new Core(projectRoot);
	await tempCore.ensureConfigLoaded();
	const config = await tempCore.filesystem.loadConfig();

	// Create the server with the appropriate instructions
	const instructions = config ? INSTRUCTIONS_NORMAL : INSTRUCTIONS_FALLBACK;
	const server = new McpServer(projectRoot, instructions);

	// Graceful fallback: if no config exists, provide only the init-required resource
	if (!config) {
		registerInitRequiredResource(server);

		if (options.debug) {
			console.error("MCP server initialised in fallback mode (backlog not initialized in this directory).");
		}

		return server;
	}

	// Normal mode: full tools and resources
	registerWorkflowResources(server);
	registerWorkflowTools(server);
	registerTaskTools(server, config);
	registerDocumentTools(server, config);

	if (options.debug) {
		console.error("MCP server initialised (stdio transport only).");
	}

	return server;
}
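A minimal lifecycle sketch (assumed wiring; the CLI entry point that owns the process is not part of this diff):

```typescript
const server = await createMcpServer(process.cwd(), { debug: !!process.env.DEBUG });
await server.connect(); // attaches the stdio transport and starts serving requests
await server.start(); // guard only; throws if connect() was skipped

process.on("SIGINT", async () => {
	await server.stop(); // closes the SDK server and releases the transport
	process.exit(0);
});
```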
@@ -0,0 +1,177 @@
import type { Document, DocumentSearchResult } from "../../../types/index.ts";
import { McpError } from "../../errors/mcp-errors.ts";
import type { McpServer } from "../../server.ts";
import type { CallToolResult } from "../../types.ts";
import { formatDocumentCallResult } from "../../utils/document-response.ts";

export type DocumentListArgs = {
	search?: string;
};

export type DocumentViewArgs = {
	id: string;
};

export type DocumentCreateArgs = {
	title: string;
	content: string;
};

export type DocumentUpdateArgs = {
	id: string;
	title?: string;
	content: string;
};

export type DocumentSearchArgs = {
	query: string;
	limit?: number;
};

export class DocumentHandlers {
	constructor(private readonly core: McpServer) {}

	private formatDocumentSummaryLine(document: Document): string {
		const metadata: string[] = [`type: ${document.type}`, `created: ${document.createdDate}`];
		if (document.updatedDate) {
			metadata.push(`updated: ${document.updatedDate}`);
		}
		if (document.tags && document.tags.length > 0) {
			metadata.push(`tags: ${document.tags.join(", ")}`);
		} else {
			metadata.push("tags: (none)");
		}
		return ` ${document.id} - ${document.title} (${metadata.join(", ")})`;
	}

	private formatScore(score: number | null): string {
		if (score === null || score === undefined) {
			return "";
		}
		const invertedScore = 1 - score;
		return ` [score ${invertedScore.toFixed(3)}]`;
	}

	private async loadDocumentOrThrow(id: string): Promise<Document> {
		const document = await this.core.getDocument(id);
		if (!document) {
			throw new McpError(`Document not found: ${id}`, "DOCUMENT_NOT_FOUND");
		}
		return document;
	}

	async listDocuments(args: DocumentListArgs = {}): Promise<CallToolResult> {
		const search = args.search?.toLowerCase();
		const documents = await this.core.filesystem.listDocuments();

		const filtered =
			search && search.length > 0
				? documents.filter((document) => {
						const haystacks = [document.id, document.title];
						return haystacks.some((value) => value.toLowerCase().includes(search));
					})
				: documents;

		if (filtered.length === 0) {
			return {
				content: [
					{
						type: "text",
						text: "No documents found.",
					},
				],
			};
		}

		const lines: string[] = ["Documents:"];
		for (const document of filtered) {
			lines.push(this.formatDocumentSummaryLine(document));
		}

		return {
			content: [
				{
					type: "text",
					text: lines.join("\n"),
				},
			],
		};
	}

	async viewDocument(args: DocumentViewArgs): Promise<CallToolResult> {
		const document = await this.loadDocumentOrThrow(args.id);
		return await formatDocumentCallResult(document);
	}

	async createDocument(args: DocumentCreateArgs): Promise<CallToolResult> {
		try {
			const document = await this.core.createDocumentWithId(args.title, args.content);
			return await formatDocumentCallResult(document, {
				summaryLines: ["Document created successfully."],
			});
		} catch (error) {
			if (error instanceof Error) {
				throw new McpError(`Failed to create document: ${error.message}`, "OPERATION_FAILED");
			}
			throw new McpError("Failed to create document.", "OPERATION_FAILED");
		}
	}

	async updateDocument(args: DocumentUpdateArgs): Promise<CallToolResult> {
		const existing = await this.loadDocumentOrThrow(args.id);
		const nextDocument = args.title ? { ...existing, title: args.title } : existing;

		try {
			await this.core.updateDocument(nextDocument, args.content);
			const refreshed = await this.core.getDocument(existing.id);
			if (!refreshed) {
				throw new McpError(`Document not found: ${args.id}`, "DOCUMENT_NOT_FOUND");
			}
			return await formatDocumentCallResult(refreshed, {
				summaryLines: ["Document updated successfully."],
			});
		} catch (error) {
			if (error instanceof Error) {
				throw new McpError(`Failed to update document: ${error.message}`, "OPERATION_FAILED");
			}
			throw new McpError("Failed to update document.", "OPERATION_FAILED");
		}
	}

	async searchDocuments(args: DocumentSearchArgs): Promise<CallToolResult> {
		const searchService = await this.core.getSearchService();
		const results = searchService.search({
			query: args.query,
			limit: args.limit,
			types: ["document"],
		});

		const documents = results.filter((result): result is DocumentSearchResult => result.type === "document");
		if (documents.length === 0) {
			return {
				content: [
					{
						type: "text",
						text: `No documents found for "${args.query}".`,
					},
				],
			};
		}

		const lines: string[] = ["Documents:"];
		for (const result of documents) {
			const { document } = result;
			const scoreText = this.formatScore(result.score);
			lines.push(` ${document.id} - ${document.title}${scoreText}`);
		}

		return {
			content: [
				{
					type: "text",
					text: lines.join("\n"),
				},
			],
		};
	}
}
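Note the inversion in `formatScore`: it assumes the underlying fuzzy index reports lower-is-better scores (a Fuse.js-style convention, not confirmed in this diff), so the displayed value reads higher-is-better. Illustrative values:

```typescript
// The displayed value is 1 - rawScore, fixed to three decimals:
(1 - 0.0).toFixed(3); // "1.000" (best match)
(1 - 0.25).toFixed(3); // "0.750"
// A null score produces an empty suffix, so the summary line shows no score.
```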
@@ -0,0 +1,94 @@
import type { BacklogConfig } from "../../../types/index.ts";
import type { McpServer } from "../../server.ts";
import type { McpToolHandler } from "../../types.ts";
import { createSimpleValidatedTool } from "../../validation/tool-wrapper.ts";
import type {
	DocumentCreateArgs,
	DocumentListArgs,
	DocumentSearchArgs,
	DocumentUpdateArgs,
	DocumentViewArgs,
} from "./handlers.ts";
import { DocumentHandlers } from "./handlers.ts";
import {
	documentCreateSchema,
	documentListSchema,
	documentSearchSchema,
	documentUpdateSchema,
	documentViewSchema,
} from "./schemas.ts";

export function registerDocumentTools(server: McpServer, _config: BacklogConfig): void {
	const handlers = new DocumentHandlers(server);

	const listDocumentsTool: McpToolHandler = createSimpleValidatedTool(
		{
			name: "document_list",
			description: "List Backlog.md documents with optional substring filtering",
			inputSchema: documentListSchema,
		},
		documentListSchema,
		async (input) => handlers.listDocuments(input as DocumentListArgs),
	);

	const viewDocumentTool: McpToolHandler = createSimpleValidatedTool(
		{
			name: "document_view",
			description: "View a Backlog.md document including metadata and markdown content",
			inputSchema: documentViewSchema,
		},
		documentViewSchema,
		async (input) => handlers.viewDocument(input as DocumentViewArgs),
	);

	const createDocumentTool: McpToolHandler = createSimpleValidatedTool(
		{
			name: "document_create",
			description: "Create a Backlog.md document using the shared ID generator",
			inputSchema: documentCreateSchema,
		},
		documentCreateSchema,
		async (input) => handlers.createDocument(input as DocumentCreateArgs),
	);

	const updateDocumentTool: McpToolHandler = createSimpleValidatedTool(
		{
			name: "document_update",
			description: "Update an existing Backlog.md document's content and optional title",
			inputSchema: documentUpdateSchema,
		},
		documentUpdateSchema,
		async (input) => handlers.updateDocument(input as DocumentUpdateArgs),
	);

	const searchDocumentTool: McpToolHandler = createSimpleValidatedTool(
		{
			name: "document_search",
			description: "Search Backlog.md documents using the shared fuzzy index",
			inputSchema: documentSearchSchema,
		},
		documentSearchSchema,
		async (input) => handlers.searchDocuments(input as DocumentSearchArgs),
	);

	server.addTool(listDocumentsTool);
	server.addTool(viewDocumentTool);
	server.addTool(createDocumentTool);
	server.addTool(updateDocumentTool);
	server.addTool(searchDocumentTool);
}

export type {
	DocumentCreateArgs,
	DocumentListArgs,
	DocumentSearchArgs,
	DocumentUpdateArgs,
	DocumentViewArgs,
} from "./handlers.ts";
export {
	documentCreateSchema,
	documentListSchema,
	documentSearchSchema,
	documentUpdateSchema,
	documentViewSchema,
} from "./schemas.ts";
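Once registered, the tools are reachable by name through the server's `callTool` handler. A minimal sketch (assumed wiring: `server` comes from `createMcpServer` in an initialized project; the document title and content are hypothetical):

```typescript
const created = await server.testInterface.callTool({
	params: {
		name: "document_create",
		arguments: { title: "Release Notes", content: "# Release Notes\n..." }, // hypothetical document
	},
});

const hits = await server.testInterface.callTool({
	params: { name: "document_search", arguments: { query: "release", limit: 5 } },
});
```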