Self-hosted NotebookLM alternative - deployment config for Netcup RS 8000

Open Notebook - Netcup RS 8000 Deployment

Self-hosted NotebookLM alternative integrated with the AI orchestrator stack.

Architecture

Open Notebook
├── Frontend (Next.js) → port 8502 → Traefik → notebook.jeffemmett.com
├── API (FastAPI) → port 5055
├── Database (SurrealDB) → embedded
└── AI Providers:
    ├── LLM → Ollama (local, FREE)
    ├── Embeddings → Ollama (local, FREE)
    ├── STT → Groq/OpenAI (cloud)
    └── TTS → ElevenLabs/OpenAI (cloud, for podcasts)
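The frontend-to-Traefik hop in the diagram is typically wired with labels on the frontend service in docker-compose.yml. A minimal sketch, assuming the Traefik v2 Docker provider; the router/service name `notebook` is illustrative, and the actual labels live in this repo's docker-compose.yml:

```yaml
services:
  open-notebook:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.notebook.rule=Host(`notebook.jeffemmett.com`)"
      - "traefik.http.services.notebook.loadbalancer.server.port=8502"
```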

Quick Deploy

# 1. SSH to Netcup
ssh netcup

# 2. Clone/copy the deployment files
cd /opt/websites
git clone https://gitea.jeffemmett.com/jeff/open-notebook.git
# OR copy files manually
mkdir -p /opt/websites/open-notebook
cd /opt/websites/open-notebook

# 3. Pull required Ollama models
docker exec ollama ollama pull llama3.2:3b        # Fast LLM
docker exec ollama ollama pull llama3.1:8b        # Better LLM
docker exec ollama ollama pull nomic-embed-text   # Embeddings

# 4. Edit docker.env with your API keys (optional)
nano docker.env

# 5. Deploy
docker compose up -d

# 6. Verify
docker logs -f open-notebook

Configure DNS (Cloudflare Tunnel)

Option A: Add to existing tunnel config

ssh netcup
nano /root/cloudflared/config.yml

Add:

- hostname: notebook.jeffemmett.com
  service: http://localhost:80

Restart cloudflared:

docker restart cloudflared

Option B: Cloudflare Dashboard

  1. Go to Cloudflare Zero Trust → Access → Tunnels
  2. Select your tunnel → Public Hostnames
  3. Add public hostname notebook.jeffemmett.com with service http://localhost:80

DNS Records (if not using wildcard)

In Cloudflare DNS, add CNAME:

  • Type: CNAME
  • Name: notebook
  • Target: a838e9dc-0af5-4212-8af2-6864eb15e1b5.cfargotunnel.com
  • Proxy: Enabled
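If you prefer the API over the dashboard, the same CNAME can be created with a single curl call. A sketch, assuming Cloudflare's v4 DNS records endpoint; ZONE_ID and API_TOKEN are placeholders for your own zone ID and a token with DNS edit permission:

```shell
ZONE_ID=your_zone_id      # placeholder: from the Cloudflare zone overview
API_TOKEN=your_api_token  # placeholder: needs Zone.DNS edit permission
curl -s -X POST "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data '{"type":"CNAME","name":"notebook","content":"a838e9dc-0af5-4212-8af2-6864eb15e1b5.cfargotunnel.com","proxied":true}'
```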

AI Provider Configuration

Local (FREE) - Already configured

| Feature    | Provider | Model                    | Cost |
|------------|----------|--------------------------|------|
| LLM        | Ollama   | llama3.2:3b, llama3.1:8b | FREE |
| Embeddings | Ollama   | nomic-embed-text         | FREE |

Cloud (for premium features)

| Feature   | Recommended Provider | Notes                             |
|-----------|----------------------|-----------------------------------|
| STT       | Groq (free tier)     | Fast Whisper, 100 hrs/month free  |
| TTS       | ElevenLabs           | Best voice quality for podcasts   |
| TTS (alt) | OpenAI               | Cheaper, good quality             |
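A Groq key can be smoke-tested before wiring it into docker.env by hitting Groq's OpenAI-compatible transcription endpoint directly. A sketch: the model name whisper-large-v3 and sample.mp3 are assumptions, so check Groq's current model list before relying on it:

```shell
curl -s https://api.groq.com/openai/v1/audio/transcriptions \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -F model=whisper-large-v3 \
  -F file=@sample.mp3
```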

Adding API Keys

Edit docker.env:

# For Speech-to-Text (transcription)
GROQ_API_KEY=gsk_your_key_here

# For Text-to-Speech (podcasts)
ELEVENLABS_API_KEY=your_key_here
# OR
OPENAI_API_KEY=sk-your_key_here

Then restart:

docker compose restart
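To confirm the container actually picked up the new keys after the restart, you can list them from inside it. A sketch, assuming the image ships printenv (as most Debian/Alpine-based images do):

```shell
docker exec open-notebook printenv \
  | grep -E '^(GROQ|ELEVENLABS|OPENAI)_API_KEY=' \
  | sed 's/=.*/=***/'   # mask the values before sharing logs
```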

Useful Commands

# View logs
docker logs -f open-notebook

# Restart
docker compose restart

# Update to latest version
docker compose pull
docker compose up -d

# Check Ollama models
docker exec ollama ollama list

# Pull new Ollama model
docker exec ollama ollama pull mistral:7b

# Backup data
tar -czvf notebook-backup.tar.gz notebook_data surreal_data

Accessing Open Notebook

Once the stack is up and DNS is configured, the web UI is available at https://notebook.jeffemmett.com (the frontend on port 8502, routed through Traefik). The FastAPI backend listens on port 5055 inside the Docker network.

Integration with AI Orchestrator

The Open Notebook instance connects to the same Ollama service used by the AI orchestrator, sharing:

  • Model cache (no duplicate downloads)
  • Compute resources
  • Network (ai-orchestrator_ai-internal)

For advanced routing (e.g., GPU-accelerated inference via RunPod), configure the AI orchestrator to expose OpenAI-compatible endpoints.
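Note that Ollama itself already exposes an OpenAI-compatible API under /v1, so any service on the shared network can use OpenAI-style clients against it without extra configuration. A sketch; the model name assumes llama3.2:3b from the pull step above:

```shell
curl -s http://ollama:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2:3b", "messages": [{"role": "user", "content": "Say hello"}]}'
```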

Troubleshooting

Container won't start:

docker logs open-notebook
# Check if ports are in use
netstat -tlnp | grep -E '8502|5055'

Can't connect to Ollama:

# Verify network connectivity
docker exec open-notebook curl http://ollama:11434/api/tags
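If that curl fails, check that both containers are attached to the shared network (ai-orchestrator_ai-internal, per the integration section above):

```shell
# List containers attached to the shared network
docker network inspect ai-orchestrator_ai-internal \
  --format '{{range .Containers}}{{.Name}} {{end}}'
# If open-notebook is missing, attach it
docker network connect ai-orchestrator_ai-internal open-notebook
```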

Database issues:

# Reset database (CAUTION: loses data)
docker compose down
rm -rf surreal_data
docker compose up -d