Latest commit 08be7716f9 by Jeff Emmett, 2026-02-17 01:12:04 -07:00

Aggressively optimize Ollama CPU inference speed
- Warm up both models on startup with keep_alive=24h, so requests never hit a cold start (see the Python sketch below)
- Use 16 threads for inference (the server has 20 cores)
- Reduce the context window to 1024 tokens and cap output at 256 tokens
- Reuse a single persistent httpx client for embedding calls, avoiding a TCP handshake per request
- Trim RAG chunks to 300 characters and conversation history to the last 4 messages
- Shorten the system prompt and context wrapper

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Path                Last commit message                                Date
app                 Aggressively optimize Ollama CPU inference speed   2026-02-17 01:12:04 -07:00
backlog             Initialize backlog and record deployment setup     2026-02-16 18:51:00 -07:00
.env.example        Initial commit: Erowid conversational bot          2026-02-17 01:19:49 +00:00
.gitignore          Initial commit: Erowid conversational bot          2026-02-17 01:19:49 +00:00
Dockerfile          Initial commit: Erowid conversational bot          2026-02-17 01:19:49 +00:00
docker-compose.yml  Update Traefik host to erowid.psilo-cyber.net      2026-02-16 18:38:58 -07:00
requirements.txt    Initial commit: Erowid conversational bot          2026-02-17 01:19:49 +00:00