erowid-bot/app
Jeff Emmett 3215283f97 Speed up bot: use llama3.2:1b, reduce context, limit tokens
- Switch default model from llama3.1:8b to llama3.2:1b (2x faster on CPU)
- Limit Ollama context to 2048 tokens and max output to 512 tokens
- Reduce retrieval chunks from 4 to 3, chunk content from 800 to 500 chars
- Trim conversation history from 10 to 6 messages
- Shorten system prompt to reduce input tokens

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-16 19:44:04 -07:00
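The tuned settings described in the commit message above can be sketched as a config module. This is a hypothetical illustration, not the repo's actual `config.py`: the constant names and the `ollama_options()` helper are assumptions, though `num_ctx` and `num_predict` are real Ollama request options.

```python
# Hypothetical sketch of the speed-up settings from the commit message.
# Constant names and the helper are illustrative assumptions; only the
# values and the Ollama option keys (num_ctx, num_predict) come from
# the commit description and Ollama's documented API.

OLLAMA_MODEL = "llama3.2:1b"   # was "llama3.1:8b"; ~2x faster on CPU
NUM_CTX = 2048                  # Ollama context window, in tokens
NUM_PREDICT = 512               # cap on generated output tokens
TOP_K_CHUNKS = 3                # retrieval chunks per query (was 4)
CHUNK_CHARS = 500               # chars kept per chunk (was 800)
HISTORY_MESSAGES = 6            # conversation messages kept (was 10)

def ollama_options() -> dict:
    """Options dict to pass alongside an Ollama chat/generate request."""
    return {"num_ctx": NUM_CTX, "num_predict": NUM_PREDICT}
```

A smaller model plus a tighter context window cuts both prompt-processing and generation time on CPU; trimming retrieval chunks and history keeps the prompt under the reduced `num_ctx` budget.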
scraper Initial commit: Erowid conversational bot 2026-02-17 01:19:49 +00:00
static Initial commit: Erowid conversational bot 2026-02-17 01:19:49 +00:00
__init__.py Initial commit: Erowid conversational bot 2026-02-17 01:19:49 +00:00
config.py Speed up bot: use llama3.2:1b, reduce context, limit tokens 2026-02-16 19:44:04 -07:00
database.py Initial commit: Erowid conversational bot 2026-02-17 01:19:49 +00:00
embeddings.py Initial commit: Erowid conversational bot 2026-02-17 01:19:49 +00:00
llm.py Speed up bot: use llama3.2:1b, reduce context, limit tokens 2026-02-16 19:44:04 -07:00
main.py Initial commit: Erowid conversational bot 2026-02-17 01:19:49 +00:00
models.py Initial commit: Erowid conversational bot 2026-02-17 01:19:49 +00:00
rag.py Speed up bot: use llama3.2:1b, reduce context, limit tokens 2026-02-16 19:44:04 -07:00