Commit Graph

4 Commits

Author SHA1 Message Date
Jeff Emmett 30e2219551 feat: add Ollama private AI integration with model selection 2025-11-26 14:47:07 -08:00
- Add Ollama as priority AI provider (FREE, self-hosted)
- Add model selection UI in Settings dialog
- Add support for multiple models: Llama 3.1 70B, Devstral, Qwen Coder, etc.
- Ollama server configured at http://159.195.32.209:11434
- Models dropdown shows quality vs speed tradeoffs
- Falls back to RunPod/cloud providers when Ollama unavailable

Models available:
- llama3.1:70b (Best quality, ~7s)
- devstral (Best for coding agents)
- qwen2.5-coder:7b (Fast coding)
- llama3.1:8b (Balanced)
- llama3.2:3b (Fastest)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Jeff Emmett de59c4a726 pin object, fix fathom, and a bunch of other things 2025-11-11 22:32:36 -08:00
Jeff Emmett 8d5b41f530 update typescript errors for vercel 2025-11-10 11:19:24 -08:00
Jeff Emmett e727deea19 everything working in dev 2025-11-10 11:06:13 -08:00
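
The first commit (30e2219551) describes an Ollama-first provider with a model dropdown and a fallback to RunPod/cloud providers when the Ollama server at http://159.195.32.209:11434 is unreachable. Below is a minimal TypeScript sketch of that flow under stated assumptions: the standard Ollama REST endpoints `/api/tags` and `/api/generate` are used, but all identifiers (`OLLAMA_MODELS`, `ollamaAvailable`, `generate`, `callCloudProvider`) are illustrative names, not the repository's actual code.

```typescript
// Sketch only: hypothetical provider selection preferring the self-hosted
// Ollama server and falling back to a cloud provider when it is unavailable.
const OLLAMA_BASE_URL = "http://159.195.32.209:11434";

// Models listed in the commit message, with their quality/speed notes,
// as they might appear in a Settings dropdown.
const OLLAMA_MODELS = [
  { id: "llama3.1:70b", label: "Llama 3.1 70B (best quality, ~7s)" },
  { id: "devstral", label: "Devstral (best for coding agents)" },
  { id: "qwen2.5-coder:7b", label: "Qwen 2.5 Coder 7B (fast coding)" },
  { id: "llama3.1:8b", label: "Llama 3.1 8B (balanced)" },
  { id: "llama3.2:3b", label: "Llama 3.2 3B (fastest)" },
];

// Cheap health probe: list the server's installed models.
async function ollamaAvailable(): Promise<boolean> {
  try {
    const res = await fetch(`${OLLAMA_BASE_URL}/api/tags`, {
      signal: AbortSignal.timeout(2000),
    });
    return res.ok;
  } catch {
    return false;
  }
}

// Generate a completion with the selected model, preferring Ollama and
// falling back to the (hypothetical) cloud path when it is unreachable.
async function generate(prompt: string, model = "llama3.1:8b"): Promise<string> {
  if (await ollamaAvailable()) {
    const res = await fetch(`${OLLAMA_BASE_URL}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, prompt, stream: false }),
    });
    const data = await res.json();
    return data.response;
  }
  return callCloudProvider(prompt);
}

// Placeholder for the RunPod/cloud fallback; not part of this sketch.
async function callCloudProvider(prompt: string): Promise<string> {
  throw new Error("cloud fallback not implemented in this sketch");
}
```

Probing `/api/tags` before calling `/api/generate` is one way to realize the "falls back to RunPod/cloud providers when Ollama unavailable" behavior the commit describes; the actual implementation in the repository may differ.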