Jeff Emmett | 30e2219551
feat: add Ollama private AI integration with model selection
- Add Ollama as the priority AI provider (FREE, self-hosted)
- Add model selection UI in Settings dialog
- Support for multiple models: Llama 3.1 70B, Devstral, Qwen Coder, etc.
- Ollama server configured at http://159.195.32.209:11434
- Model dropdown shows quality vs. speed tradeoffs
- Falls back to RunPod/cloud providers when Ollama is unavailable (see the sketch below)
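A minimal sketch of the priority/fallback behavior described above, assuming a simple non-streaming call to Ollama's /api/generate endpoint. The function names (isOllamaAvailable, completeWithOllama, completeWithRunPod) are illustrative and not taken from the repo:

```typescript
// Sketch of "Ollama first, cloud fallback" provider selection.
// Only the server URL and fallback behavior come from the commit message.

const OLLAMA_BASE_URL = "http://159.195.32.209:11434";

// Probe the Ollama server; any network error or timeout counts as unavailable.
async function isOllamaAvailable(): Promise<boolean> {
  try {
    const res = await fetch(`${OLLAMA_BASE_URL}/api/tags`, {
      signal: AbortSignal.timeout(2000),
    });
    return res.ok;
  } catch {
    return false;
  }
}

// Ollama's /api/generate (non-streaming) returns { response: string, ... }.
async function completeWithOllama(model: string, prompt: string): Promise<string> {
  const res = await fetch(`${OLLAMA_BASE_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  const data = await res.json();
  return data.response;
}

// Hypothetical stand-in for the RunPod/cloud provider path.
async function completeWithRunPod(prompt: string): Promise<string> {
  throw new Error("cloud fallback not shown in this sketch");
}

// Ollama first (free, self-hosted); cloud providers only when it is unreachable.
export async function complete(model: string, prompt: string): Promise<string> {
  if (await isOllamaAvailable()) {
    return completeWithOllama(model, prompt);
  }
  return completeWithRunPod(prompt);
}
```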
Models available (see the dropdown sketch after this list):
- llama3.1:70b (Best quality, ~7s)
- devstral (Best for coding agents)
- qwen2.5-coder:7b (Fast coding)
- llama3.1:8b (Balanced)
- llama3.2:3b (Fastest)
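A possible shape for the Settings dropdown options, assuming a simple id/label/tradeoff record. Only the model tags and tradeoff notes come from the commit; the interface and labels are illustrative:

```typescript
// Dropdown entries for the model selection UI; tradeoff text mirrors the list above.
interface ModelOption {
  id: string;       // Ollama model tag passed to the API
  label: string;    // shown in the Settings dialog
  tradeoff: string; // quality vs. speed note
}

export const OLLAMA_MODELS: ModelOption[] = [
  { id: "llama3.1:70b",     label: "Llama 3.1 70B",     tradeoff: "Best quality, ~7s" },
  { id: "devstral",         label: "Devstral",          tradeoff: "Best for coding agents" },
  { id: "qwen2.5-coder:7b", label: "Qwen 2.5 Coder 7B", tradeoff: "Fast coding" },
  { id: "llama3.1:8b",      label: "Llama 3.1 8B",      tradeoff: "Balanced" },
  { id: "llama3.2:3b",      label: "Llama 3.2 3B",      tradeoff: "Fastest" },
];
```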
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-26 14:47:07 -08:00