Increase the Ollama timeout from a hardcoded 120s to a configurable default of 600s. Add optional AI Orchestrator integration for RunPod GPU acceleration, with automatic fallback to direct Ollama when the orchestrator is unavailable.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- meeting-intelligence
- .env.example
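The behavior described above — an environment-configurable timeout defaulting to 600s, and orchestrator-first dispatch that falls back to direct Ollama on any failure — could be sketched as follows. This is a minimal illustration, not the actual implementation: the `OLLAMA_TIMEOUT` variable name and the callable-based dispatch are assumptions, and the backends are injected as plain functions so the fallback logic is visible without real HTTP calls.

```python
import os

# Hypothetical env var name; 600 s matches the new configurable default,
# replacing the previously hardcoded 120 s.
DEFAULT_TIMEOUT = 600.0

def get_timeout(env=os.environ) -> float:
    """Read the Ollama request timeout, falling back to the 600 s default."""
    return float(env.get("OLLAMA_TIMEOUT", DEFAULT_TIMEOUT))

def generate(prompt, orchestrator=None, ollama=None):
    """Prefer the AI Orchestrator (e.g. RunPod GPU); on any failure,
    fall back to direct Ollama."""
    if orchestrator is not None:
        try:
            return orchestrator(prompt)
        except Exception:
            pass  # orchestrator unavailable: fall through to direct Ollama
    return ollama(prompt)
```

With this shape, dropping the orchestrator (or having it raise a connection error) routes the request straight to the Ollama backend, so callers never see the outage.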