clip-forge/backend/app/services
commit 362fe1e860
Author: Jeff Emmett
Date:   2026-02-10 00:44:13 +00:00

    feat: add cloud AI inference support (Gemini/OpenAI-compatible)

    CPU-based Ollama inference on Netcup is too slow due to server memory
    pressure. Add OpenAI-compatible API support so we can use Gemini Flash
    or other cloud APIs for clip analysis. Also increase transcript sample
    size to 20K chars since cloud APIs handle it easily.

    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
__init__.py feat: ClipForge Phase 1 - core pipeline MVP 2026-02-08 12:27:43 +00:00
ai_analysis.py feat: add cloud AI inference support (Gemini/OpenAI-compatible) 2026-02-10 00:44:13 +00:00
clip_extraction.py feat: ClipForge Phase 1 - core pipeline MVP 2026-02-08 12:27:43 +00:00
download.py fix: add web_creator client fallback, friendlier YouTube bot error 2026-02-09 18:41:19 +00:00
transcription.py feat: ClipForge Phase 1 - core pipeline MVP 2026-02-08 12:27:43 +00:00
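The latest commit describes pointing clip analysis at an OpenAI-compatible cloud endpoint (such as Gemini Flash) instead of local Ollama, and raising the transcript sample to 20K characters. A minimal sketch of what that could look like in `ai_analysis.py`, using only the standard library; the endpoint URL, model name, prompt, and the `analyze_clips` / `sample_transcript` helpers are assumptions for illustration, not code from the repository:

```python
# Hedged sketch: calling an OpenAI-compatible chat-completions endpoint
# for clip analysis. Base URL and model name below are assumptions
# (Gemini's OpenAI-compatible endpoint); any compatible provider works.
import json
import urllib.request

TRANSCRIPT_SAMPLE_CHARS = 20_000  # raised per the commit: cloud APIs handle it easily


def sample_transcript(transcript: str, limit: int = TRANSCRIPT_SAMPLE_CHARS) -> str:
    """Trim the transcript to the portion the cloud model will see."""
    return transcript[:limit]


def analyze_clips(
    transcript: str,
    api_key: str,
    base_url: str = "https://generativelanguage.googleapis.com/v1beta/openai",
    model: str = "gemini-2.0-flash",
) -> dict:
    """POST a chat-completion request to any OpenAI-compatible API."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "Identify highlight-worthy clips."},
            {"role": "user", "content": sample_transcript(transcript)},
        ],
    }
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the request shape is the standard chat-completions format, swapping providers is just a matter of changing `base_url`, `model`, and the API key.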