clip-forge/backend
Jeff Emmett 362fe1e860 feat: add cloud AI inference support (Gemini/OpenAI-compatible)
CPU-based Ollama inference on Netcup is too slow due to server memory
pressure. Add OpenAI-compatible API support so we can use Gemini Flash
or other cloud APIs for clip analysis. Also increase transcript sample
size to 20K chars since cloud APIs handle it easily.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 00:44:13 +00:00
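The commit above describes routing clip analysis to an OpenAI-compatible cloud endpoint and raising the transcript sample to 20K characters. A minimal sketch of how that might look, assuming the official `openai` Python client and Gemini's OpenAI-compatibility base URL; the function names, env vars, and model choice here are illustrative, not the repo's actual code:

```python
# Hypothetical sketch of cloud-inference wiring; names and env vars are
# assumptions, not taken from the clip-forge codebase.
import os

TRANSCRIPT_SAMPLE_CHARS = 20_000  # raised since cloud models handle large prompts


def sample_transcript(transcript: str, limit: int = TRANSCRIPT_SAMPLE_CHARS) -> str:
    """Trim the transcript so the clip-analysis prompt stays a bounded size."""
    return transcript[:limit]


def build_client():
    # The `openai` package accepts a custom base_url, which is how
    # OpenAI-compatible providers (e.g. Gemini's compatibility endpoint)
    # are reached without changing the calling code.
    from openai import OpenAI

    return OpenAI(
        base_url=os.environ.get(
            "AI_BASE_URL",
            "https://generativelanguage.googleapis.com/v1beta/openai/",
        ),
        api_key=os.environ["AI_API_KEY"],  # required; no local fallback here
    )
```

Pointing `base_url` at a different provider (or back at a local Ollama server, which also speaks the OpenAI API) is then a config change rather than a code change.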
app feat: add cloud AI inference support (Gemini/OpenAI-compatible) 2026-02-10 00:44:13 +00:00
Dockerfile feat: add deno runtime for yt-dlp YouTube JS extraction 2026-02-09 18:35:11 +00:00
requirements.txt fix: update yt-dlp to latest for YouTube bot detection bypass 2026-02-08 12:40:59 +00:00