clip-forge/backend/app
Jeff Emmett 362fe1e860 feat: add cloud AI inference support (Gemini/OpenAI-compatible)
CPU-based Ollama inference on Netcup is too slow due to server memory
pressure. Add OpenAI-compatible API support so we can use Gemini Flash
or other cloud APIs for clip analysis. Also increase transcript sample
size to 20K chars since cloud APIs handle it easily.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 00:44:13 +00:00
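The change described above can be sketched roughly as follows. This is a minimal, hypothetical illustration of the approach the commit message describes (an OpenAI-compatible request payload with the transcript sample capped at 20K chars); the names `build_analysis_request`, `TRANSCRIPT_SAMPLE_CHARS`, and the model string are illustrative assumptions, not taken from the ClipForge codebase.

```python
# Illustrative sketch of the cloud-inference path: any OpenAI-compatible
# endpoint (Gemini's OpenAI-compat layer, OpenAI itself, etc.) accepts the
# same chat-completions payload, so the backend only needs to swap base URL
# and API key. All names here are hypothetical, not from ClipForge.

TRANSCRIPT_SAMPLE_CHARS = 20_000  # raised from a smaller CPU-friendly limit

def build_analysis_request(transcript: str,
                           model: str = "gemini-2.0-flash") -> dict:
    """Build a chat-completions payload for an OpenAI-compatible API,
    truncating the transcript to the configured sample size."""
    sample = transcript[:TRANSCRIPT_SAMPLE_CHARS]
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Identify the most clip-worthy moments in this transcript."},
            {"role": "user", "content": sample},
        ],
    }

payload = build_analysis_request("word " * 10_000)  # 50K chars in
print(len(payload["messages"][1]["content"]))       # capped at 20000
```

The same payload shape works against either provider; only the endpoint base URL and credentials differ, which is what makes the "OpenAI-compatible" abstraction attractive here.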
api feat: ClipForge Phase 1 - core pipeline MVP 2026-02-08 12:27:43 +00:00
services feat: add cloud AI inference support (Gemini/OpenAI-compatible) 2026-02-10 00:44:13 +00:00
workers fix: add web_creator client fallback, friendlier YouTube bot error 2026-02-09 18:41:19 +00:00
__init__.py feat: ClipForge Phase 1 - core pipeline MVP 2026-02-08 12:27:43 +00:00
config.py feat: add cloud AI inference support (Gemini/OpenAI-compatible) 2026-02-10 00:44:13 +00:00
database.py feat: ClipForge Phase 1 - core pipeline MVP 2026-02-08 12:27:43 +00:00
frontend.py fix: use raw string for HTML template to preserve JS backslashes 2026-02-09 11:36:48 +00:00
main.py feat: add user-friendly frontend with upload, progress, and clip gallery 2026-02-09 11:27:29 +00:00
models.py feat: ClipForge Phase 1 - core pipeline MVP 2026-02-08 12:27:43 +00:00
schemas.py feat: ClipForge Phase 1 - core pipeline MVP 2026-02-08 12:27:43 +00:00
worker.py fix: use arq CLI to start worker instead of python -m 2026-02-08 12:33:02 +00:00