# Meeting Intelligence System
A fully self-hosted, zero-cost meeting intelligence system for Jeffsi Meet that provides:
- Automatic meeting recording via Jibri
- Local transcription via whisper.cpp (CPU-only)
- Speaker diarization (who said what)
- AI-powered summaries via Ollama
- Searchable meeting archive with dashboard
## Architecture
┌─────────────────────────────────────────────────────────────────────┐
│ Netcup RS 8000 (Backend) │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────────┐ │
│ │ Jibri │───▶│ Whisper │───▶│ AI Processor │ │
│ │ Recording │ │ Transcriber │ │ (Ollama + Summarizer) │ │
│ │ Container │ │ Service │ │ │ │
│ └─────────────┘ └─────────────┘ └─────────────────────────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ PostgreSQL + pgvector │ │
│ └─────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────┘
## Components
| Service | Port | Description |
|---|---|---|
| PostgreSQL | 5432 | Database with pgvector for semantic search |
| Redis | 6379 | Job queue for async processing |
| Transcriber | 8001 | whisper.cpp + speaker diarization |
| API | 8000 | REST API for meetings, transcripts, search |
| Jibri | - | Recording service (joins meetings as hidden participant) |
## Deployment

### Prerequisites
- Docker and Docker Compose installed
- Ollama running on the host (for AI summaries)
- Jeffsi Meet configured with recording enabled
### Setup

1. Copy environment file: `cp .env.example .env`
2. Edit `.env` with your configuration: `vim .env`
3. Create storage directories: `sudo mkdir -p /opt/meetings/{recordings,audio}` and `sudo chown -R 1000:1000 /opt/meetings`
4. Start services: `docker compose up -d`
5. Check logs: `docker compose logs -f` (a quick API sanity check is sketched below)
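Once the containers are up, a minimal way to confirm the stack is healthy is to query the API directly. This is a sketch that assumes the API publishes port 8000 on the host, as listed in the Components table; the response shape is not documented here, so it only prints the status code and raw JSON.

```python
# Minimal post-deploy check: confirm the API responds on the host.
import requests

resp = requests.get("http://localhost:8000/meetings", timeout=10)
print(resp.status_code)  # expect 200 once the stack is healthy
print(resp.json())       # likely an empty list right after first start
```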
## API Endpoints

Base URL: `https://meet.jeffemmett.com/api/intelligence`
### Meetings

- `GET /meetings` - List all meetings
- `GET /meetings/{id}` - Get meeting details
- `DELETE /meetings/{id}` - Delete meeting
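A usage sketch with Python's `requests`; the endpoints are the ones listed above, but the response field names (`id`, `title`) are assumptions used only for illustration.

```python
import requests

BASE = "https://meet.jeffemmett.com/api/intelligence"

meetings = requests.get(f"{BASE}/meetings", timeout=10).json()
for m in meetings:
    # "id" and "title" are assumed field names, shown for illustration only
    print(m.get("id"), m.get("title"))

if meetings:
    detail = requests.get(f"{BASE}/meetings/{meetings[0]['id']}", timeout=10).json()
    print(detail)
```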
### Transcripts

- `GET /meetings/{id}/transcript` - Get full transcript
- `GET /meetings/{id}/transcript/text` - Get as plain text
- `GET /meetings/{id}/speakers` - Get speaker statistics
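For example (a sketch; `meeting_id` is a placeholder, and the speaker-statistics response shape is not documented here):

```python
import requests

BASE = "https://meet.jeffemmett.com/api/intelligence"
meeting_id = "..."  # replace with a real meeting id

# Plain-text transcript (returned as text rather than JSON)
text = requests.get(f"{BASE}/meetings/{meeting_id}/transcript/text", timeout=30).text
print(text[:500])

# Per-speaker statistics (JSON)
print(requests.get(f"{BASE}/meetings/{meeting_id}/speakers", timeout=30).json())
```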
### Summaries

- `GET /meetings/{id}/summary` - Get AI summary
- `POST /meetings/{id}/summary` - Generate summary
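A sketch of generating and then fetching a summary; it assumes the POST needs no request body, and since Ollama runs locally, generation can take a while, so the GET may need to be retried.

```python
import requests

BASE = "https://meet.jeffemmett.com/api/intelligence"
meeting_id = "..."  # replace with a real meeting id

# Trigger summary generation (assumed to take no request body)
requests.post(f"{BASE}/meetings/{meeting_id}/summary", timeout=300)

# Fetch the result; retry if generation is still in progress
print(requests.get(f"{BASE}/meetings/{meeting_id}/summary", timeout=30).json())
```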
### Search

- `POST /search` - Search transcripts (text + semantic)
- `GET /search/suggest` - Get search suggestions
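A hedged sketch of a search call; the request body schema is not documented in this README, so the `query` and `semantic` fields below are assumptions.

```python
import requests

BASE = "https://meet.jeffemmett.com/api/intelligence"

# NOTE: "query" and "semantic" are assumed field names for illustration;
# check the api/ service for the actual search request schema.
payload = {"query": "quarterly budget", "semantic": True}
results = requests.post(f"{BASE}/search", json=payload, timeout=30).json()
print(results)
```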
### Export

- `GET /meetings/{id}/export?format=markdown` - Export as Markdown
- `GET /meetings/{id}/export?format=json` - Export as JSON
- `GET /meetings/{id}/export?format=pdf` - Export as PDF
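A sketch of downloading an export to disk; the `format` query parameter is the one listed above, and the output filename is illustrative.

```python
import requests

BASE = "https://meet.jeffemmett.com/api/intelligence"
meeting_id = "..."  # replace with a real meeting id

resp = requests.get(f"{BASE}/meetings/{meeting_id}/export",
                    params={"format": "pdf"}, timeout=60)
with open(f"meeting-{meeting_id}.pdf", "wb") as fh:
    fh.write(resp.content)  # exports are saved as raw bytes
```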
### Webhooks

- `POST /webhooks/recording-complete` - Jibri recording callback
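Normally Jibri's finalize script calls this endpoint, but it can also be triggered by hand to reprocess a recording. The payload fields below are assumptions for illustration; check `jibri/config` and the `api/` webhook handler for the actual contract.

```python
import requests

BASE = "https://meet.jeffemmett.com/api/intelligence"

# NOTE: field names are assumed for illustration; verify against the webhook handler.
payload = {
    "room_name": "weekly-sync",
    "recording_path": "/opt/meetings/recordings/weekly-sync.mp4",
}
requests.post(f"{BASE}/webhooks/recording-complete", json=payload, timeout=30)
```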
## Processing Pipeline

1. **Recording** - Jibri joins the meeting and records
2. **Webhook** - Jibri calls `/webhooks/recording-complete`
3. **Audio Extraction** - FFmpeg extracts audio from the video (sketched below)
4. **Transcription** - whisper.cpp transcribes the audio
5. **Diarization** - resemblyzer identifies speakers
6. **Embedding** - Generate vector embeddings for search
7. **Summary** - Ollama generates AI summary
8. **Ready** - Meeting available in dashboard
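A minimal sketch of the audio-extraction step (step 3), assuming the transcriber shells out to FFmpeg and feeds whisper.cpp 16 kHz mono 16-bit WAV (its usual input format). Paths and the helper name are illustrative, not the service's actual code.

```python
import subprocess
from pathlib import Path

def extract_audio(video_path: str, audio_dir: str = "/opt/meetings/audio") -> Path:
    """Extract 16 kHz mono WAV from a recording so whisper.cpp can transcribe it."""
    out = Path(audio_dir) / (Path(video_path).stem + ".wav")
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", video_path,      # input recording from Jibri
            "-vn",                 # drop the video stream
            "-ac", "1",            # mono
            "-ar", "16000",        # 16 kHz sample rate expected by whisper.cpp
            "-c:a", "pcm_s16le",   # 16-bit PCM WAV
            str(out),
        ],
        check=True,
    )
    return out

# Example: extract_audio("/opt/meetings/recordings/weekly-sync.mp4")
```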
## Resource Usage
| Service | CPU | RAM | Storage |
|---|---|---|---|
| Transcriber | 8 cores | 12GB | 5GB (models) |
| API | 1 core | 2GB | - |
| PostgreSQL | 2 cores | 4GB | ~50GB |
| Jibri | 2 cores | 4GB | - |
| Redis | 0.5 cores | 512MB | - |
## Troubleshooting

### Transcription is slow

- Check CPU usage: `docker stats meeting-intelligence-transcriber`
- Increase `WHISPER_THREADS` in docker-compose.yml
- Consider using the `tiny` model for faster (less accurate) transcription
### No summary generated

- Check Ollama is running: `curl http://localhost:11434/api/tags`
- Check logs: `docker compose logs api`
- Verify model is available: `ollama list`
### Recording not starting

- Check Jibri logs: `docker compose logs jibri`
- Verify XMPP credentials in `.env`
- Check Prosody recorder virtual host configuration
## Cost Analysis
| Component | Monthly Cost |
|---|---|
| Jibri recording | $0 (local) |
| Whisper transcription | $0 (local CPU) |
| Ollama summarization | $0 (local) |
| PostgreSQL | $0 (local) |
| Total | $0/month |