feat: layered local-first data architecture — encrypted backup, relay persistence, at-rest encryption

Implement the 4-layer data model (device → encrypted backup → shared sync → federated):

- Extract shared encryption-utils from community-store (deriveSpaceKey, AES-256-GCM, rSEN format)
- Encrypt module docs at rest when space has meta.encrypted === true
- Fix relay mode persistence: relay-backup/relay-restore wire protocol + .automerge.enc blob storage
- Add backup store + REST API (PUT/GET/DELETE /api/backup/:space/:docId) with JWT auth
- Add client BackupSyncManager with delta-only push, full restore, auto-backup
- Wire backup stubs in encryptid-bridge to BackupSyncManager
- Add rspace-backups Docker volume
- Create docs/DATA-ARCHITECTURE.md design doc with threat model and data flow diagrams

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: Jeff Emmett, 2026-03-02 17:09:07 -08:00
Parent: 5ee59f86d6
Commit: 46c2a0b035
15 changed files with 1329 additions and 99 deletions


@@ -13,6 +13,7 @@ services:
       - rspace-files:/data/files
       - rspace-splats:/data/splats
       - rspace-docs:/data/docs
+      - rspace-backups:/data/backups
     environment:
       - NODE_ENV=production
       - STORAGE_DIR=/data/communities
@@ -22,6 +23,7 @@ services:
       - FILES_DIR=/data/files
       - SPLATS_DIR=/data/splats
       - DOCS_STORAGE_DIR=/data/docs
+      - BACKUPS_DIR=/data/backups
       - INFISICAL_CLIENT_ID=${INFISICAL_CLIENT_ID}
       - INFISICAL_CLIENT_SECRET=${INFISICAL_CLIENT_SECRET}
       - INFISICAL_PROJECT_SLUG=rspace
@@ -175,6 +177,7 @@ volumes:
   rspace-files:
   rspace-splats:
   rspace-docs:
+  rspace-backups:
   rspace-pgdata:
 networks:

docs/DATA-ARCHITECTURE.md (new file)

@@ -0,0 +1,250 @@
# rSpace Data Architecture — Layered Local-First Model
> **Status:** Implemented (Layers 0-2), Designed (Layer 3), Deferred (P2P)
> **Last updated:** 2026-03-02
## Overview
rSpace uses a 4-layer data architecture where plaintext only exists on the user's device. Each layer adds availability and collaboration capabilities while maintaining zero-knowledge guarantees for encrypted spaces.
```
Layer 3: Federated Replication (future — user-owned VPS)
Layer 2: Shared Space Sync (collaboration — participant + relay mode)
Layer 1: Encrypted Server Backup (zero-knowledge — cross-device restore)
Layer 0: User's Device (maximum privacy — plaintext only here)
```
---
## Layer 0: User's Device (Maximum Privacy)
The only place plaintext exists for encrypted spaces.
### Storage
- **IndexedDB** via `EncryptedDocStore` — per-document AES-256-GCM encryption at rest
- Database: `rspace-docs` with object stores: `docs`, `meta`, `sync`
### Key Hierarchy
```
WebAuthn PRF output (from passkey)
→ HKDF (salt: "rspace-space-key-v1", info: "rspace:{spaceId}")
→ Space Key (HKDF)
→ HKDF (salt: "rspace-doc-key-v1", info: "doc:{docId}")
→ Doc Key (AES-256-GCM, non-extractable)
```
### Encryption
- `DocCrypto` class handles all key derivation and AES-256-GCM operations
- 12-byte random nonce per encryption
- Keys are non-extractable `CryptoKey` objects (Web Crypto API)
- `EncryptedDocBridge` connects WebAuthn PRF to DocCrypto
### Implementation
- `shared/local-first/crypto.ts` — DocCrypto
- `shared/local-first/storage.ts` — EncryptedDocStore
- `shared/local-first/encryptid-bridge.ts` — PRF-to-DocCrypto bridge
---
## Layer 1: Encrypted Server Backup (Zero-Knowledge)
Server stores opaque ciphertext blobs it cannot decrypt.
### Design Principles
- **Opt-in per user** (default OFF for maximum privacy)
- **Same encryption as Layer 0** — client encrypts before upload
- **Delta-only push** — compare local manifest vs server manifest, upload only changed docs
- **Cross-device restore** — after passkey auth, download all blobs, decrypt locally
### Storage Layout
```
/data/backups/{userId}/{spaceSlug}/{docId-hash}.enc
/data/backups/{userId}/{spaceSlug}/manifest.json
```
### API
```
PUT /api/backup/:space/:docId — upload encrypted blob (10 MB limit)
GET /api/backup/:space/:docId — download encrypted blob
GET /api/backup/:space — list manifest
DELETE /api/backup/:space/:docId — delete specific backup
DELETE /api/backup/:space — delete all for space
GET /api/backup/status — overall backup status
```
All endpoints require EncryptID JWT authentication.
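For illustration, a client call might be shaped as below. `backupRequest` is a hypothetical helper (not part of `BackupSyncManager` or the SDK), and the `Bearer` scheme is an assumption about what `extractToken` reads:

```typescript
// Hypothetical request-shaping helper for the backup API above.
interface BackupRequest {
  method: "GET" | "PUT" | "DELETE";
  url: string;
  headers: Record<string, string>;
}

function backupRequest(
  base: string,
  jwt: string,
  space: string,
  docId?: string,
  method: "GET" | "PUT" | "DELETE" = "GET",
): BackupRequest {
  // docIds are URL-encoded because they contain ":" separators
  const path = docId
    ? `/api/backup/${space}/${encodeURIComponent(docId)}`
    : `/api/backup/${space}`;
  return {
    method,
    url: base + path,
    headers: {
      Authorization: `Bearer ${jwt}`, // assumed Bearer scheme
      ...(method === "PUT" ? { "Content-Type": "application/octet-stream" } : {}),
    },
  };
}
```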
### Client
- `BackupSyncManager` reads already-encrypted blobs from IndexedDB (no double-encryption)
- Auto-backup on configurable interval (default: 5 minutes)
- `pushBackup()` — delta-only upload
- `pullRestore()` — full download for new devices
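The delta comparison at the heart of `pushBackup()` can be sketched as a pure function over the two manifests; `docsToPush` is an illustrative name, and the entry type mirrors `BackupManifestEntry`:

```typescript
// Sketch of delta-only push: upload a doc only when it is new
// or its ciphertext hash differs from the server manifest.
interface ManifestEntry {
  docId: string;
  hash: string; // sha256 of the encrypted blob
}

function docsToPush(local: ManifestEntry[], server: ManifestEntry[]): string[] {
  const serverHashes = new Map(server.map((e) => [e.docId, e.hash]));
  return local
    .filter((e) => serverHashes.get(e.docId) !== e.hash) // changed or missing
    .map((e) => e.docId);
}
```

Hashing the ciphertext (not the plaintext) keeps the comparison zero-knowledge: the server manifest never reveals anything about document contents.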
### Implementation
- `server/local-first/backup-store.ts` — filesystem blob storage
- `server/local-first/backup-routes.ts` — Hono REST API
- `shared/local-first/backup.ts` — BackupSyncManager
---
## Layer 2: Shared Space Sync (Collaboration)
Multi-document real-time sync over WebSocket.
### Two Operating Modes
#### Participant Mode (unencrypted spaces)
- Server maintains its own copy of each Automerge document
- Full Automerge sync protocol — `receiveSyncMessage` + `generateSyncMessage`
- Server can read, index, validate, and persist documents
- Documents saved as Automerge binary at `/data/docs/{space}/{module}/{collection}/{itemId}.automerge`
#### Relay Mode (encrypted spaces)
- Server forwards encrypted sync messages between peers by `docId`
- Server cannot read document content
- Opaque backup blobs stored via `relay-backup` / `relay-restore` wire protocol
- Stored as `.automerge.enc` files alongside regular docs
### At-Rest Encryption for Module Docs
When a space has `meta.encrypted === true`, module documents are encrypted at rest using the server-side encryption utilities (HMAC-SHA256 derived AES-256-GCM from `ENCRYPTION_SECRET`).
File format (rSEN):
```
[4 bytes: magic "rSEN" (0x72 0x53 0x45 0x4E)]
[4 bytes: keyId length (uint32)]
[N bytes: keyId (UTF-8)]
[12 bytes: IV]
[remaining: ciphertext + 16-byte auth tag]
```
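A sketch of this framing in TypeScript, mirroring the `packEncrypted`/`unpackEncrypted` helpers named in the implementation list and reconstructed here from the layout above (the length field uses `DataView`'s default big-endian encoding, matching the server code):

```typescript
const MAGIC = new Uint8Array([0x72, 0x53, 0x45, 0x4e]); // "rSEN"

// Frame an encrypted payload; `ciphertext` is the IV + ciphertext + tag blob.
function packEncrypted(keyId: string, ciphertext: Uint8Array): Uint8Array {
  const keyIdBytes = new TextEncoder().encode(keyId);
  const out = new Uint8Array(8 + keyIdBytes.length + ciphertext.length);
  out.set(MAGIC, 0);
  new DataView(out.buffer).setUint32(4, keyIdBytes.length); // big-endian length
  out.set(keyIdBytes, 8);
  out.set(ciphertext, 8 + keyIdBytes.length);
  return out;
}

// Split a framed blob back into its keyId and encrypted payload.
function unpackEncrypted(bytes: Uint8Array): {
  keyId: string;
  ciphertext: Uint8Array;
} {
  const keyIdLen = new DataView(bytes.buffer, bytes.byteOffset + 4, 4).getUint32(0);
  const keyId = new TextDecoder().decode(bytes.slice(8, 8 + keyIdLen));
  return { keyId, ciphertext: bytes.slice(8 + keyIdLen) };
}
```

Embedding the keyId in the header is what makes key rotation tractable: each file records which derived key decrypts it.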
### Wire Protocol
```
{ type: 'sync', docId, data: number[] }
{ type: 'subscribe', docIds: string[] }
{ type: 'unsubscribe', docIds: string[] }
{ type: 'awareness', docId, peer, cursor?, selection?, username?, color? }
{ type: 'relay-backup', docId, data: number[] } — client → server (opaque blob)
{ type: 'relay-restore', docId, data: number[] } — server → client (stored blob)
{ type: 'ping' } / { type: 'pong' }
```
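As TypeScript, the protocol is a discriminated union on `type`; the `parseMessage` guard below is a hypothetical sketch for clarity, not the actual DocSyncManager parser:

```typescript
// Message union with field names taken from the protocol above.
type SyncMessage =
  | { type: "sync"; docId: string; data: number[] }
  | { type: "subscribe"; docIds: string[] }
  | { type: "unsubscribe"; docIds: string[] }
  | {
      type: "awareness";
      docId: string;
      peer: string;
      cursor?: unknown;
      selection?: unknown;
      username?: string;
      color?: string;
    }
  | { type: "relay-backup"; docId: string; data: number[] }
  | { type: "relay-restore"; docId: string; data: number[] }
  | { type: "ping" }
  | { type: "pong" };

// Hypothetical guard for incoming WebSocket frames: reject non-JSON
// and frames without a string `type` discriminant.
function parseMessage(raw: string): SyncMessage | null {
  try {
    const msg = JSON.parse(raw);
    return typeof msg?.type === "string" ? (msg as SyncMessage) : null;
  } catch {
    return null;
  }
}
```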
### Implementation
- `shared/local-first/sync.ts` — DocSyncManager (client)
- `server/local-first/sync-server.ts` — SyncServer (server)
- `server/local-first/doc-persistence.ts` — filesystem persistence with encryption
- `server/local-first/encryption-utils.ts` — shared server-side AES-256-GCM primitives
- `server/sync-instance.ts` — SyncServer singleton with encryption wiring
---
## Layer 3: Federated Replication (Future)
Optional replication to user's own infrastructure.
### Design (Not Yet Implemented)
- Same zero-knowledge blobs as Layer 1
- User configures a replication target (their own VPS, S3 bucket, etc.)
- Server pushes encrypted blobs to the target on change
- User can restore from their own infrastructure independently of rSpace
### Prerequisites
- Layer 1 must be proven stable
- User-facing configuration UI
- Replication protocol specification
---
## P2P WebRTC Sync (Future)
Direct peer-to-peer sync as fallback when server is unavailable.
### Design (Not Yet Implemented)
- WebRTC data channels between clients
- Signaling via existing WebSocket connection
- Same Automerge sync protocol as Layer 2
- Useful for: LAN-only operation, server downtime, low-latency collaboration
### Prerequisites
- Layer 1 backup solves the primary resilience concern
- WebRTC signaling server or STUN/TURN infrastructure
---
## Threat Model
### What the server knows (unencrypted spaces)
- Full document content (participant mode)
- Document metadata, sync state, member list
### What the server knows (encrypted spaces)
- Space exists, number of documents, document sizes
- Member DIDs (from community doc metadata)
- Timing of sync activity (when peers connect/disconnect)
### What the server CANNOT know (encrypted spaces)
- Document content (encrypted at rest, relay mode)
- Backup blob content (client-encrypted before upload)
- Encryption keys (derived from WebAuthn PRF on device)
### Compromised server scenario
- Attacker gets ciphertext blobs — cannot decrypt without passkey
- Attacker modifies ciphertext — AES-GCM auth tag detects tampering
- Attacker deletes blobs — client has local copy in IndexedDB (Layer 0)
### Compromised device scenario
- Plaintext exposed on that device only
- Other devices are unaffected (no key sharing between devices)
- Passkey revocation invalidates future PRF derivations
---
## Key Rotation
### Current Approach
- Server-side at-rest keys derived from `ENCRYPTION_SECRET` + keyId
- `keyId` stored in community doc `meta.encryptionKeyId`
- Rotation: generate new keyId → re-encrypt all docs → update meta
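That rotation procedure can be sketched as a loop over a space's encrypted docs. `rotateSpace` and its injected `codec` are hypothetical scaffolding (not the actual admin API); the crypto helpers are passed in so the sketch stays self-contained:

```typescript
// Hedged sketch of key rotation: decrypt each doc with the old keyId,
// re-encrypt with the new one, then update meta.encryptionKeyId.
type Codec = {
  decrypt(bytes: Uint8Array, keyId: string): Promise<Uint8Array>;
  encrypt(bytes: Uint8Array, keyId: string): Promise<Uint8Array>;
};

async function rotateSpace(
  docs: Map<string, Uint8Array>, // path → encrypted bytes
  oldKeyId: string,
  newKeyId: string,
  codec: Codec,
): Promise<Map<string, Uint8Array>> {
  const out = new Map<string, Uint8Array>();
  for (const [path, bytes] of docs) {
    const plain = await codec.decrypt(bytes, oldKeyId); // old key still valid here
    out.set(path, await codec.encrypt(plain, newKeyId)); // re-wrap under new key
  }
  // Caller then sets meta.encryptionKeyId = newKeyId in the community doc,
  // so future loads derive the new key from the rSEN header.
  return out;
}
```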
### Future Approach (with EncryptID Layer 2)
- Client-side key delegation via EncryptID key hierarchy
- Server never has access to plaintext keys
- Rotation managed by space admin through EncryptID
---
## Data Flow Diagrams
### Normal Operation (Unencrypted Space)
```
Client A Server Client B
| | |
|-- sync(docId, data) ---->| |
| |-- sync(docId, data) ---->|
| |-- saveDoc(docId) ------->| disk
|<-- sync(docId, resp) ----| |
```
### Relay Mode (Encrypted Space)
```
Client A Server Client B
| | |
|-- sync(docId, data) ---->| |
| |-- sync(docId, data) ---->| (forwarded)
|-- relay-backup --------->| |
| |-- save .enc blob ------->| disk
```
### Backup Restore (New Device)
```
New Device Server Backup Store
| | |
|-- GET /api/backup/space->| |
|<-- manifest -------------| |
|-- GET /api/backup/doc -->|-- load blob ------------>|
|<-- encrypted blob -------|<-- blob bytes -----------|
| | |
| (client decrypts with passkey, writes to IndexedDB) |
```


@@ -1,5 +1,13 @@
 import { mkdir, readdir, unlink } from "node:fs/promises";
 import * as Automerge from "@automerge/automerge";
+import {
+  deriveSpaceKey,
+  encryptBinary,
+  decryptBinary,
+  isEncryptedFile,
+  packEncrypted,
+  unpackEncrypted,
+} from "./local-first/encryption-utils";
 const STORAGE_DIR = process.env.STORAGE_DIR || "./data/communities";
@@ -202,15 +210,8 @@ export async function loadCommunity(slug: string): Promise<Automerge.Doc<Communi
   let bytes = new Uint8Array(buffer);
   // Check for encrypted magic bytes
-  if (bytes.length >= ENCRYPTED_MAGIC.length &&
-      bytes[0] === ENCRYPTED_MAGIC[0] &&
-      bytes[1] === ENCRYPTED_MAGIC[1] &&
-      bytes[2] === ENCRYPTED_MAGIC[2] &&
-      bytes[3] === ENCRYPTED_MAGIC[3]) {
-    // Encrypted file: extract keyId length (4 bytes), keyId, then ciphertext
-    const keyIdLen = new DataView(bytes.buffer, bytes.byteOffset + 4, 4).getUint32(0);
-    const keyId = new TextDecoder().decode(bytes.slice(8, 8 + keyIdLen));
-    const ciphertext = bytes.slice(8 + keyIdLen);
+  if (isEncryptedFile(bytes)) {
+    const { keyId, ciphertext } = unpackEncrypted(bytes);
     const key = await deriveSpaceKey(keyId);
     bytes = new Uint8Array(await decryptBinary(ciphertext, key));
     console.log(`[Store] Decrypted ${slug} (keyId: ${keyId})`);
@@ -296,15 +297,9 @@ export async function saveCommunity(slug: string): Promise<void> {
       const keyId = currentDoc.meta.encryptionKeyId;
       const key = await deriveSpaceKey(keyId);
       const ciphertext = await encryptBinary(binary, key);
-      const keyIdBytes = new TextEncoder().encode(keyId);
-      // Format: magic (4) + keyIdLen (4) + keyId + ciphertext
-      const header = new Uint8Array(8 + keyIdBytes.length + ciphertext.length);
-      header.set(ENCRYPTED_MAGIC, 0);
-      new DataView(header.buffer).setUint32(4, keyIdBytes.length);
-      header.set(keyIdBytes, 8);
-      header.set(ciphertext, 8 + keyIdBytes.length);
-      await Bun.write(path, header);
-      console.log(`[Store] Saved ${slug} encrypted (${header.length} bytes, keyId: ${keyId})`);
+      const packed = packEncrypted(keyId, ciphertext);
+      await Bun.write(path, packed);
+      console.log(`[Store] Saved ${slug} encrypted (${packed.length} bytes, keyId: ${keyId})`);
     } catch (e) {
       // Fallback to unencrypted if encryption fails
       console.error(`[Store] Encryption failed for ${slug}, saving unencrypted:`, e);
@@ -963,72 +958,6 @@ export function setEncryption(
   return true;
 }
-/**
- * Derive an AES-256-GCM key from a space's encryption key identifier.
- * In production this will use EncryptID Layer 2 key derivation.
- * For now, uses a deterministic HMAC-based key from a server secret.
- */
-async function deriveSpaceKey(keyId: string): Promise<CryptoKey> {
-  const serverSecret = process.env.ENCRYPTION_SECRET;
-  if (!serverSecret) {
-    throw new Error('ENCRYPTION_SECRET environment variable is required');
-  }
-  const encoder = new TextEncoder();
-  const keyMaterial = await crypto.subtle.importKey(
-    'raw',
-    encoder.encode(serverSecret),
-    { name: 'HMAC', hash: 'SHA-256' },
-    false,
-    ['sign'],
-  );
-  const derived = await crypto.subtle.sign('HMAC', keyMaterial, encoder.encode(keyId));
-  return crypto.subtle.importKey(
-    'raw',
-    derived,
-    { name: 'AES-GCM', length: 256 },
-    false,
-    ['encrypt', 'decrypt'],
-  );
-}
-/**
- * Encrypt binary data using AES-256-GCM.
- * Returns: 12-byte IV + ciphertext + 16-byte auth tag (all concatenated).
- */
-async function encryptBinary(data: Uint8Array, key: CryptoKey): Promise<Uint8Array> {
-  const iv = crypto.getRandomValues(new Uint8Array(12));
-  // Copy into a fresh ArrayBuffer to satisfy strict BufferSource typing
-  const plainBuf = new ArrayBuffer(data.byteLength);
-  new Uint8Array(plainBuf).set(data);
-  const ciphertext = await crypto.subtle.encrypt(
-    { name: 'AES-GCM', iv },
-    key,
-    plainBuf,
-  );
-  const result = new Uint8Array(12 + ciphertext.byteLength);
-  result.set(iv, 0);
-  result.set(new Uint8Array(ciphertext), 12);
-  return result;
-}
-/**
- * Decrypt binary data encrypted with AES-256-GCM.
- * Expects: 12-byte IV + ciphertext + 16-byte auth tag.
- */
-async function decryptBinary(data: Uint8Array, key: CryptoKey): Promise<Uint8Array> {
-  const iv = data.slice(0, 12);
-  const ciphertext = data.slice(12);
-  const plaintext = await crypto.subtle.decrypt(
-    { name: 'AES-GCM', iv },
-    key,
-    ciphertext,
-  );
-  return new Uint8Array(plaintext);
-}
-// Magic bytes to identify encrypted Automerge files
-const ENCRYPTED_MAGIC = new Uint8Array([0x72, 0x53, 0x45, 0x4E]); // "rSEN" (rSpace ENcrypted)
 /**
  * Find all spaces that a given space is nested into (reverse lookup)
  */


@@ -74,6 +74,7 @@ import { renderMainLanding, renderSpaceDashboard } from "./landing";
 import { fetchLandingPage } from "./landing-proxy";
 import { syncServer } from "./sync-instance";
 import { loadAllDocs } from "./local-first/doc-persistence";
+import { backupRouter } from "./local-first/backup-routes";
 // Register modules
 registerModule(canvasModule);
@@ -137,6 +138,9 @@ app.get("/.well-known/webauthn", (c) => {
 // ── Space registry API ──
 app.route("/api/spaces", spaces);
+
+// ── Backup API (encrypted blob storage) ──
+app.route("/api/backup", backupRouter);
 // ── mi — AI assistant endpoint ──
 const MI_MODEL = process.env.MI_MODEL || "llama3.2";
 const OLLAMA_URL = process.env.OLLAMA_URL || "http://localhost:11434";

@@ -0,0 +1,128 @@
/**
 * Backup API Routes — Hono router for encrypted backup operations.
 *
 * All endpoints require EncryptID JWT authentication.
 * The server stores opaque ciphertext blobs it cannot decrypt.
 */
import { Hono } from "hono";
import type { Context, Next } from "hono";
import {
  verifyEncryptIDToken,
  extractToken,
} from "@encryptid/sdk/server";
import type { EncryptIDClaims } from "@encryptid/sdk/server";
import {
  putBackup,
  getBackup,
  listBackups,
  deleteBackup,
  deleteAllBackups,
  getUsage,
} from "./backup-store";

const MAX_BLOB_SIZE = 10 * 1024 * 1024; // 10 MB per blob

type BackupEnv = {
  Variables: {
    userId: string;
    claims: EncryptIDClaims;
  };
};

const backupRouter = new Hono<BackupEnv>();

/** Auth middleware — extracts and verifies JWT, sets userId. */
backupRouter.use("*", async (c: Context<BackupEnv>, next: Next) => {
  const token = extractToken(c.req.raw.headers);
  if (!token) {
    return c.json({ error: "Authentication required" }, 401);
  }
  let claims: EncryptIDClaims;
  try {
    claims = await verifyEncryptIDToken(token);
  } catch {
    return c.json({ error: "Invalid or expired token" }, 401);
  }
  c.set("userId", claims.sub);
  c.set("claims", claims);
  await next();
});

/**
 * GET /api/backup/status — overall backup status for authenticated user.
 * Registered before the /:space routes so "status" is not captured as a space name.
 */
backupRouter.get("/status", async (c) => {
  const userId = c.get("userId");
  const usage = await getUsage(userId);
  return c.json(usage);
});

/** PUT /api/backup/:space/:docId — upload encrypted blob */
backupRouter.put("/:space/:docId", async (c) => {
  const userId = c.get("userId");
  const space = c.req.param("space");
  const docId = decodeURIComponent(c.req.param("docId"));
  const blob = await c.req.arrayBuffer();
  if (blob.byteLength > MAX_BLOB_SIZE) {
    return c.json({ error: `Blob too large (max ${MAX_BLOB_SIZE} bytes)` }, 413);
  }
  if (blob.byteLength === 0) {
    return c.json({ error: "Empty blob" }, 400);
  }
  await putBackup(userId, space, docId, new Uint8Array(blob));
  return c.json({ ok: true, size: blob.byteLength });
});

/** GET /api/backup/:space/:docId — download encrypted blob */
backupRouter.get("/:space/:docId", async (c) => {
  const userId = c.get("userId");
  const space = c.req.param("space");
  const docId = decodeURIComponent(c.req.param("docId"));
  const blob = await getBackup(userId, space, docId);
  if (!blob) {
    return c.json({ error: "Not found" }, 404);
  }
  const body = new Uint8Array(blob).buffer as ArrayBuffer;
  return new Response(body, {
    headers: {
      "Content-Type": "application/octet-stream",
      "Content-Length": blob.byteLength.toString(),
    },
  });
});

/** GET /api/backup/:space — list manifest for a space */
backupRouter.get("/:space", async (c) => {
  const userId = c.get("userId");
  const space = c.req.param("space");
  const manifest = await listBackups(userId, space);
  return c.json(manifest);
});

/** DELETE /api/backup/:space/:docId — delete specific backup */
backupRouter.delete("/:space/:docId", async (c) => {
  const userId = c.get("userId");
  const space = c.req.param("space");
  const docId = decodeURIComponent(c.req.param("docId"));
  const ok = await deleteBackup(userId, space, docId);
  if (!ok) {
    return c.json({ error: "Not found or delete failed" }, 404);
  }
  return c.json({ ok: true });
});

/** DELETE /api/backup/:space — delete all backups for a space */
backupRouter.delete("/:space", async (c) => {
  const userId = c.get("userId");
  const space = c.req.param("space");
  await deleteAllBackups(userId, space);
  return c.json({ ok: true });
});

export { backupRouter };

@@ -0,0 +1,210 @@
/**
 * Backup Store — Server-side opaque blob storage for encrypted backups.
 *
 * Layout: /data/backups/{userId}/{spaceSlug}/{docId-hash}.enc
 * Manifest: /data/backups/{userId}/{spaceSlug}/manifest.json
 *
 * The server stores ciphertext blobs it cannot decrypt (zero-knowledge).
 * Clients encrypt before upload and decrypt after download.
 */
import { resolve, dirname } from "node:path";
import { mkdir, readdir, readFile, writeFile, unlink, stat, rm } from "node:fs/promises";
import { createHash } from "node:crypto";

const BACKUPS_DIR = process.env.BACKUPS_DIR || "/data/backups";

export interface BackupManifestEntry {
  docId: string;
  hash: string;
  size: number;
  updatedAt: string;
}

export interface BackupManifest {
  spaceSlug: string;
  entries: BackupManifestEntry[];
  updatedAt: string;
}

/** Hash a docId into a safe filename. */
function docIdHash(docId: string): string {
  return createHash("sha256").update(docId).digest("hex").slice(0, 32);
}

/** Resolve the directory for a user+space backup. */
function backupDir(userId: string, spaceSlug: string): string {
  return resolve(BACKUPS_DIR, userId, spaceSlug);
}

/** Resolve the path for a specific blob. */
function blobPath(userId: string, spaceSlug: string, docId: string): string {
  return resolve(backupDir(userId, spaceSlug), `${docIdHash(docId)}.enc`);
}

/** Resolve the manifest path. */
function manifestPath(userId: string, spaceSlug: string): string {
  return resolve(backupDir(userId, spaceSlug), "manifest.json");
}

/** Load a manifest (returns empty manifest if none exists). */
async function loadManifest(
  userId: string,
  spaceSlug: string,
): Promise<BackupManifest> {
  try {
    const path = manifestPath(userId, spaceSlug);
    const file = Bun.file(path);
    if (await file.exists()) {
      return (await file.json()) as BackupManifest;
    }
  } catch {
    // Corrupt or missing manifest
  }
  return { spaceSlug, entries: [], updatedAt: new Date().toISOString() };
}

/** Save a manifest. */
async function saveManifest(
  userId: string,
  spaceSlug: string,
  manifest: BackupManifest,
): Promise<void> {
  const path = manifestPath(userId, spaceSlug);
  await mkdir(dirname(path), { recursive: true });
  await writeFile(path, JSON.stringify(manifest, null, 2));
}

/**
 * Store an encrypted backup blob.
 */
export async function putBackup(
  userId: string,
  spaceSlug: string,
  docId: string,
  blob: Uint8Array,
): Promise<void> {
  const path = blobPath(userId, spaceSlug, docId);
  await mkdir(dirname(path), { recursive: true });
  await writeFile(path, blob);
  // Update manifest
  const manifest = await loadManifest(userId, spaceSlug);
  const hash = createHash("sha256").update(blob).digest("hex");
  const existing = manifest.entries.findIndex((e) => e.docId === docId);
  const entry: BackupManifestEntry = {
    docId,
    hash,
    size: blob.byteLength,
    updatedAt: new Date().toISOString(),
  };
  if (existing >= 0) {
    manifest.entries[existing] = entry;
  } else {
    manifest.entries.push(entry);
  }
  manifest.updatedAt = new Date().toISOString();
  await saveManifest(userId, spaceSlug, manifest);
}

/**
 * Retrieve an encrypted backup blob.
 */
export async function getBackup(
  userId: string,
  spaceSlug: string,
  docId: string,
): Promise<Uint8Array | null> {
  try {
    const path = blobPath(userId, spaceSlug, docId);
    const file = Bun.file(path);
    if (await file.exists()) {
      const buffer = await file.arrayBuffer();
      return new Uint8Array(buffer);
    }
  } catch {
    // File doesn't exist
  }
  return null;
}

/**
 * List all backup entries for a space.
 */
export async function listBackups(
  userId: string,
  spaceSlug: string,
): Promise<BackupManifest> {
  return loadManifest(userId, spaceSlug);
}

/**
 * Delete a specific backup blob.
 */
export async function deleteBackup(
  userId: string,
  spaceSlug: string,
  docId: string,
): Promise<boolean> {
  try {
    const path = blobPath(userId, spaceSlug, docId);
    await unlink(path);
    // Update manifest
    const manifest = await loadManifest(userId, spaceSlug);
    manifest.entries = manifest.entries.filter((e) => e.docId !== docId);
    manifest.updatedAt = new Date().toISOString();
    await saveManifest(userId, spaceSlug, manifest);
    return true;
  } catch {
    return false;
  }
}

/**
 * Delete all backups for a space.
 */
export async function deleteAllBackups(
  userId: string,
  spaceSlug: string,
): Promise<boolean> {
  try {
    const dir = backupDir(userId, spaceSlug);
    await rm(dir, { recursive: true, force: true });
    return true;
  } catch {
    return false;
  }
}

/**
 * Get total backup storage usage for a user.
 */
export async function getUsage(userId: string): Promise<{
  totalBytes: number;
  spaceCount: number;
  docCount: number;
}> {
  let totalBytes = 0;
  let spaceCount = 0;
  let docCount = 0;
  try {
    const userDir = resolve(BACKUPS_DIR, userId);
    const spaces = await readdir(userDir, { withFileTypes: true });
    for (const entry of spaces) {
      if (!entry.isDirectory()) continue;
      spaceCount++;
      const manifest = await loadManifest(userId, entry.name);
      for (const e of manifest.entries) {
        totalBytes += e.size;
        docCount++;
      }
    }
  } catch {
    // User has no backups
  }
  return { totalBytes, spaceCount, docCount };
}


@@ -3,12 +3,23 @@
  *
  * Storage layout: {DOCS_STORAGE_DIR}/{space}/{module}/{collection}[/{itemId}].automerge
  * Example: /data/docs/demo/notes/notebooks/abc.automerge
+ *
+ * Encrypted docs: Same path but content is rSEN-encrypted (server-side at-rest encryption).
+ * Opaque blobs: {path}.automerge.enc — relay-mode encrypted blobs the server can't decrypt.
  */
 import { resolve, dirname } from "node:path";
-import { mkdir, readdir, readFile, writeFile, stat } from "node:fs/promises";
+import { mkdir, readdir, readFile, writeFile, stat, unlink } from "node:fs/promises";
 import * as Automerge from "@automerge/automerge";
 import type { SyncServer } from "./sync-server";
+import {
+  deriveSpaceKey,
+  encryptBinary,
+  decryptBinary,
+  isEncryptedFile,
+  packEncrypted,
+  unpackEncrypted,
+} from "./encryption-utils";
 const DOCS_DIR = process.env.DOCS_STORAGE_DIR || "/data/docs";
 const SAVE_DEBOUNCE_MS = 2000;
@@ -25,15 +36,22 @@ export function docIdToPath(docId: string): string {
 /** Convert a filesystem path back to a docId. */
 function pathToDocId(filePath: string): string {
   const rel = filePath.slice(DOCS_DIR.length + 1); // strip leading dir + /
-  const withoutExt = rel.replace(/\.automerge$/, "");
+  const withoutExt = rel.replace(/\.automerge(\.enc)?$/, "");
   return withoutExt.split("/").join(":");
 }
 // Debounce timers per docId
 const saveTimers = new Map<string, ReturnType<typeof setTimeout>>();
-/** Debounced save — writes Automerge binary to disk after SAVE_DEBOUNCE_MS. */
-export function saveDoc(docId: string, doc: Automerge.Doc<any>): void {
+/**
+ * Debounced save — writes Automerge binary to disk after SAVE_DEBOUNCE_MS.
+ * If encryptionKeyId is provided, encrypts with rSEN header before writing.
+ */
+export function saveDoc(
+  docId: string,
+  doc: Automerge.Doc<any>,
+  encryptionKeyId?: string,
+): void {
   const existing = saveTimers.get(docId);
   if (existing) clearTimeout(existing);
@@ -45,16 +63,76 @@
       const filePath = docIdToPath(docId);
       await mkdir(dirname(filePath), { recursive: true });
       const binary = Automerge.save(doc);
-      await writeFile(filePath, binary);
-      console.log(`[DocStore] Saved ${docId} (${binary.byteLength} bytes)`);
+      if (encryptionKeyId) {
+        const key = await deriveSpaceKey(encryptionKeyId);
+        const ciphertext = await encryptBinary(binary, key);
+        const packed = packEncrypted(encryptionKeyId, ciphertext);
+        await writeFile(filePath, packed);
+        console.log(
+          `[DocStore] Saved ${docId} encrypted (${packed.byteLength} bytes)`,
+        );
+      } else {
+        await writeFile(filePath, binary);
+        console.log(
+          `[DocStore] Saved ${docId} (${binary.byteLength} bytes)`,
+        );
+      }
     } catch (e) {
       console.error(`[DocStore] Failed to save ${docId}:`, e);
     }
-    }, SAVE_DEBOUNCE_MS)
+    }, SAVE_DEBOUNCE_MS),
   );
 }
-/** Recursively scan DOCS_DIR and load all .automerge files into the SyncServer. */
+/**
+ * Save an opaque encrypted blob for relay-mode docs.
+ * These are client-encrypted blobs the server cannot decrypt.
+ * Stored as {docIdPath}.automerge.enc
+ */
+export async function saveEncryptedBlob(
+  docId: string,
+  blob: Uint8Array,
+): Promise<void> {
+  try {
+    const basePath = docIdToPath(docId);
+    const encPath = basePath.replace(/\.automerge$/, ".automerge.enc");
+    await mkdir(dirname(encPath), { recursive: true });
+    await writeFile(encPath, blob);
+    console.log(
+      `[DocStore] Saved encrypted blob ${docId} (${blob.byteLength} bytes)`,
+    );
+  } catch (e) {
+    console.error(`[DocStore] Failed to save encrypted blob ${docId}:`, e);
+  }
+}
+/**
+ * Load an opaque encrypted blob for relay-mode docs.
+ * Returns null if no blob exists.
+ */
+export async function loadEncryptedBlob(
+  docId: string,
+): Promise<Uint8Array | null> {
+  try {
+    const basePath = docIdToPath(docId);
+    const encPath = basePath.replace(/\.automerge$/, ".automerge.enc");
+    const file = Bun.file(encPath);
+    if (await file.exists()) {
+      const buffer = await file.arrayBuffer();
+      return new Uint8Array(buffer);
+    }
+  } catch {
+    // File doesn't exist or read failed
+  }
+  return null;
+}
+/**
+ * Recursively scan DOCS_DIR and load all .automerge files into the SyncServer.
+ * Detects rSEN-encrypted files and decrypts them before loading.
+ * Skips .automerge.enc files (opaque relay blobs — not Automerge docs).
+ */
 export async function loadAllDocs(syncServer: SyncServer): Promise<number> {
   let count = 0;
   try {
@@ -80,10 +158,33 @@ async function scanDir(dir: string, syncServer: SyncServer): Promise<number> {
     const fullPath = resolve(dir, entry.name);
     if (entry.isDirectory()) {
       count += await scanDir(fullPath, syncServer);
+    } else if (entry.name.endsWith(".automerge.enc")) {
+      // Skip opaque relay blobs — they're not loadable Automerge docs
+      continue;
     } else if (entry.name.endsWith(".automerge")) {
       try {
-        const binary = await readFile(fullPath);
-        const doc = Automerge.load(new Uint8Array(binary));
+        const raw = await readFile(fullPath);
+        let bytes = new Uint8Array(raw);
+        // Detect and decrypt rSEN-encrypted files
+        if (isEncryptedFile(bytes)) {
+          try {
+            const { keyId, ciphertext } = unpackEncrypted(bytes);
+            const key = await deriveSpaceKey(keyId);
+            bytes = new Uint8Array(await decryptBinary(ciphertext, key));
+            console.log(
+              `[DocStore] Decrypted ${entry.name} (keyId: ${keyId})`,
+            );
+          } catch (e) {
+            console.error(
+              `[DocStore] Failed to decrypt ${fullPath}:`,
+              e,
            );
+            continue;
+          }
+        }
+        const doc = Automerge.load(bytes);
         const docId = pathToDocId(fullPath);
         syncServer.setDoc(docId, doc);
         count++;

@@ -0,0 +1,125 @@
/**
 * Shared server-side encryption utilities for rSpace at-rest encryption.
 *
 * Uses AES-256-GCM with keys derived from ENCRYPTION_SECRET via HMAC-SHA256.
 * File format: [4-byte magic "rSEN"][4-byte keyId length][keyId bytes][12-byte IV][ciphertext+tag]
 */

// Magic bytes to identify encrypted files: "rSEN" (rSpace ENcrypted)
export const ENCRYPTED_MAGIC = new Uint8Array([0x72, 0x53, 0x45, 0x4e]);

/**
 * Derive an AES-256-GCM key from a key identifier using HMAC-SHA256.
 * Uses ENCRYPTION_SECRET env var as the HMAC key.
 */
export async function deriveSpaceKey(keyId: string): Promise<CryptoKey> {
  const serverSecret = process.env.ENCRYPTION_SECRET;
  if (!serverSecret) {
    throw new Error("ENCRYPTION_SECRET environment variable is required");
  }
  const encoder = new TextEncoder();
  const keyMaterial = await crypto.subtle.importKey(
    "raw",
    encoder.encode(serverSecret),
    { name: "HMAC", hash: "SHA-256" },
    false,
    ["sign"],
  );
  const derived = await crypto.subtle.sign(
    "HMAC",
    keyMaterial,
    encoder.encode(keyId),
  );
  return crypto.subtle.importKey(
    "raw",
    derived,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"],
  );
}

/**
 * Encrypt binary data using AES-256-GCM.
 * Returns: 12-byte IV + ciphertext + 16-byte auth tag (concatenated).
 */
export async function encryptBinary(
  data: Uint8Array,
  key: CryptoKey,
): Promise<Uint8Array> {
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const plainBuf = new ArrayBuffer(data.byteLength);
  new Uint8Array(plainBuf).set(data);
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    plainBuf,
  );
  const result = new Uint8Array(12 + ciphertext.byteLength);
  result.set(iv, 0);
  result.set(new Uint8Array(ciphertext), 12);
  return result;
}

/**
 * Decrypt binary data encrypted with AES-256-GCM.
 * Expects: 12-byte IV + ciphertext + 16-byte auth tag.
 */
export async function decryptBinary(
  data: Uint8Array,
  key: CryptoKey,
): Promise<Uint8Array> {
  const iv = data.slice(0, 12);
  const ciphertext = data.slice(12);
  const plaintext = await crypto.subtle.decrypt(
    { name: "AES-GCM", iv },
    key,
    ciphertext,
  );
  return new Uint8Array(plaintext);
}

/**
 * Check if a byte array starts with the rSEN magic bytes.
 */
export function isEncryptedFile(bytes: Uint8Array): boolean {
  return (
    bytes.length >= ENCRYPTED_MAGIC.length &&
    bytes[0] === ENCRYPTED_MAGIC[0] &&
bytes[1] === ENCRYPTED_MAGIC[1] &&
bytes[2] === ENCRYPTED_MAGIC[2] &&
bytes[3] === ENCRYPTED_MAGIC[3]
);
}
/**
* Pack an encrypted payload with the rSEN header.
* Format: [4-byte magic][4-byte keyId length][keyId UTF-8 bytes][ciphertext]
*/
export function packEncrypted(keyId: string, ciphertext: Uint8Array): Uint8Array {
const keyIdBytes = new TextEncoder().encode(keyId);
const packed = new Uint8Array(8 + keyIdBytes.length + ciphertext.length);
packed.set(ENCRYPTED_MAGIC, 0);
new DataView(packed.buffer).setUint32(4, keyIdBytes.length);
packed.set(keyIdBytes, 8);
packed.set(ciphertext, 8 + keyIdBytes.length);
return packed;
}
/**
* Unpack an rSEN-encrypted file into keyId and ciphertext components.
* Assumes caller already checked isEncryptedFile().
*/
export function unpackEncrypted(data: Uint8Array): {
keyId: string;
ciphertext: Uint8Array;
} {
const keyIdLen = new DataView(
data.buffer,
data.byteOffset + 4,
4,
).getUint32(0);
const keyId = new TextDecoder().decode(data.slice(8, 8 + keyIdLen));
const ciphertext = data.slice(8 + keyIdLen);
return { keyId, ciphertext };
}
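The rSEN container layout can be exercised standalone. This sketch reimplements the pack/unpack byte handling from the module above (no crypto, just the header framing) to show the round trip — `DataView.setUint32` is big-endian by default, so the keyId length is stored big-endian:

```typescript
// Standalone sketch of the rSEN container:
// [4-byte magic "rSEN"][4-byte BE keyId length][keyId UTF-8][ciphertext]
function pack(keyId: string, ciphertext: Uint8Array): Uint8Array {
  const keyIdBytes = new TextEncoder().encode(keyId);
  const out = new Uint8Array(8 + keyIdBytes.length + ciphertext.length);
  out.set([0x72, 0x53, 0x45, 0x4e], 0); // "rSEN"
  new DataView(out.buffer).setUint32(4, keyIdBytes.length); // big-endian
  out.set(keyIdBytes, 8);
  out.set(ciphertext, 8 + keyIdBytes.length);
  return out;
}

function unpack(data: Uint8Array): { keyId: string; ciphertext: Uint8Array } {
  const keyIdLen = new DataView(data.buffer, data.byteOffset + 4, 4).getUint32(0);
  return {
    keyId: new TextDecoder().decode(data.slice(8, 8 + keyIdLen)),
    ciphertext: data.slice(8 + keyIdLen),
  };
}
```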

View File

@@ -50,11 +50,25 @@ interface AwarenessMessage {
color?: string;
}
interface RelayBackupMessage {
type: 'relay-backup';
docId: string;
data: number[];
}
interface RelayRestoreMessage {
type: 'relay-restore';
docId: string;
data: number[];
}
type WireMessage =
| SyncMessage
| SubscribeMessage
| UnsubscribeMessage
| AwarenessMessage
| RelayBackupMessage
| RelayRestoreMessage
| { type: 'ping' }
| { type: 'pong' };
@@ -71,6 +85,10 @@ export interface SyncServerOptions {
participantMode?: boolean;
/** Called when a document changes (participant mode only) */
onDocChange?: (docId: string, doc: Automerge.Doc<any>) => void;
/** Called when a relay-backup message is received (opaque blob storage) */
onRelayBackup?: (docId: string, blob: Uint8Array) => void;
/** Called to load a relay blob for restore on subscribe */
onRelayLoad?: (docId: string) => Promise<Uint8Array | null>;
}
// ============================================================================
@@ -84,10 +102,14 @@ export class SyncServer {
#participantMode: boolean;
#relayOnlyDocs = new Set<string>(); // docIds forced to relay mode (encrypted spaces)
#onDocChange?: (docId: string, doc: Automerge.Doc<any>) => void;
#onRelayBackup?: (docId: string, blob: Uint8Array) => void;
#onRelayLoad?: (docId: string) => Promise<Uint8Array | null>;
constructor(opts: SyncServerOptions = {}) {
this.#participantMode = opts.participantMode ?? true;
this.#onDocChange = opts.onDocChange;
this.#onRelayBackup = opts.onRelayBackup;
this.#onRelayLoad = opts.onRelayLoad;
}
/**
@@ -174,6 +196,9 @@ export class SyncServer {
case 'awareness':
this.#handleAwareness(peer, msg as AwarenessMessage);
break;
case 'relay-backup':
this.#handleRelayBackup(peer, msg as RelayBackupMessage);
break;
case 'ping':
this.#sendToPeer(peer, { type: 'pong' });
break;
@@ -262,8 +287,23 @@ export class SyncServer {
peer.syncStates.set(docId, Automerge.initSyncState());
}
if (this.isRelayOnly(docId)) {
// Relay mode: try to send stored encrypted blob
if (this.#onRelayLoad) {
this.#onRelayLoad(docId).then((blob) => {
if (blob) {
this.#sendToPeer(peer, {
type: 'relay-restore',
docId,
data: Array.from(blob),
});
}
}).catch((e) => {
console.error(`[SyncServer] Failed to load relay blob for ${docId}:`, e);
});
}
} else if (this.#participantMode && this.#docs.has(docId)) {
// Participant mode: send initial sync
this.#sendSyncToPeer(peer, docId);
}
}
@@ -343,6 +383,13 @@ export class SyncServer {
}
}
#handleRelayBackup(_peer: Peer, msg: RelayBackupMessage): void {
const blob = new Uint8Array(msg.data);
if (this.#onRelayBackup) {
this.#onRelayBackup(msg.docId, blob);
}
}
#sendSyncToPeer(peer: Peer, docId: string): void {
const doc = this.#docs.get(docId);
if (!doc) return;
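The relay wire protocol above carries binary blobs as plain `number[]` arrays inside JSON messages, so they survive any text-only transport. A self-contained sketch of that encoding round trip (the helper names here are illustrative, not part of the commit):

```typescript
// Sketch of the relay wire encoding: Uint8Array <-> number[] inside JSON.
interface RelayBackupMessage {
  type: 'relay-backup';
  docId: string;
  data: number[];
}

function encodeRelayBackup(docId: string, blob: Uint8Array): string {
  const msg: RelayBackupMessage = {
    type: 'relay-backup',
    docId,
    data: Array.from(blob), // bytes become a JSON-safe number array
  };
  return JSON.stringify(msg);
}

function decodeRelayBlob(wire: string): { docId: string; blob: Uint8Array } {
  const msg = JSON.parse(wire) as RelayBackupMessage;
  return { docId: msg.docId, blob: new Uint8Array(msg.data) };
}
```

This is simple but roughly 4x larger on the wire than raw bytes; the trade-off is that one JSON message type system covers sync, awareness, and relay traffic.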

View File

@@ -3,12 +3,43 @@
*
* Participant mode: server maintains its own Automerge docs.
* On any doc change, debounced-save to disk via doc-persistence.
*
* When a doc belongs to an encrypted space (meta.encrypted === true),
* the save is encrypted at rest using the space's encryptionKeyId.
*
* Relay mode: for encrypted spaces, the server stores opaque blobs
* it cannot decrypt, enabling cross-device restore.
*/
import { SyncServer } from "./local-first/sync-server";
import { saveDoc, saveEncryptedBlob, loadEncryptedBlob } from "./local-first/doc-persistence";
import { getDocumentData } from "./community-store";
/**
* Look up the encryption key ID for a doc's space.
* DocIds are formatted as "spaceSlug:module:collection[:itemId]".
* Returns the encryptionKeyId if the space has encryption enabled, else undefined.
*/
function getEncryptionKeyId(docId: string): string | undefined {
const spaceSlug = docId.split(":")[0];
if (!spaceSlug || spaceSlug === "global") return undefined;
const data = getDocumentData(spaceSlug);
if (data?.meta?.encrypted && data.meta.encryptionKeyId) {
return data.meta.encryptionKeyId;
}
return undefined;
}
export const syncServer = new SyncServer({
participantMode: true,
onDocChange: (docId, doc) => {
const encryptionKeyId = getEncryptionKeyId(docId);
saveDoc(docId, doc, encryptionKeyId);
},
onRelayBackup: (docId, blob) => {
saveEncryptedBlob(docId, blob);
},
onRelayLoad: (docId) => {
return loadEncryptedBlob(docId);
},
});
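The docId convention that `getEncryptionKeyId` relies on ("spaceSlug:module:collection[:itemId]") can be sketched as a pure lookup. The `SpaceMeta` shape here is an assumption matching the `data?.meta?.encrypted` access above, and the in-memory map stands in for `getDocumentData`:

```typescript
// Sketch of the encryption-key lookup by docId prefix.
// Assumed meta shape, matching the snippet above.
interface SpaceMeta {
  encrypted?: boolean;
  encryptionKeyId?: string;
}

function keyIdForDoc(
  docId: string,
  metaBySlug: Record<string, SpaceMeta>, // stand-in for getDocumentData
): string | undefined {
  const spaceSlug = docId.split(':')[0];
  if (!spaceSlug || spaceSlug === 'global') return undefined; // global docs stay plaintext
  const meta = metaBySlug[spaceSlug];
  return meta?.encrypted && meta.encryptionKeyId ? meta.encryptionKeyId : undefined;
}
```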

View File

@@ -0,0 +1,273 @@
/**
* Client-Side Backup Manager: encrypted backup push/pull to server.
*
* Reads already-encrypted blobs from IndexedDB (no double-encryption needed).
* Compares local manifest vs server manifest, uploads only changed docs.
* On restore: downloads all blobs, writes to IndexedDB.
*/
import type { DocumentId } from './document';
import type { EncryptedDocStore } from './storage';
export interface BackupResult {
uploaded: number;
skipped: number;
errors: string[];
}
export interface RestoreResult {
downloaded: number;
errors: string[];
}
export interface BackupStatus {
enabled: boolean;
lastBackupAt: string | null;
docCount: number;
totalBytes: number;
}
interface ServerManifestEntry {
docId: string;
hash: string;
size: number;
updatedAt: string;
}
interface ServerManifest {
spaceSlug: string;
entries: ServerManifestEntry[];
updatedAt: string;
}
export class BackupSyncManager {
#spaceId: string;
#store: EncryptedDocStore;
#baseUrl: string;
#autoBackupTimer: ReturnType<typeof setInterval> | null = null;
constructor(spaceId: string, store: EncryptedDocStore, baseUrl?: string) {
this.#spaceId = spaceId;
this.#store = store;
this.#baseUrl = baseUrl || '';
}
/**
* Push backup: upload changed docs to server.
* Reads encrypted blobs from IndexedDB and compares with server manifest.
*/
async pushBackup(): Promise<BackupResult> {
const result: BackupResult = { uploaded: 0, skipped: 0, errors: [] };
const token = this.#getAuthToken();
if (!token) {
result.errors.push('Not authenticated');
return result;
}
try {
// Get server manifest
const serverManifest = await this.#fetchManifest(token);
const serverHashes = new Map(
serverManifest.entries.map((e) => [e.docId, e.hash]),
);
// List all local docs for this space
const localDocs = await this.#store.listAll();
const spaceDocs = localDocs.filter((id) =>
id.startsWith(`${this.#spaceId}:`),
);
for (const docId of spaceDocs) {
try {
const blob = await this.#store.loadRaw(docId);
if (!blob) continue;
// Hash local blob and compare
const localHash = await this.#hashBlob(blob);
if (serverHashes.get(docId) === localHash) {
result.skipped++;
continue;
}
// Upload
await this.#uploadBlob(token, docId, blob);
result.uploaded++;
} catch (e) {
result.errors.push(`${docId}: ${e}`);
}
}
// Update last backup time
try {
localStorage.setItem(
`rspace:${this.#spaceId}:last_backup`,
new Date().toISOString(),
);
} catch { /* SSR */ }
} catch (e) {
result.errors.push(`Manifest fetch failed: ${e}`);
}
return result;
}
/**
* Pull restore: download all blobs from server to IndexedDB.
* Used when setting up a new device or recovering data.
*/
async pullRestore(): Promise<RestoreResult> {
const result: RestoreResult = { downloaded: 0, errors: [] };
const token = this.#getAuthToken();
if (!token) {
result.errors.push('Not authenticated');
return result;
}
try {
const manifest = await this.#fetchManifest(token);
for (const entry of manifest.entries) {
try {
const blob = await this.#downloadBlob(
token,
entry.docId,
);
if (blob) {
await this.#store.saveImmediate(
entry.docId as DocumentId,
blob,
);
result.downloaded++;
}
} catch (e) {
result.errors.push(`${entry.docId}: ${e}`);
}
}
} catch (e) {
result.errors.push(`Manifest fetch failed: ${e}`);
}
return result;
}
/**
* Get current backup status.
*/
async getStatus(): Promise<BackupStatus> {
let lastBackupAt: string | null = null;
let enabled = false;
try {
lastBackupAt = localStorage.getItem(
`rspace:${this.#spaceId}:last_backup`,
);
enabled =
localStorage.getItem('encryptid_backup_enabled') === 'true';
} catch { /* SSR */ }
const token = this.#getAuthToken();
if (!token || !enabled) {
return { enabled, lastBackupAt, docCount: 0, totalBytes: 0 };
}
try {
const manifest = await this.#fetchManifest(token);
const totalBytes = manifest.entries.reduce(
(sum, e) => sum + e.size,
0,
);
return {
enabled,
lastBackupAt,
docCount: manifest.entries.length,
totalBytes,
};
} catch {
return { enabled, lastBackupAt, docCount: 0, totalBytes: 0 };
}
}
/**
* Enable/disable periodic auto-backup.
*/
setAutoBackup(enabled: boolean, intervalMs = 5 * 60 * 1000): void {
if (this.#autoBackupTimer) {
clearInterval(this.#autoBackupTimer);
this.#autoBackupTimer = null;
}
if (enabled) {
this.#autoBackupTimer = setInterval(() => {
this.pushBackup().catch((e) =>
console.error('[BackupSync] Auto-backup failed:', e),
);
}, intervalMs);
}
}
destroy(): void {
this.setAutoBackup(false);
}
// ---- Private ----
#getAuthToken(): string | null {
try {
const sess = JSON.parse(
localStorage.getItem('encryptid_session') || '',
);
return sess?.accessToken || null;
} catch {
return null;
}
}
async #fetchManifest(token: string): Promise<ServerManifest> {
const resp = await fetch(
`${this.#baseUrl}/api/backup/${encodeURIComponent(this.#spaceId)}`,
{ headers: { Authorization: `Bearer ${token}` } },
);
if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
return resp.json();
}
async #uploadBlob(
token: string,
docId: string,
blob: Uint8Array,
): Promise<void> {
const body = new Uint8Array(blob).buffer as ArrayBuffer;
const resp = await fetch(
`${this.#baseUrl}/api/backup/${encodeURIComponent(this.#spaceId)}/${encodeURIComponent(docId)}`,
{
method: 'PUT',
headers: {
Authorization: `Bearer ${token}`,
'Content-Type': 'application/octet-stream',
},
body,
},
);
if (!resp.ok) throw new Error(`Upload failed: HTTP ${resp.status}`);
}
async #downloadBlob(
token: string,
docId: string,
): Promise<Uint8Array | null> {
const resp = await fetch(
`${this.#baseUrl}/api/backup/${encodeURIComponent(this.#spaceId)}/${encodeURIComponent(docId)}`,
{ headers: { Authorization: `Bearer ${token}` } },
);
if (resp.status === 404) return null;
if (!resp.ok) throw new Error(`Download failed: HTTP ${resp.status}`);
const buf = await resp.arrayBuffer();
return new Uint8Array(buf);
}
async #hashBlob(blob: Uint8Array): Promise<string> {
const buf = new Uint8Array(blob).buffer as ArrayBuffer;
const hash = await crypto.subtle.digest('SHA-256', buf);
return Array.from(new Uint8Array(hash))
.map((b) => b.toString(16).padStart(2, '0'))
.join('');
}
}
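The delta-only push in `pushBackup` boils down to a hash comparison: upload a doc only when its local content hash differs from, or is absent in, the server manifest. A pure sketch of that decision (the `planUploads` helper is illustrative; the real class interleaves this with fetches):

```typescript
// Sketch of the delta-only push decision: compare local vs server hashes.
function planUploads(
  localHashes: Map<string, string>,   // docId -> SHA-256 hex of local blob
  serverHashes: Map<string, string>,  // docId -> hash from server manifest
): { upload: string[]; skip: string[] } {
  const upload: string[] = [];
  const skip: string[] = [];
  for (const [docId, hash] of localHashes) {
    // Missing on server, or hash mismatch -> upload; identical -> skip.
    (serverHashes.get(docId) === hash ? skip : upload).push(docId);
  }
  return { upload, skip };
}
```

Since the blobs are already encrypted client-side, the server only ever sees ciphertext hashes; equal plaintexts still produce different hashes across devices because each encryption uses a fresh IV, so a re-encrypted but unchanged doc would be re-uploaded.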

View File

@@ -27,6 +27,8 @@
*/
import { DocCrypto } from './crypto';
import { BackupSyncManager } from './backup';
import type { EncryptedDocStore } from './storage';
// ============================================================================
// TYPES
@@ -198,8 +200,14 @@ export function isEncryptedBackupEnabled(): boolean {
/**
* Toggle encrypted backup flag.
* When enabled, creates and starts auto-backup for the given store.
* When disabled, stops auto-backup and destroys the manager.
*/
export function setEncryptedBackupEnabled(
enabled: boolean,
store?: EncryptedDocStore,
spaceId?: string,
): void {
try {
if (enabled) {
localStorage.setItem('encryptid_backup_enabled', 'true');
@@ -209,4 +217,48 @@ export function setEncryptedBackupEnabled(enabled: boolean): void {
} catch {
// localStorage unavailable
}
// Wire up BackupSyncManager
if (enabled && store && spaceId) {
if (_backupManager) {
_backupManager.destroy();
}
_backupManager = new BackupSyncManager(spaceId, store);
_backupManager.setAutoBackup(true);
} else if (!enabled && _backupManager) {
_backupManager.destroy();
_backupManager = null;
}
}
// ============================================================================
// Backup Manager singleton
// ============================================================================
let _backupManager: BackupSyncManager | null = null;
/**
* Get the current BackupSyncManager (if backup is enabled).
*/
export function getBackupManager(): BackupSyncManager | null {
return _backupManager;
}
/**
* Create a BackupSyncManager for the given space.
* Call this after auth + store setup when backup is enabled.
*/
export function initBackupManager(
spaceId: string,
store: EncryptedDocStore,
baseUrl?: string,
): BackupSyncManager {
if (_backupManager) {
_backupManager.destroy();
}
_backupManager = new BackupSyncManager(spaceId, store, baseUrl);
if (isEncryptedBackupEnabled()) {
_backupManager.setAutoBackup(true);
}
return _backupManager;
}
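Both `setEncryptedBackupEnabled` and `initBackupManager` follow the same replace-and-teardown singleton pattern: re-initializing destroys the previous manager before installing a new one, so at most one auto-backup timer runs. A stubbed sketch of just that lifecycle (`StubManager` stands in for `BackupSyncManager`):

```typescript
// Stubbed sketch of the replace-and-teardown singleton lifecycle above.
class StubManager {
  destroyed = false;
  destroy(): void {
    this.destroyed = true; // real manager also clears its auto-backup timer
  }
}

let currentManager: StubManager | null = null;

function initStubManager(): StubManager {
  currentManager?.destroy(); // tear down any previous instance first
  currentManager = new StubManager();
  return currentManager;
}
```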

View File

@@ -38,9 +38,19 @@ export {
type SubscribeMessage,
type UnsubscribeMessage,
type AwarenessMessage,
type RelayBackupMessage,
type RelayRestoreMessage,
type WireMessage,
} from './sync';
// Backup
export {
BackupSyncManager,
type BackupResult,
type RestoreResult,
type BackupStatus,
} from './backup';
// Layer 5: Compute
export {
type Transform,

View File

@@ -154,6 +154,19 @@ export class EncryptedDocStore {
return stored.data;
}
/**
* Load raw stored bytes for a document (without decrypting).
* Used by the backup manager to upload already-encrypted blobs.
*/
async loadRaw(docId: DocumentId): Promise<Uint8Array | null> {
if (!this.#db) return null;
const stored = await this.#getDoc(docId);
if (!stored) return null;
return stored.data;
}
/**
* Delete a document and its metadata.
*/

View File

@@ -51,11 +51,27 @@ export interface AwarenessMessage {
color?: string;
}
/** Client sends full encrypted Automerge binary for server-side opaque storage. */
export interface RelayBackupMessage {
type: 'relay-backup';
docId: string;
data: number[];
}
/** Server sends stored encrypted blob to reconnecting client. */
export interface RelayRestoreMessage {
type: 'relay-restore';
docId: string;
data: number[];
}
export type WireMessage =
| SyncMessage
| SubscribeMessage
| UnsubscribeMessage
| AwarenessMessage
| RelayBackupMessage
| RelayRestoreMessage
| { type: 'ping' }
| { type: 'pong' };
@@ -347,6 +363,18 @@ export class DocSyncManager {
// ---------- Private ----------
/**
* Send a relay-backup message: the full encrypted Automerge binary for
* server-side opaque storage. Used for relay-mode (encrypted) docs.
*/
sendRelayBackup(docId: DocumentId, encryptedBlob: Uint8Array): void {
this.#send({
type: 'relay-backup',
docId,
data: Array.from(encryptedBlob),
});
}
#handleMessage(raw: ArrayBuffer | string): void {
try {
const data = typeof raw === 'string' ? raw : new TextDecoder().decode(raw);
@@ -359,6 +387,9 @@ export class DocSyncManager {
case 'awareness':
this.#handleAwareness(msg as AwarenessMessage);
break;
case 'relay-restore':
this.#handleRelayRestore(msg as RelayRestoreMessage);
break;
case 'pong':
// Keep-alive acknowledged
break;
@@ -409,6 +440,29 @@ export class DocSyncManager {
}
}
/**
* Handle a relay-restore message: the server sends back a stored encrypted blob.
* Write it to IndexedDB so the client can decrypt and load it locally.
*/
#handleRelayRestore(msg: RelayRestoreMessage): void {
const docId = msg.docId as DocumentId;
const blob = new Uint8Array(msg.data);
if (this.#store) {
// Store as raw encrypted blob — EncryptedDocStore.load() will handle decryption
this.#store.saveImmediate(docId, blob).catch(() => {});
}
// Notify change listeners so UI can react
const listeners = this.#changeListeners.get(docId);
if (listeners) {
const doc = this.#documents.get(docId);
if (doc) {
for (const cb of listeners) {
try { cb(doc); } catch { /* ignore */ }
}
}
}
}
#sendSyncMessage(docId: DocumentId): void {
const doc = this.#documents.get(docId);
if (!doc) return;