Compare commits

...

23 Commits

Author SHA1 Message Date
Admin
c5c167035d fix(tts): fix pocket-tts voices missing in UI and 500 on first TTS enqueue
Some checks failed
CI / Test backend (pull_request) Successful in 40s
CI / Check ui (pull_request) Successful in 45s
Release / Test backend (push) Successful in 40s
Release / Check ui (push) Successful in 41s
Release / Docker / caddy (push) Successful in 1m8s
CI / Docker / runner (pull_request) Failing after 35s
CI / Docker / caddy (pull_request) Successful in 3m39s
CI / Docker / ui (pull_request) Successful in 1m12s
CI / Docker / backend (pull_request) Successful in 3m24s
Release / Upload source maps (push) Failing after 43s
Release / Docker / runner (push) Successful in 2m49s
Release / Docker / ui (push) Successful in 2m51s
Release / Docker / backend (push) Failing after 7m14s
Release / Gitea Release (push) Has been skipped
- Add POCKET_TTS_URL env to backend service in docker-compose.yml so
  pocket-tts voices appear in the voice selector (Doppler secret existed
  but the env var was never passed to the container)
- Fix GetAudioTask PocketBase filter using %q (double-quotes) instead of
  single-quoted string, causing the duplicate-task guard to always miss
- Fix AudioPlayer double-POST: GET /api/presign/audio already enqueues
  TTS internally on 404; AudioPlayer now skips the redundant POST and
  polls directly, eliminating the 500 from the PB unique-key conflict
2026-03-28 16:02:46 +05:00
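The quoting bug in the GetAudioTask filter can be illustrated with a minimal sketch. The helper name and field names below are hypothetical, not taken from the repo; the point is that PocketBase's filter grammar uses single-quoted string literals, while Go's %q verb emits double quotes, so the duplicate-task guard's filter never matched.

```go
package main

import "fmt"

// buildAudioTaskFilter shows the broken vs. fixed filter string.
// PocketBase quotes string literals with single quotes; %q produces
// Go-style double quotes, which the filter grammar does not match.
func buildAudioTaskFilter(chapterID string) (broken, fixed string) {
	broken = fmt.Sprintf("chapter = %q && type = 'audio'", chapterID)   // chapter = "abc123" ...
	fixed = fmt.Sprintf("chapter = '%s' && type = 'audio'", chapterID)  // chapter = 'abc123' ...
	return broken, fixed
}

func main() {
	b, f := buildAudioTaskFilter("abc123")
	fmt.Println(b) // chapter = "abc123" && type = 'audio'
	fmt.Println(f) // chapter = 'abc123' && type = 'audio'
}
```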
Admin
4a00d953bb feat(ui): show build version in footer; enable watchtower for caddy
Some checks failed
CI / Check ui (pull_request) Successful in 50s
CI / Docker / ui (pull_request) Failing after 38s
CI / Test backend (pull_request) Successful in 3m31s
CI / Docker / backend (pull_request) Failing after 11s
CI / Docker / runner (pull_request) Failing after 11s
CI / Docker / caddy (pull_request) Successful in 11m0s
- Footer now displays build version tag + short commit SHA (from
  PUBLIC_BUILD_VERSION / PUBLIC_BUILD_COMMIT env vars baked in at
  image build time); falls back to 'dev' in local builds
- docker-compose.yml: add watchtower label to caddy service so it
  auto-updates alongside backend/ui/runner on new image pushes
- homelab/docker-compose.yml: use locally-built kokoro-fastapi:latest
  image (consistent with actual homelab setup)
2026-03-28 15:42:22 +05:00
Admin
fe1a933fd0 feat(queue): replace PocketBase polling with Asynq + Redis
Some checks failed
CI / Docker / caddy (pull_request) Failing after 50s
CI / Check ui (pull_request) Successful in 1m5s
Release / Check ui (push) Successful in 45s
Release / Test backend (push) Successful in 1m29s
CI / Docker / ui (pull_request) Successful in 1m28s
Release / Docker / backend (push) Successful in 3m14s
CI / Test backend (pull_request) Failing after 42s
CI / Docker / backend (pull_request) Has been skipped
CI / Docker / runner (pull_request) Has been skipped
Release / Docker / caddy (push) Successful in 6m48s
Release / Docker / ui (push) Successful in 2m8s
Release / Docker / runner (push) Successful in 2m51s
Release / Upload source maps (push) Failing after 53s
Release / Gitea Release (push) Has been skipped
Introduce a Redis-backed Asynq task queue so the runner consumes TTS
jobs pushed by the backend instead of polling PocketBase.

- backend/internal/asynqqueue: Producer and Consumer wrappers
- backend/internal/runner: AsynqRunner mux, per-instance Prometheus
  registry (fixes duplicate-collector panic in tests), redisConnOpt
- backend/internal/config: REDIS_ADDR / REDIS_PASSWORD env vars
- backend/cmd/{backend,runner}/main.go: wire Redis when env set; fall
  back to legacy poll mode when unset
- Caddyfile: caddy-l4 TCP proxy for redis.libnovel.cc:6380 → homelab
- caddy/Dockerfile: add --with github.com/mholt/caddy-l4
- docker-compose.yml: Caddy exposes 6380, backend/runner get Redis env
- homelab/runner/docker-compose.yml: Redis sidecar, runner depends_on
- homelab/otel/grafana: Grafana dashboards (backend, catalogue, runner)
  and alerting rules / contact-points provisioning
2026-03-28 14:32:40 +05:00
Admin
98e4a87432 feat(tts): dual-engine voice list (kokoro + pocket-tts)
Expose all available voices from both TTS engines via the /api/voices
endpoint. AudioPlayer and profile voice-selector now group voices by
engine and show a labelled optgroup. Voice type carries an engine field
so the chapter-reader can route synthesis to the correct backend.
2026-03-28 14:32:06 +05:00
Admin
9c8849c6cd fix(otel): accept full https:// URL in OTEL_EXPORTER_OTLP_ENDPOINT
Some checks failed
Release / Check ui (push) Successful in 39s
Release / Docker / caddy (push) Successful in 1m4s
Release / Test backend (push) Successful in 1m38s
Release / Docker / runner (push) Successful in 39s
Release / Docker / backend (push) Successful in 2m12s
Release / Docker / ui (push) Successful in 1m0s
Release / Upload source maps (push) Failing after 5m19s
Release / Gitea Release (push) Has been skipped
WithEndpoint expects host[:port] with no scheme. When Doppler supplied
https://otel.libnovel.cc the backend crashed with 'invalid port'.
Now strip the scheme and enable TLS when the value starts with https://.
2026-03-27 19:38:50 +05:00
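The scheme-stripping fix can be sketched as a small helper. The function name is illustrative (the real otelsetup helper may differ); the behaviour follows the commit message: trim the scheme and turn on TLS for https://.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeOTLPEndpoint converts a possibly scheme-prefixed endpoint into the
// host[:port] form WithEndpoint expects, and reports whether TLS should be
// enabled. Defaulting plain host[:port] values to no TLS is an assumption.
func normalizeOTLPEndpoint(raw string) (endpoint string, useTLS bool) {
	switch {
	case strings.HasPrefix(raw, "https://"):
		return strings.TrimPrefix(raw, "https://"), true
	case strings.HasPrefix(raw, "http://"):
		return strings.TrimPrefix(raw, "http://"), false
	default:
		return raw, false
	}
}

func main() {
	ep, tls := normalizeOTLPEndpoint("https://otel.libnovel.cc")
	fmt.Println(ep, tls) // otel.libnovel.cc true
}
```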
Admin
b30aa23d64 fix(homelab): pocket-tts uses locally-built image, correct start command and volumes
Some checks failed
CI / Check ui (pull_request) Successful in 35s
CI / Test backend (pull_request) Successful in 39s
CI / Docker / ui (pull_request) Successful in 1m27s
CI / Docker / backend (pull_request) Successful in 1m57s
CI / Docker / runner (pull_request) Failing after 35s
CI / Docker / caddy (pull_request) Successful in 6m14s
- pocket-tts has no prebuilt image on ghcr.io — must be built from source on homelab
- Update image to pocket-tts:latest (local tag); add start command with --host 0.0.0.0
- Add model cache volumes (pocket_tts_cache, hf_cache) so model weights survive restarts
- start_period increased to 120s (first startup downloads weights)
2026-03-27 16:29:36 +05:00
Admin
fea09e3e23 feat(tts): add pocket-tts engine alongside kokoro-fastapi
Some checks failed
Release / Test backend (push) Successful in 36s
Release / Check ui (push) Successful in 50s
Release / Docker / caddy (push) Failing after 45s
CI / Test backend (pull_request) Successful in 39s
CI / Check ui (pull_request) Failing after 11s
CI / Docker / ui (pull_request) Has been skipped
CI / Docker / caddy (pull_request) Failing after 11s
Release / Docker / backend (push) Failing after 48s
Release / Docker / runner (push) Failing after 58s
Release / Upload source maps (push) Failing after 39s
CI / Docker / backend (pull_request) Failing after 34s
CI / Docker / runner (pull_request) Failing after 30s
Release / Docker / ui (push) Successful in 2m42s
Release / Gitea Release (push) Has been skipped
- New backend/internal/pockettts package: POST /tts client for
  kyutai-labs/pocket-tts; streams WAV response and transcodes to MP3
  via ffmpeg so the rest of the pipeline stays format-agnostic
- config.PocketTTS struct + POCKET_TTS_URL env var
- runner.Dependencies.PocketTTS field; runAudioTask routes by voice name:
  pockettts.IsPocketTTSVoice (alba, marius, javert, …) → pocket-tts,
  everything else → kokoro-fastapi
- Dockerfile: runner stage switched from distroless/static to alpine:3.21
  with ffmpeg + ca-certificates installed
- homelab compose: runner gets POCKET_TTS_URL=http://pocket-tts:8000
- Doppler prd + prd_homelab: KOKORO_URL=https://tts.libnovel.cc,
  POCKET_TTS_URL=https://pocket-tts.libnovel.cc
2026-03-27 16:17:13 +05:00
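The voice routing and the WAV→MP3 transcode can be sketched as below. The voice set mirrors only the names mentioned above (the "…" in the commit message is elided), and the ffmpeg flags are a plausible guess, not the exact ones the pockettts package uses.

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"os/exec"
	"strings"
)

// pocketVoices holds the pocket-tts voice names from the commit message;
// the real pockettts.IsPocketTTSVoice list is longer.
var pocketVoices = map[string]bool{"alba": true, "marius": true, "javert": true}

// IsPocketTTSVoice routes synthesis: pocket-tts voices go to pocket-tts,
// everything else falls through to kokoro-fastapi.
func IsPocketTTSVoice(voice string) bool {
	return pocketVoices[strings.ToLower(voice)]
}

// transcodeWAVToMP3 shells out to ffmpeg over stdin/stdout so the rest of the
// pipeline only ever sees MP3 (hence the runner image switch to alpine, which
// can install ffmpeg). Flags here are illustrative.
func transcodeWAVToMP3(ctx context.Context, wav []byte) ([]byte, error) {
	cmd := exec.CommandContext(ctx, "ffmpeg",
		"-hide_banner", "-loglevel", "error",
		"-f", "wav", "-i", "pipe:0",
		"-f", "mp3", "pipe:1")
	cmd.Stdin = bytes.NewReader(wav)
	var out, errBuf bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &errBuf
	if err := cmd.Run(); err != nil {
		return nil, fmt.Errorf("ffmpeg: %w: %s", err, errBuf.String())
	}
	return out.Bytes(), nil
}

func main() {
	fmt.Println(IsPocketTTSVoice("alba"))     // true  → pocket-tts
	fmt.Println(IsPocketTTSVoice("af_bella")) // false → kokoro-fastapi
	_ = transcodeWAVToMP3                     // requires ffmpeg on PATH
}
```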
Admin
4831c74acc feat(observability+tts): OTel logs for backend/runner/ui; add kokoro-fastapi (GPU) and pocket-tts (CPU) to homelab
Some checks failed
Release / Check ui (push) Successful in 47s
Release / Docker / caddy (push) Successful in 48s
Release / Test backend (push) Successful in 3m24s
CI / Check ui (pull_request) Successful in 52s
CI / Test backend (pull_request) Successful in 3m6s
CI / Docker / caddy (pull_request) Failing after 38s
Release / Upload source maps (push) Failing after 33s
Release / Docker / ui (push) Failing after 2m41s
Release / Docker / backend (push) Successful in 2m48s
Release / Docker / runner (push) Failing after 1m1s
Release / Gitea Release (push) Has been skipped
CI / Docker / ui (pull_request) Successful in 1m48s
CI / Docker / backend (pull_request) Successful in 1m56s
CI / Docker / runner (pull_request) Successful in 1m45s
- otelsetup.Init now returns a *slog.Logger wired to the OTLP log exporter
  so all slog output is shipped to Loki with embedded trace IDs
- backend and runner both adopt the new OTel-bridged logger
- runner.runScrapeTask and runAudioTask now emit structured OTel spans
- ui/hooks.server.ts adds BatchLogRecordProcessor alongside existing trace exporter
- homelab: add kokoro-fastapi GPU service (ghcr.io/remsky/kokoro-fastapi-gpu)
  using deploy.resources.reservations for NVIDIA GPU, exposed internally on :8880
- homelab: add pocket-tts CPU service (ghcr.io/kyutai-labs/pocket-tts) on :8000
- runner KOKORO_URL hardcoded to http://kokoro-fastapi:8880 (fixes DNS failure
  for the stale kokoro.kalekber.cc hostname)
2026-03-27 16:05:22 +05:00
Admin
7e5e0495cf ci: retrigger after fixing runner DNS — job containers now use dnsmasq with AAAA filtering
Some checks failed
CI / Check ui (pull_request) Successful in 46s
CI / Test backend (pull_request) Successful in 2m56s
CI / Docker / ui (pull_request) Failing after 49s
CI / Docker / backend (pull_request) Successful in 1m47s
CI / Docker / caddy (pull_request) Successful in 5m45s
CI / Docker / runner (pull_request) Successful in 1m53s
2026-03-27 10:32:56 +05:00
Admin
188685e1b6 ci: retrigger to rebuild Docker backend image after transient IPv6 failure
Some checks failed
CI / Test backend (pull_request) Failing after 29s
CI / Docker / backend (pull_request) Has been skipped
CI / Docker / runner (pull_request) Has been skipped
CI / Check ui (pull_request) Successful in 53s
CI / Docker / ui (pull_request) Successful in 1m27s
CI / Docker / caddy (pull_request) Successful in 5m15s
2026-03-27 10:12:19 +05:00
Admin
3271a5f3e6 fix(ui): use resourceFromAttributes instead of new Resource to fix verbatimModuleSyntax TS error
Some checks failed
CI / Test backend (pull_request) Successful in 36s
CI / Check ui (pull_request) Successful in 46s
CI / Docker / backend (pull_request) Failing after 41s
CI / Docker / runner (pull_request) Successful in 1m38s
CI / Docker / ui (pull_request) Successful in 1m42s
CI / Docker / caddy (pull_request) Successful in 6m27s
2026-03-27 10:01:29 +05:00
Admin
ee3ed29316 Add OTel distributed tracing to backend, ui, and runner
Some checks failed
CI / Check ui (pull_request) Failing after 44s
CI / Docker / ui (pull_request) Has been skipped
CI / Test backend (pull_request) Successful in 3m30s
CI / Docker / backend (pull_request) Successful in 2m28s
CI / Docker / caddy (pull_request) Successful in 5m22s
CI / Docker / runner (pull_request) Successful in 1m52s
Go backend:
- Add OTel SDK + otelhttp middleware deps (go.mod)
- New internal/otelsetup package: init OTLP/HTTP TracerProvider from env vars
- cmd/backend/main.go: call otelsetup.Init() after logger + ctx setup
- internal/backend/server.go: wrap mux with otelhttp.NewHandler() before
  sentryhttp, so all HTTP spans are recorded

SvelteKit UI:
- Add @opentelemetry/sdk-node, exporter-trace-otlp-http, resources,
  semantic-conventions
- hooks.server.ts: init NodeSDK when OTEL_EXPORTER_OTLP_ENDPOINT is set;
  graceful shutdown on SIGTERM/SIGINT

Config:
- docker-compose.yml: pass OTEL_EXPORTER_OTLP_ENDPOINT + OTEL_SERVICE_NAME
  to backend, runner, and ui services
- homelab/docker-compose.yml: fix runner OTel endpoint to HTTP port 4318
- Doppler prd: OTEL_EXPORTER_OTLP_ENDPOINT=https://otel.libnovel.cc
- Doppler prd_homelab: OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318

All services no-op gracefully when the env var is unset (local dev).
2026-03-26 23:00:52 +05:00
Admin
a39f660a37 Migrate tooling to homelab; add OTel observability stack
All checks were successful
CI / Test backend (pull_request) Successful in 34s
CI / Check ui (pull_request) Successful in 33s
CI / Docker / backend (pull_request) Successful in 1m26s
CI / Docker / runner (pull_request) Successful in 1m50s
CI / Docker / ui (pull_request) Successful in 1m3s
CI / Docker / caddy (pull_request) Successful in 5m53s
- Remove GlitchTip, Umami, Fider, Gotify, Uptime Kuma, Dozzle, Watchtower
  from prod docker-compose (now run on homelab)
- Add dozzle-agent on prod (127.0.0.1:7007) for homelab Dozzle to connect to
- Remove corresponding subdomain blocks from Caddyfile (now routed via
  Cloudflare Tunnel from homelab)
- Add homelab/docker-compose.yml: unified homelab stack with all migrated
  tooling services plus full OTel stack (Tempo 2.6.1, Loki, Prometheus,
  OTel Collector, Grafana)
- Add homelab/otel/: Tempo, Loki, Prometheus, OTel Collector configs +
  Grafana provisioning (datasources + dashboards)
- Add homelab/dozzle/users.yml for Dozzle auth
2026-03-26 21:22:43 +05:00
Admin
69818089a6 perf(ui): eliminate full listBooks() scan on every page load
Some checks failed
Release / Test backend (push) Failing after 10s
Release / Docker / backend (push) Has been skipped
Release / Docker / runner (push) Has been skipped
Release / Docker / caddy (push) Failing after 10s
Release / Check ui (push) Successful in 32s
CI / Test backend (pull_request) Successful in 39s
CI / Check ui (pull_request) Successful in 48s
Release / Upload source maps (push) Successful in 2m17s
CI / Docker / caddy (pull_request) Successful in 2m44s
Release / Docker / ui (push) Successful in 2m31s
Release / Gitea Release (push) Has been skipped
CI / Docker / runner (pull_request) Successful in 1m38s
CI / Docker / ui (pull_request) Successful in 1m28s
CI / Docker / backend (pull_request) Successful in 2m11s
The home, library, and /books routes were fetching all 15k books from
PocketBase on every SSR request (31 sequential HTTP calls per request).

Changes:
- Add src/lib/server/cache.ts: generic Valkey JSON cache
- Add getBooksBySlugs(): single PB query fetching only requested slugs,
  with fallback to the 5-min Valkey cache populated by listBooks()
- listBooks(): now caches results in Valkey for 5 min (safety net for
  admin routes that still need the full list)
- Home + /api/home: replaced listBooks()+filter with getBooksBySlugs()
  on progress slugs only — typically 1 PB request instead of 31
- /books + /api/library: same pattern using progress+saved slug union
2026-03-26 16:14:47 +05:00
Admin
09062b8c82 fix(ci): use correct glitchtip-cli download URL for linux-x86_64
Some checks failed
Release / Test backend (push) Successful in 35s
Release / Check ui (push) Successful in 38s
Release / Docker / caddy (push) Successful in 48s
CI / Check ui (pull_request) Successful in 31s
CI / Test backend (pull_request) Successful in 36s
CI / Docker / caddy (pull_request) Successful in 4m38s
Release / Docker / ui (push) Successful in 1m45s
Release / Upload source maps (push) Successful in 2m16s
CI / Docker / ui (pull_request) Failing after 30s
CI / Docker / backend (pull_request) Failing after 42s
CI / Docker / runner (pull_request) Failing after 32s
Release / Docker / runner (push) Successful in 2m2s
Release / Docker / backend (push) Successful in 1m36s
Release / Gitea Release (push) Failing after 1s
2026-03-26 11:55:17 +05:00
Admin
d518710cc4 fix(observability): switch source map upload to glitchtip-cli
Some checks failed
Release / Test backend (push) Successful in 18s
CI / Test backend (pull_request) Successful in 18s
Release / Check ui (push) Successful in 47s
Release / Docker / caddy (push) Successful in 55s
CI / Check ui (pull_request) Successful in 21s
Release / Docker / backend (push) Failing after 1m13s
CI / Docker / backend (pull_request) Failing after 20s
Release / Docker / runner (push) Successful in 1m46s
CI / Docker / runner (pull_request) Failing after 1m17s
Release / Docker / ui (push) Failing after 39s
CI / Docker / ui (pull_request) Successful in 1m11s
CI / Docker / caddy (pull_request) Successful in 7m6s
Release / Upload source maps (push) Failing after 41s
Release / Gitea Release (push) Has been skipped
@sentry/vite-plugin uses sentry-cli which creates release entries but
doesn't upload files to GlitchTip's API correctly. Switch to the native
glitchtip-cli which uses the debug ID inject+upload approach that
GlitchTip actually supports.
2026-03-25 21:10:10 +05:00
Admin
e2c15f5931 fix(observability): correct sentryVitePlugin sourcemaps option key
Some checks failed
Release / Test backend (push) Successful in 18s
CI / Test backend (pull_request) Successful in 17s
Release / Docker / caddy (push) Successful in 42s
Release / Check ui (push) Successful in 45s
CI / Check ui (pull_request) Successful in 24s
CI / Docker / caddy (pull_request) Failing after 38s
Release / Docker / runner (push) Successful in 2m16s
Release / Docker / backend (push) Successful in 2m33s
CI / Docker / backend (pull_request) Successful in 2m14s
Release / Upload source maps (push) Successful in 30s
CI / Docker / runner (pull_request) Successful in 1m26s
CI / Docker / ui (pull_request) Successful in 1m14s
Release / Docker / ui (push) Successful in 2m6s
Release / Gitea Release (push) Failing after 2s
2026-03-25 20:39:26 +05:00
Admin
a50b968b95 fix(infra): expose Meilisearch via search.libnovel.cc for homelab runner indexing
Some checks failed
CI / Test backend (pull_request) Has been cancelled
CI / Check ui (pull_request) Has been cancelled
CI / Docker / backend (pull_request) Has been cancelled
CI / Docker / runner (pull_request) Has been cancelled
CI / Docker / ui (pull_request) Has been cancelled
CI / Docker / caddy (pull_request) Has been cancelled
Release / Check ui (push) Failing after 22s
Release / Upload source maps (push) Has been skipped
Release / Docker / ui (push) Has been skipped
Release / Test backend (push) Successful in 36s
Release / Docker / caddy (push) Successful in 1m16s
Release / Docker / backend (push) Successful in 2m49s
Release / Docker / runner (push) Successful in 3m29s
Release / Gitea Release (push) Has been skipped
- Add search.libnovel.cc Caddy vhost proxying to meilisearch:7700
- Pass MEILI_URL + MEILI_API_KEY from Doppler into homelab runner
- Set GODEBUG=preferIPv4=1 to work around missing IPv6 route on homelab
- Update comments to reflect runner now indexes books into Meilisearch
2026-03-25 20:27:50 +05:00
Admin
023b1f7fec feat(observability): add GlitchTip source map uploads for un-minified stack traces
Some checks failed
CI / Check ui (pull_request) Failing after 11s
CI / Docker / caddy (pull_request) Failing after 11s
CI / Docker / ui (pull_request) Has been skipped
CI / Test backend (pull_request) Successful in 30s
Release / Check ui (push) Failing after 38s
Release / Upload source maps (push) Has been skipped
Release / Docker / ui (push) Has been skipped
Release / Test backend (push) Successful in 48s
Release / Docker / caddy (push) Successful in 45s
CI / Docker / backend (pull_request) Has been cancelled
CI / Docker / runner (pull_request) Has been cancelled
Release / Docker / runner (push) Has been cancelled
Release / Gitea Release (push) Has been cancelled
Release / Docker / backend (push) Has been cancelled
- Enable sourcemap:true in vite.config.ts
- Add sentryVitePlugin: uploads maps to errors.libnovel.cc, deletes them post-upload so they never ship in the Docker image
- Wire release: PUBLIC_BUILD_VERSION in both hooks.client.ts and hooks.server.ts so events correlate to the correct artifact
- Add upload-sourcemaps CI job in release.yaml (parallel to docker-ui, uses GLITCHTIP_AUTH_TOKEN secret)
2026-03-25 20:26:19 +05:00
Admin
7e99fc6d70 fix(runner): fix audio task infinite loop and semaphore race
Some checks failed
Release / Check ui (push) Successful in 22s
Release / Test backend (push) Successful in 33s
Release / Docker / backend (push) Failing after 30s
Release / Docker / caddy (push) Successful in 1m9s
Release / Docker / ui (push) Successful in 1m34s
Release / Docker / runner (push) Failing after 1m15s
Release / Gitea Release (push) Has been skipped
Two bugs caused audio tasks to loop endlessly:

1. claimRecord never set heartbeat_at — newly claimed tasks had
   heartbeat_at=null, which matched the reaper's stale filter
   (heartbeat_at=null || heartbeat_at<threshold). Tasks were reaped
   and reset to pending within seconds of being claimed, before the
   30s heartbeat goroutine had a chance to write a timestamp.
   Fix: set heartbeat_at=now() in claimRecord alongside status=running.

2. Audio semaphore was checked AFTER claiming the task. When the
   semaphore was full the select/break only broke the inner select,
   not the for loop — the code fell through and launched an uncapped
   goroutine that blocked forever on <-audioSem drain. The task also
   stayed status=running with no heartbeat, feeding bug #1.
   Fix: pre-acquire a semaphore slot BEFORE claiming the task; release
   it immediately if the queue is empty or claim fails.
2026-03-25 15:09:52 +05:00
Admin
12d6d30fb0 feat: add /terms page and make disclaimer/privacy/dmca/terms public routes
Some checks failed
CI / Check ui (pull_request) Successful in 20s
Release / Test backend (push) Successful in 42s
Release / Check ui (push) Successful in 38s
CI / Docker / caddy (pull_request) Successful in 3m2s
Release / Docker / caddy (push) Successful in 1m36s
CI / Docker / ui (pull_request) Successful in 1m19s
Release / Docker / backend (push) Successful in 1m42s
CI / Test backend (pull_request) Successful in 6m54s
Release / Docker / ui (push) Successful in 1m38s
Release / Docker / runner (push) Successful in 2m28s
Release / Gitea Release (push) Failing after 2s
CI / Docker / backend (pull_request) Failing after 43s
CI / Docker / runner (pull_request) Successful in 1m20s
2026-03-25 13:55:27 +05:00
Admin
f9c14685b3 feat(auth): make /disclaimer, /privacy, /dmca public routes
Some checks failed
CI / Test backend (pull_request) Successful in 28s
CI / Check ui (pull_request) Successful in 33s
Release / Test backend (push) Successful in 18s
Release / Check ui (push) Successful in 29s
CI / Docker / backend (pull_request) Failing after 20s
Release / Docker / caddy (push) Successful in 1m31s
CI / Docker / ui (pull_request) Successful in 1m8s
CI / Docker / runner (pull_request) Successful in 2m17s
Release / Docker / backend (push) Successful in 1m31s
Release / Docker / runner (push) Successful in 2m24s
Release / Docker / ui (push) Successful in 1m25s
CI / Docker / caddy (pull_request) Successful in 6m47s
Release / Gitea Release (push) Failing after 1s
2026-03-25 13:51:36 +05:00
Admin
4a7009989c feat(auth): replace email/password registration with OAuth2 (Google + GitHub)
Some checks failed
CI / Test backend (pull_request) Successful in 19s
Release / Test backend (push) Successful in 18s
CI / Check ui (pull_request) Successful in 41s
Release / Check ui (push) Successful in 21s
CI / Docker / backend (pull_request) Successful in 1m43s
CI / Docker / runner (pull_request) Successful in 1m28s
Release / Docker / backend (push) Successful in 1m40s
CI / Docker / caddy (pull_request) Successful in 6m45s
Release / Docker / runner (push) Successful in 1m48s
Release / Docker / caddy (push) Successful in 7m12s
CI / Docker / ui (pull_request) Successful in 1m20s
Release / Docker / ui (push) Successful in 1m19s
Release / Gitea Release (push) Failing after 2s
- New /auth/[provider] route: generates state cookie, redirects to provider
- New /auth/[provider]/callback: exchanges code, fetches profile, auto-creates
  or links account, sets auth cookie
- pocketbase.ts: add oauth_provider/oauth_id to User; new getUserByOAuth(),
  createOAuthUser(), linkOAuthToUser() helpers; loginUser() drops email_verified gate
- pb-init-v3.sh: add oauth_provider + oauth_id fields (schema + migration)
- docker-compose.yml: GOOGLE/GITHUB client ID/secret env vars (replaces SMTP vars)
- Login page: two OAuth buttons (Google, GitHub) — register form removed
- /verify-email route and email.ts removed (provider handles email verification)
- /api/auth/register returns 410 (OAuth-only from now on)
2026-03-24 22:01:51 +05:00
68 changed files with 5462 additions and 1180 deletions

View File

@@ -135,6 +135,54 @@ jobs:
cache-from: type=registry,ref=${{ secrets.DOCKER_USER }}/libnovel-runner:latest
cache-to: type=inline
# ── ui: source map upload ─────────────────────────────────────────────────────
# Builds the UI with source maps and uploads them to GlitchTip so that error
# stack traces resolve to original .svelte/.ts file names and line numbers.
# Runs in parallel with docker-ui (both need check-ui to pass first).
upload-sourcemaps:
name: Upload source maps
runs-on: ubuntu-latest
needs: [check-ui]
defaults:
run:
working-directory: ui
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "22"
cache: npm
cache-dependency-path: ui/package-lock.json
- name: Install dependencies
run: npm ci
- name: Build with source maps
run: npm run build
- name: Download glitchtip-cli
run: |
curl -L "https://gitlab.com/glitchtip/glitchtip-cli/-/jobs/artifacts/v0.1.0/raw/artifacts/glitchtip-cli-linux-x86_64?job=build-linux-x86_64" \
-o /usr/local/bin/glitchtip-cli
chmod +x /usr/local/bin/glitchtip-cli
- name: Inject debug IDs into build artifacts
run: glitchtip-cli sourcemaps inject ./build
env:
SENTRY_URL: https://errors.libnovel.cc/
SENTRY_AUTH_TOKEN: ${{ secrets.GLITCHTIP_AUTH_TOKEN }}
SENTRY_ORG: libnovel
SENTRY_PROJECT: libnovel-ui
- name: Upload source maps to GlitchTip
run: glitchtip-cli sourcemaps upload ./build --release ${{ gitea.ref_name }}
env:
SENTRY_URL: https://errors.libnovel.cc/
SENTRY_AUTH_TOKEN: ${{ secrets.GLITCHTIP_AUTH_TOKEN }}
SENTRY_ORG: libnovel
SENTRY_PROJECT: libnovel-ui
# ── docker: ui ────────────────────────────────────────────────────────────────
docker-ui:
name: Docker / ui
@@ -213,7 +261,7 @@ jobs:
release:
name: Gitea Release
runs-on: ubuntu-latest
needs: [docker-backend, docker-runner, docker-ui, docker-caddy]
needs: [docker-backend, docker-runner, docker-ui, docker-caddy, upload-sourcemaps]
steps:
- uses: actions/checkout@v4
with:

View File

@@ -30,6 +30,7 @@
# logs.libnovel.cc → dozzle:8080 (Docker log viewer)
# uptime.libnovel.cc → uptime-kuma:3001 (uptime monitoring)
# push.libnovel.cc → gotify:80 (push notifications)
# search.libnovel.cc → meilisearch:7700 (search index — homelab runner)
#
# Routes intentionally removed from direct-to-backend:
# /api/scrape/* — SvelteKit has /api/scrape/ counterparts
@@ -203,41 +204,11 @@
}
# ── Tooling subdomains ────────────────────────────────────────────────────────
feedback.libnovel.cc {
import security_headers
reverse_proxy fider:3000
}
# ── GlitchTip: error tracking ─────────────────────────────────────────────────
errors.libnovel.cc {
import security_headers
reverse_proxy glitchtip-web:8000
}
# ── Umami: page analytics ─────────────────────────────────────────────────────
analytics.libnovel.cc {
import security_headers
reverse_proxy umami:3000
}
# ── Dozzle: Docker log viewer ─────────────────────────────────────────────────
logs.libnovel.cc {
import security_headers
reverse_proxy dozzle:8080
}
# ── Uptime Kuma: uptime monitoring ────────────────────────────────────────────
uptime.libnovel.cc {
import security_headers
reverse_proxy uptime-kuma:3001
}
# ── Gotify: push notifications ────────────────────────────────────────────────
push.libnovel.cc {
import security_headers
reverse_proxy gotify:80
}
# feedback.libnovel.cc, errors.libnovel.cc, analytics.libnovel.cc,
# logs.libnovel.cc, uptime.libnovel.cc, push.libnovel.cc, grafana.libnovel.cc
# are now routed via Cloudflare Tunnel directly to the homelab (192.168.0.109).
# No Caddy rules needed here — Cloudflare handles TLS termination and routing.
# ── PocketBase: exposed for homelab runner task polling ───────────────────────
# Allows the homelab runner to claim tasks and write results via the PB API.
# Admin UI is also accessible here for convenience.
@@ -254,3 +225,36 @@ storage.libnovel.cc {
reverse_proxy minio:9000
}
# ── Meilisearch: exposed for homelab runner search indexing ──────────────────
# The homelab runner connects here as MEILI_URL to index books after scraping.
# Protected by MEILI_MASTER_KEY bearer token — Meilisearch enforces auth on
# every request; Caddy just terminates TLS.
search.libnovel.cc {
import security_headers
reverse_proxy meilisearch:7700
}
# ── Redis TCP proxy: exposes homelab Redis over TLS for Asynq ─────────────────
# The backend (prod) connects to rediss://redis.libnovel.cc:6380 to enqueue
# Asynq jobs. Caddy terminates TLS (Let's Encrypt cert for redis.libnovel.cc)
# and proxies the raw TCP stream to the homelab Redis via this reverse proxy.
#
# NOTE: Redis is NOT running on the prod server — it runs on the homelab
# (192.168.0.109:6379) and is exposed to the internet via this Caddy proxy.
# The homelab Redis is protected by REDIS_PASSWORD (requirepass).
#
# Caddy layer4 app handles this; requires the caddy-l4 module in the build.
{
layer4 {
redis.libnovel.cc:6380 {
route {
tls
proxy {
# Homelab Redis — replace with actual homelab IP or FQDN
upstream {$HOMELAB_REDIS_ADDR:192.168.0.109:6379}
}
}
}
}
}
}

View File

@@ -36,7 +36,12 @@ COPY --from=builder /out/backend /backend
ENTRYPOINT ["/backend"]
# ── runner service ───────────────────────────────────────────────────────────
FROM gcr.io/distroless/static:nonroot AS runner
# Uses Alpine (not distroless) so ffmpeg is available for WAV→MP3 transcoding
# when pocket-tts voices are used.
FROM alpine:3.21 AS runner
RUN apk add --no-cache ffmpeg ca-certificates && \
addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=builder /out/healthcheck /healthcheck
COPY --from=builder /out/runner /runner
USER appuser
ENTRYPOINT ["/runner"]

BIN
backend/backend Executable file

Binary file not shown.

View File

@@ -22,11 +22,16 @@ import (
"time"
"github.com/getsentry/sentry-go"
"github.com/hibiken/asynq"
"github.com/libnovel/backend/internal/asynqqueue"
"github.com/libnovel/backend/internal/backend"
"github.com/libnovel/backend/internal/config"
"github.com/libnovel/backend/internal/kokoro"
"github.com/libnovel/backend/internal/meili"
"github.com/libnovel/backend/internal/otelsetup"
"github.com/libnovel/backend/internal/pockettts"
"github.com/libnovel/backend/internal/storage"
"github.com/libnovel/backend/internal/taskqueue"
)
// version and commit are set at build time via -ldflags.
@@ -70,6 +75,19 @@ func run() error {
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
defer stop()
// ── OpenTelemetry tracing + logs ──────────────────────────────────────────
otelShutdown, otelLog, err := otelsetup.Init(ctx, version)
if err != nil {
return fmt.Errorf("init otel: %w", err)
}
if otelShutdown != nil {
defer otelShutdown()
// Replace the plain slog logger with the OTel-bridged one so all
// structured log lines are forwarded to Loki with trace IDs attached.
log = otelLog
log.Info("otel tracing + logs enabled", "endpoint", os.Getenv("OTEL_EXPORTER_OTLP_ENDPOINT"))
}
// ── Storage ──────────────────────────────────────────────────────────────
store, err := storage.NewStore(ctx, cfg, log)
if err != nil {
@@ -86,6 +104,15 @@ func run() error {
kokoroClient = &noopKokoro{}
}
// ── Pocket-TTS (voice list + sample generation; audio generation is the runner's job) ──
var pocketTTSClient pockettts.Client
if cfg.PocketTTS.URL != "" {
pocketTTSClient = pockettts.New(cfg.PocketTTS.URL)
log.Info("pocket-tts voices enabled", "url", cfg.PocketTTS.URL)
} else {
log.Info("POCKET_TTS_URL not set — pocket-tts voices unavailable in backend")
}
// ── Meilisearch (search reads only; indexing is the runner's job) ────────
var searchIndex meili.Client
if cfg.Meilisearch.URL != "" {
@@ -96,6 +123,24 @@ func run() error {
searchIndex = meili.NoopClient{}
}
// ── Task Producer ────────────────────────────────────────────────────────
// When REDIS_ADDR is set the backend dual-writes: PocketBase record (audit)
// + Asynq job (immediate delivery). Otherwise it writes to PocketBase only
// and the runner picks up on the next poll tick.
var producer taskqueue.Producer = store
if cfg.Redis.Addr != "" {
redisOpt, parseErr := parseRedisOpt(cfg.Redis)
if parseErr != nil {
return fmt.Errorf("parse REDIS_ADDR: %w", parseErr)
}
asynqProducer := asynqqueue.NewProducer(store, redisOpt)
defer asynqProducer.Close() //nolint:errcheck
producer = asynqProducer
log.Info("backend: asynq task dispatch enabled", "addr", cfg.Redis.Addr)
} else {
log.Info("backend: poll-mode task dispatch (REDIS_ADDR not set)")
}
// ── Backend server ───────────────────────────────────────────────────────
srv := backend.New(
backend.Config{
@@ -111,10 +156,11 @@ func run() error {
PresignStore: store,
ProgressStore: store,
CoverStore: store,
Producer: store,
Producer: producer,
TaskReader: store,
SearchIndex: searchIndex,
Kokoro: kokoroClient,
PocketTTS: pocketTTSClient,
Log: log,
},
)
@@ -151,3 +197,16 @@ func (n *noopKokoro) GenerateAudio(_ context.Context, _, _ string) ([]byte, erro
func (n *noopKokoro) ListVoices(_ context.Context) ([]string, error) {
return nil, nil
}
// parseRedisOpt converts a config.Redis into an asynq.RedisConnOpt.
// Handles full "redis://" / "rediss://" URLs and plain "host:port".
func parseRedisOpt(cfg config.Redis) (asynq.RedisConnOpt, error) {
addr := cfg.Addr
if len(addr) > 7 && (addr[:8] == "redis://" || (len(addr) > 8 && addr[:9] == "rediss://")) {
return asynq.ParseRedisURI(addr)
}
return asynq.RedisClientOpt{
Addr: addr,
Password: cfg.Password,
}, nil
}
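The two accepted address forms can be illustrated with a stdlib-only stand-in for the branch logic above (`classifyRedisAddr` is a hypothetical helper for illustration, not part of the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// classifyRedisAddr mirrors which branch parseRedisOpt takes: full
// "redis://" / "rediss://" URLs go through asynq.ParseRedisURI, while
// bare host:port addresses become an asynq.RedisClientOpt directly.
func classifyRedisAddr(addr string) string {
	if strings.HasPrefix(addr, "redis://") || strings.HasPrefix(addr, "rediss://") {
		return "uri"
	}
	return "plain"
}

func main() {
	fmt.Println(classifyRedisAddr("redis://:secret@redis:6379/0")) // uri
	fmt.Println(classifyRedisAddr("rediss://redis.example:6380"))  // uri
	fmt.Println(classifyRedisAddr("localhost:6379"))               // plain
}
```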


@@ -20,13 +20,17 @@ import (
"time"
"github.com/getsentry/sentry-go"
"github.com/libnovel/backend/internal/asynqqueue"
"github.com/libnovel/backend/internal/browser"
"github.com/libnovel/backend/internal/config"
"github.com/libnovel/backend/internal/kokoro"
"github.com/libnovel/backend/internal/meili"
"github.com/libnovel/backend/internal/novelfire"
"github.com/libnovel/backend/internal/otelsetup"
"github.com/libnovel/backend/internal/pockettts"
"github.com/libnovel/backend/internal/runner"
"github.com/libnovel/backend/internal/storage"
"github.com/libnovel/backend/internal/taskqueue"
)
// version and commit are set at build time via -ldflags.
@@ -70,6 +74,19 @@ func run() error {
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
defer stop()
// ── OpenTelemetry tracing + logs ─────────────────────────────────────────
otelShutdown, otelLog, err := otelsetup.Init(ctx, version)
if err != nil {
return fmt.Errorf("init otel: %w", err)
}
if otelShutdown != nil {
defer otelShutdown()
// Switch to the OTel-bridged logger so all structured log lines are
// forwarded to Loki with trace IDs attached.
log = otelLog
log.Info("otel tracing + logs enabled", "endpoint", os.Getenv("OTEL_EXPORTER_OTLP_ENDPOINT"))
}
// ── Storage ─────────────────────────────────────────────────────────────
store, err := storage.NewStore(ctx, cfg, log)
if err != nil {
@@ -98,10 +115,19 @@ func run() error {
kokoroClient = kokoro.New(cfg.Kokoro.URL)
log.Info("kokoro TTS enabled", "url", cfg.Kokoro.URL)
} else {
log.Warn("KOKORO_URL not set — audio tasks will fail")
log.Warn("KOKORO_URL not set — kokoro voice tasks will fail")
kokoroClient = &noopKokoro{}
}
// ── pocket-tts ──────────────────────────────────────────────────────────
var pocketTTSClient pockettts.Client
if cfg.PocketTTS.URL != "" {
pocketTTSClient = pockettts.New(cfg.PocketTTS.URL)
log.Info("pocket-tts enabled", "url", cfg.PocketTTS.URL)
} else {
log.Warn("POCKET_TTS_URL not set — pocket-tts voice tasks will fail")
}
// ── Meilisearch ─────────────────────────────────────────────────────────
var searchIndex meili.Client
if cfg.Meilisearch.URL != "" {
@@ -127,9 +153,23 @@ func run() error {
MetricsAddr: cfg.Runner.MetricsAddr,
CatalogueRefreshInterval: cfg.Runner.CatalogueRefreshInterval,
SkipInitialCatalogueRefresh: cfg.Runner.SkipInitialCatalogueRefresh,
RedisAddr: cfg.Redis.Addr,
RedisPassword: cfg.Redis.Password,
}
// In Asynq mode the Consumer is a thin wrapper: claim/heartbeat/reap are
// no-ops, but FinishAudioTask / FinishScrapeTask / FailTask write back to
// PocketBase as before.
var consumer taskqueue.Consumer = store
if cfg.Redis.Addr != "" {
log.Info("runner: asynq mode — using Redis for task dispatch", "addr", cfg.Redis.Addr)
consumer = asynqqueue.NewConsumer(store)
} else {
log.Info("runner: poll mode — using PocketBase for task dispatch")
}
deps := runner.Dependencies{
Consumer: store,
Consumer: consumer,
BookWriter: store,
BookReader: store,
AudioStore: store,
@@ -137,6 +177,7 @@ func run() error {
SearchIndex: searchIndex,
Novel: novel,
Kokoro: kokoroClient,
PocketTTS: pocketTTSClient,
Log: log,
}
r := runner.New(rCfg, deps)


@@ -9,29 +9,63 @@ require (
require (
github.com/andybalholm/brotli v1.1.1 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v5 v5.0.3 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/getsentry/sentry-go v0.43.0 // indirect
github.com/go-ini/ini v1.67.0 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/golang-jwt/jwt/v5 v5.3.1 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 // indirect
github.com/hibiken/asynq v0.26.0 // indirect
github.com/hibiken/asynq/x v0.0.0-20260203063626-d704b68a426d // indirect
github.com/klauspost/compress v1.18.2 // indirect
github.com/klauspost/cpuid/v2 v2.2.11 // indirect
github.com/klauspost/crc32 v1.3.0 // indirect
github.com/meilisearch/meilisearch-go v0.36.1 // indirect
github.com/minio/crc64nvme v1.1.1 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/philhofer/fwd v1.2.0 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_golang v1.23.2 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.66.1 // indirect
github.com/prometheus/procfs v0.16.1 // indirect
github.com/redis/go-redis/v9 v9.18.0 // indirect
github.com/robfig/cron/v3 v3.0.1 // indirect
github.com/rs/xid v1.6.0 // indirect
github.com/spf13/cast v1.10.0 // indirect
github.com/tinylib/msgp v1.6.1 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/contrib/bridges/otelslog v0.17.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.67.0 // indirect
go.opentelemetry.io/otel v1.42.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.18.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.42.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.42.0 // indirect
go.opentelemetry.io/otel/log v0.18.0 // indirect
go.opentelemetry.io/otel/metric v1.42.0 // indirect
go.opentelemetry.io/otel/sdk v1.42.0 // indirect
go.opentelemetry.io/otel/sdk/log v0.18.0 // indirect
go.opentelemetry.io/otel/trace v1.42.0 // indirect
go.opentelemetry.io/proto/otlp v1.9.0 // indirect
go.uber.org/atomic v1.11.0 // indirect
go.yaml.in/yaml/v2 v2.4.2 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/crypto v0.48.0 // indirect
golang.org/x/sys v0.41.0 // indirect
golang.org/x/text v0.34.0 // indirect
golang.org/x/time v0.14.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20260209200024-4cfbd4190f57 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20260209200024-4cfbd4190f57 // indirect
google.golang.org/grpc v1.79.2 // indirect
google.golang.org/protobuf v1.36.11 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)


@@ -1,5 +1,9 @@
github.com/andybalholm/brotli v1.1.1 h1:PR2pgnyFznKEugtsUo0xLdDop5SKXd5Qf5ysW+7XdTA=
github.com/andybalholm/brotli v1.1.1/go.mod h1:05ib4cKhjx3OQYUY22hTVd34Bc8upXjOLL2rKwwZBoA=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
@@ -8,14 +12,27 @@ github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/r
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/getsentry/sentry-go v0.43.0 h1:XbXLpFicpo8HmBDaInk7dum18G9KSLcjZiyUKS+hLW4=
github.com/getsentry/sentry-go v0.43.0/go.mod h1:XDotiNZbgf5U8bPDUAfvcFmOnMQQceESxyKaObSssW0=
github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A=
github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/golang-jwt/jwt/v5 v5.3.1 h1:kYf81DTWFe7t+1VvL7eS+jKFVWaUnK9cB1qbwn63YCY=
github.com/golang-jwt/jwt/v5 v5.3.1/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 h1:HWRh5R2+9EifMyIHV7ZV+MIZqgz+PMpZ14Jynv3O2Zs=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0/go.mod h1:JfhWUomR1baixubs02l85lZYYOm7LV6om4ceouMv45c=
github.com/hibiken/asynq v0.26.0 h1:1Zxr92MlDnb1Zt/QR5g2vSCqUS03i95lUfqx5X7/wrw=
github.com/hibiken/asynq v0.26.0/go.mod h1:Qk4e57bTnWDoyJ67VkchuV6VzSM9IQW2nPvAGuDyw58=
github.com/hibiken/asynq/x v0.0.0-20260203063626-d704b68a426d h1:Ld5m8EIK5QVOq/owOexKIbETij3skACg4eU1pArHsrw=
github.com/hibiken/asynq/x v0.0.0-20260203063626-d704b68a426d/go.mod h1:hhpStehaxSGg3ib9wJXzw5AXY1YS6lQ9BNavAgPbIhE=
github.com/klauspost/compress v1.18.2 h1:iiPHWW0YrcFgpBYhsA6D1+fqHssJscY/Tm/y2Uqnapk=
github.com/klauspost/compress v1.18.2/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
@@ -31,22 +48,72 @@ github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
github.com/minio/minio-go/v7 v7.0.98 h1:MeAVKjLVz+XJ28zFcuYyImNSAh8Mq725uNW4beRisi0=
github.com/minio/minio-go/v7 v7.0.98/go.mod h1:cY0Y+W7yozf0mdIclrttzo1Iiu7mEf9y7nk2uXqMOvM=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/philhofer/fwd v1.2.0 h1:e6DnBTl7vGY+Gz322/ASL4Gyp1FspeMvx1RNDoToZuM=
github.com/philhofer/fwd v1.2.0/go.mod h1:RqIHx9QI14HlwKwm98g9Re5prTQ6LdeRQn+gXJFxsJM=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.55.0 h1:KEi6DK7lXW/m7Ig5i47x0vRzuBsHuvJdi5ee6Y3G1dc=
github.com/prometheus/common v0.55.0/go.mod h1:2SECS4xJG1kd8XF9IcM1gMX6510RAEL65zxzNImwdc8=
github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs=
github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/prometheus/procfs v0.16.1 h1:hZ15bTNuirocR6u0JZ6BAHHmwS1p8B4P6MRqxtzMyRg=
github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is=
github.com/redis/go-redis/v9 v9.18.0 h1:pMkxYPkEbMPwRdenAzUNyFNrDgHx9U+DrBabWNfSRQs=
github.com/redis/go-redis/v9 v9.18.0/go.mod h1:k3ufPphLU5YXwNTUcCRXGxUoF1fqxnhFQmscfkCoDA0=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rs/xid v1.6.0 h1:fV591PaemRlL6JfRxGDEPl69wICngIQ3shQtzfy2gxU=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY=
github.com/spf13/cast v1.10.0/go.mod h1:jNfB8QC9IA6ZuY2ZjDp0KtFO2LZZlg4S/7bzP6qqeHo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/tinylib/msgp v1.6.1 h1:ESRv8eL3u+DNHUoSAAQRE50Hm162zqAnBoGv9PzScPY=
github.com/tinylib/msgp v1.6.1/go.mod h1:RSp0LW9oSxFut3KzESt5Voq4GVWyS+PSulT77roAqEA=
github.com/xyproto/randomstring v1.0.5/go.mod h1:rgmS5DeNXLivK7YprL0pY+lTuhNQW3iGxZ18UQApw/E=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/bridges/otelslog v0.17.0 h1:NFIS6x7wyObQ7cR84x7bt1sr8nYBx89s3x3GwRjw40k=
go.opentelemetry.io/contrib/bridges/otelslog v0.17.0/go.mod h1:39SaByOyDMRMe872AE7uelMuQZidIw7LLFAnQi0FWTE=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.67.0 h1:OyrsyzuttWTSur2qN/Lm0m2a8yqyIjUVBZcxFPuXq2o=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.67.0/go.mod h1:C2NGBr+kAB4bk3xtMXfZ94gqFDtg/GkI7e9zqGh5Beg=
go.opentelemetry.io/otel v1.42.0 h1:lSQGzTgVR3+sgJDAU/7/ZMjN9Z+vUip7leaqBKy4sho=
go.opentelemetry.io/otel v1.42.0/go.mod h1:lJNsdRMxCUIWuMlVJWzecSMuNjE7dOYyWlqOXWkdqCc=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.18.0 h1:icqq3Z34UrEFk2u+HMhTtRsvo7Ues+eiJVjaJt62njs=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.18.0/go.mod h1:W2m8P+d5Wn5kipj4/xmbt9uMqezEKfBjzVJadfABSBE=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.42.0 h1:THuZiwpQZuHPul65w4WcwEnkX2QIuMT+UFoOrygtoJw=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.42.0/go.mod h1:J2pvYM5NGHofZ2/Ru6zw/TNWnEQp5crgyDeSrYpXkAw=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.42.0 h1:uLXP+3mghfMf7XmV4PkGfFhFKuNWoCvvx5wP/wOXo0o=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.42.0/go.mod h1:v0Tj04armyT59mnURNUJf7RCKcKzq+lgJs6QSjHjaTc=
go.opentelemetry.io/otel/log v0.18.0 h1:XgeQIIBjZZrliksMEbcwMZefoOSMI1hdjiLEiiB0bAg=
go.opentelemetry.io/otel/log v0.18.0/go.mod h1:KEV1kad0NofR3ycsiDH4Yjcoj0+8206I6Ox2QYFSNgI=
go.opentelemetry.io/otel/metric v1.42.0 h1:2jXG+3oZLNXEPfNmnpxKDeZsFI5o4J+nz6xUlaFdF/4=
go.opentelemetry.io/otel/metric v1.42.0/go.mod h1:RlUN/7vTU7Ao/diDkEpQpnz3/92J9ko05BIwxYa2SSI=
go.opentelemetry.io/otel/sdk v1.42.0 h1:LyC8+jqk6UJwdrI/8VydAq/hvkFKNHZVIWuslJXYsDo=
go.opentelemetry.io/otel/sdk v1.42.0/go.mod h1:rGHCAxd9DAph0joO4W6OPwxjNTYWghRWmkHuGbayMts=
go.opentelemetry.io/otel/sdk/log v0.18.0 h1:n8OyZr7t7otkeTnPTbDNom6rW16TBYGtvyy2Gk6buQw=
go.opentelemetry.io/otel/sdk/log v0.18.0/go.mod h1:C0+wxkTwKpOCZLrlJ3pewPiiQwpzycPI/u6W0Z9fuYk=
go.opentelemetry.io/otel/trace v1.42.0 h1:OUCgIPt+mzOnaUTpOQcBiM/PLQ/Op7oq6g4LenLmOYY=
go.opentelemetry.io/otel/trace v1.42.0/go.mod h1:f3K9S+IFqnumBkKhRJMeaZeNk9epyhnCmQh/EysQCdc=
go.opentelemetry.io/proto/otlp v1.9.0 h1:l706jCMITVouPOqEnii2fIAuO3IVGBRPV5ICjceRb/A=
go.opentelemetry.io/proto/otlp v1.9.0/go.mod h1:xE+Cx5E/eEHw+ISFkwPLwCZefwVjY+pqKg1qcK03+/4=
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI=
go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.48.0 h1:/VRzVqiRSggnhY7gNRxPauEQ5Drw9haKdM0jqfcCFts=
@@ -57,6 +124,16 @@ golang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=
golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/text v0.34.0 h1:oL/Qq0Kdaqxa1KbNeMKwQq0reLCCaFtqu2eNuSeNHbk=
golang.org/x/text v0.34.0/go.mod h1:homfLqTYRFyVYemLBFl5GgL/DWEiH5wcsQ5gSh1yziA=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
google.golang.org/genproto/googleapis/api v0.0.0-20260209200024-4cfbd4190f57 h1:JLQynH/LBHfCTSbDWl+py8C+Rg/k1OVH3xfcaiANuF0=
google.golang.org/genproto/googleapis/api v0.0.0-20260209200024-4cfbd4190f57/go.mod h1:kSJwQxqmFXeo79zOmbrALdflXQeAYcUbgS7PbpMknCY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260209200024-4cfbd4190f57 h1:mWPCjDEyshlQYzBpMNHaEof6UX1PmHcaUODUywQ0uac=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260209200024-4cfbd4190f57/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ=
google.golang.org/grpc v1.79.2 h1:fRMD94s2tITpyJGtBBn7MkMseNpOZU8ZxgC3MMBaXRU=
google.golang.org/grpc v1.79.2/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=


@@ -0,0 +1,56 @@
package asynqqueue
import (
"context"
"time"
"github.com/libnovel/backend/internal/domain"
"github.com/libnovel/backend/internal/taskqueue"
)
// Consumer wraps the PocketBase-backed Consumer for result write-back only.
//
// When using Asynq, the runner no longer polls for work — Asynq delivers
// tasks via the ServeMux handlers. The only Consumer operations the handlers
// need are:
// - FinishAudioTask / FinishScrapeTask — write result back to PocketBase
// - FailTask — mark PocketBase record as failed
//
// ClaimNextAudioTask, ClaimNextScrapeTask, HeartbeatTask, and ReapStaleTasks
// are all no-ops here because Asynq owns those responsibilities.
type Consumer struct {
pb taskqueue.Consumer // underlying PocketBase consumer (for write-back)
}
// NewConsumer wraps an existing PocketBase Consumer.
func NewConsumer(pb taskqueue.Consumer) *Consumer {
return &Consumer{pb: pb}
}
// ── Write-back (delegated to PocketBase) ──────────────────────────────────────
func (c *Consumer) FinishScrapeTask(ctx context.Context, id string, result domain.ScrapeResult) error {
return c.pb.FinishScrapeTask(ctx, id, result)
}
func (c *Consumer) FinishAudioTask(ctx context.Context, id string, result domain.AudioResult) error {
return c.pb.FinishAudioTask(ctx, id, result)
}
func (c *Consumer) FailTask(ctx context.Context, id, errMsg string) error {
return c.pb.FailTask(ctx, id, errMsg)
}
// ── No-ops (Asynq owns claiming / heartbeating / reaping) ───────────────────
func (c *Consumer) ClaimNextScrapeTask(_ context.Context, _ string) (domain.ScrapeTask, bool, error) {
return domain.ScrapeTask{}, false, nil
}
func (c *Consumer) ClaimNextAudioTask(_ context.Context, _ string) (domain.AudioTask, bool, error) {
return domain.AudioTask{}, false, nil
}
func (c *Consumer) HeartbeatTask(_ context.Context, _ string) error { return nil }
func (c *Consumer) ReapStaleTasks(_ context.Context, _ time.Duration) (int, error) { return 0, nil }


@@ -0,0 +1,90 @@
package asynqqueue
import (
"context"
"encoding/json"
"fmt"
"github.com/hibiken/asynq"
"github.com/libnovel/backend/internal/taskqueue"
)
// Producer dual-writes every task: first to PocketBase (via pb, for audit /
// UI status), then to Redis via Asynq so the runner picks it up immediately.
type Producer struct {
pb taskqueue.Producer // underlying PocketBase producer
client *asynq.Client
}
// NewProducer wraps an existing PocketBase Producer with Asynq dispatch.
func NewProducer(pb taskqueue.Producer, redisOpt asynq.RedisConnOpt) *Producer {
return &Producer{
pb: pb,
client: asynq.NewClient(redisOpt),
}
}
// Close shuts down the underlying Asynq client connection.
func (p *Producer) Close() error {
return p.client.Close()
}
// CreateScrapeTask creates a PocketBase record then enqueues an Asynq job.
func (p *Producer) CreateScrapeTask(ctx context.Context, kind, targetURL string, fromChapter, toChapter int) (string, error) {
id, err := p.pb.CreateScrapeTask(ctx, kind, targetURL, fromChapter, toChapter)
if err != nil {
return "", err
}
payload := ScrapePayload{
PBTaskID: id,
Kind: kind,
TargetURL: targetURL,
FromChapter: fromChapter,
ToChapter: toChapter,
}
taskType := TypeScrapeBook
if kind == "catalogue" {
taskType = TypeScrapeCatalogue
}
if err := p.enqueue(ctx, taskType, payload); err != nil {
// Non-fatal: PB record exists; runner will pick it up on next poll.
return id, fmt.Errorf("asynq enqueue scrape (task still in PB): %w", err)
}
return id, nil
}
// CreateAudioTask creates a PocketBase record then enqueues an Asynq job.
func (p *Producer) CreateAudioTask(ctx context.Context, slug string, chapter int, voice string) (string, error) {
id, err := p.pb.CreateAudioTask(ctx, slug, chapter, voice)
if err != nil {
return "", err
}
payload := AudioPayload{
PBTaskID: id,
Slug: slug,
Chapter: chapter,
Voice: voice,
}
if err := p.enqueue(ctx, TypeAudioGenerate, payload); err != nil {
return id, fmt.Errorf("asynq enqueue audio (task still in PB): %w", err)
}
return id, nil
}
// CancelTask delegates to PocketBase; Asynq jobs may already be running and
// cannot be reliably cancelled, so we only update the audit record.
func (p *Producer) CancelTask(ctx context.Context, id string) error {
return p.pb.CancelTask(ctx, id)
}
// enqueue serialises payload and dispatches it to Asynq, propagating the
// caller's context so enqueueing is cancelled along with the request.
func (p *Producer) enqueue(ctx context.Context, taskType string, payload any) error {
b, err := json.Marshal(payload)
if err != nil {
return fmt.Errorf("marshal payload: %w", err)
}
_, err = p.client.EnqueueContext(ctx, asynq.NewTask(taskType, b))
return err
}
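Note the unusual error contract on the Create* methods above: when the PocketBase write succeeds but the Asynq enqueue fails, they return both a non-empty id and an error. A minimal sketch of how a caller should handle that (with `createAudio` as a hypothetical stand-in for `Producer.CreateAudioTask`):

```go
package main

import (
	"errors"
	"fmt"
)

// createAudio stands in for Producer.CreateAudioTask: it returns BOTH a
// non-empty id and an error when the PocketBase record was created but the
// Redis enqueue failed.
func createAudio() (string, error) {
	return "rec456", errors.New("asynq enqueue audio (task still in PB): dial tcp: connection refused")
}

func main() {
	id, err := createAudio()
	// Treat enqueue failure with a non-empty id as degraded, not fatal:
	// the PB record exists and the runner's poll loop will still pick it up.
	if err != nil && id != "" {
		fmt.Println("degraded:", id)
	}
}
```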


@@ -0,0 +1,46 @@
// Package asynqqueue provides Asynq-backed implementations of the
// taskqueue.Producer and taskqueue.Consumer interfaces.
//
// Architecture:
// - Producer: dual-writes — creates a PocketBase record for audit/UI, then
// enqueues an Asynq job so the runner picks it up immediately (sub-ms).
// - Consumer: thin wrapper used only for result write-back (FinishAudioTask,
// FinishScrapeTask, FailTask). ClaimNext*/Heartbeat/Reap are no-ops because
// Asynq owns those responsibilities.
// - Handlers: asynq.HandlerFunc wrappers that decode job payloads and invoke
// the existing runner logic (runScrapeTask / runAudioTask).
//
// Fallback: when REDIS_ADDR is empty the caller should use the plain
// storage.Store (PocketBase-polling) implementation unchanged.
package asynqqueue
// Queue names — keep all jobs on the default queue for now.
// Add separate queues (e.g. "audio", "scrape") later if you need priority.
const QueueDefault = "default"
// Task type constants used for Asynq routing.
const (
TypeAudioGenerate = "audio:generate"
TypeScrapeBook = "scrape:book"
TypeScrapeCatalogue = "scrape:catalogue"
)
// AudioPayload is the Asynq job payload for audio generation tasks.
type AudioPayload struct {
// PBTaskID is the PocketBase record ID created before enqueueing.
// The handler uses it to write results back via Consumer.FinishAudioTask.
PBTaskID string `json:"pb_task_id"`
Slug string `json:"slug"`
Chapter int `json:"chapter"`
Voice string `json:"voice"`
}
// ScrapePayload is the Asynq job payload for scrape tasks.
type ScrapePayload struct {
// PBTaskID is the PocketBase record ID created before enqueueing.
PBTaskID string `json:"pb_task_id"`
Kind string `json:"kind"` // "catalogue", "book", or "book_range"
TargetURL string `json:"target_url"` // empty for catalogue tasks
FromChapter int `json:"from_chapter"` // 0 unless Kind=="book_range"
ToChapter int `json:"to_chapter"` // 0 unless Kind=="book_range"
}
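The handler side mentioned in the package doc is not shown in this diff. A stdlib-only sketch of the decode step such a handler performs, assuming the JSON field names above (`handleAudio` is a hypothetical name; in the real wiring it would sit inside an `asynq.HandlerFunc` receiving `task.Payload()`):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// AudioPayload mirrors the struct above; the tags must match exactly,
// since Producer.enqueue marshals with these field names.
type AudioPayload struct {
	PBTaskID string `json:"pb_task_id"`
	Slug     string `json:"slug"`
	Chapter  int    `json:"chapter"`
	Voice    string `json:"voice"`
}

// handleAudio decodes a raw Asynq payload and validates the PocketBase
// record ID, which the handler needs for result write-back.
func handleAudio(raw []byte) (AudioPayload, error) {
	var p AudioPayload
	if err := json.Unmarshal(raw, &p); err != nil {
		return p, fmt.Errorf("decode audio payload: %w", err)
	}
	if p.PBTaskID == "" {
		return p, fmt.Errorf("audio payload missing pb_task_id")
	}
	return p, nil
}

func main() {
	raw := []byte(`{"pb_task_id":"rec123","slug":"my-book","chapter":7,"voice":"af_bella"}`)
	p, err := handleAudio(raw)
	fmt.Println(p.Slug, p.Chapter, err == nil)
}
```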


@@ -47,6 +47,7 @@ import (
"github.com/libnovel/backend/internal/kokoro"
"github.com/libnovel/backend/internal/meili"
"github.com/libnovel/backend/internal/novelfire/htmlutil"
"github.com/libnovel/backend/internal/pockettts"
"github.com/libnovel/backend/internal/scraper"
)
@@ -703,7 +704,7 @@ func (s *Server) handleAudioProxy(w http.ResponseWriter, r *http.Request) {
// ── Voices ─────────────────────────────────────────────────────────────────────
// handleVoices handles GET /api/voices.
// Returns {"voices": [...]} — fetched from Kokoro with built-in fallback.
// Returns {"voices": [...]} — merged list from Kokoro and pocket-tts.
func (s *Server) handleVoices(w http.ResponseWriter, r *http.Request) {
writeJSON(w, 0, map[string]any{"voices": s.voices(r.Context())})
}
@@ -763,8 +764,8 @@ const voiceSampleText = "Hello! This is a preview of what I sound like. I hope y
// handlePresignVoiceSample handles GET /api/presign/voice-sample/{voice}.
// If the sample has not been generated yet it synthesises it on the fly via
// Kokoro, stores the result in MinIO, and returns the presigned URL — so the
// caller always gets a playable URL in a single request.
// the appropriate TTS engine (Kokoro for kokoro voices, pocket-tts for
// pocket-tts voices), stores the result in MinIO, and returns the presigned URL.
func (s *Server) handlePresignVoiceSample(w http.ResponseWriter, r *http.Request) {
voice := r.PathValue("voice")
if voice == "" {
@@ -777,7 +778,20 @@ func (s *Server) handlePresignVoiceSample(w http.ResponseWriter, r *http.Request
// Generate sample on demand when it is not in MinIO yet.
if !s.deps.AudioStore.AudioExists(r.Context(), key) {
s.deps.Log.Info("generating voice sample on demand", "voice", voice)
mp3, err := s.deps.Kokoro.GenerateAudio(r.Context(), voiceSampleText, voice)
var (
mp3 []byte
err error
)
if pockettts.IsPocketTTSVoice(voice) {
if s.deps.PocketTTS == nil {
jsonError(w, http.StatusServiceUnavailable, "pocket-tts not configured")
return
}
mp3, err = s.deps.PocketTTS.GenerateAudio(r.Context(), voiceSampleText, voice)
} else {
mp3, err = s.deps.Kokoro.GenerateAudio(r.Context(), voiceSampleText, voice)
}
if err != nil {
s.deps.Log.Error("voice sample generation failed", "voice", voice, "err", err)
jsonError(w, http.StatusInternalServerError, "voice sample generation failed")
@@ -1148,9 +1162,9 @@ func stripMarkdown(src string) string {
// ── Hardcoded Kokoro voice fallback ───────────────────────────────────────────
// kokoroVoices is the built-in fallback list used when the Kokoro service is
// unavailable. Matches the list in the old scraper helpers.go.
var kokoroVoices = []string{
// kokoroVoiceIDs is the built-in fallback list of Kokoro voice IDs used when
// the Kokoro service is unavailable.
var kokoroVoiceIDs = []string{
// American English
"af_alloy", "af_aoede", "af_bella", "af_heart", "af_jadzia",
"af_jessica", "af_kore", "af_nicole", "af_nova", "af_river",


@@ -30,9 +30,12 @@ import (
sentryhttp "github.com/getsentry/sentry-go/http"
"github.com/libnovel/backend/internal/bookstore"
"github.com/libnovel/backend/internal/domain"
"github.com/libnovel/backend/internal/kokoro"
"github.com/libnovel/backend/internal/meili"
"github.com/libnovel/backend/internal/pockettts"
"github.com/libnovel/backend/internal/taskqueue"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)
// Dependencies holds all external services the backend server depends on.
@@ -58,9 +61,12 @@ type Dependencies struct {
// SearchIndex provides full-text book search via Meilisearch.
// If nil, the local-only fallback search is used.
SearchIndex meili.Client
// Kokoro is the TTS client (used for voice list only in the backend;
// Kokoro is the Kokoro TTS client (used for voice list only in the backend;
// audio generation is done by the runner).
Kokoro kokoro.Client
// PocketTTS is the pocket-tts client (used for voice list only in the backend;
// audio generation is done by the runner).
PocketTTS pockettts.Client
// Log is the structured logger.
Log *slog.Logger
}
@@ -83,7 +89,7 @@ type Server struct {
// voiceMu guards cachedVoices. Populated lazily on first GET /api/voices.
voiceMu sync.RWMutex
cachedVoices []string
cachedVoices []domain.Voice
}
// New creates a Server from cfg and deps.
@@ -170,9 +176,17 @@ func (s *Server) ListenAndServe(ctx context.Context) error {
mux.HandleFunc("POST /api/progress/{slug}", s.handleSetProgress)
mux.HandleFunc("DELETE /api/progress/{slug}", s.handleDeleteProgress)
// Wrap mux with OTel tracing (no-op when no TracerProvider is set),
// then with Sentry for panic recovery and error reporting.
var handler http.Handler = mux
handler = otelhttp.NewHandler(handler, "libnovel.backend",
otelhttp.WithMessageEvents(otelhttp.ReadEvents, otelhttp.WriteEvents),
)
handler = sentryhttp.New(sentryhttp.Options{Repanic: true}).Handle(handler)
srv := &http.Server{
Addr: s.cfg.Addr,
Handler: sentryhttp.New(sentryhttp.Options{Repanic: true}).Handle(mux),
Handler: handler,
ReadTimeout: 15 * time.Second,
WriteTimeout: 60 * time.Second,
IdleTimeout: 60 * time.Second,
@@ -255,10 +269,10 @@ func jsonError(w http.ResponseWriter, status int, msg string) {
_ = json.NewEncoder(w).Encode(map[string]string{"error": msg})
}
// voices returns the list of available Kokoro voices. On the first call it
// fetches from the Kokoro service and caches the result. Falls back to the
// hardcoded list on error.
func (s *Server) voices(ctx context.Context) []string {
// voices returns the merged list of available voices from Kokoro and pocket-tts.
// On the first call it fetches from both services and caches the result.
// Falls back to the hardcoded Kokoro list on error.
func (s *Server) voices(ctx context.Context) []domain.Voice {
s.voiceMu.RLock()
cached := s.cachedVoices
s.voiceMu.RUnlock()
@@ -266,23 +280,89 @@ func (s *Server) voices(ctx context.Context) []string {
return cached
}
fetchCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
var result []domain.Voice
// ── Kokoro voices ─────────────────────────────────────────────────────────
var kokoroIDs []string
if s.deps.Kokoro != nil {
ids, err := s.deps.Kokoro.ListVoices(fetchCtx)
if err != nil || len(ids) == 0 {
s.deps.Log.Warn("backend: could not fetch kokoro voices, using built-in list", "err", err)
ids = kokoroVoiceIDs
} else {
s.deps.Log.Info("backend: fetched kokoro voices", "count", len(ids))
}
kokoroIDs = ids
} else {
kokoroIDs = kokoroVoiceIDs
}
for _, id := range kokoroIDs {
result = append(result, kokoroVoice(id))
}
// ── Pocket-TTS voices ─────────────────────────────────────────────────────
if s.deps.PocketTTS != nil {
ids, err := s.deps.PocketTTS.ListVoices(fetchCtx)
if err != nil {
s.deps.Log.Warn("backend: could not fetch pocket-tts voices", "err", err)
} else {
for _, id := range ids {
result = append(result, pocketTTSVoice(id))
}
s.deps.Log.Info("backend: fetched pocket-tts voices", "count", len(ids))
}
}
s.voiceMu.Lock()
s.cachedVoices = result
s.voiceMu.Unlock()
return result
}
// kokoroVoice builds a domain.Voice for a Kokoro voice ID.
// The two-character prefix encodes language and gender:
//
// af/am → en-us f/m | bf/bm → en-gb f/m
// ef/em → es f/m | ff → fr f
// hf/hm → hi f/m | if/im → it f/m
// jf/jm → ja f/m | pf/pm → pt f/m
// zf/zm → zh f/m
func kokoroVoice(id string) domain.Voice {
type meta struct{ lang, gender string }
prefixMap := map[string]meta{
"af": {"en-us", "f"}, "am": {"en-us", "m"},
"bf": {"en-gb", "f"}, "bm": {"en-gb", "m"},
"ef": {"es", "f"}, "em": {"es", "m"},
"ff": {"fr", "f"},
"hf": {"hi", "f"}, "hm": {"hi", "m"},
"if": {"it", "f"}, "im": {"it", "m"},
"jf": {"ja", "f"}, "jm": {"ja", "m"},
"pf": {"pt", "f"}, "pm": {"pt", "m"},
"zf": {"zh", "f"}, "zm": {"zh", "m"},
}
if len(id) >= 2 {
if m, ok := prefixMap[id[:2]]; ok {
return domain.Voice{ID: id, Engine: "kokoro", Lang: m.lang, Gender: m.gender}
}
}
return domain.Voice{ID: id, Engine: "kokoro", Lang: "en", Gender: ""}
}
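The prefix decoding above can be exercised standalone. This sketch copies a subset of the mapping and uses a local Voice struct as a stand-in for domain.Voice (illustration only, not the real type):

```go
package main

import "fmt"

// Voice is a local stand-in for domain.Voice, for illustration only.
type Voice struct {
	ID, Engine, Lang, Gender string
}

// decodeKokoro mirrors kokoroVoice for a subset of prefixes: the first two
// characters of a Kokoro voice ID encode language and gender.
func decodeKokoro(id string) Voice {
	type meta struct{ lang, gender string }
	prefixes := map[string]meta{
		"af": {"en-us", "f"}, "am": {"en-us", "m"},
		"bf": {"en-gb", "f"}, "bm": {"en-gb", "m"},
		"jf": {"ja", "f"}, "jm": {"ja", "m"},
	}
	if len(id) >= 2 {
		if m, ok := prefixes[id[:2]]; ok {
			return Voice{ID: id, Engine: "kokoro", Lang: m.lang, Gender: m.gender}
		}
	}
	// Unknown prefix: generic English, gender unknown.
	return Voice{ID: id, Engine: "kokoro", Lang: "en"}
}

func main() {
	fmt.Println(decodeKokoro("af_bella"))  // {af_bella kokoro en-us f}
	fmt.Println(decodeKokoro("xx_custom")) // unknown prefix falls back to "en"
}
```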
// pocketTTSVoice builds a domain.Voice for a pocket-tts voice ID.
// All pocket-tts voices are English audiobook narrators.
func pocketTTSVoice(id string) domain.Voice {
femaleVoices := map[string]struct{}{
"alba": {}, "fantine": {}, "cosette": {}, "eponine": {},
"azelma": {}, "anna": {}, "vera": {}, "mary": {}, "jane": {}, "eve": {},
}
gender := "m"
if _, ok := femaleVoices[id]; ok {
gender = "f"
}
return domain.Voice{ID: id, Engine: "pocket-tts", Lang: "en", Gender: gender}
}
// handleHealth handles GET /health.

View File

@@ -50,13 +50,20 @@ type MinIO struct {
// Kokoro holds connection settings for the Kokoro-FastAPI TTS service.
type Kokoro struct {
// URL is the base URL of the Kokoro service, e.g. https://tts.libnovel.cc
// An empty string disables Kokoro TTS generation.
URL string
// DefaultVoice is the voice used when none is specified.
DefaultVoice string
}
// PocketTTS holds connection settings for the kyutai-labs/pocket-tts service.
type PocketTTS struct {
// URL is the base URL of the pocket-tts service, e.g. https://pocket-tts.libnovel.cc
// An empty string disables pocket-tts generation.
URL string
}
// HTTP holds settings for the HTTP server (backend only).
type HTTP struct {
// Addr is the listen address, e.g. ":8080"
@@ -79,6 +86,19 @@ type Valkey struct {
Addr string
}
// Redis holds connection settings for the Asynq task queue Redis instance.
// This is separate from Valkey (presign cache) — it may point to the same
// Redis or a dedicated one. An empty Addr falls back to PocketBase polling.
type Redis struct {
// Addr is the host:port (or rediss://... URL) of the Redis instance.
// Use rediss:// scheme for TLS (e.g. rediss://:password@redis.libnovel.cc:6380).
// An empty string disables Asynq and falls back to PocketBase polling.
Addr string
// Password is the Redis AUTH password.
// Not needed when Addr is a full rediss:// URL that includes the password.
Password string
}
// Runner holds settings specific to the runner/worker binary.
type Runner struct {
// PollInterval is how often the runner checks PocketBase for pending tasks.
@@ -113,10 +133,12 @@ type Config struct {
PocketBase PocketBase
MinIO MinIO
Kokoro Kokoro
PocketTTS PocketTTS
HTTP HTTP
Runner Runner
Meilisearch Meilisearch
Valkey Valkey
Redis Redis
// LogLevel is one of "debug", "info", "warn", "error".
LogLevel string
}
@@ -156,6 +178,10 @@ func Load() Config {
DefaultVoice: envOr("KOKORO_VOICE", "af_bella"),
},
PocketTTS: PocketTTS{
URL: envOr("POCKET_TTS_URL", ""),
},
HTTP: HTTP{
Addr: envOr("BACKEND_HTTP_ADDR", ":8080"),
},
@@ -180,6 +206,11 @@ func Load() Config {
Valkey: Valkey{
Addr: envOr("VALKEY_ADDR", ""),
},
Redis: Redis{
Addr: envOr("REDIS_ADDR", ""),
Password: envOr("REDIS_PASSWORD", ""),
},
}
}

View File

@@ -60,6 +60,20 @@ type RankingItem struct {
Updated time.Time `json:"updated,omitempty"`
}
// ── Voice types ───────────────────────────────────────────────────────────────
// Voice describes a single text-to-speech voice available in the system.
type Voice struct {
// ID is the voice identifier passed to TTS clients (e.g. "af_bella", "alba").
ID string `json:"id"`
// Engine is "kokoro" or "pocket-tts".
Engine string `json:"engine"`
// Lang is the primary language tag (e.g. "en-us", "en-gb", "en", "es", "fr").
Lang string `json:"lang"`
// Gender is "f" or "m".
Gender string `json:"gender"`
}
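Given the json tags above, a Voice serialises as a flat four-field object. A minimal sketch with a local copy of the struct:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Voice is a local copy of the struct above, for illustration.
type Voice struct {
	ID     string `json:"id"`
	Engine string `json:"engine"`
	Lang   string `json:"lang"`
	Gender string `json:"gender"`
}

// voiceJSON renders a Voice as it would appear in an API response.
func voiceJSON(v Voice) string {
	b, _ := json.Marshal(v)
	return string(b)
}

func main() {
	fmt.Println(voiceJSON(Voice{ID: "af_bella", Engine: "kokoro", Lang: "en-us", Gender: "f"}))
	// {"id":"af_bella","engine":"kokoro","lang":"en-us","gender":"f"}
}
```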
// ── Storage record types ──────────────────────────────────────────────────────
// ChapterInfo is a lightweight chapter descriptor stored in the index.

View File

@@ -0,0 +1,120 @@
// Package otelsetup initialises the OpenTelemetry SDK for the LibNovel backend.
//
// It reads two environment variables:
//
// OTEL_EXPORTER_OTLP_ENDPOINT — OTLP/HTTP endpoint; accepts either a full
// URL ("https://otel.example.com") or a bare
// host[:port] ("otel-collector:4318").
// TLS is used when the value starts with "https://".
// OTEL_SERVICE_NAME — service name reported in traces (default: "backend")
//
// When OTEL_EXPORTER_OTLP_ENDPOINT is empty the function is a no-op: it
// returns a nil shutdown func and the default slog.Logger, so callers never
// need to branch on it.
//
// Usage in main.go:
//
// shutdown, log, err := otelsetup.Init(ctx, version)
// if err != nil { return err }
// if shutdown != nil { defer shutdown() }
package otelsetup
import (
"context"
"fmt"
"log/slog"
"os"
"strings"
"time"
"go.opentelemetry.io/contrib/bridges/otelslog"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
otellog "go.opentelemetry.io/otel/log/global"
"go.opentelemetry.io/otel/sdk/log"
"go.opentelemetry.io/otel/sdk/resource"
sdktrace "go.opentelemetry.io/otel/sdk/trace"
semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
)
// Init sets up TracerProvider and LoggerProvider that export via OTLP/HTTP.
//
// Returns:
// - shutdown: flushes and stops both providers (nil when OTel is disabled).
// - logger: an slog.Logger bridged to OTel logs (falls back to default when disabled).
// - err: non-nil only on SDK initialisation failure.
func Init(ctx context.Context, version string) (shutdown func(), logger *slog.Logger, err error) {
rawEndpoint := os.Getenv("OTEL_EXPORTER_OTLP_ENDPOINT")
if rawEndpoint == "" {
return nil, slog.Default(), nil // OTel disabled — not an error
}
// WithEndpoint expects a host[:port] value — no scheme.
// Support both "https://otel.example.com" and "otel-collector:4318".
useTLS := strings.HasPrefix(rawEndpoint, "https://")
endpoint := strings.TrimPrefix(rawEndpoint, "https://")
endpoint = strings.TrimPrefix(endpoint, "http://")
serviceName := os.Getenv("OTEL_SERVICE_NAME")
if serviceName == "" {
serviceName = "backend"
}
// ── Shared resource ───────────────────────────────────────────────────────
res, err := resource.New(ctx,
resource.WithAttributes(
semconv.ServiceName(serviceName),
semconv.ServiceVersion(version),
),
)
if err != nil {
return nil, slog.Default(), fmt.Errorf("otelsetup: create resource: %w", err)
}
// ── Trace provider ────────────────────────────────────────────────────────
traceOpts := []otlptracehttp.Option{otlptracehttp.WithEndpoint(endpoint)}
if !useTLS {
traceOpts = append(traceOpts, otlptracehttp.WithInsecure())
}
traceExp, err := otlptracehttp.New(ctx, traceOpts...)
if err != nil {
return nil, slog.Default(), fmt.Errorf("otelsetup: create OTLP trace exporter: %w", err)
}
tp := sdktrace.NewTracerProvider(
sdktrace.WithBatcher(traceExp),
sdktrace.WithResource(res),
sdktrace.WithSampler(sdktrace.ParentBased(sdktrace.TraceIDRatioBased(0.2))),
)
otel.SetTracerProvider(tp)
// ── Log provider ──────────────────────────────────────────────────────────
logOpts := []otlploghttp.Option{otlploghttp.WithEndpoint(endpoint)}
if !useTLS {
logOpts = append(logOpts, otlploghttp.WithInsecure())
}
logExp, err := otlploghttp.New(ctx, logOpts...)
if err != nil {
return nil, slog.Default(), fmt.Errorf("otelsetup: create OTLP log exporter: %w", err)
}
lp := log.NewLoggerProvider(
log.WithProcessor(log.NewBatchProcessor(logExp)),
log.WithResource(res),
)
otellog.SetLoggerProvider(lp)
// Bridge slog → OTel logs. Structured fields and trace IDs are forwarded
// automatically; Grafana can correlate log lines with Tempo traces.
otelLogger := otelslog.NewLogger(serviceName)
shutdown = func() {
shutCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
_ = tp.Shutdown(shutCtx)
_ = lp.Shutdown(shutCtx)
}
return shutdown, otelLogger, nil
}

View File

@@ -0,0 +1,159 @@
// Package pockettts provides a client for the kyutai-labs/pocket-tts TTS service.
//
// pocket-tts exposes a non-OpenAI API:
//
// POST /tts (multipart form: text, voice_url) → streaming WAV
// GET /health → {"status":"healthy"}
//
// GenerateAudio streams the WAV response and transcodes it to MP3 using ffmpeg,
// so callers receive MP3 bytes — the same format as the kokoro client — and the
// rest of the pipeline does not need to care which TTS engine was used.
//
// Predefined voices (pass the bare name as the voice parameter):
//
// alba, marius, javert, jean, fantine, cosette, eponine, azelma,
// anna, vera, charles, paul, george, mary, jane, michael, eve,
// bill_boerst, peter_yearsley, stuart_bell
package pockettts
import (
"bytes"
"context"
"fmt"
"io"
"mime/multipart"
"net/http"
"os/exec"
"strings"
"time"
)
// PredefinedVoices is the set of voice names built into pocket-tts.
// The runner uses this to decide which TTS engine to route a task to.
var PredefinedVoices = map[string]struct{}{
"alba": {}, "marius": {}, "javert": {}, "jean": {},
"fantine": {}, "cosette": {}, "eponine": {}, "azelma": {},
"anna": {}, "vera": {}, "charles": {}, "paul": {},
"george": {}, "mary": {}, "jane": {}, "michael": {},
"eve": {}, "bill_boerst": {}, "peter_yearsley": {}, "stuart_bell": {},
}
// IsPocketTTSVoice reports whether voice is served by pocket-tts.
func IsPocketTTSVoice(voice string) bool {
_, ok := PredefinedVoices[voice]
return ok
}
// Client is the interface for interacting with the pocket-tts service.
type Client interface {
// GenerateAudio synthesises text using the given voice and returns MP3 bytes.
// Voice must be one of the predefined pocket-tts voice names.
GenerateAudio(ctx context.Context, text, voice string) ([]byte, error)
// ListVoices returns the available predefined voice names.
ListVoices(ctx context.Context) ([]string, error)
}
// httpClient is the concrete pocket-tts HTTP client.
type httpClient struct {
baseURL string
http *http.Client
}
// New returns a Client targeting baseURL (e.g. "https://pocket-tts.libnovel.cc").
func New(baseURL string) Client {
return &httpClient{
baseURL: strings.TrimRight(baseURL, "/"),
http: &http.Client{Timeout: 10 * time.Minute},
}
}
// GenerateAudio posts to POST /tts and transcodes the WAV response to MP3
// using the system ffmpeg binary. Requires ffmpeg to be on PATH (available in
// the runner Docker image via Alpine's ffmpeg package).
func (c *httpClient) GenerateAudio(ctx context.Context, text, voice string) ([]byte, error) {
if text == "" {
return nil, fmt.Errorf("pockettts: empty text")
}
if voice == "" {
voice = "alba"
}
// ── Build multipart form ──────────────────────────────────────────────────
var body bytes.Buffer
mw := multipart.NewWriter(&body)
if err := mw.WriteField("text", text); err != nil {
return nil, fmt.Errorf("pockettts: write text field: %w", err)
}
// pocket-tts accepts a predefined voice name as voice_url.
if err := mw.WriteField("voice_url", voice); err != nil {
return nil, fmt.Errorf("pockettts: write voice_url field: %w", err)
}
if err := mw.Close(); err != nil {
return nil, fmt.Errorf("pockettts: close multipart writer: %w", err)
}
req, err := http.NewRequestWithContext(ctx, http.MethodPost,
c.baseURL+"/tts", &body)
if err != nil {
return nil, fmt.Errorf("pockettts: build request: %w", err)
}
req.Header.Set("Content-Type", mw.FormDataContentType())
resp, err := c.http.Do(req)
if err != nil {
return nil, fmt.Errorf("pockettts: request: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
_, _ = io.Copy(io.Discard, resp.Body)
return nil, fmt.Errorf("pockettts: server returned %d", resp.StatusCode)
}
wavData, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("pockettts: read response body: %w", err)
}
// ── Transcode WAV → MP3 via ffmpeg ────────────────────────────────────────
mp3Data, err := wavToMP3(ctx, wavData)
if err != nil {
return nil, fmt.Errorf("pockettts: transcode to mp3: %w", err)
}
return mp3Data, nil
}
// ListVoices returns the statically known predefined voice names.
// pocket-tts has no REST endpoint for listing voices.
func (c *httpClient) ListVoices(_ context.Context) ([]string, error) {
voices := make([]string, 0, len(PredefinedVoices))
for v := range PredefinedVoices {
voices = append(voices, v)
}
return voices, nil
}
// wavToMP3 converts raw WAV bytes to MP3 using ffmpeg.
// ffmpeg reads from stdin (pipe:0) and writes to stdout (pipe:1).
func wavToMP3(ctx context.Context, wav []byte) ([]byte, error) {
cmd := exec.CommandContext(ctx,
"ffmpeg",
"-hide_banner", "-loglevel", "error",
"-i", "pipe:0", // read WAV from stdin
"-f", "mp3", // output format
"-q:a", "2", // VBR quality ~190 kbps
"pipe:1", // write MP3 to stdout
)
cmd.Stdin = bytes.NewReader(wav)
var out, stderr bytes.Buffer
cmd.Stdout = &out
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
return nil, fmt.Errorf("ffmpeg: %w (stderr: %s)", err, stderr.String())
}
return out.Bytes(), nil
}
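The stdin/stdout piping pattern wavToMP3 uses generalises to any filter-style subprocess. A minimal sketch, demonstrated with `cat` so it runs without ffmpeg installed:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// pipeThrough runs a command with input on stdin and returns its stdout,
// the same shape as wavToMP3's ffmpeg invocation. stderr is captured so
// failures carry the tool's own diagnostics.
func pipeThrough(input []byte, name string, args ...string) ([]byte, error) {
	cmd := exec.Command(name, args...)
	cmd.Stdin = bytes.NewReader(input)
	var out, stderr bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		return nil, fmt.Errorf("%s: %w (stderr: %s)", name, err, stderr.String())
	}
	return out.Bytes(), nil
}

func main() {
	out, err := pipeThrough([]byte("hello"), "cat")
	fmt.Println(string(out), err)
}
```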

View File

@@ -0,0 +1,149 @@
package runner
// asynq_runner.go — Asynq-based task dispatch for the runner.
//
// When cfg.RedisAddr is set, Run() calls runAsynq() instead of runPoll().
// The Asynq server replaces the polling loop: it listens on Redis for tasks
// enqueued by the backend Producer and delivers them immediately.
//
// Handlers in this file decode Asynq job payloads and call the existing
// runScrapeTask / runAudioTask methods, keeping all execution logic in one place.
import (
"context"
"encoding/json"
"fmt"
"time"
"github.com/hibiken/asynq"
asynqmetrics "github.com/hibiken/asynq/x/metrics"
"github.com/libnovel/backend/internal/asynqqueue"
"github.com/libnovel/backend/internal/domain"
)
// runAsynq starts an Asynq server that replaces the PocketBase poll loop.
// It also starts the periodic catalogue refresh ticker.
// Blocks until ctx is cancelled.
func (r *Runner) runAsynq(ctx context.Context) error {
redisOpt, err := r.redisConnOpt()
if err != nil {
return fmt.Errorf("runner: parse redis addr: %w", err)
}
srv := asynq.NewServer(redisOpt, asynq.Config{
// Allocate concurrency slots for each task type.
// Total concurrency = scrape + audio slots.
Concurrency: r.cfg.MaxConcurrentScrape + r.cfg.MaxConcurrentAudio,
Queues: map[string]int{
asynqqueue.QueueDefault: 1,
},
// Let Asynq handle retries with exponential back-off.
RetryDelayFunc: asynq.DefaultRetryDelayFunc,
// Log errors from handlers via the existing structured logger.
ErrorHandler: asynq.ErrorHandlerFunc(func(_ context.Context, task *asynq.Task, err error) {
r.deps.Log.Error("runner: asynq task failed",
"type", task.Type(),
"err", err,
)
}),
})
mux := asynq.NewServeMux()
mux.HandleFunc(asynqqueue.TypeAudioGenerate, r.handleAudioTask)
mux.HandleFunc(asynqqueue.TypeScrapeBook, r.handleScrapeTask)
mux.HandleFunc(asynqqueue.TypeScrapeCatalogue, r.handleScrapeTask)
// Register Asynq queue metrics with the default Prometheus registry so
// the /metrics endpoint (metrics.go) can expose them.
inspector := asynq.NewInspector(redisOpt)
collector := asynqmetrics.NewQueueMetricsCollector(inspector)
if err := r.metricsRegistry.Register(collector); err != nil {
r.deps.Log.Warn("runner: could not register asynq prometheus collector", "err", err)
}
// Start the periodic catalogue refresh.
catalogueTick := time.NewTicker(r.cfg.CatalogueRefreshInterval)
defer catalogueTick.Stop()
if !r.cfg.SkipInitialCatalogueRefresh {
go r.runCatalogueRefresh(ctx)
} else {
r.deps.Log.Info("runner: skipping initial catalogue refresh (RUNNER_SKIP_INITIAL_CATALOGUE_REFRESH=true)")
}
r.deps.Log.Info("runner: asynq mode active", "redis_addr", r.cfg.RedisAddr)
// Run catalogue refresh ticker in the background.
go func() {
for {
select {
case <-ctx.Done():
return
case <-catalogueTick.C:
go r.runCatalogueRefresh(ctx)
}
}
}()
// Start Asynq server (non-blocking).
if err := srv.Start(mux); err != nil {
return fmt.Errorf("runner: asynq server start: %w", err)
}
// Block until context is cancelled, then gracefully stop.
<-ctx.Done()
r.deps.Log.Info("runner: context cancelled, shutting down asynq server")
srv.Shutdown()
return nil
}
// redisConnOpt parses cfg.RedisAddr into an asynq.RedisConnOpt.
// Supports full "redis://" / "rediss://" URLs and plain "host:port".
func (r *Runner) redisConnOpt() (asynq.RedisConnOpt, error) {
addr := r.cfg.RedisAddr
// ParseRedisURI handles redis:// and rediss:// schemes.
if (len(addr) >= 8 && addr[:8] == "redis://") || (len(addr) >= 9 && addr[:9] == "rediss://") {
return asynq.ParseRedisURI(addr)
}
// Plain "host:port" — use RedisClientOpt directly.
return asynq.RedisClientOpt{
Addr: addr,
Password: r.cfg.RedisPassword,
}, nil
}
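The scheme check can also be written with strings.HasPrefix, which sidesteps slice-bounds arithmetic entirely. A standalone sketch of just the detection step (isRedisURI is an illustrative helper name, not part of the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// isRedisURI reports whether addr should be handed to a URI parser
// (redis:// or rediss:// scheme) rather than treated as plain host:port.
// strings.HasPrefix never panics on short inputs, unlike manual slicing.
func isRedisURI(addr string) bool {
	return strings.HasPrefix(addr, "redis://") || strings.HasPrefix(addr, "rediss://")
}

func main() {
	fmt.Println(isRedisURI("rediss://:pw@redis.libnovel.cc:6380")) // true
	fmt.Println(isRedisURI("localhost:6379"))                      // false
}
```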
// handleScrapeTask is the Asynq handler for TypeScrapeBook and TypeScrapeCatalogue.
func (r *Runner) handleScrapeTask(ctx context.Context, t *asynq.Task) error {
var p asynqqueue.ScrapePayload
if err := json.Unmarshal(t.Payload(), &p); err != nil {
return fmt.Errorf("unmarshal scrape payload: %w", err)
}
task := domain.ScrapeTask{
ID: p.PBTaskID,
Kind: p.Kind,
TargetURL: p.TargetURL,
FromChapter: p.FromChapter,
ToChapter: p.ToChapter,
}
r.tasksRunning.Add(1)
defer r.tasksRunning.Add(-1)
r.runScrapeTask(ctx, task)
return nil
}
// handleAudioTask is the Asynq handler for TypeAudioGenerate.
func (r *Runner) handleAudioTask(ctx context.Context, t *asynq.Task) error {
var p asynqqueue.AudioPayload
if err := json.Unmarshal(t.Payload(), &p); err != nil {
return fmt.Errorf("unmarshal audio payload: %w", err)
}
task := domain.AudioTask{
ID: p.PBTaskID,
Slug: p.Slug,
Chapter: p.Chapter,
Voice: p.Voice,
}
r.tasksRunning.Add(1)
defer r.tasksRunning.Add(-1)
r.runAudioTask(ctx, task)
return nil
}
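Both handlers follow the same decode-then-dispatch shape. A standalone sketch of the payload decoding step; the struct and its json tag names here are illustrative assumptions, the real definition lives in asynqqueue:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// AudioPayload mirrors the fields handleAudioTask reads.
// Field and tag names are assumptions for this sketch.
type AudioPayload struct {
	PBTaskID string `json:"pb_task_id"`
	Slug     string `json:"slug"`
	Chapter  int    `json:"chapter"`
	Voice    string `json:"voice"`
}

// parseAudioPayload decodes a raw Asynq job payload, returning an error
// for malformed JSON (which Asynq would then retry or report).
func parseAudioPayload(raw []byte) (AudioPayload, error) {
	var p AudioPayload
	err := json.Unmarshal(raw, &p)
	return p, err
}

func main() {
	p, err := parseAudioPayload([]byte(`{"pb_task_id":"t1","slug":"my-book","chapter":7,"voice":"alba"}`))
	fmt.Println(p.Slug, p.Chapter, p.Voice, err)
}
```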

View File

@@ -1,21 +1,28 @@
package runner
// metrics.go — Prometheus metrics HTTP endpoint for the runner.
//
// GET /metrics returns a Prometheus text/plain scrape response.
// Exposes:
// - Standard Go runtime metrics (via promhttp)
// - Runner task counters (tasks_running, tasks_completed, tasks_failed)
// - Asynq queue metrics (registered in asynq_runner.go when Redis is enabled)
//
// GET /health — simple liveness probe.
import (
"context"
"fmt"
"log/slog"
"net"
"net/http"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
)
// metricsServer serves GET /metrics and GET /health for the runner process.
type metricsServer struct {
addr string
r *Runner
@@ -23,21 +30,62 @@ type metricsServer struct {
}
func newMetricsServer(addr string, r *Runner, log *slog.Logger) *metricsServer {
ms := &metricsServer{addr: addr, r: r, log: log}
ms.registerCollectors()
return ms
}
// registerCollectors registers runner-specific Prometheus collectors.
// Called once at construction; Asynq queue collector is registered separately
// in asynq_runner.go after the Redis connection is established.
func (ms *metricsServer) registerCollectors() {
// Runner task gauges / counters backed by the atomic fields on Runner.
ms.r.metricsRegistry.MustRegister(prometheus.NewGaugeFunc(
prometheus.GaugeOpts{
Namespace: "runner",
Name: "tasks_running",
Help: "Number of tasks currently being processed.",
},
func() float64 { return float64(ms.r.tasksRunning.Load()) },
))
ms.r.metricsRegistry.MustRegister(prometheus.NewCounterFunc(
prometheus.CounterOpts{
Namespace: "runner",
Name: "tasks_completed_total",
Help: "Total number of tasks completed successfully since startup.",
},
func() float64 { return float64(ms.r.tasksCompleted.Load()) },
))
ms.r.metricsRegistry.MustRegister(prometheus.NewCounterFunc(
prometheus.CounterOpts{
Namespace: "runner",
Name: "tasks_failed_total",
Help: "Total number of tasks that ended in failure since startup.",
},
func() float64 { return float64(ms.r.tasksFailed.Load()) },
))
ms.r.metricsRegistry.MustRegister(prometheus.NewGaugeFunc(
prometheus.GaugeOpts{
Namespace: "runner",
Name: "uptime_seconds",
Help: "Seconds since the runner process started.",
},
func() float64 { return time.Since(ms.r.startedAt).Seconds() },
))
}
// ListenAndServe starts the HTTP server and blocks until ctx is cancelled or
// a fatal listen error occurs.
func (ms *metricsServer) ListenAndServe(ctx context.Context) error {
mux := http.NewServeMux()
mux.Handle("GET /metrics", promhttp.HandlerFor(ms.r.metricsRegistry, promhttp.HandlerOpts{}))
mux.HandleFunc("GET /health", ms.handleHealth)
srv := &http.Server{
Addr: ms.addr,
Handler: mux,
ReadTimeout: 5 * time.Second,
WriteTimeout: 10 * time.Second,
BaseContext: func(_ net.Listener) context.Context { return ctx },
}
@@ -58,35 +106,8 @@ func (ms *metricsServer) ListenAndServe(ctx context.Context) error {
}
}
// handleHealth handles GET /health — simple liveness probe.
func (ms *metricsServer) handleHealth(w http.ResponseWriter, _ *http.Request) {
_, _ = w.Write([]byte(`{"status":"ok"}`))
}

View File

@@ -22,13 +22,19 @@ import (
"sync/atomic"
"time"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
"github.com/libnovel/backend/internal/bookstore"
"github.com/libnovel/backend/internal/domain"
"github.com/libnovel/backend/internal/kokoro"
"github.com/libnovel/backend/internal/meili"
"github.com/libnovel/backend/internal/orchestrator"
"github.com/libnovel/backend/internal/pockettts"
"github.com/libnovel/backend/internal/scraper"
"github.com/libnovel/backend/internal/taskqueue"
"github.com/prometheus/client_golang/prometheus"
)
// Config tunes the runner behaviour.
@@ -36,6 +42,7 @@ type Config struct {
// WorkerID uniquely identifies this runner instance in PocketBase records.
WorkerID string
// PollInterval is how often the runner checks for new tasks.
// Only used in PocketBase-polling mode (RedisAddr == "").
PollInterval time.Duration
// MaxConcurrentScrape limits simultaneous book-scrape goroutines.
MaxConcurrentScrape int
@@ -45,9 +52,11 @@ type Config struct {
OrchestratorWorkers int
// HeartbeatInterval is how often active tasks PATCH their heartbeat_at
// timestamp to signal they are still alive. Defaults to 30s when 0.
// Only used in PocketBase-polling mode.
HeartbeatInterval time.Duration
// StaleTaskThreshold is how old a heartbeat must be (or absent) before the
// task is considered orphaned and reset to pending. Defaults to 2m when 0.
// Only used in PocketBase-polling mode.
StaleTaskThreshold time.Duration
// CatalogueRefreshInterval is how often the runner walks the full catalogue,
// scrapes per-book metadata, downloads covers, and re-indexes everything in
@@ -61,6 +70,15 @@ type Config struct {
// MetricsAddr is the HTTP listen address for the /metrics endpoint.
// Defaults to ":9091". Set to "" to disable.
MetricsAddr string
// RedisAddr is the address of the Redis instance used for Asynq task
// dispatch. When set the runner switches from PocketBase-polling mode to
// Asynq ServeMux mode (immediate task delivery, no polling).
// Supports plain "host:port" or a full "rediss://..." URL.
// When empty the runner falls back to PocketBase polling.
RedisAddr string
// RedisPassword is the Redis AUTH password.
// Not required when RedisAddr is a full URL that includes credentials.
RedisPassword string
}
// Dependencies are the external services the runner depends on.
@@ -80,8 +98,11 @@ type Dependencies struct {
SearchIndex meili.Client
// Novel is the scraper implementation.
Novel scraper.NovelScraper
// Kokoro is the Kokoro-FastAPI TTS client (GPU, OpenAI-compatible voices).
Kokoro kokoro.Client
// PocketTTS is the pocket-tts client (CPU, kyutai voices: alba, marius, etc.).
// If nil, pocket-tts voice tasks will fail with a clear error.
PocketTTS pockettts.Client
// Log is the structured logger.
Log *slog.Logger
}
@@ -91,6 +112,8 @@ type Runner struct {
cfg Config
deps Dependencies
metricsRegistry *prometheus.Registry
// Atomic task counters — read by /metrics without locking.
tasksRunning atomic.Int64
tasksCompleted atomic.Int64
@@ -131,15 +154,18 @@ func New(cfg Config, deps Dependencies) *Runner {
if deps.SearchIndex == nil {
deps.SearchIndex = meili.NoopClient{}
}
return &Runner{cfg: cfg, deps: deps, startedAt: time.Now(), metricsRegistry: prometheus.NewRegistry()}
}
// Run starts the worker loop and the metrics HTTP server, blocking until ctx
// is cancelled.
//
// When cfg.RedisAddr is set the runner uses Asynq (immediate task delivery).
// Otherwise it falls back to PocketBase polling (legacy mode).
func (r *Runner) Run(ctx context.Context) error {
r.deps.Log.Info("runner: starting",
"worker_id", r.cfg.WorkerID,
"mode", r.mode(),
"max_scrape", r.cfg.MaxConcurrentScrape,
"max_audio", r.cfg.MaxConcurrentAudio,
"catalogue_refresh_interval", r.cfg.CatalogueRefreshInterval,
@@ -156,6 +182,23 @@ func (r *Runner) Run(ctx context.Context) error {
}()
}
if r.cfg.RedisAddr != "" {
return r.runAsynq(ctx)
}
return r.runPoll(ctx)
}
// mode returns a short string describing the active dispatch mode.
func (r *Runner) mode() string {
if r.cfg.RedisAddr != "" {
return "asynq"
}
return "poll"
}
// runPoll is the legacy PocketBase-polling dispatch loop.
// Used when cfg.RedisAddr is empty.
func (r *Runner) runPoll(ctx context.Context) error {
scrapeSem := make(chan struct{}, r.cfg.MaxConcurrentScrape)
audioSem := make(chan struct{}, r.cfg.MaxConcurrentAudio)
var wg sync.WaitGroup
@@ -173,6 +216,8 @@ func (r *Runner) Run(ctx context.Context) error {
r.deps.Log.Info("runner: skipping initial catalogue refresh (RUNNER_SKIP_INITIAL_CATALOGUE_REFRESH=true)")
}
r.deps.Log.Info("runner: poll mode active", "poll_interval", r.cfg.PollInterval)
// Run one poll immediately on startup, then on each tick.
for {
r.poll(ctx, scrapeSem, audioSem, &wg)
@@ -248,23 +293,30 @@ func (r *Runner) poll(ctx context.Context, scrapeSem, audioSem chan struct{}, wg
}
// ── Audio tasks ───────────────────────────────────────────────────────
// Only claim tasks when there is a free slot in the semaphore.
// This avoids the old bug where we claimed (status→running) a task and
// then couldn't dispatch it, leaving it orphaned until the reaper fired.
audioLoop:
for {
if ctx.Err() != nil {
return
}
// Check capacity before claiming to avoid orphaning tasks.
select {
case audioSem <- struct{}{}:
// Slot acquired — proceed to claim a task.
default:
// All slots busy; leave remaining pending tasks for next tick.
break audioLoop
}
task, ok, err := r.deps.Consumer.ClaimNextAudioTask(ctx, r.cfg.WorkerID)
if err != nil {
<-audioSem // release the pre-acquired slot
r.deps.Log.Error("runner: ClaimNextAudioTask failed", "err", err)
break
}
if !ok {
<-audioSem // release the pre-acquired slot; queue empty
break
}
r.tasksRunning.Add(1)
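The acquire-before-claim logic above relies on Go's non-blocking send into a buffered channel used as a counting semaphore. A minimal standalone sketch of that primitive:

```go
package main

import "fmt"

// tryAcquire attempts a non-blocking acquire on a channel semaphore.
// It returns true (holding a slot) when capacity remains, false otherwise,
// so the caller can skip claiming work it could not run.
func tryAcquire(sem chan struct{}) bool {
	select {
	case sem <- struct{}{}:
		return true
	default:
		return false
	}
}

func main() {
	sem := make(chan struct{}, 2) // two audio slots
	fmt.Println(tryAcquire(sem))  // true: slot 1
	fmt.Println(tryAcquire(sem))  // true: slot 2
	fmt.Println(tryAcquire(sem))  // false: full, leave tasks for next tick
	<-sem                         // a task finished; slot released
	fmt.Println(tryAcquire(sem))  // true again
}
```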
@@ -294,6 +346,14 @@ func (r *Runner) newOrchestrator() *orchestrator.Orchestrator {
// runScrapeTask executes one scrape task end-to-end and reports the result.
func (r *Runner) runScrapeTask(ctx context.Context, task domain.ScrapeTask) {
ctx, span := otel.Tracer("runner").Start(ctx, "runner.scrape_task")
defer span.End()
span.SetAttributes(
attribute.String("task.id", task.ID),
attribute.String("task.kind", task.Kind),
attribute.String("task.url", task.TargetURL),
)
log := r.deps.Log.With("task_id", task.ID, "kind", task.Kind, "url", task.TargetURL)
log.Info("runner: scrape task starting")
@@ -333,8 +393,10 @@ func (r *Runner) runScrapeTask(ctx context.Context, task domain.ScrapeTask) {
if result.ErrorMessage != "" {
r.tasksFailed.Add(1)
span.SetStatus(codes.Error, result.ErrorMessage)
} else {
r.tasksCompleted.Add(1)
span.SetStatus(codes.Ok, "")
}
log.Info("runner: scrape task finished",
@@ -377,6 +439,15 @@ func (r *Runner) runCatalogueTask(ctx context.Context, task domain.ScrapeTask, o
// runAudioTask executes one audio-generation task.
func (r *Runner) runAudioTask(ctx context.Context, task domain.AudioTask) {
ctx, span := otel.Tracer("runner").Start(ctx, "runner.audio_task")
defer span.End()
span.SetAttributes(
attribute.String("task.id", task.ID),
attribute.String("book.slug", task.Slug),
attribute.Int("chapter.number", task.Chapter),
attribute.String("audio.voice", task.Voice),
)
log := r.deps.Log.With("task_id", task.ID, "slug", task.Slug, "chapter", task.Chapter, "voice", task.Voice)
log.Info("runner: audio task starting")
@@ -400,6 +471,7 @@ func (r *Runner) runAudioTask(ctx context.Context, task domain.AudioTask) {
fail := func(msg string) {
log.Error("runner: audio task failed", "reason", msg)
r.tasksFailed.Add(1)
span.SetStatus(codes.Error, msg)
result := domain.AudioResult{ErrorMessage: msg}
if err := r.deps.Consumer.FinishAudioTask(ctx, task.ID, result); err != nil {
log.Error("runner: FinishAudioTask failed", "err", err)
@@ -417,14 +489,31 @@ func (r *Runner) runAudioTask(ctx context.Context, task domain.AudioTask) {
return
}
var audioData []byte
if pockettts.IsPocketTTSVoice(task.Voice) {
if r.deps.PocketTTS == nil {
fail("pocket-tts client not configured (POCKET_TTS_URL is empty)")
return
}
var genErr error
audioData, genErr = r.deps.PocketTTS.GenerateAudio(ctx, text, task.Voice)
if genErr != nil {
fail(fmt.Sprintf("pocket-tts generate: %v", genErr))
return
}
log.Info("runner: audio generated via pocket-tts", "voice", task.Voice)
} else {
if r.deps.Kokoro == nil {
fail("kokoro client not configured (KOKORO_URL is empty)")
return
}
var genErr error
audioData, genErr = r.deps.Kokoro.GenerateAudio(ctx, text, task.Voice)
if genErr != nil {
fail(fmt.Sprintf("kokoro generate: %v", genErr))
return
}
log.Info("runner: audio generated via kokoro-fastapi", "voice", task.Voice)
}
key := r.deps.AudioStore.AudioObjectKey(task.Slug, task.Chapter, task.Voice)
@@ -434,6 +523,7 @@ func (r *Runner) runAudioTask(ctx context.Context, task domain.AudioTask) {
}
r.tasksCompleted.Add(1)
span.SetStatus(codes.Ok, "")
result := domain.AudioResult{ObjectKey: key}
if err := r.deps.Consumer.FinishAudioTask(ctx, task.ID, result); err != nil {
log.Error("runner: FinishAudioTask failed", "err", err)
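The dispatch above hinges on `pockettts.IsPocketTTSVoice`, whose implementation is not shown in this diff. A plausible sketch is a set-membership check against the known pocket-tts voice IDs (names taken from the compose-file comment: alba, marius, javert, …); the real function may use a different source of truth:

```go
package main

import "fmt"

// pocketTTSVoices lists pocket-tts voice IDs. Hypothetical — the actual
// runner may derive this set differently (e.g. from the service's voice list).
var pocketTTSVoices = map[string]bool{
	"alba": true, "marius": true, "javert": true, "jean": true,
	"fantine": true, "cosette": true, "eponine": true, "azelma": true,
}

// IsPocketTTSVoice reports whether a voice ID should be routed to the
// pocket-tts client instead of kokoro-fastapi.
func IsPocketTTSVoice(voice string) bool {
	return pocketTTSVoices[voice]
}

func main() {
	fmt.Println(IsPocketTTSVoice("alba"))     // routed to pocket-tts
	fmt.Println(IsPocketTTSVoice("af_bella")) // routed to kokoro-fastapi
}
```

A map lookup keeps the routing O(1) and makes adding a voice a one-line change.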


@@ -247,8 +247,9 @@ func (c *pbClient) claimRecord(ctx context.Context, collection, workerID string,
}
claim := map[string]any{
"status": string(domain.TaskStatusRunning),
"worker_id": workerID,
"heartbeat_at": time.Now().UTC().Format(time.RFC3339),
}
for k, v := range extraClaim {
claim[k] = v


@@ -706,7 +706,7 @@ func (s *Store) ListAudioTasks(ctx context.Context) ([]domain.AudioTask, error)
}
func (s *Store) GetAudioTask(ctx context.Context, cacheKey string) (domain.AudioTask, bool, error) {
filter := fmt.Sprintf(`cache_key='%s'`, cacheKey)
items, err := s.pb.listAll(ctx, "audio_jobs", filter, "-started")
if err != nil || len(items) == 0 {
return domain.AudioTask{}, false, err


@@ -2,7 +2,8 @@ FROM caddy:2-builder AS builder
RUN xcaddy build \
--with github.com/mholt/caddy-ratelimit \
--with github.com/hslatman/caddy-crowdsec-bouncer/http \
--with github.com/mholt/caddy-l4
FROM caddy:2-alpine
COPY --from=builder /usr/bin/caddy /usr/bin/caddy


@@ -154,13 +154,21 @@ services:
# No public port — all traffic is routed via Caddy.
expose:
- "8080"
environment:
<<: *infra-env
BACKEND_HTTP_ADDR: ":8080"
LOG_LEVEL: "${LOG_LEVEL}"
KOKORO_URL: "${KOKORO_URL}"
KOKORO_VOICE: "${KOKORO_VOICE}"
POCKET_TTS_URL: "${POCKET_TTS_URL}"
GLITCHTIP_DSN: "${GLITCHTIP_DSN}"
OTEL_EXPORTER_OTLP_ENDPOINT: "${OTEL_EXPORTER_OTLP_ENDPOINT}"
OTEL_SERVICE_NAME: "backend"
# Asynq task queue — backend enqueues jobs to homelab Redis via Caddy TLS proxy.
# Set to "rediss://:password@redis.libnovel.cc:6380" in Doppler prd config.
# Leave empty to fall back to PocketBase polling.
REDIS_ADDR: "${REDIS_ADDR}"
REDIS_PASSWORD: "${REDIS_PASSWORD}"
healthcheck:
test: ["CMD", "/healthcheck", "http://localhost:8080/health"]
interval: 15s
@@ -217,8 +225,9 @@ services:
KOKORO_URL: "${KOKORO_URL}"
KOKORO_VOICE: "${KOKORO_VOICE}"
GLITCHTIP_DSN: "${GLITCHTIP_DSN}"
OTEL_EXPORTER_OTLP_ENDPOINT: "${OTEL_EXPORTER_OTLP_ENDPOINT}"
OTEL_SERVICE_NAME: "runner"
healthcheck:
# The runner writes /tmp/runner.alive on every poll.
# 120s = 2× the default 30s poll interval with generous headroom.
test: ["CMD", "/healthcheck", "file", "/tmp/runner.alive", "120"]
interval: 60s
@@ -267,13 +276,14 @@ services:
PUBLIC_UMAMI_SCRIPT_URL: "${PUBLIC_UMAMI_SCRIPT_URL}"
# GlitchTip client + server-side error tracking
PUBLIC_GLITCHTIP_DSN: "${PUBLIC_GLITCHTIP_DSN}"
# Email verification (Resend SMTP — shared with Fider/GlitchTip)
SMTP_HOST: "${FIDER_SMTP_HOST}"
SMTP_PORT: "${FIDER_SMTP_PORT}"
SMTP_USER: "${FIDER_SMTP_USER}"
SMTP_PASSWORD: "${FIDER_SMTP_PASSWORD}"
SMTP_FROM: "noreply@libnovel.cc"
APP_URL: "${ORIGIN}"
# OpenTelemetry tracing
OTEL_EXPORTER_OTLP_ENDPOINT: "${OTEL_EXPORTER_OTLP_ENDPOINT}"
OTEL_SERVICE_NAME: "ui"
# OAuth2 providers
GOOGLE_CLIENT_ID: "${GOOGLE_CLIENT_ID}"
GOOGLE_CLIENT_SECRET: "${GOOGLE_CLIENT_SECRET}"
GITHUB_CLIENT_ID: "${GITHUB_CLIENT_ID}"
GITHUB_CLIENT_SECRET: "${GITHUB_CLIENT_SECRET}"
healthcheck:
test: ["CMD", "wget", "-qO-", "http://127.0.0.1:3000/health"]
interval: 15s
@@ -301,6 +311,19 @@ services:
timeout: 10s
retries: 5
# ─── Dozzle agent ────────────────────────────────────────────────────────────
# Exposes prod container logs to the Dozzle instance on the homelab.
# The homelab Dozzle connects here via DOZZLE_REMOTE_AGENT.
# Port 7007 is bound to localhost only — not reachable from the internet.
dozzle-agent:
image: amir20/dozzle:latest
restart: unless-stopped
command: agent
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
ports:
- "127.0.0.1:7007:7007"
# ─── CrowdSec bouncer registration ───────────────────────────────────────────
# One-shot: registers the Caddy bouncer with the CrowdSec LAPI and writes the
# generated API key to crowdsec/.crowdsec.env, which Caddy reads via env_file.
@@ -336,13 +359,16 @@ services:
# ─── Caddy (reverse proxy + automatic HTTPS) ──────────────────────────────────
# Custom build includes github.com/mholt/caddy-ratelimit,
# github.com/hslatman/caddy-crowdsec-bouncer/http, and
# github.com/mholt/caddy-l4 (TCP layer4 proxy for Redis).
caddy:
image: kalekber/libnovel-caddy:${GIT_TAG:-latest}
build:
context: ./caddy
dockerfile: Dockerfile
labels:
com.centurylinklabs.watchtower.enable: "true"
restart: unless-stopped
depends_on:
backend:
@@ -355,9 +381,12 @@ services:
- "80:80"
- "443:443"
- "443:443/udp" # HTTP/3 (QUIC)
- "6380:6380" # Redis TCP proxy (TLS) for homelab → Asynq
environment:
DOMAIN: "${DOMAIN}"
CADDY_ACME_EMAIL: "${CADDY_ACME_EMAIL}"
# Homelab Redis address — Caddy TCP-proxies inbound :6380 to this.
HOMELAB_REDIS_ADDR: "${HOMELAB_REDIS_ADDR:?HOMELAB_REDIS_ADDR required for Redis TCP proxy}"
env_file:
- path: ./crowdsec/.crowdsec.env
required: false
@@ -382,203 +411,6 @@ services:
WATCHTOWER_NOTIFICATION_URL: "${WATCHTOWER_NOTIFICATION_URL}"
DOCKER_API_VERSION: "1.44"
# ─── Shared PostgreSQL (Fider + GlitchTip + Umami) ───────────────────────────
# A single Postgres instance hosting three separate databases.
# PocketBase uses its own embedded SQLite; this postgres is only for the
# three new services below.
postgres:
image: postgres:16-alpine
restart: unless-stopped
environment:
POSTGRES_USER: "${POSTGRES_USER}"
POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
POSTGRES_DB: postgres
expose:
- "5432"
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD", "pg_isready", "-U", "${POSTGRES_USER}"]
interval: 10s
timeout: 5s
retries: 5
# ─── Postgres database initialisation ────────────────────────────────────────
# One-shot: creates the fider, glitchtip, and umami databases if missing.
postgres-init:
image: postgres:16-alpine
depends_on:
postgres:
condition: service_healthy
environment:
PGPASSWORD: "${POSTGRES_PASSWORD}"
entrypoint: >
/bin/sh -c "
psql -h postgres -U ${POSTGRES_USER} -d postgres -tc \"SELECT 1 FROM pg_database WHERE datname='fider'\" | grep -q 1 ||
psql -h postgres -U ${POSTGRES_USER} -d postgres -c \"CREATE DATABASE fider\";
psql -h postgres -U ${POSTGRES_USER} -d postgres -tc \"SELECT 1 FROM pg_database WHERE datname='glitchtip'\" | grep -q 1 ||
psql -h postgres -U ${POSTGRES_USER} -d postgres -c \"CREATE DATABASE glitchtip\";
psql -h postgres -U ${POSTGRES_USER} -d postgres -tc \"SELECT 1 FROM pg_database WHERE datname='umami'\" | grep -q 1 ||
psql -h postgres -U ${POSTGRES_USER} -d postgres -c \"CREATE DATABASE umami\";
echo 'postgres-init: databases ready';
"
restart: "no"
# ─── Fider (user feedback & feature requests) ─────────────────────────────────
fider:
image: getfider/fider:stable
restart: unless-stopped
depends_on:
postgres-init:
condition: service_completed_successfully
postgres:
condition: service_healthy
expose:
- "3000"
environment:
BASE_URL: "${FIDER_BASE_URL}"
DATABASE_URL: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/fider?sslmode=disable"
JWT_SECRET: "${FIDER_JWT_SECRET}"
# Email: Resend SMTP
EMAIL_NOREPLY: "noreply@libnovel.cc"
EMAIL_SMTP_HOST: "${FIDER_SMTP_HOST}"
EMAIL_SMTP_PORT: "${FIDER_SMTP_PORT}"
EMAIL_SMTP_USERNAME: "${FIDER_SMTP_USER}"
EMAIL_SMTP_PASSWORD: "${FIDER_SMTP_PASSWORD}"
EMAIL_SMTP_ENABLE_STARTTLS: "false"
# ─── GlitchTip DB migration (one-shot) ───────────────────────────────────────
glitchtip-migrate:
image: glitchtip/glitchtip:latest
depends_on:
postgres-init:
condition: service_completed_successfully
postgres:
condition: service_healthy
environment:
DATABASE_URL: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/glitchtip"
SECRET_KEY: "${GLITCHTIP_SECRET_KEY}"
GLITCHTIP_DOMAIN: "${GLITCHTIP_DOMAIN}"
EMAIL_URL: "${GLITCHTIP_EMAIL_URL}"
DEFAULT_FROM_EMAIL: "noreply@libnovel.cc"
VALKEY_URL: "redis://valkey:6379/1"
command: "./manage.py migrate"
restart: "no"
# ─── GlitchTip web (error tracking UI + API) ─────────────────────────────────
glitchtip-web:
image: glitchtip/glitchtip:latest
restart: unless-stopped
depends_on:
glitchtip-migrate:
condition: service_completed_successfully
valkey:
condition: service_healthy
expose:
- "8000"
environment:
DATABASE_URL: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/glitchtip"
SECRET_KEY: "${GLITCHTIP_SECRET_KEY}"
GLITCHTIP_DOMAIN: "${GLITCHTIP_DOMAIN}"
EMAIL_URL: "${GLITCHTIP_EMAIL_URL}"
DEFAULT_FROM_EMAIL: "noreply@libnovel.cc"
VALKEY_URL: "redis://valkey:6379/1"
PORT: "8000"
ENABLE_USER_REGISTRATION: "false"
healthcheck:
test: ["CMD", "python3", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/api/0/')"]
interval: 15s
timeout: 5s
retries: 5
# ─── GlitchTip worker (background task processor) ─────────────────────────────
glitchtip-worker:
image: glitchtip/glitchtip:latest
restart: unless-stopped
depends_on:
glitchtip-migrate:
condition: service_completed_successfully
valkey:
condition: service_healthy
environment:
DATABASE_URL: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/glitchtip"
SECRET_KEY: "${GLITCHTIP_SECRET_KEY}"
GLITCHTIP_DOMAIN: "${GLITCHTIP_DOMAIN}"
EMAIL_URL: "${GLITCHTIP_EMAIL_URL}"
DEFAULT_FROM_EMAIL: "noreply@libnovel.cc"
VALKEY_URL: "redis://valkey:6379/1"
SERVER_ROLE: "worker"
# ─── Umami (page analytics) ───────────────────────────────────────────────────
umami:
image: ghcr.io/umami-software/umami:postgresql-latest
restart: unless-stopped
depends_on:
postgres-init:
condition: service_completed_successfully
postgres:
condition: service_healthy
expose:
- "3000"
environment:
DATABASE_URL: "postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/umami"
APP_SECRET: "${UMAMI_APP_SECRET}"
healthcheck:
test: ["CMD", "curl", "-sf", "http://localhost:3000/api/heartbeat"]
interval: 15s
timeout: 5s
retries: 5
# ─── Dozzle (Docker log viewer) ───────────────────────────────────────────────
dozzle:
image: amir20/dozzle:latest
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./dozzle/users.yml:/data/users.yml:ro
expose:
- "8080"
environment:
DOZZLE_AUTH_PROVIDER: simple
DOZZLE_HOSTNAME: "logs.libnovel.cc"
healthcheck:
test: ["CMD", "/dozzle", "healthcheck"]
interval: 15s
timeout: 5s
retries: 5
# ─── Uptime Kuma (uptime monitoring) ──────────────────────────────────────────
uptime-kuma:
image: louislam/uptime-kuma:1
restart: unless-stopped
volumes:
- uptime_kuma_data:/app/data
expose:
- "3001"
healthcheck:
test: ["CMD", "extra/healthcheck"]
interval: 15s
timeout: 5s
retries: 5
# ─── Gotify (push notifications) ──────────────────────────────────────────────
gotify:
image: gotify/server:latest
restart: unless-stopped
volumes:
- gotify_data:/app/data
expose:
- "80"
environment:
GOTIFY_DEFAULTUSER_NAME: "${GOTIFY_ADMIN_USER}"
GOTIFY_DEFAULTUSER_PASS: "${GOTIFY_ADMIN_PASS}"
GOTIFY_SERVER_PORT: "80"
healthcheck:
test: ["CMD", "curl", "-sf", "http://localhost:80/health"]
interval: 15s
timeout: 5s
retries: 5
volumes:
minio_data:
pb_data:
@@ -588,6 +420,3 @@ volumes:
caddy_config:
caddy_logs:
crowdsec_data:
postgres_data:
uptime_kuma_data:
gotify_data:

homelab/docker-compose.yml Normal file

@@ -0,0 +1,463 @@
# LibNovel homelab
#
# Runs on 192.168.0.109. Hosts:
# - libnovel runner (background task worker)
# - tooling: GlitchTip, Umami, Fider, Dozzle, Uptime Kuma, Gotify
# - observability: OTel Collector, Tempo, Loki, Prometheus, Grafana
# - cloudflared tunnel (public subdomains via Cloudflare Zero Trust)
# - shared Postgres for tooling DBs
#
# All secrets come from Doppler (project=libnovel, config=prd_homelab).
# Run with: doppler run -- docker compose up -d
#
# Public subdomains (via Cloudflare Tunnel — no ports exposed to internet):
# errors.libnovel.cc → glitchtip-web:8000
# analytics.libnovel.cc → umami:3000
# feedback.libnovel.cc → fider:3000
# logs.libnovel.cc → dozzle:8080
# uptime.libnovel.cc → uptime-kuma:3001
# push.libnovel.cc → gotify:80
# grafana.libnovel.cc → grafana:3000
services:
# ── Cloudflare Tunnel ───────────────────────────────────────────────────────
# Outbound-only encrypted tunnel to Cloudflare.
# Routes all public subdomains to their respective containers on this network.
# No inbound ports needed — cloudflared initiates all connections outward.
cloudflared:
image: cloudflare/cloudflared:latest
restart: unless-stopped
command: tunnel --no-autoupdate run --token ${CLOUDFLARE_TUNNEL_TOKEN}
environment:
CLOUDFLARE_TUNNEL_TOKEN: "${CLOUDFLARE_TUNNEL_TOKEN}"
# ── LibNovel Runner ─────────────────────────────────────────────────────────
# Background task worker. Connects to prod PocketBase, MinIO, Meilisearch
# via their public subdomains (pb.libnovel.cc, storage.libnovel.cc, etc.)
runner:
image: kalekber/libnovel-runner:latest
restart: unless-stopped
stop_grace_period: 135s
labels:
com.centurylinklabs.watchtower.enable: "true"
environment:
POCKETBASE_URL: "https://pb.libnovel.cc"
POCKETBASE_ADMIN_EMAIL: "${POCKETBASE_ADMIN_EMAIL}"
POCKETBASE_ADMIN_PASSWORD: "${POCKETBASE_ADMIN_PASSWORD}"
MINIO_ENDPOINT: "storage.libnovel.cc"
MINIO_ACCESS_KEY: "${MINIO_ROOT_USER}"
MINIO_SECRET_KEY: "${MINIO_ROOT_PASSWORD}"
MINIO_USE_SSL: "true"
MINIO_PUBLIC_ENDPOINT: "${MINIO_PUBLIC_ENDPOINT}"
MINIO_PUBLIC_USE_SSL: "${MINIO_PUBLIC_USE_SSL}"
MEILI_URL: "${MEILI_URL}"
MEILI_API_KEY: "${MEILI_API_KEY}"
VALKEY_ADDR: ""
GODEBUG: "preferIPv4=1"
KOKORO_URL: "http://kokoro-fastapi:8880"
KOKORO_VOICE: "${KOKORO_VOICE}"
POCKET_TTS_URL: "http://pocket-tts:8000"
RUNNER_WORKER_ID: "${RUNNER_WORKER_ID}"
RUNNER_POLL_INTERVAL: "${RUNNER_POLL_INTERVAL}"
RUNNER_MAX_CONCURRENT_SCRAPE: "${RUNNER_MAX_CONCURRENT_SCRAPE}"
RUNNER_MAX_CONCURRENT_AUDIO: "${RUNNER_MAX_CONCURRENT_AUDIO}"
RUNNER_TIMEOUT: "${RUNNER_TIMEOUT}"
RUNNER_METRICS_ADDR: "${RUNNER_METRICS_ADDR}"
RUNNER_SKIP_INITIAL_CATALOGUE_REFRESH: "true"
LOG_LEVEL: "${LOG_LEVEL}"
GLITCHTIP_DSN: "${GLITCHTIP_DSN}"
# OTel — send runner traces/metrics to the local collector (HTTP)
OTEL_EXPORTER_OTLP_ENDPOINT: "http://otel-collector:4318"
OTEL_SERVICE_NAME: "runner"
healthcheck:
test: ["CMD", "/healthcheck", "file", "/tmp/runner.alive", "120"]
interval: 60s
timeout: 5s
retries: 3
# ── Shared Postgres ─────────────────────────────────────────────────────────
# Hosts glitchtip, umami, and fider databases.
postgres:
image: postgres:16-alpine
restart: unless-stopped
environment:
POSTGRES_USER: "${POSTGRES_USER}"
POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
POSTGRES_DB: postgres
expose:
- "5432"
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD", "pg_isready", "-U", "${POSTGRES_USER}"]
interval: 10s
timeout: 5s
retries: 5
# ── Postgres database initialisation ────────────────────────────────────────
postgres-init:
image: postgres:16-alpine
depends_on:
postgres:
condition: service_healthy
environment:
PGPASSWORD: "${POSTGRES_PASSWORD}"
entrypoint: >
/bin/sh -c "
psql -h postgres -U ${POSTGRES_USER} -d postgres -tc \"SELECT 1 FROM pg_database WHERE datname='fider'\" | grep -q 1 ||
psql -h postgres -U ${POSTGRES_USER} -d postgres -c \"CREATE DATABASE fider\";
psql -h postgres -U ${POSTGRES_USER} -d postgres -tc \"SELECT 1 FROM pg_database WHERE datname='glitchtip'\" | grep -q 1 ||
psql -h postgres -U ${POSTGRES_USER} -d postgres -c \"CREATE DATABASE glitchtip\";
psql -h postgres -U ${POSTGRES_USER} -d postgres -tc \"SELECT 1 FROM pg_database WHERE datname='umami'\" | grep -q 1 ||
psql -h postgres -U ${POSTGRES_USER} -d postgres -c \"CREATE DATABASE umami\";
echo 'postgres-init: databases ready';
"
restart: "no"
# ── GlitchTip DB migration ──────────────────────────────────────────────────
glitchtip-migrate:
image: glitchtip/glitchtip:latest
depends_on:
postgres-init:
condition: service_completed_successfully
postgres:
condition: service_healthy
environment:
DATABASE_URL: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/glitchtip"
SECRET_KEY: "${GLITCHTIP_SECRET_KEY}"
GLITCHTIP_DOMAIN: "${GLITCHTIP_DOMAIN}"
EMAIL_URL: "${GLITCHTIP_EMAIL_URL}"
DEFAULT_FROM_EMAIL: "noreply@libnovel.cc"
VALKEY_URL: "redis://valkey:6379/1"
command: "./manage.py migrate"
restart: "no"
# ── GlitchTip web ───────────────────────────────────────────────────────────
glitchtip-web:
image: glitchtip/glitchtip:latest
restart: unless-stopped
depends_on:
glitchtip-migrate:
condition: service_completed_successfully
expose:
- "8000"
environment:
DATABASE_URL: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/glitchtip"
SECRET_KEY: "${GLITCHTIP_SECRET_KEY}"
GLITCHTIP_DOMAIN: "${GLITCHTIP_DOMAIN}"
EMAIL_URL: "${GLITCHTIP_EMAIL_URL}"
DEFAULT_FROM_EMAIL: "noreply@libnovel.cc"
VALKEY_URL: "redis://valkey:6379/1"
PORT: "8000"
ENABLE_USER_REGISTRATION: "false"
healthcheck:
test: ["CMD", "python3", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/api/0/')"]
interval: 15s
timeout: 5s
retries: 5
# ── GlitchTip worker ────────────────────────────────────────────────────────
glitchtip-worker:
image: glitchtip/glitchtip:latest
restart: unless-stopped
depends_on:
glitchtip-migrate:
condition: service_completed_successfully
environment:
DATABASE_URL: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/glitchtip"
SECRET_KEY: "${GLITCHTIP_SECRET_KEY}"
GLITCHTIP_DOMAIN: "${GLITCHTIP_DOMAIN}"
EMAIL_URL: "${GLITCHTIP_EMAIL_URL}"
DEFAULT_FROM_EMAIL: "noreply@libnovel.cc"
VALKEY_URL: "redis://valkey:6379/1"
SERVER_ROLE: "worker"
# ── Umami ───────────────────────────────────────────────────────────────────
umami:
image: ghcr.io/umami-software/umami:postgresql-latest
restart: unless-stopped
depends_on:
postgres-init:
condition: service_completed_successfully
postgres:
condition: service_healthy
expose:
- "3000"
environment:
DATABASE_URL: "postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/umami"
APP_SECRET: "${UMAMI_APP_SECRET}"
healthcheck:
test: ["CMD", "curl", "-sf", "http://localhost:3000/api/heartbeat"]
interval: 15s
timeout: 5s
retries: 5
# ── Fider ───────────────────────────────────────────────────────────────────
fider:
image: getfider/fider:stable
restart: unless-stopped
depends_on:
postgres-init:
condition: service_completed_successfully
postgres:
condition: service_healthy
expose:
- "3000"
environment:
BASE_URL: "${FIDER_BASE_URL}"
DATABASE_URL: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/fider?sslmode=disable"
JWT_SECRET: "${FIDER_JWT_SECRET}"
EMAIL_NOREPLY: "noreply@libnovel.cc"
EMAIL_SMTP_HOST: "${FIDER_SMTP_HOST}"
EMAIL_SMTP_PORT: "${FIDER_SMTP_PORT}"
EMAIL_SMTP_USERNAME: "${FIDER_SMTP_USER}"
EMAIL_SMTP_PASSWORD: "${FIDER_SMTP_PASSWORD}"
EMAIL_SMTP_ENABLE_STARTTLS: "false"
# ── Dozzle ──────────────────────────────────────────────────────────────────
# Watches both homelab and prod containers.
# Prod agent runs on 165.22.70.138:7007 (added separately to prod compose).
dozzle:
image: amir20/dozzle:latest
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./dozzle/users.yml:/data/users.yml:ro
expose:
- "8080"
environment:
DOZZLE_AUTH_PROVIDER: simple
DOZZLE_HOSTNAME: "logs.libnovel.cc"
DOZZLE_REMOTE_AGENT: "prod@165.22.70.138:7007"
healthcheck:
test: ["CMD", "/dozzle", "healthcheck"]
interval: 15s
timeout: 5s
retries: 5
# ── Uptime Kuma ─────────────────────────────────────────────────────────────
uptime-kuma:
image: louislam/uptime-kuma:1
restart: unless-stopped
volumes:
- uptime_kuma_data:/app/data
expose:
- "3001"
healthcheck:
test: ["CMD", "extra/healthcheck"]
interval: 15s
timeout: 5s
retries: 5
# ── Gotify ──────────────────────────────────────────────────────────────────
gotify:
image: gotify/server:latest
restart: unless-stopped
volumes:
- gotify_data:/app/data
expose:
- "80"
environment:
GOTIFY_DEFAULTUSER_NAME: "${GOTIFY_ADMIN_USER}"
GOTIFY_DEFAULTUSER_PASS: "${GOTIFY_ADMIN_PASS}"
GOTIFY_SERVER_PORT: "80"
healthcheck:
test: ["CMD", "curl", "-sf", "http://localhost:80/health"]
interval: 15s
timeout: 5s
retries: 5
# ── Valkey ──────────────────────────────────────────────────────────────────
# Used by GlitchTip for task queuing.
valkey:
image: valkey/valkey:7-alpine
restart: unless-stopped
expose:
- "6379"
volumes:
- valkey_data:/data
healthcheck:
test: ["CMD", "valkey-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
# ── OTel Collector ──────────────────────────────────────────────────────────
# Receives OTLP from backend/ui/runner, fans out to Tempo + Prometheus + Loki.
otel-collector:
image: otel/opentelemetry-collector-contrib:latest
restart: unless-stopped
volumes:
- ./otel/collector.yaml:/etc/otelcol-contrib/config.yaml:ro
expose:
- "4317" # OTLP gRPC
- "4318" # OTLP HTTP
- "8888" # Collector self-metrics (scraped by Prometheus)
depends_on:
- tempo
- prometheus
- loki
# No healthcheck — distroless image has no shell or curl
# ── Tempo ───────────────────────────────────────────────────────────────────
# Distributed trace storage. Receives OTLP from the collector.
tempo:
image: grafana/tempo:2.6.1
restart: unless-stopped
command: ["-config.file=/etc/tempo.yaml"]
volumes:
- ./otel/tempo.yaml:/etc/tempo.yaml:ro
- tempo_data:/var/tempo
expose:
- "3200" # Tempo query API (queried by Grafana)
- "4317" # OTLP gRPC ingest (collector → tempo)
healthcheck:
test: ["CMD", "wget", "-qO-", "http://localhost:3200/ready"]
interval: 15s
timeout: 5s
retries: 5
# ── Prometheus ──────────────────────────────────────────────────────────────
# Scrapes metrics from backend (via prod), runner, and otel-collector.
prometheus:
image: prom/prometheus:latest
restart: unless-stopped
command:
- "--config.file=/etc/prometheus/prometheus.yaml"
- "--storage.tsdb.path=/prometheus"
- "--storage.tsdb.retention.time=30d"
- "--web.enable-remote-write-receiver"
volumes:
- ./otel/prometheus.yaml:/etc/prometheus/prometheus.yaml:ro
- prometheus_data:/prometheus
expose:
- "9090"
healthcheck:
test: ["CMD", "wget", "-qO-", "http://localhost:9090/-/healthy"]
interval: 15s
timeout: 5s
retries: 5
# ── Loki ────────────────────────────────────────────────────────────────────
# Log aggregation. Receives logs from OTel collector. Replaces manual Dozzle
# tailing for structured log search.
loki:
image: grafana/loki:latest
restart: unless-stopped
command: ["-config.file=/etc/loki/loki.yaml"]
volumes:
- ./otel/loki.yaml:/etc/loki/loki.yaml:ro
- loki_data:/loki
expose:
- "3100"
# No healthcheck — distroless image has no shell or curl
# ── Grafana ─────────────────────────────────────────────────────────────────
# Single UI for traces (Tempo), metrics (Prometheus), and logs (Loki).
# Accessible at grafana.libnovel.cc via Cloudflare Tunnel.
grafana:
image: grafana/grafana:latest
restart: unless-stopped
depends_on:
- tempo
- prometheus
- loki
expose:
- "3000"
volumes:
- grafana_data:/var/lib/grafana
- ./otel/grafana/provisioning:/etc/grafana/provisioning:ro
environment:
GF_SERVER_ROOT_URL: "https://grafana.libnovel.cc"
GF_SECURITY_ADMIN_USER: "${GRAFANA_ADMIN_USER}"
GF_SECURITY_ADMIN_PASSWORD: "${GRAFANA_ADMIN_PASSWORD}"
GF_AUTH_ANONYMOUS_ENABLED: "false"
GF_FEATURE_TOGGLES_ENABLE: "traceqlEditor"
healthcheck:
test: ["CMD", "wget", "-qO-", "http://localhost:3000/api/health"]
interval: 15s
timeout: 5s
retries: 5
# ── Kokoro-FastAPI (GPU TTS) ────────────────────────────────────────────────
# OpenAI-compatible TTS service backed by the Kokoro model, running on the
# homelab RTX 3050 (8 GB VRAM). Replaces the broken kokoro.kalekber.cc DNS.
# Voices match existing IDs: af_bella, af_sky, af_heart, etc.
# The runner reaches it at http://kokoro-fastapi:8880 via the Docker network.
kokoro-fastapi:
image: kokoro-fastapi:latest
restart: unless-stopped
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
expose:
- "8880"
healthcheck:
test: ["CMD", "curl", "-sf", "http://localhost:8880/health"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
# ── pocket-tts (CPU TTS) ────────────────────────────────────────────────────
# Lightweight CPU-only TTS using kyutai-labs/pocket-tts.
# Image is built locally on homelab from https://github.com/kyutai-labs/pocket-tts
# (no prebuilt image published): cd /tmp && git clone --depth=1 https://github.com/kyutai-labs/pocket-tts.git && docker build -t pocket-tts:latest /tmp/pocket-tts
# OpenAI-compatible: POST /tts (multipart form) on port 8000.
# Voices: alba, marius, javert, jean, fantine, cosette, eponine, azelma, etc.
# The runner routes pocket-tts voice IDs here (via POCKET_TTS_URL); all other
# voices are handled by kokoro-fastapi.
pocket-tts:
image: pocket-tts:latest
restart: unless-stopped
command: ["uv", "run", "pocket-tts", "serve", "--host", "0.0.0.0"]
expose:
- "8000"
volumes:
- pocket_tts_cache:/root/.cache/pocket_tts
- hf_cache:/root/.cache/huggingface
healthcheck:
test: ["CMD", "curl", "-sf", "http://localhost:8000/health"]
interval: 30s
timeout: 10s
retries: 5
start_period: 120s
# ── Watchtower ──────────────────────────────────────────────────────────────
# Auto-updates runner image when CI pushes a new tag.
# Only watches services with the watchtower label.
watchtower:
image: containrrr/watchtower:latest
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
command: --label-enable --interval 300 --cleanup
environment:
WATCHTOWER_NOTIFICATIONS: "${WATCHTOWER_NOTIFICATIONS}"
WATCHTOWER_NOTIFICATION_URL: "${WATCHTOWER_NOTIFICATION_URL}"
DOCKER_API_VERSION: "1.44"
volumes:
postgres_data:
valkey_data:
uptime_kuma_data:
gotify_data:
tempo_data:
prometheus_data:
loki_data:
grafana_data:
pocket_tts_cache:
hf_cache:

homelab/dozzle/users.yml Normal file

@@ -0,0 +1,5 @@
users:
admin:
name: admin
email: admin@libnovel.cc
password: "$2y$10$4jqLza2grpxnQn0EGux2C.UmlSxRmOvH/J1ySzOBxMZgW6cA2TnmK"


@@ -0,0 +1,68 @@
# OTel Collector config
#
# Receivers: OTLP (gRPC + HTTP) from backend, ui, runner
# Processors: batch for efficiency, resource detection for host metadata
# Exporters: Tempo (traces), Prometheus (metrics), Loki (logs)
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
batch:
timeout: 5s
send_batch_size: 512
# Attach host metadata to all telemetry
resourcedetection:
detectors: [env, system]
timeout: 5s
exporters:
# Traces → Tempo
otlp/tempo:
endpoint: tempo:4317
tls:
insecure: true
# Metrics → Prometheus (remote write)
prometheusremotewrite:
endpoint: "http://prometheus:9090/api/v1/write"
tls:
insecure_skip_verify: true
# Logs → Loki (via OTLP HTTP endpoint)
otlphttp/loki:
endpoint: "http://loki:3100/otlp"
tls:
insecure: true
# Collector self-observability (optional debug)
debug:
verbosity: basic
extensions:
health_check:
endpoint: 0.0.0.0:13133
pprof:
endpoint: 0.0.0.0:1777
service:
extensions: [health_check, pprof]
pipelines:
traces:
receivers: [otlp]
processors: [resourcedetection, batch]
exporters: [otlp/tempo]
metrics:
receivers: [otlp]
processors: [resourcedetection, batch]
exporters: [prometheusremotewrite]
logs:
receivers: [otlp]
processors: [resourcedetection, batch]
exporters: [otlphttp/loki]


@@ -0,0 +1,16 @@
# Grafana alerting provisioning — contact points
# Sends all alerts to Gotify (self-hosted push notifications).
apiVersion: 1
contactPoints:
- orgId: 1
name: Gotify
receivers:
- uid: gotify-webhook
type: webhook
settings:
url: "http://gotify/message?token=ABZrZgCY-4ivcmt"
httpMethod: POST
title: "{{ .CommonLabels.alertname }}"
message: "{{ range .Alerts }}{{ .Annotations.summary }}\n{{ .Annotations.description }}{{ end }}"
disableResolveMessage: false


@@ -0,0 +1,15 @@
# Grafana alerting provisioning — notification policies
# Routes all alerts to Gotify by default.
apiVersion: 1
policies:
- orgId: 1
receiver: Gotify
group_by: ["alertname", "service"]
group_wait: 30s
group_interval: 5m
repeat_interval: 4h
routes:
- receiver: Gotify
matchers:
- severity =~ "critical|warning"


@@ -0,0 +1,214 @@
# Grafana alerting provisioning — alert rules
# Covers: runner down, high task failure rate, audio error spike, backend error spike.
apiVersion: 1
groups:
- orgId: 1
name: LibNovel Runner
folder: LibNovel
interval: 1m
rules:
- uid: runner-down
title: Runner Down
condition: C
for: 2m
annotations:
summary: "LibNovel runner is not reachable"
description: "The Prometheus scrape of runner:9091 has been failing for >2 minutes. Tasks are not being processed."
labels:
severity: critical
          service: runner
        data:
          - refId: A
            datasourceUid: prometheus
            relativeTimeRange: { from: 300, to: 0 }
            model:
              expr: "up{job=\"libnovel-runner\"}"
              instant: true
              intervalMs: 1000
              maxDataPoints: 43200
          - refId: C
            datasourceUid: __expr__
            relativeTimeRange: { from: 300, to: 0 }
            model:
              type: classic_conditions
              conditions:
                - evaluator: { params: [1], type: lt }
                  operator: { type: and }
                  query: { params: [A] }
                  reducer: { params: [], type: last }
      - uid: runner-high-failure-rate
        title: Runner High Task Failure Rate
        condition: C
        for: 5m
        annotations:
          summary: "Runner task failure rate is above 20%"
          description: "More than 20% of runner tasks have been failing for the last 5 minutes. Check runner logs."
        labels:
          severity: warning
          service: runner
        data:
          - refId: A
            datasourceUid: prometheus
            relativeTimeRange: { from: 600, to: 0 }
            model:
              expr: "rate(libnovel_runner_tasks_failed_total[5m]) / clamp_min(rate(libnovel_runner_tasks_completed_total[5m]) + rate(libnovel_runner_tasks_failed_total[5m]), 0.001)"
              instant: true
              intervalMs: 1000
              maxDataPoints: 43200
          - refId: C
            datasourceUid: __expr__
            relativeTimeRange: { from: 600, to: 0 }
            model:
              type: classic_conditions
              conditions:
                - evaluator: { params: [0.2], type: gt }
                  operator: { type: and }
                  query: { params: [A] }
                  reducer: { params: [], type: last }
      - uid: runner-tasks-stalled
        title: Runner Tasks Stalled
        condition: C
        for: 10m
        annotations:
          summary: "Runner has tasks running for >10 minutes with no completions"
          description: "tasks_running > 0 but rate(tasks_completed) is 0. Tasks may be stuck or the runner is in a crash loop."
        labels:
          severity: warning
          service: runner
        data:
          - refId: Running
            datasourceUid: prometheus
            relativeTimeRange: { from: 900, to: 0 }
            model:
              expr: "libnovel_runner_tasks_running"
              instant: true
              intervalMs: 1000
              maxDataPoints: 43200
          - refId: Rate
            datasourceUid: prometheus
            relativeTimeRange: { from: 900, to: 0 }
            model:
              expr: "rate(libnovel_runner_tasks_completed_total[10m])"
              instant: true
              intervalMs: 1000
              maxDataPoints: 43200
          - refId: C
            datasourceUid: __expr__
            relativeTimeRange: { from: 900, to: 0 }
            model:
              type: classic_conditions
              conditions:
                - evaluator: { params: [0], type: gt }
                  operator: { type: and }
                  query: { params: [Running] }
                  reducer: { params: [], type: last }
                - evaluator: { params: [0.001], type: lt }
                  operator: { type: and }
                  query: { params: [Rate] }
                  reducer: { params: [], type: last }
  - orgId: 1
    name: LibNovel Backend
    folder: LibNovel
    interval: 1m
    rules:
      - uid: backend-high-error-rate
        title: Backend High Error Rate
        condition: C
        for: 5m
        annotations:
          summary: "Backend API error rate above 5%"
          description: "More than 5% of backend HTTP requests are returning 5xx status codes (as seen from UI OTel instrumentation)."
        labels:
          severity: warning
          service: backend
        data:
          - refId: A
            datasourceUid: prometheus
            relativeTimeRange: { from: 600, to: 0 }
            model:
              expr: "sum(rate(http_client_request_duration_seconds_count{job=\"ui\", server_address=\"backend\", http_response_status_code=~\"5..\"}[5m])) / clamp_min(sum(rate(http_client_request_duration_seconds_count{job=\"ui\", server_address=\"backend\"}[5m])), 0.001)"
              instant: true
              intervalMs: 1000
              maxDataPoints: 43200
          - refId: C
            datasourceUid: __expr__
            relativeTimeRange: { from: 600, to: 0 }
            model:
              type: classic_conditions
              conditions:
                - evaluator: { params: [0.05], type: gt }
                  operator: { type: and }
                  query: { params: [A] }
                  reducer: { params: [], type: last }
      - uid: backend-high-p95-latency
        title: Backend High p95 Latency
        condition: C
        for: 5m
        annotations:
          summary: "Backend p95 latency above 2s"
          description: "95th percentile latency of backend spans has exceeded 2 seconds for >5 minutes."
        labels:
          severity: warning
          service: backend
        data:
          - refId: A
            datasourceUid: prometheus
            relativeTimeRange: { from: 600, to: 0 }
            model:
              expr: "histogram_quantile(0.95, sum(rate(traces_spanmetrics_latency_bucket{service=\"backend\"}[5m])) by (le))"
              instant: true
              intervalMs: 1000
              maxDataPoints: 43200
          - refId: C
            datasourceUid: __expr__
            relativeTimeRange: { from: 600, to: 0 }
            model:
              type: classic_conditions
              conditions:
                - evaluator: { params: [2], type: gt }
                  operator: { type: and }
                  query: { params: [A] }
                  reducer: { params: [], type: last }
  - orgId: 1
    name: LibNovel OTel Pipeline
    folder: LibNovel
    interval: 2m
    rules:
      - uid: otel-collector-down
        title: OTel Collector Down
        condition: C
        for: 3m
        annotations:
          summary: "OTel collector is not reachable"
          description: "Prometheus cannot scrape otel-collector:8888. Traces and logs may be dropping."
        labels:
          severity: warning
          service: otel-collector
        data:
          - refId: A
            datasourceUid: prometheus
            relativeTimeRange: { from: 600, to: 0 }
            model:
              expr: "up{job=\"otel-collector\"}"
              instant: true
              intervalMs: 1000
              maxDataPoints: 43200
          - refId: C
            datasourceUid: __expr__
            relativeTimeRange: { from: 600, to: 0 }
            model:
              type: classic_conditions
              conditions:
                - evaluator: { params: [1], type: lt }
                  operator: { type: and }
                  query: { params: [A] }
                  reducer: { params: [], type: last }
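Several alert expressions above divide by `clamp_min(...)` so that a quiet service produces a failure rate of 0 rather than a division-by-zero/NaN that would never trigger. A minimal sketch of the same guard in Python (the function name and values are illustrative, not part of the provisioning files):

```python
def failure_rate(failed_per_s: float, completed_per_s: float, floor: float = 0.001) -> float:
    """Mirror of: failed / clamp_min(completed + failed, floor)."""
    # clamp_min: never let the denominator drop below the floor
    denominator = max(completed_per_s + failed_per_s, floor)
    return failed_per_s / denominator

# Busy runner: 1 failure and 4 completions per second -> 20% failure rate.
print(failure_rate(1.0, 4.0))   # 0.2
# Idle runner: both rates are 0 -> 0.0 instead of NaN, so the alert stays quiet.
print(failure_rate(0.0, 0.0))   # 0.0
```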


@@ -0,0 +1,338 @@
{
"uid": "libnovel-backend",
"title": "Backend API",
"description": "Request rate, error rate, and latency for the LibNovel backend. Powered by Tempo span metrics and UI OTel instrumentation.",
"tags": ["libnovel", "backend", "api"],
"timezone": "browser",
"refresh": "30s",
"time": { "from": "now-3h", "to": "now" },
"schemaVersion": 39,
"panels": [
{
"id": 1,
"type": "stat",
"title": "Request Rate (RPS)",
"gridPos": { "x": 0, "y": 0, "w": 4, "h": 4 },
"options": {
"reduceOptions": { "calcs": ["lastNotNull"] },
"colorMode": "value",
"graphMode": "area",
"textMode": "auto"
},
"fieldConfig": {
"defaults": {
"unit": "reqps",
"color": { "mode": "thresholds" },
"thresholds": { "mode": "absolute", "steps": [{ "color": "green", "value": null }] }
}
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "sum(rate(traces_spanmetrics_calls_total{service=\"backend\"}[5m]))",
"legendFormat": "rps",
"instant": true
}
]
},
{
"id": 2,
"type": "stat",
"title": "Error Rate",
"gridPos": { "x": 4, "y": 0, "w": 4, "h": 4 },
"options": {
"reduceOptions": { "calcs": ["lastNotNull"] },
"colorMode": "background",
"graphMode": "none"
},
"fieldConfig": {
"defaults": {
"unit": "percentunit",
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 0.01 },
{ "color": "red", "value": 0.05 }
]
}
}
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "sum(rate(traces_spanmetrics_calls_total{service=\"backend\", status_code=\"STATUS_CODE_ERROR\"}[5m])) / clamp_min(sum(rate(traces_spanmetrics_calls_total{service=\"backend\"}[5m])), 0.001)",
"legendFormat": "error rate",
"instant": true
}
]
},
{
"id": 3,
"type": "stat",
"title": "p50 Latency",
"gridPos": { "x": 8, "y": 0, "w": 4, "h": 4 },
"options": { "reduceOptions": { "calcs": ["lastNotNull"] }, "colorMode": "value", "graphMode": "area" },
"fieldConfig": {
"defaults": {
"unit": "s",
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 0.2 },
{ "color": "red", "value": 1 }
]
}
}
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "histogram_quantile(0.50, sum(rate(traces_spanmetrics_latency_bucket{service=\"backend\"}[5m])) by (le))",
"legendFormat": "p50",
"instant": true
}
]
},
{
"id": 4,
"type": "stat",
"title": "p95 Latency",
"gridPos": { "x": 12, "y": 0, "w": 4, "h": 4 },
"options": { "reduceOptions": { "calcs": ["lastNotNull"] }, "colorMode": "value", "graphMode": "area" },
"fieldConfig": {
"defaults": {
"unit": "s",
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 0.5 },
{ "color": "red", "value": 2 }
]
}
}
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "histogram_quantile(0.95, sum(rate(traces_spanmetrics_latency_bucket{service=\"backend\"}[5m])) by (le))",
"legendFormat": "p95",
"instant": true
}
]
},
{
"id": 5,
"type": "stat",
"title": "p99 Latency",
"gridPos": { "x": 16, "y": 0, "w": 4, "h": 4 },
"options": { "reduceOptions": { "calcs": ["lastNotNull"] }, "colorMode": "value", "graphMode": "area" },
"fieldConfig": {
"defaults": {
"unit": "s",
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 1 },
{ "color": "red", "value": 5 }
]
}
}
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "histogram_quantile(0.99, sum(rate(traces_spanmetrics_latency_bucket{service=\"backend\"}[5m])) by (le))",
"legendFormat": "p99",
"instant": true
}
]
},
{
"id": 6,
"type": "stat",
"title": "5xx Errors / min",
"gridPos": { "x": 20, "y": 0, "w": 4, "h": 4 },
"options": { "reduceOptions": { "calcs": ["lastNotNull"] }, "colorMode": "background", "graphMode": "none" },
"fieldConfig": {
"defaults": {
"unit": "short",
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 1 },
{ "color": "red", "value": 5 }
]
}
}
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "sum(rate(http_client_request_duration_seconds_count{job=\"ui\", server_address=\"backend\", http_response_status_code=~\"5..\"}[5m])) * 60",
"legendFormat": "5xx/min",
"instant": true
}
]
},
{
"id": 10,
"type": "timeseries",
"title": "Request Rate by Status",
"gridPos": { "x": 0, "y": 4, "w": 12, "h": 8 },
"options": {
"tooltip": { "mode": "multi" },
"legend": { "displayMode": "list", "placement": "bottom" }
},
"fieldConfig": {
"defaults": { "unit": "reqps", "custom": { "lineWidth": 2, "fillOpacity": 10 } },
"overrides": [
{ "matcher": { "id": "byFrameRefID", "options": "errors" }, "properties": [{ "id": "color", "value": { "fixedColor": "red", "mode": "fixed" } }] }
]
},
"targets": [
{
"refId": "success",
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "sum(rate(http_client_request_duration_seconds_count{job=\"ui\", server_address=\"backend\", http_response_status_code=~\"2..\"}[5m]))",
"legendFormat": "2xx"
},
{
"refId": "notfound",
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "sum(rate(http_client_request_duration_seconds_count{job=\"ui\", server_address=\"backend\", http_response_status_code=~\"4..\"}[5m]))",
"legendFormat": "4xx"
},
{
"refId": "errors",
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "sum(rate(http_client_request_duration_seconds_count{job=\"ui\", server_address=\"backend\", http_response_status_code=~\"5..\"}[5m]))",
"legendFormat": "5xx"
}
]
},
{
"id": 11,
"type": "timeseries",
"title": "Latency Percentiles (backend spans)",
"gridPos": { "x": 12, "y": 4, "w": 12, "h": 8 },
"options": {
"tooltip": { "mode": "multi" },
"legend": { "displayMode": "list", "placement": "bottom" }
},
"fieldConfig": {
"defaults": { "unit": "s", "custom": { "lineWidth": 2, "fillOpacity": 10 } }
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "histogram_quantile(0.50, sum(rate(traces_spanmetrics_latency_bucket{service=\"backend\"}[5m])) by (le))",
"legendFormat": "p50"
},
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "histogram_quantile(0.95, sum(rate(traces_spanmetrics_latency_bucket{service=\"backend\"}[5m])) by (le))",
"legendFormat": "p95"
},
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "histogram_quantile(0.99, sum(rate(traces_spanmetrics_latency_bucket{service=\"backend\"}[5m])) by (le))",
"legendFormat": "p99"
}
]
},
{
"id": 12,
"type": "timeseries",
"title": "Requests / min by HTTP method (UI → Backend)",
"gridPos": { "x": 0, "y": 12, "w": 12, "h": 8 },
"options": {
"tooltip": { "mode": "multi" },
"legend": { "displayMode": "list", "placement": "bottom" }
},
"fieldConfig": {
"defaults": { "unit": "short", "custom": { "lineWidth": 2, "fillOpacity": 5 } }
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "sum(rate(http_client_request_duration_seconds_count{job=\"ui\", server_address=\"backend\"}[5m])) by (http_request_method) * 60",
"legendFormat": "{{http_request_method}}"
}
]
},
{
"id": 13,
"type": "timeseries",
"title": "Requests / min — UI → PocketBase",
"gridPos": { "x": 12, "y": 12, "w": 12, "h": 8 },
"description": "Traffic from SvelteKit server to PocketBase (auth, collections, etc.).",
"options": {
"tooltip": { "mode": "multi" },
"legend": { "displayMode": "list", "placement": "bottom" }
},
"fieldConfig": {
"defaults": { "unit": "short", "custom": { "lineWidth": 2, "fillOpacity": 5 } }
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "sum(rate(http_client_request_duration_seconds_count{job=\"ui\", server_address=\"pocketbase\"}[5m])) by (http_request_method, http_response_status_code) * 60",
"legendFormat": "{{http_request_method}} {{http_response_status_code}}"
}
]
},
{
"id": 14,
"type": "timeseries",
"title": "UI → Backend Latency (p50 / p95)",
"gridPos": { "x": 0, "y": 20, "w": 12, "h": 8 },
"description": "HTTP client latency as seen from the SvelteKit SSR layer calling backend.",
"options": {
"tooltip": { "mode": "multi" },
"legend": { "displayMode": "list", "placement": "bottom" }
},
"fieldConfig": {
"defaults": { "unit": "s", "custom": { "lineWidth": 2, "fillOpacity": 5 } }
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "histogram_quantile(0.50, sum(rate(http_client_request_duration_seconds_bucket{job=\"ui\", server_address=\"backend\"}[5m])) by (le))",
"legendFormat": "p50"
},
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "histogram_quantile(0.95, sum(rate(http_client_request_duration_seconds_bucket{job=\"ui\", server_address=\"backend\"}[5m])) by (le))",
"legendFormat": "p95"
}
]
},
{
"id": 20,
"type": "logs",
"title": "Backend Errors",
"gridPos": { "x": 0, "y": 28, "w": 24, "h": 10 },
"options": {
"showTime": true,
"showLabels": false,
"wrapLogMessage": true,
"prettifyLogMessage": true,
"enableLogDetails": true,
"sortOrder": "Descending",
"dedupStrategy": "none"
},
"targets": [
{
"datasource": { "type": "loki", "uid": "loki" },
"expr": "{service_name=\"backend\"} | json | level =~ `(WARN|ERROR|error|warn)`",
"legendFormat": ""
}
]
}
]
}


@@ -0,0 +1,275 @@
{
"uid": "libnovel-catalogue",
"title": "Catalogue & Content Progress",
"description": "Scraping progress, audio generation coverage, and catalogue health derived from runner structured logs.",
"tags": ["libnovel", "catalogue", "content"],
"timezone": "browser",
"refresh": "1m",
"time": { "from": "now-24h", "to": "now" },
"schemaVersion": 39,
"panels": [
{
"id": 1,
"type": "stat",
"title": "Books Scraped (last 24h)",
"description": "Count of unique book slugs appearing in successful scrape task completions.",
"gridPos": { "x": 0, "y": 0, "w": 4, "h": 4 },
"options": { "reduceOptions": { "calcs": ["sum"] }, "colorMode": "value", "graphMode": "none" },
"fieldConfig": {
"defaults": {
"color": { "fixedColor": "blue", "mode": "fixed" },
"thresholds": { "mode": "absolute", "steps": [] }
}
},
"targets": [
{
"datasource": { "type": "loki", "uid": "loki" },
"expr": "sum_over_time({service_name=\"runner\"} | json | msg=`scrape task done` [24h])",
"legendFormat": "books scraped"
}
]
},
{
"id": 2,
"type": "stat",
"title": "Chapters Scraped (last 24h)",
"gridPos": { "x": 4, "y": 0, "w": 4, "h": 4 },
"options": { "reduceOptions": { "calcs": ["sum"] }, "colorMode": "value", "graphMode": "none" },
"fieldConfig": {
"defaults": {
"color": { "fixedColor": "blue", "mode": "fixed" },
"thresholds": { "mode": "absolute", "steps": [] }
}
},
"targets": [
{
"datasource": { "type": "loki", "uid": "loki" },
"expr": "sum_over_time({service_name=\"runner\"} | json | unwrap scraped [24h])",
"legendFormat": "chapters scraped"
}
]
},
{
"id": 3,
"type": "stat",
"title": "Audio Jobs Completed (last 24h)",
"gridPos": { "x": 8, "y": 0, "w": 4, "h": 4 },
"options": { "reduceOptions": { "calcs": ["sum"] }, "colorMode": "value", "graphMode": "none" },
"fieldConfig": {
"defaults": {
"color": { "fixedColor": "green", "mode": "fixed" },
"thresholds": { "mode": "absolute", "steps": [] }
}
},
"targets": [
{
"datasource": { "type": "loki", "uid": "loki" },
"expr": "sum_over_time({service_name=\"runner\"} | json | msg=`audio task done` [24h])",
"legendFormat": "audio done"
}
]
},
{
"id": 4,
"type": "stat",
"title": "Audio Jobs Failed (last 24h)",
"gridPos": { "x": 12, "y": 0, "w": 4, "h": 4 },
"options": { "reduceOptions": { "calcs": ["sum"] }, "colorMode": "background", "graphMode": "none" },
"fieldConfig": {
"defaults": {
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 1 },
{ "color": "red", "value": 5 }
]
}
}
},
"targets": [
{
"datasource": { "type": "loki", "uid": "loki" },
"expr": "sum_over_time({service_name=\"runner\"} | json | msg=`audio task failed` [24h])",
"legendFormat": "audio failed"
}
]
},
{
"id": 5,
"type": "stat",
"title": "Scrape Errors (last 24h)",
"gridPos": { "x": 16, "y": 0, "w": 4, "h": 4 },
"options": { "reduceOptions": { "calcs": ["sum"] }, "colorMode": "background", "graphMode": "none" },
"fieldConfig": {
"defaults": {
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 1 },
{ "color": "red", "value": 10 }
]
}
}
},
"targets": [
{
"datasource": { "type": "loki", "uid": "loki" },
"expr": "sum_over_time({service_name=\"runner\"} | json | msg=`scrape task failed` [24h])",
"legendFormat": "scrape errors"
}
]
},
{
"id": 6,
"type": "stat",
"title": "Catalogue Refresh — Books Indexed",
"description": "Total books indexed in the last catalogue refresh cycle (from the ok field in the summary log).",
"gridPos": { "x": 20, "y": 0, "w": 4, "h": 4 },
"options": { "reduceOptions": { "calcs": ["lastNotNull"] }, "colorMode": "value", "graphMode": "none" },
"fieldConfig": {
"defaults": {
"color": { "fixedColor": "purple", "mode": "fixed" },
"thresholds": { "mode": "absolute", "steps": [] }
}
},
"targets": [
{
"datasource": { "type": "loki", "uid": "loki" },
"expr": "last_over_time({service_name=\"runner\"} | json | op=`catalogue_refresh` | msg=`catalogue refresh done` | unwrap ok [7d])",
"legendFormat": "indexed"
}
]
},
{
"id": 10,
"type": "timeseries",
"title": "Audio Generation Rate (tasks/min)",
"gridPos": { "x": 0, "y": 4, "w": 12, "h": 8 },
"description": "Rate of audio task completions and failures over time.",
"options": {
"tooltip": { "mode": "multi" },
"legend": { "displayMode": "list", "placement": "bottom" }
},
"fieldConfig": {
"defaults": { "unit": "short", "custom": { "lineWidth": 2, "fillOpacity": 10 } },
"overrides": [
{ "matcher": { "id": "byName", "options": "failed" }, "properties": [{ "id": "color", "value": { "fixedColor": "red", "mode": "fixed" } }] },
{ "matcher": { "id": "byName", "options": "completed" }, "properties": [{ "id": "color", "value": { "fixedColor": "green", "mode": "fixed" } }] }
]
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "sum(rate(traces_spanmetrics_calls_total{service=\"runner\", span_name=\"runner.audio_task\", status_code!=\"STATUS_CODE_ERROR\"}[5m])) * 60",
"legendFormat": "completed"
},
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "sum(rate(traces_spanmetrics_calls_total{service=\"runner\", span_name=\"runner.audio_task\", status_code=\"STATUS_CODE_ERROR\"}[5m])) * 60",
"legendFormat": "failed"
}
]
},
{
"id": 11,
"type": "timeseries",
"title": "Scraping Rate (tasks/min)",
"gridPos": { "x": 12, "y": 4, "w": 12, "h": 8 },
"description": "Rate of scrape task completions and failures over time.",
"options": {
"tooltip": { "mode": "multi" },
"legend": { "displayMode": "list", "placement": "bottom" }
},
"fieldConfig": {
"defaults": { "unit": "short", "custom": { "lineWidth": 2, "fillOpacity": 10 } },
"overrides": [
{ "matcher": { "id": "byName", "options": "failed" }, "properties": [{ "id": "color", "value": { "fixedColor": "red", "mode": "fixed" } }] },
{ "matcher": { "id": "byName", "options": "completed" }, "properties": [{ "id": "color", "value": { "fixedColor": "blue", "mode": "fixed" } }] }
]
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "sum(rate(traces_spanmetrics_calls_total{service=\"runner\", span_name=\"runner.scrape_task\", status_code!=\"STATUS_CODE_ERROR\"}[5m])) * 60",
"legendFormat": "completed"
},
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "sum(rate(traces_spanmetrics_calls_total{service=\"runner\", span_name=\"runner.scrape_task\", status_code=\"STATUS_CODE_ERROR\"}[5m])) * 60",
"legendFormat": "failed"
}
]
},
{
"id": 20,
"type": "logs",
"title": "Scrape Task Events",
"description": "One log line per completed or failed scrape task. Fields: task_id, kind, url, scraped, skipped, errors.",
"gridPos": { "x": 0, "y": 12, "w": 24, "h": 10 },
"options": {
"showTime": true,
"showLabels": false,
"wrapLogMessage": false,
"prettifyLogMessage": true,
"enableLogDetails": true,
"sortOrder": "Descending",
"dedupStrategy": "none"
},
"targets": [
{
"datasource": { "type": "loki", "uid": "loki" },
"expr": "{service_name=\"runner\"} | json | msg =~ `scrape task (done|failed|starting)`",
"legendFormat": ""
}
]
},
{
"id": 21,
"type": "logs",
"title": "Audio Task Events",
"description": "One log line per completed or failed audio task. Fields: task_id, slug, chapter, voice, key (on success), reason (on failure).",
"gridPos": { "x": 0, "y": 22, "w": 24, "h": 10 },
"options": {
"showTime": true,
"showLabels": false,
"wrapLogMessage": false,
"prettifyLogMessage": true,
"enableLogDetails": true,
"sortOrder": "Descending",
"dedupStrategy": "none"
},
"targets": [
{
"datasource": { "type": "loki", "uid": "loki" },
"expr": "{service_name=\"runner\"} | json | msg =~ `audio task (done|failed|starting)`",
"legendFormat": ""
}
]
},
{
"id": 22,
"type": "logs",
"title": "Catalogue Refresh Progress",
"description": "Progress logs from the background catalogue refresh (every 24h). Fields: op=catalogue_refresh, scraped, ok, skipped, errors.",
"gridPos": { "x": 0, "y": 32, "w": 24, "h": 8 },
"options": {
"showTime": true,
"showLabels": false,
"wrapLogMessage": false,
"prettifyLogMessage": true,
"enableLogDetails": true,
"sortOrder": "Descending",
"dedupStrategy": "none"
},
"targets": [
{
"datasource": { "type": "loki", "uid": "loki" },
"expr": "{service_name=\"runner\"} | json | op=`catalogue_refresh`",
"legendFormat": ""
}
]
}
]
}


@@ -0,0 +1,13 @@
# Grafana dashboard provisioning
# Points Grafana at the local dashboards directory.
# Drop any .json dashboard file into homelab/otel/grafana/provisioning/dashboards/
# and it will appear in Grafana automatically on restart.
apiVersion: 1
providers:
  - name: libnovel
    folder: LibNovel
    type: file
    options:
      path: /etc/grafana/provisioning/dashboards


@@ -0,0 +1,377 @@
{
"uid": "libnovel-runner",
"title": "Runner Operations",
"description": "Task queue health, throughput, TTS routing, and live logs for the homelab runner.",
"tags": ["libnovel", "runner"],
"timezone": "browser",
"refresh": "30s",
"time": { "from": "now-3h", "to": "now" },
"schemaVersion": 39,
"panels": [
{
"id": 1,
"type": "stat",
"title": "Tasks Running",
"gridPos": { "x": 0, "y": 0, "w": 4, "h": 4 },
"options": {
"reduceOptions": { "calcs": ["lastNotNull"] },
"colorMode": "background",
"graphMode": "none",
"textMode": "auto"
},
"fieldConfig": {
"defaults": {
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 1 },
{ "color": "red", "value": 3 }
]
},
"mappings": []
}
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "libnovel_runner_tasks_running",
"legendFormat": "running",
"instant": true
}
]
},
{
"id": 2,
"type": "stat",
"title": "Tasks Completed (total)",
"gridPos": { "x": 4, "y": 0, "w": 4, "h": 4 },
"options": {
"reduceOptions": { "calcs": ["lastNotNull"] },
"colorMode": "background",
"graphMode": "area",
"textMode": "auto"
},
"fieldConfig": {
"defaults": {
"color": { "fixedColor": "green", "mode": "fixed" },
"thresholds": { "mode": "absolute", "steps": [] }
}
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "libnovel_runner_tasks_completed_total",
"legendFormat": "completed",
"instant": true
}
]
},
{
"id": 3,
"type": "stat",
"title": "Tasks Failed (total)",
"gridPos": { "x": 8, "y": 0, "w": 4, "h": 4 },
"options": {
"reduceOptions": { "calcs": ["lastNotNull"] },
"colorMode": "background",
"graphMode": "none",
"textMode": "auto"
},
"fieldConfig": {
"defaults": {
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 1 },
{ "color": "red", "value": 5 }
]
}
}
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "libnovel_runner_tasks_failed_total",
"legendFormat": "failed",
"instant": true
}
]
},
{
"id": 4,
"type": "stat",
"title": "Runner Uptime",
"gridPos": { "x": 12, "y": 0, "w": 4, "h": 4 },
"options": {
"reduceOptions": { "calcs": ["lastNotNull"] },
"colorMode": "value",
"graphMode": "none",
"textMode": "auto"
},
"fieldConfig": {
"defaults": {
"unit": "s",
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "red", "value": null },
{ "color": "yellow", "value": 60 },
{ "color": "green", "value": 300 }
]
}
}
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "libnovel_runner_uptime_seconds",
"legendFormat": "uptime",
"instant": true
}
]
},
{
"id": 5,
"type": "stat",
"title": "Task Failure Rate",
"gridPos": { "x": 16, "y": 0, "w": 4, "h": 4 },
"options": {
"reduceOptions": { "calcs": ["lastNotNull"] },
"colorMode": "background",
"graphMode": "none",
"textMode": "auto"
},
"fieldConfig": {
"defaults": {
"unit": "percentunit",
"thresholds": {
"mode": "absolute",
"steps": [
{ "color": "green", "value": null },
{ "color": "yellow", "value": 0.05 },
{ "color": "red", "value": 0.2 }
]
}
}
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "libnovel_runner_tasks_failed_total / clamp_min(libnovel_runner_tasks_completed_total + libnovel_runner_tasks_failed_total, 1)",
"legendFormat": "failure rate",
"instant": true
}
]
},
{
"id": 6,
"type": "stat",
"title": "Runner Alive",
"gridPos": { "x": 20, "y": 0, "w": 4, "h": 4 },
"options": {
"reduceOptions": { "calcs": ["lastNotNull"] },
"colorMode": "background",
"graphMode": "none",
"textMode": "auto"
},
"fieldConfig": {
"defaults": {
"mappings": [
{ "type": "value", "options": { "1": { "text": "UP", "color": "green" }, "0": { "text": "DOWN", "color": "red" } } }
],
"thresholds": { "mode": "absolute", "steps": [] }
}
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "up{job=\"libnovel-runner\"}",
"legendFormat": "runner",
"instant": true
}
]
},
{
"id": 10,
"type": "timeseries",
"title": "Task Throughput (per minute)",
"gridPos": { "x": 0, "y": 4, "w": 12, "h": 8 },
"options": {
"tooltip": { "mode": "multi" },
"legend": { "displayMode": "list", "placement": "bottom" }
},
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": { "lineWidth": 2, "fillOpacity": 10 }
},
"overrides": [
{ "matcher": { "id": "byName", "options": "failed" }, "properties": [{ "id": "color", "value": { "fixedColor": "red", "mode": "fixed" } }] },
{ "matcher": { "id": "byName", "options": "completed" }, "properties": [{ "id": "color", "value": { "fixedColor": "green", "mode": "fixed" } }] }
]
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "rate(libnovel_runner_tasks_completed_total[5m]) * 60",
"legendFormat": "completed"
},
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "rate(libnovel_runner_tasks_failed_total[5m]) * 60",
"legendFormat": "failed"
},
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "libnovel_runner_tasks_running",
"legendFormat": "running"
}
]
},
{
"id": 11,
"type": "timeseries",
"title": "Audio Task Span Latency (p50 / p95 / p99)",
"gridPos": { "x": 12, "y": 4, "w": 12, "h": 8 },
"description": "End-to-end latency of runner.audio_task spans from Tempo span metrics.",
"options": {
"tooltip": { "mode": "multi" },
"legend": { "displayMode": "list", "placement": "bottom" }
},
"fieldConfig": {
"defaults": {
"unit": "s",
"custom": { "lineWidth": 2, "fillOpacity": 10 }
}
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "histogram_quantile(0.50, sum(rate(traces_spanmetrics_latency_bucket{service=\"runner\", span_name=\"runner.audio_task\"}[5m])) by (le))",
"legendFormat": "p50"
},
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "histogram_quantile(0.95, sum(rate(traces_spanmetrics_latency_bucket{service=\"runner\", span_name=\"runner.audio_task\"}[5m])) by (le))",
"legendFormat": "p95"
},
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "histogram_quantile(0.99, sum(rate(traces_spanmetrics_latency_bucket{service=\"runner\", span_name=\"runner.audio_task\"}[5m])) by (le))",
"legendFormat": "p99"
}
]
},
{
"id": 20,
"type": "timeseries",
"title": "Scrape Task Span Latency (p50 / p95 / p99)",
"gridPos": { "x": 0, "y": 12, "w": 12, "h": 8 },
"description": "End-to-end latency of runner.scrape_task spans from Tempo span metrics.",
"options": {
"tooltip": { "mode": "multi" },
"legend": { "displayMode": "list", "placement": "bottom" }
},
"fieldConfig": {
"defaults": {
"unit": "s",
"custom": { "lineWidth": 2, "fillOpacity": 10 }
}
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "histogram_quantile(0.50, sum(rate(traces_spanmetrics_latency_bucket{service=\"runner\", span_name=\"runner.scrape_task\"}[5m])) by (le))",
"legendFormat": "p50"
},
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "histogram_quantile(0.95, sum(rate(traces_spanmetrics_latency_bucket{service=\"runner\", span_name=\"runner.scrape_task\"}[5m])) by (le))",
"legendFormat": "p95"
},
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "histogram_quantile(0.99, sum(rate(traces_spanmetrics_latency_bucket{service=\"runner\", span_name=\"runner.scrape_task\"}[5m])) by (le))",
"legendFormat": "p99"
}
]
},
{
"id": 21,
"type": "timeseries",
"title": "Audio vs Scrape Task Rate",
"gridPos": { "x": 12, "y": 12, "w": 12, "h": 8 },
"description": "Relative throughput of audio generation vs book scraping.",
"options": {
"tooltip": { "mode": "multi" },
"legend": { "displayMode": "list", "placement": "bottom" }
},
"fieldConfig": {
"defaults": {
"unit": "ops",
"custom": { "lineWidth": 2, "fillOpacity": 10 }
}
},
"targets": [
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "sum(rate(traces_spanmetrics_calls_total{service=\"runner\", span_name=\"runner.audio_task\"}[5m]))",
"legendFormat": "audio tasks/s"
},
{
"datasource": { "type": "prometheus", "uid": "prometheus" },
"expr": "sum(rate(traces_spanmetrics_calls_total{service=\"runner\", span_name=\"runner.scrape_task\"}[5m]))",
"legendFormat": "scrape tasks/s"
}
]
},
{
"id": 30,
"type": "logs",
"title": "Runner Logs (errors & warnings)",
"gridPos": { "x": 0, "y": 20, "w": 24, "h": 10 },
"options": {
"showTime": true,
"showLabels": false,
"showCommonLabels": false,
"wrapLogMessage": true,
"prettifyLogMessage": true,
"enableLogDetails": true,
"sortOrder": "Descending",
"dedupStrategy": "none"
},
"targets": [
{
"datasource": { "type": "loki", "uid": "loki" },
"expr": "{service_name=\"runner\"} | json | level =~ `(WARN|ERROR|error|warn)`",
"legendFormat": ""
}
]
},
{
"id": 31,
"type": "logs",
"title": "Runner Logs (all)",
"gridPos": { "x": 0, "y": 30, "w": 24, "h": 10 },
"options": {
"showTime": true,
"showLabels": false,
"showCommonLabels": false,
"wrapLogMessage": true,
"prettifyLogMessage": true,
"enableLogDetails": true,
"sortOrder": "Descending",
"dedupStrategy": "none"
},
"targets": [
{
"datasource": { "type": "loki", "uid": "loki" },
"expr": "{service_name=\"runner\"} | json",
"legendFormat": ""
}
]
}
]
}


@@ -0,0 +1,53 @@
# Grafana datasource provisioning
# Auto-configures Tempo, Prometheus, and Loki on first start.
# No manual setup needed in the UI.
apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    uid: tempo
    url: http://tempo:3200
    access: proxy
    isDefault: false
    jsonData:
      httpMethod: GET
      serviceMap:
        datasourceUid: prometheus
      nodeGraph:
        enabled: true
      traceQuery:
        timeShiftEnabled: true
        spanStartTimeShift: "1h"
        spanEndTimeShift: "-1h"
      spanBar:
        type: "Tag"
        tag: "http.url"
      lokiSearch:
        datasourceUid: loki
  - name: Prometheus
    type: prometheus
    uid: prometheus
    url: http://prometheus:9090
    access: proxy
    isDefault: true
    jsonData:
      httpMethod: POST
      exemplarTraceIdDestinations:
        - name: traceID
          datasourceUid: tempo
  - name: Loki
    type: loki
    uid: loki
    url: http://loki:3100
    access: proxy
    isDefault: false
    jsonData:
      derivedFields:
        - datasourceUid: tempo
          matcherRegex: '"traceID":"(\w+)"'
          name: TraceID
          url: "$${__value.raw}"
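The Loki derived field above turns matching log lines into clickable links to Tempo: the `matcherRegex` captures the trace ID, and Grafana substitutes the captured value into the link via `${__value.raw}`. A quick Python check of the same pattern (the sample log line is hypothetical; any line containing `"traceID":"<id>"` would match):

```python
import re

# Same regex as the derivedFields matcherRegex in the Loki datasource
matcher = re.compile(r'"traceID":"(\w+)"')

log_line = '{"level":"info","msg":"audio task done","traceID":"4bf92f3577b34da6a3ce929d0e0e4736"}'
match = matcher.search(log_line)
if match:
    trace_id = match.group(1)  # what Grafana exposes as ${__value.raw}
    print(trace_id)  # 4bf92f3577b34da6a3ce929d0e0e4736
```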

homelab/otel/loki.yaml

@@ -0,0 +1,38 @@
# Loki config — minimal single-node setup
# Receives logs from OTel Collector. 30-day retention.
auth_enabled: false
server:
  http_listen_port: 3100
  grpc_listen_port: 9096
common:
  instance_addr: 127.0.0.1
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory
schema_config:
  configs:
    - from: 2024-01-01
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h
limits_config:
  retention_period: 720h # 30 days
compactor:
  working_directory: /loki/compactor
  delete_request_store: filesystem
  retention_enabled: true


@@ -0,0 +1,22 @@
# Prometheus config
# Scrapes OTel collector self-metrics and runner metrics endpoint.
# Backend metrics come in via OTel remote-write — no direct scrape needed.
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    environment: production
scrape_configs:
  # OTel Collector self-metrics
  - job_name: otel-collector
    static_configs:
      - targets: ["otel-collector:8888"]
  # Runner JSON metrics endpoint (native format, no Prometheus client yet)
  # Will be replaced by OTLP once runner is instrumented with OTel SDK.
  - job_name: libnovel-runner
    metrics_path: /metrics
    static_configs:
      - targets: ["runner:9091"]

homelab/otel/tempo.yaml

@@ -0,0 +1,45 @@
# Tempo config — minimal single-node setup
# Stores traces locally. Grafana queries via the HTTP API on port 3200.
server:
http_listen_port: 3200
distributor:
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
ingester:
trace_idle_period: 10s
max_block_bytes: 104857600 # 100MB
max_block_duration: 30m
compactor:
compaction:
block_retention: 720h # 30 days
storage:
trace:
backend: local
local:
path: /var/tempo/blocks
wal:
path: /var/tempo/wal
metrics_generator:
registry:
external_labels:
source: tempo
storage:
path: /var/tempo/generator/wal
remote_write:
- url: http://prometheus:9090/api/v1/write
send_exemplars: true
overrides:
defaults:
metrics_generator:
processors: [service-graphs, span-metrics]
generate_native_histograms: both


@@ -1,21 +1,41 @@
# LibNovel homelab runner
#
# Connects to production PocketBase and MinIO via public subdomains.
# All secrets come from Doppler (project=libnovel, config=prd).
# All secrets come from Doppler (project=libnovel, config=prd_homelab).
# Run with: doppler run -- docker compose up -d
#
# Differs from prod runner:
# - RUNNER_WORKER_ID=homelab-runner-1 (unique, avoids task claiming conflicts)
# - MINIO_ENDPOINT/USE_SSL → storage.libnovel.cc over HTTPS
# - POCKETBASE_URL → https://pb.libnovel.cc
# - MEILI_URL/VALKEY_ADDR → unset (not exposed publicly; not needed by runner)
# - MEILI_URL → https://search.libnovel.cc (Caddy-proxied)
# - VALKEY_ADDR → unset (not exposed publicly)
# - RUNNER_SKIP_INITIAL_CATALOGUE_REFRESH=true
# - Redis service for Asynq task queue (local to homelab, exposed to prod via Caddy TCP proxy)
services:
redis:
image: redis:7-alpine
restart: unless-stopped
volumes:
- redis_data:/data
command: >
redis-server
--appendonly yes
--requirepass "${REDIS_PASSWORD}"
healthcheck:
test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
interval: 10s
timeout: 5s
retries: 5
runner:
image: kalekber/libnovel-runner:latest
restart: unless-stopped
stop_grace_period: 135s
depends_on:
redis:
condition: service_healthy
environment:
# ── PocketBase ──────────────────────────────────────────────────────────
POCKETBASE_URL: "https://pb.libnovel.cc"
@@ -30,14 +50,25 @@ services:
MINIO_PUBLIC_ENDPOINT: "${MINIO_PUBLIC_ENDPOINT}"
MINIO_PUBLIC_USE_SSL: "${MINIO_PUBLIC_USE_SSL}"
# ── Meilisearch / Valkey — not exposed, disabled ────────────────────────
MEILI_URL: ""
# ── Meilisearch (via search.libnovel.cc Caddy proxy) ────────────────────
MEILI_URL: "${MEILI_URL}"
MEILI_API_KEY: "${MEILI_API_KEY}"
VALKEY_ADDR: ""
# Force IPv4 DNS resolution — homelab has no IPv6 route to search.libnovel.cc
GODEBUG: "preferIPv4=1"
# ── Kokoro TTS ──────────────────────────────────────────────────────────
KOKORO_URL: "${KOKORO_URL}"
KOKORO_VOICE: "${KOKORO_VOICE}"
# ── Pocket TTS ──────────────────────────────────────────────────────────
POCKET_TTS_URL: "${POCKET_TTS_URL}"
# ── Asynq / Redis (local service) ───────────────────────────────────────
# The runner connects to the local Redis sidecar.
REDIS_ADDR: "redis:6379"
REDIS_PASSWORD: "${REDIS_PASSWORD}"
# ── Runner tuning ───────────────────────────────────────────────────────
RUNNER_WORKER_ID: "${RUNNER_WORKER_ID}"
RUNNER_POLL_INTERVAL: "${RUNNER_POLL_INTERVAL}"
@@ -56,3 +87,6 @@ services:
interval: 60s
timeout: 5s
retries: 3
volumes:
redis_data:


@@ -185,7 +185,9 @@ create "app_users" '{
{"name":"email", "type":"text"},
{"name":"email_verified", "type":"bool"},
{"name":"verification_token", "type":"text"},
{"name":"verification_token_exp","type":"text"}
{"name":"verification_token_exp","type":"text"},
{"name":"oauth_provider", "type":"text"},
{"name":"oauth_id", "type":"text"}
]}'
create "user_sessions" '{
@@ -254,5 +256,7 @@ add_field "app_users" "email" "text"
add_field "app_users" "email_verified" "bool"
add_field "app_users" "verification_token" "text"
add_field "app_users" "verification_token_exp" "text"
add_field "app_users" "oauth_provider" "text"
add_field "app_users" "oauth_id" "text"
log "done"

ui/package-lock.json (generated)

File diff suppressed because it is too large.

@@ -12,6 +12,7 @@
"check:watch": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json --watch"
},
"devDependencies": {
"@sentry/vite-plugin": "^5.1.1",
"@sveltejs/adapter-auto": "^7.0.0",
"@sveltejs/adapter-node": "^5.5.4",
"@sveltejs/kit": "^2.50.2",
@@ -29,6 +30,11 @@
"dependencies": {
"@aws-sdk/client-s3": "^3.1005.0",
"@aws-sdk/s3-request-presigner": "^3.1005.0",
"@opentelemetry/exporter-logs-otlp-http": "^0.214.0",
"@opentelemetry/exporter-trace-otlp-http": "^0.214.0",
"@opentelemetry/resources": "^2.6.1",
"@opentelemetry/sdk-node": "^0.214.0",
"@opentelemetry/semantic-conventions": "^1.40.0",
"@sentry/sveltekit": "^10.45.0",
"cropperjs": "^1.6.2",
"ioredis": "^5.3.2",


@@ -6,7 +6,10 @@ import { env } from '$env/dynamic/public';
if (env.PUBLIC_GLITCHTIP_DSN) {
Sentry.init({
dsn: env.PUBLIC_GLITCHTIP_DSN,
tracesSampleRate: 0.1
tracesSampleRate: 0.1,
// Must match the release name used when uploading source maps in CI
// (BUILD_VERSION injected by Dockerfile as PUBLIC_BUILD_VERSION).
release: env.PUBLIC_BUILD_VERSION || undefined
});
}


@@ -7,13 +7,43 @@ import { env as pubEnv } from '$env/dynamic/public';
import { log } from '$lib/server/logger';
import { createUserSession, touchUserSession, isSessionRevoked } from '$lib/server/pocketbase';
import { drain as drainPresignCache } from '$lib/server/presignCache';
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { OTLPLogExporter } from '@opentelemetry/exporter-logs-otlp-http';
import { BatchLogRecordProcessor } from '@opentelemetry/sdk-logs';
import { resourceFromAttributes } from '@opentelemetry/resources';
import { ATTR_SERVICE_NAME, ATTR_SERVICE_VERSION } from '@opentelemetry/semantic-conventions';
// ─── OpenTelemetry server-side tracing + logs ─────────────────────────────────
// No-op when OTEL_EXPORTER_OTLP_ENDPOINT is unset (e.g. local dev).
const otlpEndpoint = process.env.OTEL_EXPORTER_OTLP_ENDPOINT;
if (otlpEndpoint) {
const sdk = new NodeSDK({
resource: resourceFromAttributes({
[ATTR_SERVICE_NAME]: process.env.OTEL_SERVICE_NAME ?? 'ui',
[ATTR_SERVICE_VERSION]: pubEnv.PUBLIC_BUILD_VERSION ?? 'dev'
}),
traceExporter: new OTLPTraceExporter({ url: `${otlpEndpoint}/v1/traces` }),
logRecordProcessors: [
new BatchLogRecordProcessor(
new OTLPLogExporter({ url: `${otlpEndpoint}/v1/logs` })
)
]
});
sdk.start();
process.once('SIGTERM', () => sdk.shutdown().catch(() => {}));
process.once('SIGINT', () => sdk.shutdown().catch(() => {}));
}
// ─── Sentry / GlitchTip server-side error tracking ────────────────────────────
// No-op when PUBLIC_GLITCHTIP_DSN is unset (e.g. local dev).
if (pubEnv.PUBLIC_GLITCHTIP_DSN) {
Sentry.init({
dsn: pubEnv.PUBLIC_GLITCHTIP_DSN,
tracesSampleRate: 0.1
tracesSampleRate: 0.1,
// Must match the release name used when uploading source maps in CI
// (BUILD_VERSION injected by Dockerfile as PUBLIC_BUILD_VERSION).
release: pubEnv.PUBLIC_BUILD_VERSION || undefined
});
}


@@ -51,6 +51,7 @@
import { audioStore } from '$lib/audio.svelte';
import { Button } from '$lib/components/ui/button';
import { cn } from '$lib/utils';
import type { Voice } from '$lib/types';
interface Props {
slug: string;
@@ -63,8 +64,8 @@
nextChapter?: number | null;
/** Full chapter list for the book (number + title). Written into the store. */
chapters?: { number: number; title: string }[];
/** List of available voices from the Kokoro API. */
voices?: string[];
/** List of available voices from the backend. */
voices?: Voice[];
}
let {
@@ -78,6 +79,10 @@
voices = []
}: Props = $props();
// ── Derived: voices grouped by engine ──────────────────────────────────
const kokoroVoices = $derived(voices.filter((v) => v.engine === 'kokoro'));
const pocketVoices = $derived(voices.filter((v) => v.engine === 'pocket-tts'));
// ── Voice selector state ────────────────────────────────────────────────
let showVoicePanel = $state(false);
/** Voice whose sample is currently being fetched or playing. */
@@ -86,10 +91,33 @@
let sampleAudio = $state<HTMLAudioElement | null>(null);
/**
* Human-readable label for a voice ID.
* e.g. "af_bella" → "Bella (US F)" | "bm_george" → "George (UK M)"
* Human-readable label for a voice.
* Kokoro: "af_bella" → "Bella (US F)"
* Pocket-TTS: "alba" → "Alba (EN F)"
* Falls back gracefully if called with a bare string (e.g. from the store default).
*/
function voiceLabel(v: string): string {
function voiceLabel(v: Voice | string): string {
// Handle plain string IDs stored in audioStore.voice
if (typeof v === 'string') {
// Try to match against the voices list
const found = voices.find((x) => x.id === v);
if (found) return voiceLabel(found);
// Bare kokoro ID fallback (legacy / default "af_bella")
return kokoroLabelFromId(v);
}
if (v.engine === 'pocket-tts') {
const langLabel = v.lang.toUpperCase().replace('-', '');
const genderLabel = v.gender.toUpperCase();
const name = v.id.replace(/_/g, ' ').replace(/\b\w/g, (c) => c.toUpperCase());
return `${name} (${langLabel} ${genderLabel})`;
}
// Kokoro
return kokoroLabelFromId(v.id);
}
function kokoroLabelFromId(id: string): string {
const langMap: Record<string, string> = {
af: 'US', am: 'US',
bf: 'UK', bm: 'UK',
@@ -112,9 +140,8 @@
pf: 'F', pm: 'M',
zf: 'F', zm: 'M',
};
const prefix = v.slice(0, 2);
const name = v.slice(3);
// Capitalise and strip legacy v0 prefix.
const prefix = id.slice(0, 2);
const name = id.slice(3);
const displayName = name
.replace(/^v0/, '')
.replace(/^([a-z])/, (c: string) => c.toUpperCase());
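The label transforms above can be sketched in isolation. This is a simplified reimplementation, not the component's exact code: it assumes the `Voice` shape `{ id, engine, lang, gender }` used in this diff and includes only a subset of the prefix maps:

```typescript
// Sketch of the voiceLabel transforms (subset of the real prefix maps).
interface Voice { id: string; engine: 'kokoro' | 'pocket-tts'; lang: string; gender: string }

const langMap: Record<string, string> = { af: 'US', am: 'US', bf: 'UK', bm: 'UK' };
const genderMap: Record<string, string> = { af: 'F', am: 'M', bf: 'F', bm: 'M' };

function label(v: Voice): string {
  if (v.engine === 'pocket-tts') {
    // "alba" → "Alba"; "en-gb" → "ENGB"; "f" → "F"
    const name = v.id.replace(/_/g, ' ').replace(/\b\w/g, (c: string) => c.toUpperCase());
    return `${name} (${v.lang.toUpperCase().replace('-', '')} ${v.gender.toUpperCase()})`;
  }
  // Kokoro IDs look like "af_bella": 2-char prefix encodes lang + gender.
  const prefix = v.id.slice(0, 2);
  const name = v.id.slice(3).replace(/^v0/, '').replace(/^([a-z])/, (c: string) => c.toUpperCase());
  return `${name} (${langMap[prefix] ?? '??'} ${genderMap[prefix] ?? '?'})`;
}

console.log(label({ id: 'af_bella', engine: 'kokoro', lang: 'en', gender: 'f' }));  // → "Bella (US F)"
console.log(label({ id: 'alba', engine: 'pocket-tts', lang: 'en', gender: 'f' })); // → "Alba (EN F)"
```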
@@ -316,23 +343,28 @@
// ── API helpers ────────────────────────────────────────────────────────────
type PresignResult =
| { ready: true; url: string }
| { ready: false; enqueued: boolean }; // enqueued=true → presign already POSTed
async function tryPresign(
targetSlug: string,
targetChapter: number,
targetVoice: string
): Promise<string | null> {
): Promise<PresignResult> {
const params = new URLSearchParams({
slug: targetSlug,
n: String(targetChapter),
voice: targetVoice
});
const res = await fetch(`/api/presign/audio?${params}`);
// 202: TTS was just enqueued by the presign endpoint — audio not ready yet.
// 202: presign endpoint already triggered TTS — skip the POST, go straight to polling.
// 404: legacy fallback (should no longer occur after endpoint change).
if (res.status === 202 || res.status === 404) return null;
if (res.status === 202) return { ready: false, enqueued: true };
if (res.status === 404) return { ready: false, enqueued: false };
if (!res.ok) throw new Error(`presign HTTP ${res.status}`);
const data = (await res.json()) as { url: string };
return data.url;
return { ready: true, url: data.url };
}
type AudioStatusResponse =
@@ -394,50 +426,52 @@
try {
// Fast path: already generated
const url = await tryPresign(slug, nextChapter, voice);
if (url) {
const presignResult = await tryPresign(slug, nextChapter, voice);
if (presignResult.ready) {
stopNextProgress();
audioStore.nextProgress = 100;
audioStore.nextAudioUrl = url;
audioStore.nextAudioUrl = presignResult.url;
audioStore.nextStatus = 'prefetched';
return;
}
// Slow path: trigger Kokoro generation (non-blocking POST), then poll.
const res = await fetch(`/api/audio/${slug}/${nextChapter}`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ voice })
});
if (!res.ok) throw new Error(`Prefetch generation failed: HTTP ${res.status}`);
// Slow path: trigger generation (or skip POST if presign already enqueued).
if (!presignResult.enqueued) {
const res = await fetch(`/api/audio/${slug}/${nextChapter}`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ voice })
});
if (!res.ok) throw new Error(`Prefetch generation failed: HTTP ${res.status}`);
// Whether the server returned 200 (already cached) or 202 (enqueued),
// always presign — the status endpoint no longer returns a proxy URL.
if (res.status === 200) {
// Body is { status: 'done' } — audio confirmed in MinIO. Presign it.
await res.body?.cancel();
}
// else 202: generation enqueued — fall through to poll.
if (res.status !== 200) {
// 202: poll until done.
const final = await pollAudioStatus(slug, nextChapter, voice);
stopNextProgress();
audioStore.nextProgress = 100;
if (final.status === 'failed') {
throw new Error(`Prefetch failed: ${(final as { error?: string }).error ?? 'unknown'}`);
if (res.status === 200) {
// Body is { status: 'done' } — audio confirmed in MinIO. Presign it.
await res.body?.cancel();
stopNextProgress();
audioStore.nextProgress = 100;
const doneUrl = await tryPresign(slug, nextChapter, voice);
if (!doneUrl.ready) throw new Error('Prefetch: audio done but presign returned 404');
audioStore.nextAudioUrl = doneUrl.url;
audioStore.nextStatus = 'prefetched';
return;
}
} else {
stopNextProgress();
audioStore.nextProgress = 100;
// 202: generation enqueued — fall through to poll.
}
// Poll until done (covers both: presign-enqueued and POST-enqueued paths).
const final = await pollAudioStatus(slug, nextChapter, voice);
stopNextProgress();
audioStore.nextProgress = 100;
if (final.status === 'failed') {
throw new Error(`Prefetch failed: ${(final as { error?: string }).error ?? 'unknown'}`);
}
// Audio is ready in MinIO — get a direct presigned URL.
const doneUrl = await tryPresign(slug, nextChapter, voice);
if (!doneUrl) throw new Error('Prefetch: audio done but presign returned 404');
if (!doneUrl.ready) throw new Error('Prefetch: audio done but presign returned 404');
audioStore.nextAudioUrl = doneUrl;
audioStore.nextAudioUrl = doneUrl.url;
audioStore.nextStatus = 'prefetched';
} catch {
stopNextProgress();
@@ -505,9 +539,9 @@
}
// Fast path B: audio already in MinIO (presign check).
const url = await tryPresign(slug, chapter, voice);
if (url) {
audioStore.audioUrl = url;
const presignResult = await tryPresign(slug, chapter, voice);
if (presignResult.ready) {
audioStore.audioUrl = presignResult.url;
audioStore.status = 'ready';
// Restore last saved position after the audio element loads
restoreSavedAudioTime();
@@ -520,33 +554,44 @@
audioStore.status = 'generating';
startProgress();
const res = await fetch(`/api/audio/${slug}/${chapter}`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ voice })
});
if (!res.ok) throw new Error(`Generation failed: HTTP ${res.status}`);
// presignResult.enqueued=true means /api/presign/audio already POSTed on our
// behalf — skip the duplicate POST and go straight to polling.
if (!presignResult.enqueued) {
const res = await fetch(`/api/audio/${slug}/${chapter}`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ voice })
});
if (!res.ok) throw new Error(`Generation failed: HTTP ${res.status}`);
if (res.status !== 200) {
// 202: generation enqueued — poll until done.
const final = await pollAudioStatus(slug, chapter, voice);
if (final.status === 'failed') {
throw new Error(
`Generation failed: ${(final as { error?: string }).error ?? 'unknown error'}`
);
if (res.status === 200) {
// Already cached — body is { status: 'done' }, no url needed.
await res.body?.cancel();
await finishProgress();
const doneUrl = await tryPresign(slug, chapter, voice);
if (!doneUrl.ready) throw new Error('Audio generated but presign returned 404');
audioStore.audioUrl = doneUrl.url;
audioStore.status = 'ready';
maybeStartPrefetch();
return;
}
} else {
// 200: already cached — body is { status: 'done' }, no url needed.
await res.body?.cancel();
// 202: fall through to polling below.
}
// Poll until the runner finishes generating.
const final = await pollAudioStatus(slug, chapter, voice);
if (final.status === 'failed') {
throw new Error(
`Generation failed: ${(final as { error?: string }).error ?? 'unknown error'}`
);
}
await finishProgress();
// Audio is ready in MinIO — always use a presigned URL for direct playback.
const doneUrl = await tryPresign(slug, chapter, voice);
if (!doneUrl) throw new Error('Audio generated but presign returned 404');
audioStore.audioUrl = doneUrl;
if (!doneUrl.ready) throw new Error('Audio generated but presign returned 404');
audioStore.audioUrl = doneUrl.url;
audioStore.status = 'ready';
// Don't restore time for freshly generated audio — position is 0
// Immediately start pre-generating the next chapter in background.
@@ -627,6 +672,52 @@
<svelte:window onkeydown={handleKeyDown} />
<!-- ── Voice row snippet (reused in both engine sections) ──────────────── -->
{#snippet voiceRow(v: import('$lib/types').Voice)}
<div
class={cn('flex items-center gap-2 px-3 py-2 hover:bg-zinc-800 transition-colors cursor-pointer', audioStore.voice === v.id && 'bg-amber-400/10')}
role="button"
tabindex="0"
onclick={() => selectVoice(v.id)}
onkeydown={(e) => e.key === 'Enter' && selectVoice(v.id)}
>
<!-- Selected indicator -->
<div class="w-4 flex-shrink-0">
{#if audioStore.voice === v.id}
<svg class="w-3.5 h-3.5 text-amber-400" fill="currentColor" viewBox="0 0 24 24">
<path d="M9 16.17L4.83 12l-1.42 1.41L9 19 21 7l-1.41-1.41L9 16.17z"/>
</svg>
{/if}
</div>
<!-- Voice name -->
<span class={cn('flex-1 text-xs', audioStore.voice === v.id ? 'text-amber-400 font-medium' : 'text-zinc-300')}>
{voiceLabel(v)}
</span>
<span class="text-zinc-600 text-xs font-mono">{v.id}</span>
<!-- Sample play button -->
<Button
variant="ghost"
size="icon"
class={cn('h-6 w-6 flex-shrink-0', samplePlayingVoice === v.id ? 'text-amber-400 bg-amber-400/15 hover:bg-amber-400/25' : 'text-zinc-500 hover:text-zinc-200')}
onclick={(e) => { e.stopPropagation(); playSample(v.id); }}
title={samplePlayingVoice === v.id ? 'Stop sample' : 'Play sample'}
aria-label={samplePlayingVoice === v.id ? `Stop ${v.id} sample` : `Play ${v.id} sample`}
>
{#if samplePlayingVoice === v.id}
<svg class="w-3.5 h-3.5" fill="currentColor" viewBox="0 0 24 24">
<path d="M6 6h12v12H6z"/>
</svg>
{:else}
<svg class="w-3.5 h-3.5" fill="currentColor" viewBox="0 0 24 24">
<path d="M8 5v14l11-7z"/>
</svg>
{/if}
</Button>
</div>
{/snippet}
<div class="mt-6 p-4 rounded-lg bg-zinc-800 border border-zinc-700">
<div class="flex items-center justify-between gap-2 mb-3">
<div class="flex items-center gap-2">
@@ -674,50 +765,25 @@
</Button>
</div>
<div class="max-h-64 overflow-y-auto">
{#each voices as v (v)}
<div
class={cn('flex items-center gap-2 px-3 py-2 hover:bg-zinc-800 transition-colors cursor-pointer', audioStore.voice === v && 'bg-amber-400/10')}
role="button"
tabindex="0"
onclick={() => selectVoice(v)}
onkeydown={(e) => e.key === 'Enter' && selectVoice(v)}
>
<!-- Selected indicator -->
<div class="w-4 flex-shrink-0">
{#if audioStore.voice === v}
<svg class="w-3.5 h-3.5 text-amber-400" fill="currentColor" viewBox="0 0 24 24">
<path d="M9 16.17L4.83 12l-1.42 1.41L9 19 21 7l-1.41-1.41L9 16.17z"/>
</svg>
{/if}
</div>
<!-- Voice name -->
<span class={cn('flex-1 text-xs', audioStore.voice === v ? 'text-amber-400 font-medium' : 'text-zinc-300')}>
{voiceLabel(v)}
</span>
<span class="text-zinc-600 text-xs font-mono">{v}</span>
<!-- Sample play button (stop propagation so click doesn't select) -->
<Button
variant="ghost"
size="icon"
class={cn('h-6 w-6 flex-shrink-0', samplePlayingVoice === v ? 'text-amber-400 bg-amber-400/15 hover:bg-amber-400/25' : 'text-zinc-500 hover:text-zinc-200')}
onclick={(e) => { e.stopPropagation(); playSample(v); }}
title={samplePlayingVoice === v ? 'Stop sample' : 'Play sample'}
aria-label={samplePlayingVoice === v ? `Stop ${v} sample` : `Play ${v} sample`}
>
{#if samplePlayingVoice === v}
<svg class="w-3.5 h-3.5" fill="currentColor" viewBox="0 0 24 24">
<path d="M6 6h12v12H6z"/>
</svg>
{:else}
<svg class="w-3.5 h-3.5" fill="currentColor" viewBox="0 0 24 24">
<path d="M8 5v14l11-7z"/>
</svg>
{/if}
</Button>
<!-- Kokoro (GPU) section -->
{#if kokoroVoices.length > 0}
<div class="px-3 py-1.5 bg-zinc-800/70 border-b border-zinc-700/50">
<span class="text-[10px] font-semibold text-zinc-500 uppercase tracking-widest">Kokoro (GPU)</span>
</div>
{/each}
{#each kokoroVoices as v (v.id)}
{@render voiceRow(v)}
{/each}
{/if}
<!-- Pocket TTS (CPU) section -->
{#if pocketVoices.length > 0}
<div class="px-3 py-1.5 bg-zinc-800/70 border-b border-zinc-700/50 {kokoroVoices.length > 0 ? 'border-t border-zinc-700' : ''}">
<span class="text-[10px] font-semibold text-zinc-500 uppercase tracking-widest">Pocket TTS (CPU)</span>
</div>
{#each pocketVoices as v (v.id)}
{@render voiceRow(v)}
{/each}
{/if}
</div>
<div class="px-3 py-2 border-t border-zinc-700 bg-zinc-800/50">
<p class="text-xs text-zinc-500">


@@ -0,0 +1,72 @@
/**
* Generic Valkey (Redis-compatible) cache.
*
* Reuses the same ioredis singleton from presignCache.ts but exposes a
* simple typed get/set/invalidate API for arbitrary JSON values.
*
* Usage:
* const books = await cache.get<Book[]>('books:all');
* await cache.set('books:all', books, 5 * 60);
* await cache.invalidate('books:all');
*/
import Redis from 'ioredis';
let _client: Redis | null = null;
function client(): Redis {
if (!_client) {
const url = process.env.VALKEY_URL ?? 'redis://valkey:6379';
_client = new Redis(url, {
lazyConnect: false,
enableOfflineQueue: true,
maxRetriesPerRequest: 2
});
_client.on('error', (err: Error) => {
console.error('[cache] Valkey error:', err.message);
});
}
return _client;
}
/** Return the cached value for key, or null if absent / expired / error. */
export async function get<T>(key: string): Promise<T | null> {
try {
const raw = await client().get(key);
if (!raw) return null;
return JSON.parse(raw) as T;
} catch {
return null;
}
}
/**
* Store a value under key for ttlSeconds seconds.
* Silently no-ops on Valkey errors so callers never crash.
*/
export async function set<T>(key: string, value: T, ttlSeconds: number): Promise<void> {
try {
await client().set(key, JSON.stringify(value), 'EX', ttlSeconds);
} catch {
// non-fatal
}
}
/** Delete a key immediately (e.g. after a write that invalidates it). */
export async function invalidate(key: string): Promise<void> {
try {
await client().del(key);
} catch {
// non-fatal
}
}
/** Invalidate all keys matching a glob pattern (e.g. 'books:*'). */
export async function invalidatePattern(pattern: string): Promise<void> {
try {
const keys = await client().keys(pattern);
if (keys.length > 0) await client().del(...keys);
} catch {
// non-fatal
}
}


@@ -1,195 +0,0 @@
/**
* Minimal SMTP mailer for email verification.
*
* Uses Node's built-in `tls` module to connect to smtp.resend.com:465
* (implicit TLS / SMTPS) — no external dependencies required.
*
* Env vars (injected by docker-compose via Doppler):
* SMTP_HOST smtp.resend.com
* SMTP_PORT 465
* SMTP_USER resend
* SMTP_PASSWORD re_...
* SMTP_FROM noreply@libnovel.cc
* APP_URL https://libnovel.cc (used to build verification links)
*/
import { env } from '$env/dynamic/private';
import { log } from '$lib/server/logger';
import * as tls from 'node:tls';
const SMTP_HOST = env.SMTP_HOST ?? 'smtp.resend.com';
const SMTP_PORT = parseInt(env.SMTP_PORT ?? '465', 10);
const SMTP_USER = env.SMTP_USER ?? '';
const SMTP_PASSWORD = env.SMTP_PASSWORD ?? '';
const SMTP_FROM = env.SMTP_FROM ?? 'noreply@libnovel.cc';
export const APP_URL = (env.APP_URL ?? 'https://libnovel.cc').replace(/\/$/, '');
// ─── Low-level SMTP over implicit TLS ────────────────────────────────────────
function smtpEncode(s: string): string {
return Buffer.from(s).toString('base64');
}
/**
* Send a raw email via SMTP over implicit TLS (port 465).
* Returns true on success, throws on failure.
*/
async function sendSmtp(opts: {
to: string;
subject: string;
html: string;
text: string;
}): Promise<void> {
return new Promise((resolve, reject) => {
const socket = tls.connect(
{ host: SMTP_HOST, port: SMTP_PORT, rejectUnauthorized: true },
() => {
// TLS handshake complete — SMTP conversation begins
}
);
socket.setEncoding('utf8');
socket.setTimeout(15_000);
socket.on('timeout', () => {
socket.destroy(new Error('SMTP connection timed out'));
});
let buf = '';
let step = 0;
const send = (cmd: string) => socket.write(cmd + '\r\n');
const boundary = `----=_Part_${Date.now()}`;
const multipart = [
`--${boundary}`,
'Content-Type: text/plain; charset=UTF-8',
'',
opts.text,
`--${boundary}`,
'Content-Type: text/html; charset=UTF-8',
'',
opts.html,
`--${boundary}--`
].join('\r\n');
const message = [
`From: LibNovel <${SMTP_FROM}>`,
`To: ${opts.to}`,
`Subject: ${opts.subject}`,
'MIME-Version: 1.0',
`Content-Type: multipart/alternative; boundary="${boundary}"`,
'',
multipart
].join('\r\n');
socket.on('data', (chunk: string) => {
buf += chunk;
// Process complete lines
const lines = buf.split('\r\n');
buf = lines.pop() ?? '';
for (const line of lines) {
if (!line) continue;
const code = parseInt(line.slice(0, 3), 10);
// Only act on the final response line (no continuation dash)
if (line[3] === '-') continue;
if (code >= 400) {
socket.destroy(new Error(`SMTP error: ${line}`));
return;
}
switch (step) {
case 0: // 220 banner
send(`EHLO libnovel.cc`);
step++;
break;
case 1: // 250 EHLO
send('AUTH LOGIN');
step++;
break;
case 2: // 334 Username prompt
send(smtpEncode(SMTP_USER));
step++;
break;
case 3: // 334 Password prompt
send(smtpEncode(SMTP_PASSWORD));
step++;
break;
case 4: // 235 Auth success
send(`MAIL FROM:<${SMTP_FROM}>`);
step++;
break;
case 5: // 250 MAIL FROM ok
send(`RCPT TO:<${opts.to}>`);
step++;
break;
case 6: // 250 RCPT TO ok
send('DATA');
step++;
break;
case 7: // 354 Start data
send(message + '\r\n.');
step++;
break;
case 8: // 250 Message accepted
send('QUIT');
step++;
break;
case 9: // 221 Bye
socket.destroy();
resolve();
break;
}
}
});
socket.on('error', (err) => reject(err));
socket.on('close', () => {
if (step < 9) reject(new Error('SMTP connection closed unexpectedly'));
});
});
}
// ─── Email templates ──────────────────────────────────────────────────────────
export async function sendVerificationEmail(to: string, token: string): Promise<void> {
const link = `${APP_URL}/verify-email?token=${token}`;
const html = `
<!DOCTYPE html>
<html>
<head><meta charset="UTF-8"></head>
<body style="font-family:sans-serif;background:#18181b;color:#f4f4f5;padding:32px;">
<div style="max-width:480px;margin:0 auto;">
<h1 style="color:#f59e0b;font-size:24px;margin-bottom:8px;">Verify your email</h1>
<p style="color:#a1a1aa;margin-bottom:24px;">
Thanks for signing up to LibNovel. Click the button below to verify your email address.
The link expires in 24 hours.
</p>
<a href="${link}"
style="display:inline-block;background:#f59e0b;color:#18181b;font-weight:600;
padding:12px 24px;border-radius:6px;text-decoration:none;font-size:15px;">
Verify email
</a>
<p style="margin-top:24px;color:#71717a;font-size:13px;">
Or copy this link:<br>
<a href="${link}" style="color:#f59e0b;word-break:break-all;">${link}</a>
</p>
<p style="margin-top:32px;color:#52525b;font-size:12px;">
If you didn't create a LibNovel account, you can safely ignore this email.
</p>
</div>
</body>
</html>`;
const text = `Verify your LibNovel email address\n\nClick this link to verify your account (expires in 24 hours):\n${link}\n\nIf you didn't sign up, ignore this email.`;
try {
await sendSmtp({ to, subject: 'Verify your LibNovel email', html, text });
log.info('email', 'verification email sent', { to });
} catch (err) {
log.error('email', 'failed to send verification email', { to, err: String(err) });
throw err;
}
}


@@ -6,6 +6,7 @@
import { env } from '$env/dynamic/private';
import { log } from '$lib/server/logger';
import * as cache from '$lib/server/cache';
const PB_URL = env.POCKETBASE_URL ?? 'http://localhost:8090';
const PB_EMAIL = env.POCKETBASE_ADMIN_EMAIL ?? 'admin@libnovel.local';
@@ -67,6 +68,8 @@ export interface User {
email_verified?: boolean;
verification_token?: string;
verification_token_exp?: string;
oauth_provider?: string;
oauth_id?: string;
}
// ─── Auth token cache ─────────────────────────────────────────────────────────
@@ -198,16 +201,65 @@ async function listOne<T>(collection: string, filter: string): Promise<T | null>
// ─── Books ────────────────────────────────────────────────────────────────────
const BOOKS_CACHE_KEY = 'books:all';
const BOOKS_CACHE_TTL = 5 * 60; // 5 minutes
export async function listBooks(): Promise<Book[]> {
const cached = await cache.get<Book[]>(BOOKS_CACHE_KEY);
if (cached) {
log.debug('pocketbase', 'listBooks cache hit', { total: cached.length });
return cached;
}
const books = await listAll<Book>('books', '', '+title');
const nullTitles = books.filter((b) => b.title == null).length;
if (nullTitles > 0) {
log.warn('pocketbase', 'listBooks: books with null title', { count: nullTitles, total: books.length });
}
log.debug('pocketbase', 'listBooks', { total: books.length, nullTitles });
log.debug('pocketbase', 'listBooks cache miss', { total: books.length, nullTitles });
await cache.set(BOOKS_CACHE_KEY, books, BOOKS_CACHE_TTL);
return books;
}
/**
* Fetch only the books whose slugs are in the given set.
* Uses PocketBase filter `slug IN (...)` — a single request regardless of how
* many slugs are requested. Falls back to empty array on error.
*
* Use this instead of listBooks() whenever you only need a small subset of
* books (e.g. the user's reading list or saved shelf).
*
* PocketBase filter syntax for IN: slug='a' || slug='b' || ...
* Limited to 200 slugs to keep the filter URL sane; callers with larger sets
* should fall back to listBooks().
*/
export async function getBooksBySlugs(slugs: Iterable<string>): Promise<Book[]> {
const slugArr = [...new Set(slugs)].slice(0, 200);
if (slugArr.length === 0) return [];
// Check cache for each slug individually (populated by prior listBooks calls).
// If all slugs hit, skip the network round-trip entirely.
const cached = await cache.get<Book[]>(BOOKS_CACHE_KEY);
if (cached) {
const slugSet = new Set(slugArr);
const found = cached.filter((b) => slugSet.has(b.slug));
if (found.length === slugArr.length) {
log.debug('pocketbase', 'getBooksBySlugs cache hit', { count: found.length });
return found;
}
}
// Build filter: slug='a' || slug='b' || ...
const filter = slugArr.map((s) => `slug='${s.replace(/'/g, "\\'")}'`).join(' || ');
const books = await listAll<Book>('books', filter, '+title');
log.debug('pocketbase', 'getBooksBySlugs', { requested: slugArr.length, found: books.length });
return books;
}
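The OR-filter construction above can be isolated into a tiny helper. A sketch with made-up slugs, mirroring the `replace(/'/g, "\\'")` escaping in the diff:

```typescript
// Sketch: the PocketBase OR-filter built by getBooksBySlugs, in isolation.
function buildSlugFilter(slugs: string[]): string {
  return slugs.map((s) => `slug='${s.replace(/'/g, "\\'")}'`).join(' || ');
}

console.log(buildSlugFilter(['mother-of-learning', 'shadow-slave']));
// → "slug='mother-of-learning' || slug='shadow-slave'"
```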
/** Invalidate the books cache (call after a book is created/updated/deleted). */
export async function invalidateBooksCache(): Promise<void> {
await cache.invalidate(BOOKS_CACHE_KEY);
}
export async function getBook(slug: string): Promise<Book | null> {
return listOne<Book>('books', `slug="${slug}"`);
}
@@ -496,8 +548,75 @@ export async function getUserByEmail(email: string): Promise<User | null> {
return listOne<User>('app_users', `email="${email.replace(/"/g, '\\"')}"`);
}
/**
* Look up a user by OAuth provider + provider user ID. Returns null if not found.
*/
export async function getUserByOAuth(provider: string, oauthId: string): Promise<User | null> {
return listOne<User>(
'app_users',
`oauth_provider="${provider.replace(/"/g, '\\"')}"&&oauth_id="${oauthId.replace(/"/g, '\\"')}"`
);
}
/**
* Create a new user via OAuth (no password). email_verified is true since the
* provider already verified it. Throws on DB errors.
*/
export async function createOAuthUser(
username: string,
email: string,
provider: string,
oauthId: string,
avatarUrl?: string,
role = 'user'
): Promise<User> {
log.info('pocketbase', 'createOAuthUser', { username, email, provider });
const res = await pbPost('/api/collections/app_users/records', {
username,
password_hash: '',
role,
email,
email_verified: true,
oauth_provider: provider,
oauth_id: oauthId,
avatar_url: avatarUrl ?? '',
created: new Date().toISOString()
});
if (!res.ok) {
const body = await res.text().catch(() => '');
log.error('pocketbase', 'createOAuthUser: PocketBase rejected record', {
username,
status: res.status,
body
});
throw new Error(`Failed to create OAuth user: ${res.status} ${body}`);
}
return res.json() as Promise<User>;
}
/**
* Link an OAuth provider to an existing user account.
*/
export async function linkOAuthToUser(
userId: string,
provider: string,
oauthId: string
): Promise<void> {
const res = await pbPatch(`/api/collections/app_users/records/${userId}`, {
oauth_provider: provider,
oauth_id: oauthId,
email_verified: true
});
if (!res.ok) {
const body = await res.text().catch(() => '');
log.error('pocketbase', 'linkOAuthToUser: PATCH failed', { userId, status: res.status, body });
throw new Error(`Failed to link OAuth: ${res.status}`);
}
}
/**
* Look up a user by verification token. Returns null if not found.
* @deprecated Email verification removed — kept only for migration safety.
*/
export async function getUserByVerificationToken(token: string): Promise<User | null> {
return listOne<User>('app_users', `verification_token="${token.replace(/"/g, '\\"')}"`);
@@ -608,7 +727,7 @@ export async function changePassword(
/**
* Verify username + password. Returns the user on success, null on failure.
* Throws with message 'Email not verified' if the account exists but hasn't been verified.
* Only used for legacy accounts that still have a password_hash.
*/
export async function loginUser(username: string, password: string): Promise<User | null> {
log.debug('pocketbase', 'loginUser: lookup', { username });
@@ -617,15 +736,15 @@ export async function loginUser(username: string, password: string): Promise<Use
log.warn('pocketbase', 'loginUser: username not found', { username });
return null;
}
if (!user.password_hash) {
log.warn('pocketbase', 'loginUser: account has no password (OAuth-only)', { username });
return null;
}
const ok = verifyPassword(password, user.password_hash);
if (!ok) {
log.warn('pocketbase', 'loginUser: wrong password', { username });
return null;
}
if (!user.email_verified) {
log.warn('pocketbase', 'loginUser: email not verified', { username });
throw new Error('Email not verified');
}
log.info('pocketbase', 'loginUser: success', { username, role: user.role });
return user;
}

View File

@@ -6,6 +6,20 @@
* safe to import in both server and client code.
*/
// ── Voice ─────────────────────────────────────────────────────────────────────
/** A single TTS voice returned by GET /api/voices. */
export interface Voice {
/** Voice identifier passed to TTS clients (e.g. "af_bella", "alba"). */
id: string;
/** TTS engine: "kokoro" | "pocket-tts". */
engine: string;
/** Primary language tag (e.g. "en-us", "en-gb", "en", "es", "fr"). */
lang: string;
/** Gender: "f" | "m". */
gender: string;
}
// ── Comments ─────────────────────────────────────────────────────────────────
export interface BookComment {

View File

@@ -4,10 +4,12 @@ import { getSettings } from '$lib/server/pocketbase';
import { log } from '$lib/server/logger';
// Routes that are accessible without being logged in
const PUBLIC_ROUTES = new Set(['/login', '/verify-email']);
const PUBLIC_ROUTES = new Set(['/login', '/disclaimer', '/privacy', '/dmca', '/terms']);
export const load: LayoutServerLoad = async ({ locals, url }) => {
if (!PUBLIC_ROUTES.has(url.pathname) && !locals.user) {
// Allow /auth/* (OAuth initiation + callbacks) without login
const isPublic = PUBLIC_ROUTES.has(url.pathname) || url.pathname.startsWith('/auth/');
if (!isPublic && !locals.user) {
redirect(302, `/login`);
}

View File

@@ -426,16 +426,26 @@
</svg>
</a>
</nav>
<!-- Bottom row: legal links + copyright -->
<div class="flex flex-wrap items-center justify-center gap-x-5 gap-y-2 text-zinc-700">
<a href="/disclaimer" class="hover:text-zinc-500 transition-colors">Disclaimer</a>
<a href="/privacy" class="hover:text-zinc-500 transition-colors">Privacy</a>
<a href="/dmca" class="hover:text-zinc-500 transition-colors">DMCA</a>
<span>&copy; {new Date().getFullYear()} libnovel</span>
{#if env.PUBLIC_BUILD_VERSION && env.PUBLIC_BUILD_VERSION !== 'dev'}
<span class="text-zinc-800">{env.PUBLIC_BUILD_VERSION}+{env.PUBLIC_BUILD_COMMIT?.slice(0, 7)}</span>
<!-- Bottom row: legal links + copyright -->
<div class="flex flex-wrap items-center justify-center gap-x-5 gap-y-2 text-zinc-700">
<a href="/disclaimer" class="hover:text-zinc-500 transition-colors">Disclaimer</a>
<a href="/privacy" class="hover:text-zinc-500 transition-colors">Privacy</a>
<a href="/dmca" class="hover:text-zinc-500 transition-colors">DMCA</a>
<span>&copy; {new Date().getFullYear()} libnovel</span>
</div>
<!-- Build version / commit SHA -->
<div class="text-zinc-700 tabular-nums font-mono">
{#if env.PUBLIC_BUILD_VERSION && env.PUBLIC_BUILD_VERSION !== 'dev'}
<span title="Build version">{env.PUBLIC_BUILD_VERSION}</span>
{#if env.PUBLIC_BUILD_COMMIT && env.PUBLIC_BUILD_COMMIT !== 'unknown'}
<span class="text-zinc-800 select-all" title="Commit SHA"
>+{env.PUBLIC_BUILD_COMMIT.slice(0, 7)}</span
>
{/if}
</div>
{:else}
<span class="text-zinc-800">dev</span>
{/if}
</div>
</div>
</footer>
</div>

View File

@@ -1,6 +1,6 @@
import type { PageServerLoad } from './$types';
import {
listBooks,
getBooksBySlugs,
recentlyAddedBooks,
allProgress,
getHomeStats,
@@ -10,14 +10,15 @@ import { log } from '$lib/server/logger';
import type { Book, Progress } from '$lib/server/pocketbase';
export const load: PageServerLoad = async ({ locals }) => {
let allBooks: Book[] = [];
// Step 1: fetch progress + recent books + stats in parallel.
// We intentionally do NOT call listBooks() here — we only need books that
// appear in the user's progress list, which is a tiny subset of 15k books.
let recentBooks: Book[] = [];
let progressList: Progress[] = [];
let stats = { totalBooks: 0, totalChapters: 0 };
try {
[allBooks, recentBooks, progressList, stats] = await Promise.all([
listBooks(),
[recentBooks, progressList, stats] = await Promise.all([
recentlyAddedBooks(8),
allProgress(locals.sessionId, locals.user?.id),
getHomeStats()
@@ -26,8 +27,14 @@ export const load: PageServerLoad = async ({ locals }) => {
log.error('home', 'failed to load home data', { err: String(e) });
}
// Build slug → book lookup
const bookMap = new Map<string, Book>(allBooks.map((b) => [b.slug, b]));
// Step 2: fetch only the books we actually need for continue-reading.
// This is O(progress entries) instead of O(15k books).
const progressSlugs = progressList.map((p) => p.slug);
const progressBooks = progressSlugs.length > 0
? await getBooksBySlugs(progressSlugs).catch(() => [] as Book[])
: [];
const bookMap = new Map<string, Book>(progressBooks.map((b) => [b.slug, b]));
// Continue reading: progress entries joined with book data, most recent first
const continueReading = progressList

View File

@@ -1,66 +1,12 @@
import { json, error } from '@sveltejs/kit';
import { error } from '@sveltejs/kit';
import type { RequestHandler } from './$types';
import { createUser } from '$lib/server/pocketbase';
import { sendVerificationEmail } from '$lib/server/email';
import { log } from '$lib/server/logger';
/**
* POST /api/auth/register
* Body: { username: string, email: string, password: string }
* Returns: { pending_verification: true, email: string }
*
* Account is created but NOT activated until the user clicks the verification
* link sent to their email. The iOS app should show a "check your inbox" screen.
* Username/password registration has been replaced by OAuth2 (Google & GitHub).
* This endpoint is no longer supported.
*/
export const POST: RequestHandler = async ({ request }) => {
let body: { username?: string; email?: string; password?: string };
try {
body = await request.json();
} catch {
error(400, 'Invalid JSON body');
}
const username = (body.username ?? '').trim();
const email = (body.email ?? '').trim().toLowerCase();
const password = body.password ?? '';
if (!username || !email || !password) {
error(400, 'Username, email and password are required');
}
if (username.length < 3 || username.length > 32) {
error(400, 'Username must be between 3 and 32 characters');
}
if (!/^[a-zA-Z0-9_-]+$/.test(username)) {
error(400, 'Username may only contain letters, numbers, underscores and hyphens');
}
if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
error(400, 'Please enter a valid email address');
}
if (password.length < 8) {
error(400, 'Password must be at least 8 characters');
}
let user;
try {
user = await createUser(username, password, email);
} catch (e: unknown) {
const msg = e instanceof Error ? e.message : 'Registration failed.';
if (msg.includes('Username already taken')) {
error(409, 'That username is already taken');
}
if (msg.includes('Email already in use')) {
error(409, 'That email address is already registered');
}
log.error('api/auth/register', 'unexpected error', { username, err: String(e) });
error(500, 'An error occurred. Please try again.');
}
// Send verification email (non-fatal)
try {
await sendVerificationEmail(email, user.verification_token!);
} catch (e) {
log.error('api/auth/register', 'failed to send verification email', { username, email, err: String(e) });
}
return json({ pending_verification: true, email });
export const POST: RequestHandler = async () => {
error(410, 'Username/password registration is no longer supported. Please sign in with Google or GitHub.');
};

View File

@@ -4,6 +4,7 @@ import type { RequestHandler } from './$types';
import { getBook, listChapterIdx } from '$lib/server/pocketbase';
import { log } from '$lib/server/logger';
import { backendFetch } from '$lib/server/scraper';
import type { Voice } from '$lib/types';
/**
* GET /api/chapter/[slug]/[n]
@@ -48,11 +49,11 @@ export const GET: RequestHandler = async ({ params, url, locals }) => {
? '<p>' + chapterData.text.replace(/\n{2,}/g, '</p><p>').replace(/\n/g, '<br>') + '</p>'
: '';
let voices: string[] = [];
let voices: Voice[] = [];
try {
const vRes = await backendFetch('/api/voices');
if (vRes.ok) {
const d = (await vRes.json()) as { voices: string[] };
const d = (await vRes.json()) as { voices: Voice[] };
voices = d.voices ?? [];
}
} catch {
@@ -85,10 +86,10 @@ export const GET: RequestHandler = async ({ params, url, locals }) => {
const chapterIdx = chapters.find((c) => c.number === n);
if (!chapterIdx) error(404, `Chapter ${n} not found`);
let voices: string[] = [];
let voices: Voice[] = [];
try {
if (voicesRes?.ok) {
const data = (await voicesRes.json()) as { voices: string[] };
const data = (await voicesRes.json()) as { voices: Voice[] };
voices = data.voices ?? [];
}
} catch {

View File

@@ -1,7 +1,7 @@
import { json } from '@sveltejs/kit';
import type { RequestHandler } from './$types';
import {
listBooks,
getBooksBySlugs,
recentlyAddedBooks,
allProgress,
getHomeStats,
@@ -17,14 +17,12 @@ import type { Book, Progress } from '$lib/server/pocketbase';
* Requires authentication (enforced by layout guard).
*/
export const GET: RequestHandler = async ({ locals }) => {
let allBooks: Book[] = [];
let recentBooks: Book[] = [];
let progressList: Progress[] = [];
let stats = { totalBooks: 0, totalChapters: 0 };
try {
[allBooks, recentBooks, progressList, stats] = await Promise.all([
listBooks(),
[recentBooks, progressList, stats] = await Promise.all([
recentlyAddedBooks(8),
allProgress(locals.sessionId, locals.user?.id),
getHomeStats()
@@ -33,7 +31,13 @@ export const GET: RequestHandler = async ({ locals }) => {
log.error('api/home', 'failed to load home data', { err: String(e) });
}
const bookMap = new Map<string, Book>(allBooks.map((b) => [b.slug, b]));
// Fetch only the books we actually need for continue-reading.
const progressSlugs = progressList.map((p) => p.slug);
const progressBooks = progressSlugs.length > 0
? await getBooksBySlugs(progressSlugs).catch(() => [] as Book[])
: [];
const bookMap = new Map<string, Book>(progressBooks.map((b) => [b.slug, b]));
const continueReading = progressList
.filter((p) => bookMap.has(p.slug))

View File

@@ -1,7 +1,8 @@
import { json } from '@sveltejs/kit';
import type { RequestHandler } from './$types';
import { listBooks, allProgress, getSavedSlugs } from '$lib/server/pocketbase';
import { getBooksBySlugs, allProgress, getSavedSlugs } from '$lib/server/pocketbase';
import { log } from '$lib/server/logger';
import type { Book } from '$lib/server/pocketbase';
/**
* GET /api/library
@@ -11,23 +12,25 @@ import { log } from '$lib/server/logger';
* Response shape mirrors LibraryItem in the iOS APIClient.
*/
export const GET: RequestHandler = async ({ locals }) => {
let allBooks: Awaited<ReturnType<typeof listBooks>>;
let progressList: Awaited<ReturnType<typeof allProgress>>;
let savedSlugs: Set<string>;
let progressList: Awaited<ReturnType<typeof allProgress>> = [];
let savedSlugs: Set<string> = new Set();
try {
[allBooks, progressList, savedSlugs] = await Promise.all([
listBooks(),
[progressList, savedSlugs] = await Promise.all([
allProgress(locals.sessionId, locals.user?.id),
getSavedSlugs(locals.sessionId, locals.user?.id)
]);
} catch (e) {
log.error('api/library', 'failed to load library data', { err: String(e) });
allBooks = [];
progressList = [];
savedSlugs = new Set();
}
// Fetch only the books the user actually has in their library.
const progressSlugs = new Set(progressList.map((p) => p.slug));
const allNeededSlugs = new Set([...progressSlugs, ...savedSlugs]);
const books = allNeededSlugs.size > 0
? await getBooksBySlugs(allNeededSlugs).catch(() => [] as Book[])
: [];
const progressMap: Record<string, number> = {};
const progressUpdatedMap: Record<string, string> = {};
for (const p of progressList) {
@@ -35,9 +38,6 @@ export const GET: RequestHandler = async ({ locals }) => {
progressUpdatedMap[p.slug] = p.updated;
}
const progressSlugs = new Set(progressList.map((p) => p.slug));
const books = allBooks.filter((b) => progressSlugs.has(b.slug) || savedSlugs.has(b.slug));
const withProgress = books.filter((b) => progressSlugs.has(b.slug));
const savedOnly = books
.filter((b) => !progressSlugs.has(b.slug))

View File

@@ -1,11 +1,12 @@
import { json } from '@sveltejs/kit';
import type { RequestHandler } from './$types';
import { backendFetch } from '$lib/server/scraper';
import type { Voice } from '$lib/types';
/**
* GET /api/voices
* Proxies the voice list from the backend Kokoro.
* Returns { voices: string[] }
* Proxies the voice list from the backend (Kokoro + pocket-tts).
* Returns { voices: Voice[] }
*/
export const GET: RequestHandler = async () => {
try {
@@ -13,7 +14,7 @@ export const GET: RequestHandler = async () => {
if (!res.ok) {
return json({ voices: [] });
}
const data = (await res.json()) as { voices: string[] };
const data = (await res.json()) as { voices: Voice[] };
return json({ voices: data.voices ?? [] });
} catch {
return json({ voices: [] });

View File

@@ -0,0 +1,79 @@
/**
* GET /auth/[provider]
*
* Initiates the OAuth2 authorization code flow.
* Generates a random `state` param (stored in a short-lived cookie) to
* prevent CSRF, then redirects the browser to the provider's auth URL.
*
* Supported providers: google, github
*/
import { redirect, error } from '@sveltejs/kit';
import type { RequestHandler } from './$types';
import { env } from '$env/dynamic/private';
import { randomBytes } from 'node:crypto';
const PROVIDERS = {
google: {
authUrl: 'https://accounts.google.com/o/oauth2/v2/auth',
scopes: 'openid email profile'
},
github: {
authUrl: 'https://github.com/login/oauth/authorize',
scopes: 'read:user user:email'
}
} as const;
type Provider = keyof typeof PROVIDERS;
function clientId(provider: Provider): string {
if (provider === 'google') return env.GOOGLE_CLIENT_ID ?? '';
if (provider === 'github') return env.GITHUB_CLIENT_ID ?? '';
return '';
}
function redirectUri(provider: Provider, origin: string): string {
return `${origin}/auth/${provider}/callback`;
}
export const GET: RequestHandler = async ({ params, url, cookies }) => {
const provider = params.provider as Provider;
if (!(provider in PROVIDERS)) {
error(404, 'Unknown OAuth provider');
}
const id = clientId(provider);
if (!id) {
error(500, `OAuth provider "${provider}" is not configured`);
}
// Generate state token — stored in a 10-minute cookie
const state = randomBytes(16).toString('hex');
cookies.set(`oauth_state_${provider}`, state, {
path: '/',
httpOnly: true,
sameSite: 'lax',
maxAge: 60 * 10 // 10 minutes
});
// Where to send the user after successful auth (default: home)
const next = url.searchParams.get('next') ?? '/';
cookies.set(`oauth_next_${provider}`, next, {
path: '/',
httpOnly: true,
sameSite: 'lax',
maxAge: 60 * 10
});
const origin = url.origin;
const cfg = PROVIDERS[provider];
const params2 = new URLSearchParams({
client_id: id,
redirect_uri: redirectUri(provider, origin),
response_type: 'code',
scope: cfg.scopes,
state
});
redirect(302, `${cfg.authUrl}?${params2.toString()}`);
};
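The CSRF protection above hinges on a round-trip invariant: the `state` value written to the short-lived cookie at initiation must equal the `state` query param the provider echoes back to the callback. A sketch of that pair in isolation (hypothetical helpers, matching the generation used above):

```typescript
import { randomBytes } from 'node:crypto';

// Generate the random state token stored in the oauth_state_* cookie.
function makeState(): string {
  return randomBytes(16).toString('hex');
}

// Callback-side check: both values must be present and identical.
function stateIsValid(fromQuery: string | null, fromCookie: string | undefined): boolean {
  return !!fromQuery && !!fromCookie && fromQuery === fromCookie;
}
```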

View File

@@ -0,0 +1,246 @@
/**
* GET /auth/[provider]/callback
*
* Handles the OAuth2 authorization code callback.
*
* Flow:
* 1. Validate state cookie (CSRF check).
* 2. Exchange code for access token with the provider.
* 3. Fetch the user's profile (email, name, avatar) from the provider.
* 4. Look up app_users by (oauth_provider, oauth_id).
* - If found: log in.
* - If not found but email matches an existing user: link the account.
* - If not found at all: auto-create a new account.
* 5. Set auth cookie, redirect to `next` (default: '/').
*/
import { redirect, error } from '@sveltejs/kit';
import type { RequestHandler } from './$types';
import { env } from '$env/dynamic/private';
import { randomBytes } from 'node:crypto';
import {
getUserByOAuth,
getUserByEmail,
createOAuthUser,
linkOAuthToUser
} from '$lib/server/pocketbase';
import { createAuthToken } from '../../../../hooks.server';
import { createUserSession, mergeSessionProgress } from '$lib/server/pocketbase';
import { log } from '$lib/server/logger';
type Provider = 'google' | 'github';
const AUTH_COOKIE = 'libnovel_auth';
const ONE_YEAR = 60 * 60 * 24 * 365;
// ─── Token exchange ───────────────────────────────────────────────────────────
interface TokenResponse {
access_token: string;
token_type: string;
error?: string;
}
async function exchangeCode(
provider: Provider,
code: string,
redirectUri: string
): Promise<string> {
const clientId = provider === 'google' ? env.GOOGLE_CLIENT_ID : env.GITHUB_CLIENT_ID;
const clientSecret =
provider === 'google' ? env.GOOGLE_CLIENT_SECRET : env.GITHUB_CLIENT_SECRET;
const tokenUrl =
provider === 'google'
? 'https://oauth2.googleapis.com/token'
: 'https://github.com/login/oauth/access_token';
const res = await fetch(tokenUrl, {
method: 'POST',
headers: {
'Content-Type': 'application/x-www-form-urlencoded',
Accept: 'application/json'
},
body: new URLSearchParams({
code,
client_id: clientId ?? '',
client_secret: clientSecret ?? '',
redirect_uri: redirectUri,
grant_type: 'authorization_code'
}).toString()
});
if (!res.ok) {
const body = await res.text().catch(() => '');
log.error('oauth', 'token exchange failed', { provider, status: res.status, body });
throw new Error(`Token exchange failed: ${res.status}`);
}
const data = (await res.json()) as TokenResponse;
if (data.error || !data.access_token) {
log.error('oauth', 'token response error', { provider, error: data.error });
throw new Error(data.error ?? 'No access_token in response');
}
return data.access_token;
}
// ─── Profile fetching ─────────────────────────────────────────────────────────
interface OAuthProfile {
id: string; // provider's user ID (as string)
email: string;
name: string;
avatarUrl?: string;
}
async function fetchGoogleProfile(accessToken: string): Promise<OAuthProfile> {
const res = await fetch('https://www.googleapis.com/oauth2/v2/userinfo', {
headers: { Authorization: `Bearer ${accessToken}` }
});
if (!res.ok) throw new Error(`Google userinfo failed: ${res.status}`);
const d = await res.json();
return {
id: String(d.id),
email: d.email ?? '',
name: d.name ?? d.email ?? '',
avatarUrl: d.picture
};
}
async function fetchGitHubProfile(accessToken: string): Promise<OAuthProfile> {
const [userRes, emailRes] = await Promise.all([
fetch('https://api.github.com/user', {
headers: { Authorization: `Bearer ${accessToken}`, Accept: 'application/vnd.github+json' }
}),
fetch('https://api.github.com/user/emails', {
headers: { Authorization: `Bearer ${accessToken}`, Accept: 'application/vnd.github+json' }
})
]);
if (!userRes.ok) throw new Error(`GitHub user API failed: ${userRes.status}`);
const user = await userRes.json();
// Primary verified email — required for account linking
let email = user.email ?? '';
if (emailRes.ok) {
const emails = (await emailRes.json()) as Array<{
email: string;
primary: boolean;
verified: boolean;
}>;
const primary = emails.find((e) => e.primary && e.verified);
if (primary) email = primary.email;
}
if (!email) throw new Error('GitHub account has no verified primary email');
return {
id: String(user.id),
email,
name: user.name ?? user.login ?? email,
avatarUrl: user.avatar_url
};
}
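The email selection in `fetchGitHubProfile` can be isolated as a pure function, since GitHub's public profile email is often null and only a primary verified address should be trusted for account linking. A sketch restating that logic (hypothetical `pickPrimaryEmail`):

```typescript
interface GitHubEmail {
  email: string;
  primary: boolean;
  verified: boolean;
}

// Prefer the primary verified address from /user/emails; fall back to the
// (possibly empty) public email on the /user profile.
function pickPrimaryEmail(profileEmail: string, emails: GitHubEmail[]): string {
  const primary = emails.find((e) => e.primary && e.verified);
  return primary ? primary.email : profileEmail;
}
```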
// ─── Username derivation ──────────────────────────────────────────────────────
/** Derive a valid username from name/email. Sanitises to [a-zA-Z0-9_-], max 32 chars. */
function deriveUsername(name: string, email: string): string {
// Prefer the part before @ in the email for predictability
const base = (email.split('@')[0] ?? name)
.toLowerCase()
.replace(/[^a-z0-9_-]/g, '_')
.replace(/^_+|_+$/g, '')
.slice(0, 28);
// Append 4 random hex chars to avoid collisions without needing a DB round-trip
const suffix = randomBytes(2).toString('hex');
return `${base || 'user'}_${suffix}`;
}
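The sanitisation in `deriveUsername` is the testable part; restated here as a standalone sketch with the random suffix injected as a parameter so the behaviour is deterministic (the production version above generates it with `randomBytes`):

```typescript
// Same sanitisation pipeline as deriveUsername above, suffix injectable:
// lowercase, replace disallowed chars with '_', trim underscores, cap at 28
// chars, then append the suffix (keeping total length within the 32-char cap).
function deriveUsername(name: string, email: string, suffix: string): string {
  const base = (email.split('@')[0] ?? name)
    .toLowerCase()
    .replace(/[^a-z0-9_-]/g, '_')
    .replace(/^_+|_+$/g, '')
    .slice(0, 28);
  return `${base || 'user'}_${suffix}`;
}
```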
// ─── Handler ──────────────────────────────────────────────────────────────────
export const GET: RequestHandler = async ({ params, url, cookies, locals }) => {
const provider = params.provider as Provider;
if (provider !== 'google' && provider !== 'github') {
error(404, 'Unknown OAuth provider');
}
const code = url.searchParams.get('code');
const state = url.searchParams.get('state');
const storedState = cookies.get(`oauth_state_${provider}`);
const next = cookies.get(`oauth_next_${provider}`) ?? '/';
// Clear short-lived cookies
cookies.delete(`oauth_state_${provider}`, { path: '/' });
cookies.delete(`oauth_next_${provider}`, { path: '/' });
if (!code || !state || state !== storedState) {
log.warn('oauth', 'state mismatch or missing code', { provider });
redirect(302, '/login?error=oauth_state');
}
const redirectUri = `${url.origin}/auth/${provider}/callback`;
let profile: OAuthProfile;
try {
const accessToken = await exchangeCode(provider, code, redirectUri);
profile =
provider === 'google'
? await fetchGoogleProfile(accessToken)
: await fetchGitHubProfile(accessToken);
} catch (err) {
log.error('oauth', 'profile fetch failed', { provider, err: String(err) });
redirect(302, '/login?error=oauth_failed');
}
if (!profile.email) {
log.warn('oauth', 'no email in profile', { provider, id: profile.id });
redirect(302, '/login?error=oauth_no_email');
}
// ── Find or create user ────────────────────────────────────────────────────
let user = await getUserByOAuth(provider, profile.id);
if (!user) {
// Try to link by email (user may have registered via the other provider)
const existing = await getUserByEmail(profile.email);
if (existing) {
// Link this provider to the existing account
await linkOAuthToUser(existing.id, provider, profile.id);
user = existing;
log.info('oauth', 'linked provider to existing account', {
provider,
userId: existing.id
});
} else {
// Auto-create a new account
const username = deriveUsername(profile.name, profile.email);
user = await createOAuthUser(username, profile.email, provider, profile.id, profile.avatarUrl);
log.info('oauth', 'created new account via oauth', { provider, username });
}
}
// ── Merge anonymous session progress ───────────────────────────────────────
mergeSessionProgress(locals.sessionId, user.id).catch((err) =>
log.warn('oauth', 'mergeSessionProgress failed (non-fatal)', { err: String(err) })
);
// ── Create session + auth cookie ──────────────────────────────────────────
const authSessionId = randomBytes(16).toString('hex');
const userAgent = ''; // request is not destructured in this handler, so UA/IP are left blank
const ip = '';
createUserSession(user.id, authSessionId, userAgent, ip).catch((err) =>
log.warn('oauth', 'createUserSession failed (non-fatal)', { err: String(err) })
);
const token = createAuthToken(user.id, user.username, user.role ?? 'user', authSessionId);
cookies.set(AUTH_COOKIE, token, {
path: '/',
httpOnly: true,
sameSite: 'lax',
maxAge: ONE_YEAR
});
redirect(302, next);
};
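The find-or-link-or-create branching in the handler above reduces to a small decision table over two lookups. A sketch of that decision as a pure function (hypothetical `resolveAccountAction`, mirroring step 4 of the flow comment):

```typescript
type AccountAction = 'login' | 'link-by-email' | 'create';

// matchedByOAuth: a user exists for (oauth_provider, oauth_id).
// matchedByEmail: a user exists with the provider-verified email.
function resolveAccountAction(matchedByOAuth: boolean, matchedByEmail: boolean): AccountAction {
  if (matchedByOAuth) return 'login';
  if (matchedByEmail) return 'link-by-email';
  return 'create';
}
```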

View File

@@ -1,48 +1,42 @@
import { error } from '@sveltejs/kit';
import type { PageServerLoad } from './$types';
import { listBooks, allProgress, getSavedSlugs } from '$lib/server/pocketbase';
import { getBooksBySlugs, allProgress, getSavedSlugs } from '$lib/server/pocketbase';
import { log } from '$lib/server/logger';
import type { Book } from '$lib/server/pocketbase';
export const load: PageServerLoad = async ({ locals }) => {
let allBooks: Awaited<ReturnType<typeof listBooks>>;
let progressList: Awaited<ReturnType<typeof allProgress>>;
let savedSlugs: Set<string>;
let progressList: Awaited<ReturnType<typeof allProgress>> = [];
let savedSlugs: Set<string> = new Set();
try {
[allBooks, progressList, savedSlugs] = await Promise.all([
listBooks(),
[progressList, savedSlugs] = await Promise.all([
allProgress(locals.sessionId, locals.user?.id),
getSavedSlugs(locals.sessionId, locals.user?.id)
]);
} catch (e) {
log.error('books', 'failed to load library data', { err: String(e) });
allBooks = [];
progressList = [];
savedSlugs = new Set();
}
// Fetch only the books the user actually has in their library.
const progressSlugs = new Set(progressList.map((p) => p.slug));
const allNeededSlugs = new Set([...progressSlugs, ...savedSlugs]);
const books = allNeededSlugs.size > 0
? await getBooksBySlugs(allNeededSlugs).catch(() => [] as Book[])
: [];
// Build a quick lookup: slug → last chapter read
const progressMap: Record<string, number> = {};
const progressUpdatedMap: Record<string, string> = {};
for (const p of progressList) {
progressMap[p.slug] = p.chapter;
progressUpdatedMap[p.slug] = p.updated;
}
// Library = books the user has started reading OR explicitly saved
const progressSlugs = new Set(progressList.map((p) => p.slug));
const books = allBooks.filter((b) => progressSlugs.has(b.slug) || savedSlugs.has(b.slug));
// Sort: books with progress first (most-recently-read order is implicit via progressList),
// then saved-only books alphabetically.
// Sort: books with progress first (most-recently-read), then saved-only alphabetically.
const withProgress = books.filter((b) => progressSlugs.has(b.slug));
const savedOnly = books
.filter((b) => !progressSlugs.has(b.slug))
.sort((a, b) => (a.title ?? '').localeCompare(b.title ?? ''));
// Re-sort withProgress by most recent progress update
const progressUpdatedMap: Record<string, string> = {};
for (const p of progressList) {
progressUpdatedMap[p.slug] = p.updated;
}
withProgress.sort((a, b) => {
const ta = progressUpdatedMap[a.slug] ?? '';
const tb = progressUpdatedMap[b.slug] ?? '';

View File

@@ -4,6 +4,7 @@ import type { PageServerLoad } from './$types';
import { getBook, listChapterIdx } from '$lib/server/pocketbase';
import { log } from '$lib/server/logger';
import { backendFetch } from '$lib/server/scraper';
import type { Voice } from '$lib/types';
export const load: PageServerLoad = async ({ params, url, locals }) => {
const { slug } = params;
@@ -43,11 +44,11 @@ export const load: PageServerLoad = async ({ params, url, locals }) => {
: '';
// Fetch voices (non-critical for preview)
let voices: string[] = [];
let voices: Voice[] = [];
try {
const vRes = await backendFetch('/api/voices');
if (vRes.ok) {
const d = (await vRes.json()) as { voices: string[] };
const d = (await vRes.json()) as { voices: Voice[] };
voices = d.voices ?? [];
}
} catch {
@@ -93,11 +94,11 @@ export const load: PageServerLoad = async ({ params, url, locals }) => {
const chapterIdx = chapters.find((c) => c.number === n);
if (!chapterIdx) error(404, `Chapter ${n} not found`);
// Parse voices — fall back to a minimal default list on error
let voices: string[] = [];
// Parse voices — fall back to empty list on error
let voices: Voice[] = [];
try {
if (voicesRes?.ok) {
const data = (await voicesRes.json()) as { voices: string[] };
const data = (await voicesRes.json()) as { voices: Voice[] };
voices = data.voices ?? [];
}
} catch {

View File

@@ -1,140 +1,12 @@
import { fail, redirect } from '@sveltejs/kit';
import type { Actions, PageServerLoad } from './$types';
import { loginUser, createUser, mergeSessionProgress, createUserSession } from '$lib/server/pocketbase';
import { sendVerificationEmail } from '$lib/server/email';
import { createAuthToken } from '../../hooks.server';
import { log } from '$lib/server/logger';
import { randomBytes } from 'node:crypto';
import { redirect } from '@sveltejs/kit';
import type { PageServerLoad } from './$types';
const AUTH_COOKIE = 'libnovel_auth';
const ONE_YEAR = 60 * 60 * 24 * 365;
export const load: PageServerLoad = async ({ locals }) => {
export const load: PageServerLoad = async ({ locals, url }) => {
// Already logged in — send to home
if (locals.user) {
redirect(302, '/');
}
return {};
};
export const actions: Actions = {
login: async ({ request, cookies, locals }) => {
const data = await request.formData();
const username = (data.get('username') as string | null)?.trim() ?? '';
const password = (data.get('password') as string | null) ?? '';
if (!username || !password) {
return fail(400, { action: 'login', error: 'Username and password are required.' });
}
let user;
try {
user = await loginUser(username, password);
} catch (err) {
const msg = err instanceof Error ? err.message : '';
if (msg === 'Email not verified') {
return fail(403, {
action: 'login',
error: 'Please verify your email before signing in. Check your inbox for the verification link.'
});
}
log.error('auth', 'login unexpected error', { username, err: String(err) });
return fail(500, { action: 'login', error: 'An error occurred. Please try again.' });
}
if (!user) {
return fail(401, { action: 'login', error: 'Invalid username or password.' });
}
// Merge any anonymous session progress into the user's account so that
// chapters read before logging in are preserved and portable across devices.
mergeSessionProgress(locals.sessionId, user.id).catch((err) =>
log.warn('auth', 'login: mergeSessionProgress failed (non-fatal)', { err: String(err) })
);
// Create a unique auth session ID for this login
const authSessionId = randomBytes(16).toString('hex');
// Record the session in PocketBase (best-effort, non-fatal)
const userAgent = request.headers.get('user-agent') ?? '';
const ip =
request.headers.get('x-forwarded-for')?.split(',')[0]?.trim() ??
request.headers.get('x-real-ip') ??
'';
createUserSession(user.id, authSessionId, userAgent, ip).catch((err) =>
log.warn('auth', 'login: createUserSession failed (non-fatal)', { err: String(err) })
);
const token = createAuthToken(user.id, user.username, user.role ?? 'user', authSessionId);
cookies.set(AUTH_COOKIE, token, {
path: '/',
httpOnly: true,
sameSite: 'lax',
maxAge: ONE_YEAR
});
redirect(302, '/');
},
register: async ({ request }) => {
const data = await request.formData();
const username = (data.get('username') as string | null)?.trim() ?? '';
const email = (data.get('email') as string | null)?.trim().toLowerCase() ?? '';
const password = (data.get('password') as string | null) ?? '';
const confirm = (data.get('confirm') as string | null) ?? '';
if (!username || !email || !password) {
return fail(400, { action: 'register', error: 'All fields are required.' });
}
if (username.length < 3 || username.length > 32) {
return fail(400, {
action: 'register',
error: 'Username must be between 3 and 32 characters.'
});
}
if (!/^[a-zA-Z0-9_-]+$/.test(username)) {
return fail(400, {
action: 'register',
error: 'Username may only contain letters, numbers, underscores and hyphens.'
});
}
if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
return fail(400, { action: 'register', error: 'Please enter a valid email address.' });
}
if (password.length < 8) {
return fail(400, {
action: 'register',
error: 'Password must be at least 8 characters.'
});
}
if (password !== confirm) {
return fail(400, { action: 'register', error: 'Passwords do not match.' });
}
let user;
try {
user = await createUser(username, password, email);
} catch (err: unknown) {
const msg = err instanceof Error ? err.message : 'Registration failed.';
if (msg.includes('Username already taken')) {
return fail(409, { action: 'register', error: 'That username is already taken.' });
}
if (msg.includes('Email already in use')) {
return fail(409, { action: 'register', error: 'That email address is already registered.' });
}
log.error('auth', 'register unexpected error', { username, err: String(err) });
return fail(500, { action: 'register', error: 'An error occurred. Please try again.' });
}
// Send verification email (non-fatal — user can re-request later)
try {
await sendVerificationEmail(email, user.verification_token!);
} catch (err) {
log.error('auth', 'register: failed to send verification email', { username, email, err: String(err) });
// Don't fail registration if email fails — user sees the pending screen
}
// Return success state — do NOT log the user in yet
return { action: 'register', registered: true, email };
}
// Surface provider error codes to the page (oauth_state, oauth_failed, etc.)
const error = url.searchParams.get('error') ?? undefined;
return { error };
};


@@ -1,12 +1,13 @@
<script lang="ts">
import type { ActionData } from './$types';
import type { PageServerLoad } from './$types';
let { form }: { form: ActionData } = $props();
let { data }: { data: { error?: string } } = $props();
// Cast to access union members that TypeScript can't narrow statically
const f = $derived(form as (typeof form) & { registered?: boolean; email?: string } | null);
let mode: 'login' | 'register' = $state('login');
const errorMessages: Record<string, string> = {
oauth_state: 'Sign-in was cancelled or expired. Please try again.',
oauth_failed: 'Could not connect to the provider. Please try again.',
oauth_no_email: 'Your account has no verified email address. Please add one and retry.'
};
</script>
<svelte:head>
@@ -16,155 +17,71 @@
<div class="flex items-center justify-center min-h-[60vh]">
<div class="w-full max-w-sm">
<!-- Post-registration: check inbox -->
{#if f?.registered}
<div class="text-center">
<div class="mb-4 text-4xl">✉️</div>
<h2 class="text-lg font-semibold text-zinc-100 mb-2">Check your inbox</h2>
<p class="text-sm text-zinc-400 mb-6">
We sent a verification link to <span class="text-zinc-200 font-medium">{f?.email}</span>.
Click it to activate your account.
</p>
<p class="text-xs text-zinc-500">
Didn't receive it? Check your spam folder, or
<a href="/login" class="text-amber-400 hover:text-amber-300 transition-colors">try again</a>.
</p>
</div>
{:else}
<!-- Tab switcher -->
<div class="flex mb-6 border-b border-zinc-700">
<button
type="button"
onclick={() => (mode = 'login')}
class="flex-1 pb-3 text-sm font-medium transition-colors
{mode === 'login'
? 'text-amber-400 border-b-2 border-amber-400 -mb-px'
: 'text-zinc-400 hover:text-zinc-100'}"
>
Sign in
</button>
<button
type="button"
onclick={() => (mode = 'register')}
class="flex-1 pb-3 text-sm font-medium transition-colors
{mode === 'register'
? 'text-amber-400 border-b-2 border-amber-400 -mb-px'
: 'text-zinc-400 hover:text-zinc-100'}"
>
Create account
</button>
</div>
<div class="text-center mb-8">
<h1 class="text-2xl font-bold text-zinc-100 mb-2">Sign in to libnovel</h1>
<p class="text-sm text-zinc-400">Choose a provider to continue</p>
</div>
{#if form?.error && (form?.action === mode || !form?.action)}
<div class="mb-4 rounded bg-red-900/40 border border-red-700 px-4 py-3 text-sm text-red-300">
{form.error}
</div>
{/if}
{#if mode === 'login'}
<form method="POST" action="?/login" class="flex flex-col gap-4">
<div>
<label for="login-username" class="block text-xs text-zinc-400 mb-1">Username</label>
<input
id="login-username"
name="username"
type="text"
autocomplete="username"
required
class="w-full rounded bg-zinc-800 border border-zinc-700 px-3 py-2 text-sm text-zinc-100
placeholder-zinc-500 focus:outline-none focus:border-amber-400 focus:ring-1 focus:ring-amber-400"
placeholder="your_username"
/>
</div>
<div>
<label for="login-password" class="block text-xs text-zinc-400 mb-1">Password</label>
<input
id="login-password"
name="password"
type="password"
autocomplete="current-password"
required
class="w-full rounded bg-zinc-800 border border-zinc-700 px-3 py-2 text-sm text-zinc-100
placeholder-zinc-500 focus:outline-none focus:border-amber-400 focus:ring-1 focus:ring-amber-400"
placeholder="••••••••"
/>
</div>
<button
type="submit"
class="w-full py-2 rounded bg-amber-400 text-zinc-900 font-semibold text-sm hover:bg-amber-300 transition-colors"
>
Sign in
</button>
</form>
{:else}
<form method="POST" action="?/register" class="flex flex-col gap-4">
<div>
<label for="reg-username" class="block text-xs text-zinc-400 mb-1">Username</label>
<input
id="reg-username"
name="username"
type="text"
autocomplete="username"
required
minlength="3"
maxlength="32"
pattern="[a-zA-Z0-9_\-]+"
class="w-full rounded bg-zinc-800 border border-zinc-700 px-3 py-2 text-sm text-zinc-100
placeholder-zinc-500 focus:outline-none focus:border-amber-400 focus:ring-1 focus:ring-amber-400"
placeholder="your_username"
/>
<p class="mt-1 text-xs text-zinc-500">3–32 characters: letters, numbers, _ or -</p>
</div>
<div>
<label for="reg-email" class="block text-xs text-zinc-400 mb-1">Email</label>
<input
id="reg-email"
name="email"
type="email"
autocomplete="email"
required
class="w-full rounded bg-zinc-800 border border-zinc-700 px-3 py-2 text-sm text-zinc-100
placeholder-zinc-500 focus:outline-none focus:border-amber-400 focus:ring-1 focus:ring-amber-400"
placeholder="you@example.com"
/>
<p class="mt-1 text-xs text-zinc-500">Used to verify your account — not shown publicly</p>
</div>
<div>
<label for="reg-password" class="block text-xs text-zinc-400 mb-1">Password</label>
<input
id="reg-password"
name="password"
type="password"
autocomplete="new-password"
required
minlength="8"
class="w-full rounded bg-zinc-800 border border-zinc-700 px-3 py-2 text-sm text-zinc-100
placeholder-zinc-500 focus:outline-none focus:border-amber-400 focus:ring-1 focus:ring-amber-400"
placeholder="••••••••"
/>
<p class="mt-1 text-xs text-zinc-500">At least 8 characters</p>
</div>
<div>
<label for="reg-confirm" class="block text-xs text-zinc-400 mb-1">Confirm password</label>
<input
id="reg-confirm"
name="confirm"
type="password"
autocomplete="new-password"
required
class="w-full rounded bg-zinc-800 border border-zinc-700 px-3 py-2 text-sm text-zinc-100
placeholder-zinc-500 focus:outline-none focus:border-amber-400 focus:ring-1 focus:ring-amber-400"
placeholder="••••••••"
/>
</div>
<button
type="submit"
class="w-full py-2 rounded bg-amber-400 text-zinc-900 font-semibold text-sm hover:bg-amber-300 transition-colors"
>
Create account
</button>
</form>
{/if}
{#if data.error && errorMessages[data.error]}
<div class="mb-6 rounded bg-red-900/40 border border-red-700 px-4 py-3 text-sm text-red-300">
{errorMessages[data.error]}
</div>
{/if}
<div class="flex flex-col gap-3">
<!-- Google -->
<a
href="/auth/google"
class="flex items-center justify-center gap-3 w-full py-3 px-4 rounded-lg
bg-zinc-800 border border-zinc-700 text-zinc-100 text-sm font-medium
hover:bg-zinc-700 hover:border-zinc-600 transition-colors"
>
<svg class="w-5 h-5 shrink-0" viewBox="0 0 24 24" aria-hidden="true">
<path
d="M22.56 12.25c0-.78-.07-1.53-.2-2.25H12v4.26h5.92c-.26 1.37-1.04 2.53-2.21 3.31v2.77h3.57c2.08-1.92 3.28-4.74 3.28-8.09z"
fill="#4285F4"
/>
<path
d="M12 23c2.97 0 5.46-.98 7.28-2.66l-3.57-2.77c-.98.66-2.23 1.06-3.71 1.06-2.86 0-5.29-1.93-6.16-4.53H2.18v2.84C3.99 20.53 7.7 23 12 23z"
fill="#34A853"
/>
<path
d="M5.84 14.09c-.22-.66-.35-1.36-.35-2.09s.13-1.43.35-2.09V7.07H2.18C1.43 8.55 1 10.22 1 12s.43 3.45 1.18 4.93l2.85-2.22.81-.62z"
fill="#FBBC05"
/>
<path
d="M12 5.38c1.62 0 3.06.56 4.21 1.64l3.15-3.15C17.45 2.09 14.97 1 12 1 7.7 1 3.99 3.47 2.18 7.07l3.66 2.84c.87-2.6 3.3-4.53 6.16-4.53z"
fill="#EA4335"
/>
</svg>
Continue with Google
</a>
<!-- GitHub -->
<a
href="/auth/github"
class="flex items-center justify-center gap-3 w-full py-3 px-4 rounded-lg
bg-zinc-800 border border-zinc-700 text-zinc-100 text-sm font-medium
hover:bg-zinc-700 hover:border-zinc-600 transition-colors"
>
<svg class="w-5 h-5 shrink-0 fill-zinc-100" viewBox="0 0 24 24" aria-hidden="true">
<path
d="M12 2C6.477 2 2 6.484 2 12.017c0 4.425 2.865 8.18 6.839 9.504.5.092.682-.217.682-.483
0-.237-.008-.868-.013-1.703-2.782.605-3.369-1.343-3.369-1.343-.454-1.158-1.11-1.466-1.11-1.466
-.908-.62.069-.608.069-.608 1.003.07 1.531 1.032 1.531 1.032.892 1.53 2.341 1.088 2.91.832
.092-.647.35-1.088.636-1.338-2.22-.253-4.555-1.113-4.555-4.951 0-1.093.39-1.988 1.029-2.688
-.103-.253-.446-1.272.098-2.65 0 0 .84-.27 2.75 1.026A9.564 9.564 0 0 1 12 6.844a9.59 9.59 0
0 1 2.504.337c1.909-1.296 2.747-1.027 2.747-1.027.546 1.379.202 2.398.1 2.651.64.7 1.028
1.595 1.028 2.688 0 3.848-2.339 4.695-4.566 4.943.359.309.678.92.678 1.855 0 1.338-.012
2.419-.012 2.747 0 .268.18.58.688.482A10.02 10.02 0 0 0 22 12.017C22 6.484 17.522 2 12 2z"
/>
</svg>
Continue with GitHub
</a>
</div>
<p class="mt-8 text-center text-xs text-zinc-500">
By signing in you agree to our terms of service.
</p>
</div>
</div>


@@ -5,6 +5,7 @@
import type { PageData, ActionData } from './$types';
import { audioStore } from '$lib/audio.svelte';
import { browser } from '$app/environment';
import type { Voice } from '$lib/types';
let { data, form }: { data: PageData; form: ActionData } = $props();
@@ -56,14 +57,18 @@
}
// ── Settings ────────────────────────────────────────────────────────────────
let voices = $state<string[]>([]);
let voices = $state<Voice[]>([]);
let voicesLoaded = $state(false);
// Derived: voices grouped by engine
const kokoroVoices = $derived(voices.filter((v) => v.engine === 'kokoro'));
const pocketVoices = $derived(voices.filter((v) => v.engine === 'pocket-tts'));
// Load voices on mount
$effect(() => {
fetch('/api/voices')
.then((r) => r.json())
.then((d: { voices: string[] }) => {
.then((d: { voices: Voice[] }) => {
voices = d.voices ?? [];
voicesLoaded = true;
})
@@ -276,9 +281,20 @@
bind:value={voice}
class="w-full bg-zinc-700 border border-zinc-600 rounded-lg px-3 py-2 text-zinc-100 text-sm focus:outline-none focus:ring-2 focus:ring-amber-400"
>
{#each voices as v}
<option value={v}>{v}</option>
{/each}
{#if kokoroVoices.length > 0}
<optgroup label="Kokoro (GPU)">
{#each kokoroVoices as v}
<option value={v.id}>{v.id}</option>
{/each}
</optgroup>
{/if}
{#if pocketVoices.length > 0}
<optgroup label="Pocket TTS (CPU)">
{#each pocketVoices as v}
<option value={v.id}>{v.id}</option>
{/each}
</optgroup>
{/if}
</select>
{/if}
</div>


@@ -0,0 +1,51 @@
<svelte:head>
<title>Terms of Service — libnovel</title>
</svelte:head>
<div class="max-w-2xl mx-auto py-10 px-4">
<h1 class="text-2xl font-bold text-zinc-100 mb-6">Terms of Service</h1>
<div class="space-y-5 text-sm text-zinc-400 leading-relaxed">
<p>
By using libnovel you agree to these terms. If you do not agree, please do not use the service.
</p>
<h2 class="text-base font-semibold text-zinc-200 mt-6">Use of the service</h2>
<ul class="list-disc list-inside space-y-2 pl-1">
<li>libnovel is provided for personal, non-commercial reading use only.</li>
<li>You may not scrape, crawl, or systematically download content from the site.</li>
<li>You may not use the service for any unlawful purpose.</li>
<li>Accounts may be suspended or terminated for abuse.</li>
</ul>
<h2 class="text-base font-semibold text-zinc-200 mt-6">Content</h2>
<p>
libnovel aggregates publicly available web novel content from third-party sources for
personal reading convenience. We do not claim ownership of any novel content displayed on
the site. If you are a rights holder and wish to have content removed, please see our
<a href="/dmca" class="text-amber-400 hover:text-amber-300 transition-colors">DMCA policy</a>.
</p>
<h2 class="text-base font-semibold text-zinc-200 mt-6">Accounts</h2>
<p>
You are responsible for maintaining the security of your account. libnovel is not liable
for any loss or damage resulting from unauthorised access to your account.
</p>
<h2 class="text-base font-semibold text-zinc-200 mt-6">Disclaimer of warranties</h2>
<p>
The service is provided "as is" without warranty of any kind. We do not guarantee
availability, accuracy, or completeness of any content. See our full
<a href="/disclaimer" class="text-amber-400 hover:text-amber-300 transition-colors">disclaimer</a>
for details.
</p>
<h2 class="text-base font-semibold text-zinc-200 mt-6">Changes to these terms</h2>
<p>
We may update these terms at any time. Continued use of the service after changes are
posted constitutes acceptance of the revised terms.
</p>
<p class="text-zinc-600 text-xs mt-8">Last updated: {new Date().getFullYear()}</p>
</div>
</div>


@@ -1,72 +0,0 @@
import { redirect } from '@sveltejs/kit';
import type { PageServerLoad } from './$types';
import {
getUserByVerificationToken,
verifyUserEmail,
createUserSession
} from '$lib/server/pocketbase';
import { createAuthToken } from '../../hooks.server';
import { log } from '$lib/server/logger';
import { randomBytes } from 'node:crypto';
const AUTH_COOKIE = 'libnovel_auth';
const ONE_YEAR = 60 * 60 * 24 * 365;
export const load: PageServerLoad = async ({ url, cookies, request }) => {
const token = url.searchParams.get('token') ?? '';
if (!token) {
return { success: false, error: 'Missing verification token.' };
}
let user;
try {
user = await getUserByVerificationToken(token);
} catch (e) {
log.error('verify-email', 'lookup failed', { err: String(e) });
return { success: false, error: 'An error occurred. Please try again.' };
}
if (!user) {
return { success: false, error: 'Invalid or expired verification link.' };
}
// Check expiry
if (user.verification_token_exp) {
const exp = new Date(user.verification_token_exp).getTime();
if (Date.now() > exp) {
return { success: false, error: 'This verification link has expired. Please register again.' };
}
}
// Mark email as verified
try {
await verifyUserEmail(user.id);
} catch (e) {
log.error('verify-email', 'verifyUserEmail failed', { userId: user.id, err: String(e) });
return { success: false, error: 'Failed to verify email. Please try again.' };
}
// Log the user in automatically
const authSessionId = randomBytes(16).toString('hex');
const userAgent = request.headers.get('user-agent') ?? '';
const ip =
request.headers.get('x-forwarded-for')?.split(',')[0]?.trim() ??
request.headers.get('x-real-ip') ??
'';
createUserSession(user.id, authSessionId, userAgent, ip).catch((e) =>
log.warn('verify-email', 'createUserSession failed (non-fatal)', { err: String(e) })
);
const authToken = createAuthToken(user.id, user.username, user.role ?? 'user', authSessionId);
cookies.set(AUTH_COOKIE, authToken, {
path: '/',
httpOnly: true,
sameSite: 'lax',
maxAge: ONE_YEAR
});
log.info('verify-email', 'email verified, user logged in', { userId: user.id, username: user.username });
redirect(302, '/');
};


@@ -1,21 +0,0 @@
<script lang="ts">
import type { PageData } from './$types';
let { data }: { data: PageData } = $props();
</script>
<svelte:head>
<title>Verify email — libnovel</title>
</svelte:head>
<div class="flex items-center justify-center min-h-[60vh]">
<div class="w-full max-w-sm text-center">
{#if data.error}
<div class="mb-6 rounded bg-red-900/40 border border-red-700 px-4 py-3 text-sm text-red-300">
{data.error}
</div>
<a href="/login" class="text-sm text-amber-400 hover:text-amber-300 transition-colors">
Back to sign in
</a>
{/if}
</div>
</div>


@@ -2,7 +2,12 @@ import { sveltekit } from '@sveltejs/kit/vite';
import tailwindcss from '@tailwindcss/vite';
import { defineConfig } from 'vite';
// Source maps are always generated so that the CI pipeline can upload them to
// GlitchTip via glitchtip-cli after a release build.
export default defineConfig({
build: {
sourcemap: true
},
plugins: [tailwindcss(), sveltekit()],
ssr: {
// Force these packages to be bundled into the server output rather than