Compare commits


7 Commits

Author SHA1 Message Date
Admin
69818089a6 perf(ui): eliminate full listBooks() scan on every page load
The home, library, and /books routes were fetching all 15k books from
PocketBase on every SSR request (31 sequential HTTP calls per request).

Changes:
- Add src/lib/server/cache.ts: generic Valkey JSON cache
- Add getBooksBySlugs(): single PB query fetching only requested slugs,
  with fallback to the 5-min Valkey cache populated by listBooks()
- listBooks(): now caches results in Valkey for 5 min (safety net for
  admin routes that still need the full list)
- Home + /api/home: replaced listBooks()+filter with getBooksBySlugs()
  on progress slugs only — typically 1 PB request instead of 31
- /books + /api/library: same pattern using progress+saved slug union
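The change list above hinges on collapsing N per-slug lookups into one filtered PocketBase query. A minimal sketch of the filter construction (hypothetical `buildSlugFilter` helper — the real implementation lives in the pocketbase.ts diff below; PocketBase has no `IN` operator, so the filter is an OR-chain):

```typescript
// Sketch: build a single PocketBase filter matching any of the given slugs.
// Hypothetical helper mirroring the approach this commit describes.
function buildSlugFilter(slugs: Iterable<string>, cap = 200): string {
  // Dedupe and cap to keep the filter (and request URL) a sane length.
  const unique = [...new Set(slugs)].slice(0, cap);
  // Escape single quotes so slugs can't break out of the filter string.
  return unique
    .map((s) => `slug='${s.replace(/'/g, "\\'")}'`)
    .join(' || ');
}

// One request fetches every book the user's progress list references:
console.log(buildSlugFilter(['dune', 'dune', "it's-here"]));
// → slug='dune' || slug='it\'s-here'
```

One such request replaces the 31 sequential paginated calls that `listBooks()` needed to walk all 15k books.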
2026-03-26 16:14:47 +05:00
Admin
09062b8c82 fix(ci): use correct glitchtip-cli download URL for linux-x86_64
2026-03-26 11:55:17 +05:00
Admin
d518710cc4 fix(observability): switch source map upload to glitchtip-cli
@sentry/vite-plugin uses sentry-cli which creates release entries but
doesn't upload files to GlitchTip's API correctly. Switch to the native
glitchtip-cli which uses the debug ID inject+upload approach that
GlitchTip actually supports.
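For context, the debug-ID approach pairs each minified bundle with its source map through a shared UUID instead of URL guessing. A rough illustration of what an inject step stamps into the artifacts (names and exact format here are assumptions for illustration, not glitchtip-cli's actual code — real tooling rewrites files on disk):

```typescript
import { randomUUID } from 'node:crypto';

// Sketch of the debug-ID "inject" idea: write the same UUID into a JS bundle
// and its source map so the server can pair them without guessing URLs.
function injectDebugId(js: string, map: Record<string, unknown>) {
  const debugId = randomUUID();
  return {
    // Trailing comment in the bundle is what the SDK reports with error events.
    js: `${js}\n//# debugId=${debugId}`,
    // Matching key in the map lets the server resolve frames to original sources.
    map: { ...map, debug_id: debugId },
    debugId
  };
}

const out = injectDebugId('console.log("hi");', { version: 3, sources: ['app.ts'] });
// out.js now ends with a //# debugId=<uuid> line, and out.map.debug_id matches it.
```

The upload step then ships both artifacts; the server matches incoming stack frames to maps purely by that ID.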
2026-03-25 21:10:10 +05:00
Admin
e2c15f5931 fix(observability): correct sentryVitePlugin sourcemaps option key
2026-03-25 20:39:26 +05:00
Admin
a50b968b95 fix(infra): expose Meilisearch via search.libnovel.cc for homelab runner indexing
- Add search.libnovel.cc Caddy vhost proxying to meilisearch:7700
- Pass MEILI_URL + MEILI_API_KEY from Doppler into homelab runner
- Set GODEBUG=preferIPv4=1 to work around missing IPv6 route on homelab
- Update comments to reflect runner now indexes books into Meilisearch
2026-03-25 20:27:50 +05:00
Admin
023b1f7fec feat(observability): add GlitchTip source map uploads for un-minified stack traces
- Enable sourcemap:true in vite.config.ts
- Add sentryVitePlugin: uploads maps to errors.libnovel.cc, deletes them post-upload so they never ship in the Docker image
- Wire release: PUBLIC_BUILD_VERSION in both hooks.client.ts and hooks.server.ts so events correlate to the correct artifact
- Add upload-sourcemaps CI job in release.yaml (parallel to docker-ui, uses GLITCHTIP_AUTH_TOKEN secret)
2026-03-25 20:26:19 +05:00
Admin
7e99fc6d70 fix(runner): fix audio task infinite loop and semaphore race
Two bugs caused audio tasks to loop endlessly:

1. claimRecord never set heartbeat_at — newly claimed tasks had
   heartbeat_at=null, which matched the reaper's stale filter
   (heartbeat_at=null || heartbeat_at<threshold). Tasks were reaped
   and reset to pending within seconds of being claimed, before the
   30s heartbeat goroutine had a chance to write a timestamp.
   Fix: set heartbeat_at=now() in claimRecord alongside status=running.

2. Audio semaphore was checked AFTER claiming the task. When the
   semaphore was full the select/break only broke the inner select,
   not the for loop — the code fell through and launched an uncapped
   goroutine that blocked forever on <-audioSem drain. The task also
   stayed status=running with no heartbeat, feeding bug #1.
   Fix: pre-acquire a semaphore slot BEFORE claiming the task; release
   it immediately if the queue is empty or claim fails.
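Bug #1 in isolation: a reaper that treats a null heartbeat as stale will reclaim any task whose claim never stamped `heartbeat_at`. A minimal sketch of that predicate (hypothetical names in TypeScript for illustration — the real reaper is Go code in the runner):

```typescript
// Sketch of the reaper's staleness check: a null heartbeat counts as stale,
// which is exactly what bit freshly claimed tasks before this fix.
function isStale(heartbeatAt: string | null, nowMs: number, thresholdSecs: number): boolean {
  if (heartbeatAt === null) return true; // never heartbeated → assumed dead
  return nowMs - Date.parse(heartbeatAt) > thresholdSecs * 1000;
}

const now = Date.parse('2026-03-25T12:00:00Z');

// Before the fix: claimRecord left heartbeat_at null, so a task claimed
// seconds ago was immediately reaped back to pending — the infinite loop.
isStale(null, now, 90); // → true (the bug)

// After the fix: claimRecord stamps heartbeat_at=now() at claim time, so the
// task survives until the 30s heartbeat goroutine takes over.
isStale('2026-03-25T11:59:55Z', now, 90); // → false
```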
2026-03-25 15:09:52 +05:00
16 changed files with 270 additions and 60 deletions

View File

@@ -135,6 +135,54 @@ jobs:
           cache-from: type=registry,ref=${{ secrets.DOCKER_USER }}/libnovel-runner:latest
           cache-to: type=inline

+  # ── ui: source map upload ─────────────────────────────────────────────────────
+  # Builds the UI with source maps and uploads them to GlitchTip so that error
+  # stack traces resolve to original .svelte/.ts file names and line numbers.
+  # Runs in parallel with docker-ui (both need check-ui to pass first).
+  upload-sourcemaps:
+    name: Upload source maps
+    runs-on: ubuntu-latest
+    needs: [check-ui]
+    defaults:
+      run:
+        working-directory: ui
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-node@v4
+        with:
+          node-version: "22"
+          cache: npm
+          cache-dependency-path: ui/package-lock.json
+      - name: Install dependencies
+        run: npm ci
+      - name: Build with source maps
+        run: npm run build
+      - name: Download glitchtip-cli
+        run: |
+          curl -L "https://gitlab.com/glitchtip/glitchtip-cli/-/jobs/artifacts/v0.1.0/raw/artifacts/glitchtip-cli-linux-x86_64?job=build-linux-x86_64" \
+            -o /usr/local/bin/glitchtip-cli
+          chmod +x /usr/local/bin/glitchtip-cli
+      - name: Inject debug IDs into build artifacts
+        run: glitchtip-cli sourcemaps inject ./build
+        env:
+          SENTRY_URL: https://errors.libnovel.cc/
+          SENTRY_AUTH_TOKEN: ${{ secrets.GLITCHTIP_AUTH_TOKEN }}
+          SENTRY_ORG: libnovel
+          SENTRY_PROJECT: libnovel-ui
+      - name: Upload source maps to GlitchTip
+        run: glitchtip-cli sourcemaps upload ./build --release ${{ gitea.ref_name }}
+        env:
+          SENTRY_URL: https://errors.libnovel.cc/
+          SENTRY_AUTH_TOKEN: ${{ secrets.GLITCHTIP_AUTH_TOKEN }}
+          SENTRY_ORG: libnovel
+          SENTRY_PROJECT: libnovel-ui
+
   # ── docker: ui ────────────────────────────────────────────────────────────────
   docker-ui:
     name: Docker / ui

@@ -213,7 +261,7 @@ jobs:
   release:
     name: Gitea Release
    runs-on: ubuntu-latest
-    needs: [docker-backend, docker-runner, docker-ui, docker-caddy]
+    needs: [docker-backend, docker-runner, docker-ui, docker-caddy, upload-sourcemaps]
     steps:
       - uses: actions/checkout@v4
         with:

View File

@@ -30,6 +30,7 @@
 # logs.libnovel.cc → dozzle:8080 (Docker log viewer)
 # uptime.libnovel.cc → uptime-kuma:3001 (uptime monitoring)
 # push.libnovel.cc → gotify:80 (push notifications)
+# search.libnovel.cc → meilisearch:7700 (search index — homelab runner)
 #
 # Routes intentionally removed from direct-to-backend:
 # /api/scrape/* — SvelteKit has /api/scrape/ counterparts

@@ -254,3 +255,12 @@ storage.libnovel.cc {
 	reverse_proxy minio:9000
 }
+
+# ── Meilisearch: exposed for homelab runner search indexing ──────────────────
+# The homelab runner connects here as MEILI_URL to index books after scraping.
+# Protected by MEILI_MASTER_KEY bearer token — Meilisearch enforces auth on
+# every request; Caddy just terminates TLS.
+search.libnovel.cc {
+	import security_headers
+	reverse_proxy meilisearch:7700
+}

View File

@@ -248,23 +248,30 @@ func (r *Runner) poll(ctx context.Context, scrapeSem, audioSem chan struct{}, wg
 	}

 	// ── Audio tasks ───────────────────────────────────────────────────────
+	// Only claim tasks when there is a free slot in the semaphore.
+	// This avoids the old bug where we claimed (status→running) a task and
+	// then couldn't dispatch it, leaving it orphaned until the reaper fired.
+audioLoop:
 	for {
 		if ctx.Err() != nil {
 			return
 		}
+		// Check capacity before claiming to avoid orphaning tasks.
+		select {
+		case audioSem <- struct{}{}:
+			// Slot acquired — proceed to claim a task.
+		default:
+			// All slots busy; leave remaining pending tasks for next tick.
+			break audioLoop
+		}
 		task, ok, err := r.deps.Consumer.ClaimNextAudioTask(ctx, r.cfg.WorkerID)
 		if err != nil {
+			<-audioSem // release the pre-acquired slot
 			r.deps.Log.Error("runner: ClaimNextAudioTask failed", "err", err)
 			break
 		}
 		if !ok {
+			<-audioSem // release the pre-acquired slot; queue empty
 			break
 		}
-		select {
-		case audioSem <- struct{}{}:
-		default:
-			r.deps.Log.Warn("runner: audio semaphore full, will retry next tick",
-				"task_id", task.ID)
-			break
-		}
 		r.tasksRunning.Add(1)

View File

@@ -247,8 +247,9 @@ func (c *pbClient) claimRecord(ctx context.Context, collection, workerID string,
 	}
 	claim := map[string]any{
-		"status":    string(domain.TaskStatusRunning),
-		"worker_id": workerID,
+		"status":       string(domain.TaskStatusRunning),
+		"worker_id":    workerID,
+		"heartbeat_at": time.Now().UTC().Format(time.RFC3339),
 	}
 	for k, v := range extraClaim {
 		claim[k] = v

View File

@@ -8,7 +8,8 @@
 # - RUNNER_WORKER_ID=homelab-runner-1 (unique, avoids task claiming conflicts)
 # - MINIO_ENDPOINT/USE_SSL → storage.libnovel.cc over HTTPS
 # - POCKETBASE_URL → https://pb.libnovel.cc
-# - MEILI_URL/VALKEY_ADDR → unset (not exposed publicly; not needed by runner)
+# - MEILI_URL → https://search.libnovel.cc (Caddy-proxied)
+# - VALKEY_ADDR → unset (not exposed publicly)
 # - RUNNER_SKIP_INITIAL_CATALOGUE_REFRESH=true

 services:

@@ -30,9 +31,12 @@ services:
       MINIO_PUBLIC_ENDPOINT: "${MINIO_PUBLIC_ENDPOINT}"
       MINIO_PUBLIC_USE_SSL: "${MINIO_PUBLIC_USE_SSL}"
-      # ── Meilisearch / Valkey — not exposed, disabled ────────────────────────
-      MEILI_URL: ""
+      # ── Meilisearch (via search.libnovel.cc Caddy proxy) ────────────────────
+      MEILI_URL: "${MEILI_URL}"
+      MEILI_API_KEY: "${MEILI_API_KEY}"
       VALKEY_ADDR: ""
+      # Force IPv4 DNS resolution — homelab has no IPv6 route to search.libnovel.cc
+      GODEBUG: "preferIPv4=1"
       # ── Kokoro TTS ──────────────────────────────────────────────────────────
       KOKORO_URL: "${KOKORO_URL}"

ui/package-lock.json generated
View File

@@ -17,6 +17,7 @@
         "pocketbase": "^0.26.8"
       },
       "devDependencies": {
+        "@sentry/vite-plugin": "^5.1.1",
         "@sveltejs/adapter-auto": "^7.0.0",
         "@sveltejs/adapter-node": "^5.5.4",
         "@sveltejs/kit": "^2.50.2",

View File

@@ -12,6 +12,7 @@
     "check:watch": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json --watch"
   },
   "devDependencies": {
+    "@sentry/vite-plugin": "^5.1.1",
     "@sveltejs/adapter-auto": "^7.0.0",
     "@sveltejs/adapter-node": "^5.5.4",
     "@sveltejs/kit": "^2.50.2",

View File

@@ -6,7 +6,10 @@ import { env } from '$env/dynamic/public';
 if (env.PUBLIC_GLITCHTIP_DSN) {
 	Sentry.init({
 		dsn: env.PUBLIC_GLITCHTIP_DSN,
-		tracesSampleRate: 0.1
+		tracesSampleRate: 0.1,
+		// Must match the release name used when uploading source maps in CI
+		// (BUILD_VERSION injected by Dockerfile as PUBLIC_BUILD_VERSION).
+		release: env.PUBLIC_BUILD_VERSION || undefined
 	});
 }

View File

@@ -13,7 +13,10 @@ import { drain as drainPresignCache } from '$lib/server/presignCache';
 if (pubEnv.PUBLIC_GLITCHTIP_DSN) {
 	Sentry.init({
 		dsn: pubEnv.PUBLIC_GLITCHTIP_DSN,
-		tracesSampleRate: 0.1
+		tracesSampleRate: 0.1,
+		// Must match the release name used when uploading source maps in CI
+		// (BUILD_VERSION injected by Dockerfile as PUBLIC_BUILD_VERSION).
+		release: pubEnv.PUBLIC_BUILD_VERSION || undefined
 	});
 }

View File

@@ -0,0 +1,72 @@
/**
* Generic Valkey (Redis-compatible) cache.
*
* Reuses the same ioredis singleton from presignCache.ts but exposes a
* simple typed get/set/invalidate API for arbitrary JSON values.
*
* Usage:
* const books = await cache.get<Book[]>('books:all');
* await cache.set('books:all', books, 5 * 60);
* await cache.invalidate('books:all');
*/
import Redis from 'ioredis';
let _client: Redis | null = null;
function client(): Redis {
if (!_client) {
const url = process.env.VALKEY_URL ?? 'redis://valkey:6379';
_client = new Redis(url, {
lazyConnect: false,
enableOfflineQueue: true,
maxRetriesPerRequest: 2
});
_client.on('error', (err: Error) => {
console.error('[cache] Valkey error:', err.message);
});
}
return _client;
}
/** Return the cached value for key, or null if absent / expired / error. */
export async function get<T>(key: string): Promise<T | null> {
try {
const raw = await client().get(key);
if (!raw) return null;
return JSON.parse(raw) as T;
} catch {
return null;
}
}
/**
* Store a value under key for ttlSeconds seconds.
* Silently no-ops on Valkey errors so callers never crash.
*/
export async function set<T>(key: string, value: T, ttlSeconds: number): Promise<void> {
try {
await client().set(key, JSON.stringify(value), 'EX', ttlSeconds);
} catch {
// non-fatal
}
}
/** Delete a key immediately (e.g. after a write that invalidates it). */
export async function invalidate(key: string): Promise<void> {
try {
await client().del(key);
} catch {
// non-fatal
}
}
/** Invalidate all keys matching a glob pattern (e.g. 'books:*'). */
export async function invalidatePattern(pattern: string): Promise<void> {
try {
const keys = await client().keys(pattern);
if (keys.length > 0) await client().del(...keys);
} catch {
// non-fatal
}
}

View File

@@ -6,6 +6,7 @@
 import { env } from '$env/dynamic/private';
 import { log } from '$lib/server/logger';
+import * as cache from '$lib/server/cache';

 const PB_URL = env.POCKETBASE_URL ?? 'http://localhost:8090';
 const PB_EMAIL = env.POCKETBASE_ADMIN_EMAIL ?? 'admin@libnovel.local';

@@ -200,16 +201,65 @@ async function listOne<T>(collection: string, filter: string): Promise<T | null>
 // ─── Books ────────────────────────────────────────────────────────────────────

+const BOOKS_CACHE_KEY = 'books:all';
+const BOOKS_CACHE_TTL = 5 * 60; // 5 minutes
+
 export async function listBooks(): Promise<Book[]> {
+	const cached = await cache.get<Book[]>(BOOKS_CACHE_KEY);
+	if (cached) {
+		log.debug('pocketbase', 'listBooks cache hit', { total: cached.length });
+		return cached;
+	}
 	const books = await listAll<Book>('books', '', '+title');
 	const nullTitles = books.filter((b) => b.title == null).length;
 	if (nullTitles > 0) {
 		log.warn('pocketbase', 'listBooks: books with null title', { count: nullTitles, total: books.length });
 	}
-	log.debug('pocketbase', 'listBooks', { total: books.length, nullTitles });
+	log.debug('pocketbase', 'listBooks cache miss', { total: books.length, nullTitles });
+	await cache.set(BOOKS_CACHE_KEY, books, BOOKS_CACHE_TTL);
 	return books;
 }
+
+/**
+ * Fetch only the books whose slugs are in the given set.
+ * Uses PocketBase filter `slug IN (...)` — a single request regardless of how
+ * many slugs are requested. Falls back to empty array on error.
+ *
+ * Use this instead of listBooks() whenever you only need a small subset of
+ * books (e.g. the user's reading list or saved shelf).
+ *
+ * PocketBase filter syntax for IN: slug='a' || slug='b' || ...
+ * Limited to 200 slugs to keep the filter URL sane; callers with larger sets
+ * should fall back to listBooks().
+ */
+export async function getBooksBySlugs(slugs: Iterable<string>): Promise<Book[]> {
+	const slugArr = [...new Set(slugs)].slice(0, 200);
+	if (slugArr.length === 0) return [];
+	// Check the full-list Valkey cache (populated by prior listBooks() calls).
+	// If every requested slug is covered, skip the network round-trip entirely.
+	const cached = await cache.get<Book[]>(BOOKS_CACHE_KEY);
+	if (cached) {
+		const slugSet = new Set(slugArr);
+		const found = cached.filter((b) => slugSet.has(b.slug));
+		if (found.length === slugArr.length) {
+			log.debug('pocketbase', 'getBooksBySlugs cache hit', { count: found.length });
+			return found;
+		}
+	}
+	// Build filter: slug='a' || slug='b' || ...
+	const filter = slugArr.map((s) => `slug='${s.replace(/'/g, "\\'")}'`).join(' || ');
+	const books = await listAll<Book>('books', filter, '+title');
+	log.debug('pocketbase', 'getBooksBySlugs', { requested: slugArr.length, found: books.length });
+	return books;
+}
+
+/** Invalidate the books cache (call after a book is created/updated/deleted). */
+export async function invalidateBooksCache(): Promise<void> {
+	await cache.invalidate(BOOKS_CACHE_KEY);
+}

 export async function getBook(slug: string): Promise<Book | null> {
 	return listOne<Book>('books', `slug="${slug}"`);
 }

View File

@@ -1,6 +1,6 @@
 import type { PageServerLoad } from './$types';
 import {
-	listBooks,
+	getBooksBySlugs,
 	recentlyAddedBooks,
 	allProgress,
 	getHomeStats,

@@ -10,14 +10,15 @@ import { log } from '$lib/server/logger';
 import type { Book, Progress } from '$lib/server/pocketbase';

 export const load: PageServerLoad = async ({ locals }) => {
-	let allBooks: Book[] = [];
+	// Step 1: fetch progress + recent books + stats in parallel.
+	// We intentionally do NOT call listBooks() here — we only need books that
+	// appear in the user's progress list, which is a tiny subset of 15k books.
 	let recentBooks: Book[] = [];
 	let progressList: Progress[] = [];
 	let stats = { totalBooks: 0, totalChapters: 0 };

 	try {
-		[allBooks, recentBooks, progressList, stats] = await Promise.all([
-			listBooks(),
+		[recentBooks, progressList, stats] = await Promise.all([
 			recentlyAddedBooks(8),
 			allProgress(locals.sessionId, locals.user?.id),
 			getHomeStats()

@@ -26,8 +27,14 @@ export const load: PageServerLoad = async ({ locals }) => {
 		log.error('home', 'failed to load home data', { err: String(e) });
 	}

-	// Build slug → book lookup
-	const bookMap = new Map<string, Book>(allBooks.map((b) => [b.slug, b]));
+	// Step 2: fetch only the books we actually need for continue-reading.
+	// This is O(progress entries) instead of O(15k books).
+	const progressSlugs = progressList.map((p) => p.slug);
+	const progressBooks = progressSlugs.length > 0
+		? await getBooksBySlugs(progressSlugs).catch(() => [] as Book[])
+		: [];
+	const bookMap = new Map<string, Book>(progressBooks.map((b) => [b.slug, b]));

 	// Continue reading: progress entries joined with book data, most recent first
 	const continueReading = progressList

View File

@@ -1,7 +1,7 @@
 import { json } from '@sveltejs/kit';
 import type { RequestHandler } from './$types';
 import {
-	listBooks,
+	getBooksBySlugs,
 	recentlyAddedBooks,
 	allProgress,
 	getHomeStats,

@@ -17,14 +17,12 @@ import type { Book, Progress } from '$lib/server/pocketbase';
  * Requires authentication (enforced by layout guard).
  */
 export const GET: RequestHandler = async ({ locals }) => {
-	let allBooks: Book[] = [];
 	let recentBooks: Book[] = [];
 	let progressList: Progress[] = [];
 	let stats = { totalBooks: 0, totalChapters: 0 };

 	try {
-		[allBooks, recentBooks, progressList, stats] = await Promise.all([
-			listBooks(),
+		[recentBooks, progressList, stats] = await Promise.all([
 			recentlyAddedBooks(8),
 			allProgress(locals.sessionId, locals.user?.id),
 			getHomeStats()

@@ -33,7 +31,13 @@ export const GET: RequestHandler = async ({ locals }) => {
 		log.error('api/home', 'failed to load home data', { err: String(e) });
 	}

-	const bookMap = new Map<string, Book>(allBooks.map((b) => [b.slug, b]));
+	// Fetch only the books we actually need for continue-reading.
+	const progressSlugs = progressList.map((p) => p.slug);
+	const progressBooks = progressSlugs.length > 0
+		? await getBooksBySlugs(progressSlugs).catch(() => [] as Book[])
+		: [];
+	const bookMap = new Map<string, Book>(progressBooks.map((b) => [b.slug, b]));

 	const continueReading = progressList
 		.filter((p) => bookMap.has(p.slug))

View File

@@ -1,7 +1,8 @@
 import { json } from '@sveltejs/kit';
 import type { RequestHandler } from './$types';
-import { listBooks, allProgress, getSavedSlugs } from '$lib/server/pocketbase';
+import { getBooksBySlugs, allProgress, getSavedSlugs } from '$lib/server/pocketbase';
 import { log } from '$lib/server/logger';
+import type { Book } from '$lib/server/pocketbase';

 /**
  * GET /api/library

@@ -11,23 +12,25 @@ import { log } from '$lib/server/logger';
  * Response shape mirrors LibraryItem in the iOS APIClient.
  */
 export const GET: RequestHandler = async ({ locals }) => {
-	let allBooks: Awaited<ReturnType<typeof listBooks>>;
-	let progressList: Awaited<ReturnType<typeof allProgress>>;
-	let savedSlugs: Set<string>;
+	let progressList: Awaited<ReturnType<typeof allProgress>> = [];
+	let savedSlugs: Set<string> = new Set();

 	try {
-		[allBooks, progressList, savedSlugs] = await Promise.all([
-			listBooks(),
+		[progressList, savedSlugs] = await Promise.all([
 			allProgress(locals.sessionId, locals.user?.id),
 			getSavedSlugs(locals.sessionId, locals.user?.id)
 		]);
 	} catch (e) {
 		log.error('api/library', 'failed to load library data', { err: String(e) });
-		allBooks = [];
-		progressList = [];
-		savedSlugs = new Set();
 	}

+	// Fetch only the books the user actually has in their library.
+	const progressSlugs = new Set(progressList.map((p) => p.slug));
+	const allNeededSlugs = new Set([...progressSlugs, ...savedSlugs]);
+	const books = allNeededSlugs.size > 0
+		? await getBooksBySlugs(allNeededSlugs).catch(() => [] as Book[])
+		: [];
+
 	const progressMap: Record<string, number> = {};
 	const progressUpdatedMap: Record<string, string> = {};
 	for (const p of progressList) {

@@ -35,9 +38,6 @@ export const GET: RequestHandler = async ({ locals }) => {
 		progressUpdatedMap[p.slug] = p.updated;
 	}

-	const progressSlugs = new Set(progressList.map((p) => p.slug));
-	const books = allBooks.filter((b) => progressSlugs.has(b.slug) || savedSlugs.has(b.slug));
 	const withProgress = books.filter((b) => progressSlugs.has(b.slug));
 	const savedOnly = books
 		.filter((b) => !progressSlugs.has(b.slug))

View File

@@ -1,48 +1,42 @@
 import { error } from '@sveltejs/kit';
 import type { PageServerLoad } from './$types';
-import { listBooks, allProgress, getSavedSlugs } from '$lib/server/pocketbase';
+import { getBooksBySlugs, allProgress, getSavedSlugs } from '$lib/server/pocketbase';
 import { log } from '$lib/server/logger';
 import type { Book } from '$lib/server/pocketbase';

 export const load: PageServerLoad = async ({ locals }) => {
-	let allBooks: Awaited<ReturnType<typeof listBooks>>;
-	let progressList: Awaited<ReturnType<typeof allProgress>>;
-	let savedSlugs: Set<string>;
+	let progressList: Awaited<ReturnType<typeof allProgress>> = [];
+	let savedSlugs: Set<string> = new Set();

 	try {
-		[allBooks, progressList, savedSlugs] = await Promise.all([
-			listBooks(),
+		[progressList, savedSlugs] = await Promise.all([
 			allProgress(locals.sessionId, locals.user?.id),
 			getSavedSlugs(locals.sessionId, locals.user?.id)
 		]);
 	} catch (e) {
 		log.error('books', 'failed to load library data', { err: String(e) });
-		allBooks = [];
-		progressList = [];
-		savedSlugs = new Set();
 	}

+	// Fetch only the books the user actually has in their library.
+	const progressSlugs = new Set(progressList.map((p) => p.slug));
+	const allNeededSlugs = new Set([...progressSlugs, ...savedSlugs]);
+	const books = allNeededSlugs.size > 0
+		? await getBooksBySlugs(allNeededSlugs).catch(() => [] as Book[])
+		: [];
+
 	// Build a quick lookup: slug → last chapter read
 	const progressMap: Record<string, number> = {};
+	const progressUpdatedMap: Record<string, string> = {};
 	for (const p of progressList) {
 		progressMap[p.slug] = p.chapter;
+		progressUpdatedMap[p.slug] = p.updated;
 	}

-	// Library = books the user has started reading OR explicitly saved
-	const progressSlugs = new Set(progressList.map((p) => p.slug));
-	const books = allBooks.filter((b) => progressSlugs.has(b.slug) || savedSlugs.has(b.slug));
-	// Sort: books with progress first (most-recently-read order is implicit via progressList),
-	// then saved-only books alphabetically.
+	// Sort: books with progress first (most-recently-read), then saved-only alphabetically.
 	const withProgress = books.filter((b) => progressSlugs.has(b.slug));
 	const savedOnly = books
 		.filter((b) => !progressSlugs.has(b.slug))
 		.sort((a, b) => (a.title ?? '').localeCompare(b.title ?? ''));

 	// Re-sort withProgress by most recent progress update
-	const progressUpdatedMap: Record<string, string> = {};
-	for (const p of progressList) {
-		progressUpdatedMap[p.slug] = p.updated;
-	}
 	withProgress.sort((a, b) => {
 		const ta = progressUpdatedMap[a.slug] ?? '';
 		const tb = progressUpdatedMap[b.slug] ?? '';

View File

@@ -2,7 +2,12 @@ import { sveltekit } from '@sveltejs/kit/vite';
 import tailwindcss from '@tailwindcss/vite';
 import { defineConfig } from 'vite';

+// Source maps are always generated so that the CI pipeline can upload them to
+// GlitchTip via glitchtip-cli after a release build.
 export default defineConfig({
+	build: {
+		sourcemap: true
+	},
 	plugins: [tailwindcss(), sveltekit()],
 	ssr: {
 		// Force these packages to be bundled into the server output rather than