Compare commits

...

10 Commits

Author SHA1 Message Date
46129d89ae fix: satisfy newer GitHub clippy lints 2026-04-16 17:13:02 -04:00
df771d3f7c fix: satisfy CI clippy lints 2026-04-16 17:10:14 -04:00
b0646e2629 chore: release v0.3.1-rc.5 2026-04-16 11:37:48 -04:00
c454de6116 chore: release v0.3.1-rc.4 2026-04-16 07:01:46 -04:00
c26c2d4420 style: format executor, pipeline, and notifications
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-13 16:20:58 -04:00
f511f1c084 fix: remove old db.rs and format db/ submodules
The db.rs → db/ split left the old file tracked, causing
"module found at both db.rs and db/mod.rs" on CI. Also
fixes import ordering flagged by cargo fmt.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-13 16:16:48 -04:00
e50ca64e80 Resolve audit findings + split db.rs into db/ module
- P1: Fix cancel race in pipeline, fix VideoToolbox quality mapping
- P2: SSRF protection, batch cancel N+1, archived filter fixes,
  metadata persistence, reverse proxy hardening, reprobe logging
- TD: Remove AlchemistEvent legacy bridge, fix silent .ok() on DB
  writes, optimize sort-by-size query, split db.rs (3400 LOC) into
  8 focused submodules under src/db/
- UX: Add queue position display for queued jobs
- Docs: Update API docs, engine modes, library doctor, config ref
- Plans: Add plans.md for remaining open items (UX-2/3, FG-4, RG-2)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-13 16:02:11 -04:00
5ca33835f1 chore: release v0.3.1-rc.3 2026-04-12 10:25:12 -04:00
3e9b0f0290 fix bug with set up causing unreachable set up page, fix bug not allowing shutdown to occur. 2026-04-09 14:58:52 -04:00
f47e90c658 fix bug with set up causing unreachable set up page 2026-04-08 11:39:36 -04:00
125 changed files with 12757 additions and 6715 deletions


@@ -0,0 +1,67 @@
---
name: caveman
description: >
Ultra-compressed communication mode. Cuts token usage ~75% by speaking like caveman
while keeping full technical accuracy. Supports intensity levels: lite, full (default), ultra,
wenyan-lite, wenyan-full, wenyan-ultra.
Use when user says "caveman mode", "talk like caveman", "use caveman", "less tokens",
"be brief", or invokes /caveman. Also auto-triggers when token efficiency is requested.
---
Respond terse like smart caveman. All technical substance stay. Only fluff die.
## Persistence
ACTIVE EVERY RESPONSE. No revert after many turns. No filler drift. Still active if unsure. Off only: "stop caveman" / "normal mode".
Default: **full**. Switch: `/caveman lite|full|ultra`.
## Rules
Drop: articles (a/an/the), filler (just/really/basically/actually/simply), pleasantries (sure/certainly/of course/happy to), hedging. Fragments OK. Short synonyms (big not extensive, fix not "implement a solution for"). Technical terms exact. Code blocks unchanged. Errors quoted exact.
Pattern: `[thing] [action] [reason]. [next step].`
Not: "Sure! I'd be happy to help you with that. The issue you're experiencing is likely caused by..."
Yes: "Bug in auth middleware. Token expiry check use `<` not `<=`. Fix:"
## Intensity
| Level | What change |
|-------|------------|
| **lite** | No filler/hedging. Keep articles + full sentences. Professional but tight |
| **full** | Drop articles, fragments OK, short synonyms. Classic caveman |
| **ultra** | Abbreviate (DB/auth/config/req/res/fn/impl), strip conjunctions, arrows for causality (X → Y), one word when one word enough |
| **wenyan-lite** | Semi-classical. Drop filler/hedging but keep grammar structure, classical register |
| **wenyan-full** | Maximum classical terseness. Fully 文言文. 80-90% character reduction. Classical sentence patterns, verbs precede objects, subjects often omitted, classical particles (之/乃/為/其) |
| **wenyan-ultra** | Extreme abbreviation while keeping classical Chinese feel. Maximum compression, ultra terse |
Example — "Why React component re-render?"
- lite: "Your component re-renders because you create a new object reference each render. Wrap it in `useMemo`."
- full: "New object ref each render. Inline object prop = new ref = re-render. Wrap in `useMemo`."
- ultra: "Inline obj prop → new ref → re-render. `useMemo`."
- wenyan-lite: "組件頻重繪,以每繪新生對象參照故。以 useMemo 包之。"
- wenyan-full: "物出新參照致重繪。useMemo Wrap之。"
- wenyan-ultra: "新參照→重繪。useMemo Wrap。"
Example — "Explain database connection pooling."
- lite: "Connection pooling reuses open connections instead of creating new ones per request. Avoids repeated handshake overhead."
- full: "Pool reuse open DB connections. No new connection per request. Skip handshake overhead."
- ultra: "Pool = reuse DB conn. Skip handshake → fast under load."
- wenyan-full: "池reuse open connection。不每req新開。skip handshake overhead。"
- wenyan-ultra: "池reuse conn。skip handshake → fast。"
## Auto-Clarity
Drop caveman for: security warnings, irreversible action confirmations, multi-step sequences where fragment order risks misread, user asks to clarify or repeats question. Resume caveman after clear part done.
Example — destructive op:
> **Warning:** This will permanently delete all rows in the `users` table and cannot be undone.
> ```sql
> DROP TABLE users;
> ```
> Caveman resume. Verify backup exist first.
## Boundaries
Code/commits/PRs: write normal. "stop caveman" or "normal mode": revert. Level persist until changed or session end.


@@ -12,7 +12,32 @@
"Bash(bash --version)",
"Bash(git tag:*)",
"Bash(cargo clippy:*)",
"Bash(bun run:*)"
"Bash(bun run:*)",
"Bash(ls /Users/brooklyn/data/alchemist/*.md)",
"Bash(ls /Users/brooklyn/data/alchemist/docs/*.md)",
"Bash(npx skills:*)",
"Bash(find /Users/brooklyn/data/alchemist/web -name tailwind.config.* -o -name *.config.ts -o -name *.config.js)",
"Bash(just check-web:*)",
"Bash(git stash:*)",
"Bash(just test-e2e:*)",
"Bash(bunx tsc:*)",
"Bash(wait)",
"Bash(npx playwright:*)",
"Bash(just check-rust:*)",
"Bash(cargo fmt:*)",
"Bash(cargo test:*)",
"Bash(just check:*)",
"Bash(just test:*)",
"Bash(find /Users/brooklyn/data/alchemist -name *.sql -o -name *migration*)",
"Bash(grep -l \"DROP\\\\|RENAME\\\\|DELETE FROM\" /Users/brooklyn/data/alchemist/migrations/*.sql)",
"Bash(just test-filter:*)",
"Bash(npx tsc:*)",
"Bash(find /Users/brooklyn/data/alchemist -type f -name *.rs)",
"Bash(ls -la /Users/brooklyn/data/alchemist/src/*.rs)",
"Bash(grep -rn \"from_alchemist_event\\\\|AlchemistEvent.*JobEvent\\\\|JobEvent.*AlchemistEvent\" /Users/brooklyn/data/alchemist/src/ --include=*.rs)",
"Bash(grep -l AlchemistEvent /Users/brooklyn/data/alchemist/src/*.rs /Users/brooklyn/data/alchemist/src/**/*.rs)",
"Bash(/tmp/audit_report.txt:*)",
"Read(//tmp/**)"
]
}
}

.claude/skills/caveman Symbolic link

@@ -0,0 +1 @@
../../.agents/skills/caveman

.idea/alchemist.iml generated

@@ -1,5 +1,10 @@
<?xml version="1.0" encoding="UTF-8"?>
<module type="EMPTY_MODULE" version="4">
<component name="FacetManager">
<facet type="Python" name="Python facet">
<configuration sdkName="" />
</facet>
</component>
<component name="NewModuleRootManager">
<content url="file://$MODULE_DIR$">
<sourceFolder url="file://$MODULE_DIR$/src" isTestSource="false" />


@@ -0,0 +1,8 @@
<component name="InspectionProjectProfileManager">
<profile version="1.0">
<option name="myName" value="Project Default" />
<inspection_tool class="CyclomaticComplexityInspection" enabled="true" level="WARNING" enabled_by_default="true" />
<inspection_tool class="SqlNoDataSourceInspection" enabled="false" level="WARNING" enabled_by_default="false" />
<inspection_tool class="TsLint" enabled="true" level="WARNING" enabled_by_default="true" />
</profile>
</component>

.idea/misc.xml generated Normal file

@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="Black">
<option name="sdkName" value="Python 3.14 (alchemist)" />
</component>
</project>


@@ -2,6 +2,72 @@
All notable changes to this project will be documented in this file.
## [0.3.1-rc.5] - 2026-04-16
### Reliability & Stability
- **Segment-based encode resume** — interrupted encode jobs now persist resume sessions and completed segments so restart and recovery flows can continue without discarding all completed work.
- **Notification target compatibility hardening** — notification target reads/writes now preserve the additive migration path, tolerate legacy shapes, and avoid duplicate-delete projection bugs in settings management.
- **Daily summary reliability** — summary delivery now retries safely after transient failures and avoids duplicate sends across restart boundaries by persisting the last successful day.
- **Job-detail correctness** — completed-job detail loading now fails closed on database errors instead of returning partial `200 OK` payloads, and encode stat duration fallback uses the encoded output rather than the source file.
- **Auth and settings safety** — login now returns server errors for real database failures, and duplicate notification/schedule rows no longer disappear together from a single delete action.
### Jobs & UX
- **Manual enqueue flow** — the jobs UI now supports enqueueing a single absolute file path through the same backend dedupe and output rules used by library scans.
- **Queued-job visibility** — job detail now exposes queue position and processor blocked reasons so operators can see why a queued job is not starting.
- **Attempt-history surfacing** — job detail now shows encode attempt history directly in the modal, including outcome, timing, and captured failure summary.
- **Jobs UI follow-through** — the `JobManager` refactor now ships with dedicated controller/dialog helpers and tighter SSE reconciliation so filtered tables and open detail modals stay aligned with backend truth.
- **Intelligence actions** — remux recommendations and duplicate candidates are now actionable directly from the Intelligence page.
## [0.3.1-rc.3] - 2026-04-12
### New Features
#### Job Management Refactor
- **Componentized Job Manager** — extracted monolithic `JobManager.tsx` into a modular suite under `web/src/components/jobs/`, including dedicated components for the toolbar, table, and detail modal.
- **Enhanced Job Detail Modal** — rebuilt the job detail view with better loading states, smoother transitions, and improved information hierarchy for analysis, decisions, and failure reasons.
- **Job SSE Hook** — unified job-related Server-Sent Events logic into a custom `useJobSSE` hook for better state management and reduced re-renders.
#### Themes & UX
- **Midnight OLED+** — enhanced the `midnight` theme with true-black surfaces and suppressed decorative gradients to maximize OLED power savings.
- **Improved Toasts** — toast notifications now feature a high-quality backdrop blur and refined border styling for better visibility against busy backgrounds.
#### Reliability & Observability
- **Engine Lifecycle Specs** — added a comprehensive Playwright suite for validating engine transitions (Running -> Draining -> Paused -> Stopped).
- **Planner & Lifecycle Docs** — added detailed technical documentation for the transcoding planner logic and engine state machine.
- **Encode Attempt Tracking** — added a database migration to track individual encode attempts, laying the groundwork for more granular retry statistics.
#### Hardware & Performance
- **Concurrency & Speed Optimizations** — internal refinements to the executor and processor to improve hardware utilization and address reported speed issues on certain platforms.
- **Backlog Grooming** — updated `TODO.md` with a focus on validating AMF and VAAPI AV1 hardware encoders.
## [0.3.1-rc.1] - 2026-04-08
### New Features
#### Conversion & Library Workflows
- **Experimental Conversion / Remux page** — upload a single file, inspect streams, preview the generated FFmpeg command, run a remux/transcode job through Alchemist, and download the result when complete.
- **Expanded Library Intelligence** — duplicate detection now sits alongside storage-focused recommendation sections for remux-only opportunities, wasteful audio layouts, and commentary/descriptive-track cleanup candidates.
#### Authentication & Automation
- **Named API tokens** — create bearer tokens from Settings with `read_only` or `full_access` access classes. Tokens are only shown once at creation time and stored server-side as hashes.
- **OpenAPI contract** — hand-maintained OpenAPI spec added alongside expanded human API docs for auth, token management, and update-check behavior.
#### Notifications
- **Provider-specific notification targets** — notification settings now use provider-specific configuration payloads instead of the old shared endpoint/token shape.
- **Provider expansion** — Discord webhook, Discord bot, Gotify, generic webhook, Telegram, and SMTP email targets are supported.
- **Richer event model** — notification events now distinguish queue/start/completion/failure plus scan completion, engine idle, and daily summary delivery.
- **Daily summary scheduling** — notifications include a global `daily_summary_time_local` setting and per-target opt-in for digest delivery.
#### Deployment & Distribution
- **Windows update check** — the About dialog now checks GitHub Releases for the latest stable version and links directly to the release download page when an update is available.
- **Distribution metadata generation** — in-repo Homebrew and AUR packaging templates plus workflow rendering were added as the foundation for package-manager distribution.
### Documentation
- **Config path clarity** — docs now consistently describe `~/.config/alchemist/config.toml` as the default host-side config location on Linux/macOS, while Docker examples still use `/app/config/config.toml` inside the container.
- **Backlog realignment** — the backlog was rewritten around current repo reality, marking large newly implemented surfaces as “Implemented / In Progress” and keeping the roadmap automation-first.
## [0.3.0] - 2026-04-06
### Security


@@ -44,6 +44,8 @@ just test-e2e-headed # E2e with browser visible
Integration tests require FFmpeg and FFprobe installed locally.
Integration tests live in `tests/` — notably `integration_db_upgrade.rs` tests schema migrations against a v0.2.5 baseline database. Every migration must pass this.
### Database
```bash
just db-reset # Wipe dev database (keeps config)
@@ -53,6 +55,10 @@ just db-shell # SQLite shell
## Architecture
### Clippy Strictness
CI enforces `-D clippy::unwrap_used` and `-D clippy::expect_used`. Use `?` propagation or explicit match — no `.unwrap()` or `.expect()` in production code paths.
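A minimal sketch of the preferred shape (function names hypothetical):

```rust
use std::num::ParseIntError;

// Disallowed in production paths: raw.parse::<u16>().unwrap()
// Preferred: propagate the error with `?` so the caller decides.
fn parse_port(raw: &str) -> Result<u16, ParseIntError> {
    let port: u16 = raw.trim().parse()?;
    Ok(port)
}

// Or match explicitly when a fallback is acceptable.
fn parse_port_or_default(raw: &str) -> u16 {
    match raw.trim().parse() {
        Ok(p) => p,
        Err(_) => 3000,
    }
}

fn main() {
    assert_eq!(parse_port("8080").ok(), Some(8080));
    assert!(parse_port("nope").is_err());
    assert_eq!(parse_port_or_default("nope"), 3000);
}
```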
### Rust Backend (`src/`)
The backend is structured around a central `AppState` (holding SQLite pool, config, broadcast channels) passed to Axum handlers:
@@ -77,15 +83,32 @@ The backend is structured around a central `AppState` (holding SQLite pool, conf
- `pipeline.rs` — Orchestrates scan → analyze → plan → execute
- `processor.rs` — Job queue controller (concurrency, pausing, draining)
- `ffmpeg/` — FFmpeg command builder and progress parser, with platform-specific encoder modules
- **`orchestrator.rs`** — Spawns and monitors FFmpeg processes, streams progress back via channels
- **`orchestrator.rs`** — Spawns and monitors FFmpeg processes, streams progress back via channels. Uses `std::sync::Mutex` (not tokio) intentionally — critical sections never cross `.await` boundaries.
- **`system/`** — Hardware detection (`hardware.rs`), file watcher (`watcher.rs`), library scanner (`scanner.rs`)
- **`scheduler.rs`** — Off-peak cron scheduling
- **`notifications.rs`** — Discord, Gotify, Webhook integrations
- **`wizard.rs`** — First-run setup flow
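The `orchestrator.rs` mutex note can be sketched as a simplified, synchronous illustration (types hypothetical): lock, copy, and release inside one expression, so no guard is held at any later `.await` point.

```rust
use std::sync::Mutex;

struct Orchestrator {
    // std::sync::Mutex is fine here because no guard ever crosses an .await.
    active_jobs: Mutex<Vec<i64>>,
}

impl Orchestrator {
    fn snapshot(&self) -> Vec<i64> {
        // Lock, clone, and drop the guard within this single expression.
        self.active_jobs.lock().map(|g| g.clone()).unwrap_or_default()
    }
}

fn main() {
    let o = Orchestrator { active_jobs: Mutex::new(vec![1, 2]) };
    let ids = o.snapshot();
    // ...in async code, an .await would go here, with no guard held...
    assert_eq!(ids, vec![1, 2]);
}
```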
#### Event Channel Architecture
Three typed broadcast channels in `AppState` (defined in `db.rs`):
- `jobs` (capacity 1000) — high-frequency: progress, state changes, decisions, logs
- `config` (capacity 50) — watch folder changes, settings updates
- `system` (capacity 100) — scan lifecycle, hardware state changes
`sse.rs` merges all three via `futures::stream::select_all`. SSE is capped at 50 concurrent connections (`MAX_SSE_CONNECTIONS`), enforced with a RAII guard that decrements on stream drop.
`AlchemistEvent` still exists as a legacy bridge; `JobEvent` is the canonical type — new code uses `JobEvent`/`ConfigEvent`/`SystemEvent`.
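The connection-cap RAII pattern can be illustrated with a simplified `AtomicUsize` sketch (names are stand-ins; the real guard and its placement may differ):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

const MAX_SSE_CONNECTIONS: usize = 50;

// Decrements the live-connection counter when the SSE stream is dropped.
struct ConnectionGuard(Arc<AtomicUsize>);

impl Drop for ConnectionGuard {
    fn drop(&mut self) {
        self.0.fetch_sub(1, Ordering::SeqCst);
    }
}

fn try_acquire(count: &Arc<AtomicUsize>) -> Option<ConnectionGuard> {
    if count.fetch_add(1, Ordering::SeqCst) >= MAX_SSE_CONNECTIONS {
        count.fetch_sub(1, Ordering::SeqCst); // over the cap: roll back and refuse
        return None;
    }
    Some(ConnectionGuard(Arc::clone(count)))
}

fn main() {
    let count = Arc::new(AtomicUsize::new(0));
    {
        let _g = try_acquire(&count).expect("under cap");
        assert_eq!(count.load(Ordering::SeqCst), 1);
    } // guard dropped here — counter decremented even on early stream drop
    assert_eq!(count.load(Ordering::SeqCst), 0);
}
```

Because the decrement lives in `Drop`, a client disconnecting mid-stream still releases its slot.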
#### FFmpeg Command Builder
`FFmpegCommandBuilder<'a>` in `src/media/ffmpeg/mod.rs` uses lifetime references to avoid cloning input/output paths. `.with_hardware(Option<&HardwareInfo>)` injects hardware flags; `.build_args()` returns `Vec<String>` for unit testing without spawning a process. Each hardware platform is a submodule (amf, cpu, nvenc, qsv, vaapi, videotoolbox). `EncoderCapabilities` is detected once via live ffmpeg invocation and cached in `OnceLock`.
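A simplified sketch of the builder shape described above — the fields and the hardware parameter are stand-ins (the real builder takes `Option<&HardwareInfo>`), but the lifetime-borrowed paths and testable `build_args()` mirror the design:

```rust
// Hypothetical, simplified shape: lifetime references avoid cloning the
// input/output paths, and build_args() is unit-testable without spawning ffmpeg.
struct FFmpegCommandBuilder<'a> {
    input: &'a str,
    output: &'a str,
    hw_args: Vec<String>,
}

impl<'a> FFmpegCommandBuilder<'a> {
    fn new(input: &'a str, output: &'a str) -> Self {
        Self { input, output, hw_args: Vec::new() }
    }

    // Stand-in for .with_hardware(Option<&HardwareInfo>): injects encoder flags.
    fn with_hardware(mut self, hw: Option<&str>) -> Self {
        if let Some(encoder) = hw {
            self.hw_args = vec!["-c:v".into(), encoder.into()];
        }
        self
    }

    fn build_args(&self) -> Vec<String> {
        let mut args = vec!["-i".to_string(), self.input.to_string()];
        args.extend(self.hw_args.iter().cloned());
        args.push(self.output.to_string());
        args
    }
}

fn main() {
    let args = FFmpegCommandBuilder::new("in.mkv", "out.mkv")
        .with_hardware(Some("hevc_videotoolbox"))
        .build_args();
    assert_eq!(args, ["-i", "in.mkv", "-c:v", "hevc_videotoolbox", "out.mkv"]);
}
```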
### Frontend (`web/src/`)
Astro pages with React islands. UI reflects backend state via Server-Sent Events (SSE) — avoid optimistic UI unless reconciled with backend truth.
Astro pages (`web/src/pages/`) with React islands. UI reflects backend state via SSE — avoid optimistic UI unless reconciled with backend truth.
Job management UI is split into focused subcomponents under `web/src/components/jobs/`: `JobsTable`, `JobDetailModal`, `JobsToolbar`, `JobExplanations`, `useJobSSE.ts` (SSE hook), and `types.ts` (shared types + pure data utilities). `JobManager.tsx` is the parent that owns state and wires them together.
### Database Schema

Cargo.lock generated

@@ -13,7 +13,7 @@ dependencies = [
[[package]]
name = "alchemist"
version = "0.3.1-rc.1"
version = "0.3.1-rc.5"
dependencies = [
"anyhow",
"argon2",


@@ -1,6 +1,6 @@
[package]
name = "alchemist"
version = "0.3.1-rc.1"
version = "0.3.1-rc.5"
edition = "2024"
rust-version = "1.85"
license = "GPL-3.0"


@@ -74,6 +74,7 @@ services:
```
Then open [http://localhost:3000](http://localhost:3000) in your browser.
First-time setup is only reachable from the local network.
On Linux and macOS, the default host-side config location is
`~/.config/alchemist/config.toml`. When you use Docker, the
@@ -132,10 +133,26 @@ just check
The core contributor path is supported on Windows. Broader release and utility recipes remain Unix-first.
## CLI
Alchemist exposes explicit CLI subcommands:
```bash
alchemist scan /path/to/media
alchemist run /path/to/media
alchemist plan /path/to/media
alchemist plan /path/to/media --json
```
- `scan` enqueues matching work and exits
- `run` scans, enqueues, and waits for processing to finish
- `plan` analyzes files and reports what Alchemist would do without enqueuing jobs
## First Run
1. Open [http://localhost:3000](http://localhost:3000).
2. Complete the setup wizard. It takes about 2 minutes.
During first-time setup, the web UI is reachable only from the local network.
3. Add your media folders in Watch Folders.
4. Alchemist scans and starts working automatically.
5. Check the Dashboard to see progress and savings.
@@ -144,8 +161,6 @@ The core contributor path is supported on Windows. Broader release and utility r
- API automation can use bearer tokens created in **Settings → API Tokens**.
- Read-only tokens are limited to observability and monitoring routes.
- Alchemist can also be served under a subpath such as `/alchemist`
using `ALCHEMIST_BASE_URL=/alchemist`.
## Supported Platforms


@@ -30,15 +30,15 @@ Then complete the release-candidate preflight:
Promote to stable only after the RC burn-in is complete and the same automated preflight is still green.
1. Run `just bump 0.3.0`.
1. Run `just bump 0.3.1`.
2. Update `CHANGELOG.md` and `docs/docs/changelog.md` for the stable cut.
3. Run `just release-check`.
4. Re-run the manual smoke checklist against the final release artifacts:
- Docker fresh install
- Packaged binary first-run
- Upgrade from the most recent `0.2.x` or `0.3.0-rc.x`
- Upgrade from the most recent `0.2.x` or `0.3.1-rc.x`
- Encode, skip, failure, and notification verification
5. Re-run the Windows contributor verification checklist if Windows parity changed after the last RC.
6. Confirm release notes, docs, and hardware-support wording match the tested release state.
7. Merge the stable release commit to `main`.
8. Create the annotated tag `v0.3.0` on the exact merged commit.
8. Create the annotated tag `v0.3.1` on the exact merged commit.

TODO.md Normal file

@@ -0,0 +1,8 @@
# Todo List
## AMD / VAAPI / AMF
- Validate `av1_vaapi` on real Linux VAAPI hardware — confirm encode succeeds with current args.
- Validate `av1_amf` on real Windows AMF hardware — confirm encode succeeds with current args.
- If either encoder needs quality/rate-control params, apply the same pattern as the VideoToolbox fix (add `rate_control: Option<&RateControl>` to `vaapi::append_args` and `amf::append_args`).
- Update support claims in README and UI only after validation passes.
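The suggested pattern might look like the sketch below — the `RateControl` variant and the `-qp` flag are hypothetical placeholders, not validated AMF/VAAPI flags:

```rust
// Hypothetical sketch of the suggested signature change; the real RateControl
// and append_args live in src/media/ffmpeg/.
enum RateControl {
    Cq(u8),
}

fn append_args(args: &mut Vec<String>, rate_control: Option<&RateControl>) {
    match rate_control {
        Some(RateControl::Cq(q)) => {
            args.push("-qp".to_string()); // placeholder flag — unvalidated on AMF/VAAPI
            args.push(q.to_string());
        }
        None => {} // fall back to encoder defaults
    }
}

fn main() {
    let mut args = Vec::new();
    append_args(&mut args, Some(&RateControl::Cq(28)));
    assert_eq!(args, ["-qp", "28"]);
}
```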


@@ -1 +1 @@
0.3.1-rc.1
0.3.1-rc.5

audit.md Normal file

@@ -0,0 +1,420 @@
# Audit Findings
Last updated: 2026-04-12 (second pass)
---
## P1 Issues
---
### [P1-1] Cancel during analysis can be overwritten by the pipeline
**Status: RESOLVED**
**Files:**
- `src/server/jobs.rs:4163`
- `src/media/pipeline.rs:1178-1221`
- `src/orchestrator.rs:84-90`
**Severity:** P1
**Problem:**
`request_job_cancel()` in `jobs.rs` immediately writes `Cancelled` to the DB for jobs in `Analyzing` or `Resuming` state. The pipeline used to have race windows where it could overwrite this state with `Skipped`, `Encoding`, or `Remuxing` if it reached a checkpoint after the cancel was issued but before it could be processed.
**Fix:**
Implemented `cancel_requested: Arc<tokio::sync::RwLock<HashSet<i64>>>` in `Transcoder` (orchestrator). The `update_job_state` wrapper in `pipeline.rs` now checks this set before any DB write for `Encoding`, `Remuxing`, `Skipped`, and `Completed` states. Terminal states (Completed, Failed, Cancelled, Skipped) also trigger removal from the set.
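A simplified synchronous sketch of the guard check (the real code uses `tokio::sync::RwLock`, covers `Remuxing`/`Skipped`/`Completed` as well, and performs async DB writes; names here are illustrative):

```rust
use std::collections::HashSet;
use std::sync::{Arc, RwLock};

#[derive(Debug, PartialEq)]
enum JobState { Encoding, Cancelled }

struct Transcoder {
    cancel_requested: Arc<RwLock<HashSet<i64>>>,
}

impl Transcoder {
    // Wrapper consulted before any DB state write that could mask a cancel.
    fn update_job_state(&self, job_id: i64, next: JobState) -> JobState {
        let cancelled = self
            .cancel_requested
            .read()
            .map(|set| set.contains(&job_id))
            .unwrap_or(false);
        if cancelled && next == JobState::Encoding {
            return JobState::Cancelled; // preserve the cancel, drop the overwrite
        }
        next
    }
}

fn main() {
    let t = Transcoder { cancel_requested: Arc::new(RwLock::new(HashSet::from([7]))) };
    assert_eq!(t.update_job_state(7, JobState::Encoding), JobState::Cancelled);
    assert_eq!(t.update_job_state(8, JobState::Encoding), JobState::Encoding);
}
```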
---
### [P1-2] VideoToolbox quality controls are effectively ignored
**Status: RESOLVED**
**Files:**
- `src/media/planner.rs:630-650`
- `src/media/ffmpeg/videotoolbox.rs:25-54`
- `src/config.rs:85-92`
**Severity:** P1
**Problem:**
The planner used to emit `RateControl::Cq` values that were incorrectly mapped for VideoToolbox, resulting in uncalibrated or inverted quality.
**Fix:**
Fixed the mapping in `videotoolbox.rs` to use `-q:v` (1-100, lower is better) and clamped the input range to 1-51 to match user expectations from x264/x265. Updated `QualityProfile` in `config.rs` to provide sane default values (24, 28, 32) for VideoToolbox quality.
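A hedged sketch of the clamp-and-map step — the linear scaling below is illustrative, not the shipped calibration:

```rust
// Clamp user-facing quality to the x264/x265-style 1-51 range, then map it
// onto a 1-100 -q:v scale. The linear mapping here is illustrative only.
fn videotoolbox_q(user_quality: u32) -> u32 {
    let cq = user_quality.clamp(1, 51);
    1 + (cq - 1) * 99 / 50
}

fn main() {
    assert_eq!(videotoolbox_q(1), 1);
    assert_eq!(videotoolbox_q(51), 100);
    assert!(videotoolbox_q(200) <= 100); // out-of-range input is clamped first
}
```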
---
## P2 Issues
---
### [P2-1] Convert does not reuse subtitle/container compatibility checks
**Status: RESOLVED**
**Files:**
- `src/conversion.rs:372-380`
- `src/media/planner.rs`
**Severity:** P2
**Problem:**
The conversion path was not validating subtitle/container compatibility, leading to FFmpeg runtime failures instead of early validation errors.
**Fix:**
Integrated `crate::media::planner::subtitle_copy_supported` into `src/conversion.rs:build_subtitle_plan`. The "copy" mode now returns an `AlchemistError::Config` if the combination is unsupported.
---
### [P2-2] Completed job metadata omitted at the API layer
**Status: RESOLVED**
**Files:**
- `src/db.rs:254-263`
- `src/media/pipeline.rs:599`
- `src/server/jobs.rs:343`
**Severity:** P2
**Problem:**
Job details required a live re-probe of the input file to show metadata, which failed if the file was moved or deleted after completion.
**Fix:**
Added `input_metadata_json` column to the `jobs` table (migration `20260412000000_store_job_metadata.sql`). The pipeline now stores the metadata string immediately after analysis. `get_job_detail_handler` reads this stored value, ensuring metadata is always available even if the source file is missing.
---
### [P2-3] LAN-only setup exposed to reverse proxy misconfig
**Status: RESOLVED**
**Files:**
- `src/config.rs` — `SystemConfig.trusted_proxies`
- `src/server/mod.rs` — `AppState.trusted_proxies`, `AppState.setup_token`
- `src/server/middleware.rs` — `is_trusted_peer`, `request_ip`, `auth_middleware`
**Severity:** P2
**Problem:**
The setup wizard gate trusts all private/loopback IPs for header forwarding. When running behind a misconfigured proxy that doesn't set headers, it falls back to the proxy's own IP (e.g. 127.0.0.1), making the setup endpoint accessible to external traffic.
**Fix:**
Added two independent security layers:
1. `trusted_proxies: Vec<String>` to `SystemConfig`. When non-empty, only those exact IPs (plus loopback) are trusted for proxy header forwarding instead of all RFC-1918 ranges. Empty = previous behavior preserved.
2. `ALCHEMIST_SETUP_TOKEN` env var. When set, setup endpoints require `?token=<value>` query param regardless of client IP. Token mode takes precedence over IP-based LAN check.
---
### [P2-4] N+1 DB update in batch cancel
**Status: RESOLVED**
**Files:**
- `src/server/jobs.rs` — `batch_jobs_handler`
**Severity:** P2
**Problem:**
`batch_jobs_handler` for "cancel" action iterates over jobs and calls `request_job_cancel` which performs an individual `update_job_status` query per job. Cancelling a large number of jobs triggers N queries.
**Fix:**
Restructured the "cancel" branch in `batch_jobs_handler`. Orchestrator in-memory operations (`add_cancel_request`, `cancel_job`) still run per job, but all DB status updates are batched into a single `db.batch_cancel_jobs(&ids)` call (which already existed in db.rs). Immediate-resolution jobs (Queued, plus successfully signalled Analyzing/Resuming) are collected and written in one `UPDATE ... WHERE id IN (...)` query.
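The batched statement construction, sketched below for illustration — the real `batch_cancel_jobs` binds parameters through the DB driver rather than formatting SQL text:

```rust
// Build one UPDATE ... WHERE id IN (?, ?, ...) with a placeholder per job id,
// replacing N single-row updates with a single statement.
fn batch_cancel_sql(ids: &[i64]) -> String {
    let placeholders = ids.iter().map(|_| "?").collect::<Vec<_>>().join(", ");
    format!("UPDATE jobs SET status = 'cancelled' WHERE id IN ({placeholders})")
}

fn main() {
    assert_eq!(
        batch_cancel_sql(&[1, 2, 3]),
        "UPDATE jobs SET status = 'cancelled' WHERE id IN (?, ?, ?)"
    );
}
```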
---
### [P2-5] Missing archived filter in health and stats queries
**Status: RESOLVED**
**Files:**
- `src/db.rs` — `get_aggregated_stats`, `get_job_stats`, `get_health_summary`
**Severity:** P2
**Problem:**
`get_health_summary` and `get_aggregated_stats` (total_jobs) do not include `AND archived = 0`. Archived (deleted) jobs are incorrectly included in library health metrics and total job counts.
**Fix:**
Added `AND archived = 0` to all three affected queries: `total_jobs` and `completed_jobs` subqueries in `get_aggregated_stats`, the `GROUP BY status` query in `get_job_stats`, and both subqueries in `get_health_summary`. Updated tests that were asserting the old (incorrect) behavior.
---
### [P2-6] Daily summary notifications bypass SSRF protections
**Status: RESOLVED**
**Files:**
- `src/notifications.rs` — `build_safe_client()`, `send()`, `send_daily_summary_target()`
**Severity:** P2
**Problem:**
`send_daily_summary_target()` used `Client::new()` without any SSRF defences, while `send()` applied DNS timeout, private-IP blocking, no-redirect policy, and request timeout.
**Fix:**
Extracted all client-building logic into `build_safe_client(&self, target)` which applies the full SSRF defence stack. Both `send()` and `send_daily_summary_target()` now use this shared helper.
---
### [P2-7] Silent reprobe failure corrupts saved encode stats
**Status: RESOLVED**
**Files:**
- `src/media/pipeline.rs` — `finalize_job()` duration reprobe
**Severity:** P2
**Problem:**
When a completed encode's metadata has `duration_secs <= 0.0`, the pipeline reprobes the output file to get the actual duration. If reprobe fails, the error was silently swallowed via `.ok()` and duration defaulted to 0.0, poisoning downstream stats.
**Fix:**
Replaced `.ok().and_then().unwrap_or(0.0)` chain with explicit `match` that logs the error via `tracing::warn!` and falls through to 0.0. Existing guards at the stats computation lines already handle `duration <= 0.0` correctly — operators now see *why* stats are zeroed.
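The shape of the change, with stand-ins for the ffprobe call and `tracing::warn!`:

```rust
// Explicit match instead of .ok().and_then(...).unwrap_or(0.0): the fallback
// still lands on 0.0, but the failure is now logged before falling through.
fn resolve_duration(reprobe: Result<f64, String>) -> f64 {
    match reprobe {
        Ok(d) => d,
        Err(e) => {
            // stand-in for tracing::warn!
            eprintln!("warn: output reprobe failed, duration stats will be zeroed: {e}");
            0.0
        }
    }
}

fn main() {
    assert_eq!(resolve_duration(Ok(42.5)), 42.5);
    assert_eq!(resolve_duration(Err("no such file".into())), 0.0);
}
```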
---
## Technical Debt
---
### [TD-1] `db.rs` is a 3481-line monolith
**Status: RESOLVED**
**File:** `src/db/` (was `src/db.rs`)
**Severity:** TD
**Problem:**
The database layer had grown to nearly 3500 lines. Every query, migration flag, and state enum was in one file, making navigation and maintenance difficult.
**Fix:**
Split into `src/db/` module with 8 submodules: `mod.rs` (Db struct, init, migrations, hash fns), `types.rs` (all type defs), `events.rs` (event enums + channels), `jobs.rs` (job CRUD/filtering/decisions), `stats.rs` (encode/aggregated/daily stats), `config.rs` (watch dirs/profiles/notifications/schedules/file settings/preferences), `conversion.rs` (ConversionJob CRUD), `system.rs` (auth/sessions/API tokens/logs/health). All tests moved alongside their methods. Public API unchanged — all types re-exported from `db/mod.rs`.
---
### [TD-2] `AlchemistEvent` legacy bridge is dead weight
**Status: RESOLVED**
**Files:**
- `src/db.rs` — enum and From impls removed
- `src/media/pipeline.rs`, `src/media/executor.rs`, `src/media/processor.rs` — legacy `tx` channel removed
- `src/notifications.rs` — migrated to typed `EventChannels` (jobs + system)
- `src/server/mod.rs`, `src/main.rs` — legacy channel removed from AppState/RunServerArgs
**Severity:** TD
**Problem:**
`AlchemistEvent` was a legacy event type duplicated by `JobEvent`, `ConfigEvent`, and `SystemEvent`. All senders were emitting events on both channels.
**Fix:**
Migrated the notification system (the sole consumer) to subscribe to `EventChannels.jobs` and `EventChannels.system` directly. Added `SystemEvent::EngineIdle` variant. Removed `AlchemistEvent` enum, its `From` impls, the legacy `tx` broadcast channel from all structs, and the `pub use` from `lib.rs`.
---
### [TD-3] `pipeline.rs` legacy `AlchemistEvent::Progress` stub
**Status: RESOLVED**
**Files:**
- `src/media/pipeline.rs:1228`
**Severity:** TD
**Problem:**
The pipeline used to emit zeroed progress events that could overwrite real stats from the executor.
**Fix:**
Emission removed. A comment at line 1228-1229 confirms that `AlchemistEvent::Progress` is no longer emitted from the pipeline wrapper.
---
### [TD-4] Silent `.ok()` on pipeline decision and attempt DB writes
**Status: RESOLVED**
**Files:**
- `src/media/pipeline.rs` — all `add_decision`, `insert_encode_attempt`, `upsert_job_failure_explanation`, and `add_log` call sites
**Severity:** TD
**Problem:**
Decision records, encode attempt records, failure explanations, and error logs were written with `.ok()` or `let _ =`, silently discarding DB write failures. These records are the only audit trail of *why* a job was skipped/transcoded/failed.
**Fix:**
Replaced all `.ok()` / `let _ =` patterns on `add_decision`, `insert_encode_attempt`, `upsert_job_failure_explanation`, and `add_log` calls with `if let Err(e) = ... { tracing::warn!(...) }`. Pipeline still continues on failure, but operators now see the error.
---
### [TD-5] Correlated subquery for sort-by-size in job listing
**Status: RESOLVED**
**Files:**
- `src/db.rs` — `get_jobs_filtered()` query
**Severity:** TD
**Problem:**
Sorting jobs by file size used a correlated subquery in ORDER BY, executing one subquery per row and producing NULL for jobs without encode_stats.
**Fix:**
Added `LEFT JOIN encode_stats es ON es.job_id = j.id` to the base query. Sort column changed to `COALESCE(es.input_size_bytes, 0)`, ensuring jobs without stats sort as 0 (smallest) instead of NULL.
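The same `COALESCE` semantics expressed in Rust terms (illustrative): `COALESCE(es.input_size_bytes, 0)` is `size.unwrap_or(0)`, so stat-less jobs sort as smallest rather than as NULL.

```rust
// Jobs as (id, Option<input_size_bytes>); None models a missing encode_stats row.
fn sort_by_size(jobs: &mut Vec<(i64, Option<u64>)>) {
    // Missing stats coalesce to 0, sorting those jobs first (smallest).
    jobs.sort_by_key(|(_, size)| size.unwrap_or(0));
}

fn main() {
    let mut jobs = vec![(1, Some(500)), (2, None), (3, Some(100))];
    sort_by_size(&mut jobs);
    assert_eq!(jobs, vec![(2, None), (3, Some(100)), (1, Some(500))]);
}
```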
---
## Reliability Gaps
---
### [RG-1] No encode resume after crash or restart
**Status: PARTIALLY RESOLVED**
**Files:**
- `src/main.rs:320`
- `src/media/processor.rs:255`
**Severity:** RG
**Problem:**
Interrupted encodes were left in `Encoding` state and orphaned temp files remained on disk.
**Fix:**
Implemented `db.reset_interrupted_jobs()` in `main.rs` which resets `Encoding`, `Remuxing`, `Resuming`, and `Analyzing` jobs to `Queued` on startup. Orphaned temp files are also detected and removed. Full bitstream-level resume (resuming from the middle of a file) is still missing.
---
### [RG-2] AMD VAAPI/AMF hardware paths unvalidated
**Files:**
- `src/media/ffmpeg/vaapi.rs`
- `src/media/ffmpeg/amf.rs`
**Severity:** RG
**Problem:**
Hardware paths for AMD (VAAPI on Linux, AMF on Windows) were implemented without real hardware validation.
**Fix:**
Verify exact flag compatibility on AMD hardware and add integration tests gated on GPU presence.
---
## UX Gaps
---
### [UX-1] Queued jobs show no position or estimated wait time
**Status: RESOLVED**
**Files:**
- `src/db.rs``get_queue_position`
- `src/server/jobs.rs``JobDetailResponse.queue_position`
- `web/src/components/jobs/JobDetailModal.tsx`
- `web/src/components/jobs/types.ts``JobDetail.queue_position`
**Severity:** UX
**Problem:**
Queued jobs only showed "Waiting" without indicating their position in the priority queue.
**Fix:**
Implemented `db.get_queue_position(job_id)` which counts jobs with higher priority or earlier `created_at` (matching the `priority DESC, created_at ASC` dequeue order). Added `queue_position: Option<u32>` to `JobDetailResponse` — populated only when `status == Queued`. Frontend shows `Queue position: #N` in the empty state card in `JobDetailModal`.
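The ordering logic can be sketched as follows (hypothetical types; the real query counts rows in SQL):

```rust
/// 1-based queue position under the `priority DESC, created_at ASC`
/// dequeue order. `queued` holds (priority, created_at) for every queued job,
/// including the job being asked about.
fn queue_position(queued: &[(i64, i64)], priority: i64, created_at: i64) -> u32 {
    let ahead = queued
        .iter()
        .filter(|&&(p, c)| p > priority || (p == priority && c < created_at))
        .count();
    ahead as u32 + 1
}
```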
---
### [UX-2] No way to add a single file to the queue via the UI
**Status: OPEN**
**Severity:** UX
**Problem:**
Jobs only enter the queue via full library scans. No manual "Enqueue path" exists in the UI.
**Fix:**
Add `POST /api/jobs/enqueue` and a "Add file" action in the `JobsToolbar`.
---
### [UX-3] Workers-blocked reason not surfaced for queued jobs
**Status: OPEN**
**Severity:** UX
**Problem:**
Users cannot see why a job is stuck in Queued (paused, scheduled, or slots full).
**Fix:**
Add `GET /api/processor/status` and show the reason in the job detail.
---
## Feature Gaps
---
### [FG-4] Intelligence page content not actionable
**Status: OPEN**
**Files:**
- `web/src/components/LibraryIntelligence.tsx`
**Severity:** FG
**Problem:**
Intelligence page is informational only; recommendations cannot be acted upon directly from the page.
**Fix:**
Add "Queue all" for remux opportunities and "Review" actions for duplicates.
---
## What To Fix Next
1. **[UX-2]** Single file enqueue — New feature.
2. **[UX-3]** Workers-blocked reason — New feature.
3. **[FG-4]** Intelligence page actions — New feature.
4. **[RG-2]** AMD VAAPI/AMF validation — Needs real hardware.

View File

@@ -45,11 +45,6 @@ documentation, or iteration.
- Token management endpoints and Settings UI
- Hand-maintained OpenAPI contract plus human API docs
### Base URL / Subpath Support
- `ALCHEMIST_BASE_URL` and matching config support
- Router nesting under a configured path prefix
- Frontend fetches, redirects, navigation, and SSE path generation updated for subpaths
### Distribution Foundation
- In-repo distribution metadata sources for:
- Homebrew
@@ -64,37 +59,37 @@ documentation, or iteration.
- remux-only opportunities
- wasteful audio layouts
- commentary/descriptive-track cleanup candidates
- Direct actions now exist for queueing remux recommendations and opening duplicate candidates in the shared job-detail flow
### Engine Lifecycle + Planner Docs
- Runtime drain/restart controls exist in the product surface
- Backend and Playwright lifecycle coverage now exists for the current behavior
- Planner and engine lifecycle docs are in-repo and should now be kept in sync with shipped semantics rather than treated as missing work
### Jobs UI Refactor / In Flight
- `JobManager` has been decomposed into focused jobs subcomponents and controller hooks
- SSE ownership is now centered in a dedicated hook and job-detail controller flow
- Treat the current jobs UI surface as shipping product that still needs stabilization and regression coverage, not as a future refactor candidate
---
## Active Priorities
### Engine Lifecycle Controls
- Finish and harden restart/shutdown semantics from the About/header surface
- Restart must reset the engine loop without re-execing the process
- Shutdown must cancel active jobs and exit cleanly
- Add final backend and Playwright coverage for lifecycle transitions
### `0.3.1` RC Stability Follow-Through
- Keep the current in-flight backend/frontend/test delta focused on reliability, upgrade safety, and release hardening
- Expand regression coverage for resume/restart/cancel flows, job-detail refresh semantics, settings projection, and intelligence actions
- Keep release docs, changelog entries, and support wording aligned with what the RC actually ships
### Planner and Lifecycle Documentation
- Document planner heuristics and stable skip/transcode/remux decision boundaries
- Document hardware fallback rules and backend selection semantics
- Document pause, drain, restart, cancel, and shutdown semantics from actual behavior
### Per-File Encode History
- Show full attempt history in job detail, grouped by canonical file identity
- Include outcome, encode stats, and failure reason where available
- Make retries, reruns, and settings-driven requeues legible
### Behavior-Preserving Refactor Pass
- Decompose `web/src/components/JobManager.tsx` without changing current behavior
- Extract shared formatting logic
- Clarify SSE vs polling ownership
- Add regression coverage before deeper structural cleanup
### Per-File Encode History Follow-Through
- Attempt history now exists in job detail, but it is still job-scoped rather than grouped by canonical file identity
- Next hardening pass should make retries, reruns, and settings-driven requeues legible across a files full history
- Include outcome, encode stats, and failure reason where available without regressing the existing job-detail flow
### AMD AV1 Validation
- Validate Linux VAAPI and Windows AMF AV1 paths on real hardware
- Confirm encoder selection, fallback behavior, and defaults
- Keep support claims conservative until validation is real
- Deferred from the current `0.3.1-rc.5` automated-stability pass; do not broaden support claims before this work is complete
---

docs/bun.lock generated
View File

@@ -24,6 +24,7 @@
},
},
"overrides": {
"follow-redirects": "^1.16.0",
"lodash": "^4.18.1",
"serialize-javascript": "^7.0.5",
},
@@ -1108,7 +1109,7 @@
"flat": ["flat@5.0.2", "", { "bin": { "flat": "cli.js" } }, "sha512-b6suED+5/3rTpUBdG1gupIl8MPFCAMA0QXwmljLhvCUKcUvdE4gWky9zpuGCcXHOsz4J9wPGNWq6OKpmIzz3hQ=="],
"follow-redirects": ["follow-redirects@1.15.11", "", {}, "sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ=="],
"follow-redirects": ["follow-redirects@1.16.0", "", {}, "sha512-y5rN/uOsadFT/JfYwhxRS5R7Qce+g3zG97+JrtFZlC9klX/W5hD7iiLzScI4nZqUS7DNUdhPgw4xI8W2LuXlUw=="],
"form-data-encoder": ["form-data-encoder@2.1.4", "", {}, "sha512-yDYSgNMraqvnxiEXO4hi88+YZxaHC6QKzb5N84iRCTDeRO7ZALpir/lVmf/uXUhnwUr2O4HU8s/n6x+yNjQkHw=="],

View File

@@ -1,53 +1,35 @@
---
title: API Reference
description: REST and SSE API reference for Alchemist.
---
All API routes require the `alchemist_session` auth cookie
except:
- `/api/auth/*`
- `/api/health`
- `/api/ready`
- setup-mode exceptions: `/api/setup/*`, `/api/fs/*`,
`/api/settings/bundle`, `/api/system/hardware`
Authentication is established by `POST /api/auth/login`.
The backend also accepts `Authorization: Bearer <token>`.
Bearer tokens now come in two classes:
- `read_only` — observability-only routes
- `full_access` — same route access as an authenticated session
The web UI still uses the session cookie.
Machine-readable contract:
- [OpenAPI spec](/openapi.yaml)
## Authentication
### API tokens
All API routes require the `alchemist_session` auth cookie established via `/api/auth/login`, or an `Authorization: Bearer <token>` header.
API tokens are created in **Settings → API Tokens**.
Machine-readable contract: [OpenAPI spec](/openapi.yaml)
- token values are only shown once at creation time
- only hashed token material is stored server-side
- revoked tokens stop working immediately
### `POST /api/auth/login`
Establish a session. Returns a `Set-Cookie` header.
Read-only tokens are intentionally limited to observability
routes such as stats, jobs, logs history, SSE, system info,
hardware info, library intelligence, and health/readiness.
**Request:**
```json
{
"username": "admin",
"password": "..."
}
```
### `POST /api/auth/logout`
Invalidate current session and clear cookie.
### `GET /api/settings/api-tokens`
Lists token metadata only. Plaintext token values are never
returned after creation.
List metadata for configured API tokens.
### `POST /api/settings/api-tokens`
Create a new API token. The plaintext value is only returned once.
**Request:**
```json
{
"name": "Prometheus",
@@ -55,411 +37,114 @@ Request:
}
```
Response:
```json
{
"token": {
"id": 1,
"name": "Prometheus",
"access_level": "read_only"
},
"plaintext_token": "alc_tok_..."
}
```
### `DELETE /api/settings/api-tokens/:id`
Revoke a token.
Revokes a token in place. Existing automations using it will
begin receiving `401` or `403` depending on route class.
### `POST /api/auth/login`
Request:
```json
{
"username": "admin",
"password": "secret"
}
```
Response:
```http
HTTP/1.1 200 OK
Set-Cookie: alchemist_session=...; HttpOnly; SameSite=Lax; Path=/; Max-Age=2592000
```
```json
{
"status": "ok"
}
```
### `POST /api/auth/logout`
Clears the session cookie and deletes the server-side
session if one exists.
```json
{
"status": "ok"
}
```
---
## Jobs
### `GET /api/jobs`
List jobs with filtering and pagination.
Canonical job listing endpoint. Supports query params such
as `limit`, `page`, `status`, `search`, `sort_by`,
`sort_desc`, and `archived`.
Each returned job row still includes the legacy
`decision_reason` string when present, and now also includes
an optional `decision_explanation` object:
- `category`
- `code`
- `summary`
- `detail`
- `operator_guidance`
- `measured`
- `legacy_reason`
Example:
```bash
curl -b cookie.txt \
'http://localhost:3000/api/jobs?status=queued,failed&limit=50&page=1'
```
**Params:** `limit`, `page`, `status`, `search`, `sort_by`, `sort_desc`, `archived`.
### `GET /api/jobs/:id/details`
Returns the job row, any available analyzed metadata,
encode stats for completed jobs, recent job logs, and a
failure summary for failed jobs. Structured explanation
fields are included when available:
- `decision_explanation`
- `failure_explanation`
- `job_failure_summary` is retained as a compatibility field
Example response shape:
```json
{
"job": {
"id": 42,
"input_path": "/media/movies/example.mkv",
"status": "completed"
},
"metadata": {
"codec_name": "h264",
"width": 1920,
"height": 1080
},
"encode_stats": {
"input_size_bytes": 8011223344,
"output_size_bytes": 4112233445,
"compression_ratio": 1.95,
"encode_speed": 2.4,
"vmaf_score": 93.1
},
"job_logs": [],
"job_failure_summary": null,
"decision_explanation": {
"category": "decision",
"code": "transcode_recommended",
"summary": "Transcode recommended",
"detail": "Alchemist determined the file should be transcoded based on the current codec and measured efficiency.",
"operator_guidance": null,
"measured": {
"target_codec": "av1",
"current_codec": "h264",
"bpp": "0.1200"
},
"legacy_reason": "transcode_recommended|target_codec=av1,current_codec=h264,bpp=0.1200"
},
"failure_explanation": null
}
```
Fetch full job state, metadata, logs, and stats.
### `POST /api/jobs/:id/cancel`
Cancels a queued or active job if the current state allows
it.
Cancel a queued or active job.
### `POST /api/jobs/:id/restart`
Restarts a non-active job by sending it back to `queued`.
Restart a terminal job (failed/cancelled/completed).
### `POST /api/jobs/:id/priority`
Update job priority.
Request:
```json
{
"priority": 100
}
```
Response:
```json
{
"id": 42,
"priority": 100
}
```
**Request:** `{"priority": 100}`
### `POST /api/jobs/batch`
Bulk action on multiple jobs.
Supported `action` values: `cancel`, `restart`, `delete`.
**Request:**
```json
{
"action": "restart",
"ids": [41, 42, 43]
}
```
Response:
```json
{
  "count": 3
}
```
### `POST /api/jobs/restart-failed`
Response:
```json
{
"count": 2,
"message": "Queued 2 failed or cancelled jobs for retry."
}
```
Restart all failed or cancelled jobs.
### `POST /api/jobs/clear-completed`
Archive all completed jobs from the active queue.
Archives completed jobs from the visible queue while
preserving historical encode stats.
```json
{
"count": 12,
"message": "Cleared 12 completed jobs from the queue. Historical stats were preserved."
}
```
---
## Engine
### `GET /api/engine/status`
Get current operational status and limits.
```json
{
"status": "paused"
}
```
### `POST /api/engine/pause`
Pause the engine (suspend active jobs).
### `POST /api/engine/resume`
```json
{
"status": "running"
}
```
Resume the engine.
### `POST /api/engine/drain`
```json
{
"status": "draining"
}
```
### `POST /api/engine/stop-drain`
```json
{
"status": "running"
}
```
### `GET /api/engine/status`
Response fields:
- `status`
- `mode`
- `concurrent_limit`
- `manual_paused`
- `scheduler_paused`
- `draining`
- `is_manual_override`
Example:
```json
{
"status": "paused",
"manual_paused": true,
"scheduler_paused": false,
"draining": false,
"mode": "balanced",
"concurrent_limit": 2,
"is_manual_override": false
}
```
### `GET /api/engine/mode`
Returns current mode, whether a manual override is active,
the current concurrent limit, CPU count, and computed mode
limits.
Enter drain mode (finish active jobs, don't start new ones).
### `POST /api/engine/mode`
Switch engine mode or apply manual overrides.
**Request:**
```json
{
"mode": "balanced",
"mode": "background|balanced|throughput",
"concurrent_jobs_override": 2,
"threads_override": 0
}
```
Response:
```json
{
"status": "ok",
"mode": "balanced",
"concurrent_limit": 2,
"is_manual_override": true
}
```
## Statistics
### `GET /api/stats/aggregated`
```json
{
"total_input_bytes": 1234567890,
"total_output_bytes": 678901234,
"total_savings_bytes": 555666656,
"total_time_seconds": 81234.5,
"total_jobs": 87,
"avg_vmaf": 92.4
}
```
Total savings, job counts, and global efficiency.
### `GET /api/stats/daily`
Returns the last 30 days of encode activity.
### `GET /api/stats/detailed`
Returns the most recent detailed encode stats rows.
Encode activity history for the last 30 days.
### `GET /api/stats/savings`
Detailed breakdown of storage savings.
Returns the storage-savings summary used by the statistics
dashboard.
## Settings
### `GET /api/settings/transcode`
Returns the transcode settings payload currently loaded by
the backend.
### `POST /api/settings/transcode`
Request:
```json
{
"concurrent_jobs": 2,
"size_reduction_threshold": 0.3,
"min_bpp_threshold": 0.1,
"min_file_size_mb": 50,
"output_codec": "av1",
"quality_profile": "balanced",
"threads": 0,
"allow_fallback": true,
"hdr_mode": "preserve",
"tonemap_algorithm": "hable",
"tonemap_peak": 100.0,
"tonemap_desat": 0.2,
"subtitle_mode": "copy",
"stream_rules": {
"strip_audio_by_title": ["commentary"],
"keep_audio_languages": ["eng"],
"keep_only_default_audio": false
}
}
```
---
## System
### `GET /api/system/hardware`
Returns the current detected hardware backend, supported
codecs, backends, selection reason, probe summary, and any
detection notes.
Detected hardware backend and codec support matrix.
### `GET /api/system/hardware/probe-log`
Returns the per-encoder probe log with success/failure
status, selected-flag metadata, summary text, and stderr
excerpts.
Full logs from the startup hardware probe.
### `GET /api/system/resources`
Live telemetry: CPU, Memory, GPU utilization, and uptime.
Returns live resource data:
- `cpu_percent`
- `memory_used_mb`
- `memory_total_mb`
- `memory_percent`
- `uptime_seconds`
- `active_jobs`
- `concurrent_limit`
- `cpu_count`
- `gpu_utilization`
- `gpu_memory_percent`
## Events (SSE)
### `GET /api/events`
Real-time event stream.
Internal event types are `JobStateChanged`, `Progress`,
`Decision`, and `Log`. The SSE stream exposed to clients
emits lower-case event names:
- `status`
- `progress`
- `decision`
- `log`
Additional config/system events may also appear, including
`config_updated`, `scan_started`, `scan_completed`,
`engine_status_changed`, and `hardware_state_changed`.
Example:
```text
event: progress
data: {"job_id":42,"percentage":61.4,"time":"00:11:32"}
```
`decision` events include the legacy `reason` plus an
optional structured `explanation` object with the same shape
used by the jobs API.
**Emitted Events:**
- `status`: Job state changes.
- `progress`: Real-time encode statistics.
- `decision`: Skip/Transcode logic results.
- `log`: Engine and job logs.
- `config_updated`: Configuration hot-reload notification.
- `scan_started` / `scan_completed`: Library scan status.

View File

@@ -3,6 +3,72 @@ title: Changelog
description: Release history for Alchemist.
---
## [0.3.1-rc.5] - 2026-04-16
### Reliability & Stability
- **Segment-based encode resume** — interrupted encode jobs now persist resume sessions and completed segments so restart and recovery flows can continue without discarding all completed work.
- **Notification target compatibility hardening** — notification target reads/writes now preserve the additive migration path, tolerate legacy shapes, and avoid duplicate-delete projection bugs in settings management.
- **Daily summary reliability** — summary delivery now retries safely after transient failures and avoids duplicate sends across restart boundaries by persisting the last successful day.
- **Job-detail correctness** — completed-job detail loading now fails closed on database errors instead of returning partial `200 OK` payloads, and encode stat duration fallback uses the encoded output rather than the source file.
- **Auth and settings safety** — login now returns server errors for real database failures, and duplicate notification/schedule rows no longer disappear together from a single delete action.
### Jobs & UX
- **Manual enqueue flow** — the jobs UI now supports enqueueing a single absolute file path through the same backend dedupe and output rules used by library scans.
- **Queued-job visibility** — job detail now exposes queue position and processor blocked reasons so operators can see why a queued job is not starting.
- **Attempt-history surfacing** — job detail now shows encode attempt history directly in the modal, including outcome, timing, and captured failure summary.
- **Jobs UI follow-through** — the `JobManager` refactor now ships with dedicated controller/dialog helpers and tighter SSE reconciliation so filtered tables and open detail modals stay aligned with backend truth.
- **Intelligence actions** — remux recommendations and duplicate candidates are now actionable directly from the Intelligence page.
## [0.3.1-rc.3] - 2026-04-12
### New Features
#### Job Management Refactor
- **Componentized Job Manager** — extracted monolithic `JobManager.tsx` into a modular suite under `web/src/components/jobs/`, including dedicated components for the toolbar, table, and detail modal.
- **Enhanced Job Detail Modal** — rebuilt the job detail view with better loading states, smoother transitions, and improved information hierarchy for analysis, decisions, and failure reasons.
- **Job SSE Hook** — unified job-related Server-Sent Events logic into a custom `useJobSSE` hook for better state management and reduced re-renders.
#### Themes & UX
- **Midnight OLED+** — enhanced the `midnight` theme with true-black surfaces and suppressed decorative gradients to maximize OLED power savings.
- **Improved Toasts** — toast notifications now feature a high-quality backdrop blur and refined border styling for better visibility against busy backgrounds.
#### Reliability & Observability
- **Engine Lifecycle Specs** — added a comprehensive Playwright suite for validating engine transitions (Running -> Draining -> Paused -> Stopped).
- **Planner & Lifecycle Docs** — added detailed technical documentation for the transcoding planner logic and engine state machine.
- **Encode Attempt Tracking** — added a database migration to track individual encode attempts, laying the groundwork for more granular retry statistics.
#### Hardware & Performance
- **Concurrency & Speed Optimizations** — internal refinements to the executor and processor to improve hardware utilization and address reported speed issues on certain platforms.
- **Backlog Grooming** — updated `TODO.md` with a focus on validating AMF and VAAPI AV1 hardware encoders.
## [0.3.1-rc.1] - 2026-04-08
### New Features
#### Conversion & Library Workflows
- **Experimental Conversion / Remux page** — upload a single file, inspect streams, preview the generated FFmpeg command, run a remux/transcode job through Alchemist, and download the result when complete.
- **Expanded Library Intelligence** — duplicate detection now sits alongside storage-focused recommendation sections for remux-only opportunities, wasteful audio layouts, and commentary/descriptive-track cleanup candidates.
#### Authentication & Automation
- **Named API tokens** — create bearer tokens from Settings with `read_only` or `full_access` access classes. Tokens are only shown once at creation time and stored server-side as hashes.
- **OpenAPI contract** — hand-maintained OpenAPI spec added alongside expanded human API docs for auth, token management, and update-check behavior.
#### Notifications
- **Provider-specific notification targets** — notification settings now use provider-specific configuration payloads instead of the old shared endpoint/token shape.
- **Provider expansion** — Discord webhook, Discord bot, Gotify, generic webhook, Telegram, and SMTP email targets are supported.
- **Richer event model** — notification events now distinguish queue/start/completion/failure plus scan completion, engine idle, and daily summary delivery.
- **Daily summary scheduling** — notifications include a global `daily_summary_time_local` setting and per-target opt-in for digest delivery.
#### Deployment & Distribution
- **Windows update check** — the About dialog now checks GitHub Releases for the latest stable version and links directly to the release download page when an update is available.
- **Distribution metadata generation** — in-repo Homebrew and AUR packaging templates plus workflow rendering were added as the foundation for package-manager distribution.
### Documentation
- **Config path clarity** — docs now consistently describe `~/.config/alchemist/config.toml` as the default host-side config location on Linux/macOS, while Docker examples still use `/app/config/config.toml` inside the container.
- **Backlog realignment** — the backlog was rewritten around current repo reality, marking large newly implemented surfaces as “Implemented / In Progress” and keeping the roadmap automation-first.
## [0.3.0] - 2026-04-06
### Security

View File

@@ -70,7 +70,7 @@ Default config file location:
| `output_extension` | string | `"mkv"` | Output file extension |
| `output_suffix` | string | `"-alchemist"` | Suffix added to the output filename |
| `replace_strategy` | string | `"keep"` | Replace behavior for output collisions |
| `output_root` | string | optional | Mirror outputs into another root path instead of writing beside the source |
| `output_root` | string | optional | If set, Alchemist mirrors the source library directory structure under this root path instead of writing outputs alongside the source files |
## `[schedule]`
@@ -97,7 +97,6 @@ requires at least one day in every window.
| `enable_telemetry` | bool | `false` | Opt-in anonymous telemetry switch |
| `log_retention_days` | int | `30` | Log retention period in days |
| `engine_mode` | string | `"balanced"` | Runtime engine mode: `background`, `balanced`, or `throughput` |
| `base_url` | string | `""` | Path prefix for serving Alchemist under a subpath such as `/alchemist` |
## Example

View File

@@ -0,0 +1,152 @@
---
title: Engine Lifecycle
description: Engine states, transitions, and job cancellation semantics.
---
The Alchemist engine is a background loop that claims queued jobs, processes them, and manages concurrent execution. This page documents all states, what triggers each transition, and the exact behavior during cancel, pause, drain, and restart.
---
## Engine states
| State | Jobs start? | Active jobs affected? | How to enter |
|-------|------------|----------------------|-------------|
| **Running** | Yes | Not affected | Resume, restart |
| **Paused** (manual) | No | Not cancelled | Header → Stop, `POST /api/engine/pause` |
| **Paused** (scheduler) | No | Not cancelled | Schedule window activates |
| **Draining** | No | Run to completion | Header → Stop (while running), `POST /api/engine/drain` |
| **Restarting** | No (briefly) | Cancelled | `POST /api/engine/restart` |
| **Shutdown** | No | Force-cancelled | Process exit / SIGTERM |
Paused-manual and paused-scheduler are independent. Both must be cleared for jobs to start again.
---
## State transitions
```
Resume
┌──────────────────────────────┐
│ ▼
Paused ◄─── Pause ─────── Running ──── Drain ───► Draining
│ ▲ │ │
│ Restart │ └─── Shutdown ──► Shutdown
│ ┌──────────┐ │
└─────►│ Restart │────────┘
└──────────┘
(brief pause,
cancel in-flight,
then resume)
```
### Pause
- Sets `manual_paused = true`.
- The claim loop polls every 2 seconds and blocks while paused.
- Active jobs continue until they finish naturally.
- Does **not** affect draining state.
### Resume
- Clears `manual_paused`.
- Does **not** clear `scheduler_paused` (scheduler manages its own flag).
- The claim loop immediately resumes on the next iteration.
- Does **not** cancel the drain if draining.
### Drain
- Sets `draining = true` without setting `paused`.
- No new jobs are claimed.
- Active jobs run to completion.
- When `in_flight_jobs` reaches zero, the drain completes: `draining` is cleared and the engine transitions to **Paused** (manual).
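A sketch of the drain-completion rule (field names are illustrative, not the actual engine code):

```rust
/// Minimal model of the engine flags involved in draining.
struct EngineFlags {
    draining: bool,
    manual_paused: bool,
    in_flight_jobs: usize,
}

impl EngineFlags {
    /// Called whenever an active job finishes. When the last in-flight job
    /// completes during a drain, the drain ends and the engine lands in
    /// manual Pause.
    fn on_job_finished(&mut self) {
        self.in_flight_jobs -= 1;
        if self.draining && self.in_flight_jobs == 0 {
            self.draining = false;
            self.manual_paused = true;
        }
    }
}
```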
### Restart
1. Pause (set `manual_paused = true`).
2. Cancel all in-flight jobs (Encoding, Remuxing, Analyzing, Resuming) via FFmpeg kill signal.
3. Clear `draining` flag.
4. Clear `idle_notified` flag.
5. Resume (clear `manual_paused`).
Cancelled in-flight jobs are marked `failed` with `failure_summary = "cancelled"`. They are eligible for automatic retry per the retry backoff schedule.
### Shutdown
Called when the process exits (SIGTERM / graceful shutdown):
1. Cancel all active jobs via FFmpeg kill.
2. Wait up to a short timeout for kills to complete.
3. No retry is scheduled — the jobs return to `queued` on next startup.
---
## Job states
| Job state | Meaning | Terminal? |
|-----------|---------|-----------|
| `queued` | Waiting to be claimed | No |
| `analyzing` | FFprobe running on the file | No |
| `encoding` | FFmpeg encoding in progress | No |
| `remuxing` | FFmpeg stream-copy in progress | No |
| `resuming` | Job being re-queued after retry | No |
| `completed` | Encode finished successfully | Yes |
| `skipped` | Planner decided not to transcode | Yes |
| `failed` | Encode or analysis failed | Yes (with retry) |
| `cancelled` | Cancelled by operator | Yes (with retry) |
---
## Retry backoff
Failed and cancelled jobs are retried automatically. Before claiming a retry, the engine checks that the backoff interval since the last attempt has elapsed.
| Attempt # | Backoff before retry |
|-----------|---------------------|
| 1 | 5 minutes |
| 2 | 15 minutes |
| 3 | 60 minutes |
| 4+ | 6 hours |
After 3 consecutive failures with no success, the job still retries on the 6-hour schedule. There is no permanent failure state from retries alone — the operator must manually delete or cancel the job to stop retries.
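The schedule above, as a function of the (1-based) attempt number:

```rust
/// Backoff in minutes before retrying a failed or cancelled job.
/// Attempts are 1-based; every attempt from the fourth onward waits 6 hours.
fn retry_backoff_minutes(attempt: u32) -> u32 {
    match attempt {
        1 => 5,
        2 => 15,
        3 => 60,
        _ => 360, // retries never stop on their own
    }
}
```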
---
## Cancel semantics
### Cancel mid-analysis
FFprobe process is not currently cancellable via signal. The cancel flag is checked before FFprobe starts. If analysis is in progress when cancel arrives, the job will be cancelled after analysis completes (before encoding starts).
### Cancel mid-encode
The FFmpeg process receives a kill signal immediately. The partial output file is cleaned up. The job is marked `failed` with `failure_summary = "cancelled"`.
### Cancel while queued
The job status is set to `cancelled` directly without any process kill.
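The three cases can be summarized as a dispatch on the job state (illustrative names, not the actual pipeline code):

```rust
#[derive(Debug, PartialEq)]
enum CancelAction {
    /// Queued: flip status to `cancelled`, nothing to kill.
    MarkCancelled,
    /// Encoding/remuxing: kill FFmpeg, clean partial output, mark failed.
    KillFfmpegAndCleanUp,
    /// Analyzing: FFprobe is not interruptible; cancel applies after it ends.
    DeferUntilAnalysisCompletes,
}

fn cancel_action(job_state: &str) -> CancelAction {
    match job_state {
        "queued" => CancelAction::MarkCancelled,
        "encoding" | "remuxing" => CancelAction::KillFfmpegAndCleanUp,
        "analyzing" => CancelAction::DeferUntilAnalysisCompletes,
        _ => CancelAction::MarkCancelled,
    }
}
```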
---
## Pause vs. drain vs. restart
| Operation | In-flight jobs | Partial output | New jobs |
|-----------|---------------|---------------|----------|
| Pause | Finish normally | Not affected | Blocked |
| Drain | Finish normally | Not affected | Blocked until drain completes |
| Restart | Killed | Cleaned up | Blocked briefly, then resume |
| Shutdown | Killed | Cleaned up | N/A |
Use **Pause** when you need to inspect the queue or change settings without losing progress.
Use **Drain** when you want to stop gracefully after the current batch finishes (e.g. before maintenance).
Use **Restart** to force a clean slate — e.g. after changing hardware settings that affect in-flight jobs.
---
## Boot sequence
1. Migrations run.
2. Any jobs left in `encoding`, `remuxing`, `analyzing`, or `resuming` are reset to `queued` (crash recovery).
3. Boot analysis runs — FFprobe runs on every `queued` job that has no stored metadata. This uses a single-slot semaphore and blocks the claim loop.
4. Engine claim loop starts — jobs are claimed and processed up to the concurrent limit.

View File

@@ -1,37 +1,39 @@
---
title: Engine Modes & States
description: Background, Balanced, and Throughput — understanding concurrency and execution flow.
---
Engine modes set the concurrent job limit.
Alchemist uses **Modes** to dictate performance limits and **States** to control execution flow.
## Engine Modes (Concurrency)
| Mode | Concurrent jobs | Use when |
|------|----------------|----------|
| Background | 1 | Server in active use |
| Balanced (default) | floor(cpu_count / 2), min 1, max 4 | General shared server |
| Throughput | floor(cpu_count / 2), min 1, no cap | Dedicated server, clear a backlog |
Modes define the maximum number of concurrent jobs the engine will attempt to run.
## Manual override
| Mode | Concurrent Jobs | Ideal For |
|------|----------------|-----------|
| **Background** | 1 | Server in active use by other applications. |
| **Balanced** | `floor(cpu_count / 2)` (min 1, max 4) | Default. Shared server usage. |
| **Throughput** | `floor(cpu_count / 2)` (min 1, no cap) | Dedicated server; clearing a large backlog. |
Override the computed limit in **Settings → Runtime**. Takes
effect immediately. A "manual" badge appears in engine
status. Switching modes clears the override.
:::tip Manual Override
You can override the computed limit in **Settings → Runtime**. A "Manual" badge will appear in the engine status. Switching modes clears manual overrides.
:::
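The computed limits in the modes table can be sketched as:

```rust
/// Concurrent-job limit per engine mode, before any manual override.
fn concurrent_limit(mode: &str, cpu_count: usize) -> usize {
    match mode {
        "background" => 1,
        "balanced" => (cpu_count / 2).clamp(1, 4),
        "throughput" => (cpu_count / 2).max(1), // min 1, no upper cap
        _ => 1,
    }
}
```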
## States vs. modes
---
Modes determine *how many* jobs run. States determine
*whether* they run.
## Engine States (Execution)
States determine whether the engine is actively processing the queue.
| State | Behavior |
|-------|----------|
| Running | Jobs start up to the mode's limit |
| Paused | No jobs start; active jobs freeze |
| Draining | Active jobs finish; no new jobs start |
| Scheduler paused | Paused by a schedule window |
| **Running** | Engine is active. Jobs start up to the current mode's limit. |
| **Paused** | Engine is suspended. No new jobs start; active jobs are frozen. |
| **Draining** | Engine is stopping. Active jobs finish, but no new jobs start. |
| **Scheduler Paused** | Engine is temporarily paused by a configured [Schedule Window](/scheduling). |
## Changing modes
---
**Settings → Runtime**. Takes effect immediately; in-progress
jobs are not cancelled.
## Changing Engine Behavior
Engine behavior can be adjusted in real-time via the **Runtime** dashboard or the [API](/api#engine). Changes take effect immediately without cancelling in-progress jobs.

View File

@@ -9,7 +9,6 @@ description: All environment variables Alchemist reads at startup.
| `ALCHEMIST_CONFIG` | (alias) | Alias for `ALCHEMIST_CONFIG_PATH` |
| `ALCHEMIST_DB_PATH` | `~/.config/alchemist/alchemist.db` | Path to SQLite database |
| `ALCHEMIST_DATA_DIR` | (none) | Sets data dir; `alchemist.db` placed here |
| `ALCHEMIST_BASE_URL` | root (`/`) | Path prefix for serving Alchemist under a subpath such as `/alchemist` |
| `ALCHEMIST_CONFIG_MUTABLE` | `true` | Set `false` to block runtime config writes |
| `RUST_LOG` | `info` | Log level: `info`, `debug`, `alchemist=trace` |

@@ -5,7 +5,8 @@ description: Getting through the setup wizard and starting your first scan.
When you first open Alchemist at `http://localhost:3000`
the setup wizard runs automatically. It takes about two
minutes.
minutes. Until the first account is created, setup is
reachable only from the local network.
## Wizard steps

@@ -32,7 +32,8 @@ docker compose up -d
```
Open [http://localhost:3000](http://localhost:3000). The
setup wizard runs on first visit.
setup wizard runs on first visit and is only reachable
from the local network until the first account is created.
For GPU passthrough (NVIDIA, Intel, AMD) see
[GPU Passthrough](/gpu-passthrough).
@@ -110,6 +111,19 @@ just dev
Windows contributor support covers the core `install/dev/check` path.
Broader `just` release and utility recipes remain Unix-first.
## CLI subcommands
```bash
alchemist scan /path/to/media
alchemist run /path/to/media
alchemist plan /path/to/media
alchemist plan /path/to/media --json
```
- `scan` enqueues matching jobs and exits
- `run` scans, enqueues, and waits for processing to finish
- `plan` reports what Alchemist would do without enqueueing jobs
## Nightly builds
```bash

@@ -1,32 +1,35 @@
---
title: Library Doctor
description: Scan for corrupt, truncated, and unreadable media files.
description: Identifying corrupt, truncated, and unreadable media files in your library.
---
Library Doctor scans your configured directories for files
that are corrupt, truncated, or unreadable by FFprobe.
Library Doctor is a specialized diagnostic tool that scans your library for media files that are corrupt, truncated, or otherwise unreadable by the Alchemist analyzer.
Run from **Settings → Runtime → Library Doctor → Run Scan**.
Run a scan manually from **Settings → Runtime → Library Doctor**.
## What it checks
## Core Checks
| Check | What it detects |
|-------|-----------------|
| Probe failure | Files FFprobe cannot read at all |
| No video stream | Files with no detectable video track |
| Zero duration | Files reporting 0 seconds of content |
| Truncated file | Files that appear to end prematurely |
| Missing codec data | Files missing metadata needed to plan a transcode |
Library Doctor runs an intensive probe on every file in your watch directories to identify the following issues:
## What to do with results
| Check | Technical Detection | Action Recommended |
|-------|-----------------|--------------------|
| **Probe Failure** | `ffprobe` returns a non-zero exit code or cannot parse headers. | Re-download or Re-rip. |
| **No Video Stream** | File container is valid but contains no detectable video tracks. | Verify source; delete if unintended. |
| **Zero Duration** | File metadata reports a duration of 0 seconds. | Check for interrupted transfers. |
| **Truncated File** | File size is significantly smaller than expected for the reported bitrate/duration. | Check filesystem integrity. |
| **Missing Metadata** | Missing critical codec data (e.g., pixel format, profile) needed for planning. | Possible unsupported codec variant. |
Library Doctor reports issues — it does not repair or delete
files automatically.
---
- **Re-download** — interrupted download
- **Re-rip** — disc read errors
- **Delete** — duplicate or unrecoverable
- **Ignore** — player handles it despite FFprobe failing
## Relationship to Jobs
Files that fail Library Doctor also fail the Analyzing
stage of a transcode job and appear as Failed in Jobs.
Files that fail Library Doctor checks will also fail the **Analyzing** stage of a standard transcode job.
- **Pre-emptive detection**: Running Library Doctor helps you clear "broken" files from your library before they enter the processing queue.
- **Reporting**: Issues identified by the Doctor appear in the **Health** tab of the dashboard, separate from active transcode jobs.
## Handling Results
Library Doctor is read-only; it will **never delete or modify** your files automatically.
If a file is flagged, you should manually verify it using a media player. If the file is indeed unplayable, we recommend replacing it from the source. Flags can be cleared by deleting the file or moving it out of a watched directory.

@@ -37,13 +37,9 @@ FFmpeg expert.
## Hardware support
| Vendor | AV1 | HEVC | H.264 | Notes |
|--------|-----|------|-------|-------|
| NVIDIA NVENC | RTX 30/40 | Maxwell+ | All | Best for speed |
| Intel QSV | 12th gen+ | 6th gen+ | All | Best for power efficiency |
| AMD VAAPI/AMF | RDNA 2+ on compatible driver/FFmpeg stacks | Polaris+ | All | Linux VAAPI / Windows AMF; HEVC/H.264 are the validated AMD paths for `0.3.0` |
| Apple VideoToolbox | M3+ | M1+ / T2 | All | Binary install recommended |
| CPU (SVT-AV1/x265/x264) | All | All | All | Always available |
Alchemist detects and selects the best available hardware encoder automatically (NVIDIA NVENC, Intel QSV, AMD VAAPI/AMF, Apple VideoToolbox, or CPU fallback).
For detailed codec support matrices (AV1, HEVC, H.264) and vendor-specific setup guides, see the [Hardware Acceleration](/hardware) documentation.
## Where to start

docs/docs/planner.md Normal file
@@ -0,0 +1,176 @@
---
title: Planner Heuristics
description: How Alchemist decides whether to transcode, skip, or remux a file.
---
The planner runs once per job during the analysis phase and produces one of three decisions:
- **Transcode** — re-encode the video stream.
- **Remux** — copy streams into a different container (lossless, fast).
- **Skip** — mark the file as not worth processing.
Decisions are deterministic and based solely on file metadata and settings.
---
## Decision flow
Each condition is evaluated in order. The first match wins.
```
1. already_target_codec → Skip (or Remux if container mismatch)
2. no_available_encoders → Skip
3. preferred_codec_unavailable → Skip (if fallback disabled)
4. no_suitable_encoder → Skip (no encoder selected)
5. incomplete_metadata → Skip (missing resolution)
6. bpp_below_threshold → Skip (already efficient)
7. below_min_file_size → Skip (too small)
8. h264 source → Transcode (priority path)
9. everything else → Transcode (transcode_recommended)
```
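The first-match-wins flow above can be sketched as a plain function. The types and field names here are illustrative only, not Alchemist's actual planner API; conditions 2-4 (encoder availability) are collapsed into a single flag for brevity.

```rust
// Illustrative sketch of the planner's first-match-wins evaluation.
// Names are hypothetical; this is not the real Alchemist code.
#[derive(Debug, PartialEq)]
enum Decision {
    Skip(&'static str),
    Remux,
    Transcode(&'static str),
}

struct FileMeta {
    codec: &'static str,
    target_codec: &'static str,
    container_matches: bool,
    encoders_available: bool, // collapses conditions 2-4
    has_resolution: bool,
    bpp_below_threshold: bool,
    size_mb: u64,
    min_file_size_mb: u64,
}

fn plan(m: &FileMeta) -> Decision {
    // Conditions are checked in order; the first match wins.
    if m.codec == m.target_codec {
        return if m.container_matches {
            Decision::Skip("already_target_codec")
        } else {
            Decision::Remux // already_target_codec_wrong_container
        };
    }
    if !m.encoders_available {
        return Decision::Skip("no_available_encoders");
    }
    if !m.has_resolution {
        return Decision::Skip("incomplete_metadata");
    }
    if m.bpp_below_threshold {
        return Decision::Skip("bpp_below_threshold");
    }
    if m.size_mb < m.min_file_size_mb {
        return Decision::Skip("below_min_file_size");
    }
    if m.codec == "h264" {
        return Decision::Transcode("transcode_h264_source");
    }
    Decision::Transcode("transcode_recommended")
}
```

Because evaluation is ordered, an H.264 file that is also below the BPP threshold reports `bpp_below_threshold`, not the H.264 priority path.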
---
## Skip conditions
### already_target_codec
The video stream is already in the target codec at the required bit depth.
- **AV1 / HEVC target:** skip if codec matches AND bit depth is 10-bit.
- **H.264 target:** skip if codec is h264 AND bit depth is 8-bit or lower.
If the codec matches but the container does not (e.g. AV1 in an MP4, target MKV), the decision is **Remux** instead.
```
skip if: codec == target AND bit_depth == required_depth
remux if: above AND container != target_container
```
---
### bpp_below_threshold
**Bits-per-pixel** measures how efficiently a file is already compressed relative to its resolution and frame rate.
#### Formula
```
raw_bpp = video_bitrate_bps / (width × height × fps)
normalized_bpp = raw_bpp × resolution_multiplier
effective_threshold = min_bpp_threshold × confidence_multiplier × codec_multiplier × target_multiplier
skip if: normalized_bpp < effective_threshold
```
#### Resolution multipliers
| Resolution | Multiplier | Reason |
|------------|-----------|--------|
| ≥ 3840px wide (4K) | 0.60× | 4K compression is naturally denser |
| ≥ 1920px wide (1080p) | 0.80× | HD has moderate density premium |
| < 1920px (SD) | 1.00× | No adjustment |
#### Confidence multipliers
Applied to the threshold when Alchemist is uncertain about bitrate accuracy:
| Confidence | Multiplier | When |
|-----------|-----------|------|
| High | 1.00× | Video bitrate directly reported by FFprobe |
| Medium | 0.70× | Bitrate estimated from container/file size |
| Low | 0.50× | Bitrate estimated with low reliability |
Lower confidence → lower threshold → harder to skip → safer.
#### Codec multipliers
| Source codec | Multiplier | Reason |
|-------------|-----------|--------|
| h264 (AVC) | 0.60× | H.264 needs more bits to match HEVC/AV1 quality |
#### Target multipliers
| Target codec | Multiplier | Reason |
|-------------|-----------|--------|
| AV1 | 0.70× | AV1 is more efficient; skip more aggressively |
| HEVC/H.264 | 1.00× | No adjustment |
#### Worked example
Settings: `min_bpp_threshold = 0.10`, target AV1, source HEVC 10-bit 4K.
```
raw_bpp = 15_000_000 / (3840 × 2160 × 24) = 0.0754
normalized_bpp = 0.0754 × 0.60 = 0.0452 (4K multiplier)
threshold = 0.10 × 1.00 × 1.00 × 0.70 = 0.070 (confidence 1.00 × codec 1.00 × AV1 target 0.70)
0.0452 < 0.070 → SKIP (bpp_below_threshold)
```
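The same arithmetic, written out as code. The multiplier values are copied from the tables above; nothing here reads Alchemist's actual configuration or planner state.

```rust
// Recompute the worked example with the documented constants.
fn normalized_bpp(bitrate_bps: f64, width: f64, height: f64, fps: f64, res_mult: f64) -> f64 {
    // raw bits-per-pixel, scaled by the resolution multiplier
    (bitrate_bps / (width * height * fps)) * res_mult
}

fn effective_threshold(min_bpp: f64, confidence: f64, codec: f64, target: f64) -> f64 {
    min_bpp * confidence * codec * target
}
```

For the HEVC 10-bit 4K example this yields a normalized BPP of roughly 0.045 against a threshold of 0.070, so the file is skipped.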
---
### below_min_file_size
Files smaller than `min_file_size_mb` (default: 50 MB) are skipped. Small files have minimal savings potential relative to overhead.
**Adjust:** Settings → Transcoding → Minimum file size.
---
### incomplete_metadata
FFprobe could not determine resolution (width or height is zero). Without resolution, BPP cannot be computed and no valid decision can be made.
**Diagnose:** run Library Doctor on the file.
---
### no_available_encoders
No encoder is available for the target codec at all. Either:
- CPU encoding is disabled (`allow_cpu_encoding = false`)
- Hardware detection failed and CPU fallback is off
**Fix:** Settings → Hardware → Enable CPU fallback.
---
### preferred_codec_unavailable_fallback_disabled
The requested codec encoder is not available, and `allow_fallback = false` prevents using any substitute.
**Fix:** Enable CPU fallback in Settings → Hardware, or check GPU detection.
---
## Transcode paths
### transcode_h264_source
H.264 sources are always transcoded unless caught by the BPP or size filters above. H.264 is the largest space-saving opportunity in most libraries.
### transcode_recommended
Everything else that passes the skip filters. Alchemist transcodes it because it is a plausible candidate based on the current codec and measured efficiency.
---
## Remux path
### already_target_codec_wrong_container
The video is already in the correct codec but wrapped in the wrong container (e.g. AV1 in `.mp4`, target is `.mkv`). Alchemist remuxes using stream copy, which is fast and lossless.
---
## Tuning
| Setting | Effect |
|---------|--------|
| `min_bpp_threshold` | Higher = skip more files. Default: 0.10. |
| `min_file_size_mb` | Higher = skip more small files. Default: 50. |
| `size_reduction_threshold` | Minimum predicted savings. Default: 30%. |
| `allow_fallback` | Allow CPU encoding when hardware is unavailable. |
| `allow_cpu_encoding` | Allow CPU to encode (not just fall back). |

@@ -103,7 +103,6 @@ const config: Config = {
],
},
footer: {
style: 'dark',
links: [
{
title: 'Get Started',

@@ -1,6 +1,6 @@
{
"name": "alchemist-docs",
"version": "0.3.1-rc.1",
"version": "0.3.1-rc.5",
"private": true,
"packageManager": "bun@1.3.5",
"scripts": {
@@ -48,6 +48,7 @@
"node": ">=20.0"
},
"overrides": {
"follow-redirects": "^1.16.0",
"lodash": "^4.18.1",
"serialize-javascript": "^7.0.5"
}

@@ -94,8 +94,9 @@ html {
}
.footer {
border-top: 1px solid rgba(200, 155, 90, 0.22);
background: var(--ifm-footer-background-color);
border-top: 1px solid var(--doc-border);
background-color: var(--ifm-footer-background-color) !important;
color: #ddd0be;
}
.footer__links {
@@ -118,13 +119,22 @@ html {
}
.footer__title {
color: #fdf6ee;
font-weight: 700;
}
.footer__bottom,
.footer__link-item {
color: #cfc0aa;
}
.footer__link-item:hover {
color: var(--ifm-link-hover-color);
text-decoration: none;
}
.footer__copyright {
text-align: center;
color: #b8a88e;
text-align: center;
}
.main-wrapper {

@@ -1,34 +1,25 @@
CREATE TABLE IF NOT EXISTS notification_targets_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
target_type TEXT CHECK(target_type IN ('discord_webhook', 'discord_bot', 'gotify', 'webhook', 'telegram', 'email')) NOT NULL,
config_json TEXT NOT NULL DEFAULT '{}',
events TEXT NOT NULL DEFAULT '["encode.failed","encode.completed"]',
enabled BOOLEAN DEFAULT 1,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
ALTER TABLE notification_targets
ADD COLUMN target_type_v2 TEXT;
INSERT INTO notification_targets_new (id, name, target_type, config_json, events, enabled, created_at)
SELECT
id,
name,
CASE target_type
ALTER TABLE notification_targets
ADD COLUMN config_json TEXT NOT NULL DEFAULT '{}';
UPDATE notification_targets
SET
target_type_v2 = CASE target_type
WHEN 'discord' THEN 'discord_webhook'
WHEN 'gotify' THEN 'gotify'
ELSE 'webhook'
END,
CASE target_type
config_json = CASE target_type
WHEN 'discord' THEN json_object('webhook_url', endpoint_url)
WHEN 'gotify' THEN json_object('server_url', endpoint_url, 'app_token', COALESCE(auth_token, ''))
ELSE json_object('url', endpoint_url, 'auth_token', auth_token)
END,
COALESCE(events, '["failed","completed"]'),
enabled,
created_at
FROM notification_targets;
DROP TABLE notification_targets;
ALTER TABLE notification_targets_new RENAME TO notification_targets;
END
WHERE target_type_v2 IS NULL
OR target_type_v2 = ''
OR config_json IS NULL
OR trim(config_json) = '';
CREATE INDEX IF NOT EXISTS idx_notification_targets_enabled
ON notification_targets(enabled);

@@ -0,0 +1,21 @@
CREATE TABLE IF NOT EXISTS encode_attempts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
job_id INTEGER NOT NULL REFERENCES jobs(id) ON DELETE CASCADE,
attempt_number INTEGER NOT NULL,
started_at TEXT,
finished_at TEXT NOT NULL DEFAULT (datetime('now')),
outcome TEXT NOT NULL CHECK(outcome IN ('completed', 'failed', 'cancelled')),
failure_code TEXT,
failure_summary TEXT,
input_size_bytes INTEGER,
output_size_bytes INTEGER,
encode_time_seconds REAL,
created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE INDEX IF NOT EXISTS idx_encode_attempts_job_id ON encode_attempts(job_id);
INSERT OR REPLACE INTO schema_info (key, value) VALUES
('schema_version', '8'),
('min_compatible_version', '0.2.5'),
('last_updated', datetime('now'));

@@ -0,0 +1,2 @@
-- Store input metadata as JSON to avoid live re-probing completed jobs
ALTER TABLE jobs ADD COLUMN input_metadata_json TEXT;

@@ -0,0 +1,38 @@
CREATE TABLE IF NOT EXISTS job_resume_sessions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
job_id INTEGER NOT NULL UNIQUE REFERENCES jobs(id) ON DELETE CASCADE,
strategy TEXT NOT NULL,
plan_hash TEXT NOT NULL,
mtime_hash TEXT NOT NULL,
temp_dir TEXT NOT NULL,
concat_manifest_path TEXT NOT NULL,
segment_length_secs INTEGER NOT NULL,
status TEXT NOT NULL DEFAULT 'active',
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS job_resume_segments (
id INTEGER PRIMARY KEY AUTOINCREMENT,
job_id INTEGER NOT NULL REFERENCES jobs(id) ON DELETE CASCADE,
segment_index INTEGER NOT NULL,
start_secs REAL NOT NULL,
duration_secs REAL NOT NULL,
temp_path TEXT NOT NULL,
status TEXT NOT NULL DEFAULT 'pending',
attempt_count INTEGER NOT NULL DEFAULT 0,
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
UNIQUE(job_id, segment_index)
);
CREATE INDEX IF NOT EXISTS idx_job_resume_sessions_status
ON job_resume_sessions(status);
CREATE INDEX IF NOT EXISTS idx_job_resume_segments_job_status
ON job_resume_segments(job_id, status);
INSERT OR REPLACE INTO schema_info (key, value) VALUES
('schema_version', '9'),
('min_compatible_version', '0.2.5'),
('last_updated', datetime('now'));

plans.md Normal file
@@ -0,0 +1,191 @@
# Open Item Plans
---
## [UX-2] Single File Enqueue
### Goal
`POST /api/jobs/enqueue` + "Add file" button in JobsToolbar.
### Backend
**New handler in `src/server/jobs.rs`:**
```rust
#[derive(Deserialize)]
struct EnqueueFilePayload {
input_path: String,
source_root: Option<String>,
}
async fn enqueue_file_handler(State(state), Json(payload)) -> impl IntoResponse
```
Logic:
1. Validate `input_path` exists on disk, is a file
2. Read `mtime` from filesystem metadata
3. Build `DiscoveredMedia { path, mtime, source_root }`
4. Call `enqueue_discovered_with_db(&db, discovered)` — reuses all existing skip checks, output path computation, file settings
5. If `Ok(true)` → fetch job via `db.get_job_by_input_path()`, return it
6. If `Ok(false)` → 409 "already tracked / output exists"
7. If `Err` → 400 with error
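Steps 1-2 are plain filesystem checks and can be sketched with std only. The handler wiring and the `enqueue_discovered_with_db` call are omitted, and the function name here is hypothetical:

```rust
use std::path::Path;
use std::time::SystemTime;

/// Validate the payload path and read its mtime (steps 1-2 above).
/// In the real handler, an Err would map to an HTTP 400 response.
fn validate_input_path(path: &str) -> Result<SystemTime, String> {
    let meta = std::fs::metadata(Path::new(path))
        .map_err(|e| format!("input_path not readable: {e}"))?;
    if !meta.is_file() {
        return Err("input_path is not a regular file".to_string());
    }
    // The mtime feeds DiscoveredMedia so later scans can detect changes.
    meta.modified().map_err(|e| format!("mtime unavailable: {e}"))
}
```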
**Route:** Add `.route("/api/jobs/enqueue", post(enqueue_file_handler))` in `src/server/mod.rs`
### Frontend
**`web/src/components/jobs/JobsToolbar.tsx`:**
- Add "Add File" button next to refresh
- Opens small modal/dialog with text input for path
- POST to `/api/jobs/enqueue`, toast result
- SSE handles job appearing in table automatically
### Files to modify
- `src/server/jobs.rs` — new handler + payload struct
- `src/server/mod.rs` — route registration
- `web/src/components/jobs/JobsToolbar.tsx` — button + dialog
- `web/src/components/jobs/` — optional: new `EnqueueDialog.tsx` component
### Verification
- `cargo check && cargo test && cargo clippy`
- Manual: POST valid path → job appears queued
- POST nonexistent path → 400
- POST already-tracked path → 409
- Frontend: click Add File, enter path, see job in table
---
## [UX-3] Workers-Blocked Reason
### Goal
Surface why queued jobs aren't being processed. Extend `/api/engine/status` → show reason in JobDetailModal.
### Backend
**Extend `engine_status_handler` response** (or create new endpoint) to include blocking state:
```rust
struct EngineStatusResponse {
// existing fields...
blocked_reason: Option<String>, // "paused", "scheduled", "draining", "boot_analysis", "slots_full", null
schedule_resume: Option<String>, // next window open time if scheduler_paused
}
```
Derive from `Agent` state:
- `agent.is_manual_paused()``"paused"`
- `agent.is_scheduler_paused()``"scheduled"`
- `agent.is_draining()``"draining"`
- `agent.is_boot_analyzing()``"boot_analysis"`
- `agent.in_flight_jobs >= agent.concurrent_jobs_limit()``"slots_full"`
- else → `null` (processing normally)
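The precedence above can be expressed as a small pure function. The snapshot fields are the plan's shorthand, not the real `Agent` struct:

```rust
// Hypothetical snapshot of engine state; mirrors the bullet list above.
struct AgentSnapshot {
    manual_paused: bool,
    scheduler_paused: bool,
    draining: bool,
    boot_analyzing: bool,
    in_flight_jobs: usize,
    concurrent_jobs_limit: usize,
}

/// First matching state wins; None means the engine is processing normally.
fn blocked_reason(a: &AgentSnapshot) -> Option<&'static str> {
    if a.manual_paused {
        Some("paused")
    } else if a.scheduler_paused {
        Some("scheduled")
    } else if a.draining {
        Some("draining")
    } else if a.boot_analyzing {
        Some("boot_analysis")
    } else if a.in_flight_jobs >= a.concurrent_jobs_limit {
        Some("slots_full")
    } else {
        None
    }
}
```

Ordering matters: a paused engine with full slots should report "paused", since resuming would not free capacity by itself.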
### Frontend
**`web/src/components/jobs/JobDetailModal.tsx`:**
- Below queue position display, show blocked reason if present
- Fetch from engine status (already available via SSE `EngineStatusChanged` events, or poll `/api/engine/status`)
- Color-coded: yellow for schedule/pause, blue for boot analysis, gray for slots full
### Files to modify
- `src/server/jobs.rs` or wherever `engine_status_handler` lives — extend response
- `web/src/components/jobs/JobDetailModal.tsx` — display blocked reason
- `web/src/components/jobs/useJobSSE.ts` — optionally track engine status via SSE
### Verification
- Pause engine → queued job detail shows "Engine paused"
- Set schedule window outside current time → shows "Outside schedule window"
- Fill all slots → shows "All worker slots occupied"
- Resume → reason disappears
---
## [FG-4] Intelligence Page Actions
### Goal
Add actionable buttons to `LibraryIntelligence.tsx`: delete duplicates, queue remux opportunities.
### Duplicate Group Actions
**"Keep Latest, Delete Rest" button per group:**
- Each duplicate group card gets a "Clean Up" button
- Selects all jobs except the one with latest `updated_at`
- Calls `POST /api/jobs/batch` with `{ action: "delete", ids: [...] }`
- Confirmation modal: "Archive N duplicate jobs?"
**"Clean All Duplicates" bulk button:**
- Top-level button in duplicates section header
- Same logic across all groups
- Shows total count in confirmation
### Recommendation Actions
**"Queue All Remux" button:**
- Gathers IDs of all remux opportunity jobs
- Calls `POST /api/jobs/batch` with `{ action: "restart", ids: [...] }`
- Jobs re-enter queue for remux processing
**Per-recommendation "Queue" button:**
- Individual restart for single recommendation items
### Backend
No new endpoints needed — existing `POST /api/jobs/batch` handles all actions (cancel/delete/restart).
### Frontend
**`web/src/components/LibraryIntelligence.tsx`:**
- Add "Clean Up" button to each duplicate group card
- Add "Clean All Duplicates" button to section header
- Add "Queue All" button to remux opportunities section
- Add confirmation modal component
- Add toast notifications for success/error
- Refresh data after action completes
### Files to modify
- `web/src/components/LibraryIntelligence.tsx` — buttons, modals, action handlers
### Verification
- Click "Clean Up" on duplicate group → archives all but latest
- Click "Queue All Remux" → remux jobs reset to queued
- Confirm counts in modal match actual
- Data refreshes after action
---
## [RG-2] AMD VAAPI/AMF Validation
### Goal
Verify AMD hardware encoder paths produce correct FFmpeg commands on real AMD hardware.
### Problem
`src/media/ffmpeg/vaapi.rs` and `src/media/ffmpeg/amf.rs` were implemented without real hardware validation. Flag mappings, device paths, and quality controls may be incorrect.
### Validation checklist
**VAAPI (Linux):**
- [ ] Device path `/dev/dri/renderD128` detection works
- [ ] `hevc_vaapi` / `h264_vaapi` encoder selection
- [ ] CRF/quality mapping → `-rc_mode CQP -qp N` or `-rc_mode ICQ -quality N`
- [ ] HDR passthrough flags (if applicable)
- [ ] Container compatibility (MKV/MP4)
**AMF (Windows):**
- [ ] `hevc_amf` / `h264_amf` encoder selection
- [ ] Quality mapping → `-quality quality -qp_i N -qp_p N`
- [ ] B-frame support detection
- [ ] HDR passthrough
### Approach
1. Write unit tests for `build_args()` output — verify flag strings without hardware
2. Gate integration tests on `AMD_GPU_AVAILABLE` env var
3. Document known-good flag sets from AMD documentation
4. Add `EncoderCapabilities` detection for AMF/VAAPI (similar to existing NVENC/QSV detection)
### Files to modify
- `src/media/ffmpeg/vaapi.rs` — flag corrections if needed
- `src/media/ffmpeg/amf.rs` — flag corrections if needed
- `tests/` — new integration test file gated on hardware
### Verification
- Unit tests pass on CI (no hardware needed)
- Integration tests pass on AMD hardware (manual)
- Generated FFmpeg commands reviewed against AMD documentation

@@ -1,124 +0,0 @@
# Security Best Practices Report
## Executive Summary
I found one critical security bug and one additional high-severity issue in the setup/bootstrap flow.
The critical problem is that first-run setup is remotely accessible without authentication while the server listens on `0.0.0.0`. A network-reachable attacker can win the initial setup race, create the first admin account, and take over the instance.
I did not find evidence of major client-side XSS sinks or obvious SQL injection paths during this audit. Most of the remaining concerns I saw were hardening-level issues rather than immediately exploitable major bugs.
## Critical Findings
### ALCH-SEC-001
- Severity: Critical
- Location:
- `src/server/middleware.rs:80-86`
- `src/server/wizard.rs:95-210`
- `src/server/mod.rs:176-197`
- `README.md:61-79`
- Impact: Any attacker who can reach the service before the legitimate operator completes setup can create the first admin account and fully compromise the instance.
#### Evidence
`auth_middleware` exempts the full `/api/setup` namespace from authentication:
- `src/server/middleware.rs:80-86`
`setup_complete_handler` only checks `setup_required` and then creates the user, session cookie, and persisted config:
- `src/server/wizard.rs:95-210`
The server binds to all interfaces by default:
- `src/server/mod.rs:176-197`
The documented Docker quick-start publishes port `3000` directly:
- `README.md:61-79`
#### Why This Is Exploitable
On a fresh install, or any run where `setup_required == true`, the application accepts unauthenticated requests to `/api/setup/complete`. Because the listener binds `0.0.0.0`, that endpoint is reachable from any network that can reach the host unless an external firewall or reverse proxy blocks it.
That lets a remote attacker:
1. POST their own username and password to `/api/setup/complete`
2. Receive the initial authenticated session cookie
3. Persist attacker-controlled configuration and start operating as the admin user
This is a full-authentication-bypass takeover of the instance during bootstrap.
#### Recommended Fix
Require setup completion to come only from a trusted local origin during bootstrap, matching the stricter treatment already used for `/api/fs/*` during setup.
Minimal safe options:
1. Restrict `/api/setup/*` and `/api/settings/bundle` to loopback-only while `setup_required == true`.
2. Alternatively require an explicit one-time bootstrap secret/token generated on startup and printed locally.
3. Consider binding to `127.0.0.1` by default until setup is complete, then allowing an explicit public bind only after bootstrap.
#### Mitigation Until Fixed
- Do not expose the service to any network before setup is completed.
- Do not publish the container port directly on untrusted networks.
- Complete setup only through a local-only tunnel or host firewall rule.
## High Findings
### ALCH-SEC-002
- Severity: High
- Location:
- `src/server/middleware.rs:116-117`
- `src/server/settings.rs:244-285`
- `src/config.rs:366-390`
- `src/main.rs:369-383`
- `src/db.rs:2566-2571`
- Impact: During setup mode, an unauthenticated remote attacker can read and overwrite the full runtime configuration; after `--reset-auth`, this can expose existing notification endpoints/tokens and let the attacker reconfigure the instance before the operator reclaims it.
#### Evidence
While `setup_required == true`, `auth_middleware` explicitly allows `/api/settings/bundle` without authentication:
- `src/server/middleware.rs:116-117`
`get_settings_bundle_handler` returns the full `Config`, and `update_settings_bundle_handler` writes an attacker-supplied `Config` back to disk and runtime state:
- `src/server/settings.rs:244-285`
The config structure includes notification targets and optional `auth_token` fields:
- `src/config.rs:366-390`
`--reset-auth` only clears users and sessions, then re-enters setup mode:
- `src/main.rs:369-383`
- `src/db.rs:2566-2571`
#### Why This Is Exploitable
This endpoint is effectively a public config API whenever the app is in setup mode. On a brand-new install that broadens the same bootstrap attack surface as ALCH-SEC-001. On an existing deployment where an operator runs `--reset-auth`, the previous configuration remains on disk while authentication is removed, so a remote caller can:
1. GET `/api/settings/bundle` and read the current config
2. Learn configured paths, schedules, webhook targets, and any stored notification bearer tokens
3. PUT a replacement config before the legitimate operator finishes recovery
That creates both confidential-data exposure and unauthenticated remote reconfiguration during recovery/bootstrap windows.
#### Recommended Fix
Do not expose `/api/settings/bundle` anonymously.
Safer options:
1. Apply the same loopback-only setup restriction used for `/api/fs/*`.
2. Split bootstrap-safe fields from privileged configuration and expose only the minimal bootstrap payload anonymously.
3. Redact secret-bearing config fields such as notification tokens from any unauthenticated response path.
## Notes
- I did not find a major DOM-XSS path in `web/src`; there were no `dangerouslySetInnerHTML`, `innerHTML`, `insertAdjacentHTML`, `eval`, or similar high-risk sinks in the audited code paths.
- I also did not see obvious raw SQL string interpolation issues; the database code I reviewed uses parameter binding.

skills-lock.json Normal file
@@ -0,0 +1,10 @@
{
"version": 1,
"skills": {
"caveman": {
"source": "JuliusBrussee/caveman",
"sourceType": "github",
"computedHash": "a818cdc41dcfaa50dd891c5cb5e5705968338de02e7e37949ca56e8c30ad4176"
}
}
}

@@ -82,12 +82,12 @@ impl QualityProfile {
}
}
/// Get FFmpeg quality value for Apple VideoToolbox
/// Get FFmpeg quality value for Apple VideoToolbox (-q:v 1-100, lower is better)
pub fn videotoolbox_quality(&self) -> &'static str {
match self {
Self::Quality => "55",
Self::Balanced => "65",
Self::Speed => "75",
Self::Quality => "24",
Self::Balanced => "28",
Self::Speed => "32",
}
}
}
@@ -357,7 +357,9 @@ pub(crate) fn default_allow_fallback() -> bool {
}
pub(crate) fn default_tonemap_peak() -> f32 {
100.0
// HDR10 content is typically mastered at 1000 nits. Using 100 (SDR level)
// causes severe over-compression of highlights during tone-mapping.
1000.0
}
pub(crate) fn default_tonemap_desat() -> f32 {
@@ -488,36 +490,34 @@ impl NotificationTargetConfig {
match self.target_type.as_str() {
"discord_webhook" => {
if !config_map.contains_key("webhook_url") {
if let Some(endpoint_url) = self.endpoint_url.clone() {
config_map
.insert("webhook_url".to_string(), JsonValue::String(endpoint_url));
}
if let Some(endpoint_url) = self.endpoint_url.clone() {
config_map
.entry("webhook_url".to_string())
.or_insert_with(|| JsonValue::String(endpoint_url));
}
}
"gotify" => {
if !config_map.contains_key("server_url") {
if let Some(endpoint_url) = self.endpoint_url.clone() {
config_map
.insert("server_url".to_string(), JsonValue::String(endpoint_url));
}
if let Some(endpoint_url) = self.endpoint_url.clone() {
config_map
.entry("server_url".to_string())
.or_insert_with(|| JsonValue::String(endpoint_url));
}
if !config_map.contains_key("app_token") {
if let Some(auth_token) = self.auth_token.clone() {
config_map.insert("app_token".to_string(), JsonValue::String(auth_token));
}
if let Some(auth_token) = self.auth_token.clone() {
config_map
.entry("app_token".to_string())
.or_insert_with(|| JsonValue::String(auth_token));
}
}
"webhook" => {
if !config_map.contains_key("url") {
if let Some(endpoint_url) = self.endpoint_url.clone() {
config_map.insert("url".to_string(), JsonValue::String(endpoint_url));
}
if let Some(endpoint_url) = self.endpoint_url.clone() {
config_map
.entry("url".to_string())
.or_insert_with(|| JsonValue::String(endpoint_url));
}
if !config_map.contains_key("auth_token") {
if let Some(auth_token) = self.auth_token.clone() {
config_map.insert("auth_token".to_string(), JsonValue::String(auth_token));
}
if let Some(auth_token) = self.auth_token.clone() {
config_map
.entry("auth_token".to_string())
.or_insert_with(|| JsonValue::String(auth_token));
}
}
_ => {}
@@ -682,8 +682,13 @@ pub struct SystemConfig {
/// Enable HSTS header (only enable if running behind HTTPS)
#[serde(default)]
pub https_only: bool,
/// Explicit list of reverse proxy IPs (e.g. "192.168.1.1") whose
/// X-Forwarded-For / X-Real-IP headers are trusted. When non-empty,
/// only these IPs (plus loopback) are trusted as proxies; private
/// ranges are no longer trusted by default. Leave empty to preserve
/// the previous behaviour (trust all RFC-1918 private addresses).
#[serde(default)]
pub base_url: String,
pub trusted_proxies: Vec<String>,
}
fn default_true() -> bool {
@@ -710,7 +715,7 @@ impl Default for SystemConfig {
log_retention_days: default_log_retention_days(),
engine_mode: EngineMode::default(),
https_only: false,
base_url: String::new(),
trusted_proxies: Vec::new(),
}
}
}
@@ -826,7 +831,7 @@ impl Default for Config {
log_retention_days: default_log_retention_days(),
engine_mode: EngineMode::default(),
https_only: false,
base_url: String::new(),
trusted_proxies: Vec::new(),
},
}
}
@@ -923,7 +928,6 @@ impl Config {
}
validate_schedule_time(&self.notifications.daily_summary_time_local)?;
normalize_base_url(&self.system.base_url)?;
for target in &self.notifications.targets {
target.validate()?;
}
@@ -1026,7 +1030,6 @@ impl Config {
}
pub(crate) fn canonicalize_for_save(&mut self) {
self.system.base_url = normalize_base_url(&self.system.base_url).unwrap_or_default();
if !self.notifications.targets.is_empty() {
self.notifications.webhook_url = None;
self.notifications.discord_webhook = None;
@@ -1046,33 +1049,7 @@ impl Config {
}
}
pub(crate) fn apply_env_overrides(&mut self) {
if let Ok(base_url) = std::env::var("ALCHEMIST_BASE_URL") {
self.system.base_url = base_url;
}
self.system.base_url = normalize_base_url(&self.system.base_url).unwrap_or_default();
}
}
pub fn normalize_base_url(value: &str) -> Result<String> {
let trimmed = value.trim();
if trimmed.is_empty() || trimmed == "/" {
return Ok(String::new());
}
if trimmed.contains("://") {
anyhow::bail!("system.base_url must be a path prefix, not a full URL");
}
if !trimmed.starts_with('/') {
anyhow::bail!("system.base_url must start with '/'");
}
if trimmed.contains('?') || trimmed.contains('#') {
anyhow::bail!("system.base_url must not contain query or fragment components");
}
let normalized = trimmed.trim_end_matches('/');
if normalized.contains("//") {
anyhow::bail!("system.base_url must not contain repeated slashes");
}
Ok(normalized.to_string())
pub(crate) fn apply_env_overrides(&mut self) {}
}
fn validate_schedule_time(value: &str) -> Result<()> {
@@ -1158,65 +1135,4 @@ mod tests {
assert_eq!(EngineMode::default(), EngineMode::Balanced);
assert_eq!(EngineMode::Balanced.concurrent_jobs_for_cpu_count(8), 4);
}
#[test]
fn normalize_base_url_accepts_root_or_empty() {
assert_eq!(
normalize_base_url("").unwrap_or_else(|err| panic!("empty base url: {err}")),
""
);
assert_eq!(
normalize_base_url("/").unwrap_or_else(|err| panic!("root base url: {err}")),
""
);
assert_eq!(
normalize_base_url("/alchemist/")
.unwrap_or_else(|err| panic!("trimmed base url: {err}")),
"/alchemist"
);
}
#[test]
fn normalize_base_url_rejects_invalid_values() {
assert!(normalize_base_url("alchemist").is_err());
assert!(normalize_base_url("https://example.com/alchemist").is_err());
assert!(normalize_base_url("/a//b").is_err());
}
#[test]
fn env_base_url_override_takes_priority_on_load() {
let config_path = std::env::temp_dir().join(format!(
"alchemist_base_url_override_{}.toml",
rand::random::<u64>()
));
std::fs::write(
&config_path,
r#"
[transcode]
size_reduction_threshold = 0.3
min_bpp_threshold = 0.1
min_file_size_mb = 50
concurrent_jobs = 1
[hardware]
preferred_vendor = "cpu"
allow_cpu_fallback = true
[scanner]
directories = []
[system]
base_url = "/from-config"
"#,
)
.unwrap_or_else(|err| panic!("failed to write temp config: {err}"));
// SAFETY: test-only environment mutation.
unsafe { std::env::set_var("ALCHEMIST_BASE_URL", "/from-env") };
let config =
Config::load(&config_path).unwrap_or_else(|err| panic!("failed to load config: {err}"));
assert_eq!(config.system.base_url, "/from-env");
unsafe { std::env::remove_var("ALCHEMIST_BASE_URL") };
let _ = std::fs::remove_file(config_path);
}
}


@@ -195,8 +195,8 @@ pub fn build_plan(
match normalized.video.hdr_mode.as_str() {
"tonemap" => filters.push(FilterStep::Tonemap {
algorithm: TonemapAlgorithm::Hable,
peak: 100.0,
desat: 0.2,
peak: crate::config::default_tonemap_peak(),
desat: crate::config::default_tonemap_desat(),
}),
"strip_metadata" => filters.push(FilterStep::StripHdrMetadata),
_ => {}
@@ -369,7 +369,18 @@ fn build_subtitle_plan(
copy_video: bool,
) -> Result<SubtitleStreamPlan> {
match settings.subtitles.mode.as_str() {
"copy" => Ok(SubtitleStreamPlan::CopyAllCompatible),
"copy" => {
if !crate::media::planner::subtitle_copy_supported(
&settings.output_container,
&analysis.metadata.subtitle_streams,
) {
return Err(AlchemistError::Config(
"Subtitle copy is not supported for the selected output container with these subtitle codecs. \
Use 'remove' or 'burn' instead.".to_string(),
));
}
Ok(SubtitleStreamPlan::CopyAllCompatible)
}
"remove" | "drop" | "none" => Ok(SubtitleStreamPlan::Drop),
"burn" => {
if copy_video {
@@ -431,6 +442,7 @@ fn build_rate_control(mode: &str, value: Option<u32>, encoder: Encoder) -> Resul
match encoder.backend() {
EncoderBackend::Qsv => Ok(RateControl::QsvQuality { value: quality }),
EncoderBackend::Cpu => Ok(RateControl::Crf { value: quality }),
EncoderBackend::Videotoolbox => Ok(RateControl::Cq { value: quality }),
_ => Ok(RateControl::Cq { value: quality }),
}
}
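
The `build_rate_control` hunk above routes one abstract quality number to a backend-specific rate-control mode (the VideoToolbox arm is the audit fix). A standalone sketch of that dispatch — the ffmpeg flag names used here (`-crf`, `-global_quality`, `-q:v`) are the conventional ones for these backends, but treat them as assumptions rather than a copy of this project's command builder:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Backend {
    Cpu,
    Qsv,
    Videotoolbox,
}

// Map the shared quality value onto each backend's rate-control flag.
fn rate_control_args(backend: Backend, quality: u32) -> Vec<String> {
    let flag = match backend {
        Backend::Cpu => "-crf",            // libx264 / libsvtav1 style
        Backend::Qsv => "-global_quality", // Intel Quick Sync
        Backend::Videotoolbox => "-q:v",   // Apple VideoToolbox
    };
    vec![flag.to_string(), quality.to_string()]
}
```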

src/db.rs — 3349 lines changed

File diff suppressed because it is too large

src/db/config.rs — 864 lines (new file)

@@ -0,0 +1,864 @@
use crate::error::Result;
use serde_json::Value as JsonValue;
use sqlx::Row;
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use super::Db;
use super::types::*;
fn notification_config_string(config_json: &str, key: &str) -> Option<String> {
serde_json::from_str::<JsonValue>(config_json)
.ok()
.and_then(|value| {
value
.get(key)
.and_then(JsonValue::as_str)
.map(str::to_string)
})
.map(|value| value.trim().to_string())
.filter(|value| !value.is_empty())
}
fn notification_legacy_columns(
target_type: &str,
config_json: &str,
) -> (String, Option<String>, Option<String>) {
match target_type {
"discord_webhook" => (
"discord".to_string(),
notification_config_string(config_json, "webhook_url"),
None,
),
"discord_bot" => (
"discord".to_string(),
Some("https://discord.com".to_string()),
notification_config_string(config_json, "bot_token"),
),
"gotify" => (
"gotify".to_string(),
notification_config_string(config_json, "server_url"),
notification_config_string(config_json, "app_token"),
),
"webhook" => (
"webhook".to_string(),
notification_config_string(config_json, "url"),
notification_config_string(config_json, "auth_token"),
),
"telegram" => (
"webhook".to_string(),
Some("https://api.telegram.org".to_string()),
notification_config_string(config_json, "bot_token"),
),
"email" => ("webhook".to_string(), None, None),
other => (other.to_string(), None, None),
}
}
impl Db {
pub async fn get_watch_dirs(&self) -> Result<Vec<WatchDir>> {
let has_is_recursive = self.watch_dir_flags.has_is_recursive;
let has_recursive = self.watch_dir_flags.has_recursive;
let has_enabled = self.watch_dir_flags.has_enabled;
let has_profile_id = self.watch_dir_flags.has_profile_id;
let recursive_expr = if has_is_recursive {
"is_recursive"
} else if has_recursive {
"recursive"
} else {
"1"
};
let enabled_filter = if has_enabled {
"WHERE enabled = 1 "
} else {
""
};
let profile_expr = if has_profile_id { "profile_id" } else { "NULL" };
let query = format!(
"SELECT id, path, {} as is_recursive, {} as profile_id, created_at
FROM watch_dirs {}ORDER BY path ASC",
recursive_expr, profile_expr, enabled_filter
);
let dirs = sqlx::query_as::<_, WatchDir>(&query)
.fetch_all(&self.pool)
.await?;
Ok(dirs)
}
pub async fn add_watch_dir(&self, path: &str, is_recursive: bool) -> Result<WatchDir> {
let has_is_recursive = self.watch_dir_flags.has_is_recursive;
let has_recursive = self.watch_dir_flags.has_recursive;
let has_profile_id = self.watch_dir_flags.has_profile_id;
let row = if has_is_recursive && has_profile_id {
sqlx::query_as::<_, WatchDir>(
"INSERT INTO watch_dirs (path, is_recursive) VALUES (?, ?)
RETURNING id, path, is_recursive, profile_id, created_at",
)
.bind(path)
.bind(is_recursive)
.fetch_one(&self.pool)
.await?
} else if has_is_recursive {
sqlx::query_as::<_, WatchDir>(
"INSERT INTO watch_dirs (path, is_recursive) VALUES (?, ?)
RETURNING id, path, is_recursive, NULL as profile_id, created_at",
)
.bind(path)
.bind(is_recursive)
.fetch_one(&self.pool)
.await?
} else if has_recursive && has_profile_id {
sqlx::query_as::<_, WatchDir>(
"INSERT INTO watch_dirs (path, recursive) VALUES (?, ?)
RETURNING id, path, recursive as is_recursive, profile_id, created_at",
)
.bind(path)
.bind(is_recursive)
.fetch_one(&self.pool)
.await?
} else if has_recursive {
sqlx::query_as::<_, WatchDir>(
"INSERT INTO watch_dirs (path, recursive) VALUES (?, ?)
RETURNING id, path, recursive as is_recursive, NULL as profile_id, created_at",
)
.bind(path)
.bind(is_recursive)
.fetch_one(&self.pool)
.await?
} else {
sqlx::query_as::<_, WatchDir>(
"INSERT INTO watch_dirs (path) VALUES (?)
RETURNING id, path, 1 as is_recursive, NULL as profile_id, created_at",
)
.bind(path)
.fetch_one(&self.pool)
.await?
};
Ok(row)
}
pub async fn replace_watch_dirs(
&self,
watch_dirs: &[crate::config::WatchDirConfig],
) -> Result<()> {
let has_is_recursive = self.watch_dir_flags.has_is_recursive;
let has_recursive = self.watch_dir_flags.has_recursive;
let has_profile_id = self.watch_dir_flags.has_profile_id;
let preserved_profiles = if has_profile_id {
let rows = sqlx::query("SELECT path, profile_id FROM watch_dirs")
.fetch_all(&self.pool)
.await?;
rows.into_iter()
.map(|row| {
let path: String = row.get("path");
let profile_id: Option<i64> = row.get("profile_id");
(path, profile_id)
})
.collect::<HashMap<_, _>>()
} else {
HashMap::new()
};
let mut tx = self.pool.begin().await?;
sqlx::query("DELETE FROM watch_dirs")
.execute(&mut *tx)
.await?;
for watch_dir in watch_dirs {
let preserved_profile_id = preserved_profiles.get(&watch_dir.path).copied().flatten();
if has_is_recursive && has_profile_id {
sqlx::query(
"INSERT INTO watch_dirs (path, is_recursive, profile_id) VALUES (?, ?, ?)",
)
.bind(&watch_dir.path)
.bind(watch_dir.is_recursive)
.bind(preserved_profile_id)
.execute(&mut *tx)
.await?;
} else if has_recursive && has_profile_id {
sqlx::query(
"INSERT INTO watch_dirs (path, recursive, profile_id) VALUES (?, ?, ?)",
)
.bind(&watch_dir.path)
.bind(watch_dir.is_recursive)
.bind(preserved_profile_id)
.execute(&mut *tx)
.await?;
} else if has_recursive {
sqlx::query("INSERT INTO watch_dirs (path, recursive) VALUES (?, ?)")
.bind(&watch_dir.path)
.bind(watch_dir.is_recursive)
.execute(&mut *tx)
.await?;
} else {
sqlx::query("INSERT INTO watch_dirs (path) VALUES (?)")
.bind(&watch_dir.path)
.execute(&mut *tx)
.await?;
}
}
tx.commit().await?;
Ok(())
}
pub async fn remove_watch_dir(&self, id: i64) -> Result<()> {
let res = sqlx::query("DELETE FROM watch_dirs WHERE id = ?")
.bind(id)
.execute(&self.pool)
.await?;
if res.rows_affected() == 0 {
return Err(crate::error::AlchemistError::Database(
sqlx::Error::RowNotFound,
));
}
Ok(())
}
pub async fn get_all_profiles(&self) -> Result<Vec<LibraryProfile>> {
let profiles = sqlx::query_as::<_, LibraryProfile>(
"SELECT id, name, preset, codec, quality_profile, hdr_mode, audio_mode,
crf_override, notes, created_at, updated_at
FROM library_profiles
ORDER BY id ASC",
)
.fetch_all(&self.pool)
.await?;
Ok(profiles)
}
pub async fn get_profile(&self, id: i64) -> Result<Option<LibraryProfile>> {
let profile = sqlx::query_as::<_, LibraryProfile>(
"SELECT id, name, preset, codec, quality_profile, hdr_mode, audio_mode,
crf_override, notes, created_at, updated_at
FROM library_profiles
WHERE id = ?",
)
.bind(id)
.fetch_optional(&self.pool)
.await?;
Ok(profile)
}
pub async fn create_profile(&self, profile: NewLibraryProfile) -> Result<i64> {
let id = sqlx::query(
"INSERT INTO library_profiles
(name, preset, codec, quality_profile, hdr_mode, audio_mode, crf_override, notes, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, CURRENT_TIMESTAMP)",
)
.bind(profile.name)
.bind(profile.preset)
.bind(profile.codec)
.bind(profile.quality_profile)
.bind(profile.hdr_mode)
.bind(profile.audio_mode)
.bind(profile.crf_override)
.bind(profile.notes)
.execute(&self.pool)
.await?
.last_insert_rowid();
Ok(id)
}
pub async fn update_profile(&self, id: i64, profile: NewLibraryProfile) -> Result<()> {
let result = sqlx::query(
"UPDATE library_profiles
SET name = ?,
preset = ?,
codec = ?,
quality_profile = ?,
hdr_mode = ?,
audio_mode = ?,
crf_override = ?,
notes = ?,
updated_at = CURRENT_TIMESTAMP
WHERE id = ?",
)
.bind(profile.name)
.bind(profile.preset)
.bind(profile.codec)
.bind(profile.quality_profile)
.bind(profile.hdr_mode)
.bind(profile.audio_mode)
.bind(profile.crf_override)
.bind(profile.notes)
.bind(id)
.execute(&self.pool)
.await?;
if result.rows_affected() == 0 {
return Err(crate::error::AlchemistError::Database(
sqlx::Error::RowNotFound,
));
}
Ok(())
}
pub async fn delete_profile(&self, id: i64) -> Result<()> {
let result = sqlx::query("DELETE FROM library_profiles WHERE id = ?")
.bind(id)
.execute(&self.pool)
.await?;
if result.rows_affected() == 0 {
return Err(crate::error::AlchemistError::Database(
sqlx::Error::RowNotFound,
));
}
Ok(())
}
pub async fn assign_profile_to_watch_dir(
&self,
dir_id: i64,
profile_id: Option<i64>,
) -> Result<()> {
let result = sqlx::query(
"UPDATE watch_dirs
SET profile_id = ?
WHERE id = ?",
)
.bind(profile_id)
.bind(dir_id)
.execute(&self.pool)
.await?;
if result.rows_affected() == 0 {
return Err(crate::error::AlchemistError::Database(
sqlx::Error::RowNotFound,
));
}
Ok(())
}
pub async fn get_profile_for_path(&self, path: &str) -> Result<Option<LibraryProfile>> {
let normalized = Path::new(path);
let candidate = sqlx::query_as::<_, LibraryProfile>(
"SELECT lp.id, lp.name, lp.preset, lp.codec, lp.quality_profile, lp.hdr_mode,
lp.audio_mode, lp.crf_override, lp.notes, lp.created_at, lp.updated_at
FROM watch_dirs wd
JOIN library_profiles lp ON lp.id = wd.profile_id
WHERE wd.profile_id IS NOT NULL
AND (
? = wd.path
OR (
length(?) > length(wd.path)
AND (
substr(?, 1, length(wd.path) + 1) = wd.path || '/'
OR substr(?, 1, length(wd.path) + 1) = wd.path || '\\'
)
)
)
ORDER BY LENGTH(wd.path) DESC
LIMIT 1",
)
.bind(path)
.bind(path)
.bind(path)
.bind(path)
.fetch_optional(&self.pool)
.await?;
if candidate.is_some() {
return Ok(candidate);
}
// SQLite prefix matching is a fast first pass; fall back to strict path ancestry
// if separators or normalization differ.
let rows = sqlx::query(
"SELECT wd.path,
lp.id, lp.name, lp.preset, lp.codec, lp.quality_profile, lp.hdr_mode,
lp.audio_mode, lp.crf_override, lp.notes, lp.created_at, lp.updated_at
FROM watch_dirs wd
JOIN library_profiles lp ON lp.id = wd.profile_id
WHERE wd.profile_id IS NOT NULL",
)
.fetch_all(&self.pool)
.await?;
let mut best: Option<(usize, LibraryProfile)> = None;
for row in rows {
let watch_path: String = row.get("path");
let profile = LibraryProfile {
id: row.get("id"),
name: row.get("name"),
preset: row.get("preset"),
codec: row.get("codec"),
quality_profile: row.get("quality_profile"),
hdr_mode: row.get("hdr_mode"),
audio_mode: row.get("audio_mode"),
crf_override: row.get("crf_override"),
notes: row.get("notes"),
created_at: row.get("created_at"),
updated_at: row.get("updated_at"),
};
let watch_path_buf = PathBuf::from(&watch_path);
if normalized == watch_path_buf || normalized.starts_with(&watch_path_buf) {
let score = watch_path.len();
if best
.as_ref()
.is_none_or(|(best_score, _)| score > *best_score)
{
best = Some((score, profile));
}
}
}
Ok(best.map(|(_, profile)| profile))
}
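
The fallback loop above picks the deepest watch dir that is a genuine path ancestor of the candidate, using component-wise comparison so `/media/TV_4K` never matches `/media/TVA4K`. The same idea in isolation, as a sketch over `std::path` (hypothetical helper, not the crate's code):

```rust
use std::path::Path;

// Return the longest watch dir that equals the path or is a strict
// component-wise ancestor of it. Path::starts_with compares whole
// components, so "/media/TVA4K" does NOT start with "/media/TV_4K".
fn longest_ancestor<'a>(path: &str, watch_dirs: &[&'a str]) -> Option<&'a str> {
    let candidate = Path::new(path);
    watch_dirs
        .iter()
        .copied()
        .filter(|wd| candidate.starts_with(Path::new(wd)))
        .max_by_key(|wd| wd.len())
}
```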
pub async fn count_watch_dirs_using_profile(&self, profile_id: i64) -> Result<i64> {
let row: (i64,) = sqlx::query_as("SELECT COUNT(*) FROM watch_dirs WHERE profile_id = ?")
.bind(profile_id)
.fetch_one(&self.pool)
.await?;
Ok(row.0)
}
pub async fn get_notification_targets(&self) -> Result<Vec<NotificationTarget>> {
let flags = &self.notification_target_flags;
let targets = if flags.has_target_type_v2 {
sqlx::query_as::<_, NotificationTarget>(
"SELECT
id,
name,
COALESCE(
NULLIF(target_type_v2, ''),
CASE target_type
WHEN 'discord' THEN 'discord_webhook'
WHEN 'gotify' THEN 'gotify'
ELSE 'webhook'
END
) AS target_type,
CASE
WHEN trim(config_json) != '' THEN config_json
WHEN target_type = 'discord' THEN json_object('webhook_url', endpoint_url)
WHEN target_type = 'gotify' THEN json_object('server_url', endpoint_url, 'app_token', COALESCE(auth_token, ''))
ELSE json_object('url', endpoint_url, 'auth_token', auth_token)
END AS config_json,
events,
enabled,
created_at
FROM notification_targets
ORDER BY id ASC",
)
.fetch_all(&self.pool)
.await?
} else {
sqlx::query_as::<_, NotificationTarget>(
"SELECT id, name, target_type, config_json, events, enabled, created_at
FROM notification_targets
ORDER BY id ASC",
)
.fetch_all(&self.pool)
.await?
};
Ok(targets)
}
pub async fn add_notification_target(
&self,
name: &str,
target_type: &str,
config_json: &str,
events: &str,
enabled: bool,
) -> Result<NotificationTarget> {
let flags = &self.notification_target_flags;
if flags.has_target_type_v2 {
let (legacy_target_type, endpoint_url, auth_token) =
notification_legacy_columns(target_type, config_json);
let result = sqlx::query(
"INSERT INTO notification_targets
(name, target_type, target_type_v2, endpoint_url, auth_token, config_json, events, enabled)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
)
.bind(name)
.bind(legacy_target_type)
.bind(target_type)
.bind(endpoint_url)
.bind(auth_token)
.bind(config_json)
.bind(events)
.bind(enabled)
.execute(&self.pool)
.await?;
self.get_notification_target_by_id(result.last_insert_rowid())
.await
} else {
let result = sqlx::query(
"INSERT INTO notification_targets (name, target_type, config_json, events, enabled)
VALUES (?, ?, ?, ?, ?)",
)
.bind(name)
.bind(target_type)
.bind(config_json)
.bind(events)
.bind(enabled)
.execute(&self.pool)
.await?;
self.get_notification_target_by_id(result.last_insert_rowid())
.await
}
}
pub async fn delete_notification_target(&self, id: i64) -> Result<()> {
let res = sqlx::query("DELETE FROM notification_targets WHERE id = ?")
.bind(id)
.execute(&self.pool)
.await?;
if res.rows_affected() == 0 {
return Err(crate::error::AlchemistError::Database(
sqlx::Error::RowNotFound,
));
}
Ok(())
}
pub async fn replace_notification_targets(
&self,
targets: &[crate::config::NotificationTargetConfig],
) -> Result<()> {
let flags = &self.notification_target_flags;
let mut tx = self.pool.begin().await?;
sqlx::query("DELETE FROM notification_targets")
.execute(&mut *tx)
.await?;
for target in targets {
let config_json = target.config_json.to_string();
let events = serde_json::to_string(&target.events).unwrap_or_else(|_| "[]".to_string());
if flags.has_target_type_v2 {
let (legacy_target_type, endpoint_url, auth_token) =
notification_legacy_columns(&target.target_type, &config_json);
sqlx::query(
"INSERT INTO notification_targets
(name, target_type, target_type_v2, endpoint_url, auth_token, config_json, events, enabled)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
)
.bind(&target.name)
.bind(legacy_target_type)
.bind(&target.target_type)
.bind(endpoint_url)
.bind(auth_token)
.bind(&config_json)
.bind(&events)
.bind(target.enabled)
.execute(&mut *tx)
.await?;
} else {
sqlx::query(
"INSERT INTO notification_targets (name, target_type, config_json, events, enabled) VALUES (?, ?, ?, ?, ?)",
)
.bind(&target.name)
.bind(&target.target_type)
.bind(&config_json)
.bind(&events)
.bind(target.enabled)
.execute(&mut *tx)
.await?;
}
}
tx.commit().await?;
Ok(())
}
async fn get_notification_target_by_id(&self, id: i64) -> Result<NotificationTarget> {
let flags = &self.notification_target_flags;
let row = if flags.has_target_type_v2 {
sqlx::query_as::<_, NotificationTarget>(
"SELECT
id,
name,
COALESCE(
NULLIF(target_type_v2, ''),
CASE target_type
WHEN 'discord' THEN 'discord_webhook'
WHEN 'gotify' THEN 'gotify'
ELSE 'webhook'
END
) AS target_type,
CASE
WHEN trim(config_json) != '' THEN config_json
WHEN target_type = 'discord' THEN json_object('webhook_url', endpoint_url)
WHEN target_type = 'gotify' THEN json_object('server_url', endpoint_url, 'app_token', COALESCE(auth_token, ''))
ELSE json_object('url', endpoint_url, 'auth_token', auth_token)
END AS config_json,
events,
enabled,
created_at
FROM notification_targets
WHERE id = ?",
)
.bind(id)
.fetch_one(&self.pool)
.await?
} else {
sqlx::query_as::<_, NotificationTarget>(
"SELECT id, name, target_type, config_json, events, enabled, created_at
FROM notification_targets
WHERE id = ?",
)
.bind(id)
.fetch_one(&self.pool)
.await?
};
Ok(row)
}
pub async fn get_schedule_windows(&self) -> Result<Vec<ScheduleWindow>> {
let windows =
sqlx::query_as::<_, ScheduleWindow>("SELECT * FROM schedule_windows ORDER BY id ASC")
.fetch_all(&self.pool)
.await?;
Ok(windows)
}
pub async fn add_schedule_window(
&self,
start_time: &str,
end_time: &str,
days_of_week: &str,
enabled: bool,
) -> Result<ScheduleWindow> {
let row = sqlx::query_as::<_, ScheduleWindow>(
"INSERT INTO schedule_windows (start_time, end_time, days_of_week, enabled)
VALUES (?, ?, ?, ?)
RETURNING *",
)
.bind(start_time)
.bind(end_time)
.bind(days_of_week)
.bind(enabled)
.fetch_one(&self.pool)
.await?;
Ok(row)
}
pub async fn delete_schedule_window(&self, id: i64) -> Result<()> {
let res = sqlx::query("DELETE FROM schedule_windows WHERE id = ?")
.bind(id)
.execute(&self.pool)
.await?;
if res.rows_affected() == 0 {
return Err(crate::error::AlchemistError::Database(
sqlx::Error::RowNotFound,
));
}
Ok(())
}
pub async fn replace_schedule_windows(
&self,
windows: &[crate::config::ScheduleWindowConfig],
) -> Result<()> {
let mut tx = self.pool.begin().await?;
sqlx::query("DELETE FROM schedule_windows")
.execute(&mut *tx)
.await?;
for window in windows {
sqlx::query(
"INSERT INTO schedule_windows (start_time, end_time, days_of_week, enabled) VALUES (?, ?, ?, ?)",
)
.bind(&window.start_time)
.bind(&window.end_time)
.bind(serde_json::to_string(&window.days_of_week).unwrap_or_else(|_| "[]".to_string()))
.bind(window.enabled)
.execute(&mut *tx)
.await?;
}
tx.commit().await?;
Ok(())
}
pub async fn get_file_settings(&self) -> Result<FileSettings> {
// Migration ensures row 1 exists, but we handle missing just in case
let row = sqlx::query_as::<_, FileSettings>("SELECT * FROM file_settings WHERE id = 1")
.fetch_optional(&self.pool)
.await?;
match row {
Some(s) => Ok(s),
None => {
// If missing (shouldn't happen), return default
Ok(FileSettings {
id: 1,
delete_source: false,
output_extension: "mkv".to_string(),
output_suffix: "-alchemist".to_string(),
replace_strategy: "keep".to_string(),
output_root: None,
})
}
}
}
pub async fn update_file_settings(
&self,
delete_source: bool,
output_extension: &str,
output_suffix: &str,
replace_strategy: &str,
output_root: Option<&str>,
) -> Result<FileSettings> {
let row = sqlx::query_as::<_, FileSettings>(
"UPDATE file_settings
SET delete_source = ?, output_extension = ?, output_suffix = ?, replace_strategy = ?, output_root = ?
WHERE id = 1
RETURNING *",
)
.bind(delete_source)
.bind(output_extension)
.bind(output_suffix)
.bind(replace_strategy)
.bind(output_root)
.fetch_one(&self.pool)
.await?;
Ok(row)
}
pub async fn replace_file_settings_projection(
&self,
settings: &crate::config::FileSettingsConfig,
) -> Result<FileSettings> {
self.update_file_settings(
settings.delete_source,
&settings.output_extension,
&settings.output_suffix,
&settings.replace_strategy,
settings.output_root.as_deref(),
)
.await
}
/// Set UI preference
pub async fn set_preference(&self, key: &str, value: &str) -> Result<()> {
sqlx::query(
"INSERT INTO ui_preferences (key, value, updated_at) VALUES (?, ?, CURRENT_TIMESTAMP)
ON CONFLICT(key) DO UPDATE SET value = excluded.value, updated_at = CURRENT_TIMESTAMP",
)
.bind(key)
.bind(value)
.execute(&self.pool)
.await?;
Ok(())
}
/// Get UI preference
pub async fn get_preference(&self, key: &str) -> Result<Option<String>> {
let row: Option<(String,)> =
sqlx::query_as("SELECT value FROM ui_preferences WHERE key = ?")
.bind(key)
.fetch_optional(&self.pool)
.await?;
Ok(row.map(|r| r.0))
}
pub async fn delete_preference(&self, key: &str) -> Result<()> {
sqlx::query("DELETE FROM ui_preferences WHERE key = ?")
.bind(key)
.execute(&self.pool)
.await?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
fn temp_db_path(prefix: &str) -> PathBuf {
let mut path = std::env::temp_dir();
path.push(format!("{prefix}_{}.db", rand::random::<u64>()));
path
}
fn sample_profile(name: &str) -> NewLibraryProfile {
NewLibraryProfile {
name: name.to_string(),
preset: "balanced".to_string(),
codec: "av1".to_string(),
quality_profile: "balanced".to_string(),
hdr_mode: "preserve".to_string(),
audio_mode: "copy".to_string(),
crf_override: None,
notes: None,
}
}
#[tokio::test]
async fn profile_lookup_treats_percent_and_underscore_as_literals() -> anyhow::Result<()> {
let db_path = temp_db_path("alchemist_profile_lookup_literals");
let db = Db::new(db_path.to_string_lossy().as_ref()).await?;
let underscore_profile = db.create_profile(sample_profile("underscore")).await?;
let percent_profile = db.create_profile(sample_profile("percent")).await?;
let underscore_watch = db.add_watch_dir("/media/TV_4K", true).await?;
db.assign_profile_to_watch_dir(underscore_watch.id, Some(underscore_profile))
.await?;
let percent_watch = db.add_watch_dir("/media/Movies%20", true).await?;
db.assign_profile_to_watch_dir(percent_watch.id, Some(percent_profile))
.await?;
assert_eq!(
db.get_profile_for_path("/media/TV_4K/show/file.mkv")
.await?
.map(|profile| profile.name),
Some("underscore".to_string())
);
assert_eq!(
db.get_profile_for_path("/media/TVA4K/show/file.mkv")
.await?
.map(|profile| profile.name),
None
);
assert_eq!(
db.get_profile_for_path("/media/Movies%20/title/file.mkv")
.await?
.map(|profile| profile.name),
Some("percent".to_string())
);
assert_eq!(
db.get_profile_for_path("/media/MoviesABCD/title/file.mkv")
.await?
.map(|profile| profile.name),
None
);
db.pool.close().await;
let _ = std::fs::remove_file(db_path);
Ok(())
}
#[tokio::test]
async fn profile_lookup_prefers_longest_literal_matching_watch_dir() -> anyhow::Result<()> {
let db_path = temp_db_path("alchemist_profile_lookup_longest");
let db = Db::new(db_path.to_string_lossy().as_ref()).await?;
let base_profile = db.create_profile(sample_profile("base")).await?;
let nested_profile = db.create_profile(sample_profile("nested")).await?;
let base_watch = db.add_watch_dir("/media", true).await?;
db.assign_profile_to_watch_dir(base_watch.id, Some(base_profile))
.await?;
let nested_watch = db.add_watch_dir("/media/TV_4K", true).await?;
db.assign_profile_to_watch_dir(nested_watch.id, Some(nested_profile))
.await?;
assert_eq!(
db.get_profile_for_path("/media/TV_4K/show/file.mkv")
.await?
.map(|profile| profile.name),
Some("nested".to_string())
);
db.pool.close().await;
let _ = std::fs::remove_file(db_path);
Ok(())
}
}

src/db/conversion.rs — 152 lines (new file)

@@ -0,0 +1,152 @@
use crate::error::Result;
use super::Db;
use super::types::*;
impl Db {
pub async fn create_conversion_job(
&self,
upload_path: &str,
mode: &str,
settings_json: &str,
probe_json: Option<&str>,
expires_at: &str,
) -> Result<ConversionJob> {
let row = sqlx::query_as::<_, ConversionJob>(
"INSERT INTO conversion_jobs (upload_path, mode, settings_json, probe_json, expires_at)
VALUES (?, ?, ?, ?, ?)
RETURNING *",
)
.bind(upload_path)
.bind(mode)
.bind(settings_json)
.bind(probe_json)
.bind(expires_at)
.fetch_one(&self.pool)
.await?;
Ok(row)
}
pub async fn get_conversion_job(&self, id: i64) -> Result<Option<ConversionJob>> {
let row = sqlx::query_as::<_, ConversionJob>(
"SELECT id, upload_path, output_path, mode, settings_json, probe_json, linked_job_id, status, expires_at, downloaded_at, created_at, updated_at
FROM conversion_jobs
WHERE id = ?",
)
.bind(id)
.fetch_optional(&self.pool)
.await?;
Ok(row)
}
pub async fn get_conversion_job_by_linked_job_id(
&self,
linked_job_id: i64,
) -> Result<Option<ConversionJob>> {
let row = sqlx::query_as::<_, ConversionJob>(
"SELECT id, upload_path, output_path, mode, settings_json, probe_json, linked_job_id, status, expires_at, downloaded_at, created_at, updated_at
FROM conversion_jobs
WHERE linked_job_id = ?",
)
.bind(linked_job_id)
.fetch_optional(&self.pool)
.await?;
Ok(row)
}
pub async fn update_conversion_job_probe(&self, id: i64, probe_json: &str) -> Result<()> {
sqlx::query(
"UPDATE conversion_jobs
SET probe_json = ?, updated_at = datetime('now')
WHERE id = ?",
)
.bind(probe_json)
.bind(id)
.execute(&self.pool)
.await?;
Ok(())
}
pub async fn update_conversion_job_settings(
&self,
id: i64,
settings_json: &str,
mode: &str,
) -> Result<()> {
sqlx::query(
"UPDATE conversion_jobs
SET settings_json = ?, mode = ?, updated_at = datetime('now')
WHERE id = ?",
)
.bind(settings_json)
.bind(mode)
.bind(id)
.execute(&self.pool)
.await?;
Ok(())
}
pub async fn update_conversion_job_start(
&self,
id: i64,
output_path: &str,
linked_job_id: i64,
) -> Result<()> {
sqlx::query(
"UPDATE conversion_jobs
SET output_path = ?, linked_job_id = ?, status = 'queued', updated_at = datetime('now')
WHERE id = ?",
)
.bind(output_path)
.bind(linked_job_id)
.bind(id)
.execute(&self.pool)
.await?;
Ok(())
}
pub async fn update_conversion_job_status(&self, id: i64, status: &str) -> Result<()> {
sqlx::query(
"UPDATE conversion_jobs
SET status = ?, updated_at = datetime('now')
WHERE id = ?",
)
.bind(status)
.bind(id)
.execute(&self.pool)
.await?;
Ok(())
}
pub async fn mark_conversion_job_downloaded(&self, id: i64) -> Result<()> {
sqlx::query(
"UPDATE conversion_jobs
SET downloaded_at = datetime('now'), status = 'downloaded', updated_at = datetime('now')
WHERE id = ?",
)
.bind(id)
.execute(&self.pool)
.await?;
Ok(())
}
pub async fn delete_conversion_job(&self, id: i64) -> Result<()> {
sqlx::query("DELETE FROM conversion_jobs WHERE id = ?")
.bind(id)
.execute(&self.pool)
.await?;
Ok(())
}
pub async fn get_expired_conversion_jobs(&self, now: &str) -> Result<Vec<ConversionJob>> {
let rows = sqlx::query_as::<_, ConversionJob>(
"SELECT id, upload_path, output_path, mode, settings_json, probe_json, linked_job_id, status, expires_at, downloaded_at, created_at, updated_at
FROM conversion_jobs
WHERE expires_at <= ?",
)
.bind(now)
.fetch_all(&self.pool)
.await?;
Ok(rows)
}
}

src/db/events.rs — 54 lines (new file)

@@ -0,0 +1,54 @@
use crate::explanations::Explanation;
use serde::{Deserialize, Serialize};
use super::types::JobState;
// Typed event channels for separating high-volume vs low-volume events
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type", content = "data")]
pub enum JobEvent {
StateChanged {
job_id: i64,
status: JobState,
},
Progress {
job_id: i64,
percentage: f64,
time: String,
},
Decision {
job_id: i64,
action: String,
reason: String,
explanation: Option<Explanation>,
},
Log {
level: String,
job_id: Option<i64>,
message: String,
},
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type", content = "data")]
pub enum ConfigEvent {
Updated(Box<crate::config::Config>),
WatchFolderAdded(String),
WatchFolderRemoved(String),
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type", content = "data")]
pub enum SystemEvent {
ScanStarted,
ScanCompleted,
EngineIdle,
EngineStatusChanged,
HardwareStateChanged,
}
pub struct EventChannels {
pub jobs: tokio::sync::broadcast::Sender<JobEvent>, // 1000 capacity - high volume
pub config: tokio::sync::broadcast::Sender<ConfigEvent>, // 50 capacity - rare
pub system: tokio::sync::broadcast::Sender<SystemEvent>, // 100 capacity - medium
}

src/db/jobs.rs — 1236 lines (new file)

File diff suppressed because it is too large

src/db/mod.rs — 173 lines (new file)

@@ -0,0 +1,173 @@
mod config;
mod conversion;
mod events;
mod jobs;
mod stats;
mod system;
mod types;
pub use events::*;
pub use types::*;
use crate::error::{AlchemistError, Result};
use sha2::{Digest, Sha256};
use sqlx::SqlitePool;
use sqlx::sqlite::{SqliteConnectOptions, SqliteJournalMode};
use std::time::Duration;
use tokio::time::timeout;
use tracing::info;
/// Default timeout for potentially slow database queries
pub(crate) const QUERY_TIMEOUT: Duration = Duration::from_secs(5);
/// Execute a query with a timeout to prevent blocking the job loop
pub(crate) async fn timed_query<T, F, Fut>(operation: &str, f: F) -> Result<T>
where
F: FnOnce() -> Fut,
Fut: std::future::Future<Output = Result<T>>,
{
match timeout(QUERY_TIMEOUT, f()).await {
Ok(result) => result,
Err(_) => Err(AlchemistError::QueryTimeout(
QUERY_TIMEOUT.as_secs(),
operation.to_string(),
)),
}
}
#[derive(Clone, Debug)]
pub(crate) struct WatchDirSchemaFlags {
has_is_recursive: bool,
has_recursive: bool,
has_enabled: bool,
has_profile_id: bool,
}
#[derive(Clone, Debug)]
pub(crate) struct NotificationTargetSchemaFlags {
has_target_type_v2: bool,
}
#[derive(Clone, Debug)]
pub struct Db {
pub(crate) pool: SqlitePool,
pub(crate) watch_dir_flags: std::sync::Arc<WatchDirSchemaFlags>,
pub(crate) notification_target_flags: std::sync::Arc<NotificationTargetSchemaFlags>,
}
impl Db {
pub async fn new(db_path: &str) -> Result<Self> {
let start = std::time::Instant::now();
let options = SqliteConnectOptions::new()
.filename(db_path)
.create_if_missing(true)
.foreign_keys(true)
.journal_mode(SqliteJournalMode::Wal)
.busy_timeout(Duration::from_secs(5));
let pool = sqlx::sqlite::SqlitePoolOptions::new()
.max_connections(1)
.connect_with(options)
.await?;
info!(
target: "startup",
"Database connection opened in {} ms",
start.elapsed().as_millis()
);
// Run migrations
let migrate_start = std::time::Instant::now();
sqlx::migrate!("./migrations")
.run(&pool)
.await
.map_err(|e| crate::error::AlchemistError::Database(e.into()))?;
info!(
target: "startup",
"Database migrations completed in {} ms",
migrate_start.elapsed().as_millis()
);
// Cache watch_dirs schema flags once at startup to avoid repeated PRAGMA queries.
let check = |column: &str| {
let pool = pool.clone();
let column = column.to_string();
async move {
let row =
sqlx::query("SELECT name FROM pragma_table_info('watch_dirs') WHERE name = ?")
.bind(&column)
.fetch_optional(&pool)
.await
.unwrap_or(None);
row.is_some()
}
};
let watch_dir_flags = WatchDirSchemaFlags {
has_is_recursive: check("is_recursive").await,
has_recursive: check("recursive").await,
has_enabled: check("enabled").await,
has_profile_id: check("profile_id").await,
};
let notification_check = |column: &str| {
let pool = pool.clone();
let column = column.to_string();
async move {
let row = sqlx::query(
"SELECT name FROM pragma_table_info('notification_targets') WHERE name = ?",
)
.bind(&column)
.fetch_optional(&pool)
.await
.unwrap_or(None);
row.is_some()
}
};
let notification_target_flags = NotificationTargetSchemaFlags {
has_target_type_v2: notification_check("target_type_v2").await,
};
Ok(Self {
pool,
watch_dir_flags: std::sync::Arc::new(watch_dir_flags),
notification_target_flags: std::sync::Arc::new(notification_target_flags),
})
}
}
/// Hash a session token using SHA256 for secure storage.
///
/// # Security: Timing Attack Resistance
///
/// Session tokens are hashed before storage and lookup. Token validation uses
/// SQL `WHERE token = ?` with the hashed value, so the comparison occurs in
/// SQLite rather than in Rust code. This is inherently constant-time from the
/// application's perspective because:
/// 1. The database performs the comparison, not our code
/// 2. Database query time doesn't leak information about partial matches
/// 3. No early-exit comparison in application code
///
/// This design makes timing attacks infeasible without requiring the `subtle`
/// crate for constant-time comparison.
pub(crate) fn hash_session_token(token: &str) -> String {
let mut hasher = Sha256::new();
hasher.update(token.as_bytes());
let digest = hasher.finalize();
let mut out = String::with_capacity(64);
for byte in digest {
use std::fmt::Write;
let _ = write!(&mut out, "{:02x}", byte);
}
out
}
pub fn hash_api_token(token: &str) -> String {
    // Same SHA-256 lowercase-hex digest as session tokens; kept as a
    // separate public entry point for API-token call sites.
    let mut hasher = Sha256::new();
    hasher.update(token.as_bytes());
    let digest = hasher.finalize();
    let mut out = String::with_capacity(64);
    for byte in digest {
        use std::fmt::Write;
        let _ = write!(&mut out, "{:02x}", byte);
    }
    out
}
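Both token-hashing helpers render the digest as lowercase hex via `{:02x}`, which zero-pads each byte to two characters. As a minimal standalone sketch (the `to_hex` function is illustrative, not part of this module):

```rust
use std::fmt::Write;

/// Lowercase hex-encode a byte slice, two characters per byte,
/// matching the loop used by the token-hashing helpers.
fn to_hex(bytes: &[u8]) -> String {
    let mut out = String::with_capacity(bytes.len() * 2);
    for byte in bytes {
        // `{:02x}` zero-pads, so 0x0a becomes "0a", not "a".
        let _ = write!(&mut out, "{:02x}", byte);
    }
    out
}

fn main() {
    assert_eq!(to_hex(&[0x00, 0x0a, 0xff]), "000aff");
    // A 32-byte SHA-256 digest yields a 64-character string,
    // which is why the callers reserve capacity 64.
    assert_eq!(to_hex(&[0u8; 32]).len(), 64);
}
```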

422
src/db/stats.rs Normal file

@@ -0,0 +1,422 @@
use crate::error::Result;
use sqlx::Row;
use super::Db;
use super::timed_query;
use super::types::*;
impl Db {
pub async fn get_stats(&self) -> Result<serde_json::Value> {
let pool = &self.pool;
timed_query("get_stats", || async {
let stats = sqlx::query("SELECT status, count(*) as count FROM jobs GROUP BY status")
.fetch_all(pool)
.await?;
let mut map = serde_json::Map::new();
for row in stats {
let status: String = row.get("status");
let count: i64 = row.get("count");
map.insert(status, serde_json::Value::Number(count.into()));
}
Ok(serde_json::Value::Object(map))
})
.await
}
    /// Save encode statistics, inserting a new row or updating the existing one for the job
pub async fn save_encode_stats(&self, stats: EncodeStatsInput) -> Result<()> {
let result = sqlx::query(
"INSERT INTO encode_stats
(job_id, input_size_bytes, output_size_bytes, compression_ratio,
encode_time_seconds, encode_speed, avg_bitrate_kbps, vmaf_score, output_codec)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(job_id) DO UPDATE SET
input_size_bytes = excluded.input_size_bytes,
output_size_bytes = excluded.output_size_bytes,
compression_ratio = excluded.compression_ratio,
encode_time_seconds = excluded.encode_time_seconds,
encode_speed = excluded.encode_speed,
avg_bitrate_kbps = excluded.avg_bitrate_kbps,
vmaf_score = excluded.vmaf_score,
output_codec = excluded.output_codec",
)
.bind(stats.job_id)
.bind(stats.input_size as i64)
.bind(stats.output_size as i64)
.bind(stats.compression_ratio)
.bind(stats.encode_time)
.bind(stats.encode_speed)
.bind(stats.avg_bitrate)
.bind(stats.vmaf_score)
.bind(stats.output_codec)
.execute(&self.pool)
.await?;
if result.rows_affected() == 0 {
return Err(crate::error::AlchemistError::Database(
sqlx::Error::RowNotFound,
));
}
Ok(())
}
/// Record a single encode attempt outcome
pub async fn insert_encode_attempt(&self, input: EncodeAttemptInput) -> Result<()> {
sqlx::query(
"INSERT INTO encode_attempts
(job_id, attempt_number, started_at, finished_at, outcome,
failure_code, failure_summary, input_size_bytes, output_size_bytes,
encode_time_seconds)
VALUES (?, ?, ?, datetime('now'), ?, ?, ?, ?, ?, ?)",
)
.bind(input.job_id)
.bind(input.attempt_number)
.bind(input.started_at)
.bind(input.outcome)
.bind(input.failure_code)
.bind(input.failure_summary)
.bind(input.input_size_bytes)
.bind(input.output_size_bytes)
.bind(input.encode_time_seconds)
.execute(&self.pool)
.await?;
Ok(())
}
/// Get all encode attempts for a job, ordered by attempt_number
pub async fn get_encode_attempts_by_job(&self, job_id: i64) -> Result<Vec<EncodeAttempt>> {
let attempts = sqlx::query_as::<_, EncodeAttempt>(
"SELECT id, job_id, attempt_number, started_at, finished_at, outcome,
failure_code, failure_summary, input_size_bytes, output_size_bytes,
encode_time_seconds, created_at
FROM encode_attempts
WHERE job_id = ?
ORDER BY attempt_number ASC",
)
.bind(job_id)
.fetch_all(&self.pool)
.await?;
Ok(attempts)
}
pub async fn get_encode_stats_by_job_id(&self, job_id: i64) -> Result<DetailedEncodeStats> {
let stats = sqlx::query_as::<_, DetailedEncodeStats>(
"SELECT
e.job_id,
j.input_path,
e.input_size_bytes,
e.output_size_bytes,
e.compression_ratio,
e.encode_time_seconds,
e.encode_speed,
e.avg_bitrate_kbps,
e.vmaf_score,
e.created_at
FROM encode_stats e
JOIN jobs j ON e.job_id = j.id
WHERE e.job_id = ?",
)
.bind(job_id)
.fetch_one(&self.pool)
.await?;
Ok(stats)
}
pub async fn get_aggregated_stats(&self) -> Result<AggregatedStats> {
let pool = &self.pool;
timed_query("get_aggregated_stats", || async {
let row = sqlx::query(
"SELECT
(SELECT COUNT(*) FROM jobs WHERE archived = 0) as total_jobs,
(SELECT COUNT(*) FROM jobs WHERE status = 'completed' AND archived = 0) as completed_jobs,
COALESCE(SUM(input_size_bytes), 0) as total_input_size,
COALESCE(SUM(output_size_bytes), 0) as total_output_size,
AVG(vmaf_score) as avg_vmaf,
COALESCE(SUM(encode_time_seconds), 0.0) as total_encode_time
FROM encode_stats",
)
.fetch_one(pool)
.await?;
Ok(AggregatedStats {
total_jobs: row.get("total_jobs"),
completed_jobs: row.get("completed_jobs"),
total_input_size: row.get("total_input_size"),
total_output_size: row.get("total_output_size"),
avg_vmaf: row.get("avg_vmaf"),
total_encode_time_seconds: row.get("total_encode_time"),
})
})
.await
}
/// Get daily statistics for the last N days (for time-series charts)
pub async fn get_daily_stats(&self, days: i32) -> Result<Vec<DailyStats>> {
let pool = &self.pool;
let days_str = format!("-{}", days);
timed_query("get_daily_stats", || async {
let rows = sqlx::query(
"SELECT
DATE(e.created_at) as date,
COUNT(*) as jobs_completed,
COALESCE(SUM(e.input_size_bytes - e.output_size_bytes), 0) as bytes_saved,
COALESCE(SUM(e.input_size_bytes), 0) as total_input_bytes,
COALESCE(SUM(e.output_size_bytes), 0) as total_output_bytes
FROM encode_stats e
WHERE e.created_at >= DATE('now', ? || ' days')
GROUP BY DATE(e.created_at)
ORDER BY date ASC",
)
.bind(&days_str)
.fetch_all(pool)
.await?;
let stats = rows
.iter()
.map(|row| DailyStats {
date: row.get("date"),
jobs_completed: row.get("jobs_completed"),
bytes_saved: row.get("bytes_saved"),
total_input_bytes: row.get("total_input_bytes"),
total_output_bytes: row.get("total_output_bytes"),
})
.collect();
Ok(stats)
})
.await
}
/// Get detailed per-job encoding statistics (most recent first)
pub async fn get_detailed_encode_stats(&self, limit: i32) -> Result<Vec<DetailedEncodeStats>> {
let pool = &self.pool;
timed_query("get_detailed_encode_stats", || async {
let stats = sqlx::query_as::<_, DetailedEncodeStats>(
"SELECT
e.job_id,
j.input_path,
e.input_size_bytes,
e.output_size_bytes,
e.compression_ratio,
e.encode_time_seconds,
e.encode_speed,
e.avg_bitrate_kbps,
e.vmaf_score,
e.created_at
FROM encode_stats e
JOIN jobs j ON e.job_id = j.id
ORDER BY e.created_at DESC
LIMIT ?",
)
.bind(limit)
.fetch_all(pool)
.await?;
Ok(stats)
})
.await
}
pub async fn get_savings_summary(&self) -> Result<SavingsSummary> {
let pool = &self.pool;
timed_query("get_savings_summary", || async {
let totals = sqlx::query(
"SELECT
COALESCE(SUM(input_size_bytes), 0) as total_input_bytes,
COALESCE(SUM(output_size_bytes), 0) as total_output_bytes,
COUNT(*) as job_count
FROM encode_stats
WHERE output_size_bytes IS NOT NULL",
)
.fetch_one(pool)
.await?;
let total_input_bytes: i64 = totals.get("total_input_bytes");
let total_output_bytes: i64 = totals.get("total_output_bytes");
let job_count: i64 = totals.get("job_count");
let total_bytes_saved = (total_input_bytes - total_output_bytes).max(0);
let savings_percent = if total_input_bytes > 0 {
(total_bytes_saved as f64 / total_input_bytes as f64) * 100.0
} else {
0.0
};
let savings_by_codec = sqlx::query(
"SELECT
COALESCE(NULLIF(TRIM(e.output_codec), ''), 'unknown') as codec,
COALESCE(SUM(e.input_size_bytes - e.output_size_bytes), 0) as bytes_saved,
COUNT(*) as job_count
FROM encode_stats e
JOIN jobs j ON j.id = e.job_id
WHERE e.output_size_bytes IS NOT NULL
GROUP BY codec
ORDER BY bytes_saved DESC, codec ASC",
)
.fetch_all(pool)
.await?
.into_iter()
.map(|row| CodecSavings {
codec: row.get("codec"),
bytes_saved: row.get("bytes_saved"),
job_count: row.get("job_count"),
})
.collect::<Vec<_>>();
let savings_over_time = sqlx::query(
"SELECT
DATE(e.created_at) as date,
COALESCE(SUM(e.input_size_bytes - e.output_size_bytes), 0) as bytes_saved
FROM encode_stats e
WHERE e.output_size_bytes IS NOT NULL
AND e.created_at >= datetime('now', '-30 days')
GROUP BY DATE(e.created_at)
ORDER BY date ASC",
)
.fetch_all(pool)
.await?
.into_iter()
.map(|row| DailySavings {
date: row.get("date"),
bytes_saved: row.get("bytes_saved"),
})
.collect::<Vec<_>>();
Ok(SavingsSummary {
total_input_bytes,
total_output_bytes,
total_bytes_saved,
savings_percent,
job_count,
savings_by_codec,
savings_over_time,
})
})
.await
}
pub async fn get_job_stats(&self) -> Result<JobStats> {
let pool = &self.pool;
timed_query("get_job_stats", || async {
let rows = sqlx::query(
"SELECT status, COUNT(*) as count FROM jobs WHERE archived = 0 GROUP BY status",
)
.fetch_all(pool)
.await?;
let mut stats = JobStats::default();
for row in rows {
let status_str: String = row.get("status");
let count: i64 = row.get("count");
match status_str.as_str() {
"queued" => stats.queued += count,
"encoding" | "analyzing" | "remuxing" | "resuming" => stats.active += count,
"completed" => stats.completed += count,
"failed" | "cancelled" => stats.failed += count,
_ => {}
}
}
Ok(stats)
})
.await
}
pub async fn get_daily_summary_stats(&self) -> Result<DailySummaryStats> {
let pool = &self.pool;
timed_query("get_daily_summary_stats", || async {
let row = sqlx::query(
"SELECT
COALESCE(SUM(CASE WHEN status = 'completed' AND DATE(updated_at, 'localtime') = DATE('now', 'localtime') THEN 1 ELSE 0 END), 0) AS completed,
COALESCE(SUM(CASE WHEN status = 'failed' AND DATE(updated_at, 'localtime') = DATE('now', 'localtime') THEN 1 ELSE 0 END), 0) AS failed,
COALESCE(SUM(CASE WHEN status = 'skipped' AND DATE(updated_at, 'localtime') = DATE('now', 'localtime') THEN 1 ELSE 0 END), 0) AS skipped
FROM jobs",
)
.fetch_one(pool)
.await?;
let completed: i64 = row.get("completed");
let failed: i64 = row.get("failed");
let skipped: i64 = row.get("skipped");
let bytes_row = sqlx::query(
"SELECT COALESCE(SUM(input_size_bytes - output_size_bytes), 0) AS bytes_saved
FROM encode_stats
WHERE DATE(created_at, 'localtime') = DATE('now', 'localtime')",
)
.fetch_one(pool)
.await?;
let bytes_saved: i64 = bytes_row.get("bytes_saved");
let failure_rows = sqlx::query(
"SELECT code, COUNT(*) AS count
FROM job_failure_explanations
WHERE DATE(updated_at, 'localtime') = DATE('now', 'localtime')
GROUP BY code
ORDER BY count DESC, code ASC
LIMIT 3",
)
.fetch_all(pool)
.await?;
let top_failure_reasons = failure_rows
.into_iter()
.map(|row| row.get::<String, _>("code"))
.collect::<Vec<_>>();
let skip_rows = sqlx::query(
"SELECT COALESCE(reason_code, action) AS code, COUNT(*) AS count
FROM decisions
WHERE action = 'skip'
AND DATE(created_at, 'localtime') = DATE('now', 'localtime')
GROUP BY COALESCE(reason_code, action)
ORDER BY count DESC, code ASC
LIMIT 3",
)
.fetch_all(pool)
.await?;
let top_skip_reasons = skip_rows
.into_iter()
.map(|row| row.get::<String, _>("code"))
.collect::<Vec<_>>();
Ok(DailySummaryStats {
completed,
failed,
skipped,
bytes_saved,
top_failure_reasons,
top_skip_reasons,
})
})
.await
}
pub async fn get_skip_reason_counts(&self) -> Result<Vec<(String, i64)>> {
let pool = &self.pool;
timed_query("get_skip_reason_counts", || async {
let rows = sqlx::query(
"SELECT COALESCE(reason_code, action) AS code, COUNT(*) AS count
FROM decisions
WHERE action = 'skip'
AND DATE(created_at, 'localtime') = DATE('now', 'localtime')
GROUP BY COALESCE(reason_code, action)
ORDER BY count DESC, code ASC
LIMIT 20",
)
.fetch_all(pool)
.await?;
Ok(rows
.into_iter()
.map(|row| {
let code: String = row.get("code");
let count: i64 = row.get("count");
(code, count)
})
.collect())
})
.await
}
}

389
src/db/system.rs Normal file

@@ -0,0 +1,389 @@
use crate::error::Result;
use chrono::{DateTime, Utc};
use sqlx::Row;
use super::timed_query;
use super::types::*;
use super::{Db, hash_api_token, hash_session_token};
impl Db {
    /// Archive all completed jobs (sets `archived = 1`). Rows are kept, so
    /// their `encode_stats` history survives.
    pub async fn clear_completed_jobs(&self) -> Result<u64> {
let result = sqlx::query(
"UPDATE jobs
SET archived = 1, updated_at = CURRENT_TIMESTAMP
WHERE status = 'completed' AND archived = 0",
)
.execute(&self.pool)
.await?;
Ok(result.rows_affected())
}
pub async fn cleanup_sessions(&self) -> Result<()> {
sqlx::query("DELETE FROM sessions WHERE expires_at <= CURRENT_TIMESTAMP")
.execute(&self.pool)
.await?;
Ok(())
}
pub async fn cleanup_expired_sessions(&self) -> Result<u64> {
let result = sqlx::query("DELETE FROM sessions WHERE expires_at <= CURRENT_TIMESTAMP")
.execute(&self.pool)
.await?;
Ok(result.rows_affected())
}
pub async fn add_log(&self, level: &str, job_id: Option<i64>, message: &str) -> Result<()> {
sqlx::query("INSERT INTO logs (level, job_id, message) VALUES (?, ?, ?)")
.bind(level)
.bind(job_id)
.bind(message)
.execute(&self.pool)
.await?;
Ok(())
}
pub async fn get_logs(&self, limit: i64, offset: i64) -> Result<Vec<LogEntry>> {
let logs = sqlx::query_as::<_, LogEntry>(
"SELECT id, level, job_id, message, created_at FROM logs ORDER BY created_at DESC LIMIT ? OFFSET ?"
)
.bind(limit)
.bind(offset)
.fetch_all(&self.pool)
.await?;
Ok(logs)
}
pub async fn get_logs_for_job(&self, job_id: i64, limit: i64) -> Result<Vec<LogEntry>> {
sqlx::query_as::<_, LogEntry>(
"SELECT id, level, job_id, message, created_at
FROM logs
WHERE job_id = ?
ORDER BY created_at ASC
LIMIT ?",
)
.bind(job_id)
.bind(limit)
.fetch_all(&self.pool)
.await
.map_err(Into::into)
}
pub async fn clear_logs(&self) -> Result<()> {
sqlx::query("DELETE FROM logs").execute(&self.pool).await?;
Ok(())
}
pub async fn prune_old_logs(&self, max_age_days: u32) -> Result<u64> {
let result = sqlx::query(
"DELETE FROM logs
WHERE created_at < datetime('now', '-' || ? || ' days')",
)
.bind(max_age_days as i64)
.execute(&self.pool)
.await?;
Ok(result.rows_affected())
}
pub async fn create_user(&self, username: &str, password_hash: &str) -> Result<i64> {
let id = sqlx::query("INSERT INTO users (username, password_hash) VALUES (?, ?)")
.bind(username)
.bind(password_hash)
.execute(&self.pool)
.await?
.last_insert_rowid();
Ok(id)
}
pub async fn get_user_by_username(&self, username: &str) -> Result<Option<User>> {
let user = sqlx::query_as::<_, User>("SELECT * FROM users WHERE username = ?")
.bind(username)
.fetch_optional(&self.pool)
.await?;
Ok(user)
}
pub async fn has_users(&self) -> Result<bool> {
let count: (i64,) = sqlx::query_as("SELECT COUNT(*) FROM users")
.fetch_one(&self.pool)
.await?;
Ok(count.0 > 0)
}
pub async fn create_session(
&self,
user_id: i64,
token: &str,
expires_at: DateTime<Utc>,
) -> Result<()> {
let token_hash = hash_session_token(token);
sqlx::query("INSERT INTO sessions (token, user_id, expires_at) VALUES (?, ?, ?)")
.bind(token_hash)
.bind(user_id)
.bind(expires_at)
.execute(&self.pool)
.await?;
Ok(())
}
pub async fn get_session(&self, token: &str) -> Result<Option<Session>> {
let token_hash = hash_session_token(token);
let session = sqlx::query_as::<_, Session>(
"SELECT * FROM sessions WHERE token = ? AND expires_at > CURRENT_TIMESTAMP",
)
.bind(&token_hash)
.fetch_optional(&self.pool)
.await?;
Ok(session)
}
pub async fn delete_session(&self, token: &str) -> Result<()> {
let token_hash = hash_session_token(token);
sqlx::query("DELETE FROM sessions WHERE token = ?")
.bind(&token_hash)
.execute(&self.pool)
.await?;
Ok(())
}
pub async fn list_api_tokens(&self) -> Result<Vec<ApiToken>> {
let tokens = sqlx::query_as::<_, ApiToken>(
"SELECT id, name, access_level, created_at, last_used_at, revoked_at
FROM api_tokens
ORDER BY created_at DESC",
)
.fetch_all(&self.pool)
.await?;
Ok(tokens)
}
pub async fn create_api_token(
&self,
name: &str,
token: &str,
access_level: ApiTokenAccessLevel,
) -> Result<ApiToken> {
let token_hash = hash_api_token(token);
let row = sqlx::query_as::<_, ApiToken>(
"INSERT INTO api_tokens (name, token_hash, access_level)
VALUES (?, ?, ?)
RETURNING id, name, access_level, created_at, last_used_at, revoked_at",
)
.bind(name)
.bind(token_hash)
.bind(access_level)
.fetch_one(&self.pool)
.await?;
Ok(row)
}
pub async fn get_active_api_token(&self, token: &str) -> Result<Option<ApiTokenRecord>> {
let token_hash = hash_api_token(token);
let row = sqlx::query_as::<_, ApiTokenRecord>(
"SELECT id, name, token_hash, access_level, created_at, last_used_at, revoked_at
FROM api_tokens
WHERE token_hash = ? AND revoked_at IS NULL",
)
.bind(token_hash)
.fetch_optional(&self.pool)
.await?;
Ok(row)
}
pub async fn update_api_token_last_used(&self, id: i64) -> Result<()> {
sqlx::query("UPDATE api_tokens SET last_used_at = CURRENT_TIMESTAMP WHERE id = ?")
.bind(id)
.execute(&self.pool)
.await?;
Ok(())
}
pub async fn revoke_api_token(&self, id: i64) -> Result<()> {
let result = sqlx::query(
"UPDATE api_tokens
SET revoked_at = COALESCE(revoked_at, CURRENT_TIMESTAMP)
WHERE id = ?",
)
.bind(id)
.execute(&self.pool)
.await?;
if result.rows_affected() == 0 {
return Err(crate::error::AlchemistError::Database(
sqlx::Error::RowNotFound,
));
}
Ok(())
}
pub async fn record_health_check(
&self,
job_id: i64,
issues: Option<&crate::media::health::HealthIssueReport>,
) -> Result<()> {
let serialized_issues = issues
.map(serde_json::to_string)
.transpose()
.map_err(|err| {
crate::error::AlchemistError::Unknown(format!(
"Failed to serialize health issue report: {}",
err
))
})?;
sqlx::query(
"UPDATE jobs
SET health_issues = ?,
last_health_check = datetime('now')
WHERE id = ?",
)
.bind(serialized_issues)
.bind(job_id)
.execute(&self.pool)
.await?;
Ok(())
}
pub async fn get_health_summary(&self) -> Result<HealthSummary> {
let pool = &self.pool;
timed_query("get_health_summary", || async {
let row = sqlx::query(
"SELECT
(SELECT COUNT(*) FROM jobs WHERE last_health_check IS NOT NULL AND archived = 0) as total_checked,
(SELECT COUNT(*)
FROM jobs
WHERE health_issues IS NOT NULL AND TRIM(health_issues) != '' AND archived = 0) as issues_found,
(SELECT MAX(started_at) FROM health_scan_runs) as last_run",
)
.fetch_one(pool)
.await?;
Ok(HealthSummary {
total_checked: row.get("total_checked"),
issues_found: row.get("issues_found"),
last_run: row.get("last_run"),
})
})
.await
}
pub async fn create_health_scan_run(&self) -> Result<i64> {
let id = sqlx::query("INSERT INTO health_scan_runs DEFAULT VALUES")
.execute(&self.pool)
.await?
.last_insert_rowid();
Ok(id)
}
pub async fn complete_health_scan_run(
&self,
id: i64,
files_checked: i64,
issues_found: i64,
) -> Result<()> {
sqlx::query(
"UPDATE health_scan_runs
SET completed_at = datetime('now'),
files_checked = ?,
issues_found = ?
WHERE id = ?",
)
.bind(files_checked)
.bind(issues_found)
.bind(id)
.execute(&self.pool)
.await?;
Ok(())
}
pub async fn get_jobs_with_health_issues(&self) -> Result<Vec<JobWithHealthIssueRow>> {
let pool = &self.pool;
timed_query("get_jobs_with_health_issues", || async {
let jobs = sqlx::query_as::<_, JobWithHealthIssueRow>(
"SELECT j.id, j.input_path, j.output_path, j.status,
(SELECT reason FROM decisions WHERE job_id = j.id ORDER BY created_at DESC LIMIT 1) as decision_reason,
COALESCE(j.priority, 0) as priority,
COALESCE(CAST(j.progress AS REAL), 0.0) as progress,
COALESCE(j.attempt_count, 0) as attempt_count,
(SELECT vmaf_score FROM encode_stats WHERE job_id = j.id) as vmaf_score,
j.created_at, j.updated_at,
j.input_metadata_json,
j.health_issues
FROM jobs j
WHERE j.archived = 0
AND j.health_issues IS NOT NULL
AND TRIM(j.health_issues) != ''
ORDER BY j.updated_at DESC",
)
.fetch_all(pool)
.await?;
Ok(jobs)
})
.await
}
pub async fn reset_auth(&self) -> Result<()> {
sqlx::query("DELETE FROM sessions")
.execute(&self.pool)
.await?;
sqlx::query("DELETE FROM users").execute(&self.pool).await?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::path::Path;
use std::time::SystemTime;
#[tokio::test]
async fn clear_completed_archives_jobs_but_preserves_encode_stats()
-> std::result::Result<(), Box<dyn std::error::Error>> {
let mut db_path = std::env::temp_dir();
let token: u64 = rand::random();
db_path.push(format!("alchemist_archive_completed_{}.db", token));
let db = Db::new(db_path.to_string_lossy().as_ref()).await?;
let input = Path::new("movie.mkv");
let output = Path::new("movie-alchemist.mkv");
let _ = db
.enqueue_job(input, output, SystemTime::UNIX_EPOCH)
.await?;
let job = db
.get_job_by_input_path("movie.mkv")
.await?
.ok_or_else(|| std::io::Error::other("missing job"))?;
db.update_job_status(job.id, JobState::Completed).await?;
db.save_encode_stats(EncodeStatsInput {
job_id: job.id,
input_size: 2_000,
output_size: 1_000,
compression_ratio: 0.5,
encode_time: 42.0,
encode_speed: 1.2,
avg_bitrate: 800.0,
vmaf_score: Some(96.5),
output_codec: Some("av1".to_string()),
})
.await?;
let cleared = db.clear_completed_jobs().await?;
assert_eq!(cleared, 1);
assert!(db.get_job_by_id(job.id).await?.is_none());
assert!(db.get_job_by_input_path("movie.mkv").await?.is_none());
let visible_completed = db.get_jobs_by_status(JobState::Completed).await?;
assert!(visible_completed.is_empty());
let aggregated = db.get_aggregated_stats().await?;
// Archived jobs are excluded from active stats.
assert_eq!(aggregated.completed_jobs, 0);
// encode_stats rows are preserved even after archiving.
assert_eq!(aggregated.total_input_size, 2_000);
assert_eq!(aggregated.total_output_size, 1_000);
drop(db);
let _ = std::fs::remove_file(db_path);
Ok(())
}
}

692
src/db/types.rs Normal file

@@ -0,0 +1,692 @@
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use std::path::{Path, PathBuf};
#[derive(Debug, Serialize, Deserialize, Clone, Copy, PartialEq, Eq, sqlx::Type)]
#[sqlx(rename_all = "lowercase")]
#[serde(rename_all = "lowercase")]
pub enum JobState {
Queued,
Analyzing,
Encoding,
Remuxing,
Completed,
Skipped,
Failed,
Cancelled,
Resuming,
}
impl std::fmt::Display for JobState {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let s = match self {
JobState::Queued => "queued",
JobState::Analyzing => "analyzing",
JobState::Encoding => "encoding",
JobState::Remuxing => "remuxing",
JobState::Completed => "completed",
JobState::Skipped => "skipped",
JobState::Failed => "failed",
JobState::Cancelled => "cancelled",
JobState::Resuming => "resuming",
};
write!(f, "{}", s)
}
}
#[derive(Debug, Serialize, Deserialize, Default, Clone)]
#[serde(default)]
pub struct JobStats {
pub active: i64,
pub queued: i64,
pub completed: i64,
pub failed: i64,
}
#[derive(Debug, Serialize, Deserialize, Default, Clone)]
#[serde(default)]
pub struct DailySummaryStats {
pub completed: i64,
pub failed: i64,
pub skipped: i64,
pub bytes_saved: i64,
pub top_failure_reasons: Vec<String>,
pub top_skip_reasons: Vec<String>,
}
#[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
pub struct LogEntry {
pub id: i64,
pub level: String,
pub job_id: Option<i64>,
pub message: String,
pub created_at: String, // SQLite datetime as string
}
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct Job {
pub id: i64,
pub input_path: String,
pub output_path: String,
pub status: JobState,
pub decision_reason: Option<String>,
pub priority: i32,
pub progress: f64,
pub attempt_count: i32,
pub vmaf_score: Option<f64>,
pub created_at: DateTime<Utc>,
pub updated_at: DateTime<Utc>,
pub input_metadata_json: Option<String>,
}
impl Job {
pub fn input_metadata(&self) -> Option<crate::media::pipeline::MediaMetadata> {
self.input_metadata_json
.as_ref()
.and_then(|json| serde_json::from_str(json).ok())
}
pub fn is_active(&self) -> bool {
matches!(
self.status,
JobState::Encoding | JobState::Analyzing | JobState::Remuxing | JobState::Resuming
)
}
pub fn can_retry(&self) -> bool {
matches!(self.status, JobState::Failed | JobState::Cancelled)
}
pub fn status_class(&self) -> &'static str {
match self.status {
JobState::Completed => "badge-green",
JobState::Encoding | JobState::Remuxing | JobState::Resuming => "badge-yellow",
JobState::Analyzing => "badge-blue",
JobState::Failed | JobState::Cancelled => "badge-red",
_ => "badge-gray",
}
}
pub fn progress_fixed(&self) -> String {
format!("{:.1}", self.progress)
}
pub fn vmaf_fixed(&self) -> String {
self.vmaf_score
.map(|s| format!("{:.1}", s))
.unwrap_or_else(|| "N/A".to_string())
}
}
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct JobWithHealthIssueRow {
pub id: i64,
pub input_path: String,
pub output_path: String,
pub status: JobState,
pub decision_reason: Option<String>,
pub priority: i32,
pub progress: f64,
pub attempt_count: i32,
pub vmaf_score: Option<f64>,
pub created_at: DateTime<Utc>,
pub updated_at: DateTime<Utc>,
pub input_metadata_json: Option<String>,
pub health_issues: String,
}
impl JobWithHealthIssueRow {
pub fn into_parts(self) -> (Job, String) {
(
Job {
id: self.id,
input_path: self.input_path,
output_path: self.output_path,
status: self.status,
decision_reason: self.decision_reason,
priority: self.priority,
progress: self.progress,
attempt_count: self.attempt_count,
vmaf_score: self.vmaf_score,
created_at: self.created_at,
updated_at: self.updated_at,
input_metadata_json: self.input_metadata_json,
},
self.health_issues,
)
}
}
#[derive(Debug, Clone, Serialize, sqlx::FromRow)]
pub struct DuplicateCandidate {
pub id: i64,
pub input_path: String,
pub status: String,
}
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct WatchDir {
pub id: i64,
pub path: String,
pub is_recursive: bool,
pub profile_id: Option<i64>,
pub created_at: DateTime<Utc>,
}
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct LibraryProfile {
pub id: i64,
pub name: String,
pub preset: String,
pub codec: String,
pub quality_profile: String,
pub hdr_mode: String,
pub audio_mode: String,
pub crf_override: Option<i32>,
pub notes: Option<String>,
pub created_at: DateTime<Utc>,
pub updated_at: DateTime<Utc>,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct NewLibraryProfile {
pub name: String,
pub preset: String,
pub codec: String,
pub quality_profile: String,
pub hdr_mode: String,
pub audio_mode: String,
pub crf_override: Option<i32>,
pub notes: Option<String>,
}
#[derive(Debug, Clone, Default)]
pub struct JobFilterQuery {
pub limit: i64,
pub offset: i64,
pub statuses: Option<Vec<JobState>>,
pub search: Option<String>,
pub sort_by: Option<String>,
pub sort_desc: bool,
pub archived: Option<bool>,
}
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct NotificationTarget {
pub id: i64,
pub name: String,
pub target_type: String,
pub config_json: String,
pub events: String,
pub enabled: bool,
pub created_at: DateTime<Utc>,
}
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct ConversionJob {
pub id: i64,
pub upload_path: String,
pub output_path: Option<String>,
pub mode: String,
pub settings_json: String,
pub probe_json: Option<String>,
pub linked_job_id: Option<i64>,
pub status: String,
pub expires_at: String,
pub downloaded_at: Option<String>,
pub created_at: String,
pub updated_at: String,
}
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct JobResumeSession {
pub id: i64,
pub job_id: i64,
pub strategy: String,
pub plan_hash: String,
pub mtime_hash: String,
pub temp_dir: String,
pub concat_manifest_path: String,
pub segment_length_secs: i64,
pub status: String,
pub created_at: String,
pub updated_at: String,
}
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct JobResumeSegment {
pub id: i64,
pub job_id: i64,
pub segment_index: i64,
pub start_secs: f64,
pub duration_secs: f64,
pub temp_path: String,
pub status: String,
pub attempt_count: i32,
pub created_at: String,
pub updated_at: String,
}
#[derive(Debug, Clone)]
pub struct UpsertJobResumeSessionInput {
pub job_id: i64,
pub strategy: String,
pub plan_hash: String,
pub mtime_hash: String,
pub temp_dir: String,
pub concat_manifest_path: String,
pub segment_length_secs: i64,
pub status: String,
}
#[derive(Debug, Clone)]
pub struct UpsertJobResumeSegmentInput {
pub job_id: i64,
pub segment_index: i64,
pub start_secs: f64,
pub duration_secs: f64,
pub temp_path: String,
pub status: String,
pub attempt_count: i32,
}
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct ScheduleWindow {
pub id: i64,
pub start_time: String,
pub end_time: String,
pub days_of_week: String, // as JSON string
pub enabled: bool,
}
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct FileSettings {
pub id: i64,
pub delete_source: bool,
pub output_extension: String,
pub output_suffix: String,
pub replace_strategy: String,
pub output_root: Option<String>,
}
impl FileSettings {
pub fn output_path_for(&self, input_path: &Path) -> PathBuf {
self.output_path_for_source(input_path, None)
}
pub fn output_path_for_source(&self, input_path: &Path, source_root: Option<&Path>) -> PathBuf {
let mut output_path = self.output_base_path(input_path, source_root);
let stem = input_path.file_stem().unwrap_or_default().to_string_lossy();
let extension = self.output_extension.trim_start_matches('.');
let suffix = self.output_suffix.as_str();
let mut filename = String::new();
filename.push_str(&stem);
filename.push_str(suffix);
if !extension.is_empty() {
filename.push('.');
filename.push_str(extension);
}
if filename.is_empty() {
filename.push_str("output");
}
output_path.set_file_name(filename);
if output_path == input_path {
let safe_suffix = if suffix.is_empty() {
"-alchemist".to_string()
} else {
format!("{}-alchemist", suffix)
};
let mut safe_name = String::new();
safe_name.push_str(&stem);
safe_name.push_str(&safe_suffix);
if !extension.is_empty() {
safe_name.push('.');
safe_name.push_str(extension);
}
output_path.set_file_name(safe_name);
}
output_path
}
fn output_base_path(&self, input_path: &Path, source_root: Option<&Path>) -> PathBuf {
let Some(output_root) = self
.output_root
.as_deref()
.filter(|value| !value.trim().is_empty())
else {
return input_path.to_path_buf();
};
let Some(root) = source_root else {
return input_path.to_path_buf();
};
let Ok(relative_path) = input_path.strip_prefix(root) else {
return input_path.to_path_buf();
};
let mut output_path = PathBuf::from(output_root);
if let Some(parent) = relative_path.parent() {
output_path.push(parent);
}
output_path.push(relative_path.file_name().unwrap_or_default());
output_path
}
pub fn should_replace_existing_output(&self) -> bool {
let strategy = self.replace_strategy.trim();
strategy.eq_ignore_ascii_case("replace") || strategy.eq_ignore_ascii_case("overwrite")
}
}
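The filename derivation above builds stem + suffix + extension, then falls back to appending `-alchemist` when the result would collide with the input name. `derive_output_name` below is a hypothetical standalone mirror of that logic for illustration, not the method itself:

```rust
use std::path::Path;

/// Hypothetical mirror of the filename logic in
/// `FileSettings::output_path_for_source`: stem + suffix + extension,
/// with a "-alchemist" fallback if the result equals the input name.
fn derive_output_name(input: &Path, suffix: &str, extension: &str) -> String {
    let stem = input.file_stem().unwrap_or_default().to_string_lossy();
    let ext = extension.trim_start_matches('.');
    let build = |sfx: &str| {
        let mut name = format!("{}{}", stem, sfx);
        if !ext.is_empty() {
            name.push('.');
            name.push_str(ext);
        }
        name
    };
    let candidate = build(suffix);
    // Collision guard: never write the output over the input in place.
    if Some(candidate.as_str()) == input.file_name().and_then(|n| n.to_str()) {
        build(&format!("{}-alchemist", suffix))
    } else {
        candidate
    }
}

fn main() {
    assert_eq!(
        derive_output_name(Path::new("movie.mkv"), "-x265", "mkv"),
        "movie-x265.mkv"
    );
    // Empty suffix with the same extension would collide, so the guard fires.
    assert_eq!(
        derive_output_name(Path::new("movie.mkv"), "", "mkv"),
        "movie-alchemist.mkv"
    );
}
```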
#[derive(Debug, Serialize, Deserialize, Clone, Default)]
#[serde(default)]
pub struct AggregatedStats {
pub total_jobs: i64,
pub completed_jobs: i64,
pub total_input_size: i64,
pub total_output_size: i64,
pub avg_vmaf: Option<f64>,
pub total_encode_time_seconds: f64,
}
impl AggregatedStats {
pub fn total_savings_gb(&self) -> f64 {
self.total_input_size.saturating_sub(self.total_output_size) as f64 / 1_073_741_824.0
}
pub fn total_input_gb(&self) -> f64 {
self.total_input_size as f64 / 1_073_741_824.0
}
pub fn avg_reduction_percentage(&self) -> f64 {
if self.total_input_size == 0 {
0.0
} else {
(1.0 - (self.total_output_size as f64 / self.total_input_size as f64)) * 100.0
}
}
pub fn total_time_hours(&self) -> f64 {
self.total_encode_time_seconds / 3600.0
}
pub fn total_savings_fixed(&self) -> String {
format!("{:.1}", self.total_savings_gb())
}
pub fn total_input_fixed(&self) -> String {
format!("{:.1}", self.total_input_gb())
}
pub fn efficiency_fixed(&self) -> String {
format!("{:.1}", self.avg_reduction_percentage())
}
pub fn time_fixed(&self) -> String {
format!("{:.1}", self.total_time_hours())
}
pub fn avg_vmaf_fixed(&self) -> String {
self.avg_vmaf
.map(|v| format!("{:.1}", v))
.unwrap_or_else(|| "N/A".to_string())
}
}
/// Daily aggregated statistics for time-series charts
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct DailyStats {
pub date: String,
pub jobs_completed: i64,
pub bytes_saved: i64,
pub total_input_bytes: i64,
pub total_output_bytes: i64,
}
/// Detailed per-job encoding statistics
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct DetailedEncodeStats {
pub job_id: i64,
pub input_path: String,
pub input_size_bytes: i64,
pub output_size_bytes: i64,
pub compression_ratio: f64,
pub encode_time_seconds: f64,
pub encode_speed: f64,
pub avg_bitrate_kbps: f64,
pub vmaf_score: Option<f64>,
pub created_at: DateTime<Utc>,
}
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct EncodeAttempt {
pub id: i64,
pub job_id: i64,
pub attempt_number: i32,
pub started_at: Option<String>,
pub finished_at: String,
pub outcome: String,
pub failure_code: Option<String>,
pub failure_summary: Option<String>,
pub input_size_bytes: Option<i64>,
pub output_size_bytes: Option<i64>,
pub encode_time_seconds: Option<f64>,
pub created_at: String,
}
#[derive(Debug, Clone)]
pub struct EncodeAttemptInput {
pub job_id: i64,
pub attempt_number: i32,
pub started_at: Option<String>,
pub outcome: String,
pub failure_code: Option<String>,
pub failure_summary: Option<String>,
pub input_size_bytes: Option<i64>,
pub output_size_bytes: Option<i64>,
pub encode_time_seconds: Option<f64>,
}
#[derive(Debug, Clone)]
pub struct EncodeStatsInput {
pub job_id: i64,
pub input_size: u64,
pub output_size: u64,
pub compression_ratio: f64,
pub encode_time: f64,
pub encode_speed: f64,
pub avg_bitrate: f64,
pub vmaf_score: Option<f64>,
pub output_codec: Option<String>,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct CodecSavings {
pub codec: String,
pub bytes_saved: i64,
pub job_count: i64,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct DailySavings {
pub date: String,
pub bytes_saved: i64,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct SavingsSummary {
pub total_input_bytes: i64,
pub total_output_bytes: i64,
pub total_bytes_saved: i64,
pub savings_percent: f64,
pub job_count: i64,
pub savings_by_codec: Vec<CodecSavings>,
pub savings_over_time: Vec<DailySavings>,
}
#[derive(Debug, Serialize, Deserialize, Clone, Default)]
pub struct HealthSummary {
pub total_checked: i64,
pub issues_found: i64,
pub last_run: Option<String>,
}
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct Decision {
pub id: i64,
pub job_id: i64,
pub action: String, // "encode", "skip", "reject"
pub reason: String,
pub reason_code: Option<String>,
pub reason_payload_json: Option<String>,
pub created_at: DateTime<Utc>,
}
#[derive(Debug, Clone, sqlx::FromRow)]
pub(crate) struct DecisionRecord {
pub(crate) job_id: i64,
pub(crate) action: String,
pub(crate) reason: String,
pub(crate) reason_payload_json: Option<String>,
}
#[derive(Debug, Clone, sqlx::FromRow)]
pub(crate) struct FailureExplanationRecord {
pub(crate) legacy_summary: Option<String>,
pub(crate) code: String,
pub(crate) payload_json: String,
}
// Auth related structs
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct User {
pub id: i64,
pub username: String,
pub password_hash: String,
pub created_at: DateTime<Utc>,
}
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct Session {
pub token: String,
pub user_id: i64,
pub expires_at: DateTime<Utc>,
pub created_at: DateTime<Utc>,
}
#[derive(Debug, Serialize, Deserialize, Clone, Copy, PartialEq, Eq, sqlx::Type)]
#[sqlx(rename_all = "snake_case")]
#[serde(rename_all = "snake_case")]
pub enum ApiTokenAccessLevel {
ReadOnly,
FullAccess,
}
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct ApiToken {
pub id: i64,
pub name: String,
pub access_level: ApiTokenAccessLevel,
pub created_at: DateTime<Utc>,
pub last_used_at: Option<DateTime<Utc>>,
pub revoked_at: Option<DateTime<Utc>>,
}
#[derive(Debug, Clone, sqlx::FromRow)]
pub struct ApiTokenRecord {
pub id: i64,
pub name: String,
pub token_hash: String,
pub access_level: ApiTokenAccessLevel,
pub created_at: DateTime<Utc>,
pub last_used_at: Option<DateTime<Utc>>,
pub revoked_at: Option<DateTime<Utc>>,
}
#[cfg(test)]
mod tests {
use super::*;
use std::path::{Path, PathBuf};
#[test]
fn test_output_path_for_suffix() {
let settings = FileSettings {
id: 1,
delete_source: false,
output_extension: "mkv".to_string(),
output_suffix: "-alchemist".to_string(),
replace_strategy: "keep".to_string(),
output_root: None,
};
let input = Path::new("video.mp4");
let output = settings.output_path_for(input);
assert_eq!(output, PathBuf::from("video-alchemist.mkv"));
}
#[test]
fn test_output_path_avoids_inplace() {
let settings = FileSettings {
id: 1,
delete_source: false,
output_extension: "mkv".to_string(),
output_suffix: "".to_string(),
replace_strategy: "keep".to_string(),
output_root: None,
};
let input = Path::new("video.mkv");
let output = settings.output_path_for(input);
assert_ne!(output, input);
}
#[test]
fn test_output_path_mirrors_source_root_under_output_root() {
let settings = FileSettings {
id: 1,
delete_source: false,
output_extension: "mkv".to_string(),
output_suffix: "-alchemist".to_string(),
replace_strategy: "keep".to_string(),
output_root: Some("/encoded".to_string()),
};
let input = Path::new("/library/movies/action/video.mp4");
let output = settings.output_path_for_source(input, Some(Path::new("/library")));
assert_eq!(
output,
PathBuf::from("/encoded/movies/action/video-alchemist.mkv")
);
}
#[test]
fn test_output_path_falls_back_when_source_root_does_not_match() {
let settings = FileSettings {
id: 1,
delete_source: false,
output_extension: "mkv".to_string(),
output_suffix: "-alchemist".to_string(),
replace_strategy: "keep".to_string(),
output_root: Some("/encoded".to_string()),
};
let input = Path::new("/library/movies/video.mp4");
let output = settings.output_path_for_source(input, Some(Path::new("/other")));
assert_eq!(output, PathBuf::from("/library/movies/video-alchemist.mkv"));
}
#[test]
fn test_replace_strategy() {
let mut settings = FileSettings {
id: 1,
delete_source: false,
output_extension: "mkv".to_string(),
output_suffix: "-alchemist".to_string(),
replace_strategy: "keep".to_string(),
output_root: None,
};
assert!(!settings.should_replace_existing_output());
settings.replace_strategy = "replace".to_string();
assert!(settings.should_replace_existing_output());
}
}


@@ -606,6 +606,21 @@ pub fn failure_from_summary(summary: &str) -> Explanation {
);
}
if normalized.contains("vtcompressionsession")
|| normalized.contains("kvtvideoencoder")
|| normalized.contains("kvtvideoencodenotavailablenowerr")
|| normalized.contains("videotoolbox session")
{
return Explanation::new(
ExplanationCategory::Failure,
"videotoolbox_session_failure",
"VideoToolbox session failed",
"The macOS VideoToolbox hardware encoder could not initialize or lost its session mid-encode. This can happen when the GPU is under load or when another process holds the hardware encoder.",
Some("Retry the job. If this repeats, reduce concurrent jobs, restart Alchemist, or enable CPU fallback.".to_string()),
summary,
);
}
if normalized.contains("videotoolbox")
|| normalized.contains("vt_compression")
|| normalized.contains("mediaserverd")


@@ -18,7 +18,6 @@ pub mod version;
pub mod wizard;
pub use config::QualityProfile;
pub use db::AlchemistEvent;
pub use media::ffmpeg::{EncodeStats, EncoderCapabilities, HardwareAccelerators};
pub use media::processor::Agent;
pub use orchestrator::Transcoder;


@@ -2,15 +2,18 @@
use alchemist::db::EventChannels;
use alchemist::error::Result;
use alchemist::media::pipeline::{Analyzer as _, Planner as _};
use alchemist::system::hardware;
use alchemist::version;
use alchemist::{Agent, Transcoder, config, db, runtime};
use clap::Parser;
use clap::{Parser, Subcommand};
use serde::Serialize;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::time::Instant;
use tracing::{debug, error, info, warn};
use tracing_subscriber::EnvFilter;
use tracing_subscriber::fmt::time::time;
use notify::{RecursiveMode, Watcher};
use tokio::sync::RwLock;
@@ -19,21 +22,55 @@ use tokio::sync::broadcast;
#[derive(Parser, Debug)]
#[command(author, version = version::current(), about, long_about = None)]
struct Args {
/// Run in CLI mode (process directories and exit)
#[arg(long)]
cli: bool,
/// Directories to scan for media files (CLI mode only)
#[arg(long, value_name = "DIR")]
directories: Vec<PathBuf>,
/// Dry run (don't actually transcode)
#[arg(short, long)]
dry_run: bool,
/// Reset admin user/password and sessions (forces setup mode)
#[arg(long)]
reset_auth: bool,
    /// Enable verbose terminal logging with a default DEBUG filter
#[arg(long)]
debug_flags: bool,
#[command(subcommand)]
command: Option<Commands>,
}
#[derive(Subcommand, Debug, Clone)]
enum Commands {
/// Scan directories and enqueue matching work, then exit
Scan {
#[arg(value_name = "DIR", required = true)]
directories: Vec<PathBuf>,
},
/// Scan directories, enqueue work, and wait for processing to finish
Run {
#[arg(value_name = "DIR", required = true)]
directories: Vec<PathBuf>,
/// Don't actually transcode
#[arg(short, long)]
dry_run: bool,
},
/// Analyze files and report what Alchemist would do without enqueuing jobs
Plan {
#[arg(value_name = "DIR", required = true)]
directories: Vec<PathBuf>,
/// Emit machine-readable JSON instead of human-readable text
#[arg(long)]
json: bool,
},
}
#[derive(Debug, Serialize)]
struct CliPlanItem {
input_path: String,
output_path: Option<String>,
profile: Option<String>,
decision: String,
reason: String,
encoder: Option<String>,
backend: Option<String>,
rate_control: Option<String>,
fallback: Option<String>,
error: Option<String>,
}
#[tokio::main]
@@ -160,76 +197,79 @@ fn should_enter_setup_mode_for_missing_users(is_server_mode: bool, has_users: bo
}
async fn run() -> Result<()> {
// Initialize logging
tracing_subscriber::fmt()
.with_env_filter(EnvFilter::from_default_env().add_directive(tracing::Level::INFO.into()))
.with_target(true)
.with_thread_ids(true)
.with_thread_names(true)
.init();
let args = Args::parse();
init_logging(args.debug_flags);
let is_server_mode = args.command.is_none();
let boot_start = Instant::now();
// Startup Banner
info!(
" ______ __ ______ __ __ ______ __ __ __ ______ ______ "
);
info!(
"/\\ __ \\ /\\ \\ /\\ ___\\ /\\ \\_\\ \\ /\\ ___\\ /\\ \"-./ \\ /\\ \\ /\\ ___\\ /\\__ _\\"
);
info!(
"\\ \\ __ \\ \\ \\ \\____ \\ \\ \\____ \\ \\ __ \\ \\ \\ __\\ \\ \\ \\-./\\ \\ \\ \\ \\ \\ \\___ \\ \\/_/\\ \\/"
);
info!(
" \\ \\_\\ \\_\\ \\ \\_____\\ \\ \\_____\\ \\ \\_\\ \\_\\ \\ \\_____\\ \\ \\_\\ \\ \\_\\ \\ \\_\\ \\/\\_____\\ \\ \\_\\"
);
info!(
" \\/_/\\/_/ \\/_____/ \\/_____/ \\/_/\\/_/ \\/_____/ \\/_/ \\/_/ \\/_/ \\/_____/ \\/_/"
);
info!("");
info!("");
let version = alchemist::version::current();
let build_info = option_env!("BUILD_INFO")
.or(option_env!("GIT_SHA"))
.or(option_env!("VERGEN_GIT_SHA"))
.unwrap_or("unknown");
info!("Version: {}", version);
info!("Build: {}", build_info);
info!("");
info!("System Information:");
info!(
" OS: {} ({})",
std::env::consts::OS,
std::env::consts::ARCH
);
info!(" CPUs: {}", num_cpus::get());
info!("");
let args = Args::parse();
info!(
target: "startup",
"Parsed CLI args: cli_mode={}, reset_auth={}, dry_run={}, directories={}",
args.cli,
"Parsed CLI args: command={:?}, reset_auth={}, debug_flags={}",
args.command,
args.reset_auth,
args.dry_run,
args.directories.len()
args.debug_flags
);
// ... rest of logic remains largely the same, just inside run()
// Default to server mode unless CLI is explicitly requested.
let is_server_mode = !args.cli;
info!(target: "startup", "Resolved server mode: {}", is_server_mode);
if is_server_mode && !args.directories.is_empty() {
warn!("Directories were provided without --cli; ignoring CLI inputs.");
if is_server_mode {
info!(
" ______ __ ______ __ __ ______ __ __ __ ______ ______ "
);
info!(
"/\\ __ \\ /\\ \\ /\\ ___\\ /\\ \\_\\ \\ /\\ ___\\ /\\ \"-./ \\ /\\ \\ /\\ ___\\ /\\__ _\\"
);
info!(
"\\ \\ __ \\ \\ \\ \\____ \\ \\ \\____ \\ \\ __ \\ \\ \\ __\\ \\ \\ \\-./\\ \\ \\ \\ \\ \\ \\___ \\ \\/_/\\ \\/"
);
info!(
" \\ \\_\\ \\_\\ \\ \\_____\\ \\ \\_____\\ \\ \\_\\ \\_\\ \\ \\_____\\ \\ \\_\\ \\ \\_\\ \\ \\_\\ \\/\\_____\\ \\ \\_\\"
);
info!(
" \\/_/\\/_/ \\/_____/ \\/_____/ \\/_/\\/_/ \\/_____/ \\/_/ \\/_/ \\/_/ \\/_____/ \\/_/"
);
info!("");
info!("");
let version = alchemist::version::current();
let build_info = option_env!("BUILD_INFO")
.or(option_env!("GIT_SHA"))
.or(option_env!("VERGEN_GIT_SHA"))
.unwrap_or("unknown");
info!("Version: {}", version);
info!("Build: {}", build_info);
info!("");
info!("System Information:");
info!(
" OS: {} ({})",
std::env::consts::OS,
std::env::consts::ARCH
);
info!(" CPUs: {}", num_cpus::get());
info!("");
}
info!(target: "startup", "Resolved server mode: {}", is_server_mode);
// 0. Load Configuration
let config_start = Instant::now();
let config_path = runtime::config_path();
let db_path = runtime::db_path();
let config_mutable = runtime::config_mutable();
let (config, mut setup_mode, config_exists) =
load_startup_config(config_path.as_path(), is_server_mode);
let (config, mut setup_mode, config_exists) = if is_server_mode {
load_startup_config(config_path.as_path(), true)
} else {
if !config_path.exists() {
error!(
"Configuration required. Run Alchemist in server mode to complete setup, or create {:?} manually.",
config_path
);
return Err(alchemist::error::AlchemistError::Config(
"Missing configuration".into(),
));
}
let config = config::Config::load(config_path.as_path())
.map_err(|err| alchemist::error::AlchemistError::Config(err.to_string()))?;
(config, false, true)
};
info!(
target: "startup",
"Config loaded (path={:?}, exists={}, mutable={}, setup_mode={}) in {} ms",
@@ -266,6 +306,10 @@ async fn run() -> Result<()> {
Ok(mut remuxing_jobs) => jobs.append(&mut remuxing_jobs),
Err(err) => error!("Failed to load interrupted remuxing jobs: {}", err),
}
match db.get_jobs_by_status(db::JobState::Resuming).await {
Ok(mut resuming_jobs) => jobs.append(&mut resuming_jobs),
Err(err) => error!("Failed to load interrupted resuming jobs: {}", err),
}
match db.get_jobs_by_status(db::JobState::Analyzing).await {
Ok(mut analyzing_jobs) => jobs.append(&mut analyzing_jobs),
Err(err) => error!("Failed to load interrupted analyzing jobs: {}", err),
@@ -277,6 +321,11 @@ async fn run() -> Result<()> {
Ok(count) if count > 0 => {
warn!("{} interrupted jobs reset to queued", count);
for job in interrupted_jobs {
let has_resume_session =
db.get_resume_session(job.id).await.ok().flatten().is_some();
if has_resume_session {
continue;
}
let temp_path = orphaned_temp_output_path(&job.output_path);
if std::fs::metadata(&temp_path).is_ok() {
match std::fs::remove_file(&temp_path) {
@@ -371,9 +420,9 @@ async fn run() -> Result<()> {
warn!("Auth reset requested. All users and sessions cleared.");
setup_mode = true;
}
let has_users = db.has_users().await?;
if is_server_mode {
let users_start = Instant::now();
let has_users = db.has_users().await?;
info!(
target: "startup",
"User check completed (has_users={}) in {} ms",
@@ -386,6 +435,13 @@ async fn run() -> Result<()> {
}
setup_mode = true;
}
} else if !has_users {
error!(
"Setup is not complete. Run Alchemist in server mode to finish creating the first account."
);
return Err(alchemist::error::AlchemistError::Config(
"Setup incomplete".into(),
));
}
if !setup_mode {
@@ -468,9 +524,6 @@ async fn run() -> Result<()> {
system: system_tx,
});
// Keep legacy channel for transition compatibility
let (tx, _rx) = broadcast::channel(100);
let transcoder = Arc::new(Transcoder::new());
let hardware_state = hardware::HardwareState::new(Some(hw_info.clone()));
let hardware_probe_log = Arc::new(RwLock::new(initial_probe_log));
@@ -481,7 +534,7 @@ async fn run() -> Result<()> {
db.as_ref().clone(),
config.clone(),
));
notification_manager.start_listener(tx.subscribe());
notification_manager.start_listener(&event_channels);
let maintenance_db = db.clone();
let maintenance_config = config.clone();
@@ -516,9 +569,8 @@ async fn run() -> Result<()> {
transcoder.clone(),
config.clone(),
hardware_state.clone(),
tx.clone(),
event_channels.clone(),
args.dry_run,
matches!(args.command, Some(Commands::Run { dry_run: true, .. })),
)
.await,
);
@@ -720,7 +772,6 @@ async fn run() -> Result<()> {
transcoder,
scheduler: scheduler_handle,
event_channels,
tx,
setup_required: setup_mode,
config_path: config_path.clone(),
config_mutable,
@@ -748,56 +799,312 @@ async fn run() -> Result<()> {
}
}
} else {
// CLI Mode
if setup_mode {
error!(
"Configuration required. Run without --cli to use the web-based setup wizard, or create {:?} manually.",
config_path
);
// CLI early exit - error
// (Caller will handle pause-on-exit if needed)
return Err(alchemist::error::AlchemistError::Config(
"Missing configuration".into(),
));
}
if args.directories.is_empty() {
error!("No directories provided. Usage: alchemist --cli --dir <DIR> [--dir <DIR> ...]");
return Err(alchemist::error::AlchemistError::Config(
"Missing directories for CLI mode".into(),
));
}
agent.scan_and_enqueue(args.directories).await?;
// Wait until all jobs are processed
info!("Waiting for jobs to complete...");
loop {
let stats = db.get_stats().await?;
let active = stats
.as_object()
.map(|m| {
m.iter()
.filter(|(k, _)| {
["encoding", "analyzing", "remuxing", "resuming"].contains(&k.as_str())
})
.map(|(_, v)| v.as_i64().unwrap_or(0))
.sum::<i64>()
})
.unwrap_or(0);
let queued = stats.get("queued").and_then(|v| v.as_i64()).unwrap_or(0);
if active + queued == 0 {
break;
let command = match args.command.clone() {
Some(command) => command,
None => {
return Err(alchemist::error::AlchemistError::Config(
"Missing CLI command".into(),
));
}
};
match command {
Commands::Scan { directories } => {
agent.scan_and_enqueue(directories).await?;
info!("Scan complete. Matching files were enqueued.");
}
Commands::Run { directories, .. } => {
agent.scan_and_enqueue(directories).await?;
wait_for_cli_jobs(db.as_ref()).await?;
info!("All jobs processed.");
}
Commands::Plan { directories, json } => {
let items =
build_cli_plan(db.as_ref(), config.clone(), &hardware_state, directories)
.await?;
if json {
println!(
"{}",
serde_json::to_string_pretty(&items).unwrap_or_else(|_| "[]".to_string())
);
} else {
print_cli_plan(&items);
}
}
tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
}
info!("All jobs processed.");
}
Ok(())
}
async fn wait_for_cli_jobs(db: &db::Db) -> Result<()> {
info!("Waiting for jobs to complete...");
loop {
let stats = db.get_stats().await?;
let active = stats
.as_object()
.map(|m| {
m.iter()
.filter(|(k, _)| {
["encoding", "analyzing", "remuxing", "resuming"].contains(&k.as_str())
})
.map(|(_, v)| v.as_i64().unwrap_or(0))
.sum::<i64>()
})
.unwrap_or(0);
let queued = stats.get("queued").and_then(|v| v.as_i64()).unwrap_or(0);
if active + queued == 0 {
break;
}
tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
}
Ok(())
}
async fn build_cli_plan(
db: &db::Db,
config_state: Arc<RwLock<config::Config>>,
hardware_state: &hardware::HardwareState,
directories: Vec<PathBuf>,
) -> Result<Vec<CliPlanItem>> {
let files = tokio::task::spawn_blocking(move || {
let scanner = alchemist::media::scanner::Scanner::new();
scanner.scan(directories)
})
.await
.map_err(|err| alchemist::error::AlchemistError::Unknown(format!("scan task failed: {err}")))?;
let file_settings = match db.get_file_settings().await {
Ok(settings) => settings,
Err(err) => {
error!("Failed to fetch file settings, using defaults: {}", err);
alchemist::media::pipeline::default_file_settings()
}
};
let config_snapshot = Arc::new(config_state.read().await.clone());
let hw_info = hardware_state.snapshot().await;
let planner = alchemist::media::planner::BasicPlanner::new(config_snapshot, hw_info);
let analyzer = alchemist::media::analyzer::FfmpegAnalyzer;
let mut items = Vec::new();
for discovered in files {
let input_path = discovered.path.clone();
let input_path_string = input_path.display().to_string();
if let Some(reason) = alchemist::media::pipeline::skip_reason_for_discovered_path(
db,
&input_path,
&file_settings,
)
.await?
{
items.push(CliPlanItem {
input_path: input_path_string,
output_path: None,
profile: None,
decision: "skip".to_string(),
reason: reason.to_string(),
encoder: None,
backend: None,
rate_control: None,
fallback: None,
error: None,
});
continue;
}
let output_path =
file_settings.output_path_for_source(&input_path, discovered.source_root.as_deref());
if output_path.exists() && !file_settings.should_replace_existing_output() {
items.push(CliPlanItem {
input_path: input_path_string,
output_path: Some(output_path.display().to_string()),
profile: None,
decision: "skip".to_string(),
reason: "output exists and replace strategy is keep".to_string(),
encoder: None,
backend: None,
rate_control: None,
fallback: None,
error: None,
});
continue;
}
let analysis = match analyzer.analyze(&input_path).await {
Ok(analysis) => analysis,
Err(err) => {
items.push(CliPlanItem {
input_path: input_path_string,
output_path: Some(output_path.display().to_string()),
profile: None,
decision: "error".to_string(),
reason: "analysis failed".to_string(),
encoder: None,
backend: None,
rate_control: None,
fallback: None,
error: Some(err.to_string()),
});
continue;
}
};
let profile = match db.get_profile_for_path(&input_path.to_string_lossy()).await {
Ok(profile) => profile,
Err(err) => {
items.push(CliPlanItem {
input_path: input_path_string,
output_path: Some(output_path.display().to_string()),
profile: None,
decision: "error".to_string(),
reason: "profile resolution failed".to_string(),
encoder: None,
backend: None,
rate_control: None,
fallback: None,
error: Some(err.to_string()),
});
continue;
}
};
let plan = match planner
.plan(&analysis, &output_path, profile.as_ref())
.await
{
Ok(plan) => plan,
Err(err) => {
items.push(CliPlanItem {
input_path: input_path_string,
output_path: Some(output_path.display().to_string()),
profile: profile.as_ref().map(|p| p.name.clone()),
decision: "error".to_string(),
reason: "planning failed".to_string(),
encoder: None,
backend: None,
rate_control: None,
fallback: None,
error: Some(err.to_string()),
});
continue;
}
};
let (decision, reason) = match &plan.decision {
alchemist::media::pipeline::TranscodeDecision::Skip { reason } => {
("skip".to_string(), reason.clone())
}
alchemist::media::pipeline::TranscodeDecision::Remux { reason } => {
("remux".to_string(), reason.clone())
}
alchemist::media::pipeline::TranscodeDecision::Transcode { reason } => {
("transcode".to_string(), reason.clone())
}
};
items.push(CliPlanItem {
input_path: input_path_string,
output_path: Some(output_path.display().to_string()),
profile: profile.as_ref().map(|p| p.name.clone()),
decision,
reason,
encoder: plan
.encoder
.map(|encoder| encoder.ffmpeg_encoder_name().to_string()),
backend: plan.backend.map(|backend| backend.as_str().to_string()),
rate_control: plan.rate_control.as_ref().map(format_rate_control),
fallback: plan
.fallback
.as_ref()
.map(|fallback| fallback.reason.clone()),
error: None,
});
}
Ok(items)
}
fn format_rate_control(rate_control: &alchemist::media::pipeline::RateControl) -> String {
match rate_control {
alchemist::media::pipeline::RateControl::Crf { value } => format!("crf:{value}"),
alchemist::media::pipeline::RateControl::Cq { value } => format!("cq:{value}"),
alchemist::media::pipeline::RateControl::QsvQuality { value } => {
format!("qsv_quality:{value}")
}
alchemist::media::pipeline::RateControl::Bitrate { kbps } => format!("bitrate:{kbps}k"),
}
}
fn print_cli_plan(items: &[CliPlanItem]) {
for item in items {
println!("{}", item.input_path);
        println!(" decision: {} ({})", item.decision, item.reason);
if let Some(output_path) = &item.output_path {
println!(" output: {}", output_path);
}
if let Some(profile) = &item.profile {
println!(" profile: {}", profile);
}
if let Some(encoder) = &item.encoder {
let backend = item.backend.as_deref().unwrap_or("unknown");
println!(" encoder: {} ({})", encoder, backend);
}
if let Some(rate_control) = &item.rate_control {
println!(" rate: {}", rate_control);
}
if let Some(fallback) = &item.fallback {
println!(" fallback: {}", fallback);
}
if let Some(error) = &item.error {
println!(" error: {}", error);
}
println!();
}
}
fn init_logging(debug_flags: bool) {
let default_level = if debug_flags {
tracing::Level::DEBUG
} else {
tracing::Level::INFO
};
let env_filter = EnvFilter::from_default_env().add_directive(default_level.into());
if debug_flags {
tracing_subscriber::fmt()
.with_env_filter(env_filter)
.with_target(true)
.with_thread_ids(true)
.with_thread_names(true)
.with_timer(time())
.init();
} else {
tracing_subscriber::fmt()
.with_env_filter(env_filter)
.without_time()
.with_target(false)
.with_thread_ids(false)
.with_thread_names(false)
.compact()
.init();
}
}
#[cfg(test)]
mod logging_tests {
use super::*;
use clap::Parser;
#[test]
fn debug_flags_arg_parses() {
let args = Args::try_parse_from(["alchemist", "--debug-flags"])
.unwrap_or_else(|err| panic!("failed to parse debug flag: {err}"));
assert!(args.debug_flags);
}
}
#[cfg(test)]
mod version_cli_tests {
use super::*;
@@ -836,6 +1143,41 @@ mod tests {
assert!(Args::try_parse_from(["alchemist", "--output-dir", "/tmp/out"]).is_err());
}
#[test]
fn args_reject_removed_cli_flag() {
assert!(Args::try_parse_from(["alchemist", "--cli"]).is_err());
}
#[test]
fn scan_subcommand_parses() {
let args = Args::try_parse_from(["alchemist", "scan", "/tmp/media"])
.unwrap_or_else(|err| panic!("failed to parse scan subcommand: {err}"));
assert!(matches!(
args.command,
Some(Commands::Scan { directories }) if directories == vec![PathBuf::from("/tmp/media")]
));
}
#[test]
fn run_subcommand_parses_with_dry_run() {
let args = Args::try_parse_from(["alchemist", "run", "/tmp/media", "--dry-run"])
.unwrap_or_else(|err| panic!("failed to parse run subcommand: {err}"));
assert!(matches!(
args.command,
Some(Commands::Run { directories, dry_run }) if directories == vec![PathBuf::from("/tmp/media")] && dry_run
));
}
#[test]
fn plan_subcommand_parses_with_json() {
let args = Args::try_parse_from(["alchemist", "plan", "/tmp/media", "--json"])
.unwrap_or_else(|err| panic!("failed to parse plan subcommand: {err}"));
assert!(matches!(
args.command,
Some(Commands::Plan { directories, json }) if directories == vec![PathBuf::from("/tmp/media")] && json
));
}
#[test]
fn config_reload_matches_create_modify_and_rename_events() {
let config_path = PathBuf::from("/tmp/alchemist-config.toml");
@@ -940,7 +1282,6 @@ mod tests {
}));
let hardware_probe_log = Arc::new(RwLock::new(hardware::HardwareProbeLog::default()));
let transcoder = Arc::new(Transcoder::new());
let (tx, _rx) = broadcast::channel(8);
let (jobs_tx, _) = broadcast::channel(100);
let (config_tx, _) = broadcast::channel(10);
let (system_tx, _) = broadcast::channel(10);
@@ -955,7 +1296,6 @@ mod tests {
transcoder,
config_state.clone(),
hardware_state.clone(),
tx,
event_channels,
true,
)


@@ -7,7 +7,7 @@ use serde::{Deserialize, Serialize};
use std::path::Path;
use tokio::process::Command;
const FFPROBE_TIMEOUT_SECS: u64 = 120;
const FFPROBE_TIMEOUT_SECS: u64 = 30;
async fn run_ffprobe(args: &[&str], path: &Path) -> Result<std::process::Output> {
match tokio::time::timeout(


@@ -1,8 +1,6 @@
use crate::db::{AlchemistEvent, Db, EventChannels, Job, JobEvent};
use crate::db::{Db, EventChannels, Job, JobEvent};
use crate::error::Result;
use crate::media::pipeline::{
Encoder, ExecutionResult, ExecutionStats, Executor, MediaAnalysis, TranscodePlan,
};
use crate::media::pipeline::{Encoder, ExecutionResult, Executor, MediaAnalysis, TranscodePlan};
use crate::orchestrator::{
AsyncExecutionObserver, ExecutionObserver, TranscodeRequest, Transcoder,
};
@@ -10,13 +8,12 @@ use crate::system::hardware::HardwareInfo;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::sync::{Mutex, broadcast};
use tokio::sync::Mutex;
pub struct FfmpegExecutor {
transcoder: Arc<Transcoder>,
db: Arc<Db>,
hw_info: Option<HardwareInfo>,
event_tx: Arc<broadcast::Sender<AlchemistEvent>>,
event_channels: Arc<EventChannels>,
dry_run: bool,
}
@@ -26,7 +23,6 @@ impl FfmpegExecutor {
transcoder: Arc<Transcoder>,
db: Arc<Db>,
hw_info: Option<HardwareInfo>,
event_tx: Arc<broadcast::Sender<AlchemistEvent>>,
event_channels: Arc<EventChannels>,
dry_run: bool,
) -> Self {
@@ -34,7 +30,6 @@ impl FfmpegExecutor {
transcoder,
db,
hw_info,
event_tx,
event_channels,
dry_run,
}
@@ -44,22 +39,15 @@ impl FfmpegExecutor {
struct JobExecutionObserver {
job_id: i64,
db: Arc<Db>,
event_tx: Arc<broadcast::Sender<AlchemistEvent>>,
event_channels: Arc<EventChannels>,
last_progress: Mutex<Option<(f64, Instant)>>,
}
impl JobExecutionObserver {
fn new(
job_id: i64,
db: Arc<Db>,
event_tx: Arc<broadcast::Sender<AlchemistEvent>>,
event_channels: Arc<EventChannels>,
) -> Self {
fn new(job_id: i64, db: Arc<Db>, event_channels: Arc<EventChannels>) -> Self {
Self {
job_id,
db,
event_tx,
event_channels,
last_progress: Mutex::new(None),
}
@@ -68,18 +56,11 @@ impl JobExecutionObserver {
impl AsyncExecutionObserver for JobExecutionObserver {
async fn on_log(&self, message: String) {
// Send to typed channel
let _ = self.event_channels.jobs.send(JobEvent::Log {
level: "info".to_string(),
job_id: Some(self.job_id),
message: message.clone(),
});
// Also send to legacy channel for backwards compatibility
let _ = self.event_tx.send(AlchemistEvent::Log {
level: "info".to_string(),
job_id: Some(self.job_id),
message: message.clone(),
});
if let Err(err) = self.db.add_log("info", Some(self.job_id), &message).await {
tracing::warn!(
"Failed to persist transcode log for job {}: {}",
@@ -117,14 +98,7 @@ impl AsyncExecutionObserver for JobExecutionObserver {
}
}
// Send to typed channel
let _ = self.event_channels.jobs.send(JobEvent::Progress {
job_id: self.job_id,
percentage,
time: progress.time.clone(),
});
// Also send to legacy channel for backwards compatibility
let _ = self.event_tx.send(AlchemistEvent::Progress {
job_id: self.job_id,
percentage,
time: progress.time,
@@ -155,10 +129,21 @@ impl Executor for FfmpegExecutor {
let observer: Arc<dyn ExecutionObserver> = Arc::new(JobExecutionObserver::new(
job.id,
self.db.clone(),
self.event_tx.clone(),
self.event_channels.clone(),
));
tracing::info!(
"Job {} execution path: requested_codec={}, planned_codec={}, encoder={:?}, backend={:?}, fallback={:?}",
job.id,
plan.requested_codec.as_str(),
planned_output_codec.as_str(),
encoder.map(|value| value.ffmpeg_encoder_name()),
used_backend.map(|value| value.as_str()),
plan.fallback
.as_ref()
.map(|fallback| fallback.reason.as_str())
);
self.transcoder
.transcode_media(TranscodeRequest {
job_id: Some(job.id),
@@ -169,6 +154,8 @@ impl Executor for FfmpegExecutor {
metadata: &analysis.metadata,
plan,
observer: Some(observer.clone()),
clip_start_seconds: None,
clip_duration_seconds: None,
})
.await?;
@@ -186,6 +173,8 @@ impl Executor for FfmpegExecutor {
metadata: &analysis.metadata,
plan,
observer: Some(observer),
clip_start_seconds: None,
clip_duration_seconds: None,
})
.await?;
}
@@ -244,6 +233,14 @@ impl Executor for FfmpegExecutor {
);
}
tracing::info!(
"Job {} output probe: actual_codec={:?}, actual_encoder={:?}, fallback_occurred={}",
job.id,
actual_output_codec.map(|value| value.as_str()),
actual_encoder_name.as_deref(),
plan.fallback.is_some() || codec_mismatch || encoder_mismatch
);
Ok(ExecutionResult {
requested_codec: plan.requested_codec,
planned_output_codec,
@@ -254,17 +251,11 @@ impl Executor for FfmpegExecutor {
fallback_occurred: plan.fallback.is_some() || codec_mismatch || encoder_mismatch,
actual_output_codec,
actual_encoder_name,
stats: ExecutionStats {
encode_time_secs: 0.0,
input_size: 0,
output_size: 0,
vmaf: None,
},
})
}
}
fn output_codec_from_name(codec: &str) -> Option<crate::config::OutputCodec> {
pub(crate) fn output_codec_from_name(codec: &str) -> Option<crate::config::OutputCodec> {
if codec.eq_ignore_ascii_case("av1") {
Some(crate::config::OutputCodec::Av1)
} else if codec.eq_ignore_ascii_case("hevc") || codec.eq_ignore_ascii_case("h265") {
@@ -276,7 +267,10 @@ fn output_codec_from_name(codec: &str) -> Option<crate::config::OutputCodec> {
}
}
fn encoder_tag_matches(requested: crate::media::pipeline::Encoder, encoder_tag: &str) -> bool {
pub(crate) fn encoder_tag_matches(
requested: crate::media::pipeline::Encoder,
encoder_tag: &str,
) -> bool {
let tag = encoder_tag.to_ascii_lowercase();
let expected_markers: &[&str] = match requested {
crate::media::pipeline::Encoder::Av1Qsv
@@ -372,8 +366,7 @@ mod tests {
let Some(job) = db.get_job_by_input_path("input.mkv").await? else {
panic!("expected seeded job");
};
let (tx, mut rx) = broadcast::channel(8);
let (jobs_tx, _) = broadcast::channel(100);
let (jobs_tx, mut jobs_rx) = broadcast::channel(100);
let (config_tx, _) = broadcast::channel(10);
let (system_tx, _) = broadcast::channel(10);
let event_channels = Arc::new(crate::db::EventChannels {
@@ -381,7 +374,7 @@ mod tests {
config: config_tx,
system: system_tx,
});
let observer = JobExecutionObserver::new(job.id, db.clone(), Arc::new(tx), event_channels);
let observer = JobExecutionObserver::new(job.id, db.clone(), event_channels);
LocalExecutionObserver::on_log(&observer, "ffmpeg line".to_string()).await;
LocalExecutionObserver::on_progress(
@@ -403,10 +396,10 @@ mod tests {
};
assert!((updated.progress - 20.0).abs() < 0.01);
let first = rx.recv().await?;
assert!(matches!(first, AlchemistEvent::Log { .. }));
let second = rx.recv().await?;
assert!(matches!(second, AlchemistEvent::Progress { .. }));
let first = jobs_rx.recv().await?;
assert!(matches!(first, JobEvent::Log { .. }));
let second = jobs_rx.recv().await?;
assert!(matches!(second, JobEvent::Progress { .. }));
drop(db);
let _ = std::fs::remove_file(db_path);


@@ -1,6 +1,13 @@
use crate::media::pipeline::Encoder;
use crate::media::pipeline::{Encoder, RateControl};
pub fn append_args(args: &mut Vec<String>, encoder: Encoder, rate_control: Option<&RateControl>) {
// AMF quality: CQP mode uses -rc cqp with -qp_i and -qp_p.
// The config uses CQ-style semantics (lower value = better quality).
let (use_cqp, qp_value) = match rate_control {
Some(RateControl::Cq { value }) => (true, *value),
_ => (false, 25),
};
pub fn append_args(args: &mut Vec<String>, encoder: Encoder) {
match encoder {
Encoder::Av1Amf => {
args.extend(["-c:v".to_string(), "av1_amf".to_string()]);
@@ -13,4 +20,15 @@ pub fn append_args(args: &mut Vec<String>, encoder: Encoder) {
}
_ => {}
}
if use_cqp {
args.extend([
"-rc".to_string(),
"cqp".to_string(),
"-qp_i".to_string(),
qp_value.to_string(),
"-qp_p".to_string(),
qp_value.to_string(),
]);
}
}
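The CQP mapping above can be sketched standalone (simplified `RateControl` with only the variant that matters here; flag names mirror the diff):

```rust
// Simplified sketch of the AMF CQP flag emission shown in the diff.
enum RateControl {
    Cq { value: u8 },
}

fn amf_cqp_args(rate_control: Option<&RateControl>) -> Vec<String> {
    let mut args = Vec::new();
    // CQP mode applies the same QP to I- and P-frames.
    if let Some(RateControl::Cq { value }) = rate_control {
        args.extend([
            "-rc".to_string(),
            "cqp".to_string(),
            "-qp_i".to_string(),
            value.to_string(),
            "-qp_p".to_string(),
            value.to_string(),
        ]);
    }
    args
}

fn main() {
    let args = amf_cqp_args(Some(&RateControl::Cq { value: 19 }));
    assert!(args.windows(2).any(|w| w == ["-rc", "cqp"]));
    assert!(args.windows(2).any(|w| w == ["-qp_i", "19"]));
    assert!(amf_cqp_args(None).is_empty());
    println!("ok");
}
```

When no CQ value is configured the function emits nothing, letting AMF fall back to its own default rate control.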


@@ -6,6 +6,7 @@ pub fn append_args(
encoder: Encoder,
rate_control: Option<RateControl>,
preset: Option<&str>,
tag_hevc_as_hvc1: bool,
) {
match encoder {
Encoder::Av1Svt => {
@@ -48,9 +49,10 @@ pub fn append_args(
preset.unwrap_or(CpuPreset::Medium.as_str()).to_string(),
"-crf".to_string(),
crf,
"-tag:v".to_string(),
"hvc1".to_string(),
]);
if tag_hevc_as_hvc1 {
args.extend(["-tag:v".to_string(), "hvc1".to_string()]);
}
}
Encoder::H264X264 => {
let crf = match rate_control {


@@ -135,6 +135,8 @@ pub struct FFmpegCommandBuilder<'a> {
metadata: &'a crate::media::pipeline::MediaMetadata,
plan: &'a TranscodePlan,
hw_info: Option<&'a HardwareInfo>,
clip_start_seconds: Option<f64>,
clip_duration_seconds: Option<f64>,
}
impl<'a> FFmpegCommandBuilder<'a> {
@@ -150,6 +152,8 @@ impl<'a> FFmpegCommandBuilder<'a> {
metadata,
plan,
hw_info: None,
clip_start_seconds: None,
clip_duration_seconds: None,
}
}
@@ -158,6 +162,16 @@ impl<'a> FFmpegCommandBuilder<'a> {
self
}
pub fn with_clip(
mut self,
clip_start_seconds: Option<f64>,
clip_duration_seconds: Option<f64>,
) -> Self {
self.clip_start_seconds = clip_start_seconds;
self.clip_duration_seconds = clip_duration_seconds;
self
}
pub fn build(self) -> Result<tokio::process::Command> {
let args = self.build_args()?;
let mut cmd = tokio::process::Command::new("ffmpeg");
@@ -182,20 +196,30 @@ impl<'a> FFmpegCommandBuilder<'a> {
}
let rate_control = self.plan.rate_control.clone();
let tag_hevc_as_hvc1 = uses_quicktime_container(&self.plan.container);
let mut args = vec![
"-hide_banner".to_string(),
"-y".to_string(),
"-nostats".to_string(),
"-progress".to_string(),
"pipe:2".to_string(),
"-i".to_string(),
self.input.display().to_string(),
"-map_metadata".to_string(),
"0".to_string(),
"-map".to_string(),
"0:v:0".to_string(),
];
args.push("-i".to_string());
args.push(self.input.display().to_string());
if let Some(clip_start_seconds) = self.clip_start_seconds {
args.push("-ss".to_string());
args.push(format!("{clip_start_seconds:.3}"));
}
if let Some(clip_duration_seconds) = self.clip_duration_seconds {
args.push("-t".to_string());
args.push(format!("{clip_duration_seconds:.3}"));
}
args.push("-map_metadata".to_string());
args.push("0".to_string());
args.push("-map".to_string());
args.push("0:v:0".to_string());
if !matches!(self.plan.audio, AudioStreamPlan::Drop) {
match &self.plan.audio_stream_indices {
None => {
@@ -241,10 +265,10 @@ impl<'a> FFmpegCommandBuilder<'a> {
);
}
Encoder::Av1Vaapi | Encoder::HevcVaapi | Encoder::H264Vaapi => {
vaapi::append_args(&mut args, encoder, self.hw_info);
vaapi::append_args(&mut args, encoder, self.hw_info, rate_control.as_ref());
}
Encoder::Av1Amf | Encoder::HevcAmf | Encoder::H264Amf => {
amf::append_args(&mut args, encoder);
amf::append_args(&mut args, encoder, rate_control.as_ref());
}
Encoder::Av1Videotoolbox
| Encoder::HevcVideotoolbox
@@ -252,8 +276,8 @@ impl<'a> FFmpegCommandBuilder<'a> {
videotoolbox::append_args(
&mut args,
encoder,
rate_control.clone(),
default_quality(&self.plan.rate_control, 65),
tag_hevc_as_hvc1,
rate_control.as_ref(),
);
}
Encoder::Av1Svt | Encoder::Av1Aom | Encoder::HevcX265 | Encoder::H264X264 => {
@@ -262,11 +286,18 @@ impl<'a> FFmpegCommandBuilder<'a> {
encoder,
rate_control.clone(),
self.plan.encoder_preset.as_deref(),
tag_hevc_as_hvc1,
);
}
}
}
// Set maximum keyframe interval (~10s GOP) for all non-copy encodes.
// Improves seeking reliability; hardware encoders respect this upper bound.
if !self.plan.copy_video {
args.extend(["-g".to_string(), "250".to_string()]);
}
if let Some(RateControl::Bitrate { kbps }) = rate_control {
args.extend(["-b:v".to_string(), format!("{kbps}k")]);
}
@@ -285,7 +316,7 @@ impl<'a> FFmpegCommandBuilder<'a> {
apply_subtitle_plan(&mut args, &self.plan.subtitles);
apply_color_metadata(&mut args, self.metadata, &self.plan.filters);
if matches!(self.plan.container.as_str(), "mp4" | "m4v" | "mov") {
if uses_quicktime_container(&self.plan.container) {
args.push("-movflags".to_string());
args.push("+faststart".to_string());
}
@@ -483,6 +514,10 @@ fn output_format_name(container: &str) -> &str {
}
}
fn uses_quicktime_container(container: &str) -> bool {
matches!(container, "mp4" | "m4v" | "mov")
}
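The new helper centralizes the QuickTime-family check; the `hvc1` tagging it gates can be sketched in isolation (the `hvc1` fourcc only means something inside QuickTime-family containers, so MKV outputs skip it):

```rust
// Sketch of the container-gated hvc1 tagging the diff introduces.
fn uses_quicktime_container(container: &str) -> bool {
    matches!(container, "mp4" | "m4v" | "mov")
}

fn hevc_tag_args(container: &str) -> Vec<String> {
    if uses_quicktime_container(container) {
        vec!["-tag:v".to_string(), "hvc1".to_string()]
    } else {
        Vec::new()
    }
}

fn main() {
    assert_eq!(hevc_tag_args("mp4"), ["-tag:v", "hvc1"]);
    assert_eq!(hevc_tag_args("mov"), ["-tag:v", "hvc1"]);
    assert!(hevc_tag_args("mkv").is_empty());
    println!("ok");
}
```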
#[derive(Debug, Clone, Default)]
pub struct FFmpegProgress {
pub frame: u64,
@@ -592,10 +627,11 @@ impl FFmpegProgressState {
}
}
"speed" => self.current.speed = value.to_string(),
"progress" if matches!(value, "continue" | "end") => {
if self.current.time_seconds > 0.0 || self.current.frame > 0 {
return Some(self.current.clone());
}
"progress"
if matches!(value, "continue" | "end")
&& (self.current.time_seconds > 0.0 || self.current.frame > 0) =>
{
return Some(self.current.clone());
}
_ => {}
}
@@ -1027,6 +1063,30 @@ mod tests {
assert!(args.iter().any(|arg| arg.contains("format=nv12,hwupload")));
}
#[test]
fn vaapi_cq_mode_sets_inverted_global_quality() {
let metadata = metadata();
let mut plan = plan_for(Encoder::HevcVaapi);
plan.rate_control = Some(RateControl::Cq { value: 23 });
let mut info = hw_info("/dev/dri/renderD128");
info.vendor = crate::system::hardware::Vendor::Amd;
let builder = FFmpegCommandBuilder::new(
Path::new("/tmp/in.mkv"),
Path::new("/tmp/out.mkv"),
&metadata,
&plan,
)
.with_hardware(Some(&info));
let args = builder
.build_args()
.unwrap_or_else(|err| panic!("failed to build vaapi cq args: {err}"));
let quality_index = args
.iter()
.position(|arg| arg == "-global_quality")
.unwrap_or_else(|| panic!("missing -global_quality"));
assert_eq!(args.get(quality_index + 1).map(String::as_str), Some("77"));
}
#[test]
fn command_args_cover_videotoolbox_backend() {
let metadata = metadata();
@@ -1041,6 +1101,83 @@ mod tests {
.build_args()
.unwrap_or_else(|err| panic!("failed to build videotoolbox args: {err}"));
assert!(args.contains(&"hevc_videotoolbox".to_string()));
assert!(!args.contains(&"hvc1".to_string()));
assert!(args.contains(&"-q:v".to_string())); // P1-2 fix: Cq maps to -q:v
assert!(!args.contains(&"-b:v".to_string()));
}
#[test]
fn hevc_videotoolbox_mp4_adds_hvc1_tag() {
let metadata = metadata();
let mut plan = plan_for(Encoder::HevcVideotoolbox);
plan.container = "mp4".to_string();
let builder = FFmpegCommandBuilder::new(
Path::new("/tmp/in.mkv"),
Path::new("/tmp/out.mp4"),
&metadata,
&plan,
);
let args = builder
.build_args()
.unwrap_or_else(|err| panic!("failed to build mp4 videotoolbox args: {err}"));
assert!(args.contains(&"hevc_videotoolbox".to_string()));
assert!(args.contains(&"hvc1".to_string()));
assert!(args.contains(&"-q:v".to_string())); // P1-2 fix: Cq maps to -q:v
}
#[test]
fn hevc_videotoolbox_bitrate_mode_uses_generic_bitrate_flag() {
let metadata = metadata();
let mut plan = plan_for(Encoder::HevcVideotoolbox);
plan.rate_control = Some(RateControl::Bitrate { kbps: 2500 });
let builder = FFmpegCommandBuilder::new(
Path::new("/tmp/in.mkv"),
Path::new("/tmp/out.mkv"),
&metadata,
&plan,
);
let args = builder
.build_args()
.unwrap_or_else(|err| panic!("failed to build bitrate videotoolbox args: {err}"));
assert!(args.contains(&"hevc_videotoolbox".to_string()));
assert!(args.contains(&"-b:v".to_string()));
assert!(args.contains(&"2500k".to_string()));
assert!(!args.contains(&"-q:v".to_string()));
}
#[test]
fn hevc_x265_mkv_does_not_add_hvc1_tag() {
let metadata = metadata();
let plan = plan_for(Encoder::HevcX265);
let builder = FFmpegCommandBuilder::new(
Path::new("/tmp/in.mkv"),
Path::new("/tmp/out.mkv"),
&metadata,
&plan,
);
let args = builder
.build_args()
.unwrap_or_else(|err| panic!("failed to build mkv x265 args: {err}"));
assert!(args.contains(&"libx265".to_string()));
assert!(!args.contains(&"hvc1".to_string()));
}
#[test]
fn hevc_x265_mp4_adds_hvc1_tag() {
let metadata = metadata();
let mut plan = plan_for(Encoder::HevcX265);
plan.container = "mp4".to_string();
let builder = FFmpegCommandBuilder::new(
Path::new("/tmp/in.mkv"),
Path::new("/tmp/out.mp4"),
&metadata,
&plan,
);
let args = builder
.build_args()
.unwrap_or_else(|err| panic!("failed to build mp4 x265 args: {err}"));
assert!(args.contains(&"libx265".to_string()));
assert!(args.contains(&"hvc1".to_string()));
}
#[test]
@@ -1059,6 +1196,42 @@ mod tests {
assert!(args.contains(&"hevc_amf".to_string()));
}
#[test]
fn amf_cq_mode_sets_cqp_flags() {
let metadata = metadata();
let mut plan = plan_for(Encoder::HevcAmf);
plan.rate_control = Some(RateControl::Cq { value: 19 });
let builder = FFmpegCommandBuilder::new(
Path::new("/tmp/in.mkv"),
Path::new("/tmp/out.mkv"),
&metadata,
&plan,
);
let args = builder
.build_args()
.unwrap_or_else(|err| panic!("failed to build amf cq args: {err}"));
assert!(args.windows(2).any(|window| window == ["-rc", "cqp"]));
assert!(args.windows(2).any(|window| window == ["-qp_i", "19"]));
assert!(args.windows(2).any(|window| window == ["-qp_p", "19"]));
}
#[test]
fn clip_window_adds_trim_arguments() {
let metadata = metadata();
let plan = plan_for(Encoder::H264X264);
let args = FFmpegCommandBuilder::new(
Path::new("/tmp/in.mkv"),
Path::new("/tmp/out.mkv"),
&metadata,
&plan,
)
.with_clip(Some(12.5), Some(8.0))
.build_args()
.unwrap_or_else(|err| panic!("failed to build clipped args: {err}"));
assert!(args.windows(2).any(|window| window == ["-ss", "12.500"]));
assert!(args.windows(2).any(|window| window == ["-t", "8.000"]));
}
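The clip test above exercises the `with_clip` argument formatting; in isolation the construction looks like this (hypothetical free function for illustration, mirroring the builder's millisecond-precision formatting):

```rust
// Sketch of the clip-window argument formatting from the diff: seconds are
// rendered with three decimal places so ffmpeg receives stable values.
fn clip_args(start: Option<f64>, duration: Option<f64>) -> Vec<String> {
    let mut args = Vec::new();
    if let Some(s) = start {
        args.push("-ss".to_string());
        args.push(format!("{s:.3}"));
    }
    if let Some(d) = duration {
        args.push("-t".to_string());
        args.push(format!("{d:.3}"));
    }
    args
}

fn main() {
    assert_eq!(clip_args(Some(12.5), Some(8.0)), ["-ss", "12.500", "-t", "8.000"]);
    assert!(clip_args(None, None).is_empty());
    println!("ok");
}
```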
#[test]
fn mp4_audio_transcode_uses_aac_profile() {
let mut plan = plan_for(Encoder::H264X264);


@@ -19,6 +19,8 @@ pub fn append_args(
"av1_nvenc".to_string(),
"-preset".to_string(),
preset.clone(),
"-rc".to_string(),
"vbr".to_string(),
"-cq".to_string(),
cq.to_string(),
]);
@@ -29,6 +31,8 @@ pub fn append_args(
"hevc_nvenc".to_string(),
"-preset".to_string(),
preset.clone(),
"-rc".to_string(),
"vbr".to_string(),
"-cq".to_string(),
cq.to_string(),
]);
@@ -39,6 +43,8 @@ pub fn append_args(
"h264_nvenc".to_string(),
"-preset".to_string(),
preset,
"-rc".to_string(),
"vbr".to_string(),
"-cq".to_string(),
cq.to_string(),
]);


@@ -32,7 +32,7 @@ pub fn append_args(
"-global_quality".to_string(),
quality.to_string(),
"-look_ahead".to_string(),
"1".to_string(),
"20".to_string(),
]);
}
Encoder::HevcQsv => {
@@ -42,7 +42,7 @@ pub fn append_args(
"-global_quality".to_string(),
quality.to_string(),
"-look_ahead".to_string(),
"1".to_string(),
"20".to_string(),
]);
}
Encoder::H264Qsv => {
@@ -52,7 +52,7 @@ pub fn append_args(
"-global_quality".to_string(),
quality.to_string(),
"-look_ahead".to_string(),
"1".to_string(),
"20".to_string(),
]);
}
_ => {}


@@ -1,7 +1,12 @@
use crate::media::pipeline::Encoder;
use crate::media::pipeline::{Encoder, RateControl};
use crate::system::hardware::HardwareInfo;
pub fn append_args(args: &mut Vec<String>, encoder: Encoder, hw_info: Option<&HardwareInfo>) {
pub fn append_args(
args: &mut Vec<String>,
encoder: Encoder,
hw_info: Option<&HardwareInfo>,
rate_control: Option<&RateControl>,
) {
if let Some(hw) = hw_info {
if let Some(ref device_path) = hw.device_path {
args.extend(["-vaapi_device".to_string(), device_path.to_string()]);
@@ -20,4 +25,12 @@ pub fn append_args(args: &mut Vec<String>, encoder: Encoder, hw_info: Option<&Ha
}
_ => {}
}
// VAAPI quality is set via -global_quality (0-100, higher = better).
// The config uses CQ-style semantics where lower value = better quality,
// so we invert: global_quality = 100 - cq_value.
if let Some(RateControl::Cq { value }) = rate_control {
let global_quality = 100u8.saturating_sub(*value);
args.extend(["-global_quality".to_string(), global_quality.to_string()]);
}
}
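The inversion described in the comment above can be sketched as a standalone function (name is illustrative; the constant and saturating behavior mirror the diff):

```rust
// Sketch of the CQ -> -global_quality inversion: the config uses CQ
// semantics (lower = better), VAAPI's -global_quality uses higher = better.
fn vaapi_global_quality(cq: u8) -> u8 {
    100u8.saturating_sub(cq)
}

fn main() {
    assert_eq!(vaapi_global_quality(23), 77); // matches the new unit test in the diff
    assert_eq!(vaapi_global_quality(0), 100);
    assert_eq!(vaapi_global_quality(250), 0); // saturating_sub guards out-of-range inputs
    println!("ok");
}
```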


@@ -3,53 +3,45 @@ use crate::media::pipeline::{Encoder, RateControl};
pub fn append_args(
args: &mut Vec<String>,
encoder: Encoder,
rate_control: Option<RateControl>,
default_quality: u8,
tag_hevc_as_hvc1: bool,
rate_control: Option<&RateControl>,
) {
let cq = match rate_control {
Some(RateControl::Cq { value }) => value,
_ => default_quality,
};
match encoder {
Encoder::Av1Videotoolbox => {
args.extend([
"-c:v".to_string(),
"av1_videotoolbox".to_string(),
"-b:v".to_string(),
"0".to_string(),
"-q:v".to_string(),
cq.to_string(),
"-allow_sw".to_string(),
"1".to_string(),
]);
args.extend(["-c:v".to_string(), "av1_videotoolbox".to_string()]);
}
Encoder::HevcVideotoolbox => {
args.extend([
"-c:v".to_string(),
"hevc_videotoolbox".to_string(),
"-b:v".to_string(),
"0".to_string(),
"-q:v".to_string(),
cq.to_string(),
"-tag:v".to_string(),
"hvc1".to_string(),
"-allow_sw".to_string(),
"1".to_string(),
]);
args.extend(["-c:v".to_string(), "hevc_videotoolbox".to_string()]);
if tag_hevc_as_hvc1 {
args.extend(["-tag:v".to_string(), "hvc1".to_string()]);
}
}
Encoder::H264Videotoolbox => {
args.extend([
"-c:v".to_string(),
"h264_videotoolbox".to_string(),
"-b:v".to_string(),
"0".to_string(),
"-q:v".to_string(),
cq.to_string(),
"-allow_sw".to_string(),
"1".to_string(),
]);
args.extend(["-c:v".to_string(), "h264_videotoolbox".to_string()]);
}
_ => {}
}
match rate_control {
Some(RateControl::Cq { value }) => {
// VideoToolbox -q:v: 1 (best) to 100 (worst). Config value is CRF-style
// where lower = better quality. Clamp to 1-51 range matching x264/x265.
let q = (*value).clamp(1, 51);
args.extend(["-q:v".to_string(), q.to_string()]);
}
Some(RateControl::Bitrate { kbps, .. }) => {
args.extend([
"-b:v".to_string(),
format!("{}k", kbps),
"-maxrate".to_string(),
format!("{}k", kbps * 2),
"-bufsize".to_string(),
format!("{}k", kbps * 4),
]);
}
_ => {
// Default: constant quality at 28 (HEVC-equivalent mid quality)
args.extend(["-q:v".to_string(), "28".to_string()]);
}
}
}
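The rewritten match above covers all three rate-control cases; as a standalone sketch (simplified enum, same constants as the diff):

```rust
// Sketch of the VideoToolbox rate-control mapping: CQ clamps to the 1-51
// CRF-style range, bitrate mode derives maxrate/bufsize from the target,
// and the fallback is constant quality 28.
enum RateControl {
    Cq { value: u8 },
    Bitrate { kbps: u32 },
}

fn vt_rate_args(rc: Option<&RateControl>) -> Vec<String> {
    let mut args = Vec::new();
    match rc {
        Some(RateControl::Cq { value }) => {
            let q = (*value).clamp(1, 51);
            args.extend(["-q:v".to_string(), q.to_string()]);
        }
        Some(RateControl::Bitrate { kbps }) => {
            args.extend([
                "-b:v".to_string(),
                format!("{kbps}k"),
                "-maxrate".to_string(),
                format!("{}k", kbps * 2),
                "-bufsize".to_string(),
                format!("{}k", kbps * 4),
            ]);
        }
        None => args.extend(["-q:v".to_string(), "28".to_string()]),
    }
    args
}

fn main() {
    assert_eq!(vt_rate_args(Some(&RateControl::Cq { value: 80 })), ["-q:v", "51"]);
    let bitrate = vt_rate_args(Some(&RateControl::Bitrate { kbps: 2500 }));
    assert!(bitrate.contains(&"5000k".to_string()));
    assert_eq!(vt_rate_args(None), ["-q:v", "28"]);
    println!("ok");
}
```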

File diff suppressed because it is too large


@@ -339,12 +339,13 @@ fn should_transcode(
};
let normalized_bpp = bpp.map(|value| value * res_correction);
// Raise threshold for uncertain analysis: low confidence = fewer speculative encodes.
let mut threshold = match analysis.confidence {
crate::media::pipeline::AnalysisConfidence::High => config.transcode.min_bpp_threshold,
crate::media::pipeline::AnalysisConfidence::Medium => {
config.transcode.min_bpp_threshold * 0.7
config.transcode.min_bpp_threshold * 1.3
}
crate::media::pipeline::AnalysisConfidence::Low => config.transcode.min_bpp_threshold * 0.5,
crate::media::pipeline::AnalysisConfidence::Low => config.transcode.min_bpp_threshold * 1.8,
};
if target_codec == OutputCodec::Av1 {
threshold *= 0.7;
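The confidence-scaled threshold logic above can be sketched standalone (enum and multipliers mirror the change; the config type is simplified for illustration):

```rust
// Sketch of the confidence-scaled BPP threshold from the diff.
#[derive(Clone, Copy)]
enum AnalysisConfidence {
    High,
    Medium,
    Low,
}

fn effective_threshold(base: f64, confidence: AnalysisConfidence, av1_target: bool) -> f64 {
    // Lower confidence raises the bar, so uncertain files trigger fewer
    // speculative encodes.
    let mut threshold = match confidence {
        AnalysisConfidence::High => base,
        AnalysisConfidence::Medium => base * 1.3,
        AnalysisConfidence::Low => base * 1.8,
    };
    if av1_target {
        // AV1 compresses better, so a lower BPP still justifies re-encoding.
        threshold *= 0.7;
    }
    threshold
}

fn main() {
    assert!((effective_threshold(0.10, AnalysisConfidence::Low, false) - 0.18).abs() < 1e-9);
    assert!((effective_threshold(0.10, AnalysisConfidence::High, true) - 0.07).abs() < 1e-9);
    println!("ok");
}
```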
@@ -626,8 +627,16 @@ fn encoder_runtime_settings(
},
None,
),
Encoder::Av1Nvenc | Encoder::HevcNvenc | Encoder::H264Nvenc => (
RateControl::Cq { value: 25 },
Encoder::Av1Nvenc => (
RateControl::Cq { value: 28 },
Some(quality_profile.nvenc_preset().to_string()),
),
Encoder::HevcNvenc => (
RateControl::Cq { value: 24 },
Some(quality_profile.nvenc_preset().to_string()),
),
Encoder::H264Nvenc => (
RateControl::Cq { value: 21 },
Some(quality_profile.nvenc_preset().to_string()),
),
Encoder::Av1Videotoolbox | Encoder::HevcVideotoolbox | Encoder::H264Videotoolbox => (
@@ -645,7 +654,18 @@ fn encoder_runtime_settings(
Some(preset.to_string()),
)
}
Encoder::Av1Aom => (RateControl::Crf { value: 32 }, Some("6".to_string())),
Encoder::Av1Aom => {
let (cpu_used, default_crf) = match config.hardware.cpu_preset {
crate::config::CpuPreset::Slow => ("2", 24u8),
crate::config::CpuPreset::Medium => ("4", 28u8),
crate::config::CpuPreset::Fast => ("6", 30u8),
crate::config::CpuPreset::Faster => ("8", 32u8),
};
(
RateControl::Crf { value: default_crf },
Some(cpu_used.to_string()),
)
}
Encoder::HevcX265 => {
let preset = config.hardware.cpu_preset.as_str().to_string();
let default_crf = match config.hardware.cpu_preset {
@@ -901,7 +921,10 @@ fn plan_subtitles(
}
}
fn subtitle_copy_supported(container: &str, subtitle_streams: &[SubtitleStreamMetadata]) -> bool {
pub(crate) fn subtitle_copy_supported(
container: &str,
subtitle_streams: &[SubtitleStreamMetadata],
) -> bool {
if subtitle_streams.is_empty() {
return true;
}


@@ -1,6 +1,6 @@
use crate::Transcoder;
use crate::config::Config;
use crate::db::{AlchemistEvent, Db, EventChannels, JobEvent, SystemEvent};
use crate::db::{Db, EventChannels, JobEvent, SystemEvent};
use crate::error::Result;
use crate::media::pipeline::Pipeline;
use crate::media::scanner::Scanner;
@@ -8,7 +8,7 @@ use crate::system::hardware::HardwareState;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use tokio::sync::{Mutex, OwnedSemaphorePermit, RwLock, Semaphore, broadcast};
use tokio::sync::{Mutex, OwnedSemaphorePermit, RwLock, Semaphore};
use tracing::{debug, error, info};
pub struct Agent {
@@ -16,7 +16,6 @@ pub struct Agent {
orchestrator: Arc<Transcoder>,
config: Arc<RwLock<Config>>,
hardware_state: HardwareState,
tx: Arc<broadcast::Sender<AlchemistEvent>>,
event_channels: Arc<EventChannels>,
semaphore: Arc<Semaphore>,
semaphore_limit: Arc<AtomicUsize>,
@@ -39,7 +38,6 @@ impl Agent {
orchestrator: Arc<Transcoder>,
config: Arc<RwLock<Config>>,
hardware_state: HardwareState,
tx: broadcast::Sender<AlchemistEvent>,
event_channels: Arc<EventChannels>,
dry_run: bool,
) -> Self {
@@ -54,7 +52,6 @@ impl Agent {
orchestrator,
config,
hardware_state,
tx: Arc::new(tx),
event_channels,
semaphore: Arc::new(Semaphore::new(concurrent_jobs)),
semaphore_limit: Arc::new(AtomicUsize::new(concurrent_jobs)),
@@ -68,7 +65,7 @@ impl Agent {
in_flight_jobs: Arc::new(AtomicUsize::new(0)),
idle_notified: Arc::new(AtomicBool::new(false)),
analyzing_boot: Arc::new(AtomicBool::new(false)),
analysis_semaphore: Arc::new(tokio::sync::Semaphore::new(1)),
analysis_semaphore: Arc::new(tokio::sync::Semaphore::new(concurrent_jobs.clamp(1, 4))),
}
}
@@ -99,15 +96,8 @@ impl Agent {
job_id: 0,
status: crate::db::JobState::Queued,
});
// Also send to legacy channel for backwards compatibility
let _ = self.tx.send(AlchemistEvent::JobStateChanged {
job_id: 0,
status: crate::db::JobState::Queued,
});
// Notify scan completed
let _ = self.event_channels.system.send(SystemEvent::ScanCompleted);
let _ = self.tx.send(AlchemistEvent::ScanCompleted);
Ok(())
}
@@ -167,12 +157,44 @@ impl Agent {
self.draining.store(false, Ordering::SeqCst);
}
/// Restart the engine loop without re-execing the process.
/// Pauses the engine, cancels all in-flight jobs, resets state flags,
/// and resumes. Cancelled jobs remain in the cancelled state.
pub async fn restart(&self) {
info!("Engine restart requested.");
self.pause();
let active_states = [
crate::db::JobState::Encoding,
crate::db::JobState::Remuxing,
crate::db::JobState::Analyzing,
crate::db::JobState::Resuming,
];
for state in &active_states {
match self.db.get_jobs_by_status(*state).await {
Ok(jobs) => {
for job in jobs {
self.orchestrator.cancel_job(job.id);
}
}
Err(e) => {
error!("Restart: failed to fetch {:?} jobs: {}", state, e);
}
}
}
self.draining.store(false, Ordering::SeqCst);
self.idle_notified.store(false, Ordering::SeqCst);
self.resume();
info!("Engine restart complete.");
}
pub fn set_boot_analyzing(&self, value: bool) {
self.analyzing_boot.store(value, Ordering::SeqCst);
if value {
info!("Boot analysis started — engine claim loop paused.");
debug!("Boot analysis started — engine claim loop paused.");
} else {
info!("Boot analysis complete — engine claim loop resumed.");
debug!("Boot analysis complete — engine claim loop resumed.");
}
}
@@ -218,7 +240,7 @@ impl Agent {
/// semaphore permit.
async fn _run_analysis_pass(&self) {
self.set_boot_analyzing(true);
info!("Auto-analysis: starting pass...");
debug!("Auto-analysis: starting pass...");
// NOTE: reset_interrupted_jobs is intentionally
// NOT called here. It is a one-time startup
@@ -244,7 +266,7 @@ impl Agent {
}
let batch_len = batch.len();
info!("Auto-analysis: analyzing {} job(s)...", batch_len);
debug!("Auto-analysis: analyzing {} job(s)...", batch_len);
for job in batch {
let pipeline = self.pipeline();
@@ -264,9 +286,9 @@ impl Agent {
self.set_boot_analyzing(false);
if total_analyzed == 0 {
info!("Auto-analysis: no jobs pending analysis.");
debug!("Auto-analysis: no jobs pending analysis.");
} else {
info!(
debug!(
"Auto-analysis: complete. {} job(s) analyzed.",
total_analyzed
);
@@ -311,6 +333,11 @@ impl Agent {
return;
}
info!(
"Updating concurrent job limit from {} to {}",
current, new_limit
);
if new_limit > current {
let mut held = self.held_permits.lock().await;
let mut increase = new_limit - current;
@@ -359,7 +386,7 @@ impl Agent {
}
pub async fn run_loop(self: Arc<Self>) {
info!("Agent loop started.");
debug!("Agent loop started.");
loop {
// Block while paused OR while boot analysis runs
if self.is_paused() || self.is_boot_analyzing() {
@@ -392,6 +419,11 @@ impl Agent {
continue;
}
};
debug!(
"Worker slot acquired (in_flight={}, limit={})",
self.in_flight_jobs.load(Ordering::SeqCst),
self.concurrent_jobs_limit()
);
// Re-check drain after permit acquisition (belt-and-suspenders)
if self.is_draining() {
@@ -403,7 +435,13 @@ impl Agent {
match self.db.claim_next_job().await {
Ok(Some(job)) => {
self.idle_notified.store(false, Ordering::SeqCst);
self.in_flight_jobs.fetch_add(1, Ordering::SeqCst);
let next_in_flight = self.in_flight_jobs.fetch_add(1, Ordering::SeqCst) + 1;
info!(
"Claimed job {} for processing (in_flight={}, limit={})",
job.id,
next_in_flight,
self.concurrent_jobs_limit()
);
let agent = self.clone();
let counter = self.in_flight_jobs.clone();
tokio::spawn(async move {
@@ -423,10 +461,15 @@ impl Agent {
});
}
Ok(None) => {
debug!(
"No queued job available (in_flight={}, limit={})",
self.in_flight_jobs.load(Ordering::SeqCst),
self.concurrent_jobs_limit()
);
if self.in_flight_jobs.load(Ordering::SeqCst) == 0
&& !self.idle_notified.swap(true, Ordering::SeqCst)
{
let _ = self.tx.send(crate::db::AlchemistEvent::EngineIdle);
let _ = self.event_channels.system.send(SystemEvent::EngineIdle);
}
drop(permit);
tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
@@ -454,7 +497,6 @@ impl Agent {
self.orchestrator.clone(),
self.config.clone(),
self.hardware_state.clone(),
self.tx.clone(),
self.event_channels.clone(),
self.dry_run,
)


@@ -2,7 +2,7 @@ use rayon::prelude::*;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};
use std::time::SystemTime;
use tracing::{debug, error, info};
use tracing::{debug, error};
use walkdir::WalkDir;
use crate::media::pipeline::DiscoveredMedia;
@@ -45,7 +45,7 @@ impl Scanner {
);
directories.into_par_iter().for_each(|(dir, recursive)| {
info!("Scanning directory: {:?} (recursive: {})", dir, recursive);
debug!("Scanning directory: {:?} (recursive: {})", dir, recursive);
let mut local_files = Vec::new();
let source_roots = source_roots.clone();
let walker = if recursive {
@@ -90,7 +90,6 @@ impl Scanner {
// Deterministic ordering
final_files.sort_by(|a, b| a.path.cmp(&b.path));
info!("Found {} candidate media files", final_files.len());
final_files
}
}


@@ -1,5 +1,5 @@
use crate::config::Config;
use crate::db::{AlchemistEvent, Db, NotificationTarget};
use crate::db::{Db, EventChannels, JobEvent, NotificationTarget, SystemEvent};
use crate::explanations::Explanation;
use chrono::Timelike;
use lettre::message::{Mailbox, Message, SinglePart, header::ContentType};
@@ -12,10 +12,11 @@ use std::net::IpAddr;
use std::sync::Arc;
use std::time::Duration;
use tokio::net::lookup_host;
use tokio::sync::{Mutex, RwLock, broadcast};
use tokio::sync::{Mutex, RwLock};
use tracing::{error, warn};
type NotificationResult<T> = Result<T, Box<dyn std::error::Error + Send + Sync>>;
const DAILY_SUMMARY_LAST_SUCCESS_KEY: &str = "notifications.daily_summary.last_success_date";
#[derive(Clone)]
pub struct NotificationManager {
@@ -86,9 +87,21 @@ fn endpoint_url_for_target(target: &NotificationTarget) -> NotificationResult<Op
}
}
fn event_key_from_event(event: &AlchemistEvent) -> Option<&'static str> {
/// Internal event type that unifies the events the notification system cares about.
#[derive(Debug, Clone, serde::Serialize)]
#[serde(tag = "type", content = "data")]
enum NotifiableEvent {
JobStateChanged {
job_id: i64,
status: crate::db::JobState,
},
ScanCompleted,
EngineIdle,
}
fn event_key(event: &NotifiableEvent) -> Option<&'static str> {
match event {
AlchemistEvent::JobStateChanged { status, .. } => match status {
NotifiableEvent::JobStateChanged { status, .. } => match status {
crate::db::JobState::Queued => Some(crate::config::NOTIFICATION_EVENT_ENCODE_QUEUED),
crate::db::JobState::Encoding | crate::db::JobState::Remuxing => {
Some(crate::config::NOTIFICATION_EVENT_ENCODE_STARTED)
@@ -99,9 +112,8 @@ fn event_key_from_event(event: &AlchemistEvent) -> Option<&'static str> {
crate::db::JobState::Failed => Some(crate::config::NOTIFICATION_EVENT_ENCODE_FAILED),
_ => None,
},
AlchemistEvent::ScanCompleted => Some(crate::config::NOTIFICATION_EVENT_SCAN_COMPLETED),
AlchemistEvent::EngineIdle => Some(crate::config::NOTIFICATION_EVENT_ENGINE_IDLE),
_ => None,
NotifiableEvent::ScanCompleted => Some(crate::config::NOTIFICATION_EVENT_SCAN_COMPLETED),
NotifiableEvent::EngineIdle => Some(crate::config::NOTIFICATION_EVENT_ENGINE_IDLE),
}
}
@@ -114,30 +126,121 @@ impl NotificationManager {
}
}
pub fn start_listener(&self, mut rx: broadcast::Receiver<AlchemistEvent>) {
/// Build an HTTP client with SSRF protections: DNS resolution timeout,
/// private-IP blocking (unless allow_local_notifications), no redirects,
/// and a 10-second request timeout.
async fn build_safe_client(&self, target: &NotificationTarget) -> NotificationResult<Client> {
if let Some(endpoint_url) = endpoint_url_for_target(target)? {
let url = Url::parse(&endpoint_url)?;
let host = url
.host_str()
.ok_or("notification endpoint host is missing")?;
let port = url.port_or_known_default().ok_or("invalid port")?;
let allow_local = self
.config
.read()
.await
.notifications
.allow_local_notifications;
if !allow_local && host.eq_ignore_ascii_case("localhost") {
return Err("localhost is not allowed as a notification endpoint".into());
}
let addr = format!("{}:{}", host, port);
let ips = tokio::time::timeout(Duration::from_secs(3), lookup_host(&addr)).await??;
let target_ip = if allow_local {
ips.into_iter()
.map(|a| a.ip())
.next()
.ok_or("no IP address found for notification endpoint")?
} else {
ips.into_iter()
.map(|a| a.ip())
.find(|ip| !is_private_ip(*ip))
.ok_or("no public IP address found for notification endpoint")?
};
Ok(Client::builder()
.timeout(Duration::from_secs(10))
.redirect(Policy::none())
.resolve(host, std::net::SocketAddr::new(target_ip, port))
.build()?)
} else {
Ok(Client::builder()
.timeout(Duration::from_secs(10))
.redirect(Policy::none())
.build()?)
}
}
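The SSRF guard above relies on an `is_private_ip` helper that this hunk doesn't show. A plausible std-only sketch, assuming the usual RFC 1918 plus loopback/link-local checks (the real implementation may cover more ranges, e.g. CGNAT or IPv6 unique-local):

```rust
use std::net::IpAddr;

// Hypothetical sketch of the `is_private_ip` check the SSRF guard calls;
// names and coverage are assumptions, not the repo's actual code.
fn is_private_ip(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => {
            v4.is_private() || v4.is_loopback() || v4.is_link_local() || v4.is_unspecified()
        }
        IpAddr::V6(v6) => v6.is_loopback() || v6.is_unspecified(),
    }
}

fn main() {
    assert!(is_private_ip("127.0.0.1".parse().unwrap()));
    assert!(is_private_ip("192.168.1.10".parse().unwrap()));
    assert!(is_private_ip("169.254.0.5".parse().unwrap()));
    assert!(!is_private_ip("93.184.216.34".parse().unwrap()));
    println!("ok");
}
```

Pinning the resolved IP with `Client::resolve` after this check is what closes the DNS-rebinding window: the request is guaranteed to hit the address that was vetted, not a fresh lookup.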
pub fn start_listener(&self, event_channels: &EventChannels) {
let manager_clone = self.clone();
let summary_manager = self.clone();
// Listen for job events (state changes are the only ones we notify on)
let mut jobs_rx = event_channels.jobs.subscribe();
let job_manager = self.clone();
tokio::spawn(async move {
loop {
match rx.recv().await {
Ok(event) => {
if let Err(e) = manager_clone.handle_event(event).await {
match jobs_rx.recv().await {
Ok(JobEvent::StateChanged { job_id, status }) => {
let event = NotifiableEvent::JobStateChanged { job_id, status };
if let Err(e) = job_manager.handle_event(event).await {
error!("Notification error: {}", e);
}
}
Err(broadcast::error::RecvError::Lagged(_)) => {
warn!("Notification listener lagged")
Ok(_) => {} // Ignore Progress, Decision, Log
Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => {
warn!("Notification job listener lagged")
}
Err(broadcast::error::RecvError::Closed) => break,
Err(tokio::sync::broadcast::error::RecvError::Closed) => break,
}
}
});
// Listen for system events (scan completed, engine idle)
let mut system_rx = event_channels.system.subscribe();
tokio::spawn(async move {
loop {
match system_rx.recv().await {
Ok(SystemEvent::ScanCompleted) => {
if let Err(e) = manager_clone
.handle_event(NotifiableEvent::ScanCompleted)
.await
{
error!("Notification error: {}", e);
}
}
Ok(SystemEvent::EngineIdle) => {
if let Err(e) = manager_clone
.handle_event(NotifiableEvent::EngineIdle)
.await
{
error!("Notification error: {}", e);
}
}
Ok(_) => {} // Ignore ScanStarted, EngineStatusChanged, HardwareStateChanged
Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => {
warn!("Notification system listener lagged")
}
Err(tokio::sync::broadcast::error::RecvError::Closed) => break,
}
}
});
tokio::spawn(async move {
let start = tokio::time::Instant::now()
+ delay_until_next_minute_boundary(chrono::Local::now());
let mut interval = tokio::time::interval_at(start, Duration::from_secs(60));
loop {
interval.tick().await;
if let Err(err) = summary_manager
.maybe_send_daily_summary_at(chrono::Local::now())
.await
{
error!("Daily summary notification error: {}", err);
}
}
@@ -145,14 +248,14 @@ impl NotificationManager {
}
pub async fn send_test(&self, target: &NotificationTarget) -> NotificationResult<()> {
let event = NotifiableEvent::JobStateChanged {
job_id: 0,
status: crate::db::JobState::Completed,
};
self.send(target, &event).await
}
async fn handle_event(&self, event: NotifiableEvent) -> NotificationResult<()> {
let targets = match self.db.get_notification_targets().await {
Ok(t) => t,
Err(e) => {
@@ -165,7 +268,7 @@ impl NotificationManager {
return Ok(());
}
let event_key = match event_key(&event) {
Some(event_key) => event_key,
None => return Ok(()),
};
@@ -205,9 +308,11 @@ impl NotificationManager {
Ok(())
}
async fn maybe_send_daily_summary_at(
&self,
now: chrono::DateTime<chrono::Local>,
) -> NotificationResult<()> {
let config = self.config.read().await.clone();
let parts = config
.notifications
.daily_summary_time_local
@@ -218,97 +323,113 @@ impl NotificationManager {
}
let hour = parts[0].parse::<u32>().unwrap_or(9);
let minute = parts[1].parse::<u32>().unwrap_or(0);
let Some(scheduled_at) = now
.with_hour(hour)
.and_then(|value| value.with_minute(minute))
.and_then(|value| value.with_second(0))
.and_then(|value| value.with_nanosecond(0))
else {
return Ok(());
};
if now < scheduled_at {
return Ok(());
}
let summary_key = now.format("%Y-%m-%d").to_string();
if self.daily_summary_already_sent(&summary_key).await? {
return Ok(());
}
let targets = self.db.get_notification_targets().await?;
let mut eligible_targets = Vec::new();
for target in targets {
if !target.enabled {
continue;
}
let allowed: Vec<String> = match serde_json::from_str(&target.events) {
Ok(events) => events,
Err(err) => {
warn!(
"Failed to parse events for notification target '{}': {}",
target.name, err
);
Vec::new()
}
};
let normalized_allowed = crate::config::normalize_notification_events(&allowed);
if normalized_allowed
.iter()
.any(|event| event == crate::config::NOTIFICATION_EVENT_DAILY_SUMMARY)
{
eligible_targets.push(target);
}
}
if eligible_targets.is_empty() {
self.mark_daily_summary_sent(&summary_key).await?;
return Ok(());
}
let summary = self.db.get_daily_summary_stats().await?;
let mut delivered = 0usize;
for target in eligible_targets {
if let Err(err) = self.send_daily_summary_target(&target, &summary).await {
error!(
"Failed to send daily summary to target '{}': {}",
target.name, err
);
continue;
}
delivered += 1;
}
if delivered > 0 {
self.mark_daily_summary_sent(&summary_key).await?;
}
Ok(())
}
async fn daily_summary_already_sent(&self, summary_key: &str) -> NotificationResult<bool> {
{
let last_sent = self.daily_summary_last_sent.lock().await;
if last_sent.as_deref() == Some(summary_key) {
return Ok(true);
}
}
let persisted = self
.db
.get_preference(DAILY_SUMMARY_LAST_SUCCESS_KEY)
.await?;
if persisted.as_deref() == Some(summary_key) {
let mut last_sent = self.daily_summary_last_sent.lock().await;
*last_sent = Some(summary_key.to_string());
return Ok(true);
}
Ok(false)
}
async fn mark_daily_summary_sent(&self, summary_key: &str) -> NotificationResult<()> {
self.db
.set_preference(DAILY_SUMMARY_LAST_SUCCESS_KEY, summary_key)
.await?;
let mut last_sent = self.daily_summary_last_sent.lock().await;
*last_sent = Some(summary_key.to_string());
Ok(())
}
async fn send(
&self,
target: &NotificationTarget,
event: &NotifiableEvent,
) -> NotificationResult<()> {
let event_key = event_key(event).unwrap_or("unknown");
let client = self.build_safe_client(target).await?;
let (decision_explanation, failure_explanation) = match event {
NotifiableEvent::JobStateChanged { job_id, status } => {
let decision_explanation = self
.db
.get_job_decision_explanation(*job_id)
@@ -423,25 +544,24 @@ impl NotificationManager {
fn message_for_event(
&self,
event: &NotifiableEvent,
decision_explanation: Option<&Explanation>,
failure_explanation: Option<&Explanation>,
) -> String {
match event {
NotifiableEvent::JobStateChanged { job_id, status } => self.notification_message(
*job_id,
&status.to_string(),
decision_explanation,
failure_explanation,
),
NotifiableEvent::ScanCompleted => {
"Library scan completed. Review the queue for newly discovered work.".to_string()
}
NotifiableEvent::EngineIdle => {
"The engine is idle. There are no active jobs and no queued work ready to run."
.to_string()
}
}
}
@@ -472,7 +592,7 @@ impl NotificationManager {
&self,
client: &Client,
target: &NotificationTarget,
event: &NotifiableEvent,
event_key: &str,
decision_explanation: Option<&Explanation>,
failure_explanation: Option<&Explanation>,
@@ -511,7 +631,7 @@ impl NotificationManager {
&self,
client: &Client,
target: &NotificationTarget,
event: &NotifiableEvent,
_event_key: &str,
decision_explanation: Option<&Explanation>,
failure_explanation: Option<&Explanation>,
@@ -536,7 +656,7 @@ impl NotificationManager {
&self,
client: &Client,
target: &NotificationTarget,
event: &NotifiableEvent,
event_key: &str,
decision_explanation: Option<&Explanation>,
failure_explanation: Option<&Explanation>,
@@ -550,16 +670,21 @@ impl NotificationManager {
_ => 2,
};
        let req = client
            .post(format!(
                "{}/message",
                config.server_url.trim_end_matches('/')
            ))
            .json(&json!({
                "title": "Alchemist",
                "message": message,
                "priority": priority,
                "extras": {
                    "client::display": {
                        "contentType": "text/plain"
                    }
                }
            }));
req.header("X-Gotify-Key", config.app_token)
.send()
.await?
@@ -571,7 +696,7 @@ impl NotificationManager {
&self,
client: &Client,
target: &NotificationTarget,
event: &NotifiableEvent,
event_key: &str,
decision_explanation: Option<&Explanation>,
failure_explanation: Option<&Explanation>,
@@ -601,7 +726,7 @@ impl NotificationManager {
&self,
client: &Client,
target: &NotificationTarget,
event: &NotifiableEvent,
_event_key: &str,
decision_explanation: Option<&Explanation>,
failure_explanation: Option<&Explanation>,
@@ -627,7 +752,7 @@ impl NotificationManager {
async fn send_email(
&self,
target: &NotificationTarget,
event: &NotifiableEvent,
_event_key: &str,
decision_explanation: Option<&Explanation>,
failure_explanation: Option<&Explanation>,
@@ -677,10 +802,11 @@ impl NotificationManager {
summary: &crate::db::DailySummaryStats,
) -> NotificationResult<()> {
let message = self.daily_summary_message(summary);
let client = self.build_safe_client(target).await?;
match target.target_type.as_str() {
"discord_webhook" => {
let config = parse_target_config::<DiscordWebhookConfig>(target)?;
client
.post(config.webhook_url)
.json(&json!({
"embeds": [{
@@ -696,7 +822,7 @@ impl NotificationManager {
}
"discord_bot" => {
let config = parse_target_config::<DiscordBotConfig>(target)?;
client
.post(format!(
"https://discord.com/api/v10/channels/{}/messages",
config.channel_id
@@ -709,7 +835,7 @@ impl NotificationManager {
}
"gotify" => {
let config = parse_target_config::<GotifyConfig>(target)?;
client
.post(config.server_url)
.header("X-Gotify-Key", config.app_token)
.json(&json!({
@@ -723,7 +849,7 @@ impl NotificationManager {
}
"webhook" => {
let config = parse_target_config::<WebhookConfig>(target)?;
let mut req = client.post(config.url).json(&json!({
"event": crate::config::NOTIFICATION_EVENT_DAILY_SUMMARY,
"summary": summary,
"message": message,
@@ -736,7 +862,7 @@ impl NotificationManager {
}
"telegram" => {
let config = parse_target_config::<TelegramConfig>(target)?;
client
.post(format!(
"https://api.telegram.org/bot{}/sendMessage",
config.bot_token
@@ -791,6 +917,17 @@ impl NotificationManager {
}
}
fn delay_until_next_minute_boundary(now: chrono::DateTime<chrono::Local>) -> Duration {
let remaining_seconds = 60_u64.saturating_sub(now.second() as u64).max(1);
let mut delay = Duration::from_secs(remaining_seconds);
if now.nanosecond() > 0 {
delay = delay
.checked_sub(Duration::from_nanos(now.nanosecond() as u64))
.unwrap_or_else(|| Duration::from_millis(1));
}
delay
}
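`delay_until_next_minute_boundary` can be exercised in isolation. This hypothetical variant takes the second/nanosecond components directly instead of a `chrono::DateTime`, but performs the same arithmetic:

```rust
use std::time::Duration;

// Same boundary math as delay_until_next_minute_boundary, with the timestamp
// components passed in directly (stand-in helper for illustration).
fn boundary_delay(second: u64, nanosecond: u64) -> Duration {
    let remaining_seconds = 60_u64.saturating_sub(second).max(1);
    let mut delay = Duration::from_secs(remaining_seconds);
    if nanosecond > 0 {
        delay = delay
            .checked_sub(Duration::from_nanos(nanosecond))
            .unwrap_or_else(|| Duration::from_millis(1));
    }
    delay
}
```

At `:59.500` within a minute this yields 500 ms, so the first `interval_at` tick lands on the next `:00` boundary.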
async fn _unused_ensure_public_endpoint(raw: &str) -> Result<(), Box<dyn std::error::Error>> {
let url = Url::parse(raw)?;
let host = match url.host_str() {
@@ -852,9 +989,38 @@ fn is_private_ip(ip: IpAddr) -> bool {
mod tests {
use super::*;
use crate::db::JobState;
use std::sync::{
Arc,
atomic::{AtomicUsize, Ordering},
};
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;
fn scheduled_test_time(hour: u32, minute: u32) -> chrono::DateTime<chrono::Local> {
chrono::Local::now()
.with_hour(hour)
.and_then(|value| value.with_minute(minute))
.and_then(|value| value.with_second(0))
.and_then(|value| value.with_nanosecond(0))
.unwrap_or_else(chrono::Local::now)
}
async fn add_daily_summary_webhook_target(
db: &Db,
addr: std::net::SocketAddr,
) -> NotificationResult<()> {
let config_json = serde_json::json!({ "url": format!("http://{}", addr) }).to_string();
db.add_notification_target(
"daily-summary",
"webhook",
&config_json,
"[\"daily.summary\"]",
true,
)
.await?;
Ok(())
}
#[tokio::test]
async fn test_webhook_errors_on_non_success()
-> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
@@ -896,7 +1062,7 @@ mod tests {
enabled: true,
created_at: chrono::Utc::now(),
};
let event = NotifiableEvent::JobStateChanged {
job_id: 1,
status: crate::db::JobState::Failed,
};
@@ -976,7 +1142,7 @@ mod tests {
enabled: true,
created_at: chrono::Utc::now(),
};
let event = NotifiableEvent::JobStateChanged {
job_id: job.id,
status: JobState::Failed,
};
@@ -1001,4 +1167,154 @@ mod tests {
let _ = std::fs::remove_file(db_path);
Ok(())
}
#[tokio::test]
async fn daily_summary_retries_after_failed_delivery_and_marks_success()
-> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
let mut db_path = std::env::temp_dir();
let token: u64 = rand::random();
db_path.push(format!("alchemist_notifications_daily_retry_{}.db", token));
let db = Db::new(db_path.to_string_lossy().as_ref()).await?;
let mut test_config = crate::config::Config::default();
test_config.notifications.allow_local_notifications = true;
test_config.notifications.daily_summary_time_local = "09:00".to_string();
let config = Arc::new(RwLock::new(test_config));
let manager = NotificationManager::new(db.clone(), config);
let listener = match TcpListener::bind("127.0.0.1:0").await {
Ok(listener) => listener,
Err(err) if err.kind() == std::io::ErrorKind::PermissionDenied => {
return Ok(());
}
Err(err) => return Err(err.into()),
};
let addr = listener.local_addr()?;
add_daily_summary_webhook_target(&db, addr).await?;
let request_count = Arc::new(AtomicUsize::new(0));
let request_count_task = request_count.clone();
let listener_task = tokio::spawn(async move {
loop {
let Ok((mut socket, _)) = listener.accept().await else {
break;
};
let mut buf = [0u8; 1024];
let _ = socket.read(&mut buf).await;
let index = request_count_task.fetch_add(1, Ordering::SeqCst);
let response = if index == 0 {
"HTTP/1.1 500 Internal Server Error\r\nContent-Length: 0\r\n\r\n"
} else {
"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"
};
let _ = socket.write_all(response.as_bytes()).await;
}
});
let first_now = scheduled_test_time(9, 5);
manager.maybe_send_daily_summary_at(first_now).await?;
assert_eq!(request_count.load(Ordering::SeqCst), 1);
assert_eq!(
db.get_preference(DAILY_SUMMARY_LAST_SUCCESS_KEY).await?,
None
);
manager
.maybe_send_daily_summary_at(first_now + chrono::Duration::minutes(1))
.await?;
assert_eq!(request_count.load(Ordering::SeqCst), 2);
assert_eq!(
db.get_preference(DAILY_SUMMARY_LAST_SUCCESS_KEY).await?,
Some(first_now.format("%Y-%m-%d").to_string())
);
listener_task.abort();
let _ = std::fs::remove_file(db_path);
Ok(())
}
#[tokio::test]
async fn daily_summary_is_restart_safe_after_successful_delivery()
-> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
let mut db_path = std::env::temp_dir();
let token: u64 = rand::random();
db_path.push(format!(
"alchemist_notifications_daily_restart_{}.db",
token
));
let db = Db::new(db_path.to_string_lossy().as_ref()).await?;
let mut test_config = crate::config::Config::default();
test_config.notifications.allow_local_notifications = true;
test_config.notifications.daily_summary_time_local = "09:00".to_string();
let config = Arc::new(RwLock::new(test_config));
let listener = match TcpListener::bind("127.0.0.1:0").await {
Ok(listener) => listener,
Err(err) if err.kind() == std::io::ErrorKind::PermissionDenied => {
return Ok(());
}
Err(err) => return Err(err.into()),
};
let addr = listener.local_addr()?;
add_daily_summary_webhook_target(&db, addr).await?;
let request_count = Arc::new(AtomicUsize::new(0));
let request_count_task = request_count.clone();
let listener_task = tokio::spawn(async move {
loop {
let Ok((mut socket, _)) = listener.accept().await else {
break;
};
let mut buf = [0u8; 1024];
let _ = socket.read(&mut buf).await;
request_count_task.fetch_add(1, Ordering::SeqCst);
let _ = socket
.write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
.await;
}
});
let first_now = scheduled_test_time(9, 2);
let manager = NotificationManager::new(db.clone(), config.clone());
manager.maybe_send_daily_summary_at(first_now).await?;
assert_eq!(request_count.load(Ordering::SeqCst), 1);
let restarted_manager = NotificationManager::new(db.clone(), config.clone());
restarted_manager
.maybe_send_daily_summary_at(first_now + chrono::Duration::minutes(10))
.await?;
assert_eq!(request_count.load(Ordering::SeqCst), 1);
listener_task.abort();
let _ = std::fs::remove_file(db_path);
Ok(())
}
#[tokio::test]
async fn daily_summary_marks_day_sent_when_no_targets_are_eligible()
-> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
let mut db_path = std::env::temp_dir();
let token: u64 = rand::random();
db_path.push(format!(
"alchemist_notifications_daily_no_targets_{}.db",
token
));
let db = Db::new(db_path.to_string_lossy().as_ref()).await?;
let mut test_config = crate::config::Config::default();
test_config.notifications.daily_summary_time_local = "09:00".to_string();
let config = Arc::new(RwLock::new(test_config));
let manager = NotificationManager::new(db.clone(), config);
let now = scheduled_test_time(9, 1);
manager.maybe_send_daily_summary_at(now).await?;
assert_eq!(
db.get_preference(DAILY_SUMMARY_LAST_SUCCESS_KEY).await?,
Some(now.format("%Y-%m-%d").to_string())
);
let _ = std::fs::remove_file(db_path);
Ok(())
}
}

View File

@@ -13,8 +13,11 @@ use tokio::sync::oneshot;
use tracing::{error, info, warn};
pub struct Transcoder {
// std::sync::Mutex is intentional: critical sections never cross .await boundaries,
// so there is no deadlock risk. Contention is negligible (≤ concurrent_jobs entries).
cancel_channels: Arc<Mutex<HashMap<i64, oneshot::Sender<()>>>>,
pending_cancels: Arc<Mutex<HashSet<i64>>>,
pub(crate) cancel_requested: Arc<tokio::sync::RwLock<HashSet<i64>>>,
}
pub struct TranscodeRequest<'a> {
@@ -26,6 +29,8 @@ pub struct TranscodeRequest<'a> {
pub metadata: &'a crate::media::pipeline::MediaMetadata,
pub plan: &'a TranscodePlan,
pub observer: Option<Arc<dyn ExecutionObserver>>,
pub clip_start_seconds: Option<f64>,
pub clip_duration_seconds: Option<f64>,
}
#[allow(async_fn_in_trait)]
@@ -78,9 +83,22 @@ impl Transcoder {
Self {
cancel_channels: Arc::new(Mutex::new(HashMap::new())),
pending_cancels: Arc::new(Mutex::new(HashSet::new())),
cancel_requested: Arc::new(tokio::sync::RwLock::new(HashSet::new())),
}
}
pub async fn is_cancel_requested(&self, job_id: i64) -> bool {
self.cancel_requested.read().await.contains(&job_id)
}
pub async fn remove_cancel_request(&self, job_id: i64) {
self.cancel_requested.write().await.remove(&job_id);
}
pub async fn add_cancel_request(&self, job_id: i64) {
self.cancel_requested.write().await.insert(job_id);
}
pub fn cancel_job(&self, job_id: i64) -> bool {
let mut channels = match self.cancel_channels.lock() {
Ok(channels) => channels,
@@ -171,6 +189,7 @@ impl Transcoder {
request.plan,
)
.with_hardware(request.hw_info)
.with_clip(request.clip_start_seconds, request.clip_duration_seconds)
.build()?;
info!("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━");
@@ -234,6 +253,7 @@ impl Transcoder {
total_duration: Option<f64>,
) -> Result<()> {
info!("Executing FFmpeg command: {:?}", cmd);
let ffmpeg_start = std::time::Instant::now();
cmd.stdout(Stdio::null()).stderr(Stdio::piped());
if let Some(id) = job_id {
@@ -286,15 +306,21 @@ impl Transcoder {
}
}
info!(
"Job {:?}: FFmpeg spawned ({:.3}s since command start)",
job_id,
ffmpeg_start.elapsed().as_secs_f64()
);
let mut reader = BufReader::new(stderr).lines();
let mut kill_rx = kill_rx;
let mut killed = false;
let mut last_lines = std::collections::VecDeque::with_capacity(20);
let mut progress_state = FFmpegProgressState::default();
let mut first_frame_logged = false;
loop {
tokio::select! {
line_res_timeout = tokio::time::timeout(tokio::time::Duration::from_secs(120), reader.next_line()) => {
match line_res_timeout {
Ok(line_res) => match line_res {
Ok(Some(line)) => {
@@ -308,11 +334,28 @@ impl Transcoder {
last_lines.pop_front();
}
// Detect VideoToolbox software fallback
if line.contains("Using software encoder") || line.contains("using software encoder") {
warn!(
"Job {:?}: VideoToolbox falling back to software encoder ({}s elapsed)",
job_id,
ffmpeg_start.elapsed().as_secs_f64()
);
}
if let Some(observer) = observer.as_ref() {
observer.on_log(line.clone()).await;
if let Some(total_duration) = total_duration {
if let Some(progress) = progress_state.ingest_line(&line) {
if !first_frame_logged {
first_frame_logged = true;
info!(
"Job {:?}: first progress event ({:.3}s since spawn)",
job_id,
ffmpeg_start.elapsed().as_secs_f64()
);
}
observer.on_progress(progress, total_duration).await;
}
}
@@ -325,7 +368,7 @@ impl Transcoder {
}
},
Err(_) => {
error!("Job {:?} stalled: No output from FFmpeg for 2 minutes. Killing process...", job_id);
let _ = child.kill().await;
killed = true;
if let Some(id) = job_id {
@@ -379,7 +422,11 @@ impl Transcoder {
}
if status.success() {
info!(
"Job {:?}: FFmpeg completed successfully ({:.3}s total)",
job_id,
ffmpeg_start.elapsed().as_secs_f64()
);
Ok(())
} else {
let error_detail = last_lines.make_contiguous().join("\n");

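The progress plumbing above (`FFmpegProgressState::ingest_line` feeding `observer.on_progress`) keys off FFmpeg's stderr status lines. A hedged sketch of the `time=` field parsing such a state machine typically performs (the helper name and format handling are illustrative, not this crate's implementation):

```rust
// Illustrative parser for the `time=HH:MM:SS.cs` field FFmpeg prints on its
// stderr progress lines; returns elapsed seconds when the field is present
// and well-formed, None otherwise (e.g. `time=N/A`).
fn parse_ffmpeg_time(line: &str) -> Option<f64> {
    let idx = line.find("time=")?;
    let field: String = line[idx + 5..]
        .chars()
        .take_while(|c| !c.is_whitespace())
        .collect();
    let mut parts = field.split(':');
    let h: f64 = parts.next()?.parse().ok()?;
    let m: f64 = parts.next()?.parse().ok()?;
    let s: f64 = parts.next()?.parse().ok()?;
    Some(h * 3600.0 + m * 60.0 + s)
}
```

Dividing the parsed position by the probed total duration gives the fraction passed to `on_progress`.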
View File

@@ -15,6 +15,7 @@ use chrono::Utc;
use rand::Rng;
use std::net::SocketAddr;
use std::sync::Arc;
use tracing::error;
#[derive(serde::Deserialize)]
pub(crate) struct LoginPayload {
@@ -32,11 +33,13 @@ pub(crate) async fn login_handler(
}
let mut is_valid = true;
let user_result = match state.db.get_user_by_username(&payload.username).await {
Ok(user) => user,
Err(err) => {
error!("Login lookup failed for '{}': {}", payload.username, err);
return (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response();
}
};
// A valid argon2 static hash of a random string used to simulate work and equalize timing
const DUMMY_HASH: &str = "$argon2id$v=19$m=19456,t=2,p=1$c2FsdHN0cmluZzEyMzQ1Ng$1tJ2tA109qj15m3u5+kS/sX5X1UoZ6/H9b/30tX9N/g";

View File

@@ -10,7 +10,11 @@ use axum::{
response::{IntoResponse, Response},
};
use serde::{Deserialize, Serialize};
use std::{
path::{Path as FsPath, PathBuf},
sync::Arc,
time::SystemTime,
};
#[derive(Serialize)]
struct BlockedJob {
@@ -24,6 +28,17 @@ struct BlockedJobsResponse {
blocked: Vec<BlockedJob>,
}
#[derive(Deserialize)]
pub(crate) struct EnqueueJobPayload {
path: String,
}
#[derive(Serialize)]
pub(crate) struct EnqueueJobResponse {
enqueued: bool,
message: String,
}
pub(crate) fn blocked_jobs_response(message: impl Into<String>, blocked: &[Job]) -> Response {
let payload = BlockedJobsResponse {
message: message.into(),
@@ -38,13 +53,175 @@ pub(crate) fn blocked_jobs_response(message: impl Into<String>, blocked: &[Job])
(StatusCode::CONFLICT, axum::Json(payload)).into_response()
}
fn resolve_source_root(path: &FsPath, watch_dirs: &[crate::db::WatchDir]) -> Option<PathBuf> {
watch_dirs
.iter()
.map(|watch_dir| PathBuf::from(&watch_dir.path))
.filter(|watch_dir| path.starts_with(watch_dir))
.max_by_key(|watch_dir| watch_dir.components().count())
}
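`resolve_source_root` picks the deepest configured watch dir that contains the file, so nested watch dirs resolve to the most specific root. A self-contained sketch of the same longest-prefix rule (names are illustrative):

```rust
use std::path::{Path, PathBuf};

// Among the watch roots that are a prefix of `path`, return the deepest one,
// mirroring resolve_source_root's max_by_key on component count.
fn longest_watch_root(path: &Path, roots: &[&str]) -> Option<PathBuf> {
    roots
        .iter()
        .map(PathBuf::from)
        .filter(|root| path.starts_with(root))
        .max_by_key(|root| root.components().count())
}
```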
async fn purge_resume_sessions_for_jobs(state: &AppState, ids: &[i64]) {
let sessions = match state.db.get_resume_sessions_by_job_ids(ids).await {
Ok(sessions) => sessions,
Err(err) => {
tracing::warn!("Failed to load resume sessions for purge: {}", err);
return;
}
};
for session in sessions {
if let Err(err) = state.db.delete_resume_session(session.job_id).await {
tracing::warn!(
job_id = session.job_id,
"Failed to delete resume session rows: {err}"
);
continue;
}
let temp_dir = PathBuf::from(&session.temp_dir);
if temp_dir.exists() {
if let Err(err) = tokio::fs::remove_dir_all(&temp_dir).await {
tracing::warn!(
job_id = session.job_id,
path = %temp_dir.display(),
"Failed to remove resume temp dir: {err}"
);
}
}
}
}
pub(crate) async fn enqueue_job_handler(
State(state): State<Arc<AppState>>,
axum::Json(payload): axum::Json<EnqueueJobPayload>,
) -> impl IntoResponse {
let submitted_path = payload.path.trim();
if submitted_path.is_empty() {
return (
StatusCode::BAD_REQUEST,
axum::Json(EnqueueJobResponse {
enqueued: false,
message: "Path must not be empty.".to_string(),
}),
)
.into_response();
}
let requested_path = PathBuf::from(submitted_path);
if !requested_path.is_absolute() {
return (
StatusCode::BAD_REQUEST,
axum::Json(EnqueueJobResponse {
enqueued: false,
message: "Path must be absolute.".to_string(),
}),
)
.into_response();
}
let canonical_path = match std::fs::canonicalize(&requested_path) {
Ok(path) => path,
Err(err) => {
return (
StatusCode::BAD_REQUEST,
axum::Json(EnqueueJobResponse {
enqueued: false,
message: format!("Unable to resolve path: {err}"),
}),
)
.into_response();
}
};
let metadata = match std::fs::metadata(&canonical_path) {
Ok(metadata) => metadata,
Err(err) => {
return (
StatusCode::BAD_REQUEST,
axum::Json(EnqueueJobResponse {
enqueued: false,
message: format!("Unable to read file metadata: {err}"),
}),
)
.into_response();
}
};
if !metadata.is_file() {
return (
StatusCode::BAD_REQUEST,
axum::Json(EnqueueJobResponse {
enqueued: false,
message: "Path must point to a file.".to_string(),
}),
)
.into_response();
}
let extension = canonical_path
.extension()
.and_then(|value| value.to_str())
.map(|value| value.to_ascii_lowercase());
let supported = crate::media::scanner::Scanner::new().extensions;
if extension
.as_deref()
.is_none_or(|value| !supported.iter().any(|candidate| candidate == value))
{
return (
StatusCode::BAD_REQUEST,
axum::Json(EnqueueJobResponse {
enqueued: false,
message: "File type is not supported for enqueue.".to_string(),
}),
)
.into_response();
}
let watch_dirs = match state.db.get_watch_dirs().await {
Ok(watch_dirs) => watch_dirs,
Err(err) => {
return (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response();
}
};
let discovered = crate::media::pipeline::DiscoveredMedia {
path: canonical_path.clone(),
mtime: metadata.modified().unwrap_or(SystemTime::UNIX_EPOCH),
source_root: resolve_source_root(&canonical_path, &watch_dirs),
};
match crate::media::pipeline::enqueue_discovered_with_db(state.db.as_ref(), discovered).await {
Ok(true) => (
StatusCode::OK,
axum::Json(EnqueueJobResponse {
enqueued: true,
message: format!("Enqueued {}.", canonical_path.display()),
}),
)
.into_response(),
Ok(false) => (
StatusCode::OK,
axum::Json(EnqueueJobResponse {
enqueued: false,
message:
"File was not enqueued because it matched existing output or dedupe rules."
.to_string(),
}),
)
.into_response(),
Err(err) => (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response(),
}
}
pub(crate) async fn request_job_cancel(state: &AppState, job: &Job) -> Result<bool> {
state.transcoder.add_cancel_request(job.id).await;
match job.status {
JobState::Queued => {
state
.db
.update_job_status(job.id, JobState::Cancelled)
.await?;
state.transcoder.remove_cancel_request(job.id).await;
Ok(true)
}
JobState::Analyzing | JobState::Resuming => {
@@ -55,6 +232,7 @@ pub(crate) async fn request_job_cancel(state: &AppState, job: &Job) -> Result<bo
.db
.update_job_status(job.id, JobState::Cancelled)
.await?;
state.transcoder.remove_cancel_request(job.id).await;
Ok(true)
}
JobState::Encoding | JobState::Remuxing => Ok(state.transcoder.cancel_job(job.id)),
@@ -162,17 +340,49 @@ pub(crate) async fn batch_jobs_handler(
match payload.action.as_str() {
"cancel" => {
// Add all cancel requests first (in-memory, cheap).
for job in &jobs {
state.transcoder.add_cancel_request(job.id).await;
}
// Collect IDs that can be immediately set to Cancelled in the DB.
let mut immediate_ids: Vec<i64> = Vec::new();
let mut active_count: u64 = 0;
for job in &jobs {
match job.status {
JobState::Queued => {
immediate_ids.push(job.id);
}
JobState::Analyzing | JobState::Resuming
if state.transcoder.cancel_job(job.id) =>
{
immediate_ids.push(job.id);
}
JobState::Encoding | JobState::Remuxing
if state.transcoder.cancel_job(job.id) =>
{
active_count += 1;
}
_ => {}
}
}
// Single batch DB update instead of N individual queries.
if !immediate_ids.is_empty() {
match state.db.batch_cancel_jobs(&immediate_ids).await {
Ok(_) => {}
Err(e) => {
return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response();
}
}
// Remove cancel requests for jobs already resolved in DB.
for id in &immediate_ids {
state.transcoder.remove_cancel_request(*id).await;
}
}
let count = immediate_ids.len() as u64 + active_count;
axum::Json(serde_json::json!({ "count": count })).into_response()
}
"delete" | "restart" => {
@@ -191,7 +401,12 @@ pub(crate) async fn batch_jobs_handler(
};
match result {
Ok(count) => {
if payload.action == "delete" {
purge_resume_sessions_for_jobs(state.as_ref(), &payload.ids).await;
}
axum::Json(serde_json::json!({ "count": count })).into_response()
}
Err(e) => (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
}
}
@@ -235,8 +450,13 @@ pub(crate) async fn restart_failed_handler(
pub(crate) async fn clear_completed_handler(
State(state): State<Arc<AppState>>,
) -> impl IntoResponse {
let completed_job_ids = match state.db.get_jobs_by_status(JobState::Completed).await {
Ok(jobs) => jobs.into_iter().map(|job| job.id).collect::<Vec<_>>(),
Err(e) => return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
};
match state.db.clear_completed_jobs().await {
Ok(count) => {
purge_resume_sessions_for_jobs(state.as_ref(), &completed_job_ids).await;
let message = if count == 0 {
"No completed jobs were waiting to be cleared.".to_string()
} else if count == 1 {
@@ -289,7 +509,10 @@ pub(crate) async fn delete_job_handler(
state.transcoder.cancel_job(id);
match state.db.delete_job(id).await {
Ok(_) => StatusCode::OK.into_response(),
Ok(_) => {
purge_resume_sessions_for_jobs(state.as_ref(), &[id]).await;
StatusCode::OK.into_response()
}
Err(e) if is_row_not_found(&e) => StatusCode::NOT_FOUND.into_response(),
Err(e) => (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
}
@@ -325,10 +548,12 @@ pub(crate) struct JobDetailResponse {
job: Job,
metadata: Option<crate::media::pipeline::MediaMetadata>,
encode_stats: Option<crate::db::DetailedEncodeStats>,
encode_attempts: Vec<crate::db::EncodeAttempt>,
job_logs: Vec<crate::db::LogEntry>,
job_failure_summary: Option<String>,
decision_explanation: Option<Explanation>,
failure_explanation: Option<Explanation>,
queue_position: Option<u32>,
}
pub(crate) async fn get_job_detail_handler(
@@ -341,28 +566,18 @@ pub(crate) async fn get_job_detail_handler(
Err(e) => return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
};
let metadata = job.input_metadata();
// Try to get encode stats (using the subquery result or a specific query)
// For now we'll just query the encode_stats table if completed
let encode_stats = if job.status == JobState::Completed {
match state.db.get_encode_stats_by_job_id(id).await {
Ok(stats) => Some(stats),
Err(err) if is_row_not_found(&err) => None,
Err(err) => {
return (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response();
}
}
} else {
None
};
@@ -403,14 +618,32 @@ pub(crate) async fn get_job_detail_handler(
(None, None)
};
let encode_attempts = match state.db.get_encode_attempts_by_job(id).await {
Ok(attempts) => attempts,
Err(err) => return (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response(),
};
let queue_position = if job.status == JobState::Queued {
match state.db.get_queue_position(id).await {
Ok(position) => position,
Err(err) => {
return (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response();
}
}
} else {
None
};
axum::Json(JobDetailResponse {
job,
metadata,
encode_stats,
encode_attempts,
job_logs,
job_failure_summary,
decision_explanation,
failure_explanation,
queue_position,
})
.into_response()
}
@@ -439,6 +672,13 @@ pub(crate) async fn stop_drain_handler(State(state): State<Arc<AppState>>) -> im
axum::Json(serde_json::json!({ "status": "running" }))
}
pub(crate) async fn restart_engine_handler(
State(state): State<Arc<AppState>>,
) -> impl IntoResponse {
state.agent.restart().await;
axum::Json(serde_json::json!({ "status": "running" }))
}
pub(crate) async fn engine_status_handler(State(state): State<Arc<AppState>>) -> impl IntoResponse {
axum::Json(serde_json::json!({
"status": if state.agent.is_draining() {


@@ -76,6 +76,28 @@ pub(crate) async fn auth_middleware(
let path = req.uri().path();
let method = req.method().clone();
if state.setup_required.load(Ordering::Relaxed) && path != "/api/health" && path != "/api/ready"
{
let allowed = if let Some(expected_token) = &state.setup_token {
// Token mode: require `?token=<value>` regardless of client IP.
req.uri()
.query()
.and_then(|q| q.split('&').find_map(|pair| pair.strip_prefix("token=")))
.map(|t| t == expected_token.as_str())
.unwrap_or(false)
} else {
request_is_lan(&req, &state.trusted_proxies)
};
if !allowed {
return (
StatusCode::FORBIDDEN,
"Alchemist setup is only available from the local network",
)
.into_response();
}
}
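The setup gate above accepts either a LAN client or a matching `?token=` query parameter. The token comparison can be sketched in isolation (the helper name `query_token_matches` is illustrative, not part of the codebase):

```rust
// Sketch of the token check in the setup gate: scan the raw query
// string for a `token=` pair and compare it against the expected value.
// Any missing query, missing pair, or mismatch denies access.
fn query_token_matches(query: Option<&str>, expected: &str) -> bool {
    query
        .and_then(|q| q.split('&').find_map(|pair| pair.strip_prefix("token=")))
        .map(|t| t == expected)
        .unwrap_or(false)
}
```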
// 1. API Protection: Only lock down /api routes
if path.starts_with("/api") {
// Public API endpoints
@@ -92,28 +114,7 @@ pub(crate) async fn auth_middleware(
return next.run(req).await;
}
if state.setup_required.load(Ordering::Relaxed) && path.starts_with("/api/fs/") {
// Only allow filesystem browsing from localhost
// during setup — no account exists yet so we
// cannot authenticate the caller.
let connect_info = req.extensions().get::<ConnectInfo<SocketAddr>>();
let is_local = connect_info
.map(|ci| {
let ip = ci.0.ip();
ip.is_loopback()
})
.unwrap_or(false);
if is_local {
return next.run(req).await;
}
// Non-local request during setup -> 403
return Response::builder()
.status(StatusCode::FORBIDDEN)
.body(axum::body::Body::from(
"Filesystem browsing is only available \
from localhost during setup",
))
.unwrap_or_else(|_| StatusCode::FORBIDDEN.into_response());
return next.run(req).await;
}
if state.setup_required.load(Ordering::Relaxed) && path == "/api/settings/bundle" {
return next.run(req).await;
@@ -157,6 +158,31 @@ pub(crate) async fn auth_middleware(
next.run(req).await
}
fn request_is_lan(req: &Request, trusted_proxies: &[IpAddr]) -> bool {
let direct_peer = req
.extensions()
.get::<ConnectInfo<SocketAddr>>()
.map(|info| info.0.ip());
let resolved = request_ip(req, trusted_proxies);
// If resolved IP differs from direct peer, forwarded headers were used.
// Warn operators so misconfigured proxies surface in logs.
if let (Some(peer), Some(resolved_ip)) = (direct_peer, resolved) {
if peer != resolved_ip && is_lan_ip(resolved_ip) {
tracing::warn!(
peer_ip = %peer,
resolved_ip = %resolved_ip,
"Setup gate: access permitted via forwarded headers. \
Verify your reverse proxy is forwarding client IPs correctly \
(X-Forwarded-For / X-Real-IP). Misconfigured proxies may \
expose setup to public traffic."
);
}
}
resolved.is_some_and(is_lan_ip)
}
fn read_only_api_token_allows(method: &Method, path: &str) -> bool {
if *method != Method::GET && *method != Method::HEAD {
return false;
@@ -200,7 +226,7 @@ pub(crate) async fn rate_limit_middleware(
return next.run(req).await;
}
let ip = request_ip(&req).unwrap_or(IpAddr::from([0, 0, 0, 0]));
let ip = request_ip(&req, &state.trusted_proxies).unwrap_or(IpAddr::from([0, 0, 0, 0]));
if !allow_global_request(&state, ip).await {
return (StatusCode::TOO_MANY_REQUESTS, "Too many requests").into_response();
}
@@ -271,18 +297,18 @@ pub(crate) fn get_cookie_value(headers: &axum::http::HeaderMap, name: &str) -> O
None
}
pub(crate) fn request_ip(req: &Request) -> Option<IpAddr> {
pub(crate) fn request_ip(req: &Request, trusted_proxies: &[IpAddr]) -> Option<IpAddr> {
let peer_ip = req
.extensions()
.get::<ConnectInfo<SocketAddr>>()
.map(|info| info.0.ip());
// Only trust proxy headers (X-Forwarded-For, X-Real-IP) when the direct
// TCP peer is a loopback or private IP — i.e., a trusted reverse proxy.
// This prevents external attackers from spoofing these headers to bypass
// rate limiting.
// TCP peer is a trusted reverse proxy. When trusted_proxies is non-empty,
// only those exact IPs (plus loopback) are trusted. Otherwise, fall back
// to trusting all RFC-1918 private ranges (legacy behaviour).
if let Some(peer) = peer_ip {
if is_trusted_peer(peer) {
if is_trusted_peer(peer, trusted_proxies) {
if let Some(xff) = req.headers().get("X-Forwarded-For") {
if let Ok(xff_str) = xff.to_str() {
if let Some(ip_str) = xff_str.split(',').next() {
@@ -305,10 +331,31 @@ pub(crate) fn request_ip(req: &Request) -> Option<IpAddr> {
peer_ip
}
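When the peer is trusted, `request_ip` takes only the first (leftmost) entry of `X-Forwarded-For`. A minimal standalone sketch of that extraction (the `trim` and the helper name are assumptions for illustration, not confirmed by the diff):

```rust
// Sketch: parse the leftmost X-Forwarded-For entry into an IpAddr,
// returning None when the header is empty or not a valid address.
fn first_forwarded_ip(xff: &str) -> Option<std::net::IpAddr> {
    xff.split(',').next()?.trim().parse().ok()
}
```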
/// Returns true if the peer IP is a loopback or private address,
/// meaning it is likely a local reverse proxy that can be trusted
/// to set forwarded headers.
fn is_trusted_peer(ip: IpAddr) -> bool {
/// Returns true if the peer IP may be trusted to set forwarded headers.
///
/// When `trusted_proxies` is non-empty, only loopback addresses and the
/// explicitly configured IPs are trusted, tightening the default which
/// previously trusted all RFC-1918 private ranges.
fn is_trusted_peer(ip: IpAddr, trusted_proxies: &[IpAddr]) -> bool {
let is_loopback = match ip {
IpAddr::V4(v4) => v4.is_loopback(),
IpAddr::V6(v6) => v6.is_loopback(),
};
if is_loopback {
return true;
}
if trusted_proxies.is_empty() {
// Legacy: trust all private ranges when no explicit list is configured.
match ip {
IpAddr::V4(v4) => v4.is_private() || v4.is_link_local(),
IpAddr::V6(v6) => v6.is_unique_local() || v6.is_unicast_link_local(),
}
} else {
trusted_proxies.contains(&ip)
}
}
fn is_lan_ip(ip: IpAddr) -> bool {
match ip {
IpAddr::V4(v4) => v4.is_loopback() || v4.is_private() || v4.is_link_local(),
IpAddr::V6(v6) => v6.is_loopback() || v6.is_unique_local() || v6.is_unicast_link_local(),
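The tightened trust decision above can be sketched as a standalone function. This is a minimal sketch of the same precedence (loopback always trusted; explicit allow-list when configured; legacy private-range fallback otherwise), with the IPv6 unique-local test computed manually to stay on stable std APIs:

```rust
use std::net::IpAddr;

// Sketch of is_trusted_peer from the hunk above: loopback is always
// trusted; a non-empty allow-list restricts trust to exactly those IPs;
// an empty list falls back to trusting RFC-1918 / link-local ranges.
fn is_trusted_peer(ip: IpAddr, trusted_proxies: &[IpAddr]) -> bool {
    if ip.is_loopback() {
        return true;
    }
    if trusted_proxies.is_empty() {
        match ip {
            IpAddr::V4(v4) => v4.is_private() || v4.is_link_local(),
            // Unique-local fc00::/7, checked via the top bits of the first segment.
            IpAddr::V6(v6) => (v6.segments()[0] & 0xfe00) == 0xfc00,
        }
    } else {
        trusted_proxies.contains(&ip)
    }
}
```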


@@ -17,7 +17,7 @@ mod tests;
use crate::Agent;
use crate::Transcoder;
use crate::config::Config;
use crate::db::{AlchemistEvent, Db, EventChannels};
use crate::db::{Db, EventChannels};
use crate::error::{AlchemistError, Result};
use crate::system::hardware::{HardwareInfo, HardwareProbeLog, HardwareState};
use axum::{
@@ -25,7 +25,7 @@ use axum::{
extract::State,
http::{StatusCode, Uri, header},
middleware as axum_middleware,
response::{IntoResponse, Redirect, Response},
response::{IntoResponse, Response},
routing::{delete, get, post},
};
#[cfg(feature = "embed-web")]
@@ -38,7 +38,7 @@ use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Instant;
use tokio::net::lookup_host;
use tokio::sync::{Mutex, RwLock, broadcast};
use tokio::sync::{Mutex, RwLock};
use tokio::time::Duration;
#[cfg(not(feature = "embed-web"))]
use tracing::warn;
@@ -71,7 +71,6 @@ pub struct AppState {
pub transcoder: Arc<Transcoder>,
pub scheduler: crate::scheduler::SchedulerHandle,
pub event_channels: Arc<EventChannels>,
pub tx: broadcast::Sender<AlchemistEvent>, // Legacy channel for transition
pub setup_required: Arc<AtomicBool>,
pub start_time: Instant,
pub telemetry_runtime_id: String,
@@ -81,13 +80,16 @@ pub struct AppState {
pub library_scanner: Arc<crate::system::scanner::LibraryScanner>,
pub config_path: PathBuf,
pub config_mutable: bool,
pub base_url: String,
pub hardware_state: HardwareState,
pub hardware_probe_log: Arc<tokio::sync::RwLock<HardwareProbeLog>>,
pub resources_cache: Arc<tokio::sync::Mutex<Option<(serde_json::Value, std::time::Instant)>>>,
pub(crate) login_rate_limiter: Mutex<HashMap<IpAddr, RateLimitEntry>>,
pub(crate) global_rate_limiter: Mutex<HashMap<IpAddr, RateLimitEntry>>,
pub(crate) sse_connections: Arc<std::sync::atomic::AtomicUsize>,
/// IPs whose proxy headers are trusted. Empty = trust all private ranges.
pub(crate) trusted_proxies: Vec<IpAddr>,
/// If set, setup endpoints require `?token=<value>` query parameter.
pub(crate) setup_token: Option<String>,
}
pub struct RunServerArgs {
@@ -97,7 +99,6 @@ pub struct RunServerArgs {
pub transcoder: Arc<Transcoder>,
pub scheduler: crate::scheduler::SchedulerHandle,
pub event_channels: Arc<EventChannels>,
pub tx: broadcast::Sender<AlchemistEvent>, // Legacy channel for transition
pub setup_required: bool,
pub config_path: PathBuf,
pub config_mutable: bool,
@@ -116,7 +117,6 @@ pub async fn run_server(args: RunServerArgs) -> Result<()> {
transcoder,
scheduler,
event_channels,
tx,
setup_required,
config_path,
config_mutable,
@@ -146,10 +146,33 @@ pub async fn run_server(args: RunServerArgs) -> Result<()> {
sys.refresh_cpu_usage();
sys.refresh_memory();
let base_url = {
let config = config.read().await;
config.system.base_url.clone()
// Read setup token from environment (opt-in security layer).
let setup_token = std::env::var("ALCHEMIST_SETUP_TOKEN").ok();
if setup_token.is_some() {
info!("ALCHEMIST_SETUP_TOKEN is set — setup endpoints require token query param");
}
// Parse trusted proxy IPs from config. Unparseable entries are logged and skipped.
let trusted_proxies: Vec<IpAddr> = {
let cfg = config.read().await;
cfg.system
.trusted_proxies
.iter()
.filter_map(|s| {
s.parse::<IpAddr>()
.map_err(|_| {
error!("Invalid trusted_proxy entry (not a valid IP address): {s}");
})
.ok()
})
.collect()
};
if !trusted_proxies.is_empty() {
info!(
"Trusted proxies configured ({}): only these IPs will be trusted for X-Forwarded-For headers",
trusted_proxies.len()
);
}
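The config parsing above skips unparseable entries rather than failing startup. Stripped of the logging, the filtering reduces to a short sketch (the helper name is illustrative):

```rust
use std::net::IpAddr;

// Sketch of the trusted_proxies parsing: keep entries that parse as an
// IP address, silently dropping the rest (the real code also logs each
// invalid entry via error!).
fn parse_trusted_proxies(entries: &[&str]) -> Vec<IpAddr> {
    entries
        .iter()
        .filter_map(|s| s.parse::<IpAddr>().ok())
        .collect()
}
```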
let state = Arc::new(AppState {
db,
@@ -158,7 +181,6 @@ pub async fn run_server(args: RunServerArgs) -> Result<()> {
transcoder,
scheduler,
event_channels,
tx,
setup_required: Arc::new(AtomicBool::new(setup_required)),
start_time: std::time::Instant::now(),
telemetry_runtime_id: Uuid::new_v4().to_string(),
@@ -168,30 +190,20 @@ pub async fn run_server(args: RunServerArgs) -> Result<()> {
library_scanner,
config_path,
config_mutable,
base_url: base_url.clone(),
hardware_state,
hardware_probe_log,
resources_cache: Arc::new(tokio::sync::Mutex::new(None)),
login_rate_limiter: Mutex::new(HashMap::new()),
global_rate_limiter: Mutex::new(HashMap::new()),
sse_connections: Arc::new(std::sync::atomic::AtomicUsize::new(0)),
trusted_proxies,
setup_token,
});
// Clone agent for shutdown handler before moving state into router
let shutdown_agent = state.agent.clone();
let inner_app = app_router(state.clone());
let app = if base_url.is_empty() {
inner_app
} else {
let redirect_target = format!("{base_url}/");
Router::new()
.route(
"/",
get(move || async move { Redirect::permanent(&redirect_target) }),
)
.nest(&base_url, inner_app)
};
let app = app_router(state.clone());
let port = std::env::var("ALCHEMIST_SERVER_PORT")
.ok()
@@ -295,6 +307,8 @@ pub async fn run_server(args: RunServerArgs) -> Result<()> {
// Forceful immediate shutdown of active jobs
shutdown_agent.graceful_shutdown().await;
info!("Shutdown complete. Forcing process exit.");
std::process::exit(0);
})
.await
.map_err(|e| AlchemistError::Unknown(format!("Server error: {}", e)))?;
@@ -323,9 +337,11 @@ fn app_router(state: Arc<AppState>) -> Router {
.route("/api/stats/daily", get(daily_stats_handler))
.route("/api/stats/detailed", get(detailed_stats_handler))
.route("/api/stats/savings", get(savings_summary_handler))
.route("/api/stats/skip-reasons", get(skip_reasons_handler))
// Canonical job list endpoint.
.route("/api/jobs", get(jobs_table_handler))
.route("/api/jobs/table", get(jobs_table_handler))
.route("/api/jobs/enqueue", post(enqueue_job_handler))
.route("/api/jobs/batch", post(batch_jobs_handler))
.route("/api/logs/history", get(logs_history_handler))
.route("/api/logs", delete(clear_logs_handler))
@@ -355,11 +371,13 @@ fn app_router(state: Arc<AppState>) -> Router {
.route("/api/engine/resume", post(resume_engine_handler))
.route("/api/engine/drain", post(drain_engine_handler))
.route("/api/engine/stop-drain", post(stop_drain_handler))
.route("/api/engine/restart", post(restart_engine_handler))
.route(
"/api/engine/mode",
get(get_engine_mode_handler).post(set_engine_mode_handler),
)
.route("/api/engine/status", get(engine_status_handler))
.route("/api/processor/status", get(processor_status_handler))
.route(
"/api/settings/transcode",
get(get_transcode_settings_handler).post(update_transcode_settings_handler),
@@ -828,7 +846,7 @@ async fn index_handler(State(state): State<Arc<AppState>>) -> impl IntoResponse
static_handler(State(state), Uri::from_static("/index.html")).await
}
async fn static_handler(State(state): State<Arc<AppState>>, uri: Uri) -> impl IntoResponse {
async fn static_handler(State(_state): State<Arc<AppState>>, uri: Uri) -> impl IntoResponse {
let raw_path = uri.path().trim_start_matches('/');
let path = match sanitize_asset_path(raw_path) {
Some(path) => path,
@@ -837,11 +855,7 @@ async fn static_handler(State(state): State<Arc<AppState>>, uri: Uri) -> impl In
if let Some(content) = load_static_asset(&path) {
let mime = mime_guess::from_path(&path).first_or_octet_stream();
return (
[(header::CONTENT_TYPE, mime.as_ref())],
maybe_inject_base_url(content, mime.as_ref(), &state.base_url),
)
.into_response();
return ([(header::CONTENT_TYPE, mime.as_ref())], content).into_response();
}
// Attempt to serve index.html for directory paths (e.g. /jobs -> jobs/index.html)
@@ -849,11 +863,7 @@ async fn static_handler(State(state): State<Arc<AppState>>, uri: Uri) -> impl In
let index_path = format!("{}/index.html", path);
if let Some(content) = load_static_asset(&index_path) {
let mime = mime_guess::from_path("index.html").first_or_octet_stream();
return (
[(header::CONTENT_TYPE, mime.as_ref())],
maybe_inject_base_url(content, mime.as_ref(), &state.base_url),
)
.into_response();
return ([(header::CONTENT_TYPE, mime.as_ref())], content).into_response();
}
}
@@ -890,14 +900,3 @@ async fn static_handler(State(state): State<Arc<AppState>>, uri: Uri) -> impl In
// Default fallback to 404 for missing files.
StatusCode::NOT_FOUND.into_response()
}
fn maybe_inject_base_url(content: Vec<u8>, mime: &str, base_url: &str) -> Vec<u8> {
if !mime.starts_with("text/html") {
return content;
}
let Ok(text) = String::from_utf8(content.clone()) else {
return content;
};
text.replace("__ALCHEMIST_BASE_URL__", base_url)
.into_bytes()
}


@@ -126,7 +126,7 @@ async fn run_library_health_scan(db: Arc<crate::db::Db>) {
let semaphore = Arc::new(tokio::sync::Semaphore::new(2));
stream::iter(jobs)
.for_each_concurrent(Some(10), {
.for_each_concurrent(None, {
let db = db.clone();
let counters = counters.clone();
let semaphore = semaphore.clone();


@@ -461,47 +461,36 @@ fn normalize_notification_payload(
unreachable!("notification config_json should always be an object here");
};
match payload.target_type.as_str() {
"discord_webhook" | "discord" => {
if !config_map.contains_key("webhook_url") {
if let Some(endpoint_url) = payload.endpoint_url.as_ref() {
config_map.insert(
"webhook_url".to_string(),
JsonValue::String(endpoint_url.clone()),
);
}
"discord_webhook" | "discord" if !config_map.contains_key("webhook_url") => {
if let Some(endpoint_url) = payload.endpoint_url.as_ref() {
config_map.insert(
"webhook_url".to_string(),
JsonValue::String(endpoint_url.clone()),
);
}
}
"gotify" => {
if !config_map.contains_key("server_url") {
if let Some(endpoint_url) = payload.endpoint_url.as_ref() {
config_map.insert(
"server_url".to_string(),
JsonValue::String(endpoint_url.clone()),
);
}
if let Some(endpoint_url) = payload.endpoint_url.as_ref() {
config_map
.entry("server_url".to_string())
.or_insert_with(|| JsonValue::String(endpoint_url.clone()));
}
if !config_map.contains_key("app_token") {
if let Some(auth_token) = payload.auth_token.as_ref() {
config_map.insert(
"app_token".to_string(),
JsonValue::String(auth_token.clone()),
);
}
if let Some(auth_token) = payload.auth_token.as_ref() {
config_map
.entry("app_token".to_string())
.or_insert_with(|| JsonValue::String(auth_token.clone()));
}
}
"webhook" => {
if !config_map.contains_key("url") {
if let Some(endpoint_url) = payload.endpoint_url.as_ref() {
config_map.insert("url".to_string(), JsonValue::String(endpoint_url.clone()));
}
if let Some(endpoint_url) = payload.endpoint_url.as_ref() {
config_map
.entry("url".to_string())
.or_insert_with(|| JsonValue::String(endpoint_url.clone()));
}
if !config_map.contains_key("auth_token") {
if let Some(auth_token) = payload.auth_token.as_ref() {
config_map.insert(
"auth_token".to_string(),
JsonValue::String(auth_token.clone()),
);
}
if let Some(auth_token) = payload.auth_token.as_ref() {
config_map
.entry("auth_token".to_string())
.or_insert_with(|| JsonValue::String(auth_token.clone()));
}
}
_ => {}
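The refactor above replaces `contains_key` + `insert` pairs with the map Entry API: the fallback is only written when the key is absent, and the closure is only evaluated then. A minimal sketch of the pattern, using a plain `HashMap` in place of `serde_json::Map` (both expose the same `entry`/`or_insert_with` shape; the helper name is illustrative):

```rust
use std::collections::HashMap;

// Sketch of the entry().or_insert_with() pattern: fill a config key from
// an optional fallback without overwriting an explicitly set value.
fn fill_default(config: &mut HashMap<String, String>, key: &str, fallback: Option<&str>) {
    if let Some(value) = fallback {
        config
            .entry(key.to_string())
            .or_insert_with(|| value.to_string());
    }
}
```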
@@ -641,9 +630,8 @@ pub(crate) async fn add_notification_handler(
}
match state.db.get_notification_targets().await {
Ok(targets) => targets
.into_iter()
.find(|target| target.name == payload.name)
Ok(mut targets) => targets
.pop()
.map(|target| axum::Json(notification_target_response(target)).into_response())
.unwrap_or_else(|| StatusCode::OK.into_response()),
Err(e) => (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
@@ -654,23 +642,23 @@ pub(crate) async fn delete_notification_handler(
State(state): State<Arc<AppState>>,
Path(id): Path<i64>,
) -> impl IntoResponse {
let target = match state.db.get_notification_targets().await {
Ok(targets) => targets.into_iter().find(|target| target.id == id),
let target_index = match state.db.get_notification_targets().await {
Ok(targets) => targets.iter().position(|target| target.id == id),
Err(e) => return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
};
let Some(target) = target else {
let Some(target_index) = target_index else {
return StatusCode::NOT_FOUND.into_response();
};
let mut next_config = state.config.read().await.clone();
let target_config_json = target.config_json.clone();
let parsed_target_config_json =
serde_json::from_str::<JsonValue>(&target_config_json).unwrap_or(JsonValue::Null);
next_config.notifications.targets.retain(|candidate| {
!(candidate.name == target.name
&& candidate.target_type == target.target_type
&& candidate.config_json == parsed_target_config_json)
});
if target_index >= next_config.notifications.targets.len() {
return (
StatusCode::INTERNAL_SERVER_ERROR,
"notification settings projection is out of sync with config",
)
.into_response();
}
next_config.notifications.targets.remove(target_index);
if let Err(response) = save_config_or_response(&state, &next_config).await {
return *response;
}
@@ -837,13 +825,8 @@ pub(crate) async fn add_schedule_handler(
state.scheduler.trigger();
match state.db.get_schedule_windows().await {
Ok(windows) => windows
.into_iter()
.find(|window| {
window.start_time == start_time
&& window.end_time == end_time
&& window.enabled == payload.enabled
})
Ok(mut windows) => windows
.pop()
.map(|window| axum::Json(serde_json::json!(window)).into_response())
.unwrap_or_else(|| StatusCode::OK.into_response()),
Err(e) => (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
@@ -854,22 +837,23 @@ pub(crate) async fn delete_schedule_handler(
State(state): State<Arc<AppState>>,
Path(id): Path<i64>,
) -> impl IntoResponse {
let window = match state.db.get_schedule_windows().await {
Ok(windows) => windows.into_iter().find(|window| window.id == id),
let window_index = match state.db.get_schedule_windows().await {
Ok(windows) => windows.iter().position(|window| window.id == id),
Err(e) => return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
};
let Some(window) = window else {
let Some(window_index) = window_index else {
return StatusCode::NOT_FOUND.into_response();
};
let days_of_week: Vec<i32> = serde_json::from_str(&window.days_of_week).unwrap_or_default();
let mut next_config = state.config.read().await.clone();
next_config.schedule.windows.retain(|candidate| {
!(candidate.start_time == window.start_time
&& candidate.end_time == window.end_time
&& candidate.enabled == window.enabled
&& candidate.days_of_week == days_of_week)
});
if window_index >= next_config.schedule.windows.len() {
return (
StatusCode::INTERNAL_SERVER_ERROR,
"schedule settings projection is out of sync with config",
)
.into_response();
}
next_config.schedule.windows.remove(window_index);
if let Err(response) = save_config_or_response(&state, &next_config).await {
return *response;
}
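Both delete handlers above now locate the row by `id` in the DB projection, then remove the *same index* from the in-memory config, returning 500 if the two lists have drifted apart instead of silently retaining by field comparison. The core invariant can be sketched generically (function and error strings here are illustrative, not the handler's actual wording):

```rust
// Sketch of index-based deletion against a projection: find the row's
// position in the DB-backed list, bounds-check it against the config
// list, and remove by index so positional correspondence is preserved.
fn remove_by_projection_index<T, U>(
    projection: &[T],
    config: &mut Vec<U>,
    matches: impl Fn(&T) -> bool,
) -> Result<(), &'static str> {
    let Some(idx) = projection.iter().position(matches) else {
        return Err("not found");
    };
    if idx >= config.len() {
        return Err("projection out of sync");
    }
    config.remove(idx);
    Ok(())
}
```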


@@ -108,6 +108,10 @@ pub(crate) fn sse_message_for_system_event(event: &SystemEvent) -> SseMessage {
event_name: "scan_completed",
data: "{}".to_string(),
},
SystemEvent::EngineIdle => SseMessage {
event_name: "engine_idle",
data: "{}".to_string(),
},
SystemEvent::EngineStatusChanged => SseMessage {
event_name: "engine_status_changed",
data: "{}".to_string(),


@@ -101,3 +101,16 @@ pub(crate) async fn savings_summary_handler(
Err(err) => config_read_error_response("load storage savings summary", &err),
}
}
pub(crate) async fn skip_reasons_handler(State(state): State<Arc<AppState>>) -> impl IntoResponse {
match state.db.get_skip_reason_counts().await {
Ok(counts) => {
let items: Vec<serde_json::Value> = counts
.into_iter()
.map(|(code, count)| serde_json::json!({ "code": code, "count": count }))
.collect();
axum::Json(serde_json::json!({ "today": items })).into_response()
}
Err(err) => config_read_error_response("load skip reason counts", &err),
}
}


@@ -1,7 +1,7 @@
//! System information, hardware info, resources, health handlers.
use super::{AppState, config_read_error_response};
use crate::media::pipeline::{Analyzer as _, Planner as _, TranscodeDecision};
use crate::media::pipeline::{Planner as _, TranscodeDecision};
use axum::{
extract::State,
http::StatusCode,
@@ -27,6 +27,17 @@ struct SystemResources {
gpu_memory_percent: Option<f32>,
}
#[derive(Serialize)]
pub(crate) struct ProcessorStatusResponse {
blocked_reason: Option<&'static str>,
message: String,
manual_paused: bool,
scheduler_paused: bool,
draining: bool,
active_jobs: i64,
concurrent_limit: usize,
}
#[derive(Serialize)]
struct DuplicateGroup {
stem: String,
@@ -135,6 +146,54 @@ pub(crate) async fn system_resources_handler(State(state): State<Arc<AppState>>)
axum::Json(value).into_response()
}
pub(crate) async fn processor_status_handler(State(state): State<Arc<AppState>>) -> Response {
let stats = match state.db.get_job_stats().await {
Ok(stats) => stats,
Err(err) => return config_read_error_response("load processor status", &err),
};
let concurrent_limit = state.agent.concurrent_jobs_limit();
let manual_paused = state.agent.is_manual_paused();
let scheduler_paused = state.agent.is_scheduler_paused();
let draining = state.agent.is_draining();
let active_jobs = stats.active;
let (blocked_reason, message) = if manual_paused {
(
Some("manual_paused"),
"The engine is manually paused and will not start queued jobs.".to_string(),
)
} else if scheduler_paused {
(
Some("scheduled_pause"),
"The schedule is currently pausing the engine.".to_string(),
)
} else if draining {
(
Some("draining"),
"The engine is draining and will not start new queued jobs.".to_string(),
)
} else if active_jobs >= concurrent_limit as i64 {
(
Some("workers_busy"),
"All worker slots are currently busy.".to_string(),
)
} else {
(None, "Workers are available.".to_string())
};
axum::Json(ProcessorStatusResponse {
blocked_reason,
message,
manual_paused,
scheduler_paused,
draining,
active_jobs,
concurrent_limit,
})
.into_response()
}
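The handler above reports exactly one blocking reason, in a fixed precedence: manual pause wins over scheduled pause, which wins over draining, which wins over all worker slots being busy. That decision chain, extracted as a pure function for clarity (the standalone name is illustrative):

```rust
// Sketch of the blocked_reason precedence in processor_status_handler.
// Returns None when workers are available to start a queued job.
fn blocked_reason(
    manual_paused: bool,
    scheduler_paused: bool,
    draining: bool,
    active_jobs: i64,
    concurrent_limit: usize,
) -> Option<&'static str> {
    if manual_paused {
        Some("manual_paused")
    } else if scheduler_paused {
        Some("scheduled_pause")
    } else if draining {
        Some("draining")
    } else if active_jobs >= concurrent_limit as i64 {
        Some("workers_busy")
    } else {
        None
    }
}
```

This matches the ordering exercised by the `processor_status_endpoint_reports_blocking_reason_precedence` test further down, which flips each flag in turn and expects the stronger reason to mask the weaker ones.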
pub(crate) async fn library_intelligence_handler(State(state): State<Arc<AppState>>) -> Response {
use std::collections::HashMap;
use std::path::Path;
@@ -195,7 +254,6 @@ pub(crate) async fn library_intelligence_handler(State(state): State<Arc<AppStat
return StatusCode::INTERNAL_SERVER_ERROR.into_response();
}
};
let analyzer = crate::media::analyzer::FfmpegAnalyzer;
let config_snapshot = state.config.read().await.clone();
let hw_snapshot = state.hardware_state.snapshot().await;
let planner = crate::media::planner::BasicPlanner::new(
@@ -207,14 +265,16 @@ pub(crate) async fn library_intelligence_handler(State(state): State<Arc<AppStat
if job.status == crate::db::JobState::Cancelled {
continue;
}
let input_path = std::path::Path::new(&job.input_path);
if !input_path.exists() {
continue;
}
let analysis = match analyzer.analyze(input_path).await {
Ok(analysis) => analysis,
Err(_) => continue,
// Use stored metadata only — no live ffprobe spawning per job.
let metadata = match job.input_metadata() {
Some(m) => m,
None => continue,
};
let analysis = crate::media::pipeline::MediaAnalysis {
metadata,
warnings: vec![],
confidence: crate::media::pipeline::AnalysisConfidence::High,
};
let profile: Option<crate::db::LibraryProfile> = state


@@ -61,7 +61,6 @@ where
probe_summary: crate::system::hardware::ProbeSummary::default(),
}));
let hardware_probe_log = Arc::new(RwLock::new(HardwareProbeLog::default()));
let (tx, _rx) = broadcast::channel(tx_capacity);
let transcoder = Arc::new(Transcoder::new());
// Create event channels before Agent
@@ -81,7 +80,6 @@ where
transcoder.clone(),
config.clone(),
hardware_state.clone(),
tx.clone(),
event_channels.clone(),
true,
)
@@ -101,7 +99,6 @@ where
transcoder,
scheduler: scheduler.handle(),
event_channels,
tx,
setup_required: Arc::new(AtomicBool::new(setup_required)),
start_time: Instant::now(),
telemetry_runtime_id: "test-runtime".to_string(),
@@ -114,13 +111,14 @@ where
library_scanner: Arc::new(crate::system::scanner::LibraryScanner::new(db, config)),
config_path: config_path.clone(),
config_mutable: true,
base_url: String::new(),
hardware_state,
hardware_probe_log,
resources_cache: Arc::new(tokio::sync::Mutex::new(None)),
login_rate_limiter: Mutex::new(HashMap::new()),
global_rate_limiter: Mutex::new(HashMap::new()),
sse_connections: Arc::new(std::sync::atomic::AtomicUsize::new(0)),
trusted_proxies: Vec::new(),
setup_token: None,
});
Ok((state.clone(), app_router(state), config_path, db_path))
@@ -211,6 +209,17 @@ fn remote_request(method: Method, uri: &str, body: Body) -> Request<Body> {
request
}
fn lan_request(method: Method, uri: &str, body: Body) -> Request<Body> {
let mut request = match Request::builder().method(method).uri(uri).body(body) {
Ok(request) => request,
Err(err) => panic!("failed to build LAN request: {err}"),
};
request
.extensions_mut()
.insert(ConnectInfo(SocketAddr::from(([192, 168, 1, 25], 3000))));
request
}
async fn body_text(response: axum::response::Response) -> String {
let bytes = match to_bytes(response.into_body(), usize::MAX).await {
Ok(bytes) => bytes,
@@ -538,6 +547,69 @@ async fn engine_status_endpoint_reports_draining_state()
Ok(())
}
#[tokio::test]
async fn processor_status_endpoint_reports_blocking_reason_precedence()
-> std::result::Result<(), Box<dyn std::error::Error>> {
let (state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
let token = create_session(state.db.as_ref()).await?;
let (_job, input_path, output_path) = seed_job(state.db.as_ref(), JobState::Encoding).await?;
let response = app
.clone()
.oneshot(auth_request(
Method::GET,
"/api/processor/status",
&token,
Body::empty(),
))
.await?;
assert_eq!(response.status(), StatusCode::OK);
let payload: serde_json::Value = serde_json::from_str(&body_text(response).await)?;
assert_eq!(payload["blocked_reason"], "workers_busy");
state.agent.drain();
let response = app
.clone()
.oneshot(auth_request(
Method::GET,
"/api/processor/status",
&token,
Body::empty(),
))
.await?;
let payload: serde_json::Value = serde_json::from_str(&body_text(response).await)?;
assert_eq!(payload["blocked_reason"], "draining");
state.agent.set_scheduler_paused(true);
let response = app
.clone()
.oneshot(auth_request(
Method::GET,
"/api/processor/status",
&token,
Body::empty(),
))
.await?;
let payload: serde_json::Value = serde_json::from_str(&body_text(response).await)?;
assert_eq!(payload["blocked_reason"], "scheduled_pause");
state.agent.pause();
let response = app
.clone()
.oneshot(auth_request(
Method::GET,
"/api/processor/status",
&token,
Body::empty(),
))
.await?;
let payload: serde_json::Value = serde_json::from_str(&body_text(response).await)?;
assert_eq!(payload["blocked_reason"], "manual_paused");
cleanup_paths(&[input_path, output_path, config_path, db_path]);
Ok(())
}
#[tokio::test]
async fn read_only_api_token_allows_observability_only_routes()
-> std::result::Result<(), Box<dyn std::error::Error>> {
@@ -740,32 +812,6 @@ async fn read_only_api_token_cannot_access_settings_config()
Ok(())
}
#[tokio::test]
async fn nested_base_url_routes_engine_status_through_auth_middleware()
-> std::result::Result<(), Box<dyn std::error::Error>> {
let (state, _app, config_path, db_path) = build_test_app(false, 8, |config| {
config.system.base_url = "/alchemist".to_string();
})
.await?;
let token = create_session(state.db.as_ref()).await?;
let app = Router::new().nest("/alchemist", app_router(state.clone()));
let response = app
.oneshot(auth_request(
Method::GET,
"/alchemist/api/engine/status",
&token,
Body::empty(),
))
.await?;
assert_eq!(response.status(), StatusCode::OK);
drop(state);
let _ = std::fs::remove_file(config_path);
let _ = std::fs::remove_file(db_path);
Ok(())
}
#[tokio::test]
async fn hardware_probe_log_route_returns_runtime_log()
-> std::result::Result<(), Box<dyn std::error::Error>> {
@@ -818,12 +864,11 @@ async fn setup_complete_updates_runtime_hardware_without_mirroring_watch_dirs()
let response = app
.clone()
.oneshot(
Request::builder()
.method(Method::POST)
.uri("/api/setup/complete")
.header(header::CONTENT_TYPE, "application/json")
.body(Body::from(
.oneshot({
let mut request = localhost_request(
Method::POST,
"/api/setup/complete",
Body::from(
json!({
"username": "admin",
"password": "password123",
@@ -838,9 +883,14 @@ async fn setup_complete_updates_runtime_hardware_without_mirroring_watch_dirs()
"quality_profile": "balanced"
})
.to_string(),
))
.unwrap_or_else(|err| panic!("failed to build setup completion request: {err}")),
)
),
);
request.headers_mut().insert(
header::CONTENT_TYPE,
axum::http::HeaderValue::from_static("application/json"),
);
request
})
.await?;
assert_eq!(response.status(), StatusCode::OK);
@@ -932,23 +982,25 @@ async fn setup_complete_accepts_nested_settings_payload()
let response = app
.clone()
.oneshot(
Request::builder()
.method(Method::POST)
.uri("/api/setup/complete")
.header(header::CONTENT_TYPE, "application/json")
.body(Body::from(
.oneshot({
let mut request = localhost_request(
Method::POST,
"/api/setup/complete",
Body::from(
json!({
"username": "admin",
"password": "password123",
"settings": settings,
})
.to_string(),
))
.unwrap_or_else(|err| {
panic!("failed to build nested setup completion request: {err}")
}),
)
),
);
request.headers_mut().insert(
header::CONTENT_TYPE,
axum::http::HeaderValue::from_static("application/json"),
);
request
})
.await?;
assert_eq!(response.status(), StatusCode::OK);
assert!(
@@ -981,23 +1033,25 @@ async fn setup_complete_rejects_nested_settings_without_library_directories()
let response = app
.clone()
.oneshot(
Request::builder()
.method(Method::POST)
.uri("/api/setup/complete")
.header(header::CONTENT_TYPE, "application/json")
.body(Body::from(
.oneshot({
let mut request = localhost_request(
Method::POST,
"/api/setup/complete",
Body::from(
json!({
"username": "admin",
"password": "password123",
"settings": settings,
})
.to_string(),
))
.unwrap_or_else(|err| {
panic!("failed to build nested setup rejection request: {err}")
}),
)
),
);
request.headers_mut().insert(
header::CONTENT_TYPE,
axum::http::HeaderValue::from_static("application/json"),
);
request
})
.await?;
assert_eq!(response.status(), StatusCode::BAD_REQUEST);
let body = body_text(response).await;
@@ -1076,7 +1130,7 @@ async fn fs_endpoints_require_loopback_during_setup()
.await?;
assert_eq!(browse_response.status(), StatusCode::FORBIDDEN);
let browse_body = body_text(browse_response).await;
assert!(browse_body.contains("Filesystem browsing is only available"));
assert!(browse_body.contains("local network"));
let mut preview_request = remote_request(
Method::POST,
@@ -1096,12 +1150,107 @@ async fn fs_endpoints_require_loopback_during_setup()
let preview_response = app.clone().oneshot(preview_request).await?;
assert_eq!(preview_response.status(), StatusCode::FORBIDDEN);
let preview_body = body_text(preview_response).await;
assert!(preview_body.contains("Filesystem browsing is only available"));
assert!(preview_body.contains("local network"));
cleanup_paths(&[browse_root, config_path, db_path]);
Ok(())
}
#[tokio::test]
async fn setup_html_routes_allow_lan_clients() -> std::result::Result<(), Box<dyn std::error::Error>>
{
let (_state, app, config_path, db_path) = build_test_app(true, 8, |_| {}).await?;
let response = app
.clone()
.oneshot(lan_request(Method::GET, "/setup", Body::empty()))
.await?;
assert_ne!(response.status(), StatusCode::FORBIDDEN);
cleanup_paths(&[config_path, db_path]);
Ok(())
}
#[tokio::test]
async fn setup_html_routes_reject_public_clients()
-> std::result::Result<(), Box<dyn std::error::Error>> {
let (_state, app, config_path, db_path) = build_test_app(true, 8, |_| {}).await?;
let response = app
.clone()
.oneshot(remote_request(Method::GET, "/setup", Body::empty()))
.await?;
assert_eq!(response.status(), StatusCode::FORBIDDEN);
let body = body_text(response).await;
assert!(body.contains("only available from the local network"));
cleanup_paths(&[config_path, db_path]);
Ok(())
}
#[tokio::test]
async fn setup_status_rejects_public_clients_during_setup()
-> std::result::Result<(), Box<dyn std::error::Error>> {
let (_state, app, config_path, db_path) = build_test_app(true, 8, |_| {}).await?;
let response = app
.clone()
.oneshot(remote_request(
Method::GET,
"/api/setup/status",
Body::empty(),
))
.await?;
assert_eq!(response.status(), StatusCode::FORBIDDEN);
cleanup_paths(&[config_path, db_path]);
Ok(())
}
#[tokio::test]
async fn public_clients_can_reach_login_after_setup()
-> std::result::Result<(), Box<dyn std::error::Error>> {
let (_state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
let response = app
.clone()
.oneshot(remote_request(Method::GET, "/login", Body::empty()))
.await?;
assert_ne!(response.status(), StatusCode::FORBIDDEN);
cleanup_paths(&[config_path, db_path]);
Ok(())
}
#[tokio::test]
async fn login_returns_internal_error_when_user_lookup_fails()
-> std::result::Result<(), Box<dyn std::error::Error>> {
let (state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
state.db.pool.close().await;
let mut request = remote_request(
Method::POST,
"/api/auth/login",
Body::from(
json!({
"username": "tester",
"password": "not-important"
})
.to_string(),
),
);
request.headers_mut().insert(
header::CONTENT_TYPE,
header::HeaderValue::from_static("application/json"),
);
let response = app.clone().oneshot(request).await?;
assert_eq!(response.status(), StatusCode::INTERNAL_SERVER_ERROR);
cleanup_paths(&[config_path, db_path]);
Ok(())
}
#[tokio::test]
async fn settings_bundle_requires_auth_after_setup()
-> std::result::Result<(), Box<dyn std::error::Error>> {
@@ -1306,6 +1455,93 @@ async fn settings_bundle_put_projects_extended_settings_to_db()
Ok(())
}
#[tokio::test]
async fn delete_notification_removes_only_one_duplicate_target()
-> std::result::Result<(), Box<dyn std::error::Error>> {
let duplicate_target = crate::config::NotificationTargetConfig {
name: "Discord".to_string(),
target_type: "discord_webhook".to_string(),
config_json: serde_json::json!({
"webhook_url": "https://discord.com/api/webhooks/test"
}),
endpoint_url: None,
auth_token: None,
events: vec!["encode.completed".to_string()],
enabled: true,
};
let (state, app, config_path, db_path) = build_test_app(false, 8, |config| {
config.notifications.targets = vec![duplicate_target.clone(), duplicate_target.clone()];
})
.await?;
let projected = state.config.read().await.clone();
crate::settings::project_config_to_db(state.db.as_ref(), &projected).await?;
let token = create_session(state.db.as_ref()).await?;
let targets = state.db.get_notification_targets().await?;
assert_eq!(targets.len(), 2);
let response = app
.clone()
.oneshot(auth_request(
Method::DELETE,
&format!("/api/settings/notifications/{}", targets[0].id),
&token,
Body::empty(),
))
.await?;
assert_eq!(response.status(), StatusCode::OK);
let persisted = crate::config::Config::load(config_path.as_path())?;
assert_eq!(persisted.notifications.targets.len(), 1);
let stored_targets = state.db.get_notification_targets().await?;
assert_eq!(stored_targets.len(), 1);
cleanup_paths(&[config_path, db_path]);
Ok(())
}
#[tokio::test]
async fn delete_schedule_removes_only_one_duplicate_window()
-> std::result::Result<(), Box<dyn std::error::Error>> {
let duplicate_window = crate::config::ScheduleWindowConfig {
start_time: "22:00".to_string(),
end_time: "06:00".to_string(),
days_of_week: vec![1, 2, 3],
enabled: true,
};
let (state, app, config_path, db_path) = build_test_app(false, 8, |config| {
config.schedule.windows = vec![duplicate_window.clone(), duplicate_window.clone()];
})
.await?;
let projected = state.config.read().await.clone();
crate::settings::project_config_to_db(state.db.as_ref(), &projected).await?;
let token = create_session(state.db.as_ref()).await?;
let windows = state.db.get_schedule_windows().await?;
assert_eq!(windows.len(), 2);
let response = app
.clone()
.oneshot(auth_request(
Method::DELETE,
&format!("/api/settings/schedule/{}", windows[0].id),
&token,
Body::empty(),
))
.await?;
assert_eq!(response.status(), StatusCode::OK);
let persisted = crate::config::Config::load(config_path.as_path())?;
assert_eq!(persisted.schedule.windows.len(), 1);
let stored_windows = state.db.get_schedule_windows().await?;
assert_eq!(stored_windows.len(), 1);
cleanup_paths(&[config_path, db_path]);
Ok(())
}
#[tokio::test]
async fn raw_config_put_overwrites_divergent_db_projection()
-> std::result::Result<(), Box<dyn std::error::Error>> {
@@ -1558,6 +1794,219 @@ async fn job_detail_route_falls_back_to_legacy_failure_summary()
Ok(())
}
#[tokio::test]
async fn job_detail_route_returns_internal_error_when_encode_attempts_query_fails()
-> std::result::Result<(), Box<dyn std::error::Error>> {
let (state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
let token = create_session(state.db.as_ref()).await?;
let (job, input_path, output_path) = seed_job(state.db.as_ref(), JobState::Queued).await?;
sqlx::query("DROP TABLE encode_attempts")
.execute(&state.db.pool)
.await?;
let response = app
.clone()
.oneshot(auth_request(
Method::GET,
&format!("/api/jobs/{}/details", job.id),
&token,
Body::empty(),
))
.await?;
assert_eq!(response.status(), StatusCode::INTERNAL_SERVER_ERROR);
cleanup_paths(&[input_path, output_path, config_path, db_path]);
Ok(())
}
#[tokio::test]
async fn enqueue_job_endpoint_accepts_supported_absolute_files()
-> std::result::Result<(), Box<dyn std::error::Error>> {
let (state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
let token = create_session(state.db.as_ref()).await?;
let input_path = temp_path("alchemist_enqueue_input", "mkv");
std::fs::write(&input_path, b"test")?;
let canonical_input = std::fs::canonicalize(&input_path)?;
let response = app
.clone()
.oneshot(auth_json_request(
Method::POST,
"/api/jobs/enqueue",
&token,
json!({ "path": input_path.to_string_lossy() }),
))
.await?;
assert_eq!(response.status(), StatusCode::OK);
let payload: serde_json::Value = serde_json::from_str(&body_text(response).await)?;
assert_eq!(payload["enqueued"], true);
assert!(
state
.db
.get_job_by_input_path(canonical_input.to_string_lossy().as_ref())
.await?
.is_some()
);
cleanup_paths(&[input_path, config_path, db_path]);
Ok(())
}
#[tokio::test]
async fn enqueue_job_endpoint_rejects_relative_paths_and_unsupported_extensions()
-> std::result::Result<(), Box<dyn std::error::Error>> {
let (_state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
let token = create_session(_state.db.as_ref()).await?;
let response = app
.clone()
.oneshot(auth_json_request(
Method::POST,
"/api/jobs/enqueue",
&token,
json!({ "path": "relative/movie.mkv" }),
))
.await?;
assert_eq!(response.status(), StatusCode::BAD_REQUEST);
let unsupported = temp_path("alchemist_enqueue_unsupported", "txt");
std::fs::write(&unsupported, b"test")?;
let response = app
.clone()
.oneshot(auth_json_request(
Method::POST,
"/api/jobs/enqueue",
&token,
json!({ "path": unsupported.to_string_lossy() }),
))
.await?;
assert_eq!(response.status(), StatusCode::BAD_REQUEST);
cleanup_paths(&[unsupported, config_path, db_path]);
Ok(())
}
#[tokio::test]
async fn enqueue_job_endpoint_returns_noop_for_generated_output_paths()
-> std::result::Result<(), Box<dyn std::error::Error>> {
let (_state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
let token = create_session(_state.db.as_ref()).await?;
let generated_dir = temp_path("alchemist_enqueue_generated_dir", "dir");
std::fs::create_dir_all(&generated_dir)?;
let generated = generated_dir.join("movie-alchemist.mkv");
std::fs::write(&generated, b"test")?;
let response = app
.clone()
.oneshot(auth_json_request(
Method::POST,
"/api/jobs/enqueue",
&token,
json!({ "path": generated.to_string_lossy() }),
))
.await?;
assert_eq!(response.status(), StatusCode::OK);
let payload: serde_json::Value = serde_json::from_str(&body_text(response).await)?;
assert_eq!(payload["enqueued"], false);
cleanup_paths(&[generated_dir, config_path, db_path]);
Ok(())
}
#[tokio::test]
async fn delete_job_endpoint_purges_resume_session_temp_dir()
-> std::result::Result<(), Box<dyn std::error::Error>> {
let (state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
let token = create_session(state.db.as_ref()).await?;
let (job, input_path, output_path) = seed_job(state.db.as_ref(), JobState::Failed).await?;
let resume_dir = temp_path("alchemist_resume_delete", "dir");
std::fs::create_dir_all(&resume_dir)?;
std::fs::write(resume_dir.join("segment-00000.mkv"), b"segment")?;
state
.db
.upsert_resume_session(&crate::db::UpsertJobResumeSessionInput {
job_id: job.id,
strategy: "segment_v1".to_string(),
plan_hash: "plan".to_string(),
mtime_hash: "mtime".to_string(),
temp_dir: resume_dir.to_string_lossy().to_string(),
concat_manifest_path: resume_dir
.join("segments.ffconcat")
.to_string_lossy()
.to_string(),
segment_length_secs: 120,
status: "active".to_string(),
})
.await?;
let response = app
.clone()
.oneshot(auth_request(
Method::POST,
&format!("/api/jobs/{}/delete", job.id),
&token,
Body::empty(),
))
.await?;
assert_eq!(response.status(), StatusCode::OK);
assert!(state.db.get_resume_session(job.id).await?.is_none());
assert!(!resume_dir.exists());
cleanup_paths(&[resume_dir, input_path, output_path, config_path, db_path]);
Ok(())
}
#[tokio::test]
async fn clear_completed_purges_resume_sessions()
-> std::result::Result<(), Box<dyn std::error::Error>> {
let (state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
let token = create_session(state.db.as_ref()).await?;
let (job, input_path, output_path) = seed_job(state.db.as_ref(), JobState::Completed).await?;
let resume_dir = temp_path("alchemist_resume_clear_completed", "dir");
std::fs::create_dir_all(&resume_dir)?;
std::fs::write(resume_dir.join("segment-00000.mkv"), b"segment")?;
state
.db
.upsert_resume_session(&crate::db::UpsertJobResumeSessionInput {
job_id: job.id,
strategy: "segment_v1".to_string(),
plan_hash: "plan".to_string(),
mtime_hash: "mtime".to_string(),
temp_dir: resume_dir.to_string_lossy().to_string(),
concat_manifest_path: resume_dir
.join("segments.ffconcat")
.to_string_lossy()
.to_string(),
segment_length_secs: 120,
status: "segments_complete".to_string(),
})
.await?;
let response = app
.clone()
.oneshot(auth_request(
Method::POST,
"/api/jobs/clear-completed",
&token,
Body::empty(),
))
.await?;
assert_eq!(response.status(), StatusCode::OK);
assert!(state.db.get_resume_session(job.id).await?.is_none());
assert!(!resume_dir.exists());
cleanup_paths(&[resume_dir, input_path, output_path, config_path, db_path]);
Ok(())
}
#[tokio::test]
async fn delete_active_job_returns_conflict() -> std::result::Result<(), Box<dyn std::error::Error>>
{
@@ -1661,7 +2110,9 @@ async fn clear_completed_archives_jobs_and_preserves_stats()
assert!(state.db.get_job_by_id(job.id).await?.is_none());
let aggregated = state.db.get_aggregated_stats().await?;
assert_eq!(aggregated.completed_jobs, 1);
// Archived jobs are excluded from active stats.
assert_eq!(aggregated.completed_jobs, 0);
// encode_stats rows are preserved even after archiving.
assert_eq!(aggregated.total_input_size, 2_000);
assert_eq!(aggregated.total_output_size, 1_000);


@@ -162,7 +162,7 @@ fn browse_blocking(path: &Path) -> Result<FsBrowseResponse> {
Vec::new()
};
entries.sort_by(|a, b| a.name.to_lowercase().cmp(&b.name.to_lowercase()));
entries.sort_by_key(|entry| entry.name.to_lowercase());
Ok(FsBrowseResponse {
path: path.to_string_lossy().to_string(),


@@ -1,74 +0,0 @@
# Alchemist Project Audit & Findings
This document provides a comprehensive audit of the Alchemist media transcoding project (v0.3.0-rc.3), covering backend architecture, frontend design, database schema, and operational workflows.
---
## 1. Project Architecture & Pipeline
Alchemist implements a robust, asynchronous media transcoding pipeline managed by a central `Agent`. The pipeline follows a strictly ordered lifecycle:
1. **Scanner (`src/media/scanner.rs`):** Performs a high-speed traversal of watch folders. It uses `mtime_hash` (seconds + nanoseconds) to detect changes without full file analysis, efficiently handling re-scans and minimizing DB writes.
2. **Analyzer (`src/media/analyzer.rs`):** Executes `ffprobe` to extract normalized media metadata (codecs, bit depth, BPP, bitrate). Analysis results are used to populate the `DetailedEncodeStats` and `Decision` tables.
3. **Planner (`src/media/planner.rs`):** A complex decision engine that evaluates whether to **Skip**, **Remux**, or **Transcode** a file based on user profiles.
* *Finding:* The planning logic relies on hardcoded "magic thresholds" (e.g., bits-per-pixel cutoffs). While effective, these could be exposed as "Advanced Settings" in the UI.
4. **Executor (`src/media/executor.rs`):** Orchestrates the `ffmpeg` process. It dynamically selects encoders (NVENC, VAAPI, QSV, ProRes, or CPU fallback) based on the target profile and host hardware capabilities detected in `src/system/hardware.rs`.
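The scanner's `mtime_hash` change detection described in step 1 can be sketched as follows. This is an illustrative, std-only approximation, assuming the fingerprint is derived from modification-time seconds plus nanoseconds; the function name and format are assumptions, not the actual `src/media/scanner.rs` API.

```rust
use std::fs;
use std::io;
use std::path::Path;
use std::time::UNIX_EPOCH;

// Hypothetical sketch: derive a cheap fingerprint from mtime seconds +
// nanoseconds so a re-scan can skip unchanged files without re-probing them.
fn mtime_hash(path: &Path) -> io::Result<String> {
    let modified = fs::metadata(path)?.modified()?;
    let since_epoch = modified
        .duration_since(UNIX_EPOCH)
        .map_err(|e| io::Error::new(io::ErrorKind::Other, e))?;
    // Seconds and zero-padded nanoseconds give sub-second change sensitivity.
    Ok(format!(
        "{}.{:09}",
        since_epoch.as_secs(),
        since_epoch.subsec_nanos()
    ))
}

fn main() -> io::Result<()> {
    let tmp = std::env::temp_dir().join("mtime_hash_demo.txt");
    fs::write(&tmp, b"demo")?;
    let first = mtime_hash(&tmp)?;
    // An unchanged file yields the same fingerprint, so the scanner skips it.
    assert_eq!(first, mtime_hash(&tmp)?);
    fs::remove_file(&tmp)?;
    Ok(())
}
```

A stored fingerprint that differs from the freshly computed one is what would mark a file for re-analysis.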
---
## 2. Backend & API Design (Rust/Axum)
* **Concurrency:** Utilizes `tokio` for async orchestration and `rayon` for CPU-intensive tasks (like file hashing or list processing). The scheduler supports multiple concurrency modes: `Background` (1 job), `Balanced` (capped), and `Throughput` (uncapped).
* **State Management:** The backend uses `broadcast` channels to separate high-volume events (Progress, Logs) from low-volume system events (Config updates). This prevents UI "flicker" and unnecessary re-renders in the frontend.
* **API Structure:**
* **RESTful endpoints** for jobs, settings, and stats.
* **SSE (`src/server/sse.rs`)** for real-time progress updates, ensuring a reactive UI without high-frequency polling.
* **Auth (`src/server/auth.rs`):** Implements JWT-based authentication with Argon2 hashing for the initial setup.
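The three concurrency modes above map naturally to a worker cap. The sketch below is illustrative: the enum and function names are assumptions, and the "capped" policy for `Balanced` is shown here as half the available cores, which is not confirmed by the source.

```rust
// Hypothetical mapping from scheduler mode to worker limit; names and the
// Balanced cap (half the cores) are assumptions, not the Alchemist API.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ConcurrencyMode {
    Background, // always a single job
    Balanced,   // capped below full parallelism
    Throughput, // uncapped: one worker per available core
}

fn worker_limit(mode: ConcurrencyMode, available_cores: usize) -> usize {
    match mode {
        ConcurrencyMode::Background => 1,
        ConcurrencyMode::Balanced => (available_cores / 2).max(1),
        ConcurrencyMode::Throughput => available_cores.max(1),
    }
}

fn main() {
    assert_eq!(worker_limit(ConcurrencyMode::Background, 8), 1);
    assert_eq!(worker_limit(ConcurrencyMode::Balanced, 8), 4);
    assert_eq!(worker_limit(ConcurrencyMode::Throughput, 8), 8);
}
```

The `.max(1)` guard ensures a single-core host still gets one worker in every mode.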
---
## 3. Database Schema (SQLite/SQLx)
* **Stability:** The project uses 16+ migrations, showing a mature evolution from a simple schema to a sophisticated job-tracking system.
* **Decision Logging:** The `decisions` and `job_failure_explanations` tables are a standout feature. They store the "why" behind every action as structured JSON, which is then humanized in the UI (e.g., explaining exactly why a file was skipped).
* **Data Integrity:** Foreign keys and WAL (Write-Ahead Logging) mode ensure database stability even during heavy concurrent I/O.
---
## 4. Frontend Design (Astro/React/Helios)
* **Stack:** Astro 5 provides a fast, static-first framework, while React 18 handles the complex stateful dashboards.
* **Design System ("Helios"):**
* *Identity:* A dark-themed, data-dense industrial aesthetic.
* *Findings:* While functional, the system suffers from "component bloat." `JobManager.tsx` (~2,000 lines) is a significant maintainability risk. It contains UI logic, filtering logic, and data transformation logic mixed together.
* **Data Visualization:** Uses `recharts` for historical trends and performance metrics.
* *Improvement:* The charts are currently static snapshots. Adding real-time interactivity (brushing, zooming) would improve the exploration of large datasets.
---
## 5. System & Hardware Integration
* **Hardware Discovery:** `src/system/hardware.rs` is extensive, detecting NVIDIA, Intel, AMD, and Apple Silicon capabilities. It correctly maps these to `ffmpeg` encoder flags.
* **FS Browser:** A custom filesystem browser (`src/system/fs_browser.rs`) allows for secure directory selection during setup, preventing path injection and ensuring platform-agnostic path handling.
---
## 6. Critical Areas for Improvement
### **Maintainability (High Priority)**
* **Decouple `JobManager.tsx`:** Refactor into functional hooks (`useJobs`, `useFilters`) and smaller, presentation-only components.
* **Standardize Formatters:** Move `formatBytes`, `formatTime`, and `formatReduction` into a centralized `lib/formatters.ts` to reduce code duplication across the Dashboard and Stats pages.
### **UX & Performance (Medium Priority)**
* **Polling vs. SSE:** Ensure all real-time metrics (like GPU temperature) are delivered via SSE rather than periodic polling to reduce backend load and improve UI responsiveness.
* **Interactive Decision Explanations:** The current skip reasons are helpful but static. Adding links to the relevant settings (e.g., "Change this threshold in Transcoding Settings") would close the loop for users.
### **Reliability (Low Priority)**
* **E2E Testing:** While Playwright tests exist, they focus on reliability flows. Expanding them to cover complex edge cases (like network-attached storage disconnects during a scan) would improve long-term stability.
---
## 7. Stitch Recommendation
Use Stitch to generate **atomic component refinements** based on this audit.
* *Prompt Example:* "Refine the JobTable row to use iconic status indicators with tooltips for skip reasons, as outlined in the Alchemist Audit."
* *Prompt Example:* "Create a unified `Formatter` utility library in TypeScript that handles bytes, time, and percentage formatting for the Helios design system."


@@ -317,7 +317,6 @@ where
selection_reason: String::new(),
probe_summary: alchemist::system::hardware::ProbeSummary::default(),
})),
Arc::new(broadcast::channel(16).0),
event_channels,
false,
);


@@ -49,6 +49,12 @@ async fn v0_2_5_fixture_upgrades_and_preserves_core_state() -> Result<()> {
let notifications = db.get_notification_targets().await?;
assert_eq!(notifications.len(), 1);
assert_eq!(notifications[0].target_type, "discord_webhook");
let notification_config: serde_json::Value =
serde_json::from_str(&notifications[0].config_json)?;
assert_eq!(
notification_config["webhook_url"].as_str(),
Some("https://discord.invalid/webhook")
);
let schedule_windows = db.get_schedule_windows().await?;
assert_eq!(schedule_windows.len(), 1);
@@ -101,7 +107,7 @@ async fn v0_2_5_fixture_upgrades_and_preserves_core_state() -> Result<()> {
.fetch_one(&pool)
.await?
.get("value");
assert_eq!(schema_version, "8");
assert_eq!(schema_version, "9");
let min_compatible_version: String =
sqlx::query("SELECT value FROM schema_info WHERE key = 'min_compatible_version'")
@@ -153,6 +159,45 @@ async fn v0_2_5_fixture_upgrades_and_preserves_core_state() -> Result<()> {
.get("count");
assert_eq!(job_failure_explanations_exists, 1);
let notification_columns = sqlx::query("PRAGMA table_info(notification_targets)")
.fetch_all(&pool)
.await?
.into_iter()
.map(|row| row.get::<String, _>("name"))
.collect::<Vec<_>>();
assert!(
notification_columns
.iter()
.any(|name| name == "endpoint_url")
);
assert!(notification_columns.iter().any(|name| name == "auth_token"));
assert!(
notification_columns
.iter()
.any(|name| name == "target_type_v2")
);
assert!(
notification_columns
.iter()
.any(|name| name == "config_json")
);
let resume_sessions_exists: i64 = sqlx::query(
"SELECT COUNT(*) as count FROM sqlite_master WHERE type = 'table' AND name = 'job_resume_sessions'",
)
.fetch_one(&pool)
.await?
.get("count");
assert_eq!(resume_sessions_exists, 1);
let resume_segments_exists: i64 = sqlx::query(
"SELECT COUNT(*) as count FROM sqlite_master WHERE type = 'table' AND name = 'job_resume_segments'",
)
.fetch_one(&pool)
.await?
.get("count");
assert_eq!(resume_segments_exists, 1);
pool.close().await;
drop(db);
let _ = fs::remove_file(&db_path);


@@ -43,6 +43,20 @@ fn ffmpeg_ready() -> bool {
ffmpeg_available() && ffprobe_available()
}
fn ffmpeg_has_encoder(name: &str) -> bool {
Command::new("ffmpeg")
.args(["-hide_banner", "-encoders"])
.output()
.ok()
.map(|output| {
output.status.success()
&& String::from_utf8_lossy(&output.stdout)
.lines()
.any(|line| line.contains(name))
})
.unwrap_or(false)
}
/// Get the path to test fixtures
fn fixtures_path() -> PathBuf {
let mut path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
@@ -68,6 +82,75 @@ fn cleanup_temp_dir(path: &Path) {
let _ = std::fs::remove_dir_all(path);
}
#[tokio::test]
async fn amd_vaapi_smoke_test_is_hardware_gated() -> Result<()> {
let Some(device_path) = std::env::var("ALCHEMIST_TEST_AMD_VAAPI_DEVICE").ok() else {
println!("Skipping test: ALCHEMIST_TEST_AMD_VAAPI_DEVICE not set");
return Ok(());
};
if !ffmpeg_available() || !ffmpeg_has_encoder("h264_vaapi") {
println!("Skipping test: ffmpeg or h264_vaapi encoder not available");
return Ok(());
}
let status = Command::new("ffmpeg")
.args([
"-hide_banner",
"-loglevel",
"error",
"-vaapi_device",
&device_path,
"-f",
"lavfi",
"-i",
"testsrc=size=64x64:rate=1:d=1",
"-vf",
"format=nv12,hwupload",
"-c:v",
"h264_vaapi",
"-f",
"null",
"-",
])
.status()?;
assert!(
status.success(),
"expected VAAPI smoke transcode to succeed"
);
Ok(())
}
#[tokio::test]
async fn amd_amf_smoke_test_is_hardware_gated() -> Result<()> {
if std::env::var("ALCHEMIST_TEST_AMD_AMF").ok().as_deref() != Some("1") {
println!("Skipping test: ALCHEMIST_TEST_AMD_AMF not set");
return Ok(());
}
if !ffmpeg_available() || !ffmpeg_has_encoder("h264_amf") {
println!("Skipping test: ffmpeg or h264_amf encoder not available");
return Ok(());
}
let status = Command::new("ffmpeg")
.args([
"-hide_banner",
"-loglevel",
"error",
"-f",
"lavfi",
"-i",
"testsrc=size=64x64:rate=1:d=1",
"-c:v",
"h264_amf",
"-f",
"null",
"-",
])
.status()?;
assert!(status.success(), "expected AMF smoke transcode to succeed");
Ok(())
}
/// Create a test database
async fn create_test_db() -> Result<(Arc<Db>, PathBuf)> {
let mut db_path = std::env::temp_dir();
@@ -120,7 +203,6 @@ where
selection_reason: String::new(),
probe_summary: alchemist::system::hardware::ProbeSummary::default(),
})),
Arc::new(broadcast::channel(16).0),
event_channels,
false,
);


@@ -1,6 +1,6 @@
{
"name": "alchemist-web-e2e",
"version": "0.3.1-rc.1",
"version": "0.3.1-rc.5",
"private": true,
"packageManager": "bun@1",
"type": "module",
@@ -8,7 +8,7 @@
"test": "playwright test",
"test:headed": "playwright test --headed",
"test:ui": "playwright test --ui",
"test:reliability": "playwright test tests/settings-nonok.spec.ts tests/setup-recovery.spec.ts tests/setup-happy-path.spec.ts tests/new-user-redirect.spec.ts tests/stats-poller.spec.ts tests/jobs-actions-nonok.spec.ts tests/jobs-stability.spec.ts tests/library-intake-stability.spec.ts"
"test:reliability": "playwright test tests/settings-nonok.spec.ts tests/setup-recovery.spec.ts tests/setup-happy-path.spec.ts tests/new-user-redirect.spec.ts tests/stats-poller.spec.ts tests/jobs-actions-nonok.spec.ts tests/jobs-stability.spec.ts tests/library-intake-stability.spec.ts tests/intelligence-actions.spec.ts"
},
"devDependencies": {
"@playwright/test": "^1.54.2"


@@ -37,10 +37,10 @@ export default defineConfig({
],
webServer: {
command:
"sh -c 'mkdir -p .runtime/media && cd .. && (cd web && bun install --frozen-lockfile && bun run build) && if [ -x ./target/debug/alchemist ]; then ./target/debug/alchemist --reset-auth; else cargo run --locked --no-default-features -- --reset-auth; fi'",
"sh -c 'mkdir -p .runtime/media && rm -f .runtime/alchemist.db .runtime/alchemist.db-wal .runtime/alchemist.db-shm && cd .. && (cd web && bun install --frozen-lockfile && bun run build) && if [ -x ./target/debug/alchemist ]; then ./target/debug/alchemist --reset-auth; else cargo run --locked --no-default-features -- --reset-auth; fi'",
url: `${BASE_URL}/api/health`,
reuseExistingServer: false,
timeout: 120_000,
timeout: 300_000,
env: {
ALCHEMIST_CONFIG_PATH: CONFIG_PATH,
ALCHEMIST_DB_PATH: DB_PATH,


@@ -0,0 +1,102 @@
import { expect, test } from "@playwright/test";
import {
createEngineMode,
createEngineStatus,
fulfillJson,
mockDashboardData,
} from "./helpers";
test.use({ storageState: undefined });
test.beforeEach(async ({ page }) => {
await mockDashboardData(page);
await page.route("**/api/engine/mode", async (route) => {
await fulfillJson(route, 200, createEngineMode());
});
});
test("pause then resume transitions engine state correctly", async ({ page }) => {
let engineStatus = createEngineStatus({ status: "running", manual_paused: false });
let pauseCalls = 0;
let resumeCalls = 0;
await page.route("**/api/engine/status", async (route) => {
await fulfillJson(route, 200, engineStatus);
});
await page.route("**/api/engine/pause", async (route) => {
pauseCalls += 1;
engineStatus = createEngineStatus({ status: "paused", manual_paused: true });
await fulfillJson(route, 200, { status: "paused" });
});
await page.route("**/api/engine/resume", async (route) => {
resumeCalls += 1;
engineStatus = createEngineStatus({ status: "running", manual_paused: false });
await fulfillJson(route, 200, { status: "running" });
});
await page.goto("/settings?tab=system");
await page.getByRole("button", { name: "Pause" }).click();
await expect.poll(() => pauseCalls).toBe(1);
await page.getByRole("button", { name: "Start" }).click();
await expect.poll(() => resumeCalls).toBe(1);
});
test("drain transitions to draining state and cancel-stop reverts it", async ({ page }) => {
let engineStatus = createEngineStatus({ status: "running", manual_paused: false });
let drainCalls = 0;
let stopDrainCalls = 0;
await page.route("**/api/engine/status", async (route) => {
await fulfillJson(route, 200, engineStatus);
});
await page.route("**/api/engine/drain", async (route) => {
drainCalls += 1;
engineStatus = createEngineStatus({
status: "draining",
manual_paused: false,
draining: true,
});
await fulfillJson(route, 200, { status: "draining" });
});
await page.route("**/api/engine/stop-drain", async (route) => {
stopDrainCalls += 1;
engineStatus = createEngineStatus({ status: "running", manual_paused: false });
await fulfillJson(route, 200, { status: "running" });
});
await page.goto("/");
await page.getByRole("button", { name: "Stop" }).click();
await expect.poll(() => drainCalls).toBe(1);
await expect(page.getByText("Stopping", { exact: true })).toBeVisible();
await expect.poll(() => stopDrainCalls).toBe(0);
});
test("engine restart endpoint is called and status returns to running", async ({ page }) => {
let engineStatus = createEngineStatus({ status: "running", manual_paused: false });
let restartCalls = 0;
await page.route("**/api/engine/status", async (route) => {
await fulfillJson(route, 200, engineStatus);
});
await page.route("**/api/engine/restart", async (route) => {
restartCalls += 1;
engineStatus = createEngineStatus({ status: "running", manual_paused: false });
await fulfillJson(route, 200, { status: "running" });
});
await page.goto("/");
const result = await page.evaluate(async () => {
const res = await fetch("/api/engine/restart", { method: "POST" });
const body = await res.json() as { status: string };
return { status: res.status, body };
});
expect(restartCalls).toBe(1);
expect(result.status).toBe(200);
expect(result.body.status).toBe("running");
});


@@ -159,6 +159,18 @@ export interface JobDetailFixture {
message: string;
created_at: string;
}>;
encode_attempts?: Array<{
id: number;
attempt_number: number;
started_at: string | null;
finished_at: string;
outcome: "completed" | "failed" | "cancelled";
failure_code: string | null;
failure_summary: string | null;
input_size_bytes: number | null;
output_size_bytes: number | null;
encode_time_seconds: number | null;
}>;
job_failure_summary?: string;
decision_explanation?: ExplanationFixture | null;
failure_explanation?: ExplanationFixture | null;


@@ -0,0 +1,118 @@
import { expect, test } from "@playwright/test";
import {
type JobDetailFixture,
fulfillJson,
mockEngineStatus,
mockJobDetails,
} from "./helpers";
const completedDetail: JobDetailFixture = {
job: {
id: 51,
input_path: "/media/duplicates/movie-copy-1.mkv",
output_path: "/output/movie-copy-1-av1.mkv",
status: "completed",
priority: 0,
progress: 100,
created_at: "2025-01-01T00:00:00Z",
updated_at: "2025-01-02T00:00:00Z",
vmaf_score: 95.1,
},
metadata: {
duration_secs: 120,
codec_name: "hevc",
width: 1920,
height: 1080,
bit_depth: 10,
size_bytes: 2_000_000_000,
video_bitrate_bps: 12_000_000,
container_bitrate_bps: 12_500_000,
fps: 24,
container: "mkv",
audio_codec: "aac",
audio_channels: 2,
dynamic_range: "hdr10",
},
encode_stats: {
input_size_bytes: 2_000_000_000,
output_size_bytes: 900_000_000,
compression_ratio: 0.55,
encode_time_seconds: 1800,
encode_speed: 1.6,
avg_bitrate_kbps: 6000,
vmaf_score: 95.1,
},
job_logs: [],
};
test.use({ storageState: undefined });
test.beforeEach(async ({ page }) => {
await mockEngineStatus(page);
});
test("intelligence actions queue remux opportunities and review duplicate jobs", async ({
page,
}) => {
let enqueueCount = 0;
await page.route("**/api/library/intelligence", async (route) => {
await fulfillJson(route, 200, {
duplicate_groups: [
{
stem: "movie-copy",
count: 2,
paths: [
{ id: 51, path: "/media/duplicates/movie-copy-1.mkv", status: "completed" },
{ id: 52, path: "/media/duplicates/movie-copy-2.mkv", status: "queued" },
],
},
],
total_duplicates: 1,
recommendation_counts: {
duplicates: 1,
remux_only_candidate: 2,
wasteful_audio_layout: 0,
commentary_cleanup_candidate: 0,
},
recommendations: [
{
type: "remux_only_candidate",
title: "Remux movie one",
summary: "The file can be normalized with a container-only remux.",
path: "/media/remux/movie-one.mkv",
suggested_action: "Queue a remux to normalize the container without re-encoding the video stream.",
},
{
type: "remux_only_candidate",
title: "Remux movie two",
summary: "The file can be normalized with a container-only remux.",
path: "/media/remux/movie-two.mkv",
suggested_action: "Queue a remux to normalize the container without re-encoding the video stream.",
},
],
});
});
await page.route("**/api/jobs/enqueue", async (route) => {
enqueueCount += 1;
const body = route.request().postDataJSON() as { path: string };
await fulfillJson(route, 200, {
enqueued: true,
message: `Enqueued ${body.path}.`,
});
});
await mockJobDetails(page, { 51: completedDetail });
await page.goto("/intelligence");
await page.getByRole("button", { name: "Queue all" }).click();
await expect.poll(() => enqueueCount).toBe(2);
await expect(
page.getByText("Queue all finished: 2 enqueued, 0 skipped, 0 failed.").first(),
).toBeVisible();
await page.getByRole("button", { name: "Review" }).first().click();
await expect(page.getByRole("dialog")).toBeVisible();
await expect(page.getByText("Encode Results")).toBeVisible();
await expect(page.getByRole("dialog").getByText("/media/duplicates/movie-copy-1.mkv")).toBeVisible();
});
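These specs lean on a shared `fulfillJson` helper that is referenced but never defined in this diff. A minimal sketch of what such a helper likely looks like — the `RouteLike` shape is a simplification of Playwright's `Route`, assumed here purely for illustration:

```typescript
// Hypothetical sketch of the `fulfillJson` test helper: fulfill an
// intercepted route with a JSON body, status code, and content type.
type RouteLike = {
  fulfill(opts: { status: number; contentType: string; body: string }): Promise<void>;
};

async function fulfillJson(route: RouteLike, status: number, payload: unknown): Promise<void> {
  await route.fulfill({
    status,
    contentType: "application/json",
    body: JSON.stringify(payload),
  });
}
```

In the real suite, `route` would be the Playwright `Route` object passed to a `page.route` handler.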

View File

@@ -19,6 +19,17 @@ const completedJob: JobFixture = {
vmaf_score: 95.4,
};
const queuedJob: JobFixture = {
id: 44,
input_path: "/media/queued-blocked.mkv",
output_path: "/output/queued-blocked-av1.mkv",
status: "queued",
priority: 0,
progress: 0,
created_at: "2025-01-01T00:00:00Z",
updated_at: "2025-01-02T00:00:00Z",
};
const completedDetail: JobDetailFixture = {
job: completedJob,
metadata: {
@@ -183,3 +194,57 @@ test("failed job detail prefers structured failure explanation", async ({ page }
await expect(page.getByText("Structured failure detail from the backend.")).toBeVisible();
await expect(page.getByText("Structured failure guidance from the backend.")).toBeVisible();
});
test("queued job detail shows the processor blocked reason", async ({ page }) => {
await page.route("**/api/jobs/table**", async (route) => {
await fulfillJson(route, 200, [queuedJob]);
});
await mockJobDetails(page, {
44: {
job: queuedJob,
job_logs: [],
queue_position: 3,
},
});
await page.route("**/api/processor/status", async (route) => {
await fulfillJson(route, 200, {
blocked_reason: "workers_busy",
message: "All worker slots are currently busy.",
manual_paused: false,
scheduler_paused: false,
draining: false,
active_jobs: 1,
concurrent_limit: 1,
});
});
await page.goto("/jobs");
await page.getByTitle("/media/queued-blocked.mkv").click();
await expect(page.getByText("Queue position:")).toBeVisible();
await expect(page.getByText("Blocked:")).toBeVisible();
await expect(page.getByText("All worker slots are currently busy.")).toBeVisible();
});
test("add file submits the enqueue request and surfaces the response", async ({ page }) => {
let postedPath = "";
await page.route("**/api/jobs/table**", async (route) => {
await fulfillJson(route, 200, []);
});
await page.route("**/api/jobs/enqueue", async (route) => {
const body = route.request().postDataJSON() as { path: string };
postedPath = body.path;
await fulfillJson(route, 200, {
enqueued: true,
message: `Enqueued ${body.path}.`,
});
});
await page.goto("/jobs");
await page.getByRole("button", { name: "Add file" }).click();
await page.getByPlaceholder("/Volumes/Media/Movies/example.mkv").fill("/media/manual-add.mkv");
await page.getByRole("dialog").getByRole("button", { name: "Add File", exact: true }).click();
await expect.poll(() => postedPath).toBe("/media/manual-add.mkv");
await expect(page.getByText("Enqueued /media/manual-add.mkv.").first()).toBeVisible();
});

View File

@@ -142,7 +142,7 @@ test("search requests are debounced and failed job details show summary and logs
await mockJobDetails(page, { 2: failedDetail });
await page.goto("/jobs");
await page.getByPlaceholder("Search files...").fill("failed");
await page.getByPlaceholder("Search files...").first().fill("failed");
await expect
.poll(() => requests.some((url) => url.searchParams.get("search") === "failed"))
@@ -286,7 +286,7 @@ test("queued job with no metadata shows waiting for analysis placeholder", async
await page.getByTitle("/media/queued.mkv").click();
await expect(page.getByRole("dialog")).toBeVisible();
await expect(page.getByText("Waiting for analysis")).toBeVisible();
await expect(page.getByText("Waiting in queue")).toBeVisible();
await expect(page.getByText("Unknown bit depth")).not.toBeVisible();
});
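The hunk above exercises the search box's debounce behaviour, but the app's actual debounce implementation is not shown in this diff. A generic trailing-edge debounce looks roughly like this — illustrative only, not Alchemist's code:

```typescript
// Trailing-edge debounce: only the last call within `waitMs` fires.
function debounce<A extends unknown[]>(fn: (...args: A) => void, waitMs: number): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    // Each new call cancels the pending one and restarts the wait.
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

This is why the spec can type several characters and still assert that only the final search term reaches the `/api/jobs/table` request log.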

View File

@@ -162,7 +162,7 @@ test("notification targets can be added, tested, and removed", async ({ page })
await expect(page.getByText("Test notification sent.").first()).toBeVisible();
expect(testPayload).toMatchObject({
name: "Playwright Target",
target_type: "discord",
target_type: "discord_webhook",
});
await page.getByLabel("Delete notification target Playwright Target").click();

View File

@@ -1,6 +1,6 @@
{
"name": "alchemist-web",
"version": "0.3.1-rc.1",
"version": "0.3.1-rc.5",
"private": true,
"packageManager": "bun@1",
"type": "module",

View File

@@ -5,18 +5,18 @@ import { apiJson, isApiError } from "../lib/api";
import { showToast } from "../lib/toast";
interface SystemInfo {
version: string;
os_version: string;
is_docker: boolean;
telemetry_enabled: boolean;
ffmpeg_version: string;
is_docker: boolean;
os_version: string;
telemetry_enabled: boolean;
version: string;
}
interface UpdateInfo {
current_version: string;
latest_version: string | null;
update_available: boolean;
release_url: string | null;
update_available: boolean;
}
interface AboutDialogProps {

View File

@@ -1,6 +1,5 @@
import { useEffect } from "react";
import { apiFetch, apiJson } from "../lib/api";
import { stripBasePath, withBasePath } from "../lib/basePath";
interface SetupStatus {
setup_required?: boolean;
@@ -11,7 +10,7 @@ export default function AuthGuard() {
let cancelled = false;
const checkAuth = async () => {
const path = stripBasePath(window.location.pathname);
const path = window.location.pathname;
const isAuthPage = path.startsWith("/login") || path.startsWith("/setup");
if (isAuthPage) {
return;
@@ -28,9 +27,7 @@ export default function AuthGuard() {
return;
}
window.location.href = setupStatus.setup_required
? withBasePath("/setup")
: withBasePath("/login");
window.location.href = setupStatus.setup_required ? "/setup" : "/login";
} catch {
// Keep user on current page on transient backend/network failures.
}

View File

@@ -1,7 +1,6 @@
import { useEffect, useState } from "react";
import { Upload, Wand2, Play, Download, Trash2 } from "lucide-react";
import { apiAction, apiFetch, apiJson, isApiError } from "../lib/api";
import { withBasePath } from "../lib/basePath";
import { showToast } from "../lib/toast";
interface SubtitleStreamMetadata {
@@ -105,7 +104,7 @@ const DEFAULT_SETTINGS: ConversionSettings = {
},
};
export default function ConversionTool() {
export function ConversionTool() {
const [uploading, setUploading] = useState(false);
const [previewing, setPreviewing] = useState(false);
const [starting, setStarting] = useState(false);
@@ -121,13 +120,14 @@ export default function ConversionTool() {
const id = window.setInterval(() => {
void apiJson<JobStatusResponse>(`/api/conversion/jobs/${conversionJobId}`)
.then(setStatus)
.catch(() => {});
.catch(() => {
});
}, 2000);
return () => window.clearInterval(id);
}, [conversionJobId]);
const updateSettings = (patch: Partial<ConversionSettings>) => {
setSettings((current) => ({ ...current, ...patch }));
setSettings((current) => ({...current, ...patch}));
};
const uploadFile = async (file: File) => {
@@ -157,7 +157,7 @@ export default function ConversionTool() {
} catch (err) {
const message = err instanceof Error ? err.message : "Upload failed";
setError(message);
showToast({ kind: "error", title: "Conversion", message });
showToast({kind: "error", title: "Conversion", message});
} finally {
setUploading(false);
}
@@ -169,7 +169,7 @@ export default function ConversionTool() {
try {
const payload = await apiJson<PreviewResponse>("/api/conversion/preview", {
method: "POST",
headers: { "Content-Type": "application/json" },
headers: {"Content-Type": "application/json"},
body: JSON.stringify({
conversion_job_id: conversionJobId,
settings,
@@ -177,11 +177,11 @@ export default function ConversionTool() {
});
setSettings(payload.normalized_settings);
setCommandPreview(payload.command_preview);
showToast({ kind: "success", title: "Conversion", message: "Preview updated." });
showToast({kind: "success", title: "Conversion", message: "Preview updated."});
} catch (err) {
const message = isApiError(err) ? err.message : "Preview failed";
setError(message);
showToast({ kind: "error", title: "Conversion", message });
showToast({kind: "error", title: "Conversion", message});
} finally {
setPreviewing(false);
}
@@ -191,14 +191,14 @@ export default function ConversionTool() {
if (!conversionJobId) return;
setStarting(true);
try {
await apiAction(`/api/conversion/jobs/${conversionJobId}/start`, { method: "POST" });
await apiAction(`/api/conversion/jobs/${conversionJobId}/start`, {method: "POST"});
const payload = await apiJson<JobStatusResponse>(`/api/conversion/jobs/${conversionJobId}`);
setStatus(payload);
showToast({ kind: "success", title: "Conversion", message: "Conversion job queued." });
showToast({kind: "success", title: "Conversion", message: "Conversion job queued."});
} catch (err) {
const message = isApiError(err) ? err.message : "Failed to start conversion";
setError(message);
showToast({ kind: "error", title: "Conversion", message });
showToast({kind: "error", title: "Conversion", message});
} finally {
setStarting(false);
}
@@ -207,23 +207,23 @@ export default function ConversionTool() {
const remove = async () => {
if (!conversionJobId) return;
try {
await apiAction(`/api/conversion/jobs/${conversionJobId}`, { method: "DELETE" });
await apiAction(`/api/conversion/jobs/${conversionJobId}`, {method: "DELETE"});
setConversionJobId(null);
setProbe(null);
setStatus(null);
setSettings(DEFAULT_SETTINGS);
setCommandPreview("");
showToast({ kind: "success", title: "Conversion", message: "Conversion job removed." });
showToast({kind: "success", title: "Conversion", message: "Conversion job removed."});
} catch (err) {
const message = isApiError(err) ? err.message : "Failed to remove conversion job";
setError(message);
showToast({ kind: "error", title: "Conversion", message });
showToast({kind: "error", title: "Conversion", message});
}
};
const download = async () => {
if (!conversionJobId) return;
window.location.href = withBasePath(`/api/conversion/jobs/${conversionJobId}/download`);
window.location.href = `/api/conversion/jobs/${conversionJobId}/download`;
};
return (
@@ -231,22 +231,26 @@ export default function ConversionTool() {
<div>
<h1 className="text-xl font-bold text-helios-ink">Conversion / Remux</h1>
<p className="mt-1 text-sm text-helios-slate">
Upload a single file, inspect the streams, preview the generated FFmpeg command, and run it through Alchemist.
Upload a single file, inspect the streams, preview the generated FFmpeg command, and run it through
Alchemist.
</p>
</div>
{error && (
<div className="rounded-lg border border-status-error/20 bg-status-error/10 px-4 py-3 text-sm text-status-error">
<div
className="rounded-lg border border-status-error/20 bg-status-error/10 px-4 py-3 text-sm text-status-error">
{error}
</div>
)}
{!probe && (
<label className="flex flex-col items-center justify-center gap-3 rounded-xl border border-dashed border-helios-line/30 bg-helios-surface p-10 text-center cursor-pointer hover:bg-helios-surface-soft transition-colors">
<Upload size={28} className="text-helios-solar" />
<label
className="flex flex-col items-center justify-center gap-3 rounded-xl border border-dashed border-helios-line/30 bg-helios-surface p-10 text-center cursor-pointer hover:bg-helios-surface-soft transition-colors">
<Upload size={28} className="text-helios-solar"/>
<div>
<p className="text-sm font-semibold text-helios-ink">Upload a source file</p>
<p className="text-xs text-helios-slate mt-1">The uploaded file is stored temporarily under Alchemist-managed temp storage.</p>
<p className="text-xs text-helios-slate mt-1">Choose a few options here to
convert or remux a video file.</p>
</div>
<input
type="file"
@@ -270,16 +274,18 @@ export default function ConversionTool() {
<section className="rounded-xl border border-helios-line/20 bg-helios-surface p-5 space-y-4">
<h2 className="text-sm font-semibold text-helios-ink">Input</h2>
<div className="grid gap-3 md:grid-cols-4 text-sm">
<Stat label="Container" value={probe.metadata.container} />
<Stat label="Video" value={probe.metadata.codec_name} />
<Stat label="Resolution" value={`${probe.metadata.width}x${probe.metadata.height}`} />
<Stat label="Dynamic Range" value={probe.metadata.dynamic_range} />
<Stat label="Container" value={probe.metadata.container}/>
<Stat label="Video" value={probe.metadata.codec_name}/>
<Stat label="Resolution" value={`${probe.metadata.width}x${probe.metadata.height}`}/>
<Stat label="Dynamic Range" value={probe.metadata.dynamic_range}/>
</div>
</section>
<section className="rounded-xl border border-helios-line/20 bg-helios-surface p-5 space-y-4">
<h2 className="text-sm font-semibold text-helios-ink">Output Container</h2>
<select value={settings.output_container} onChange={(event) => updateSettings({ output_container: event.target.value })} className="w-full md:w-60 bg-helios-surface-soft border border-helios-line/20 rounded p-2 text-sm text-helios-ink">
<select value={settings.output_container}
onChange={(event) => updateSettings({output_container: event.target.value})}
className="w-full md:w-60 bg-helios-surface-soft border border-helios-line/20 rounded p-2 text-sm text-helios-ink">
{["mkv", "mp4", "webm", "mov"].map((option) => (
<option key={option} value={option}>{option.toUpperCase()}</option>
))}
@@ -293,7 +299,7 @@ export default function ConversionTool() {
<input
type="checkbox"
checked={settings.remux_only}
onChange={(event) => updateSettings({ remux_only: event.target.checked })}
onChange={(event) => updateSettings({remux_only: event.target.checked})}
/>
Remux only
</label>
@@ -311,41 +317,59 @@ export default function ConversionTool() {
value={settings.video.codec}
disabled={settings.remux_only}
options={["copy", "h264", "hevc", "av1"]}
onChange={(value) => setSettings((current) => ({ ...current, video: { ...current.video, codec: value } }))}
onChange={(value) => setSettings((current) => ({
...current,
video: {...current.video, codec: value}
}))}
/>
<SelectField
label="Mode"
value={settings.video.mode}
disabled={settings.remux_only || settings.video.codec === "copy"}
options={["crf", "bitrate"]}
onChange={(value) => setSettings((current) => ({ ...current, video: { ...current.video, mode: value } }))}
onChange={(value) => setSettings((current) => ({
...current,
video: {...current.video, mode: value}
}))}
/>
<NumberField
label={settings.video.mode === "bitrate" ? "Bitrate (kbps)" : "Quality Value"}
value={settings.video.value ?? 0}
disabled={settings.remux_only || settings.video.codec === "copy"}
onChange={(value) => setSettings((current) => ({ ...current, video: { ...current.video, value } }))}
onChange={(value) => setSettings((current) => ({
...current,
video: {...current.video, value}
}))}
/>
<SelectField
label="Preset"
value={settings.video.preset ?? "medium"}
disabled={settings.remux_only || settings.video.codec === "copy"}
options={["ultrafast", "superfast", "veryfast", "faster", "fast", "medium", "slow", "slower", "veryslow"]}
onChange={(value) => setSettings((current) => ({ ...current, video: { ...current.video, preset: value } }))}
onChange={(value) => setSettings((current) => ({
...current,
video: {...current.video, preset: value}
}))}
/>
<SelectField
label="Resolution Mode"
value={settings.video.resolution.mode}
disabled={settings.remux_only || settings.video.codec === "copy"}
options={["original", "custom", "scale_factor"]}
onChange={(value) => setSettings((current) => ({ ...current, video: { ...current.video, resolution: { ...current.video.resolution, mode: value } } }))}
onChange={(value) => setSettings((current) => ({
...current,
video: {...current.video, resolution: {...current.video.resolution, mode: value}}
}))}
/>
<SelectField
label="HDR"
value={settings.video.hdr_mode}
disabled={settings.remux_only || settings.video.codec === "copy"}
options={["preserve", "tonemap", "strip_metadata"]}
onChange={(value) => setSettings((current) => ({ ...current, video: { ...current.video, hdr_mode: value } }))}
onChange={(value) => setSettings((current) => ({
...current,
video: {...current.video, hdr_mode: value}
}))}
/>
{settings.video.resolution.mode === "custom" && (
<>
@@ -353,13 +377,25 @@ export default function ConversionTool() {
label="Width"
value={settings.video.resolution.width ?? probe.metadata.width}
disabled={settings.remux_only || settings.video.codec === "copy"}
onChange={(value) => setSettings((current) => ({ ...current, video: { ...current.video, resolution: { ...current.video.resolution, width: value } } }))}
onChange={(value) => setSettings((current) => ({
...current,
video: {
...current.video,
resolution: {...current.video.resolution, width: value}
}
}))}
/>
<NumberField
label="Height"
value={settings.video.resolution.height ?? probe.metadata.height}
disabled={settings.remux_only || settings.video.codec === "copy"}
onChange={(value) => setSettings((current) => ({ ...current, video: { ...current.video, resolution: { ...current.video.resolution, height: value } } }))}
onChange={(value) => setSettings((current) => ({
...current,
video: {
...current.video,
resolution: {...current.video.resolution, height: value}
}
}))}
/>
</>
)}
@@ -369,7 +405,13 @@ export default function ConversionTool() {
value={settings.video.resolution.scale_factor ?? 1}
disabled={settings.remux_only || settings.video.codec === "copy"}
step="0.1"
onChange={(value) => setSettings((current) => ({ ...current, video: { ...current.video, resolution: { ...current.video.resolution, scale_factor: value } } }))}
onChange={(value) => setSettings((current) => ({
...current,
video: {
...current.video,
resolution: {...current.video.resolution, scale_factor: value}
}
}))}
/>
)}
</div>
@@ -383,20 +425,29 @@ export default function ConversionTool() {
value={settings.audio.codec}
disabled={settings.remux_only}
options={["copy", "aac", "opus", "mp3"]}
onChange={(value) => setSettings((current) => ({ ...current, audio: { ...current.audio, codec: value } }))}
onChange={(value) => setSettings((current) => ({
...current,
audio: {...current.audio, codec: value}
}))}
/>
<NumberField
label="Bitrate (kbps)"
value={settings.audio.bitrate_kbps ?? 160}
disabled={settings.remux_only || settings.audio.codec === "copy"}
onChange={(value) => setSettings((current) => ({ ...current, audio: { ...current.audio, bitrate_kbps: value } }))}
onChange={(value) => setSettings((current) => ({
...current,
audio: {...current.audio, bitrate_kbps: value}
}))}
/>
<SelectField
label="Channels"
value={settings.audio.channels ?? "auto"}
disabled={settings.remux_only || settings.audio.codec === "copy"}
options={["auto", "stereo", "5.1"]}
onChange={(value) => setSettings((current) => ({ ...current, audio: { ...current.audio, channels: value } }))}
onChange={(value) => setSettings((current) => ({
...current,
audio: {...current.audio, channels: value}
}))}
/>
</div>
</section>
@@ -408,31 +459,36 @@ export default function ConversionTool() {
value={settings.subtitles.mode}
disabled={settings.remux_only}
options={["copy", "burn", "remove"]}
onChange={(value) => setSettings((current) => ({ ...current, subtitles: { mode: value } }))}
onChange={(value) => setSettings((current) => ({...current, subtitles: {mode: value}}))}
/>
</section>
<section className="rounded-xl border border-helios-line/20 bg-helios-surface p-5 space-y-4">
<div className="flex flex-wrap gap-3">
<button onClick={() => void preview()} disabled={previewing} className="flex items-center gap-2 rounded-lg bg-helios-solar px-4 py-2 text-sm font-bold text-helios-main">
<Wand2 size={16} />
<button onClick={() => void preview()} disabled={previewing}
className="flex items-center gap-2 rounded-lg bg-helios-solar px-4 py-2 text-sm font-bold text-helios-main">
<Wand2 size={16}/>
{previewing ? "Previewing..." : "Preview Command"}
</button>
<button onClick={() => void start()} disabled={starting || !commandPreview} className="flex items-center gap-2 rounded-lg border border-helios-line/20 px-4 py-2 text-sm font-semibold text-helios-ink">
<Play size={16} />
<button onClick={() => void start()} disabled={starting || !commandPreview}
className="flex items-center gap-2 rounded-lg border border-helios-line/20 px-4 py-2 text-sm font-semibold text-helios-ink">
<Play size={16}/>
{starting ? "Starting..." : "Start Job"}
</button>
<button onClick={() => void download()} disabled={!status?.download_ready} className="flex items-center gap-2 rounded-lg border border-helios-line/20 px-4 py-2 text-sm font-semibold text-helios-ink disabled:opacity-50">
<Download size={16} />
<button onClick={() => void download()} disabled={!status?.download_ready}
className="flex items-center gap-2 rounded-lg border border-helios-line/20 px-4 py-2 text-sm font-semibold text-helios-ink disabled:opacity-50">
<Download size={16}/>
Download Result
</button>
<button onClick={() => void remove()} className="flex items-center gap-2 rounded-lg border border-red-500/20 px-4 py-2 text-sm font-semibold text-red-500">
<Trash2 size={16} />
<button onClick={() => void remove()}
className="flex items-center gap-2 rounded-lg border border-red-500/20 px-4 py-2 text-sm font-semibold text-red-500">
<Trash2 size={16}/>
Remove
</button>
</div>
{commandPreview && (
<pre className="overflow-x-auto rounded-lg border border-helios-line/20 bg-helios-surface-soft p-4 text-xs text-helios-ink whitespace-pre-wrap">
<pre
className="overflow-x-auto rounded-lg border border-helios-line/20 bg-helios-surface-soft p-4 text-xs text-helios-ink whitespace-pre-wrap">
{commandPreview}
</pre>
)}
@@ -442,10 +498,11 @@ export default function ConversionTool() {
<section className="rounded-xl border border-helios-line/20 bg-helios-surface p-5 space-y-3">
<h2 className="text-sm font-semibold text-helios-ink">Status</h2>
<div className="grid gap-3 md:grid-cols-4 text-sm">
<Stat label="State" value={status.status} />
<Stat label="Progress" value={`${status.progress.toFixed(1)}%`} />
<Stat label="Linked Job" value={status.linked_job_id ? `#${status.linked_job_id}` : "None"} />
<Stat label="Download" value={status.download_ready ? "Ready" : "Pending"} />
<Stat label="State" value={status.status}/>
<Stat label="Progress" value={`${status.progress.toFixed(1)}%`}/>
<Stat label="Linked Job"
value={status.linked_job_id ? `#${status.linked_job_id}` : "None"}/>
<Stat label="Download" value={status.download_ready ? "Ready" : "Pending"}/>
</div>
</section>
)}

View File

@@ -9,7 +9,6 @@ import {
type LucideIcon,
} from "lucide-react";
import { apiJson, isApiError } from "../lib/api";
import { withBasePath } from "../lib/basePath";
import { useSharedStats } from "../lib/statsStore";
import { showToast } from "../lib/toast";
import ResourceMonitor from "./ResourceMonitor";
@@ -145,7 +144,7 @@ function Dashboard() {
}
if (setupComplete !== "true") {
window.location.href = withBasePath("/setup");
window.location.href = "/setup";
}
}
} catch {
@@ -233,7 +232,7 @@ function Dashboard() {
<Activity size={16} className="text-helios-solar" />
Recent Activity
</h3>
<a href={withBasePath("/jobs")} className="text-xs font-medium text-helios-solar hover:underline">
<a href="/jobs" className="text-xs font-medium text-helios-solar hover:underline">
View all
</a>
</div>
@@ -249,7 +248,7 @@ function Dashboard() {
<span className="text-sm text-helios-slate/60">
No recent activity.
</span>
<a href={withBasePath("/settings")} className="text-xs text-helios-solar hover:underline">
<a href="/settings" className="text-xs text-helios-solar hover:underline">
Add a library folder
</a>
</div>

View File

@@ -8,14 +8,14 @@ interface Props {
}
interface State {
hasError: boolean;
errorMessage: string;
errorMessage: string;
hasError: boolean;
}
export class ErrorBoundary extends Component<Props, State> {
public state: State = {
hasError: false,
errorMessage: "",
errorMessage: "",
hasError: false,
};
public static getDerivedStateFromError(error: Error): State {

View File

@@ -14,24 +14,24 @@ interface HardwareInfo {
failed: number;
};
backends?: Array<{
kind: string;
codec: string;
encoder: string;
device_path: string | null;
encoder: string;
kind: string;
}>;
detection_notes?: string[];
}
interface HardwareProbeEntry {
vendor: string;
codec: string;
encoder: string;
backend: string;
codec: string;
device_path: string | null;
success: boolean;
encoder: string;
selected: boolean;
summary: string;
stderr?: string | null;
success: boolean;
summary: string;
vendor: string;
}
interface HardwareProbeLog {
@@ -39,11 +39,11 @@ interface HardwareProbeLog {
}
interface HardwareSettings {
allow_cpu_fallback: boolean;
allow_cpu_encoding: boolean;
allow_cpu_fallback: boolean;
cpu_preset: string;
preferred_vendor: string | null;
device_path: string | null;
preferred_vendor: string | null;
}
export default function HardwareSettings() {

View File

@@ -3,7 +3,6 @@ import { Info, LogOut, Play, Square } from "lucide-react";
import { motion } from "framer-motion";
import AboutDialog from "./AboutDialog";
import { apiAction, apiJson } from "../lib/api";
import { withBasePath } from "../lib/basePath";
import { useSharedStats } from "../lib/statsStore";
import { showToast } from "../lib/toast";
@@ -40,15 +39,16 @@ export default function HeaderActions() {
labelColor: "text-helios-solar",
},
draining: {
dot: "bg-helios-slate animate-pulse",
dot: "bg-helios-solar animate-pulse",
label: "Stopping",
labelColor: "text-helios-slate",
labelColor: "text-helios-solar",
},
} as const;
const status = engineStatus?.status ?? "paused";
const isIdle = status === "running" && (stats?.active ?? 0) === 0;
const displayStatus: keyof typeof statusConfig = isIdle ? "idle" : status;
const displayStatus: keyof typeof statusConfig =
status === "draining" ? "draining" : isIdle ? "idle" : status;
const refreshEngineStatus = async () => {
const data = await apiJson<EngineStatus>("/api/engine/status");
@@ -147,7 +147,7 @@ export default function HeaderActions() {
message: "Logout request failed. Redirecting to login.",
});
} finally {
window.location.href = withBasePath("/login");
window.location.href = "/login";
}
};

File diff suppressed because it is too large

View File

@@ -1,7 +1,12 @@
import { useEffect, useState } from "react";
import { AlertTriangle, Copy, Sparkles } from "lucide-react";
import { useCallback, useEffect, useMemo, useState } from "react";
import { createPortal } from "react-dom";
import { AlertTriangle, Copy, Sparkles, Zap, Search } from "lucide-react";
import { apiJson, isApiError } from "../lib/api";
import { showToast } from "../lib/toast";
import ConfirmDialog from "./ui/ConfirmDialog";
import { JobDetailModal } from "./jobs/JobDetailModal";
import { getStatusBadge } from "./jobs/jobStatusBadge";
import { useJobDetailController } from "./jobs/useJobDetailController";
interface DuplicatePath {
id: number;
@@ -58,36 +63,98 @@ export default function LibraryIntelligence() {
const [data, setData] = useState<IntelligenceResponse | null>(null);
const [loading, setLoading] = useState(true);
const [error, setError] = useState<string | null>(null);
const [queueingRemux, setQueueingRemux] = useState(false);
useEffect(() => {
const fetch = async () => {
try {
const result = await apiJson<IntelligenceResponse>("/api/library/intelligence");
setData(result);
} catch (e) {
const message = isApiError(e) ? e.message : "Failed to load intelligence data.";
setError(message);
showToast({
kind: "error",
title: "Intelligence",
message,
});
} finally {
setLoading(false);
}
};
void fetch();
const fetchIntelligence = useCallback(async () => {
try {
const result = await apiJson<IntelligenceResponse>("/api/library/intelligence");
setData(result);
setError(null);
} catch (e) {
const message = isApiError(e) ? e.message : "Failed to load intelligence data.";
setError(message);
showToast({
kind: "error",
title: "Intelligence",
message,
});
} finally {
setLoading(false);
}
}, []);
const groupedRecommendations = data?.recommendations.reduce<Record<string, IntelligenceRecommendation[]>>(
(groups, recommendation) => {
groups[recommendation.type] ??= [];
groups[recommendation.type].push(recommendation);
return groups;
},
{},
) ?? {};
const {
focusedJob,
detailLoading,
confirmState,
detailDialogRef,
openJobDetails,
handleAction,
handlePriority,
openConfirm,
setConfirmState,
closeJobDetails,
focusedDecision,
focusedFailure,
focusedJobLogs,
shouldShowFfmpegOutput,
completedEncodeStats,
focusedEmptyState,
} = useJobDetailController({
onRefresh: fetchIntelligence,
});
useEffect(() => {
void fetchIntelligence();
}, [fetchIntelligence]);
const groupedRecommendations = useMemo(
() => data?.recommendations.reduce<Record<string, IntelligenceRecommendation[]>>(
(groups, recommendation) => {
groups[recommendation.type] ??= [];
groups[recommendation.type].push(recommendation);
return groups;
},
{},
) ?? {},
[data],
);
const handleQueueAllRemux = async () => {
const remuxPaths = groupedRecommendations.remux_only_candidate ?? [];
if (remuxPaths.length === 0) {
return;
}
setQueueingRemux(true);
let enqueued = 0;
let skipped = 0;
let failed = 0;
for (const recommendation of remuxPaths) {
try {
const result = await apiJson<{ enqueued: boolean; message: string }>("/api/jobs/enqueue", {
method: "POST",
body: JSON.stringify({ path: recommendation.path }),
});
if (result.enqueued) {
enqueued += 1;
} else {
skipped += 1;
}
} catch {
failed += 1;
}
}
setQueueingRemux(false);
await fetchIntelligence();
showToast({
kind: failed > 0 ? "error" : "success",
title: "Intelligence",
message: `Queue all finished: ${enqueued} enqueued, ${skipped} skipped, ${failed} failed.`,
});
};
return (
<div className="flex flex-col gap-6">
@@ -128,6 +195,16 @@ export default function LibraryIntelligence() {
<h2 className="text-sm font-semibold text-helios-ink">
{TYPE_LABELS[type] ?? type}
</h2>
{type === "remux_only_candidate" && recommendations.length > 0 && (
<button
onClick={() => void handleQueueAllRemux()}
disabled={queueingRemux}
className="ml-auto inline-flex items-center gap-2 rounded-lg border border-helios-solar/20 bg-helios-solar/10 px-3 py-1.5 text-xs font-semibold text-helios-solar transition-colors hover:bg-helios-solar/20 disabled:opacity-60"
>
<Zap size={12} />
{queueingRemux ? "Queueing..." : "Queue all"}
</button>
)}
</div>
<div className="divide-y divide-helios-line/10">
{recommendations.map((recommendation, index) => (
@@ -137,6 +214,28 @@ export default function LibraryIntelligence() {
<h3 className="text-sm font-semibold text-helios-ink">{recommendation.title}</h3>
<p className="mt-1 text-sm text-helios-slate">{recommendation.summary}</p>
</div>
{type === "remux_only_candidate" && (
<button
onClick={() => void apiJson<{ enqueued: boolean; message: string }>("/api/jobs/enqueue", {
method: "POST",
body: JSON.stringify({ path: recommendation.path }),
}).then((result) => {
showToast({
kind: result.enqueued ? "success" : "info",
title: "Intelligence",
message: result.message,
});
return fetchIntelligence();
}).catch((err) => {
const message = isApiError(err) ? err.message : "Failed to enqueue remux opportunity.";
showToast({ kind: "error", title: "Intelligence", message });
})}
className="inline-flex items-center gap-2 rounded-lg border border-helios-line/20 bg-helios-surface px-3 py-2 text-xs font-semibold text-helios-ink transition-colors hover:bg-helios-surface-soft"
>
<Zap size={12} />
Queue
</button>
)}
</div>
<p className="mt-3 break-all font-mono text-xs text-helios-slate">{recommendation.path}</p>
<div className="mt-3 rounded-lg border border-helios-line/20 bg-helios-surface-soft/40 px-3 py-2 text-xs text-helios-ink">
@@ -197,6 +296,13 @@ export default function LibraryIntelligence() {
<span className="break-all font-mono text-xs text-helios-slate">
{path.path}
</span>
<button
onClick={() => void openJobDetails(path.id)}
className="inline-flex items-center gap-1 rounded-lg border border-helios-line/20 bg-helios-surface px-2.5 py-1.5 text-[11px] font-semibold text-helios-ink transition-colors hover:bg-helios-surface-soft"
>
<Search size={12} />
Review
</button>
<span className="ml-auto shrink-0 text-xs capitalize text-helios-slate/50">
{path.status}
</span>
@@ -209,6 +315,41 @@ export default function LibraryIntelligence() {
)}
</>
)}
{typeof document !== "undefined" && createPortal(
<JobDetailModal
focusedJob={focusedJob}
detailDialogRef={detailDialogRef}
detailLoading={detailLoading}
onClose={closeJobDetails}
focusedDecision={focusedDecision}
focusedFailure={focusedFailure}
focusedJobLogs={focusedJobLogs}
shouldShowFfmpegOutput={shouldShowFfmpegOutput}
completedEncodeStats={completedEncodeStats}
focusedEmptyState={focusedEmptyState}
openConfirm={openConfirm}
handleAction={handleAction}
handlePriority={handlePriority}
getStatusBadge={getStatusBadge}
/>,
document.body,
)}
<ConfirmDialog
open={confirmState !== null}
title={confirmState?.title ?? ""}
description={confirmState?.body ?? ""}
confirmLabel={confirmState?.confirmLabel ?? "Confirm"}
tone={confirmState?.confirmTone ?? "primary"}
onClose={() => setConfirmState(null)}
onConfirm={async () => {
if (!confirmState) {
return;
}
await confirmState.onConfirm();
}}
/>
</div>
);
}
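The `handleQueueAllRemux` handler above enqueues recommendations one at a time and tallies the outcomes. The counting logic can be isolated as a pure function — a sketch with the HTTP call abstracted behind a parameter; `queueAll` is a hypothetical name, not part of the codebase:

```typescript
// Shape of the /api/jobs/enqueue response the handler expects.
interface EnqueueResult {
  enqueued: boolean;
  message: string;
}

// Sequentially enqueue each path, counting successes, duplicates
// the backend declined (skipped), and request failures.
async function queueAll(
  paths: string[],
  enqueue: (path: string) => Promise<EnqueueResult>,
): Promise<{ enqueued: number; skipped: number; failed: number }> {
  let enqueued = 0;
  let skipped = 0;
  let failed = 0;
  for (const path of paths) {
    try {
      const result = await enqueue(path);
      if (result.enqueued) enqueued += 1;
      else skipped += 1;
    } catch {
      failed += 1;
    }
  }
  return { enqueued, skipped, failed };
}
```

Awaiting inside the loop keeps at most one enqueue request in flight, which matches the handler's behaviour and the `expect.poll(() => enqueueCount).toBe(2)` assertion in the intelligence spec.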

Some files were not shown because too many files have changed in this diff