Mirror of https://github.com/bybrooklyn/alchemist.git (synced 2026-04-18 09:53:33 -04:00)
Compare commits: `nightly-20...master` (7 commits)

- 46129d89ae
- df771d3f7c
- b0646e2629
- c454de6116
- c26c2d4420
- f511f1c084
- e50ca64e80
@@ -27,7 +27,17 @@
"Bash(cargo fmt:*)",
"Bash(cargo test:*)",
"Bash(just check:*)",
"Bash(just test:*)"
"Bash(just test:*)",
"Bash(find /Users/brooklyn/data/alchemist -name *.sql -o -name *migration*)",
"Bash(grep -l \"DROP\\\\|RENAME\\\\|DELETE FROM\" /Users/brooklyn/data/alchemist/migrations/*.sql)",
"Bash(just test-filter:*)",
"Bash(npx tsc:*)",
"Bash(find /Users/brooklyn/data/alchemist -type f -name *.rs)",
"Bash(ls -la /Users/brooklyn/data/alchemist/src/*.rs)",
"Bash(grep -rn \"from_alchemist_event\\\\|AlchemistEvent.*JobEvent\\\\|JobEvent.*AlchemistEvent\" /Users/brooklyn/data/alchemist/src/ --include=*.rs)",
"Bash(grep -l AlchemistEvent /Users/brooklyn/data/alchemist/src/*.rs /Users/brooklyn/data/alchemist/src/**/*.rs)",
"Bash(/tmp/audit_report.txt:*)",
"Read(//tmp/**)"
]
}
}
18
CHANGELOG.md
@@ -2,6 +2,24 @@

All notable changes to this project will be documented in this file.

## [0.3.1-rc.5] - 2026-04-16

### Reliability & Stability

- **Segment-based encode resume** — interrupted encode jobs now persist resume sessions and completed segments so restart and recovery flows can continue without discarding all completed work.
- **Notification target compatibility hardening** — notification target reads/writes now preserve the additive migration path, tolerate legacy shapes, and avoid duplicate-delete projection bugs in settings management.
- **Daily summary reliability** — summary delivery now retries safely after transient failures and avoids duplicate sends across restart boundaries by persisting the last successful day.
- **Job-detail correctness** — completed-job detail loading now fails closed on database errors instead of returning partial `200 OK` payloads, and encode stat duration fallback uses the encoded output rather than the source file.
- **Auth and settings safety** — login now returns server errors for real database failures, and duplicate notification/schedule rows no longer disappear together from a single delete action.

### Jobs & UX

- **Manual enqueue flow** — the jobs UI now supports enqueueing a single absolute file path through the same backend dedupe and output rules used by library scans.
- **Queued-job visibility** — job detail now exposes queue position and processor blocked reasons so operators can see why a queued job is not starting.
- **Attempt-history surfacing** — job detail now shows encode attempt history directly in the modal, including outcome, timing, and captured failure summary.
- **Jobs UI follow-through** — the `JobManager` refactor now ships with dedicated controller/dialog helpers and tighter SSE reconciliation so filtered tables and open detail modals stay aligned with backend truth.
- **Intelligence actions** — remux recommendations and duplicate candidates are now actionable directly from the Intelligence page.

## [0.3.1-rc.3] - 2026-04-12

### New Features
27
CLAUDE.md
@@ -44,6 +44,8 @@ just test-e2e-headed # E2e with browser visible

Integration tests require FFmpeg and FFprobe installed locally.

Integration tests live in `tests/` — notably `integration_db_upgrade.rs` tests schema migrations against a v0.2.5 baseline database. Every migration must pass this.

### Database

```bash
just db-reset # Wipe dev database (keeps config)
```

@@ -53,6 +55,10 @@ just db-shell # SQLite shell

## Architecture

### Clippy Strictness

CI enforces `-D clippy::unwrap_used` and `-D clippy::expect_used`. Use `?` propagation or explicit match — no `.unwrap()` or `.expect()` in production code paths.

### Rust Backend (`src/`)

The backend is structured around a central `AppState` (holding SQLite pool, config, broadcast channels) passed to Axum handlers:

@@ -77,15 +83,32 @@ The backend is structured around a central `AppState` (holding SQLite pool, conf

- `pipeline.rs` — Orchestrates scan → analyze → plan → execute
- `processor.rs` — Job queue controller (concurrency, pausing, draining)
- `ffmpeg/` — FFmpeg command builder and progress parser, with platform-specific encoder modules
- **`orchestrator.rs`** — Spawns and monitors FFmpeg processes, streams progress back via channels
- **`orchestrator.rs`** — Spawns and monitors FFmpeg processes, streams progress back via channels. Uses `std::sync::Mutex` (not tokio) intentionally — critical sections never cross `.await` boundaries.
- **`system/`** — Hardware detection (`hardware.rs`), file watcher (`watcher.rs`), library scanner (`scanner.rs`)
- **`scheduler.rs`** — Off-peak cron scheduling
- **`notifications.rs`** — Discord, Gotify, Webhook integrations
- **`wizard.rs`** — First-run setup flow

#### Event Channel Architecture

Three typed broadcast channels in `AppState` (defined in `db.rs`):
- `jobs` (capacity 1000) — high-frequency: progress, state changes, decisions, logs
- `config` (capacity 50) — watch folder changes, settings updates
- `system` (capacity 100) — scan lifecycle, hardware state changes

`sse.rs` merges all three via `futures::stream::select_all`. SSE is capped at 50 concurrent connections (`MAX_SSE_CONNECTIONS`), enforced with a RAII guard that decrements on stream drop.
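The cap-and-guard pattern described above can be sketched with std atomics alone; the guard type and counter below are illustrative stand-ins, not the real `sse.rs` types:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

const MAX_SSE_CONNECTIONS: usize = 50;
static ACTIVE: AtomicUsize = AtomicUsize::new(0);

/// Hypothetical guard: holding one reserves a connection slot;
/// dropping it (when the SSE stream ends) releases the slot.
struct ConnectionGuard;

impl ConnectionGuard {
    fn try_acquire() -> Option<ConnectionGuard> {
        // Reserve a slot optimistically, then roll back if over the cap.
        let prev = ACTIVE.fetch_add(1, Ordering::SeqCst);
        if prev >= MAX_SSE_CONNECTIONS {
            ACTIVE.fetch_sub(1, Ordering::SeqCst);
            None
        } else {
            Some(ConnectionGuard)
        }
    }
}

impl Drop for ConnectionGuard {
    fn drop(&mut self) {
        // RAII: the counter is decremented exactly once, on drop.
        ACTIVE.fetch_sub(1, Ordering::SeqCst);
    }
}

fn main() {
    let g = ConnectionGuard::try_acquire();
    assert!(g.is_some());
    drop(g); // stream ends, slot freed
    assert_eq!(ACTIVE.load(Ordering::SeqCst), 0);
}
```

Because the release lives in `Drop`, the slot is freed even if the stream task panics or is cancelled, which is the point of tying it to stream drop.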
`AlchemistEvent` still exists as a legacy bridge; `JobEvent` is the canonical type — new code uses `JobEvent`/`ConfigEvent`/`SystemEvent`.

#### FFmpeg Command Builder

`FFmpegCommandBuilder<'a>` in `src/media/ffmpeg/mod.rs` uses lifetime references to avoid cloning input/output paths. `.with_hardware(Option<&HardwareInfo>)` injects hardware flags; `.build_args()` returns `Vec<String>` for unit testing without spawning a process. Each hardware platform is a submodule (amf, cpu, nvenc, qsv, vaapi, videotoolbox). `EncoderCapabilities` is detected once via live ffmpeg invocation and cached in `OnceLock`.
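A rough sketch of that builder shape follows; the fields and the flags it emits are invented for illustration, and only the `with_hardware`/`build_args` method names come from the text above:

```rust
/// Illustrative builder: borrows paths for the builder's lifetime
/// instead of cloning them.
struct FFmpegCommandBuilder<'a> {
    input: &'a str,
    output: &'a str,
    hardware_flags: Vec<String>,
}

impl<'a> FFmpegCommandBuilder<'a> {
    fn new(input: &'a str, output: &'a str) -> Self {
        Self { input, output, hardware_flags: Vec::new() }
    }

    /// Inject platform flags only when hardware info is available.
    fn with_hardware(mut self, encoder: Option<&str>) -> Self {
        if let Some(enc) = encoder {
            self.hardware_flags = vec!["-c:v".into(), enc.into()];
        }
        self
    }

    /// Pure argument construction: unit-testable without spawning ffmpeg.
    fn build_args(&self) -> Vec<String> {
        let mut args = vec!["-i".to_string(), self.input.to_string()];
        args.extend(self.hardware_flags.iter().cloned());
        args.push(self.output.to_string());
        args
    }
}

fn main() {
    let args = FFmpegCommandBuilder::new("in.mkv", "out.mkv")
        .with_hardware(Some("hevc_videotoolbox"))
        .build_args();
    assert_eq!(args, ["-i", "in.mkv", "-c:v", "hevc_videotoolbox", "out.mkv"]);
}
```

Keeping `build_args` pure is what lets every platform submodule be tested against expected argument vectors with no FFmpeg binary present.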
### Frontend (`web/src/`)

Astro pages with React islands. UI reflects backend state via Server-Sent Events (SSE) — avoid optimistic UI unless reconciled with backend truth.
Astro pages (`web/src/pages/`) with React islands. UI reflects backend state via SSE — avoid optimistic UI unless reconciled with backend truth.

Job management UI is split into focused subcomponents under `web/src/components/jobs/`: `JobsTable`, `JobDetailModal`, `JobsToolbar`, `JobExplanations`, `useJobSSE.ts` (SSE hook), and `types.ts` (shared types + pure data utilities). `JobManager.tsx` is the parent that owns state and wires them together.

### Database Schema
2
Cargo.lock
generated
@@ -13,7 +13,7 @@ dependencies = [

[[package]]
name = "alchemist"
version = "0.3.1-rc.3"
version = "0.3.1-rc.5"
dependencies = [
"anyhow",
"argon2",
@@ -1,6 +1,6 @@
[package]
name = "alchemist"
version = "0.3.1-rc.3"
version = "0.3.1-rc.5"
edition = "2024"
rust-version = "1.85"
license = "GPL-3.0"
@@ -30,15 +30,15 @@ Then complete the release-candidate preflight:

Promote to stable only after the RC burn-in is complete and the same automated preflight is still green.

1. Run `just bump 0.3.0`.
1. Run `just bump 0.3.1`.
2. Update `CHANGELOG.md` and `docs/docs/changelog.md` for the stable cut.
3. Run `just release-check`.
4. Re-run the manual smoke checklist against the final release artifacts:
   - Docker fresh install
   - Packaged binary first-run
   - Upgrade from the most recent `0.2.x` or `0.3.0-rc.x`
   - Upgrade from the most recent `0.2.x` or `0.3.1-rc.x`
   - Encode, skip, failure, and notification verification
5. Re-run the Windows contributor verification checklist if Windows parity changed after the last RC.
6. Confirm release notes, docs, and hardware-support wording match the tested release state.
7. Merge the stable release commit to `main`.
8. Create the annotated tag `v0.3.0` on the exact merged commit.
8. Create the annotated tag `v0.3.1` on the exact merged commit.
470
audit.md
@@ -1,136 +1,420 @@

# Audit Findings

Date: 2026-04-11
Last updated: 2026-04-12 (second pass)

## Summary
---

This audit focused on the highest-risk paths in Alchemist:
## P1 Issues

- queue claiming and cancellation
- media planning and execution
- conversion validation
- setup/auth exposure
- job detail and failure UX
---

The current automated checks were green at audit time, but several real correctness and behavior issues remain.
### [P1-1] Cancel during analysis can be overwritten by the pipeline

## Findings
**Status: RESOLVED**

### [P1] Canceling a job during analysis can be overwritten
**Files:**
- `src/server/jobs.rs:41–63`
- `src/media/pipeline.rs:1178–1221`
- `src/orchestrator.rs:84–90`

Relevant code:
**Severity:** P1

- `src/server/jobs.rs:41`
- `src/media/pipeline.rs:927`
- `src/media/pipeline.rs:970`
- `src/orchestrator.rs:239`
**Problem:**

`request_job_cancel()` marks `analyzing` and `resuming` jobs as `cancelled` immediately. But the analysis/planning path can still run to completion and later overwrite that state to `skipped`, `encoding`/`remuxing`, or another follow-on state.
`request_job_cancel()` in `jobs.rs` immediately writes `Cancelled` to the DB for jobs in `Analyzing` or `Resuming` state. The pipeline used to have race windows where it could overwrite this state with `Skipped`, `Encoding`, or `Remuxing` if it reached a checkpoint after the cancel was issued but before it could be processed.

The transcoder-side `pending_cancels` check only applies around FFmpeg spawn, so a cancel issued during analysis is not guaranteed to stop the pipeline before state transitions are persisted.
**Fix:**

Impact:
Implemented `cancel_requested: Arc<tokio::sync::RwLock<HashSet<i64>>>` in `Transcoder` (orchestrator). The `update_job_state` wrapper in `pipeline.rs` now checks this set before any DB write for `Encoding`, `Remuxing`, `Skipped`, and `Completed` states. Terminal states (Completed, Failed, Cancelled, Skipped) also trigger removal from the set.
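The check-before-write idea reduces to a small sketch. The enum, the helper name, and the plain `std::sync::Mutex` here are simplifications of the shared `tokio::sync::RwLock<HashSet<i64>>` described above:

```rust
use std::collections::HashSet;
use std::sync::{Arc, Mutex};

#[derive(Debug, PartialEq)]
enum JobState { Encoding, Cancelled, Completed }

/// Hypothetical shared cancel set (the real code holds it in the Transcoder).
type CancelSet = Arc<Mutex<HashSet<i64>>>;

/// Gate a would-be state write: a pending cancel wins over pipeline
/// progress states, and terminal states clear any stale entry.
fn resolve_state_write(cancels: &CancelSet, job_id: i64, next: JobState) -> JobState {
    let mut set = cancels.lock().unwrap();
    if set.contains(&job_id) {
        set.remove(&job_id);
        return JobState::Cancelled;
    }
    if next == JobState::Completed {
        set.remove(&job_id); // terminal state: drop leftover requests
    }
    next
}

fn main() {
    let cancels: CancelSet = Arc::new(Mutex::new(HashSet::new()));
    cancels.lock().unwrap().insert(7);
    // The pipeline reaches its "start encoding" checkpoint after a cancel:
    assert_eq!(resolve_state_write(&cancels, 7, JobState::Encoding), JobState::Cancelled);
    // A job with no pending cancel proceeds normally:
    assert_eq!(resolve_state_write(&cancels, 8, JobState::Encoding), JobState::Encoding);
}
```

The key property is that the membership test and the decision happen under one lock, so the cancel cannot slip in between "check" and "write" on the pipeline side.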
- a user-visible cancel can be lost
- the UI can report a cancelled job that later resumes or becomes skipped
- queue state becomes harder to trust
---

### [P1] VideoToolbox quality controls are effectively a no-op
### [P1-2] VideoToolbox quality controls are effectively ignored

Relevant code:
**Status: RESOLVED**

- `src/config.rs:85`
- `src/media/planner.rs:633`
- `src/media/ffmpeg/videotoolbox.rs:3`
- `src/conversion.rs:424`
**Files:**
- `src/media/planner.rs:630–650`
- `src/media/ffmpeg/videotoolbox.rs:25–54`
- `src/config.rs:85–92`

The config still defines a VideoToolbox quality ladder, and the planner still emits `RateControl::Cq` for VideoToolbox encoders. But the actual VideoToolbox FFmpeg builder ignores rate-control input entirely.
**Severity:** P1

The Convert workflow does the same thing by still generating `Cq` for non-CPU/QSV encoders even though the VideoToolbox path does not consume it.
**Problem:**

Impact:
The planner used to emit `RateControl::Cq` values that were incorrectly mapped for VideoToolbox, resulting in uncalibrated or inverted quality.

- quality profile does not meaningfully affect VideoToolbox jobs
- Convert quality values for VideoToolbox are misleading
- macOS throughput/quality tradeoffs are harder to reason about
**Fix:**

### [P2] Convert does not reuse subtitle/container compatibility checks
Fixed the mapping in `videotoolbox.rs` to use `-q:v` (1-100, lower is better) and clamped the input range to 1-51 to match user expectations from x264/x265. Updated `QualityProfile` in `config.rs` to provide sane default values (24, 28, 32) for VideoToolbox quality.

Relevant code:
---

- `src/media/planner.rs:863`
- `src/media/planner.rs:904`
- `src/conversion.rs:272`
- `src/conversion.rs:366`
## P2 Issues

The main library planner explicitly rejects unsafe subtitle-copy combinations, especially for MP4/MOV targets. The Convert flow has its own normalization/build path and does not reuse that validation.
---

Impact:
### [P2-1] Convert does not reuse subtitle/container compatibility checks

- the Convert UI can accept settings that are known to fail later in FFmpeg
- conversion behavior diverges from library-job behavior
- users can hit avoidable execution-time errors instead of fast validation
**Status: RESOLVED**

### [P2] Completed job details omit metadata at the API layer
**Files:**
- `src/conversion.rs:372–380`
- `src/media/planner.rs`

Relevant code:
**Severity:** P2

- `src/server/jobs.rs:344`
- `web/src/components/JobManager.tsx:1774`
**Problem:**

The job detail endpoint explicitly returns `metadata = None` for `completed` jobs, even though the Jobs modal is structured to display input metadata when available.
The conversion path was not validating subtitle/container compatibility, leading to FFmpeg runtime failures instead of early validation errors.

Impact:
**Fix:**

- completed-job details are structurally incomplete
- the frontend needs special-case empty-state behavior
- operator confidence is lower when comparing completed jobs after the fact
Integrated `crate::media::planner::subtitle_copy_supported` into `src/conversion.rs:build_subtitle_plan`. The "copy" mode now returns an `AlchemistError::Config` if the combination is unsupported.

### [P2] LAN-only setup is easy to misconfigure behind a local reverse proxy
---

Relevant code:
### [P2-2] Completed job metadata omitted at the API layer

- `src/server/middleware.rs:269`
- `src/server/middleware.rs:300`
**Status: RESOLVED**

The setup gate uses `request_ip()` and trusts forwarded headers only when the direct peer is local/private. If Alchemist sits behind a loopback or LAN reverse proxy that fails to forward the real client IP, the request falls back to the proxy peer IP and is treated as LAN-local.
**Files:**
- `src/db.rs:254–263`
- `src/media/pipeline.rs:599`
- `src/server/jobs.rs:343`

Impact:
**Severity:** P2

- public reverse-proxy deployments can accidentally expose setup
- behavior depends on correct proxy header forwarding
- the security model is sound in principle but fragile in deployment
**Problem:**

## What To Fix First
Job details required a live re-probe of the input file to show metadata, which failed if the file was moved or deleted after completion.

1. Fix the cancel-during-analysis race.
2. Fix or redesign VideoToolbox quality handling so the UI and planner do not promise controls that the backend ignores.
3. Reuse planner validation in Convert for subtitle/container safety.
4. Decide whether completed jobs should persist and return metadata in the detail API.
**Fix:**

## What To Investigate Next
Added an `input_metadata_json` column to the `jobs` table (migration `20260412000000_store_job_metadata.sql`). The pipeline now stores the metadata string immediately after analysis. `get_job_detail_handler` reads this stored value, ensuring metadata is always available even if the source file is missing.

1. Use runtime diagnostics to confirm whether macOS slowness is true hardware underperformance, silent fallback, or filter overhead.
2. Verify whether “only one job at a time” is caused by actual worker serialization or by planner eligibility/skips.
3. Review dominant skip reasons before relaxing planner heuristics.
---

### [P2-3] LAN-only setup exposed to reverse proxy misconfig

**Status: RESOLVED**

**Files:**
- `src/config.rs` — `SystemConfig.trusted_proxies`
- `src/server/mod.rs` — `AppState.trusted_proxies`, `AppState.setup_token`
- `src/server/middleware.rs` — `is_trusted_peer`, `request_ip`, `auth_middleware`

**Severity:** P2

**Problem:**

The setup wizard gate trusts all private/loopback IPs for header forwarding. When running behind a misconfigured proxy that doesn't set headers, it falls back to the proxy's own IP (e.g. 127.0.0.1), making the setup endpoint accessible to external traffic.

**Fix:**

Added two independent security layers:
1. `trusted_proxies: Vec<String>` on `SystemConfig`. When non-empty, only those exact IPs (plus loopback) are trusted for proxy header forwarding instead of all RFC-1918 ranges. Empty = previous behavior preserved.
2. `ALCHEMIST_SETUP_TOKEN` env var. When set, setup endpoints require a `?token=<value>` query param regardless of client IP. Token mode takes precedence over the IP-based LAN check.

---

### [P2-4] N+1 DB update in batch cancel

**Status: RESOLVED**

**Files:**
- `src/server/jobs.rs` — `batch_jobs_handler`

**Severity:** P2

**Problem:**

`batch_jobs_handler` for the "cancel" action iterates over jobs and calls `request_job_cancel`, which performs an individual `update_job_status` query per job. Cancelling a large number of jobs triggers N queries.

**Fix:**

Restructured the "cancel" branch in `batch_jobs_handler`. Orchestrator in-memory operations (`add_cancel_request`, `cancel_job`) still run per-job, but all DB status updates are batched into a single `db.batch_cancel_jobs(&ids)` call (which already existed in `db.rs`). Immediate-resolution jobs (Queued + successfully signalled Analyzing/Resuming) are collected and written in one `UPDATE ... WHERE id IN (...)` query.
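The single-statement shape comes down to building the `IN (...)` placeholder list once. A minimal sketch, assuming a `jobs` table with `status` and `id` columns (the actual schema is not shown here):

```rust
/// Collapse N per-job status updates into one statement by generating
/// one `?` placeholder per id. Assumes `ids` is non-empty (the handler
/// collects at least one resolvable job before calling this).
fn batch_cancel_sql(ids: &[i64]) -> String {
    let placeholders = vec!["?"; ids.len()].join(", ");
    format!("UPDATE jobs SET status = 'cancelled' WHERE id IN ({placeholders})")
}

fn main() {
    assert_eq!(
        batch_cancel_sql(&[1, 2, 3]),
        "UPDATE jobs SET status = 'cancelled' WHERE id IN (?, ?, ?)"
    );
}
```

Binding the ids to the placeholders is then one prepared-statement execution instead of N round trips.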
---

### [P2-5] Missing archived filter in health and stats queries

**Status: RESOLVED**

**Files:**
- `src/db.rs` — `get_aggregated_stats`, `get_job_stats`, `get_health_summary`

**Severity:** P2

**Problem:**

`get_health_summary` and `get_aggregated_stats` (total_jobs) do not include `AND archived = 0`. Archived (deleted) jobs are incorrectly included in library health metrics and total job counts.

**Fix:**

Added `AND archived = 0` to all three affected queries: the `total_jobs` and `completed_jobs` subqueries in `get_aggregated_stats`, the `GROUP BY status` query in `get_job_stats`, and both subqueries in `get_health_summary`. Updated tests that were asserting the old (incorrect) behavior.

---

### [P2-6] Daily summary notifications bypass SSRF protections

**Status: RESOLVED**

**Files:**
- `src/notifications.rs` — `build_safe_client()`, `send()`, `send_daily_summary_target()`

**Severity:** P2

**Problem:**

`send_daily_summary_target()` used `Client::new()` without any SSRF defences, while `send()` applied DNS timeout, private-IP blocking, no-redirect policy, and request timeout.

**Fix:**

Extracted all client-building logic into `build_safe_client(&self, target)`, which applies the full SSRF defence stack. Both `send()` and `send_daily_summary_target()` now use this shared helper.
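The private-IP part of that defence stack can be sketched as a pure predicate over `std::net::IpAddr`. The function name is hypothetical, and the real helper additionally layers DNS/request timeouts and a no-redirect policy onto the reqwest client:

```rust
use std::net::IpAddr;

/// Reject resolved destinations that point back into local or private
/// address space (a minimal IPv4-focused sketch; production code would
/// also cover IPv6 ULA/link-local ranges).
fn is_blocked_destination(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => {
            v4.is_loopback() || v4.is_private() || v4.is_link_local() || v4.is_unspecified()
        }
        IpAddr::V6(v6) => v6.is_loopback() || v6.is_unspecified(),
    }
}

fn main() {
    assert!(is_blocked_destination("127.0.0.1".parse().unwrap()));
    assert!(is_blocked_destination("10.0.0.5".parse().unwrap()));
    assert!(!is_blocked_destination("93.184.216.34".parse().unwrap()));
}
```

Centralizing this in one client builder is what prevents the next notification path from regressing the same way the daily summary did.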
---

### [P2-7] Silent reprobe failure corrupts saved encode stats

**Status: RESOLVED**

**Files:**
- `src/media/pipeline.rs` — `finalize_job()` duration reprobe

**Severity:** P2

**Problem:**

When a completed encode's metadata has `duration_secs <= 0.0`, the pipeline reprobes the output file to get the actual duration. If the reprobe fails, the error was silently swallowed via `.ok()` and the duration defaulted to 0.0, poisoning downstream stats.

**Fix:**

Replaced the `.ok().and_then().unwrap_or(0.0)` chain with an explicit `match` that logs the error via `tracing::warn!` and falls through to 0.0. Existing guards at the stats computation lines already handle `duration <= 0.0` correctly — operators now see *why* stats are zeroed.
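A minimal sketch of that explicit-fallback shape, with `eprintln!` standing in for `tracing::warn!` and a `String` error standing in for the real probe error type:

```rust
/// Surface the reprobe error instead of swallowing it, then fall
/// through to the same 0.0 default as before.
fn duration_or_zero(reprobe: Result<f64, String>) -> f64 {
    match reprobe {
        Ok(secs) => secs,
        Err(e) => {
            // The real code logs via tracing::warn!; eprintln! stands in here.
            eprintln!("output reprobe failed, stats duration zeroed: {e}");
            0.0
        }
    }
}

fn main() {
    assert_eq!(duration_or_zero(Ok(3600.0)), 3600.0);
    assert_eq!(duration_or_zero(Err("ffprobe exited 1".into())), 0.0);
}
```

The behavior is unchanged on the happy path; only the failure path gains visibility.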
---

## Technical Debt

---

### [TD-1] `db.rs` is a 3481-line monolith

**Status: RESOLVED**

**File:** `src/db/` (was `src/db.rs`)

**Severity:** TD

**Problem:**

The database layer had grown to nearly 3500 lines. Every query, migration flag, and state enum was in one file, making navigation and maintenance difficult.

**Fix:**

Split into a `src/db/` module with 8 submodules: `mod.rs` (Db struct, init, migrations, hash fns), `types.rs` (all type defs), `events.rs` (event enums + channels), `jobs.rs` (job CRUD/filtering/decisions), `stats.rs` (encode/aggregated/daily stats), `config.rs` (watch dirs/profiles/notifications/schedules/file settings/preferences), `conversion.rs` (ConversionJob CRUD), `system.rs` (auth/sessions/API tokens/logs/health). All tests moved alongside their methods. Public API unchanged — all types re-exported from `db/mod.rs`.

---

### [TD-2] `AlchemistEvent` legacy bridge is dead weight

**Status: RESOLVED**

**Files:**
- `src/db.rs` — enum and From impls removed
- `src/media/pipeline.rs`, `src/media/executor.rs`, `src/media/processor.rs` — legacy `tx` channel removed
- `src/notifications.rs` — migrated to typed `EventChannels` (jobs + system)
- `src/server/mod.rs`, `src/main.rs` — legacy channel removed from AppState/RunServerArgs

**Severity:** TD

**Problem:**

`AlchemistEvent` was a legacy event type duplicated by `JobEvent`, `ConfigEvent`, and `SystemEvent`. All senders were emitting events on both channels.

**Fix:**

Migrated the notification system (the sole consumer) to subscribe to `EventChannels.jobs` and `EventChannels.system` directly. Added a `SystemEvent::EngineIdle` variant. Removed the `AlchemistEvent` enum, its `From` impls, the legacy `tx` broadcast channel from all structs, and the `pub use` from `lib.rs`.

---

### [TD-3] `pipeline.rs` legacy `AlchemistEvent::Progress` stub

**Status: RESOLVED**

**Files:**
- `src/media/pipeline.rs:1228`

**Severity:** TD

**Problem:**

The pipeline used to emit zeroed progress events that could overwrite real stats from the executor.

**Fix:**

Emission removed. A comment at lines 1228-1229 confirms that `AlchemistEvent::Progress` is no longer emitted from the pipeline wrapper.

---

### [TD-4] Silent `.ok()` on pipeline decision and attempt DB writes

**Status: RESOLVED**

**Files:**
- `src/media/pipeline.rs` — all `add_decision`, `insert_encode_attempt`, `upsert_job_failure_explanation`, and `add_log` call sites

**Severity:** TD

**Problem:**

Decision records, encode attempt records, failure explanations, and error logs were written with `.ok()` or `let _ =`, silently discarding DB write failures. These records are the only audit trail of *why* a job was skipped/transcoded/failed.

**Fix:**

Replaced all `.ok()` / `let _ =` patterns on `add_decision`, `insert_encode_attempt`, `upsert_job_failure_explanation`, and `add_log` calls with `if let Err(e) = ... { tracing::warn!(...) }`. The pipeline still continues on failure, but operators now see the error.

---

### [TD-5] Correlated subquery for sort-by-size in job listing

**Status: RESOLVED**

**Files:**
- `src/db.rs` — `get_jobs_filtered()` query

**Severity:** TD

**Problem:**

Sorting jobs by file size used a correlated subquery in ORDER BY, executing one subquery per row and producing NULL for jobs without encode_stats.

**Fix:**

Added `LEFT JOIN encode_stats es ON es.job_id = j.id` to the base query. The sort column changed to `COALESCE(es.input_size_bytes, 0)`, ensuring jobs without stats sort as 0 (smallest) instead of NULL.
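The `COALESCE(..., 0)` semantics mirror `Option::unwrap_or(0)` in Rust. A tiny sketch of the resulting sort order, with an illustrative `(name, size)` tuple standing in for the joined row:

```rust
/// Jobs without encode stats (None) sort as size 0, i.e. smallest,
/// matching the SQL fallback instead of NULL ordering.
fn sort_by_size(jobs: &mut [(&str, Option<u64>)]) {
    jobs.sort_by_key(|(_, size)| size.unwrap_or(0));
}

fn main() {
    let mut jobs = vec![("a.mkv", Some(900)), ("b.mkv", None), ("c.mkv", Some(100))];
    sort_by_size(&mut jobs);
    assert_eq!(jobs[0].0, "b.mkv"); // no stats, treated as smallest
    assert_eq!(jobs[2].0, "a.mkv");
}
```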
---

## Reliability Gaps

---

### [RG-1] No encode resume after crash or restart

**Status: PARTIALLY RESOLVED**

**Files:**
- `src/main.rs:320`
- `src/media/processor.rs:255`

**Severity:** RG

**Problem:**

Interrupted encodes were left in `Encoding` state and orphaned temp files remained on disk.

**Fix:**

Implemented `db.reset_interrupted_jobs()` in `main.rs`, which resets `Encoding`, `Remuxing`, `Resuming`, and `Analyzing` jobs to `Queued` on startup. Orphaned temp files are also detected and removed. Full bitstream-level resume (resuming from the middle of a file) is still missing.

---

### [RG-2] AMD VAAPI/AMF hardware paths unvalidated

**Files:**
- `src/media/ffmpeg/vaapi.rs`
- `src/media/ffmpeg/amf.rs`

**Severity:** RG

**Problem:**

Hardware paths for AMD (VAAPI on Linux, AMF on Windows) were implemented without real hardware validation.

**Fix:**

Verify exact flag compatibility on AMD hardware and add integration tests gated on GPU presence.

---

## UX Gaps

---

### [UX-1] Queued jobs show no position or estimated wait time

**Status: RESOLVED**

**Files:**
- `src/db.rs` — `get_queue_position`
- `src/server/jobs.rs` — `JobDetailResponse.queue_position`
- `web/src/components/jobs/JobDetailModal.tsx`
- `web/src/components/jobs/types.ts` — `JobDetail.queue_position`

**Severity:** UX

**Problem:**

Queued jobs only show "Waiting" without indicating their position in the priority queue.

**Fix:**

Implemented `db.get_queue_position(job_id)`, which counts jobs with higher priority or earlier `created_at` (matching the `priority DESC, created_at ASC` dequeue order). Added `queue_position: Option<u32>` to `JobDetailResponse` — populated only when `status == Queued`. The frontend shows `Queue position: #N` in the empty-state card in `JobDetailModal`.
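The counting rule can be expressed as a pure function over `(job_id, priority, created_at)` tuples. This is a sketch of the dequeue-order math only, not the actual SQL:

```rust
/// A job's 1-based queue position is one more than the number of queued
/// jobs ahead of it under `priority DESC, created_at ASC` ordering.
fn queue_position(queued: &[(i64, i64, i64)], job_id: i64) -> Option<u32> {
    let (_, prio, created) = *queued.iter().find(|(id, _, _)| *id == job_id)?;
    let ahead = queued
        .iter()
        .filter(|(id, p, c)| {
            *id != job_id && (*p > prio || (*p == prio && *c < created))
        })
        .count();
    Some(ahead as u32 + 1) // position #1 dequeues next
}

fn main() {
    // (job_id, priority, created_at)
    let queued = [(1, 0, 100), (2, 5, 200), (3, 0, 50)];
    assert_eq!(queue_position(&queued, 2), Some(1)); // highest priority
    assert_eq!(queue_position(&queued, 3), Some(2)); // same priority, earlier
    assert_eq!(queue_position(&queued, 1), Some(3));
    assert_eq!(queue_position(&queued, 9), None);    // not queued
}
```

Because the predicate mirrors the dequeue `ORDER BY` exactly, the displayed `#N` stays consistent with which job the processor will actually claim next.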
---

### [UX-2] No way to add a single file to the queue via the UI

**Severity:** UX

**Problem:**

Jobs only enter the queue via full library scans. No manual "Enqueue path" exists in the UI.

**Fix:**

Add `POST /api/jobs/enqueue` and an "Add file" action in the `JobsToolbar`.

---

### [UX-3] Workers-blocked reason not surfaced for queued jobs

**Severity:** UX

**Problem:**

Users cannot see why a job is stuck in `Queued` (paused, scheduled, or slots full).

**Fix:**

Add `GET /api/processor/status` and show the reason in the job detail.

---

## Feature Gaps

---

### [FG-4] Intelligence page content not actionable

**Files:**
- `web/src/components/LibraryIntelligence.tsx`

**Severity:** FG

**Problem:**

The Intelligence page is informational only; recommendations cannot be acted upon directly from the page.

**Fix:**

Add a "Queue all" action for remux opportunities and "Review" actions for duplicates.

---

## What To Fix Next

1. **[UX-2]** Single file enqueue — New feature.
2. **[UX-3]** Workers-blocked reason — New feature.
3. **[FG-4]** Intelligence page actions — New feature.
4. **[RG-2]** AMD VAAPI/AMF validation — Needs real hardware.
||||
40
backlog.md
@@ -59,37 +59,37 @@ documentation, or iteration.

- remux-only opportunities
- wasteful audio layouts
- commentary/descriptive-track cleanup candidates
- Direct actions now exist for queueing remux recommendations and opening duplicate candidates in the shared job-detail flow

### Engine Lifecycle + Planner Docs
- Runtime drain/restart controls exist in the product surface
- Backend and Playwright lifecycle coverage now exists for the current behavior
- Planner and engine lifecycle docs are in-repo and should now be kept in sync with shipped semantics rather than treated as missing work

### Jobs UI Refactor / In Flight
- `JobManager` has been decomposed into focused jobs subcomponents and controller hooks
- SSE ownership is now centered in a dedicated hook and job-detail controller flow
- Treat the current jobs UI surface as shipping product that still needs stabilization and regression coverage, not as a future refactor candidate

---

## Active Priorities

### Engine Lifecycle Controls
- Finish and harden restart/shutdown semantics from the About/header surface
- Restart must reset the engine loop without re-execing the process
- Shutdown must cancel active jobs and exit cleanly
- Add final backend and Playwright coverage for lifecycle transitions
### `0.3.1` RC Stability Follow-Through
- Keep the current in-flight backend/frontend/test delta focused on reliability, upgrade safety, and release hardening
- Expand regression coverage for resume/restart/cancel flows, job-detail refresh semantics, settings projection, and intelligence actions
- Keep release docs, changelog entries, and support wording aligned with what the RC actually ships

### Planner and Lifecycle Documentation
- Document planner heuristics and stable skip/transcode/remux decision boundaries
- Document hardware fallback rules and backend selection semantics
- Document pause, drain, restart, cancel, and shutdown semantics from actual behavior

### Per-File Encode History
- Show full attempt history in job detail, grouped by canonical file identity
- Include outcome, encode stats, and failure reason where available
- Make retries, reruns, and settings-driven requeues legible
### Behavior-Preserving Refactor Pass
- Decompose `web/src/components/JobManager.tsx` without changing current behavior
- Extract shared formatting logic
- Clarify SSE vs polling ownership
- Add regression coverage before deeper structural cleanup
### Per-File Encode History Follow-Through
- Attempt history now exists in job detail, but it is still job-scoped rather than grouped by canonical file identity
- Next hardening pass should make retries, reruns, and settings-driven requeues legible across a file’s full history
- Include outcome, encode stats, and failure reason where available without regressing the existing job-detail flow

### AMD AV1 Validation
- Validate Linux VAAPI and Windows AMF AV1 paths on real hardware
- Confirm encoder selection, fallback behavior, and defaults
- Keep support claims conservative until validation is real
- Deferred from the current `0.3.1-rc.5` automated-stability pass; do not broaden support claims before this work is complete

---
3	docs/bun.lock	generated
@@ -24,6 +24,7 @@
    },
  },
  "overrides": {
+   "follow-redirects": "^1.16.0",
    "lodash": "^4.18.1",
    "serialize-javascript": "^7.0.5",
  },
@@ -1108,7 +1109,7 @@

  "flat": ["flat@5.0.2", "", { "bin": { "flat": "cli.js" } }, "sha512-b6suED+5/3rTpUBdG1gupIl8MPFCAMA0QXwmljLhvCUKcUvdE4gWky9zpuGCcXHOsz4J9wPGNWq6OKpmIzz3hQ=="],

- "follow-redirects": ["follow-redirects@1.15.11", "", {}, "sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ=="],
+ "follow-redirects": ["follow-redirects@1.16.0", "", {}, "sha512-y5rN/uOsadFT/JfYwhxRS5R7Qce+g3zG97+JrtFZlC9klX/W5hD7iiLzScI4nZqUS7DNUdhPgw4xI8W2LuXlUw=="],

  "form-data-encoder": ["form-data-encoder@2.1.4", "", {}, "sha512-yDYSgNMraqvnxiEXO4hi88+YZxaHC6QKzb5N84iRCTDeRO7ZALpir/lVmf/uXUhnwUr2O4HU8s/n6x+yNjQkHw=="],

440	docs/docs/api.md
@@ -1,54 +1,35 @@
---
title: API Reference
description: REST and SSE API reference for Alchemist.
---

All API routes require the `alchemist_session` auth cookie, established via `POST /api/auth/login`, or an `Authorization: Bearer <token>` header, except:

- `/api/auth/*`
- `/api/health`
- `/api/ready`
- during first-time setup, the setup UI and setup-related unauthenticated routes are only reachable from the local network

Bearer tokens come in two classes:

- `read_only` — observability-only routes
- `full_access` — same route access as an authenticated session

The web UI still uses the session cookie.

Machine-readable contract: [OpenAPI spec](/openapi.yaml)

## Authentication

### API tokens

API tokens are created in **Settings → API Tokens**.

- token values are only shown once at creation time
- only hashed token material is stored server-side
- revoked tokens stop working immediately

Read-only tokens are intentionally limited to observability routes such as stats, jobs, logs history, SSE, system info, hardware info, library intelligence, and health/readiness.

### `GET /api/settings/api-tokens`

List metadata for configured API tokens. Plaintext token values are never returned after creation.

### `POST /api/settings/api-tokens`

Create a new API token. The plaintext value is only returned once.

**Request:**

```json
{
  "name": "Prometheus",
@@ -56,411 +37,114 @@ Request:
}
```

**Response:**

```json
{
  "token": {
    "id": 1,
    "name": "Prometheus",
    "access_level": "read_only"
  },
  "plaintext_token": "alc_tok_..."
}
```

### `DELETE /api/settings/api-tokens/:id`

Revokes a token in place. Existing automations using it will begin receiving `401` or `403` depending on route class.

### `POST /api/auth/login`

Establish a session. Returns a `Set-Cookie` header.

**Request:**

```json
{
  "username": "admin",
  "password": "secret"
}
```

**Response:**

```http
HTTP/1.1 200 OK
Set-Cookie: alchemist_session=...; HttpOnly; SameSite=Lax; Path=/; Max-Age=2592000
```

```json
{
  "status": "ok"
}
```

### `POST /api/auth/logout`

Invalidates the current session, deletes the server-side session if one exists, and clears the cookie.

```json
{
  "status": "ok"
}
```
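As a quick sketch, the two auth paths look like this with `curl` (the password and token value are placeholders; the endpoints are the ones documented on this page):

```shell
# Cookie-based session (what the web UI uses):
curl -c cookie.txt -X POST http://localhost:3000/api/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"username":"admin","password":"secret"}'

# Bearer token instead of a cookie; a read_only token covers
# observability routes, full_access matches a session.
curl -H 'Authorization: Bearer alc_tok_...' \
  http://localhost:3000/api/stats/aggregated
```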
---

## Jobs

### `GET /api/jobs`

List jobs with filtering and pagination. This is the canonical job listing endpoint.

**Params:** `limit`, `page`, `status`, `search`, `sort_by`, `sort_desc`, `archived`.

Each returned job row still includes the legacy `decision_reason` string when present, and now also includes an optional `decision_explanation` object:

- `category`
- `code`
- `summary`
- `detail`
- `operator_guidance`
- `measured`
- `legacy_reason`

Example:

```bash
curl -b cookie.txt \
  'http://localhost:3000/api/jobs?status=queued,failed&limit=50&page=1'
```

### `GET /api/jobs/:id/details`

Fetch full job state: the job row, any available analyzed metadata, encode stats for completed jobs, recent job logs, and a failure summary for failed jobs. Structured explanation fields are included when available:

- `decision_explanation`
- `failure_explanation`
- `job_failure_summary` is retained as a compatibility field

Example response shape:

```json
{
  "job": {
    "id": 42,
    "input_path": "/media/movies/example.mkv",
    "status": "completed"
  },
  "metadata": {
    "codec_name": "h264",
    "width": 1920,
    "height": 1080
  },
  "encode_stats": {
    "input_size_bytes": 8011223344,
    "output_size_bytes": 4112233445,
    "compression_ratio": 1.95,
    "encode_speed": 2.4,
    "vmaf_score": 93.1
  },
  "job_logs": [],
  "job_failure_summary": null,
  "decision_explanation": {
    "category": "decision",
    "code": "transcode_recommended",
    "summary": "Transcode recommended",
    "detail": "Alchemist determined the file should be transcoded based on the current codec and measured efficiency.",
    "operator_guidance": null,
    "measured": {
      "target_codec": "av1",
      "current_codec": "h264",
      "bpp": "0.1200"
    },
    "legacy_reason": "transcode_recommended|target_codec=av1,current_codec=h264,bpp=0.1200"
  },
  "failure_explanation": null
}
```

### `POST /api/jobs/:id/cancel`

Cancels a queued or active job if the current state allows it.

### `POST /api/jobs/:id/restart`

Restarts a terminal (failed/cancelled/completed) job by sending it back to `queued`.

### `POST /api/jobs/:id/priority`

Update job priority.

**Request:**

```json
{
  "priority": 100
}
```

**Response:**

```json
{
  "id": 42,
  "priority": 100
}
```

### `POST /api/jobs/batch`

Bulk action on multiple jobs. Supported `action` values: `cancel`, `restart`, `delete`.

**Request:**

```json
{
  "action": "restart",
  "ids": [41, 42, 43]
}
```

**Response:**

```json
{
  "count": 3
}
```

### `POST /api/jobs/restart-failed`

Restart all failed or cancelled jobs.

**Response:**

```json
{
  "count": 2,
  "message": "Queued 2 failed or cancelled jobs for retry."
}
```

### `POST /api/jobs/clear-completed`

Archives completed jobs from the visible queue while preserving historical encode stats.

```json
{
  "count": 12,
  "message": "Cleared 12 completed jobs from the queue. Historical stats were preserved."
}
```

---
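For example, a batch restart with a session cookie from the login flow (job IDs illustrative):

```shell
# Restart three jobs in one call; the response reports how many matched.
curl -b cookie.txt -X POST http://localhost:3000/api/jobs/batch \
  -H 'Content-Type: application/json' \
  -d '{"action":"restart","ids":[41,42,43]}'
```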
## Engine

### `GET /api/engine/status`

Get current operational status and limits. Response fields:

- `status`
- `mode`
- `concurrent_limit`
- `manual_paused`
- `scheduler_paused`
- `draining`
- `is_manual_override`

Example:

```json
{
  "status": "paused",
  "manual_paused": true,
  "scheduler_paused": false,
  "draining": false,
  "mode": "balanced",
  "concurrent_limit": 2,
  "is_manual_override": false
}
```

### `POST /api/engine/pause`

Pause the engine (suspend active jobs).

```json
{
  "status": "paused"
}
```

### `POST /api/engine/resume`

Resume the engine.

```json
{
  "status": "running"
}
```

### `POST /api/engine/drain`

Enter drain mode (finish active jobs, don't start new ones).

```json
{
  "status": "draining"
}
```

### `POST /api/engine/stop-drain`

```json
{
  "status": "running"
}
```

### `GET /api/engine/mode`

Returns current mode, whether a manual override is active, the current concurrent limit, CPU count, and computed mode limits.

### `POST /api/engine/mode`

Switch engine mode or apply manual overrides.

**Request:**

```json
{
  "mode": "background|balanced|throughput",
  "concurrent_jobs_override": 2,
  "threads_override": 0
}
```

**Response:**

```json
{
  "status": "ok",
  "mode": "balanced",
  "concurrent_limit": 2,
  "is_manual_override": true
}
```

---
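A sketch of driving a mode change from the shell, then confirming it took effect (cookie-based auth assumed from the login example):

```shell
# Switch to throughput mode with an explicit two-job override.
curl -b cookie.txt -X POST http://localhost:3000/api/engine/mode \
  -H 'Content-Type: application/json' \
  -d '{"mode":"throughput","concurrent_jobs_override":2,"threads_override":0}'

# Verify: concurrent_limit should be 2 and is_manual_override true.
curl -b cookie.txt http://localhost:3000/api/engine/status
```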
## Statistics

### `GET /api/stats/aggregated`

Total savings, job counts, and global efficiency.

```json
{
  "total_input_bytes": 1234567890,
  "total_output_bytes": 678901234,
  "total_savings_bytes": 555666656,
  "total_time_seconds": 81234.5,
  "total_jobs": 87,
  "avg_vmaf": 92.4
}
```

### `GET /api/stats/daily`

Encode activity history for the last 30 days.

### `GET /api/stats/detailed`

Returns the most recent detailed encode stats rows.

### `GET /api/stats/savings`

Detailed breakdown of storage savings; this is the storage-savings summary used by the statistics dashboard.
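The aggregated fields relate as `total_savings_bytes = total_input_bytes - total_output_bytes`. A small sketch of deriving a percent-saved figure from those fields (`awk` assumed available; the helper name is illustrative):

```shell
# Derive percent saved from the aggregated stats payload fields.
savings_percent() { # usage: savings_percent <total_input_bytes> <total_output_bytes>
  awk -v in_b="$1" -v out_b="$2" \
    'BEGIN { printf "%.1f", (in_b - out_b) * 100.0 / in_b }'
}

savings_percent 1234567890 678901234   # values from the example above -> 45.0
```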
## Settings

### `GET /api/settings/transcode`

Returns the transcode settings payload currently loaded by the backend.

### `POST /api/settings/transcode`

**Request:**

```json
{
  "concurrent_jobs": 2,
  "size_reduction_threshold": 0.3,
  "min_bpp_threshold": 0.1,
  "min_file_size_mb": 50,
  "output_codec": "av1",
  "quality_profile": "balanced",
  "threads": 0,
  "allow_fallback": true,
  "hdr_mode": "preserve",
  "tonemap_algorithm": "hable",
  "tonemap_peak": 100.0,
  "tonemap_desat": 0.2,
  "subtitle_mode": "copy",
  "stream_rules": {
    "strip_audio_by_title": ["commentary"],
    "keep_audio_languages": ["eng"],
    "keep_only_default_audio": false
  }
}
```

---
## System

### `GET /api/system/hardware`

Detected hardware backend and codec support matrix: the current backend, supported codecs, backends, selection reason, probe summary, and any detection notes.

### `GET /api/system/hardware/probe-log`

Full logs from the startup hardware probe: the per-encoder probe log with success/failure status, selected-flag metadata, summary text, and stderr excerpts.

### `GET /api/system/resources`

Live telemetry: CPU, memory, GPU utilization, and uptime.

- `cpu_percent`
- `memory_used_mb`
- `memory_total_mb`
- `memory_percent`
- `uptime_seconds`
- `active_jobs`
- `concurrent_limit`
- `cpu_count`
- `gpu_utilization`
- `gpu_memory_percent`
## Server-Sent Events

### `GET /api/events`

Real-time event stream. Internal event types are `JobStateChanged`, `Progress`, `Decision`, and `Log`. The SSE stream exposed to clients emits lower-case event names:

- `status`: Job state changes.
- `progress`: Real-time encode statistics.
- `decision`: Skip/Transcode logic results.
- `log`: Engine and job logs.

Additional config/system events may also appear, including `config_updated` (configuration hot-reload notification), `scan_started` / `scan_completed` (library scan status), `engine_status_changed`, and `hardware_state_changed`.

Example:

```text
event: progress
data: {"job_id":42,"percentage":61.4,"time":"00:11:32"}
```

`decision` events include the legacy `reason` plus an optional structured `explanation` object with the same shape used by the jobs API.
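A minimal sketch of consuming the stream with `curl` (`-N` disables response buffering so frames arrive as they are emitted):

```shell
# Stream events as they happen; frames look like the example above,
# e.g. "event: progress" followed by a "data: {...}" JSON line.
curl -N -b cookie.txt http://localhost:3000/api/events
```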
@@ -3,6 +3,24 @@ title: Changelog
description: Release history for Alchemist.
---

## [0.3.1-rc.5] - 2026-04-16

### Reliability & Stability

- **Segment-based encode resume** — interrupted encode jobs now persist resume sessions and completed segments so restart and recovery flows can continue without discarding all completed work.
- **Notification target compatibility hardening** — notification target reads/writes now preserve the additive migration path, tolerate legacy shapes, and avoid duplicate-delete projection bugs in settings management.
- **Daily summary reliability** — summary delivery now retries safely after transient failures and avoids duplicate sends across restart boundaries by persisting the last successful day.
- **Job-detail correctness** — completed-job detail loading now fails closed on database errors instead of returning partial `200 OK` payloads, and encode stat duration fallback uses the encoded output rather than the source file.
- **Auth and settings safety** — login now returns server errors for real database failures, and duplicate notification/schedule rows no longer disappear together from a single delete action.

### Jobs & UX

- **Manual enqueue flow** — the jobs UI now supports enqueueing a single absolute file path through the same backend dedupe and output rules used by library scans.
- **Queued-job visibility** — job detail now exposes queue position and processor blocked reasons so operators can see why a queued job is not starting.
- **Attempt-history surfacing** — job detail now shows encode attempt history directly in the modal, including outcome, timing, and captured failure summary.
- **Jobs UI follow-through** — the `JobManager` refactor now ships with dedicated controller/dialog helpers and tighter SSE reconciliation so filtered tables and open detail modals stay aligned with backend truth.
- **Intelligence actions** — remux recommendations and duplicate candidates are now actionable directly from the Intelligence page.

## [0.3.1-rc.3] - 2026-04-12

### New Features
@@ -70,7 +70,7 @@ Default config file location:
 | `output_extension` | string | `"mkv"` | Output file extension |
 | `output_suffix` | string | `"-alchemist"` | Suffix added to the output filename |
 | `replace_strategy` | string | `"keep"` | Replace behavior for output collisions |
-| `output_root` | string | optional | Mirror outputs into another root path instead of writing beside the source |
+| `output_root` | string | optional | If set, Alchemist mirrors the source library directory structure under this root path instead of writing outputs alongside the source files |

## `[schedule]`
@@ -1,37 +1,39 @@
---
title: Engine Modes & States
description: Background, Balanced, and Throughput — understanding concurrency and execution flow.
---

Alchemist uses **Modes** to dictate performance limits and **States** to control execution flow.

## Engine Modes (Concurrency)

Modes define the maximum number of concurrent jobs the engine will attempt to run.

| Mode | Concurrent Jobs | Ideal For |
|------|----------------|-----------|
| **Background** | 1 | Server in active use by other applications. |
| **Balanced** | `floor(cpu_count / 2)` (min 1, max 4) | Default. Shared server usage. |
| **Throughput** | `floor(cpu_count / 2)` (min 1, no cap) | Dedicated server; clearing a large backlog. |

:::tip Manual Override
You can override the computed limit in **Settings → Runtime**. It takes effect immediately, and a "Manual" badge will appear in the engine status. Switching modes clears manual overrides.
:::

---

## Engine States (Execution)

Modes determine *how many* jobs run; states determine *whether* they run.

| State | Behavior |
|-------|----------|
| **Running** | Engine is active. Jobs start up to the current mode's limit. |
| **Paused** | Engine is suspended. No new jobs start; active jobs are frozen. |
| **Draining** | Engine is stopping. Active jobs finish, but no new jobs start. |
| **Scheduler Paused** | Engine is temporarily paused by a configured [Schedule Window](/scheduling). |

---

## Changing Engine Behavior

Engine behavior can be adjusted in real-time via the **Runtime** dashboard or the [API](/api#engine). Changes take effect immediately without cancelling in-progress jobs.
@@ -1,32 +1,35 @@
---
title: Library Doctor
description: Identifying corrupt, truncated, and unreadable media files in your library.
---

Library Doctor is a specialized diagnostic tool that scans your watch directories for media files that are corrupt, truncated, or otherwise unreadable by the Alchemist analyzer.

Run a scan manually from **Settings → Runtime → Library Doctor → Run Scan**.

## Core Checks

Library Doctor runs an intensive probe on every file in your watch directories to identify the following issues:

| Check | Technical Detection | Recommended Action |
|-------|---------------------|--------------------|
| **Probe Failure** | `ffprobe` returns a non-zero exit code or cannot parse headers. | Re-download or re-rip. |
| **No Video Stream** | File container is valid but contains no detectable video tracks. | Verify source; delete if unintended. |
| **Zero Duration** | File metadata reports a duration of 0 seconds. | Check for interrupted transfers. |
| **Truncated File** | File size is significantly smaller than expected for the reported bitrate/duration. | Check filesystem integrity. |
| **Missing Metadata** | Missing critical codec data (e.g., pixel format, profile) needed for planning. | Possible unsupported codec variant. |

---

## Relationship to Jobs

Files that fail Library Doctor checks will also fail the **Analyzing** stage of a standard transcode job and appear as Failed in Jobs.

- **Pre-emptive detection**: Running Library Doctor helps you clear "broken" files from your library before they enter the processing queue.
- **Reporting**: Issues identified by the Doctor appear in the **Health** tab of the dashboard, separate from active transcode jobs.

## Handling Results

Library Doctor is read-only; it will **never delete or modify** your files automatically. It reports issues; acting on them is up to you:

- **Re-download** — interrupted download
- **Re-rip** — disc read errors
- **Delete** — duplicate or unrecoverable
- **Ignore** — player handles it despite FFprobe failing

If a file is flagged, manually verify it using a media player. If the file is indeed unplayable, we recommend replacing it from the source. Flags can be cleared by deleting the file or moving it out of a watched directory.
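The probe-failure and zero-duration checks can be approximated with plain `ffprobe` (a rough sketch; Alchemist's analyzer does more, and the helper name is illustrative):

```shell
# Rough approximation of two Library Doctor checks using ffprobe.
doctor_probe() { # usage: doctor_probe <file>
  local dur
  dur=$(ffprobe -v error -show_entries format=duration \
          -of default=noprint_wrappers=1:nokey=1 "$1" 2>/dev/null) \
    || { echo probe-failure; return; }
  case $dur in
    ''|0|0.0*) echo zero-duration ;;
    *)         echo "ok duration=${dur}s" ;;
  esac
}

doctor_probe /media/movies/example.mkv
```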
@@ -37,13 +37,9 @@ FFmpeg expert.

## Hardware support

Alchemist detects and selects the best available hardware encoder automatically (NVIDIA NVENC, Intel QSV, AMD VAAPI/AMF, Apple VideoToolbox, or CPU fallback).

| Vendor | AV1 | HEVC | H.264 | Notes |
|--------|-----|------|-------|-------|
| NVIDIA NVENC | RTX 30/40 | Maxwell+ | All | Best for speed |
| Intel QSV | 12th gen+ | 6th gen+ | All | Best for power efficiency |
| AMD VAAPI/AMF | RDNA 2+ on compatible driver/FFmpeg stacks | Polaris+ | All | Linux VAAPI / Windows AMF; HEVC/H.264 are the validated AMD paths for `0.3.0` |
| Apple VideoToolbox | M3+ | M1+ / T2 | All | Binary install recommended |
| CPU (SVT-AV1/x265/x264) | All | All | All | Always available |

For detailed codec support matrices (AV1, HEVC, H.264) and vendor-specific setup guides, see the [Hardware Acceleration](/hardware) documentation.

## Where to start
@@ -103,7 +103,6 @@ const config: Config = {
       ],
     },
     footer: {
-      style: 'dark',
       links: [
         {
           title: 'Get Started',
@@ -1,6 +1,6 @@
 {
   "name": "alchemist-docs",
-  "version": "0.3.1-rc.3",
+  "version": "0.3.1-rc.5",
   "private": true,
   "packageManager": "bun@1.3.5",
   "scripts": {
@@ -48,6 +48,7 @@
     "node": ">=20.0"
   },
   "overrides": {
+    "follow-redirects": "^1.16.0",
     "lodash": "^4.18.1",
     "serialize-javascript": "^7.0.5"
   }
@@ -94,8 +94,9 @@ html {
 }

 .footer {
-  border-top: 1px solid rgba(200, 155, 90, 0.22);
-  background: var(--ifm-footer-background-color);
+  border-top: 1px solid var(--doc-border);
+  background-color: var(--ifm-footer-background-color) !important;
+  color: #ddd0be;
 }

 .footer__links {
@@ -118,13 +119,22 @@ html {
 }

 .footer__title {
   color: #fdf6ee;
   font-weight: 700;
 }

 .footer__bottom,
 .footer__link-item {
   color: #cfc0aa;
 }

 .footer__link-item:hover {
   color: var(--ifm-link-hover-color);
   text-decoration: none;
 }

 .footer__copyright {
   color: #b8a88e;
   text-align: center;
 }

 .main-wrapper {
@@ -1,34 +1,25 @@
|
||||
CREATE TABLE IF NOT EXISTS notification_targets_new (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
name TEXT NOT NULL,
|
||||
target_type TEXT CHECK(target_type IN ('discord_webhook', 'discord_bot', 'gotify', 'webhook', 'telegram', 'email')) NOT NULL,
|
||||
config_json TEXT NOT NULL DEFAULT '{}',
|
||||
events TEXT NOT NULL DEFAULT '["encode.failed","encode.completed"]',
|
||||
enabled BOOLEAN DEFAULT 1,
|
||||
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
|
||||
);
|
||||
ALTER TABLE notification_targets
|
||||
ADD COLUMN target_type_v2 TEXT;
|
||||
|
||||
INSERT INTO notification_targets_new (id, name, target_type, config_json, events, enabled, created_at)
|
||||
SELECT
|
||||
id,
|
||||
name,
|
||||
CASE target_type
|
||||
ALTER TABLE notification_targets
|
||||
ADD COLUMN config_json TEXT NOT NULL DEFAULT '{}';
|
||||
|
||||
UPDATE notification_targets
|
||||
SET
|
||||
target_type_v2 = CASE target_type
|
||||
WHEN 'discord' THEN 'discord_webhook'
|
||||
WHEN 'gotify' THEN 'gotify'
|
||||
ELSE 'webhook'
|
||||
END,
|
||||
CASE target_type
|
||||
config_json = CASE target_type
|
||||
WHEN 'discord' THEN json_object('webhook_url', endpoint_url)
|
||||
WHEN 'gotify' THEN json_object('server_url', endpoint_url, 'app_token', COALESCE(auth_token, ''))
|
||||
ELSE json_object('url', endpoint_url, 'auth_token', auth_token)
|
||||
END,
|
||||
COALESCE(events, '["failed","completed"]'),
|
||||
enabled,
|
||||
created_at
|
||||
FROM notification_targets;
|
||||
|
||||
DROP TABLE notification_targets;
|
||||
ALTER TABLE notification_targets_new RENAME TO notification_targets;
|
||||
END
|
||||
WHERE target_type_v2 IS NULL
|
||||
OR target_type_v2 = ''
|
||||
OR config_json IS NULL
|
||||
OR trim(config_json) = '';
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_notification_targets_enabled
|
||||
ON notification_targets(enabled);
|
||||
|
||||
2	migrations/20260412000000_store_job_metadata.sql	Normal file
@@ -0,0 +1,2 @@
-- Store input metadata as JSON to avoid live re-probing completed jobs
ALTER TABLE jobs ADD COLUMN input_metadata_json TEXT;

38	migrations/20260414010000_job_resume_sessions.sql	Normal file
@@ -0,0 +1,38 @@
CREATE TABLE IF NOT EXISTS job_resume_sessions (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    job_id INTEGER NOT NULL UNIQUE REFERENCES jobs(id) ON DELETE CASCADE,
    strategy TEXT NOT NULL,
    plan_hash TEXT NOT NULL,
    mtime_hash TEXT NOT NULL,
    temp_dir TEXT NOT NULL,
    concat_manifest_path TEXT NOT NULL,
    segment_length_secs INTEGER NOT NULL,
    status TEXT NOT NULL DEFAULT 'active',
    created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS job_resume_segments (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    job_id INTEGER NOT NULL REFERENCES jobs(id) ON DELETE CASCADE,
    segment_index INTEGER NOT NULL,
    start_secs REAL NOT NULL,
    duration_secs REAL NOT NULL,
    temp_path TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'pending',
    attempt_count INTEGER NOT NULL DEFAULT 0,
    created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    UNIQUE(job_id, segment_index)
);

CREATE INDEX IF NOT EXISTS idx_job_resume_sessions_status
    ON job_resume_sessions(status);

CREATE INDEX IF NOT EXISTS idx_job_resume_segments_job_status
    ON job_resume_segments(job_id, status);

INSERT OR REPLACE INTO schema_info (key, value) VALUES
    ('schema_version', '9'),
    ('min_compatible_version', '0.2.5'),
    ('last_updated', datetime('now'));
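The `plan_hash` and `mtime_hash` columns gate resume validity: a persisted session is only trustworthy while the encode plan and the source file are unchanged. A minimal sketch of that fingerprinting idea (the helper name and input format are illustrative, not Alchemist's actual implementation):

```shell
# Fingerprint mtime+size so a stale resume session is detectable:
# if the recomputed hash differs from the stored one, discard the session.
mtime_hash() { # usage: mtime_hash "<mtime>:<size_bytes>"
  printf '%s' "$1" | sha256sum | cut -c1-16
}

mtime_hash "1713100000:8011223344"
```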
191
plans.md
Normal file
191
plans.md
Normal file
@@ -0,0 +1,191 @@
# Open Item Plans

---

## [UX-2] Single File Enqueue

### Goal

`POST /api/jobs/enqueue` + "Add file" button in JobsToolbar.
### Backend

**New handler in `src/server/jobs.rs`:**

```rust
#[derive(Deserialize)]
struct EnqueueFilePayload {
    input_path: String,
    source_root: Option<String>,
}

async fn enqueue_file_handler(State(state), Json(payload)) -> impl IntoResponse
```

Logic:
1. Validate `input_path` exists on disk, is a file
2. Read `mtime` from filesystem metadata
3. Build `DiscoveredMedia { path, mtime, source_root }`
4. Call `enqueue_discovered_with_db(&db, discovered)` — reuses all existing skip checks, output path computation, file settings
5. If `Ok(true)` → fetch job via `db.get_job_by_input_path()`, return it
6. If `Ok(false)` → 409 "already tracked / output exists"
7. If `Err` → 400 with error

**Route:** Add `.route("/api/jobs/enqueue", post(enqueue_file_handler))` in `src/server/mod.rs`
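Steps 1 and 2 of the logic above can be sketched as a small std-only helper; `validate_input_path` is a hypothetical name, not part of the real handler:

```rust
use std::fs;
use std::io;
use std::path::Path;
use std::time::SystemTime;

// Validate that a path exists and is a regular file, then read its mtime
// (steps 1-2 of the enqueue logic). Any error here maps onto the 400 case.
fn validate_input_path(path: &str) -> io::Result<SystemTime> {
    let meta = fs::metadata(Path::new(path))?; // fails if the path does not exist
    if !meta.is_file() {
        return Err(io::Error::new(
            io::ErrorKind::InvalidInput,
            "not a regular file",
        ));
    }
    meta.modified()
}

fn main() {
    // A nonexistent path and a directory are both rejected.
    assert!(validate_input_path("/no/such/file.mkv").is_err());
    assert!(validate_input_path(".").is_err());
}
```

The returned `SystemTime` would feed directly into the `DiscoveredMedia` mtime field.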
### Frontend

**`web/src/components/jobs/JobsToolbar.tsx`:**
- Add "Add File" button next to refresh
- Opens small modal/dialog with text input for path
- POST to `/api/jobs/enqueue`, toast result
- SSE handles job appearing in table automatically

### Files to modify
- `src/server/jobs.rs` — new handler + payload struct
- `src/server/mod.rs` — route registration
- `web/src/components/jobs/JobsToolbar.tsx` — button + dialog
- `web/src/components/jobs/` — optional: new `EnqueueDialog.tsx` component

### Verification
- `cargo check && cargo test && cargo clippy`
- Manual: POST valid path → job appears queued
- POST nonexistent path → 400
- POST already-tracked path → 409
- Frontend: click Add File, enter path, see job in table
---

## [UX-3] Workers-Blocked Reason

### Goal

Surface why queued jobs aren't being processed. Extend `/api/engine/status` → show reason in JobDetailModal.

### Backend

**Extend `engine_status_handler` response** (or create new endpoint) to include blocking state:

```rust
struct EngineStatusResponse {
    // existing fields...
    blocked_reason: Option<String>,  // "paused", "scheduled", "draining", "boot_analysis", "slots_full", null
    schedule_resume: Option<String>, // next window open time if scheduler_paused
}
```

Derive from `Agent` state:
- `agent.is_manual_paused()` → `"paused"`
- `agent.is_scheduler_paused()` → `"scheduled"`
- `agent.is_draining()` → `"draining"`
- `agent.is_boot_analyzing()` → `"boot_analysis"`
- `agent.in_flight_jobs >= agent.concurrent_jobs_limit()` → `"slots_full"`
- else → `null` (processing normally)
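The derivation above is naturally a pure function. A minimal sketch, where `AgentSnapshot` and its fields are stand-ins for the real `Agent` accessors:

```rust
// Stand-in for the real Agent state; field names are assumptions.
struct AgentSnapshot {
    manual_paused: bool,
    scheduler_paused: bool,
    draining: bool,
    boot_analyzing: bool,
    in_flight_jobs: usize,
    concurrent_jobs_limit: usize,
}

// Order matters: an explicit pause should win over a full slot count.
fn blocked_reason(a: &AgentSnapshot) -> Option<&'static str> {
    if a.manual_paused {
        Some("paused")
    } else if a.scheduler_paused {
        Some("scheduled")
    } else if a.draining {
        Some("draining")
    } else if a.boot_analyzing {
        Some("boot_analysis")
    } else if a.in_flight_jobs >= a.concurrent_jobs_limit {
        Some("slots_full")
    } else {
        None // processing normally
    }
}

fn main() {
    let mut snap = AgentSnapshot {
        manual_paused: false,
        scheduler_paused: false,
        draining: false,
        boot_analyzing: false,
        in_flight_jobs: 2,
        concurrent_jobs_limit: 2,
    };
    assert_eq!(blocked_reason(&snap), Some("slots_full"));
    snap.in_flight_jobs = 1;
    assert_eq!(blocked_reason(&snap), None);
}
```

Keeping it pure makes the precedence order trivially unit-testable without spinning up the engine.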
### Frontend

**`web/src/components/jobs/JobDetailModal.tsx`:**
- Below queue position display, show blocked reason if present
- Fetch from engine status (already available via SSE `EngineStatusChanged` events, or poll `/api/engine/status`)
- Color-coded: yellow for schedule/pause, blue for boot analysis, gray for slots full

### Files to modify
- `src/server/jobs.rs` or wherever `engine_status_handler` lives — extend response
- `web/src/components/jobs/JobDetailModal.tsx` — display blocked reason
- `web/src/components/jobs/useJobSSE.ts` — optionally track engine status via SSE

### Verification
- Pause engine → queued job detail shows "Engine paused"
- Set schedule window outside current time → shows "Outside schedule window"
- Fill all slots → shows "All worker slots occupied"
- Resume → reason disappears
---

## [FG-4] Intelligence Page Actions

### Goal

Add actionable buttons to `LibraryIntelligence.tsx`: delete duplicates, queue remux opportunities.

### Duplicate Group Actions

**"Keep Latest, Delete Rest" button per group:**
- Each duplicate group card gets a "Clean Up" button
- Selects all jobs except the one with latest `updated_at`
- Calls `POST /api/jobs/batch` with `{ action: "delete", ids: [...] }`
- Confirmation modal: "Archive N duplicate jobs?"

**"Clean All Duplicates" bulk button:**
- Top-level button in duplicates section header
- Same logic across all groups
- Shows total count in confirmation
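The "keep latest, delete rest" selection can be sketched as a pure function; `Job` here is a stand-in shape with `updated_at` as a comparable timestamp, not the real job type:

```rust
#[derive(Clone)]
struct Job {
    id: i64,
    updated_at: i64, // e.g. unix seconds
}

// Return the ids to archive: every job in the group except the most
// recently updated one. Empty and single-element groups yield nothing.
fn ids_to_archive(group: &[Job]) -> Vec<i64> {
    let Some(keep) = group.iter().max_by_key(|j| j.updated_at) else {
        return Vec::new();
    };
    group
        .iter()
        .filter(|j| j.id != keep.id)
        .map(|j| j.id)
        .collect()
}

fn main() {
    let group = vec![
        Job { id: 1, updated_at: 100 },
        Job { id: 2, updated_at: 300 },
        Job { id: 3, updated_at: 200 },
    ];
    let mut ids = ids_to_archive(&group);
    ids.sort();
    assert_eq!(ids, vec![1, 3]); // job 2 (latest) is kept
}
```

The resulting id list is exactly the `ids` payload for the batch delete call, and its length is the "N" shown in the confirmation modal.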
### Recommendation Actions

**"Queue All Remux" button:**
- Gathers IDs of all remux opportunity jobs
- Calls `POST /api/jobs/batch` with `{ action: "restart", ids: [...] }`
- Jobs re-enter queue for remux processing

**Per-recommendation "Queue" button:**
- Individual restart for single recommendation items

### Backend

No new endpoints needed — existing `POST /api/jobs/batch` handles all actions (cancel/delete/restart).

### Frontend

**`web/src/components/LibraryIntelligence.tsx`:**
- Add "Clean Up" button to each duplicate group card
- Add "Clean All Duplicates" button to section header
- Add "Queue All" button to remux opportunities section
- Add confirmation modal component
- Add toast notifications for success/error
- Refresh data after action completes

### Files to modify
- `web/src/components/LibraryIntelligence.tsx` — buttons, modals, action handlers

### Verification
- Click "Clean Up" on duplicate group → archives all but latest
- Click "Queue All Remux" → remux jobs reset to queued
- Confirm counts in modal match actual
- Data refreshes after action
---

## [RG-2] AMD VAAPI/AMF Validation

### Goal

Verify AMD hardware encoder paths produce correct FFmpeg commands on real AMD hardware.

### Problem

`src/media/ffmpeg/vaapi.rs` and `src/media/ffmpeg/amf.rs` were implemented without real hardware validation. Flag mappings, device paths, and quality controls may be incorrect.

### Validation checklist

**VAAPI (Linux):**
- [ ] Device path `/dev/dri/renderD128` detection works
- [ ] `hevc_vaapi` / `h264_vaapi` encoder selection
- [ ] CRF/quality mapping → `-rc_mode CQP -qp N` or `-rc_mode ICQ -quality N`
- [ ] HDR passthrough flags (if applicable)
- [ ] Container compatibility (MKV/MP4)

**AMF (Windows):**
- [ ] `hevc_amf` / `h264_amf` encoder selection
- [ ] Quality mapping → `-quality quality -qp_i N -qp_p N`
- [ ] B-frame support detection
- [ ] HDR passthrough

### Approach
1. Write unit tests for `build_args()` output — verify flag strings without hardware
2. Gate integration tests on `AMD_GPU_AVAILABLE` env var
3. Document known-good flag sets from AMD documentation
4. Add `EncoderCapabilities` detection for AMF/VAAPI (similar to existing NVENC/QSV detection)
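Step 1 can be sketched like this; `build_vaapi_args` and its exact flag choices are illustrative assumptions, not the real `build_args()` in `src/media/ffmpeg/vaapi.rs`:

```rust
// Hypothetical flag builder: the point is that the output is a plain Vec of
// strings, so the mapping can be asserted on in CI without any AMD hardware.
fn build_vaapi_args(device: &str, encoder: &str, qp: u32) -> Vec<String> {
    vec![
        "-vaapi_device".into(),
        device.into(),
        "-c:v".into(),
        encoder.into(),
        "-rc_mode".into(),
        "CQP".into(),
        "-qp".into(),
        qp.to_string(),
    ]
}

fn main() {
    let args = build_vaapi_args("/dev/dri/renderD128", "hevc_vaapi", 24);
    // Assert on exact flag positions, exactly as a CI unit test would.
    assert_eq!(args[2], "-c:v");
    assert_eq!(args[3], "hevc_vaapi");
    assert_eq!(args[7], "24");
}
```

The hardware-gated integration tests would then run the same argument vector through a real `ffmpeg` invocation.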
### Files to modify
- `src/media/ffmpeg/vaapi.rs` — flag corrections if needed
- `src/media/ffmpeg/amf.rs` — flag corrections if needed
- `tests/` — new integration test file gated on hardware

### Verification
- Unit tests pass on CI (no hardware needed)
- Integration tests pass on AMD hardware (manual)
- Generated FFmpeg commands reviewed against AMD documentation
@@ -1,124 +0,0 @@
# Security Best Practices Report

## Executive Summary

I found one critical security bug and one additional high-severity issue in the setup/bootstrap flow.

The critical problem is that first-run setup is remotely accessible without authentication while the server listens on `0.0.0.0`. A network-reachable attacker can win the initial setup race, create the first admin account, and take over the instance.

I did not find evidence of major client-side XSS sinks or obvious SQL injection paths during this audit. Most of the remaining concerns I saw were hardening-level issues rather than immediately exploitable major bugs.

## Critical Findings

### ALCH-SEC-001

- Severity: Critical
- Location:
  - `src/server/middleware.rs:80-86`
  - `src/server/wizard.rs:95-210`
  - `src/server/mod.rs:176-197`
  - `README.md:61-79`
- Impact: Any attacker who can reach the service before the legitimate operator completes setup can create the first admin account and fully compromise the instance.

#### Evidence

`auth_middleware` exempts the full `/api/setup` namespace from authentication:

- `src/server/middleware.rs:80-86`

`setup_complete_handler` only checks `setup_required` and then creates the user, session cookie, and persisted config:

- `src/server/wizard.rs:95-210`

The server binds to all interfaces by default:

- `src/server/mod.rs:176-197`

The documented Docker quick-start publishes port `3000` directly:

- `README.md:61-79`

#### Why This Is Exploitable

On a fresh install, or any run where `setup_required == true`, the application accepts unauthenticated requests to `/api/setup/complete`. Because the listener binds `0.0.0.0`, that endpoint is reachable from any network that can reach the host unless an external firewall or reverse proxy blocks it.

That lets a remote attacker:

1. POST their own username and password to `/api/setup/complete`
2. Receive the initial authenticated session cookie
3. Persist attacker-controlled configuration and start operating as the admin user

This is a full-authentication-bypass takeover of the instance during bootstrap.

#### Recommended Fix

Require setup completion to come only from a trusted local origin during bootstrap, matching the stricter treatment already used for `/api/fs/*` during setup.

Minimal safe options:

1. Restrict `/api/setup/*` and `/api/settings/bundle` to loopback-only while `setup_required == true`.
2. Alternatively require an explicit one-time bootstrap secret/token generated on startup and printed locally.
3. Consider binding to `127.0.0.1` by default until setup is complete, then allowing an explicit public bind only after bootstrap.
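Option 1 amounts to a small gate in front of the setup routes. A minimal sketch, assuming the middleware can see the peer `SocketAddr`; the route matching and `setup_required` plumbing here are assumptions, only the loopback test itself is the point:

```rust
use std::net::SocketAddr;

// Gate setup-era endpoints on a loopback peer address while setup is still
// required. Other paths fall through to the normal auth middleware.
fn allow_setup_request(peer: SocketAddr, setup_required: bool, path: &str) -> bool {
    let is_setup_path = path.starts_with("/api/setup") || path == "/api/settings/bundle";
    if setup_required && is_setup_path {
        // During bootstrap, only loopback peers may touch setup endpoints.
        return peer.ip().is_loopback();
    }
    true
}

fn main() {
    let local: SocketAddr = "127.0.0.1:50000".parse().unwrap();
    let remote: SocketAddr = "203.0.113.9:50000".parse().unwrap();
    assert!(allow_setup_request(local, true, "/api/setup/complete"));
    assert!(!allow_setup_request(remote, true, "/api/setup/complete"));
    assert!(!allow_setup_request(remote, true, "/api/settings/bundle"));
}
```

Note this check must use the real socket peer, not a forwarded header, since forwarded headers are attacker-controlled until a trusted proxy is configured.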
#### Mitigation Until Fixed

- Do not expose the service to any network before setup is completed.
- Do not publish the container port directly on untrusted networks.
- Complete setup only through a local-only tunnel or host firewall rule.

## High Findings

### ALCH-SEC-002

- Severity: High
- Location:
  - `src/server/middleware.rs:116-117`
  - `src/server/settings.rs:244-285`
  - `src/config.rs:366-390`
  - `src/main.rs:369-383`
  - `src/db.rs:2566-2571`
- Impact: During setup mode, an unauthenticated remote attacker can read and overwrite the full runtime configuration; after `--reset-auth`, this can expose existing notification endpoints/tokens and let the attacker reconfigure the instance before the operator reclaims it.

#### Evidence

While `setup_required == true`, `auth_middleware` explicitly allows `/api/settings/bundle` without authentication:

- `src/server/middleware.rs:116-117`

`get_settings_bundle_handler` returns the full `Config`, and `update_settings_bundle_handler` writes an attacker-supplied `Config` back to disk and runtime state:

- `src/server/settings.rs:244-285`

The config structure includes notification targets and optional `auth_token` fields:

- `src/config.rs:366-390`

`--reset-auth` only clears users and sessions, then re-enters setup mode:

- `src/main.rs:369-383`
- `src/db.rs:2566-2571`

#### Why This Is Exploitable

This endpoint is effectively a public config API whenever the app is in setup mode. On a brand-new install, that broadens the same bootstrap attack surface as ALCH-SEC-001. On an existing deployment where an operator runs `--reset-auth`, the previous configuration remains on disk while authentication is removed, so a remote caller can:

1. GET `/api/settings/bundle` and read the current config
2. Learn configured paths, schedules, webhook targets, and any stored notification bearer tokens
3. PUT a replacement config before the legitimate operator finishes recovery

That creates both confidential-data exposure and unauthenticated remote reconfiguration during recovery/bootstrap windows.

#### Recommended Fix

Do not expose `/api/settings/bundle` anonymously.

Safer options:

1. Apply the same loopback-only setup restriction used for `/api/fs/*`.
2. Split bootstrap-safe fields from privileged configuration and expose only the minimal bootstrap payload anonymously.
3. Redact secret-bearing config fields such as notification tokens from any unauthenticated response path.

## Notes

- I did not find a major DOM-XSS path in `web/src`; there were no `dangerouslySetInnerHTML`, `innerHTML`, `insertAdjacentHTML`, `eval`, or similar high-risk sinks in the audited code paths.
- I also did not see obvious raw SQL string interpolation issues; the database code I reviewed uses parameter binding.
@@ -82,12 +82,12 @@ impl QualityProfile {
         }
     }
 
-    /// Get FFmpeg quality value for Apple VideoToolbox
+    /// Get FFmpeg quality value for Apple VideoToolbox (-q:v 1-100, lower is better)
     pub fn videotoolbox_quality(&self) -> &'static str {
         match self {
-            Self::Quality => "55",
-            Self::Balanced => "65",
-            Self::Speed => "75",
+            Self::Quality => "24",
+            Self::Balanced => "28",
+            Self::Speed => "32",
         }
     }
 }
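Restating the new mapping from the hunk above as a standalone enum, purely for illustration; this `QualityProfile` is a stand-in for the real type in the diff:

```rust
// Stand-in for the real QualityProfile; only the new value mapping is shown.
enum QualityProfile {
    Quality,
    Balanced,
    Speed,
}

impl QualityProfile {
    // On this scale lower values mean better quality, so Quality < Speed.
    fn videotoolbox_quality(&self) -> &'static str {
        match self {
            Self::Quality => "24",
            Self::Balanced => "28",
            Self::Speed => "32",
        }
    }
}

fn main() {
    assert_eq!(QualityProfile::Quality.videotoolbox_quality(), "24");
    assert_eq!(QualityProfile::Speed.videotoolbox_quality(), "32");
}
```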
@@ -490,36 +490,34 @@ impl NotificationTargetConfig {
 
         match self.target_type.as_str() {
             "discord_webhook" => {
-                if !config_map.contains_key("webhook_url") {
-                    if let Some(endpoint_url) = self.endpoint_url.clone() {
-                        config_map
-                            .insert("webhook_url".to_string(), JsonValue::String(endpoint_url));
-                    }
+                if let Some(endpoint_url) = self.endpoint_url.clone() {
+                    config_map
+                        .entry("webhook_url".to_string())
+                        .or_insert_with(|| JsonValue::String(endpoint_url));
                 }
             }
             "gotify" => {
-                if !config_map.contains_key("server_url") {
-                    if let Some(endpoint_url) = self.endpoint_url.clone() {
-                        config_map
-                            .insert("server_url".to_string(), JsonValue::String(endpoint_url));
-                    }
+                if let Some(endpoint_url) = self.endpoint_url.clone() {
+                    config_map
+                        .entry("server_url".to_string())
+                        .or_insert_with(|| JsonValue::String(endpoint_url));
                 }
-                if !config_map.contains_key("app_token") {
-                    if let Some(auth_token) = self.auth_token.clone() {
-                        config_map.insert("app_token".to_string(), JsonValue::String(auth_token));
-                    }
+                if let Some(auth_token) = self.auth_token.clone() {
+                    config_map
+                        .entry("app_token".to_string())
+                        .or_insert_with(|| JsonValue::String(auth_token));
                 }
             }
             "webhook" => {
-                if !config_map.contains_key("url") {
-                    if let Some(endpoint_url) = self.endpoint_url.clone() {
-                        config_map.insert("url".to_string(), JsonValue::String(endpoint_url));
-                    }
+                if let Some(endpoint_url) = self.endpoint_url.clone() {
+                    config_map
+                        .entry("url".to_string())
+                        .or_insert_with(|| JsonValue::String(endpoint_url));
                 }
-                if !config_map.contains_key("auth_token") {
-                    if let Some(auth_token) = self.auth_token.clone() {
-                        config_map.insert("auth_token".to_string(), JsonValue::String(auth_token));
-                    }
+                if let Some(auth_token) = self.auth_token.clone() {
+                    config_map
+                        .entry("auth_token".to_string())
+                        .or_insert_with(|| JsonValue::String(auth_token));
                 }
             }
             _ => {}
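The refactor above swaps the `contains_key` + `insert` pattern for `entry().or_insert_with()`, which backfills a legacy value only when the key is absent and never clobbers a user-supplied one. A minimal sketch of that behavior, using plain `String` values in place of `JsonValue`:

```rust
use std::collections::HashMap;

// Backfill a legacy column value only when the key is missing; an existing
// value supplied by the user is always preserved.
fn backfill(config_map: &mut HashMap<String, String>, key: &str, fallback: Option<String>) {
    if let Some(value) = fallback {
        config_map
            .entry(key.to_string())
            .or_insert_with(|| value);
    }
}

fn main() {
    let mut map = HashMap::new();
    map.insert("webhook_url".to_string(), "https://example.test/hook".to_string());

    // Existing key is preserved; missing key is backfilled.
    backfill(&mut map, "webhook_url", Some("https://legacy.test".to_string()));
    backfill(&mut map, "auth_token", Some("tok".to_string()));

    assert_eq!(map["webhook_url"], "https://example.test/hook");
    assert_eq!(map["auth_token"], "tok");
}
```

This is also why the diff removes the duplicate-key projection bug mentioned in the changelog: a single entry lookup replaces two separate map probes.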
@@ -684,6 +682,13 @@ pub struct SystemConfig {
     /// Enable HSTS header (only enable if running behind HTTPS)
     #[serde(default)]
     pub https_only: bool,
+    /// Explicit list of reverse proxy IPs (e.g. "192.168.1.1") whose
+    /// X-Forwarded-For / X-Real-IP headers are trusted. When non-empty,
+    /// only these IPs (plus loopback) are trusted as proxies; private
+    /// ranges are no longer trusted by default. Leave empty to preserve
+    /// the previous behaviour (trust all RFC-1918 private addresses).
+    #[serde(default)]
+    pub trusted_proxies: Vec<String>,
 }
 
 fn default_true() -> bool {
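The trust decision the new `trusted_proxies` field implies can be sketched as follows; the private-range fallback and exact semantics are assumptions drawn from the doc comment, not the real middleware:

```rust
use std::net::IpAddr;

// Decide whether a peer's forwarded headers may be trusted.
fn is_trusted_proxy(peer: IpAddr, trusted: &[IpAddr]) -> bool {
    if peer.is_loopback() {
        return true; // loopback is always trusted
    }
    if !trusted.is_empty() {
        // Explicit allow-list mode: only the listed proxies are trusted.
        return trusted.contains(&peer);
    }
    // Legacy fallback: trust RFC-1918 private ranges.
    match peer {
        IpAddr::V4(v4) => v4.is_private(),
        IpAddr::V6(_) => false,
    }
}

fn main() {
    let proxy: IpAddr = "192.168.1.1".parse().unwrap();
    let other: IpAddr = "192.168.1.2".parse().unwrap();
    let allow = vec![proxy];
    assert!(is_trusted_proxy(proxy, &allow));
    assert!(!is_trusted_proxy(other, &allow)); // private, but not listed
    assert!(is_trusted_proxy(other, &[])); // empty list → legacy private-range trust
}
```

The key property is that configuring even one explicit proxy switches off the broad private-range trust entirely.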
@@ -710,6 +715,7 @@ impl Default for SystemConfig {
             log_retention_days: default_log_retention_days(),
             engine_mode: EngineMode::default(),
             https_only: false,
+            trusted_proxies: Vec::new(),
         }
     }
 }
@@ -825,6 +831,7 @@ impl Default for Config {
                 log_retention_days: default_log_retention_days(),
                 engine_mode: EngineMode::default(),
                 https_only: false,
+                trusted_proxies: Vec::new(),
             },
         }
     }
@@ -442,6 +442,7 @@ fn build_rate_control(mode: &str, value: Option<u32>, encoder: Encoder) -> Resul
             match encoder.backend() {
                 EncoderBackend::Qsv => Ok(RateControl::QsvQuality { value: quality }),
                 EncoderBackend::Cpu => Ok(RateControl::Crf { value: quality }),
+                EncoderBackend::Videotoolbox => Ok(RateControl::Cq { value: quality }),
                 _ => Ok(RateControl::Cq { value: quality }),
             }
         }
864
src/db/config.rs
Normal file
@@ -0,0 +1,864 @@
use crate::error::Result;
use serde_json::Value as JsonValue;
use sqlx::Row;
use std::collections::HashMap;
use std::path::{Path, PathBuf};

use super::Db;
use super::types::*;

fn notification_config_string(config_json: &str, key: &str) -> Option<String> {
    serde_json::from_str::<JsonValue>(config_json)
        .ok()
        .and_then(|value| {
            value
                .get(key)
                .and_then(JsonValue::as_str)
                .map(str::to_string)
        })
        .map(|value| value.trim().to_string())
        .filter(|value| !value.is_empty())
}

fn notification_legacy_columns(
    target_type: &str,
    config_json: &str,
) -> (String, Option<String>, Option<String>) {
    match target_type {
        "discord_webhook" => (
            "discord".to_string(),
            notification_config_string(config_json, "webhook_url"),
            None,
        ),
        "discord_bot" => (
            "discord".to_string(),
            Some("https://discord.com".to_string()),
            notification_config_string(config_json, "bot_token"),
        ),
        "gotify" => (
            "gotify".to_string(),
            notification_config_string(config_json, "server_url"),
            notification_config_string(config_json, "app_token"),
        ),
        "webhook" => (
            "webhook".to_string(),
            notification_config_string(config_json, "url"),
            notification_config_string(config_json, "auth_token"),
        ),
        "telegram" => (
            "webhook".to_string(),
            Some("https://api.telegram.org".to_string()),
            notification_config_string(config_json, "bot_token"),
        ),
        "email" => ("webhook".to_string(), None, None),
        other => (other.to_string(), None, None),
    }
}
impl Db {
    pub async fn get_watch_dirs(&self) -> Result<Vec<WatchDir>> {
        let has_is_recursive = self.watch_dir_flags.has_is_recursive;
        let has_recursive = self.watch_dir_flags.has_recursive;
        let has_enabled = self.watch_dir_flags.has_enabled;
        let has_profile_id = self.watch_dir_flags.has_profile_id;

        let recursive_expr = if has_is_recursive {
            "is_recursive"
        } else if has_recursive {
            "recursive"
        } else {
            "1"
        };

        let enabled_filter = if has_enabled {
            "WHERE enabled = 1 "
        } else {
            ""
        };
        let profile_expr = if has_profile_id { "profile_id" } else { "NULL" };
        let query = format!(
            "SELECT id, path, {} as is_recursive, {} as profile_id, created_at
             FROM watch_dirs {}ORDER BY path ASC",
            recursive_expr, profile_expr, enabled_filter
        );

        let dirs = sqlx::query_as::<_, WatchDir>(&query)
            .fetch_all(&self.pool)
            .await?;
        Ok(dirs)
    }

    pub async fn add_watch_dir(&self, path: &str, is_recursive: bool) -> Result<WatchDir> {
        let has_is_recursive = self.watch_dir_flags.has_is_recursive;
        let has_recursive = self.watch_dir_flags.has_recursive;
        let has_profile_id = self.watch_dir_flags.has_profile_id;

        let row = if has_is_recursive && has_profile_id {
            sqlx::query_as::<_, WatchDir>(
                "INSERT INTO watch_dirs (path, is_recursive) VALUES (?, ?)
                 RETURNING id, path, is_recursive, profile_id, created_at",
            )
            .bind(path)
            .bind(is_recursive)
            .fetch_one(&self.pool)
            .await?
        } else if has_is_recursive {
            sqlx::query_as::<_, WatchDir>(
                "INSERT INTO watch_dirs (path, is_recursive) VALUES (?, ?)
                 RETURNING id, path, is_recursive, NULL as profile_id, created_at",
            )
            .bind(path)
            .bind(is_recursive)
            .fetch_one(&self.pool)
            .await?
        } else if has_recursive && has_profile_id {
            sqlx::query_as::<_, WatchDir>(
                "INSERT INTO watch_dirs (path, recursive) VALUES (?, ?)
                 RETURNING id, path, recursive as is_recursive, profile_id, created_at",
            )
            .bind(path)
            .bind(is_recursive)
            .fetch_one(&self.pool)
            .await?
        } else if has_recursive {
            sqlx::query_as::<_, WatchDir>(
                "INSERT INTO watch_dirs (path, recursive) VALUES (?, ?)
                 RETURNING id, path, recursive as is_recursive, NULL as profile_id, created_at",
            )
            .bind(path)
            .bind(is_recursive)
            .fetch_one(&self.pool)
            .await?
        } else {
            sqlx::query_as::<_, WatchDir>(
                "INSERT INTO watch_dirs (path) VALUES (?)
                 RETURNING id, path, 1 as is_recursive, NULL as profile_id, created_at",
            )
            .bind(path)
            .fetch_one(&self.pool)
            .await?
        };
        Ok(row)
    }

    pub async fn replace_watch_dirs(
        &self,
        watch_dirs: &[crate::config::WatchDirConfig],
    ) -> Result<()> {
        let has_is_recursive = self.watch_dir_flags.has_is_recursive;
        let has_recursive = self.watch_dir_flags.has_recursive;
        let has_profile_id = self.watch_dir_flags.has_profile_id;
        let preserved_profiles = if has_profile_id {
            let rows = sqlx::query("SELECT path, profile_id FROM watch_dirs")
                .fetch_all(&self.pool)
                .await?;
            rows.into_iter()
                .map(|row| {
                    let path: String = row.get("path");
                    let profile_id: Option<i64> = row.get("profile_id");
                    (path, profile_id)
                })
                .collect::<HashMap<_, _>>()
        } else {
            HashMap::new()
        };
        let mut tx = self.pool.begin().await?;
        sqlx::query("DELETE FROM watch_dirs")
            .execute(&mut *tx)
            .await?;
        for watch_dir in watch_dirs {
            let preserved_profile_id = preserved_profiles.get(&watch_dir.path).copied().flatten();
            if has_is_recursive && has_profile_id {
                sqlx::query(
                    "INSERT INTO watch_dirs (path, is_recursive, profile_id) VALUES (?, ?, ?)",
                )
                .bind(&watch_dir.path)
                .bind(watch_dir.is_recursive)
                .bind(preserved_profile_id)
                .execute(&mut *tx)
                .await?;
            } else if has_recursive && has_profile_id {
                sqlx::query(
                    "INSERT INTO watch_dirs (path, recursive, profile_id) VALUES (?, ?, ?)",
                )
                .bind(&watch_dir.path)
                .bind(watch_dir.is_recursive)
                .bind(preserved_profile_id)
                .execute(&mut *tx)
                .await?;
            } else if has_recursive {
                sqlx::query("INSERT INTO watch_dirs (path, recursive) VALUES (?, ?)")
                    .bind(&watch_dir.path)
                    .bind(watch_dir.is_recursive)
                    .execute(&mut *tx)
                    .await?;
            } else {
                sqlx::query("INSERT INTO watch_dirs (path) VALUES (?)")
                    .bind(&watch_dir.path)
                    .execute(&mut *tx)
                    .await?;
            }
        }
        tx.commit().await?;
        Ok(())
    }
    pub async fn remove_watch_dir(&self, id: i64) -> Result<()> {
        let res = sqlx::query("DELETE FROM watch_dirs WHERE id = ?")
            .bind(id)
            .execute(&self.pool)
            .await?;
        if res.rows_affected() == 0 {
            return Err(crate::error::AlchemistError::Database(
                sqlx::Error::RowNotFound,
            ));
        }
        Ok(())
    }

    pub async fn get_all_profiles(&self) -> Result<Vec<LibraryProfile>> {
        let profiles = sqlx::query_as::<_, LibraryProfile>(
            "SELECT id, name, preset, codec, quality_profile, hdr_mode, audio_mode,
                    crf_override, notes, created_at, updated_at
             FROM library_profiles
             ORDER BY id ASC",
        )
        .fetch_all(&self.pool)
        .await?;
        Ok(profiles)
    }

    pub async fn get_profile(&self, id: i64) -> Result<Option<LibraryProfile>> {
        let profile = sqlx::query_as::<_, LibraryProfile>(
            "SELECT id, name, preset, codec, quality_profile, hdr_mode, audio_mode,
                    crf_override, notes, created_at, updated_at
             FROM library_profiles
             WHERE id = ?",
        )
        .bind(id)
        .fetch_optional(&self.pool)
        .await?;
        Ok(profile)
    }

    pub async fn create_profile(&self, profile: NewLibraryProfile) -> Result<i64> {
        let id = sqlx::query(
            "INSERT INTO library_profiles
                 (name, preset, codec, quality_profile, hdr_mode, audio_mode, crf_override, notes, updated_at)
             VALUES (?, ?, ?, ?, ?, ?, ?, ?, CURRENT_TIMESTAMP)",
        )
        .bind(profile.name)
        .bind(profile.preset)
        .bind(profile.codec)
        .bind(profile.quality_profile)
        .bind(profile.hdr_mode)
        .bind(profile.audio_mode)
        .bind(profile.crf_override)
        .bind(profile.notes)
        .execute(&self.pool)
        .await?
        .last_insert_rowid();
        Ok(id)
    }

    pub async fn update_profile(&self, id: i64, profile: NewLibraryProfile) -> Result<()> {
        let result = sqlx::query(
            "UPDATE library_profiles
             SET name = ?,
                 preset = ?,
                 codec = ?,
                 quality_profile = ?,
                 hdr_mode = ?,
                 audio_mode = ?,
                 crf_override = ?,
                 notes = ?,
                 updated_at = CURRENT_TIMESTAMP
             WHERE id = ?",
        )
        .bind(profile.name)
        .bind(profile.preset)
        .bind(profile.codec)
        .bind(profile.quality_profile)
        .bind(profile.hdr_mode)
        .bind(profile.audio_mode)
        .bind(profile.crf_override)
        .bind(profile.notes)
        .bind(id)
        .execute(&self.pool)
        .await?;

        if result.rows_affected() == 0 {
            return Err(crate::error::AlchemistError::Database(
                sqlx::Error::RowNotFound,
            ));
        }

        Ok(())
    }

    pub async fn delete_profile(&self, id: i64) -> Result<()> {
        let result = sqlx::query("DELETE FROM library_profiles WHERE id = ?")
            .bind(id)
            .execute(&self.pool)
            .await?;
        if result.rows_affected() == 0 {
            return Err(crate::error::AlchemistError::Database(
                sqlx::Error::RowNotFound,
            ));
        }
        Ok(())
    }

    pub async fn assign_profile_to_watch_dir(
        &self,
        dir_id: i64,
        profile_id: Option<i64>,
    ) -> Result<()> {
        let result = sqlx::query(
            "UPDATE watch_dirs
             SET profile_id = ?
             WHERE id = ?",
        )
        .bind(profile_id)
        .bind(dir_id)
        .execute(&self.pool)
        .await?;

        if result.rows_affected() == 0 {
            return Err(crate::error::AlchemistError::Database(
                sqlx::Error::RowNotFound,
            ));
        }

        Ok(())
    }
    pub async fn get_profile_for_path(&self, path: &str) -> Result<Option<LibraryProfile>> {
        let normalized = Path::new(path);
        let candidate = sqlx::query_as::<_, LibraryProfile>(
            "SELECT lp.id, lp.name, lp.preset, lp.codec, lp.quality_profile, lp.hdr_mode,
                    lp.audio_mode, lp.crf_override, lp.notes, lp.created_at, lp.updated_at
             FROM watch_dirs wd
             JOIN library_profiles lp ON lp.id = wd.profile_id
             WHERE wd.profile_id IS NOT NULL
               AND (
                   ? = wd.path
                   OR (
                       length(?) > length(wd.path)
                       AND (
                           substr(?, 1, length(wd.path) + 1) = wd.path || '/'
                           OR substr(?, 1, length(wd.path) + 1) = wd.path || '\\'
                       )
                   )
               )
             ORDER BY LENGTH(wd.path) DESC
             LIMIT 1",
        )
        .bind(path)
        .bind(path)
        .bind(path)
        .bind(path)
        .fetch_optional(&self.pool)
        .await?;

        if candidate.is_some() {
            return Ok(candidate);
        }

        // SQLite prefix matching is a fast first pass; fall back to strict path ancestry
        // if separators or normalization differ.
        let rows = sqlx::query(
            "SELECT wd.path,
                    lp.id, lp.name, lp.preset, lp.codec, lp.quality_profile, lp.hdr_mode,
                    lp.audio_mode, lp.crf_override, lp.notes, lp.created_at, lp.updated_at
             FROM watch_dirs wd
             JOIN library_profiles lp ON lp.id = wd.profile_id
             WHERE wd.profile_id IS NOT NULL",
        )
        .fetch_all(&self.pool)
        .await?;

        let mut best: Option<(usize, LibraryProfile)> = None;
        for row in rows {
            let watch_path: String = row.get("path");
            let profile = LibraryProfile {
                id: row.get("id"),
                name: row.get("name"),
                preset: row.get("preset"),
                codec: row.get("codec"),
                quality_profile: row.get("quality_profile"),
                hdr_mode: row.get("hdr_mode"),
                audio_mode: row.get("audio_mode"),
                crf_override: row.get("crf_override"),
                notes: row.get("notes"),
                created_at: row.get("created_at"),
                updated_at: row.get("updated_at"),
            };
            let watch_path_buf = PathBuf::from(&watch_path);
            if normalized == watch_path_buf || normalized.starts_with(&watch_path_buf) {
                let score = watch_path.len();
                if best
                    .as_ref()
                    .is_none_or(|(best_score, _)| score > *best_score)
                {
                    best = Some((score, profile));
                }
            }
        }

        Ok(best.map(|(_, profile)| profile))
    }

    pub async fn count_watch_dirs_using_profile(&self, profile_id: i64) -> Result<i64> {
        let row: (i64,) = sqlx::query_as("SELECT COUNT(*) FROM watch_dirs WHERE profile_id = ?")
            .bind(profile_id)
            .fetch_one(&self.pool)
            .await?;
        Ok(row.0)
    }
pub async fn get_notification_targets(&self) -> Result<Vec<NotificationTarget>> {
|
||||
let flags = &self.notification_target_flags;
|
||||
let targets = if flags.has_target_type_v2 {
|
||||
sqlx::query_as::<_, NotificationTarget>(
|
||||
"SELECT
|
||||
id,
|
||||
name,
|
||||
COALESCE(
|
||||
NULLIF(target_type_v2, ''),
|
||||
CASE target_type
|
||||
WHEN 'discord' THEN 'discord_webhook'
|
||||
WHEN 'gotify' THEN 'gotify'
|
||||
ELSE 'webhook'
|
||||
END
|
||||
) AS target_type,
|
||||
CASE
|
||||
WHEN trim(config_json) != '' THEN config_json
|
||||
WHEN target_type = 'discord' THEN json_object('webhook_url', endpoint_url)
|
||||
WHEN target_type = 'gotify' THEN json_object('server_url', endpoint_url, 'app_token', COALESCE(auth_token, ''))
|
||||
ELSE json_object('url', endpoint_url, 'auth_token', auth_token)
|
||||
END AS config_json,
|
||||
events,
|
||||
enabled,
|
||||
created_at
|
||||
FROM notification_targets
|
||||
ORDER BY id ASC",
|
||||
)
|
||||
.fetch_all(&self.pool)
|
||||
.await?
|
||||
} else {
|
||||
sqlx::query_as::<_, NotificationTarget>(
|
||||
"SELECT id, name, target_type, config_json, events, enabled, created_at
|
||||
FROM notification_targets
|
||||
ORDER BY id ASC",
|
||||
)
|
||||
.fetch_all(&self.pool)
|
||||
.await?
|
||||
};
|
||||
Ok(targets)
|
||||
}
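The `COALESCE`/`NULLIF`/`CASE` expression above maps legacy target types onto the v2 vocabulary when `target_type_v2` is absent or empty. A minimal pure-Rust sketch of the same mapping (the function name `effective_target_type` is invented here; only the mapping itself comes from the SQL):

```rust
// Sketch of the legacy -> v2 target-type fallback performed by the SQL CASE.
fn effective_target_type(v2: Option<&str>, legacy: &str) -> String {
    match v2 {
        // NULLIF(target_type_v2, '') keeps only non-empty v2 values.
        Some(t) if !t.is_empty() => t.to_string(),
        _ => match legacy {
            "discord" => "discord_webhook".to_string(),
            "gotify" => "gotify".to_string(),
            _ => "webhook".to_string(),
        },
    }
}

fn main() {
    assert_eq!(effective_target_type(Some("pushover"), "discord"), "pushover");
    assert_eq!(effective_target_type(None, "discord"), "discord_webhook");
    assert_eq!(effective_target_type(Some(""), "gotify"), "gotify");
    assert_eq!(effective_target_type(None, "slack"), "webhook");
    println!("ok");
}
```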

    pub async fn add_notification_target(
        &self,
        name: &str,
        target_type: &str,
        config_json: &str,
        events: &str,
        enabled: bool,
    ) -> Result<NotificationTarget> {
        let flags = &self.notification_target_flags;
        if flags.has_target_type_v2 {
            let (legacy_target_type, endpoint_url, auth_token) =
                notification_legacy_columns(target_type, config_json);
            let result = sqlx::query(
                "INSERT INTO notification_targets
                 (name, target_type, target_type_v2, endpoint_url, auth_token, config_json, events, enabled)
                 VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
            )
            .bind(name)
            .bind(legacy_target_type)
            .bind(target_type)
            .bind(endpoint_url)
            .bind(auth_token)
            .bind(config_json)
            .bind(events)
            .bind(enabled)
            .execute(&self.pool)
            .await?;
            self.get_notification_target_by_id(result.last_insert_rowid())
                .await
        } else {
            let result = sqlx::query(
                "INSERT INTO notification_targets (name, target_type, config_json, events, enabled)
                 VALUES (?, ?, ?, ?, ?)",
            )
            .bind(name)
            .bind(target_type)
            .bind(config_json)
            .bind(events)
            .bind(enabled)
            .execute(&self.pool)
            .await?;
            self.get_notification_target_by_id(result.last_insert_rowid())
                .await
        }
    }

    pub async fn delete_notification_target(&self, id: i64) -> Result<()> {
        let res = sqlx::query("DELETE FROM notification_targets WHERE id = ?")
            .bind(id)
            .execute(&self.pool)
            .await?;
        if res.rows_affected() == 0 {
            return Err(crate::error::AlchemistError::Database(
                sqlx::Error::RowNotFound,
            ));
        }
        Ok(())
    }

    pub async fn replace_notification_targets(
        &self,
        targets: &[crate::config::NotificationTargetConfig],
    ) -> Result<()> {
        let flags = &self.notification_target_flags;
        let mut tx = self.pool.begin().await?;
        sqlx::query("DELETE FROM notification_targets")
            .execute(&mut *tx)
            .await?;
        for target in targets {
            let config_json = target.config_json.to_string();
            let events = serde_json::to_string(&target.events).unwrap_or_else(|_| "[]".to_string());
            if flags.has_target_type_v2 {
                let (legacy_target_type, endpoint_url, auth_token) =
                    notification_legacy_columns(&target.target_type, &config_json);
                sqlx::query(
                    "INSERT INTO notification_targets
                     (name, target_type, target_type_v2, endpoint_url, auth_token, config_json, events, enabled)
                     VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
                )
                .bind(&target.name)
                .bind(legacy_target_type)
                .bind(&target.target_type)
                .bind(endpoint_url)
                .bind(auth_token)
                .bind(&config_json)
                .bind(&events)
                .bind(target.enabled)
                .execute(&mut *tx)
                .await?;
            } else {
                sqlx::query(
                    "INSERT INTO notification_targets (name, target_type, config_json, events, enabled) VALUES (?, ?, ?, ?, ?)",
                )
                .bind(&target.name)
                .bind(&target.target_type)
                .bind(&config_json)
                .bind(&events)
                .bind(target.enabled)
                .execute(&mut *tx)
                .await?;
            }
        }
        tx.commit().await?;
        Ok(())
    }

    async fn get_notification_target_by_id(&self, id: i64) -> Result<NotificationTarget> {
        let flags = &self.notification_target_flags;
        let row = if flags.has_target_type_v2 {
            sqlx::query_as::<_, NotificationTarget>(
                "SELECT
                    id,
                    name,
                    COALESCE(
                        NULLIF(target_type_v2, ''),
                        CASE target_type
                            WHEN 'discord' THEN 'discord_webhook'
                            WHEN 'gotify' THEN 'gotify'
                            ELSE 'webhook'
                        END
                    ) AS target_type,
                    CASE
                        WHEN trim(config_json) != '' THEN config_json
                        WHEN target_type = 'discord' THEN json_object('webhook_url', endpoint_url)
                        WHEN target_type = 'gotify' THEN json_object('server_url', endpoint_url, 'app_token', COALESCE(auth_token, ''))
                        ELSE json_object('url', endpoint_url, 'auth_token', auth_token)
                    END AS config_json,
                    events,
                    enabled,
                    created_at
                 FROM notification_targets
                 WHERE id = ?",
            )
            .bind(id)
            .fetch_one(&self.pool)
            .await?
        } else {
            sqlx::query_as::<_, NotificationTarget>(
                "SELECT id, name, target_type, config_json, events, enabled, created_at
                 FROM notification_targets
                 WHERE id = ?",
            )
            .bind(id)
            .fetch_one(&self.pool)
            .await?
        };
        Ok(row)
    }

    pub async fn get_schedule_windows(&self) -> Result<Vec<ScheduleWindow>> {
        let windows =
            sqlx::query_as::<_, ScheduleWindow>("SELECT * FROM schedule_windows ORDER BY id ASC")
                .fetch_all(&self.pool)
                .await?;
        Ok(windows)
    }

    pub async fn add_schedule_window(
        &self,
        start_time: &str,
        end_time: &str,
        days_of_week: &str,
        enabled: bool,
    ) -> Result<ScheduleWindow> {
        let row = sqlx::query_as::<_, ScheduleWindow>(
            "INSERT INTO schedule_windows (start_time, end_time, days_of_week, enabled)
             VALUES (?, ?, ?, ?)
             RETURNING *",
        )
        .bind(start_time)
        .bind(end_time)
        .bind(days_of_week)
        .bind(enabled)
        .fetch_one(&self.pool)
        .await?;
        Ok(row)
    }

    pub async fn delete_schedule_window(&self, id: i64) -> Result<()> {
        let res = sqlx::query("DELETE FROM schedule_windows WHERE id = ?")
            .bind(id)
            .execute(&self.pool)
            .await?;

        if res.rows_affected() == 0 {
            return Err(crate::error::AlchemistError::Database(
                sqlx::Error::RowNotFound,
            ));
        }
        Ok(())
    }

    pub async fn replace_schedule_windows(
        &self,
        windows: &[crate::config::ScheduleWindowConfig],
    ) -> Result<()> {
        let mut tx = self.pool.begin().await?;
        sqlx::query("DELETE FROM schedule_windows")
            .execute(&mut *tx)
            .await?;
        for window in windows {
            sqlx::query(
                "INSERT INTO schedule_windows (start_time, end_time, days_of_week, enabled) VALUES (?, ?, ?, ?)",
            )
            .bind(&window.start_time)
            .bind(&window.end_time)
            .bind(serde_json::to_string(&window.days_of_week).unwrap_or_else(|_| "[]".to_string()))
            .bind(window.enabled)
            .execute(&mut *tx)
            .await?;
        }
        tx.commit().await?;
        Ok(())
    }

    pub async fn get_file_settings(&self) -> Result<FileSettings> {
        // Migration ensures row 1 exists, but we handle missing just in case
        let row = sqlx::query_as::<_, FileSettings>("SELECT * FROM file_settings WHERE id = 1")
            .fetch_optional(&self.pool)
            .await?;

        match row {
            Some(s) => Ok(s),
            None => {
                // If missing (shouldn't happen), return default
                Ok(FileSettings {
                    id: 1,
                    delete_source: false,
                    output_extension: "mkv".to_string(),
                    output_suffix: "-alchemist".to_string(),
                    replace_strategy: "keep".to_string(),
                    output_root: None,
                })
            }
        }
    }

    pub async fn update_file_settings(
        &self,
        delete_source: bool,
        output_extension: &str,
        output_suffix: &str,
        replace_strategy: &str,
        output_root: Option<&str>,
    ) -> Result<FileSettings> {
        let row = sqlx::query_as::<_, FileSettings>(
            "UPDATE file_settings
             SET delete_source = ?, output_extension = ?, output_suffix = ?, replace_strategy = ?, output_root = ?
             WHERE id = 1
             RETURNING *",
        )
        .bind(delete_source)
        .bind(output_extension)
        .bind(output_suffix)
        .bind(replace_strategy)
        .bind(output_root)
        .fetch_one(&self.pool)
        .await?;
        Ok(row)
    }

    pub async fn replace_file_settings_projection(
        &self,
        settings: &crate::config::FileSettingsConfig,
    ) -> Result<FileSettings> {
        self.update_file_settings(
            settings.delete_source,
            &settings.output_extension,
            &settings.output_suffix,
            &settings.replace_strategy,
            settings.output_root.as_deref(),
        )
        .await
    }

    /// Set UI preference
    pub async fn set_preference(&self, key: &str, value: &str) -> Result<()> {
        sqlx::query(
            "INSERT INTO ui_preferences (key, value, updated_at) VALUES (?, ?, CURRENT_TIMESTAMP)
             ON CONFLICT(key) DO UPDATE SET value = excluded.value, updated_at = CURRENT_TIMESTAMP",
        )
        .bind(key)
        .bind(value)
        .execute(&self.pool)
        .await?;
        Ok(())
    }

    /// Get UI preference
    pub async fn get_preference(&self, key: &str) -> Result<Option<String>> {
        let row: Option<(String,)> =
            sqlx::query_as("SELECT value FROM ui_preferences WHERE key = ?")
                .bind(key)
                .fetch_optional(&self.pool)
                .await?;
        Ok(row.map(|r| r.0))
    }

    pub async fn delete_preference(&self, key: &str) -> Result<()> {
        sqlx::query("DELETE FROM ui_preferences WHERE key = ?")
            .bind(key)
            .execute(&self.pool)
            .await?;
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    fn temp_db_path(prefix: &str) -> PathBuf {
        let mut path = std::env::temp_dir();
        path.push(format!("{prefix}_{}.db", rand::random::<u64>()));
        path
    }

    fn sample_profile(name: &str) -> NewLibraryProfile {
        NewLibraryProfile {
            name: name.to_string(),
            preset: "balanced".to_string(),
            codec: "av1".to_string(),
            quality_profile: "balanced".to_string(),
            hdr_mode: "preserve".to_string(),
            audio_mode: "copy".to_string(),
            crf_override: None,
            notes: None,
        }
    }

    #[tokio::test]
    async fn profile_lookup_treats_percent_and_underscore_as_literals() -> anyhow::Result<()> {
        let db_path = temp_db_path("alchemist_profile_lookup_literals");
        let db = Db::new(db_path.to_string_lossy().as_ref()).await?;

        let underscore_profile = db.create_profile(sample_profile("underscore")).await?;
        let percent_profile = db.create_profile(sample_profile("percent")).await?;

        let underscore_watch = db.add_watch_dir("/media/TV_4K", true).await?;
        db.assign_profile_to_watch_dir(underscore_watch.id, Some(underscore_profile))
            .await?;

        let percent_watch = db.add_watch_dir("/media/Movies%20", true).await?;
        db.assign_profile_to_watch_dir(percent_watch.id, Some(percent_profile))
            .await?;

        assert_eq!(
            db.get_profile_for_path("/media/TV_4K/show/file.mkv")
                .await?
                .map(|profile| profile.name),
            Some("underscore".to_string())
        );
        assert_eq!(
            db.get_profile_for_path("/media/TVA4K/show/file.mkv")
                .await?
                .map(|profile| profile.name),
            None
        );
        assert_eq!(
            db.get_profile_for_path("/media/Movies%20/title/file.mkv")
                .await?
                .map(|profile| profile.name),
            Some("percent".to_string())
        );
        assert_eq!(
            db.get_profile_for_path("/media/MoviesABCD/title/file.mkv")
                .await?
                .map(|profile| profile.name),
            None
        );

        db.pool.close().await;
        let _ = std::fs::remove_file(db_path);
        Ok(())
    }

    #[tokio::test]
    async fn profile_lookup_prefers_longest_literal_matching_watch_dir() -> anyhow::Result<()> {
        let db_path = temp_db_path("alchemist_profile_lookup_longest");
        let db = Db::new(db_path.to_string_lossy().as_ref()).await?;

        let base_profile = db.create_profile(sample_profile("base")).await?;
        let nested_profile = db.create_profile(sample_profile("nested")).await?;

        let base_watch = db.add_watch_dir("/media", true).await?;
        db.assign_profile_to_watch_dir(base_watch.id, Some(base_profile))
            .await?;

        let nested_watch = db.add_watch_dir("/media/TV_4K", true).await?;
        db.assign_profile_to_watch_dir(nested_watch.id, Some(nested_profile))
            .await?;

        assert_eq!(
            db.get_profile_for_path("/media/TV_4K/show/file.mkv")
                .await?
                .map(|profile| profile.name),
            Some("nested".to_string())
        );

        db.pool.close().await;
        let _ = std::fs::remove_file(db_path);
        Ok(())
    }
}
152 src/db/conversion.rs Normal file
@@ -0,0 +1,152 @@
use crate::error::Result;

use super::Db;
use super::types::*;

impl Db {
    pub async fn create_conversion_job(
        &self,
        upload_path: &str,
        mode: &str,
        settings_json: &str,
        probe_json: Option<&str>,
        expires_at: &str,
    ) -> Result<ConversionJob> {
        let row = sqlx::query_as::<_, ConversionJob>(
            "INSERT INTO conversion_jobs (upload_path, mode, settings_json, probe_json, expires_at)
             VALUES (?, ?, ?, ?, ?)
             RETURNING *",
        )
        .bind(upload_path)
        .bind(mode)
        .bind(settings_json)
        .bind(probe_json)
        .bind(expires_at)
        .fetch_one(&self.pool)
        .await?;
        Ok(row)
    }

    pub async fn get_conversion_job(&self, id: i64) -> Result<Option<ConversionJob>> {
        let row = sqlx::query_as::<_, ConversionJob>(
            "SELECT id, upload_path, output_path, mode, settings_json, probe_json, linked_job_id, status, expires_at, downloaded_at, created_at, updated_at
             FROM conversion_jobs
             WHERE id = ?",
        )
        .bind(id)
        .fetch_optional(&self.pool)
        .await?;
        Ok(row)
    }

    pub async fn get_conversion_job_by_linked_job_id(
        &self,
        linked_job_id: i64,
    ) -> Result<Option<ConversionJob>> {
        let row = sqlx::query_as::<_, ConversionJob>(
            "SELECT id, upload_path, output_path, mode, settings_json, probe_json, linked_job_id, status, expires_at, downloaded_at, created_at, updated_at
             FROM conversion_jobs
             WHERE linked_job_id = ?",
        )
        .bind(linked_job_id)
        .fetch_optional(&self.pool)
        .await?;
        Ok(row)
    }

    pub async fn update_conversion_job_probe(&self, id: i64, probe_json: &str) -> Result<()> {
        sqlx::query(
            "UPDATE conversion_jobs
             SET probe_json = ?, updated_at = datetime('now')
             WHERE id = ?",
        )
        .bind(probe_json)
        .bind(id)
        .execute(&self.pool)
        .await?;
        Ok(())
    }

    pub async fn update_conversion_job_settings(
        &self,
        id: i64,
        settings_json: &str,
        mode: &str,
    ) -> Result<()> {
        sqlx::query(
            "UPDATE conversion_jobs
             SET settings_json = ?, mode = ?, updated_at = datetime('now')
             WHERE id = ?",
        )
        .bind(settings_json)
        .bind(mode)
        .bind(id)
        .execute(&self.pool)
        .await?;
        Ok(())
    }

    pub async fn update_conversion_job_start(
        &self,
        id: i64,
        output_path: &str,
        linked_job_id: i64,
    ) -> Result<()> {
        sqlx::query(
            "UPDATE conversion_jobs
             SET output_path = ?, linked_job_id = ?, status = 'queued', updated_at = datetime('now')
             WHERE id = ?",
        )
        .bind(output_path)
        .bind(linked_job_id)
        .bind(id)
        .execute(&self.pool)
        .await?;
        Ok(())
    }

    pub async fn update_conversion_job_status(&self, id: i64, status: &str) -> Result<()> {
        sqlx::query(
            "UPDATE conversion_jobs
             SET status = ?, updated_at = datetime('now')
             WHERE id = ?",
        )
        .bind(status)
        .bind(id)
        .execute(&self.pool)
        .await?;
        Ok(())
    }

    pub async fn mark_conversion_job_downloaded(&self, id: i64) -> Result<()> {
        sqlx::query(
            "UPDATE conversion_jobs
             SET downloaded_at = datetime('now'), status = 'downloaded', updated_at = datetime('now')
             WHERE id = ?",
        )
        .bind(id)
        .execute(&self.pool)
        .await?;
        Ok(())
    }

    pub async fn delete_conversion_job(&self, id: i64) -> Result<()> {
        sqlx::query("DELETE FROM conversion_jobs WHERE id = ?")
            .bind(id)
            .execute(&self.pool)
            .await?;
        Ok(())
    }

    pub async fn get_expired_conversion_jobs(&self, now: &str) -> Result<Vec<ConversionJob>> {
        let rows = sqlx::query_as::<_, ConversionJob>(
            "SELECT id, upload_path, output_path, mode, settings_json, probe_json, linked_job_id, status, expires_at, downloaded_at, created_at, updated_at
             FROM conversion_jobs
             WHERE expires_at <= ?",
        )
        .bind(now)
        .fetch_all(&self.pool)
        .await?;
        Ok(rows)
    }
}
54 src/db/events.rs Normal file
@@ -0,0 +1,54 @@
use crate::explanations::Explanation;
use serde::{Deserialize, Serialize};

use super::types::JobState;

// Typed event channels for separating high-volume vs low-volume events
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type", content = "data")]
pub enum JobEvent {
    StateChanged {
        job_id: i64,
        status: JobState,
    },
    Progress {
        job_id: i64,
        percentage: f64,
        time: String,
    },
    Decision {
        job_id: i64,
        action: String,
        reason: String,
        explanation: Option<Explanation>,
    },
    Log {
        level: String,
        job_id: Option<i64>,
        message: String,
    },
}

#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type", content = "data")]
pub enum ConfigEvent {
    Updated(Box<crate::config::Config>),
    WatchFolderAdded(String),
    WatchFolderRemoved(String),
}

#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type", content = "data")]
pub enum SystemEvent {
    ScanStarted,
    ScanCompleted,
    EngineIdle,
    EngineStatusChanged,
    HardwareStateChanged,
}

pub struct EventChannels {
    pub jobs: tokio::sync::broadcast::Sender<JobEvent>, // 1000 capacity - high volume
    pub config: tokio::sync::broadcast::Sender<ConfigEvent>, // 50 capacity - rare
    pub system: tokio::sync::broadcast::Sender<SystemEvent>, // 100 capacity - medium
}
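`#[serde(tag = "type", content = "data")]` produces serde's adjacently tagged representation: the variant name under `"type"`, the payload under `"data"`. The sketch below builds that shape by hand (no serde dependency) for a `Progress`-like event; the `progress_json` helper and the exact field order are assumptions for illustration only.

```rust
// Hand-built sketch of the adjacently tagged JSON layout JobEvent::Progress
// would serialize to under #[serde(tag = "type", content = "data")].
fn progress_json(job_id: i64, percentage: f64, time: &str) -> String {
    format!(
        "{{\"type\":\"Progress\",\"data\":{{\"job_id\":{},\"percentage\":{},\"time\":\"{}\"}}}}",
        job_id, percentage, time
    )
}

fn main() {
    let j = progress_json(42, 12.5, "00:01:30");
    assert!(j.starts_with("{\"type\":\"Progress\""));
    assert!(j.contains("\"job_id\":42"));
    println!("ok");
}
```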
1236 src/db/jobs.rs Normal file
File diff suppressed because it is too large
173 src/db/mod.rs Normal file
@@ -0,0 +1,173 @@
mod config;
mod conversion;
mod events;
mod jobs;
mod stats;
mod system;
mod types;

pub use events::*;
pub use types::*;

use crate::error::{AlchemistError, Result};
use sha2::{Digest, Sha256};
use sqlx::SqlitePool;
use sqlx::sqlite::{SqliteConnectOptions, SqliteJournalMode};
use std::time::Duration;
use tokio::time::timeout;
use tracing::info;

/// Default timeout for potentially slow database queries
pub(crate) const QUERY_TIMEOUT: Duration = Duration::from_secs(5);

/// Execute a query with a timeout to prevent blocking the job loop
pub(crate) async fn timed_query<T, F, Fut>(operation: &str, f: F) -> Result<T>
where
    F: FnOnce() -> Fut,
    Fut: std::future::Future<Output = Result<T>>,
{
    match timeout(QUERY_TIMEOUT, f()).await {
        Ok(result) => result,
        Err(_) => Err(AlchemistError::QueryTimeout(
            QUERY_TIMEOUT.as_secs(),
            operation.to_string(),
        )),
    }
}

#[derive(Clone, Debug)]
pub(crate) struct WatchDirSchemaFlags {
    has_is_recursive: bool,
    has_recursive: bool,
    has_enabled: bool,
    has_profile_id: bool,
}

#[derive(Clone, Debug)]
pub(crate) struct NotificationTargetSchemaFlags {
    has_target_type_v2: bool,
}

#[derive(Clone, Debug)]
pub struct Db {
    pub(crate) pool: SqlitePool,
    pub(crate) watch_dir_flags: std::sync::Arc<WatchDirSchemaFlags>,
    pub(crate) notification_target_flags: std::sync::Arc<NotificationTargetSchemaFlags>,
}

impl Db {
    pub async fn new(db_path: &str) -> Result<Self> {
        let start = std::time::Instant::now();
        let options = SqliteConnectOptions::new()
            .filename(db_path)
            .create_if_missing(true)
            .foreign_keys(true)
            .journal_mode(SqliteJournalMode::Wal)
            .busy_timeout(Duration::from_secs(5));

        let pool = sqlx::sqlite::SqlitePoolOptions::new()
            .max_connections(1)
            .connect_with(options)
            .await?;
        info!(
            target: "startup",
            "Database connection opened in {} ms",
            start.elapsed().as_millis()
        );

        // Run migrations
        let migrate_start = std::time::Instant::now();
        sqlx::migrate!("./migrations")
            .run(&pool)
            .await
            .map_err(|e| crate::error::AlchemistError::Database(e.into()))?;
        info!(
            target: "startup",
            "Database migrations completed in {} ms",
            migrate_start.elapsed().as_millis()
        );

        // Cache watch_dirs schema flags once at startup to avoid repeated PRAGMA queries.
        let check = |column: &str| {
            let pool = pool.clone();
            let column = column.to_string();
            async move {
                let row =
                    sqlx::query("SELECT name FROM pragma_table_info('watch_dirs') WHERE name = ?")
                        .bind(&column)
                        .fetch_optional(&pool)
                        .await
                        .unwrap_or(None);
                row.is_some()
            }
        };
        let watch_dir_flags = WatchDirSchemaFlags {
            has_is_recursive: check("is_recursive").await,
            has_recursive: check("recursive").await,
            has_enabled: check("enabled").await,
            has_profile_id: check("profile_id").await,
        };

        let notification_check = |column: &str| {
            let pool = pool.clone();
            let column = column.to_string();
            async move {
                let row = sqlx::query(
                    "SELECT name FROM pragma_table_info('notification_targets') WHERE name = ?",
                )
                .bind(&column)
                .fetch_optional(&pool)
                .await
                .unwrap_or(None);
                row.is_some()
            }
        };
        let notification_target_flags = NotificationTargetSchemaFlags {
            has_target_type_v2: notification_check("target_type_v2").await,
        };

        Ok(Self {
            pool,
            watch_dir_flags: std::sync::Arc::new(watch_dir_flags),
            notification_target_flags: std::sync::Arc::new(notification_target_flags),
        })
    }
}

/// Hash a session token using SHA256 for secure storage.
///
/// # Security: Timing Attack Resistance
///
/// Session tokens are hashed before storage and lookup. Token validation uses
/// SQL `WHERE token = ?` with the hashed value, so the comparison occurs in
/// SQLite rather than in Rust code. This is inherently constant-time from the
/// application's perspective because:
/// 1. The database performs the comparison, not our code
/// 2. Database query time doesn't leak information about partial matches
/// 3. No early-exit comparison in application code
///
/// This design makes timing attacks infeasible without requiring the `subtle`
/// crate for constant-time comparison.
pub(crate) fn hash_session_token(token: &str) -> String {
    let mut hasher = Sha256::new();
    hasher.update(token.as_bytes());
    let digest = hasher.finalize();
    let mut out = String::with_capacity(64);
    for byte in digest {
        use std::fmt::Write;
        let _ = write!(&mut out, "{:02x}", byte);
    }
    out
}

pub fn hash_api_token(token: &str) -> String {
    let mut hasher = Sha256::new();
    hasher.update(token.as_bytes());
    let digest = hasher.finalize();
    let mut out = String::with_capacity(64);
    for byte in digest {
        use std::fmt::Write;
        let _ = write!(&mut out, "{:02x}", byte);
    }
    out
}
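Both hash helpers end with the same lowercase-hex encoding loop over the 32-byte digest. Isolated as a sketch below (the `to_hex` name is invented here); the SHA-256 digest itself comes from the `sha2` crate, so this example feeds in fixed bytes instead:

```rust
use std::fmt::Write;

// Sketch of the hex-encoding loop shared by hash_session_token / hash_api_token.
fn to_hex(bytes: &[u8]) -> String {
    // A 32-byte SHA-256 digest encodes to 64 hex characters.
    let mut out = String::with_capacity(bytes.len() * 2);
    for b in bytes {
        let _ = write!(&mut out, "{:02x}", b);
    }
    out
}

fn main() {
    assert_eq!(to_hex(&[0x00, 0xab, 0xff]), "00abff");
    assert_eq!(to_hex(&[]), "");
    println!("ok");
}
```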
422 src/db/stats.rs Normal file
@@ -0,0 +1,422 @@
use crate::error::Result;
use sqlx::Row;

use super::Db;
use super::timed_query;
use super::types::*;

impl Db {
    pub async fn get_stats(&self) -> Result<serde_json::Value> {
        let pool = &self.pool;
        timed_query("get_stats", || async {
            let stats = sqlx::query("SELECT status, count(*) as count FROM jobs GROUP BY status")
                .fetch_all(pool)
                .await?;

            let mut map = serde_json::Map::new();
            for row in stats {
                use sqlx::Row;
                let status: String = row.get("status");
                let count: i64 = row.get("count");
                map.insert(status, serde_json::Value::Number(count.into()));
            }

            Ok(serde_json::Value::Object(map))
        })
        .await
    }

    /// Save encode statistics
    pub async fn save_encode_stats(&self, stats: EncodeStatsInput) -> Result<()> {
        let result = sqlx::query(
            "INSERT INTO encode_stats
             (job_id, input_size_bytes, output_size_bytes, compression_ratio,
              encode_time_seconds, encode_speed, avg_bitrate_kbps, vmaf_score, output_codec)
             VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
             ON CONFLICT(job_id) DO UPDATE SET
                input_size_bytes = excluded.input_size_bytes,
                output_size_bytes = excluded.output_size_bytes,
                compression_ratio = excluded.compression_ratio,
                encode_time_seconds = excluded.encode_time_seconds,
                encode_speed = excluded.encode_speed,
                avg_bitrate_kbps = excluded.avg_bitrate_kbps,
                vmaf_score = excluded.vmaf_score,
                output_codec = excluded.output_codec",
        )
        .bind(stats.job_id)
        .bind(stats.input_size as i64)
        .bind(stats.output_size as i64)
        .bind(stats.compression_ratio)
        .bind(stats.encode_time)
        .bind(stats.encode_speed)
        .bind(stats.avg_bitrate)
        .bind(stats.vmaf_score)
        .bind(stats.output_codec)
        .execute(&self.pool)
        .await?;

        if result.rows_affected() == 0 {
            return Err(crate::error::AlchemistError::Database(
                sqlx::Error::RowNotFound,
            ));
        }

        Ok(())
    }

    /// Record a single encode attempt outcome
    pub async fn insert_encode_attempt(&self, input: EncodeAttemptInput) -> Result<()> {
        sqlx::query(
            "INSERT INTO encode_attempts
             (job_id, attempt_number, started_at, finished_at, outcome,
              failure_code, failure_summary, input_size_bytes, output_size_bytes,
              encode_time_seconds)
             VALUES (?, ?, ?, datetime('now'), ?, ?, ?, ?, ?, ?)",
        )
        .bind(input.job_id)
        .bind(input.attempt_number)
        .bind(input.started_at)
        .bind(input.outcome)
        .bind(input.failure_code)
        .bind(input.failure_summary)
        .bind(input.input_size_bytes)
        .bind(input.output_size_bytes)
        .bind(input.encode_time_seconds)
        .execute(&self.pool)
        .await?;
        Ok(())
    }

    /// Get all encode attempts for a job, ordered by attempt_number
    pub async fn get_encode_attempts_by_job(&self, job_id: i64) -> Result<Vec<EncodeAttempt>> {
        let attempts = sqlx::query_as::<_, EncodeAttempt>(
            "SELECT id, job_id, attempt_number, started_at, finished_at, outcome,
                    failure_code, failure_summary, input_size_bytes, output_size_bytes,
                    encode_time_seconds, created_at
             FROM encode_attempts
             WHERE job_id = ?
             ORDER BY attempt_number ASC",
        )
        .bind(job_id)
        .fetch_all(&self.pool)
        .await?;
        Ok(attempts)
    }

    pub async fn get_encode_stats_by_job_id(&self, job_id: i64) -> Result<DetailedEncodeStats> {
        let stats = sqlx::query_as::<_, DetailedEncodeStats>(
            "SELECT
                e.job_id,
                j.input_path,
                e.input_size_bytes,
                e.output_size_bytes,
                e.compression_ratio,
                e.encode_time_seconds,
                e.encode_speed,
                e.avg_bitrate_kbps,
                e.vmaf_score,
                e.created_at
             FROM encode_stats e
             JOIN jobs j ON e.job_id = j.id
             WHERE e.job_id = ?",
        )
        .bind(job_id)
        .fetch_one(&self.pool)
        .await?;
        Ok(stats)
    }

    pub async fn get_aggregated_stats(&self) -> Result<AggregatedStats> {
        let pool = &self.pool;
        timed_query("get_aggregated_stats", || async {
            let row = sqlx::query(
                "SELECT
                    (SELECT COUNT(*) FROM jobs WHERE archived = 0) as total_jobs,
                    (SELECT COUNT(*) FROM jobs WHERE status = 'completed' AND archived = 0) as completed_jobs,
                    COALESCE(SUM(input_size_bytes), 0) as total_input_size,
                    COALESCE(SUM(output_size_bytes), 0) as total_output_size,
                    AVG(vmaf_score) as avg_vmaf,
                    COALESCE(SUM(encode_time_seconds), 0.0) as total_encode_time
                 FROM encode_stats",
            )
            .fetch_one(pool)
            .await?;

            Ok(AggregatedStats {
                total_jobs: row.get("total_jobs"),
                completed_jobs: row.get("completed_jobs"),
                total_input_size: row.get("total_input_size"),
                total_output_size: row.get("total_output_size"),
                avg_vmaf: row.get("avg_vmaf"),
                total_encode_time_seconds: row.get("total_encode_time"),
            })
        })
        .await
    }

    /// Get daily statistics for the last N days (for time-series charts)
    pub async fn get_daily_stats(&self, days: i32) -> Result<Vec<DailyStats>> {
        let pool = &self.pool;
        let days_str = format!("-{}", days);
        timed_query("get_daily_stats", || async {
            let rows = sqlx::query(
                "SELECT
                    DATE(e.created_at) as date,
                    COUNT(*) as jobs_completed,
                    COALESCE(SUM(e.input_size_bytes - e.output_size_bytes), 0) as bytes_saved,
                    COALESCE(SUM(e.input_size_bytes), 0) as total_input_bytes,
                    COALESCE(SUM(e.output_size_bytes), 0) as total_output_bytes
                 FROM encode_stats e
                 WHERE e.created_at >= DATE('now', ? || ' days')
                 GROUP BY DATE(e.created_at)
                 ORDER BY date ASC",
            )
            .bind(&days_str)
            .fetch_all(pool)
            .await?;

            let stats = rows
                .iter()
                .map(|row| DailyStats {
                    date: row.get("date"),
                    jobs_completed: row.get("jobs_completed"),
                    bytes_saved: row.get("bytes_saved"),
                    total_input_bytes: row.get("total_input_bytes"),
                    total_output_bytes: row.get("total_output_bytes"),
                })
                .collect();

            Ok(stats)
        })
        .await
    }
|
||||
|
||||
/// Get detailed per-job encoding statistics (most recent first)
|
||||
pub async fn get_detailed_encode_stats(&self, limit: i32) -> Result<Vec<DetailedEncodeStats>> {
|
||||
let pool = &self.pool;
|
||||
timed_query("get_detailed_encode_stats", || async {
|
||||
let stats = sqlx::query_as::<_, DetailedEncodeStats>(
|
||||
"SELECT
|
||||
e.job_id,
|
||||
j.input_path,
|
||||
e.input_size_bytes,
|
||||
e.output_size_bytes,
|
||||
e.compression_ratio,
|
||||
e.encode_time_seconds,
|
||||
e.encode_speed,
|
||||
e.avg_bitrate_kbps,
|
||||
e.vmaf_score,
|
||||
e.created_at
|
||||
FROM encode_stats e
|
||||
JOIN jobs j ON e.job_id = j.id
|
||||
ORDER BY e.created_at DESC
|
||||
LIMIT ?",
|
||||
)
|
||||
.bind(limit)
|
||||
.fetch_all(pool)
|
||||
.await?;
|
||||
|
||||
Ok(stats)
|
||||
})
|
||||
.await
|
||||
}
|
||||
|
||||
pub async fn get_savings_summary(&self) -> Result<SavingsSummary> {
|
||||
let pool = &self.pool;
|
||||
timed_query("get_savings_summary", || async {
|
||||
let totals = sqlx::query(
|
||||
"SELECT
|
||||
COALESCE(SUM(input_size_bytes), 0) as total_input_bytes,
|
||||
COALESCE(SUM(output_size_bytes), 0) as total_output_bytes,
|
||||
COUNT(*) as job_count
|
||||
FROM encode_stats
|
||||
WHERE output_size_bytes IS NOT NULL",
|
||||
)
|
||||
.fetch_one(pool)
|
||||
.await?;
|
||||
|
||||
let total_input_bytes: i64 = totals.get("total_input_bytes");
|
||||
let total_output_bytes: i64 = totals.get("total_output_bytes");
|
||||
let job_count: i64 = totals.get("job_count");
|
||||
let total_bytes_saved = (total_input_bytes - total_output_bytes).max(0);
|
||||
let savings_percent = if total_input_bytes > 0 {
|
||||
(total_bytes_saved as f64 / total_input_bytes as f64) * 100.0
|
||||
} else {
|
||||
0.0
|
||||
};
|
||||
|
||||
let savings_by_codec = sqlx::query(
|
||||
"SELECT
|
||||
COALESCE(NULLIF(TRIM(e.output_codec), ''), 'unknown') as codec,
|
||||
COALESCE(SUM(e.input_size_bytes - e.output_size_bytes), 0) as bytes_saved,
|
||||
COUNT(*) as job_count
|
||||
FROM encode_stats e
|
||||
JOIN jobs j ON j.id = e.job_id
|
||||
WHERE e.output_size_bytes IS NOT NULL
|
||||
GROUP BY codec
|
||||
ORDER BY bytes_saved DESC, codec ASC",
|
||||
)
|
||||
.fetch_all(pool)
|
||||
.await?
|
||||
.into_iter()
|
||||
.map(|row| CodecSavings {
|
||||
codec: row.get("codec"),
|
||||
bytes_saved: row.get("bytes_saved"),
|
||||
job_count: row.get("job_count"),
|
||||
})
|
||||
.collect::<Vec<_>>();
|
||||
|
||||
let savings_over_time = sqlx::query(
|
||||
"SELECT
|
||||
DATE(e.created_at) as date,
|
||||
COALESCE(SUM(e.input_size_bytes - e.output_size_bytes), 0) as bytes_saved
|
||||
FROM encode_stats e
|
||||
WHERE e.output_size_bytes IS NOT NULL
|
||||
AND e.created_at >= datetime('now', '-30 days')
|
||||
GROUP BY DATE(e.created_at)
|
||||
ORDER BY date ASC",
|
||||
)
|
||||
.fetch_all(pool)
|
||||
.await?
|
||||
.into_iter()
|
||||
.map(|row| DailySavings {
|
||||
date: row.get("date"),
|
||||
bytes_saved: row.get("bytes_saved"),
|
||||
})
|
||||
.collect::<Vec<_>>();
|
||||
|
||||
Ok(SavingsSummary {
|
||||
total_input_bytes,
|
||||
total_output_bytes,
|
||||
total_bytes_saved,
|
||||
savings_percent,
|
||||
job_count,
|
||||
savings_by_codec,
|
||||
savings_over_time,
|
||||
})
|
||||
})
|
||||
.await
|
||||
}
|
||||
|
||||
pub async fn get_job_stats(&self) -> Result<JobStats> {
|
||||
let pool = &self.pool;
|
||||
timed_query("get_job_stats", || async {
|
||||
let rows = sqlx::query(
|
||||
"SELECT status, COUNT(*) as count FROM jobs WHERE archived = 0 GROUP BY status",
|
||||
)
|
||||
.fetch_all(pool)
|
||||
.await?;
|
||||
|
||||
let mut stats = JobStats::default();
|
||||
for row in rows {
|
||||
let status_str: String = row.get("status");
|
||||
let count: i64 = row.get("count");
|
||||
|
||||
match status_str.as_str() {
|
||||
"queued" => stats.queued += count,
|
||||
"encoding" | "analyzing" | "remuxing" | "resuming" => stats.active += count,
|
||||
"completed" => stats.completed += count,
|
||||
"failed" | "cancelled" => stats.failed += count,
|
||||
_ => {}
|
||||
}
|
||||
}
|
||||
Ok(stats)
|
||||
})
|
||||
.await
|
||||
}
|
||||
|
||||
pub async fn get_daily_summary_stats(&self) -> Result<DailySummaryStats> {
|
||||
let pool = &self.pool;
|
||||
timed_query("get_daily_summary_stats", || async {
|
||||
let row = sqlx::query(
|
||||
"SELECT
|
||||
COALESCE(SUM(CASE WHEN status = 'completed' AND DATE(updated_at, 'localtime') = DATE('now', 'localtime') THEN 1 ELSE 0 END), 0) AS completed,
|
||||
COALESCE(SUM(CASE WHEN status = 'failed' AND DATE(updated_at, 'localtime') = DATE('now', 'localtime') THEN 1 ELSE 0 END), 0) AS failed,
|
||||
COALESCE(SUM(CASE WHEN status = 'skipped' AND DATE(updated_at, 'localtime') = DATE('now', 'localtime') THEN 1 ELSE 0 END), 0) AS skipped
|
||||
FROM jobs",
|
||||
)
|
||||
.fetch_one(pool)
|
||||
.await?;
|
||||
|
||||
let completed: i64 = row.get("completed");
|
||||
let failed: i64 = row.get("failed");
|
||||
let skipped: i64 = row.get("skipped");
|
||||
|
||||
let bytes_row = sqlx::query(
|
||||
"SELECT COALESCE(SUM(input_size_bytes - output_size_bytes), 0) AS bytes_saved
|
||||
FROM encode_stats
|
||||
WHERE DATE(created_at, 'localtime') = DATE('now', 'localtime')",
|
||||
)
|
||||
.fetch_one(pool)
|
||||
.await?;
|
||||
let bytes_saved: i64 = bytes_row.get("bytes_saved");
|
||||
|
||||
let failure_rows = sqlx::query(
|
||||
"SELECT code, COUNT(*) AS count
|
||||
FROM job_failure_explanations
|
||||
WHERE DATE(updated_at, 'localtime') = DATE('now', 'localtime')
|
||||
GROUP BY code
|
||||
ORDER BY count DESC, code ASC
|
||||
LIMIT 3",
|
||||
)
|
||||
.fetch_all(pool)
|
||||
.await?;
|
||||
let top_failure_reasons = failure_rows
|
||||
.into_iter()
|
||||
.map(|row| row.get::<String, _>("code"))
|
||||
.collect::<Vec<_>>();
|
||||
|
||||
let skip_rows = sqlx::query(
|
||||
"SELECT COALESCE(reason_code, action) AS code, COUNT(*) AS count
|
||||
FROM decisions
|
||||
WHERE action = 'skip'
|
||||
AND DATE(created_at, 'localtime') = DATE('now', 'localtime')
|
||||
GROUP BY COALESCE(reason_code, action)
|
||||
ORDER BY count DESC, code ASC
|
||||
LIMIT 3",
|
||||
)
|
||||
.fetch_all(pool)
|
||||
.await?;
|
||||
let top_skip_reasons = skip_rows
|
||||
.into_iter()
|
||||
.map(|row| row.get::<String, _>("code"))
|
||||
.collect::<Vec<_>>();
|
||||
|
||||
Ok(DailySummaryStats {
|
||||
completed,
|
||||
failed,
|
||||
skipped,
|
||||
bytes_saved,
|
||||
top_failure_reasons,
|
||||
top_skip_reasons,
|
||||
})
|
||||
})
|
||||
.await
|
||||
}
|
||||
|
||||
pub async fn get_skip_reason_counts(&self) -> Result<Vec<(String, i64)>> {
|
||||
let pool = &self.pool;
|
||||
timed_query("get_skip_reason_counts", || async {
|
||||
let rows = sqlx::query(
|
||||
"SELECT COALESCE(reason_code, action) AS code, COUNT(*) AS count
|
||||
FROM decisions
|
||||
WHERE action = 'skip'
|
||||
AND DATE(created_at, 'localtime') = DATE('now', 'localtime')
|
||||
GROUP BY COALESCE(reason_code, action)
|
||||
ORDER BY count DESC, code ASC
|
||||
LIMIT 20",
|
||||
)
|
||||
.fetch_all(pool)
|
||||
.await?;
|
||||
Ok(rows
|
||||
.into_iter()
|
||||
.map(|row| {
|
||||
let code: String = row.get("code");
|
||||
let count: i64 = row.get("count");
|
||||
(code, count)
|
||||
})
|
||||
.collect())
|
||||
})
|
||||
.await
|
||||
}
|
||||
}
|
||||
389	src/db/system.rs	Normal file
@@ -0,0 +1,389 @@
use crate::error::Result;
use chrono::{DateTime, Utc};
use sqlx::Row;

use super::timed_query;
use super::types::*;
use super::{Db, hash_api_token, hash_session_token};

impl Db {
    pub async fn clear_completed_jobs(&self) -> Result<u64> {
        let result = sqlx::query(
            "UPDATE jobs
             SET archived = 1, updated_at = CURRENT_TIMESTAMP
             WHERE status = 'completed' AND archived = 0",
        )
        .execute(&self.pool)
        .await?;
        Ok(result.rows_affected())
    }

    pub async fn cleanup_sessions(&self) -> Result<()> {
        sqlx::query("DELETE FROM sessions WHERE expires_at <= CURRENT_TIMESTAMP")
            .execute(&self.pool)
            .await?;
        Ok(())
    }

    pub async fn cleanup_expired_sessions(&self) -> Result<u64> {
        let result = sqlx::query("DELETE FROM sessions WHERE expires_at <= CURRENT_TIMESTAMP")
            .execute(&self.pool)
            .await?;
        Ok(result.rows_affected())
    }

    pub async fn add_log(&self, level: &str, job_id: Option<i64>, message: &str) -> Result<()> {
        sqlx::query("INSERT INTO logs (level, job_id, message) VALUES (?, ?, ?)")
            .bind(level)
            .bind(job_id)
            .bind(message)
            .execute(&self.pool)
            .await?;
        Ok(())
    }

    pub async fn get_logs(&self, limit: i64, offset: i64) -> Result<Vec<LogEntry>> {
        let logs = sqlx::query_as::<_, LogEntry>(
            "SELECT id, level, job_id, message, created_at FROM logs ORDER BY created_at DESC LIMIT ? OFFSET ?"
        )
        .bind(limit)
        .bind(offset)
        .fetch_all(&self.pool)
        .await?;
        Ok(logs)
    }

    pub async fn get_logs_for_job(&self, job_id: i64, limit: i64) -> Result<Vec<LogEntry>> {
        sqlx::query_as::<_, LogEntry>(
            "SELECT id, level, job_id, message, created_at
             FROM logs
             WHERE job_id = ?
             ORDER BY created_at ASC
             LIMIT ?",
        )
        .bind(job_id)
        .bind(limit)
        .fetch_all(&self.pool)
        .await
        .map_err(Into::into)
    }

    pub async fn clear_logs(&self) -> Result<()> {
        sqlx::query("DELETE FROM logs").execute(&self.pool).await?;
        Ok(())
    }

    pub async fn prune_old_logs(&self, max_age_days: u32) -> Result<u64> {
        let result = sqlx::query(
            "DELETE FROM logs
             WHERE created_at < datetime('now', '-' || ? || ' days')",
        )
        .bind(max_age_days as i64)
        .execute(&self.pool)
        .await?;
        Ok(result.rows_affected())
    }

    pub async fn create_user(&self, username: &str, password_hash: &str) -> Result<i64> {
        let id = sqlx::query("INSERT INTO users (username, password_hash) VALUES (?, ?)")
            .bind(username)
            .bind(password_hash)
            .execute(&self.pool)
            .await?
            .last_insert_rowid();
        Ok(id)
    }

    pub async fn get_user_by_username(&self, username: &str) -> Result<Option<User>> {
        let user = sqlx::query_as::<_, User>("SELECT * FROM users WHERE username = ?")
            .bind(username)
            .fetch_optional(&self.pool)
            .await?;
        Ok(user)
    }

    pub async fn has_users(&self) -> Result<bool> {
        let count: (i64,) = sqlx::query_as("SELECT COUNT(*) FROM users")
            .fetch_one(&self.pool)
            .await?;
        Ok(count.0 > 0)
    }

    pub async fn create_session(
        &self,
        user_id: i64,
        token: &str,
        expires_at: DateTime<Utc>,
    ) -> Result<()> {
        let token_hash = hash_session_token(token);
        sqlx::query("INSERT INTO sessions (token, user_id, expires_at) VALUES (?, ?, ?)")
            .bind(token_hash)
            .bind(user_id)
            .bind(expires_at)
            .execute(&self.pool)
            .await?;
        Ok(())
    }

    pub async fn get_session(&self, token: &str) -> Result<Option<Session>> {
        let token_hash = hash_session_token(token);
        let session = sqlx::query_as::<_, Session>(
            "SELECT * FROM sessions WHERE token = ? AND expires_at > CURRENT_TIMESTAMP",
        )
        .bind(&token_hash)
        .fetch_optional(&self.pool)
        .await?;
        Ok(session)
    }

    pub async fn delete_session(&self, token: &str) -> Result<()> {
        let token_hash = hash_session_token(token);
        sqlx::query("DELETE FROM sessions WHERE token = ?")
            .bind(&token_hash)
            .execute(&self.pool)
            .await?;
        Ok(())
    }

    pub async fn list_api_tokens(&self) -> Result<Vec<ApiToken>> {
        let tokens = sqlx::query_as::<_, ApiToken>(
            "SELECT id, name, access_level, created_at, last_used_at, revoked_at
             FROM api_tokens
             ORDER BY created_at DESC",
        )
        .fetch_all(&self.pool)
        .await?;
        Ok(tokens)
    }

    pub async fn create_api_token(
        &self,
        name: &str,
        token: &str,
        access_level: ApiTokenAccessLevel,
    ) -> Result<ApiToken> {
        let token_hash = hash_api_token(token);
        let row = sqlx::query_as::<_, ApiToken>(
            "INSERT INTO api_tokens (name, token_hash, access_level)
             VALUES (?, ?, ?)
             RETURNING id, name, access_level, created_at, last_used_at, revoked_at",
        )
        .bind(name)
        .bind(token_hash)
        .bind(access_level)
        .fetch_one(&self.pool)
        .await?;
        Ok(row)
    }

    pub async fn get_active_api_token(&self, token: &str) -> Result<Option<ApiTokenRecord>> {
        let token_hash = hash_api_token(token);
        let row = sqlx::query_as::<_, ApiTokenRecord>(
            "SELECT id, name, token_hash, access_level, created_at, last_used_at, revoked_at
             FROM api_tokens
             WHERE token_hash = ? AND revoked_at IS NULL",
        )
        .bind(token_hash)
        .fetch_optional(&self.pool)
        .await?;
        Ok(row)
    }

    pub async fn update_api_token_last_used(&self, id: i64) -> Result<()> {
        sqlx::query("UPDATE api_tokens SET last_used_at = CURRENT_TIMESTAMP WHERE id = ?")
            .bind(id)
            .execute(&self.pool)
            .await?;
        Ok(())
    }

    pub async fn revoke_api_token(&self, id: i64) -> Result<()> {
        let result = sqlx::query(
            "UPDATE api_tokens
             SET revoked_at = COALESCE(revoked_at, CURRENT_TIMESTAMP)
             WHERE id = ?",
        )
        .bind(id)
        .execute(&self.pool)
        .await?;
        if result.rows_affected() == 0 {
            return Err(crate::error::AlchemistError::Database(
                sqlx::Error::RowNotFound,
            ));
        }
        Ok(())
    }

    pub async fn record_health_check(
        &self,
        job_id: i64,
        issues: Option<&crate::media::health::HealthIssueReport>,
    ) -> Result<()> {
        let serialized_issues = issues
            .map(serde_json::to_string)
            .transpose()
            .map_err(|err| {
                crate::error::AlchemistError::Unknown(format!(
                    "Failed to serialize health issue report: {}",
                    err
                ))
            })?;

        sqlx::query(
            "UPDATE jobs
             SET health_issues = ?,
                 last_health_check = datetime('now')
             WHERE id = ?",
        )
        .bind(serialized_issues)
        .bind(job_id)
        .execute(&self.pool)
        .await?;
        Ok(())
    }

    pub async fn get_health_summary(&self) -> Result<HealthSummary> {
        let pool = &self.pool;
        timed_query("get_health_summary", || async {
            let row = sqlx::query(
                "SELECT
                    (SELECT COUNT(*) FROM jobs WHERE last_health_check IS NOT NULL AND archived = 0) as total_checked,
                    (SELECT COUNT(*)
                     FROM jobs
                     WHERE health_issues IS NOT NULL AND TRIM(health_issues) != '' AND archived = 0) as issues_found,
                    (SELECT MAX(started_at) FROM health_scan_runs) as last_run",
            )
            .fetch_one(pool)
            .await?;

            Ok(HealthSummary {
                total_checked: row.get("total_checked"),
                issues_found: row.get("issues_found"),
                last_run: row.get("last_run"),
            })
        })
        .await
    }

    pub async fn create_health_scan_run(&self) -> Result<i64> {
        let id = sqlx::query("INSERT INTO health_scan_runs DEFAULT VALUES")
            .execute(&self.pool)
            .await?
            .last_insert_rowid();
        Ok(id)
    }

    pub async fn complete_health_scan_run(
        &self,
        id: i64,
        files_checked: i64,
        issues_found: i64,
    ) -> Result<()> {
        sqlx::query(
            "UPDATE health_scan_runs
             SET completed_at = datetime('now'),
                 files_checked = ?,
                 issues_found = ?
             WHERE id = ?",
        )
        .bind(files_checked)
        .bind(issues_found)
        .bind(id)
        .execute(&self.pool)
        .await?;
        Ok(())
    }

    pub async fn get_jobs_with_health_issues(&self) -> Result<Vec<JobWithHealthIssueRow>> {
        let pool = &self.pool;
        timed_query("get_jobs_with_health_issues", || async {
            let jobs = sqlx::query_as::<_, JobWithHealthIssueRow>(
                "SELECT j.id, j.input_path, j.output_path, j.status,
                    (SELECT reason FROM decisions WHERE job_id = j.id ORDER BY created_at DESC LIMIT 1) as decision_reason,
                    COALESCE(j.priority, 0) as priority,
                    COALESCE(CAST(j.progress AS REAL), 0.0) as progress,
                    COALESCE(j.attempt_count, 0) as attempt_count,
                    (SELECT vmaf_score FROM encode_stats WHERE job_id = j.id) as vmaf_score,
                    j.created_at, j.updated_at,
                    j.input_metadata_json,
                    j.health_issues
                 FROM jobs j
                 WHERE j.archived = 0
                   AND j.health_issues IS NOT NULL
                   AND TRIM(j.health_issues) != ''
                 ORDER BY j.updated_at DESC",
            )
            .fetch_all(pool)
            .await?;
            Ok(jobs)
        })
        .await
    }

    pub async fn reset_auth(&self) -> Result<()> {
        sqlx::query("DELETE FROM sessions")
            .execute(&self.pool)
            .await?;
        sqlx::query("DELETE FROM users").execute(&self.pool).await?;
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use std::path::Path;
    use std::time::SystemTime;

    #[tokio::test]
    async fn clear_completed_archives_jobs_but_preserves_encode_stats()
    -> std::result::Result<(), Box<dyn std::error::Error>> {
        let mut db_path = std::env::temp_dir();
        let token: u64 = rand::random();
        db_path.push(format!("alchemist_archive_completed_{}.db", token));

        let db = Db::new(db_path.to_string_lossy().as_ref()).await?;
        let input = Path::new("movie.mkv");
        let output = Path::new("movie-alchemist.mkv");
        let _ = db
            .enqueue_job(input, output, SystemTime::UNIX_EPOCH)
            .await?;

        let job = db
            .get_job_by_input_path("movie.mkv")
            .await?
            .ok_or_else(|| std::io::Error::other("missing job"))?;
        db.update_job_status(job.id, JobState::Completed).await?;
        db.save_encode_stats(EncodeStatsInput {
            job_id: job.id,
            input_size: 2_000,
            output_size: 1_000,
            compression_ratio: 0.5,
            encode_time: 42.0,
            encode_speed: 1.2,
            avg_bitrate: 800.0,
            vmaf_score: Some(96.5),
            output_codec: Some("av1".to_string()),
        })
        .await?;

        let cleared = db.clear_completed_jobs().await?;
        assert_eq!(cleared, 1);
        assert!(db.get_job_by_id(job.id).await?.is_none());
        assert!(db.get_job_by_input_path("movie.mkv").await?.is_none());

        let visible_completed = db.get_jobs_by_status(JobState::Completed).await?;
        assert!(visible_completed.is_empty());

        let aggregated = db.get_aggregated_stats().await?;
        // Archived jobs are excluded from active stats.
        assert_eq!(aggregated.completed_jobs, 0);
        // encode_stats rows are preserved even after archiving.
        assert_eq!(aggregated.total_input_size, 2_000);
        assert_eq!(aggregated.total_output_size, 1_000);

        drop(db);
        let _ = std::fs::remove_file(db_path);
        Ok(())
    }
}
692	src/db/types.rs	Normal file
@@ -0,0 +1,692 @@
use chrono::{DateTime, Utc};
|
||||
use serde::{Deserialize, Serialize};
|
||||
use std::path::{Path, PathBuf};
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize, Clone, Copy, PartialEq, Eq, sqlx::Type)]
|
||||
#[sqlx(rename_all = "lowercase")]
|
||||
#[serde(rename_all = "lowercase")]
|
||||
pub enum JobState {
|
||||
Queued,
|
||||
Analyzing,
|
||||
Encoding,
|
||||
Remuxing,
|
||||
Completed,
|
||||
Skipped,
|
||||
Failed,
|
||||
Cancelled,
|
||||
Resuming,
|
||||
}
|
||||
|
||||
impl std::fmt::Display for JobState {
|
||||
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
|
||||
let s = match self {
|
||||
JobState::Queued => "queued",
|
||||
JobState::Analyzing => "analyzing",
|
||||
JobState::Encoding => "encoding",
|
||||
JobState::Remuxing => "remuxing",
|
||||
JobState::Completed => "completed",
|
||||
JobState::Skipped => "skipped",
|
||||
JobState::Failed => "failed",
|
||||
JobState::Cancelled => "cancelled",
|
||||
JobState::Resuming => "resuming",
|
||||
};
|
||||
write!(f, "{}", s)
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize, Default, Clone)]
|
||||
#[serde(default)]
|
||||
pub struct JobStats {
|
||||
pub active: i64,
|
||||
pub queued: i64,
|
||||
pub completed: i64,
|
||||
pub failed: i64,
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize, Default, Clone)]
|
||||
#[serde(default)]
|
||||
pub struct DailySummaryStats {
|
||||
pub completed: i64,
|
||||
pub failed: i64,
|
||||
pub skipped: i64,
|
||||
pub bytes_saved: i64,
|
||||
pub top_failure_reasons: Vec<String>,
|
||||
pub top_skip_reasons: Vec<String>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
|
||||
pub struct LogEntry {
|
||||
pub id: i64,
|
||||
pub level: String,
|
||||
pub job_id: Option<i64>,
|
||||
pub message: String,
|
||||
pub created_at: String, // SQLite datetime as string
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
|
||||
pub struct Job {
|
||||
pub id: i64,
|
||||
pub input_path: String,
|
||||
pub output_path: String,
|
||||
pub status: JobState,
|
||||
pub decision_reason: Option<String>,
|
||||
pub priority: i32,
|
||||
pub progress: f64,
|
||||
pub attempt_count: i32,
|
||||
pub vmaf_score: Option<f64>,
|
||||
pub created_at: DateTime<Utc>,
|
||||
pub updated_at: DateTime<Utc>,
|
||||
pub input_metadata_json: Option<String>,
|
||||
}
|
||||
|
||||
impl Job {
|
||||
pub fn input_metadata(&self) -> Option<crate::media::pipeline::MediaMetadata> {
|
||||
self.input_metadata_json
|
||||
.as_ref()
|
||||
.and_then(|json| serde_json::from_str(json).ok())
|
||||
}
|
||||
|
||||
pub fn is_active(&self) -> bool {
|
||||
matches!(
|
||||
self.status,
|
||||
JobState::Encoding | JobState::Analyzing | JobState::Remuxing | JobState::Resuming
|
||||
)
|
||||
}
|
||||
|
||||
pub fn can_retry(&self) -> bool {
|
||||
matches!(self.status, JobState::Failed | JobState::Cancelled)
|
||||
}
|
||||
|
||||
pub fn status_class(&self) -> &'static str {
|
||||
match self.status {
|
||||
JobState::Completed => "badge-green",
|
||||
JobState::Encoding | JobState::Remuxing | JobState::Resuming => "badge-yellow",
|
||||
JobState::Analyzing => "badge-blue",
|
||||
JobState::Failed | JobState::Cancelled => "badge-red",
|
||||
_ => "badge-gray",
|
||||
}
|
||||
}
|
||||
|
||||
pub fn progress_fixed(&self) -> String {
|
||||
format!("{:.1}", self.progress)
|
||||
}
|
||||
|
||||
pub fn vmaf_fixed(&self) -> String {
|
||||
self.vmaf_score
|
||||
.map(|s| format!("{:.1}", s))
|
||||
.unwrap_or_else(|| "N/A".to_string())
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
|
||||
pub struct JobWithHealthIssueRow {
|
||||
pub id: i64,
|
||||
pub input_path: String,
|
||||
pub output_path: String,
|
||||
pub status: JobState,
|
||||
pub decision_reason: Option<String>,
|
||||
pub priority: i32,
|
||||
pub progress: f64,
|
||||
pub attempt_count: i32,
|
||||
pub vmaf_score: Option<f64>,
|
||||
pub created_at: DateTime<Utc>,
|
||||
pub updated_at: DateTime<Utc>,
|
||||
pub input_metadata_json: Option<String>,
|
||||
pub health_issues: String,
|
||||
}
|
||||
|
||||
impl JobWithHealthIssueRow {
|
||||
pub fn into_parts(self) -> (Job, String) {
|
||||
(
|
||||
Job {
|
||||
id: self.id,
|
||||
input_path: self.input_path,
|
||||
output_path: self.output_path,
|
||||
status: self.status,
|
||||
decision_reason: self.decision_reason,
|
||||
priority: self.priority,
|
||||
progress: self.progress,
|
||||
attempt_count: self.attempt_count,
|
||||
vmaf_score: self.vmaf_score,
|
||||
created_at: self.created_at,
|
||||
updated_at: self.updated_at,
|
||||
input_metadata_json: self.input_metadata_json,
|
||||
},
|
||||
self.health_issues,
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Serialize, sqlx::FromRow)]
|
||||
pub struct DuplicateCandidate {
|
||||
pub id: i64,
|
||||
pub input_path: String,
|
||||
pub status: String,
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
|
||||
pub struct WatchDir {
|
||||
pub id: i64,
|
||||
pub path: String,
|
||||
pub is_recursive: bool,
|
||||
pub profile_id: Option<i64>,
|
||||
pub created_at: DateTime<Utc>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
|
||||
pub struct LibraryProfile {
|
||||
pub id: i64,
|
||||
pub name: String,
|
||||
pub preset: String,
|
||||
pub codec: String,
|
||||
pub quality_profile: String,
|
||||
pub hdr_mode: String,
|
||||
pub audio_mode: String,
|
||||
pub crf_override: Option<i32>,
|
||||
pub notes: Option<String>,
|
||||
pub created_at: DateTime<Utc>,
|
||||
pub updated_at: DateTime<Utc>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize, Clone)]
|
||||
pub struct NewLibraryProfile {
|
||||
pub name: String,
|
||||
pub preset: String,
|
||||
pub codec: String,
|
||||
pub quality_profile: String,
|
||||
pub hdr_mode: String,
|
||||
pub audio_mode: String,
|
||||
pub crf_override: Option<i32>,
|
||||
pub notes: Option<String>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Default)]
|
||||
pub struct JobFilterQuery {
|
||||
pub limit: i64,
|
||||
pub offset: i64,
|
||||
pub statuses: Option<Vec<JobState>>,
|
||||
pub search: Option<String>,
|
||||
pub sort_by: Option<String>,
|
||||
pub sort_desc: bool,
|
||||
pub archived: Option<bool>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
|
||||
pub struct NotificationTarget {
|
||||
pub id: i64,
|
||||
pub name: String,
|
||||
pub target_type: String,
|
||||
pub config_json: String,
|
||||
pub events: String,
|
||||
pub enabled: bool,
|
||||
pub created_at: DateTime<Utc>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
|
||||
pub struct ConversionJob {
|
||||
pub id: i64,
|
||||
pub upload_path: String,
|
||||
pub output_path: Option<String>,
|
||||
pub mode: String,
|
||||
pub settings_json: String,
|
||||
pub probe_json: Option<String>,
|
||||
pub linked_job_id: Option<i64>,
|
||||
pub status: String,
|
||||
pub expires_at: String,
|
||||
pub downloaded_at: Option<String>,
|
||||
pub created_at: String,
|
||||
pub updated_at: String,
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
|
||||
pub struct JobResumeSession {
|
||||
pub id: i64,
|
||||
pub job_id: i64,
|
||||
pub strategy: String,
|
||||
pub plan_hash: String,
|
||||
pub mtime_hash: String,
|
||||
pub temp_dir: String,
|
||||
pub concat_manifest_path: String,
|
||||
pub segment_length_secs: i64,
|
||||
pub status: String,
|
||||
pub created_at: String,
|
||||
pub updated_at: String,
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
|
||||
pub struct JobResumeSegment {
|
||||
pub id: i64,
|
||||
pub job_id: i64,
|
||||
pub segment_index: i64,
|
||||
pub start_secs: f64,
|
||||
pub duration_secs: f64,
|
||||
pub temp_path: String,
|
||||
pub status: String,
|
||||
pub attempt_count: i32,
|
||||
pub created_at: String,
|
||||
pub updated_at: String,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct UpsertJobResumeSessionInput {
|
||||
pub job_id: i64,
|
||||
pub strategy: String,
|
||||
pub plan_hash: String,
|
||||
pub mtime_hash: String,
|
||||
pub temp_dir: String,
|
||||
pub concat_manifest_path: String,
|
||||
pub segment_length_secs: i64,
|
||||
pub status: String,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct UpsertJobResumeSegmentInput {
|
||||
pub job_id: i64,
|
||||
pub segment_index: i64,
|
||||
pub start_secs: f64,
|
||||
pub duration_secs: f64,
|
||||
pub temp_path: String,
|
||||
pub status: String,
|
||||
pub attempt_count: i32,
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
|
||||
pub struct ScheduleWindow {
|
||||
pub id: i64,
|
||||
pub start_time: String,
|
||||
pub end_time: String,
|
||||
pub days_of_week: String, // as JSON string
|
||||
pub enabled: bool,
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
|
||||
pub struct FileSettings {
|
||||
pub id: i64,
|
||||
pub delete_source: bool,
|
||||
pub output_extension: String,
|
||||
pub output_suffix: String,
|
||||
pub replace_strategy: String,
|
||||
pub output_root: Option<String>,
|
||||
}
|
||||
|
||||
impl FileSettings {
|
||||
pub fn output_path_for(&self, input_path: &Path) -> PathBuf {
|
||||
self.output_path_for_source(input_path, None)
|
||||
}
|
||||
|
||||
pub fn output_path_for_source(&self, input_path: &Path, source_root: Option<&Path>) -> PathBuf {
|
||||
let mut output_path = self.output_base_path(input_path, source_root);
|
||||
        let stem = input_path.file_stem().unwrap_or_default().to_string_lossy();
        let extension = self.output_extension.trim_start_matches('.');
        let suffix = self.output_suffix.as_str();

        let mut filename = String::new();
        filename.push_str(&stem);
        filename.push_str(suffix);
        if !extension.is_empty() {
            filename.push('.');
            filename.push_str(extension);
        }
        if filename.is_empty() {
            filename.push_str("output");
        }
        output_path.set_file_name(filename);

        if output_path == input_path {
            let safe_suffix = if suffix.is_empty() {
                "-alchemist".to_string()
            } else {
                format!("{}-alchemist", suffix)
            };
            let mut safe_name = String::new();
            safe_name.push_str(&stem);
            safe_name.push_str(&safe_suffix);
            if !extension.is_empty() {
                safe_name.push('.');
                safe_name.push_str(extension);
            }
            output_path.set_file_name(safe_name);
        }

        output_path
    }

    fn output_base_path(&self, input_path: &Path, source_root: Option<&Path>) -> PathBuf {
        let Some(output_root) = self
            .output_root
            .as_deref()
            .filter(|value| !value.trim().is_empty())
        else {
            return input_path.to_path_buf();
        };

        let Some(root) = source_root else {
            return input_path.to_path_buf();
        };

        let Ok(relative_path) = input_path.strip_prefix(root) else {
            return input_path.to_path_buf();
        };

        let mut output_path = PathBuf::from(output_root);
        if let Some(parent) = relative_path.parent() {
            output_path.push(parent);
        }
        output_path.push(relative_path.file_name().unwrap_or_default());
        output_path
    }

    pub fn should_replace_existing_output(&self) -> bool {
        let strategy = self.replace_strategy.trim();
        strategy.eq_ignore_ascii_case("replace") || strategy.eq_ignore_ascii_case("overwrite")
    }
}
#[derive(Debug, Serialize, Deserialize, Clone, Default)]
#[serde(default)]
pub struct AggregatedStats {
    pub total_jobs: i64,
    pub completed_jobs: i64,
    pub total_input_size: i64,
    pub total_output_size: i64,
    pub avg_vmaf: Option<f64>,
    pub total_encode_time_seconds: f64,
}

impl AggregatedStats {
    pub fn total_savings_gb(&self) -> f64 {
        self.total_input_size.saturating_sub(self.total_output_size) as f64 / 1_073_741_824.0
    }

    pub fn total_input_gb(&self) -> f64 {
        self.total_input_size as f64 / 1_073_741_824.0
    }

    pub fn avg_reduction_percentage(&self) -> f64 {
        if self.total_input_size == 0 {
            0.0
        } else {
            (1.0 - (self.total_output_size as f64 / self.total_input_size as f64)) * 100.0
        }
    }

    pub fn total_time_hours(&self) -> f64 {
        self.total_encode_time_seconds / 3600.0
    }

    pub fn total_savings_fixed(&self) -> String {
        format!("{:.1}", self.total_savings_gb())
    }

    pub fn total_input_fixed(&self) -> String {
        format!("{:.1}", self.total_input_gb())
    }

    pub fn efficiency_fixed(&self) -> String {
        format!("{:.1}", self.avg_reduction_percentage())
    }

    pub fn time_fixed(&self) -> String {
        format!("{:.1}", self.total_time_hours())
    }

    pub fn avg_vmaf_fixed(&self) -> String {
        self.avg_vmaf
            .map(|v| format!("{:.1}", v))
            .unwrap_or_else(|| "N/A".to_string())
    }
}

/// Daily aggregated statistics for time-series charts
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct DailyStats {
    pub date: String,
    pub jobs_completed: i64,
    pub bytes_saved: i64,
    pub total_input_bytes: i64,
    pub total_output_bytes: i64,
}

/// Detailed per-job encoding statistics
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct DetailedEncodeStats {
    pub job_id: i64,
    pub input_path: String,
    pub input_size_bytes: i64,
    pub output_size_bytes: i64,
    pub compression_ratio: f64,
    pub encode_time_seconds: f64,
    pub encode_speed: f64,
    pub avg_bitrate_kbps: f64,
    pub vmaf_score: Option<f64>,
    pub created_at: DateTime<Utc>,
}

#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct EncodeAttempt {
    pub id: i64,
    pub job_id: i64,
    pub attempt_number: i32,
    pub started_at: Option<String>,
    pub finished_at: String,
    pub outcome: String,
    pub failure_code: Option<String>,
    pub failure_summary: Option<String>,
    pub input_size_bytes: Option<i64>,
    pub output_size_bytes: Option<i64>,
    pub encode_time_seconds: Option<f64>,
    pub created_at: String,
}

#[derive(Debug, Clone)]
pub struct EncodeAttemptInput {
    pub job_id: i64,
    pub attempt_number: i32,
    pub started_at: Option<String>,
    pub outcome: String,
    pub failure_code: Option<String>,
    pub failure_summary: Option<String>,
    pub input_size_bytes: Option<i64>,
    pub output_size_bytes: Option<i64>,
    pub encode_time_seconds: Option<f64>,
}

#[derive(Debug, Clone)]
pub struct EncodeStatsInput {
    pub job_id: i64,
    pub input_size: u64,
    pub output_size: u64,
    pub compression_ratio: f64,
    pub encode_time: f64,
    pub encode_speed: f64,
    pub avg_bitrate: f64,
    pub vmaf_score: Option<f64>,
    pub output_codec: Option<String>,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct CodecSavings {
    pub codec: String,
    pub bytes_saved: i64,
    pub job_count: i64,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct DailySavings {
    pub date: String,
    pub bytes_saved: i64,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct SavingsSummary {
    pub total_input_bytes: i64,
    pub total_output_bytes: i64,
    pub total_bytes_saved: i64,
    pub savings_percent: f64,
    pub job_count: i64,
    pub savings_by_codec: Vec<CodecSavings>,
    pub savings_over_time: Vec<DailySavings>,
}

#[derive(Debug, Serialize, Deserialize, Clone, Default)]
pub struct HealthSummary {
    pub total_checked: i64,
    pub issues_found: i64,
    pub last_run: Option<String>,
}

#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct Decision {
    pub id: i64,
    pub job_id: i64,
    pub action: String, // "encode", "skip", "reject"
    pub reason: String,
    pub reason_code: Option<String>,
    pub reason_payload_json: Option<String>,
    pub created_at: DateTime<Utc>,
}

#[derive(Debug, Clone, sqlx::FromRow)]
pub(crate) struct DecisionRecord {
    pub(crate) job_id: i64,
    pub(crate) action: String,
    pub(crate) reason: String,
    pub(crate) reason_payload_json: Option<String>,
}

#[derive(Debug, Clone, sqlx::FromRow)]
pub(crate) struct FailureExplanationRecord {
    pub(crate) legacy_summary: Option<String>,
    pub(crate) code: String,
    pub(crate) payload_json: String,
}

// Auth related structs
#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct User {
    pub id: i64,
    pub username: String,
    pub password_hash: String,
    pub created_at: DateTime<Utc>,
}

#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct Session {
    pub token: String,
    pub user_id: i64,
    pub expires_at: DateTime<Utc>,
    pub created_at: DateTime<Utc>,
}

#[derive(Debug, Serialize, Deserialize, Clone, Copy, PartialEq, Eq, sqlx::Type)]
#[sqlx(rename_all = "snake_case")]
#[serde(rename_all = "snake_case")]
pub enum ApiTokenAccessLevel {
    ReadOnly,
    FullAccess,
}

#[derive(Debug, Serialize, Deserialize, Clone, sqlx::FromRow)]
pub struct ApiToken {
    pub id: i64,
    pub name: String,
    pub access_level: ApiTokenAccessLevel,
    pub created_at: DateTime<Utc>,
    pub last_used_at: Option<DateTime<Utc>>,
    pub revoked_at: Option<DateTime<Utc>>,
}

#[derive(Debug, Clone, sqlx::FromRow)]
pub struct ApiTokenRecord {
    pub id: i64,
    pub name: String,
    pub token_hash: String,
    pub access_level: ApiTokenAccessLevel,
    pub created_at: DateTime<Utc>,
    pub last_used_at: Option<DateTime<Utc>>,
    pub revoked_at: Option<DateTime<Utc>>,
}

#[cfg(test)]
mod tests {
    use super::*;
    use std::path::{Path, PathBuf};

    #[test]
    fn test_output_path_for_suffix() {
        let settings = FileSettings {
            id: 1,
            delete_source: false,
            output_extension: "mkv".to_string(),
            output_suffix: "-alchemist".to_string(),
            replace_strategy: "keep".to_string(),
            output_root: None,
        };
        let input = Path::new("video.mp4");
        let output = settings.output_path_for(input);
        assert_eq!(output, PathBuf::from("video-alchemist.mkv"));
    }

    #[test]
    fn test_output_path_avoids_inplace() {
        let settings = FileSettings {
            id: 1,
            delete_source: false,
            output_extension: "mkv".to_string(),
            output_suffix: "".to_string(),
            replace_strategy: "keep".to_string(),
            output_root: None,
        };
        let input = Path::new("video.mkv");
        let output = settings.output_path_for(input);
        assert_ne!(output, input);
    }

    #[test]
    fn test_output_path_mirrors_source_root_under_output_root() {
        let settings = FileSettings {
            id: 1,
            delete_source: false,
            output_extension: "mkv".to_string(),
            output_suffix: "-alchemist".to_string(),
            replace_strategy: "keep".to_string(),
            output_root: Some("/encoded".to_string()),
        };
        let input = Path::new("/library/movies/action/video.mp4");
        let output = settings.output_path_for_source(input, Some(Path::new("/library")));
        assert_eq!(
            output,
            PathBuf::from("/encoded/movies/action/video-alchemist.mkv")
        );
    }

    #[test]
    fn test_output_path_falls_back_when_source_root_does_not_match() {
        let settings = FileSettings {
            id: 1,
            delete_source: false,
            output_extension: "mkv".to_string(),
            output_suffix: "-alchemist".to_string(),
            replace_strategy: "keep".to_string(),
            output_root: Some("/encoded".to_string()),
        };
        let input = Path::new("/library/movies/video.mp4");
        let output = settings.output_path_for_source(input, Some(Path::new("/other")));
        assert_eq!(output, PathBuf::from("/library/movies/video-alchemist.mkv"));
    }

    #[test]
    fn test_replace_strategy() {
        let mut settings = FileSettings {
            id: 1,
            delete_source: false,
            output_extension: "mkv".to_string(),
            output_suffix: "-alchemist".to_string(),
            replace_strategy: "keep".to_string(),
            output_root: None,
        };
        assert!(!settings.should_replace_existing_output());
        settings.replace_strategy = "replace".to_string();
        assert!(settings.should_replace_existing_output());
    }
}
@@ -18,7 +18,6 @@ pub mod version;
pub mod wizard;

pub use config::QualityProfile;
pub use db::AlchemistEvent;
pub use media::ffmpeg::{EncodeStats, EncoderCapabilities, HardwareAccelerators};
pub use media::processor::Agent;
pub use orchestrator::Transcoder;
18	src/main.rs
@@ -306,6 +306,10 @@ async fn run() -> Result<()> {
        Ok(mut remuxing_jobs) => jobs.append(&mut remuxing_jobs),
        Err(err) => error!("Failed to load interrupted remuxing jobs: {}", err),
    }
    match db.get_jobs_by_status(db::JobState::Resuming).await {
        Ok(mut resuming_jobs) => jobs.append(&mut resuming_jobs),
        Err(err) => error!("Failed to load interrupted resuming jobs: {}", err),
    }
    match db.get_jobs_by_status(db::JobState::Analyzing).await {
        Ok(mut analyzing_jobs) => jobs.append(&mut analyzing_jobs),
        Err(err) => error!("Failed to load interrupted analyzing jobs: {}", err),
@@ -317,6 +321,11 @@ async fn run() -> Result<()> {
        Ok(count) if count > 0 => {
            warn!("{} interrupted jobs reset to queued", count);
            for job in interrupted_jobs {
                let has_resume_session =
                    db.get_resume_session(job.id).await.ok().flatten().is_some();
                if has_resume_session {
                    continue;
                }
                let temp_path = orphaned_temp_output_path(&job.output_path);
                if std::fs::metadata(&temp_path).is_ok() {
                    match std::fs::remove_file(&temp_path) {
@@ -515,9 +524,6 @@ async fn run() -> Result<()> {
        system: system_tx,
    });

    // Keep legacy channel for transition compatibility
    let (tx, _rx) = broadcast::channel(100);

    let transcoder = Arc::new(Transcoder::new());
    let hardware_state = hardware::HardwareState::new(Some(hw_info.clone()));
    let hardware_probe_log = Arc::new(RwLock::new(initial_probe_log));
@@ -528,7 +534,7 @@ async fn run() -> Result<()> {
        db.as_ref().clone(),
        config.clone(),
    ));
    notification_manager.start_listener(tx.subscribe());
    notification_manager.start_listener(&event_channels);

    let maintenance_db = db.clone();
    let maintenance_config = config.clone();
@@ -563,7 +569,6 @@ async fn run() -> Result<()> {
        transcoder.clone(),
        config.clone(),
        hardware_state.clone(),
        tx.clone(),
        event_channels.clone(),
        matches!(args.command, Some(Commands::Run { dry_run: true, .. })),
    )
@@ -767,7 +772,6 @@ async fn run() -> Result<()> {
        transcoder,
        scheduler: scheduler_handle,
        event_channels,
        tx,
        setup_required: setup_mode,
        config_path: config_path.clone(),
        config_mutable,
@@ -1278,7 +1282,6 @@ mod tests {
        }));
        let hardware_probe_log = Arc::new(RwLock::new(hardware::HardwareProbeLog::default()));
        let transcoder = Arc::new(Transcoder::new());
        let (tx, _rx) = broadcast::channel(8);
        let (jobs_tx, _) = broadcast::channel(100);
        let (config_tx, _) = broadcast::channel(10);
        let (system_tx, _) = broadcast::channel(10);
@@ -1293,7 +1296,6 @@ mod tests {
        transcoder,
        config_state.clone(),
        hardware_state.clone(),
        tx,
        event_channels,
        true,
    )
@@ -1,8 +1,6 @@
use crate::db::{AlchemistEvent, Db, EventChannels, Job, JobEvent};
use crate::db::{Db, EventChannels, Job, JobEvent};
use crate::error::Result;
use crate::media::pipeline::{
    Encoder, ExecutionResult, ExecutionStats, Executor, MediaAnalysis, TranscodePlan,
};
use crate::media::pipeline::{Encoder, ExecutionResult, Executor, MediaAnalysis, TranscodePlan};
use crate::orchestrator::{
    AsyncExecutionObserver, ExecutionObserver, TranscodeRequest, Transcoder,
};
@@ -10,13 +8,12 @@ use crate::system::hardware::HardwareInfo;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::sync::{Mutex, broadcast};
use tokio::sync::Mutex;

pub struct FfmpegExecutor {
    transcoder: Arc<Transcoder>,
    db: Arc<Db>,
    hw_info: Option<HardwareInfo>,
    event_tx: Arc<broadcast::Sender<AlchemistEvent>>,
    event_channels: Arc<EventChannels>,
    dry_run: bool,
}
@@ -26,7 +23,6 @@ impl FfmpegExecutor {
        transcoder: Arc<Transcoder>,
        db: Arc<Db>,
        hw_info: Option<HardwareInfo>,
        event_tx: Arc<broadcast::Sender<AlchemistEvent>>,
        event_channels: Arc<EventChannels>,
        dry_run: bool,
    ) -> Self {
@@ -34,7 +30,6 @@ impl FfmpegExecutor {
            transcoder,
            db,
            hw_info,
            event_tx,
            event_channels,
            dry_run,
        }
@@ -44,22 +39,15 @@ impl FfmpegExecutor {
struct JobExecutionObserver {
    job_id: i64,
    db: Arc<Db>,
    event_tx: Arc<broadcast::Sender<AlchemistEvent>>,
    event_channels: Arc<EventChannels>,
    last_progress: Mutex<Option<(f64, Instant)>>,
}

impl JobExecutionObserver {
    fn new(
        job_id: i64,
        db: Arc<Db>,
        event_tx: Arc<broadcast::Sender<AlchemistEvent>>,
        event_channels: Arc<EventChannels>,
    ) -> Self {
    fn new(job_id: i64, db: Arc<Db>, event_channels: Arc<EventChannels>) -> Self {
        Self {
            job_id,
            db,
            event_tx,
            event_channels,
            last_progress: Mutex::new(None),
        }
@@ -68,18 +56,11 @@ impl JobExecutionObserver {

impl AsyncExecutionObserver for JobExecutionObserver {
    async fn on_log(&self, message: String) {
        // Send to typed channel
        let _ = self.event_channels.jobs.send(JobEvent::Log {
            level: "info".to_string(),
            job_id: Some(self.job_id),
            message: message.clone(),
        });
        // Also send to legacy channel for backwards compatibility
        let _ = self.event_tx.send(AlchemistEvent::Log {
            level: "info".to_string(),
            job_id: Some(self.job_id),
            message: message.clone(),
        });
        if let Err(err) = self.db.add_log("info", Some(self.job_id), &message).await {
            tracing::warn!(
                "Failed to persist transcode log for job {}: {}",
@@ -117,14 +98,7 @@ impl AsyncExecutionObserver for JobExecutionObserver {
            }
        }

        // Send to typed channel
        let _ = self.event_channels.jobs.send(JobEvent::Progress {
            job_id: self.job_id,
            percentage,
            time: progress.time.clone(),
        });
        // Also send to legacy channel for backwards compatibility
        let _ = self.event_tx.send(AlchemistEvent::Progress {
            job_id: self.job_id,
            percentage,
            time: progress.time,
@@ -155,7 +129,6 @@ impl Executor for FfmpegExecutor {
        let observer: Arc<dyn ExecutionObserver> = Arc::new(JobExecutionObserver::new(
            job.id,
            self.db.clone(),
            self.event_tx.clone(),
            self.event_channels.clone(),
        ));

@@ -181,6 +154,8 @@ impl Executor for FfmpegExecutor {
                metadata: &analysis.metadata,
                plan,
                observer: Some(observer.clone()),
                clip_start_seconds: None,
                clip_duration_seconds: None,
            })
            .await?;

@@ -198,6 +173,8 @@ impl Executor for FfmpegExecutor {
                metadata: &analysis.metadata,
                plan,
                observer: Some(observer),
                clip_start_seconds: None,
                clip_duration_seconds: None,
            })
            .await?;
        }
@@ -274,17 +251,11 @@ impl Executor for FfmpegExecutor {
            fallback_occurred: plan.fallback.is_some() || codec_mismatch || encoder_mismatch,
            actual_output_codec,
            actual_encoder_name,
            stats: ExecutionStats {
                encode_time_secs: 0.0,
                input_size: 0,
                output_size: 0,
                vmaf: None,
            },
        })
    }
}

fn output_codec_from_name(codec: &str) -> Option<crate::config::OutputCodec> {
pub(crate) fn output_codec_from_name(codec: &str) -> Option<crate::config::OutputCodec> {
    if codec.eq_ignore_ascii_case("av1") {
        Some(crate::config::OutputCodec::Av1)
    } else if codec.eq_ignore_ascii_case("hevc") || codec.eq_ignore_ascii_case("h265") {
@@ -296,7 +267,10 @@ fn output_codec_from_name(codec: &str) -> Option<crate::config::OutputCodec> {
    }
}

fn encoder_tag_matches(requested: crate::media::pipeline::Encoder, encoder_tag: &str) -> bool {
pub(crate) fn encoder_tag_matches(
    requested: crate::media::pipeline::Encoder,
    encoder_tag: &str,
) -> bool {
    let tag = encoder_tag.to_ascii_lowercase();
    let expected_markers: &[&str] = match requested {
        crate::media::pipeline::Encoder::Av1Qsv
@@ -392,8 +366,7 @@ mod tests {
        let Some(job) = db.get_job_by_input_path("input.mkv").await? else {
            panic!("expected seeded job");
        };
        let (tx, mut rx) = broadcast::channel(8);
        let (jobs_tx, _) = broadcast::channel(100);
        let (jobs_tx, mut jobs_rx) = broadcast::channel(100);
        let (config_tx, _) = broadcast::channel(10);
        let (system_tx, _) = broadcast::channel(10);
        let event_channels = Arc::new(crate::db::EventChannels {
@@ -401,7 +374,7 @@ mod tests {
            config: config_tx,
            system: system_tx,
        });
        let observer = JobExecutionObserver::new(job.id, db.clone(), Arc::new(tx), event_channels);
        let observer = JobExecutionObserver::new(job.id, db.clone(), event_channels);

        LocalExecutionObserver::on_log(&observer, "ffmpeg line".to_string()).await;
        LocalExecutionObserver::on_progress(
@@ -423,10 +396,10 @@ mod tests {
        };
        assert!((updated.progress - 20.0).abs() < 0.01);

        let first = rx.recv().await?;
        assert!(matches!(first, AlchemistEvent::Log { .. }));
        let second = rx.recv().await?;
        assert!(matches!(second, AlchemistEvent::Progress { .. }));
        let first = jobs_rx.recv().await?;
        assert!(matches!(first, JobEvent::Log { .. }));
        let second = jobs_rx.recv().await?;
        assert!(matches!(second, JobEvent::Progress { .. }));

        drop(db);
        let _ = std::fs::remove_file(db_path);
@@ -135,6 +135,8 @@ pub struct FFmpegCommandBuilder<'a> {
    metadata: &'a crate::media::pipeline::MediaMetadata,
    plan: &'a TranscodePlan,
    hw_info: Option<&'a HardwareInfo>,
    clip_start_seconds: Option<f64>,
    clip_duration_seconds: Option<f64>,
}

impl<'a> FFmpegCommandBuilder<'a> {
@@ -150,6 +152,8 @@ impl<'a> FFmpegCommandBuilder<'a> {
            metadata,
            plan,
            hw_info: None,
            clip_start_seconds: None,
            clip_duration_seconds: None,
        }
    }

@@ -158,6 +162,16 @@ impl<'a> FFmpegCommandBuilder<'a> {
        self
    }

    pub fn with_clip(
        mut self,
        clip_start_seconds: Option<f64>,
        clip_duration_seconds: Option<f64>,
    ) -> Self {
        self.clip_start_seconds = clip_start_seconds;
        self.clip_duration_seconds = clip_duration_seconds;
        self
    }

    pub fn build(self) -> Result<tokio::process::Command> {
        let args = self.build_args()?;
        let mut cmd = tokio::process::Command::new("ffmpeg");
@@ -189,14 +203,23 @@ impl<'a> FFmpegCommandBuilder<'a> {
            "-nostats".to_string(),
            "-progress".to_string(),
            "pipe:2".to_string(),
            "-i".to_string(),
            self.input.display().to_string(),
            "-map_metadata".to_string(),
            "0".to_string(),
            "-map".to_string(),
            "0:v:0".to_string(),
        ];

        args.push("-i".to_string());
        args.push(self.input.display().to_string());
        if let Some(clip_start_seconds) = self.clip_start_seconds {
            args.push("-ss".to_string());
            args.push(format!("{clip_start_seconds:.3}"));
        }
        if let Some(clip_duration_seconds) = self.clip_duration_seconds {
            args.push("-t".to_string());
            args.push(format!("{clip_duration_seconds:.3}"));
        }
        args.push("-map_metadata".to_string());
        args.push("0".to_string());
        args.push("-map".to_string());
        args.push("0:v:0".to_string());

        if !matches!(self.plan.audio, AudioStreamPlan::Drop) {
            match &self.plan.audio_stream_indices {
                None => {
@@ -604,10 +627,11 @@ impl FFmpegProgressState {
                }
            }
            "speed" => self.current.speed = value.to_string(),
            "progress" if matches!(value, "continue" | "end") => {
                if self.current.time_seconds > 0.0 || self.current.frame > 0 {
                    return Some(self.current.clone());
                }
            "progress"
                if matches!(value, "continue" | "end")
                    && (self.current.time_seconds > 0.0 || self.current.frame > 0) =>
            {
                return Some(self.current.clone());
            }
            _ => {}
        }
@@ -1039,6 +1063,30 @@ mod tests {
        assert!(args.iter().any(|arg| arg.contains("format=nv12,hwupload")));
    }

    #[test]
    fn vaapi_cq_mode_sets_inverted_global_quality() {
        let metadata = metadata();
        let mut plan = plan_for(Encoder::HevcVaapi);
        plan.rate_control = Some(RateControl::Cq { value: 23 });
        let mut info = hw_info("/dev/dri/renderD128");
        info.vendor = crate::system::hardware::Vendor::Amd;
        let builder = FFmpegCommandBuilder::new(
            Path::new("/tmp/in.mkv"),
            Path::new("/tmp/out.mkv"),
            &metadata,
            &plan,
        )
        .with_hardware(Some(&info));
        let args = builder
            .build_args()
            .unwrap_or_else(|err| panic!("failed to build vaapi cq args: {err}"));
        let quality_index = args
            .iter()
            .position(|arg| arg == "-global_quality")
            .unwrap_or_else(|| panic!("missing -global_quality"));
        assert_eq!(args.get(quality_index + 1).map(String::as_str), Some("77"));
    }

    #[test]
    fn command_args_cover_videotoolbox_backend() {
        let metadata = metadata();
@@ -1054,7 +1102,7 @@ mod tests {
            .unwrap_or_else(|err| panic!("failed to build videotoolbox args: {err}"));
        assert!(args.contains(&"hevc_videotoolbox".to_string()));
        assert!(!args.contains(&"hvc1".to_string()));
        assert!(!args.contains(&"-q:v".to_string()));
        assert!(args.contains(&"-q:v".to_string())); // P1-2 fix: Cq maps to -q:v
        assert!(!args.contains(&"-b:v".to_string()));
    }

@@ -1074,7 +1122,7 @@ mod tests {
            .unwrap_or_else(|err| panic!("failed to build mp4 videotoolbox args: {err}"));
        assert!(args.contains(&"hevc_videotoolbox".to_string()));
        assert!(args.contains(&"hvc1".to_string()));
        assert!(!args.contains(&"-q:v".to_string()));
        assert!(args.contains(&"-q:v".to_string())); // P1-2 fix: Cq maps to -q:v
    }

    #[test]
@@ -1148,6 +1196,42 @@ mod tests {
        assert!(args.contains(&"hevc_amf".to_string()));
    }

    #[test]
    fn amf_cq_mode_sets_cqp_flags() {
        let metadata = metadata();
        let mut plan = plan_for(Encoder::HevcAmf);
        plan.rate_control = Some(RateControl::Cq { value: 19 });
        let builder = FFmpegCommandBuilder::new(
            Path::new("/tmp/in.mkv"),
            Path::new("/tmp/out.mkv"),
            &metadata,
            &plan,
        );
        let args = builder
            .build_args()
            .unwrap_or_else(|err| panic!("failed to build amf cq args: {err}"));
        assert!(args.windows(2).any(|window| window == ["-rc", "cqp"]));
        assert!(args.windows(2).any(|window| window == ["-qp_i", "19"]));
        assert!(args.windows(2).any(|window| window == ["-qp_p", "19"]));
    }

    #[test]
    fn clip_window_adds_trim_arguments() {
        let metadata = metadata();
        let plan = plan_for(Encoder::H264X264);
        let args = FFmpegCommandBuilder::new(
            Path::new("/tmp/in.mkv"),
            Path::new("/tmp/out.mkv"),
            &metadata,
            &plan,
        )
        .with_clip(Some(12.5), Some(8.0))
        .build_args()
        .unwrap_or_else(|err| panic!("failed to build clipped args: {err}"));
        assert!(args.windows(2).any(|window| window == ["-ss", "12.500"]));
        assert!(args.windows(2).any(|window| window == ["-t", "8.000"]));
    }

    #[test]
    fn mp4_audio_transcode_uses_aac_profile() {
        let mut plan = plan_for(Encoder::H264X264);
@@ -6,10 +6,6 @@ pub fn append_args(
    tag_hevc_as_hvc1: bool,
    rate_control: Option<&RateControl>,
) {
    // VideoToolbox quality is controlled via -global_quality (0–100, 100=best).
    // The config uses CQ-style semantics where lower value = better quality,
    // so we invert: global_quality = 100 - cq_value.
    // Bitrate mode is handled by the shared builder in mod.rs.
    match encoder {
        Encoder::Av1Videotoolbox => {
            args.extend(["-c:v".to_string(), "av1_videotoolbox".to_string()]);
@@ -25,8 +21,27 @@ pub fn append_args(
        }
        _ => {}
    }
    if let Some(RateControl::Cq { value }) = rate_control {
        let global_quality = 100u8.saturating_sub(*value);
        args.extend(["-global_quality".to_string(), global_quality.to_string()]);

    match rate_control {
        Some(RateControl::Cq { value }) => {
            // VideoToolbox -q:v: 1 (best) to 100 (worst). Config value is CRF-style
            // where lower = better quality. Clamp to 1-51 range matching x264/x265.
            let q = (*value).clamp(1, 51);
            args.extend(["-q:v".to_string(), q.to_string()]);
        }
        Some(RateControl::Bitrate { kbps, .. }) => {
            args.extend([
                "-b:v".to_string(),
                format!("{}k", kbps),
                "-maxrate".to_string(),
                format!("{}k", kbps * 2),
                "-bufsize".to_string(),
                format!("{}k", kbps * 4),
            ]);
        }
        _ => {
            // Default: constant quality at 28 (HEVC-equivalent mid quality)
            args.extend(["-q:v".to_string(), "28".to_string()]);
        }
    }
}
File diff suppressed because it is too large
@@ -1,6 +1,6 @@
use crate::Transcoder;
use crate::config::Config;
use crate::db::{AlchemistEvent, Db, EventChannels, JobEvent, SystemEvent};
use crate::db::{Db, EventChannels, JobEvent, SystemEvent};
use crate::error::Result;
use crate::media::pipeline::Pipeline;
use crate::media::scanner::Scanner;
@@ -8,7 +8,7 @@ use crate::system::hardware::HardwareState;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use tokio::sync::{Mutex, OwnedSemaphorePermit, RwLock, Semaphore, broadcast};
use tokio::sync::{Mutex, OwnedSemaphorePermit, RwLock, Semaphore};
use tracing::{debug, error, info};

pub struct Agent {
@@ -16,7 +16,6 @@ pub struct Agent {
    orchestrator: Arc<Transcoder>,
    config: Arc<RwLock<Config>>,
    hardware_state: HardwareState,
    tx: Arc<broadcast::Sender<AlchemistEvent>>,
    event_channels: Arc<EventChannels>,
    semaphore: Arc<Semaphore>,
    semaphore_limit: Arc<AtomicUsize>,
@@ -39,7 +38,6 @@ impl Agent {
        orchestrator: Arc<Transcoder>,
        config: Arc<RwLock<Config>>,
        hardware_state: HardwareState,
        tx: broadcast::Sender<AlchemistEvent>,
        event_channels: Arc<EventChannels>,
        dry_run: bool,
    ) -> Self {
@@ -54,7 +52,6 @@ impl Agent {
            orchestrator,
            config,
            hardware_state,
            tx: Arc::new(tx),
            event_channels,
            semaphore: Arc::new(Semaphore::new(concurrent_jobs)),
            semaphore_limit: Arc::new(AtomicUsize::new(concurrent_jobs)),
@@ -99,15 +96,8 @@ impl Agent {
            job_id: 0,
            status: crate::db::JobState::Queued,
        });
        // Also send to legacy channel for backwards compatibility
        let _ = self.tx.send(AlchemistEvent::JobStateChanged {
            job_id: 0,
            status: crate::db::JobState::Queued,
        });

        // Notify scan completed
        let _ = self.event_channels.system.send(SystemEvent::ScanCompleted);
        let _ = self.tx.send(AlchemistEvent::ScanCompleted);

        Ok(())
    }
@@ -479,7 +469,7 @@ impl Agent {
        if self.in_flight_jobs.load(Ordering::SeqCst) == 0
            && !self.idle_notified.swap(true, Ordering::SeqCst)
        {
            let _ = self.tx.send(crate::db::AlchemistEvent::EngineIdle);
            let _ = self.event_channels.system.send(SystemEvent::EngineIdle);
        }
        drop(permit);
        tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
@@ -507,7 +497,6 @@ impl Agent {
        self.orchestrator.clone(),
        self.config.clone(),
        self.hardware_state.clone(),
        self.tx.clone(),
        self.event_channels.clone(),
        self.dry_run,
    )
@@ -1,5 +1,5 @@
|
||||
use crate::config::Config;
|
||||
use crate::db::{AlchemistEvent, Db, NotificationTarget};
|
||||
use crate::db::{Db, EventChannels, JobEvent, NotificationTarget, SystemEvent};
|
||||
use crate::explanations::Explanation;
|
||||
use chrono::Timelike;
|
||||
use lettre::message::{Mailbox, Message, SinglePart, header::ContentType};
|
||||
@@ -12,10 +12,11 @@ use std::net::IpAddr;
|
||||
use std::sync::Arc;
|
||||
use std::time::Duration;
|
||||
use tokio::net::lookup_host;
|
||||
use tokio::sync::{Mutex, RwLock, broadcast};
|
||||
use tokio::sync::{Mutex, RwLock};
|
||||
use tracing::{error, warn};
|
||||
|
||||
type NotificationResult<T> = Result<T, Box<dyn std::error::Error + Send + Sync>>;
|
||||
const DAILY_SUMMARY_LAST_SUCCESS_KEY: &str = "notifications.daily_summary.last_success_date";
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct NotificationManager {
|
||||
@@ -86,9 +87,21 @@ fn endpoint_url_for_target(target: &NotificationTarget) -> NotificationResult<Op
|
||||
}
|
||||
}
|
||||
|
||||
fn event_key_from_event(event: &AlchemistEvent) -> Option<&'static str> {
|
||||
/// Internal event type that unifies the events the notification system cares about.
|
||||
#[derive(Debug, Clone, serde::Serialize)]
|
||||
#[serde(tag = "type", content = "data")]
|
||||
enum NotifiableEvent {
|
||||
JobStateChanged {
|
||||
job_id: i64,
|
||||
status: crate::db::JobState,
|
||||
},
|
||||
ScanCompleted,
|
||||
EngineIdle,
|
||||
}
|
||||
|
||||
fn event_key(event: &NotifiableEvent) -> Option<&'static str> {
|
||||
match event {
|
||||
AlchemistEvent::JobStateChanged { status, .. } => match status {
|
||||
NotifiableEvent::JobStateChanged { status, .. } => match status {
|
||||
crate::db::JobState::Queued => Some(crate::config::NOTIFICATION_EVENT_ENCODE_QUEUED),
|
||||
crate::db::JobState::Encoding | crate::db::JobState::Remuxing => {
|
||||
Some(crate::config::NOTIFICATION_EVENT_ENCODE_STARTED)
|
||||
@@ -99,9 +112,8 @@ fn event_key_from_event(event: &AlchemistEvent) -> Option<&'static str> {
|
||||
crate::db::JobState::Failed => Some(crate::config::NOTIFICATION_EVENT_ENCODE_FAILED),
|
||||
_ => None,
|
||||
},
|
||||
AlchemistEvent::ScanCompleted => Some(crate::config::NOTIFICATION_EVENT_SCAN_COMPLETED),
|
||||
AlchemistEvent::EngineIdle => Some(crate::config::NOTIFICATION_EVENT_ENGINE_IDLE),
|
||||
_ => None,
|
||||
NotifiableEvent::ScanCompleted => Some(crate::config::NOTIFICATION_EVENT_SCAN_COMPLETED),
|
||||
NotifiableEvent::EngineIdle => Some(crate::config::NOTIFICATION_EVENT_ENGINE_IDLE),
|
||||
}
|
||||
}

@@ -114,30 +126,121 @@ impl NotificationManager {
    }
}

pub fn start_listener(&self, mut rx: broadcast::Receiver<AlchemistEvent>) {
/// Build an HTTP client with SSRF protections: DNS resolution timeout,
/// private-IP blocking (unless allow_local_notifications), no redirects,
/// and a 10-second request timeout.
async fn build_safe_client(&self, target: &NotificationTarget) -> NotificationResult<Client> {
    if let Some(endpoint_url) = endpoint_url_for_target(target)? {
        let url = Url::parse(&endpoint_url)?;
        let host = url
            .host_str()
            .ok_or("notification endpoint host is missing")?;
        let port = url.port_or_known_default().ok_or("invalid port")?;

        let allow_local = self
            .config
            .read()
            .await
            .notifications
            .allow_local_notifications;

        if !allow_local && host.eq_ignore_ascii_case("localhost") {
            return Err("localhost is not allowed as a notification endpoint".into());
        }

        let addr = format!("{}:{}", host, port);
        let ips = tokio::time::timeout(Duration::from_secs(3), lookup_host(&addr)).await??;

        let target_ip = if allow_local {
            ips.into_iter()
                .map(|a| a.ip())
                .next()
                .ok_or("no IP address found for notification endpoint")?
        } else {
            ips.into_iter()
                .map(|a| a.ip())
                .find(|ip| !is_private_ip(*ip))
                .ok_or("no public IP address found for notification endpoint")?
        };

        Ok(Client::builder()
            .timeout(Duration::from_secs(10))
            .redirect(Policy::none())
            .resolve(host, std::net::SocketAddr::new(target_ip, port))
            .build()?)
    } else {
        Ok(Client::builder()
            .timeout(Duration::from_secs(10))
            .redirect(Policy::none())
            .build()?)
    }
}

pub fn start_listener(&self, event_channels: &EventChannels) {
    let manager_clone = self.clone();
    let summary_manager = self.clone();

    // Listen for job events (state changes are the only ones we notify on)
    let mut jobs_rx = event_channels.jobs.subscribe();
    let job_manager = self.clone();
    tokio::spawn(async move {
        loop {
            match rx.recv().await {
                Ok(event) => {
                    if let Err(e) = manager_clone.handle_event(event).await {
            match jobs_rx.recv().await {
                Ok(JobEvent::StateChanged { job_id, status }) => {
                    let event = NotifiableEvent::JobStateChanged { job_id, status };
                    if let Err(e) = job_manager.handle_event(event).await {
                        error!("Notification error: {}", e);
                    }
                }
                Err(broadcast::error::RecvError::Lagged(_)) => {
                    warn!("Notification listener lagged")
                Ok(_) => {} // Ignore Progress, Decision, Log
                Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => {
                    warn!("Notification job listener lagged")
                }
                Err(broadcast::error::RecvError::Closed) => break,
                Err(tokio::sync::broadcast::error::RecvError::Closed) => break,
            }
        }
    });

    // Listen for system events (scan completed, engine idle)
    let mut system_rx = event_channels.system.subscribe();
    tokio::spawn(async move {
        loop {
            match system_rx.recv().await {
                Ok(SystemEvent::ScanCompleted) => {
                    if let Err(e) = manager_clone
                        .handle_event(NotifiableEvent::ScanCompleted)
                        .await
                    {
                        error!("Notification error: {}", e);
                    }
                }
                Ok(SystemEvent::EngineIdle) => {
                    if let Err(e) = manager_clone
                        .handle_event(NotifiableEvent::EngineIdle)
                        .await
                    {
                        error!("Notification error: {}", e);
                    }
                }
                Ok(_) => {} // Ignore ScanStarted, EngineStatusChanged, HardwareStateChanged
                Err(tokio::sync::broadcast::error::RecvError::Lagged(_)) => {
                    warn!("Notification system listener lagged")
                }
                Err(tokio::sync::broadcast::error::RecvError::Closed) => break,
            }
        }
    });

    tokio::spawn(async move {
        let start = tokio::time::Instant::now()
            + delay_until_next_minute_boundary(chrono::Local::now());
        let mut interval = tokio::time::interval_at(start, Duration::from_secs(60));
        loop {
            tokio::time::sleep(Duration::from_secs(30)).await;
            if let Err(err) = summary_manager.maybe_send_daily_summary().await {
            interval.tick().await;
            if let Err(err) = summary_manager
                .maybe_send_daily_summary_at(chrono::Local::now())
                .await
            {
                error!("Daily summary notification error: {}", err);
            }
        }
@@ -145,14 +248,14 @@ impl NotificationManager {
}

pub async fn send_test(&self, target: &NotificationTarget) -> NotificationResult<()> {
    let event = AlchemistEvent::JobStateChanged {
    let event = NotifiableEvent::JobStateChanged {
        job_id: 0,
        status: crate::db::JobState::Completed,
    };
    self.send(target, &event).await
}

async fn handle_event(&self, event: AlchemistEvent) -> NotificationResult<()> {
async fn handle_event(&self, event: NotifiableEvent) -> NotificationResult<()> {
    let targets = match self.db.get_notification_targets().await {
        Ok(t) => t,
        Err(e) => {
@@ -165,7 +268,7 @@ impl NotificationManager {
        return Ok(());
    }

    let event_key = match event_key_from_event(&event) {
    let event_key = match event_key(&event) {
        Some(event_key) => event_key,
        None => return Ok(()),
    };
@@ -205,9 +308,11 @@ impl NotificationManager {
    Ok(())
}

async fn maybe_send_daily_summary(&self) -> NotificationResult<()> {
async fn maybe_send_daily_summary_at(
    &self,
    now: chrono::DateTime<chrono::Local>,
) -> NotificationResult<()> {
    let config = self.config.read().await.clone();
    let now = chrono::Local::now();
    let parts = config
        .notifications
        .daily_summary_time_local
@@ -218,97 +323,113 @@ impl NotificationManager {
    }
    let hour = parts[0].parse::<u32>().unwrap_or(9);
    let minute = parts[1].parse::<u32>().unwrap_or(0);
    if now.hour() != hour || now.minute() != minute {
    let Some(scheduled_at) = now
        .with_hour(hour)
        .and_then(|value| value.with_minute(minute))
        .and_then(|value| value.with_second(0))
        .and_then(|value| value.with_nanosecond(0))
    else {
        return Ok(());
    };
    if now < scheduled_at {
        return Ok(());
    }

    let summary_key = now.format("%Y-%m-%d").to_string();
    {
        let last_sent = self.daily_summary_last_sent.lock().await;
        if last_sent.as_deref() == Some(summary_key.as_str()) {
            return Ok(());
        }
    if self.daily_summary_already_sent(&summary_key).await? {
        return Ok(());
    }

    let summary = self.db.get_daily_summary_stats().await?;
    let targets = self.db.get_notification_targets().await?;
    let mut eligible_targets = Vec::new();
    for target in targets {
        if !target.enabled {
            continue;
        }
        let allowed: Vec<String> = serde_json::from_str(&target.events).unwrap_or_default();
        let allowed: Vec<String> = match serde_json::from_str(&target.events) {
            Ok(events) => events,
            Err(err) => {
                warn!(
                    "Failed to parse events for notification target '{}': {}",
                    target.name, err
                );
                Vec::new()
            }
        };
        let normalized_allowed = crate::config::normalize_notification_events(&allowed);
        if !normalized_allowed
        if normalized_allowed
            .iter()
            .any(|event| event == crate::config::NOTIFICATION_EVENT_DAILY_SUMMARY)
        {
            continue;
            eligible_targets.push(target);
        }
    }

    if eligible_targets.is_empty() {
        self.mark_daily_summary_sent(&summary_key).await?;
        return Ok(());
    }

    let summary = self.db.get_daily_summary_stats().await?;
    let mut delivered = 0usize;
    for target in eligible_targets {
        if let Err(err) = self.send_daily_summary_target(&target, &summary).await {
            error!(
                "Failed to send daily summary to target '{}': {}",
                target.name, err
            );
            continue;
        }
        delivered += 1;
    }

    if delivered > 0 {
        self.mark_daily_summary_sent(&summary_key).await?;
    }

    Ok(())
}

async fn daily_summary_already_sent(&self, summary_key: &str) -> NotificationResult<bool> {
    {
        let last_sent = self.daily_summary_last_sent.lock().await;
        if last_sent.as_deref() == Some(summary_key) {
            return Ok(true);
        }
    }

    *self.daily_summary_last_sent.lock().await = Some(summary_key);
    let persisted = self
        .db
        .get_preference(DAILY_SUMMARY_LAST_SUCCESS_KEY)
        .await?;
    if persisted.as_deref() == Some(summary_key) {
        let mut last_sent = self.daily_summary_last_sent.lock().await;
        *last_sent = Some(summary_key.to_string());
        return Ok(true);
    }

    Ok(false)
}

async fn mark_daily_summary_sent(&self, summary_key: &str) -> NotificationResult<()> {
    self.db
        .set_preference(DAILY_SUMMARY_LAST_SUCCESS_KEY, summary_key)
        .await?;
    let mut last_sent = self.daily_summary_last_sent.lock().await;
    *last_sent = Some(summary_key.to_string());
    Ok(())
}

async fn send(
    &self,
    target: &NotificationTarget,
    event: &AlchemistEvent,
    event: &NotifiableEvent,
) -> NotificationResult<()> {
    let event_key = event_key_from_event(event).unwrap_or("unknown");
    let client = if let Some(endpoint_url) = endpoint_url_for_target(target)? {
        let url = Url::parse(&endpoint_url)?;
        let host = url
            .host_str()
            .ok_or("notification endpoint host is missing")?;
        let port = url.port_or_known_default().ok_or("invalid port")?;

        let allow_local = self
            .config
            .read()
            .await
            .notifications
            .allow_local_notifications;

        if !allow_local && host.eq_ignore_ascii_case("localhost") {
            return Err("localhost is not allowed as a notification endpoint".into());
        }

        let addr = format!("{}:{}", host, port);
        let ips = tokio::time::timeout(Duration::from_secs(3), lookup_host(&addr)).await??;

        let target_ip = if allow_local {
            ips.into_iter()
                .map(|a| a.ip())
                .next()
                .ok_or("no IP address found for notification endpoint")?
        } else {
            ips.into_iter()
                .map(|a| a.ip())
                .find(|ip| !is_private_ip(*ip))
                .ok_or("no public IP address found for notification endpoint")?
        };

        Client::builder()
            .timeout(Duration::from_secs(10))
            .redirect(Policy::none())
            .resolve(host, std::net::SocketAddr::new(target_ip, port))
            .build()?
    } else {
        Client::builder()
            .timeout(Duration::from_secs(10))
            .redirect(Policy::none())
            .build()?
    };
    let event_key = event_key(event).unwrap_or("unknown");
    let client = self.build_safe_client(target).await?;

    let (decision_explanation, failure_explanation) = match event {
        AlchemistEvent::JobStateChanged { job_id, status } => {
        NotifiableEvent::JobStateChanged { job_id, status } => {
            let decision_explanation = self
                .db
                .get_job_decision_explanation(*job_id)
@@ -423,25 +544,24 @@ impl NotificationManager {

fn message_for_event(
    &self,
    event: &AlchemistEvent,
    event: &NotifiableEvent,
    decision_explanation: Option<&Explanation>,
    failure_explanation: Option<&Explanation>,
) -> String {
    match event {
        AlchemistEvent::JobStateChanged { job_id, status } => self.notification_message(
        NotifiableEvent::JobStateChanged { job_id, status } => self.notification_message(
            *job_id,
            &status.to_string(),
            decision_explanation,
            failure_explanation,
        ),
        AlchemistEvent::ScanCompleted => {
        NotifiableEvent::ScanCompleted => {
            "Library scan completed. Review the queue for newly discovered work.".to_string()
        }
        AlchemistEvent::EngineIdle => {
        NotifiableEvent::EngineIdle => {
            "The engine is idle. There are no active jobs and no queued work ready to run."
                .to_string()
        }
        _ => "Event occurred".to_string(),
    }
}

@@ -472,7 +592,7 @@ impl NotificationManager {
    &self,
    client: &Client,
    target: &NotificationTarget,
    event: &AlchemistEvent,
    event: &NotifiableEvent,
    event_key: &str,
    decision_explanation: Option<&Explanation>,
    failure_explanation: Option<&Explanation>,
@@ -511,7 +631,7 @@ impl NotificationManager {
    &self,
    client: &Client,
    target: &NotificationTarget,
    event: &AlchemistEvent,
    event: &NotifiableEvent,
    _event_key: &str,
    decision_explanation: Option<&Explanation>,
    failure_explanation: Option<&Explanation>,
@@ -536,7 +656,7 @@ impl NotificationManager {
    &self,
    client: &Client,
    target: &NotificationTarget,
    event: &AlchemistEvent,
    event: &NotifiableEvent,
    event_key: &str,
    decision_explanation: Option<&Explanation>,
    failure_explanation: Option<&Explanation>,
@@ -550,16 +670,21 @@ impl NotificationManager {
    _ => 2,
};

let req = client.post(&config.server_url).json(&json!({
    "title": "Alchemist",
    "message": message,
    "priority": priority,
    "extras": {
        "client::display": {
            "contentType": "text/plain"
let req = client
    .post(format!(
        "{}/message",
        config.server_url.trim_end_matches('/')
    ))
    .json(&json!({
        "title": "Alchemist",
        "message": message,
        "priority": priority,
        "extras": {
            "client::display": {
                "contentType": "text/plain"
            }
        }
    }
}));
    }));
req.header("X-Gotify-Key", config.app_token)
    .send()
    .await?
@@ -571,7 +696,7 @@ impl NotificationManager {
    &self,
    client: &Client,
    target: &NotificationTarget,
    event: &AlchemistEvent,
    event: &NotifiableEvent,
    event_key: &str,
    decision_explanation: Option<&Explanation>,
    failure_explanation: Option<&Explanation>,
@@ -601,7 +726,7 @@ impl NotificationManager {
    &self,
    client: &Client,
    target: &NotificationTarget,
    event: &AlchemistEvent,
    event: &NotifiableEvent,
    _event_key: &str,
    decision_explanation: Option<&Explanation>,
    failure_explanation: Option<&Explanation>,
@@ -627,7 +752,7 @@ impl NotificationManager {
async fn send_email(
    &self,
    target: &NotificationTarget,
    event: &AlchemistEvent,
    event: &NotifiableEvent,
    _event_key: &str,
    decision_explanation: Option<&Explanation>,
    failure_explanation: Option<&Explanation>,
@@ -677,10 +802,11 @@ impl NotificationManager {
    summary: &crate::db::DailySummaryStats,
) -> NotificationResult<()> {
    let message = self.daily_summary_message(summary);
    let client = self.build_safe_client(target).await?;
    match target.target_type.as_str() {
        "discord_webhook" => {
            let config = parse_target_config::<DiscordWebhookConfig>(target)?;
            Client::new()
            client
                .post(config.webhook_url)
                .json(&json!({
                    "embeds": [{
@@ -696,7 +822,7 @@ impl NotificationManager {
        }
        "discord_bot" => {
            let config = parse_target_config::<DiscordBotConfig>(target)?;
            Client::new()
            client
                .post(format!(
                    "https://discord.com/api/v10/channels/{}/messages",
                    config.channel_id
@@ -709,7 +835,7 @@ impl NotificationManager {
        }
        "gotify" => {
            let config = parse_target_config::<GotifyConfig>(target)?;
            Client::new()
            client
                .post(config.server_url)
                .header("X-Gotify-Key", config.app_token)
                .json(&json!({
@@ -723,7 +849,7 @@ impl NotificationManager {
        }
        "webhook" => {
            let config = parse_target_config::<WebhookConfig>(target)?;
            let mut req = Client::new().post(config.url).json(&json!({
            let mut req = client.post(config.url).json(&json!({
                "event": crate::config::NOTIFICATION_EVENT_DAILY_SUMMARY,
                "summary": summary,
                "message": message,
@@ -736,7 +862,7 @@ impl NotificationManager {
        }
        "telegram" => {
            let config = parse_target_config::<TelegramConfig>(target)?;
            Client::new()
            client
                .post(format!(
                    "https://api.telegram.org/bot{}/sendMessage",
                    config.bot_token
@@ -791,6 +917,17 @@ impl NotificationManager {
    }
}

fn delay_until_next_minute_boundary(now: chrono::DateTime<chrono::Local>) -> Duration {
    let remaining_seconds = 60_u64.saturating_sub(now.second() as u64).max(1);
    let mut delay = Duration::from_secs(remaining_seconds);
    if now.nanosecond() > 0 {
        delay = delay
            .checked_sub(Duration::from_nanos(now.nanosecond() as u64))
            .unwrap_or_else(|| Duration::from_millis(1));
    }
    delay
}

async fn _unused_ensure_public_endpoint(raw: &str) -> Result<(), Box<dyn std::error::Error>> {
    let url = Url::parse(raw)?;
    let host = match url.host_str() {
@@ -852,9 +989,38 @@ fn is_private_ip(ip: IpAddr) -> bool {
mod tests {
    use super::*;
    use crate::db::JobState;
    use std::sync::{
        Arc,
        atomic::{AtomicUsize, Ordering},
    };
    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    use tokio::net::TcpListener;

    fn scheduled_test_time(hour: u32, minute: u32) -> chrono::DateTime<chrono::Local> {
        chrono::Local::now()
            .with_hour(hour)
            .and_then(|value| value.with_minute(minute))
            .and_then(|value| value.with_second(0))
            .and_then(|value| value.with_nanosecond(0))
            .unwrap_or_else(chrono::Local::now)
    }

    async fn add_daily_summary_webhook_target(
        db: &Db,
        addr: std::net::SocketAddr,
    ) -> NotificationResult<()> {
        let config_json = serde_json::json!({ "url": format!("http://{}", addr) }).to_string();
        db.add_notification_target(
            "daily-summary",
            "webhook",
            &config_json,
            "[\"daily.summary\"]",
            true,
        )
        .await?;
        Ok(())
    }

    #[tokio::test]
    async fn test_webhook_errors_on_non_success()
    -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
@@ -896,7 +1062,7 @@ mod tests {
        enabled: true,
        created_at: chrono::Utc::now(),
    };
    let event = AlchemistEvent::JobStateChanged {
    let event = NotifiableEvent::JobStateChanged {
        job_id: 1,
        status: crate::db::JobState::Failed,
    };
@@ -976,7 +1142,7 @@ mod tests {
        enabled: true,
        created_at: chrono::Utc::now(),
    };
    let event = AlchemistEvent::JobStateChanged {
    let event = NotifiableEvent::JobStateChanged {
        job_id: job.id,
        status: JobState::Failed,
    };
@@ -1001,4 +1167,154 @@ mod tests {
    let _ = std::fs::remove_file(db_path);
    Ok(())
}

#[tokio::test]
async fn daily_summary_retries_after_failed_delivery_and_marks_success()
-> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut db_path = std::env::temp_dir();
    let token: u64 = rand::random();
    db_path.push(format!("alchemist_notifications_daily_retry_{}.db", token));

    let db = Db::new(db_path.to_string_lossy().as_ref()).await?;
    let mut test_config = crate::config::Config::default();
    test_config.notifications.allow_local_notifications = true;
    test_config.notifications.daily_summary_time_local = "09:00".to_string();
    let config = Arc::new(RwLock::new(test_config));
    let manager = NotificationManager::new(db.clone(), config);

    let listener = match TcpListener::bind("127.0.0.1:0").await {
        Ok(listener) => listener,
        Err(err) if err.kind() == std::io::ErrorKind::PermissionDenied => {
            return Ok(());
        }
        Err(err) => return Err(err.into()),
    };
    let addr = listener.local_addr()?;
    add_daily_summary_webhook_target(&db, addr).await?;

    let request_count = Arc::new(AtomicUsize::new(0));
    let request_count_task = request_count.clone();
    let listener_task = tokio::spawn(async move {
        loop {
            let Ok((mut socket, _)) = listener.accept().await else {
                break;
            };
            let mut buf = [0u8; 1024];
            let _ = socket.read(&mut buf).await;
            let index = request_count_task.fetch_add(1, Ordering::SeqCst);
            let response = if index == 0 {
                "HTTP/1.1 500 Internal Server Error\r\nContent-Length: 0\r\n\r\n"
            } else {
                "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"
            };
            let _ = socket.write_all(response.as_bytes()).await;
        }
    });

    let first_now = scheduled_test_time(9, 5);
    manager.maybe_send_daily_summary_at(first_now).await?;
    assert_eq!(request_count.load(Ordering::SeqCst), 1);
    assert_eq!(
        db.get_preference(DAILY_SUMMARY_LAST_SUCCESS_KEY).await?,
        None
    );

    manager
        .maybe_send_daily_summary_at(first_now + chrono::Duration::minutes(1))
        .await?;
    assert_eq!(request_count.load(Ordering::SeqCst), 2);
    assert_eq!(
        db.get_preference(DAILY_SUMMARY_LAST_SUCCESS_KEY).await?,
        Some(first_now.format("%Y-%m-%d").to_string())
    );

    listener_task.abort();
    let _ = std::fs::remove_file(db_path);
    Ok(())
}

#[tokio::test]
async fn daily_summary_is_restart_safe_after_successful_delivery()
-> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut db_path = std::env::temp_dir();
    let token: u64 = rand::random();
    db_path.push(format!(
        "alchemist_notifications_daily_restart_{}.db",
        token
    ));

    let db = Db::new(db_path.to_string_lossy().as_ref()).await?;
    let mut test_config = crate::config::Config::default();
    test_config.notifications.allow_local_notifications = true;
    test_config.notifications.daily_summary_time_local = "09:00".to_string();
    let config = Arc::new(RwLock::new(test_config));

    let listener = match TcpListener::bind("127.0.0.1:0").await {
        Ok(listener) => listener,
        Err(err) if err.kind() == std::io::ErrorKind::PermissionDenied => {
            return Ok(());
        }
        Err(err) => return Err(err.into()),
    };
    let addr = listener.local_addr()?;
    add_daily_summary_webhook_target(&db, addr).await?;

    let request_count = Arc::new(AtomicUsize::new(0));
    let request_count_task = request_count.clone();
    let listener_task = tokio::spawn(async move {
        loop {
            let Ok((mut socket, _)) = listener.accept().await else {
                break;
            };
            let mut buf = [0u8; 1024];
            let _ = socket.read(&mut buf).await;
            request_count_task.fetch_add(1, Ordering::SeqCst);
            let _ = socket
                .write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
                .await;
        }
    });

    let first_now = scheduled_test_time(9, 2);
    let manager = NotificationManager::new(db.clone(), config.clone());
    manager.maybe_send_daily_summary_at(first_now).await?;
    assert_eq!(request_count.load(Ordering::SeqCst), 1);

    let restarted_manager = NotificationManager::new(db.clone(), config.clone());
    restarted_manager
        .maybe_send_daily_summary_at(first_now + chrono::Duration::minutes(10))
        .await?;
    assert_eq!(request_count.load(Ordering::SeqCst), 1);

    listener_task.abort();
    let _ = std::fs::remove_file(db_path);
    Ok(())
}

#[tokio::test]
async fn daily_summary_marks_day_sent_when_no_targets_are_eligible()
-> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let mut db_path = std::env::temp_dir();
    let token: u64 = rand::random();
    db_path.push(format!(
        "alchemist_notifications_daily_no_targets_{}.db",
        token
    ));

    let db = Db::new(db_path.to_string_lossy().as_ref()).await?;
    let mut test_config = crate::config::Config::default();
    test_config.notifications.daily_summary_time_local = "09:00".to_string();
    let config = Arc::new(RwLock::new(test_config));
    let manager = NotificationManager::new(db.clone(), config);

    let now = scheduled_test_time(9, 1);
    manager.maybe_send_daily_summary_at(now).await?;
    assert_eq!(
        db.get_preference(DAILY_SUMMARY_LAST_SUCCESS_KEY).await?,
        Some(now.format("%Y-%m-%d").to_string())
    );

    let _ = std::fs::remove_file(db_path);
    Ok(())
}
}

@@ -17,6 +17,7 @@ pub struct Transcoder {
    // so there is no deadlock risk. Contention is negligible (≤ concurrent_jobs entries).
    cancel_channels: Arc<Mutex<HashMap<i64, oneshot::Sender<()>>>>,
    pending_cancels: Arc<Mutex<HashSet<i64>>>,
    pub(crate) cancel_requested: Arc<tokio::sync::RwLock<HashSet<i64>>>,
}

pub struct TranscodeRequest<'a> {
@@ -28,6 +29,8 @@ pub struct TranscodeRequest<'a> {
    pub metadata: &'a crate::media::pipeline::MediaMetadata,
    pub plan: &'a TranscodePlan,
    pub observer: Option<Arc<dyn ExecutionObserver>>,
    pub clip_start_seconds: Option<f64>,
    pub clip_duration_seconds: Option<f64>,
}

#[allow(async_fn_in_trait)]
@@ -80,9 +83,22 @@ impl Transcoder {
    Self {
        cancel_channels: Arc::new(Mutex::new(HashMap::new())),
        pending_cancels: Arc::new(Mutex::new(HashSet::new())),
        cancel_requested: Arc::new(tokio::sync::RwLock::new(HashSet::new())),
    }
}

pub async fn is_cancel_requested(&self, job_id: i64) -> bool {
    self.cancel_requested.read().await.contains(&job_id)
}

pub async fn remove_cancel_request(&self, job_id: i64) {
    self.cancel_requested.write().await.remove(&job_id);
}

pub async fn add_cancel_request(&self, job_id: i64) {
    self.cancel_requested.write().await.insert(job_id);
}

pub fn cancel_job(&self, job_id: i64) -> bool {
    let mut channels = match self.cancel_channels.lock() {
        Ok(channels) => channels,
@@ -173,6 +189,7 @@ impl Transcoder {
        request.plan,
    )
    .with_hardware(request.hw_info)
    .with_clip(request.clip_start_seconds, request.clip_duration_seconds)
    .build()?;

    info!("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━");

@@ -15,6 +15,7 @@ use chrono::Utc;
use rand::Rng;
use std::net::SocketAddr;
use std::sync::Arc;
use tracing::error;

#[derive(serde::Deserialize)]
pub(crate) struct LoginPayload {
@@ -32,11 +33,13 @@ pub(crate) async fn login_handler(
}

let mut is_valid = true;
let user_result = state
    .db
    .get_user_by_username(&payload.username)
    .await
    .unwrap_or(None);
let user_result = match state.db.get_user_by_username(&payload.username).await {
    Ok(user) => user,
    Err(err) => {
        error!("Login lookup failed for '{}': {}", payload.username, err);
        return (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response();
    }
};

// A valid argon2 static hash of a random string used to simulate work and equalize timing
const DUMMY_HASH: &str = "$argon2id$v=19$m=19456,t=2,p=1$c2FsdHN0cmluZzEyMzQ1Ng$1tJ2tA109qj15m3u5+kS/sX5X1UoZ6/H9b/30tX9N/g";

@@ -10,7 +10,11 @@ use axum::{
    response::{IntoResponse, Response},
};
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use std::{
    path::{Path as FsPath, PathBuf},
    sync::Arc,
    time::SystemTime,
};

#[derive(Serialize)]
struct BlockedJob {
@@ -24,6 +28,17 @@ struct BlockedJobsResponse {
    blocked: Vec<BlockedJob>,
}

#[derive(Deserialize)]
pub(crate) struct EnqueueJobPayload {
    path: String,
}

#[derive(Serialize)]
pub(crate) struct EnqueueJobResponse {
    enqueued: bool,
    message: String,
}

pub(crate) fn blocked_jobs_response(message: impl Into<String>, blocked: &[Job]) -> Response {
    let payload = BlockedJobsResponse {
        message: message.into(),
@@ -38,13 +53,175 @@ pub(crate) fn blocked_jobs_response(message: impl Into<String>, blocked: &[Job])
    (StatusCode::CONFLICT, axum::Json(payload)).into_response()
}

fn resolve_source_root(path: &FsPath, watch_dirs: &[crate::db::WatchDir]) -> Option<PathBuf> {
    watch_dirs
        .iter()
        .map(|watch_dir| PathBuf::from(&watch_dir.path))
        .filter(|watch_dir| path.starts_with(watch_dir))
        .max_by_key(|watch_dir| watch_dir.components().count())
}
|
||||
|
||||
async fn purge_resume_sessions_for_jobs(state: &AppState, ids: &[i64]) {
    let sessions = match state.db.get_resume_sessions_by_job_ids(ids).await {
        Ok(sessions) => sessions,
        Err(err) => {
            tracing::warn!("Failed to load resume sessions for purge: {}", err);
            return;
        }
    };

    for session in sessions {
        if let Err(err) = state.db.delete_resume_session(session.job_id).await {
            tracing::warn!(
                job_id = session.job_id,
                "Failed to delete resume session rows: {err}"
            );
            continue;
        }

        let temp_dir = PathBuf::from(&session.temp_dir);
        if temp_dir.exists() {
            if let Err(err) = tokio::fs::remove_dir_all(&temp_dir).await {
                tracing::warn!(
                    job_id = session.job_id,
                    path = %temp_dir.display(),
                    "Failed to remove resume temp dir: {err}"
                );
            }
        }
    }
}

pub(crate) async fn enqueue_job_handler(
    State(state): State<Arc<AppState>>,
    axum::Json(payload): axum::Json<EnqueueJobPayload>,
) -> impl IntoResponse {
    let submitted_path = payload.path.trim();
    if submitted_path.is_empty() {
        return (
            StatusCode::BAD_REQUEST,
            axum::Json(EnqueueJobResponse {
                enqueued: false,
                message: "Path must not be empty.".to_string(),
            }),
        )
            .into_response();
    }

    let requested_path = PathBuf::from(submitted_path);
    if !requested_path.is_absolute() {
        return (
            StatusCode::BAD_REQUEST,
            axum::Json(EnqueueJobResponse {
                enqueued: false,
                message: "Path must be absolute.".to_string(),
            }),
        )
            .into_response();
    }

    let canonical_path = match std::fs::canonicalize(&requested_path) {
        Ok(path) => path,
        Err(err) => {
            return (
                StatusCode::BAD_REQUEST,
                axum::Json(EnqueueJobResponse {
                    enqueued: false,
                    message: format!("Unable to resolve path: {err}"),
                }),
            )
                .into_response();
        }
    };

    let metadata = match std::fs::metadata(&canonical_path) {
        Ok(metadata) => metadata,
        Err(err) => {
            return (
                StatusCode::BAD_REQUEST,
                axum::Json(EnqueueJobResponse {
                    enqueued: false,
                    message: format!("Unable to read file metadata: {err}"),
                }),
            )
                .into_response();
        }
    };
    if !metadata.is_file() {
        return (
            StatusCode::BAD_REQUEST,
            axum::Json(EnqueueJobResponse {
                enqueued: false,
                message: "Path must point to a file.".to_string(),
            }),
        )
            .into_response();
    }

    let extension = canonical_path
        .extension()
        .and_then(|value| value.to_str())
        .map(|value| value.to_ascii_lowercase());
    let supported = crate::media::scanner::Scanner::new().extensions;
    if extension
        .as_deref()
        .is_none_or(|value| !supported.iter().any(|candidate| candidate == value))
    {
        return (
            StatusCode::BAD_REQUEST,
            axum::Json(EnqueueJobResponse {
                enqueued: false,
                message: "File type is not supported for enqueue.".to_string(),
            }),
        )
            .into_response();
    }

    let watch_dirs = match state.db.get_watch_dirs().await {
        Ok(watch_dirs) => watch_dirs,
        Err(err) => {
            return (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response();
        }
    };

    let discovered = crate::media::pipeline::DiscoveredMedia {
        path: canonical_path.clone(),
        mtime: metadata.modified().unwrap_or(SystemTime::UNIX_EPOCH),
        source_root: resolve_source_root(&canonical_path, &watch_dirs),
    };

    match crate::media::pipeline::enqueue_discovered_with_db(state.db.as_ref(), discovered).await {
        Ok(true) => (
            StatusCode::OK,
            axum::Json(EnqueueJobResponse {
                enqueued: true,
                message: format!("Enqueued {}.", canonical_path.display()),
            }),
        )
            .into_response(),
        Ok(false) => (
            StatusCode::OK,
            axum::Json(EnqueueJobResponse {
                enqueued: false,
                message:
                    "File was not enqueued because it matched existing output or dedupe rules."
                        .to_string(),
            }),
        )
            .into_response(),
        Err(err) => (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response(),
    }
}

pub(crate) async fn request_job_cancel(state: &AppState, job: &Job) -> Result<bool> {
    state.transcoder.add_cancel_request(job.id).await;
    match job.status {
        JobState::Queued => {
            state
                .db
                .update_job_status(job.id, JobState::Cancelled)
                .await?;
            state.transcoder.remove_cancel_request(job.id).await;
            Ok(true)
        }
        JobState::Analyzing | JobState::Resuming => {
@@ -55,6 +232,7 @@ pub(crate) async fn request_job_cancel(state: &AppState, job: &Job) -> Result<bo
                .db
                .update_job_status(job.id, JobState::Cancelled)
                .await?;
            state.transcoder.remove_cancel_request(job.id).await;
            Ok(true)
        }
        JobState::Encoding | JobState::Remuxing => Ok(state.transcoder.cancel_job(job.id)),
@@ -162,17 +340,49 @@ pub(crate) async fn batch_jobs_handler(

    match payload.action.as_str() {
        "cancel" => {
            let mut count = 0_u64;
            // Add all cancel requests first (in-memory, cheap).
            for job in &jobs {
                match request_job_cancel(&state, job).await {
                    Ok(true) => count += 1,
                    Ok(false) => {}
                    Err(e) if is_row_not_found(&e) => {}
                state.transcoder.add_cancel_request(job.id).await;
            }

            // Collect IDs that can be immediately set to Cancelled in the DB.
            let mut immediate_ids: Vec<i64> = Vec::new();
            let mut active_count: u64 = 0;

            for job in &jobs {
                match job.status {
                    JobState::Queued => {
                        immediate_ids.push(job.id);
                    }
                    JobState::Analyzing | JobState::Resuming
                        if state.transcoder.cancel_job(job.id) =>
                    {
                        immediate_ids.push(job.id);
                    }
                    JobState::Encoding | JobState::Remuxing
                        if state.transcoder.cancel_job(job.id) =>
                    {
                        active_count += 1;
                    }
                    _ => {}
                }
            }

            // Single batch DB update instead of N individual queries.
            if !immediate_ids.is_empty() {
                match state.db.batch_cancel_jobs(&immediate_ids).await {
                    Ok(_) => {}
                    Err(e) => {
                        return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response();
                    }
                }
                // Remove cancel requests for jobs already resolved in DB.
                for id in &immediate_ids {
                    state.transcoder.remove_cancel_request(*id).await;
                }
            }

            let count = immediate_ids.len() as u64 + active_count;
            axum::Json(serde_json::json!({ "count": count })).into_response()
        }
        "delete" | "restart" => {
@@ -191,7 +401,12 @@ pub(crate) async fn batch_jobs_handler(
    };

    match result {
        Ok(count) => axum::Json(serde_json::json!({ "count": count })).into_response(),
        Ok(count) => {
            if payload.action == "delete" {
                purge_resume_sessions_for_jobs(state.as_ref(), &payload.ids).await;
            }
            axum::Json(serde_json::json!({ "count": count })).into_response()
        }
        Err(e) => (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
    }
}
@@ -235,8 +450,13 @@ pub(crate) async fn restart_failed_handler(
pub(crate) async fn clear_completed_handler(
    State(state): State<Arc<AppState>>,
) -> impl IntoResponse {
    let completed_job_ids = match state.db.get_jobs_by_status(JobState::Completed).await {
        Ok(jobs) => jobs.into_iter().map(|job| job.id).collect::<Vec<_>>(),
        Err(e) => return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
    };
    match state.db.clear_completed_jobs().await {
        Ok(count) => {
            purge_resume_sessions_for_jobs(state.as_ref(), &completed_job_ids).await;
            let message = if count == 0 {
                "No completed jobs were waiting to be cleared.".to_string()
            } else if count == 1 {
@@ -289,7 +509,10 @@ pub(crate) async fn delete_job_handler(
    state.transcoder.cancel_job(id);

    match state.db.delete_job(id).await {
        Ok(_) => StatusCode::OK.into_response(),
        Ok(_) => {
            purge_resume_sessions_for_jobs(state.as_ref(), &[id]).await;
            StatusCode::OK.into_response()
        }
        Err(e) if is_row_not_found(&e) => StatusCode::NOT_FOUND.into_response(),
        Err(e) => (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
    }
@@ -330,6 +553,7 @@ pub(crate) struct JobDetailResponse {
    job_failure_summary: Option<String>,
    decision_explanation: Option<Explanation>,
    failure_explanation: Option<Explanation>,
    queue_position: Option<u32>,
}

pub(crate) async fn get_job_detail_handler(
@@ -342,24 +566,18 @@ pub(crate) async fn get_job_detail_handler(
        Err(e) => return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
    };

    // Avoid long probes while the job is still active.
    let metadata = match job.status {
        JobState::Queued | JobState::Analyzing => None,
        _ => {
            let analyzer = crate::media::analyzer::FfmpegAnalyzer;
            use crate::media::pipeline::Analyzer;
            analyzer
                .analyze(std::path::Path::new(&job.input_path))
                .await
                .ok()
                .map(|analysis| analysis.metadata)
        }
    };
    let metadata = job.input_metadata();

    // Try to get encode stats (using the subquery result or a specific query)
    // For now we'll just query the encode_stats table if completed
    let encode_stats = if job.status == JobState::Completed {
        state.db.get_encode_stats_by_job_id(id).await.ok()
        match state.db.get_encode_stats_by_job_id(id).await {
            Ok(stats) => Some(stats),
            Err(err) if is_row_not_found(&err) => None,
            Err(err) => {
                return (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response();
            }
        }
    } else {
        None
    };
@@ -400,11 +618,21 @@ pub(crate) async fn get_job_detail_handler(
        (None, None)
    };

    let encode_attempts = state
        .db
        .get_encode_attempts_by_job(id)
        .await
        .unwrap_or_default();
    let encode_attempts = match state.db.get_encode_attempts_by_job(id).await {
        Ok(attempts) => attempts,
        Err(err) => return (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response(),
    };

    let queue_position = if job.status == JobState::Queued {
        match state.db.get_queue_position(id).await {
            Ok(position) => position,
            Err(err) => {
                return (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response();
            }
        }
    } else {
        None
    };

    axum::Json(JobDetailResponse {
        job,
@@ -415,6 +643,7 @@ pub(crate) async fn get_job_detail_handler(
        job_failure_summary,
        decision_explanation,
        failure_explanation,
        queue_position,
    })
    .into_response()
}

@@ -76,16 +76,26 @@ pub(crate) async fn auth_middleware(
    let path = req.uri().path();
    let method = req.method().clone();

    if state.setup_required.load(Ordering::Relaxed)
        && path != "/api/health"
        && path != "/api/ready"
        && !request_is_lan(&req)
    if state.setup_required.load(Ordering::Relaxed) && path != "/api/health" && path != "/api/ready"
    {
        return (
            StatusCode::FORBIDDEN,
            "Alchemist setup is only available from the local network",
        )
            .into_response();
        let allowed = if let Some(expected_token) = &state.setup_token {
            // Token mode: require `?token=<value>` regardless of client IP.
            req.uri()
                .query()
                .and_then(|q| q.split('&').find_map(|pair| pair.strip_prefix("token=")))
                .map(|t| t == expected_token.as_str())
                .unwrap_or(false)
        } else {
            request_is_lan(&req, &state.trusted_proxies)
        };

        if !allowed {
            return (
                StatusCode::FORBIDDEN,
                "Alchemist setup is only available from the local network",
            )
                .into_response();
        }
    }

    // 1. API Protection: Only lock down /api routes
@@ -148,12 +158,12 @@ pub(crate) async fn auth_middleware(
    next.run(req).await
}

fn request_is_lan(req: &Request) -> bool {
fn request_is_lan(req: &Request, trusted_proxies: &[IpAddr]) -> bool {
    let direct_peer = req
        .extensions()
        .get::<ConnectInfo<SocketAddr>>()
        .map(|info| info.0.ip());
    let resolved = request_ip(req);
    let resolved = request_ip(req, trusted_proxies);

    // If resolved IP differs from direct peer, forwarded headers were used.
    // Warn operators so misconfigured proxies surface in logs.
@@ -216,7 +226,7 @@ pub(crate) async fn rate_limit_middleware(
        return next.run(req).await;
    }

    let ip = request_ip(&req).unwrap_or(IpAddr::from([0, 0, 0, 0]));
    let ip = request_ip(&req, &state.trusted_proxies).unwrap_or(IpAddr::from([0, 0, 0, 0]));
    if !allow_global_request(&state, ip).await {
        return (StatusCode::TOO_MANY_REQUESTS, "Too many requests").into_response();
    }
@@ -287,18 +297,18 @@ pub(crate) fn get_cookie_value(headers: &axum::http::HeaderMap, name: &str) -> O
    None
}

pub(crate) fn request_ip(req: &Request) -> Option<IpAddr> {
pub(crate) fn request_ip(req: &Request, trusted_proxies: &[IpAddr]) -> Option<IpAddr> {
    let peer_ip = req
        .extensions()
        .get::<ConnectInfo<SocketAddr>>()
        .map(|info| info.0.ip());

    // Only trust proxy headers (X-Forwarded-For, X-Real-IP) when the direct
    // TCP peer is a loopback or private IP — i.e., a trusted reverse proxy.
    // This prevents external attackers from spoofing these headers to bypass
    // rate limiting.
    // TCP peer is a trusted reverse proxy. When trusted_proxies is non-empty,
    // only those exact IPs (plus loopback) are trusted. Otherwise, fall back
    // to trusting all RFC-1918 private ranges (legacy behaviour).
    if let Some(peer) = peer_ip {
        if is_trusted_peer(peer) {
        if is_trusted_peer(peer, trusted_proxies) {
            if let Some(xff) = req.headers().get("X-Forwarded-For") {
                if let Ok(xff_str) = xff.to_str() {
                    if let Some(ip_str) = xff_str.split(',').next() {
@@ -321,13 +331,27 @@ pub(crate) fn request_ip(req: &Request) -> Option<IpAddr> {
    peer_ip
}

/// Returns true if the peer IP is a loopback or private address,
/// meaning it is likely a local reverse proxy that can be trusted
/// to set forwarded headers.
fn is_trusted_peer(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => v4.is_loopback() || v4.is_private() || v4.is_link_local(),
        IpAddr::V6(v6) => v6.is_loopback() || v6.is_unique_local() || v6.is_unicast_link_local(),
/// Returns true if the peer IP may be trusted to set forwarded headers.
///
/// When `trusted_proxies` is non-empty, only loopback addresses and the
/// explicitly configured IPs are trusted, tightening the default which
/// previously trusted all RFC-1918 private ranges.
fn is_trusted_peer(ip: IpAddr, trusted_proxies: &[IpAddr]) -> bool {
    let is_loopback = match ip {
        IpAddr::V4(v4) => v4.is_loopback(),
        IpAddr::V6(v6) => v6.is_loopback(),
    };
    if is_loopback {
        return true;
    }
    if trusted_proxies.is_empty() {
        // Legacy: trust all private ranges when no explicit list is configured.
        match ip {
            IpAddr::V4(v4) => v4.is_private() || v4.is_link_local(),
            IpAddr::V6(v6) => v6.is_unique_local() || v6.is_unicast_link_local(),
        }
    } else {
        trusted_proxies.contains(&ip)
    }
}

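The tightened trust rule can be exercised in isolation. A standalone sketch of the same decision, std-only, with hypothetical allowlist values; the V6 legacy branch is simplified here because the example avoids the ULA/link-local helpers the crate uses:

```rust
use std::net::IpAddr;

// Mirrors the tightened rule: loopback is always trusted; with a non-empty
// allowlist only the listed IPs are trusted; otherwise fall back to trusting
// RFC-1918 private ranges (legacy behaviour).
fn trusted(ip: IpAddr, allowlist: &[IpAddr]) -> bool {
    if ip.is_loopback() {
        return true;
    }
    if allowlist.is_empty() {
        match ip {
            IpAddr::V4(v4) => v4.is_private() || v4.is_link_local(),
            IpAddr::V6(_) => false, // real code also trusts ULA/link-local V6
        }
    } else {
        allowlist.contains(&ip)
    }
}

fn main() {
    let proxy: IpAddr = "10.0.0.2".parse().unwrap();
    // Empty allowlist: any RFC-1918 peer is trusted (legacy behaviour).
    println!("{}", trusted(proxy, &[]));
    // Non-empty allowlist: 192.168.1.1 is private but not listed, so untrusted.
    println!("{}", trusted("192.168.1.1".parse().unwrap(), &[proxy]));
}
```

The behavioural change is the second call: a private-range peer that is not on the configured list no longer gets its `X-Forwarded-For` header honoured.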
@@ -17,7 +17,7 @@ mod tests;
use crate::Agent;
use crate::Transcoder;
use crate::config::Config;
use crate::db::{AlchemistEvent, Db, EventChannels};
use crate::db::{Db, EventChannels};
use crate::error::{AlchemistError, Result};
use crate::system::hardware::{HardwareInfo, HardwareProbeLog, HardwareState};
use axum::{
@@ -38,7 +38,7 @@ use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Instant;
use tokio::net::lookup_host;
use tokio::sync::{Mutex, RwLock, broadcast};
use tokio::sync::{Mutex, RwLock};
use tokio::time::Duration;
#[cfg(not(feature = "embed-web"))]
use tracing::warn;
@@ -71,7 +71,6 @@ pub struct AppState {
    pub transcoder: Arc<Transcoder>,
    pub scheduler: crate::scheduler::SchedulerHandle,
    pub event_channels: Arc<EventChannels>,
    pub tx: broadcast::Sender<AlchemistEvent>, // Legacy channel for transition
    pub setup_required: Arc<AtomicBool>,
    pub start_time: Instant,
    pub telemetry_runtime_id: String,
@@ -87,6 +86,10 @@ pub struct AppState {
    pub(crate) login_rate_limiter: Mutex<HashMap<IpAddr, RateLimitEntry>>,
    pub(crate) global_rate_limiter: Mutex<HashMap<IpAddr, RateLimitEntry>>,
    pub(crate) sse_connections: Arc<std::sync::atomic::AtomicUsize>,
    /// IPs whose proxy headers are trusted. Empty = trust all private ranges.
    pub(crate) trusted_proxies: Vec<IpAddr>,
    /// If set, setup endpoints require `?token=<value>` query parameter.
    pub(crate) setup_token: Option<String>,
}

pub struct RunServerArgs {
@@ -96,7 +99,6 @@ pub struct RunServerArgs {
    pub transcoder: Arc<Transcoder>,
    pub scheduler: crate::scheduler::SchedulerHandle,
    pub event_channels: Arc<EventChannels>,
    pub tx: broadcast::Sender<AlchemistEvent>, // Legacy channel for transition
    pub setup_required: bool,
    pub config_path: PathBuf,
    pub config_mutable: bool,
@@ -115,7 +117,6 @@ pub async fn run_server(args: RunServerArgs) -> Result<()> {
        transcoder,
        scheduler,
        event_channels,
        tx,
        setup_required,
        config_path,
        config_mutable,
@@ -145,6 +146,34 @@ pub async fn run_server(args: RunServerArgs) -> Result<()> {
    sys.refresh_cpu_usage();
    sys.refresh_memory();

    // Read setup token from environment (opt-in security layer).
    let setup_token = std::env::var("ALCHEMIST_SETUP_TOKEN").ok();
    if setup_token.is_some() {
        info!("ALCHEMIST_SETUP_TOKEN is set — setup endpoints require token query param");
    }

    // Parse trusted proxy IPs from config. Unparseable entries are logged and skipped.
    let trusted_proxies: Vec<IpAddr> = {
        let cfg = config.read().await;
        cfg.system
            .trusted_proxies
            .iter()
            .filter_map(|s| {
                s.parse::<IpAddr>()
                    .map_err(|_| {
                        error!("Invalid trusted_proxy entry (not a valid IP address): {s}");
                    })
                    .ok()
            })
            .collect()
    };
    if !trusted_proxies.is_empty() {
        info!(
            "Trusted proxies configured ({}): only these IPs will be trusted for X-Forwarded-For headers",
            trusted_proxies.len()
        );
    }

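The `filter_map` above logs entries that fail to parse and drops them while keeping the rest. The same skip-invalid pattern, reduced to std with `eprintln!` standing in for the `error!` macro:

```rust
use std::net::IpAddr;

// Parse a list of strings into IpAddrs, logging and skipping invalid entries.
fn parse_proxies(entries: &[&str]) -> Vec<IpAddr> {
    entries
        .iter()
        .filter_map(|s| {
            s.parse::<IpAddr>()
                .map_err(|_| eprintln!("Invalid trusted_proxy entry: {s}"))
                .ok()
        })
        .collect()
}

fn main() {
    let proxies = parse_proxies(&["127.0.0.1", "not-an-ip", "::1"]);
    // Only the two valid addresses survive.
    println!("{proxies:?}");
}
```

`map_err` runs the logging side effect, and the subsequent `.ok()` turns the `Result` into an `Option` so `filter_map` silently drops the failures.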
    let state = Arc::new(AppState {
        db,
        config,
@@ -152,7 +181,6 @@ pub async fn run_server(args: RunServerArgs) -> Result<()> {
        transcoder,
        scheduler,
        event_channels,
        tx,
        setup_required: Arc::new(AtomicBool::new(setup_required)),
        start_time: std::time::Instant::now(),
        telemetry_runtime_id: Uuid::new_v4().to_string(),
@@ -168,6 +196,8 @@ pub async fn run_server(args: RunServerArgs) -> Result<()> {
        login_rate_limiter: Mutex::new(HashMap::new()),
        global_rate_limiter: Mutex::new(HashMap::new()),
        sse_connections: Arc::new(std::sync::atomic::AtomicUsize::new(0)),
        trusted_proxies,
        setup_token,
    });

    // Clone agent for shutdown handler before moving state into router
@@ -311,6 +341,7 @@ fn app_router(state: Arc<AppState>) -> Router {
        // Canonical job list endpoint.
        .route("/api/jobs", get(jobs_table_handler))
        .route("/api/jobs/table", get(jobs_table_handler))
        .route("/api/jobs/enqueue", post(enqueue_job_handler))
        .route("/api/jobs/batch", post(batch_jobs_handler))
        .route("/api/logs/history", get(logs_history_handler))
        .route("/api/logs", delete(clear_logs_handler))
@@ -346,6 +377,7 @@ fn app_router(state: Arc<AppState>) -> Router {
            get(get_engine_mode_handler).post(set_engine_mode_handler),
        )
        .route("/api/engine/status", get(engine_status_handler))
        .route("/api/processor/status", get(processor_status_handler))
        .route(
            "/api/settings/transcode",
            get(get_transcode_settings_handler).post(update_transcode_settings_handler),

@@ -126,7 +126,7 @@ async fn run_library_health_scan(db: Arc<crate::db::Db>) {
    let semaphore = Arc::new(tokio::sync::Semaphore::new(2));

    stream::iter(jobs)
        .for_each_concurrent(Some(10), {
        .for_each_concurrent(None, {
            let db = db.clone();
            let counters = counters.clone();
            let semaphore = semaphore.clone();

@@ -461,47 +461,36 @@ fn normalize_notification_payload(
        unreachable!("notification config_json should always be an object here");
    };
    match payload.target_type.as_str() {
        "discord_webhook" | "discord" => {
            if !config_map.contains_key("webhook_url") {
                if let Some(endpoint_url) = payload.endpoint_url.as_ref() {
                    config_map.insert(
                        "webhook_url".to_string(),
                        JsonValue::String(endpoint_url.clone()),
                    );
                }
        "discord_webhook" | "discord" if !config_map.contains_key("webhook_url") => {
            if let Some(endpoint_url) = payload.endpoint_url.as_ref() {
                config_map.insert(
                    "webhook_url".to_string(),
                    JsonValue::String(endpoint_url.clone()),
                );
            }
        }
        "gotify" => {
            if !config_map.contains_key("server_url") {
                if let Some(endpoint_url) = payload.endpoint_url.as_ref() {
                    config_map.insert(
                        "server_url".to_string(),
                        JsonValue::String(endpoint_url.clone()),
                    );
                }
            if let Some(endpoint_url) = payload.endpoint_url.as_ref() {
                config_map
                    .entry("server_url".to_string())
                    .or_insert_with(|| JsonValue::String(endpoint_url.clone()));
            }
            if !config_map.contains_key("app_token") {
                if let Some(auth_token) = payload.auth_token.as_ref() {
                    config_map.insert(
                        "app_token".to_string(),
                        JsonValue::String(auth_token.clone()),
                    );
                }
            if let Some(auth_token) = payload.auth_token.as_ref() {
                config_map
                    .entry("app_token".to_string())
                    .or_insert_with(|| JsonValue::String(auth_token.clone()));
            }
        }
        "webhook" => {
            if !config_map.contains_key("url") {
                if let Some(endpoint_url) = payload.endpoint_url.as_ref() {
                    config_map.insert("url".to_string(), JsonValue::String(endpoint_url.clone()));
                }
            if let Some(endpoint_url) = payload.endpoint_url.as_ref() {
                config_map
                    .entry("url".to_string())
                    .or_insert_with(|| JsonValue::String(endpoint_url.clone()));
            }
            if !config_map.contains_key("auth_token") {
                if let Some(auth_token) = payload.auth_token.as_ref() {
                    config_map.insert(
                        "auth_token".to_string(),
                        JsonValue::String(auth_token.clone()),
                    );
                }
            if let Some(auth_token) = payload.auth_token.as_ref() {
                config_map
                    .entry("auth_token".to_string())
                    .or_insert_with(|| JsonValue::String(auth_token.clone()));
            }
        }
        _ => {}
@@ -641,9 +630,8 @@ pub(crate) async fn add_notification_handler(
    }

    match state.db.get_notification_targets().await {
        Ok(targets) => targets
            .into_iter()
            .find(|target| target.name == payload.name)
        Ok(mut targets) => targets
            .pop()
            .map(|target| axum::Json(notification_target_response(target)).into_response())
            .unwrap_or_else(|| StatusCode::OK.into_response()),
        Err(e) => (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
@@ -654,23 +642,23 @@ pub(crate) async fn delete_notification_handler(
    State(state): State<Arc<AppState>>,
    Path(id): Path<i64>,
) -> impl IntoResponse {
    let target = match state.db.get_notification_targets().await {
        Ok(targets) => targets.into_iter().find(|target| target.id == id),
    let target_index = match state.db.get_notification_targets().await {
        Ok(targets) => targets.iter().position(|target| target.id == id),
        Err(e) => return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
    };
    let Some(target) = target else {
    let Some(target_index) = target_index else {
        return StatusCode::NOT_FOUND.into_response();
    };

    let mut next_config = state.config.read().await.clone();
    let target_config_json = target.config_json.clone();
    let parsed_target_config_json =
        serde_json::from_str::<JsonValue>(&target_config_json).unwrap_or(JsonValue::Null);
    next_config.notifications.targets.retain(|candidate| {
        !(candidate.name == target.name
            && candidate.target_type == target.target_type
            && candidate.config_json == parsed_target_config_json)
    });
    if target_index >= next_config.notifications.targets.len() {
        return (
            StatusCode::INTERNAL_SERVER_ERROR,
            "notification settings projection is out of sync with config",
        )
            .into_response();
    }
    next_config.notifications.targets.remove(target_index);
    if let Err(response) = save_config_or_response(&state, &next_config).await {
        return *response;
    }
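The switch from `retain` on field equality to `position` plus `remove` fixes a duplicate-delete hazard: with two identically configured targets, the old `retain` predicate would drop both. A reduced sketch of the difference, using a hypothetical `Target` type:

```rust
// Hypothetical stand-in for a notification target; two entries may be identical.
#[derive(Clone, PartialEq, Debug)]
struct Target {
    name: String,
}

// Index-based delete: removes exactly one entry, even when duplicates exist.
fn delete_one(targets: &mut Vec<Target>, name: &str) -> bool {
    if let Some(index) = targets.iter().position(|t| t.name == name) {
        targets.remove(index);
        true
    } else {
        false
    }
}

fn main() {
    let template = vec![
        Target { name: "ops".into() },
        Target { name: "ops".into() },
    ];

    // retain on equality removes every matching entry.
    let mut by_equality = template.clone();
    let doomed = Target { name: "ops".into() };
    by_equality.retain(|candidate| *candidate != doomed);

    // position + remove deletes only the first match.
    let mut by_index = template.clone();
    delete_one(&mut by_index, "ops");

    println!("retain left {}, remove left {}", by_equality.len(), by_index.len());
}
```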
@@ -837,13 +825,8 @@ pub(crate) async fn add_schedule_handler(
    state.scheduler.trigger();

    match state.db.get_schedule_windows().await {
        Ok(windows) => windows
            .into_iter()
            .find(|window| {
                window.start_time == start_time
                    && window.end_time == end_time
                    && window.enabled == payload.enabled
            })
        Ok(mut windows) => windows
            .pop()
            .map(|window| axum::Json(serde_json::json!(window)).into_response())
            .unwrap_or_else(|| StatusCode::OK.into_response()),
        Err(e) => (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
@@ -854,22 +837,23 @@ pub(crate) async fn delete_schedule_handler(
    State(state): State<Arc<AppState>>,
    Path(id): Path<i64>,
) -> impl IntoResponse {
    let window = match state.db.get_schedule_windows().await {
        Ok(windows) => windows.into_iter().find(|window| window.id == id),
    let window_index = match state.db.get_schedule_windows().await {
        Ok(windows) => windows.iter().position(|window| window.id == id),
        Err(e) => return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
    };
    let Some(window) = window else {
    let Some(window_index) = window_index else {
        return StatusCode::NOT_FOUND.into_response();
    };

    let days_of_week: Vec<i32> = serde_json::from_str(&window.days_of_week).unwrap_or_default();
    let mut next_config = state.config.read().await.clone();
    next_config.schedule.windows.retain(|candidate| {
        !(candidate.start_time == window.start_time
            && candidate.end_time == window.end_time
            && candidate.enabled == window.enabled
            && candidate.days_of_week == days_of_week)
    });
    if window_index >= next_config.schedule.windows.len() {
        return (
            StatusCode::INTERNAL_SERVER_ERROR,
            "schedule settings projection is out of sync with config",
        )
            .into_response();
    }
    next_config.schedule.windows.remove(window_index);
    if let Err(response) = save_config_or_response(&state, &next_config).await {
        return *response;
    }

@@ -108,6 +108,10 @@ pub(crate) fn sse_message_for_system_event(event: &SystemEvent) -> SseMessage {
            event_name: "scan_completed",
            data: "{}".to_string(),
        },
        SystemEvent::EngineIdle => SseMessage {
            event_name: "engine_idle",
            data: "{}".to_string(),
        },
        SystemEvent::EngineStatusChanged => SseMessage {
            event_name: "engine_status_changed",
            data: "{}".to_string(),

@@ -1,7 +1,7 @@
//! System information, hardware info, resources, health handlers.

use super::{AppState, config_read_error_response};
use crate::media::pipeline::{Analyzer as _, Planner as _, TranscodeDecision};
use crate::media::pipeline::{Planner as _, TranscodeDecision};
use axum::{
    extract::State,
    http::StatusCode,
@@ -27,6 +27,17 @@ struct SystemResources {
    gpu_memory_percent: Option<f32>,
}

#[derive(Serialize)]
pub(crate) struct ProcessorStatusResponse {
    blocked_reason: Option<&'static str>,
    message: String,
    manual_paused: bool,
    scheduler_paused: bool,
    draining: bool,
    active_jobs: i64,
    concurrent_limit: usize,
}

#[derive(Serialize)]
struct DuplicateGroup {
    stem: String,
@@ -135,6 +146,54 @@ pub(crate) async fn system_resources_handler(State(state): State<Arc<AppState>>)
    axum::Json(value).into_response()
}

pub(crate) async fn processor_status_handler(State(state): State<Arc<AppState>>) -> Response {
    let stats = match state.db.get_job_stats().await {
        Ok(stats) => stats,
        Err(err) => return config_read_error_response("load processor status", &err),
    };

    let concurrent_limit = state.agent.concurrent_jobs_limit();
    let manual_paused = state.agent.is_manual_paused();
    let scheduler_paused = state.agent.is_scheduler_paused();
    let draining = state.agent.is_draining();
    let active_jobs = stats.active;

    let (blocked_reason, message) = if manual_paused {
        (
            Some("manual_paused"),
            "The engine is manually paused and will not start queued jobs.".to_string(),
        )
    } else if scheduler_paused {
        (
            Some("scheduled_pause"),
            "The schedule is currently pausing the engine.".to_string(),
        )
    } else if draining {
        (
            Some("draining"),
            "The engine is draining and will not start new queued jobs.".to_string(),
        )
    } else if active_jobs >= concurrent_limit as i64 {
        (
            Some("workers_busy"),
            "All worker slots are currently busy.".to_string(),
        )
    } else {
        (None, "Workers are available.".to_string())
    };

    axum::Json(ProcessorStatusResponse {
        blocked_reason,
        message,
        manual_paused,
        scheduler_paused,
        draining,
        active_jobs,
        concurrent_limit,
    })
    .into_response()
}

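`blocked_reason` is a priority chain: a manual pause wins over a scheduled pause, which wins over draining, which wins over a full worker pool. A standalone sketch of that precedence:

```rust
// Mirrors the handler's if/else chain; the first matching condition wins.
fn blocked_reason(
    manual_paused: bool,
    scheduler_paused: bool,
    draining: bool,
    active_jobs: i64,
    concurrent_limit: usize,
) -> Option<&'static str> {
    if manual_paused {
        Some("manual_paused")
    } else if scheduler_paused {
        Some("scheduled_pause")
    } else if draining {
        Some("draining")
    } else if active_jobs >= concurrent_limit as i64 {
        Some("workers_busy")
    } else {
        None
    }
}

fn main() {
    // Manual pause outranks every other condition, including a busy pool.
    println!("{:?}", blocked_reason(true, true, true, 4, 4));
    println!("{:?}", blocked_reason(false, false, false, 4, 4));
    println!("{:?}", blocked_reason(false, false, false, 1, 4));
}
```

Because only the highest-priority reason is reported, the UI can show a single actionable message while the boolean fields still expose the full state.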
pub(crate) async fn library_intelligence_handler(State(state): State<Arc<AppState>>) -> Response {
|
||||
use std::collections::HashMap;
|
||||
use std::path::Path;
|
||||
@@ -195,7 +254,6 @@ pub(crate) async fn library_intelligence_handler(State(state): State<Arc<AppStat
|
||||
return StatusCode::INTERNAL_SERVER_ERROR.into_response();
|
||||
}
|
||||
};
|
||||
let analyzer = crate::media::analyzer::FfmpegAnalyzer;
|
||||
let config_snapshot = state.config.read().await.clone();
|
||||
let hw_snapshot = state.hardware_state.snapshot().await;
|
||||
let planner = crate::media::planner::BasicPlanner::new(
|
||||
@@ -207,14 +265,16 @@ pub(crate) async fn library_intelligence_handler(State(state): State<Arc<AppStat
|
||||
if job.status == crate::db::JobState::Cancelled {
|
||||
continue;
|
||||
}
|
||||
let input_path = std::path::Path::new(&job.input_path);
|
||||
if !input_path.exists() {
|
||||
continue;
|
||||
}
|
||||
|
||||
let analysis = match analyzer.analyze(input_path).await {
|
||||
Ok(analysis) => analysis,
|
||||
Err(_) => continue,
|
||||
// Use stored metadata only — no live ffprobe spawning per job.
|
||||
let metadata = match job.input_metadata() {
|
||||
Some(m) => m,
|
||||
None => continue,
|
||||
};
|
||||
let analysis = crate::media::pipeline::MediaAnalysis {
|
||||
metadata,
|
||||
warnings: vec![],
|
||||
confidence: crate::media::pipeline::AnalysisConfidence::High,
|
||||
};
|
||||
|
||||
let profile: Option<crate::db::LibraryProfile> = state
|
||||
|
||||
@@ -61,7 +61,6 @@ where
        probe_summary: crate::system::hardware::ProbeSummary::default(),
    }));
    let hardware_probe_log = Arc::new(RwLock::new(HardwareProbeLog::default()));
    let (tx, _rx) = broadcast::channel(tx_capacity);
    let transcoder = Arc::new(Transcoder::new());

    // Create event channels before Agent
@@ -81,7 +80,6 @@ where
        transcoder.clone(),
        config.clone(),
        hardware_state.clone(),
        tx.clone(),
        event_channels.clone(),
        true,
    )
@@ -101,7 +99,6 @@ where
        transcoder,
        scheduler: scheduler.handle(),
        event_channels,
        tx,
        setup_required: Arc::new(AtomicBool::new(setup_required)),
        start_time: Instant::now(),
        telemetry_runtime_id: "test-runtime".to_string(),
@@ -120,6 +117,8 @@ where
        login_rate_limiter: Mutex::new(HashMap::new()),
        global_rate_limiter: Mutex::new(HashMap::new()),
        sse_connections: Arc::new(std::sync::atomic::AtomicUsize::new(0)),
        trusted_proxies: Vec::new(),
        setup_token: None,
    });

    Ok((state.clone(), app_router(state), config_path, db_path))
@@ -548,6 +547,69 @@ async fn engine_status_endpoint_reports_draining_state()
    Ok(())
}

#[tokio::test]
async fn processor_status_endpoint_reports_blocking_reason_precedence()
-> std::result::Result<(), Box<dyn std::error::Error>> {
    let (state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
    let token = create_session(state.db.as_ref()).await?;
    let (_job, input_path, output_path) = seed_job(state.db.as_ref(), JobState::Encoding).await?;

    let response = app
        .clone()
        .oneshot(auth_request(
            Method::GET,
            "/api/processor/status",
            &token,
            Body::empty(),
        ))
        .await?;
    assert_eq!(response.status(), StatusCode::OK);
    let payload: serde_json::Value = serde_json::from_str(&body_text(response).await)?;
    assert_eq!(payload["blocked_reason"], "workers_busy");

    state.agent.drain();
    let response = app
        .clone()
        .oneshot(auth_request(
            Method::GET,
            "/api/processor/status",
            &token,
            Body::empty(),
        ))
        .await?;
    let payload: serde_json::Value = serde_json::from_str(&body_text(response).await)?;
    assert_eq!(payload["blocked_reason"], "draining");

    state.agent.set_scheduler_paused(true);
    let response = app
        .clone()
        .oneshot(auth_request(
            Method::GET,
            "/api/processor/status",
            &token,
            Body::empty(),
        ))
        .await?;
    let payload: serde_json::Value = serde_json::from_str(&body_text(response).await)?;
    assert_eq!(payload["blocked_reason"], "scheduled_pause");

    state.agent.pause();
    let response = app
        .clone()
        .oneshot(auth_request(
            Method::GET,
            "/api/processor/status",
            &token,
            Body::empty(),
        ))
        .await?;
    let payload: serde_json::Value = serde_json::from_str(&body_text(response).await)?;
    assert_eq!(payload["blocked_reason"], "manual_paused");

    cleanup_paths(&[input_path, output_path, config_path, db_path]);
    Ok(())
}

#[tokio::test]
async fn read_only_api_token_allows_observability_only_routes()
-> std::result::Result<(), Box<dyn std::error::Error>> {
@@ -1160,6 +1222,35 @@ async fn public_clients_can_reach_login_after_setup()
    Ok(())
}

#[tokio::test]
async fn login_returns_internal_error_when_user_lookup_fails()
-> std::result::Result<(), Box<dyn std::error::Error>> {
    let (state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
    state.db.pool.close().await;

    let mut request = remote_request(
        Method::POST,
        "/api/auth/login",
        Body::from(
            json!({
                "username": "tester",
                "password": "not-important"
            })
            .to_string(),
        ),
    );
    request.headers_mut().insert(
        header::CONTENT_TYPE,
        header::HeaderValue::from_static("application/json"),
    );

    let response = app.clone().oneshot(request).await?;
    assert_eq!(response.status(), StatusCode::INTERNAL_SERVER_ERROR);

    cleanup_paths(&[config_path, db_path]);
    Ok(())
}

#[tokio::test]
async fn settings_bundle_requires_auth_after_setup()
-> std::result::Result<(), Box<dyn std::error::Error>> {
@@ -1364,6 +1455,93 @@ async fn settings_bundle_put_projects_extended_settings_to_db()
    Ok(())
}

#[tokio::test]
async fn delete_notification_removes_only_one_duplicate_target()
-> std::result::Result<(), Box<dyn std::error::Error>> {
    let duplicate_target = crate::config::NotificationTargetConfig {
        name: "Discord".to_string(),
        target_type: "discord_webhook".to_string(),
        config_json: serde_json::json!({
            "webhook_url": "https://discord.com/api/webhooks/test"
        }),
        endpoint_url: None,
        auth_token: None,
        events: vec!["encode.completed".to_string()],
        enabled: true,
    };
    let (state, app, config_path, db_path) = build_test_app(false, 8, |config| {
        config.notifications.targets = vec![duplicate_target.clone(), duplicate_target.clone()];
    })
    .await?;
    let projected = state.config.read().await.clone();
    crate::settings::project_config_to_db(state.db.as_ref(), &projected).await?;
    let token = create_session(state.db.as_ref()).await?;

    let targets = state.db.get_notification_targets().await?;
    assert_eq!(targets.len(), 2);

    let response = app
        .clone()
        .oneshot(auth_request(
            Method::DELETE,
            &format!("/api/settings/notifications/{}", targets[0].id),
            &token,
            Body::empty(),
        ))
        .await?;
    assert_eq!(response.status(), StatusCode::OK);

    let persisted = crate::config::Config::load(config_path.as_path())?;
    assert_eq!(persisted.notifications.targets.len(), 1);

    let stored_targets = state.db.get_notification_targets().await?;
    assert_eq!(stored_targets.len(), 1);

    cleanup_paths(&[config_path, db_path]);
    Ok(())
}

#[tokio::test]
async fn delete_schedule_removes_only_one_duplicate_window()
-> std::result::Result<(), Box<dyn std::error::Error>> {
    let duplicate_window = crate::config::ScheduleWindowConfig {
        start_time: "22:00".to_string(),
        end_time: "06:00".to_string(),
        days_of_week: vec![1, 2, 3],
        enabled: true,
    };
    let (state, app, config_path, db_path) = build_test_app(false, 8, |config| {
        config.schedule.windows = vec![duplicate_window.clone(), duplicate_window.clone()];
    })
    .await?;
    let projected = state.config.read().await.clone();
    crate::settings::project_config_to_db(state.db.as_ref(), &projected).await?;
    let token = create_session(state.db.as_ref()).await?;

    let windows = state.db.get_schedule_windows().await?;
    assert_eq!(windows.len(), 2);

    let response = app
        .clone()
        .oneshot(auth_request(
            Method::DELETE,
            &format!("/api/settings/schedule/{}", windows[0].id),
            &token,
            Body::empty(),
        ))
        .await?;
    assert_eq!(response.status(), StatusCode::OK);

    let persisted = crate::config::Config::load(config_path.as_path())?;
    assert_eq!(persisted.schedule.windows.len(), 1);

    let stored_windows = state.db.get_schedule_windows().await?;
    assert_eq!(stored_windows.len(), 1);

    cleanup_paths(&[config_path, db_path]);
    Ok(())
}

#[tokio::test]
async fn raw_config_put_overwrites_divergent_db_projection()
-> std::result::Result<(), Box<dyn std::error::Error>> {
@@ -1616,6 +1794,219 @@ async fn job_detail_route_falls_back_to_legacy_failure_summary()
    Ok(())
}

#[tokio::test]
async fn job_detail_route_returns_internal_error_when_encode_attempts_query_fails()
-> std::result::Result<(), Box<dyn std::error::Error>> {
    let (state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
    let token = create_session(state.db.as_ref()).await?;
    let (job, input_path, output_path) = seed_job(state.db.as_ref(), JobState::Queued).await?;

    sqlx::query("DROP TABLE encode_attempts")
        .execute(&state.db.pool)
        .await?;

    let response = app
        .clone()
        .oneshot(auth_request(
            Method::GET,
            &format!("/api/jobs/{}/details", job.id),
            &token,
            Body::empty(),
        ))
        .await?;
    assert_eq!(response.status(), StatusCode::INTERNAL_SERVER_ERROR);

    cleanup_paths(&[input_path, output_path, config_path, db_path]);
    Ok(())
}

#[tokio::test]
async fn enqueue_job_endpoint_accepts_supported_absolute_files()
-> std::result::Result<(), Box<dyn std::error::Error>> {
    let (state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
    let token = create_session(state.db.as_ref()).await?;

    let input_path = temp_path("alchemist_enqueue_input", "mkv");
    std::fs::write(&input_path, b"test")?;
    let canonical_input = std::fs::canonicalize(&input_path)?;

    let response = app
        .clone()
        .oneshot(auth_json_request(
            Method::POST,
            "/api/jobs/enqueue",
            &token,
            json!({ "path": input_path.to_string_lossy() }),
        ))
        .await?;
    assert_eq!(response.status(), StatusCode::OK);

    let payload: serde_json::Value = serde_json::from_str(&body_text(response).await)?;
    assert_eq!(payload["enqueued"], true);
    assert!(
        state
            .db
            .get_job_by_input_path(canonical_input.to_string_lossy().as_ref())
            .await?
            .is_some()
    );

    cleanup_paths(&[input_path, config_path, db_path]);
    Ok(())
}

#[tokio::test]
async fn enqueue_job_endpoint_rejects_relative_paths_and_unsupported_extensions()
-> std::result::Result<(), Box<dyn std::error::Error>> {
    let (_state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
    let token = create_session(_state.db.as_ref()).await?;

    let response = app
        .clone()
        .oneshot(auth_json_request(
            Method::POST,
            "/api/jobs/enqueue",
            &token,
            json!({ "path": "relative/movie.mkv" }),
        ))
        .await?;
    assert_eq!(response.status(), StatusCode::BAD_REQUEST);

    let unsupported = temp_path("alchemist_enqueue_unsupported", "txt");
    std::fs::write(&unsupported, b"test")?;

    let response = app
        .clone()
        .oneshot(auth_json_request(
            Method::POST,
            "/api/jobs/enqueue",
            &token,
            json!({ "path": unsupported.to_string_lossy() }),
        ))
        .await?;
    assert_eq!(response.status(), StatusCode::BAD_REQUEST);

    cleanup_paths(&[unsupported, config_path, db_path]);
    Ok(())
}

#[tokio::test]
async fn enqueue_job_endpoint_returns_noop_for_generated_output_paths()
-> std::result::Result<(), Box<dyn std::error::Error>> {
    let (_state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
    let token = create_session(_state.db.as_ref()).await?;

    let generated_dir = temp_path("alchemist_enqueue_generated_dir", "dir");
    std::fs::create_dir_all(&generated_dir)?;
    let generated = generated_dir.join("movie-alchemist.mkv");
    std::fs::write(&generated, b"test")?;

    let response = app
        .clone()
        .oneshot(auth_json_request(
            Method::POST,
            "/api/jobs/enqueue",
            &token,
            json!({ "path": generated.to_string_lossy() }),
        ))
        .await?;
    assert_eq!(response.status(), StatusCode::OK);

    let payload: serde_json::Value = serde_json::from_str(&body_text(response).await)?;
    assert_eq!(payload["enqueued"], false);

    cleanup_paths(&[generated_dir, config_path, db_path]);
    Ok(())
}

#[tokio::test]
async fn delete_job_endpoint_purges_resume_session_temp_dir()
-> std::result::Result<(), Box<dyn std::error::Error>> {
    let (state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
    let token = create_session(state.db.as_ref()).await?;
    let (job, input_path, output_path) = seed_job(state.db.as_ref(), JobState::Failed).await?;

    let resume_dir = temp_path("alchemist_resume_delete", "dir");
    std::fs::create_dir_all(&resume_dir)?;
    std::fs::write(resume_dir.join("segment-00000.mkv"), b"segment")?;
    state
        .db
        .upsert_resume_session(&crate::db::UpsertJobResumeSessionInput {
            job_id: job.id,
            strategy: "segment_v1".to_string(),
            plan_hash: "plan".to_string(),
            mtime_hash: "mtime".to_string(),
            temp_dir: resume_dir.to_string_lossy().to_string(),
            concat_manifest_path: resume_dir
                .join("segments.ffconcat")
                .to_string_lossy()
                .to_string(),
            segment_length_secs: 120,
            status: "active".to_string(),
        })
        .await?;

    let response = app
        .clone()
        .oneshot(auth_request(
            Method::POST,
            &format!("/api/jobs/{}/delete", job.id),
            &token,
            Body::empty(),
        ))
        .await?;
    assert_eq!(response.status(), StatusCode::OK);
    assert!(state.db.get_resume_session(job.id).await?.is_none());
    assert!(!resume_dir.exists());

    cleanup_paths(&[resume_dir, input_path, output_path, config_path, db_path]);
    Ok(())
}

#[tokio::test]
async fn clear_completed_purges_resume_sessions()
-> std::result::Result<(), Box<dyn std::error::Error>> {
    let (state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
    let token = create_session(state.db.as_ref()).await?;
    let (job, input_path, output_path) = seed_job(state.db.as_ref(), JobState::Completed).await?;

    let resume_dir = temp_path("alchemist_resume_clear_completed", "dir");
    std::fs::create_dir_all(&resume_dir)?;
    std::fs::write(resume_dir.join("segment-00000.mkv"), b"segment")?;
    state
        .db
        .upsert_resume_session(&crate::db::UpsertJobResumeSessionInput {
            job_id: job.id,
            strategy: "segment_v1".to_string(),
            plan_hash: "plan".to_string(),
            mtime_hash: "mtime".to_string(),
            temp_dir: resume_dir.to_string_lossy().to_string(),
            concat_manifest_path: resume_dir
                .join("segments.ffconcat")
                .to_string_lossy()
                .to_string(),
            segment_length_secs: 120,
            status: "segments_complete".to_string(),
        })
        .await?;

    let response = app
        .clone()
        .oneshot(auth_request(
            Method::POST,
            "/api/jobs/clear-completed",
            &token,
            Body::empty(),
        ))
        .await?;
    assert_eq!(response.status(), StatusCode::OK);
    assert!(state.db.get_resume_session(job.id).await?.is_none());
    assert!(!resume_dir.exists());

    cleanup_paths(&[resume_dir, input_path, output_path, config_path, db_path]);
    Ok(())
}

#[tokio::test]
async fn delete_active_job_returns_conflict() -> std::result::Result<(), Box<dyn std::error::Error>>
{
@@ -1719,7 +2110,9 @@ async fn clear_completed_archives_jobs_and_preserves_stats()

    assert!(state.db.get_job_by_id(job.id).await?.is_none());
    let aggregated = state.db.get_aggregated_stats().await?;
    assert_eq!(aggregated.completed_jobs, 1);
    // Archived jobs are excluded from active stats.
    assert_eq!(aggregated.completed_jobs, 0);
    // encode_stats rows are preserved even after archiving.
    assert_eq!(aggregated.total_input_size, 2_000);
    assert_eq!(aggregated.total_output_size, 1_000);

@@ -162,7 +162,7 @@ fn browse_blocking(path: &Path) -> Result<FsBrowseResponse> {
        Vec::new()
    };

    entries.sort_by(|a, b| a.name.to_lowercase().cmp(&b.name.to_lowercase()));
    entries.sort_by_key(|entry| entry.name.to_lowercase());

    Ok(FsBrowseResponse {
        path: path.to_string_lossy().to_string(),

@@ -317,7 +317,6 @@ where
            selection_reason: String::new(),
            probe_summary: alchemist::system::hardware::ProbeSummary::default(),
        })),
        Arc::new(broadcast::channel(16).0),
        event_channels,
        false,
    );

@@ -49,6 +49,12 @@ async fn v0_2_5_fixture_upgrades_and_preserves_core_state() -> Result<()> {
    let notifications = db.get_notification_targets().await?;
    assert_eq!(notifications.len(), 1);
    assert_eq!(notifications[0].target_type, "discord_webhook");
    let notification_config: serde_json::Value =
        serde_json::from_str(&notifications[0].config_json)?;
    assert_eq!(
        notification_config["webhook_url"].as_str(),
        Some("https://discord.invalid/webhook")
    );

    let schedule_windows = db.get_schedule_windows().await?;
    assert_eq!(schedule_windows.len(), 1);
@@ -101,7 +107,7 @@ async fn v0_2_5_fixture_upgrades_and_preserves_core_state() -> Result<()> {
        .fetch_one(&pool)
        .await?
        .get("value");
    assert_eq!(schema_version, "8");
    assert_eq!(schema_version, "9");

    let min_compatible_version: String =
        sqlx::query("SELECT value FROM schema_info WHERE key = 'min_compatible_version'")
@@ -153,6 +159,45 @@ async fn v0_2_5_fixture_upgrades_and_preserves_core_state() -> Result<()> {
        .get("count");
    assert_eq!(job_failure_explanations_exists, 1);

    let notification_columns = sqlx::query("PRAGMA table_info(notification_targets)")
        .fetch_all(&pool)
        .await?
        .into_iter()
        .map(|row| row.get::<String, _>("name"))
        .collect::<Vec<_>>();
    assert!(
        notification_columns
            .iter()
            .any(|name| name == "endpoint_url")
    );
    assert!(notification_columns.iter().any(|name| name == "auth_token"));
    assert!(
        notification_columns
            .iter()
            .any(|name| name == "target_type_v2")
    );
    assert!(
        notification_columns
            .iter()
            .any(|name| name == "config_json")
    );

    let resume_sessions_exists: i64 = sqlx::query(
        "SELECT COUNT(*) as count FROM sqlite_master WHERE type = 'table' AND name = 'job_resume_sessions'",
    )
    .fetch_one(&pool)
    .await?
    .get("count");
    assert_eq!(resume_sessions_exists, 1);

    let resume_segments_exists: i64 = sqlx::query(
        "SELECT COUNT(*) as count FROM sqlite_master WHERE type = 'table' AND name = 'job_resume_segments'",
    )
    .fetch_one(&pool)
    .await?
    .get("count");
    assert_eq!(resume_segments_exists, 1);

    pool.close().await;
    drop(db);
    let _ = fs::remove_file(&db_path);

@@ -43,6 +43,20 @@ fn ffmpeg_ready() -> bool {
    ffmpeg_available() && ffprobe_available()
}

fn ffmpeg_has_encoder(name: &str) -> bool {
    Command::new("ffmpeg")
        .args(["-hide_banner", "-encoders"])
        .output()
        .ok()
        .map(|output| {
            output.status.success()
                && String::from_utf8_lossy(&output.stdout)
                    .lines()
                    .any(|line| line.contains(name))
        })
        .unwrap_or(false)
}

/// Get the path to test fixtures
fn fixtures_path() -> PathBuf {
    let mut path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
@@ -68,6 +82,75 @@ fn cleanup_temp_dir(path: &Path) {
    let _ = std::fs::remove_dir_all(path);
}

#[tokio::test]
async fn amd_vaapi_smoke_test_is_hardware_gated() -> Result<()> {
    let Some(device_path) = std::env::var("ALCHEMIST_TEST_AMD_VAAPI_DEVICE").ok() else {
        println!("Skipping test: ALCHEMIST_TEST_AMD_VAAPI_DEVICE not set");
        return Ok(());
    };
    if !ffmpeg_available() || !ffmpeg_has_encoder("h264_vaapi") {
        println!("Skipping test: ffmpeg or h264_vaapi encoder not available");
        return Ok(());
    }

    let status = Command::new("ffmpeg")
        .args([
            "-hide_banner",
            "-loglevel",
            "error",
            "-vaapi_device",
            &device_path,
            "-f",
            "lavfi",
            "-i",
            "testsrc=size=64x64:rate=1:d=1",
            "-vf",
            "format=nv12,hwupload",
            "-c:v",
            "h264_vaapi",
            "-f",
            "null",
            "-",
        ])
        .status()?;
    assert!(
        status.success(),
        "expected VAAPI smoke transcode to succeed"
    );
    Ok(())
}

#[tokio::test]
async fn amd_amf_smoke_test_is_hardware_gated() -> Result<()> {
    if std::env::var("ALCHEMIST_TEST_AMD_AMF").ok().as_deref() != Some("1") {
        println!("Skipping test: ALCHEMIST_TEST_AMD_AMF not set");
        return Ok(());
    }
    if !ffmpeg_available() || !ffmpeg_has_encoder("h264_amf") {
        println!("Skipping test: ffmpeg or h264_amf encoder not available");
        return Ok(());
    }

    let status = Command::new("ffmpeg")
        .args([
            "-hide_banner",
            "-loglevel",
            "error",
            "-f",
            "lavfi",
            "-i",
            "testsrc=size=64x64:rate=1:d=1",
            "-c:v",
            "h264_amf",
            "-f",
            "null",
            "-",
        ])
        .status()?;
    assert!(status.success(), "expected AMF smoke transcode to succeed");
    Ok(())
}

/// Create a test database
async fn create_test_db() -> Result<(Arc<Db>, PathBuf)> {
    let mut db_path = std::env::temp_dir();
@@ -120,7 +203,6 @@
            selection_reason: String::new(),
            probe_summary: alchemist::system::hardware::ProbeSummary::default(),
        })),
        Arc::new(broadcast::channel(16).0),
        event_channels,
        false,
    );

@@ -1,6 +1,6 @@
{
  "name": "alchemist-web-e2e",
  "version": "0.3.1-rc.3",
  "version": "0.3.1-rc.5",
  "private": true,
  "packageManager": "bun@1",
  "type": "module",
@@ -8,7 +8,7 @@
    "test": "playwright test",
    "test:headed": "playwright test --headed",
    "test:ui": "playwright test --ui",
    "test:reliability": "playwright test tests/settings-nonok.spec.ts tests/setup-recovery.spec.ts tests/setup-happy-path.spec.ts tests/new-user-redirect.spec.ts tests/stats-poller.spec.ts tests/jobs-actions-nonok.spec.ts tests/jobs-stability.spec.ts tests/library-intake-stability.spec.ts"
    "test:reliability": "playwright test tests/settings-nonok.spec.ts tests/setup-recovery.spec.ts tests/setup-happy-path.spec.ts tests/new-user-redirect.spec.ts tests/stats-poller.spec.ts tests/jobs-actions-nonok.spec.ts tests/jobs-stability.spec.ts tests/library-intake-stability.spec.ts tests/intelligence-actions.spec.ts"
  },
  "devDependencies": {
    "@playwright/test": "^1.54.2"

@@ -37,10 +37,10 @@ export default defineConfig({
  ],
  webServer: {
    command:
      "sh -c 'mkdir -p .runtime/media && cd .. && (cd web && bun install --frozen-lockfile && bun run build) && if [ -x ./target/debug/alchemist ]; then ./target/debug/alchemist --reset-auth; else cargo run --locked --no-default-features -- --reset-auth; fi'",
      "sh -c 'mkdir -p .runtime/media && rm -f .runtime/alchemist.db .runtime/alchemist.db-wal .runtime/alchemist.db-shm && cd .. && (cd web && bun install --frozen-lockfile && bun run build) && if [ -x ./target/debug/alchemist ]; then ./target/debug/alchemist --reset-auth; else cargo run --locked --no-default-features -- --reset-auth; fi'",
    url: `${BASE_URL}/api/health`,
    reuseExistingServer: false,
    timeout: 120_000,
    timeout: 300_000,
    env: {
      ALCHEMIST_CONFIG_PATH: CONFIG_PATH,
      ALCHEMIST_DB_PATH: DB_PATH,

web-e2e/tests/intelligence-actions.spec.ts (new file, 118 lines)
@@ -0,0 +1,118 @@
import { expect, test } from "@playwright/test";
import {
  type JobDetailFixture,
  fulfillJson,
  mockEngineStatus,
  mockJobDetails,
} from "./helpers";

const completedDetail: JobDetailFixture = {
  job: {
    id: 51,
    input_path: "/media/duplicates/movie-copy-1.mkv",
    output_path: "/output/movie-copy-1-av1.mkv",
    status: "completed",
    priority: 0,
    progress: 100,
    created_at: "2025-01-01T00:00:00Z",
    updated_at: "2025-01-02T00:00:00Z",
    vmaf_score: 95.1,
  },
  metadata: {
    duration_secs: 120,
    codec_name: "hevc",
    width: 1920,
    height: 1080,
    bit_depth: 10,
    size_bytes: 2_000_000_000,
    video_bitrate_bps: 12_000_000,
    container_bitrate_bps: 12_500_000,
    fps: 24,
    container: "mkv",
    audio_codec: "aac",
    audio_channels: 2,
    dynamic_range: "hdr10",
  },
  encode_stats: {
    input_size_bytes: 2_000_000_000,
    output_size_bytes: 900_000_000,
    compression_ratio: 0.55,
    encode_time_seconds: 1800,
    encode_speed: 1.6,
    avg_bitrate_kbps: 6000,
    vmaf_score: 95.1,
  },
  job_logs: [],
};

test.use({ storageState: undefined });

test.beforeEach(async ({ page }) => {
  await mockEngineStatus(page);
});

test("intelligence actions queue remux opportunities and review duplicate jobs", async ({
  page,
}) => {
  let enqueueCount = 0;

  await page.route("**/api/library/intelligence", async (route) => {
    await fulfillJson(route, 200, {
      duplicate_groups: [
        {
          stem: "movie-copy",
          count: 2,
          paths: [
            { id: 51, path: "/media/duplicates/movie-copy-1.mkv", status: "completed" },
            { id: 52, path: "/media/duplicates/movie-copy-2.mkv", status: "queued" },
          ],
        },
      ],
      total_duplicates: 1,
      recommendation_counts: {
        duplicates: 1,
        remux_only_candidate: 2,
        wasteful_audio_layout: 0,
        commentary_cleanup_candidate: 0,
      },
      recommendations: [
        {
          type: "remux_only_candidate",
          title: "Remux movie one",
          summary: "The file can be normalized with a container-only remux.",
          path: "/media/remux/movie-one.mkv",
          suggested_action: "Queue a remux to normalize the container without re-encoding the video stream.",
        },
        {
          type: "remux_only_candidate",
          title: "Remux movie two",
          summary: "The file can be normalized with a container-only remux.",
          path: "/media/remux/movie-two.mkv",
          suggested_action: "Queue a remux to normalize the container without re-encoding the video stream.",
        },
      ],
    });
  });
  await page.route("**/api/jobs/enqueue", async (route) => {
    enqueueCount += 1;
    const body = route.request().postDataJSON() as { path: string };
    await fulfillJson(route, 200, {
      enqueued: true,
      message: `Enqueued ${body.path}.`,
    });
  });
  await mockJobDetails(page, { 51: completedDetail });

  await page.goto("/intelligence");

  await page.getByRole("button", { name: "Queue all" }).click();
  await expect.poll(() => enqueueCount).toBe(2);
  await expect(
    page.getByText("Queue all finished: 2 enqueued, 0 skipped, 0 failed.").first(),
  ).toBeVisible();

  await page.getByRole("button", { name: "Review" }).first().click();
  await expect(page.getByRole("dialog")).toBeVisible();
  await expect(page.getByText("Encode Results")).toBeVisible();
  await expect(page.getByRole("dialog").getByText("/media/duplicates/movie-copy-1.mkv")).toBeVisible();
});
@@ -19,6 +19,17 @@ const completedJob: JobFixture = {
  vmaf_score: 95.4,
};

const queuedJob: JobFixture = {
  id: 44,
  input_path: "/media/queued-blocked.mkv",
  output_path: "/output/queued-blocked-av1.mkv",
  status: "queued",
  priority: 0,
  progress: 0,
  created_at: "2025-01-01T00:00:00Z",
  updated_at: "2025-01-02T00:00:00Z",
};

const completedDetail: JobDetailFixture = {
  job: completedJob,
  metadata: {
@@ -183,3 +194,57 @@ test("failed job detail prefers structured failure explanation", async ({ page }
  await expect(page.getByText("Structured failure detail from the backend.")).toBeVisible();
  await expect(page.getByText("Structured failure guidance from the backend.")).toBeVisible();
});

test("queued job detail shows the processor blocked reason", async ({ page }) => {
  await page.route("**/api/jobs/table**", async (route) => {
    await fulfillJson(route, 200, [queuedJob]);
  });
  await mockJobDetails(page, {
    44: {
      job: queuedJob,
      job_logs: [],
      queue_position: 3,
    },
  });
  await page.route("**/api/processor/status", async (route) => {
    await fulfillJson(route, 200, {
      blocked_reason: "workers_busy",
      message: "All worker slots are currently busy.",
      manual_paused: false,
      scheduler_paused: false,
      draining: false,
      active_jobs: 1,
      concurrent_limit: 1,
    });
  });

  await page.goto("/jobs");
  await page.getByTitle("/media/queued-blocked.mkv").click();

  await expect(page.getByText("Queue position:")).toBeVisible();
  await expect(page.getByText("Blocked:")).toBeVisible();
  await expect(page.getByText("All worker slots are currently busy.")).toBeVisible();
});

test("add file submits the enqueue request and surfaces the response", async ({ page }) => {
  let postedPath = "";
  await page.route("**/api/jobs/table**", async (route) => {
    await fulfillJson(route, 200, []);
  });
  await page.route("**/api/jobs/enqueue", async (route) => {
    const body = route.request().postDataJSON() as { path: string };
    postedPath = body.path;
    await fulfillJson(route, 200, {
      enqueued: true,
      message: `Enqueued ${body.path}.`,
    });
  });

  await page.goto("/jobs");
  await page.getByRole("button", { name: "Add file" }).click();
  await page.getByPlaceholder("/Volumes/Media/Movies/example.mkv").fill("/media/manual-add.mkv");
  await page.getByRole("dialog").getByRole("button", { name: "Add File", exact: true }).click();

  await expect.poll(() => postedPath).toBe("/media/manual-add.mkv");
  await expect(page.getByText("Enqueued /media/manual-add.mkv.").first()).toBeVisible();
});

@@ -1,6 +1,6 @@
|
||||
{
|
||||
"name": "alchemist-web",
|
||||
"version": "0.3.1-rc.3",
|
||||
"version": "0.3.1-rc.5",
|
||||
"private": true,
|
||||
"packageManager": "bun@1",
|
||||
"type": "module",
|
||||
|
||||
@@ -5,55 +5,16 @@ import { apiAction, apiJson, isApiError } from "../lib/api";
|
||||
import { useDebouncedValue } from "../lib/useDebouncedValue";
|
||||
import { showToast } from "../lib/toast";
|
||||
import ConfirmDialog from "./ui/ConfirmDialog";
|
||||
import { clsx, type ClassValue } from "clsx";
|
||||
import { twMerge } from "tailwind-merge";
|
||||
import { withErrorBoundary } from "./ErrorBoundary";
|
||||
import type { Job, JobDetail, TabType, SortField, ConfirmConfig, CountMessageResponse } from "./jobs/types";
|
||||
import { SORT_OPTIONS, isJobActive, jobDetailEmptyState } from "./jobs/types";
|
||||
import { normalizeDecisionExplanation, normalizeFailureExplanation } from "./jobs/JobExplanations";
|
||||
import type { Job, TabType, SortField, CountMessageResponse } from "./jobs/types";
|
||||
import { isJobActive } from "./jobs/types";
|
||||
import { useJobSSE } from "./jobs/useJobSSE";
|
||||
import { JobsToolbar } from "./jobs/JobsToolbar";
|
||||
import { JobsTable } from "./jobs/JobsTable";
|
||||
import { JobDetailModal } from "./jobs/JobDetailModal";
|
||||
|
||||
function cn(...inputs: ClassValue[]) {
|
||||
return twMerge(clsx(inputs));
|
||||
}
|
||||
|
||||
function focusableElements(root: HTMLElement): HTMLElement[] {
|
||||
const selector = [
|
||||
"a[href]",
|
||||
"button:not([disabled])",
|
||||
"input:not([disabled])",
|
||||
"select:not([disabled])",
|
||||
"textarea:not([disabled])",
|
||||
"[tabindex]:not([tabindex='-1'])",
|
||||
].join(",");
|
||||
|
||||
return Array.from(root.querySelectorAll<HTMLElement>(selector)).filter(
|
||||
(element) => !element.hasAttribute("disabled")
|
||||
);
|
||||
}
|
||||
|
||||
function getStatusBadge(status: string) {
|
||||
const styles: Record<string, string> = {
|
||||
queued: "bg-helios-slate/10 text-helios-slate border-helios-slate/20",
|
||||
analyzing: "bg-blue-500/10 text-blue-500 border-blue-500/20",
|
||||
encoding: "bg-helios-solar/10 text-helios-solar border-helios-solar/20 animate-pulse",
|
||||
remuxing: "bg-helios-solar/10 text-helios-solar border-helios-solar/20 animate-pulse",
|
||||
completed: "bg-green-500/10 text-green-500 border-green-500/20",
|
||||
failed: "bg-red-500/10 text-red-500 border-red-500/20",
|
||||
cancelled: "bg-red-500/10 text-red-500 border-red-500/20",
|
||||
skipped: "bg-gray-500/10 text-gray-500 border-gray-500/20",
|
||||
archived: "bg-zinc-500/10 text-zinc-400 border-zinc-500/20",
|
||||
resuming: "bg-helios-solar/10 text-helios-solar border-helios-solar/20 animate-pulse",
|
||||
};
|
||||
return (
|
||||
<span className={cn("px-2.5 py-1 rounded-md text-xs font-medium border capitalize", styles[status] || styles.queued)}>
|
||||
{status}
|
||||
</span>
|
||||
);
|
||||
}
|
||||
import { EnqueuePathDialog } from "./jobs/EnqueuePathDialog";
|
||||
import { getStatusBadge } from "./jobs/jobStatusBadge";
|
||||
import { useJobDetailController } from "./jobs/useJobDetailController";
|
||||
|
||||
function JobManager() {
|
||||
const [jobs, setJobs] = useState<Job[]>([]);
|
||||
@@ -67,18 +28,17 @@ function JobManager() {
|
||||
const [sortBy, setSortBy] = useState<SortField>("updated_at");
|
||||
const [sortDesc, setSortDesc] = useState(true);
|
||||
const [refreshing, setRefreshing] = useState(false);
|
||||
const [focusedJob, setFocusedJob] = useState<JobDetail | null>(null);
|
||||
const [detailLoading, setDetailLoading] = useState(false);
|
||||
const [actionError, setActionError] = useState<string | null>(null);
|
||||
const [menuJobId, setMenuJobId] = useState<number | null>(null);
|
||||
const [enqueueDialogOpen, setEnqueueDialogOpen] = useState(false);
|
||||
const [enqueuePath, setEnqueuePath] = useState("");
|
||||
const [enqueueSubmitting, setEnqueueSubmitting] = useState(false);
|
||||
const menuRef = useRef<HTMLDivElement | null>(null);
|
||||
const detailDialogRef = useRef<HTMLDivElement | null>(null);
|
||||
const detailLastFocusedRef = useRef<HTMLElement | null>(null);
|
||||
const compactSearchRef = useRef<HTMLDivElement | null>(null);
|
||||
const compactSearchInputRef = useRef<HTMLInputElement | null>(null);
|
||||
const confirmOpenRef = useRef(false);
|
||||
const encodeStartTimes = useRef<Map<number, number>>(new Map());
|
||||
const [confirmState, setConfirmState] = useState<ConfirmConfig | null>(null);
|
||||
const focusedJobIdRef = useRef<number | null>(null);
|
||||
const refreshFocusedJobRef = useRef<() => Promise<void>>(async () => undefined);
|
||||
const [tick, setTick] = useState(0);
|
||||
|
||||
useEffect(() => {
|
||||
@@ -182,6 +142,7 @@ function JobManager() {
|
||||
const serverIsTerminal = terminal.includes(serverJob.status);
|
||||
if (
|
||||
local &&
|
||||
local.status === serverJob.status &&
|
||||
terminal.includes(local.status) &&
|
||||
serverIsTerminal
|
||||
) {
|
||||
@@ -232,7 +193,51 @@ function JobManager() {
|
||||
};
|
||||
}, []);
|
||||
|
||||
useJobSSE({ setJobs, fetchJobsRef, encodeStartTimes });
|
||||
const {
|
||||
focusedJob,
|
||||
setFocusedJob,
|
||||
detailLoading,
|
||||
confirmState,
|
||||
detailDialogRef,
|
||||
openJobDetails,
|
||||
handleAction,
|
||||
handlePriority,
|
||||
openConfirm,
|
||||
setConfirmState,
|
||||
closeJobDetails,
|
||||
focusedDecision,
|
||||
focusedFailure,
|
||||
focusedJobLogs,
|
||||
shouldShowFfmpegOutput,
|
||||
completedEncodeStats,
|
||||
focusedEmptyState,
|
||||
} = useJobDetailController({
|
||||
onRefresh: async () => {
|
||||
await fetchJobs();
|
||||
},
|
||||
});
|
||||
|
||||
useEffect(() => {
|
||||
focusedJobIdRef.current = focusedJob?.job.id ?? null;
|
||||
}, [focusedJob?.job.id]);
|
||||
|
||||
useEffect(() => {
|
||||
refreshFocusedJobRef.current = async () => {
|
||||
const jobId = focusedJobIdRef.current;
|
||||
if (jobId !== null) {
|
||||
await openJobDetails(jobId);
|
||||
}
|
||||
};
|
||||
}, [openJobDetails]);
|
||||
|
||||
useJobSSE({
|
||||
setJobs,
|
||||
setFocusedJob,
|
||||
fetchJobsRef,
|
||||
focusedJobIdRef,
|
||||
refreshFocusedJobRef,
|
||||
encodeStartTimes,
|
||||
});
|
||||
|
||||
useEffect(() => {
|
||||
const encodingJobIds = new Set<number>();
|
||||
@@ -267,76 +272,6 @@ function JobManager() {
|
||||
return () => document.removeEventListener("mousedown", handleClick);
|
||||
}, [menuJobId]);
|
||||
|
||||
useEffect(() => {
|
||||
confirmOpenRef.current = confirmState !== null;
|
||||
}, [confirmState]);
|
||||
|
||||
useEffect(() => {
|
||||
if (!focusedJob) {
|
||||
return;
|
||||
}
|
||||
|
||||
detailLastFocusedRef.current = document.activeElement as HTMLElement | null;
|
||||
|
||||
const root = detailDialogRef.current;
|
||||
if (root) {
|
||||
const focusables = focusableElements(root);
|
||||
if (focusables.length > 0) {
|
||||
focusables[0].focus();
|
||||
} else {
|
||||
root.focus();
|
||||
}
|
||||
}
|
||||
|
||||
const onKeyDown = (event: KeyboardEvent) => {
|
||||
if (!focusedJob || confirmOpenRef.current) {
|
||||
return;
|
||||
}
|
||||
|
||||
if (event.key === "Escape") {
|
||||
event.preventDefault();
|
||||
setFocusedJob(null);
|
||||
return;
|
||||
}
|
||||
|
||||
if (event.key !== "Tab") {
|
||||
return;
|
||||
}
|
||||
|
||||
const dialogRoot = detailDialogRef.current;
|
||||
if (!dialogRoot) {
|
||||
return;
|
||||
}
|
||||
|
||||
const focusables = focusableElements(dialogRoot);
|
||||
if (focusables.length === 0) {
|
||||
event.preventDefault();
|
||||
dialogRoot.focus();
|
||||
return;
|
||||
}
|
||||
|
||||
const first = focusables[0];
|
||||
const last = focusables[focusables.length - 1];
|
||||
const current = document.activeElement as HTMLElement | null;
|
||||
|
||||
if (event.shiftKey && current === first) {
|
||||
event.preventDefault();
|
||||
last.focus();
|
||||
} else if (!event.shiftKey && current === last) {
|
||||
event.preventDefault();
|
||||
first.focus();
|
||||
}
|
||||
};
|
||||
|
||||
document.addEventListener("keydown", onKeyDown);
|
||||
return () => {
|
||||
document.removeEventListener("keydown", onKeyDown);
|
||||
if (detailLastFocusedRef.current) {
|
||||
detailLastFocusedRef.current.focus();
|
||||
}
|
||||
};
|
||||
}, [focusedJob]);
|
||||
|
||||
const toggleSelect = (id: number) => {
|
||||
const newSet = new Set(selected);
|
||||
if (newSet.has(id)) newSet.delete(id);
|
||||
@@ -406,96 +341,31 @@ function JobManager() {
|
||||
}
|
||||
};
|
||||
|
||||
const fetchJobDetails = async (id: number) => {
|
||||
const handleEnqueuePath = async () => {
|
||||
setActionError(null);
|
||||
setDetailLoading(true);
|
||||
setEnqueueSubmitting(true);
|
||||
try {
|
||||
const data = await apiJson<JobDetail>(`/api/jobs/${id}/details`);
|
||||
setFocusedJob(data);
|
||||
} catch (e) {
|
||||
const message = isApiError(e) ? e.message : "Failed to fetch job details";
|
||||
const payload = await apiJson<{ enqueued: boolean; message: string }>("/api/jobs/enqueue", {
|
||||
method: "POST",
|
||||
body: JSON.stringify({ path: enqueuePath }),
|
||||
});
|
||||
showToast({
|
||||
kind: payload.enqueued ? "success" : "info",
|
||||
title: "Jobs",
|
||||
message: payload.message,
|
||||
});
|
||||
setEnqueueDialogOpen(false);
|
||||
setEnqueuePath("");
|
||||
await fetchJobs();
|
||||
} catch (error) {
|
||||
const message = isApiError(error) ? error.message : "Failed to enqueue file";
|
||||
setActionError(message);
|
||||
showToast({ kind: "error", title: "Jobs", message });
|
||||
} finally {
|
||||
setDetailLoading(false);
|
||||
setEnqueueSubmitting(false);
|
||||
}
|
||||
};
|
||||
|
||||
const handleAction = async (id: number, action: "cancel" | "restart" | "delete") => {
|
||||
setActionError(null);
|
||||
try {
|
||||
await apiAction(`/api/jobs/${id}/${action}`, { method: "POST" });
|
||||
if (action === "delete") {
|
||||
setFocusedJob((current) => (current?.job.id === id ? null : current));
|
||||
} else if (focusedJob?.job.id === id) {
|
||||
await fetchJobDetails(id);
|
||||
}
|
||||
await fetchJobs();
|
||||
showToast({
|
||||
kind: "success",
|
||||
title: "Jobs",
|
||||
message: `Job ${action} request completed.`,
|
||||
});
|
||||
} catch (e) {
|
||||
const message = formatJobActionError(e, `Job ${action} failed`);
|
||||
setActionError(message);
|
||||
showToast({ kind: "error", title: "Jobs", message });
|
||||
}
|
||||
};
|
||||
|
||||
const handlePriority = async (job: Job, priority: number, label: string) => {
|
||||
setActionError(null);
|
||||
try {
|
||||
await apiAction(`/api/jobs/${job.id}/priority`, {
|
||||
method: "POST",
|
||||
body: JSON.stringify({ priority }),
|
||||
});
|
||||
if (focusedJob?.job.id === job.id) {
|
||||
setFocusedJob({
|
||||
...focusedJob,
|
||||
job: {
|
||||
...focusedJob.job,
|
||||
priority,
|
||||
},
|
||||
});
|
||||
}
|
||||
await fetchJobs();
|
||||
showToast({ kind: "success", title: "Jobs", message: `${label} for job #${job.id}.` });
|
||||
} catch (e) {
|
||||
const message = formatJobActionError(e, "Failed to update priority");
|
||||
setActionError(message);
|
||||
showToast({ kind: "error", title: "Jobs", message });
|
||||
}
|
||||
};
|
||||
|
||||
const openConfirm = (config: ConfirmConfig) => {
|
||||
setConfirmState(config);
|
||||
};
|
||||
|
||||
const focusedDecision = focusedJob
|
||||
? normalizeDecisionExplanation(
|
||||
focusedJob.decision_explanation ?? focusedJob.job.decision_explanation,
|
||||
focusedJob.job.decision_reason,
|
||||
)
|
||||
: null;
|
||||
const focusedFailure = focusedJob
|
||||
? normalizeFailureExplanation(
|
||||
focusedJob.failure_explanation,
|
||||
focusedJob.job_failure_summary,
|
||||
focusedJob.job_logs,
|
||||
)
|
||||
: null;
|
||||
const focusedJobLogs = focusedJob?.job_logs ?? [];
|
||||
const shouldShowFfmpegOutput = focusedJob
|
||||
? ["failed", "completed", "skipped"].includes(focusedJob.job.status) && focusedJobLogs.length > 0
|
||||
: false;
|
||||
const completedEncodeStats = focusedJob?.job.status === "completed"
|
||||
? focusedJob.encode_stats
|
||||
: null;
|
||||
const focusedEmptyState = focusedJob
|
||||
? jobDetailEmptyState(focusedJob.job.status)
|
||||
: null;
|
||||
|
||||
return (
|
||||
<div className="space-y-6 relative">
|
||||
<div className="flex items-center gap-4 px-1 text-xs text-helios-slate">
|
||||
@@ -529,6 +399,7 @@ function JobManager() {
|
||||
setSortDesc={setSortDesc}
|
||||
refreshing={refreshing}
|
||||
fetchJobs={fetchJobs}
|
||||
openEnqueueDialog={() => setEnqueueDialogOpen(true)}
|
||||
/>
|
||||
|
||||
{actionError && (
|
||||
@@ -612,7 +483,7 @@ function JobManager() {
|
||||
menuRef={menuRef}
|
||||
toggleSelect={toggleSelect}
|
||||
toggleSelectAll={toggleSelectAll}
|
||||
fetchJobDetails={fetchJobDetails}
|
||||
fetchJobDetails={openJobDetails}
|
||||
setMenuJobId={setMenuJobId}
|
||||
openConfirm={openConfirm}
|
||||
handleAction={handleAction}
|
||||
@@ -645,7 +516,7 @@ function JobManager() {
|
||||
focusedJob={focusedJob}
|
||||
detailDialogRef={detailDialogRef}
|
||||
detailLoading={detailLoading}
|
||||
onClose={() => setFocusedJob(null)}
|
||||
onClose={closeJobDetails}
|
||||
focusedDecision={focusedDecision}
|
||||
focusedFailure={focusedFailure}
|
||||
focusedJobLogs={focusedJobLogs}
|
||||
@@ -660,6 +531,22 @@ function JobManager() {
|
||||
document.body
|
||||
)}
|
||||
|
||||
{typeof document !== "undefined" && createPortal(
|
||||
<EnqueuePathDialog
|
||||
open={enqueueDialogOpen}
|
||||
path={enqueuePath}
|
||||
submitting={enqueueSubmitting}
|
||||
onPathChange={setEnqueuePath}
|
||||
onClose={() => {
|
||||
if (!enqueueSubmitting) {
|
||||
setEnqueueDialogOpen(false);
|
||||
}
|
||||
}}
|
||||
onSubmit={handleEnqueuePath}
|
||||
/>,
|
||||
document.body,
|
||||
)}
|
||||
|
||||
<ConfirmDialog
|
||||
open={confirmState !== null}
|
||||
title={confirmState?.title ?? ""}
|
||||
|
||||
@@ -1,7 +1,12 @@
|
||||
import { useEffect, useState } from "react";
|
||||
import { AlertTriangle, Copy, Sparkles } from "lucide-react";
|
||||
import { useCallback, useEffect, useMemo, useState } from "react";
|
||||
import { createPortal } from "react-dom";
|
||||
import { AlertTriangle, Copy, Sparkles, Zap, Search } from "lucide-react";
|
||||
import { apiJson, isApiError } from "../lib/api";
|
||||
import { showToast } from "../lib/toast";
|
||||
import ConfirmDialog from "./ui/ConfirmDialog";
|
||||
import { JobDetailModal } from "./jobs/JobDetailModal";
|
||||
import { getStatusBadge } from "./jobs/jobStatusBadge";
|
||||
import { useJobDetailController } from "./jobs/useJobDetailController";
|
||||
|
||||
interface DuplicatePath {
|
||||
id: number;
|
||||
@@ -58,36 +63,98 @@ export default function LibraryIntelligence() {
|
||||
const [data, setData] = useState<IntelligenceResponse | null>(null);
|
||||
const [loading, setLoading] = useState(true);
|
||||
const [error, setError] = useState<string | null>(null);
|
||||
const [queueingRemux, setQueueingRemux] = useState(false);
|
||||
|
||||
useEffect(() => {
|
||||
const fetch = async () => {
|
||||
try {
|
||||
const result = await apiJson<IntelligenceResponse>("/api/library/intelligence");
|
||||
setData(result);
|
||||
} catch (e) {
|
||||
const message = isApiError(e) ? e.message : "Failed to load intelligence data.";
|
||||
setError(message);
|
||||
showToast({
|
||||
kind: "error",
|
||||
title: "Intelligence",
|
||||
message,
|
||||
});
|
||||
} finally {
|
||||
setLoading(false);
|
||||
}
|
||||
};
|
||||
|
||||
void fetch();
|
||||
const fetchIntelligence = useCallback(async () => {
|
||||
try {
|
||||
const result = await apiJson<IntelligenceResponse>("/api/library/intelligence");
|
||||
setData(result);
|
||||
setError(null);
|
||||
} catch (e) {
|
||||
const message = isApiError(e) ? e.message : "Failed to load intelligence data.";
|
||||
setError(message);
|
||||
showToast({
|
||||
kind: "error",
|
||||
title: "Intelligence",
|
||||
message,
|
||||
});
|
||||
} finally {
|
||||
setLoading(false);
|
||||
}
|
||||
}, []);
|
||||
|
||||
const groupedRecommendations = data?.recommendations.reduce<Record<string, IntelligenceRecommendation[]>>(
|
||||
(groups, recommendation) => {
|
||||
groups[recommendation.type] ??= [];
|
||||
groups[recommendation.type].push(recommendation);
|
||||
return groups;
|
||||
},
|
||||
{},
|
||||
) ?? {};
|
||||
const {
|
||||
focusedJob,
|
||||
detailLoading,
|
||||
confirmState,
|
||||
detailDialogRef,
|
||||
openJobDetails,
|
||||
handleAction,
|
||||
handlePriority,
|
||||
openConfirm,
|
||||
setConfirmState,
|
||||
closeJobDetails,
|
||||
focusedDecision,
|
||||
focusedFailure,
|
||||
focusedJobLogs,
|
||||
shouldShowFfmpegOutput,
|
||||
completedEncodeStats,
|
||||
focusedEmptyState,
|
||||
} = useJobDetailController({
|
||||
onRefresh: fetchIntelligence,
|
||||
});
|
||||
|
||||
useEffect(() => {
|
||||
void fetchIntelligence();
|
||||
}, [fetchIntelligence]);
|
||||
|
||||
const groupedRecommendations = useMemo(
|
||||
() => data?.recommendations.reduce<Record<string, IntelligenceRecommendation[]>>(
|
||||
(groups, recommendation) => {
|
||||
groups[recommendation.type] ??= [];
|
||||
groups[recommendation.type].push(recommendation);
|
||||
return groups;
|
||||
},
|
||||
{},
|
||||
) ?? {},
|
||||
[data],
|
||||
);
|
||||
|
||||
const handleQueueAllRemux = async () => {
|
||||
const remuxPaths = groupedRecommendations.remux_only_candidate ?? [];
|
||||
if (remuxPaths.length === 0) {
|
||||
return;
|
||||
}
|
||||
|
||||
setQueueingRemux(true);
|
||||
let enqueued = 0;
|
||||
let skipped = 0;
|
||||
let failed = 0;
|
||||
|
||||
for (const recommendation of remuxPaths) {
|
||||
try {
|
||||
const result = await apiJson<{ enqueued: boolean; message: string }>("/api/jobs/enqueue", {
|
||||
method: "POST",
|
||||
body: JSON.stringify({ path: recommendation.path }),
|
||||
});
|
||||
if (result.enqueued) {
|
||||
enqueued += 1;
|
||||
} else {
|
||||
skipped += 1;
|
||||
}
|
||||
} catch {
|
||||
failed += 1;
|
||||
}
|
||||
}
|
||||
|
||||
setQueueingRemux(false);
|
||||
await fetchIntelligence();
|
||||
showToast({
|
||||
kind: failed > 0 ? "error" : "success",
|
||||
title: "Intelligence",
|
||||
message: `Queue all finished: ${enqueued} enqueued, ${skipped} skipped, ${failed} failed.`,
|
||||
});
|
||||
};
|
||||
|
||||
return (
|
||||
<div className="flex flex-col gap-6">
|
||||
@@ -128,6 +195,16 @@ export default function LibraryIntelligence() {
|
||||
<h2 className="text-sm font-semibold text-helios-ink">
|
||||
{TYPE_LABELS[type] ?? type}
|
||||
</h2>
|
||||
{type === "remux_only_candidate" && recommendations.length > 0 && (
|
||||
<button
|
||||
onClick={() => void handleQueueAllRemux()}
|
||||
disabled={queueingRemux}
|
||||
className="ml-auto inline-flex items-center gap-2 rounded-lg border border-helios-solar/20 bg-helios-solar/10 px-3 py-1.5 text-xs font-semibold text-helios-solar transition-colors hover:bg-helios-solar/20 disabled:opacity-60"
|
||||
>
|
||||
<Zap size={12} />
|
||||
{queueingRemux ? "Queueing..." : "Queue all"}
|
||||
</button>
|
||||
)}
|
||||
</div>
|
||||
<div className="divide-y divide-helios-line/10">
|
||||
{recommendations.map((recommendation, index) => (
|
||||
@@ -137,6 +214,28 @@ export default function LibraryIntelligence() {
|
||||
<h3 className="text-sm font-semibold text-helios-ink">{recommendation.title}</h3>
|
||||
<p className="mt-1 text-sm text-helios-slate">{recommendation.summary}</p>
|
||||
</div>
|
||||
{type === "remux_only_candidate" && (
|
||||
<button
|
||||
onClick={() => void apiJson<{ enqueued: boolean; message: string }>("/api/jobs/enqueue", {
|
||||
method: "POST",
|
||||
body: JSON.stringify({ path: recommendation.path }),
|
||||
}).then((result) => {
|
||||
showToast({
|
||||
kind: result.enqueued ? "success" : "info",
|
||||
title: "Intelligence",
|
||||
message: result.message,
|
||||
});
|
||||
return fetchIntelligence();
|
||||
}).catch((err) => {
|
||||
const message = isApiError(err) ? err.message : "Failed to enqueue remux opportunity.";
|
||||
showToast({ kind: "error", title: "Intelligence", message });
|
||||
})}
|
||||
className="inline-flex items-center gap-2 rounded-lg border border-helios-line/20 bg-helios-surface px-3 py-2 text-xs font-semibold text-helios-ink transition-colors hover:bg-helios-surface-soft"
|
||||
>
|
||||
<Zap size={12} />
|
||||
Queue
|
||||
</button>
|
||||
)}
|
||||
</div>
|
||||
<p className="mt-3 break-all font-mono text-xs text-helios-slate">{recommendation.path}</p>
|
||||
<div className="mt-3 rounded-lg border border-helios-line/20 bg-helios-surface-soft/40 px-3 py-2 text-xs text-helios-ink">
|
||||
@@ -197,6 +296,13 @@ export default function LibraryIntelligence() {
|
||||
<span className="break-all font-mono text-xs text-helios-slate">
|
||||
{path.path}
|
||||
</span>
|
||||
<button
|
||||
onClick={() => void openJobDetails(path.id)}
|
||||
className="inline-flex items-center gap-1 rounded-lg border border-helios-line/20 bg-helios-surface px-2.5 py-1.5 text-[11px] font-semibold text-helios-ink transition-colors hover:bg-helios-surface-soft"
|
||||
>
|
||||
<Search size={12} />
|
||||
Review
|
||||
</button>
|
||||
<span className="ml-auto shrink-0 text-xs capitalize text-helios-slate/50">
|
||||
{path.status}
|
||||
</span>
|
||||
@@ -209,6 +315,41 @@ export default function LibraryIntelligence() {
|
||||
)}
|
||||
</>
|
||||
)}
|
||||
|
||||
{typeof document !== "undefined" && createPortal(
|
||||
<JobDetailModal
|
||||
focusedJob={focusedJob}
|
||||
detailDialogRef={detailDialogRef}
|
||||
detailLoading={detailLoading}
|
||||
onClose={closeJobDetails}
|
||||
focusedDecision={focusedDecision}
|
||||
focusedFailure={focusedFailure}
|
||||
focusedJobLogs={focusedJobLogs}
|
||||
shouldShowFfmpegOutput={shouldShowFfmpegOutput}
|
||||
completedEncodeStats={completedEncodeStats}
|
||||
focusedEmptyState={focusedEmptyState}
|
||||
openConfirm={openConfirm}
|
||||
handleAction={handleAction}
|
||||
handlePriority={handlePriority}
|
||||
getStatusBadge={getStatusBadge}
|
||||
/>,
|
||||
document.body,
|
||||
)}
|
||||
|
||||
<ConfirmDialog
|
||||
open={confirmState !== null}
|
||||
title={confirmState?.title ?? ""}
|
||||
description={confirmState?.body ?? ""}
|
||||
confirmLabel={confirmState?.confirmLabel ?? "Confirm"}
|
||||
tone={confirmState?.confirmTone ?? "primary"}
|
||||
onClose={() => setConfirmState(null)}
|
||||
onConfirm={async () => {
|
||||
if (!confirmState) {
|
||||
return;
|
||||
}
|
||||
await confirmState.onConfirm();
|
||||
}}
|
||||
/>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
@@ -16,6 +16,7 @@ interface SystemSettingsPayload {
|
||||
}
|
||||
|
||||
interface EngineStatus {
|
||||
status: "running" | "paused" | "draining";
|
||||
mode: "background" | "balanced" | "throughput";
|
||||
concurrent_limit: number;
|
||||
is_manual_override: boolean;
|
||||
@@ -41,6 +42,7 @@ export default function SystemSettings() {
|
||||
const [engineStatus, setEngineStatus] =
|
||||
useState<EngineStatus | null>(null);
|
||||
const [modeLoading, setModeLoading] = useState(false);
|
||||
const [engineActionLoading, setEngineActionLoading] = useState(false);
|
||||
|
||||
useEffect(() => {
|
||||
void fetchSettings();
|
||||
@@ -129,6 +131,32 @@ export default function SystemSettings() {
|
||||
}
|
||||
};
|
||||
|
||||
const handleEngineAction = async (action: "pause" | "resume") => {
|
||||
setEngineActionLoading(true);
|
||||
try {
|
||||
await apiAction(`/api/engine/${action === "pause" ? "pause" : "resume"}`, {
|
||||
method: "POST",
|
||||
});
|
||||
const updatedStatus = await apiJson<EngineStatus>("/api/engine/status");
|
||||
setEngineStatus(updatedStatus);
|
||||
showToast({
|
||||
kind: "success",
|
||||
title: "Engine",
|
||||
message: action === "pause" ? "Engine paused." : "Engine resumed.",
|
||||
});
|
||||
} catch (err) {
|
||||
showToast({
|
||||
kind: "error",
|
||||
title: "Engine",
|
||||
message: isApiError(err)
|
||||
? err.message
|
||||
: "Failed to update engine state.",
|
||||
});
|
||||
} finally {
|
||||
setEngineActionLoading(false);
|
||||
}
|
||||
};
|
||||
|
||||
if (loading) {
|
||||
return <div className="p-8 text-helios-slate animate-pulse">Loading system settings...</div>;
|
||||
}
|
||||
@@ -210,6 +238,25 @@ export default function SystemSettings() {
|
||||
</p>
|
||||
)}
|
||||
</div>
|
||||
|
||||
<div className="flex items-center justify-between rounded-lg border border-helios-line/20 bg-helios-surface-soft/40 px-4 py-3">
|
||||
<div>
|
||||
<p className="text-xs font-semibold uppercase tracking-wide text-helios-slate">
|
||||
Engine State
|
||||
</p>
|
||||
<p className="mt-1 text-sm text-helios-ink capitalize">
|
||||
{engineStatus.status}
|
||||
</p>
|
||||
</div>
|
||||
<button
|
||||
type="button"
|
||||
onClick={() => void handleEngineAction(engineStatus.status === "paused" ? "resume" : "pause")}
|
||||
disabled={engineActionLoading || engineStatus.status === "draining"}
|
||||
className="rounded-lg border border-helios-line/20 bg-helios-surface px-4 py-2 text-sm font-semibold text-helios-ink transition-colors hover:bg-helios-surface-soft disabled:opacity-50"
|
||||
>
|
||||
{engineStatus.status === "paused" ? "Start" : "Pause"}
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
|
||||
|
||||
98
web/src/components/jobs/EnqueuePathDialog.tsx
Normal file
98
web/src/components/jobs/EnqueuePathDialog.tsx
Normal file
@@ -0,0 +1,98 @@
|
||||
import type { FormEvent } from "react";
|
||||
import { X } from "lucide-react";
|
||||
|
||||
interface EnqueuePathDialogProps {
|
||||
open: boolean;
|
||||
path: string;
|
||||
submitting: boolean;
|
||||
onPathChange: (value: string) => void;
|
||||
onClose: () => void;
|
||||
onSubmit: () => Promise<void>;
|
||||
}
|
||||
|
||||
export function EnqueuePathDialog({
|
||||
open,
|
||||
path,
|
||||
submitting,
|
||||
onPathChange,
|
||||
onClose,
|
||||
onSubmit,
|
||||
}: EnqueuePathDialogProps) {
|
||||
if (!open) {
|
||||
return null;
|
||||
}
|
||||
|
||||
const handleSubmit = async (event: FormEvent<HTMLFormElement>) => {
|
||||
event.preventDefault();
|
||||
await onSubmit();
|
||||
};
|
||||
|
||||
return (
|
||||
<>
|
||||
<div
|
||||
className="fixed inset-0 z-[110] bg-black/60 backdrop-blur-sm"
|
||||
onClick={onClose}
|
||||
/>
|
||||
<div className="fixed inset-0 z-[111] flex items-center justify-center px-4">
|
||||
<form
|
||||
onSubmit={(event) => void handleSubmit(event)}
|
||||
role="dialog"
|
||||
aria-modal="true"
|
||||
aria-labelledby="enqueue-path-title"
|
||||
className="w-full max-w-xl rounded-xl border border-helios-line/20 bg-helios-surface shadow-2xl"
|
||||
>
|
||||
<div className="flex items-start justify-between gap-4 border-b border-helios-line/10 bg-helios-surface-soft/50 px-6 py-5">
|
||||
<div>
|
||||
<h2 id="enqueue-path-title" className="text-lg font-bold text-helios-ink">Add File</h2>
|
||||
<p className="mt-1 text-sm text-helios-slate">
|
||||
Enqueue one absolute filesystem path without running a full scan.
|
||||
</p>
|
||||
</div>
|
||||
<button
|
||||
type="button"
|
||||
onClick={onClose}
|
||||
className="rounded-md p-2 text-helios-slate transition-colors hover:bg-helios-line/10"
|
||||
aria-label="Close add file dialog"
|
||||
>
|
||||
<X size={18} />
|
||||
</button>
|
||||
</div>
|
||||
|
||||
<div className="space-y-3 px-6 py-5">
|
||||
<label className="block text-xs font-semibold uppercase tracking-wide text-helios-slate">
|
||||
Absolute Path
|
||||
</label>
|
||||
<input
|
||||
type="text"
|
||||
value={path}
|
||||
onChange={(event) => onPathChange(event.target.value)}
|
||||
placeholder="/Volumes/Media/Movies/example.mkv"
|
||||
className="w-full rounded-lg border border-helios-line/20 bg-helios-surface px-4 py-3 text-sm text-helios-ink outline-none focus:border-helios-solar"
|
||||
autoFocus
|
||||
/>
|
||||
<p className="text-xs text-helios-slate">
|
||||
Supported media files only. Paths are resolved on the server before enqueue.
|
||||
</p>
|
||||
</div>
|
||||
|
||||
<div className="flex items-center justify-end gap-3 border-t border-helios-line/10 px-6 py-4">
|
||||
<button
|
||||
type="button"
|
||||
onClick={onClose}
|
||||
className="rounded-lg border border-helios-line/20 px-4 py-2 text-sm font-semibold text-helios-slate transition-colors hover:bg-helios-surface-soft"
|
||||
>
|
||||
Cancel
|
||||
</button>
|
||||
<button
|
||||
type="submit"
|
||||
disabled={submitting}
|
||||
className="rounded-lg bg-helios-solar px-4 py-2 text-sm font-bold text-helios-main transition-all hover:brightness-110 disabled:opacity-60"
|
||||
>
|
||||
{submitting ? "Adding..." : "Add File"}
|
||||
</button>
|
||||
</div>
|
||||
</form>
|
||||
</div>
|
||||
</>
|
||||
);
|
||||
}
|
||||
@@ -2,9 +2,10 @@ import { X, Clock, Info, Activity, Database, Zap, Maximize2, AlertCircle, Refres
|
||||
import { motion, AnimatePresence } from "framer-motion";
|
||||
import { clsx, type ClassValue } from "clsx";
|
||||
import { twMerge } from "tailwind-merge";
|
||||
import type { RefObject } from "react";
|
||||
import { useEffect, useState, type RefObject } from "react";
|
||||
import type React from "react";
|
||||
import type { JobDetail, EncodeStats, ExplanationView, LogEntry, ConfirmConfig, Job } from "./types";
import { apiJson } from "../../lib/api";
import type { JobDetail, EncodeStats, ExplanationView, LogEntry, ConfirmConfig, Job, ProcessorStatus } from "./types";
import { formatBytes, formatDuration, logLevelClass, isJobActive } from "./types";

function cn(...inputs: ClassValue[]) {

@@ -34,6 +35,32 @@ export function JobDetailModal({
  completedEncodeStats, focusedEmptyState,
  openConfirm, handleAction, handlePriority, getStatusBadge,
}: JobDetailModalProps) {
  const [processorStatus, setProcessorStatus] = useState<ProcessorStatus | null>(null);

  useEffect(() => {
    if (!focusedJob || focusedJob.job.status !== "queued") {
      setProcessorStatus(null);
      return;
    }

    let cancelled = false;
    void apiJson<ProcessorStatus>("/api/processor/status")
      .then((status) => {
        if (!cancelled) {
          setProcessorStatus(status);
        }
      })
      .catch(() => {
        if (!cancelled) {
          setProcessorStatus(null);
        }
      });

    return () => {
      cancelled = true;
    };
  }, [focusedJob]);

  return (
    <AnimatePresence>
      {focusedJob && (

@@ -179,7 +206,7 @@ export function JobDetailModal({
          <div className="flex justify-between items-center text-xs">
            <span className="text-helios-slate font-medium">Reduction</span>
            <span className="text-green-500 font-bold">
              {((1 - focusedJob.encode_stats.compression_ratio) * 100).toFixed(1)}% Saved
              {(focusedJob.encode_stats.compression_ratio * 100).toFixed(1)}% Saved
            </span>
          </div>
          <div className="flex justify-between items-center text-xs">

@@ -262,6 +289,16 @@ export function JobDetailModal({
            <p className="text-xs text-helios-slate mt-0.5">
              {focusedEmptyState.detail}
            </p>
            {focusedJob.job.status === "queued" && focusedJob.queue_position != null && (
              <p className="text-xs text-helios-slate mt-1">
                Queue position: <span className="font-semibold text-helios-ink">#{focusedJob.queue_position}</span>
              </p>
            )}
            {focusedJob.job.status === "queued" && processorStatus?.blocked_reason && (
              <p className="text-xs text-helios-slate mt-1">
                Blocked: <span className="font-semibold text-helios-ink">{processorStatus.message}</span>
              </p>
            )}
          </div>
        </div>
      ) : null}

@@ -1,4 +1,4 @@
import { Search, RefreshCw, ArrowDown, ArrowUp } from "lucide-react";
import { Search, RefreshCw, ArrowDown, ArrowUp, Plus } from "lucide-react";
import { clsx, type ClassValue } from "clsx";
import { twMerge } from "tailwind-merge";
import type { RefObject } from "react";

@@ -26,6 +26,7 @@ interface JobsToolbarProps {
  setSortDesc: (fn: boolean | ((prev: boolean) => boolean)) => void;
  refreshing: boolean;
  fetchJobs: () => Promise<void>;
  openEnqueueDialog: () => void;
}

export function JobsToolbar({

@@ -33,7 +34,7 @@ export function JobsToolbar({
  searchInput, setSearchInput,
  compactSearchOpen, setCompactSearchOpen, compactSearchRef, compactSearchInputRef,
  sortBy, setSortBy, sortDesc, setSortDesc,
  refreshing, fetchJobs,
  refreshing, fetchJobs, openEnqueueDialog,
}: JobsToolbarProps) {
  return (
    <div className="rounded-xl border border-helios-line/10 bg-helios-surface/50 px-3 py-3">

@@ -94,6 +95,13 @@ export function JobsToolbar({
      </div>

      <div className="flex items-center gap-2 sm:ml-auto">
        <button
          onClick={openEnqueueDialog}
          className="inline-flex h-10 items-center gap-2 rounded-lg border border-helios-line/20 bg-helios-surface px-3 text-sm font-semibold text-helios-ink hover:bg-helios-surface-soft"
        >
          <Plus size={16} />
          <span>Add file</span>
        </button>
        <button
          onClick={() => void fetchJobs()}
          className="flex h-10 w-10 shrink-0 items-center justify-center rounded-lg border border-helios-line/20 bg-helios-surface text-helios-ink hover:bg-helios-surface-soft"
web/src/components/jobs/jobStatusBadge.tsx (new file, 32 lines)
@@ -0,0 +1,32 @@
import { clsx, type ClassValue } from "clsx";
import { twMerge } from "tailwind-merge";

function cn(...inputs: ClassValue[]) {
  return twMerge(clsx(inputs));
}

export function getStatusBadge(status: string) {
  const styles: Record<string, string> = {
    queued: "bg-helios-slate/10 text-helios-slate border-helios-slate/20",
    analyzing: "bg-blue-500/10 text-blue-500 border-blue-500/20",
    encoding: "bg-helios-solar/10 text-helios-solar border-helios-solar/20 animate-pulse",
    remuxing: "bg-helios-solar/10 text-helios-solar border-helios-solar/20 animate-pulse",
    completed: "bg-green-500/10 text-green-500 border-green-500/20",
    failed: "bg-red-500/10 text-red-500 border-red-500/20",
    cancelled: "bg-red-500/10 text-red-500 border-red-500/20",
    skipped: "bg-gray-500/10 text-gray-500 border-gray-500/20",
    archived: "bg-zinc-500/10 text-zinc-400 border-zinc-500/20",
    resuming: "bg-helios-solar/10 text-helios-solar border-helios-solar/20 animate-pulse",
  };

  return (
    <span
      className={cn(
        "px-2.5 py-1 rounded-md text-xs font-medium border capitalize",
        styles[status] || styles.queued,
      )}
    >
      {status}
    </span>
  );
}

@@ -91,6 +91,17 @@ export interface JobDetail {
  job_failure_summary: string | null;
  decision_explanation: ExplanationPayload | null;
  failure_explanation: ExplanationPayload | null;
  queue_position: number | null;
}

export interface ProcessorStatus {
  blocked_reason: "manual_paused" | "scheduled_pause" | "draining" | "workers_busy" | null;
  message: string;
  manual_paused: boolean;
  scheduler_paused: boolean;
  draining: boolean;
  active_jobs: number;
  concurrent_limit: number;
}

export interface CountMessageResponse {
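The `styles` lookup in `getStatusBadge` above is a plain record keyed by status string, with unknown statuses falling back to the `queued` styling. A minimal sketch of that lookup half, minus the JSX (names here are illustrative, not part of the diff):

```typescript
// Hypothetical sketch of getStatusBadge's class lookup, showing the
// fallback-to-"queued" behavior for statuses the record does not know.
const styles: Record<string, string> = {
  queued: "bg-helios-slate/10 text-helios-slate border-helios-slate/20",
  completed: "bg-green-500/10 text-green-500 border-green-500/20",
  failed: "bg-red-500/10 text-red-500 border-red-500/20",
};

function badgeClasses(status: string): string {
  // Unknown statuses reuse the "queued" styling, as in the component.
  return styles[status] || styles.queued;
}
```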
web/src/components/jobs/useJobDetailController.tsx (new file, 237 lines)
@@ -0,0 +1,237 @@
import { useCallback, useEffect, useRef, useState } from "react";
import { apiAction, apiJson, isApiError } from "../../lib/api";
import { showToast } from "../../lib/toast";
import { normalizeDecisionExplanation, normalizeFailureExplanation } from "./JobExplanations";
import type {
  ConfirmConfig,
  EncodeStats,
  ExplanationView,
  Job,
  JobDetail,
  LogEntry,
} from "./types";
import { jobDetailEmptyState } from "./types";

function focusableElements(root: HTMLElement): HTMLElement[] {
  const selector = [
    "a[href]",
    "button:not([disabled])",
    "input:not([disabled])",
    "select:not([disabled])",
    "textarea:not([disabled])",
    "[tabindex]:not([tabindex='-1'])",
  ].join(",");

  return Array.from(root.querySelectorAll<HTMLElement>(selector)).filter(
    (element) => !element.hasAttribute("disabled"),
  );
}

function formatJobActionError(error: unknown, fallback: string) {
  if (!isApiError(error)) {
    return fallback;
  }

  const blocked = Array.isArray((error.body as { blocked?: unknown } | undefined)?.blocked)
    ? ((error.body as { blocked?: Array<{ id?: number; status?: string }> }).blocked ?? [])
    : [];
  if (blocked.length === 0) {
    return error.message;
  }

  const summary = blocked
    .map((job) => `#${job.id ?? "?"} (${job.status ?? "unknown"})`)
    .join(", ");
  return `${error.message}: ${summary}`;
}

interface UseJobDetailControllerOptions {
  onRefresh?: () => Promise<void>;
}

export function useJobDetailController(options: UseJobDetailControllerOptions = {}) {
  const [focusedJob, setFocusedJob] = useState<JobDetail | null>(null);
  const [detailLoading, setDetailLoading] = useState(false);
  const [confirmState, setConfirmState] = useState<ConfirmConfig | null>(null);
  const detailDialogRef = useRef<HTMLDivElement | null>(null);
  const detailLastFocusedRef = useRef<HTMLElement | null>(null);
  const confirmOpenRef = useRef(false);

  useEffect(() => {
    confirmOpenRef.current = confirmState !== null;
  }, [confirmState]);

  useEffect(() => {
    if (!focusedJob) {
      return;
    }

    detailLastFocusedRef.current = document.activeElement as HTMLElement | null;

    const root = detailDialogRef.current;
    if (root) {
      const focusables = focusableElements(root);
      if (focusables.length > 0) {
        focusables[0].focus();
      } else {
        root.focus();
      }
    }

    const onKeyDown = (event: KeyboardEvent) => {
      if (!focusedJob || confirmOpenRef.current) {
        return;
      }

      if (event.key === "Escape") {
        event.preventDefault();
        setFocusedJob(null);
        return;
      }

      if (event.key !== "Tab") {
        return;
      }

      const dialogRoot = detailDialogRef.current;
      if (!dialogRoot) {
        return;
      }

      const focusables = focusableElements(dialogRoot);
      if (focusables.length === 0) {
        event.preventDefault();
        dialogRoot.focus();
        return;
      }

      const first = focusables[0];
      const last = focusables[focusables.length - 1];
      const current = document.activeElement as HTMLElement | null;

      if (event.shiftKey && current === first) {
        event.preventDefault();
        last.focus();
      } else if (!event.shiftKey && current === last) {
        event.preventDefault();
        first.focus();
      }
    };

    document.addEventListener("keydown", onKeyDown);
    return () => {
      document.removeEventListener("keydown", onKeyDown);
      if (detailLastFocusedRef.current) {
        detailLastFocusedRef.current.focus();
      }
    };
  }, [focusedJob]);

  const openJobDetails = useCallback(async (id: number) => {
    setDetailLoading(true);
    try {
      const data = await apiJson<JobDetail>(`/api/jobs/${id}/details`);
      setFocusedJob(data);
    } catch (error) {
      const message = isApiError(error) ? error.message : "Failed to fetch job details";
      showToast({ kind: "error", title: "Jobs", message });
    } finally {
      setDetailLoading(false);
    }
  }, []);

  const handleAction = useCallback(async (id: number, action: "cancel" | "restart" | "delete") => {
    try {
      await apiAction(`/api/jobs/${id}/${action}`, { method: "POST" });
      if (action === "delete") {
        setFocusedJob((current) => (current?.job.id === id ? null : current));
      } else if (focusedJob?.job.id === id) {
        await openJobDetails(id);
      }
      if (options.onRefresh) {
        await options.onRefresh();
      }
      showToast({
        kind: "success",
        title: "Jobs",
        message: `Job ${action} request completed.`,
      });
    } catch (error) {
      const message = formatJobActionError(error, `Job ${action} failed`);
      showToast({ kind: "error", title: "Jobs", message });
    }
  }, [focusedJob?.job.id, openJobDetails, options]);

  const handlePriority = useCallback(async (job: Job, priority: number, label: string) => {
    try {
      await apiAction(`/api/jobs/${job.id}/priority`, {
        method: "POST",
        body: JSON.stringify({ priority }),
      });
      if (focusedJob?.job.id === job.id) {
        setFocusedJob({
          ...focusedJob,
          job: {
            ...focusedJob.job,
            priority,
          },
        });
      }
      if (options.onRefresh) {
        await options.onRefresh();
      }
      showToast({ kind: "success", title: "Jobs", message: `${label} for job #${job.id}.` });
    } catch (error) {
      const message = formatJobActionError(error, "Failed to update priority");
      showToast({ kind: "error", title: "Jobs", message });
    }
  }, [focusedJob, options]);

  const openConfirm = useCallback((config: ConfirmConfig) => {
    setConfirmState(config);
  }, []);

  const focusedDecision: ExplanationView | null = focusedJob
    ? normalizeDecisionExplanation(
        focusedJob.decision_explanation ?? focusedJob.job.decision_explanation,
        focusedJob.job.decision_reason,
      )
    : null;
  const focusedFailure: ExplanationView | null = focusedJob
    ? normalizeFailureExplanation(
        focusedJob.failure_explanation,
        focusedJob.job_failure_summary,
        focusedJob.job_logs,
      )
    : null;
  const focusedJobLogs: LogEntry[] = focusedJob?.job_logs ?? [];
  const shouldShowFfmpegOutput = focusedJob
    ? ["failed", "completed", "skipped"].includes(focusedJob.job.status) && focusedJobLogs.length > 0
    : false;
  const completedEncodeStats: EncodeStats | null = focusedJob?.job.status === "completed"
    ? focusedJob.encode_stats
    : null;
  const focusedEmptyState = focusedJob
    ? jobDetailEmptyState(focusedJob.job.status)
    : null;

  return {
    focusedJob,
    setFocusedJob,
    detailLoading,
    confirmState,
    detailDialogRef,
    openJobDetails,
    handleAction,
    handlePriority,
    openConfirm,
    setConfirmState,
    closeJobDetails: () => setFocusedJob(null),
    focusedDecision,
    focusedFailure,
    focusedJobLogs,
    shouldShowFfmpegOutput,
    completedEncodeStats,
    focusedEmptyState,
  };
}
@@ -1,14 +1,24 @@
import { useEffect } from "react";
import type { MutableRefObject, Dispatch, SetStateAction } from "react";
import type { Job } from "./types";
import type { Job, JobDetail } from "./types";

interface UseJobSSEOptions {
  setJobs: Dispatch<SetStateAction<Job[]>>;
  setFocusedJob: Dispatch<SetStateAction<JobDetail | null>>;
  fetchJobsRef: MutableRefObject<() => Promise<void>>;
  focusedJobIdRef: MutableRefObject<number | null>;
  refreshFocusedJobRef: MutableRefObject<() => Promise<void>>;
  encodeStartTimes: MutableRefObject<Map<number, number>>;
}

export function useJobSSE({ setJobs, fetchJobsRef, encodeStartTimes }: UseJobSSEOptions): void {
export function useJobSSE({
  setJobs,
  setFocusedJob,
  fetchJobsRef,
  focusedJobIdRef,
  refreshFocusedJobRef,
  encodeStartTimes,
}: UseJobSSEOptions): void {
  useEffect(() => {
    let eventSource: EventSource | null = null;
    let cancelled = false;

@@ -38,14 +48,31 @@ export function useJobSSE({ setJobs, fetchJobsRef, encodeStartTimes }: UseJobSSE
          job_id: number;
          status: string;
        };
        const terminalStatuses = ["completed", "failed", "cancelled", "skipped"];
        if (status === "encoding") {
          encodeStartTimes.current.set(job_id, Date.now());
        } else {
        } else if (terminalStatuses.includes(status)) {
          encodeStartTimes.current.delete(job_id);
        }
        setJobs((prev) =>
          prev.map((job) => job.id === job_id ? { ...job, status } : job)
        );
        setFocusedJob((prev) =>
          prev?.job.id === job_id
            ? {
                ...prev,
                queue_position: status === "queued" ? prev.queue_position : null,
                job: {
                  ...prev.job,
                  status,
                },
              }
            : prev
        );
        void fetchJobsRef.current();
        if (focusedJobIdRef.current === job_id) {
          void refreshFocusedJobRef.current();
        }
      } catch {
        /* ignore malformed */
      }

@@ -60,15 +87,33 @@ export function useJobSSE({ setJobs, fetchJobsRef, encodeStartTimes }: UseJobSSE
        setJobs((prev) =>
          prev.map((job) => job.id === job_id ? { ...job, progress: percentage } : job)
        );
        setFocusedJob((prev) =>
          prev?.job.id === job_id
            ? { ...prev, job: { ...prev.job, progress: percentage } }
            : prev
        );
      } catch {
        /* ignore malformed */
      }
    });

    eventSource.addEventListener("decision", () => {
    eventSource.addEventListener("decision", (e) => {
      try {
        const payload = JSON.parse(e.data) as { job_id?: number };
        if (payload.job_id != null && focusedJobIdRef.current === payload.job_id) {
          void refreshFocusedJobRef.current();
        }
      } catch {
        /* ignore malformed */
      }
      void fetchJobsRef.current();
    });

    eventSource.addEventListener("lagged", () => {
      void fetchJobsRef.current();
      void refreshFocusedJobRef.current();
    });

    eventSource.onerror = () => {
      eventSource?.close();
      if (!cancelled) {
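The status-handler change above replaces a bare `else` (which cleared the encode start time on every non-encoding event) with an explicit terminal-status check, so intermediate transitions such as `remuxing` keep their recorded start time. A sketch of that bookkeeping as a pure helper, with illustrative names not taken from the diff:

```typescript
// Hypothetical sketch of the start-time bookkeeping from the SSE status
// handler: "encoding" records a start time, terminal statuses clear it,
// and any other transition leaves the entry intact.
const TERMINAL_STATUSES = ["completed", "failed", "cancelled", "skipped"];

function trackEncodeStart(
  startTimes: Map<number, number>,
  jobId: number,
  status: string,
  now: number,
): void {
  if (status === "encoding") {
    startTimes.set(jobId, now);
  } else if (TERMINAL_STATUSES.includes(status)) {
    startTimes.delete(jobId);
  }
}
```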