diff --git a/CHANGELOG.md b/CHANGELOG.md
index 551598e..673f131 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,6 +2,24 @@
All notable changes to this project will be documented in this file.
+## [0.3.1-rc.5] - 2026-04-16
+
+### Reliability & Stability
+
+- **Segment-based encode resume** — interrupted encode jobs now persist resume sessions and completed segments so restart and recovery flows can continue without discarding all completed work.
+- **Notification target compatibility hardening** — notification target reads/writes now preserve the additive migration path, tolerate legacy shapes, and avoid duplicate-delete projection bugs in settings management.
+- **Daily summary reliability** — summary delivery now retries safely after transient failures and avoids duplicate sends across restart boundaries by persisting the last successful day.
+- **Job-detail correctness** — completed-job detail loading now fails closed on database errors instead of returning partial `200 OK` payloads, and encode stat duration fallback uses the encoded output rather than the source file.
+- **Auth and settings safety** — login now returns server errors for real database failures, and duplicate notification/schedule rows no longer disappear together from a single delete action.
+
+### Jobs & UX
+
+- **Manual enqueue flow** — the jobs UI now supports enqueueing a single absolute file path through the same backend dedupe and output rules used by library scans.
+- **Queued-job visibility** — job detail now exposes queue position and processor blocked reasons so operators can see why a queued job is not starting.
+- **Attempt-history surfacing** — job detail now shows encode attempt history directly in the modal, including outcome, timing, and captured failure summary.
+- **Jobs UI follow-through** — the `JobManager` refactor now ships with dedicated controller/dialog helpers and tighter SSE reconciliation so filtered tables and open detail modals stay aligned with backend truth.
+- **Intelligence actions** — remux recommendations and duplicate candidates are now actionable directly from the Intelligence page.
+
## [0.3.1-rc.3] - 2026-04-12
### New Features
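The daily-summary dedupe described in the changelog entry above reduces to a small comparison against the persisted last-successful-day marker. A hedged sketch of that logic (the real scheduler code and its names differ; this only illustrates the idempotency rule):

```rust
/// Send at most one summary per calendar day: skip when the persisted
/// last-successful day already equals today.
fn should_send_summary(today: &str, last_sent_day: Option<&str>) -> bool {
    last_sent_day != Some(today)
}

fn main() {
    // Fresh install: nothing persisted yet, so the summary goes out.
    assert!(should_send_summary("2026-04-16", None));
    // Restart on the same day: the persisted marker suppresses a duplicate.
    assert!(!should_send_summary("2026-04-16", Some("2026-04-16")));
    // Next day: send again.
    assert!(should_send_summary("2026-04-17", Some("2026-04-16")));
    println!("ok");
}
```

Persisting the marker only after a confirmed successful send is what makes this safe across restart boundaries.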
diff --git a/Cargo.lock b/Cargo.lock
index 29647bc..c7f00f6 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -13,7 +13,7 @@ dependencies = [
[[package]]
name = "alchemist"
-version = "0.3.1-rc.4"
+version = "0.3.1-rc.5"
dependencies = [
"anyhow",
"argon2",
diff --git a/Cargo.toml b/Cargo.toml
index a21c6d4..68f2ae5 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "alchemist"
-version = "0.3.1-rc.4"
+version = "0.3.1-rc.5"
edition = "2024"
rust-version = "1.85"
license = "GPL-3.0"
diff --git a/RELEASING.md b/RELEASING.md
index 18479c7..639c1f3 100644
--- a/RELEASING.md
+++ b/RELEASING.md
@@ -30,15 +30,15 @@ Then complete the release-candidate preflight:
Promote to stable only after the RC burn-in is complete and the same automated preflight is still green.
-1. Run `just bump 0.3.0`.
+1. Run `just bump 0.3.1`.
2. Update `CHANGELOG.md` and `docs/docs/changelog.md` for the stable cut.
3. Run `just release-check`.
4. Re-run the manual smoke checklist against the final release artifacts:
- Docker fresh install
- Packaged binary first-run
- - Upgrade from the most recent `0.2.x` or `0.3.0-rc.x`
+ - Upgrade from the most recent `0.2.x` or `0.3.1-rc.x`
- Encode, skip, failure, and notification verification
5. Re-run the Windows contributor verification checklist if Windows parity changed after the last RC.
6. Confirm release notes, docs, and hardware-support wording match the tested release state.
7. Merge the stable release commit to `main`.
-8. Create the annotated tag `v0.3.0` on the exact merged commit.
+8. Create the annotated tag `v0.3.1` on the exact merged commit.
diff --git a/VERSION b/VERSION
index 6f2e72b..3d9532f 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-0.3.1-rc.4
+0.3.1-rc.5
diff --git a/backlog.md b/backlog.md
index 91ef5d4..7df8d9d 100644
--- a/backlog.md
+++ b/backlog.md
@@ -59,37 +59,37 @@ documentation, or iteration.
- remux-only opportunities
- wasteful audio layouts
- commentary/descriptive-track cleanup candidates
+- Direct actions now exist for queueing remux recommendations and opening duplicate candidates in the shared job-detail flow
+
+### Engine Lifecycle + Planner Docs
+- Runtime drain/restart controls exist in the product surface
+- Backend and Playwright lifecycle coverage now exists for the current behavior
+- Planner and engine lifecycle docs are in-repo and should now be kept in sync with shipped semantics rather than treated as missing work
+
+### Jobs UI Refactor / In Flight
+- `JobManager` has been decomposed into focused jobs subcomponents and controller hooks
+- SSE ownership is now centered in a dedicated hook and job-detail controller flow
+- Treat the current jobs UI surface as shipping product that still needs stabilization and regression coverage, not as a future refactor candidate
---
## Active Priorities
-### Engine Lifecycle Controls
-- Finish and harden restart/shutdown semantics from the About/header surface
-- Restart must reset the engine loop without re-execing the process
-- Shutdown must cancel active jobs and exit cleanly
-- Add final backend and Playwright coverage for lifecycle transitions
+### `0.3.1` RC Stability Follow-Through
+- Keep the current in-flight backend/frontend/test delta focused on reliability, upgrade safety, and release hardening
+- Expand regression coverage for resume/restart/cancel flows, job-detail refresh semantics, settings projection, and intelligence actions
+- Keep release docs, changelog entries, and support wording aligned with what the RC actually ships
-### Planner and Lifecycle Documentation
-- Document planner heuristics and stable skip/transcode/remux decision boundaries
-- Document hardware fallback rules and backend selection semantics
-- Document pause, drain, restart, cancel, and shutdown semantics from actual behavior
-
-### Per-File Encode History
-- Show full attempt history in job detail, grouped by canonical file identity
-- Include outcome, encode stats, and failure reason where available
-- Make retries, reruns, and settings-driven requeues legible
-
-### Behavior-Preserving Refactor Pass
-- Decompose `web/src/components/JobManager.tsx` without changing current behavior
-- Extract shared formatting logic
-- Clarify SSE vs polling ownership
-- Add regression coverage before deeper structural cleanup
+### Per-File Encode History Follow-Through
+- Attempt history now exists in job detail, but it is still job-scoped rather than grouped by canonical file identity
+- Next hardening pass should make retries, reruns, and settings-driven requeues legible across a file’s full history
+- Include outcome, encode stats, and failure reason where available without regressing the existing job-detail flow
### AMD AV1 Validation
- Validate Linux VAAPI and Windows AMF AV1 paths on real hardware
- Confirm encoder selection, fallback behavior, and defaults
- Keep support claims conservative until validation is real
+- Deferred from the current `0.3.1-rc.5` automated-stability pass; do not broaden support claims before this work is complete
---
diff --git a/docs/bun.lock b/docs/bun.lock
index 32d6eec..a138b02 100644
--- a/docs/bun.lock
+++ b/docs/bun.lock
@@ -24,6 +24,7 @@
},
},
"overrides": {
+ "follow-redirects": "^1.16.0",
"lodash": "^4.18.1",
"serialize-javascript": "^7.0.5",
},
@@ -1108,7 +1109,7 @@
"flat": ["flat@5.0.2", "", { "bin": { "flat": "cli.js" } }, "sha512-b6suED+5/3rTpUBdG1gupIl8MPFCAMA0QXwmljLhvCUKcUvdE4gWky9zpuGCcXHOsz4J9wPGNWq6OKpmIzz3hQ=="],
- "follow-redirects": ["follow-redirects@1.15.11", "", {}, "sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ=="],
+ "follow-redirects": ["follow-redirects@1.16.0", "", {}, "sha512-y5rN/uOsadFT/JfYwhxRS5R7Qce+g3zG97+JrtFZlC9klX/W5hD7iiLzScI4nZqUS7DNUdhPgw4xI8W2LuXlUw=="],
"form-data-encoder": ["form-data-encoder@2.1.4", "", {}, "sha512-yDYSgNMraqvnxiEXO4hi88+YZxaHC6QKzb5N84iRCTDeRO7ZALpir/lVmf/uXUhnwUr2O4HU8s/n6x+yNjQkHw=="],
diff --git a/docs/docs/changelog.md b/docs/docs/changelog.md
index 6ffadc8..c0504fc 100644
--- a/docs/docs/changelog.md
+++ b/docs/docs/changelog.md
@@ -3,6 +3,24 @@ title: Changelog
description: Release history for Alchemist.
---
+## [0.3.1-rc.5] - 2026-04-16
+
+### Reliability & Stability
+
+- **Segment-based encode resume** — interrupted encode jobs now persist resume sessions and completed segments so restart and recovery flows can continue without discarding all completed work.
+- **Notification target compatibility hardening** — notification target reads/writes now preserve the additive migration path, tolerate legacy shapes, and avoid duplicate-delete projection bugs in settings management.
+- **Daily summary reliability** — summary delivery now retries safely after transient failures and avoids duplicate sends across restart boundaries by persisting the last successful day.
+- **Job-detail correctness** — completed-job detail loading now fails closed on database errors instead of returning partial `200 OK` payloads, and encode stat duration fallback uses the encoded output rather than the source file.
+- **Auth and settings safety** — login now returns server errors for real database failures, and duplicate notification/schedule rows no longer disappear together from a single delete action.
+
+### Jobs & UX
+
+- **Manual enqueue flow** — the jobs UI now supports enqueueing a single absolute file path through the same backend dedupe and output rules used by library scans.
+- **Queued-job visibility** — job detail now exposes queue position and processor blocked reasons so operators can see why a queued job is not starting.
+- **Attempt-history surfacing** — job detail now shows encode attempt history directly in the modal, including outcome, timing, and captured failure summary.
+- **Jobs UI follow-through** — the `JobManager` refactor now ships with dedicated controller/dialog helpers and tighter SSE reconciliation so filtered tables and open detail modals stay aligned with backend truth.
+- **Intelligence actions** — remux recommendations and duplicate candidates are now actionable directly from the Intelligence page.
+
## [0.3.1-rc.3] - 2026-04-12
### New Features
diff --git a/docs/package.json b/docs/package.json
index dd7d5a9..0631cc5 100644
--- a/docs/package.json
+++ b/docs/package.json
@@ -1,6 +1,6 @@
{
"name": "alchemist-docs",
- "version": "0.3.1-rc.4",
+ "version": "0.3.1-rc.5",
"private": true,
"packageManager": "bun@1.3.5",
"scripts": {
@@ -48,6 +48,7 @@
"node": ">=20.0"
},
"overrides": {
+ "follow-redirects": "^1.16.0",
"lodash": "^4.18.1",
"serialize-javascript": "^7.0.5"
}
diff --git a/migrations/20260407110000_notification_targets_v2_and_conversion_jobs.sql b/migrations/20260407110000_notification_targets_v2_and_conversion_jobs.sql
index 7be416b..39f4679 100644
--- a/migrations/20260407110000_notification_targets_v2_and_conversion_jobs.sql
+++ b/migrations/20260407110000_notification_targets_v2_and_conversion_jobs.sql
@@ -1,34 +1,25 @@
-CREATE TABLE IF NOT EXISTS notification_targets_new (
- id INTEGER PRIMARY KEY AUTOINCREMENT,
- name TEXT NOT NULL,
- target_type TEXT CHECK(target_type IN ('discord_webhook', 'discord_bot', 'gotify', 'webhook', 'telegram', 'email')) NOT NULL,
- config_json TEXT NOT NULL DEFAULT '{}',
- events TEXT NOT NULL DEFAULT '["encode.failed","encode.completed"]',
- enabled BOOLEAN DEFAULT 1,
- created_at DATETIME DEFAULT CURRENT_TIMESTAMP
-);
+ALTER TABLE notification_targets
+ ADD COLUMN target_type_v2 TEXT;
-INSERT INTO notification_targets_new (id, name, target_type, config_json, events, enabled, created_at)
-SELECT
- id,
- name,
- CASE target_type
+ALTER TABLE notification_targets
+ ADD COLUMN config_json TEXT NOT NULL DEFAULT '{}';
+
+UPDATE notification_targets
+SET
+ target_type_v2 = CASE target_type
WHEN 'discord' THEN 'discord_webhook'
WHEN 'gotify' THEN 'gotify'
ELSE 'webhook'
END,
- CASE target_type
+ config_json = CASE target_type
WHEN 'discord' THEN json_object('webhook_url', endpoint_url)
WHEN 'gotify' THEN json_object('server_url', endpoint_url, 'app_token', COALESCE(auth_token, ''))
ELSE json_object('url', endpoint_url, 'auth_token', auth_token)
- END,
- COALESCE(events, '["failed","completed"]'),
- enabled,
- created_at
-FROM notification_targets;
-
-DROP TABLE notification_targets;
-ALTER TABLE notification_targets_new RENAME TO notification_targets;
+ END
+WHERE target_type_v2 IS NULL
+ OR target_type_v2 = ''
+ OR config_json IS NULL
+ OR trim(config_json) = '';
CREATE INDEX IF NOT EXISTS idx_notification_targets_enabled
ON notification_targets(enabled);
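The additive migration above leaves both the legacy and v2 columns in place; readers then reconcile them with `COALESCE(NULLIF(target_type_v2, ''), CASE target_type ...)`. In plain Rust the same fallback amounts to the following sketch (function name is illustrative, not the crate's actual API):

```rust
/// Mirrors COALESCE(NULLIF(target_type_v2, ''), CASE target_type ...):
/// prefer a non-empty v2 value, otherwise map the legacy type forward.
fn effective_target_type(v2: Option<&str>, legacy: &str) -> String {
    match v2 {
        Some(v) if !v.is_empty() => v.to_string(),
        _ => match legacy {
            "discord" => "discord_webhook".to_string(),
            "gotify" => "gotify".to_string(),
            _ => "webhook".to_string(),
        },
    }
}

fn main() {
    assert_eq!(effective_target_type(Some("telegram"), "webhook"), "telegram");
    // Empty v2 falls back to the legacy mapping.
    assert_eq!(effective_target_type(Some(""), "discord"), "discord_webhook");
    assert_eq!(effective_target_type(None, "smtp"), "webhook");
    println!("ok");
}
```

Because neither column is dropped, older binaries that only know the legacy column keep working against the migrated schema.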
diff --git a/migrations/20260414010000_job_resume_sessions.sql b/migrations/20260414010000_job_resume_sessions.sql
new file mode 100644
index 0000000..e404297
--- /dev/null
+++ b/migrations/20260414010000_job_resume_sessions.sql
@@ -0,0 +1,38 @@
+CREATE TABLE IF NOT EXISTS job_resume_sessions (
+ id INTEGER PRIMARY KEY AUTOINCREMENT,
+ job_id INTEGER NOT NULL UNIQUE REFERENCES jobs(id) ON DELETE CASCADE,
+ strategy TEXT NOT NULL,
+ plan_hash TEXT NOT NULL,
+ mtime_hash TEXT NOT NULL,
+ temp_dir TEXT NOT NULL,
+ concat_manifest_path TEXT NOT NULL,
+ segment_length_secs INTEGER NOT NULL,
+ status TEXT NOT NULL DEFAULT 'active',
+ created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
+ updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
+);
+
+CREATE TABLE IF NOT EXISTS job_resume_segments (
+ id INTEGER PRIMARY KEY AUTOINCREMENT,
+ job_id INTEGER NOT NULL REFERENCES jobs(id) ON DELETE CASCADE,
+ segment_index INTEGER NOT NULL,
+ start_secs REAL NOT NULL,
+ duration_secs REAL NOT NULL,
+ temp_path TEXT NOT NULL,
+ status TEXT NOT NULL DEFAULT 'pending',
+ attempt_count INTEGER NOT NULL DEFAULT 0,
+ created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
+ updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
+ UNIQUE(job_id, segment_index)
+);
+
+CREATE INDEX IF NOT EXISTS idx_job_resume_sessions_status
+ ON job_resume_sessions(status);
+
+CREATE INDEX IF NOT EXISTS idx_job_resume_segments_job_status
+ ON job_resume_segments(job_id, status);
+
+INSERT OR REPLACE INTO schema_info (key, value) VALUES
+ ('schema_version', '9'),
+ ('min_compatible_version', '0.2.5'),
+ ('last_updated', datetime('now'));
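The resume flow these tables support can be sketched as pure selection logic: segments recorded as completed in `job_resume_segments` are kept, everything else is re-queued. The struct and function names below are illustrative, not the actual engine types:

```rust
#[derive(Clone, Copy, PartialEq)]
enum SegmentStatus {
    Pending,
    Completed,
    Failed,
}

struct Segment {
    index: u32,
    status: SegmentStatus,
}

/// Indices that still need encoding after a restart: anything not
/// recorded as completed in job_resume_segments.
fn segments_to_encode(segments: &[Segment]) -> Vec<u32> {
    segments
        .iter()
        .filter(|s| s.status != SegmentStatus::Completed)
        .map(|s| s.index)
        .collect()
}

fn main() {
    let segs = vec![
        Segment { index: 0, status: SegmentStatus::Completed },
        Segment { index: 1, status: SegmentStatus::Failed },
        Segment { index: 2, status: SegmentStatus::Pending },
    ];
    // Only segments 1 and 2 are re-encoded; segment 0's output is reused.
    assert_eq!(segments_to_encode(&segs), vec![1, 2]);
    println!("ok");
}
```

The `plan_hash` and `mtime_hash` columns guard this reuse: if the source file or encode plan changed since the session was created, the whole session is invalidated rather than resumed.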
diff --git a/src/db/config.rs b/src/db/config.rs
index 4a2a22b..ac02ce4 100644
--- a/src/db/config.rs
+++ b/src/db/config.rs
@@ -1,4 +1,5 @@
use crate::error::Result;
+use serde_json::Value as JsonValue;
use sqlx::Row;
use std::collections::HashMap;
use std::path::{Path, PathBuf};
@@ -6,6 +7,54 @@ use std::path::{Path, PathBuf};
use super::Db;
use super::types::*;
+fn notification_config_string(config_json: &str, key: &str) -> Option<String> {
+ serde_json::from_str::<JsonValue>(config_json)
+ .ok()
+ .and_then(|value| {
+ value
+ .get(key)
+ .and_then(JsonValue::as_str)
+ .map(str::to_string)
+ })
+ .map(|value| value.trim().to_string())
+ .filter(|value| !value.is_empty())
+}
+
+fn notification_legacy_columns(
+ target_type: &str,
+ config_json: &str,
+) -> (String, Option<String>, Option<String>) {
+ match target_type {
+ "discord_webhook" => (
+ "discord".to_string(),
+ notification_config_string(config_json, "webhook_url"),
+ None,
+ ),
+ "discord_bot" => (
+ "discord".to_string(),
+ Some("https://discord.com".to_string()),
+ notification_config_string(config_json, "bot_token"),
+ ),
+ "gotify" => (
+ "gotify".to_string(),
+ notification_config_string(config_json, "server_url"),
+ notification_config_string(config_json, "app_token"),
+ ),
+ "webhook" => (
+ "webhook".to_string(),
+ notification_config_string(config_json, "url"),
+ notification_config_string(config_json, "auth_token"),
+ ),
+ "telegram" => (
+ "webhook".to_string(),
+ Some("https://api.telegram.org".to_string()),
+ notification_config_string(config_json, "bot_token"),
+ ),
+ "email" => ("webhook".to_string(), None, None),
+ other => (other.to_string(), None, None),
+ }
+}
+
impl Db {
pub async fn get_watch_dirs(&self) -> Result<Vec<WatchDir>> {
let has_is_recursive = self.watch_dir_flags.has_is_recursive;
@@ -292,13 +341,23 @@ impl Db {
FROM watch_dirs wd
JOIN library_profiles lp ON lp.id = wd.profile_id
WHERE wd.profile_id IS NOT NULL
- AND (? = wd.path OR ? LIKE wd.path || '/%' OR ? LIKE wd.path || '\\%')
+ AND (
+ ? = wd.path
+ OR (
+ length(?) > length(wd.path)
+ AND (
+ substr(?, 1, length(wd.path) + 1) = wd.path || '/'
+ OR substr(?, 1, length(wd.path) + 1) = wd.path || '\\'
+ )
+ )
+ )
ORDER BY LENGTH(wd.path) DESC
LIMIT 1",
)
.bind(path)
.bind(path)
.bind(path)
+ .bind(path)
.fetch_optional(&self.pool)
.await?;
@@ -359,11 +418,43 @@ impl Db {
}
pub async fn get_notification_targets(&self) -> Result<Vec<NotificationTarget>> {
- let targets = sqlx::query_as::<_, NotificationTarget>(
- "SELECT id, name, target_type, config_json, events, enabled, created_at FROM notification_targets",
- )
+ let flags = &self.notification_target_flags;
+ let targets = if flags.has_target_type_v2 {
+ sqlx::query_as::<_, NotificationTarget>(
+ "SELECT
+ id,
+ name,
+ COALESCE(
+ NULLIF(target_type_v2, ''),
+ CASE target_type
+ WHEN 'discord' THEN 'discord_webhook'
+ WHEN 'gotify' THEN 'gotify'
+ ELSE 'webhook'
+ END
+ ) AS target_type,
+ CASE
+ WHEN trim(config_json) != '' THEN config_json
+ WHEN target_type = 'discord' THEN json_object('webhook_url', endpoint_url)
+ WHEN target_type = 'gotify' THEN json_object('server_url', endpoint_url, 'app_token', COALESCE(auth_token, ''))
+ ELSE json_object('url', endpoint_url, 'auth_token', auth_token)
+ END AS config_json,
+ events,
+ enabled,
+ created_at
+ FROM notification_targets
+ ORDER BY id ASC",
+ )
.fetch_all(&self.pool)
- .await?;
+ .await?
+ } else {
+ sqlx::query_as::<_, NotificationTarget>(
+ "SELECT id, name, target_type, config_json, events, enabled, created_at
+ FROM notification_targets
+ ORDER BY id ASC",
+ )
+ .fetch_all(&self.pool)
+ .await?
+ };
Ok(targets)
}
@@ -375,18 +466,42 @@ impl Db {
events: &str,
enabled: bool,
) -> Result<NotificationTarget> {
- let row = sqlx::query_as::<_, NotificationTarget>(
- "INSERT INTO notification_targets (name, target_type, config_json, events, enabled)
- VALUES (?, ?, ?, ?, ?) RETURNING *",
- )
- .bind(name)
- .bind(target_type)
- .bind(config_json)
- .bind(events)
- .bind(enabled)
- .fetch_one(&self.pool)
- .await?;
- Ok(row)
+ let flags = &self.notification_target_flags;
+ if flags.has_target_type_v2 {
+ let (legacy_target_type, endpoint_url, auth_token) =
+ notification_legacy_columns(target_type, config_json);
+ let result = sqlx::query(
+ "INSERT INTO notification_targets
+ (name, target_type, target_type_v2, endpoint_url, auth_token, config_json, events, enabled)
+ VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
+ )
+ .bind(name)
+ .bind(legacy_target_type)
+ .bind(target_type)
+ .bind(endpoint_url)
+ .bind(auth_token)
+ .bind(config_json)
+ .bind(events)
+ .bind(enabled)
+ .execute(&self.pool)
+ .await?;
+ self.get_notification_target_by_id(result.last_insert_rowid())
+ .await
+ } else {
+ let result = sqlx::query(
+ "INSERT INTO notification_targets (name, target_type, config_json, events, enabled)
+ VALUES (?, ?, ?, ?, ?)",
+ )
+ .bind(name)
+ .bind(target_type)
+ .bind(config_json)
+ .bind(events)
+ .bind(enabled)
+ .execute(&self.pool)
+ .await?;
+ self.get_notification_target_by_id(result.last_insert_rowid())
+ .await
+ }
}
pub async fn delete_notification_target(&self, id: i64) -> Result<()> {
@@ -406,30 +521,97 @@ impl Db {
&self,
targets: &[crate::config::NotificationTargetConfig],
) -> Result<()> {
+ let flags = &self.notification_target_flags;
let mut tx = self.pool.begin().await?;
sqlx::query("DELETE FROM notification_targets")
.execute(&mut *tx)
.await?;
for target in targets {
- sqlx::query(
- "INSERT INTO notification_targets (name, target_type, config_json, events, enabled) VALUES (?, ?, ?, ?, ?)",
- )
- .bind(&target.name)
- .bind(&target.target_type)
- .bind(target.config_json.to_string())
- .bind(serde_json::to_string(&target.events).unwrap_or_else(|_| "[]".to_string()))
- .bind(target.enabled)
- .execute(&mut *tx)
- .await?;
+ let config_json = target.config_json.to_string();
+ let events = serde_json::to_string(&target.events).unwrap_or_else(|_| "[]".to_string());
+ if flags.has_target_type_v2 {
+ let (legacy_target_type, endpoint_url, auth_token) =
+ notification_legacy_columns(&target.target_type, &config_json);
+ sqlx::query(
+ "INSERT INTO notification_targets
+ (name, target_type, target_type_v2, endpoint_url, auth_token, config_json, events, enabled)
+ VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
+ )
+ .bind(&target.name)
+ .bind(legacy_target_type)
+ .bind(&target.target_type)
+ .bind(endpoint_url)
+ .bind(auth_token)
+ .bind(&config_json)
+ .bind(&events)
+ .bind(target.enabled)
+ .execute(&mut *tx)
+ .await?;
+ } else {
+ sqlx::query(
+ "INSERT INTO notification_targets (name, target_type, config_json, events, enabled) VALUES (?, ?, ?, ?, ?)",
+ )
+ .bind(&target.name)
+ .bind(&target.target_type)
+ .bind(&config_json)
+ .bind(&events)
+ .bind(target.enabled)
+ .execute(&mut *tx)
+ .await?;
+ }
}
tx.commit().await?;
Ok(())
}
+ async fn get_notification_target_by_id(&self, id: i64) -> Result<NotificationTarget> {
+ let flags = &self.notification_target_flags;
+ let row = if flags.has_target_type_v2 {
+ sqlx::query_as::<_, NotificationTarget>(
+ "SELECT
+ id,
+ name,
+ COALESCE(
+ NULLIF(target_type_v2, ''),
+ CASE target_type
+ WHEN 'discord' THEN 'discord_webhook'
+ WHEN 'gotify' THEN 'gotify'
+ ELSE 'webhook'
+ END
+ ) AS target_type,
+ CASE
+ WHEN trim(config_json) != '' THEN config_json
+ WHEN target_type = 'discord' THEN json_object('webhook_url', endpoint_url)
+ WHEN target_type = 'gotify' THEN json_object('server_url', endpoint_url, 'app_token', COALESCE(auth_token, ''))
+ ELSE json_object('url', endpoint_url, 'auth_token', auth_token)
+ END AS config_json,
+ events,
+ enabled,
+ created_at
+ FROM notification_targets
+ WHERE id = ?",
+ )
+ .bind(id)
+ .fetch_one(&self.pool)
+ .await?
+ } else {
+ sqlx::query_as::<_, NotificationTarget>(
+ "SELECT id, name, target_type, config_json, events, enabled, created_at
+ FROM notification_targets
+ WHERE id = ?",
+ )
+ .bind(id)
+ .fetch_one(&self.pool)
+ .await?
+ };
+ Ok(row)
+ }
+
pub async fn get_schedule_windows(&self) -> Result<Vec<ScheduleWindow>> {
- let windows = sqlx::query_as::<_, ScheduleWindow>("SELECT * FROM schedule_windows")
- .fetch_all(&self.pool)
- .await?;
+ let windows =
+ sqlx::query_as::<_, ScheduleWindow>("SELECT * FROM schedule_windows ORDER BY id ASC")
+ .fetch_all(&self.pool)
+ .await?;
Ok(windows)
}
@@ -582,3 +764,101 @@ impl Db {
Ok(())
}
}
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+
+ fn temp_db_path(prefix: &str) -> PathBuf {
+ let mut path = std::env::temp_dir();
+ path.push(format!("{prefix}_{}.db", rand::random::<u64>()));
+ path
+ }
+
+ fn sample_profile(name: &str) -> NewLibraryProfile {
+ NewLibraryProfile {
+ name: name.to_string(),
+ preset: "balanced".to_string(),
+ codec: "av1".to_string(),
+ quality_profile: "balanced".to_string(),
+ hdr_mode: "preserve".to_string(),
+ audio_mode: "copy".to_string(),
+ crf_override: None,
+ notes: None,
+ }
+ }
+
+ #[tokio::test]
+ async fn profile_lookup_treats_percent_and_underscore_as_literals() -> anyhow::Result<()> {
+ let db_path = temp_db_path("alchemist_profile_lookup_literals");
+ let db = Db::new(db_path.to_string_lossy().as_ref()).await?;
+
+ let underscore_profile = db.create_profile(sample_profile("underscore")).await?;
+ let percent_profile = db.create_profile(sample_profile("percent")).await?;
+
+ let underscore_watch = db.add_watch_dir("/media/TV_4K", true).await?;
+ db.assign_profile_to_watch_dir(underscore_watch.id, Some(underscore_profile))
+ .await?;
+
+ let percent_watch = db.add_watch_dir("/media/Movies%20", true).await?;
+ db.assign_profile_to_watch_dir(percent_watch.id, Some(percent_profile))
+ .await?;
+
+ assert_eq!(
+ db.get_profile_for_path("/media/TV_4K/show/file.mkv")
+ .await?
+ .map(|profile| profile.name),
+ Some("underscore".to_string())
+ );
+ assert_eq!(
+ db.get_profile_for_path("/media/TVA4K/show/file.mkv")
+ .await?
+ .map(|profile| profile.name),
+ None
+ );
+ assert_eq!(
+ db.get_profile_for_path("/media/Movies%20/title/file.mkv")
+ .await?
+ .map(|profile| profile.name),
+ Some("percent".to_string())
+ );
+ assert_eq!(
+ db.get_profile_for_path("/media/MoviesABCD/title/file.mkv")
+ .await?
+ .map(|profile| profile.name),
+ None
+ );
+
+ db.pool.close().await;
+ let _ = std::fs::remove_file(db_path);
+ Ok(())
+ }
+
+ #[tokio::test]
+ async fn profile_lookup_prefers_longest_literal_matching_watch_dir() -> anyhow::Result<()> {
+ let db_path = temp_db_path("alchemist_profile_lookup_longest");
+ let db = Db::new(db_path.to_string_lossy().as_ref()).await?;
+
+ let base_profile = db.create_profile(sample_profile("base")).await?;
+ let nested_profile = db.create_profile(sample_profile("nested")).await?;
+
+ let base_watch = db.add_watch_dir("/media", true).await?;
+ db.assign_profile_to_watch_dir(base_watch.id, Some(base_profile))
+ .await?;
+
+ let nested_watch = db.add_watch_dir("/media/TV_4K", true).await?;
+ db.assign_profile_to_watch_dir(nested_watch.id, Some(nested_profile))
+ .await?;
+
+ assert_eq!(
+ db.get_profile_for_path("/media/TV_4K/show/file.mkv")
+ .await?
+ .map(|profile| profile.name),
+ Some("nested".to_string())
+ );
+
+ db.pool.close().await;
+ let _ = std::fs::remove_file(db_path);
+ Ok(())
+ }
+}
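The tests above pin down why the `LIKE`-based lookup was replaced: `LIKE` treats `_` and `%` as wildcards, so a watch dir named `/media/TV_4K` would also match `/media/TVA4K`. The new `substr`-based SQL performs a literal prefix-plus-separator comparison, which in standalone Rust looks like this (function name is illustrative, not the crate's actual API):

```rust
/// Returns true when `path` equals `watch_dir` or sits strictly below it,
/// treating '_' and '%' as literal characters (unlike SQL LIKE).
fn path_is_under(path: &str, watch_dir: &str) -> bool {
    if path == watch_dir {
        return true;
    }
    // Must be longer than the watch dir and continue with a path separator,
    // mirroring the substr(?, 1, length(wd.path) + 1) comparison in SQL.
    path.len() > watch_dir.len()
        && path.starts_with(watch_dir)
        && matches!(path.as_bytes()[watch_dir.len()], b'/' | b'\\')
}

fn main() {
    assert!(path_is_under("/media/TV_4K/show/file.mkv", "/media/TV_4K"));
    // A LIKE pattern "/media/TV_4K/%" would wrongly match this path,
    // because '_' matches any single character.
    assert!(!path_is_under("/media/TVA4K/show/file.mkv", "/media/TV_4K"));
    // Sibling prefix without a separator is not a match either.
    assert!(!path_is_under("/media/TV_4K_extras/file.mkv", "/media/TV_4K"));
    println!("ok");
}
```

The separator check also explains the `ORDER BY LENGTH(wd.path) DESC` in the query: among several literal matches, the deepest watch dir wins.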
diff --git a/src/db/jobs.rs b/src/db/jobs.rs
index 8d50ccf..d64df4f 100644
--- a/src/db/jobs.rs
+++ b/src/db/jobs.rs
@@ -662,6 +662,166 @@ impl Db {
Ok(Some((pos + 1) as u32))
}
+ pub async fn get_resume_session(&self, job_id: i64) -> Result
)}
+ {focusedJob.job.status === "queued" && processorStatus?.blocked_reason && (
+ <div>
+ Blocked: {processorStatus.message}
+ </div>
+ )}
) : null}
diff --git a/web/src/components/jobs/JobsToolbar.tsx b/web/src/components/jobs/JobsToolbar.tsx
index 8f16274..8928c2a 100644
--- a/web/src/components/jobs/JobsToolbar.tsx
+++ b/web/src/components/jobs/JobsToolbar.tsx
@@ -1,4 +1,4 @@
-import { Search, RefreshCw, ArrowDown, ArrowUp } from "lucide-react";
+import { Search, RefreshCw, ArrowDown, ArrowUp, Plus } from "lucide-react";
import { clsx, type ClassValue } from "clsx";
import { twMerge } from "tailwind-merge";
import type { RefObject } from "react";
@@ -26,6 +26,7 @@ interface JobsToolbarProps {
setSortDesc: (fn: boolean | ((prev: boolean) => boolean)) => void;
refreshing: boolean;
+ fetchJobs: () => Promise<void>;
+ openEnqueueDialog: () => void;
}
export function JobsToolbar({
@@ -33,7 +34,7 @@ export function JobsToolbar({
searchInput, setSearchInput,
compactSearchOpen, setCompactSearchOpen, compactSearchRef, compactSearchInputRef,
sortBy, setSortBy, sortDesc, setSortDesc,
- refreshing, fetchJobs,
+ refreshing, fetchJobs, openEnqueueDialog,
}: JobsToolbarProps) {
return (
@@ -94,6 +95,13 @@ export function JobsToolbar({
+