mirror of
https://github.com/bybrooklyn/alchemist.git
synced 2026-04-18 09:53:33 -04:00
Add structured decision and failure explanations
This commit is contained in:
2	.idea/.name (generated)
@@ -1 +1 @@
-processor.rs
+explanations.rs
@@ -61,6 +61,18 @@ Canonical job listing endpoint. Supports query params such
 as `limit`, `page`, `status`, `search`, `sort_by`,
 `sort_desc`, and `archived`.
+
+Each returned job row still includes the legacy
+`decision_reason` string when present, and now also includes
+an optional `decision_explanation` object:
+
+- `category`
+- `code`
+- `summary`
+- `detail`
+- `operator_guidance`
+- `measured`
+- `legacy_reason`
 
 Example:
 
 ```bash
@@ -72,7 +84,12 @@ curl -b cookie.txt \
 
 Returns the job row, any available analyzed metadata,
 encode stats for completed jobs, recent job logs, and a
-failure summary for failed jobs.
+failure summary for failed jobs. Structured explanation
+fields are included when available:
+
+- `decision_explanation`
+- `failure_explanation`
+- `job_failure_summary` is retained as a compatibility field
 
 Example response shape:
 
@@ -96,7 +113,21 @@ Example response shape:
     "vmaf_score": 93.1
   },
   "job_logs": [],
-  "job_failure_summary": null
+  "job_failure_summary": null,
+  "decision_explanation": {
+    "category": "decision",
+    "code": "transcode_recommended",
+    "summary": "Transcode recommended",
+    "detail": "Alchemist determined the file should be transcoded based on the current codec and measured efficiency.",
+    "operator_guidance": null,
+    "measured": {
+      "target_codec": "av1",
+      "current_codec": "h264",
+      "bpp": "0.1200"
+    },
+    "legacy_reason": "transcode_recommended|target_codec=av1,current_codec=h264,bpp=0.1200"
+  },
+  "failure_explanation": null
 }
 ```
@@ -373,3 +404,7 @@ Example:
 event: progress
 data: {"job_id":42,"percentage":61.4,"time":"00:11:32"}
 ```
+
+`decision` events include the legacy `reason` plus an
+optional structured `explanation` object with the same shape
+used by the jobs API.
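As a sketch (not part of this commit), a client consuming the SSE stream above needs to split each event block into its `event:` and `data:` fields before interpreting `decision` payloads. A minimal, std-only version of that split:

```rust
// Sketch only: collect the `event:` and `data:` fields from one
// SSE event block. Per the SSE format, multiple `data:` lines are
// joined with newlines; a single leading space is trimmed here.
fn parse_sse_event(block: &str) -> (Option<String>, String) {
    let mut event = None;
    let mut data = String::new();
    for line in block.lines() {
        if let Some(rest) = line.strip_prefix("event:") {
            event = Some(rest.trim().to_string());
        } else if let Some(rest) = line.strip_prefix("data:") {
            if !data.is_empty() {
                data.push('\n');
            }
            data.push_str(rest.trim_start());
        }
    }
    (event, data)
}

fn main() {
    let block = "event: progress\ndata: {\"job_id\":42,\"percentage\":61.4}";
    let (event, data) = parse_sse_event(block);
    assert_eq!(event.as_deref(), Some("progress"));
    assert!(data.contains("job_id"));
}
```

The `data` payload would then be fed to a JSON parser; the `explanation` object, when present, carries the same fields as the jobs API.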
@@ -51,9 +51,22 @@ Database location:
 | `id` | INTEGER | Primary key |
 | `job_id` | INTEGER | Foreign key to `jobs.id` |
 | `action` | TEXT | Planner or post-encode action |
-| `reason` | TEXT | Machine-readable reason string |
+| `reason` | TEXT | Legacy machine-readable reason string retained for compatibility |
+| `reason_code` | TEXT | Stable structured explanation code |
+| `reason_payload_json` | TEXT | Serialized structured explanation payload |
 | `created_at` | DATETIME | Insert timestamp |
 
+## `job_failure_explanations`
+
+| Column | Type | Description |
+|--------|------|-------------|
+| `job_id` | INTEGER | Primary key and foreign key to `jobs.id` |
+| `legacy_summary` | TEXT | Legacy failure summary retained for compatibility |
+| `code` | TEXT | Stable structured failure code |
+| `payload_json` | TEXT | Serialized structured failure explanation payload |
+| `created_at` | TEXT | Insert timestamp |
+| `updated_at` | TEXT | Last update timestamp |
+
 ## `users`
 
 | Column | Type | Description |
@@ -22,6 +22,16 @@ Enter your Gotify server URL and app token.
 Alchemist sends a JSON POST to any URL you configure.
 Works with Home Assistant, ntfy, Apprise, and custom scripts.
 
+Webhook payloads now include structured explanation data
+when relevant:
+
+- `decision_explanation`
+- `failure_explanation`
+
+Discord and Gotify targets use the same structured
+summary/detail/guidance internally, but render them as
+human-readable message text instead of raw JSON.
+
 ## Troubleshooting
 
 If notifications aren't arriving:
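The Discord/Gotify rendering described above could look roughly like the following std-only sketch (not the renderer from this commit; the struct and function names here are hypothetical, with fields mirroring the documented payload):

```rust
// Sketch only: turn a structured explanation into human-readable
// message text, as the Discord/Gotify targets are described to do.
// `ExplanationView` and `render_message` are illustrative names.
struct ExplanationView {
    summary: String,
    detail: String,
    operator_guidance: Option<String>,
}

fn render_message(e: &ExplanationView) -> String {
    let mut msg = format!("{}\n{}", e.summary, e.detail);
    if let Some(guidance) = &e.operator_guidance {
        msg.push_str("\nNext step: ");
        msg.push_str(guidance);
    }
    msg
}

fn main() {
    let e = ExplanationView {
        summary: "Transcode recommended".into(),
        detail: "The file is a strong candidate for re-encoding.".into(),
        operator_guidance: None,
    };
    assert_eq!(
        render_message(&e),
        "Transcode recommended\nThe file is a strong candidate for re-encoding."
    );
}
```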
@@ -3,9 +3,20 @@ title: Skip Decisions
 description: Why Alchemist skipped a file and what each reason means.
 ---
 
-Every skipped file has a machine-readable reason string
-recorded in the database and shown as plain English in the
-job detail panel.
+Every skipped file now has a structured explanation object
+as the primary source of truth. The legacy machine-readable
+reason string is still retained for compatibility and
+debugging during rollout.
+
+Structured explanation fields:
+
+- `category`
+- `code`
+- `summary`
+- `detail`
+- `operator_guidance`
+- `measured`
+- `legacy_reason`
 
 ## Skip reasons
 
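The `legacy_reason` strings preserved above follow a `code|key=value,key=value` shape (as seen in the jobs API example). A minimal, std-only sketch of that decomposition — the commit's real parser additionally coerces the values to JSON types:

```rust
use std::collections::BTreeMap;

// Sketch only: split a legacy reason string such as
// "bpp_below_threshold|bpp=0.043,threshold=0.050" into its code
// and a map of measured values. Values stay raw strings here.
fn split_reason(reason: &str) -> (String, BTreeMap<String, String>) {
    match reason.trim().split_once('|') {
        Some((code, params)) => {
            let measured = params
                .split(',')
                .filter_map(|pair| pair.trim().split_once('='))
                .map(|(k, v)| (k.trim().to_string(), v.trim().to_string()))
                .collect();
            (code.trim().to_string(), measured)
        }
        None => (reason.trim().to_string(), BTreeMap::new()),
    }
}

fn main() {
    let (code, measured) = split_reason("bpp_below_threshold|bpp=0.043,threshold=0.050");
    assert_eq!(code, "bpp_below_threshold");
    assert_eq!(measured.get("bpp").map(String::as_str), Some("0.043"));
}
```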
22	migrations/20260404123000_decision_explanations.sql (new file)
@@ -0,0 +1,22 @@
ALTER TABLE decisions ADD COLUMN reason_code TEXT;
ALTER TABLE decisions ADD COLUMN reason_payload_json TEXT;

CREATE TABLE IF NOT EXISTS job_failure_explanations (
    job_id INTEGER PRIMARY KEY REFERENCES jobs(id) ON DELETE CASCADE,
    legacy_summary TEXT,
    code TEXT NOT NULL,
    payload_json TEXT NOT NULL,
    created_at TEXT NOT NULL DEFAULT (datetime('now')),
    updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);

CREATE INDEX IF NOT EXISTS idx_decisions_reason_code
    ON decisions(reason_code);

CREATE INDEX IF NOT EXISTS idx_job_failure_explanations_code
    ON job_failure_explanations(code);

INSERT OR REPLACE INTO schema_info (key, value) VALUES
    ('schema_version', '6'),
    ('min_compatible_version', '0.2.5'),
    ('last_updated', datetime('now'));
204	src/db.rs
@@ -1,4 +1,8 @@
 use crate::error::{AlchemistError, Result};
+use crate::explanations::{
+    Explanation, decision_from_legacy, explanation_from_json, explanation_to_json,
+    failure_from_summary,
+};
 use chrono::{DateTime, Utc};
 use serde::{Deserialize, Serialize};
 use sha2::{Digest, Sha256};
@@ -61,6 +65,7 @@ pub enum AlchemistEvent {
         job_id: i64,
         action: String,
         reason: String,
+        explanation: Option<Explanation>,
     },
     Log {
         level: String,
@@ -86,6 +91,7 @@ pub enum JobEvent {
         job_id: i64,
         action: String,
         reason: String,
+        explanation: Option<Explanation>,
     },
     Log {
         level: String,
@@ -137,10 +143,12 @@ impl From<JobEvent> for AlchemistEvent {
                 job_id,
                 action,
                 reason,
+                explanation,
             } => AlchemistEvent::Decision {
                 job_id,
                 action,
                 reason,
+                explanation,
             },
             JobEvent::Log {
                 level,
@@ -175,10 +183,12 @@ impl From<AlchemistEvent> for JobEvent {
                 job_id,
                 action,
                 reason,
+                explanation,
             } => JobEvent::Decision {
                 job_id,
                 action,
                 reason,
+                explanation,
             },
             AlchemistEvent::Log {
                 level,
@@ -583,9 +593,26 @@ pub struct Decision {
     pub job_id: i64,
     pub action: String, // "encode", "skip", "reject"
     pub reason: String,
+    pub reason_code: Option<String>,
+    pub reason_payload_json: Option<String>,
     pub created_at: DateTime<Utc>,
 }
 
+#[derive(Debug, Clone, sqlx::FromRow)]
+struct DecisionRecord {
+    job_id: i64,
+    action: String,
+    reason: String,
+    reason_payload_json: Option<String>,
+}
+
+#[derive(Debug, Clone, sqlx::FromRow)]
+struct FailureExplanationRecord {
+    legacy_summary: Option<String>,
+    code: String,
+    payload_json: String,
+}
+
 /// Default timeout for potentially slow database queries
 const QUERY_TIMEOUT: Duration = Duration::from_secs(5);
 
@@ -799,17 +826,33 @@ impl Db {
         Ok(())
     }
 
-    pub async fn add_decision(&self, job_id: i64, action: &str, reason: &str) -> Result<()> {
-        sqlx::query("INSERT INTO decisions (job_id, action, reason) VALUES (?, ?, ?)")
-            .bind(job_id)
-            .bind(action)
-            .bind(reason)
-            .execute(&self.pool)
-            .await?;
+    pub async fn add_decision_with_explanation(
+        &self,
+        job_id: i64,
+        action: &str,
+        explanation: &Explanation,
+    ) -> Result<()> {
+        sqlx::query(
+            "INSERT INTO decisions (job_id, action, reason, reason_code, reason_payload_json)
+             VALUES (?, ?, ?, ?, ?)",
+        )
+        .bind(job_id)
+        .bind(action)
+        .bind(&explanation.legacy_reason)
+        .bind(&explanation.code)
+        .bind(explanation_to_json(explanation))
+        .execute(&self.pool)
+        .await?;
 
         Ok(())
     }
 
+    pub async fn add_decision(&self, job_id: i64, action: &str, reason: &str) -> Result<()> {
+        let explanation = decision_from_legacy(action, reason);
+        self.add_decision_with_explanation(job_id, action, &explanation)
+            .await
+    }
+
     pub async fn get_all_jobs(&self) -> Result<Vec<Job>> {
         let pool = &self.pool;
         timed_query("get_all_jobs", || async {
@@ -878,7 +921,11 @@ impl Db {
 
     pub async fn get_job_decision(&self, job_id: i64) -> Result<Option<Decision>> {
         let decision = sqlx::query_as::<_, Decision>(
-            "SELECT id, job_id, action, reason, created_at FROM decisions WHERE job_id = ? ORDER BY created_at DESC LIMIT 1",
+            "SELECT id, job_id, action, reason, reason_code, reason_payload_json, created_at
+             FROM decisions
+             WHERE job_id = ?
+             ORDER BY created_at DESC, id DESC
+             LIMIT 1",
         )
         .bind(job_id)
         .fetch_optional(&self.pool)
@@ -887,6 +934,104 @@ impl Db {
 
         Ok(decision)
     }
 
+    pub async fn get_job_decision_explanation(&self, job_id: i64) -> Result<Option<Explanation>> {
+        let row = sqlx::query_as::<_, DecisionRecord>(
+            "SELECT job_id, action, reason, reason_payload_json
+             FROM decisions
+             WHERE job_id = ?
+             ORDER BY created_at DESC, id DESC
+             LIMIT 1",
+        )
+        .bind(job_id)
+        .fetch_optional(&self.pool)
+        .await?;
+
+        Ok(row.map(|row| {
+            row.reason_payload_json
+                .as_deref()
+                .and_then(explanation_from_json)
+                .unwrap_or_else(|| decision_from_legacy(&row.action, &row.reason))
+        }))
+    }
+
+    pub async fn get_job_decision_explanations(
+        &self,
+        job_ids: &[i64],
+    ) -> Result<HashMap<i64, Explanation>> {
+        if job_ids.is_empty() {
+            return Ok(HashMap::new());
+        }
+
+        let mut qb = sqlx::QueryBuilder::<sqlx::Sqlite>::new(
+            "SELECT d.job_id, d.action, d.reason, d.reason_payload_json
+             FROM decisions d
+             INNER JOIN (SELECT job_id, MAX(id) AS max_id FROM decisions WHERE job_id IN (",
+        );
+        let mut separated = qb.separated(", ");
+        for job_id in job_ids {
+            separated.push_bind(job_id);
+        }
+        separated.push_unseparated(") GROUP BY job_id) latest ON latest.max_id = d.id");
+
+        let rows = qb
+            .build_query_as::<DecisionRecord>()
+            .fetch_all(&self.pool)
+            .await?;
+
+        Ok(rows
+            .into_iter()
+            .map(|row| {
+                let explanation = row
+                    .reason_payload_json
+                    .as_deref()
+                    .and_then(explanation_from_json)
+                    .unwrap_or_else(|| decision_from_legacy(&row.action, &row.reason));
+                (row.job_id, explanation)
+            })
+            .collect())
+    }
+
+    pub async fn upsert_job_failure_explanation(
+        &self,
+        job_id: i64,
+        explanation: &Explanation,
+    ) -> Result<()> {
+        sqlx::query(
+            "INSERT INTO job_failure_explanations (job_id, legacy_summary, code, payload_json, updated_at)
+             VALUES (?, ?, ?, ?, datetime('now'))
+             ON CONFLICT(job_id) DO UPDATE SET
+                 legacy_summary = excluded.legacy_summary,
+                 code = excluded.code,
+                 payload_json = excluded.payload_json,
+                 updated_at = datetime('now')",
+        )
+        .bind(job_id)
+        .bind(&explanation.legacy_reason)
+        .bind(&explanation.code)
+        .bind(explanation_to_json(explanation))
+        .execute(&self.pool)
+        .await?;
+
+        Ok(())
+    }
+
+    pub async fn get_job_failure_explanation(&self, job_id: i64) -> Result<Option<Explanation>> {
+        let row = sqlx::query_as::<_, FailureExplanationRecord>(
+            "SELECT legacy_summary, code, payload_json
+             FROM job_failure_explanations
+             WHERE job_id = ?",
+        )
+        .bind(job_id)
+        .fetch_optional(&self.pool)
+        .await?;
+
+        Ok(row.map(|row| {
+            explanation_from_json(&row.payload_json).unwrap_or_else(|| {
+                failure_from_summary(row.legacy_summary.as_deref().unwrap_or(row.code.as_str()))
+            })
+        }))
+    }
+
     pub async fn get_stats(&self) -> Result<serde_json::Value> {
         let pool = &self.pool;
         timed_query("get_stats", || async {
@@ -2801,4 +2946,47 @@ mod tests {
         let _ = std::fs::remove_file(db_path);
         Ok(())
     }
+
+    #[tokio::test]
+    async fn legacy_decision_rows_still_parse_into_structured_explanations()
+    -> std::result::Result<(), Box<dyn std::error::Error>> {
+        let mut db_path = std::env::temp_dir();
+        let token: u64 = rand::random();
+        db_path.push(format!("alchemist_legacy_decision_test_{}.db", token));
+
+        let db = Db::new(db_path.to_string_lossy().as_ref()).await?;
+        let _ = db
+            .enqueue_job(
+                Path::new("legacy-input.mkv"),
+                Path::new("legacy-output.mkv"),
+                SystemTime::UNIX_EPOCH,
+            )
+            .await?;
+        let job = db
+            .get_job_by_input_path("legacy-input.mkv")
+            .await?
+            .ok_or_else(|| std::io::Error::other("missing job"))?;
+
+        sqlx::query(
+            "INSERT INTO decisions (job_id, action, reason, reason_code, reason_payload_json)
+             VALUES (?, 'skip', 'bpp_below_threshold|bpp=0.043,threshold=0.050', NULL, NULL)",
+        )
+        .bind(job.id)
+        .execute(&db.pool)
+        .await?;
+
+        let explanation = db
+            .get_job_decision_explanation(job.id)
+            .await?
+            .ok_or_else(|| std::io::Error::other("missing explanation"))?;
+        assert_eq!(explanation.code, "bpp_below_threshold");
+        assert_eq!(
+            explanation.measured.get("bpp"),
+            Some(&serde_json::json!(0.043))
+        );
+
+        drop(db);
+        let _ = std::fs::remove_file(db_path);
+        Ok(())
+    }
 }
753	src/explanations.rs (new file)
@@ -0,0 +1,753 @@
use serde::{Deserialize, Serialize};
use serde_json::{Value, json};
use std::collections::BTreeMap;

#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum ExplanationCategory {
    Decision,
    Failure,
}

#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct Explanation {
    pub category: ExplanationCategory,
    pub code: String,
    pub summary: String,
    pub detail: String,
    pub operator_guidance: Option<String>,
    pub measured: BTreeMap<String, Value>,
    pub legacy_reason: String,
}

impl Explanation {
    pub fn new(
        category: ExplanationCategory,
        code: impl Into<String>,
        summary: impl Into<String>,
        detail: impl Into<String>,
        operator_guidance: Option<String>,
        legacy_reason: impl Into<String>,
    ) -> Self {
        Self {
            category,
            code: code.into(),
            summary: summary.into(),
            detail: detail.into(),
            operator_guidance,
            measured: BTreeMap::new(),
            legacy_reason: legacy_reason.into(),
        }
    }

    pub fn with_measured(mut self, key: impl Into<String>, value: Value) -> Self {
        self.measured.insert(key.into(), value);
        self
    }
}
fn split_legacy_reason(reason: &str) -> (String, BTreeMap<String, Value>) {
    let trimmed = reason.trim();
    if let Some((code, raw_params)) = trimmed.split_once('|') {
        let mut measured = BTreeMap::new();
        for pair in raw_params.split(',') {
            let pair = pair.trim();
            if pair.is_empty() {
                continue;
            }
            if let Some((key, raw_value)) = pair.split_once('=') {
                measured.insert(key.trim().to_string(), parse_primitive(raw_value.trim()));
            }
        }
        (code.trim().to_string(), measured)
    } else {
        (trimmed.to_string(), BTreeMap::new())
    }
}

fn parse_primitive(value: &str) -> Value {
    if value.eq_ignore_ascii_case("null") {
        return Value::Null;
    }
    if value.eq_ignore_ascii_case("true") {
        return Value::Bool(true);
    }
    if value.eq_ignore_ascii_case("false") {
        return Value::Bool(false);
    }
    if let Ok(parsed) = value.parse::<i64>() {
        return json!(parsed);
    }
    if let Ok(parsed) = value.parse::<f64>() {
        return json!(parsed);
    }
    Value::String(value.to_string())
}
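Aside (not part of the diff): the coercion order `parse_primitive` applies — null, then bool, then integer, then float, then string — can be restated std-only with a local enum in place of `serde_json::Value`:

```rust
// Sketch only: same coercion precedence as `parse_primitive`,
// using a dependency-free enum so the example stands alone.
#[derive(Debug, PartialEq)]
enum Primitive {
    Null,
    Bool(bool),
    Int(i64),
    Float(f64),
    Str(String),
}

fn coerce(value: &str) -> Primitive {
    if value.eq_ignore_ascii_case("null") {
        return Primitive::Null;
    }
    if value.eq_ignore_ascii_case("true") {
        return Primitive::Bool(true);
    }
    if value.eq_ignore_ascii_case("false") {
        return Primitive::Bool(false);
    }
    if let Ok(parsed) = value.parse::<i64>() {
        return Primitive::Int(parsed); // integer is tried before float
    }
    if let Ok(parsed) = value.parse::<f64>() {
        return Primitive::Float(parsed);
    }
    Primitive::Str(value.to_string())
}

fn main() {
    assert_eq!(coerce("42"), Primitive::Int(42));
    assert_eq!(coerce("0.043"), Primitive::Float(0.043));
    assert_eq!(coerce("TRUE"), Primitive::Bool(true));
    assert_eq!(coerce("av1"), Primitive::Str("av1".to_string()));
}
```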
fn measured_string(measured: &BTreeMap<String, Value>, key: &str) -> Option<String> {
    measured.get(key).and_then(|value| match value {
        Value::String(value) => Some(value.clone()),
        Value::Number(value) => Some(value.to_string()),
        Value::Bool(value) => Some(value.to_string()),
        Value::Null => None,
        _ => None,
    })
}

fn measured_f64(measured: &BTreeMap<String, Value>, key: &str) -> Option<f64> {
    measured.get(key).and_then(|value| match value {
        Value::Number(value) => value.as_f64(),
        Value::String(value) => value.parse::<f64>().ok(),
        _ => None,
    })
}

fn measured_i64(measured: &BTreeMap<String, Value>, key: &str) -> Option<i64> {
    measured.get(key).and_then(|value| match value {
        Value::Number(value) => value.as_i64(),
        Value::String(value) => value.parse::<i64>().ok(),
        _ => None,
    })
}

fn action_verb(action: &str) -> &'static str {
    match action {
        "remux" => "remux",
        "reject" => "reject",
        "encode" | "transcode" => "transcode",
        _ => "decision",
    }
}
pub fn explanation_to_json(explanation: &Explanation) -> String {
    serde_json::to_string(explanation).unwrap_or_else(|_| "{}".to_string())
}

pub fn explanation_from_json(payload: &str) -> Option<Explanation> {
    serde_json::from_str(payload).ok()
}

pub fn decision_from_legacy(action: &str, legacy_reason: &str) -> Explanation {
    let (legacy_code, measured) = split_legacy_reason(legacy_reason);

    if legacy_reason == "Output path matches input path" {
        return Explanation::new(
            ExplanationCategory::Decision,
            "output_path_matches_input",
            "Output would overwrite source",
            "The configured output path is the same as the source file. Alchemist refused to proceed to avoid overwriting the original file.",
            Some(
                "Go to Settings -> Files and configure a different output suffix or output folder."
                    .to_string(),
            ),
            legacy_reason,
        );
    }

    if legacy_reason == "Output already exists" {
        return Explanation::new(
            ExplanationCategory::Decision,
            "output_already_exists",
            "Output file already exists",
            "A transcoded version of this file already exists at the planned output path, so Alchemist skipped it to avoid duplicating work.",
            Some("Delete the existing output file if you want to run the job again.".to_string()),
            legacy_reason,
        );
    }

    if legacy_reason == "H.264 source prioritized for transcode" {
        return Explanation::new(
            ExplanationCategory::Decision,
            "transcode_h264_source",
            "H.264 source prioritized",
            "This file is H.264, so Alchemist prioritized it for transcoding because H.264 sources are often the easiest place to reclaim storage.",
            None,
            legacy_reason,
        )
        .with_measured("current_codec", json!("h264"));
    }

    if legacy_reason.starts_with("Ready for ") && legacy_reason.contains(" transcode") {
        return Explanation::new(
            ExplanationCategory::Decision,
            "transcode_recommended",
            "Transcode recommended",
            "Alchemist determined this file is a strong candidate for transcoding based on the current codec and measured efficiency.",
            None,
            legacy_reason,
        );
    }

    if legacy_reason == "No suitable encoder available" {
        return Explanation::new(
            ExplanationCategory::Decision,
            "no_suitable_encoder",
            "No suitable encoder available",
            "No encoder was available for the requested output codec under the current hardware and fallback policy.",
            Some("Check Settings -> Hardware, enable CPU fallback, or verify that the expected GPU encoder is available.".to_string()),
            legacy_reason,
        );
    }

    if legacy_reason == "No available encoders for current hardware policy" {
        return Explanation::new(
            ExplanationCategory::Decision,
            "no_available_encoders",
            "No encoders available",
            "The current hardware policy left Alchemist with no available encoders for this job.",
            Some(
                "Check Settings -> Hardware and verify CPU encoding or fallback policy."
                    .to_string(),
            ),
            legacy_reason,
        );
    }
    if legacy_reason.starts_with("Preferred codec ")
        && legacy_reason.ends_with(" unavailable and fallback disabled")
    {
        let codec = legacy_reason
            .trim_start_matches("Preferred codec ")
            .trim_end_matches(" unavailable and fallback disabled");
        return Explanation::new(
            ExplanationCategory::Decision,
            "preferred_codec_unavailable_fallback_disabled",
            "Preferred encoder unavailable",
            format!(
                "The preferred codec ({codec}) is not available and CPU fallback is disabled, so Alchemist did not proceed."
            ),
            Some("Go to Settings -> Hardware and enable CPU fallback, or verify your preferred GPU encoder is available.".to_string()),
            legacy_reason,
        )
        .with_measured("codec", json!(codec));
    }

    match legacy_code.as_str() {
        "analysis_failed" => Explanation::new(
            ExplanationCategory::Decision,
            "analysis_failed",
            "File could not be analyzed",
            format!(
                "FFprobe failed to read this file. It may be corrupt, incomplete, or in an unsupported format. Error: {}",
                measured_string(&measured, "error").unwrap_or_else(|| "unknown".to_string())
            ),
            Some("Try playing the file in a media player or run Library Doctor to check for corruption.".to_string()),
            legacy_reason,
        )
        .with_measured(
            "error",
            measured.get("error").cloned().unwrap_or_else(|| json!("unknown")),
        ),
        "planning_failed" => Explanation::new(
            ExplanationCategory::Decision,
            "planning_failed",
            "Transcode plan could not be created",
            format!(
                "An internal planning error occurred while preparing this job. Error: {}",
                measured_string(&measured, "error").unwrap_or_else(|| "unknown".to_string())
            ),
            Some("Check the logs for details. If this repeats, treat it as a planner bug.".to_string()),
            legacy_reason,
        )
        .with_measured(
            "error",
            measured.get("error").cloned().unwrap_or_else(|| json!("unknown")),
        ),
        "already_target_codec" => {
            let codec =
                measured_string(&measured, "codec").unwrap_or_else(|| "target codec".to_string());
            let bit_depth = measured_i64(&measured, "bit_depth");
            let detail = if let Some(bit_depth) = bit_depth {
                format!("This file is already encoded as {codec} at {bit_depth}-bit depth. Re-encoding it would waste time and could reduce quality.")
            } else {
                format!("This file is already encoded as {codec}. Re-encoding it would waste time and could reduce quality.")
            };

            Explanation::new(
                ExplanationCategory::Decision,
                "already_target_codec",
                "Already in target format",
                detail,
                None,
                legacy_reason,
            )
            .with_measured("codec", json!(codec))
            .with_measured("bit_depth", bit_depth.map_or(Value::Null, |value| json!(value)))
        }
        "already_target_codec_wrong_container" => {
            let container =
                measured_string(&measured, "container").unwrap_or_else(|| "mp4".to_string());
            let target_extension =
                measured_string(&measured, "target_extension").unwrap_or_else(|| "mkv".to_string());
            Explanation::new(
                ExplanationCategory::Decision,
                "already_target_codec_wrong_container",
                "Target codec, wrong container",
                format!(
                    "The file is already in the target codec but wrapped in a {container} container. Alchemist will remux it to {target_extension} without re-encoding."
                ),
                None,
                legacy_reason,
            )
            .with_measured("container", json!(container))
            .with_measured("target_extension", json!(target_extension))
        }
        "bpp_below_threshold" => Explanation::new(
            ExplanationCategory::Decision,
            "bpp_below_threshold",
            "Already efficiently compressed",
            format!(
                "Bits-per-pixel ({:.3}) is below the configured threshold ({:.3}). This file is already efficiently compressed, so transcoding would likely save very little space.",
                measured_f64(&measured, "bpp").unwrap_or_default(),
                measured_f64(&measured, "threshold").unwrap_or_default()
            ),
            Some("Lower the BPP threshold in Settings -> Transcoding if you want more aggressive re-encoding.".to_string()),
            legacy_reason,
        )
        .with_measured("bpp", measured.get("bpp").cloned().unwrap_or_else(|| json!(0.0)))
        .with_measured(
            "threshold",
            measured.get("threshold").cloned().unwrap_or_else(|| json!(0.0)),
        ),
        "below_min_file_size" => Explanation::new(
            ExplanationCategory::Decision,
            "below_min_file_size",
            "File too small to process",
            format!(
                "File size ({} MB) is below the minimum threshold ({} MB), so the transcoding overhead is not worth it.",
                measured_i64(&measured, "size_mb").unwrap_or_default(),
                measured_i64(&measured, "threshold_mb").unwrap_or_default()
            ),
            Some("Lower the minimum file size threshold in Settings -> Transcoding if you want smaller files processed.".to_string()),
            legacy_reason,
        )
        .with_measured("size_mb", measured.get("size_mb").cloned().unwrap_or_else(|| json!(0)))
        .with_measured(
            "threshold_mb",
            measured.get("threshold_mb").cloned().unwrap_or_else(|| json!(0)),
        ),
        "size_reduction_insufficient" => Explanation::new(
            ExplanationCategory::Decision,
            "size_reduction_insufficient",
            "Not enough space would be saved",
            format!(
                "The predicted or measured size reduction ({:.3}) is below the required threshold ({:.3}), so Alchemist rejected the output as not worthwhile.",
                measured_f64(&measured, "reduction")
                    .or_else(|| measured_f64(&measured, "predicted"))
                    .unwrap_or_default(),
                measured_f64(&measured, "threshold").unwrap_or_default(),
            ),
            Some("Lower the size reduction threshold in Settings -> Transcoding if you want to keep smaller wins.".to_string()),
            legacy_reason,
        )
        .with_measured(
            "reduction",
            measured
                .get("reduction")
                .or_else(|| measured.get("predicted"))
                .cloned()
                .unwrap_or_else(|| json!(0.0)),
        )
        .with_measured(
            "threshold",
            measured.get("threshold").cloned().unwrap_or_else(|| json!(0.0)),
        )
        .with_measured(
            "output_size",
            measured.get("output_size").cloned().unwrap_or(Value::Null),
        ),
        "no_available_encoders" => Explanation::new(
            ExplanationCategory::Decision,
            "no_available_encoders",
            "No encoders available",
            "The current hardware policy left Alchemist with no available encoders for this job.",
            Some(
                "Check Settings -> Hardware and verify CPU encoding or fallback policy."
                    .to_string(),
            ),
            legacy_reason,
        )
        .with_measured(
            "requested_codec",
            measured.get("requested_codec").cloned().unwrap_or(Value::Null),
        )
        .with_measured(
            "allow_cpu_fallback",
            measured.get("allow_cpu_fallback").cloned().unwrap_or(Value::Null),
        )
        .with_measured(
            "allow_cpu_encoding",
            measured.get("allow_cpu_encoding").cloned().unwrap_or(Value::Null),
        ),
        "preferred_codec_unavailable_fallback_disabled" => Explanation::new(
            ExplanationCategory::Decision,
            "preferred_codec_unavailable_fallback_disabled",
            "Preferred encoder unavailable",
            format!(
                "The preferred codec ({}) is not available and CPU fallback is disabled in settings.",
                measured_string(&measured, "codec").unwrap_or_else(|| "target codec".to_string())
            ),
            Some("Go to Settings -> Hardware and enable CPU fallback, or check that your GPU encoder is working correctly.".to_string()),
            legacy_reason,
        )
        .with_measured("codec", measured.get("codec").cloned().unwrap_or(Value::Null)),
        "no_suitable_encoder" => Explanation::new(
            ExplanationCategory::Decision,
            "no_suitable_encoder",
            "No suitable encoder available",
            "No encoder was found for the requested output codec under the current hardware and fallback policy.".to_string(),
            Some("Check Settings -> Hardware. Enable CPU fallback, or verify the expected GPU encoder is available.".to_string()),
            legacy_reason,
        ),
        "incomplete_metadata" => Explanation::new(
            ExplanationCategory::Decision,
            "incomplete_metadata",
            "Missing file metadata",
            format!(
                "FFprobe could not determine the required {} metadata, so Alchemist cannot make a defensible transcode decision.",
                measured_string(&measured, "missing").unwrap_or_else(|| "file".to_string())
            ),
            Some("Run Library Doctor or inspect the file manually to confirm it is readable.".to_string()),
            legacy_reason,
        )
        .with_measured(
            "missing",
            measured.get("missing").cloned().unwrap_or_else(|| json!("metadata")),
        ),
        "quality_below_threshold" => Explanation::new(
            ExplanationCategory::Decision,
            "quality_below_threshold",
            "Quality check failed",
            "The output failed the configured quality gate, so Alchemist reverted it instead of promoting a lower-quality file.".to_string(),
            Some("Adjust the quality thresholds in Settings -> Quality if this is stricter than you want.".to_string()),
            legacy_reason,
        )
        .with_measured("metric", measured.get("metric").cloned().unwrap_or_else(|| json!("vmaf")))
        .with_measured("score", measured.get("score").cloned().unwrap_or(Value::Null))
        .with_measured("threshold", measured.get("threshold").cloned().unwrap_or(Value::Null)),
        "transcode_h264_source" => Explanation::new(
            ExplanationCategory::Decision,
            "transcode_h264_source",
            "H.264 source prioritized",
            "The file is H.264, which is typically a strong candidate for reclaiming space, so Alchemist prioritized it for transcoding.".to_string(),
            None,
            legacy_reason,
        )
        .with_measured(
            "current_codec",
            measured.get("current_codec").cloned().unwrap_or_else(|| json!("h264")),
        ),
        "transcode_recommended" => Explanation::new(
            ExplanationCategory::Decision,
            "transcode_recommended",
            "Transcode recommended",
            "Alchemist determined the file should be transcoded based on the target codec, current codec, and measured efficiency.".to_string(),
            None,
            legacy_reason,
        )
        .with_measured("target_codec", measured.get("target_codec").cloned().unwrap_or(Value::Null))
        .with_measured("current_codec", measured.get("current_codec").cloned().unwrap_or(Value::Null))
        .with_measured("bpp", measured.get("bpp").cloned().unwrap_or(Value::Null)),
        "remux_mp4_to_mkv_stream_copy" => Explanation::new(
            ExplanationCategory::Decision,
            "remux_mp4_to_mkv_stream_copy",
            "Remux only",
            "The file can be moved into the target container with stream copy, so Alchemist will remux it without re-encoding.".to_string(),
            None,
            legacy_reason,
        ),
        _ => Explanation::new(
|
||||
ExplanationCategory::Decision,
|
||||
format!("{}_{}", action_verb(action), legacy_code.to_ascii_lowercase().replace([' ', ':', '(', ')', '.'], "_")),
|
||||
"Decision recorded",
|
||||
legacy_reason.to_string(),
|
||||
None,
|
||||
legacy_reason,
|
||||
),
|
||||
}
|
||||
}

pub fn failure_from_summary(summary: &str) -> Explanation {
    let normalized = summary.to_ascii_lowercase();

    if normalized.contains("cancelled") {
        return Explanation::new(
            ExplanationCategory::Failure,
            "cancelled",
            "Job was cancelled",
            "The job was cancelled before processing completed. The original file is unchanged.",
            None,
            summary,
        );
    }

    if normalized.contains("no such file or directory") {
        return Explanation::new(
            ExplanationCategory::Failure,
            "source_missing",
            "Source file missing",
            "The source file could not be found. It may have been moved, deleted, or become unavailable.",
            Some(
                "Check that the source path still exists and is readable by Alchemist.".to_string(),
            ),
            summary,
        );
    }

    if normalized.contains("invalid data found")
        || normalized.contains("moov atom not found")
        || normalized.contains("probing failed")
    {
        return Explanation::new(
            ExplanationCategory::Failure,
            "corrupt_or_unreadable_media",
            "Media could not be read",
            "FFmpeg or FFprobe could not read the media successfully. The file may be corrupt, incomplete, or in an unsupported format.",
            Some("Run Library Doctor or try opening the file in a media player to confirm it is readable.".to_string()),
            summary,
        );
    }

    if normalized.contains("permission denied") {
        return Explanation::new(
            ExplanationCategory::Failure,
            "permission_denied",
            "Permission denied",
            "Alchemist does not have permission to read from or write to a required path.",
            Some("Check filesystem permissions and ensure the process user can access the source and output paths.".to_string()),
            summary,
        );
    }

    if normalized.contains("unknown encoder") || normalized.contains("encoder not found") {
        return Explanation::new(
            ExplanationCategory::Failure,
            "encoder_unavailable",
            "Required encoder unavailable",
            "The required encoder is not available in the current FFmpeg build or hardware environment.",
            Some(
                "Check Settings -> Hardware, FFmpeg encoder availability, and fallback settings."
                    .to_string(),
            ),
            summary,
        );
    }

    if normalized.contains("videotoolbox")
        || normalized.contains("vt_compression")
        || normalized.contains("mediaserverd")
        || normalized.contains("no capable devices")
        || normalized.contains("vaapi")
        || normalized.contains("qsv")
        || normalized.contains("amf")
        || normalized.contains("nvenc")
    {
        return Explanation::new(
            ExplanationCategory::Failure,
            "hardware_backend_failure",
            "Hardware backend failed",
            "The selected hardware encoding backend failed during processing.",
            Some("Retry the job, check the hardware probe log, or enable CPU fallback if appropriate.".to_string()),
            summary,
        );
    }

    if normalized.contains("fallback detected")
        || normalized.contains("fallback disabled")
        || normalized.contains("cpu fallback")
    {
        return Explanation::new(
            ExplanationCategory::Failure,
            "fallback_blocked",
            "Fallback blocked by policy",
            "The job could not continue because the required fallback path was disallowed by the current hardware policy.",
            Some("Enable CPU fallback in Settings -> Hardware or make the preferred encoder available.".to_string()),
            summary,
        );
    }

    if normalized.contains("out of memory") || normalized.contains("cannot allocate memory") {
        return Explanation::new(
            ExplanationCategory::Failure,
            "resource_exhausted",
            "System ran out of memory",
            "The system ran out of memory or another required resource during processing.",
            Some("Reduce concurrent jobs, lower workload pressure, or retry on a less loaded machine.".to_string()),
            summary,
        );
    }

    if normalized.contains("planner failed") || normalized.contains("planning_failed") {
        return Explanation::new(
            ExplanationCategory::Failure,
            "planning_failed",
            "Planner failed",
            "An internal error occurred while building the transcode plan.",
            Some(
                "Check the job logs for details. If this repeats, treat it as a planner bug."
                    .to_string(),
            ),
            summary,
        );
    }

    if normalized.contains("analysis_failed") || normalized.contains("ffprobe failed") {
        return Explanation::new(
            ExplanationCategory::Failure,
            "analysis_failed",
            "Analysis failed",
            "An error occurred while analyzing the input media before planning or encoding.",
            Some("Inspect the job logs and verify the media file is readable.".to_string()),
            summary,
        );
    }

    if normalized.contains("finalization failed") || normalized.contains("finalize_failed") {
        return Explanation::new(
            ExplanationCategory::Failure,
            "finalize_failed",
            "Finalization failed",
            "The job encoded or remuxed successfully, but final promotion or verification failed.",
            Some("Inspect filesystem state and job logs before retrying.".to_string()),
            summary,
        );
    }

    if normalized.contains("vmaf")
        || normalized.contains("quality gate failed")
        || normalized.contains("quality check failed")
    {
        return Explanation::new(
            ExplanationCategory::Failure,
            "quality_check_failed",
            "Quality check failed",
            "The output did not pass the configured quality guard, so Alchemist refused to keep it.",
            Some("Adjust the quality thresholds in Settings -> Quality if this is stricter than intended.".to_string()),
            summary,
        );
    }

    if normalized.contains("ffmpeg failed") || normalized.contains("transcode failed") {
        return Explanation::new(
            ExplanationCategory::Failure,
            "unknown_ffmpeg_failure",
            "FFmpeg failed",
            "FFmpeg failed during processing. The logs contain the most specific error details available.",
            Some("Inspect the FFmpeg output in the job logs for the root cause.".to_string()),
            summary,
        );
    }

    Explanation::new(
        ExplanationCategory::Failure,
        "unknown_failure",
        "Failure recorded",
        summary.to_string(),
        Some("Inspect the job logs for additional context.".to_string()),
        summary,
    )
}
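The legacy reason strings matched throughout this file follow a `code|key1=value1,key2=value2` convention. A minimal standalone sketch of that format (with a hypothetical `parse_legacy_reason` helper, not part of the crate's API):

```rust
// Sketch of the legacy reason format `code|k1=v1,k2=v2` used above.
// `parse_legacy_reason` is a hypothetical illustration, not the crate's parser.
fn parse_legacy_reason(reason: &str) -> (String, Vec<(String, String)>) {
    let mut parts = reason.splitn(2, '|');
    // Everything before the first '|' is the stable machine-readable code.
    let code = parts.next().unwrap_or("").to_string();
    // Everything after it is a comma-separated list of measured key=value pairs.
    let measured = parts
        .next()
        .unwrap_or("")
        .split(',')
        .filter_map(|pair| {
            let mut kv = pair.splitn(2, '=');
            Some((kv.next()?.to_string(), kv.next()?.to_string()))
        })
        .collect();
    (code, measured)
}

fn main() {
    let (code, measured) =
        parse_legacy_reason("transcode_recommended|target_codec=av1,current_codec=h264,bpp=0.1200");
    assert_eq!(code, "transcode_recommended");
    assert_eq!(measured[0], ("target_codec".to_string(), "av1".to_string()));
    println!("{code}: {measured:?}");
}
```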

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parses_legacy_decision_payloads() {
        let explanation =
            decision_from_legacy("skip", "bpp_below_threshold|bpp=0.043,threshold=0.050");
        assert_eq!(explanation.code, "bpp_below_threshold");
        assert_eq!(explanation.category, ExplanationCategory::Decision);
        assert_eq!(measured_f64(&explanation.measured, "bpp"), Some(0.043));
    }

    #[test]
    fn parses_failure_summaries() {
        let explanation = failure_from_summary("Transcode failed: Unknown encoder 'missing'");
        assert_eq!(explanation.code, "encoder_unavailable");
        assert_eq!(explanation.category, ExplanationCategory::Failure);
    }

    #[test]
    fn round_trips_json_payload() {
        let explanation = decision_from_legacy(
            "transcode",
            "transcode_recommended|target_codec=av1,current_codec=hevc,bpp=0.120",
        );
        let payload = explanation_to_json(&explanation);
        assert_eq!(explanation_from_json(&payload), Some(explanation));
    }
}
@@ -1,6 +1,7 @@
pub mod config;
pub mod db;
pub mod error;
pub mod explanations;
pub mod media;
pub mod notifications;
pub mod orchestrator;

@@ -590,8 +590,13 @@ impl Pipeline {
            Ok(a) => a,
            Err(e) => {
                let reason = format!("analysis_failed|error={e}");
                let failure_explanation = crate::explanations::failure_from_summary(&reason);
                let _ = self.db.add_log("error", Some(job_id), &reason).await;
                self.db.add_decision(job_id, "skip", &reason).await.ok();
                self.db
                    .upsert_job_failure_explanation(job_id, &failure_explanation)
                    .await
                    .ok();
                self.db
                    .update_job_status(job_id, crate::db::JobState::Failed)
                    .await?;
@@ -622,8 +627,13 @@ impl Pipeline {
            Ok(p) => p,
            Err(e) => {
                let reason = format!("planning_failed|error={e}");
                let failure_explanation = crate::explanations::failure_from_summary(&reason);
                let _ = self.db.add_log("error", Some(job_id), &reason).await;
                self.db.add_decision(job_id, "skip", &reason).await.ok();
                self.db
                    .upsert_job_failure_explanation(job_id, &failure_explanation)
                    .await
                    .ok();
                self.db
                    .update_job_status(job_id, crate::db::JobState::Failed)
                    .await?;
@@ -741,6 +751,11 @@ impl Pipeline {
                let msg = format!("Probing failed: {e}");
                tracing::error!("Job {}: {}", job.id, msg);
                let _ = self.db.add_log("error", Some(job.id), &msg).await;
                let explanation = crate::explanations::failure_from_summary(&msg);
                let _ = self
                    .db
                    .upsert_job_failure_explanation(job.id, &explanation)
                    .await;
                let _ = self
                    .update_job_state(job.id, crate::db::JobState::Failed)
                    .await;
@@ -782,6 +797,11 @@ impl Pipeline {
                let msg = format!("Failed to resolve library profile: {err}");
                tracing::error!("Job {}: {}", job.id, msg);
                let _ = self.db.add_log("error", Some(job.id), &msg).await;
                let explanation = crate::explanations::failure_from_summary(&msg);
                let _ = self
                    .db
                    .upsert_job_failure_explanation(job.id, &explanation)
                    .await;
                let _ = self
                    .update_job_state(job.id, crate::db::JobState::Failed)
                    .await;
@@ -797,6 +817,11 @@ impl Pipeline {
                let msg = format!("Planner failed: {e}");
                tracing::error!("Job {}: {}", job.id, msg);
                let _ = self.db.add_log("error", Some(job.id), &msg).await;
                let explanation = crate::explanations::failure_from_summary(&msg);
                let _ = self
                    .db
                    .upsert_job_failure_explanation(job.id, &explanation)
                    .await;
                let _ = self
                    .update_job_state(job.id, crate::db::JobState::Failed)
                    .await;
@@ -828,17 +853,12 @@ impl Pipeline {
                reason.clone(),
                crate::db::JobState::Encoding,
            ),
            TranscodeDecision::Remux { .. } => {
            TranscodeDecision::Remux { reason } => {
                tracing::info!(
                    "Job {}: Remuxing MP4→MKV (stream copy, no re-encode)",
                    job.id
                );
                (
                    true,
                    "remux",
                    "remux: mp4_to_mkv_stream_copy".to_string(),
                    crate::db::JobState::Remuxing,
                )
                (true, "remux", reason.clone(), crate::db::JobState::Remuxing)
            }
            TranscodeDecision::Skip { reason } => {
                (false, "skip", reason.clone(), crate::db::JobState::Skipped)
@@ -860,11 +880,25 @@ impl Pipeline {
            job.id,
            &reason
        );
        let _ = self.db.add_decision(job.id, action, &reason).await;
        let explanation = crate::explanations::decision_from_legacy(action, &reason);
        let _ = self
            .db
            .add_decision_with_explanation(job.id, action, &explanation)
            .await;
        let _ = self
            .event_channels
            .jobs
            .send(crate::db::JobEvent::Decision {
                job_id: job.id,
                action: action.to_string(),
                reason: explanation.legacy_reason.clone(),
                explanation: Some(explanation.clone()),
            });
        let _ = self.tx.send(crate::db::AlchemistEvent::Decision {
            job_id: job.id,
            action: action.to_string(),
            reason: reason.clone(),
            reason: explanation.legacy_reason.clone(),
            explanation: Some(explanation),
        });

        if self.update_job_state(job.id, next_status).await.is_err() {
@@ -910,6 +944,13 @@ impl Pipeline {
            Ok(result) => {
                if result.fallback_occurred && !plan.allow_fallback {
                    tracing::error!("Job {}: Encoder fallback detected and not allowed.", job.id);
                    let summary = "Encoder fallback detected and not allowed.";
                    let explanation = crate::explanations::failure_from_summary(summary);
                    let _ = self.db.add_log("error", Some(job.id), summary).await;
                    let _ = self
                        .db
                        .upsert_job_failure_explanation(job.id, &explanation)
                        .await;
                    let _ = self
                        .update_job_state(job.id, crate::db::JobState::Failed)
                        .await;
@@ -996,6 +1037,11 @@ impl Pipeline {
                let msg = format!("Transcode failed: {e}");
                tracing::error!("Job {}: {}", job.id, msg);
                let _ = self.db.add_log("error", Some(job.id), &msg).await;
                let explanation = crate::explanations::failure_from_summary(&msg);
                let _ = self
                    .db
                    .upsert_job_failure_explanation(job.id, &explanation)
                    .await;
                let _ = self
                    .update_job_state(job.id, crate::db::JobState::Failed)
                    .await;
@@ -1010,6 +1056,10 @@ impl Pipeline {
            tracing::error!("Failed to update job {} status {:?}: {}", job_id, status, e);
            return Err(e);
        }
        let _ = self
            .event_channels
            .jobs
            .send(crate::db::JobEvent::StateChanged { job_id, status });
        let _ = self
            .tx
            .send(crate::db::AlchemistEvent::JobStateChanged { job_id, status });
@@ -1140,7 +1190,14 @@ impl Pipeline {
                cleanup_temp_subtitle_output(job_id, context.plan).await;
                let _ = self
                    .db
                    .add_decision(job_id, "skip", "Low quality (VMAF)")
                    .add_decision(
                        job_id,
                        "skip",
                        &format!(
                            "quality_below_threshold|metric=vmaf,score={:.1},threshold={:.1}",
                            s, config.quality.min_vmaf_score
                        ),
                    )
                    .await;
                self.update_job_state(job_id, crate::db::JobState::Skipped)
                    .await?;
@@ -1304,6 +1361,11 @@ impl Pipeline {

        let message = format!("Finalization failed: {err}");
        let _ = self.db.add_log("error", Some(job_id), &message).await;
        let failure_explanation = crate::explanations::failure_from_summary(&message);
        let _ = self
            .db
            .upsert_job_failure_explanation(job_id, &failure_explanation)
            .await;
        if let crate::error::AlchemistError::QualityCheckFailed(reason) = err {
            let _ = self.db.add_decision(job_id, "reject", reason).await;
        }

@@ -106,7 +106,12 @@ impl Planner for BasicPlanner {

        if available_encoders.is_empty() {
            return Ok(skip_plan(
                "No available encoders for current hardware policy".to_string(),
                format!(
                    "no_available_encoders|requested_codec={},allow_cpu_fallback={},allow_cpu_encoding={}",
                    requested_codec.as_str(),
                    self.config.hardware.allow_cpu_fallback,
                    self.config.hardware.allow_cpu_encoding
                ),
                container,
                requested_codec,
                self.config.transcode.allow_fallback,
@@ -119,7 +124,7 @@ impl Planner for BasicPlanner {
        {
            return Ok(skip_plan(
                format!(
                    "Preferred codec {} unavailable and fallback disabled",
                    "preferred_codec_unavailable_fallback_disabled|codec={}",
                    requested_codec.as_str()
                ),
                container,
@@ -135,7 +140,10 @@ impl Planner for BasicPlanner {
            self.config.transcode.allow_fallback,
        ) else {
            return Ok(skip_plan(
                "No suitable encoder available".to_string(),
                format!(
                    "no_suitable_encoder|requested_codec={}",
                    requested_codec.as_str()
                ),
                container,
                requested_codec,
                self.config.transcode.allow_fallback,
@@ -364,16 +372,16 @@ fn should_transcode(

    if metadata.codec_name.eq_ignore_ascii_case("h264") {
        return TranscodeDecision::Transcode {
            reason: "H.264 source prioritized for transcode".to_string(),
            reason: "transcode_h264_source|current_codec=h264".to_string(),
        };
    }

    TranscodeDecision::Transcode {
        reason: format!(
            "Ready for {} transcode (Current codec: {}, BPP: {})",
            "transcode_recommended|target_codec={},current_codec={},bpp={}",
            target_codec_str,
            metadata.codec_name,
            bpp.map(|value| format!("{:.4}", value))
            bpp.map(|value| format!("{value:.4}"))
                .unwrap_or_else(|| "unknown".to_string())
        ),
    }
@@ -1117,6 +1125,7 @@ mod tests {
        AnalysisConfidence, AudioStreamMetadata, DynamicRange, MediaMetadata,
        SubtitleStreamMetadata,
    };
    use crate::system::hardware::{BackendCapability, ProbeSummary};

    fn config() -> Config {
        let mut config = Config::default();
@@ -1400,6 +1409,137 @@ mod tests {
        assert!(matches!(decision, TranscodeDecision::Skip { .. }));
    }

    #[test]
    fn already_target_codec_reason_is_stable() {
        let decision = should_transcode(&analysis(), &config(), OutputCodec::Hevc, "mkv");
        let TranscodeDecision::Skip { reason } = decision else {
            panic!("expected skip decision");
        };
        let explanation = crate::explanations::decision_from_legacy("skip", &reason);
        assert_eq!(explanation.code, "already_target_codec");
        assert_eq!(
            explanation.measured.get("codec"),
            Some(&serde_json::json!("hevc"))
        );
    }

    #[test]
    fn remux_reason_is_stable() {
        let mut source = analysis();
        source.metadata.container = "mp4".to_string();
        let decision = should_transcode(&source, &config(), OutputCodec::Hevc, "mkv");
        let TranscodeDecision::Remux { reason } = decision else {
            panic!("expected remux decision");
        };
        let explanation = crate::explanations::decision_from_legacy("remux", &reason);
        assert_eq!(explanation.code, "already_target_codec_wrong_container");
        assert_eq!(
            explanation.measured.get("target_extension"),
            Some(&serde_json::json!("mkv"))
        );
    }

    #[test]
    fn bpp_threshold_reason_is_stable() {
        let mut source = analysis();
        source.metadata.codec_name = "mpeg4".to_string();
        source.metadata.bit_depth = Some(8);
        source.metadata.video_bitrate_bps = Some(1_000_000);
        let decision = should_transcode(&source, &config(), OutputCodec::Av1, "mkv");
        let TranscodeDecision::Skip { reason } = decision else {
            panic!("expected skip decision");
        };
        let explanation = crate::explanations::decision_from_legacy("skip", &reason);
        assert_eq!(explanation.code, "bpp_below_threshold");
    }

    #[test]
    fn min_file_size_reason_is_stable() {
        let mut source = analysis();
        source.metadata.codec_name = "mpeg4".to_string();
        source.metadata.bit_depth = Some(8);
        source.metadata.size_bytes = 20 * 1024 * 1024;
        let decision = should_transcode(&source, &config(), OutputCodec::Av1, "mkv");
        let TranscodeDecision::Skip { reason } = decision else {
            panic!("expected skip decision");
        };
        let explanation = crate::explanations::decision_from_legacy("skip", &reason);
        assert_eq!(explanation.code, "below_min_file_size");
    }

    #[test]
    fn incomplete_metadata_reason_is_stable() {
        let mut source = analysis();
        source.metadata.codec_name = "mpeg4".to_string();
        source.metadata.bit_depth = Some(8);
        source.metadata.width = 0;
        let decision = should_transcode(&source, &config(), OutputCodec::Av1, "mkv");
        let TranscodeDecision::Skip { reason } = decision else {
            panic!("expected skip decision");
        };
        let explanation = crate::explanations::decision_from_legacy("skip", &reason);
        assert_eq!(explanation.code, "incomplete_metadata");
    }

    #[tokio::test]
    async fn no_available_encoders_reason_is_stable() {
        let mut cfg = config();
        cfg.hardware.allow_cpu_encoding = false;
        cfg.hardware.allow_cpu_fallback = false;
        cfg.transcode.allow_fallback = false;
        let planner = BasicPlanner::new(Arc::new(cfg), None);

        let plan = planner
            .plan(&analysis(), Path::new("/tmp/out.mkv"), None)
            .await
            .expect("plan");

        let TranscodeDecision::Skip { reason } = plan.decision else {
            panic!("expected skip decision");
        };
        let explanation = crate::explanations::decision_from_legacy("skip", &reason);
        assert_eq!(explanation.code, "no_available_encoders");
    }

    #[tokio::test]
    async fn preferred_codec_unavailable_reason_is_stable() {
        let hw_info = HardwareInfo {
            vendor: Vendor::Intel,
            device_path: Some("/dev/dri/renderD128".to_string()),
            supported_codecs: vec!["hevc".to_string()],
            backends: vec![BackendCapability {
                kind: HardwareBackend::Qsv,
                codec: "hevc".to_string(),
                encoder: "hevc_qsv".to_string(),
                device_path: Some("/dev/dri/renderD128".to_string()),
            }],
            detection_notes: Vec::new(),
            selection_reason: String::new(),
            probe_summary: ProbeSummary::default(),
        };

        let mut cfg = config();
        cfg.hardware.allow_cpu_encoding = false;
        cfg.hardware.allow_cpu_fallback = false;
        cfg.transcode.output_codec = OutputCodec::Av1;
        cfg.transcode.allow_fallback = false;

        let planner = BasicPlanner::new(Arc::new(cfg), Some(hw_info));
        let plan = planner
            .plan(&analysis(), Path::new("/tmp/out.mkv"), None)
            .await
            .expect("plan");

        let TranscodeDecision::Skip { reason } = plan.decision else {
            panic!("expected skip decision");
        };
        let explanation = crate::explanations::decision_from_legacy("skip", &reason);
        assert_eq!(
            explanation.code,
            "preferred_codec_unavailable_fallback_disabled"
        );
    }

    #[test]
    fn gpu_codec_fallback_beats_cpu_requested_codec() {
        let inventory = EncoderInventory {

@@ -1,5 +1,6 @@
use crate::config::Config;
use crate::db::{AlchemistEvent, Db, NotificationTarget};
use crate::explanations::Explanation;
use reqwest::{Client, Url, redirect::Policy};
use serde_json::json;
use std::net::IpAddr;
@@ -149,29 +150,96 @@ impl NotificationManager {
            .resolve(host, std::net::SocketAddr::new(target_ip, port))
            .build()?;

        let (decision_explanation, failure_explanation) = match event {
            AlchemistEvent::JobStateChanged { job_id, status } => {
                let decision_explanation = self
                    .db
                    .get_job_decision_explanation(*job_id)
                    .await
                    .ok()
                    .flatten();
                let failure_explanation = if *status == crate::db::JobState::Failed {
                    self.db
                        .get_job_failure_explanation(*job_id)
                        .await
                        .ok()
                        .flatten()
                } else {
                    None
                };
                (decision_explanation, failure_explanation)
            }
            _ => (None, None),
        };

        match target.target_type.as_str() {
            "discord" => {
                self.send_discord_with_client(&client, target, event, status)
                    .await
                self.send_discord_with_client(
                    &client,
                    target,
                    event,
                    status,
                    decision_explanation.as_ref(),
                    failure_explanation.as_ref(),
                )
                .await
            }
            "gotify" => {
                self.send_gotify_with_client(&client, target, event, status)
                    .await
                self.send_gotify_with_client(
                    &client,
                    target,
                    event,
                    status,
                    decision_explanation.as_ref(),
                    failure_explanation.as_ref(),
                )
                .await
            }
            "webhook" => {
                self.send_webhook_with_client(&client, target, event, status)
                    .await
                self.send_webhook_with_client(
                    &client,
                    target,
                    event,
                    status,
                    decision_explanation.as_ref(),
                    failure_explanation.as_ref(),
                )
                .await
            }
            _ => Ok(()),
        }
    }

    fn notification_message(
        &self,
        job_id: i64,
        status: &str,
        decision_explanation: Option<&Explanation>,
        failure_explanation: Option<&Explanation>,
    ) -> String {
        let explanation = failure_explanation.or(decision_explanation);
        if let Some(explanation) = explanation {
            let mut message = format!("Job #{} {} — {}", job_id, status, explanation.summary);
            if !explanation.detail.is_empty() {
                message.push_str(&format!("\n{}", explanation.detail));
            }
            if let Some(guidance) = &explanation.operator_guidance {
                message.push_str(&format!("\nNext step: {}", guidance));
            }
            return message;
        }

        format!("Job #{} is now {}", job_id, status)
    }
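The precedence and layout used by `notification_message` can be sketched standalone: a failure explanation wins over a decision explanation, and the message is summary first, then detail, then guidance. This is a simplified illustration (plain strings stand in for the `Explanation` struct, and a plain hyphen replaces the separator character):

```rust
// Standalone sketch of the message shape built by `notification_message` above.
// Simplified: the Explanation struct is flattened into plain string arguments.
fn compose(job_id: i64, status: &str, summary: &str, detail: &str, guidance: Option<&str>) -> String {
    // Line 1: job id, state, and the short human-readable summary.
    let mut message = format!("Job #{} {} - {}", job_id, status, summary);
    // Line 2 (optional): the longer detail text.
    if !detail.is_empty() {
        message.push_str(&format!("\n{}", detail));
    }
    // Line 3 (optional): operator guidance as a "Next step".
    if let Some(guidance) = guidance {
        message.push_str(&format!("\nNext step: {}", guidance));
    }
    message
}

fn main() {
    let message = compose(
        42,
        "failed",
        "Required encoder unavailable",
        "The required encoder is not available in the current FFmpeg build or hardware environment.",
        Some("Check Settings -> Hardware."),
    );
    assert!(message.starts_with("Job #42 failed"));
    assert!(message.contains("Next step: Check Settings -> Hardware."));
    println!("{message}");
}
```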

    async fn send_discord_with_client(
        &self,
        client: &Client,
        target: &NotificationTarget,
        event: &AlchemistEvent,
        status: &str,
        decision_explanation: Option<&Explanation>,
        failure_explanation: Option<&Explanation>,
    ) -> Result<(), Box<dyn std::error::Error>> {
        let color = match status {
            "completed" => 0x00FF00, // Green
@@ -182,9 +250,12 @@ impl NotificationManager {
        };

        let message = match event {
            AlchemistEvent::JobStateChanged { job_id, status } => {
                format!("Job #{} is now {}", job_id, status)
            }
            AlchemistEvent::JobStateChanged { job_id, status } => self.notification_message(
                *job_id,
                &status.to_string(),
                decision_explanation,
                failure_explanation,
            ),
            _ => "Event occurred".to_string(),
        };

@@ -212,11 +283,16 @@ impl NotificationManager {
        target: &NotificationTarget,
        event: &AlchemistEvent,
        status: &str,
        decision_explanation: Option<&Explanation>,
        failure_explanation: Option<&Explanation>,
    ) -> Result<(), Box<dyn std::error::Error>> {
        let message = match event {
            AlchemistEvent::JobStateChanged { job_id, status } => {
                format!("Job #{} is now {}", job_id, status)
            }
            AlchemistEvent::JobStateChanged { job_id, status } => self.notification_message(
                *job_id,
                &status.to_string(),
                decision_explanation,
                failure_explanation,
            ),
            _ => "Event occurred".to_string(),
        };

@@ -246,11 +322,16 @@ impl NotificationManager {
        target: &NotificationTarget,
        event: &AlchemistEvent,
        status: &str,
        decision_explanation: Option<&Explanation>,
        failure_explanation: Option<&Explanation>,
    ) -> Result<(), Box<dyn std::error::Error>> {
        let message = match event {
            AlchemistEvent::JobStateChanged { job_id, status } => {
                format!("Job #{} is now {}", job_id, status)
            }
            AlchemistEvent::JobStateChanged { job_id, status } => self.notification_message(
                *job_id,
                &status.to_string(),
                decision_explanation,
                failure_explanation,
            ),
            _ => "Event occurred".to_string(),
        };

@@ -259,6 +340,8 @@ impl NotificationManager {
            "status": status,
            "message": message,
            "data": event,
            "decision_explanation": decision_explanation,
            "failure_explanation": failure_explanation,
            "timestamp": chrono::Utc::now().to_rfc3339()
        });

@@ -332,6 +415,7 @@ fn is_private_ip(ip: IpAddr) -> bool {
#[cfg(test)]
mod tests {
    use super::*;
    use crate::db::JobState;
    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    use tokio::net::TcpListener;

@@ -389,4 +473,95 @@ mod tests {
        let _ = std::fs::remove_file(db_path);
        Ok(())
    }

    #[tokio::test]
    async fn webhook_payload_includes_structured_explanations()
    -> std::result::Result<(), Box<dyn std::error::Error>> {
        let mut db_path = std::env::temp_dir();
        let token: u64 = rand::random();
        db_path.push(format!("alchemist_notifications_payload_test_{}.db", token));

        let db = Db::new(db_path.to_string_lossy().as_ref()).await?;
        let _ = db
            .enqueue_job(
                std::path::Path::new("notify-input.mkv"),
                std::path::Path::new("notify-output.mkv"),
                std::time::SystemTime::UNIX_EPOCH,
            )
            .await?;
        let job = db
            .get_job_by_input_path("notify-input.mkv")
            .await?
            .ok_or("missing job")?;
        db.update_job_status(job.id, JobState::Failed).await?;
        db.add_decision(job.id, "skip", "planning_failed|error=boom")
            .await?;
        db.upsert_job_failure_explanation(
            job.id,
            &crate::explanations::failure_from_summary("Unknown encoder 'missing_encoder'"),
        )
        .await?;

        let mut test_config = crate::config::Config::default();
        test_config.notifications.allow_local_notifications = true;
        let config = Arc::new(RwLock::new(test_config));
        let manager = NotificationManager::new(db, config);

        let listener = TcpListener::bind("127.0.0.1:0").await?;
        let addr = listener.local_addr()?;

        let body_task = tokio::spawn(async move {
            let (mut socket, _) = listener.accept().await.expect("accept");
            let mut buf = Vec::new();
            let mut chunk = [0u8; 4096];
            loop {
                let read = socket.read(&mut chunk).await.expect("read");
                if read == 0 {
                    break;
                }
                buf.extend_from_slice(&chunk[..read]);
                if buf.windows(4).any(|window| window == b"\r\n\r\n") {
                    break;
                }
            }
            let response = "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n";
            socket.write_all(response.as_bytes()).await.expect("write");
            String::from_utf8_lossy(&buf).to_string()
        });

        let target = NotificationTarget {
            id: 0,
            name: "test".to_string(),
            target_type: "webhook".to_string(),
            endpoint_url: format!("http://{}", addr),
            auth_token: None,
            events: "[\"failed\"]".to_string(),
            enabled: true,
            created_at: chrono::Utc::now(),
        };
        let event = AlchemistEvent::JobStateChanged {
            job_id: job.id,
            status: JobState::Failed,
        };

        manager.send(&target, &event, "failed").await?;
        let request = body_task.await?;
        let body = request
            .split("\r\n\r\n")
            .nth(1)
            .ok_or("missing request body")?;
        let payload: serde_json::Value = serde_json::from_str(body)?;
        assert_eq!(
            payload["failure_explanation"]["code"].as_str(),
            Some("encoder_unavailable")
        );
        assert_eq!(
            payload["decision_explanation"]["code"].as_str(),
            Some("planning_failed")
        );

        drop(manager);
        let _ = std::fs::remove_file(db_path);
        Ok(())
    }
}

||||
@@ -3,6 +3,7 @@
 use super::{AppState, is_row_not_found};
 use crate::db::{Job, JobState};
 use crate::error::Result;
+use crate::explanations::Explanation;
 use axum::{
     extract::{Path, State},
     http::StatusCode,
@@ -121,7 +122,25 @@ pub(crate) async fn jobs_table_handler(
     })
     .await
     {
-        Ok(jobs) => axum::Json(jobs).into_response(),
+        Ok(jobs) => {
+            let job_ids = jobs.iter().map(|job| job.id).collect::<Vec<_>>();
+            let explanations = match state.db.get_job_decision_explanations(&job_ids).await {
+                Ok(explanations) => explanations,
+                Err(e) => {
+                    return (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response();
+                }
+            };
+
+            let payload = jobs
+                .into_iter()
+                .map(|job| JobResponse {
+                    decision_explanation: explanations.get(&job.id).cloned(),
+                    job,
+                })
+                .collect::<Vec<_>>();
+
+            axum::Json(payload).into_response()
+        }
         Err(e) => (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()).into_response(),
     }
 }
@@ -294,6 +313,13 @@ pub(crate) async fn update_job_priority_handler(
     }
 }

+#[derive(Serialize)]
+pub(crate) struct JobResponse {
+    #[serde(flatten)]
+    job: Job,
+    decision_explanation: Option<Explanation>,
+}
+
 #[derive(Serialize)]
 pub(crate) struct JobDetailResponse {
     job: Job,
@@ -301,6 +327,8 @@ pub(crate) struct JobDetailResponse {
     encode_stats: Option<crate::db::DetailedEncodeStats>,
     job_logs: Vec<crate::db::LogEntry>,
     job_failure_summary: Option<String>,
+    decision_explanation: Option<Explanation>,
+    failure_explanation: Option<Explanation>,
 }

 pub(crate) async fn get_job_detail_handler(
@@ -344,14 +372,35 @@ pub(crate) async fn get_job_detail_handler(
         Err(err) => return (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response(),
     };

-    let job_failure_summary = if job.status == JobState::Failed {
-        job_logs
+    let decision_explanation = match state.db.get_job_decision_explanation(id).await {
+        Ok(explanation) => explanation,
+        Err(err) => return (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response(),
+    };
+
+    let (job_failure_summary, failure_explanation) = if job.status == JobState::Failed {
+        let legacy_summary = job_logs
             .iter()
             .rev()
             .find(|entry| entry.level.eq_ignore_ascii_case("error"))
-            .map(|entry| entry.message.clone())
+            .map(|entry| entry.message.clone());
+        let stored_failure = match state.db.get_job_failure_explanation(id).await {
+            Ok(explanation) => explanation,
+            Err(err) => {
+                return (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response();
+            }
+        };
+        let summary = stored_failure
+            .as_ref()
+            .map(|explanation| explanation.legacy_reason.clone())
+            .or(legacy_summary.clone());
+        let explanation = stored_failure.or_else(|| {
+            legacy_summary
+                .as_deref()
+                .map(crate::explanations::failure_from_summary)
+        });
+        (summary, explanation)
     } else {
-        None
+        (None, None)
     };

     axum::Json(JobDetailResponse {
@@ -360,6 +409,8 @@ pub(crate) async fn get_job_detail_handler(
         encode_stats,
         job_logs,
         job_failure_summary,
+        decision_explanation,
+        failure_explanation,
     })
     .into_response()
 }

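The `#[serde(flatten)]` attribute on `JobResponse` above means each job row serializes as one flat JSON object, with `decision_explanation` as a sibling key next to the job's own fields. A minimal TypeScript sketch of that resulting shape (the interfaces and helper here are illustrative, not the app's API):

```typescript
// Sketch of the flattened JobResponse JSON shape. serde's `flatten`
// splices the Job fields into the same object that carries the
// sibling `decision_explanation` key; an object spread mirrors that.
interface Explanation {
  code: string;
  summary: string;
}

interface Job {
  id: number;
  status: string;
}

function toJobResponse(job: Job, decisionExplanation: Explanation | null) {
  // Job fields land at the top level, alongside decision_explanation.
  return { ...job, decision_explanation: decisionExplanation };
}

const row = toJobResponse(
  { id: 7, status: "skipped" },
  { code: "bpp_below_threshold", summary: "Already efficiently compressed" },
);
```

Here `row` has `id`, `status`, and `decision_explanation` at the same nesting level, matching the flat rows the jobs table endpoint returns.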
@@ -67,12 +67,14 @@ pub(crate) fn sse_message_for_job_event(event: &JobEvent) -> SseMessage {
             job_id,
             action,
             reason,
+            explanation,
         } => SseMessage {
             event_name: "decision",
             data: serde_json::json!({
                 "job_id": job_id,
                 "action": action,
-                "reason": reason
+                "reason": reason,
+                "explanation": explanation
             })
             .to_string(),
         },

@@ -1147,6 +1147,13 @@ async fn job_detail_route_includes_logs_and_failure_summary()
         .db
         .add_log("error", Some(job.id), "No such file or directory")
         .await?;
+    state
+        .db
+        .upsert_job_failure_explanation(
+            job.id,
+            &crate::explanations::failure_from_summary("No such file or directory"),
+        )
+        .await?;

     let response = app
         .clone()
@@ -1164,6 +1171,10 @@ async fn job_detail_route_includes_logs_and_failure_summary()
         payload["job_failure_summary"].as_str(),
         Some("No such file or directory")
     );
+    assert_eq!(
+        payload["failure_explanation"]["code"].as_str(),
+        Some("source_missing")
+    );
     assert_eq!(payload["job_logs"].as_array().map(Vec::len), Some(2));
     assert_eq!(
         payload["job_logs"][1]["message"].as_str(),
@@ -1174,6 +1185,88 @@ async fn job_detail_route_includes_logs_and_failure_summary()
     Ok(())
 }

+#[tokio::test]
+async fn jobs_table_includes_structured_decision_explanation()
+-> std::result::Result<(), Box<dyn std::error::Error>> {
+    let (state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
+    let token = create_session(state.db.as_ref()).await?;
+    let (job, input_path, output_path) = seed_job(state.db.as_ref(), JobState::Skipped).await?;
+
+    state
+        .db
+        .add_decision(
+            job.id,
+            "skip",
+            "bpp_below_threshold|bpp=0.043,threshold=0.050",
+        )
+        .await?;
+
+    let response = app
+        .clone()
+        .oneshot(auth_request(
+            Method::GET,
+            "/api/jobs",
+            &token,
+            Body::empty(),
+        ))
+        .await?;
+    assert_eq!(response.status(), StatusCode::OK);
+
+    let payload: serde_json::Value = serde_json::from_str(&body_text(response).await)?;
+    let first = payload
+        .as_array()
+        .and_then(|items| items.first())
+        .expect("job row");
+    assert_eq!(
+        first["decision_explanation"]["code"].as_str(),
+        Some("bpp_below_threshold")
+    );
+    assert_eq!(
+        first["decision_reason"].as_str(),
+        Some("bpp_below_threshold|bpp=0.043,threshold=0.050")
+    );
+
+    cleanup_paths(&[input_path, output_path, config_path, db_path]);
+    Ok(())
+}
+
+#[tokio::test]
+async fn job_detail_route_falls_back_to_legacy_failure_summary()
+-> std::result::Result<(), Box<dyn std::error::Error>> {
+    let (state, app, config_path, db_path) = build_test_app(false, 8, |_| {}).await?;
+    let token = create_session(state.db.as_ref()).await?;
+    let (job, input_path, output_path) = seed_job(state.db.as_ref(), JobState::Failed).await?;
+
+    state
+        .db
+        .add_log("error", Some(job.id), "No such file or directory")
+        .await?;
+
+    let response = app
+        .clone()
+        .oneshot(auth_request(
+            Method::GET,
+            &format!("/api/jobs/{}/details", job.id),
+            &token,
+            Body::empty(),
+        ))
+        .await?;
+    assert_eq!(response.status(), StatusCode::OK);
+
+    let payload: serde_json::Value = serde_json::from_str(&body_text(response).await)?;
+    assert_eq!(
+        payload["failure_explanation"]["code"].as_str(),
+        Some("source_missing")
+    );
+    assert_eq!(
+        payload["job_failure_summary"].as_str(),
+        Some("No such file or directory")
+    );
+
+    cleanup_paths(&[input_path, output_path, config_path, db_path]);
+    Ok(())
+}
+
 #[tokio::test]
 async fn delete_active_job_returns_conflict() -> std::result::Result<(), Box<dyn std::error::Error>>
 {

@@ -68,6 +68,18 @@ async fn v0_2_5_fixture_upgrades_and_preserves_core_state() -> Result<()> {
         job.decision_reason.as_deref(),
         Some("Legacy AV1 skip threshold")
     );
+    let decision_explanation = db
+        .get_job_decision_explanation(job.id)
+        .await?
+        .context("expected decision explanation fallback")?;
+    assert_eq!(
+        decision_explanation.category,
+        alchemist::explanations::ExplanationCategory::Decision
+    );
+    assert_eq!(
+        decision_explanation.code,
+        "decision_legacy_av1_skip_threshold"
+    );

     let stats = db.get_aggregated_stats().await?;
     assert_eq!(stats.total_jobs, 1);
@@ -89,7 +101,7 @@ async fn v0_2_5_fixture_upgrades_and_preserves_core_state() -> Result<()> {
         .fetch_one(&pool)
         .await?
         .get("value");
-    assert_eq!(schema_version, "5");
+    assert_eq!(schema_version, "6");

     let min_compatible_version: String =
         sqlx::query("SELECT value FROM schema_info WHERE key = 'min_compatible_version'")
@@ -120,6 +132,27 @@ async fn v0_2_5_fixture_upgrades_and_preserves_core_state() -> Result<()> {
     assert!(jobs_columns.iter().any(|name| name == "health_issues"));
     assert!(jobs_columns.iter().any(|name| name == "last_health_check"));

+    let decisions_columns = sqlx::query("PRAGMA table_info(decisions)")
+        .fetch_all(&pool)
+        .await?
+        .into_iter()
+        .map(|row| row.get::<String, _>("name"))
+        .collect::<Vec<_>>();
+    assert!(decisions_columns.iter().any(|name| name == "reason_code"));
+    assert!(
+        decisions_columns
+            .iter()
+            .any(|name| name == "reason_payload_json")
+    );
+
+    let job_failure_explanations_exists: i64 = sqlx::query(
+        "SELECT COUNT(*) as count FROM sqlite_master WHERE type = 'table' AND name = 'job_failure_explanations'",
+    )
+    .fetch_one(&pool)
+    .await?
+    .get("count");
+    assert_eq!(job_failure_explanations_exists, 1);
+
     pool.close().await;
     drop(db);
     let _ = fs::remove_file(&db_path);

@@ -114,6 +114,17 @@ export interface JobFixture {
   attempt_count?: number;
   vmaf_score?: number;
   decision_reason?: string;
+  decision_explanation?: ExplanationFixture | null;
 }

+export interface ExplanationFixture {
+  category: "decision" | "failure";
+  code: string;
+  summary: string;
+  detail: string;
+  operator_guidance: string | null;
+  measured: Record<string, string | number | boolean | null>;
+  legacy_reason: string;
+}
+
 export interface JobDetailFixture {
@@ -149,6 +160,8 @@ export interface JobDetailFixture {
     created_at: string;
   }>;
   job_failure_summary?: string;
+  decision_explanation?: ExplanationFixture | null;
+  failure_explanation?: ExplanationFixture | null;
 }

 interface SettingsBundle {

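For reference, a fixture conforming to the `ExplanationFixture` shape above might look like this (values invented for illustration; the interface is repeated here so the snippet stands alone):

```typescript
// Illustrative ExplanationFixture instance; field values are made up
// and only demonstrate the shape the test fixtures expect.
interface ExplanationFixture {
  category: "decision" | "failure";
  code: string;
  summary: string;
  detail: string;
  operator_guidance: string | null;
  measured: Record<string, string | number | boolean | null>;
  legacy_reason: string;
}

const skippedExplanation: ExplanationFixture = {
  category: "decision",
  code: "bpp_below_threshold",
  summary: "Already efficiently compressed",
  detail: "Bits-per-pixel is below the configured threshold.",
  operator_guidance: null,
  measured: { bpp: 0.043, threshold: 0.05 },
  legacy_reason: "bpp_below_threshold|bpp=0.043,threshold=0.050",
};
```

Note that `legacy_reason` carries the raw `code|key=value` string while `measured` holds the same values already parsed.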
@@ -103,3 +103,83 @@ test("completed job detail renders persisted encode stats", async ({ page }) =>
   await expect(page.getByText("7000 kbps")).toBeVisible();
   await expect(page.getByText("95.4").first()).toBeVisible();
 });
+
+test("skipped job detail prefers structured decision explanation", async ({ page }) => {
+  const skippedJob: JobFixture = {
+    id: 42,
+    input_path: "/media/skipped-structured.mkv",
+    output_path: "/output/skipped-structured-av1.mkv",
+    status: "skipped",
+    priority: 0,
+    progress: 0,
+    created_at: "2025-01-01T00:00:00Z",
+    updated_at: "2025-01-02T00:00:00Z",
+    decision_reason: "bpp_below_threshold|bpp=0.043,threshold=0.050",
+  };
+
+  await page.route("**/api/jobs/table**", async (route) => {
+    await fulfillJson(route, 200, [skippedJob]);
+  });
+  await mockJobDetails(page, {
+    42: {
+      job: skippedJob,
+      job_logs: [],
+      decision_explanation: {
+        category: "decision",
+        code: "bpp_below_threshold",
+        summary: "Structured skip summary",
+        detail: "Structured skip detail from the backend.",
+        operator_guidance: "Structured skip guidance from the backend.",
+        measured: { bpp: 0.043, threshold: 0.05 },
+        legacy_reason: skippedJob.decision_reason!,
+      },
+    },
+  });
+
+  await page.goto("/jobs");
+  await page.getByTitle("/media/skipped-structured.mkv").click();
+
+  await expect(page.getByText("Structured skip summary")).toBeVisible();
+  await expect(page.getByText("Structured skip detail from the backend.")).toBeVisible();
+  await expect(page.getByText("Structured skip guidance from the backend.")).toBeVisible();
+});
+
+test("failed job detail prefers structured failure explanation", async ({ page }) => {
+  const failedJob: JobFixture = {
+    id: 43,
+    input_path: "/media/failed-structured.mkv",
+    output_path: "/output/failed-structured-av1.mkv",
+    status: "failed",
+    priority: 0,
+    progress: 100,
+    created_at: "2025-01-01T00:00:00Z",
+    updated_at: "2025-01-02T00:00:00Z",
+  };
+
+  await page.route("**/api/jobs/table**", async (route) => {
+    await fulfillJson(route, 200, [failedJob]);
+  });
+  await mockJobDetails(page, {
+    43: {
+      job: failedJob,
+      job_logs: [],
+      job_failure_summary: "Unknown encoder 'missing_encoder'",
+      failure_explanation: {
+        category: "failure",
+        code: "encoder_unavailable",
+        summary: "Structured failure summary",
+        detail: "Structured failure detail from the backend.",
+        operator_guidance: "Structured failure guidance from the backend.",
+        measured: {},
+        legacy_reason: "Unknown encoder 'missing_encoder'",
+      },
+    },
+  });
+
+  await page.goto("/jobs");
+  await page.getByTitle("/media/failed-structured.mkv").click();
+
+  await expect(page.getByText("Structured failure summary")).toBeVisible();
+  await expect(page.getByText("Structured failure detail from the backend.")).toBeVisible();
+  await expect(page.getByText("Structured failure guidance from the backend.")).toBeVisible();
+});

@@ -32,11 +32,24 @@ function focusableElements(root: HTMLElement): HTMLElement[] {
   );
 }

-export interface SkipDetail {
+export interface ExplanationView {
+  category: "decision" | "failure";
+  code: string;
   summary: string;
   detail: string;
-  action: string | null;
-  measured: Record<string, string>;
+  operator_guidance: string | null;
+  measured: Record<string, string | number | boolean | null>;
+  legacy_reason: string;
+}
+
+interface ExplanationPayload {
+  category: "decision" | "failure";
+  code: string;
+  summary: string;
+  detail: string;
+  operator_guidance: string | null;
+  measured: Record<string, string | number | boolean | null>;
+  legacy_reason: string;
 }

 function formatReductionPercent(value?: string): string {
@@ -48,14 +61,14 @@ function formatReductionPercent(value?: string): string {
   return Number.isFinite(parsed) ? `${(parsed * 100).toFixed(0)}%` : value;
 }

-export function humanizeSkipReason(reason: string): SkipDetail {
+export function humanizeSkipReason(reason: string): ExplanationView {
   const pipeIdx = reason.indexOf("|");
   const key = pipeIdx === -1
     ? reason.trim()
     : reason.slice(0, pipeIdx).trim();
   const paramStr = pipeIdx === -1 ? "" : reason.slice(pipeIdx + 1);

-  const measured: Record<string, string> = {};
+  const measured: Record<string, string | number | boolean | null> = {};
   for (const pair of paramStr.split(",")) {
     const [rawKey, ...rawValueParts] = pair.split("=");
     if (!rawKey || rawValueParts.length === 0) {
@@ -65,176 +78,249 @@ export function humanizeSkipReason(reason: string): SkipDetail {
     measured[rawKey.trim()] = rawValueParts.join("=").trim();
   }

-  const fallbackDisabledMatch = key.match(
-    /^Preferred codec\s+(.+?)\s+unavailable and fallback disabled$/i
-  );
-  if (fallbackDisabledMatch) {
-    measured.codec ??= fallbackDisabledMatch[1];
-    return {
-      summary: "Preferred encoder unavailable",
-      detail: `The preferred codec (${measured.codec ?? "target codec"}) is not available and CPU fallback is disabled in settings.`,
-      action: "Go to Settings -> Hardware and enable CPU fallback, or check that your GPU encoder is working correctly.",
-      measured,
-    };
-  }
+  const makeDecision = (
+    code: string,
+    summary: string,
+    detail: string,
+    operator_guidance: string | null,
+  ): ExplanationView => ({
+    category: "decision",
+    code,
+    summary,
+    detail,
+    operator_guidance,
+    measured,
+    legacy_reason: reason,
+  });

   switch (key) {
     case "analysis_failed":
-      return {
-        summary: "File could not be analyzed",
-        detail: `FFprobe failed to read this file. It may be corrupt, incomplete, or in an unsupported format. Error: ${measured.error ?? "unknown"}`,
-        action: "Try playing the file in VLC or another media player. If it plays fine, re-run the scan. If not, the file may be damaged.",
-        measured,
-      };
+      return makeDecision(
+        "analysis_failed",
+        "File could not be analyzed",
+        `FFprobe failed to read this file. It may be corrupt, incomplete, or in an unsupported format. Error: ${measured.error ?? "unknown"}`,
+        "Try playing the file in VLC or another media player. If it plays fine, re-run the scan. If not, the file may be damaged.",
+      );
     case "planning_failed":
-      return {
-        summary: "Transcoding plan could not be created",
-        detail: `An internal error occurred while planning the transcode for this file. This is likely a bug. Error: ${measured.error ?? "unknown"}`,
-        action: "Check the logs below for details. If this happens repeatedly, please report it as a bug.",
-        measured,
-      };
+      return makeDecision(
+        "planning_failed",
+        "Transcoding plan could not be created",
+        `An internal error occurred while planning the transcode for this file. This is likely a bug. Error: ${measured.error ?? "unknown"}`,
+        "Check the logs below for details. If this happens repeatedly, please report it as a bug.",
+      );
     case "already_target_codec":
-      return {
-        summary: "Already in target format",
-        detail: `This file is already encoded as ${measured.codec ?? "the target codec"}${measured.bit_depth ? ` at ${measured.bit_depth}-bit` : ""}. Re-encoding would waste time and could reduce quality.`,
-        action: null,
-        measured,
-      };
+      return makeDecision(
+        "already_target_codec",
+        "Already in target format",
+        `This file is already encoded as ${measured.codec ?? "the target codec"}${measured.bit_depth ? ` at ${measured.bit_depth}-bit` : ""}. Re-encoding would waste time and could reduce quality.`,
+        null,
+      );
    case "already_target_codec_wrong_container":
-      return {
-        summary: "Target codec, wrong container",
-        detail: `The video is already in the right codec but wrapped in a ${measured.container ?? "MP4"} container. Alchemist will remux it to ${measured.target_extension ?? "MKV"} - fast and lossless, no quality loss.`,
-        action: null,
-        measured,
-      };
+      return makeDecision(
+        "already_target_codec_wrong_container",
+        "Target codec, wrong container",
+        `The video is already in the right codec but wrapped in a ${measured.container ?? "MP4"} container. Alchemist will remux it to ${measured.target_extension ?? "MKV"} - fast and lossless, no quality loss.`,
+        null,
+      );
    case "bpp_below_threshold":
-      return {
-        summary: "Already efficiently compressed",
-        detail: `Bits-per-pixel (${measured.bpp ?? "?"}) is below the minimum threshold (${measured.threshold ?? "?"}). This file is already well-compressed - transcoding it would spend significant time for minimal space savings.`,
-        action: "If you want to force transcoding, lower the BPP threshold in Settings -> Transcoding.",
-        measured,
-      };
+      return makeDecision(
+        "bpp_below_threshold",
+        "Already efficiently compressed",
+        `Bits-per-pixel (${measured.bpp ?? "?"}) is below the minimum threshold (${measured.threshold ?? "?"}). This file is already well-compressed - transcoding it would spend significant time for minimal space savings.`,
+        "If you want to force transcoding, lower the BPP threshold in Settings -> Transcoding.",
+      );
    case "below_min_file_size":
-      return {
-        summary: "File too small to process",
-        detail: `File size (${measured.size_mb ?? "?"}MB) is below the minimum threshold (${measured.threshold_mb ?? "?"}MB). Small files aren't worth the transcoding overhead.`,
-        action: "Lower the minimum file size threshold in Settings -> Transcoding if you want small files processed.",
-        measured,
-      };
+      return makeDecision(
+        "below_min_file_size",
+        "File too small to process",
+        `File size (${measured.size_mb ?? "?"}MB) is below the minimum threshold (${measured.threshold_mb ?? "?"}MB). Small files aren't worth the transcoding overhead.`,
+        "Lower the minimum file size threshold in Settings -> Transcoding if you want small files processed.",
+      );
    case "size_reduction_insufficient":
-      return {
-        summary: "Not enough space would be saved",
-        detail: `The predicted size reduction (${formatReductionPercent(measured.predicted)}) is below the required threshold (${formatReductionPercent(measured.threshold)}). Transcoding this file wouldn't recover meaningful storage.`,
-        action: "Lower the size reduction threshold in Settings -> Transcoding to encode files with smaller savings.",
-        measured,
-      };
+      return makeDecision(
+        "size_reduction_insufficient",
+        "Not enough space would be saved",
+        `The predicted size reduction (${formatReductionPercent(String(measured.reduction ?? measured.predicted ?? ""))}) is below the required threshold (${formatReductionPercent(String(measured.threshold ?? ""))}). Transcoding this file wouldn't recover meaningful storage.`,
+        "Lower the size reduction threshold in Settings -> Transcoding to encode files with smaller savings.",
+      );
    case "no_suitable_encoder":
-      return {
-        summary: "No encoder available",
-        detail: `No encoder was found for ${measured.codec ?? "the target codec"}. Hardware detection may have failed, or CPU fallback is disabled.`,
-        action: "Check Settings -> Hardware. Enable CPU fallback, or verify your GPU is detected correctly.",
-        measured,
-      };
+    case "no_available_encoders":
+      return makeDecision(
+        key,
+        "No encoder available",
+        `No encoder was found for ${measured.codec ?? measured.requested_codec ?? "the target codec"}. Hardware detection may have failed, or CPU fallback is disabled.`,
+        "Check Settings -> Hardware. Enable CPU fallback, or verify your GPU is detected correctly.",
+      );
+    case "preferred_codec_unavailable_fallback_disabled":
+      return makeDecision(
+        "preferred_codec_unavailable_fallback_disabled",
+        "Preferred encoder unavailable",
+        `The preferred codec (${measured.codec ?? "target codec"}) is not available and CPU fallback is disabled in settings.`,
+        "Go to Settings -> Hardware and enable CPU fallback, or check that your GPU encoder is working correctly.",
+      );
-    case "Output path matches input path":
-      return {
-        summary: "Output would overwrite source",
-        detail: "The configured output path is the same as the source file. Alchemist refused to proceed to avoid overwriting your original file.",
-        action: "Go to Settings -> Files and configure a different output suffix or output folder.",
-        measured,
-      };
+    case "output_path_matches_input":
+      return makeDecision(
+        "output_path_matches_input",
+        "Output would overwrite source",
+        "The configured output path is the same as the source file. Alchemist refused to proceed to avoid overwriting your original file.",
+        "Go to Settings -> Files and configure a different output suffix or output folder.",
+      );
-    case "Output already exists":
-      return {
-        summary: "Output file already exists",
-        detail: "A transcoded version of this file already exists at the output path. Alchemist skipped it to avoid duplicating work.",
-        action: "If you want to re-transcode it, delete the existing output file first, then retry the job.",
-        measured,
-      };
+    case "output_already_exists":
+      return makeDecision(
+        "output_already_exists",
+        "Output file already exists",
+        "A transcoded version of this file already exists at the output path. Alchemist skipped it to avoid duplicating work.",
+        "If you want to re-transcode it, delete the existing output file first, then retry the job.",
+      );
    case "incomplete_metadata":
-      return {
-        summary: "Missing file metadata",
-        detail: `FFprobe could not determine the ${measured.missing ?? "required metadata"} for this file. Without reliable metadata Alchemist cannot make a valid transcoding decision.`,
-        action: "Run a Library Doctor scan to check if this file is corrupt. Try playing it in a media player to confirm it is readable.",
-        measured,
-      };
+      return makeDecision(
+        "incomplete_metadata",
+        "Missing file metadata",
+        `FFprobe could not determine the ${measured.missing ?? "required metadata"} for this file. Without reliable metadata Alchemist cannot make a valid transcoding decision.`,
+        "Run a Library Doctor scan to check if this file is corrupt. Try playing it in a media player to confirm it is readable.",
+      );
    case "already_10bit":
-      return {
-        summary: "Already 10-bit",
-        detail: "This file is already encoded in high-quality 10-bit depth. Re-encoding it could reduce quality.",
-        action: null,
-        measured,
-      };
+      return makeDecision(
+        "already_10bit",
+        "Already 10-bit",
+        "This file is already encoded in high-quality 10-bit depth. Re-encoding it could reduce quality.",
+        null,
+      );
-    case "remux: mp4_to_mkv_stream_copy":
-      return {
-        summary: "Remuxed (no re-encode)",
-        detail: "This file was remuxed from MP4 to MKV using stream copy - fast and lossless. No quality was lost.",
-        action: null,
-        measured,
-      };
+    case "remux_mp4_to_mkv_stream_copy":
+      return makeDecision(
+        "remux_mp4_to_mkv_stream_copy",
+        "Remuxed (no re-encode)",
+        "This file was remuxed from MP4 to MKV using stream copy - fast and lossless. No quality was lost.",
+        null,
+      );
-    case "Low quality (VMAF)":
-      return {
-        summary: "Quality check failed",
-        detail: "The encoded file scored below the minimum VMAF quality threshold. Alchemist rejected the output to protect quality.",
-        action: "The original file has been preserved. You can lower the VMAF threshold in Settings -> Quality, or disable VMAF checking entirely.",
-        measured,
-      };
+    case "quality_below_threshold":
+      return makeDecision(
+        "quality_below_threshold",
+        "Quality check failed",
+        "The encoded file scored below the minimum VMAF quality threshold. Alchemist rejected the output to protect quality.",
+        "The original file has been preserved. You can lower the VMAF threshold in Settings -> Quality, or disable VMAF checking entirely.",
+      );
+    case "transcode_h264_source":
+      return makeDecision(
+        "transcode_h264_source",
+        "H.264 source prioritized",
+        "This file is H.264, which is typically a strong candidate for reclaiming space, so Alchemist prioritized it for transcoding.",
+        null,
+      );
+    case "transcode_recommended":
+      return makeDecision(
+        "transcode_recommended",
+        "Transcode recommended",
+        "Alchemist determined this file is a strong candidate for transcoding based on the current codec and measured efficiency.",
+        null,
+      );
    default:
-      return {
-        summary: "Decision recorded",
-        detail: reason,
-        action: null,
-        measured,
-      };
+      return makeDecision("legacy_decision", "Decision recorded", reason, null);
   }
 }

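The legacy `decision_reason` strings handled by `humanizeSkipReason` above follow a `code|key=value,key=value` convention. A self-contained sketch of just that parsing step (`parseLegacyReason` is an illustrative helper, not part of the codebase):

```typescript
// Minimal sketch of the legacy reason wire format: "<code>|k1=v1,k2=v2".
// A bare "<code>" with no pipe is also valid and yields empty measurements.
function parseLegacyReason(reason: string): {
  code: string;
  measured: Record<string, string>;
} {
  const pipeIdx = reason.indexOf("|");
  const code = pipeIdx === -1 ? reason.trim() : reason.slice(0, pipeIdx).trim();
  const paramStr = pipeIdx === -1 ? "" : reason.slice(pipeIdx + 1);
  const measured: Record<string, string> = {};
  for (const pair of paramStr.split(",")) {
    const [rawKey, ...rest] = pair.split("=");
    // Values may themselves contain "=", so rejoin the remainder.
    if (!rawKey || rest.length === 0) continue;
    measured[rawKey.trim()] = rest.join("=").trim();
  }
  return { code, measured };
}

const parsed = parseLegacyReason("bpp_below_threshold|bpp=0.043,threshold=0.050");
```

Here `parsed.code` is `"bpp_below_threshold"` and `parsed.measured` holds the `bpp` and `threshold` values as strings, mirroring what the structured `decision_explanation` now carries in `code` and `measured`.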
-function explainFailureSummary(summary: string): string {
+function explainFailureSummary(summary: string): ExplanationView {
   const normalized = summary.toLowerCase();

+  const makeFailure = (
+    code: string,
+    title: string,
+    detail: string,
+    operator_guidance: string | null,
+  ): ExplanationView => ({
+    category: "failure",
+    code,
+    summary: title,
+    detail,
+    operator_guidance,
+    measured: {},
+    legacy_reason: summary,
+  });
+
   if (normalized.includes("cancelled")) {
-    return "This job was cancelled before encoding completed. The original file is untouched.";
+    return makeFailure(
+      "cancelled",
+      "Job was cancelled",
+      "This job was cancelled before encoding completed. The original file is untouched.",
+      null,
+    );
   }
   if (normalized.includes("no such file or directory")) {
-    return "The source file could not be found. It may have been moved or deleted.";
+    return makeFailure(
+      "source_missing",
+      "Source file missing",
+      "The source file could not be found. It may have been moved or deleted.",
+      "Check that the source file still exists and is readable by Alchemist.",
+    );
   }
   if (normalized.includes("invalid data found") || normalized.includes("moov atom not found")) {
-    return "This file appears to be corrupt or incomplete. Try running a Library Doctor scan.";
+    return makeFailure(
+      "corrupt_or_unreadable_media",
+      "Media could not be read",
+      "This file appears to be corrupt or incomplete. Try running a Library Doctor scan.",
+      "Verify the source file manually or run Library Doctor to confirm whether it is readable.",
+    );
   }
   if (normalized.includes("permission denied")) {
-    return "Alchemist doesn't have permission to read this file. Check the file permissions.";
+    return makeFailure(
+      "permission_denied",
+      "Permission denied",
+      "Alchemist doesn't have permission to read this file. Check the file permissions.",
+      "Check the file and output path permissions for the Alchemist process user.",
+    );
   }
   if (normalized.includes("encoder not found") || normalized.includes("unknown encoder")) {
-    return "The required encoder is not available in your FFmpeg installation.";
+    return makeFailure(
+      "encoder_unavailable",
+      "Required encoder unavailable",
+      "The required encoder is not available in your FFmpeg installation.",
+      "Check FFmpeg encoder availability and hardware settings.",
+    );
   }
   if (normalized.includes("out of memory") || normalized.includes("cannot allocate memory")) {
-    return "The system ran out of memory during encoding. Try reducing concurrent jobs.";
+    return makeFailure(
+      "resource_exhausted",
+      "System ran out of memory",
+      "The system ran out of memory during encoding. Try reducing concurrent jobs.",
+      "Reduce concurrent jobs or rerun under lower system load.",
+    );
   }
   if (normalized.includes("transcode_failed") || normalized.includes("ffmpeg exited")) {
-    return "FFmpeg failed during encoding. This is often caused by a corrupt source file or an encoder configuration issue. Check the logs below for the specific FFmpeg error.
|
||||
return makeFailure(
|
||||
"unknown_ffmpeg_failure",
|
||||
"FFmpeg failed",
|
||||
"FFmpeg failed during encoding. This is often caused by a corrupt source file or an encoder configuration issue. Check the logs below for the specific FFmpeg error.",
|
||||
"Inspect the FFmpeg output in the job logs for the exact failure.",
|
||||
);
|
||||
}
|
||||
if (normalized.includes("probing failed")) {
|
||||
return "FFprobe could not read this file. It may be corrupt or in an unsupported format.";
|
||||
return makeFailure(
|
||||
"analysis_failed",
|
||||
"Analysis failed",
|
||||
"FFprobe could not read this file. It may be corrupt or in an unsupported format.",
|
||||
"Inspect the source file manually or run Library Doctor to confirm whether it is readable.",
|
||||
);
|
||||
}
|
||||
if (normalized.includes("planning_failed") || normalized.includes("planner")) {
|
||||
return "An error occurred while planning the transcode. Check the logs below for details.";
|
||||
return makeFailure(
|
||||
"planning_failed",
|
||||
"Planner failed",
|
||||
"An error occurred while planning the transcode. Check the logs below for details.",
|
||||
"Treat repeated planner failures as a bug and inspect the logs for the triggering input.",
|
||||
);
|
||||
}
|
||||
if (normalized.includes("output_size=0") || normalized.includes("output was empty")) {
|
||||
return "Encoding produced an empty output file. This usually means FFmpeg crashed silently. Check the logs below for FFmpeg output.";
|
||||
return makeFailure(
|
||||
"unknown_ffmpeg_failure",
|
||||
"Empty output produced",
|
||||
"Encoding produced an empty output file. This usually means FFmpeg crashed silently. Check the logs below for FFmpeg output.",
|
||||
"Inspect the FFmpeg logs before retrying the job.",
|
||||
);
|
||||
}
|
||||
|
||||
if (
|
||||
normalized.includes("videotoolbox") ||
|
||||
normalized.includes("vt_compression") ||
|
||||
@@ -242,21 +328,62 @@ function explainFailureSummary(summary: string): string {
|
||||
normalized.includes("mediaserverd") ||
|
||||
normalized.includes("no capable devices")
|
||||
) {
|
||||
return "The VideoToolbox hardware encoder failed. This can happen when the GPU is busy, the file uses an unsupported pixel format, or macOS Media Services are unavailable. Retry the job — if it keeps failing, CPU fallback is available in Settings → Hardware.";
|
||||
return makeFailure(
|
||||
"hardware_backend_failure",
|
||||
"Hardware backend failed",
|
||||
"The VideoToolbox hardware encoder failed. This can happen when the GPU is busy, the file uses an unsupported pixel format, or macOS Media Services are unavailable.",
|
||||
"Retry the job. If it keeps failing, check the hardware probe log or enable CPU fallback in Settings -> Hardware.",
|
||||
);
|
||||
}
 
-  if (
-    normalized.includes("encoder fallback") ||
-    normalized.includes("fallback detected")
-  ) {
-    return "The hardware encoder was unavailable and fell back to software encoding, which was not allowed by your settings. Enable CPU fallback in Settings → Hardware, or retry when the GPU is less busy.";
+  if (normalized.includes("encoder fallback") || normalized.includes("fallback detected")) {
+    return makeFailure(
+      "fallback_blocked",
+      "Fallback blocked by policy",
+      "The hardware encoder was unavailable and fell back to software encoding, which was not allowed by your settings.",
+      "Enable CPU fallback in Settings -> Hardware, or retry when the GPU is less busy.",
+    );
   }
 
   if (normalized.includes("ffmpeg failed")) {
-    return "FFmpeg failed during encoding. Check the logs below for the specific error. Common causes: unsupported pixel format, codec not available, or corrupt source file.";
+    return makeFailure(
+      "unknown_ffmpeg_failure",
+      "FFmpeg failed",
+      "FFmpeg failed during encoding. Check the logs below for the specific error. Common causes: unsupported pixel format, codec not available, or corrupt source file.",
+      "Inspect the FFmpeg output in the job logs for the exact failure.",
+    );
   }
 
-  return summary;
+  return makeFailure(
+    "legacy_failure",
+    "Failure recorded",
+    summary,
+    "Inspect the job logs for additional context.",
+  );
 }
 
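In the diff above, every recognized failure string now maps to a structured `ExplanationView` while the raw text survives in `legacy_reason`. A reduced, self-contained sketch of that classification pattern (the trimmed type, the helper name `classifyFailure`, and the single branch shown here are illustrative, not the commit's full matcher):

```typescript
// Reduced sketch of the failure-classification pattern in the commit.
// This ExplanationView mirrors only the fields needed for the example.
interface ExplanationView {
  category: "failure";
  code: string;
  summary: string;
  detail: string;
  operator_guidance: string | null;
  measured: Record<string, string>;
  legacy_reason: string;
}

// Hypothetical stand-in for explainFailureSummary with one branch.
function classifyFailure(summary: string): ExplanationView {
  const normalized = summary.toLowerCase();
  const make = (
    code: string,
    title: string,
    detail: string,
    guidance: string | null,
  ): ExplanationView => ({
    category: "failure",
    code,
    summary: title,
    detail,
    operator_guidance: guidance,
    measured: {},
    legacy_reason: summary, // raw text is always retained for operators
  });

  if (normalized.includes("permission denied")) {
    return make(
      "permission_denied",
      "Permission denied",
      "The process user cannot read this file.",
      "Check the file permissions.",
    );
  }
  // Unmatched summaries still yield a structured value, not a bare string.
  return make("legacy_failure", "Failure recorded", summary,
    "Inspect the job logs for additional context.");
}
```

The key design point is the catch-all: callers never have to branch on "string or object" again, because even unknown failures come back as the same shape.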
+function normalizeDecisionExplanation(
+  explanation: ExplanationPayload | null | undefined,
+  legacyReason?: string | null,
+): ExplanationView | null {
+  if (explanation) {
+    return explanation;
+  }
+  if (legacyReason) {
+    return humanizeSkipReason(legacyReason);
+  }
+  return null;
+}
+
+function normalizeFailureExplanation(
+  explanation: ExplanationPayload | null | undefined,
+  legacySummary?: string | null,
+): ExplanationView | null {
+  if (explanation) {
+    return explanation;
+  }
+  if (legacySummary) {
+    return explainFailureSummary(legacySummary);
+  }
+  return null;
+}
 
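Both `normalize*` helpers added above encode the same precedence rule: prefer the server's structured payload, otherwise upgrade the legacy string client-side, otherwise return `null`. A generic sketch of that rule (the name `normalizeExplanation` and the reduced `Explanation` shape are illustrative; the real code passes `humanizeSkipReason` or `explainFailureSummary` as the converter):

```typescript
interface Explanation {
  summary: string;
  detail: string;
}

// Precedence: structured payload > converted legacy string > null.
function normalizeExplanation(
  payload: Explanation | null | undefined,
  legacy: string | null | undefined,
  fromLegacy: (s: string) => Explanation,
): Explanation | null {
  if (payload) return payload; // server already explained itself
  if (legacy) return fromLegacy(legacy); // upgrade old-style string
  return null; // nothing to show
}
```

Folding the two helpers into one parameterized function like this is a refactor option; the commit keeps them separate so each call site stays explicit about which legacy converter applies.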
 function logLevelClass(level: string): string {
@@ -283,6 +410,7 @@ interface Job {
   attempt_count: number;
   vmaf_score?: number;
   decision_reason?: string;
+  decision_explanation?: ExplanationPayload | null;
   encoder?: string;
 }
 
@@ -348,6 +476,8 @@ interface JobDetail {
   encode_stats: EncodeStats | null;
   job_logs: LogEntry[];
   job_failure_summary: string | null;
+  decision_explanation: ExplanationPayload | null;
+  failure_explanation: ExplanationPayload | null;
 }
 
 interface CountMessageResponse {
@@ -909,8 +1039,17 @@ function JobManager() {
     setConfirmState(config);
   };
 
-  const focusedDecision = focusedJob?.job.decision_reason
-    ? humanizeSkipReason(focusedJob.job.decision_reason)
+  const focusedDecision = focusedJob
+    ? normalizeDecisionExplanation(
+        focusedJob.decision_explanation ?? focusedJob.job.decision_explanation,
+        focusedJob.job.decision_reason,
+      )
     : null;
+  const focusedFailure = focusedJob
+    ? normalizeFailureExplanation(
+        focusedJob.failure_explanation,
+        focusedJob.job_failure_summary,
+      )
+    : null;
   const focusedJobLogs = focusedJob?.job_logs ?? [];
   const shouldShowFfmpegOutput = focusedJob
@@ -1555,76 +1694,72 @@ function JobManager() {
               )}
 
               {/* Decision Info */}
-              {focusedJob.job.decision_reason && focusedJob.job.status !== "failed" && focusedJob.job.status !== "skipped" && (
+              {focusedDecision && focusedJob.job.status !== "failed" && focusedJob.job.status !== "skipped" && (
                 <div className="p-4 rounded-lg bg-helios-solar/5 border border-helios-solar/10">
                   <div className="flex items-center gap-2 text-helios-solar mb-1">
                     <Info size={12} />
                     <span className="text-xs font-medium text-helios-slate">Decision Context</span>
                   </div>
-                  {focusedDecision && (
-                    <div className="space-y-3">
-                      <p className="text-sm font-medium text-helios-ink">
-                        {focusedJob.job.status === "completed"
-                          ? "Transcoded"
-                          : focusedDecision.summary}
-                      </p>
-                      <p className="text-xs leading-relaxed text-helios-slate">
-                        {focusedDecision.detail}
-                      </p>
-                      {Object.keys(focusedDecision.measured).length > 0 && (
-                        <div className="space-y-1.5 rounded-lg border border-helios-line/20 bg-helios-surface-soft px-3 py-2.5">
-                          {Object.entries(focusedDecision.measured).map(([k, v]) => (
-                            <div key={k} className="flex items-center justify-between text-xs">
-                              <span className="font-mono text-helios-slate">{k}</span>
-                              <span className="font-mono font-bold text-helios-ink">{v}</span>
-                            </div>
-                          ))}
-                        </div>
-                      )}
-                      {focusedDecision.action && (
-                        <div className="flex items-start gap-2 rounded-lg border border-helios-solar/20 bg-helios-solar/5 px-3 py-2.5">
-                          <span className="text-xs leading-relaxed text-helios-solar">
-                            {focusedDecision.action}
-                          </span>
-                        </div>
-                      )}
-                    </div>
-                  )}
+                  <div className="space-y-3">
+                    <p className="text-sm font-medium text-helios-ink">
+                      {focusedJob.job.status === "completed"
+                        ? "Transcoded"
+                        : focusedDecision.summary}
+                    </p>
+                    <p className="text-xs leading-relaxed text-helios-slate">
+                      {focusedDecision.detail}
+                    </p>
+                    {Object.keys(focusedDecision.measured).length > 0 && (
+                      <div className="space-y-1.5 rounded-lg border border-helios-line/20 bg-helios-surface-soft px-3 py-2.5">
+                        {Object.entries(focusedDecision.measured).map(([k, v]) => (
+                          <div key={k} className="flex items-center justify-between text-xs">
+                            <span className="font-mono text-helios-slate">{k}</span>
+                            <span className="font-mono font-bold text-helios-ink">{String(v)}</span>
+                          </div>
+                        ))}
+                      </div>
+                    )}
+                    {focusedDecision.operator_guidance && (
+                      <div className="flex items-start gap-2 rounded-lg border border-helios-solar/20 bg-helios-solar/5 px-3 py-2.5">
+                        <span className="text-xs leading-relaxed text-helios-solar">
+                          {focusedDecision.operator_guidance}
+                        </span>
+                      </div>
+                    )}
+                  </div>
                 </div>
               )}
 
-              {focusedJob.job.status === "skipped" && focusedJob.job.decision_reason && (
+              {focusedJob.job.status === "skipped" && focusedDecision && (
                 <div className="p-4 rounded-lg bg-helios-surface-soft border border-helios-line/10">
                   <p className="text-sm text-helios-ink leading-relaxed">
                     Alchemist analysed this file and decided not to transcode it. Here's why:
                   </p>
-                  {focusedDecision && (
-                    <div className="mt-3 space-y-3">
-                      <p className="text-sm font-medium text-helios-ink">
-                        {focusedDecision.summary}
-                      </p>
-                      <p className="text-xs leading-relaxed text-helios-slate">
-                        {focusedDecision.detail}
-                      </p>
-                      {Object.keys(focusedDecision.measured).length > 0 && (
-                        <div className="space-y-1.5 rounded-lg border border-helios-line/20 bg-helios-surface px-3 py-2.5">
-                          {Object.entries(focusedDecision.measured).map(([k, v]) => (
-                            <div key={k} className="flex items-center justify-between text-xs">
-                              <span className="font-mono text-helios-slate">{k}</span>
-                              <span className="font-mono font-bold text-helios-ink">{v}</span>
-                            </div>
-                          ))}
-                        </div>
-                      )}
-                      {focusedDecision.action && (
-                        <div className="flex items-start gap-2 rounded-lg border border-helios-solar/20 bg-helios-solar/5 px-3 py-2.5">
-                          <span className="text-xs leading-relaxed text-helios-solar">
-                            {focusedDecision.action}
-                          </span>
-                        </div>
-                      )}
-                    </div>
-                  )}
+                  <div className="mt-3 space-y-3">
+                    <p className="text-sm font-medium text-helios-ink">
+                      {focusedDecision.summary}
+                    </p>
+                    <p className="text-xs leading-relaxed text-helios-slate">
+                      {focusedDecision.detail}
+                    </p>
+                    {Object.keys(focusedDecision.measured).length > 0 && (
+                      <div className="space-y-1.5 rounded-lg border border-helios-line/20 bg-helios-surface px-3 py-2.5">
+                        {Object.entries(focusedDecision.measured).map(([k, v]) => (
+                          <div key={k} className="flex items-center justify-between text-xs">
+                            <span className="font-mono text-helios-slate">{k}</span>
+                            <span className="font-mono font-bold text-helios-ink">{String(v)}</span>
+                          </div>
+                        ))}
+                      </div>
+                    )}
+                    {focusedDecision.operator_guidance && (
+                      <div className="flex items-start gap-2 rounded-lg border border-helios-solar/20 bg-helios-solar/5 px-3 py-2.5">
+                        <span className="text-xs leading-relaxed text-helios-solar">
+                          {focusedDecision.operator_guidance}
+                        </span>
+                      </div>
+                    )}
+                  </div>
                 </div>
               )}
 
@@ -1636,14 +1771,24 @@ function JobManager() {
                     Failure Reason
                   </span>
                 </div>
-                {focusedJob.job_failure_summary ? (
+                {focusedFailure ? (
                   <>
                     <p className="text-sm font-medium text-helios-ink">
-                      {explainFailureSummary(focusedJob.job_failure_summary)}
+                      {focusedFailure.summary}
                     </p>
-                    <p className="text-xs font-mono text-helios-slate/70 break-all leading-relaxed">
-                      {focusedJob.job_failure_summary}
+                    <p className="text-xs leading-relaxed text-helios-slate">
+                      {focusedFailure.detail}
                     </p>
+                    {focusedFailure.operator_guidance && (
+                      <p className="text-xs leading-relaxed text-status-error">
+                        {focusedFailure.operator_guidance}
+                      </p>
+                    )}
+                    {focusedFailure.legacy_reason !== focusedFailure.detail && (
+                      <p className="text-xs font-mono text-helios-slate/70 break-all leading-relaxed">
+                        {focusedFailure.legacy_reason}
+                      </p>
+                    )}
                   </>
                 ) : (
                   <p className="text-sm text-helios-slate">
 
@@ -113,8 +113,14 @@ export default function LogViewer() {
 
     eventSource.addEventListener("decision", (event) => {
       try {
-        const data = JSON.parse(event.data) as { action: string; reason: string; job_id?: number };
-        appendLog(`Decision: ${data.action.toUpperCase()} - ${data.reason}`, "info", data.job_id);
+        const data = JSON.parse(event.data) as {
+          action: string;
+          reason: string;
+          job_id?: number;
+          explanation?: { summary?: string };
+        };
+        const detail = data.explanation?.summary ?? data.reason;
+        appendLog(`Decision: ${data.action.toUpperCase()} - ${detail}`, "info", data.job_id);
       } catch {
         // Ignore malformed SSE payload.
       }
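The updated listener prefers the optional `explanation.summary` and falls back to the raw `reason`, so decision events from older servers still render. A standalone sketch of that parse-and-fallback step (`formatDecision` and the payload shape are illustrative; the real handler passes the line to `appendLog`):

```typescript
// Shape of the SSE "decision" payload, with the new optional explanation.
interface DecisionEvent {
  action: string;
  reason: string;
  job_id?: number;
  explanation?: { summary?: string };
}

// Returns the log line the UI would append, or null for malformed JSON.
function formatDecision(raw: string): string | null {
  try {
    const data = JSON.parse(raw) as DecisionEvent;
    // Prefer the structured summary; fall back to the legacy reason string.
    const detail = data.explanation?.summary ?? data.reason;
    return `Decision: ${data.action.toUpperCase()} - ${detail}`;
  } catch {
    return null; // mirror the listener: ignore malformed SSE payloads
  }
}
```

Because the fallback is expressed with `??`, a payload that includes `explanation` but omits `summary` also degrades cleanly to the legacy `reason`.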