This commit includes a large set of changes, including support for
adding Jellyfin servers as media sources and streaming content from
them.
These are breaking changes and touch almost every corner of the code,
but also pave the way for a lot more flexibility on the backend for
adding different sources.
The commit also includes performance improvements to the inline modal,
lots of code cleanup, and a few bug fixes I found along the way.
Fixes #24
Includes:
* Batching inserts to external IDs
* Only saving critical external IDs (plex rating key) synchronously on
lineup update. Non-critical external ID inserts are pushed to
background tasks
* Program grouping / hierarchy updating has been pushed to a background
task
* Optimized, synchronous version of the content program converter
  function. This is much more performant for 1000s of items,
  specifically when no async actions (selecting extra data) are
  necessary. We saw a ~89% reduction in time spent here (>1s to ~30ms)
* A few places were selecting all channel programs when they only needed
to select the channel
* Only select program IDs at the outset of lineup update, instead of all
program details. We only need the IDs for relation diffing
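The batching approach described above can be sketched roughly as follows. This is an illustrative sketch only; the names (`ExternalIdRow`, `insertBatch`, `deferToBackground`, `BATCH_SIZE`) are hypothetical and not Tunarr's actual API:

```typescript
// Hypothetical sketch of batched external ID inserts: critical IDs
// (e.g. the plex rating key) are awaited during lineup update, while
// non-critical IDs are deferred to background tasks.
type ExternalIdRow = { programId: string; sourceType: string; externalKey: string };

const BATCH_SIZE = 500; // e.g. to stay under SQLite's bound-parameter limit

function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

async function saveExternalIds(
  rows: ExternalIdRow[],
  insertBatch: (batch: ExternalIdRow[]) => Promise<void>,
  deferToBackground: (task: () => Promise<void>) => void,
): Promise<void> {
  const critical = rows.filter((r) => r.sourceType === 'plex');
  const nonCritical = rows.filter((r) => r.sourceType !== 'plex');
  for (const batch of chunk(critical, BATCH_SIZE)) {
    await insertBatch(batch); // synchronous with the lineup update
  }
  for (const batch of chunk(nonCritical, BATCH_SIZE)) {
    deferToBackground(() => insertBatch(batch)); // fire-and-forget
  }
}
```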
Still some things we haven't solved:
1. There is non-trivial overhead in the mikro-orm framework when
mapping 1000s of selected entities. In a 2k program channel case,
when loading all necessary entities (relations), we can see ~6k
entities loaded by the framework. The select from the DB only takes
about 800ms, but the entity mapping step can take >3s in this case.
One solution here is to use a simpler library for these super large
selects (kysely?)
2. It's probably overkill to have both Zodios on the client and then Zod
   checking incoming types on the server. On huge requests, this can add
~100ms or more (server side) to requests as Zod validates incoming
requests against the schema
3. We should think about replacing Zod on the server side with JSON
Schema. There are converters out there. We have a lot invested in
Zod, so a converter would be a good first step.
4. There's clearly some I/O contention happening in certain situations:
   background tasks that query the DB or Plex, getting responses to
   flow, logging, etc. I think most of it is DB-related.
5. Unclear if there is any actually _fast_ way to insert the amount of
data we are currently generating for large channels.
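On point (1), the idea of dropping down to a simpler query layer for very large selects is essentially: fetch raw rows and map them with a plain synchronous function, skipping entity hydration (identity map, change tracking, relation wiring) entirely. Kysely, for example, returns plain objects from queries. The row/DTO shapes below are hypothetical, just to illustrate the shape of the approach:

```typescript
// Illustrative sketch: map raw DB rows straight to plain DTOs instead of
// hydrating full ORM entities. A plain mapper like this has near-zero
// per-row overhead, which matters at 1000s of rows.
type RawProgramRow = { id: string; title: string; duration_ms: number };
type ProgramDto = { id: string; title: string; durationMs: number };

function mapRows(rows: RawProgramRow[]): ProgramDto[] {
  return rows.map((r) => ({
    id: r.id,
    title: r.title,
    durationMs: r.duration_ms,
  }));
}
```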
* Save external_ids for programs using Plex provided "Guid" field
This is a rather large change to do a relatively simple thing: save IDs
for external sources, such as IMDB/TMDB/TVDB, using IDs provided from
Plex.
It does a few things:
* Creates a delineation in the program_external_id table between
"single" and "multi" external IDs. Single IDs are global, while multi
IDs are scoped to some external source (e.g. Plex server)
* Starts the migration for Programs to be "source agnostic". Eventually,
Programs should be a logical entity that has >=1 "source". An ideal
end state would be for a Tunarr instance to have multiple "sources"
where Programs are deduped across them, with the ability to pick the
source for each program per channel, have source fallbacks, and search
for desired settings (e.g. audio language, subtitles) _across_ sources
for a given program. Additionally, saving non-streaming-source external
IDs opens up another avenue of metadata collection for Tunarr
* Implements uniqueness in the program_external_id table using partial
indexes. This made for a pretty messy change and I'm not super happy
with it.
* Migrates some queries away from the source/external_key fields on
Program to joins against the program_external_id table
* Implements some backfill mechanisms for these IDs
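The single/multi uniqueness semantics behind the partial indexes can be sketched like this. Field names here are illustrative, not the actual schema; the real enforcement lives in SQL partial unique indexes, and this function just mirrors their logic:

```typescript
// Hypothetical illustration of the uniqueness rules on program_external_id.
// Single IDs are unique per (program, source type); multi IDs are further
// scoped to an external source (e.g. a specific Plex server), mirroring
// two partial unique indexes:
//   WHERE external_source_id IS NULL     -> (program, source_type)
//   WHERE external_source_id IS NOT NULL -> (program, source_type, external_source_id)
type ExternalId = {
  programId: string;
  sourceType: string;        // e.g. 'imdb', 'tmdb', 'plex'
  externalSourceId?: string; // set only for "multi" IDs
  externalKey: string;
};

function uniquenessKey(id: ExternalId): string {
  return id.externalSourceId == null
    ? `${id.programId}|${id.sourceType}`
    : `${id.programId}|${id.sourceType}|${id.externalSourceId}`;
}
```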