Many speed and UX improvements to the program selector grid view,
including:
* Load items in the grid more eagerly by making the intersection
  observer trigger earlier
* Fade images in faster; the old 750ms fade was smooth but gave the
  illusion of slowness
* Send cache-control headers for proxied Plex thumbs in
  production builds
* Increase the frontend query cache time for Plex and Jellyfin
  searches in TanStack Query
* Implement filtering items by first letter in both Plex and
  Jellyfin
* Set the grid container height to the absolute max of all grid
  items to allow for smoother scrolling
* Remove some unnecessary re-renders
* Add an alphanumeric quick filter for easily jumping to programs that
  start with a specific character
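The "more eager" intersection observer amounts to widening the observed area beyond the viewport so items begin loading before they scroll into view. A minimal sketch (the function name and pixel values are illustrative, not Tunarr's actual code):

```typescript
// Hypothetical sketch of an "eager" IntersectionObserver config: a large
// rootMargin extends the observed area past the viewport, so grid items
// start loading well before they become visible.
function eagerObserverOptions(eagerPx: number = 1000): {
  root: null;
  rootMargin: string;
  threshold: number;
} {
  return {
    root: null, // observe relative to the viewport
    rootMargin: `${eagerPx}px 0px`, // pre-trigger this many px above/below
    threshold: 0, // fire as soon as any pixel intersects
  };
}

// Usage in a grid item component (browser only):
// const observer = new IntersectionObserver(loadThumb, eagerObserverOptions());
// observer.observe(itemElement);
```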
Now that we are fully relying on our own internal DB relations for
program hierarchies (shows, seasons, albums, artists), we cannot use
eventual consistency for saving these relations when upserting programs.
This caused issues like #825, where groupings sometimes aren't fully
available after successfully creating a lineup.
This includes the following changes to speed things up and synchronously
save program_groupings:
* Completely remove usages of MikroORM on the upsert programs path.
  Everything is handled by lower-level query builders via Kysely
* Save program_grouping and critical program_grouping_external_ids with
immediate consistency
* Defer saving PLEX_GUID type external IDs to the background
* Properly defer saving non-critical external IDs until after the
request is returned
* Stop validating requests/responses client-side via Zod. The backend
already does this so we were duplicating work here and slowing things
down considerably.
* Move upsertContentPrograms to ProgramDB
* Remove Kysely types that were generated with kysely-codegen, because
  it oversimplified the underlying DB types
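The immediate-vs-deferred split for external IDs can be sketched roughly like this. The type name, field names, and the rule for what counts as deferrable are assumptions for illustration, not Tunarr's actual definitions:

```typescript
// Illustrative only: split incoming external IDs into those saved
// synchronously with the program upsert and those pushed to a
// background task after the request returns.
type ExternalId = { sourceType: string; externalKey: string };

// Assumption: PLEX_GUID-style IDs are never needed to resolve
// groupings, so they are always safe to defer.
const DEFERRED_TYPES = new Set(["plex-guid"]);

function partitionExternalIds(ids: ExternalId[]): {
  immediate: ExternalId[];
  deferred: ExternalId[];
} {
  const immediate: ExternalId[] = [];
  const deferred: ExternalId[] = [];
  for (const id of ids) {
    (DEFERRED_TYPES.has(id.sourceType) ? deferred : immediate).push(id);
  }
  return { immediate, deferred };
}
```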
Fixes #825
Fixes include:
1. adding some reconnection retry parameters to the ffmpeg input arg
list
2. fast-killing concat streams on final disconnect
Also added the ability to generate ffreport files in the UI, with a
customizable ffmpeg loglevel.
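The reconnection parameters in (1) look roughly like this, using ffmpeg's documented HTTP-protocol reconnect options. The specific values, and which flags Tunarr actually passes, are assumptions:

```typescript
// Sketch: input-side HTTP reconnect flags for ffmpeg. These options are
// part of ffmpeg's HTTP protocol; the values here are illustrative.
function reconnectInputArgs(maxDelaySeconds: number = 2): string[] {
  return [
    "-reconnect", "1",          // retry when the connection drops
    "-reconnect_streamed", "1", // also retry for live/non-seekable inputs
    "-reconnect_delay_max", String(maxDelaySeconds), // cap retry backoff (seconds)
  ];
}

// These would be spliced into the arg list before the `-i <url>` input flag.
```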
This commit introduces a new HLS stream mode, modeled off of the great
"HLS segmenter" in ErsatzTV. It also introduces the concept of "channel
stream modes" which are customizable in the UI.
Overall this commit performs a ton of refactoring and cleanup around the
streaming pipeline, consolidating a lot of logic and moving things
around to make all parts of it more flexible, understandable, and
extensible in the future.
This commit features a major rewrite and refactoring of the streaming
pipeline and class hierarchy in Tunarr. It introduces new default
streaming modes in an attempt to stabilize transitions between program
streams, reuse underlying resources (e.g. ffmpeg processes), reduce the
complexity of interaction between various streaming class components,
and increase flexibility for future development.
Some notable changes:
* The default MPEG-TS stream is now backed by HLS. IPTV and Web
streaming sessions are shared
* Transcode readahead implementation. This should create smoother
  streams. For FFmpeg 7 (and future versions) we use the native
  "readrate_initial_burst" option. For earlier versions we use an
  artificial readahead. **NOTE**: the FFmpeg <7 artificial readahead
  might have unintended side effects (e.g. loudness normalization
  issues), so FFmpeg 7+ is recommended
* Separation between building an ffmpeg process, spawning the process,
and the notion of a 'transcode session'. These are all separate
classes now with narrow concerns
* Other misc refactoring to clean up code and remove leftover
  superfluous logic from DTV
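The version-dependent readahead selection might be sketched like this; the flag values and the fallback behavior are illustrative assumptions:

```typescript
// Sketch: choose a readahead strategy based on ffmpeg major version.
// `-readrate 1` reads input at realtime speed; on ffmpeg 7+ the
// `-readrate_initial_burst` option additionally reads an initial burst
// of N seconds at full speed, smoothing stream startup.
function readaheadInputArgs(ffmpegMajor: number, burstSeconds: number = 10): string[] {
  if (ffmpegMajor >= 7) {
    return ["-readrate", "1", "-readrate_initial_burst", String(burstSeconds)];
  }
  // Pre-7: no native burst support; fall back to plain realtime reading
  // and let an artificial readahead (handled elsewhere) provide the burst.
  return ["-readrate", "1"];
}
```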
This commit includes a huge amount of changes, including support for
adding Jellyfin servers as media sources and streaming content from
them.
These are breaking changes and touch almost every corner of the code,
but also pave the way for a lot more flexibility on the backend for
adding different sources.
The commit also includes performance improvements to the inline modal,
lots of code cleanup, and a few bug fixes I found along the way.
Fixes #24
Fixes #599
Due to multiple issues surrounding single executable generation for
macOS, including issues with code signing due to executable mangling
(necessary when using Nexe with binary resources, i.e. the Tunarr
webapp), it seems the simplest solution is to bundle the relevant
version of nodejs itself along with a simple script to run Tunarr.
This essentially mimics the inside of our Docker containers and, in a
way, how the bundled executable works too... it's a little gnarly, but
it seems to work.
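The bundle layout and launcher could be sketched like this; all paths and names here are hypothetical, not the actual bundle layout:

```typescript
// Illustrative launcher for the macOS bundle: resolve the bundled node
// binary and the server entrypoint, then hand off to child_process.spawn.
function launcherCommand(bundleRoot: string): { bin: string; args: string[] } {
  return {
    bin: `${bundleRoot}/node/bin/node`, // bundled nodejs runtime
    args: [`${bundleRoot}/server/bundle.js`], // Tunarr server entrypoint
  };
}

// In the actual run script (sketch):
//   const { spawn } = await import("node:child_process");
//   const cmd = launcherCommand(process.cwd());
//   spawn(cmd.bin, cmd.args, { stdio: "inherit" });
```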
* Dynamic channels, part 2
* Checkpoint - not sure how we are going to handle programming rules that rely on the channel start time yet
* lots more...this is gonna be cool
* Implement scheduled redirect tool on backend
* Have to scale offsets on each addition of a scheduled redirect
* Implement offline collapse internal operation
* Lots more dynamic channel goodies. We're getting closer
* Thinking about how to do operators that need metadata
* Kysely - make it fast...
Includes:
* Batching inserts to external IDs
* Only saving critical external IDs (plex rating key) synchronously on
lineup update. Non-critical external ID inserts are pushed to
background tasks
* Program grouping / hierarchy updating has been pushed to a background
task
* Optimized version of the content program converter function that is
  synchronous. This is much more performant for 1000s of items,
  specifically when no async actions (selecting extra data) are
  necessary. We saw a ~89% reduction in time spent here (>1s to ~30ms)
* Fixed a few places that selected all channel programs when they only
  needed to select the channel
* Only select program IDs at the outset of lineup update, instead of all
program details. We only need the IDs for relation diffing
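The external ID insert batching in the first bullet amounts to chunking rows so each multi-row INSERT stays under SQLite's bound-variable limit. A sketch, with the limit and column count as illustrative assumptions:

```typescript
// Sketch: split a large row set into batches so each INSERT stays under
// SQLite's per-statement bound variable limit (999 in older builds).
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// e.g. for rows binding 8 columns each, keep every batch under the limit:
//   const batchSize = Math.floor(999 / 8);
//   for (const batch of chunk(externalIdRows, batchSize)) {
//     await db.insertInto("program_external_id").values(batch).execute();
//   }
```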
Still some things we haven't solved:
1. There is non-trivial overhead in the MikroORM framework when
   mapping 1000s of selected entities. In a 2k program channel case,
when loading all necessary entities (relations), we can see ~6k
entities loaded by the framework. The select from the DB only takes
about 800ms, but the entity mapping step can take >3s in this case.
One solution here is to use a simpler library for these super large
selects (kysely?)
2. It's probably overkill to have both Zodios on the client and then Zod
   checking incoming types on the server. On huge requests, this can add
~100ms or more (server side) to requests as Zod validates incoming
requests against the schema
3. We should think about replacing Zod on the server side with JSON
Schema. There are converters out there. We have a lot invested in
Zod, so a converter would be a good first step.
4. There's clearly some I/O contention happening in certain situations:
   background tasks that query the DB or Plex, getting responses to
   flow, logging, etc. I think most of it is DB-related.
5. Unclear if there is any actually _fast_ way to insert the amount of
data we are currently generating for large channels.
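Point 3's converter idea (libraries like zod-to-json-schema already do this comprehensively) boils down to a recursive mapping from schema nodes to JSON Schema objects. A toy sketch over a made-up schema shape, illustrative only:

```typescript
// Toy illustration of the Zod -> JSON Schema mapping idea. Real
// converters handle the full Zod type surface; this handles just enough
// to show the shape of the translation.
type ToySchema =
  | { kind: "string" }
  | { kind: "number" }
  | { kind: "object"; shape: Record<string, ToySchema> };

function toJsonSchema(s: ToySchema): Record<string, unknown> {
  switch (s.kind) {
    case "string":
      return { type: "string" };
    case "number":
      return { type: "number" };
    case "object": {
      // Recurse into each property; all properties are treated as required.
      const properties: Record<string, unknown> = {};
      for (const [key, value] of Object.entries(s.shape)) {
        properties[key] = toJsonSchema(value);
      }
      return { type: "object", properties, required: Object.keys(s.shape) };
    }
  }
}
```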
We've seen reports like
https://github.com/chrisbenincasa/tunarr/issues/550 where swagger-ui
prevents the server from starting, permanently. I haven't been able to
reproduce this yet. We've opened an issue on the plugin repo to see if
they have ideas:
https://github.com/fastify/fastify-swagger-ui/issues/156
For now, we're not really in need of this anyway, so just remove it.
I've also updated a bunch of deps and shimmed the "DOM" types into our
tsconfig, since there were lots of errors during the lib check phase of
compilation for things like esbuild and vitest.
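The tsconfig shim looks something like this; the other `lib` entries are assumptions about the project's compile target:

```json
{
  "compilerOptions": {
    "lib": ["ES2022", "DOM"]
  }
}
```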