Merge tag 'drm-misc-next-2025-12-12' of https://gitlab.freedesktop.org/drm/misc/kernel into drm-next
drm-misc-next for 6.19:
UAPI Changes:
- panfrost: Add PANFROST_BO_SYNC ioctl
- panthor: Add PANTHOR_BO_SYNC ioctl
Core Changes:
- atomic: Add drm_device pointer to drm_private_obj
- bridge: Introduce drm_bridge_unplug, drm_bridge_enter, and
drm_bridge_exit
- dma-buf: Improve sg_table debugging
- dma-fence: Add new helpers, and use them when needed
- dp_mst: Avoid out-of-bounds access with VCPI==0
- gem: Reduce page table overhead with transparent huge pages
- panic: Report invalid panic modes
- sched: Add TODO entries
- ttm: Various cleanups
- vblank: Various refactoring and cleanups
- Kconfig cleanups
- Removed support for kdb
Driver Changes:
- amdxdna: Fix race conditions at suspend, Improve handling of zero
tail pointers, Fix cu_idx being overwritten during command setup
- ast: Support imported cursor buffers
- panthor: Enable timestamp propagation, multiple improvements and
  fixes to overall robustness, notably of the scheduler
- panels:
- panel-edp: Support for CSW MNE007QB3-1, AUO B140HAN06.4, AUO B140QAX01.H
Signed-off-by: Dave Airlie <airlied@redhat.com>
[airlied: fix mm conflict]
From: Maxime Ripard <mripard@redhat.com>
Link: https://patch.msgid.link/20251212-spectacular-agama-of-abracadabra-aaef32@penduick
@@ -155,7 +155,12 @@ drm_gem_object_init() will create an shmfs file of the
 requested size and store it into the struct :c:type:`struct
 drm_gem_object <drm_gem_object>` filp field. The memory is
 used as either main storage for the object when the graphics hardware
-uses system memory directly or as a backing store otherwise.
+uses system memory directly or as a backing store otherwise. Drivers
+can call drm_gem_huge_mnt_create() to create, mount and use a huge
+shmem mountpoint instead of the default one ('shm_mnt'). For builds
+with CONFIG_TRANSPARENT_HUGEPAGE enabled, further calls to
+drm_gem_object_init() will let shmem allocate huge pages when
+possible.
 
 Drivers are responsible for the actual physical pages allocation by
 calling shmem_read_mapping_page_gfp() for each page.
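The huge-mountpoint helper added above lends itself to a short sketch. This is illustrative only: drm_gem_huge_mnt_create() is the real helper introduced by this series, but the argument list, the option value, and the error handling shown here are assumptions, not taken from the actual header.

```c
/* Sketch, not real driver code: the signature of drm_gem_huge_mnt_create()
 * is guessed here; check include/drm/drm_gem.h for the real one. */
static int example_gem_init(struct drm_device *dev)
{
	int ret;

	/*
	 * Replace the default 'shm_mnt' backing mountpoint with a huge
	 * shmem mountpoint. With CONFIG_TRANSPARENT_HUGEPAGE enabled,
	 * subsequent drm_gem_object_init() calls may then be backed by
	 * huge pages.
	 */
	ret = drm_gem_huge_mnt_create(dev, "within_size");
	if (ret)
		return ret;

	return 0;
}
```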
@@ -290,15 +295,27 @@ The open and close operations must update the GEM object reference
 count. Drivers can use the drm_gem_vm_open() and drm_gem_vm_close() helper
 functions directly as open and close handlers.
 
-The fault operation handler is responsible for mapping individual pages
-to userspace when a page fault occurs. Depending on the memory
-allocation scheme, drivers can allocate pages at fault time, or can
-decide to allocate memory for the GEM object at the time the object is
-created.
+The fault operation handler is responsible for mapping pages to
+userspace when a page fault occurs. Depending on the memory allocation
+scheme, drivers can allocate pages at fault time, or can decide to
+allocate memory for the GEM object at the time the object is created.
 
 Drivers that want to map the GEM object upfront instead of handling page
 faults can implement their own mmap file operation handler.
 
+In order to reduce page table overhead, if the internal shmem mountpoint
+"shm_mnt" is configured to use transparent huge pages (for builds with
+CONFIG_TRANSPARENT_HUGEPAGE enabled) and if the shmem backing store
+managed to allocate a huge page for the faulting address, the fault
+handler will first attempt to insert that huge page into the VMA before
+falling back to individual page insertion. mmap() user address alignment
+for GEM objects is handled by providing a custom get_unmapped_area file
+operation which forwards to the shmem backing store. For most drivers,
+which don't create a huge mountpoint by default or through a module
+parameter, transparent huge pages can be enabled by either setting the
+"transparent_hugepage_shmem" kernel parameter or the
+"/sys/kernel/mm/transparent_hugepage/shmem_enabled" sysfs knob.
+
 For platforms without MMU the GEM core provides a helper method
 drm_gem_dma_get_unmapped_area(). The mmap() routines will call this to get a
 proposed address for the mapping.
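As a concrete illustration of the knobs named above, the shmem transparent-huge-page policy can be toggled from userspace. The paths are the ones given in the text; the policy values are the standard THP ones and are assumed to apply here:

```shell
# Inspect the current shmem THP policy (requires CONFIG_TRANSPARENT_HUGEPAGE)
cat /sys/kernel/mm/transparent_hugepage/shmem_enabled

# Enable huge pages for shmem at runtime (as root); "within_size" and
# "always" are standard THP policy values
echo within_size | sudo tee /sys/kernel/mm/transparent_hugepage/shmem_enabled

# Equivalent boot-time setting on the kernel command line:
#   transparent_hugepage_shmem=within_size
```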
@@ -878,6 +878,51 @@ Contact: Christian König
 
 Level: Starter
 
+DRM GPU Scheduler
+=================
+
+Provide a universal successor for drm_sched_resubmit_jobs()
+-----------------------------------------------------------
+
+drm_sched_resubmit_jobs() is deprecated. The main reason is that it leads to
+reinitializing dma_fences. See that function's documentation for details. The
+better approach for valid resubmissions by amdgpu and Xe is (apparently) to
+figure out which job (and, through association: which entity) caused the hang.
+Then, the job's buffer data, together with all other jobs' buffer data
+currently in the same hardware ring, must be invalidated, for example by
+overwriting it. amdgpu currently determines which jobs are in the ring and need
+to be overwritten by keeping copies of the job. Xe obtains that information by
+directly accessing drm_sched's pending_list.
+
+Tasks:
+
+1. Implement scheduler functionality through which the driver can obtain
+   information about which *broken* jobs are currently in the hardware ring.
+2. Such infrastructure would then typically be used in
+   drm_sched_backend_ops.timedout_job(). Document that.
+3. Port a driver as first user.
+4. Document the new alternative in the documentation of deprecated
+   drm_sched_resubmit_jobs().
+
+Contact: Christian König <christian.koenig@amd.com>
+         Philipp Stanner <phasta@kernel.org>
+
+Level: Advanced
+
+Add locking for runqueues
+-------------------------
+
+There is an old FIXME by Sima in include/drm/gpu_scheduler.h. It details that
+struct drm_sched_rq is read in many places without any locks, not even with a
+READ_ONCE. At XDC 2025 no one could really tell why that is the case, whether
+locks are needed, and whether they could be added. (But for real, that should
+probably be locked!) Check whether it's possible to add locks everywhere, and
+do so if yes.
+
+Contact: Philipp Stanner <phasta@kernel.org>
+
+Level: Intermediate
+
 Outside DRM
 ===========