mirror of
https://github.com/torvalds/linux.git
synced 2026-04-30 20:42:33 -04:00
Pull drm updates from Dave Airlie:
"Outside of drm there are some rust patches from Danilo who maintains
that area in here, and some pieces for drm header check tests.
The major things in here are a new driver supporting the touchbar
displays on M1/M2, and the nova-core stub driver, which is just the
vehicle for adding rust abstractions and starting to develop a real
driver inside of it.
xe adds support for SVM, with a non-driver-specific SVM core
abstraction that will hopefully be useful for other drivers, along
with support for shrinking for TTM devices. I'm sure xe and AMD
support new devices, but with the pipeline depth on these things it's
hard to know what they end up being in the marketplace!
uapi:
- add mediatek tiled fourcc
- add support for notifying userspace on device wedged
new driver:
- appletbdrm: support for Apple Touchbar displays on m1/m2
- nova-core: skeleton rust driver to develop nova inside of
firmware:
- add some rust firmware pieces
rust:
- add 'LocalModule' type alias
component:
- add helper to query bound status
fbdev:
- fbtft: remove access to page->index
media:
- cec: tda998x: import driver from drm
dma-buf:
- add fast path for single fence merging
tests:
- fix lockdep warnings
atomic:
- allow full modeset on connector changes
- clarify semantics of allow_modeset and drm_atomic_helper_check
- async-flip: support on arbitrary planes
- writeback: fix UAF
- Document atomic-state history
format-helper:
- support ARGB8888 to ARGB4444 conversions
buddy:
- fix multi-root cleanup
ci:
- update IGT
dp:
- support extended wake timeout
- mst: fix RAD to string conversion
- increase DPCD eDP control CAP size to 5 bytes
- add DPCD eDP v1.5 definition
- add helpers for LTTPR transparent mode
panic:
- encode QR code according to FIDO 2.2
scheduler:
- add parameter struct for init
- improve job peek/pop operations
- optimise drm_sched_job struct layout
ttm:
- refactor pool allocation
- add helpers for TTM shrinker
panel-orientation:
- add a bunch of new quirks
panel:
- convert panels to multi-style functions
- edp: Add support for B140UAN04.4, BOE NV140FHM-NZ, CSW MNB601LS1-3,
LG LP079QX1-SP0V, MNE007QS3-7, STA 116QHD024002, Starry
116KHD024006, Lenovo T14s Gen6 Snapdragon
- himax-hx83102: Add support for CSOT PNA957QT1-1, Kingdisplay
kd110n11-51ie, Starry 2082109qfh040022-50e
- visionox-r66451: use multi-style MIPI-DSI functions
- raydium-rm67200: Add driver for Raydium RM67200
- simple: Add support for BOE AV123Z7M-N17
- sony-td4353-jdi: Use MIPI-DSI multi-func interface
- summit: Add driver for Apple Summit display panel
- visionox-rm692e5: Add driver for Visionox RM692E5
bridge:
- pass full atomic state to various callbacks
- adv7511: Report correct capabilities
- it6505: Fix HDCP V compare
- sn65dsi86: fix device IDs
- nwl-dsi: set bridge type
- ti-sn65dsi83: add error recovery and set bridge type
- synopsys: add HDMI audio support
xe:
- support device-wedged event
- add mmap support for PCI memory barrier
- perf pmu integration and expose per-engine activity
- add EU stall sampling support
- GPU SVM and Xe SVM implementation
- use TTM shrinker
- add survivability mode to allow the driver to do firmware updates
in critical failure states
- PXP HWDRM support for MTL and LNL
- expose package/vram temps over hwmon
- enable DP tunneling
- drop mmio_ext abstraction
- Reject BO eviction if BO is bound to current VM
- Xe suballocator improvements
- re-use display vmas when possible
- add GuC Buffer Cache abstraction
- PCI ID update for Panther Lake and Battlemage
- Enable SRIOV for Panther Lake
- Refactor VRAM manager location
i915:
- enable extended wake timeout
- support device-wedged event
- Enable DP 128b/132b SST DSC
- FBC dirty rectangle support for display version 30+
- convert i915/xe to drm client setup
- Compute HDMI PLLS for rates not in fixed tables
- Allow DSB usage when PSR is enabled on LNL+
- Enable panel replay without full modeset
- Enable async flips with compressed buffers on ICL+
- support luminance based brightness via DPCD for eDP
- enable VRR enable/disable without full modeset
- allow GuC SLPC default strategies on MTL+ for performance
- lots of display refactoring in move to struct intel_display
amdgpu:
- add device wedged event
- support async page flips on overlay planes
- enable broadcast RGB drm property
- add info ioctl for virt mode
- OEM i2c support for RGB lights
- GC 11.5.2 + 11.5.3 support
- SDMA 6.1.3 support
- NBIO 7.9.1 + 7.11.2 support
- MMHUB 1.8.1 + 3.3.2 support
- DCN 3.6.0 support
- Add dynamic workload profile switching for GC 10-12
- support larger VBIOS sizes
- Mark gttsize parameters as deprecated
- Initial JPEG queue reset support
amdkfd:
- add KFD per process flags for setting precision
- sync pasid values between KGD and KFD
- improve GTT/VRAM handling for APUs
- fix user queue validation on GC7/8
- SDMA queue reset support
radeon:
- rs400 hyperz fix
i2c:
- tda998x: drop platform_data, split driver into media and bridge
ast:
- transmitter chip detection refactoring
- vbios display mode refactoring
- astdp: fix connection status and filter unsupported modes
- cursor handling refactoring
imagination:
- check job dependencies with sched helper
ivpu:
- improve command queue handling
- use workqueue for IRQ handling
- add support for HW fault injection
- locking fixes
mgag200:
- add support for G200eH5
msm:
- dpu: add concurrent writeback support for DPU 10.x+
- use LTTPR helpers
- GPU:
- Fix obscure GMU suspend failure
- Expose syncobj timeline support
- Extend GPU devcoredump with pagetable info
- a623 support
- Fix a6xx gen1/gen2 indexed-register blocks in gpu snapshot /
devcoredump
- Display:
- Add cpu-cfg interconnect paths on SM8560 and SM8650
- Introduce KMS MMU fault handler, causing devcoredump snapshot
- Fixed error pointer dereference in msm_kms_init_aspace()
- DPU:
- Fix mode_changing handling
- Add writeback support on SM6150 (QCS615)
- Fix DSC programming in 1:1:1 topology
- Reworked hardware resource allocation, moving it to the CRTC code
- Enabled support for Concurrent WriteBack (CWB) on SM8650
- Enabled CDM blocks on all relevant platforms
- Reworked debugfs interface for BW/clocks debugging
- Clear perf params before calculating bw
- Support YUV formats on writeback
- Fixed double inclusion
- Fixed writeback in YUV formats when using cloned output, Dropped
wb2_formats_rgb
- Corrected dpu_crtc_check_mode_changed and struct dpu_encoder_virt
kerneldocs
- Fixed uninitialized variable in dpu_crtc_kickoff_clone_mode()
- DSI:
- DSC-related fixes
- Rework clock programming
- DSI PHY:
- Fix 7nm (and lower) PHY programming
- Add proper DT schema definitions for DSI PHY clocks
- HDMI:
- Rework the driver, enabling the use of the HDMI Connector
framework
- Bindings:
- Added eDP PHY on SA8775P
nouveau:
- move drm_slave_encoder interface into driver
- nvkm: refactor GSP RPC
- use LTTPR helpers
mediatek:
- HDMI fixup and refinement
- add MT8188 dsc compatible
- MT8365 SoC support
panthor:
- Expose sizes of internal BOs via fdinfo
- Fix race between reset and suspend
- Improve locking
qaic:
- Add support for AIC200
renesas:
- Fix limits in DT bindings
rockchip:
- support rk3562-mali
- rk3576: Add HDMI support
- vop2: Add new display modes on RK3588 HDMI0 up to 4K
- Don't change HDMI reference clock rate
- Fix DT bindings
- analogix_dp: add eDP support
- fix shutdown
solomon:
- Set SPI device table to silence warnings
- Fix pixel and scanline encoding
v3d:
- handle clock
vc4:
- Use drm_exec
- Use dma-resv for wait-BO ioctl
- Remove seqno infrastructure
virtgpu:
- Support partial mappings of GEM objects
- Reserve VGA resources during initialization
- Fix UAF in virtgpu_dma_buf_free_obj()
- Add panic support
vkms:
- Switch to a managed modesetting pipeline
- Add support for ARGB8888
- fix UAF
xlnx:
- Set correct DMA segment size
- use mutex guards
- Fix error handling
- Fix docs"
* tag 'drm-next-2025-03-28' of https://gitlab.freedesktop.org/drm/kernel: (1762 commits)
drm/amd/pm: Update feature list for smu_v13_0_6
drm/amdgpu: Add parameter documentation for amdgpu_sync_fence
drm/amdgpu/discovery: optionally use fw based ip discovery
drm/amdgpu/discovery: use specific ip_discovery.bin for legacy asics
drm/amdgpu/discovery: check ip_discovery fw file available
drm/amd/pm: Remove unnecessay UQ10 to UINT conversion
drm/amd/pm: Remove unnecessay UQ10 to UINT conversion
drm/amdgpu/sdma_v4_4_2: update VM flush implementation for SDMA
drm/amdgpu: Optimize VM invalidation engine allocation and synchronize GPU TLB flush
drm/amd/amdgpu: Increase max rings to enable SDMA page ring
drm/amdgpu: Decode deferred error type in gfx aca bank parser
drm/amdgpu/gfx11: Add Cleaner Shader Support for GFX11.5 GPUs
drm/amdgpu/mes: clean up SDMA HQD loop
drm/amdgpu/mes: enable compute pipes across all MEC
drm/amdgpu/mes: drop MES 10.x leftovers
drm/amdgpu/mes: optimize compute loop handling
drm/amdgpu/sdma: guilty tracking is per instance
drm/amdgpu/sdma: fix engine reset handling
drm/amdgpu: remove invalid usage of sched.ready
drm/amdgpu: add cleaner shader trace point
...
422 lines
13 KiB
C
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2021 Intel Corporation
 */

#ifndef _XE_BO_H_
#define _XE_BO_H_

#include <drm/ttm/ttm_tt.h>

#include "xe_bo_types.h"
#include "xe_macros.h"
#include "xe_vm_types.h"
#include "xe_vm.h"

#define XE_DEFAULT_GTT_SIZE_MB		3072ULL /* 3GB by default */

#define XE_BO_FLAG_USER			BIT(0)
/* The bits below need to be contiguous, or things break */
#define XE_BO_FLAG_SYSTEM		BIT(1)
#define XE_BO_FLAG_VRAM0		BIT(2)
#define XE_BO_FLAG_VRAM1		BIT(3)
#define XE_BO_FLAG_VRAM_MASK		(XE_BO_FLAG_VRAM0 | XE_BO_FLAG_VRAM1)
/* -- */
#define XE_BO_FLAG_STOLEN		BIT(4)
#define XE_BO_FLAG_VRAM_IF_DGFX(tile)	(IS_DGFX(tile_to_xe(tile)) ? \
					 XE_BO_FLAG_VRAM0 << (tile)->id : \
					 XE_BO_FLAG_SYSTEM)
#define XE_BO_FLAG_GGTT			BIT(5)
#define XE_BO_FLAG_IGNORE_MIN_PAGE_SIZE	BIT(6)
#define XE_BO_FLAG_PINNED		BIT(7)
#define XE_BO_FLAG_NO_RESV_EVICT	BIT(8)
#define XE_BO_FLAG_DEFER_BACKING	BIT(9)
#define XE_BO_FLAG_SCANOUT		BIT(10)
#define XE_BO_FLAG_FIXED_PLACEMENT	BIT(11)
#define XE_BO_FLAG_PAGETABLE		BIT(12)
#define XE_BO_FLAG_NEEDS_CPU_ACCESS	BIT(13)
#define XE_BO_FLAG_NEEDS_UC		BIT(14)
#define XE_BO_FLAG_NEEDS_64K		BIT(15)
#define XE_BO_FLAG_NEEDS_2M		BIT(16)
#define XE_BO_FLAG_GGTT_INVALIDATE	BIT(17)
#define XE_BO_FLAG_GGTT0		BIT(18)
#define XE_BO_FLAG_GGTT1		BIT(19)
#define XE_BO_FLAG_GGTT2		BIT(20)
#define XE_BO_FLAG_GGTT3		BIT(21)
#define XE_BO_FLAG_GGTT_ALL		(XE_BO_FLAG_GGTT0 | \
					 XE_BO_FLAG_GGTT1 | \
					 XE_BO_FLAG_GGTT2 | \
					 XE_BO_FLAG_GGTT3)
#define XE_BO_FLAG_CPU_ADDR_MIRROR	BIT(22)

/* this one is triggered internally only */
#define XE_BO_FLAG_INTERNAL_TEST	BIT(30)
#define XE_BO_FLAG_INTERNAL_64K		BIT(31)

#define XE_BO_FLAG_GGTTx(tile) \
	(XE_BO_FLAG_GGTT0 << (tile)->id)

#define XE_PTE_SHIFT			12
#define XE_PAGE_SIZE			(1 << XE_PTE_SHIFT)
#define XE_PTE_MASK			(XE_PAGE_SIZE - 1)
#define XE_PDE_SHIFT			(XE_PTE_SHIFT - 3)
#define XE_PDES				(1 << XE_PDE_SHIFT)
#define XE_PDE_MASK			(XE_PDES - 1)

#define XE_64K_PTE_SHIFT		16
#define XE_64K_PAGE_SIZE		(1 << XE_64K_PTE_SHIFT)
#define XE_64K_PTE_MASK			(XE_64K_PAGE_SIZE - 1)
#define XE_64K_PDE_MASK			(XE_PDE_MASK >> 4)

#define XE_PL_SYSTEM		TTM_PL_SYSTEM
#define XE_PL_TT		TTM_PL_TT
#define XE_PL_VRAM0		TTM_PL_VRAM
#define XE_PL_VRAM1		(XE_PL_VRAM0 + 1)
#define XE_PL_STOLEN		(TTM_NUM_MEM_TYPES - 1)

#define XE_BO_PROPS_INVALID	(-1)

#define XE_PCI_BARRIER_MMAP_OFFSET	(0x50 << XE_PTE_SHIFT)

struct sg_table;

struct xe_bo *xe_bo_alloc(void);
void xe_bo_free(struct xe_bo *bo);

struct xe_bo *___xe_bo_create_locked(struct xe_device *xe, struct xe_bo *bo,
				     struct xe_tile *tile, struct dma_resv *resv,
				     struct ttm_lru_bulk_move *bulk, size_t size,
				     u16 cpu_caching, enum ttm_bo_type type,
				     u32 flags);
struct xe_bo *
xe_bo_create_locked_range(struct xe_device *xe,
			  struct xe_tile *tile, struct xe_vm *vm,
			  size_t size, u64 start, u64 end,
			  enum ttm_bo_type type, u32 flags, u64 alignment);
struct xe_bo *xe_bo_create_locked(struct xe_device *xe, struct xe_tile *tile,
				  struct xe_vm *vm, size_t size,
				  enum ttm_bo_type type, u32 flags);
struct xe_bo *xe_bo_create(struct xe_device *xe, struct xe_tile *tile,
			   struct xe_vm *vm, size_t size,
			   enum ttm_bo_type type, u32 flags);
struct xe_bo *xe_bo_create_user(struct xe_device *xe, struct xe_tile *tile,
				struct xe_vm *vm, size_t size,
				u16 cpu_caching,
				u32 flags);
struct xe_bo *xe_bo_create_pin_map(struct xe_device *xe, struct xe_tile *tile,
				   struct xe_vm *vm, size_t size,
				   enum ttm_bo_type type, u32 flags);
struct xe_bo *xe_bo_create_pin_map_at(struct xe_device *xe, struct xe_tile *tile,
				      struct xe_vm *vm, size_t size, u64 offset,
				      enum ttm_bo_type type, u32 flags);
struct xe_bo *xe_bo_create_pin_map_at_aligned(struct xe_device *xe,
					      struct xe_tile *tile,
					      struct xe_vm *vm,
					      size_t size, u64 offset,
					      enum ttm_bo_type type, u32 flags,
					      u64 alignment);
struct xe_bo *xe_bo_create_from_data(struct xe_device *xe, struct xe_tile *tile,
				     const void *data, size_t size,
				     enum ttm_bo_type type, u32 flags);
struct xe_bo *xe_managed_bo_create_pin_map(struct xe_device *xe, struct xe_tile *tile,
					   size_t size, u32 flags);
struct xe_bo *xe_managed_bo_create_from_data(struct xe_device *xe, struct xe_tile *tile,
					     const void *data, size_t size, u32 flags);
int xe_managed_bo_reinit_in_vram(struct xe_device *xe, struct xe_tile *tile, struct xe_bo **src);

int xe_bo_placement_for_flags(struct xe_device *xe, struct xe_bo *bo,
			      u32 bo_flags);

static inline struct xe_bo *ttm_to_xe_bo(const struct ttm_buffer_object *bo)
{
	return container_of(bo, struct xe_bo, ttm);
}

static inline struct xe_bo *gem_to_xe_bo(const struct drm_gem_object *obj)
{
	return container_of(obj, struct xe_bo, ttm.base);
}

#define xe_bo_device(bo) ttm_to_xe_device((bo)->ttm.bdev)

static inline struct xe_bo *xe_bo_get(struct xe_bo *bo)
{
	if (bo)
		drm_gem_object_get(&bo->ttm.base);

	return bo;
}

void xe_bo_put(struct xe_bo *bo);

/*
 * xe_bo_get_unless_zero() - Conditionally obtain a GEM object refcount on an
 * xe bo
 * @bo: The bo for which we want to obtain a refcount.
 *
 * There is a short window between where the bo's GEM object refcount reaches
 * zero and where we put the final ttm_bo reference. Code in the eviction- and
 * shrinking path should therefore attempt to grab a gem object reference before
 * trying to use members outside of the base class ttm object. This function is
 * intended for that purpose. On successful return, this function must be paired
 * with an xe_bo_put().
 *
 * Return: @bo on success, NULL on failure.
 */
static inline __must_check struct xe_bo *xe_bo_get_unless_zero(struct xe_bo *bo)
{
	if (!bo || !kref_get_unless_zero(&bo->ttm.base.refcount))
		return NULL;

	return bo;
}

static inline void __xe_bo_unset_bulk_move(struct xe_bo *bo)
{
	if (bo)
		ttm_bo_set_bulk_move(&bo->ttm, NULL);
}

static inline void xe_bo_assert_held(struct xe_bo *bo)
{
	if (bo)
		dma_resv_assert_held((bo)->ttm.base.resv);
}

int xe_bo_lock(struct xe_bo *bo, bool intr);

void xe_bo_unlock(struct xe_bo *bo);

static inline void xe_bo_unlock_vm_held(struct xe_bo *bo)
{
	if (bo) {
		XE_WARN_ON(bo->vm && bo->ttm.base.resv != xe_vm_resv(bo->vm));
		if (bo->vm)
			xe_vm_assert_held(bo->vm);
		else
			dma_resv_unlock(bo->ttm.base.resv);
	}
}

int xe_bo_pin_external(struct xe_bo *bo);
int xe_bo_pin(struct xe_bo *bo);
void xe_bo_unpin_external(struct xe_bo *bo);
void xe_bo_unpin(struct xe_bo *bo);
int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict);

static inline bool xe_bo_is_pinned(struct xe_bo *bo)
{
	return bo->ttm.pin_count;
}

static inline bool xe_bo_is_protected(const struct xe_bo *bo)
{
	return bo->pxp_key_instance;
}

static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
{
	if (likely(bo)) {
		xe_bo_lock(bo, false);
		xe_bo_unpin(bo);
		xe_bo_unlock(bo);

		xe_bo_put(bo);
	}
}

bool xe_bo_is_xe_bo(struct ttm_buffer_object *bo);
dma_addr_t __xe_bo_addr(struct xe_bo *bo, u64 offset, size_t page_size);
dma_addr_t xe_bo_addr(struct xe_bo *bo, u64 offset, size_t page_size);

static inline dma_addr_t
xe_bo_main_addr(struct xe_bo *bo, size_t page_size)
{
	return xe_bo_addr(bo, 0, page_size);
}

static inline u32
__xe_bo_ggtt_addr(struct xe_bo *bo, u8 tile_id)
{
	struct xe_ggtt_node *ggtt_node = bo->ggtt_node[tile_id];

	if (XE_WARN_ON(!ggtt_node))
		return 0;

	XE_WARN_ON(ggtt_node->base.size > bo->size);
	XE_WARN_ON(ggtt_node->base.start + ggtt_node->base.size > (1ull << 32));
	return ggtt_node->base.start;
}

static inline u32
xe_bo_ggtt_addr(struct xe_bo *bo)
{
	xe_assert(xe_bo_device(bo), bo->tile);

	return __xe_bo_ggtt_addr(bo, bo->tile->id);
}

int xe_bo_vmap(struct xe_bo *bo);
void xe_bo_vunmap(struct xe_bo *bo);
int xe_bo_read(struct xe_bo *bo, u64 offset, void *dst, int size);

bool mem_type_is_vram(u32 mem_type);
bool xe_bo_is_vram(struct xe_bo *bo);
bool xe_bo_is_stolen(struct xe_bo *bo);
bool xe_bo_is_stolen_devmem(struct xe_bo *bo);
bool xe_bo_is_vm_bound(struct xe_bo *bo);
bool xe_bo_has_single_placement(struct xe_bo *bo);
uint64_t vram_region_gpu_offset(struct ttm_resource *res);

bool xe_bo_can_migrate(struct xe_bo *bo, u32 mem_type);

int xe_bo_migrate(struct xe_bo *bo, u32 mem_type);
int xe_bo_evict(struct xe_bo *bo, bool force_alloc);

int xe_bo_evict_pinned(struct xe_bo *bo);
int xe_bo_restore_pinned(struct xe_bo *bo);

extern const struct ttm_device_funcs xe_ttm_funcs;
extern const char *const xe_mem_type_to_name[];

int xe_gem_create_ioctl(struct drm_device *dev, void *data,
			struct drm_file *file);
int xe_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
			     struct drm_file *file);
void xe_bo_runtime_pm_release_mmap_offset(struct xe_bo *bo);

int xe_bo_dumb_create(struct drm_file *file_priv,
		      struct drm_device *dev,
		      struct drm_mode_create_dumb *args);

bool xe_bo_needs_ccs_pages(struct xe_bo *bo);

static inline size_t xe_bo_ccs_pages_start(struct xe_bo *bo)
{
	return PAGE_ALIGN(bo->ttm.base.size);
}

static inline bool xe_bo_has_pages(struct xe_bo *bo)
{
	if ((bo->ttm.ttm && ttm_tt_is_populated(bo->ttm.ttm)) ||
	    xe_bo_is_vram(bo))
		return true;

	return false;
}

void __xe_bo_release_dummy(struct kref *kref);

/**
 * xe_bo_put_deferred() - Put a buffer object with delayed final freeing
 * @bo: The bo to put.
 * @deferred: List to which to add the buffer object if we cannot put, or
 * NULL if the function is to put unconditionally.
 *
 * Since the final freeing of an object includes both sleeping and (!)
 * memory allocation in the dma_resv individualization, it's not ok
 * to put an object from atomic context nor from within a held lock
 * tainted by reclaim. In such situations we want to defer the final
 * freeing until we've exited the restricting context, or in the worst
 * case to a workqueue.
 * This function either puts the object if possible without the refcount
 * reaching zero, or adds it to the @deferred list if that was not possible.
 * The caller needs to follow up with a call to xe_bo_put_commit() to actually
 * put the bo iff this function returns true. It's safe to always
 * follow up with a call to xe_bo_put_commit().
 * TODO: It's TTM that is the villain here. Perhaps TTM should add an
 * interface like this.
 *
 * Return: true if @bo was the first object put on the @freed list,
 * false otherwise.
 */
static inline bool
xe_bo_put_deferred(struct xe_bo *bo, struct llist_head *deferred)
{
	if (!deferred) {
		xe_bo_put(bo);
		return false;
	}

	if (!kref_put(&bo->ttm.base.refcount, __xe_bo_release_dummy))
		return false;

	return llist_add(&bo->freed, deferred);
}

void xe_bo_put_commit(struct llist_head *deferred);

/**
 * xe_bo_put_async() - Put BO async
 * @bo: The bo to put.
 *
 * Put BO async, the final put is deferred to a worker to exit an IRQ context.
 */
static inline void
xe_bo_put_async(struct xe_bo *bo)
{
	struct xe_bo_dev *bo_device = &xe_bo_device(bo)->bo_device;

	if (xe_bo_put_deferred(bo, &bo_device->async_list))
		schedule_work(&bo_device->async_free);
}

void xe_bo_dev_init(struct xe_bo_dev *bo_device);

void xe_bo_dev_fini(struct xe_bo_dev *bo_device);

struct sg_table *xe_bo_sg(struct xe_bo *bo);

/*
 * xe_sg_segment_size() - Provides upper limit for sg segment size.
 * @dev: device pointer
 *
 * Returns the maximum segment size for the 'struct scatterlist'
 * elements.
 */
static inline unsigned int xe_sg_segment_size(struct device *dev)
{
	struct scatterlist __maybe_unused sg;
	size_t max = BIT_ULL(sizeof(sg.length) * 8) - 1;

	max = min_t(size_t, max, dma_max_mapping_size(dev));

	/*
	 * The iommu_dma_map_sg() function ensures iova allocation doesn't
	 * cross dma segment boundary. It does so by padding some sg elements.
	 * This can cause overflow, ending up with sg->length being set to 0.
	 * Avoid this by ensuring maximum segment size is half of 'max'
	 * rounded down to PAGE_SIZE.
	 */
	return round_down(max / 2, PAGE_SIZE);
}

/**
 * struct xe_bo_shrink_flags - flags governing the shrink behaviour.
 * @purge: Only purging allowed. Don't shrink if bo not purgeable.
 * @writeback: Attempt to immediately move content to swap.
 */
struct xe_bo_shrink_flags {
	u32 purge : 1;
	u32 writeback : 1;
};

long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
		  const struct xe_bo_shrink_flags flags,
		  unsigned long *scanned);

/**
 * xe_bo_is_mem_type - Whether the bo currently resides in the given
 * TTM memory type
 * @bo: The bo to check.
 * @mem_type: The TTM memory type.
 *
 * Return: true iff the bo resides in @mem_type, false otherwise.
 */
static inline bool xe_bo_is_mem_type(struct xe_bo *bo, u32 mem_type)
{
	xe_bo_assert_held(bo);
	return bo->ttm.resource->mem_type == mem_type;
}

#endif