With the introduction of the delayed disable-sched behavior,
we use the GuC's xarray of valid guc_ids as a way to
identify whether new requests have been added to a context
while that context is being checked for closure.
Additionally, that prior change also closes the race in which
a new incoming request fails to cancel the pending
delayed disable-sched worker.
With these two complementary checks, we see no more
use for intel_context:guc_state:number_committed_requests.
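A minimal sketch of the xarray-based check, assuming the guc_id-to-context
mapping lives in guc->context_lookup as in the GuC submission code; the
helper name below is illustrative, not the one from the patch:

	/* Illustrative: a guc_id still present in the GuC's xarray implies
	 * requests were committed to the context, so it cannot yet be
	 * treated as closed. */
	static bool context_has_committed_requests(struct intel_context *ce)
	{
		struct intel_guc *guc = ce_to_guc(ce);

		return xa_load(&guc->context_lookup, ce->guc_id.id) != NULL;
	}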
Signed-off-by: Alan Previn <alan.previn.teres.alexis@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221006225121.826257-3-alan.previn.teres.alexis@intel.com
Add a delay, configurable via debugfs (default 34ms), to disable
scheduling of a context after the pin count goes to zero. Disable
scheduling is a costly operation as it requires synchronizing with
the GuC. So the idea is that a delay allows the user to resubmit
something before doing this operation. This delay is only applied if
the context isn't closed and less than a given threshold
(default 3/4) of the guc_ids are in use.
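A rough sketch of the gating logic; the field and helper names below are
illustrative placeholders, not necessarily those used by the patch:

	/* Only defer the schedule-disable when the context is still open
	 * and guc_id pressure is below the debugfs-configurable
	 * threshold; otherwise fall back to disabling immediately. */
	if (!intel_context_is_closed(ce) &&
	    guc_ids_in_use(guc) < sched_disable_gucid_threshold(guc))
		schedule_delayed_work(&ce->guc_state.sched_disable_delay_work,
				      msecs_to_jiffies(sched_disable_delay_ms(guc)));
	else
		guc_context_sched_disable(ce);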
Alan Previn: Matt Brost first introduced this patch back in Oct 2021.
However, no real-world workload with measured performance impact was
available to prove the intended results. Today, this series is being
republished in response to a real-world workload that benefited greatly
from it, along with a measured performance improvement.
Workload description: 36 containers were created on a DG2 device where
each container was performing a combination of 720p 3d game rendering
and 30fps video encoding. The workload density was configured in a way
that guaranteed each container to ALWAYS be able to render and
encode no less than 30fps with a predefined maximum render + encode
latency time. That means the totality of all 36 containers and their
workloads were not saturating the engines to their max (in order to
maintain just enough headroom to meet the min fps and max latencies
of incoming container submissions).
Problem statement: It was observed that the CPU core processing the i915
soft IRQ work was experiencing severe load. Using tracelogs and an
instrumentation patch to count specific i915 IRQ events, it was confirmed
that the majority of the CPU cycles were caused by the
gen11_other_irq_handler() -> guc_irq_handler() code path. The vast
majority of those cycles were spent processing a specific G2H
IRQ, i.e. INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_DONE. These IRQs are sent
by GuC in response to i915 KMD sending H2G requests:
INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_SET. Those H2G requests are sent
whenever a context goes idle so that we can unpin the context from GuC.
This high CPU utilization was limiting density scaling.
Root Cause Analysis: Because the incoming execution buffers were spread
across 36 different containers (each with multiple contexts) but the
system in totality was NOT saturated to the max, it was assumed that each
context was constantly idling between submissions. This was causing
a thrashing of unpinning contexts from GuC at one moment, followed quickly
by repinning them due to incoming workload the very next moment. These
event-pairs were being triggered across multiple contexts per container,
across all containers at the rate of > 30 times per sec per context.
Metrics: When running this workload without this patch, we measured an
average of ~69K INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_DONE events every 10
seconds or ~10 million times over ~25+ mins. With this patch, the count
reduced to ~480 every 10 seconds or about ~28K over ~10 mins. The
improvement observed is ~99% for the average counts per 10 seconds.
Design awareness: Selftest impact.
As a temporary workaround, disable this feature for the selftests.
Selftests are very timing sensitive and any change in timing can cause
failure. A follow-up patch will fix up the selftests to understand this
delay.
Design awareness: Race between guc_request_alloc and guc_context_close.
If a context close is issued while there is a request submission in
flight and a delayed schedule disable is pending, guc_context_close
and guc_request_alloc will race to cancel the delayed disable.
To close the race, make sure that guc_request_alloc waits for
guc_context_close to finish running before checking any state.
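A minimal sketch of the ordering fix, assuming both paths serialize on the
context's guc_state spinlock (simplified):

	/* In guc_request_alloc(): take and drop the guc_state lock so
	 * that a concurrent guc_context_close() has fully finished
	 * before any closed/pending-disable state is inspected. */
	spin_lock_irqsave(&ce->guc_state.lock, flags);
	spin_unlock_irqrestore(&ce->guc_state.lock, flags);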
Design awareness: GT Reset event.
If a GT reset is triggered, add an additional preparation step to
ensure that all contexts with a pending delayed disable-schedule worker
have it flushed. Move them directly into the closed state after
cancelling the worker. This is okay because the existing flow flushes
all yet-to-arrive G2H's, dropping them anyway.
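A sketch of the reset-preparation step; the worker field name and the exact
follow-up are simplified here:

	xa_for_each(&guc->context_lookup, index, ce) {
		/* Flush any pending delayed schedule-disable; the context
		 * is then moved directly to the closed state since the
		 * reset flow drops the corresponding G2H's anyway. */
		cancel_delayed_work_sync(&ce->guc_state.sched_disable_delay_work);
	}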
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Alan Previn <alan.previn.teres.alexis@intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221006225121.826257-2-alan.previn.teres.alexis@intel.com
In the S2idle suspend/resume phase GFXOFF remains functional, so some
IP blocks are likely to be reinitialized at GFXOFF entry, which results
in failing to program GC registers. Therefore, disallow GFXOFF until
the AMDGPU IPs are completely reinitialized.
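A minimal sketch of the idea using the existing amdgpu_gfx_off_ctrl()
helper; where exactly the calls sit in the suspend/resume flow is
simplified:

	/* Keep GFXOFF disallowed across S2idle suspend... */
	amdgpu_gfx_off_ctrl(adev, false);

	/* ...and allow it again only once all IP blocks have been
	 * reinitialized on resume. */
	amdgpu_gfx_off_ctrl(adev, true);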
Signed-off-by: Prike Liang <Prike.Liang@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org # 5.15.x
Waitboost (when SLPC is enabled) results in a H2G message. This can result
in thousands of messages during a stress test and fill up an already full
CTB. There is no need to request a boost if the min softlimit is equal
to or greater than the boost frequency.
v2: Add the tracing back, and check requested freq
in the worker thread (Tvrtko)
v3: Check requested freq in dec_waiters as well
v4: Only check min_softlimit against boost_freq. Limit this
optimization for server parts for now.
v5: min_softlimit can be greater than boost (Ashutosh)
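A minimal sketch of the final (v4/v5) early-out, assuming the SLPC state
exposes min_freq_softlimit and boost_freq; the server-part gating is
omitted:

	/* Skip the H2G boost request when the min softlimit already
	 * guarantees at least the boost frequency. */
	if (slpc->min_freq_softlimit >= slpc->boost_freq)
		return;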
Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221024171108.14373-1-vinay.belgaumkar@intel.com
Not all Dekel PHY registers have a lane instance, so having to specify
this when using them is awkward. It makes more sense to define each PHY
register with its full internal PHY offset, where bits 15:12 hold the
lane for lane-instanced PHY registers and just a register bank index
for other
PHY registers. This way lane-instanced registers can be referred to with
the (tc_port, lane) parameters, while other registers just with a tc_port
parameter.
An additional benefit of this change is that it prevents passing a Dekel
register to a generic MMIO access function or vice versa.
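A sketch of what such a definition could look like; the macro names and
the example offset are illustrative only:

	/* Encode the full internal PHY offset: bits 15:12 carry the lane
	 * (or bank) index, the low bits the offset within that bank. */
	#define _DKL_BANK_SHIFT		12
	#define _DKL_REG(bank, offset)	(((bank) << _DKL_BANK_SHIFT) | (offset))
	/* a lane-instanced register: the lane selects the bank */
	#define DKL_TX_EXAMPLE(ln)	_DKL_REG((ln), 0x2C0)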
v2:
- Fix parameter reuse in the DKL_REG_MMIO definition.
v3:
- Rebase on latest patchset version.
Cc: Jani Nikula <jani.nikula@intel.com>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221025114457.2191004-3-imre.deak@intel.com
Accessing the TypeC DKL PHY registers during modeset-commit,
-verification, DP link-retraining and AUX power well toggling is racy
due to these code paths being concurrent, the PHY register bank
selection register (HIP_INDEX_REG) being shared between PHY instances
(aka TC ports), and the bank selection not being atomic wrt. the actual
PHY register access.
Add the required locking around each PHY register bank selection->
register access sequence.
Kudos to Ville for noticing the race conditions.
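A sketch of the serialized bank-select + access sequence, assuming a
dedicated spinlock; names are simplified relative to the accessors added
in intel_dkl_phy.c:

	/* Keep the HIP_INDEX_REG bank selection and the dependent
	 * register access atomic wrt. other TC ports / code paths. */
	spin_lock(&i915->display.dkl.phy_lock);
	intel_de_write(i915, HIP_INDEX_REG(tc_port),
		       HIP_INDEX_VAL(tc_port, bank_idx));
	val = intel_de_read(i915, reg);
	spin_unlock(&i915->display.dkl.phy_lock);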
v2:
- Add the DKL PHY register accessors to intel_dkl_phy.[ch]. (Jani)
- Make the DKL_REG_TC_PORT macro independent of PHY internals.
- Move initing the DKL PHY lock to a more logical place.
v3:
- Fix parameter reuse in the DKL_REG_TC_PORT definition.
- Document the usage of phy_lock.
v4:
- Fix adding TC_PORT_1 offset in the DKL_REG_TC_PORT definition.
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: <stable@vger.kernel.org> # v5.5+
Acked-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221025114457.2191004-1-imre.deak@intel.com
There's a lot going on here, but the main thing is switching the
firmware EDID loader to use struct drm_edid. Unfortunately, it's
difficult to reasonably split into smaller pieces.
Convert the EDID loader to struct drm_edid. There's a functional change
in validation; it no longer tries to fix errors or filter invalid
blocks. It's stricter in this sense. Hopefully this will not be an
issue.
As a by-product, this change also allows HF-EEODB extended EDIDs to be
passed via override/firmware EDID.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/e64267c28eca483e83c802bc06ddd149bdcdfc66.1666614699.git.jani.nikula@intel.com
Having the EDID override debugfs directly update the EDID property is
problematic. The update is only partial. The driver has no way of
knowing it's been updated. The mode list is not updated. The result is
an inconsistent state.
Detach debugfs EDID override from the property update completely. Only
set and reset a separate override EDID copy from debugfs, and have it
take effect only at detect (via EDID read). The copy is at
connector->edid_override, protected by connector->edid_override_mutex.
This also brings override EDID closer to firmware EDID in behaviour.
Add validation of the override EDID which we completely lacked.
Note that IGT already forces a detect whenever tests update the override
EDID.
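A minimal sketch of the debugfs write side under the new scheme (error
handling omitted; the alloc/free helpers are the struct drm_edid ones):

	/* Only stash a validated copy; it takes effect at the next
	 * detect cycle when the EDID is (re)read. */
	mutex_lock(&connector->edid_override_mutex);
	drm_edid_free(connector->edid_override);
	connector->edid_override = drm_edid_alloc(buf, len);
	mutex_unlock(&connector->edid_override_mutex);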
v2: Add locking (Ville)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/4c875f8e06c4499f498fcf876e1233cbb155ec8a.1666614699.git.jani.nikula@intel.com
The connector->override_edid flag is strictly for EDID override debugfs
management, and drivers have no business using it.
The check for override_edid was added in commit 3019062905 ("drm/i915:
Ignore TMDS clock limit for DP++ when EDID override is set") to
facilitate mode list cross-checking against modes in override EDID when
the connector in question isn't even connected. The dual mode detect
fallback would do VBT based limiting in this case.
Instead of override EDID, check for connector forcing in the fallback.
v2: Simply use !connector->force (Ville)
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/c8b45867cf37134ab40be23e22825ca45adc6041.1666614699.git.jani.nikula@intel.com
For normal connector detect, there's really no point in trying dual mode
detect if the connector is disconnected. We can simplify the detect
sequence by skipping it. Since intel_hdmi_dp_dual_mode_detect() is only
called when EDID is present, we can drop the has_edid parameter.
The functional effect is speeding up disconnected connector detection
ever so slightly and, combined with firmware EDID, no longer logging
about assuming a dual mode adaptor.
It's a bit subtle, but this will also skip dual mode detect if the
connector is force connected and a) there's no EDID of any kind, normal
or override/firmware or b) there's EDID but it does not indicate
digital. These are corner cases no matter what, and arguably forcing
should not be limited by dual mode detect.
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/f8f2a4a147e1c87ba93269a607f71fc29c4b59f6.1666614699.git.jani.nikula@intel.com
Make glk_load_luts() a bit lighter for the common case
where neither the degamma LUT nor pipe CSC are enabled
by not loading the linear degamma LUT. Making .load_luts()
as lightweight as possible is a good idea since it may need
to execute from a vblank worker under tight deadlines.
My earlier reasoning for always loading the linear degamma LUT
was to avoid an extra LUT load when just enabling/disabling the
pipe CSC, but that is nonsense since we load the LUTs on every
flagged color management change/modeset anyway (either of which
is needed for a pipe CSC toggle).
We can also get rid of the glk_can_preload_luts() special
case since the presence of the degamma LUT will now always
match csc_enable.
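A sketch of the lighter load path (simplified relative to the actual
intel_color.c helpers):

	/* Only bother programming the linear degamma LUT when something
	 * actually sits in front of the pipe CSC. */
	if (crtc_state->hw.degamma_lut || crtc_state->csc_enable)
		glk_load_degamma_lut(crtc_state);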
v2: Fix typos (Uma)
Reviewed-by: Uma Shankar <uma.shankar@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221024161514.5340-6-ville.syrjala@linux.intel.com
Add an extra remapping step between the logical state of the LUTs
(hw.(de)gamma_lut), as specified via the uapi/bigjoiner copy, and the
actual state of the LUTs programmed into the hardware.
With this we should finally be able to finish the (de)gamma
readout/state checker support for the remaining platforms
(ilk-skl) where the same hardware LUT can be positioned
either before or after the pipe CSC unit. Where we position
it depends on factors such as presence of the logical degamma
LUT, RGB vs. YCbCr output, full vs. limited RGB quantization
range.
Without the extra remapping step the state readout doesn't
really know whether the LUT read from the hardware is the
degamma or gamma LUT, and so is unable to accurately store
it into our crtc state. With the remapping step we know
exactly where to put it given the order of the LUT vs. CSC
in the hardware state.
Only the initial hw->uapi state readout done during driver
load/resume still has the problem of not really knowing
what to do with the LUT(s). But we can just assume 1:1
mapping there and let subsequent commits fix things up.
Another benefit is that we now have a place for purely
internal LUTs, without complicating the bigjoiner uapi->hw
copy logic. This should prove useful for streamlining
glk degamma LUT handling.
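A simplified sketch of the remapping idea for the shared-LUT (ilk-skl)
case; the field names holding the hardware-facing LUTs are illustrative:

	/* Decide on which side of the pipe CSC each logical LUT ends up
	 * in the hardware-facing state. */
	if (crtc_state->hw.degamma_lut) {
		crtc_state->pre_csc_lut  = crtc_state->hw.degamma_lut;
		crtc_state->post_csc_lut = crtc_state->hw.gamma_lut;
	} else {
		crtc_state->pre_csc_lut  = NULL;
		crtc_state->post_csc_lut = crtc_state->hw.gamma_lut;
	}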
Reviewed-by: Uma Shankar <uma.shankar@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221024161514.5340-3-ville.syrjala@linux.intel.com
It's important to get the initial size of cotables right because
otherwise every app needs to start with a synchronous cotable resize.
This has a measurable impact on system-wide performance but is not
relevant for long-running single full-screen apps, for which the
cotable resizes happen early in the lifecycle, after which they
continue running just fine.
To eliminate the initial cotable resizes, match the initial sizes to
what userspace expects. The actual result of the patch is simply setting
the initial size of two of the cotables to a size that will align them
to two pages instead of one.
For a piglit run, before:
name | total | per frame | per sec
vmw_cotable_resize | 1405 | 0.12 | 1.58
vmw_execbuf_ioctl | 290805 | 25.43 | 326.05
After:
name | total | per frame | per sec
vmw_cotable_resize | 4 | 0.00 | 0.00
vmw_execbuf_ioctl | 281673 | 25.10 | 274.68
Signed-off-by: Zack Rusin <zackr@vmware.com>
Reviewed-by: Michael Banack <banackm@vmware.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221022040236.616490-17-zack@kde.org
The explicit vblank handling was never finished. The driver never had
a full implementation of vblank handling, and what was there is emulated
by DRM when the driver doesn't pretend to implement it itself.
Let DRM handle the vblank emulation and stop pretending the driver is
doing anything special with vblank. In the future it would make sense
to implement helpers for full vblank handling because vkms and
amdgpu_vkms already have that code. Exporting it to common helpers and
having all three drivers share it would make sense (that would be largely
just to allow more of igt to run).
Signed-off-by: Zack Rusin <zackr@vmware.com>
Reviewed-by: Maaz Mombasawala <mombasawalam@vmware.com>
Reviewed-by: Martin Krastev <krastevm@vmware.com>
Reviewed-by: Michael Banack <banackm@vmware.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221022040236.616490-15-zack@kde.org
Instead of using the vmwgfx-specific framebuffer implementation, use
the drm fb helpers. There's no change in functionality; the only difference
is a reduction in the amount of code inside the vmwgfx module.
drm fb helpers do not deal correctly with changes in crtc preferred mode
at runtime, but the old fb code wasn't dealing with it either.
The same situation applies to high-res fb consoles - the old code was
limited to 1176x885 because it was checking for legacy/deprecated
memory limits, while the drm fb helpers are limited to the initial
resolution set on the fb due to the first problem (drm fb helpers being
unable to handle hotplug crtc preferred mode changes).
This also removes the kernel config for disabling fb support which hasn't
been used or supported in a very long time.
Signed-off-by: Zack Rusin <zackr@vmware.com>
Reviewed-by: Maaz Mombasawala <mombasawalam@vmware.com>
Reviewed-by: Martin Krastev <krastevm@vmware.com>
Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/20221022040236.616490-14-zack@kde.org
This is part of an effort to move from the vmwgfx_open_hash hashtable to
the linux/hashtable implementation.
Refactor the ref_hash hashtable, used for fast lookup of reference objects
associated with a ttm file.
This also exposed a problem related to inconsistently using 32-bit and
64-bit keys with this hashtable. The hash function used changes depending
on the size of the key type, and the results are not consistent across
widths; for example, hash_32(329) = 329, but hash_long(329) = 328. This would
cause the lookup to fail for objects already in the hashtable, since keys
of different sizes were being passed during adding and lookup. This was
not an issue before because vmwgfx_open_hash always used hash_long.
Fix this by always using 64-bit keys for this hashtable, which means that
hash_long is always used.
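For illustration, hash_min() picks hash_32() for keys of 4 bytes or less
and hash_long() otherwise, so handing linux/hashtable a u64 key on both
the add and lookup side keeps the two consistent (the struct and helper
below are hypothetical):

	#include <linux/hashtable.h>

	static DEFINE_HASHTABLE(ref_hash, 12);

	struct ref_object {
		struct hlist_node head;
		/* ... */
	};

	static void ref_add(struct ref_object *ref, u64 key)
	{
		/* u64 key => hash_min() always resolves to hash_long() */
		hash_add_rcu(ref_hash, &ref->head, key);
	}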
Signed-off-by: Maaz Mombasawala <mombasawalam@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
Signed-off-by: Zack Rusin <zackr@vmware.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221022040236.616490-11-zack@kde.org