mirror of
https://github.com/torvalds/linux.git
synced 2026-04-20 15:53:59 -04:00
* arm64/for-next/perf:
  perf: arm_spe: Print the version of SPE detected
  perf: arm_spe: Add support for SPEv1.2 inverted event filtering
  perf: Add perf_event_attr::config3
  drivers/perf: fsl_imx8_ddr_perf: Remove set-but-not-used variable
  perf: arm_spe: Support new SPEv1.2/v8.7 'not taken' event
  perf: arm_spe: Use new PMSIDR_EL1 register enums
  perf: arm_spe: Drop BIT() and use FIELD_GET/PREP accessors
  arm64/sysreg: Convert SPE registers to automatic generation
  arm64: Drop SYS_ from SPE register defines
  perf: arm_spe: Use feature numbering for PMSEVFR_EL1 defines
  perf/marvell: Add ACPI support to TAD uncore driver
  perf/marvell: Add ACPI support to DDR uncore driver
  perf/arm-cmn: Reset DTM_PMU_CONFIG at probe
  drivers/perf: hisi: Extract initialization of "cpa_pmu->pmu"
  drivers/perf: hisi: Simplify the parameters of hisi_pmu_init()
  drivers/perf: hisi: Advertise the PERF_PMU_CAP_NO_EXCLUDE capability

* for-next/sysreg: arm64 sysreg and cpufeature fixes/updates
  KVM: arm64: Use symbolic definition for ISR_EL1.A
  arm64/sysreg: Add definition of ISR_EL1
  arm64/sysreg: Add definition for ICC_NMIAR1_EL1
  arm64/cpufeature: Remove 4 bit assumption in ARM64_FEATURE_MASK()
  arm64/sysreg: Fix errors in 32 bit enumeration values
  arm64/cpufeature: Fix field sign for DIT hwcap detection

* for-next/sme: SME-related updates
  arm64/sme: Optimise SME exit on syscall entry
  arm64/sme: Don't use streaming mode to probe the maximum SME VL
  arm64/ptrace: Use system_supports_tpidr2() to check for TPIDR2 support

* for-next/kselftest: (23 commits) arm64 kselftest fixes and improvements
  kselftest/arm64: Don't require FA64 for streaming SVE+ZA tests
  kselftest/arm64: Copy whole EXTRA context
  kselftest/arm64: Fix enumeration of systems without 128 bit SME for SSVE+ZA
  kselftest/arm64: Fix enumeration of systems without 128 bit SME
  kselftest/arm64: Don't require FA64 for streaming SVE tests
  kselftest/arm64: Limit the maximum VL we try to set via ptrace
  kselftest/arm64: Correct buffer size for SME ZA storage
  kselftest/arm64: Remove the local NUM_VL definition
  kselftest/arm64: Verify simultaneous SSVE and ZA context generation
  kselftest/arm64: Verify that SSVE signal context has SVE_SIG_FLAG_SM set
  kselftest/arm64: Remove spurious comment from MTE test Makefile
  kselftest/arm64: Support build of MTE tests with clang
  kselftest/arm64: Initialise current at build time in signal tests
  kselftest/arm64: Don't pass headers to the compiler as source
  kselftest/arm64: Remove redundant _start labels from FP tests
  kselftest/arm64: Fix .pushsection for strings in FP tests
  kselftest/arm64: Run BTI selftests on systems without BTI
  kselftest/arm64: Fix test numbering when skipping tests
  kselftest/arm64: Skip non-power of 2 SVE vector lengths in fp-stress
  kselftest/arm64: Only enumerate power of two VLs in syscall-abi
  ...

* for-next/misc: Miscellaneous arm64 updates
  arm64/mm: Intercept pfn changes in set_pte_at()
  Documentation: arm64: correct spelling
  arm64: traps: attempt to dump all instructions
  arm64: Apply dynamic shadow call stack patching in two passes
  arm64: el2_setup.h: fix spelling typo in comments
  arm64: Kconfig: fix spelling
  arm64: cpufeature: Use kstrtobool() instead of strtobool()
  arm64: Avoid repeated AA64MMFR1_EL1 register read on pagefault path
  arm64: make ARCH_FORCE_MAX_ORDER selectable

* for-next/sme2: (23 commits) Support for arm64 SME 2 and 2.1
  arm64/sme: Fix __finalise_el2 SMEver check
  kselftest/arm64: Remove redundant _start labels from zt-test
  kselftest/arm64: Add coverage of SME 2 and 2.1 hwcaps
  kselftest/arm64: Add coverage of the ZT ptrace regset
  kselftest/arm64: Add SME2 coverage to syscall-abi
  kselftest/arm64: Add test coverage for ZT register signal frames
  kselftest/arm64: Teach the generic signal context validation about ZT
  kselftest/arm64: Enumerate SME2 in the signal test utility code
  kselftest/arm64: Cover ZT in the FP stress test
  kselftest/arm64: Add a stress test program for ZT0
  arm64/sme: Add hwcaps for SME 2 and 2.1 features
  arm64/sme: Implement ZT0 ptrace support
  arm64/sme: Implement signal handling for ZT
  arm64/sme: Implement context switching for ZT0
  arm64/sme: Provide storage for ZT0
  arm64/sme: Add basic enumeration for SME2
  arm64/sme: Enable host kernel to access ZT0
  arm64/sme: Manually encode ZT0 load and store instructions
  arm64/esr: Document ISS for ZT0 being disabled
  arm64/sme: Document SME 2 and SME 2.1 ABI
  ...

* for-next/tpidr2: Include TPIDR2 in the signal context
  kselftest/arm64: Add test case for TPIDR2 signal frame records
  kselftest/arm64: Add TPIDR2 to the set of known signal context records
  arm64/signal: Include TPIDR2 in the signal context
  arm64/sme: Document ABI for TPIDR2 signal information

* for-next/scs: arm64: harden shadow call stack pointer handling
  arm64: Stash shadow stack pointer in the task struct on interrupt
  arm64: Always load shadow stack pointer directly from the task struct

* for-next/compat-hwcap: arm64: Expose compat ARMv8 AArch32 features (HWCAPs)
  arm64: Add compat hwcap SSBS
  arm64: Add compat hwcap SB
  arm64: Add compat hwcap I8MM
  arm64: Add compat hwcap ASIMDBF16
  arm64: Add compat hwcap ASIMDFHM
  arm64: Add compat hwcap ASIMDDP
  arm64: Add compat hwcap FPHP and ASIMDHP

* for-next/ftrace: Add arm64 support for DYNAMIC_FTRACE_WITH_CALL_OPS
  arm64: avoid executing padding bytes during kexec / hibernation
  arm64: Implement HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS
  arm64: ftrace: Update stale comment
  arm64: patching: Add aarch64_insn_write_literal_u64()
  arm64: insn: Add helpers for BTI
  arm64: Extend support for CONFIG_FUNCTION_ALIGNMENT
  ACPI: Don't build ACPICA with '-Os'
  Compiler attributes: GCC cold function alignment workarounds
  ftrace: Add DYNAMIC_FTRACE_WITH_CALL_OPS

* for-next/efi-boot-mmu-on: Permit arm64 EFI boot with MMU and caches on
  arm64: kprobes: Drop ID map text from kprobes blacklist
  arm64: head: Switch endianness before populating the ID map
  efi: arm64: enter with MMU and caches enabled
  arm64: head: Clean the ID map and the HYP text to the PoC if needed
  arm64: head: avoid cache invalidation when entering with the MMU on
  arm64: head: record the MMU state at primary entry
  arm64: kernel: move identity map out of .text mapping
  arm64: head: Move all finalise_el2 calls to after __enable_mmu

* for-next/ptrauth: arm64 pointer authentication cleanup
  arm64: pauth: don't sign leaf functions
  arm64: unify asm-arch manipulation

* for-next/pseudo-nmi: Pseudo-NMI code generation optimisations
  arm64: irqflags: use alternative branches for pseudo-NMI logic
  arm64: add ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap
  arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on ARM64_HAS_GIC_CPUIF_SYSREGS
  arm64: rename ARM64_HAS_IRQ_PRIO_MASKING to ARM64_HAS_GIC_PRIO_MASKING
  arm64: rename ARM64_HAS_SYSREG_GIC_CPUIF to ARM64_HAS_GIC_CPUIF_SYSREGS
336 lines
8.1 KiB
C
// SPDX-License-Identifier: GPL-2.0
/* Copyright (C) 2019 ARM Limited */

#include <ctype.h>
#include <string.h>

#include "testcases.h"

struct _aarch64_ctx *get_header(struct _aarch64_ctx *head, uint32_t magic,
				size_t resv_sz, size_t *offset)
{
	size_t offs = 0;
	struct _aarch64_ctx *found = NULL;

	if (!head || resv_sz < HDR_SZ)
		return found;

	while (offs <= resv_sz - HDR_SZ &&
	       head->magic != magic && head->magic) {
		offs += head->size;
		head = GET_RESV_NEXT_HEAD(head);
	}
	/* Don't dereference a header that landed past the validated area */
	if (offs <= resv_sz - HDR_SZ && head->magic == magic) {
		found = head;
		if (offset)
			*offset = offs;
	}

	return found;
}

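The record chain that get_header() walks is a sequence of headers, each a u32 magic plus a u32 total size, terminated by an all-zero header. A stand-alone sketch of that traversal, using a toy struct and made-up magic values rather than the kernel's sigcontext types:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for struct _aarch64_ctx: every record starts with this header */
struct hdr {
	uint32_t magic;
	uint32_t size;	/* total record size in bytes, header included */
};

/* Walk the chain until the requested magic, the zero terminator, or the end */
static struct hdr *find_record(void *buf, size_t buf_sz, uint32_t magic)
{
	size_t offs = 0;

	while (offs + sizeof(struct hdr) <= buf_sz) {
		struct hdr *h = (struct hdr *)((char *)buf + offs);

		if (h->magic == magic)
			return h;
		if (!h->magic || !h->size)	/* terminator or corrupt chain */
			return NULL;
		offs += h->size;
	}
	return NULL;
}
```

The zero-size guard also keeps a corrupt record from looping forever, which the real walker avoids via its explicit offset bound.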
bool validate_extra_context(struct extra_context *extra, char **err,
			    void **extra_data, size_t *extra_size)
{
	struct _aarch64_ctx *term;

	/* The out-parameters are written unconditionally, so check them too */
	if (!extra || !err || !extra_data || !extra_size)
		return false;

	fprintf(stderr, "Validating EXTRA...\n");
	term = GET_RESV_NEXT_HEAD(&extra->head);
	if (!term || term->magic || term->size) {
		*err = "Missing terminator after EXTRA context";
		return false;
	}
	if (extra->datap & 0x0fUL)
		*err = "Extra DATAP misaligned";
	else if (extra->size & 0x0fUL)
		*err = "Extra SIZE misaligned";
	else if (extra->datap != (uint64_t)term + 0x10UL)
		*err = "Extra DATAP misplaced (not contiguous)";
	if (*err)
		return false;

	*extra_data = (void *)extra->datap;
	*extra_size = extra->size;

	return true;
}

bool validate_sve_context(struct sve_context *sve, char **err)
{
	size_t regs_size;

	if (!sve || !err)
		return false;

	/*
	 * Size will be rounded up to a multiple of 16 bytes. Computed only
	 * after the NULL check above, since it dereferences sve->vl.
	 */
	regs_size = ((SVE_SIG_CONTEXT_SIZE(sve_vq_from_vl(sve->vl)) + 15) / 16) * 16;

	/* Either a bare sve_context or a sve_context followed by regs data */
	if ((sve->head.size != sizeof(struct sve_context)) &&
	    (sve->head.size != regs_size)) {
		*err = "bad size for SVE context";
		return false;
	}

	if (!sve_vl_valid(sve->vl)) {
		*err = "SVE VL invalid";

		return false;
	}

	return true;
}

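Both the SVE and ZA validators round the expected payload size up to the next multiple of 16 bytes with `((x + 15) / 16) * 16`. That arithmetic restated stand-alone (the helper name is ours, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Round sz up to the next multiple of 16, as the signal frame layout requires */
static size_t round_up_16(size_t sz)
{
	return ((sz + 15) / 16) * 16;
}
```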
bool validate_za_context(struct za_context *za, char **err)
{
	size_t regs_size;

	if (!za || !err)
		return false;

	/*
	 * Size will be rounded up to a multiple of 16 bytes. Computed only
	 * after the NULL check above, since it dereferences za->vl.
	 */
	regs_size = ((ZA_SIG_CONTEXT_SIZE(sve_vq_from_vl(za->vl)) + 15) / 16) * 16;

	/* Either a bare za_context or a za_context followed by regs data */
	if ((za->head.size != sizeof(struct za_context)) &&
	    (za->head.size != regs_size)) {
		*err = "bad size for ZA context";
		return false;
	}

	if (!sve_vl_valid(za->vl)) {
		*err = "SME VL in ZA context invalid";

		return false;
	}

	return true;
}

bool validate_zt_context(struct zt_context *zt, char **err)
{
	if (!zt || !err)
		return false;

	/* If the context is present there should be at least one register */
	if (zt->nregs == 0) {
		*err = "no registers";
		return false;
	}

	/* Size should agree with the number of registers */
	if (zt->head.size != ZT_SIG_CONTEXT_SIZE(zt->nregs)) {
		*err = "register count does not match size";
		return false;
	}

	return true;
}

bool validate_reserved(ucontext_t *uc, size_t resv_sz, char **err)
{
	bool terminated = false;
	size_t offs = 0;
	int flags = 0;
	int new_flags, i;
	struct extra_context *extra = NULL;
	struct sve_context *sve = NULL;
	struct za_context *za = NULL;
	struct zt_context *zt = NULL;
	struct _aarch64_ctx *head =
		(struct _aarch64_ctx *)uc->uc_mcontext.__reserved;
	void *extra_data = NULL;
	size_t extra_sz = 0;
	char magic[4];

	if (!err)
		return false;
	/* Walk till the end terminator verifying __reserved contents */
	while (head && !terminated && offs < resv_sz) {
		if ((uint64_t)head & 0x0fUL) {
			*err = "Misaligned HEAD";
			return false;
		}

		new_flags = 0;

		switch (head->magic) {
		case 0:
			if (head->size) {
				*err = "Bad size for terminator";
			} else if (extra_data) {
				/* End of main data, walking the extra data */
				head = extra_data;
				resv_sz = extra_sz;
				offs = 0;

				extra_data = NULL;
				extra_sz = 0;
				continue;
			} else {
				terminated = true;
			}
			break;
		case FPSIMD_MAGIC:
			if (flags & FPSIMD_CTX)
				*err = "Multiple FPSIMD_MAGIC";
			else if (head->size != sizeof(struct fpsimd_context))
				*err = "Bad size for fpsimd_context";
			new_flags |= FPSIMD_CTX;
			break;
		case ESR_MAGIC:
			if (head->size != sizeof(struct esr_context))
				*err = "Bad size for esr_context";
			break;
		case TPIDR2_MAGIC:
			if (head->size != sizeof(struct tpidr2_context))
				*err = "Bad size for tpidr2_context";
			break;
		case SVE_MAGIC:
			if (flags & SVE_CTX)
				*err = "Multiple SVE_MAGIC";
			/* Size is validated in validate_sve_context() */
			sve = (struct sve_context *)head;
			new_flags |= SVE_CTX;
			break;
		case ZA_MAGIC:
			if (flags & ZA_CTX)
				*err = "Multiple ZA_MAGIC";
			/* Size is validated in validate_za_context() */
			za = (struct za_context *)head;
			new_flags |= ZA_CTX;
			break;
		case ZT_MAGIC:
			if (flags & ZT_CTX)
				*err = "Multiple ZT_MAGIC";
			/* Size is validated in validate_zt_context() */
			zt = (struct zt_context *)head;
			new_flags |= ZT_CTX;
			break;
		case EXTRA_MAGIC:
			if (flags & EXTRA_CTX)
				*err = "Multiple EXTRA_MAGIC";
			else if (head->size != sizeof(struct extra_context))
				*err = "Bad size for extra_context";
			new_flags |= EXTRA_CTX;
			extra = (struct extra_context *)head;
			break;
		case KSFT_BAD_MAGIC:
			/*
			 * This is a BAD magic header defined artificially by a
			 * testcase and surely unknown to the kernel's
			 * parse_user_sigframe(). It MUST cause a
			 * kernel-induced SEGV.
			 */
			*err = "BAD MAGIC !";
			break;
		default:
			/*
			 * A still unknown magic: potentially freshly added to
			 * the kernel code and still unknown to the tests.
			 * Magic numbers are supposed to be allocated as
			 * somewhat meaningful ASCII strings, so try to print
			 * them as such, as well as the raw number.
			 */
			memcpy(magic, &head->magic, sizeof(magic));
			for (i = 0; i < sizeof(magic); i++)
				if (!isalnum(magic[i]))
					magic[i] = '?';

			fprintf(stdout,
				"SKIP Unknown MAGIC: 0x%X (%c%c%c%c) - Is KSFT arm64/signal up to date?\n",
				head->magic,
				magic[3], magic[2], magic[1], magic[0]);
			break;
		}

		if (*err)
			return false;

		offs += head->size;
		if (resv_sz < offs + sizeof(*head)) {
			*err = "HEAD Overrun";
			return false;
		}

		if (new_flags & EXTRA_CTX)
			if (!validate_extra_context(extra, err,
						    &extra_data, &extra_sz))
				return false;
		if (new_flags & SVE_CTX)
			if (!validate_sve_context(sve, err))
				return false;
		if (new_flags & ZA_CTX)
			if (!validate_za_context(za, err))
				return false;
		if (new_flags & ZT_CTX)
			if (!validate_zt_context(zt, err))
				return false;

		flags |= new_flags;

		head = GET_RESV_NEXT_HEAD(head);
	}

	if (terminated && !(flags & FPSIMD_CTX)) {
		*err = "Missing FPSIMD";
		return false;
	}

	if (terminated && (flags & ZT_CTX) && !(flags & ZA_CTX)) {
		*err = "ZT context but no ZA context";
		return false;
	}

	return true;
}

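The default branch above relies on magic numbers being allocated as meaningful ASCII: it copies the little-endian bytes, masks non-alphanumerics to '?', and prints them high byte first. The same decoding restated stand-alone, assuming a little-endian host just as the test does; the example value is the kernel's ESR_MAGIC (0x45535201):

```c
#include <assert.h>
#include <ctype.h>
#include <stdint.h>
#include <string.h>

/*
 * Turn a context-record magic into a printable 4-char string: copy its
 * little-endian bytes, then emit them most-significant first, replacing
 * anything non-alphanumeric with '?'.
 */
static void decode_magic(uint32_t m, char out[5])
{
	unsigned char raw[4];
	int i;

	memcpy(raw, &m, sizeof(raw));	/* raw[0] is the low byte on LE hosts */
	for (i = 0; i < 4; i++) {
		unsigned char c = raw[3 - i];	/* high byte first */
		out[i] = isalnum(c) ? (char)c : '?';
	}
	out[4] = '\0';
}
```

For 0x45535201 this yields "ESR?": bytes 'E', 'S', 'R' pass isalnum() and the trailing 0x01 does not.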
/*
 * This function walks through the records inside the provided reserved area
 * trying to find enough space to fit @need_sz bytes: if not enough space is
 * available and an extra_context record is present, it throws away the
 * extra_context record.
 *
 * It returns a pointer to a new header where it is possible to start storing
 * our need_sz bytes.
 *
 * @shead: points to the start of reserved area
 * @need_sz: needed bytes
 * @resv_sz: reserved area size in bytes
 * @offset: if not null, this will be filled with the offset of the returned
 *	    head pointer from @shead
 *
 * @return: pointer to a new head where to start storing need_sz bytes, or
 *	    NULL if space could not be made available.
 */
struct _aarch64_ctx *get_starting_head(struct _aarch64_ctx *shead,
				       size_t need_sz, size_t resv_sz,
				       size_t *offset)
{
	size_t offs = 0;
	struct _aarch64_ctx *head;

	head = get_terminator(shead, resv_sz, &offs);
	/* No terminator found: give up, leaving @offset untouched */
	if (!head)
		return head;

	if (resv_sz - offs < need_sz) {
		fprintf(stderr, "Low on space:%zu. Discarding extra_context.\n",
			resv_sz - offs);
		head = get_header(shead, EXTRA_MAGIC, resv_sz, &offs);
		if (!head || resv_sz - offs < need_sz) {
			fprintf(stderr,
				"Failed to reclaim space on sigframe.\n");
			return NULL;
		}
	}

	fprintf(stderr, "Available space:%zu\n", resv_sz - offs);
	if (offset)
		*offset = offs;
	return head;
}
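get_starting_head() boils down to one space check, applied twice: the bytes left after the candidate offset must cover need_sz, first at the terminator and then, after discarding extra_context, at the reclaimed position. That check restated stand-alone (the helper name is ours, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * A new record of need_sz bytes fits when the bytes remaining after offset
 * offs inside a resv_sz-byte reserved area cover it.
 */
static bool record_fits(size_t resv_sz, size_t offs, size_t need_sz)
{
	return offs <= resv_sz && resv_sz - offs >= need_sz;
}
```

The `offs <= resv_sz` guard matters because both sizes are unsigned: without it, `resv_sz - offs` would wrap around for an offset past the end and falsely report plenty of room.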