Merge tag 'trace-v7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull tracing updates from Steven Rostedt:

 - Fix printf format warning for bprintf

   sunrpc uses a trace_printk() that triggers a printf format warning
   during the build. Move the __printf() attribute so that the warning
   goes away when debugging is not enabled

 - Remove redundant check for EVENT_FILE_FL_FREED in
   event_filter_write()

   The FREED flag is checked in the call to event_file_file() and then
   checked again right afterward, which is unneeded

 - Clean up event_file_file() and event_file_data() helpers

   These helper functions played a different role in the past, but now
   with eventfs, the READ_ONCE() isn't needed. Simplify the code a bit
   and also add a warning to event_file_data() if the file or its data
   is not present

 - Remove updating file->private_data in tracing open

   All access to the file private data is handled by the helper
   functions, which do not use file->private_data. Stop updating it on
   open

 - Show ENUM names in function arguments via BTF in function tracing

   When showing the function arguments when func-args option is set for
   function tracing, if one of the arguments is found to be an enum,
   show the name of the enum instead of its number
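   The value-to-name lookup can be modeled in plain C. The sketch below
   is a user-space stand-in for the BTF walk (the struct and function
   names here are illustrative, not the kernel's):

   ```c
   #include <assert.h>
   #include <stddef.h>
   #include <string.h>

   /* Hypothetical stand-in for the kernel's struct btf_enum:
    * one value/name pair describing a single enumerator. */
   struct enum_entry {
   	long val;
   	const char *name;
   };

   /* Return the symbolic name for @arg, or NULL when the value is not a
    * known enumerator (the tracer then prints only the raw number). */
   static const char *enum_val_name(const struct enum_entry *enums,
   				 size_t nr, long arg)
   {
   	for (size_t i = 0; i < nr; i++) {
   		if (arg == enums[i].val)
   			return enums[i].name;
   	}
   	return NULL;
   }
   ```

   As in the kernel change, an unmatched value falls back to the plain
   number rather than failing.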

 - Add new trace_call__##name() API for tracepoints

   Tracepoints are enabled via static_branch() blocks: when a tracepoint
   is not enabled, only a nop sits in its place and execution simply
   skips over it. When tracing is enabled, the nop is converted into a
   direct jump to the tracepoint code. Sometimes extra calculations are
   required to compute the parameters of the tracepoint. In that case,
   trace_##name##_enabled() is called, which is itself a static_branch()
   that is enabled only when the tracepoint is. This allows the extra
   calculations to also be skipped by the nop:

	if (trace_foo_enabled()) {
		x = bar();
		trace_foo(x);
	}

   Where x = bar() is only performed when foo is enabled. The problem
   with this approach is that there are now two static_branch() checks:
   one to see whether the tracepoint is enabled, and a second inside the
   trace_foo() call itself. The second one is redundant

   Introduce trace_call__foo() that will call the foo() tracepoint
   directly without doing a static_branch():

	if (trace_foo_enabled()) {
		x = bar();
		trace_call__foo(x);
	}
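   The saved branch can be modeled in plain C, with an ordinary flag and
   a counter standing in for the patched static branch (all names here
   are illustrative, not the kernel macros):

   ```c
   #include <assert.h>
   #include <stdbool.h>

   /* Stand-in for the static branch; in the kernel this is patched
    * machine code, not a variable. */
   static bool foo_enabled = true;
   static int branch_checks;	/* counts branch evaluations */

   static void __do_trace_foo(int x) { (void)x; }

   /* trace_foo(): re-checks the branch itself. */
   static void trace_foo(int x)
   {
   	branch_checks++;
   	if (foo_enabled)
   		__do_trace_foo(x);
   }

   /* trace_call__foo(): caller already checked, jump straight in. */
   static void trace_call__foo(int x)
   {
   	__do_trace_foo(x);
   }

   static bool trace_foo_enabled(void)
   {
   	branch_checks++;
   	return foo_enabled;
   }

   /* Old pattern: two branch evaluations per guarded call site. */
   static int guarded_old(void)
   {
   	branch_checks = 0;
   	if (trace_foo_enabled())
   		trace_foo(42);
   	return branch_checks;
   }

   /* New pattern: one. */
   static int guarded_new(void)
   {
   	branch_checks = 0;
   	if (trace_foo_enabled())
   		trace_call__foo(42);
   	return branch_checks;
   }
   ```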

 - Update various locations to use the new trace_call__##name() API

 - Move snapshot code out of trace.c

   Cleaning up trace.c to not be a "dump all", move the snapshot code
   out of it and into a new trace_snapshot.c file

 - Clean up some "%*.s" to "%*s"

 - Allow boot kernel command line options to be specified multiple times

   Have options like:

	ftrace_filter=foo ftrace_filter=bar ftrace_filter=zoo

   Equal to:

	ftrace_filter=foo,bar,zoo
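   The accumulate-with-separator behavior can be sketched as a small
   user-space model of the new helper (a simplified sketch under the
   assumption that the kernel side bounds the copy the same way; this is
   not the kernel code):

   ```c
   #include <assert.h>
   #include <stdio.h>
   #include <string.h>

   /* Append @str to @buf, inserting @sep if @buf already holds an
    * earlier value, so repeated "ftrace_filter=" options accumulate
    * into one comma-separated list. */
   static void append_boot_param(char *buf, const char *str, char sep,
   			      size_t size)
   {
   	size_t len = strlen(buf);

   	if (len) {
   		if (len + 1 >= size)
   			return;		/* no room for the separator */
   		buf[len++] = sep;
   		buf[len] = '\0';
   	}
   	/* Bounded copy of the new value; silently truncates on overflow. */
   	snprintf(buf + len, size - len, "%s", str);
   }
   ```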

 - Fix ipi_raise event CPU field to be a CPU field

   The ipi_raise target_cpus field is defined as a __bitmask(). There is
   now a __cpumask() field definition. Update the field to use that

 - Have hist_field_name() use a snprintf() and not a series of strcat()

   It's safer to use snprintf() than a series of strcat() calls
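   A minimal model of why the snprintf() form is preferable (hypothetical
   names; the real code builds "system.event.$var" into a fixed buffer):

   ```c
   #include <assert.h>
   #include <stdio.h>
   #include <string.h>

   #define NAME_MAX_LEN 32

   /* Build "system.event.$name" in one bounded call; a chain of
    * strcat()s would need a separate overflow check before every
    * append. Returns 0 on success, -1 on truncation. */
   static int build_full_name(char *out, size_t size, const char *system,
   			   const char *event, const char *name)
   {
   	int len = snprintf(out, size, "%s.%s.$%s", system, event, name);

   	/* snprintf() reports the would-be length, so truncation is a
   	 * single comparison instead of per-strcat bookkeeping. */
   	return len >= 0 && (size_t)len < size ? 0 : -1;
   }
   ```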

 - Fix tracepoint regfunc balancing

   A tracepoint can define "reg" and "unreg" functions that are called
   before the tracepoint is enabled and after it is disabled,
   respectively. But if an error occurs after the "reg" function has run
   and the tracepoint does not end up enabled, the "unreg" function was
   not being called to tear down what "reg" performed

 - Fix output that shows what histograms are enabled

   Event variables are displayed incorrectly in the histogram output

   Instead of "sched.sched_wakeup.$var", it is showing
   "$sched.sched_wakeup.var" where the '$' is in the incorrect location

 - Some other simple cleanups

* tag 'trace-v7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (24 commits)
  selftests/ftrace: Add test case for fully-qualified variable references
  tracing: Fix fully-qualified variable reference printing in histograms
  tracepoint: balance regfunc() on func_add() failure in tracepoint_add_func()
  tracing: Rebuild full_name on each hist_field_name() call
  tracing: Report ipi_raise target CPUs as cpumask
  tracing: Remove duplicate latency_fsnotify() stub
  tracing: Preserve repeated trace_trigger boot parameters
  tracing: Append repeated boot-time tracing parameters
  tracing: Remove spurious default precision from show_event_trigger/filter formats
  cpufreq: Use trace_call__##name() at guarded tracepoint call sites
  tracing: Remove tracing_alloc_snapshot() when snapshot isn't defined
  tracing: Move snapshot code out of trace.c and into trace_snapshot.c
  mm: damon: Use trace_call__##name() at guarded tracepoint call sites
  btrfs: Use trace_call__##name() at guarded tracepoint call sites
  spi: Use trace_call__##name() at guarded tracepoint call sites
  i2c: Use trace_call__##name() at guarded tracepoint call sites
  kernel: Use trace_call__##name() at guarded tracepoint call sites
  tracepoint: Add trace_call__##name() API
  tracing: trace_mmap.h: fix a kernel-doc warning
  tracing: Pretty-print enum parameters in function arguments
  ...
This commit is contained in: Linus Torvalds, 2026-04-17 09:43:12 -07:00
28 changed files with 1355 additions and 1250 deletions


@@ -256,7 +256,7 @@ static int msr_update_perf(struct cpufreq_policy *policy, u8 min_perf,
if (trace_amd_pstate_epp_perf_enabled()) {
union perf_cached perf = READ_ONCE(cpudata->perf);
trace_amd_pstate_epp_perf(cpudata->cpu,
trace_call__amd_pstate_epp_perf(cpudata->cpu,
perf.highest_perf,
epp,
min_perf,
@@ -306,7 +306,7 @@ static int msr_set_epp(struct cpufreq_policy *policy, u8 epp)
if (trace_amd_pstate_epp_perf_enabled()) {
union perf_cached perf = cpudata->perf;
trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
trace_call__amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
epp,
FIELD_GET(AMD_CPPC_MIN_PERF_MASK,
cpudata->cppc_req_cached),
@@ -420,7 +420,7 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
if (trace_amd_pstate_epp_perf_enabled()) {
union perf_cached perf = cpudata->perf;
trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
trace_call__amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
epp,
FIELD_GET(AMD_CPPC_MIN_PERF_MASK,
cpudata->cppc_req_cached),
@@ -585,7 +585,7 @@ static int shmem_update_perf(struct cpufreq_policy *policy, u8 min_perf,
if (trace_amd_pstate_epp_perf_enabled()) {
union perf_cached perf = READ_ONCE(cpudata->perf);
trace_amd_pstate_epp_perf(cpudata->cpu,
trace_call__amd_pstate_epp_perf(cpudata->cpu,
perf.highest_perf,
epp,
min_perf,
@@ -663,7 +663,7 @@ static void amd_pstate_update(struct cpufreq_policy *policy, u8 min_perf,
}
if (trace_amd_pstate_perf_enabled() && amd_pstate_sample(cpudata)) {
trace_amd_pstate_perf(min_perf, des_perf, max_perf, cpudata->freq,
trace_call__amd_pstate_perf(min_perf, des_perf, max_perf, cpudata->freq,
cpudata->cur.mperf, cpudata->cur.aperf, cpudata->cur.tsc,
cpudata->cpu, fast_switch);
}


@@ -2212,7 +2212,7 @@ unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy,
if (trace_cpu_frequency_enabled()) {
for_each_cpu(cpu, policy->cpus)
trace_cpu_frequency(freq, cpu);
trace_call__cpu_frequency(freq, cpu);
}
return freq;


@@ -3132,7 +3132,7 @@ static void intel_cpufreq_trace(struct cpudata *cpu, unsigned int trace_type, in
return;
sample = &cpu->sample;
trace_pstate_sample(trace_type,
trace_call__pstate_sample(trace_type,
0,
old_pstate,
cpu->pstate.current_pstate,


@@ -89,7 +89,7 @@ int i2c_slave_event(struct i2c_client *client,
int ret = client->slave_cb(client, event, val);
if (trace_i2c_slave_enabled())
trace_i2c_slave(client, event, val, ret);
trace_call__i2c_slave(client, event, val, ret);
return ret;
}


@@ -953,7 +953,7 @@ static int spi_engine_transfer_one_message(struct spi_controller *host,
struct spi_transfer *xfer;
list_for_each_entry(xfer, &msg->transfers, transfer_list)
trace_spi_transfer_start(msg, xfer);
trace_call__spi_transfer_start(msg, xfer);
}
spin_lock_irqsave(&spi_engine->lock, flags);
@@ -987,7 +987,7 @@ static int spi_engine_transfer_one_message(struct spi_controller *host,
struct spi_transfer *xfer;
list_for_each_entry(xfer, &msg->transfers, transfer_list)
trace_spi_transfer_stop(msg, xfer);
trace_call__spi_transfer_stop(msg, xfer);
}
out:


@@ -1318,7 +1318,7 @@ static void btrfs_extent_map_shrinker_worker(struct work_struct *work)
if (trace_btrfs_extent_map_shrinker_scan_enter_enabled()) {
s64 nr = percpu_counter_sum_positive(&fs_info->evictable_extent_maps);
trace_btrfs_extent_map_shrinker_scan_enter(fs_info, nr);
trace_call__btrfs_extent_map_shrinker_scan_enter(fs_info, nr);
}
while (ctx.scanned < ctx.nr_to_scan && !btrfs_fs_closing(fs_info)) {
@@ -1358,7 +1358,7 @@ static void btrfs_extent_map_shrinker_worker(struct work_struct *work)
if (trace_btrfs_extent_map_shrinker_scan_exit_enabled()) {
s64 nr = percpu_counter_sum_positive(&fs_info->evictable_extent_maps);
trace_btrfs_extent_map_shrinker_scan_exit(fs_info, nr_dropped, nr);
trace_call__btrfs_extent_map_shrinker_scan_exit(fs_info, nr_dropped, nr);
}
atomic64_set(&fs_info->em_shrinker_nr_to_scan, 0);


@@ -1719,7 +1719,7 @@ static void submit_read_wait_bio_list(struct btrfs_raid_bio *rbio,
struct raid56_bio_trace_info trace_info = { 0 };
bio_get_trace_info(rbio, bio, &trace_info);
trace_raid56_read(rbio, bio, &trace_info);
trace_call__raid56_read(rbio, bio, &trace_info);
}
submit_bio(bio);
}
@@ -2404,7 +2404,7 @@ static void submit_write_bios(struct btrfs_raid_bio *rbio,
struct raid56_bio_trace_info trace_info = { 0 };
bio_get_trace_info(rbio, bio, &trace_info);
trace_raid56_write(rbio, bio, &trace_info);
trace_call__raid56_write(rbio, bio, &trace_info);
}
submit_bio(bio);
}


@@ -31,7 +31,7 @@
#define ARCH_SUPPORTS_FTRACE_OPS 0
#endif
#ifdef CONFIG_TRACING
#ifdef CONFIG_TRACER_SNAPSHOT
extern void ftrace_boot_snapshot(void);
#else
static inline void ftrace_boot_snapshot(void) { }


@@ -107,7 +107,6 @@ do { \
__trace_printk(_THIS_IP_, fmt, ##args); \
} while (0)
extern __printf(2, 3)
int __trace_bprintk(unsigned long ip, const char *fmt, ...);
extern __printf(2, 3)


@@ -314,6 +314,10 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
WARN_ONCE(!rcu_is_watching(), \
"RCU not watching for tracepoint"); \
} \
} \
static inline void trace_call__##name(proto) \
{ \
__do_trace_##name(args); \
}
#define __DECLARE_TRACE_SYSCALL(name, proto, args, data_proto) \
@@ -333,6 +337,11 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
WARN_ONCE(!rcu_is_watching(), \
"RCU not watching for tracepoint"); \
} \
} \
static inline void trace_call__##name(proto) \
{ \
might_fault(); \
__do_trace_##name(args); \
}
/*
@@ -418,6 +427,8 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
#define __DECLARE_TRACE_COMMON(name, proto, args, data_proto) \
static inline void trace_##name(proto) \
{ } \
static inline void trace_call__##name(proto) \
{ } \
static inline int \
register_trace_##name(void (*probe)(data_proto), \
void *data) \


@@ -68,16 +68,16 @@ TRACE_EVENT(ipi_raise,
TP_ARGS(mask, reason),
TP_STRUCT__entry(
__bitmask(target_cpus, nr_cpumask_bits)
__cpumask(target_cpus)
__field(const char *, reason)
),
TP_fast_assign(
__assign_bitmask(target_cpus, cpumask_bits(mask), nr_cpumask_bits);
__assign_cpumask(target_cpus, cpumask_bits(mask));
__entry->reason = reason;
),
TP_printk("target_mask=%s (%s)", __get_bitmask(target_cpus), __entry->reason)
TP_printk("target_mask=%s (%s)", __get_cpumask(target_cpus), __entry->reason)
);
DECLARE_EVENT_CLASS(ipi_handler,


@@ -10,6 +10,7 @@
* @meta_struct_len: Size of this structure.
* @subbuf_size: Size of each sub-buffer.
* @nr_subbufs: Number of subbfs in the ring-buffer, including the reader.
* @reader: The reader composite info structure
* @reader.lost_events: Number of events lost at the time of the reader swap.
* @reader.id: subbuf ID of the current reader. ID range [0 : @nr_subbufs - 1]
* @reader.read: Number of bytes read on the reader subbuf.


@@ -79,7 +79,7 @@ void __weak arch_irq_work_raise(void)
static __always_inline void irq_work_raise(struct irq_work *work)
{
if (trace_ipi_send_cpu_enabled() && arch_irq_work_has_interrupt())
trace_ipi_send_cpu(smp_processor_id(), _RET_IP_, work->func);
trace_call__ipi_send_cpu(smp_processor_id(), _RET_IP_, work->func);
arch_irq_work_raise();
}


@@ -5943,7 +5943,7 @@ static __printf(2, 3) void dump_line(struct seq_buf *s, const char *fmt, ...)
vscnprintf(line_buf, sizeof(line_buf), fmt, args);
va_end(args);
trace_sched_ext_dump(line_buf);
trace_call__sched_ext_dump(line_buf);
}
#endif
/* @s may be zero sized and seq_buf triggers WARN if so */


@@ -408,7 +408,7 @@ void __smp_call_single_queue(int cpu, struct llist_node *node)
func = CSD_TYPE(csd) == CSD_TYPE_TTWU ?
sched_ttwu_pending : csd->func;
trace_csd_queue_cpu(cpu, _RET_IP_, func, csd);
trace_call__csd_queue_cpu(cpu, _RET_IP_, func, csd);
}
/*


@@ -69,6 +69,7 @@ obj-$(CONFIG_TRACING) += trace_seq.o
obj-$(CONFIG_TRACING) += trace_stat.o
obj-$(CONFIG_TRACING) += trace_printk.o
obj-$(CONFIG_TRACING) += trace_pid.o
obj-$(CONFIG_TRACER_SNAPSHOT) += trace_snapshot.o
obj-$(CONFIG_TRACING) += pid_list.o
obj-$(CONFIG_TRACING_MAP) += tracing_map.o
obj-$(CONFIG_PREEMPTIRQ_DELAY_TEST) += preemptirq_delay_test.o


@@ -6841,7 +6841,8 @@ bool ftrace_filter_param __initdata;
static int __init set_ftrace_notrace(char *str)
{
ftrace_filter_param = true;
strscpy(ftrace_notrace_buf, str, FTRACE_FILTER_SIZE);
trace_append_boot_param(ftrace_notrace_buf, str, ',',
FTRACE_FILTER_SIZE);
return 1;
}
__setup("ftrace_notrace=", set_ftrace_notrace);
@@ -6849,7 +6850,8 @@ __setup("ftrace_notrace=", set_ftrace_notrace);
static int __init set_ftrace_filter(char *str)
{
ftrace_filter_param = true;
strscpy(ftrace_filter_buf, str, FTRACE_FILTER_SIZE);
trace_append_boot_param(ftrace_filter_buf, str, ',',
FTRACE_FILTER_SIZE);
return 1;
}
__setup("ftrace_filter=", set_ftrace_filter);
@@ -6861,14 +6863,16 @@ static int ftrace_graph_set_hash(struct ftrace_hash *hash, char *buffer);
static int __init set_graph_function(char *str)
{
strscpy(ftrace_graph_buf, str, FTRACE_FILTER_SIZE);
trace_append_boot_param(ftrace_graph_buf, str, ',',
FTRACE_FILTER_SIZE);
return 1;
}
__setup("ftrace_graph_filter=", set_graph_function);
static int __init set_graph_notrace_function(char *str)
{
strscpy(ftrace_graph_notrace_buf, str, FTRACE_FILTER_SIZE);
trace_append_boot_param(ftrace_graph_notrace_buf, str, ',',
FTRACE_FILTER_SIZE);
return 1;
}
__setup("ftrace_graph_notrace=", set_graph_notrace_function);

File diff suppressed because it is too large


@@ -264,6 +264,7 @@ static inline bool still_need_pid_events(int type, struct trace_pid_list *pid_li
typedef bool (*cond_update_fn_t)(struct trace_array *tr, void *cond_data);
#ifdef CONFIG_TRACER_SNAPSHOT
/**
* struct cond_snapshot - conditional snapshot data and callback
*
@@ -306,6 +307,7 @@ struct cond_snapshot {
void *cond_data;
cond_update_fn_t update;
};
#endif /* CONFIG_TRACER_SNAPSHOT */
/*
* struct trace_func_repeats - used to keep track of the consecutive
@@ -691,6 +693,7 @@ void tracing_reset_all_online_cpus(void);
void tracing_reset_all_online_cpus_unlocked(void);
int tracing_open_generic(struct inode *inode, struct file *filp);
int tracing_open_generic_tr(struct inode *inode, struct file *filp);
int tracing_release(struct inode *inode, struct file *file);
int tracing_release_generic_tr(struct inode *inode, struct file *file);
int tracing_open_file_tr(struct inode *inode, struct file *filp);
int tracing_release_file_tr(struct inode *inode, struct file *filp);
@@ -700,6 +703,7 @@ void tracer_tracing_on(struct trace_array *tr);
void tracer_tracing_off(struct trace_array *tr);
void tracer_tracing_disable(struct trace_array *tr);
void tracer_tracing_enable(struct trace_array *tr);
int allocate_trace_buffer(struct trace_array *tr, struct array_buffer *buf, int size);
struct dentry *trace_create_file(const char *name,
umode_t mode,
struct dentry *parent,
@@ -711,8 +715,42 @@ struct dentry *trace_create_cpu_file(const char *name,
void *data,
long cpu,
const struct file_operations *fops);
int tracing_get_cpu(struct inode *inode);
struct trace_iterator *__tracing_open(struct inode *inode, struct file *file,
bool snapshot);
int tracing_buffers_open(struct inode *inode, struct file *filp);
ssize_t tracing_buffers_read(struct file *filp, char __user *ubuf,
size_t count, loff_t *ppos);
int tracing_buffers_release(struct inode *inode, struct file *file);
ssize_t tracing_buffers_splice_read(struct file *file, loff_t *ppos,
struct pipe_inode_info *pipe, size_t len, unsigned int flags);
ssize_t tracing_nsecs_read(unsigned long *ptr, char __user *ubuf,
size_t cnt, loff_t *ppos);
ssize_t tracing_nsecs_write(unsigned long *ptr, const char __user *ubuf,
size_t cnt, loff_t *ppos);
void trace_set_buffer_entries(struct array_buffer *buf, unsigned long val);
/*
* Should be used after trace_array_get(), trace_types_lock
* ensures that i_cdev was already initialized.
*/
static inline int tracing_get_cpu(struct inode *inode)
{
if (inode->i_cdev) /* See trace_create_cpu_file() */
return (long)inode->i_cdev - 1;
return RING_BUFFER_ALL_CPUS;
}
void tracing_reset_cpu(struct array_buffer *buf, int cpu);
struct ftrace_buffer_info {
struct trace_iterator iter;
void *spare;
unsigned int spare_cpu;
unsigned int spare_size;
unsigned int read;
};
/**
* tracer_tracing_is_on_cpu - show real state of ring buffer enabled on for a cpu
@@ -829,13 +867,13 @@ void update_max_tr_single(struct trace_array *tr,
#if defined(CONFIG_TRACER_MAX_TRACE) && defined(CONFIG_FSNOTIFY)
# define LATENCY_FS_NOTIFY
#endif
#endif /* CONFIG_TRACER_SNAPSHOT */
#ifdef LATENCY_FS_NOTIFY
void latency_fsnotify(struct trace_array *tr);
#else
static inline void latency_fsnotify(struct trace_array *tr) { }
#endif
#endif /* CONFIG_TRACER_SNAPSHOT */
#ifdef CONFIG_STACKTRACE
void __trace_stack(struct trace_array *tr, unsigned int trace_ctx, int skip);
@@ -851,11 +889,15 @@ static inline bool tracer_uses_snapshot(struct tracer *tracer)
{
return tracer->use_max_tr;
}
void trace_create_maxlat_file(struct trace_array *tr,
struct dentry *d_tracer);
#else
static inline bool tracer_uses_snapshot(struct tracer *tracer)
{
return false;
}
static inline void trace_create_maxlat_file(struct trace_array *tr,
struct dentry *d_tracer) { }
#endif
void trace_last_func_repeats(struct trace_array *tr,
@@ -885,6 +927,8 @@ extern int DYN_FTRACE_TEST_NAME(void);
#define DYN_FTRACE_TEST_NAME2 trace_selftest_dynamic_test_func2
extern int DYN_FTRACE_TEST_NAME2(void);
void __init trace_append_boot_param(char *buf, const char *str,
char sep, int size);
extern void trace_set_ring_buffer_expanded(struct trace_array *tr);
extern bool tracing_selftest_disabled;
@@ -1825,11 +1869,6 @@ extern struct trace_event_file *find_event_file(struct trace_array *tr,
const char *system,
const char *event);
static inline void *event_file_data(struct file *filp)
{
return READ_ONCE(file_inode(filp)->i_private);
}
extern struct mutex event_mutex;
extern struct list_head ftrace_events;
@@ -1850,12 +1889,22 @@ static inline struct trace_event_file *event_file_file(struct file *filp)
struct trace_event_file *file;
lockdep_assert_held(&event_mutex);
file = READ_ONCE(file_inode(filp)->i_private);
file = file_inode(filp)->i_private;
if (!file || file->flags & EVENT_FILE_FL_FREED)
return NULL;
return file;
}
static inline void *event_file_data(struct file *filp)
{
struct trace_event_file *file;
lockdep_assert_held(&event_mutex);
file = file_inode(filp)->i_private;
WARN_ON(!file || file->flags & EVENT_FILE_FL_FREED);
return file;
}
extern const struct file_operations event_trigger_fops;
extern const struct file_operations event_hist_fops;
extern const struct file_operations event_hist_debug_fops;
@@ -2158,12 +2207,6 @@ static inline bool event_command_needs_rec(struct event_command *cmd_ops)
extern int trace_event_enable_disable(struct trace_event_file *file,
int enable, int soft_disable);
extern int tracing_alloc_snapshot(void);
extern void tracing_snapshot_cond(struct trace_array *tr, void *cond_data);
extern int tracing_snapshot_cond_enable(struct trace_array *tr, void *cond_data, cond_update_fn_t update);
extern int tracing_snapshot_cond_disable(struct trace_array *tr);
extern void *tracing_cond_snapshot_data(struct trace_array *tr);
extern const char *__start___trace_bprintk_fmt[];
extern const char *__stop___trace_bprintk_fmt[];
@@ -2251,19 +2294,71 @@ static inline void trace_event_update_all(struct trace_eval_map **map, int len)
#endif
#ifdef CONFIG_TRACER_SNAPSHOT
extern const struct file_operations snapshot_fops;
extern const struct file_operations snapshot_raw_fops;
/* Used when creating instances */
int trace_allocate_snapshot(struct trace_array *tr, int size);
int tracing_alloc_snapshot(void);
void tracing_snapshot_cond(struct trace_array *tr, void *cond_data);
int tracing_snapshot_cond_enable(struct trace_array *tr, void *cond_data, cond_update_fn_t update);
int tracing_snapshot_cond_disable(struct trace_array *tr);
void *tracing_cond_snapshot_data(struct trace_array *tr);
void tracing_snapshot_instance(struct trace_array *tr);
int tracing_alloc_snapshot_instance(struct trace_array *tr);
int tracing_arm_snapshot_locked(struct trace_array *tr);
int tracing_arm_snapshot(struct trace_array *tr);
void tracing_disarm_snapshot(struct trace_array *tr);
#else
void free_snapshot(struct trace_array *tr);
void print_snapshot_help(struct seq_file *m, struct trace_iterator *iter);
int get_snapshot_map(struct trace_array *tr);
void put_snapshot_map(struct trace_array *tr);
int resize_buffer_duplicate_size(struct array_buffer *trace_buf,
struct array_buffer *size_buf, int cpu_id);
__init void do_allocate_snapshot(const char *name);
# ifdef CONFIG_DYNAMIC_FTRACE
__init int register_snapshot_cmd(void);
# else
static inline int register_snapshot_cmd(void) { return 0; }
# endif
#else /* !CONFIG_TRACER_SNAPSHOT */
static inline int trace_allocate_snapshot(struct trace_array *tr, int size) { return 0; }
static inline void tracing_snapshot_instance(struct trace_array *tr) { }
static inline int tracing_alloc_snapshot_instance(struct trace_array *tr)
{
return 0;
}
static inline int tracing_arm_snapshot_locked(struct trace_array *tr) { return -EBUSY; }
static inline int tracing_arm_snapshot(struct trace_array *tr) { return 0; }
static inline void tracing_disarm_snapshot(struct trace_array *tr) { }
#endif
static inline void free_snapshot(struct trace_array *tr) {}
static inline void tracing_snapshot_cond(struct trace_array *tr, void *cond_data)
{
WARN_ONCE(1, "Snapshot feature not enabled, but internal conditional snapshot used");
}
static inline void *tracing_cond_snapshot_data(struct trace_array *tr)
{
return NULL;
}
static inline int tracing_snapshot_cond_enable(struct trace_array *tr, void *cond_data, cond_update_fn_t update)
{
return -ENODEV;
}
static inline int tracing_snapshot_cond_disable(struct trace_array *tr)
{
return false;
}
static inline void print_snapshot_help(struct seq_file *m, struct trace_iterator *iter)
{
/* Should never be called */
WARN_ONCE(1, "Snapshot print function called without snapshot configured");
}
static inline int get_snapshot_map(struct trace_array *tr) { return 0; }
static inline void put_snapshot_map(struct trace_array *tr) { }
static inline void do_allocate_snapshot(const char *name) { }
static inline int register_snapshot_cmd(void) { return 0; }
#endif /* CONFIG_TRACER_SNAPSHOT */
#ifdef CONFIG_PREEMPT_TRACER
void tracer_preempt_on(unsigned long a0, unsigned long a1);


@@ -1721,7 +1721,7 @@ static int t_show_filters(struct seq_file *m, void *v)
len = get_call_len(call);
seq_printf(m, "%s:%s%*.s%s\n", call->class->system,
seq_printf(m, "%s:%s%*s%s\n", call->class->system,
trace_event_name(call), len, "", filter->filter_string);
return 0;
@@ -1753,7 +1753,7 @@ static int t_show_triggers(struct seq_file *m, void *v)
len = get_call_len(call);
list_for_each_entry_rcu(data, &file->triggers, list) {
seq_printf(m, "%s:%s%*.s", call->class->system,
seq_printf(m, "%s:%s%*s", call->class->system,
trace_event_name(call), len, "");
data->cmd_ops->print(m, data);
@@ -2187,12 +2187,12 @@ static int trace_format_open(struct inode *inode, struct file *file)
static ssize_t
event_id_read(struct file *filp, char __user *ubuf, size_t cnt, loff_t *ppos)
{
int id = (long)event_file_data(filp);
/* id is directly in i_private and available for inode's lifetime. */
int id = (long)file_inode(filp)->i_private;
char buf[32];
int len;
if (unlikely(!id))
return -ENODEV;
WARN_ON(!id);
len = sprintf(buf, "%d\n", id);
@@ -2250,12 +2250,8 @@ event_filter_write(struct file *filp, const char __user *ubuf, size_t cnt,
mutex_lock(&event_mutex);
file = event_file_file(filp);
if (file) {
if (file->flags & EVENT_FILE_FL_FREED)
err = -ENODEV;
else
err = apply_event_filter(file, buf);
}
if (file)
err = apply_event_filter(file, buf);
mutex_unlock(&event_mutex);
kfree(buf);
@@ -3687,20 +3683,27 @@ static struct boot_triggers {
} bootup_triggers[MAX_BOOT_TRIGGERS];
static char bootup_trigger_buf[COMMAND_LINE_SIZE];
static int boot_trigger_buf_len;
static int nr_boot_triggers;
static __init int setup_trace_triggers(char *str)
{
char *trigger;
char *buf;
int len = boot_trigger_buf_len;
int i;
strscpy(bootup_trigger_buf, str, COMMAND_LINE_SIZE);
if (len >= COMMAND_LINE_SIZE)
return 1;
strscpy(bootup_trigger_buf + len, str, COMMAND_LINE_SIZE - len);
trace_set_ring_buffer_expanded(NULL);
disable_tracing_selftest("running event triggers");
buf = bootup_trigger_buf;
for (i = 0; i < MAX_BOOT_TRIGGERS; i++) {
buf = bootup_trigger_buf + len;
boot_trigger_buf_len += strlen(buf) + 1;
for (i = nr_boot_triggers; i < MAX_BOOT_TRIGGERS; i++) {
trigger = strsep(&buf, ",");
if (!trigger)
break;


@@ -1361,12 +1361,17 @@ static const char *hist_field_name(struct hist_field *field,
field->flags & HIST_FIELD_FL_VAR_REF) {
if (field->system) {
static char full_name[MAX_FILTER_STR_VAL];
static char *fmt;
int len;
fmt = field->flags & HIST_FIELD_FL_VAR_REF ? "%s.%s.$%s" : "%s.%s.%s";
len = snprintf(full_name, sizeof(full_name), fmt,
field->system, field->event_name,
field->name);
if (len >= sizeof(full_name))
return NULL;
strcat(full_name, field->system);
strcat(full_name, ".");
strcat(full_name, field->event_name);
strcat(full_name, ".");
strcat(full_name, field->name);
field_name = full_name;
} else
field_name = field->name;
@@ -1740,9 +1745,10 @@ static const char *get_hist_field_flags(struct hist_field *hist_field)
static void expr_field_str(struct hist_field *field, char *expr)
{
if (field->flags & HIST_FIELD_FL_VAR_REF)
strcat(expr, "$");
else if (field->flags & HIST_FIELD_FL_CONST) {
if (field->flags & HIST_FIELD_FL_VAR_REF) {
if (!field->system)
strcat(expr, "$");
} else if (field->flags & HIST_FIELD_FL_CONST) {
char str[HIST_CONST_DIGITS_MAX];
snprintf(str, HIST_CONST_DIGITS_MAX, "%llu", field->constant);
@@ -5836,8 +5842,6 @@ static int event_hist_open(struct inode *inode, struct file *file)
hist_file->file = file;
hist_file->last_act = get_hist_hit_count(event_file);
/* Clear private_data to avoid warning in single_open() */
file->private_data = NULL;
ret = single_open(file, hist_show, hist_file);
if (ret) {
kfree(hist_file);
@@ -6126,8 +6130,6 @@ static int event_hist_debug_open(struct inode *inode, struct file *file)
if (ret)
return ret;
/* Clear private_data to avoid warning in single_open() */
file->private_data = NULL;
ret = single_open(file, hist_debug_show, file);
if (ret)
tracing_release_file_tr(inode, file);
@@ -6158,7 +6160,8 @@ static void hist_field_print(struct seq_file *m, struct hist_field *hist_field)
else if (field_name) {
if (hist_field->flags & HIST_FIELD_FL_VAR_REF ||
hist_field->flags & HIST_FIELD_FL_ALIAS)
seq_putc(m, '$');
if (!hist_field->system)
seq_putc(m, '$');
seq_printf(m, "%s", field_name);
} else if (hist_field->flags & HIST_FIELD_FL_TIMESTAMP)
seq_puts(m, "common_timestamp");


@@ -31,7 +31,8 @@ static char kprobe_boot_events_buf[COMMAND_LINE_SIZE] __initdata;
static int __init set_kprobe_boot_events(char *str)
{
strscpy(kprobe_boot_events_buf, str, COMMAND_LINE_SIZE);
trace_append_boot_param(kprobe_boot_events_buf, str, ';',
COMMAND_LINE_SIZE);
disable_tracing_selftest("running kprobe events");
return 1;


@@ -723,12 +723,13 @@ void print_function_args(struct trace_seq *s, unsigned long *args,
{
const struct btf_param *param;
const struct btf_type *t;
const struct btf_enum *enums;
const char *param_name;
char name[KSYM_NAME_LEN];
unsigned long arg;
struct btf *btf;
s32 tid, nr = 0;
int a, p, x;
int a, p, x, i;
u16 encode;
trace_seq_printf(s, "(");
@@ -782,6 +783,15 @@ void print_function_args(struct trace_seq *s, unsigned long *args,
break;
case BTF_KIND_ENUM:
trace_seq_printf(s, "%ld", arg);
enums = btf_enum(t);
for (i = 0; i < btf_vlen(t); i++) {
if (arg == enums[i].val) {
trace_seq_printf(s, " [%s]",
btf_name_by_offset(btf,
enums[i].name_off));
break;
}
}
break;
default:
/* This does not handle complex arguments */


@@ -197,6 +197,7 @@ struct notifier_block module_trace_bprintk_format_nb = {
.notifier_call = module_trace_bprintk_format_notify,
};
__printf(2, 3)
int __trace_bprintk(unsigned long ip, const char *fmt, ...)
{
int ret;

File diff suppressed because it is too large


@@ -300,6 +300,8 @@ static int tracepoint_add_func(struct tracepoint *tp,
lockdep_is_held(&tracepoints_mutex));
old = func_add(&tp_funcs, func, prio);
if (IS_ERR(old)) {
if (tp->ext && tp->ext->unregfunc && !static_key_enabled(&tp->key))
tp->ext->unregfunc();
WARN_ON_ONCE(warn && PTR_ERR(old) != -ENOMEM);
return PTR_ERR(old);
}


@@ -2510,7 +2510,7 @@ static void damos_trace_stat(struct damon_ctx *c, struct damos *s)
break;
sidx++;
}
trace_damos_stat_after_apply_interval(cidx, sidx, &s->stat);
trace_call__damos_stat_after_apply_interval(cidx, sidx, &s->stat);
}
static void kdamond_apply_schemes(struct damon_ctx *c)


@@ -0,0 +1,34 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: event trigger - test fully-qualified variable reference support
# requires: set_event synthetic_events events/sched/sched_process_fork/hist ping:program
fail() { #msg
echo $1
exit_fail
}
echo "Test fully-qualified variable reference support"
echo 'wakeup_latency u64 lat; pid_t pid; int prio; char comm[16]' > synthetic_events
echo 'hist:keys=comm:ts0=common_timestamp.usecs if comm=="ping"' > events/sched/sched_waking/trigger
echo 'hist:keys=comm:ts0=common_timestamp.usecs if comm=="ping"' > events/sched/sched_wakeup/trigger
echo 'hist:keys=next_comm:wakeup_lat=common_timestamp.usecs-sched.sched_wakeup.$ts0:onmatch(sched.sched_waking).wakeup_latency($wakeup_lat,next_pid,sched.sched_waking.prio,next_comm) if next_comm=="ping"' > events/sched/sched_switch/trigger
echo 'hist:keys=pid,prio,comm:vals=lat:sort=pid,prio' > events/synthetic/wakeup_latency/trigger
ping $LOCALHOST -c 3
if ! grep -q "ping" events/synthetic/wakeup_latency/hist; then
fail "Failed to create inter-event histogram"
fi
if ! grep -q "synthetic_prio=prio" events/sched/sched_waking/hist; then
fail "Failed to create histogram with fully-qualified variable reference"
fi
echo '!hist:keys=next_comm:wakeup_lat=common_timestamp.usecs-sched.sched_wakeup.$ts0:onmatch(sched.sched_waking).wakeup_latency($wakeup_lat,next_pid,sched.sched_waking.prio,next_comm) if next_comm=="ping"' >> events/sched/sched_switch/trigger
if grep -q "synthetic_prio=prio" events/sched/sched_waking/hist; then
fail "Failed to remove histogram with fully-qualified variable reference"
fi
exit 0