Merge tag 'trace-rv-v7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull runtime verification updates from Steven Rostedt:

 - Refactor da_monitor header to share handlers across monitor types

   No functional changes, only less code duplication.

 - Add Hybrid Automata model class

   Add a new model class that extends deterministic automata by adding
   constraints on transitions and states. Those constraints can take
   into account wall-clock time and as such allow RV monitors to make
   assertions on real time. Add documentation and code generation
   scripts.

 - Add stall monitor as hybrid automaton example

   Add a monitor that triggers a violation when a task is stalling, as
   an example of an automaton working with real-time variables.

 - Convert the opid monitor to a hybrid automaton

   The opid monitor can be heavily simplified when written as a hybrid
   automaton: instead of tracking preempt and interrupt enable/disable
   events, it can simply check constraints on the preemption/interrupt
   state when events like wakeup and need_resched occur.

 - Add support for per-object monitors in DA/HA

   Allow writing deterministic and hybrid automata monitors for generic
   objects (e.g. any struct) by storing the objects in a hash table.
   This makes it possible to track more than just tasks in RV; for
   instance, it will be used to track deadline entities in deadline
   monitors.
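The idea can be pictured with a minimal user-space sketch (illustrative only; the names and the open-addressing scheme here are hypothetical, not the kernel's hashtable API):

```c
#include <stddef.h>
#include <stdint.h>

/* Toy per-object monitor state, keyed by the tracked object's address. */
struct obj_monitor {
	const void *key;	/* tracked object, NULL means free slot */
	int curr_state;		/* automaton state for this object */
};

#define TABLE_BITS 4
#define TABLE_SIZE (1 << TABLE_BITS)
static struct obj_monitor table[TABLE_SIZE];

/* Hash the pointer and linearly probe: same object, same slot. */
static struct obj_monitor *monitor_of(const void *obj)
{
	size_t i = ((uintptr_t)obj >> 4) & (TABLE_SIZE - 1);
	size_t n;

	for (n = 0; n < TABLE_SIZE; n++, i = (i + 1) & (TABLE_SIZE - 1)) {
		if (table[i].key == obj || !table[i].key) {
			table[i].key = obj;
			return &table[i];
		}
	}
	return NULL;	/* table full */
}
```

   Repeated calls with the same object return the same slot, so automaton
   state survives across events; the kernel side uses linux/hashtable.h
   rather than this fixed-size probe.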

 - Add deadline tracepoints and move some deadline utilities

   Prepare the ground for deadline monitors by defining events and
   exporting helpers.

 - Add nomiss deadline monitor

   Add the first example of a deadline monitor, asserting that all
   entities complete before their deadline.

 - Improve rvgen error handling

   Introduce AutomataError exception class and better handle expected
   exceptions while showing a backtrace for unexpected ones.
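The intended pattern, sketched roughly in Python (a hypothetical simplification, not the actual rvgen code):

```python
class AutomataError(Exception):
    """Expected, user-facing failure: report it without a backtrace."""

def parse_model(text):
    # A model with no transitions is a user error, not a tool bug.
    if "->" not in text:
        raise AutomataError("no transitions found in model")
    return text

def generate(text):
    try:
        return parse_model(text)
    except AutomataError as err:
        # Expected exception: print a short message and stop.
        return f"error: {err}"
    # Anything else (a real bug) propagates with a full backtrace.
```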

 - Improve python code quality in rvgen

   Refactor the rvgen generation scripts to align with python best
   practices: use f-strings instead of %, use len() instead of
   __len__(), remove semicolons, use context managers for file
   operations, fix whitespace violations, extract magic strings into
   constants, remove unused imports and methods.

 - Fix small bugs in rvgen

   The generator scripts had some corner-case bugs: fix a logical error
   in validating what a correct dot file looks like, fix an isinstance()
   check, enforce that a dot file has an initial state, and fix type
   annotations and typos in comments.

 - rvgen refactoring

   Refactor automata.py to use iterator-based parsing and handle
   required arguments directly in argparse.

 - Allow epoll in rtapp-sleep monitor

   The epoll_wait call is now rt-friendly, so it should be allowed in
   the sleep monitor as a valid sleep method.

* tag 'trace-rv-v7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (32 commits)
  rv: Allow epoll in rtapp-sleep monitor
  rv/rvgen: fix _fill_states() return type annotation
  rv/rvgen: fix unbound loop variable warning
  rv/rvgen: enforce presence of initial state
  rv/rvgen: extract node marker string to class constant
  rv/rvgen: fix isinstance check in Variable.expand()
  rv/rvgen: make monitor arguments required in rvgen
  rv/rvgen: remove unused __get_main_name method
  rv/rvgen: remove unused sys import from dot2c
  rv/rvgen: refactor automata.py to use iterator-based parsing
  rv/rvgen: use class constant for init marker
  rv/rvgen: fix DOT file validation logic error
  rv/rvgen: fix PEP 8 whitespace violations
  rv/rvgen: fix typos in automata and generator docstring and comments
  rv/rvgen: use context managers for file operations
  rv/rvgen: remove unnecessary semicolons
  rv/rvgen: replace __len__() calls with len()
  rv/rvgen: replace % string formatting with f-strings
  rv/rvgen: remove bare except clauses in generator
  rv/rvgen: introduce AutomataError exception class
  ...
This commit is contained in:
Linus Torvalds
2026-04-15 17:15:18 -07:00
50 changed files with 3891 additions and 694 deletions


@@ -16,3 +16,4 @@ Runtime verification (rv) tool
rv-mon-wip
rv-mon-wwnr
rv-mon-sched
rv-mon-stall


@@ -0,0 +1,44 @@
.. SPDX-License-Identifier: GPL-2.0

============
rv-mon-stall
============

--------------------
Stalled task monitor
--------------------

:Manual section: 1

SYNOPSIS
========

**rv mon stall** [*OPTIONS*]

DESCRIPTION
===========

The stalled task (**stall**) monitor is a sample per-task timed monitor that
checks if tasks are scheduled within a defined threshold after they are ready.

See kernel documentation for further information about this monitor:
<https://docs.kernel.org/trace/rv/monitor_stall.html>

OPTIONS
=======

.. include:: common_ikm.rst

SEE ALSO
========

**rv**\(1), **rv-mon**\(1)

Linux kernel *RV* documentation:
<https://www.kernel.org/doc/html/latest/trace/rv/index.html>

AUTHOR
======

Written by Gabriele Monaco <gmonaco@redhat.com>

.. include:: common_appendix.rst


@@ -11,7 +11,7 @@ where:
- *E* is the finite set of events;
- x\ :subscript:`0` is the initial state;
- X\ :subscript:`m` (subset of *X*) is the set of marked (or final) states.
- *f* : *X* x *E* -> *X* $ is the transition function. It defines the state
- *f* : *X* x *E* -> *X* is the transition function. It defines the state
transition in the occurrence of an event from *E* in the state *X*. In the
special case of deterministic automata, the occurrence of the event in *E*
in a state in *X* has a deterministic next state from *X*.


@@ -0,0 +1,341 @@
Hybrid Automata
===============
Hybrid automata are an extension of deterministic automata; several
definitions of hybrid automata exist in the literature. The adaptation
implemented here is formally denoted by G and defined as a 7-tuple:
*G* = { *X*, *E*, *V*, *f*, x\ :subscript:`0`, X\ :subscript:`m`, *i* }
- *X* is the set of states;
- *E* is the finite set of events;
- *V* is the finite set of environment variables;
- x\ :subscript:`0` is the initial state;
- X\ :subscript:`m` (subset of *X*) is the set of marked (or final) states.
- *f* : *X* x *E* x *C(V)* -> *X* is the transition function.
It defines the state transition in the occurrence of an event from *E* in the
state *X*. Unlike deterministic automata, the transition function also
includes guards from the set of all possible constraints (defined as *C(V)*).
Guards can be true or false with the valuation of *V* when the event occurs,
and the transition is possible only when constraints are true. Similarly to
deterministic automata, the occurrence of the event in *E* in a state in *X*
has a deterministic next state from *X*, if the guard is true.
- *i* : *X* -> *C'(V)* is the invariant assignment function. It assigns a
  constraint to each state in *X*; every state in *X* must be left before its
  invariant turns false. We can omit the representation of invariants whose
  value is true regardless of the valuation of *V*.
The set of all possible constraints *C(V)* is defined according to the
following grammar:
g = v < c | v > c | v <= c | v >= c | v == c | v != c | g && g | true
With v a variable in *V* and c a numerical value.
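For illustration, a guard in this grammar could be evaluated over a valuation of *V* along these lines (a hypothetical Python sketch, not part of the RV code):

```python
import operator
import re

OPS = {"<": operator.lt, ">": operator.gt, "<=": operator.le,
       ">=": operator.ge, "==": operator.eq, "!=": operator.ne}

def eval_guard(guard, env):
    """Evaluate 'v OP c [&& ...]' or 'true' over env = {variable: value}."""
    if guard.strip() == "true":
        return True
    for atom in guard.split("&&"):
        v, op, c = re.fullmatch(
            r"\s*(\w+)\s*(<=|>=|==|!=|<|>)\s*(\d+)\s*", atom).groups()
        if not OPS[op](env[v], int(c)):
            return False
    return True
```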
We define the special case of hybrid automata whose variables grow with uniform
rates as timed automata. In this case, the variables are called clocks.
As the name implies, timed automata can be used to describe real time.
Additionally, clocks support another type of guard which always evaluates to true:
reset(v)
The reset constraint is used to set the value of a clock to 0.
The set of invariant constraints *C'(V)* is a subset of *C(V)* including only
constraints of the form:
g = v < c | true
This simplifies the implementation as a clock expiration is a necessary and
sufficient condition for the violation of invariants while still allowing more
complex constraints to be specified as guards.
It is important to note that any hybrid automaton is a valid deterministic
automaton with additional guards and invariants. Those can only further
constrain what transitions are valid but it is not possible to define
transition functions starting from the same state in *X* and the same event in
*E* but ending up in different states in *X* based on the valuation of *V*.
Examples
--------
Wip as hybrid automaton
~~~~~~~~~~~~~~~~~~~~~~~
The 'wip' (wakeup in preemptive) example introduced as a deterministic automaton
can also be described as:
- *X* = { ``any_thread_running`` }
- *E* = { ``sched_waking`` }
- *V* = { ``preemptive`` }
- x\ :subscript:`0` = ``any_thread_running``
- X\ :subscript:`m` = {``any_thread_running``}
- *f* =
- *f*\ (``any_thread_running``, ``sched_waking``, ``preemptive==0``) = ``any_thread_running``
- *i* =
- *i*\ (``any_thread_running``) = ``true``
Which can be represented graphically as::
|
|
v
#====================# sched_waking;preemptive==0
H H ------------------------------+
H any_thread_running H |
H H <-----------------------------+
#====================#
In this example, by using the preemptive state of the system as an environment
variable, we can assert this constraint on ``sched_waking`` without requiring
preemption events (as we would in a deterministic automaton), which can be
useful in case those events are not available or not reliable on the system.
Since all the invariants in *i* are true, we can omit them from the representation.
Stall model with guards (iteration 1)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As a sample timed automaton we can define 'stall' as:
- *X* = { ``dequeued``, ``enqueued``, ``running``}
- *E* = { ``enqueue``, ``dequeue``, ``switch_in``}
- *V* = { ``clk`` }
- x\ :subscript:`0` = ``dequeued``
- X\ :subscript:`m` = {``dequeued``}
- *f* =
- *f*\ (``enqueued``, ``switch_in``, ``clk < threshold``) = ``running``
- *f*\ (``running``, ``dequeue``) = ``dequeued``
- *f*\ (``dequeued``, ``enqueue``, ``reset(clk)``) = ``enqueued``
- *i* = *omitted as all true*
Graphically represented as::
|
|
v
#============================#
H dequeued H <+
#============================# |
| |
| enqueue; reset(clk) |
v |
+----------------------------+ |
| enqueued | | dequeue
+----------------------------+ |
| |
| switch_in; clk < threshold |
v |
+----------------------------+ |
| running | -+
+----------------------------+
This model imposes that the time between when a task is enqueued (it becomes
runnable) and when the task gets to run must be lower than a certain threshold.
A failure in this model means that the task is starving.
One problem in using guards on the edges in this case is that the model will
not report a failure until the ``switch_in`` event occurs. This means that,
according to the model, it is valid for the task never to run.
Stall model with invariants (iteration 2)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The first iteration isn't exactly what was intended; we can change the model as follows:
- *X* = { ``dequeued``, ``enqueued``, ``running``}
- *E* = { ``enqueue``, ``dequeue``, ``switch_in``}
- *V* = { ``clk`` }
- x\ :subscript:`0` = ``dequeued``
- X\ :subscript:`m` = {``dequeued``}
- *f* =
- *f*\ (``enqueued``, ``switch_in``) = ``running``
- *f*\ (``running``, ``dequeue``) = ``dequeued``
- *f*\ (``dequeued``, ``enqueue``, ``reset(clk)``) = ``enqueued``
- *i* =
- *i*\ (``enqueued``) = ``clk < threshold``
Graphically::
|
|
v
#=========================#
H dequeued H <+
#=========================# |
| |
| enqueue; reset(clk) |
v |
+-------------------------+ |
| enqueued | |
| clk < threshold | | dequeue
+-------------------------+ |
| |
| switch_in |
v |
+-------------------------+ |
| running | -+
+-------------------------+
In this case, we moved the guard to the ``enqueued`` state as an invariant.
This means we not only forbid the occurrence of ``switch_in`` when ``clk`` is
past the threshold, but also mark the run as invalid if we are *still* in
``enqueued`` after the threshold. This model is effectively in an invalid state
as soon as a task is starving, rather than when the starving task finally runs.
Hybrid Automaton in C
---------------------
The definition of hybrid automata in C is heavily based on the deterministic
automata one. Specifically, we add the set of environment variables and the
constraints (both guards on transitions and invariants on states) as follows.
This is a combination of both iterations of the stall example::
/* enum representation of X (set of states) to be used as index */
enum states {
dequeued,
enqueued,
running,
state_max,
};
#define INVALID_STATE state_max
/* enum representation of E (set of events) to be used as index */
enum events {
dequeue,
enqueue,
switch_in,
event_max,
};
/* enum representation of V (set of environment variables) to be used as index */
enum envs {
clk,
env_max,
env_max_stored = env_max,
};
struct automaton {
char *state_names[state_max]; // X: the set of states
char *event_names[event_max]; // E: the finite set of events
char *env_names[env_max]; // V: the finite set of env vars
unsigned char function[state_max][event_max]; // f: transition function
unsigned char initial_state; // x_0: the initial state
bool final_states[state_max]; // X_m: the set of marked states
};
struct automaton aut = {
.state_names = {
"dequeued",
"enqueued",
"running",
},
.event_names = {
"dequeue",
"enqueue",
"switch_in",
},
.env_names = {
"clk",
},
.function = {
{ INVALID_STATE, enqueued, INVALID_STATE },
{ INVALID_STATE, INVALID_STATE, running },
{ dequeued, INVALID_STATE, INVALID_STATE },
},
.initial_state = dequeued,
.final_states = { 1, 0, 0 },
};
static bool verify_constraint(enum states curr_state, enum events event,
enum states next_state)
{
bool res = true;
/* Validate guards as part of f */
if (curr_state == enqueued && event == switch_in)
res = get_env(clk) < threshold;
else if (curr_state == dequeued && event == enqueue)
reset_env(clk);
/* Validate invariants in i */
if (next_state == curr_state || !res)
return res;
if (next_state == enqueued)
ha_start_timer_jiffy(ha_mon, clk, threshold_jiffies);
else if (curr_state == enqueued)
res = !ha_cancel_timer(ha_mon);
return res;
}
The function ``verify_constraint``, reported here in simplified form, checks
guards, performs resets and starts timers to validate invariants according to
the specification; those cannot easily be represented in the automaton struct.
Due to the complex nature of environment variables, the user needs to provide
functions to get and reset environment variables that are not common clocks
(e.g. clocks with ns or jiffy granularity).
Since invariants are only defined as clock expirations (e.g. *clk <
threshold*), reaching the expiration of a timer armed when entering the state
is in fact a failure in the model and triggers a reaction. Leaving the state
stops the timer.
It is important to note that timers implemented with hrtimers introduce
overhead; if the monitor has several instances (e.g. all tasks), this can
become an issue. The impact can be decreased by using the timer wheel
(``HA_TIMER_TYPE`` set to ``HA_TIMER_WHEEL``); this lowers the responsiveness
of the timer without harming the accuracy of the model, since the invariant
condition is checked before disabling the timer in case the callback is late.
Alternatively, if the monitor is guaranteed to *eventually* leave the state and
the incurred delay to wait for the next event is acceptable, guards can be used
in place of invariants, as seen in the stall example.
Graphviz .dot format
--------------------
The Graphviz representation of hybrid automata is likewise an extension of the
deterministic automata one. Specifically, guards can be provided in the event
name, separated by ``;``::
"state_start" -> "state_dest" [ label = "sched_waking;preemptible==0;reset(clk)" ];
Invariants can be specified in the state label (not the node name!), separated by ``\n``::
"enqueued" [label = "enqueued\nclk < threshold_jiffies"];
Constraints can be specified as valid C comparisons and may contain spaces;
the first element of the comparison must be the clock, while the second is a
numerical or parametrised value. Guards allow comparisons to be combined with
boolean operations (``&&`` and ``||``); resets must be separated from other
constraints.
This is the full example of the last version of the 'stall' model in DOT::
digraph state_automaton {
{node [shape = circle] "enqueued"};
{node [shape = plaintext, style=invis, label=""] "__init_dequeued"};
{node [shape = doublecircle] "dequeued"};
{node [shape = circle] "running"};
"__init_dequeued" -> "dequeued";
"enqueued" [label = "enqueued\nclk < threshold_jiffies"];
"running" [label = "running"];
"dequeued" [label = "dequeued"];
"enqueued" -> "running" [ label = "switch_in" ];
"running" -> "dequeued" [ label = "dequeue" ];
"dequeued" -> "enqueued" [ label = "enqueue;reset(clk)" ];
{ rank = min ;
"__init_dequeued";
"dequeued";
}
}
References
----------
One book covering model checking and timed automata is::
Christel Baier and Joost-Pieter Katoen: Principles of Model Checking,
The MIT Press, 2008.
Hybrid automata are described in detail in::
Thomas Henzinger: The theory of hybrid automata,
Proceedings 11th Annual IEEE Symposium on Logic in Computer Science, 1996.


@@ -9,9 +9,12 @@ Runtime Verification
runtime-verification.rst
deterministic_automata.rst
linear_temporal_logic.rst
hybrid_automata.rst
monitor_synthesis.rst
da_monitor_instrumentation.rst
monitor_wip.rst
monitor_wwnr.rst
monitor_sched.rst
monitor_rtapp.rst
monitor_stall.rst
monitor_deadline.rst


@@ -0,0 +1,84 @@
Deadline monitors
=================
- Name: deadline
- Type: container for multiple monitors
- Author: Gabriele Monaco <gmonaco@redhat.com>
Description
-----------
The deadline monitor is a set of specifications to describe the deadline
scheduler behaviour. It includes monitors per scheduling entity (deadline tasks
and servers) that work independently to verify different specifications the
deadline scheduler should follow.
Specifications
--------------
Monitor nomiss
~~~~~~~~~~~~~~
The nomiss monitor ensures dl entities get to run *and* run to completion
before their deadline, although deferrable servers may not run. An entity is
considered done if ``throttled``, either because it yielded or used up its
runtime, or when it voluntarily starts ``sleeping``.
The monitor includes a user configurable deadline threshold. If the total
utilisation of deadline tasks is larger than 1, they are only guaranteed
bounded tardiness. See Documentation/scheduler/sched-deadline.rst for more
details. The threshold (module parameter ``nomiss.deadline_thresh``) can be
configured, based on the acceptable tardiness in the system, to prevent the
monitor from failing. Since ``dl_throttle`` is a valid outcome for the entity
to be done, the minimum tardiness needs to be 1 tick to account for the
throttle delay, unless the ``HRTICK_DL`` scheduler feature is active.
Servers also have an intermediate ``idle`` state, entered from ready or
running as soon as no runnable task is available; no timing constraint is
applied there. A server goes to sleep by stopping; there is no wakeup
equivalent, as the order of a server starting and replenishing is not defined,
hence a server can run from sleeping without being ready::
|
sched_wakeup v
dl_replenish;reset(clk) -- #=========================#
| H H dl_replenish;reset(clk)
+-----------> H H <--------------------+
H H |
+- dl_server_stop ---- H ready H |
| +-----------------> H clk < DEADLINE_NS() H dl_throttle; |
| | H H is_defer == 1 |
| | sched_switch_in - H H -----------------+ |
| | | #=========================# | |
| | | | ^ | |
| | | dl_server_idle dl_replenish;reset(clk) | |
| | | v | | |
| | | +--------------+ | |
| | | +------ | | | |
| | | dl_server_idle | | dl_throttle | |
| | | | | idle | -----------------+ | |
| | | +-----> | | | | |
| | | | | | | |
| | | | | | | |
+--+--+---+--- dl_server_stop -- +--------------+ | | |
| | | | | ^ | | |
| | | | sched_switch_in dl_server_idle | | |
| | | | v | | | |
| | | | +---------- +---------------------+ | | |
| | | | sched_switch_in | | | | |
| | | | sched_wakeup | | | | |
| | | | dl_replenish; | running | -------+ | | |
| | | | reset(clk) | clk < DEADLINE_NS() | | | | |
| | | | +---------> | | dl_throttle | | |
| | | +----------------> | | | | | |
| | | +---------------------+ | | | |
| | sched_wakeup ^ sched_switch_suspend | | | |
v v dl_replenish;reset(clk) | dl_server_stop | | | |
+--------------+ | | v v v |
| | - sched_switch_in + | +---------------+
| | <---------------------+ dl_throttle +-- | |
| sleeping | sched_wakeup | | throttled |
| | -- dl_server_stop dl_server_idle +-> | |
| | dl_server_idle sched_switch_suspend +---------------+
+--------------+ <---------+ ^
| |
+------ dl_throttle;is_constr_dl == 1 || is_defer == 1 ------+


@@ -346,55 +346,21 @@ Monitor opid
The operations with preemption and irq disabled (opid) monitor ensures
operations like ``wakeup`` and ``need_resched`` occur with interrupts and
preemption disabled or during interrupt context, in such case preemption may
not be disabled explicitly.
preemption disabled.
``need_resched`` can be set by some RCU internals functions, in which case it
doesn't match a task wakeup and might occur with only interrupts disabled::
doesn't match a task wakeup and might occur with only interrupts disabled.
The interrupt and preemption status are validated by the hybrid automaton
constraints when processing the events::
| sched_need_resched
| sched_waking
| irq_entry
| +--------------------+
v v |
+------------------------------------------------------+
+----------- | disabled | <+
| +------------------------------------------------------+ |
| | ^ |
| | preempt_disable sched_need_resched |
| preempt_enable | +--------------------+ |
| v | v | |
| +------------------------------------------------------+ |
| | irq_disabled | |
| +------------------------------------------------------+ |
| | | ^ |
| irq_entry irq_entry | | |
| sched_need_resched v | irq_disable |
| sched_waking +--------------+ | | |
| +----- | | irq_enable | |
| | | in_irq | | | |
| +----> | | | | |
| +--------------+ | | irq_disable
| | | | |
| irq_enable | irq_enable | | |
| v v | |
| #======================================================# |
| H enabled H |
| #======================================================# |
| | ^ ^ preempt_enable | |
| preempt_disable preempt_enable +--------------------+ |
| v | |
| +------------------+ | |
+----------> | preempt_disabled | -+ |
+------------------+ |
| |
+-------------------------------------------------------+
This monitor is designed to work on ``PREEMPT_RT`` kernels, the special case of
events occurring in interrupt context is a shortcut to identify valid scenarios
where the preemption tracepoints might not be visible, during interrupts
preemption is always disabled. On non- ``PREEMPT_RT`` kernels, the interrupts
might invoke a softirq to set ``need_resched`` and wake up a task. This is
another special case that is currently not supported by the monitor.
|
|
v
#=========# sched_need_resched;irq_off == 1
H H sched_waking;irq_off == 1 && preempt_off == 1
H any H ------------------------------------------------+
H H |
H H <-----------------------------------------------+
#=========#
References
----------


@@ -0,0 +1,43 @@
Monitor stall
=============
- Name: stall - stalled task monitor
- Type: per-task hybrid automaton
- Author: Gabriele Monaco <gmonaco@redhat.com>
Description
-----------
The stalled task (stall) monitor is a sample per-task timed monitor that checks
if tasks are scheduled within a defined threshold after they are ready::
|
|
v
#==========================#
+-----------------> H dequeued H
| #==========================#
| |
sched_switch_wait | sched_wakeup;reset(clk)
| v
| +--------------------------+ <+
| | enqueued | | sched_wakeup
| | clk < threshold_jiffies | -+
| +--------------------------+
| | ^
| sched_switch_in sched_switch_preempt;reset(clk)
| v |
| +--------------------------+
+------------------ | running |
+--------------------------+
^ sched_switch_in |
| sched_wakeup |
+----------------------+
The threshold can be configured as a parameter by either booting with the
``stall.threshold_jiffies=<new value>`` argument or writing a new value to
``/sys/module/stall/parameters/threshold_jiffies``.
Specification
-------------
Graphviz Dot file in tools/verification/models/stall.dot


@@ -18,8 +18,8 @@ functions that glue the monitor to the system reference model, and the
trace output as a reaction to event parsing and exceptions, as depicted
below::
Linux +----- RV Monitor ----------------------------------+ Formal
Realm | | Realm
Linux +---- RV Monitor ----------------------------------+ Formal
Realm | | Realm
+-------------------+ +----------------+ +-----------------+
| Linux kernel | | Monitor | | Reference |
| Tracing | -> | Instance(s) | <- | Model |
@@ -45,6 +45,7 @@ creating monitors. The header files are:
* rv/da_monitor.h for deterministic automaton monitor.
* rv/ltl_monitor.h for linear temporal logic monitor.
* rv/ha_monitor.h for hybrid automaton monitor.
rvgen
-----
@@ -252,6 +253,118 @@ the task, the monitor may need some time to start validating tasks which have
been running before the monitor is enabled. Therefore, it is recommended to
start the tasks of interest after enabling the monitor.
rv/ha_monitor.h
+++++++++++++++
The implementation of hybrid automaton monitors derives directly from the
deterministic automaton one. Despite using a different header
(``ha_monitor.h``), the functions to handle events are the same (e.g.
``da_handle_event``).
Additionally, the `rvgen` tool populates skeletons for the
``ha_verify_constraint``, ``ha_get_env`` and ``ha_reset_env`` based on the
monitor specification in the monitor source file.
``ha_verify_constraint`` is typically ready as it is generated by `rvgen`:
* standard constraints on edges are turned into the form::
res = ha_get_env(ha_mon, ENV) < VALUE;
* reset constraints are turned into the form::
ha_reset_env(ha_mon, ENV);
* constraints on the state are implemented using timers
- armed before entering the state
- cancelled while entering any other state
- untouched if the state does not change as a result of the event
- checked if the timer expired but the callback did not run
- available implementations are `HA_TIMER_HRTIMER` and `HA_TIMER_WHEEL`
- hrtimers are more precise but may have higher overhead
- select by defining `HA_TIMER_TYPE` before including the header::
#define HA_TIMER_TYPE HA_TIMER_HRTIMER
Constraint values can be specified in different forms:
* literal value (with optional unit). E.g.::
preemptive == 0
clk < 100ns
threshold <= 10j
* constant value (uppercase string). E.g.::
clk < MAX_NS
* parameter (lowercase string). E.g.::
clk <= threshold_jiffies
* macro (uppercase string with parentheses). E.g.::
clk < MAX_NS()
* function (lowercase string with parentheses). E.g.::
clk <= threshold_jiffies()
In all cases, `rvgen` will try to infer the type of the environment variable
from its name or unit. For instance, constants or parameters ending with
``_NS`` or ``_jiffies`` are treated as clocks with ns and jiffy granularity,
respectively. Literals with unit `j` are jiffies, and if a time unit (`ns` to
`s`) is specified, `rvgen` converts the value to `ns`.
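The unit handling described above could look roughly like this (an illustrative sketch, not the actual `rvgen` logic):

```python
import re

NS_PER = {"ns": 1, "us": 1_000, "ms": 1_000_000, "s": 1_000_000_000}

def parse_literal(literal):
    """Return (value, kind): time units collapse to ns, 'j' means jiffies."""
    value, unit = re.fullmatch(r"(\d+)([a-z]*)", literal).groups()
    value = int(value)
    if unit == "j":
        return value, "jiffies"
    if unit in NS_PER:
        return value * NS_PER[unit], "ns"
    return value, "plain"      # bare number, e.g. in 'preemptive == 0'
```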
Constants need to be defined by the user (but unlike the name, they don't
necessarily need to be defined as constants). Parameters get converted to
module parameters and the user needs to provide a default value.
Functions and macros are also defined by the user; by default they receive the
``ha_monitor`` as an argument. A common usage is to get the required value
from the target, e.g. the task in per-task monitors, using the helper
``ha_get_target(ha_mon)``.
If `rvgen` determines that the variable is a clock, it provides the getter and
resetter based on the unit. Otherwise, the user needs to provide an appropriate
definition.
Typically, non-clock environment variables are not reset. In that case, only
the getter skeleton will be present in the file generated by `rvgen`.
For instance, the getter for preemptive can be filled as::
static u64 ha_get_env(struct ha_monitor *ha_mon, enum envs env)
{
if (env == preemptible)
return preempt_count() == 0;
return ENV_INVALID_VALUE;
}
The function is supplied the ``ha_mon`` parameter in case some storage is
required (as it is for clocks), but environment variables without reset do not
require storage and can ignore that argument.
The number of environment variables requiring storage is limited by
``MAX_HA_ENV_LEN``; this limitation does not apply to other variables.
Finally, constraints on states are only valid for clocks and only if the
constraint is of the form `clk < N`. This is because such constraints are
implemented with the expiration of a timer.
Typically the clock variables are reset just before arming the timer, but this
doesn't have to be the case and the available functions take care of it.
It is the responsibility of per-task monitors to make sure no timer is left
running when the task exits.
By default the generator implements timers with hrtimers (setting
``HA_TIMER_TYPE`` to ``HA_TIMER_HRTIMER``); this gives better responsiveness
but higher overhead. The timer wheel (``HA_TIMER_WHEEL``) is a good
alternative for monitors with several instances (e.g. per-task): it achieves
lower overhead at the cost of increased latency, without compromising
precision.
Final remarks
-------------


@@ -13,6 +13,7 @@
#define RV_MON_GLOBAL 0
#define RV_MON_PER_CPU 1
#define RV_MON_PER_TASK 2
#define RV_MON_PER_OBJ 3
#ifdef CONFIG_RV
#include <linux/array_size.h>
@@ -81,11 +82,49 @@ struct ltl_monitor {};
#endif /* CONFIG_RV_LTL_MONITOR */
#ifdef CONFIG_RV_HA_MONITOR
/*
* In the future, hybrid automata may rely on multiple
* environment variables, e.g. different clocks started at
* different times or running at different speeds.
* For now we support only 1 variable.
*/
#define MAX_HA_ENV_LEN 1
/*
* Monitors can pick the preferred timer implementation:
* No timer: if monitors don't have state invariants.
* Timer wheel: lightweight invariants check but far less precise.
* Hrtimer: accurate invariants check with higher overhead.
*/
#define HA_TIMER_NONE 0
#define HA_TIMER_WHEEL 1
#define HA_TIMER_HRTIMER 2
/*
* Hybrid automaton per-object variables.
*/
struct ha_monitor {
struct da_monitor da_mon;
u64 env_store[MAX_HA_ENV_LEN];
union {
struct hrtimer hrtimer;
struct timer_list timer;
};
};
#else
struct ha_monitor { };
#endif /* CONFIG_RV_HA_MONITOR */
#define RV_PER_TASK_MONITOR_INIT (CONFIG_RV_PER_TASK_MONITORS)
union rv_task_monitor {
struct da_monitor da_mon;
struct ltl_monitor ltl_mon;
struct ha_monitor ha_mon;
};
#ifdef CONFIG_RV_REACTORS


@@ -37,4 +37,31 @@ extern void dl_clear_root_domain_cpu(int cpu);
extern u64 dl_cookie;
extern bool dl_bw_visited(int cpu, u64 cookie);
static inline bool dl_server(struct sched_dl_entity *dl_se)
{
return dl_se->dl_server;
}
static inline struct task_struct *dl_task_of(struct sched_dl_entity *dl_se)
{
BUG_ON(dl_server(dl_se));
return container_of(dl_se, struct task_struct, dl);
}
/*
* Regarding the deadline, a task with implicit deadline has a relative
* deadline == relative period. A task with constrained deadline has a
* relative deadline <= relative period.
*
* We support constrained deadline tasks. However, there are some restrictions
* applied only for tasks which do not have an implicit deadline. See
* update_dl_entity() to know more about such restrictions.
*
* The dl_is_implicit() returns true if the task has an implicit deadline.
*/
static inline bool dl_is_implicit(struct sched_dl_entity *dl_se)
{
return dl_se->dl_deadline == dl_se->dl_period;
}
#endif /* _LINUX_SCHED_DEADLINE_H */


@@ -3,9 +3,9 @@
* Copyright (C) 2019-2022 Red Hat, Inc. Daniel Bristot de Oliveira <bristot@kernel.org>
*
* Deterministic automata (DA) monitor functions, to be used together
* with automata models in C generated by the dot2k tool.
* with automata models in C generated by the rvgen tool.
*
* The rvgen tool is available at tools/verification/rvgen/
*
* For further information, see:
* Documentation/trace/rv/monitor_synthesis.rst
@@ -19,6 +19,8 @@
#include <linux/stringify.h>
#include <linux/bug.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/hashtable.h>
/*
* Per-cpu variables require a unique name although static in some
@@ -28,6 +30,43 @@
static struct rv_monitor rv_this;
/*
* Hook to allow the implementation of hybrid automata: define it with a
* function that takes curr_state, event and next_state and returns true if the
* environment constraints (e.g. timing) are satisfied, false otherwise.
*/
#ifndef da_monitor_event_hook
#define da_monitor_event_hook(...) true
#endif
/*
* Hook to allow the implementation of hybrid automata: define it with a
* function that takes the da_monitor and performs further initialisation
* (e.g. set up timers).
*/
#ifndef da_monitor_init_hook
#define da_monitor_init_hook(da_mon)
#endif
/*
* Hook to allow the implementation of hybrid automata: define it with a
* function that takes the da_monitor and performs further reset (e.g. reset
* all clocks).
*/
#ifndef da_monitor_reset_hook
#define da_monitor_reset_hook(da_mon)
#endif
/*
* Type for the target id; defaults to int but can be overridden.
* A long type can work as hash table key (PER_OBJ) but will be downgraded to
* int in the event tracepoint.
* Unused for implicit monitors.
*/
#ifndef da_id_type
#define da_id_type int
#endif
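The three hook macros above follow the usual "define-before-include" override pattern: a monitor defines its own hook before pulling in da_monitor.h, and a no-op default applies otherwise. A minimal userspace sketch of that pattern (`my_event_hook`, `handle_event` and the integer states are hypothetical, not part of the RV API):

```c
#include <assert.h>
#include <stdbool.h>

/* Consumer overrides the hook before "including" the monitor logic. */
#define da_monitor_event_hook my_event_hook

/* Default definitions, mirroring the #ifndef guards in da_monitor.h */
#ifndef da_monitor_event_hook
#define da_monitor_event_hook(...) true
#endif
#ifndef da_monitor_init_hook
#define da_monitor_init_hook(mon) do { } while (0)
#endif

static int hook_calls;

static bool my_event_hook(int curr_state, int event, int next_state)
{
	(void)curr_state;
	(void)next_state;
	hook_calls++;
	return event != 3;	/* hypothetical constraint: event 3 is forbidden */
}

/* The generic event path only sees the macro name. */
static bool handle_event(int curr, int event, int next)
{
	return da_monitor_event_hook(curr, event, next);
}
```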
static void react(enum states curr_state, enum events event)
{
rv_react(&rv_this,
@@ -42,6 +81,7 @@ static void react(enum states curr_state, enum events event)
*/
static inline void da_monitor_reset(struct da_monitor *da_mon)
{
da_monitor_reset_hook(da_mon);
da_mon->monitoring = 0;
da_mon->curr_state = model_get_initial_state();
}
@@ -56,6 +96,7 @@ static inline void da_monitor_start(struct da_monitor *da_mon)
{
da_mon->curr_state = model_get_initial_state();
da_mon->monitoring = 1;
da_monitor_init_hook(da_mon);
}
/*
@@ -97,90 +138,6 @@ static inline bool da_monitor_handling_event(struct da_monitor *da_mon)
return 1;
}
#if RV_MON_TYPE == RV_MON_GLOBAL
/*
* Functions to define, init and get a global monitor.
@@ -219,7 +176,10 @@ static inline int da_monitor_init(void)
/*
* da_monitor_destroy - destroy the monitor
*/
static inline void da_monitor_destroy(void)
{
da_monitor_reset_all();
}
#elif RV_MON_TYPE == RV_MON_PER_CPU
/*
@@ -265,7 +225,10 @@ static inline int da_monitor_init(void)
/*
* da_monitor_destroy - destroy the monitor
*/
static inline void da_monitor_destroy(void)
{
da_monitor_reset_all();
}
#elif RV_MON_TYPE == RV_MON_PER_TASK
/*
@@ -286,6 +249,24 @@ static inline struct da_monitor *da_get_monitor(struct task_struct *tsk)
return &tsk->rv[task_mon_slot].da_mon;
}
/*
* da_get_target - return the task associated to the monitor
*/
static inline struct task_struct *da_get_target(struct da_monitor *da_mon)
{
return container_of(da_mon, struct task_struct, rv[task_mon_slot].da_mon);
}
/*
* da_get_id - return the id associated to the monitor
*
* For per-task monitors, the id is the task's PID.
*/
static inline da_id_type da_get_id(struct da_monitor *da_mon)
{
return da_get_target(da_mon)->pid;
}
static void da_monitor_reset_all(void)
{
struct task_struct *g, *p;
@@ -330,38 +311,372 @@ static inline void da_monitor_destroy(void)
}
rv_put_task_monitor_slot(task_mon_slot);
task_mon_slot = RV_PER_TASK_MONITOR_INIT;
da_monitor_reset_all();
}
#elif RV_MON_TYPE == RV_MON_PER_OBJ
/*
* Functions to define, init and get a per-object monitor.
*/
struct da_monitor_storage {
da_id_type id;
monitor_target target;
union rv_task_monitor rv;
struct hlist_node node;
struct rcu_head rcu;
};
#ifndef DA_MONITOR_HT_BITS
#define DA_MONITOR_HT_BITS 10
#endif
static DEFINE_HASHTABLE(da_monitor_ht, DA_MONITOR_HT_BITS);
/*
* da_create_empty_storage - pre-allocate an empty storage
*/
static inline struct da_monitor_storage *da_create_empty_storage(da_id_type id)
{
struct da_monitor_storage *mon_storage;
mon_storage = kmalloc_nolock(sizeof(struct da_monitor_storage),
__GFP_ZERO, NUMA_NO_NODE);
if (!mon_storage)
return NULL;
hash_add_rcu(da_monitor_ht, &mon_storage->node, id);
mon_storage->id = id;
return mon_storage;
}
/*
* da_create_storage - create the per-object storage
*
* The caller is responsible for synchronising writers, either with locks or
* implicitly. For instance, if da_create_storage is only called from a single
* event for target (e.g. sched_switch), it's safe to call this without locks.
*/
static inline struct da_monitor *da_create_storage(da_id_type id,
monitor_target target,
struct da_monitor *da_mon)
{
struct da_monitor_storage *mon_storage;
if (da_mon)
return da_mon;
mon_storage = da_create_empty_storage(id);
if (!mon_storage)
return NULL;
mon_storage->target = target;
return &mon_storage->rv.da_mon;
}
/*
* __da_get_mon_storage - get the monitor storage from the hash table
*/
static inline struct da_monitor_storage *__da_get_mon_storage(da_id_type id)
{
struct da_monitor_storage *mon_storage;
lockdep_assert_in_rcu_read_lock();
hash_for_each_possible_rcu(da_monitor_ht, mon_storage, node, id) {
if (mon_storage->id == id)
return mon_storage;
}
return NULL;
}
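The per-object lookup above walks a single hash bucket keyed by id. Outside the kernel, the same shape can be sketched with a plain fixed-bucket chained table (no RCU, no hashtable.h; `ht_add`/`ht_get` are hypothetical stand-ins for hash_add_rcu()/__da_get_mon_storage()):

```c
#include <assert.h>
#include <stddef.h>

#define HT_BITS 4
#define HT_SIZE (1u << HT_BITS)

struct storage {
	int id;
	struct storage *next;
};

static struct storage *table[HT_SIZE];

static unsigned int ht_bucket(int id)
{
	return (unsigned int)id & (HT_SIZE - 1);
}

/* Push onto the head of the bucket's chain, like hash_add_rcu(). */
static void ht_add(struct storage *s)
{
	unsigned int b = ht_bucket(s->id);

	s->next = table[b];
	table[b] = s;
}

/* Mirrors __da_get_mon_storage(): walk one bucket, match on id. */
static struct storage *ht_get(int id)
{
	struct storage *s;

	for (s = table[ht_bucket(id)]; s; s = s->next)
		if (s->id == id)
			return s;
	return NULL;
}

/* ids 5 and 21 collide in a 16-bucket table, exercising the chain walk. */
static struct storage obj_a = { .id = 5 }, obj_b = { .id = 21 };

static void ht_populate(void)
{
	ht_add(&obj_a);
	ht_add(&obj_b);
}
```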
/*
* da_get_monitor - return the monitor for target
*/
static struct da_monitor *da_get_monitor(da_id_type id, monitor_target target)
{
struct da_monitor_storage *mon_storage;
mon_storage = __da_get_mon_storage(id);
return mon_storage ? &mon_storage->rv.da_mon : NULL;
}
/*
* da_get_target - return the object associated to the monitor
*/
static inline monitor_target da_get_target(struct da_monitor *da_mon)
{
return container_of(da_mon, struct da_monitor_storage, rv.da_mon)->target;
}
/*
* da_get_id - return the id associated to the monitor
*/
static inline da_id_type da_get_id(struct da_monitor *da_mon)
{
return container_of(da_mon, struct da_monitor_storage, rv.da_mon)->id;
}
/*
* da_create_or_get - create the per-object storage if not already there
*
* This needs a lookup so should be guarded by RCU; the condition is checked
* directly in da_create_storage().
*/
static inline void da_create_or_get(da_id_type id, monitor_target target)
{
guard(rcu)();
da_create_storage(id, target, da_get_monitor(id, target));
}
/*
* da_fill_empty_storage - store the target in a pre-allocated storage
*
* Can be used as a substitute for da_create_storage when starting a monitor in
* an environment where allocation is unsafe.
*/
static inline struct da_monitor *da_fill_empty_storage(da_id_type id,
monitor_target target,
struct da_monitor *da_mon)
{
if (unlikely(da_mon && !da_get_target(da_mon)))
container_of(da_mon, struct da_monitor_storage, rv.da_mon)->target = target;
return da_mon;
}
/*
* da_get_target_by_id - return the object associated to the id
*/
static inline monitor_target da_get_target_by_id(da_id_type id)
{
struct da_monitor_storage *mon_storage;
guard(rcu)();
mon_storage = __da_get_mon_storage(id);
if (unlikely(!mon_storage))
return NULL;
return mon_storage->target;
}
/*
* da_destroy_storage - destroy the per-object storage
*
* The caller is responsible for synchronising writers, either with locks or
* implicitly. For instance, if da_destroy_storage is called at sched_exit and
* da_create_storage can never occur after that, it's safe to call this without
* locks.
* This function includes an RCU read-side critical section to synchronise
* against da_monitor_destroy().
*/
static inline void da_destroy_storage(da_id_type id)
{
struct da_monitor_storage *mon_storage;
guard(rcu)();
mon_storage = __da_get_mon_storage(id);
if (!mon_storage)
return;
da_monitor_reset_hook(&mon_storage->rv.da_mon);
hash_del_rcu(&mon_storage->node);
kfree_rcu(mon_storage, rcu);
}
static void da_monitor_reset_all(void)
{
struct da_monitor_storage *mon_storage;
int bkt;
rcu_read_lock();
hash_for_each_rcu(da_monitor_ht, bkt, mon_storage, node)
da_monitor_reset(&mon_storage->rv.da_mon);
rcu_read_unlock();
}
static inline int da_monitor_init(void)
{
hash_init(da_monitor_ht);
return 0;
}
static inline void da_monitor_destroy(void)
{
struct da_monitor_storage *mon_storage;
struct hlist_node *tmp;
int bkt;
/*
* This function is called after all probes are disabled, we need only
* worry about concurrency against old events.
*/
synchronize_rcu();
hash_for_each_safe(da_monitor_ht, bkt, tmp, mon_storage, node) {
da_monitor_reset_hook(&mon_storage->rv.da_mon);
hash_del_rcu(&mon_storage->node);
kfree(mon_storage);
}
}
/*
* Allow the per-object monitors to run allocation manually, necessary if the
* start condition is in a context problematic for allocation (e.g. scheduling).
* In that case, if the storage was pre-allocated without a target, set it now.
*/
#ifdef DA_SKIP_AUTO_ALLOC
#define da_prepare_storage da_fill_empty_storage
#else
#define da_prepare_storage da_create_storage
#endif /* DA_SKIP_AUTO_ALLOC */
#endif /* RV_MON_TYPE */
#if RV_MON_TYPE == RV_MON_GLOBAL || RV_MON_TYPE == RV_MON_PER_CPU
/*
* Trace events for implicit monitors. An implicit monitor is one for which the
* handler does not need to specify which da_monitor to manipulate. Examples
* of implicit monitors are the per_cpu and the global ones.
*/
static inline void da_trace_event(struct da_monitor *da_mon,
char *curr_state, char *event,
char *next_state, bool is_final,
da_id_type id)
{
CONCATENATE(trace_event_, MONITOR_NAME)(curr_state, event, next_state,
is_final);
}
static inline void da_trace_error(struct da_monitor *da_mon,
char *curr_state, char *event,
da_id_type id)
{
CONCATENATE(trace_error_, MONITOR_NAME)(curr_state, event);
}
/*
* da_get_id - unused for implicit monitors
*/
static inline da_id_type da_get_id(struct da_monitor *da_mon)
{
return 0;
}
#elif RV_MON_TYPE == RV_MON_PER_TASK || RV_MON_TYPE == RV_MON_PER_OBJ
/*
* Trace events for per_task/per_object monitors, report the target id.
*/
static inline void da_trace_event(struct da_monitor *da_mon,
char *curr_state, char *event,
char *next_state, bool is_final,
da_id_type id)
{
CONCATENATE(trace_event_, MONITOR_NAME)(id, curr_state, event,
next_state, is_final);
}
static inline void da_trace_error(struct da_monitor *da_mon,
char *curr_state, char *event,
da_id_type id)
{
CONCATENATE(trace_error_, MONITOR_NAME)(id, curr_state, event);
}
#endif /* RV_MON_TYPE */
/*
* da_event - handle an event for the da_mon
*
* This function is valid for both implicit and id monitors.
* Retry in case there is a race between getting and setting the next state,
* warn and reset the monitor if it runs out of retries. The monitor should be
* able to handle various orders.
*/
static inline bool da_event(struct da_monitor *da_mon, enum events event, da_id_type id)
{
enum states curr_state, next_state;
curr_state = READ_ONCE(da_mon->curr_state);
for (int i = 0; i < MAX_DA_RETRY_RACING_EVENTS; i++) {
next_state = model_get_next_state(curr_state, event);
if (next_state == INVALID_STATE) {
react(curr_state, event);
da_trace_error(da_mon, model_get_state_name(curr_state),
model_get_event_name(event), id);
return false;
}
if (likely(try_cmpxchg(&da_mon->curr_state, &curr_state, next_state))) {
if (!da_monitor_event_hook(da_mon, curr_state, event, next_state, id))
return false;
da_trace_event(da_mon, model_get_state_name(curr_state),
model_get_event_name(event),
model_get_state_name(next_state),
model_is_final_state(next_state), id);
return true;
}
}
trace_rv_retries_error(__stringify(MONITOR_NAME), model_get_event_name(event));
pr_warn("rv: " __stringify(MAX_DA_RETRY_RACING_EVENTS)
" retries reached for event %s, resetting monitor %s",
model_get_event_name(event), __stringify(MONITOR_NAME));
return false;
}
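The retry loop in da_event() resolves races between concurrent events by re-computing the transition until the compare-and-exchange on curr_state lands. The same shape in portable C11 atomics, with a hypothetical two-state toy model (in this single-threaded sketch the exchange succeeds on the first try):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define MAX_RETRY 3
#define INVALID_STATE -1

static _Atomic int mon_state;	/* starts in state 0 */

/* Toy model: event 0 toggles the state, event 1 is only valid in state 0. */
static int next_state(int curr, int event)
{
	if (event == 0)
		return !curr;
	return curr == 0 ? 1 : INVALID_STATE;
}

/* Same shape as da_event(): read, compute, try to exchange, retry on race. */
static bool toy_event(int event)
{
	int curr = atomic_load(&mon_state);

	for (int i = 0; i < MAX_RETRY; i++) {
		int next = next_state(curr, event);

		if (next == INVALID_STATE)
			return false;	/* would react and trace here */
		/* On failure, curr is reloaded with the observed value. */
		if (atomic_compare_exchange_strong(&mon_state, &curr, next))
			return true;
	}
	return false;	/* out of retries: caller resets the monitor */
}
```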
static inline void __da_handle_event_common(struct da_monitor *da_mon,
enum events event, da_id_type id)
{
if (!da_event(da_mon, event, id))
da_monitor_reset(da_mon);
}
static inline void __da_handle_event(struct da_monitor *da_mon,
enum events event, da_id_type id)
{
if (da_monitor_handling_event(da_mon))
__da_handle_event_common(da_mon, event, id);
}
static inline bool __da_handle_start_event(struct da_monitor *da_mon,
enum events event, da_id_type id)
{
if (!da_monitor_enabled())
return 0;
if (unlikely(!da_monitoring(da_mon))) {
da_monitor_start(da_mon);
return 0;
}
__da_handle_event_common(da_mon, event, id);
return 1;
}
static inline bool __da_handle_start_run_event(struct da_monitor *da_mon,
enum events event, da_id_type id)
{
if (!da_monitor_enabled())
return 0;
if (unlikely(!da_monitoring(da_mon)))
da_monitor_start(da_mon);
__da_handle_event_common(da_mon, event, id);
return 1;
}
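The difference between the two start helpers above is only whether the first event after (re)starting is consumed: __da_handle_start_event() starts the monitor and skips the event, while __da_handle_start_run_event() starts it and handles the event too. A stripped-down model of that logic (`struct toy_mon` and its fields are illustrative only):

```c
#include <assert.h>
#include <stdbool.h>

struct toy_mon {
	bool monitoring;
	int handled;	/* events that actually reached the model */
};

/* Mirrors __da_handle_start_event(): on first contact, only start. */
static bool start_event(struct toy_mon *m)
{
	if (!m->monitoring) {
		m->monitoring = true;	/* start, but skip this event */
		return false;
	}
	m->handled++;
	return true;
}

/* Mirrors __da_handle_start_run_event(): start and handle the event. */
static bool start_run_event(struct toy_mon *m)
{
	if (!m->monitoring)
		m->monitoring = true;
	m->handled++;
	return true;
}

static struct toy_mon demo_a, demo_b;	/* zero-initialised: not monitoring */
```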
#if RV_MON_TYPE == RV_MON_GLOBAL || RV_MON_TYPE == RV_MON_PER_CPU
/*
* Handle event for implicit monitor: da_get_monitor() will figure out
* the monitor.
*/
/*
* da_handle_event - handle an event
*/
static inline void da_handle_event(enum events event)
{
__da_handle_event(da_get_monitor(), event, 0);
}
/*
@@ -376,21 +691,7 @@ static inline void da_handle_event(enum events event)
*/
static inline bool da_handle_start_event(enum events event)
{
return __da_handle_start_event(da_get_monitor(), event, 0);
}
/*
@@ -401,19 +702,7 @@ static inline bool da_handle_start_event(enum events event)
*/
static inline bool da_handle_start_run_event(enum events event)
{
return __da_handle_start_run_event(da_get_monitor(), event, 0);
}
#elif RV_MON_TYPE == RV_MON_PER_TASK
@@ -421,29 +710,12 @@ static inline bool da_handle_start_run_event(enum events event)
* Handle event for per task.
*/
/*
* da_handle_event - handle an event
*/
static inline void da_handle_event(struct task_struct *tsk, enum events event)
{
__da_handle_event(da_get_monitor(tsk), event, tsk->pid);
}
/*
@@ -459,21 +731,7 @@ static inline void da_handle_event(struct task_struct *tsk, enum events event)
static inline bool da_handle_start_event(struct task_struct *tsk,
enum events event)
{
return __da_handle_start_event(da_get_monitor(tsk), event, tsk->pid);
}
/*
@@ -484,20 +742,78 @@ static inline bool da_handle_start_event(struct task_struct *tsk,
*/
static inline bool da_handle_start_run_event(struct task_struct *tsk,
enum events event)
{
return __da_handle_start_run_event(da_get_monitor(tsk), event, tsk->pid);
}
#elif RV_MON_TYPE == RV_MON_PER_OBJ
/*
* Handle event for per object.
*/
/*
* da_handle_event - handle an event
*/
static inline void da_handle_event(da_id_type id, monitor_target target, enum events event)
{
struct da_monitor *da_mon;
guard(rcu)();
da_mon = da_get_monitor(id, target);
if (likely(da_mon))
__da_handle_event(da_mon, event, id);
}
/*
* da_handle_start_event - start monitoring or handle event
*
* This function is used to notify the monitor that the system is returning
* to the initial state, so the monitor can start monitoring in the next event.
* Thus:
*
* If the monitor already started, handle the event.
* If the monitor did not start yet, start the monitor but skip the event.
*/
static inline bool da_handle_start_event(da_id_type id, monitor_target target,
enum events event)
{
struct da_monitor *da_mon;
guard(rcu)();
da_mon = da_get_monitor(id, target);
da_mon = da_prepare_storage(id, target, da_mon);
if (unlikely(!da_mon))
return 0;
return __da_handle_start_event(da_mon, event, id);
}
/*
* da_handle_start_run_event - start monitoring and handle event
*
* This function is used to notify the monitor that the system is in the
* initial state, so the monitor can start monitoring and handling event.
*/
static inline bool da_handle_start_run_event(da_id_type id, monitor_target target,
enum events event)
{
struct da_monitor *da_mon;
guard(rcu)();
da_mon = da_get_monitor(id, target);
da_mon = da_prepare_storage(id, target, da_mon);
if (unlikely(!da_mon))
return 0;
return __da_handle_start_run_event(da_mon, event, id);
}
static inline void da_reset(da_id_type id, monitor_target target)
{
struct da_monitor *da_mon;
guard(rcu)();
da_mon = da_get_monitor(id, target);
if (likely(da_mon))
da_monitor_reset(da_mon);
}
#endif /* RV_MON_TYPE */

include/rv/ha_monitor.h Normal file

@@ -0,0 +1,478 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2025-2028 Red Hat, Inc. Gabriele Monaco <gmonaco@redhat.com>
*
* Hybrid automata (HA) monitor functions, to be used together
* with automata models in C generated by the rvgen tool.
*
* This type of monitor extends the Deterministic automata (DA) class by
* adding a set of environment variables (e.g. clocks) that can be used to
* constrain the valid transitions.
*
* The rvgen tool is available at tools/verification/rvgen/
*
* For further information, see:
* Documentation/trace/rv/monitor_synthesis.rst
*/
#ifndef _RV_HA_MONITOR_H
#define _RV_HA_MONITOR_H
#include <rv/automata.h>
#ifndef da_id_type
#define da_id_type int
#endif
static inline void ha_monitor_init_env(struct da_monitor *da_mon);
static inline void ha_monitor_reset_env(struct da_monitor *da_mon);
static inline void ha_setup_timer(struct ha_monitor *ha_mon);
static inline bool ha_cancel_timer(struct ha_monitor *ha_mon);
static bool ha_monitor_handle_constraint(struct da_monitor *da_mon,
enum states curr_state,
enum events event,
enum states next_state,
da_id_type id);
#define da_monitor_event_hook ha_monitor_handle_constraint
#define da_monitor_init_hook ha_monitor_init_env
#define da_monitor_reset_hook ha_monitor_reset_env
#include <rv/da_monitor.h>
#include <linux/seq_buf.h>
/* This simplifies things since da_mon and ha_mon coexist in the same union */
_Static_assert(offsetof(struct ha_monitor, da_mon) == 0,
"da_mon must be the first element in an ha_mon!");
#define to_ha_monitor(da) container_of(da, struct ha_monitor, da_mon)
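The _Static_assert above guarantees that a struct da_monitor pointer and its enclosing struct ha_monitor pointer are interchangeable, which is what to_ha_monitor() relies on. The offsetof/container_of mechanics in isolation (`struct da`/`struct ha` are stand-ins for the real types):

```c
#include <assert.h>
#include <stddef.h>

/* container_of as in the kernel, reduced to what this check needs */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct da { int state; };
struct ha {
	struct da da_mon;	/* must stay first, as the assert enforces */
	unsigned long env[2];
};

_Static_assert(offsetof(struct ha, da_mon) == 0,
	       "da_mon must be the first member");

static struct ha one;

/* With offset 0, this is a pure pointer reinterpretation. */
static struct ha *to_ha(struct da *d)
{
	return container_of(d, struct ha, da_mon);
}
```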
#define ENV_MAX CONCATENATE(env_max_, MONITOR_NAME)
#define ENV_MAX_STORED CONCATENATE(env_max_stored_, MONITOR_NAME)
#define envs CONCATENATE(envs_, MONITOR_NAME)
/* Environment storage before being reset */
#define ENV_INVALID_VALUE U64_MAX
/* Error with no event occurs only on timeouts */
#define EVENT_NONE EVENT_MAX
#define EVENT_NONE_LBL "none"
#define ENV_BUFFER_SIZE 64
#ifdef CONFIG_RV_REACTORS
/*
* ha_react - trigger the reaction after a failed environment constraint
*
* The transition from curr_state with event is otherwise valid, but the
* environment constraint is false. This function can be called also with no
* event from a timer (state constraints only).
*/
static void ha_react(enum states curr_state, enum events event, char *env)
{
rv_react(&rv_this,
"rv: monitor %s does not allow event %s on state %s with env %s\n",
__stringify(MONITOR_NAME),
event == EVENT_NONE ? EVENT_NONE_LBL : model_get_event_name(event),
model_get_state_name(curr_state), env);
}
#else /* CONFIG_RV_REACTORS */
static void ha_react(enum states curr_state, enum events event, char *env) { }
#endif
/*
* model_get_env_name - return the (string) name of the given env
*/
static char *model_get_env_name(enum envs env)
{
if ((env < 0) || (env >= ENV_MAX))
return "INVALID";
return RV_AUTOMATON_NAME.env_names[env];
}
/*
* Monitors requiring a timer implementation need to request it explicitly.
*/
#ifndef HA_TIMER_TYPE
#define HA_TIMER_TYPE HA_TIMER_NONE
#endif
#if HA_TIMER_TYPE == HA_TIMER_WHEEL
static void ha_monitor_timer_callback(struct timer_list *timer);
#elif HA_TIMER_TYPE == HA_TIMER_HRTIMER
static enum hrtimer_restart ha_monitor_timer_callback(struct hrtimer *hrtimer);
#endif
/*
* ktime_get_ns() is expensive; since we usually don't require precise
* accounting of changes within the same event, cache the current time at the
* beginning of the constraint handler and use the cached value for subsequent
* calls. Monitors without ns clocks automatically skip this.
*/
#ifdef HA_CLK_NS
#define ha_get_ns() ktime_get_ns()
#else
#define ha_get_ns() 0
#endif /* HA_CLK_NS */
/* Should be supplied by the monitor */
static u64 ha_get_env(struct ha_monitor *ha_mon, enum envs env, u64 time_ns);
static bool ha_verify_constraint(struct ha_monitor *ha_mon,
enum states curr_state,
enum events event,
enum states next_state,
u64 time_ns);
/*
* ha_monitor_reset_all_stored - reset all environment variables in the monitor
*/
static inline void ha_monitor_reset_all_stored(struct ha_monitor *ha_mon)
{
for (int i = 0; i < ENV_MAX_STORED; i++)
WRITE_ONCE(ha_mon->env_store[i], ENV_INVALID_VALUE);
}
/*
* ha_monitor_init_env - setup timer and reset all environment
*
* Called from a hook in the DA start functions, it supplies the da_mon
* corresponding to the current ha_mon.
* Not all hybrid automata require the timer; still, set it up for simplicity.
*/
static inline void ha_monitor_init_env(struct da_monitor *da_mon)
{
struct ha_monitor *ha_mon = to_ha_monitor(da_mon);
ha_monitor_reset_all_stored(ha_mon);
ha_setup_timer(ha_mon);
}
/*
* ha_monitor_reset_env - stop timer and reset all environment
*
* Called from a hook in the DA reset functions, it supplies the da_mon
* corresponding to the current ha_mon.
* Not all hybrid automata require the timer; still, clear it for simplicity.
*/
static inline void ha_monitor_reset_env(struct da_monitor *da_mon)
{
struct ha_monitor *ha_mon = to_ha_monitor(da_mon);
/* Initialisation resets the monitor before initialising the timer */
if (likely(da_monitoring(da_mon)))
ha_cancel_timer(ha_mon);
}
/*
* ha_monitor_env_invalid - return true if env has not been initialised
*/
static inline bool ha_monitor_env_invalid(struct ha_monitor *ha_mon, enum envs env)
{
return READ_ONCE(ha_mon->env_store[env]) == ENV_INVALID_VALUE;
}
static inline void ha_get_env_string(struct seq_buf *s,
struct ha_monitor *ha_mon, u64 time_ns)
{
const char *format_str = "%s=%llu";
for (int i = 0; i < ENV_MAX; i++) {
seq_buf_printf(s, format_str, model_get_env_name(i),
ha_get_env(ha_mon, i, time_ns));
format_str = ",%s=%llu";
}
}
#if RV_MON_TYPE == RV_MON_GLOBAL || RV_MON_TYPE == RV_MON_PER_CPU
static inline void ha_trace_error_env(struct ha_monitor *ha_mon,
char *curr_state, char *event, char *env,
da_id_type id)
{
CONCATENATE(trace_error_env_, MONITOR_NAME)(curr_state, event, env);
}
#elif RV_MON_TYPE == RV_MON_PER_TASK || RV_MON_TYPE == RV_MON_PER_OBJ
#define ha_get_target(ha_mon) da_get_target(&ha_mon->da_mon)
static inline void ha_trace_error_env(struct ha_monitor *ha_mon,
char *curr_state, char *event, char *env,
da_id_type id)
{
CONCATENATE(trace_error_env_, MONITOR_NAME)(id, curr_state, event, env);
}
#endif /* RV_MON_TYPE */
/*
* ha_get_monitor - return the current monitor
*/
#define ha_get_monitor(...) to_ha_monitor(da_get_monitor(__VA_ARGS__))
/*
* ha_monitor_handle_constraint - handle the constraint on the current transition
*
* If the monitor implementation defines a constraint in the transition from
* curr_state to event, react and trace appropriately as well as return false.
* This function is called from the hook in the DA event handle function and
* triggers a failure in the monitor.
*/
static bool ha_monitor_handle_constraint(struct da_monitor *da_mon,
enum states curr_state,
enum events event,
enum states next_state,
da_id_type id)
{
struct ha_monitor *ha_mon = to_ha_monitor(da_mon);
u64 time_ns = ha_get_ns();
DECLARE_SEQ_BUF(env_string, ENV_BUFFER_SIZE);
if (ha_verify_constraint(ha_mon, curr_state, event, next_state, time_ns))
return true;
ha_get_env_string(&env_string, ha_mon, time_ns);
ha_react(curr_state, event, env_string.buffer);
ha_trace_error_env(ha_mon,
model_get_state_name(curr_state),
model_get_event_name(event),
env_string.buffer, id);
return false;
}
static inline void __ha_monitor_timer_callback(struct ha_monitor *ha_mon)
{
enum states curr_state = READ_ONCE(ha_mon->da_mon.curr_state);
DECLARE_SEQ_BUF(env_string, ENV_BUFFER_SIZE);
u64 time_ns = ha_get_ns();
ha_get_env_string(&env_string, ha_mon, time_ns);
ha_react(curr_state, EVENT_NONE, env_string.buffer);
ha_trace_error_env(ha_mon, model_get_state_name(curr_state),
EVENT_NONE_LBL, env_string.buffer,
da_get_id(&ha_mon->da_mon));
da_monitor_reset(&ha_mon->da_mon);
}
/*
* The clock variables have 2 different representations in the env_store:
* - The guard representation is the timestamp of the last reset
* - The invariant representation is the timestamp when the invariant expires
* As the representations are incompatible, care must be taken when switching
* between them: the invariant representation can only be used when starting a
* timer when the previous representation was guard (e.g. no other invariant
* started since the last reset operation).
* Likewise, switching from invariant to guard representation without a reset
* can be done only by subtracting the exact value used to start the invariant.
*
* Reading the environment variable (ha_get_clk) also reflects this difference:
* any reads in states that have an invariant return the (possibly negative)
* time since expiration; other reads return the time since the last reset.
*/
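The two representations described above can be modelled with a single variable: a reset stores the current timestamp (guard form), arming an invariant stores the expiration timestamp, and converting back subtracts the exact value used to arm it. A userspace sketch with explicit timestamps instead of ktime (all function names here are illustrative, not the kernel helpers):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t env_store;	/* one clock variable, as in env_store[] */

/* Guard representation: timestamp of the last reset. */
static void reset_clk(uint64_t now)	{ env_store = now; }
static uint64_t get_clk(uint64_t now)	{ return now - env_store; }

/* Invariant representation: timestamp when the invariant expires. */
static void set_invariant(uint64_t value, uint64_t now)
{
	env_store = now + value;
}

static int invariant_holds(uint64_t now)
{
	return env_store >= now;
}

/* Back to guard form: subtract the exact value used to arm the invariant. */
static void inv_to_guard(uint64_t value)
{
	env_store -= value;
}
```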
/*
* Helper functions for env variables describing clocks with ns granularity
*/
static inline u64 ha_get_clk_ns(struct ha_monitor *ha_mon, enum envs env, u64 time_ns)
{
return time_ns - READ_ONCE(ha_mon->env_store[env]);
}
static inline void ha_reset_clk_ns(struct ha_monitor *ha_mon, enum envs env, u64 time_ns)
{
WRITE_ONCE(ha_mon->env_store[env], time_ns);
}
static inline void ha_set_invariant_ns(struct ha_monitor *ha_mon, enum envs env,
u64 value, u64 time_ns)
{
WRITE_ONCE(ha_mon->env_store[env], time_ns + value);
}
static inline bool ha_check_invariant_ns(struct ha_monitor *ha_mon,
enum envs env, u64 time_ns)
{
return READ_ONCE(ha_mon->env_store[env]) >= time_ns;
}
/*
* ha_invariant_passed_ns - prepare the invariant and return the time since reset
*/
static inline u64 ha_invariant_passed_ns(struct ha_monitor *ha_mon, enum envs env,
u64 expire, u64 time_ns)
{
u64 passed = 0;
if (env < 0 || env >= ENV_MAX_STORED)
return 0;
if (ha_monitor_env_invalid(ha_mon, env))
return 0;
passed = ha_get_env(ha_mon, env, time_ns);
ha_set_invariant_ns(ha_mon, env, expire - passed, time_ns);
return passed;
}
/*
* Helper functions for env variables describing clocks with jiffy granularity
*/
static inline u64 ha_get_clk_jiffy(struct ha_monitor *ha_mon, enum envs env)
{
return get_jiffies_64() - READ_ONCE(ha_mon->env_store[env]);
}
static inline void ha_reset_clk_jiffy(struct ha_monitor *ha_mon, enum envs env)
{
WRITE_ONCE(ha_mon->env_store[env], get_jiffies_64());
}
static inline void ha_set_invariant_jiffy(struct ha_monitor *ha_mon,
enum envs env, u64 value)
{
WRITE_ONCE(ha_mon->env_store[env], get_jiffies_64() + value);
}
static inline bool ha_check_invariant_jiffy(struct ha_monitor *ha_mon,
enum envs env, u64 time_ns)
{
return time_after64(READ_ONCE(ha_mon->env_store[env]), get_jiffies_64());
}
/*
* ha_invariant_passed_jiffy - prepare the invariant and return the time since reset
*/
static inline u64 ha_invariant_passed_jiffy(struct ha_monitor *ha_mon, enum envs env,
u64 expire, u64 time_ns)
{
u64 passed = 0;
if (env < 0 || env >= ENV_MAX_STORED)
return 0;
if (ha_monitor_env_invalid(ha_mon, env))
return 0;
passed = ha_get_env(ha_mon, env, time_ns);
ha_set_invariant_jiffy(ha_mon, env, expire - passed);
return passed;
}
/*
* Retrieve the last reset time (guard representation) from the invariant
* representation (expiration).
* It is the caller's responsibility to make sure the storage was actually in
* the invariant representation (e.g. the current state has an invariant).
* The provided value must be the same one used when starting the invariant.
*
* This function's access to the storage is NOT atomic, due to the rarity of
* its use. If a monitor allows writes concurrent with this, other things are
* likely broken and the model needs rethinking or additional locking.
*/
static inline void ha_inv_to_guard(struct ha_monitor *ha_mon, enum envs env,
u64 value, u64 time_ns)
{
WRITE_ONCE(ha_mon->env_store[env], READ_ONCE(ha_mon->env_store[env]) - value);
}
#if HA_TIMER_TYPE == HA_TIMER_WHEEL
/*
* Helper functions to handle the monitor timer.
Not all monitors require a timer; in that case the timer is set up but
never armed.
Timers start from the last reset of the supplied env, or from now if env is
not a valid environment variable. If env was not initialised, no timer starts.
Timers can expire on any CPU unless the monitor is per-cpu,
in which case we assume every event occurs on the local CPU.
*/
static void ha_monitor_timer_callback(struct timer_list *timer)
{
struct ha_monitor *ha_mon = container_of(timer, struct ha_monitor, timer);
__ha_monitor_timer_callback(ha_mon);
}
static inline void ha_setup_timer(struct ha_monitor *ha_mon)
{
int mode = 0;
if (RV_MON_TYPE == RV_MON_PER_CPU)
mode |= TIMER_PINNED;
timer_setup(&ha_mon->timer, ha_monitor_timer_callback, mode);
}
static inline void ha_start_timer_jiffy(struct ha_monitor *ha_mon, enum envs env,
u64 expire, u64 time_ns)
{
u64 passed = ha_invariant_passed_jiffy(ha_mon, env, expire, time_ns);
mod_timer(&ha_mon->timer, get_jiffies_64() + expire - passed);
}
static inline void ha_start_timer_ns(struct ha_monitor *ha_mon, enum envs env,
u64 expire, u64 time_ns)
{
u64 passed = ha_invariant_passed_ns(ha_mon, env, expire, time_ns);
ha_start_timer_jiffy(ha_mon, ENV_MAX_STORED,
nsecs_to_jiffies(expire - passed + TICK_NSEC - 1), time_ns);
}
/*
* ha_cancel_timer - Cancel the timer
*
* Returns:
* true when the timer was active
* false when the timer was not active or was running its callback
*/
static inline bool ha_cancel_timer(struct ha_monitor *ha_mon)
{
return timer_delete(&ha_mon->timer);
}
#elif HA_TIMER_TYPE == HA_TIMER_HRTIMER
/*
* Helper functions to handle the monitor timer.
Not all monitors require a timer; in that case the timer is set up but
never armed.
Timers start from the last reset of the supplied env, or from now if env is
not a valid environment variable. If env was not initialised, no timer starts.
Timers can expire on any CPU unless the monitor is per-cpu,
in which case we assume every event occurs on the local CPU.
*/
static enum hrtimer_restart ha_monitor_timer_callback(struct hrtimer *hrtimer)
{
struct ha_monitor *ha_mon = container_of(hrtimer, struct ha_monitor, hrtimer);
__ha_monitor_timer_callback(ha_mon);
return HRTIMER_NORESTART;
}
static inline void ha_setup_timer(struct ha_monitor *ha_mon)
{
hrtimer_setup(&ha_mon->hrtimer, ha_monitor_timer_callback,
CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
}
static inline void ha_start_timer_ns(struct ha_monitor *ha_mon, enum envs env,
u64 expire, u64 time_ns)
{
int mode = HRTIMER_MODE_REL_HARD;
u64 passed = ha_invariant_passed_ns(ha_mon, env, expire, time_ns);
if (RV_MON_TYPE == RV_MON_PER_CPU)
mode |= HRTIMER_MODE_PINNED;
hrtimer_start(&ha_mon->hrtimer, ns_to_ktime(expire - passed), mode);
}
static inline void ha_start_timer_jiffy(struct ha_monitor *ha_mon, enum envs env,
u64 expire, u64 time_ns)
{
u64 passed = ha_invariant_passed_jiffy(ha_mon, env, expire, time_ns);
ha_start_timer_ns(ha_mon, ENV_MAX_STORED,
jiffies_to_nsecs(expire - passed), time_ns);
}
/*
* ha_cancel_timer - Cancel the timer
*
* Returns:
* true when the timer was active
* false when the timer was not active or was running its callback
*/
static inline bool ha_cancel_timer(struct ha_monitor *ha_mon)
{
return hrtimer_try_to_cancel(&ha_mon->hrtimer) == 1;
}
#else /* HA_TIMER_NONE */
/*
* Start function is intentionally not defined, monitors using timers must
* set HA_TIMER_TYPE to either HA_TIMER_WHEEL or HA_TIMER_HRTIMER.
*/
static inline void ha_setup_timer(struct ha_monitor *ha_mon) { }
static inline bool ha_cancel_timer(struct ha_monitor *ha_mon)
{
return false;
}
#endif
#endif


@@ -896,6 +896,32 @@ DECLARE_TRACE(sched_set_need_resched,
TP_PROTO(struct task_struct *tsk, int cpu, int tif),
TP_ARGS(tsk, cpu, tif));
#define DL_OTHER 0
#define DL_TASK 1
#define DL_SERVER_FAIR 2
#define DL_SERVER_EXT 3
DECLARE_TRACE(sched_dl_throttle,
TP_PROTO(struct sched_dl_entity *dl_se, int cpu, u8 type),
TP_ARGS(dl_se, cpu, type));
DECLARE_TRACE(sched_dl_replenish,
TP_PROTO(struct sched_dl_entity *dl_se, int cpu, u8 type),
TP_ARGS(dl_se, cpu, type));
/* Call to update_curr_dl_se not involving throttle or replenish */
DECLARE_TRACE(sched_dl_update,
TP_PROTO(struct sched_dl_entity *dl_se, int cpu, u8 type),
TP_ARGS(dl_se, cpu, type));
DECLARE_TRACE(sched_dl_server_start,
TP_PROTO(struct sched_dl_entity *dl_se, int cpu, u8 type),
TP_ARGS(dl_se, cpu, type));
DECLARE_TRACE(sched_dl_server_stop,
TP_PROTO(struct sched_dl_entity *dl_se, int cpu, u8 type),
TP_ARGS(dl_se, cpu, type));
#endif /* _TRACE_SCHED_H */
/* This part must be outside protection */


@@ -122,6 +122,11 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_compute_energy_tp);
EXPORT_TRACEPOINT_SYMBOL_GPL(sched_entry_tp);
EXPORT_TRACEPOINT_SYMBOL_GPL(sched_exit_tp);
EXPORT_TRACEPOINT_SYMBOL_GPL(sched_set_need_resched_tp);
EXPORT_TRACEPOINT_SYMBOL_GPL(sched_dl_throttle_tp);
EXPORT_TRACEPOINT_SYMBOL_GPL(sched_dl_replenish_tp);
EXPORT_TRACEPOINT_SYMBOL_GPL(sched_dl_update_tp);
EXPORT_TRACEPOINT_SYMBOL_GPL(sched_dl_server_start_tp);
EXPORT_TRACEPOINT_SYMBOL_GPL(sched_dl_server_stop_tp);
DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
DEFINE_PER_CPU(struct rnd_state, sched_rnd_state);


@@ -18,6 +18,7 @@
#include <linux/cpuset.h>
#include <linux/sched/clock.h>
#include <linux/sched/deadline.h>
#include <uapi/linux/sched/types.h>
#include "sched.h"
#include "pelt.h"
@@ -57,17 +58,6 @@ static int __init sched_dl_sysctl_init(void)
late_initcall(sched_dl_sysctl_init);
#endif /* CONFIG_SYSCTL */
static bool dl_server(struct sched_dl_entity *dl_se)
{
return dl_se->dl_server;
}
static inline struct task_struct *dl_task_of(struct sched_dl_entity *dl_se)
{
BUG_ON(dl_server(dl_se));
return container_of(dl_se, struct task_struct, dl);
}
static inline struct rq *rq_of_dl_rq(struct dl_rq *dl_rq)
{
return container_of(dl_rq, struct rq, dl);
@@ -115,6 +105,19 @@ static inline bool is_dl_boosted(struct sched_dl_entity *dl_se)
}
#endif /* !CONFIG_RT_MUTEXES */
static inline u8 dl_get_type(struct sched_dl_entity *dl_se, struct rq *rq)
{
if (!dl_server(dl_se))
return DL_TASK;
if (dl_se == &rq->fair_server)
return DL_SERVER_FAIR;
#ifdef CONFIG_SCHED_CLASS_EXT
if (dl_se == &rq->ext_server)
return DL_SERVER_EXT;
#endif
return DL_OTHER;
}
static inline struct dl_bw *dl_bw_of(int i)
{
RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
@@ -733,6 +736,7 @@ static inline void replenish_dl_new_period(struct sched_dl_entity *dl_se,
dl_se->dl_throttled = 1;
dl_se->dl_defer_armed = 1;
}
trace_sched_dl_replenish_tp(dl_se, cpu_of(rq), dl_get_type(dl_se, rq));
}
/*
@@ -848,6 +852,8 @@ static void replenish_dl_entity(struct sched_dl_entity *dl_se)
if (dl_se->dl_throttled)
dl_se->dl_throttled = 0;
trace_sched_dl_replenish_tp(dl_se, cpu_of(rq), dl_get_type(dl_se, rq));
/*
* If this is the replenishment of a deferred reservation,
* clear the flag and return.
@@ -974,22 +980,6 @@ update_dl_revised_wakeup(struct sched_dl_entity *dl_se, struct rq *rq)
dl_se->runtime = (dl_se->dl_density * laxity) >> BW_SHIFT;
}
/*
* Regarding the deadline, a task with implicit deadline has a relative
* deadline == relative period. A task with constrained deadline has a
* relative deadline <= relative period.
*
* We support constrained deadline tasks. However, there are some restrictions
* applied only for tasks which do not have an implicit deadline. See
* update_dl_entity() to know more about such restrictions.
*
* The dl_is_implicit() returns true if the task has an implicit deadline.
*/
static inline bool dl_is_implicit(struct sched_dl_entity *dl_se)
{
return dl_se->dl_deadline == dl_se->dl_period;
}
/*
* When a deadline entity is placed in the runqueue, its runtime and deadline
* might need to be updated. This is done by a CBS wake up rule. There are two
@@ -1345,6 +1335,7 @@ static inline void dl_check_constrained_dl(struct sched_dl_entity *dl_se)
dl_time_before(rq_clock(rq), dl_next_period(dl_se))) {
if (unlikely(is_dl_boosted(dl_se) || !start_dl_timer(dl_se)))
return;
trace_sched_dl_throttle_tp(dl_se, cpu_of(rq), dl_get_type(dl_se, rq));
dl_se->dl_throttled = 1;
if (dl_se->runtime > 0)
dl_se->runtime = 0;
@@ -1508,6 +1499,7 @@ static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se, s64
throttle:
if (dl_runtime_exceeded(dl_se) || dl_se->dl_yielded) {
trace_sched_dl_throttle_tp(dl_se, cpu_of(rq), dl_get_type(dl_se, rq));
dl_se->dl_throttled = 1;
/* If requested, inform the user about runtime overruns. */
@@ -1532,6 +1524,8 @@ throttle:
if (!is_leftmost(dl_se, &rq->dl))
resched_curr(rq);
} else {
trace_sched_dl_update_tp(dl_se, cpu_of(rq), dl_get_type(dl_se, rq));
}
/*
@@ -1810,6 +1804,7 @@ void dl_server_start(struct sched_dl_entity *dl_se)
if (WARN_ON_ONCE(!cpu_online(cpu_of(rq))))
return;
trace_sched_dl_server_start_tp(dl_se, cpu_of(rq), dl_get_type(dl_se, rq));
dl_se->dl_server_active = 1;
enqueue_dl_entity(dl_se, ENQUEUE_WAKEUP);
if (!dl_task(dl_se->rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl))
@@ -1821,6 +1816,8 @@ void dl_server_stop(struct sched_dl_entity *dl_se)
if (!dl_server(dl_se) || !dl_server_active(dl_se))
return;
trace_sched_dl_server_stop_tp(dl_se, cpu_of(dl_se->rq),
dl_get_type(dl_se, dl_se->rq));
dequeue_dl_entity(dl_se, DEQUEUE_SLEEP);
hrtimer_try_to_cancel(&dl_se->dl_timer);
dl_se->dl_defer_armed = 0;


@@ -23,6 +23,19 @@ config LTL_MON_EVENTS_ID
config RV_LTL_MONITOR
bool
config RV_HA_MONITOR
bool
config HA_MON_EVENTS_IMPLICIT
select DA_MON_EVENTS_IMPLICIT
select RV_HA_MONITOR
bool
config HA_MON_EVENTS_ID
select DA_MON_EVENTS_ID
select RV_HA_MONITOR
bool
menuconfig RV
bool "Runtime Verification"
select TRACING
@@ -65,6 +78,11 @@ source "kernel/trace/rv/monitors/pagefault/Kconfig"
source "kernel/trace/rv/monitors/sleep/Kconfig"
# Add new rtapp monitors here
source "kernel/trace/rv/monitors/stall/Kconfig"
source "kernel/trace/rv/monitors/deadline/Kconfig"
source "kernel/trace/rv/monitors/nomiss/Kconfig"
# Add new deadline monitors here
# Add new monitors here
config RV_REACTORS


@@ -17,6 +17,9 @@ obj-$(CONFIG_RV_MON_STS) += monitors/sts/sts.o
obj-$(CONFIG_RV_MON_NRP) += monitors/nrp/nrp.o
obj-$(CONFIG_RV_MON_SSSW) += monitors/sssw/sssw.o
obj-$(CONFIG_RV_MON_OPID) += monitors/opid/opid.o
obj-$(CONFIG_RV_MON_STALL) += monitors/stall/stall.o
obj-$(CONFIG_RV_MON_DEADLINE) += monitors/deadline/deadline.o
obj-$(CONFIG_RV_MON_NOMISS) += monitors/nomiss/nomiss.o
# Add new monitors here
obj-$(CONFIG_RV_REACTORS) += rv_reactors.o
obj-$(CONFIG_RV_REACT_PRINTK) += reactor_printk.o


@@ -0,0 +1,10 @@
config RV_MON_DEADLINE
depends on RV
bool "deadline monitor"
help
Collection of monitors to check that the deadline scheduler and servers
behave according to their specifications. Enable this to enable all
deadline scheduler specifications supported by the current kernel.
For further information, see:
Documentation/trace/rv/monitor_deadline.rst


@@ -0,0 +1,44 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/rv.h>
#include <linux/kallsyms.h>
#define MODULE_NAME "deadline"
#include "deadline.h"
struct rv_monitor rv_deadline = {
.name = "deadline",
.description = "container for several deadline scheduler specifications.",
.enable = NULL,
.disable = NULL,
.reset = NULL,
.enabled = 0,
};
/* Used by other monitors */
struct sched_class *rv_ext_sched_class;
static int __init register_deadline(void)
{
if (IS_ENABLED(CONFIG_SCHED_CLASS_EXT)) {
rv_ext_sched_class = (void *)kallsyms_lookup_name("ext_sched_class");
if (!rv_ext_sched_class)
pr_warn("rv: Missing ext_sched_class, monitors may not work.\n");
}
return rv_register_monitor(&rv_deadline, NULL);
}
static void __exit unregister_deadline(void)
{
rv_unregister_monitor(&rv_deadline);
}
module_init(register_deadline);
module_exit(unregister_deadline);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Gabriele Monaco <gmonaco@redhat.com>");
MODULE_DESCRIPTION("deadline: container for several deadline scheduler specifications.");


@@ -0,0 +1,202 @@
/* SPDX-License-Identifier: GPL-2.0 */
#include <linux/kernel.h>
#include <linux/uaccess.h>
#include <linux/sched/deadline.h>
#include <asm/syscall.h>
#include <uapi/linux/sched/types.h>
#include <trace/events/sched.h>
/*
* Dummy values if not available
*/
#ifndef __NR_sched_setscheduler
#define __NR_sched_setscheduler -__COUNTER__
#endif
#ifndef __NR_sched_setattr
#define __NR_sched_setattr -__COUNTER__
#endif
extern struct rv_monitor rv_deadline;
/* Initialised when registering the deadline container */
extern struct sched_class *rv_ext_sched_class;
/*
* If both have dummy values, the syscalls are not supported and we don't even
* need to register the handler.
*/
static inline bool should_skip_syscall_handle(void)
{
return __NR_sched_setattr < 0 && __NR_sched_setscheduler < 0;
}
/*
* is_supported_type - return true if @type is supported by the deadline monitors
*/
static inline bool is_supported_type(u8 type)
{
return type == DL_TASK || type == DL_SERVER_FAIR || type == DL_SERVER_EXT;
}
/*
* is_server_type - return true if @type is a supported server
*/
static inline bool is_server_type(u8 type)
{
return is_supported_type(type) && type != DL_TASK;
}
/*
Use negative numbers for the servers.
Currently there is only one fair server per CPU; this may change in the future.
*/
#define fair_server_id(cpu) (-cpu)
#define ext_server_id(cpu) (-cpu - num_possible_cpus())
#define NO_SERVER_ID (-2 * num_possible_cpus())
/*
* Get a unique id used for dl entities
*
The cpu is not required for tasks, as the pid is used there; if this function
is called on a dl_se that is known to correspond to a task, DL_TASK can be
used in place of cpu.
We need the cpu for servers, as it is provided in the tracepoint and we
cannot easily retrieve it from the dl_se (that requires the struct rq definition).
*/
static inline int get_entity_id(struct sched_dl_entity *dl_se, int cpu, u8 type)
{
if (dl_server(dl_se) && type != DL_TASK) {
if (type == DL_SERVER_FAIR)
return fair_server_id(cpu);
if (type == DL_SERVER_EXT)
return ext_server_id(cpu);
return NO_SERVER_ID;
}
return dl_task_of(dl_se)->pid;
}
static inline bool task_is_scx_enabled(struct task_struct *tsk)
{
return IS_ENABLED(CONFIG_SCHED_CLASS_EXT) &&
tsk->sched_class == rv_ext_sched_class;
}
/* Expand id and target as arguments for da functions */
#define EXPAND_ID(dl_se, cpu, type) get_entity_id(dl_se, cpu, type), dl_se
#define EXPAND_ID_TASK(tsk) get_entity_id(&tsk->dl, task_cpu(tsk), DL_TASK), &tsk->dl
static inline u8 get_server_type(struct task_struct *tsk)
{
if (tsk->policy == SCHED_NORMAL || tsk->policy == SCHED_EXT ||
tsk->policy == SCHED_BATCH || tsk->policy == SCHED_IDLE)
return task_is_scx_enabled(tsk) ? DL_SERVER_EXT : DL_SERVER_FAIR;
return DL_OTHER;
}
static inline int extract_params(struct pt_regs *regs, long id, pid_t *pid_out)
{
size_t size = offsetofend(struct sched_attr, sched_flags);
struct sched_attr __user *uattr, attr;
int new_policy = -1, ret;
unsigned long args[6];
switch (id) {
case __NR_sched_setscheduler:
syscall_get_arguments(current, regs, args);
*pid_out = args[0];
new_policy = args[1];
break;
case __NR_sched_setattr:
syscall_get_arguments(current, regs, args);
*pid_out = args[0];
uattr = (struct sched_attr __user *)args[1];
/*
Just copy up to sched_flags; we are not interested in the fields after that
*/
ret = copy_struct_from_user(&attr, size, uattr, size);
if (ret)
return ret;
if (attr.sched_flags & SCHED_FLAG_KEEP_POLICY)
return -EINVAL;
new_policy = attr.sched_policy;
break;
default:
return -EINVAL;
}
return new_policy & ~SCHED_RESET_ON_FORK;
}
/* Helper functions requiring DA/HA utilities */
#ifdef RV_MON_TYPE
/*
* get_server - get the server of the given type associated with a task
*
* If the task is a boosted task, the server is available in the task_struct,
* otherwise grab the dl entity saved for the CPU where the task is enqueued.
* This function assumes the task is enqueued somewhere.
*/
static inline struct sched_dl_entity *get_server(struct task_struct *tsk, u8 type)
{
if (tsk->dl_server && get_server_type(tsk) == type)
return tsk->dl_server;
if (type == DL_SERVER_FAIR)
return da_get_target_by_id(fair_server_id(task_cpu(tsk)));
if (type == DL_SERVER_EXT)
return da_get_target_by_id(ext_server_id(task_cpu(tsk)));
return NULL;
}
/*
* Initialise monitors for all tasks and pre-allocate the storage for servers.
* This is necessary since we don't have access to the servers here and
* allocation can cause deadlocks from their tracepoints. We can only fill
* pre-initialised storage from there.
*/
static inline int init_storage(bool skip_tasks)
{
struct task_struct *g, *p;
int cpu;
for_each_possible_cpu(cpu) {
if (!da_create_empty_storage(fair_server_id(cpu)))
goto fail;
if (IS_ENABLED(CONFIG_SCHED_CLASS_EXT) &&
!da_create_empty_storage(ext_server_id(cpu)))
goto fail;
}
if (skip_tasks)
return 0;
read_lock(&tasklist_lock);
for_each_process_thread(g, p) {
if (p->policy == SCHED_DEADLINE) {
if (!da_create_storage(EXPAND_ID_TASK(p), NULL)) {
read_unlock(&tasklist_lock);
goto fail;
}
}
}
read_unlock(&tasklist_lock);
return 0;
fail:
da_monitor_destroy();
return -ENOMEM;
}
static void __maybe_unused handle_newtask(void *data, struct task_struct *task, u64 flags)
{
/* Might be superfluous, as tasks are not started with this policy. */
if (task->policy == SCHED_DEADLINE)
da_create_storage(EXPAND_ID_TASK(task), NULL);
}
static void __maybe_unused handle_exit(void *data, struct task_struct *p, bool group_dead)
{
if (p->policy == SCHED_DEADLINE)
da_destroy_storage(get_entity_id(&p->dl, DL_TASK, DL_TASK));
}
#endif


@@ -0,0 +1,15 @@
# SPDX-License-Identifier: GPL-2.0-only
#
config RV_MON_NOMISS
depends on RV
depends on HAVE_SYSCALL_TRACEPOINTS
depends on RV_MON_DEADLINE
default y
select HA_MON_EVENTS_ID
bool "nomiss monitor"
help
Monitor to ensure dl entities run to completion before their deadline.
This monitor is part of the deadline monitors collection.
For further information, see:
Documentation/trace/rv/monitor_deadline.rst


@@ -0,0 +1,293 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/ftrace.h>
#include <linux/tracepoint.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/rv.h>
#include <rv/instrumentation.h>
#define MODULE_NAME "nomiss"
#include <uapi/linux/sched/types.h>
#include <trace/events/syscalls.h>
#include <trace/events/sched.h>
#include <trace/events/task.h>
#include <rv_trace.h>
#define RV_MON_TYPE RV_MON_PER_OBJ
#define HA_TIMER_TYPE HA_TIMER_WHEEL
/* The start condition is on sched_switch, it's dangerous to allocate there */
#define DA_SKIP_AUTO_ALLOC
typedef struct sched_dl_entity *monitor_target;
#include "nomiss.h"
#include <rv/ha_monitor.h>
#include <monitors/deadline/deadline.h>
/*
* User configurable deadline threshold. If the total utilisation of deadline
* tasks is larger than 1, they are only guaranteed bounded tardiness. See
* Documentation/scheduler/sched-deadline.rst for more details.
The minimum tardiness without sched_feat(HRTICK_DL) is 1 tick, to accommodate
the throttle enforced on the next tick.
*/
static u64 deadline_thresh = TICK_NSEC;
module_param(deadline_thresh, ullong, 0644);
#define DEADLINE_NS(ha_mon) (ha_get_target(ha_mon)->dl_deadline + deadline_thresh)
static u64 ha_get_env(struct ha_monitor *ha_mon, enum envs_nomiss env, u64 time_ns)
{
if (env == clk_nomiss)
return ha_get_clk_ns(ha_mon, env, time_ns);
else if (env == is_constr_dl_nomiss)
return !dl_is_implicit(ha_get_target(ha_mon));
else if (env == is_defer_nomiss)
return ha_get_target(ha_mon)->dl_defer;
return ENV_INVALID_VALUE;
}
static void ha_reset_env(struct ha_monitor *ha_mon, enum envs_nomiss env, u64 time_ns)
{
if (env == clk_nomiss)
ha_reset_clk_ns(ha_mon, env, time_ns);
}
static inline bool ha_verify_invariants(struct ha_monitor *ha_mon,
enum states curr_state, enum events event,
enum states next_state, u64 time_ns)
{
if (curr_state == ready_nomiss)
return ha_check_invariant_ns(ha_mon, clk_nomiss, time_ns);
else if (curr_state == running_nomiss)
return ha_check_invariant_ns(ha_mon, clk_nomiss, time_ns);
return true;
}
static inline void ha_convert_inv_guard(struct ha_monitor *ha_mon,
enum states curr_state, enum events event,
enum states next_state, u64 time_ns)
{
if (curr_state == next_state)
return;
if (curr_state == ready_nomiss)
ha_inv_to_guard(ha_mon, clk_nomiss, DEADLINE_NS(ha_mon), time_ns);
else if (curr_state == running_nomiss)
ha_inv_to_guard(ha_mon, clk_nomiss, DEADLINE_NS(ha_mon), time_ns);
}
static inline bool ha_verify_guards(struct ha_monitor *ha_mon,
enum states curr_state, enum events event,
enum states next_state, u64 time_ns)
{
bool res = true;
if (curr_state == ready_nomiss && event == dl_replenish_nomiss)
ha_reset_env(ha_mon, clk_nomiss, time_ns);
else if (curr_state == ready_nomiss && event == dl_throttle_nomiss)
res = ha_get_env(ha_mon, is_defer_nomiss, time_ns) == 1ull;
else if (curr_state == idle_nomiss && event == dl_replenish_nomiss)
ha_reset_env(ha_mon, clk_nomiss, time_ns);
else if (curr_state == running_nomiss && event == dl_replenish_nomiss)
ha_reset_env(ha_mon, clk_nomiss, time_ns);
else if (curr_state == sleeping_nomiss && event == dl_replenish_nomiss)
ha_reset_env(ha_mon, clk_nomiss, time_ns);
else if (curr_state == sleeping_nomiss && event == dl_throttle_nomiss)
res = ha_get_env(ha_mon, is_constr_dl_nomiss, time_ns) == 1ull ||
ha_get_env(ha_mon, is_defer_nomiss, time_ns) == 1ull;
else if (curr_state == throttled_nomiss && event == dl_replenish_nomiss)
ha_reset_env(ha_mon, clk_nomiss, time_ns);
return res;
}
static inline void ha_setup_invariants(struct ha_monitor *ha_mon,
enum states curr_state, enum events event,
enum states next_state, u64 time_ns)
{
if (next_state == curr_state && event != dl_replenish_nomiss)
return;
if (next_state == ready_nomiss)
ha_start_timer_ns(ha_mon, clk_nomiss, DEADLINE_NS(ha_mon), time_ns);
else if (next_state == running_nomiss)
ha_start_timer_ns(ha_mon, clk_nomiss, DEADLINE_NS(ha_mon), time_ns);
else if (curr_state == ready_nomiss)
ha_cancel_timer(ha_mon);
else if (curr_state == running_nomiss)
ha_cancel_timer(ha_mon);
}
static bool ha_verify_constraint(struct ha_monitor *ha_mon,
enum states curr_state, enum events event,
enum states next_state, u64 time_ns)
{
if (!ha_verify_invariants(ha_mon, curr_state, event, next_state, time_ns))
return false;
ha_convert_inv_guard(ha_mon, curr_state, event, next_state, time_ns);
if (!ha_verify_guards(ha_mon, curr_state, event, next_state, time_ns))
return false;
ha_setup_invariants(ha_mon, curr_state, event, next_state, time_ns);
return true;
}
static void handle_dl_replenish(void *data, struct sched_dl_entity *dl_se,
int cpu, u8 type)
{
if (is_supported_type(type))
da_handle_event(EXPAND_ID(dl_se, cpu, type), dl_replenish_nomiss);
}
static void handle_dl_throttle(void *data, struct sched_dl_entity *dl_se,
int cpu, u8 type)
{
if (is_supported_type(type))
da_handle_event(EXPAND_ID(dl_se, cpu, type), dl_throttle_nomiss);
}
static void handle_dl_server_stop(void *data, struct sched_dl_entity *dl_se,
int cpu, u8 type)
{
/*
This isn't the standard use of da_handle_start_run_event, since this
event does not only occur from the initial state.
It is fine to use it here because the event always brings the monitor to
a known state, and the fact we "pretend" the transition starts from the
initial state has no side effect.
*/
if (is_supported_type(type))
da_handle_start_run_event(EXPAND_ID(dl_se, cpu, type), dl_server_stop_nomiss);
}
static inline void handle_server_switch(struct task_struct *next, int cpu, u8 type)
{
struct sched_dl_entity *dl_se = get_server(next, type);
if (dl_se && is_idle_task(next))
da_handle_event(EXPAND_ID(dl_se, cpu, type), dl_server_idle_nomiss);
}
static void handle_sched_switch(void *data, bool preempt,
struct task_struct *prev,
struct task_struct *next,
unsigned int prev_state)
{
int cpu = task_cpu(next);
if (prev_state != TASK_RUNNING && !preempt && prev->policy == SCHED_DEADLINE)
da_handle_event(EXPAND_ID_TASK(prev), sched_switch_suspend_nomiss);
if (next->policy == SCHED_DEADLINE)
da_handle_start_run_event(EXPAND_ID_TASK(next), sched_switch_in_nomiss);
/*
* The server is available in next only if the next task is boosted,
* otherwise we need to retrieve it.
Here the server continues in the running/armed state until actually
stopped; this works since we continue expecting a throttle.
*/
if (next->dl_server)
da_handle_start_event(EXPAND_ID(next->dl_server, cpu,
get_server_type(next)),
sched_switch_in_nomiss);
else {
handle_server_switch(next, cpu, DL_SERVER_FAIR);
if (IS_ENABLED(CONFIG_SCHED_CLASS_EXT))
handle_server_switch(next, cpu, DL_SERVER_EXT);
}
}
static void handle_sys_enter(void *data, struct pt_regs *regs, long id)
{
struct task_struct *p;
int new_policy = -1;
pid_t pid = 0;
new_policy = extract_params(regs, id, &pid);
if (new_policy < 0)
return;
guard(rcu)();
p = pid ? find_task_by_vpid(pid) : current;
if (unlikely(!p) || new_policy == p->policy)
return;
if (p->policy == SCHED_DEADLINE)
da_reset(EXPAND_ID_TASK(p));
else if (new_policy == SCHED_DEADLINE)
da_create_or_get(EXPAND_ID_TASK(p));
}
static void handle_sched_wakeup(void *data, struct task_struct *tsk)
{
if (tsk->policy == SCHED_DEADLINE)
da_handle_event(EXPAND_ID_TASK(tsk), sched_wakeup_nomiss);
}
static int enable_nomiss(void)
{
int retval;
retval = da_monitor_init();
if (retval)
return retval;
retval = init_storage(false);
if (retval)
return retval;
rv_attach_trace_probe("nomiss", sched_dl_replenish_tp, handle_dl_replenish);
rv_attach_trace_probe("nomiss", sched_dl_throttle_tp, handle_dl_throttle);
rv_attach_trace_probe("nomiss", sched_dl_server_stop_tp, handle_dl_server_stop);
rv_attach_trace_probe("nomiss", sched_switch, handle_sched_switch);
rv_attach_trace_probe("nomiss", sched_wakeup, handle_sched_wakeup);
if (!should_skip_syscall_handle())
rv_attach_trace_probe("nomiss", sys_enter, handle_sys_enter);
rv_attach_trace_probe("nomiss", task_newtask, handle_newtask);
rv_attach_trace_probe("nomiss", sched_process_exit, handle_exit);
return 0;
}
static void disable_nomiss(void)
{
rv_this.enabled = 0;
/* These are RCU writers; detach them earlier, hoping to quiesce a bit faster */
rv_detach_trace_probe("nomiss", task_newtask, handle_newtask);
rv_detach_trace_probe("nomiss", sched_process_exit, handle_exit);
if (!should_skip_syscall_handle())
rv_detach_trace_probe("nomiss", sys_enter, handle_sys_enter);
rv_detach_trace_probe("nomiss", sched_dl_replenish_tp, handle_dl_replenish);
rv_detach_trace_probe("nomiss", sched_dl_throttle_tp, handle_dl_throttle);
rv_detach_trace_probe("nomiss", sched_dl_server_stop_tp, handle_dl_server_stop);
rv_detach_trace_probe("nomiss", sched_switch, handle_sched_switch);
rv_detach_trace_probe("nomiss", sched_wakeup, handle_sched_wakeup);
da_monitor_destroy();
}
static struct rv_monitor rv_this = {
.name = "nomiss",
.description = "dl entities run to completion before their deadline.",
.enable = enable_nomiss,
.disable = disable_nomiss,
.reset = da_monitor_reset_all,
.enabled = 0,
};
static int __init register_nomiss(void)
{
return rv_register_monitor(&rv_this, &rv_deadline);
}
static void __exit unregister_nomiss(void)
{
rv_unregister_monitor(&rv_this);
}
module_init(register_nomiss);
module_exit(unregister_nomiss);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Gabriele Monaco <gmonaco@redhat.com>");
MODULE_DESCRIPTION("nomiss: dl entities run to completion before their deadline.");


@@ -0,0 +1,123 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Automatically generated C representation of nomiss automaton
* For further information about this format, see kernel documentation:
* Documentation/trace/rv/deterministic_automata.rst
*/
#define MONITOR_NAME nomiss
enum states_nomiss {
ready_nomiss,
idle_nomiss,
running_nomiss,
sleeping_nomiss,
throttled_nomiss,
state_max_nomiss,
};
#define INVALID_STATE state_max_nomiss
enum events_nomiss {
dl_replenish_nomiss,
dl_server_idle_nomiss,
dl_server_stop_nomiss,
dl_throttle_nomiss,
sched_switch_in_nomiss,
sched_switch_suspend_nomiss,
sched_wakeup_nomiss,
event_max_nomiss,
};
enum envs_nomiss {
clk_nomiss,
is_constr_dl_nomiss,
is_defer_nomiss,
env_max_nomiss,
env_max_stored_nomiss = is_constr_dl_nomiss,
};
_Static_assert(env_max_stored_nomiss <= MAX_HA_ENV_LEN, "Not enough slots");
#define HA_CLK_NS
struct automaton_nomiss {
char *state_names[state_max_nomiss];
char *event_names[event_max_nomiss];
char *env_names[env_max_nomiss];
unsigned char function[state_max_nomiss][event_max_nomiss];
unsigned char initial_state;
bool final_states[state_max_nomiss];
};
static const struct automaton_nomiss automaton_nomiss = {
.state_names = {
"ready",
"idle",
"running",
"sleeping",
"throttled",
},
.event_names = {
"dl_replenish",
"dl_server_idle",
"dl_server_stop",
"dl_throttle",
"sched_switch_in",
"sched_switch_suspend",
"sched_wakeup",
},
.env_names = {
"clk",
"is_constr_dl",
"is_defer",
},
.function = {
{
ready_nomiss,
idle_nomiss,
sleeping_nomiss,
throttled_nomiss,
running_nomiss,
INVALID_STATE,
ready_nomiss,
},
{
ready_nomiss,
idle_nomiss,
sleeping_nomiss,
throttled_nomiss,
running_nomiss,
INVALID_STATE,
INVALID_STATE,
},
{
running_nomiss,
idle_nomiss,
sleeping_nomiss,
throttled_nomiss,
running_nomiss,
sleeping_nomiss,
running_nomiss,
},
{
ready_nomiss,
sleeping_nomiss,
sleeping_nomiss,
throttled_nomiss,
running_nomiss,
INVALID_STATE,
ready_nomiss,
},
{
ready_nomiss,
throttled_nomiss,
INVALID_STATE,
throttled_nomiss,
INVALID_STATE,
throttled_nomiss,
throttled_nomiss,
},
},
.initial_state = ready_nomiss,
.final_states = { 1, 0, 0, 0, 0 },
};


@@ -0,0 +1,19 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Snippet to be included in rv_trace.h
*/
#ifdef CONFIG_RV_MON_NOMISS
DEFINE_EVENT(event_da_monitor_id, event_nomiss,
TP_PROTO(int id, char *state, char *event, char *next_state, bool final_state),
TP_ARGS(id, state, event, next_state, final_state));
DEFINE_EVENT(error_da_monitor_id, error_nomiss,
TP_PROTO(int id, char *state, char *event),
TP_ARGS(id, state, event));
DEFINE_EVENT(error_env_da_monitor_id, error_env_nomiss,
TP_PROTO(int id, char *state, char *event, char *env),
TP_ARGS(id, state, event, env));
#endif /* CONFIG_RV_MON_NOMISS */


@@ -2,18 +2,13 @@
#
config RV_MON_OPID
depends on RV
depends on TRACE_IRQFLAGS
depends on TRACE_PREEMPT_TOGGLE
depends on RV_MON_SCHED
default y if PREEMPT_RT
select DA_MON_EVENTS_IMPLICIT
default y
select HA_MON_EVENTS_IMPLICIT
bool "opid monitor"
help
Monitor to ensure operations like wakeup and need resched occur with
interrupts and preemption disabled or during IRQs, where preemption
may not be disabled explicitly.
This monitor is unstable on !PREEMPT_RT, say N unless you are testing it.
interrupts and preemption disabled.
For further information, see:
Documentation/trace/rv/monitor_sched.rst


@@ -10,94 +10,63 @@
#define MODULE_NAME "opid"
#include <trace/events/sched.h>
#include <trace/events/irq.h>
#include <trace/events/preemptirq.h>
#include <rv_trace.h>
#include <monitors/sched/sched.h>
#define RV_MON_TYPE RV_MON_PER_CPU
#include "opid.h"
#include <rv/da_monitor.h>
#include <rv/ha_monitor.h>
#ifdef CONFIG_X86_LOCAL_APIC
#include <asm/trace/irq_vectors.h>
static void handle_vector_irq_entry(void *data, int vector)
static u64 ha_get_env(struct ha_monitor *ha_mon, enum envs_opid env, u64 time_ns)
{
da_handle_event(irq_entry_opid);
}
static void attach_vector_irq(void)
{
rv_attach_trace_probe("opid", local_timer_entry, handle_vector_irq_entry);
if (IS_ENABLED(CONFIG_IRQ_WORK))
rv_attach_trace_probe("opid", irq_work_entry, handle_vector_irq_entry);
if (IS_ENABLED(CONFIG_SMP)) {
rv_attach_trace_probe("opid", reschedule_entry, handle_vector_irq_entry);
rv_attach_trace_probe("opid", call_function_entry, handle_vector_irq_entry);
rv_attach_trace_probe("opid", call_function_single_entry, handle_vector_irq_entry);
if (env == irq_off_opid)
return irqs_disabled();
else if (env == preempt_off_opid) {
/*
* If CONFIG_PREEMPTION is enabled, then the tracepoint itself disables
* preemption (adding one to the preempt_count). Since we are
* interested in the preempt_count at the time the tracepoint was
* hit, we consider 1 as still enabled.
*/
if (IS_ENABLED(CONFIG_PREEMPTION))
return (preempt_count() & PREEMPT_MASK) > 1;
return true;
}
return ENV_INVALID_VALUE;
}
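The preempt_off branch above has one subtlety worth spelling out: with CONFIG_PREEMPTION the tracepoint machinery itself holds one preemption reference, so a preempt_count of exactly 1 still means "preemption was enabled when the event fired". A standalone Python sketch of that check (PREEMPT_MASK is assumed to be the kernel's low-byte mask 0x000000ff; the helper name is illustrative, not a kernel API):

```python
PREEMPT_MASK = 0x000000ff  # assumed: low byte of preempt_count holds preempt-disable nesting

def preempt_off(preempt_count, preemption_kernel=True):
    """Model of the opid env check: was preemption off at tracepoint time?"""
    if preemption_kernel:
        # The tracepoint adds 1 to preempt_count, so a count of 1 is
        # still "enabled"; only a count above 1 means it was disabled.
        return (preempt_count & PREEMPT_MASK) > 1
    # Without CONFIG_PREEMPTION there is no reliable count to inspect.
    return True

assert not preempt_off(1)  # only the tracepoint's own reference
assert preempt_off(2)      # someone else had preemption disabled
```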
static void detach_vector_irq(void)
static inline bool ha_verify_guards(struct ha_monitor *ha_mon,
enum states curr_state, enum events event,
enum states next_state, u64 time_ns)
{
rv_detach_trace_probe("opid", local_timer_entry, handle_vector_irq_entry);
if (IS_ENABLED(CONFIG_IRQ_WORK))
rv_detach_trace_probe("opid", irq_work_entry, handle_vector_irq_entry);
if (IS_ENABLED(CONFIG_SMP)) {
rv_detach_trace_probe("opid", reschedule_entry, handle_vector_irq_entry);
rv_detach_trace_probe("opid", call_function_entry, handle_vector_irq_entry);
rv_detach_trace_probe("opid", call_function_single_entry, handle_vector_irq_entry);
}
bool res = true;
if (curr_state == any_opid && event == sched_need_resched_opid)
res = ha_get_env(ha_mon, irq_off_opid, time_ns) == 1ull;
else if (curr_state == any_opid && event == sched_waking_opid)
res = ha_get_env(ha_mon, irq_off_opid, time_ns) == 1ull &&
ha_get_env(ha_mon, preempt_off_opid, time_ns) == 1ull;
return res;
}
#else
/* We assume irq_entry tracepoints are sufficient on other architectures */
static void attach_vector_irq(void) { }
static void detach_vector_irq(void) { }
#endif
static void handle_irq_disable(void *data, unsigned long ip, unsigned long parent_ip)
static bool ha_verify_constraint(struct ha_monitor *ha_mon,
enum states curr_state, enum events event,
enum states next_state, u64 time_ns)
{
da_handle_event(irq_disable_opid);
}
if (!ha_verify_guards(ha_mon, curr_state, event, next_state, time_ns))
return false;
static void handle_irq_enable(void *data, unsigned long ip, unsigned long parent_ip)
{
da_handle_event(irq_enable_opid);
}
static void handle_irq_entry(void *data, int irq, struct irqaction *action)
{
da_handle_event(irq_entry_opid);
}
static void handle_preempt_disable(void *data, unsigned long ip, unsigned long parent_ip)
{
da_handle_event(preempt_disable_opid);
}
static void handle_preempt_enable(void *data, unsigned long ip, unsigned long parent_ip)
{
da_handle_event(preempt_enable_opid);
return true;
}
static void handle_sched_need_resched(void *data, struct task_struct *tsk, int cpu, int tif)
{
/* The monitor's initial state is not in_irq */
if (this_cpu_read(hardirq_context))
da_handle_event(sched_need_resched_opid);
else
da_handle_start_event(sched_need_resched_opid);
da_handle_start_run_event(sched_need_resched_opid);
}
static void handle_sched_waking(void *data, struct task_struct *p)
{
/* The monitor's initial state is not in_irq */
if (this_cpu_read(hardirq_context))
da_handle_event(sched_waking_opid);
else
da_handle_start_event(sched_waking_opid);
da_handle_start_run_event(sched_waking_opid);
}
static int enable_opid(void)
@@ -108,14 +77,8 @@ static int enable_opid(void)
if (retval)
return retval;
rv_attach_trace_probe("opid", irq_disable, handle_irq_disable);
rv_attach_trace_probe("opid", irq_enable, handle_irq_enable);
rv_attach_trace_probe("opid", irq_handler_entry, handle_irq_entry);
rv_attach_trace_probe("opid", preempt_disable, handle_preempt_disable);
rv_attach_trace_probe("opid", preempt_enable, handle_preempt_enable);
rv_attach_trace_probe("opid", sched_set_need_resched_tp, handle_sched_need_resched);
rv_attach_trace_probe("opid", sched_waking, handle_sched_waking);
attach_vector_irq();
return 0;
}
@@ -124,14 +87,8 @@ static void disable_opid(void)
{
rv_this.enabled = 0;
rv_detach_trace_probe("opid", irq_disable, handle_irq_disable);
rv_detach_trace_probe("opid", irq_enable, handle_irq_enable);
rv_detach_trace_probe("opid", irq_handler_entry, handle_irq_entry);
rv_detach_trace_probe("opid", preempt_disable, handle_preempt_disable);
rv_detach_trace_probe("opid", preempt_enable, handle_preempt_enable);
rv_detach_trace_probe("opid", sched_set_need_resched_tp, handle_sched_need_resched);
rv_detach_trace_probe("opid", sched_waking, handle_sched_waking);
detach_vector_irq();
da_monitor_destroy();
}


@@ -8,30 +8,31 @@
#define MONITOR_NAME opid
enum states_opid {
disabled_opid,
enabled_opid,
in_irq_opid,
irq_disabled_opid,
preempt_disabled_opid,
any_opid,
state_max_opid,
};
#define INVALID_STATE state_max_opid
enum events_opid {
irq_disable_opid,
irq_enable_opid,
irq_entry_opid,
preempt_disable_opid,
preempt_enable_opid,
sched_need_resched_opid,
sched_waking_opid,
event_max_opid,
};
enum envs_opid {
irq_off_opid,
preempt_off_opid,
env_max_opid,
env_max_stored_opid = irq_off_opid,
};
_Static_assert(env_max_stored_opid <= MAX_HA_ENV_LEN, "Not enough slots");
struct automaton_opid {
char *state_names[state_max_opid];
char *event_names[event_max_opid];
char *env_names[env_max_opid];
unsigned char function[state_max_opid][event_max_opid];
unsigned char initial_state;
bool final_states[state_max_opid];
@@ -39,68 +40,19 @@ struct automaton_opid {
static const struct automaton_opid automaton_opid = {
.state_names = {
"disabled",
"enabled",
"in_irq",
"irq_disabled",
"preempt_disabled",
"any",
},
.event_names = {
"irq_disable",
"irq_enable",
"irq_entry",
"preempt_disable",
"preempt_enable",
"sched_need_resched",
"sched_waking",
},
.function = {
{
INVALID_STATE,
preempt_disabled_opid,
disabled_opid,
INVALID_STATE,
irq_disabled_opid,
disabled_opid,
disabled_opid,
},
{
irq_disabled_opid,
INVALID_STATE,
INVALID_STATE,
preempt_disabled_opid,
enabled_opid,
INVALID_STATE,
INVALID_STATE,
},
{
INVALID_STATE,
enabled_opid,
in_irq_opid,
INVALID_STATE,
INVALID_STATE,
in_irq_opid,
in_irq_opid,
},
{
INVALID_STATE,
enabled_opid,
in_irq_opid,
disabled_opid,
INVALID_STATE,
irq_disabled_opid,
INVALID_STATE,
},
{
disabled_opid,
INVALID_STATE,
INVALID_STATE,
INVALID_STATE,
enabled_opid,
INVALID_STATE,
INVALID_STATE,
},
.env_names = {
"irq_off",
"preempt_off",
},
.initial_state = disabled_opid,
.final_states = { 0, 1, 0, 0, 0 },
.function = {
{ any_opid, any_opid },
},
.initial_state = any_opid,
.final_states = { 1 },
};


@@ -12,4 +12,8 @@ DEFINE_EVENT(event_da_monitor, event_opid,
DEFINE_EVENT(error_da_monitor, error_opid,
TP_PROTO(char *state, char *event),
TP_ARGS(state, event));
DEFINE_EVENT(error_env_da_monitor, error_env_opid,
TP_PROTO(char *state, char *event, char *env),
TP_ARGS(state, event, env));
#endif /* CONFIG_RV_MON_OPID */


@@ -49,6 +49,7 @@ static void ltl_atoms_init(struct task_struct *task, struct ltl_monitor *mon, bo
ltl_atom_set(mon, LTL_NANOSLEEP_TIMER_ABSTIME, false);
ltl_atom_set(mon, LTL_CLOCK_NANOSLEEP, false);
ltl_atom_set(mon, LTL_FUTEX_WAIT, false);
ltl_atom_set(mon, LTL_EPOLL_WAIT, false);
ltl_atom_set(mon, LTL_FUTEX_LOCK_PI, false);
ltl_atom_set(mon, LTL_BLOCK_ON_RT_MUTEX, false);
}
@@ -63,6 +64,7 @@ static void ltl_atoms_init(struct task_struct *task, struct ltl_monitor *mon, bo
ltl_atom_set(mon, LTL_NANOSLEEP_CLOCK_TAI, false);
ltl_atom_set(mon, LTL_NANOSLEEP_TIMER_ABSTIME, false);
ltl_atom_set(mon, LTL_CLOCK_NANOSLEEP, false);
ltl_atom_set(mon, LTL_EPOLL_WAIT, false);
if (strstarts(task->comm, "migration/"))
ltl_atom_set(mon, LTL_TASK_IS_MIGRATION, true);
@@ -162,6 +164,11 @@ static void handle_sys_enter(void *data, struct pt_regs *regs, long id)
break;
}
break;
#ifdef __NR_epoll_wait
case __NR_epoll_wait:
ltl_atom_update(current, LTL_EPOLL_WAIT, true);
break;
#endif
}
}
@@ -174,6 +181,7 @@ static void handle_sys_exit(void *data, struct pt_regs *regs, long ret)
ltl_atom_set(mon, LTL_NANOSLEEP_CLOCK_MONOTONIC, false);
ltl_atom_set(mon, LTL_NANOSLEEP_CLOCK_TAI, false);
ltl_atom_set(mon, LTL_NANOSLEEP_TIMER_ABSTIME, false);
ltl_atom_set(mon, LTL_EPOLL_WAIT, false);
ltl_atom_update(current, LTL_CLOCK_NANOSLEEP, false);
}


@@ -15,6 +15,7 @@ enum ltl_atom {
LTL_ABORT_SLEEP,
LTL_BLOCK_ON_RT_MUTEX,
LTL_CLOCK_NANOSLEEP,
LTL_EPOLL_WAIT,
LTL_FUTEX_LOCK_PI,
LTL_FUTEX_WAIT,
LTL_KERNEL_THREAD,
@@ -40,6 +41,7 @@ static const char *ltl_atom_str(enum ltl_atom atom)
"ab_sl",
"bl_on_rt_mu",
"cl_na",
"ep_wa",
"fu_lo_pi",
"fu_wa",
"ker_th",
@@ -75,39 +77,41 @@ static_assert(RV_NUM_BA_STATES <= RV_MAX_BA_STATES);
static void ltl_start(struct task_struct *task, struct ltl_monitor *mon)
{
bool task_is_migration = test_bit(LTL_TASK_IS_MIGRATION, mon->atoms);
bool task_is_rcu = test_bit(LTL_TASK_IS_RCU, mon->atoms);
bool val40 = task_is_rcu || task_is_migration;
bool futex_lock_pi = test_bit(LTL_FUTEX_LOCK_PI, mon->atoms);
bool val41 = futex_lock_pi || val40;
bool block_on_rt_mutex = test_bit(LTL_BLOCK_ON_RT_MUTEX, mon->atoms);
bool val5 = block_on_rt_mutex || val41;
bool kthread_should_stop = test_bit(LTL_KTHREAD_SHOULD_STOP, mon->atoms);
bool abort_sleep = test_bit(LTL_ABORT_SLEEP, mon->atoms);
bool val32 = abort_sleep || kthread_should_stop;
bool woken_by_nmi = test_bit(LTL_WOKEN_BY_NMI, mon->atoms);
bool val33 = woken_by_nmi || val32;
bool woken_by_hardirq = test_bit(LTL_WOKEN_BY_HARDIRQ, mon->atoms);
bool val34 = woken_by_hardirq || val33;
bool woken_by_equal_or_higher_prio = test_bit(LTL_WOKEN_BY_EQUAL_OR_HIGHER_PRIO,
mon->atoms);
bool val14 = woken_by_equal_or_higher_prio || val34;
bool wake = test_bit(LTL_WAKE, mon->atoms);
bool val13 = !wake;
bool kernel_thread = test_bit(LTL_KERNEL_THREAD, mon->atoms);
bool task_is_rcu = test_bit(LTL_TASK_IS_RCU, mon->atoms);
bool task_is_migration = test_bit(LTL_TASK_IS_MIGRATION, mon->atoms);
bool sleep = test_bit(LTL_SLEEP, mon->atoms);
bool rt = test_bit(LTL_RT, mon->atoms);
bool nanosleep_timer_abstime = test_bit(LTL_NANOSLEEP_TIMER_ABSTIME, mon->atoms);
bool nanosleep_clock_tai = test_bit(LTL_NANOSLEEP_CLOCK_TAI, mon->atoms);
bool nanosleep_clock_monotonic = test_bit(LTL_NANOSLEEP_CLOCK_MONOTONIC, mon->atoms);
bool val24 = nanosleep_clock_monotonic || nanosleep_clock_tai;
bool nanosleep_timer_abstime = test_bit(LTL_NANOSLEEP_TIMER_ABSTIME, mon->atoms);
bool val25 = nanosleep_timer_abstime && val24;
bool clock_nanosleep = test_bit(LTL_CLOCK_NANOSLEEP, mon->atoms);
bool val18 = clock_nanosleep && val25;
bool kthread_should_stop = test_bit(LTL_KTHREAD_SHOULD_STOP, mon->atoms);
bool kernel_thread = test_bit(LTL_KERNEL_THREAD, mon->atoms);
bool futex_wait = test_bit(LTL_FUTEX_WAIT, mon->atoms);
bool val9 = futex_wait || val18;
bool futex_lock_pi = test_bit(LTL_FUTEX_LOCK_PI, mon->atoms);
bool epoll_wait = test_bit(LTL_EPOLL_WAIT, mon->atoms);
bool clock_nanosleep = test_bit(LTL_CLOCK_NANOSLEEP, mon->atoms);
bool block_on_rt_mutex = test_bit(LTL_BLOCK_ON_RT_MUTEX, mon->atoms);
bool abort_sleep = test_bit(LTL_ABORT_SLEEP, mon->atoms);
bool val42 = task_is_rcu || task_is_migration;
bool val43 = futex_lock_pi || val42;
bool val5 = block_on_rt_mutex || val43;
bool val34 = abort_sleep || kthread_should_stop;
bool val35 = woken_by_nmi || val34;
bool val36 = woken_by_hardirq || val35;
bool val14 = woken_by_equal_or_higher_prio || val36;
bool val13 = !wake;
bool val26 = nanosleep_clock_monotonic || nanosleep_clock_tai;
bool val27 = nanosleep_timer_abstime && val26;
bool val18 = clock_nanosleep && val27;
bool val20 = val18 || epoll_wait;
bool val9 = futex_wait || val20;
bool val11 = val9 || kernel_thread;
bool sleep = test_bit(LTL_SLEEP, mon->atoms);
bool val2 = !sleep;
bool rt = test_bit(LTL_RT, mon->atoms);
bool val1 = !rt;
bool val3 = val1 || val2;
@@ -124,39 +128,41 @@ static void ltl_start(struct task_struct *task, struct ltl_monitor *mon)
static void
ltl_possible_next_states(struct ltl_monitor *mon, unsigned int state, unsigned long *next)
{
bool task_is_migration = test_bit(LTL_TASK_IS_MIGRATION, mon->atoms);
bool task_is_rcu = test_bit(LTL_TASK_IS_RCU, mon->atoms);
bool val40 = task_is_rcu || task_is_migration;
bool futex_lock_pi = test_bit(LTL_FUTEX_LOCK_PI, mon->atoms);
bool val41 = futex_lock_pi || val40;
bool block_on_rt_mutex = test_bit(LTL_BLOCK_ON_RT_MUTEX, mon->atoms);
bool val5 = block_on_rt_mutex || val41;
bool kthread_should_stop = test_bit(LTL_KTHREAD_SHOULD_STOP, mon->atoms);
bool abort_sleep = test_bit(LTL_ABORT_SLEEP, mon->atoms);
bool val32 = abort_sleep || kthread_should_stop;
bool woken_by_nmi = test_bit(LTL_WOKEN_BY_NMI, mon->atoms);
bool val33 = woken_by_nmi || val32;
bool woken_by_hardirq = test_bit(LTL_WOKEN_BY_HARDIRQ, mon->atoms);
bool val34 = woken_by_hardirq || val33;
bool woken_by_equal_or_higher_prio = test_bit(LTL_WOKEN_BY_EQUAL_OR_HIGHER_PRIO,
mon->atoms);
bool val14 = woken_by_equal_or_higher_prio || val34;
bool wake = test_bit(LTL_WAKE, mon->atoms);
bool val13 = !wake;
bool kernel_thread = test_bit(LTL_KERNEL_THREAD, mon->atoms);
bool task_is_rcu = test_bit(LTL_TASK_IS_RCU, mon->atoms);
bool task_is_migration = test_bit(LTL_TASK_IS_MIGRATION, mon->atoms);
bool sleep = test_bit(LTL_SLEEP, mon->atoms);
bool rt = test_bit(LTL_RT, mon->atoms);
bool nanosleep_timer_abstime = test_bit(LTL_NANOSLEEP_TIMER_ABSTIME, mon->atoms);
bool nanosleep_clock_tai = test_bit(LTL_NANOSLEEP_CLOCK_TAI, mon->atoms);
bool nanosleep_clock_monotonic = test_bit(LTL_NANOSLEEP_CLOCK_MONOTONIC, mon->atoms);
bool val24 = nanosleep_clock_monotonic || nanosleep_clock_tai;
bool nanosleep_timer_abstime = test_bit(LTL_NANOSLEEP_TIMER_ABSTIME, mon->atoms);
bool val25 = nanosleep_timer_abstime && val24;
bool clock_nanosleep = test_bit(LTL_CLOCK_NANOSLEEP, mon->atoms);
bool val18 = clock_nanosleep && val25;
bool kthread_should_stop = test_bit(LTL_KTHREAD_SHOULD_STOP, mon->atoms);
bool kernel_thread = test_bit(LTL_KERNEL_THREAD, mon->atoms);
bool futex_wait = test_bit(LTL_FUTEX_WAIT, mon->atoms);
bool val9 = futex_wait || val18;
bool futex_lock_pi = test_bit(LTL_FUTEX_LOCK_PI, mon->atoms);
bool epoll_wait = test_bit(LTL_EPOLL_WAIT, mon->atoms);
bool clock_nanosleep = test_bit(LTL_CLOCK_NANOSLEEP, mon->atoms);
bool block_on_rt_mutex = test_bit(LTL_BLOCK_ON_RT_MUTEX, mon->atoms);
bool abort_sleep = test_bit(LTL_ABORT_SLEEP, mon->atoms);
bool val42 = task_is_rcu || task_is_migration;
bool val43 = futex_lock_pi || val42;
bool val5 = block_on_rt_mutex || val43;
bool val34 = abort_sleep || kthread_should_stop;
bool val35 = woken_by_nmi || val34;
bool val36 = woken_by_hardirq || val35;
bool val14 = woken_by_equal_or_higher_prio || val36;
bool val13 = !wake;
bool val26 = nanosleep_clock_monotonic || nanosleep_clock_tai;
bool val27 = nanosleep_timer_abstime && val26;
bool val18 = clock_nanosleep && val27;
bool val20 = val18 || epoll_wait;
bool val9 = futex_wait || val20;
bool val11 = val9 || kernel_thread;
bool sleep = test_bit(LTL_SLEEP, mon->atoms);
bool val2 = !sleep;
bool rt = test_bit(LTL_RT, mon->atoms);
bool val1 = !rt;
bool val3 = val1 || val2;


@@ -0,0 +1,13 @@
# SPDX-License-Identifier: GPL-2.0-only
#
config RV_MON_STALL
depends on RV
select HA_MON_EVENTS_ID
bool "stall monitor"
help
Enable the stall sample monitor that illustrates the usage of hybrid
automata monitors. It can be used to identify tasks stalled for
longer than a threshold.
For further information, see:
Documentation/trace/rv/monitor_stall.rst


@@ -0,0 +1,150 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/ftrace.h>
#include <linux/tracepoint.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/rv.h>
#include <rv/instrumentation.h>
#define MODULE_NAME "stall"
#include <trace/events/sched.h>
#include <rv_trace.h>
#define RV_MON_TYPE RV_MON_PER_TASK
#define HA_TIMER_TYPE HA_TIMER_WHEEL
#include "stall.h"
#include <rv/ha_monitor.h>
static u64 threshold_jiffies = 1000;
module_param(threshold_jiffies, ullong, 0644);
static u64 ha_get_env(struct ha_monitor *ha_mon, enum envs_stall env, u64 time_ns)
{
if (env == clk_stall)
return ha_get_clk_jiffy(ha_mon, env);
return ENV_INVALID_VALUE;
}
static void ha_reset_env(struct ha_monitor *ha_mon, enum envs_stall env, u64 time_ns)
{
if (env == clk_stall)
ha_reset_clk_jiffy(ha_mon, env);
}
static inline bool ha_verify_invariants(struct ha_monitor *ha_mon,
enum states curr_state, enum events event,
enum states next_state, u64 time_ns)
{
if (curr_state == enqueued_stall)
return ha_check_invariant_jiffy(ha_mon, clk_stall, time_ns);
return true;
}
static inline bool ha_verify_guards(struct ha_monitor *ha_mon,
enum states curr_state, enum events event,
enum states next_state, u64 time_ns)
{
bool res = true;
if (curr_state == dequeued_stall && event == sched_wakeup_stall)
ha_reset_env(ha_mon, clk_stall, time_ns);
else if (curr_state == running_stall && event == sched_switch_preempt_stall)
ha_reset_env(ha_mon, clk_stall, time_ns);
return res;
}
static inline void ha_setup_invariants(struct ha_monitor *ha_mon,
enum states curr_state, enum events event,
enum states next_state, u64 time_ns)
{
if (next_state == curr_state)
return;
if (next_state == enqueued_stall)
ha_start_timer_jiffy(ha_mon, clk_stall, threshold_jiffies, time_ns);
else if (curr_state == enqueued_stall)
ha_cancel_timer(ha_mon);
}
static bool ha_verify_constraint(struct ha_monitor *ha_mon,
enum states curr_state, enum events event,
enum states next_state, u64 time_ns)
{
if (!ha_verify_invariants(ha_mon, curr_state, event, next_state, time_ns))
return false;
if (!ha_verify_guards(ha_mon, curr_state, event, next_state, time_ns))
return false;
ha_setup_invariants(ha_mon, curr_state, event, next_state, time_ns);
return true;
}
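ha_verify_constraint() fixes the order of the hybrid-automaton checks: the invariant of the state being left is verified first, then the transition guards run (and may reset clocks), and only then are timers set up for the state being entered. A minimal self-contained Python model of that ordering, using a toy one-clock monitor (all names here are hypothetical, not the kernel API):

```python
class ToyMonitor:
    """Toy hybrid monitor: one clock, one timed state ('enqueued')."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.clk_start = 0

    def reset_clk(self, now):
        self.clk_start = now

    def verify_invariants(self, curr_state, now):
        # Invariant of 'enqueued': the clock must stay below the threshold.
        if curr_state == "enqueued":
            return now - self.clk_start < self.threshold
        return True

    def verify_guards(self, curr_state, event, next_state, now):
        # Transitions into 'enqueued' reset the clock, like reset(clk).
        if event in ("sched_wakeup", "sched_switch_preempt"):
            self.reset_clk(now)
        return True

def verify_constraint(mon, curr_state, event, next_state, now):
    # Order mirrors ha_verify_constraint(): invariants first, then guards.
    if not mon.verify_invariants(curr_state, now):
        return False
    if not mon.verify_guards(curr_state, event, next_state, now):
        return False
    return True

mon = ToyMonitor(threshold=1000)
assert verify_constraint(mon, "dequeued", "sched_wakeup", "enqueued", now=0)
# Leaving 'enqueued' 500 ticks later is within the threshold...
assert verify_constraint(mon, "enqueued", "sched_switch_in", "running", now=500)
# ...but sitting enqueued past the threshold violates the invariant.
mon.reset_clk(0)
assert not verify_constraint(mon, "enqueued", "sched_switch_in", "running", now=1500)
```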
static void handle_sched_switch(void *data, bool preempt,
struct task_struct *prev,
struct task_struct *next,
unsigned int prev_state)
{
if (!preempt && prev_state != TASK_RUNNING)
da_handle_start_event(prev, sched_switch_wait_stall);
else
da_handle_event(prev, sched_switch_preempt_stall);
da_handle_event(next, sched_switch_in_stall);
}
static void handle_sched_wakeup(void *data, struct task_struct *p)
{
da_handle_event(p, sched_wakeup_stall);
}
static int enable_stall(void)
{
int retval;
retval = da_monitor_init();
if (retval)
return retval;
rv_attach_trace_probe("stall", sched_switch, handle_sched_switch);
rv_attach_trace_probe("stall", sched_wakeup, handle_sched_wakeup);
return 0;
}
static void disable_stall(void)
{
rv_this.enabled = 0;
rv_detach_trace_probe("stall", sched_switch, handle_sched_switch);
rv_detach_trace_probe("stall", sched_wakeup, handle_sched_wakeup);
da_monitor_destroy();
}
static struct rv_monitor rv_this = {
.name = "stall",
.description = "identify tasks stalled for longer than a threshold.",
.enable = enable_stall,
.disable = disable_stall,
.reset = da_monitor_reset_all,
.enabled = 0,
};
static int __init register_stall(void)
{
return rv_register_monitor(&rv_this, NULL);
}
static void __exit unregister_stall(void)
{
rv_unregister_monitor(&rv_this);
}
module_init(register_stall);
module_exit(unregister_stall);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Gabriele Monaco <gmonaco@redhat.com>");
MODULE_DESCRIPTION("stall: identify tasks stalled for longer than a threshold.");


@@ -0,0 +1,81 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Automatically generated C representation of stall automaton
* For further information about this format, see kernel documentation:
* Documentation/trace/rv/deterministic_automata.rst
*/
#define MONITOR_NAME stall
enum states_stall {
dequeued_stall,
enqueued_stall,
running_stall,
state_max_stall,
};
#define INVALID_STATE state_max_stall
enum events_stall {
sched_switch_in_stall,
sched_switch_preempt_stall,
sched_switch_wait_stall,
sched_wakeup_stall,
event_max_stall,
};
enum envs_stall {
clk_stall,
env_max_stall,
env_max_stored_stall = env_max_stall,
};
_Static_assert(env_max_stored_stall <= MAX_HA_ENV_LEN, "Not enough slots");
struct automaton_stall {
char *state_names[state_max_stall];
char *event_names[event_max_stall];
char *env_names[env_max_stall];
unsigned char function[state_max_stall][event_max_stall];
unsigned char initial_state;
bool final_states[state_max_stall];
};
static const struct automaton_stall automaton_stall = {
.state_names = {
"dequeued",
"enqueued",
"running",
},
.event_names = {
"sched_switch_in",
"sched_switch_preempt",
"sched_switch_wait",
"sched_wakeup",
},
.env_names = {
"clk",
},
.function = {
{
INVALID_STATE,
INVALID_STATE,
INVALID_STATE,
enqueued_stall,
},
{
running_stall,
INVALID_STATE,
INVALID_STATE,
enqueued_stall,
},
{
running_stall,
enqueued_stall,
dequeued_stall,
running_stall,
},
},
.initial_state = dequeued_stall,
.final_states = { 1, 0, 0 },
};
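The generated transition table reads as function[current_state][event] → next state, with state_max_stall doubling as the invalid marker. As a quick illustration (a standalone Python re-encoding of the matrix above, not the generated C), the dequeued → enqueued → running round trip looks like:

```python
# States and events in declaration order, mirroring stall.h.
DEQUEUED, ENQUEUED, RUNNING = 0, 1, 2
SWITCH_IN, SWITCH_PREEMPT, SWITCH_WAIT, WAKEUP = 0, 1, 2, 3
INVALID = 3  # state_max_stall doubles as INVALID_STATE

function = [
    [INVALID, INVALID,  INVALID,  ENQUEUED],  # dequeued
    [RUNNING, INVALID,  INVALID,  ENQUEUED],  # enqueued
    [RUNNING, ENQUEUED, DEQUEUED, RUNNING],   # running
]

state = DEQUEUED                      # initial_state
state = function[state][WAKEUP]       # wakeup enqueues the task
assert state == ENQUEUED
state = function[state][SWITCH_IN]    # scheduled in
assert state == RUNNING
state = function[state][SWITCH_WAIT]  # voluntary sleep dequeues it
assert state == DEQUEUED
# An event with no valid transition flags a violation:
assert function[DEQUEUED][SWITCH_IN] == INVALID
```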


@@ -0,0 +1,19 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Snippet to be included in rv_trace.h
*/
#ifdef CONFIG_RV_MON_STALL
DEFINE_EVENT(event_da_monitor_id, event_stall,
TP_PROTO(int id, char *state, char *event, char *next_state, bool final_state),
TP_ARGS(id, state, event, next_state, final_state));
DEFINE_EVENT(error_da_monitor_id, error_stall,
TP_PROTO(int id, char *state, char *event),
TP_ARGS(id, state, event));
DEFINE_EVENT(error_env_da_monitor_id, error_env_stall,
TP_PROTO(int id, char *state, char *event, char *env),
TP_ARGS(id, state, event, env));
#endif /* CONFIG_RV_MON_STALL */


@@ -62,9 +62,39 @@ DECLARE_EVENT_CLASS(error_da_monitor,
#include <monitors/scpd/scpd_trace.h>
#include <monitors/snep/snep_trace.h>
#include <monitors/sts/sts_trace.h>
#include <monitors/opid/opid_trace.h>
// Add new monitors based on CONFIG_DA_MON_EVENTS_IMPLICIT here
#ifdef CONFIG_HA_MON_EVENTS_IMPLICIT
/* For simplicity this class is marked as DA although relevant only for HA */
DECLARE_EVENT_CLASS(error_env_da_monitor,
TP_PROTO(char *state, char *event, char *env),
TP_ARGS(state, event, env),
TP_STRUCT__entry(
__string( state, state )
__string( event, event )
__string( env, env )
),
TP_fast_assign(
__assign_str(state);
__assign_str(event);
__assign_str(env);
),
TP_printk("event %s not expected in the state %s with env %s",
__get_str(event),
__get_str(state),
__get_str(env))
);
#include <monitors/opid/opid_trace.h>
// Add new monitors based on CONFIG_HA_MON_EVENTS_IMPLICIT here
#endif
#endif /* CONFIG_DA_MON_EVENTS_IMPLICIT */
#ifdef CONFIG_DA_MON_EVENTS_ID
@@ -128,6 +158,41 @@ DECLARE_EVENT_CLASS(error_da_monitor_id,
#include <monitors/sssw/sssw_trace.h>
// Add new monitors based on CONFIG_DA_MON_EVENTS_ID here
#ifdef CONFIG_HA_MON_EVENTS_ID
/* For simplicity this class is marked as DA although relevant only for HA */
DECLARE_EVENT_CLASS(error_env_da_monitor_id,
TP_PROTO(int id, char *state, char *event, char *env),
TP_ARGS(id, state, event, env),
TP_STRUCT__entry(
__field( int, id )
__string( state, state )
__string( event, event )
__string( env, env )
),
TP_fast_assign(
__assign_str(state);
__assign_str(event);
__assign_str(env);
__entry->id = id;
),
TP_printk("%d: event %s not expected in the state %s with env %s",
__entry->id,
__get_str(event),
__get_str(state),
__get_str(env))
);
#include <monitors/stall/stall_trace.h>
#include <monitors/nomiss/nomiss_trace.h>
// Add new monitors based on CONFIG_HA_MON_EVENTS_ID here
#endif
#endif /* CONFIG_DA_MON_EVENTS_ID */
#ifdef CONFIG_LTL_MON_EVENTS_ID
DECLARE_EVENT_CLASS(event_ltl_monitor_id,


@@ -0,0 +1,41 @@
digraph state_automaton {
center = true;
size = "7,11";
{node [shape = circle] "idle"};
{node [shape = plaintext, style=invis, label=""] "__init_ready"};
{node [shape = doublecircle] "ready"};
{node [shape = circle] "ready"};
{node [shape = circle] "running"};
{node [shape = circle] "sleeping"};
{node [shape = circle] "throttled"};
"__init_ready" -> "ready";
"idle" [label = "idle"];
"idle" -> "idle" [ label = "dl_server_idle" ];
"idle" -> "ready" [ label = "dl_replenish;reset(clk)" ];
"idle" -> "running" [ label = "sched_switch_in" ];
"idle" -> "sleeping" [ label = "dl_server_stop" ];
"idle" -> "throttled" [ label = "dl_throttle" ];
"ready" [label = "ready\nclk < DEADLINE_NS()", color = green3];
"ready" -> "idle" [ label = "dl_server_idle" ];
"ready" -> "ready" [ label = "sched_wakeup\ndl_replenish;reset(clk)" ];
"ready" -> "running" [ label = "sched_switch_in" ];
"ready" -> "sleeping" [ label = "dl_server_stop" ];
"ready" -> "throttled" [ label = "dl_throttle;is_defer == 1" ];
"running" [label = "running\nclk < DEADLINE_NS()"];
"running" -> "idle" [ label = "dl_server_idle" ];
"running" -> "running" [ label = "dl_replenish;reset(clk)\nsched_switch_in\nsched_wakeup" ];
"running" -> "sleeping" [ label = "sched_switch_suspend\ndl_server_stop" ];
"running" -> "throttled" [ label = "dl_throttle" ];
"sleeping" [label = "sleeping"];
"sleeping" -> "ready" [ label = "sched_wakeup\ndl_replenish;reset(clk)" ];
"sleeping" -> "running" [ label = "sched_switch_in" ];
"sleeping" -> "sleeping" [ label = "dl_server_stop\ndl_server_idle" ];
"sleeping" -> "throttled" [ label = "dl_throttle;is_constr_dl == 1 || is_defer == 1" ];
"throttled" [label = "throttled"];
"throttled" -> "ready" [ label = "dl_replenish;reset(clk)" ];
"throttled" -> "throttled" [ label = "sched_switch_suspend\nsched_wakeup\ndl_server_idle\ndl_throttle" ];
{ rank = min ;
"__init_ready";
"ready";
}
}


@@ -5,6 +5,7 @@ RT_FRIENDLY_SLEEP = (RT_VALID_SLEEP_REASON or KERNEL_THREAD)
RT_VALID_SLEEP_REASON = FUTEX_WAIT
or RT_FRIENDLY_NANOSLEEP
or EPOLL_WAIT
RT_FRIENDLY_NANOSLEEP = CLOCK_NANOSLEEP
and NANOSLEEP_TIMER_ABSTIME


@@ -1,35 +1,13 @@
digraph state_automaton {
center = true;
size = "7,11";
{node [shape = plaintext, style=invis, label=""] "__init_disabled"};
{node [shape = circle] "disabled"};
{node [shape = doublecircle] "enabled"};
{node [shape = circle] "enabled"};
{node [shape = circle] "in_irq"};
{node [shape = circle] "irq_disabled"};
{node [shape = circle] "preempt_disabled"};
"__init_disabled" -> "disabled";
"disabled" [label = "disabled"];
"disabled" -> "disabled" [ label = "sched_need_resched\nsched_waking\nirq_entry" ];
"disabled" -> "irq_disabled" [ label = "preempt_enable" ];
"disabled" -> "preempt_disabled" [ label = "irq_enable" ];
"enabled" [label = "enabled", color = green3];
"enabled" -> "enabled" [ label = "preempt_enable" ];
"enabled" -> "irq_disabled" [ label = "irq_disable" ];
"enabled" -> "preempt_disabled" [ label = "preempt_disable" ];
"in_irq" [label = "in_irq"];
"in_irq" -> "enabled" [ label = "irq_enable" ];
"in_irq" -> "in_irq" [ label = "sched_need_resched\nsched_waking\nirq_entry" ];
"irq_disabled" [label = "irq_disabled"];
"irq_disabled" -> "disabled" [ label = "preempt_disable" ];
"irq_disabled" -> "enabled" [ label = "irq_enable" ];
"irq_disabled" -> "in_irq" [ label = "irq_entry" ];
"irq_disabled" -> "irq_disabled" [ label = "sched_need_resched" ];
"preempt_disabled" [label = "preempt_disabled"];
"preempt_disabled" -> "disabled" [ label = "irq_disable" ];
"preempt_disabled" -> "enabled" [ label = "preempt_enable" ];
{node [shape = plaintext, style=invis, label=""] "__init_any"};
{node [shape = doublecircle] "any"};
"__init_any" -> "any";
"any" [label = "any", color = green3];
"any" -> "any" [ label = "sched_need_resched;irq_off == 1\nsched_waking;irq_off == 1 && preempt_off == 1" ];
{ rank = min ;
"__init_disabled";
"disabled";
"__init_any";
"any";
}
}


@@ -0,0 +1,22 @@
digraph state_automaton {
center = true;
size = "7,11";
{node [shape = circle] "enqueued"};
{node [shape = plaintext, style=invis, label=""] "__init_dequeued"};
{node [shape = doublecircle] "dequeued"};
{node [shape = circle] "running"};
"__init_dequeued" -> "dequeued";
"enqueued" [label = "enqueued\nclk < threshold_jiffies"];
"running" [label = "running"];
"dequeued" [label = "dequeued", color = green3];
"running" -> "running" [ label = "sched_switch_in\nsched_wakeup" ];
"enqueued" -> "enqueued" [ label = "sched_wakeup" ];
"enqueued" -> "running" [ label = "sched_switch_in" ];
"running" -> "dequeued" [ label = "sched_switch_wait" ];
"dequeued" -> "enqueued" [ label = "sched_wakeup;reset(clk)" ];
"running" -> "enqueued" [ label = "sched_switch_preempt;reset(clk)" ];
{ rank = min ;
"__init_dequeued";
"dequeued";
}
}


@@ -9,10 +9,11 @@
# Documentation/trace/rv/da_monitor_synthesis.rst
if __name__ == '__main__':
from rvgen.dot2k import dot2k
from rvgen.dot2k import da2k, ha2k
from rvgen.generator import Monitor
from rvgen.container import Container
from rvgen.ltl2k import ltl2k
from rvgen.automata import AutomataError
import argparse
import sys
@@ -28,10 +29,11 @@ if __name__ == '__main__':
monitor_parser.add_argument('-n', "--model_name", dest="model_name")
monitor_parser.add_argument("-p", "--parent", dest="parent",
required=False, help="Create a monitor nested to parent")
monitor_parser.add_argument('-c', "--class", dest="monitor_class",
help="Monitor class, either \"da\" or \"ltl\"")
monitor_parser.add_argument('-s', "--spec", dest="spec", help="Monitor specification file")
monitor_parser.add_argument('-t', "--monitor_type", dest="monitor_type",
monitor_parser.add_argument('-c', "--class", dest="monitor_class", required=True,
help="Monitor class, either \"da\", \"ha\" or \"ltl\"")
monitor_parser.add_argument('-s', "--spec", dest="spec", required=True,
help="Monitor specification file")
monitor_parser.add_argument('-t', "--monitor_type", dest="monitor_type", required=True,
help=f"Available options: {', '.join(Monitor.monitor_types.keys())}")
container_parser = subparsers.add_parser("container")
@@ -41,9 +43,11 @@ if __name__ == '__main__':
try:
if params.subcmd == "monitor":
print("Opening and parsing the specification file %s" % params.spec)
print(f"Opening and parsing the specification file {params.spec}")
if params.monitor_class == "da":
monitor = dot2k(params.spec, params.monitor_type, vars(params))
monitor = da2k(params.spec, params.monitor_type, vars(params))
elif params.monitor_class == "ha":
monitor = ha2k(params.spec, params.monitor_type, vars(params))
elif params.monitor_class == "ltl":
monitor = ltl2k(params.spec, params.monitor_type, vars(params))
else:
@@ -51,16 +55,15 @@ if __name__ == '__main__':
sys.exit(1)
else:
monitor = Container(vars(params))
except Exception as e:
print('Error: '+ str(e))
print("Sorry : :-(")
except AutomataError as e:
print(f"There was an error processing {params.spec}: {e}", file=sys.stderr)
sys.exit(1)
print("Writing the monitor into the directory %s" % monitor.name)
print(f"Writing the monitor into the directory {monitor.name}")
monitor.print_files()
print("Almost done, checklist")
if params.subcmd == "monitor":
print(" - Edit the %s/%s.c to add the instrumentation" % (monitor.name, monitor.name))
print(f" - Edit the {monitor.name}/{monitor.name}.c to add the instrumentation")
print(monitor.fill_tracepoint_tooltip())
print(monitor.fill_makefile_tooltip())
print(monitor.fill_kconfig_tooltip())


@@ -16,7 +16,6 @@
if __name__ == '__main__':
from rvgen import dot2c
import argparse
import sys
parser = argparse.ArgumentParser(description='dot2c: converts a .dot file into a C structure')
parser.add_argument('dot_file', help='The dot file to be converted')


@@ -3,112 +3,182 @@
#
# Copyright (C) 2019-2022 Red Hat, Inc. Daniel Bristot de Oliveira <bristot@kernel.org>
#
# Automata object: parse an automata in dot file digraph format into a python object
# Automata class: parse an automaton in dot file digraph format into a python object
#
# For further information, see:
# Documentation/trace/rv/deterministic_automata.rst
import ntpath
import re
from typing import Iterator
from itertools import islice
class _ConstraintKey:
"""Base class for constraint keys."""
class _StateConstraintKey(_ConstraintKey, int):
"""Key for a state constraint. Under the hood just state_id."""
def __new__(cls, state_id: int):
return super().__new__(cls, state_id)
class _EventConstraintKey(_ConstraintKey, tuple):
"""Key for an event constraint. Under the hood just tuple(state_id,event_id)."""
def __new__(cls, state_id: int, event_id: int):
return super().__new__(cls, (state_id, event_id))
class AutomataError(Exception):
"""Exception raised for errors in automata parsing and validation.
Raised when DOT file processing fails due to invalid format, I/O errors,
or malformed automaton definitions.
"""
class Automata:
"""Automata class: Reads a dot file and part it as an automata.
"""Automata class: Reads a dot file and parses it as an automaton.
It supports both deterministic and hybrid automata.
Attributes:
dot_file: A dot file with an state_automaton definition.
"""
invalid_state_str = "INVALID_STATE"
init_marker = "__init_"
node_marker = "{node"
# val can be numerical, uppercase (constant or macro), lowercase (parameter or function)
# only numerical values should have units
constraint_rule = re.compile(r"""
^
(?P<env>[a-zA-Z_][a-zA-Z0-9_]+) # C-like identifier for the env var
(?P<op>[!<=>]{1,2}) # operator
(?P<val>
[0-9]+ | # numerical value
[A-Z_]+\(\) | # macro
[A-Z_]+ | # constant
[a-z_]+\(\) | # function
[a-z_]+ # parameter
)
(?P<unit>[a-z]{1,2})? # optional unit for numerical values
""", re.VERBOSE)
constraint_reset = re.compile(r"^reset\((?P<env>[a-zA-Z_][a-zA-Z0-9_]+)\)")
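To see what the two constraint patterns above accept, here is a minimal standalone sketch (the patterns are copied verbatim; the sample inputs `clk<1000ms`, `ts>=DEADLINE_NS` and `reset(clk)` are illustrative, not taken from a real model):

```python
import re

# Standalone copies of Automata.constraint_rule / constraint_reset, for illustration.
constraint_rule = re.compile(r"""
    ^
    (?P<env>[a-zA-Z_][a-zA-Z0-9_]+)  # C-like identifier for the env var
    (?P<op>[!<=>]{1,2})              # operator
    (?P<val>
        [0-9]+      |                # numerical value
        [A-Z_]+\(\) |                # macro
        [A-Z_]+     |                # constant
        [a-z_]+\(\) |                # function
        [a-z_]+                      # parameter
    )
    (?P<unit>[a-z]{1,2})?            # optional unit for numerical values
""", re.VERBOSE)
constraint_reset = re.compile(r"^reset\((?P<env>[a-zA-Z_][a-zA-Z0-9_]+)\)")

m = constraint_rule.search("clk<1000ms")
print(m["env"], m["op"], m["val"], m["unit"])    # clk < 1000 ms

m = constraint_rule.search("ts>=DEADLINE_NS")
print(m["env"], m["op"], m["val"], m["unit"])    # ts >= DEADLINE_NS None

print(constraint_reset.search("reset(clk)")["env"])  # clk
```

Note that the `env` group requires at least two characters, and only numerical values carry a unit suffix, matching the comment above the pattern.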
def __init__(self, file_path, model_name=None):
self.__dot_path = file_path
self.name = model_name or self.__get_model_name()
self.__dot_lines = self.__open_dot()
self.states, self.initial_state, self.final_states = self.__get_state_variables()
self.events = self.__get_event_variables()
self.function = self.__create_matrix()
self.env_types = {}
self.env_stored = set()
self.constraint_vars = set()
self.self_loop_reset_events = set()
self.events, self.envs = self.__get_event_variables()
self.function, self.constraints = self.__create_matrix()
self.events_start, self.events_start_run = self.__store_init_events()
self.env_stored = sorted(self.env_stored)
self.constraint_vars = sorted(self.constraint_vars)
self.self_loop_reset_events = sorted(self.self_loop_reset_events)
def __get_model_name(self) -> str:
basename = ntpath.basename(self.__dot_path)
if not basename.endswith(".dot") and not basename.endswith(".gv"):
print("not a dot file")
raise Exception("not a dot file: %s" % self.__dot_path)
raise AutomataError(f"not a dot file: {self.__dot_path}")
model_name = ntpath.splitext(basename)[0]
if model_name.__len__() == 0:
raise Exception("not a dot file: %s" % self.__dot_path)
if not model_name:
raise AutomataError(f"not a dot file: {self.__dot_path}")
return model_name
def __open_dot(self) -> list[str]:
cursor = 0
dot_lines = []
try:
dot_file = open(self.__dot_path)
except:
raise Exception("Cannot open the file: %s" % self.__dot_path)
with open(self.__dot_path) as dot_file:
dot_lines = dot_file.readlines()
except OSError as exc:
raise AutomataError(exc.strerror) from exc
dot_lines = dot_file.read().splitlines()
dot_file.close()
if not dot_lines:
raise AutomataError(f"{self.__dot_path} is empty")
# checking the first line:
line = dot_lines[cursor].split()
line = dot_lines[0].split()
if len(line) < 2 or line[0] != "digraph" or line[1] != "state_automaton":
raise AutomataError(f"Not a valid .dot format: {self.__dot_path}")
if (line[0] != "digraph") and (line[1] != "state_automaton"):
raise Exception("Not a valid .dot format: %s" % self.__dot_path)
else:
cursor += 1
return dot_lines
def __get_cursor_begin_states(self) -> int:
cursor = 0
while self.__dot_lines[cursor].split()[0] != "{node":
cursor += 1
return cursor
for cursor, line in enumerate(self.__dot_lines):
split_line = line.split()
if len(split_line) and split_line[0] == self.node_marker:
return cursor
raise AutomataError("Could not find a beginning state")
def __get_cursor_begin_events(self) -> int:
cursor = 0
while self.__dot_lines[cursor].split()[0] != "{node":
cursor += 1
while self.__dot_lines[cursor].split()[0] == "{node":
cursor += 1
# skip initial state transition
cursor += 1
state = 0
cursor = 0 # make pyright happy
for cursor, line in enumerate(self.__dot_lines):
line = line.split()
if not line:
continue
if state == 0:
if line[0] == self.node_marker:
state = 1
elif line[0] != self.node_marker:
break
else:
raise AutomataError("Could not find beginning event")
cursor += 1 # skip initial state transition
if cursor == len(self.__dot_lines):
raise AutomataError("Dot file ended after event beginning")
return cursor
def __get_state_variables(self) -> tuple[list[str], str, list[str]]:
# wait for node declaration
states = []
final_states = []
initial_state = ""
has_final_states = False
cursor = self.__get_cursor_begin_states()
# process nodes
while self.__dot_lines[cursor].split()[0] == "{node":
line = self.__dot_lines[cursor].split()
raw_state = line[-1]
for line in islice(self.__dot_lines, cursor, None):
split_line = line.split()
if not split_line or split_line[0] != self.node_marker:
break
raw_state = split_line[-1]
# "enabled_fired"}; -> enabled_fired
state = raw_state.replace('"', '').replace('};', '').replace(',','_')
if state[0:7] == "__init_":
initial_state = state[7:]
state = raw_state.replace('"', '').replace('};', '').replace(',', '_')
if state.startswith(self.init_marker):
initial_state = state[len(self.init_marker):]
else:
states.append(state)
if "doublecircle" in self.__dot_lines[cursor]:
if "doublecircle" in line:
final_states.append(state)
has_final_states = True
if "ellipse" in self.__dot_lines[cursor]:
if "ellipse" in line:
final_states.append(state)
has_final_states = True
cursor += 1
if not initial_state:
raise AutomataError("The automaton doesn't have an initial state")
states = sorted(set(states))
states.remove(initial_state)
# Insert the initial state at the bein og the states
# Insert the initial state at the beginning of the states
states.insert(0, initial_state)
if not has_final_states:
@@ -116,31 +186,95 @@ class Automata:
return states, initial_state, final_states
def __get_event_variables(self) -> list[str]:
def __get_event_variables(self) -> tuple[list[str], list[str]]:
events: list[str] = []
envs: list[str] = []
# here we are at the beginning of the transitions; take note, we will return here later.
cursor = self.__get_cursor_begin_events()
events = []
while self.__dot_lines[cursor].lstrip()[0] == '"':
for line in map(str.lstrip, islice(self.__dot_lines, cursor, None)):
if not line.startswith('"'):
break
# transitions have the format:
# "all_fired" -> "both_fired" [ label = "disable_irq" ];
# ------------ event is here ------------^^^^^
if self.__dot_lines[cursor].split()[1] == "->":
line = self.__dot_lines[cursor].split()
event = line[-2].replace('"','')
split_line = line.split()
if len(split_line) > 1 and split_line[1] == "->":
event = "".join(split_line[split_line.index("label") + 2:-1]).replace('"', '')
# when a transition has more than one lables, they are like this
# when a transition has more than one label, they are like this
# "local_irq_enable\nhw_local_irq_enable_n"
# so split them.
event = event.replace("\\n", " ")
for i in event.split():
events.append(i)
cursor += 1
for i in event.split("\\n"):
# if the event contains a constraint (hybrid automata),
# it will be separated by a ";":
# "sched_switch;x<1000;reset(x)"
ev, *constr = i.split(";")
if constr:
if len(constr) > 2:
raise AutomataError("Only 1 constraint and 1 reset are supported")
envs += self.__extract_env_var(constr)
events.append(ev)
else:
# state labels have the format:
# "enable_fired" [label = "enable_fired\ncondition"];
# ----- label is here -----^^^^^
# label and node name must be the same, condition is optional
state = line.split("label")[1].split('"')[1]
_, *constr = state.split("\\n")
if constr:
if len(constr) > 1:
raise AutomataError("Only 1 constraint is supported in the state")
envs += self.__extract_env_var([constr[0].replace(" ", "")])
return sorted(set(events))
return sorted(set(events)), sorted(set(envs))
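The `;`-separated edge-label format parsed above can be unpacked with plain string splitting; a minimal sketch with a hypothetical hybrid-automaton label (the event and env names are illustrative):

```python
# A hybrid-automaton edge label: event, then optional constraint, then optional reset.
label = "sched_switch;clk<1000;reset(clk)"

ev, *constr = label.split(";")
print(ev)      # sched_switch
print(constr)  # ['clk<1000', 'reset(clk)']
```

A plain deterministic-automaton label like `"sched_switch"` yields an empty `constr` list, so the same unpacking handles both model classes.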
def __create_matrix(self) -> list[list[str]]:
def _split_constraint_expr(self, constr: list[str]) -> Iterator[tuple[str,
str | None]]:
"""
Take a list of strings of the form "constr1 && constr2" and return an
iterator of (constraint, separator) pairs: [(constr1, "&&"), (constr2, None)]
"""
exprs = []
seps = []
for c in constr:
while "&&" in c or "||" in c:
a = c.find("&&")
o = c.find("||")
pos = a if o < 0 or 0 < a < o else o
exprs.append(c[:pos].replace(" ", ""))
seps.append(c[pos:pos + 2].replace(" ", ""))
c = c[pos + 2:].replace(" ", "")
exprs.append(c)
seps.append(None)
return zip(exprs, seps)
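The splitting loop above can be exercised in isolation; a lightly simplified standalone re-creation (the function name and sample constraints are illustrative):

```python
def split_constraint_expr(constr):
    """Standalone sketch of _split_constraint_expr(): split "a && b" style
    strings into (expression, separator) pairs, separator None at the end."""
    exprs = []
    seps = []
    for c in constr:
        while "&&" in c or "||" in c:
            a = c.find("&&")
            o = c.find("||")
            # take whichever separator comes first
            pos = a if o < 0 or 0 < a < o else o
            exprs.append(c[:pos].replace(" ", ""))
            seps.append(c[pos:pos + 2])
            c = c[pos + 2:].replace(" ", "")
        exprs.append(c)
        seps.append(None)
    return list(zip(exprs, seps))

print(split_constraint_expr(["clk<10 && ts>5"]))
# [('clk<10', '&&'), ('ts>5', None)]
```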
def __extract_env_var(self, constraint: list[str]) -> list[str]:
env = []
for c, _ in self._split_constraint_expr(constraint):
rule = self.constraint_rule.search(c)
reset = self.constraint_reset.search(c)
if rule:
env.append(rule["env"])
if rule.groupdict().get("unit"):
self.env_types[rule["env"]] = rule["unit"]
if rule["val"][0].isalpha():
self.constraint_vars.add(rule["val"])
# try to infer unit from constants or parameters
val_for_unit = rule["val"].lower().replace("()", "")
if val_for_unit.endswith("_ns"):
self.env_types[rule["env"]] = "ns"
if val_for_unit.endswith("_jiffies"):
self.env_types[rule["env"]] = "j"
if reset:
env.append(reset["env"])
# environment variables that are reset need storage
self.env_stored.add(reset["env"])
return env
def __create_matrix(self) -> tuple[list[list[str]], dict[_ConstraintKey, list[str]]]:
# transform the array into a dictionary
events = self.events
states = self.states
@@ -157,31 +291,49 @@ class Automata:
nr_state += 1
# declare the matrix....
matrix = [[ self.invalid_state_str for x in range(nr_event)] for y in range(nr_state)]
matrix = [[self.invalid_state_str for _ in range(nr_event)] for _ in range(nr_state)]
constraints: dict[_ConstraintKey, list[str]] = {}
# and we are back! Let's fill the matrix
cursor = self.__get_cursor_begin_events()
while self.__dot_lines[cursor].lstrip()[0] == '"':
if self.__dot_lines[cursor].split()[1] == "->":
line = self.__dot_lines[cursor].split()
origin_state = line[0].replace('"','').replace(',','_')
dest_state = line[2].replace('"','').replace(',','_')
possible_events = line[-2].replace('"','').replace("\\n", " ")
for event in possible_events.split():
matrix[states_dict[origin_state]][events_dict[event]] = dest_state
cursor += 1
for line in map(str.lstrip,
islice(self.__dot_lines, cursor, None)):
return matrix
if not line or line[0] != '"':
break
split_line = line.split()
if len(split_line) > 2 and split_line[1] == "->":
origin_state = split_line[0].replace('"', '').replace(',', '_')
dest_state = split_line[2].replace('"', '').replace(',', '_')
possible_events = "".join(split_line[split_line.index("label") + 2:-1]).replace('"', '')
for event in possible_events.split("\\n"):
event, *constr = event.split(";")
if constr:
key = _EventConstraintKey(states_dict[origin_state], events_dict[event])
constraints[key] = constr
# these events also reset on self loops
if origin_state == dest_state and "reset" in "".join(constr):
self.self_loop_reset_events.add(event)
matrix[states_dict[origin_state]][events_dict[event]] = dest_state
else:
state = line.split("label")[1].split('"')[1]
state, *constr = state.replace(" ", "").split("\\n")
if constr:
constraints[_StateConstraintKey(states_dict[state])] = constr
return matrix, constraints
def __store_init_events(self) -> tuple[list[bool], list[bool]]:
events_start = [False] * len(self.events)
events_start_run = [False] * len(self.events)
for i, _ in enumerate(self.events):
for i in range(len(self.events)):
curr_event_will_init = 0
curr_event_from_init = False
curr_event_used = 0
for j, _ in enumerate(self.states):
for j in range(len(self.states)):
if self.function[j][i] != self.invalid_state_str:
curr_event_used += 1
if self.function[j][i] == self.initial_state:
@@ -204,3 +356,13 @@ class Automata:
if any(self.events_start):
return False
return self.events_start_run[self.events.index(event)]
def is_hybrid_automata(self) -> bool:
return bool(self.envs)
def is_event_constraint(self, key: _ConstraintKey) -> bool:
"""
Given the key in self.constraints return true if it is an event
constraint, false if it is a state constraint
"""
return isinstance(key, _EventConstraintKey)


@@ -3,7 +3,7 @@
#
# Copyright (C) 2019-2022 Red Hat, Inc. Daniel Bristot de Oliveira <bristot@kernel.org>
#
# dot2c: parse an automata in dot file digraph format into a C
# dot2c: parse an automaton in dot file digraph format into a C
#
# This program was written in the development of this paper:
# de Oliveira, D. B. and Cucinotta, T. and de Oliveira, R. S.
@@ -13,12 +13,13 @@
# For further information, see:
# Documentation/trace/rv/deterministic_automata.rst
from .automata import Automata
from .automata import Automata, AutomataError
class Dot2c(Automata):
enum_suffix = ""
enum_states_def = "states"
enum_events_def = "events"
enum_envs_def = "envs"
struct_automaton_def = "automaton"
var_automaton_def = "aut"
@@ -28,17 +29,17 @@ class Dot2c(Automata):
def __get_enum_states_content(self) -> list[str]:
buff = []
buff.append("\t%s%s," % (self.initial_state, self.enum_suffix))
buff.append(f"\t{self.initial_state}{self.enum_suffix},")
for state in self.states:
if state != self.initial_state:
buff.append("\t%s%s," % (state, self.enum_suffix))
buff.append("\tstate_max%s," % (self.enum_suffix))
buff.append(f"\t{state}{self.enum_suffix},")
buff.append(f"\tstate_max{self.enum_suffix},")
return buff
def format_states_enum(self) -> list[str]:
buff = []
buff.append("enum %s {" % self.enum_states_def)
buff.append(f"enum {self.enum_states_def} {{")
buff += self.__get_enum_states_content()
buff.append("};\n")
@@ -47,49 +48,82 @@ class Dot2c(Automata):
def __get_enum_events_content(self) -> list[str]:
buff = []
for event in self.events:
buff.append("\t%s%s," % (event, self.enum_suffix))
buff.append(f"\t{event}{self.enum_suffix},")
buff.append("\tevent_max%s," % self.enum_suffix)
buff.append(f"\tevent_max{self.enum_suffix},")
return buff
def format_events_enum(self) -> list[str]:
buff = []
buff.append("enum %s {" % self.enum_events_def)
buff.append(f"enum {self.enum_events_def} {{")
buff += self.__get_enum_events_content()
buff.append("};\n")
return buff
def __get_non_stored_envs(self) -> list[str]:
return [e for e in self.envs if e not in self.env_stored]
def __get_enum_envs_content(self) -> list[str]:
buff = []
# We first place env variables that have a u64 storage.
# Those are limited by MAX_HA_ENV_LEN; other variables
# are read-only and don't require storage.
unstored = self.__get_non_stored_envs()
for env in list(self.env_stored) + unstored:
buff.append(f"\t{env}{self.enum_suffix},")
buff.append(f"\tenv_max{self.enum_suffix},")
max_stored = unstored[0] if len(unstored) else "env_max"
buff.append(f"\tenv_max_stored{self.enum_suffix} = {max_stored}{self.enum_suffix},")
return buff
def format_envs_enum(self) -> list[str]:
buff = []
if self.is_hybrid_automata():
buff.append(f"enum {self.enum_envs_def} {{")
buff += self.__get_enum_envs_content()
buff.append("};\n")
buff.append(f"_Static_assert(env_max_stored{self.enum_suffix} <= MAX_HA_ENV_LEN,"
' "Not enough slots");')
if {"ns", "us", "ms", "s"}.intersection(self.env_types.values()):
buff.append("#define HA_CLK_NS")
buff.append("")
return buff
def get_minimun_type(self) -> str:
min_type = "unsigned char"
if self.states.__len__() > 255:
if len(self.states) > 255:
min_type = "unsigned short"
if self.states.__len__() > 65535:
if len(self.states) > 65535:
min_type = "unsigned int"
if self.states.__len__() > 1000000:
raise Exception("Too many states: %d" % self.states.__len__())
if len(self.states) > 1000000:
raise AutomataError(f"Too many states: {len(self.states)}")
return min_type
def format_automaton_definition(self) -> list[str]:
min_type = self.get_minimun_type()
buff = []
buff.append("struct %s {" % self.struct_automaton_def)
buff.append("\tchar *state_names[state_max%s];" % (self.enum_suffix))
buff.append("\tchar *event_names[event_max%s];" % (self.enum_suffix))
buff.append("\t%s function[state_max%s][event_max%s];" % (min_type, self.enum_suffix, self.enum_suffix))
buff.append("\t%s initial_state;" % min_type)
buff.append("\tbool final_states[state_max%s];" % (self.enum_suffix))
buff.append(f"struct {self.struct_automaton_def} {{")
buff.append(f"\tchar *state_names[state_max{self.enum_suffix}];")
buff.append(f"\tchar *event_names[event_max{self.enum_suffix}];")
if self.is_hybrid_automata():
buff.append(f"\tchar *env_names[env_max{self.enum_suffix}];")
buff.append(f"\t{min_type} function[state_max{self.enum_suffix}][event_max{self.enum_suffix}];")
buff.append(f"\t{min_type} initial_state;")
buff.append(f"\tbool final_states[state_max{self.enum_suffix}];")
buff.append("};\n")
return buff
def format_aut_init_header(self) -> list[str]:
buff = []
buff.append("static const struct %s %s = {" % (self.struct_automaton_def, self.var_automaton_def))
buff.append(f"static const struct {self.struct_automaton_def} {self.var_automaton_def} = {{")
return buff
def __get_string_vector_per_line_content(self, entries: list[str]) -> str:
@@ -113,13 +147,24 @@ class Dot2c(Automata):
return buff
def format_aut_init_envs_string(self) -> list[str]:
buff = []
if self.is_hybrid_automata():
buff.append("\t.env_names = {")
# maintain consistent order with the enum
ordered_envs = list(self.env_stored) + self.__get_non_stored_envs()
buff.append(self.__get_string_vector_per_line_content(ordered_envs))
buff.append("\t},")
return buff
def __get_max_strlen_of_states(self) -> int:
max_state_name = max(self.states, key = len).__len__()
return max(max_state_name, self.invalid_state_str.__len__())
max_state_name = len(max(self.states, key=len))
return max(max_state_name, len(self.invalid_state_str))
def get_aut_init_function(self) -> str:
nr_states = self.states.__len__()
nr_events = self.events.__len__()
nr_states = len(self.states)
nr_events = len(self.events)
buff = []
maxlen = self.__get_max_strlen_of_states() + len(self.enum_suffix)
@@ -134,10 +179,10 @@ class Dot2c(Automata):
next_state = self.function[x][y] + self.enum_suffix
if linetoolong:
line += "\t\t\t%s" % next_state
line += f"\t\t\t{next_state}"
else:
line += "%*s" % (maxlen, next_state)
if y != nr_events-1:
line += f"{next_state:>{maxlen}}"
if y != nr_events - 1:
line += ",\n" if linetoolong else ", "
else:
line += ",\n\t\t}," if linetoolong else " },"
@@ -180,7 +225,7 @@ class Dot2c(Automata):
def format_aut_init_final_states(self) -> list[str]:
buff = []
buff.append("\t.final_states = { %s }," % self.get_aut_init_final_states())
buff.append(f"\t.final_states = {{ {self.get_aut_init_final_states()} }},")
return buff
@@ -196,7 +241,7 @@ class Dot2c(Automata):
def format_invalid_state(self) -> list[str]:
buff = []
buff.append("#define %s state_max%s\n" % (self.invalid_state_str, self.enum_suffix))
buff.append(f"#define {self.invalid_state_str} state_max{self.enum_suffix}\n")
return buff
@@ -205,10 +250,12 @@ class Dot2c(Automata):
buff += self.format_states_enum()
buff += self.format_invalid_state()
buff += self.format_events_enum()
buff += self.format_envs_enum()
buff += self.format_automaton_definition()
buff += self.format_aut_init_header()
buff += self.format_aut_init_states_string()
buff += self.format_aut_init_events_string()
buff += self.format_aut_init_envs_string()
buff += self.format_aut_init_function()
buff += self.format_aut_init_initial_state()
buff += self.format_aut_init_final_states()


@@ -8,8 +8,10 @@
# For further information, see:
# Documentation/trace/rv/da_monitor_synthesis.rst
from collections import deque
from .dot2c import Dot2c
from .generator import Monitor
from .automata import _EventConstraintKey, _StateConstraintKey, AutomataError
class dot2k(Monitor, Dot2c):
@@ -19,15 +21,22 @@ class dot2k(Monitor, Dot2c):
self.monitor_type = MonitorType
Monitor.__init__(self, extra_params)
Dot2c.__init__(self, file_path, extra_params.get("model_name"))
self.enum_suffix = "_%s" % self.name
self.enum_suffix = f"_{self.name}"
self.monitor_class = extra_params["monitor_class"]
def fill_monitor_type(self) -> str:
return self.monitor_type.upper()
buff = [ self.monitor_type.upper() ]
buff += self._fill_timer_type()
if self.monitor_type == "per_obj":
buff.append("typedef /* XXX: define the target type */ *monitor_target;")
return "\n".join(buff)
def fill_tracepoint_handlers_skel(self) -> str:
buff = []
buff += self._fill_hybrid_definitions()
for event in self.events:
buff.append("static void handle_%s(void *data, /* XXX: fill header */)" % event)
buff.append(f"static void handle_{event}(void *data, /* XXX: fill header */)")
buff.append("{")
handle = "handle_event"
if self.is_start_event(event):
@@ -37,10 +46,14 @@ class dot2k(Monitor, Dot2c):
buff.append("\t/* XXX: validate that this event is only valid in the initial state */")
handle = "handle_start_run_event"
if self.monitor_type == "per_task":
buff.append("\tstruct task_struct *p = /* XXX: how do I get p? */;");
buff.append("\tda_%s(p, %s%s);" % (handle, event, self.enum_suffix));
buff.append("\tstruct task_struct *p = /* XXX: how do I get p? */;")
buff.append(f"\tda_{handle}(p, {event}{self.enum_suffix});")
elif self.monitor_type == "per_obj":
buff.append("\tint id = /* XXX: how do I get the id? */;")
buff.append("\tmonitor_target t = /* XXX: how do I get t? */;")
buff.append(f"\tda_{handle}(id, t, {event}{self.enum_suffix});")
else:
buff.append("\tda_%s(%s%s);" % (handle, event, self.enum_suffix));
buff.append(f"\tda_{handle}({event}{self.enum_suffix});")
buff.append("}")
buff.append("")
return '\n'.join(buff)
@@ -48,25 +61,25 @@ class dot2k(Monitor, Dot2c):
def fill_tracepoint_attach_probe(self) -> str:
buff = []
for event in self.events:
buff.append("\trv_attach_trace_probe(\"%s\", /* XXX: tracepoint */, handle_%s);" % (self.name, event))
buff.append(f"\trv_attach_trace_probe(\"{self.name}\", /* XXX: tracepoint */, handle_{event});")
return '\n'.join(buff)
def fill_tracepoint_detach_helper(self) -> str:
buff = []
for event in self.events:
buff.append("\trv_detach_trace_probe(\"%s\", /* XXX: tracepoint */, handle_%s);" % (self.name, event))
buff.append(f"\trv_detach_trace_probe(\"{self.name}\", /* XXX: tracepoint */, handle_{event});")
return '\n'.join(buff)
def fill_model_h_header(self) -> list[str]:
buff = []
buff.append("/* SPDX-License-Identifier: GPL-2.0 */")
buff.append("/*")
buff.append(" * Automatically generated C representation of %s automaton" % (self.name))
buff.append(f" * Automatically generated C representation of {self.name} automaton")
buff.append(" * For further information about this format, see kernel documentation:")
buff.append(" * Documentation/trace/rv/deterministic_automata.rst")
buff.append(" */")
buff.append("")
buff.append("#define MONITOR_NAME %s" % (self.name))
buff.append(f"#define MONITOR_NAME {self.name}")
buff.append("")
return buff
@@ -75,23 +88,27 @@ class dot2k(Monitor, Dot2c):
#
# Adjust the definition names
#
self.enum_states_def = "states_%s" % self.name
self.enum_events_def = "events_%s" % self.name
self.struct_automaton_def = "automaton_%s" % self.name
self.var_automaton_def = "automaton_%s" % self.name
self.enum_states_def = f"states_{self.name}"
self.enum_events_def = f"events_{self.name}"
self.enum_envs_def = f"envs_{self.name}"
self.struct_automaton_def = f"automaton_{self.name}"
self.var_automaton_def = f"automaton_{self.name}"
buff = self.fill_model_h_header()
buff += self.format_model()
return '\n'.join(buff)
def _is_id_monitor(self) -> bool:
return self.monitor_type in ("per_task", "per_obj")
def fill_monitor_class_type(self) -> str:
if self.monitor_type == "per_task":
if self._is_id_monitor():
return "DA_MON_EVENTS_ID"
return "DA_MON_EVENTS_IMPLICIT"
def fill_monitor_class(self) -> str:
if self.monitor_type == "per_task":
if self._is_id_monitor():
return "da_monitor_id"
return "da_monitor"
@@ -107,16 +124,30 @@ class dot2k(Monitor, Dot2c):
("char *", "state"),
("char *", "event"),
]
tp_args_error_env = tp_args_error + [("char *", "env")]
tp_args_dict = {
"event": tp_args_event,
"error": tp_args_error,
"error_env": tp_args_error_env
}
tp_args_id = ("int ", "id")
tp_args = tp_args_event if tp_type == "event" else tp_args_error
if self.monitor_type == "per_task":
tp_args = tp_args_dict[tp_type]
if self._is_id_monitor():
tp_args.insert(0, tp_args_id)
tp_proto_c = ", ".join([a+b for a,b in tp_args])
tp_args_c = ", ".join([b for a,b in tp_args])
buff.append(" TP_PROTO(%s)," % tp_proto_c)
buff.append(" TP_ARGS(%s)" % tp_args_c)
tp_proto_c = ", ".join([a + b for a, b in tp_args])
tp_args_c = ", ".join([b for a, b in tp_args])
buff.append(f" TP_PROTO({tp_proto_c}),")
buff.append(f" TP_ARGS({tp_args_c})")
return '\n'.join(buff)
def _fill_hybrid_definitions(self) -> list:
"""Stub, not valid for deterministic automata"""
return []
def _fill_timer_type(self) -> list:
"""Stub, not valid for deterministic automata"""
return []
def fill_main_c(self) -> str:
main_c = super().fill_main_c()
@@ -127,5 +158,454 @@ class dot2k(Monitor, Dot2c):
main_c = main_c.replace("%%MIN_TYPE%%", min_type)
main_c = main_c.replace("%%NR_EVENTS%%", str(nr_events))
main_c = main_c.replace("%%MONITOR_TYPE%%", monitor_type)
main_c = main_c.replace("%%MONITOR_CLASS%%", self.monitor_class)
return main_c
class da2k(dot2k):
"""Deterministic automata only"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if self.is_hybrid_automata():
raise AutomataError("Detected hybrid automaton, use the 'ha' class")
class ha2k(dot2k):
"""Hybrid automata only"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if not self.is_hybrid_automata():
raise AutomataError("Detected deterministic automaton, use the 'da' class")
self.trace_h = self._read_template_file("trace_hybrid.h")
self.__parse_constraints()
def fill_monitor_class_type(self) -> str:
if self._is_id_monitor():
return "HA_MON_EVENTS_ID"
return "HA_MON_EVENTS_IMPLICIT"
def fill_monitor_class(self) -> str:
"""
Used for tracepoint classes; since they are shared, we keep da
instead of ha (also for the ha-specific tracepoints).
The tracepoint class is not visible to the tools.
"""
return super().fill_monitor_class()
def __adjust_value(self, value: str | int, unit: str | None) -> str:
"""Adjust the value in ns"""
try:
value = int(value)
except ValueError:
# it's a constant, a parameter or a function
if value.endswith("()"):
return value.replace("()", "(ha_mon)")
return value
match unit:
case "us":
value *= 10**3
case "ms":
value *= 10**6
case "s":
value *= 10**9
return str(value) + "ull"
def __parse_single_constraint(self, rule: dict, value: str) -> str:
return f"ha_get_env(ha_mon, {rule["env"]}{self.enum_suffix}, time_ns) {rule["op"]} {value}"
def __get_constraint_env(self, constr: str) -> str:
"""Extract the second argument from an ha_ function"""
env = constr.split("(")[1].split()[1].rstrip(")").rstrip(",")
assert env.rstrip(f"_{self.name}") in self.envs
return env
def __start_to_invariant_check(self, constr: str) -> str:
# by default assume the timer has ns expiration
env = self.__get_constraint_env(constr)
clock_type = "ns"
if self.env_types.get(env.rstrip(f"_{self.name}")) == "j":
clock_type = "jiffy"
return f"return ha_check_invariant_{clock_type}(ha_mon, {env}, time_ns)"
def __start_to_conv(self, constr: str) -> str:
"""
Undo the storage conversion done by ha_start_timer_
"""
return "ha_inv_to_guard" + constr[constr.find("("):]
def __parse_timer_constraint(self, rule: dict, value: str) -> str:
# by default assume the timer has ns expiration
clock_type = "ns"
if self.env_types.get(rule["env"]) == "j":
clock_type = "jiffy"
return (f"ha_start_timer_{clock_type}(ha_mon, {rule["env"]}{self.enum_suffix},"
f" {value}, time_ns)")
def __format_guard_rules(self, rules: list[str]) -> list[str]:
"""
Merge guard constraints as a single C return statement.
If the rules include a stored env, also check its validity.
Break lines in a best effort way that tries to keep readability.
"""
if not rules:
return []
invalid_checks = [f"ha_monitor_env_invalid(ha_mon, {env}{self.enum_suffix}) ||"
for env in self.env_stored if any(env in rule for rule in rules)]
if invalid_checks and len(rules) > 1:
rules[0] = "(" + rules[0]
rules[-1] = rules[-1] + ")"
rules = invalid_checks + rules
separator = "\n\t\t " if sum(len(r) for r in rules) > 80 else " "
return ["res = " + separator.join(rules)]
def __validate_constraint(self, key: tuple[int, int] | int, constr: str,
rule, reset) -> None:
# event constraints are tuples and allow both a rule and a reset
# state constraints are only used for expirations (e.g. clk<N)
if self.is_event_constraint(key):
if not rule and not reset:
raise AutomataError("Unrecognised event constraint "
f"({self.states[key[0]]}/{self.events[key[1]]}: {constr})")
if rule and (rule["env"] in self.env_types and
rule["env"] not in self.env_stored):
raise AutomataError("Clocks in hybrid automata always require a storage"
f" ({rule["env"]})")
else:
if not rule:
raise AutomataError("Unrecognised state constraint "
f"({self.states[key]}: {constr})")
if rule["env"] not in self.env_stored:
raise AutomataError("State constraints always require a storage "
f"({rule["env"]})")
if rule["op"] not in ["<", "<="]:
raise AutomataError("State constraints must be clock expirations like"
f" clk<N ({rule.string})")
def __parse_constraints(self) -> None:
self.guards: dict[_EventConstraintKey, str] = {}
self.invariants: dict[_StateConstraintKey, str] = {}
for key, constraint in self.constraints.items():
rules = []
resets = []
for c, sep in self._split_constraint_expr(constraint):
rule = self.constraint_rule.search(c)
reset = self.constraint_reset.search(c)
self.__validate_constraint(key, c, rule, reset)
if rule:
value = rule["val"]
value_len = len(rule["val"])
unit = None
if rule.groupdict().get("unit"):
value_len += len(rule["unit"])
unit = rule["unit"]
c = c[:-(value_len)]
value = self.__adjust_value(value, unit)
if self.is_event_constraint(key):
c = self.__parse_single_constraint(rule, value)
if sep:
c += f" {sep}"
else:
c = self.__parse_timer_constraint(rule, value)
rules.append(c)
if reset:
c = f"ha_reset_env(ha_mon, {reset["env"]}{self.enum_suffix}, time_ns)"
resets.append(c)
if self.is_event_constraint(key):
res = self.__format_guard_rules(rules) + resets
self.guards[key] = ";".join(res)
else:
self.invariants[key] = rules[0]
def __fill_verify_invariants_func(self) -> list[str]:
buff = []
if not self.invariants:
return []
buff.append(
f"""static inline bool ha_verify_invariants(struct ha_monitor *ha_mon,
\t\t\t\t\tenum {self.enum_states_def} curr_state, enum {self.enum_events_def} event,
\t\t\t\t\tenum {self.enum_states_def} next_state, u64 time_ns)
{{""")
_else = ""
for state, constr in sorted(self.invariants.items()):
check_str = self.__start_to_invariant_check(constr)
buff.append(f"\t{_else}if (curr_state == {self.states[state]}{self.enum_suffix})")
buff.append(f"\t\t{check_str};")
_else = "else "
buff.append("\treturn true;\n}\n")
return buff
def __fill_convert_inv_guard_func(self) -> list[str]:
buff = []
if not self.invariants:
return []
conflict_guards, conflict_invs = self.__find_inv_conflicts()
if not conflict_guards and not conflict_invs:
return []
buff.append(
f"""static inline void ha_convert_inv_guard(struct ha_monitor *ha_mon,
\t\t\t\t\tenum {self.enum_states_def} curr_state, enum {self.enum_events_def} event,
\t\t\t\t\tenum {self.enum_states_def} next_state, u64 time_ns)
{{""")
buff.append("\tif (curr_state == next_state)\n\t\treturn;")
_else = ""
for state, constr in sorted(self.invariants.items()):
# a state with invariant can reach us without reset
# multiple conflicts must have the same invariant, otherwise we cannot
# know how to reset the value
conf_i = [start for start, end in conflict_invs if end == state]
# we can reach a guard without reset
conf_g = [e for s, e in conflict_guards if s == state]
if not conf_i and not conf_g:
continue
buff.append(f"\t{_else}if (curr_state == {self.states[state]}{self.enum_suffix})")
buff.append(f"\t\t{self.__start_to_conv(constr)};")
_else = "else "
buff.append("}\n")
return buff
def __fill_verify_guards_func(self) -> list[str]:
buff = []
if not self.guards:
return []
buff.append(
f"""static inline bool ha_verify_guards(struct ha_monitor *ha_mon,
\t\t\t\t enum {self.enum_states_def} curr_state, enum {self.enum_events_def} event,
\t\t\t\t enum {self.enum_states_def} next_state, u64 time_ns)
{{
\tbool res = true;
""")
_else = ""
for edge, constr in sorted(self.guards.items()):
buff.append(f"\t{_else}if (curr_state == "
f"{self.states[edge[0]]}{self.enum_suffix} && "
f"event == {self.events[edge[1]]}{self.enum_suffix})")
if constr.count(";") > 0:
buff[-1] += " {"
buff += [f"\t\t{c};" for c in constr.split(";")]
if constr.count(";") > 0:
_else = "} else "
else:
_else = "else "
if _else[0] == "}":
buff.append("\t}")
buff.append("\treturn res;\n}\n")
return buff
    def __find_inv_conflicts(self) -> tuple[set[tuple[int, _EventConstraintKey]],
                                            set[tuple[int, _StateConstraintKey]]]:
        """
        Run a breadth-first search from every state with an invariant.
        Find any conflicting constraint reachable from there: either another
        state with an invariant or an edge with a non-reset guard.
        Stop when a reset is found.
        Return the sets of conflicting guards and invariants as tuples of the
        conflicting state and the constraint key.
        """
        conflict_guards: set[tuple[int, _EventConstraintKey]] = set()
        conflict_invs: set[tuple[int, _StateConstraintKey]] = set()
        for start_idx in self.invariants:
            queue = deque([(start_idx, 0)])  # (state_idx, distance)
            env = self.__get_constraint_env(self.invariants[start_idx])
            while queue:
                curr_idx, distance = queue.popleft()
                # Check state condition
                if curr_idx != start_idx and curr_idx in self.invariants:
                    conflict_invs.add((start_idx, _StateConstraintKey(curr_idx)))
                    continue
                # Check if we should stop
                if distance > len(self.states):
                    break
                if curr_idx != start_idx and distance > 1:
                    continue
                for event_idx, next_state_name in enumerate(self.function[curr_idx]):
                    if next_state_name == self.invalid_state_str:
                        continue
                    curr_guard = self.guards.get((curr_idx, event_idx), "")
                    if "reset" in curr_guard and env in curr_guard:
                        continue
                    if env in curr_guard:
                        conflict_guards.add((start_idx,
                                             _EventConstraintKey(curr_idx, event_idx)))
                        continue
                    next_idx = self.states.index(next_state_name)
                    queue.append((next_idx, distance + 1))
        return conflict_guards, conflict_invs
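The search described in the docstring can be exercised on a toy transition table. The states, guard strings, and helper below are hypothetical, simplified stand-ins for the rvgen data structures, kept only to show the idea: walk forward from a state with an invariant, stop at a reset, and flag any other constraint that reads the same clock.

```python
from collections import deque

# Toy automaton: state -> list of (guard, next_state).
# An invariant on a state conflicts with any non-reset guard on the
# same clock that is reachable before that clock is reset.
transitions = {
    "armed": [("", "waiting")],
    "waiting": [("clk < 10", "done"), ("reset(clk)", "armed")],
    "done": [],
}
invariants = {"armed": "clk"}  # state -> clock used by its invariant

def find_conflicts(transitions, invariants):
    conflicts = set()
    for start, clk in invariants.items():
        queue = deque([start])
        seen = {start}
        while queue:
            state = queue.popleft()
            for guard, nxt in transitions[state]:
                if "reset" in guard and clk in guard:
                    continue  # reset found: stop exploring this edge
                if clk in guard:
                    conflicts.add((start, state, guard))
                    continue
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return conflicts
```

Here the `clk < 10` guard on `waiting` is reachable from the invariant on `armed` without a reset in between, so it is reported as a conflict, while the `reset(clk)` edge terminates the search.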
    def __fill_setup_invariants_func(self) -> list[str]:
        buff = []
        if not self.invariants:
            return []
        buff.append(
            f"""static inline void ha_setup_invariants(struct ha_monitor *ha_mon,
\t\t\t\t enum {self.enum_states_def} curr_state, enum {self.enum_events_def} event,
\t\t\t\t enum {self.enum_states_def} next_state, u64 time_ns)
{{""")
        conditions = ["next_state == curr_state"]
        conditions += [f"event != {e}{self.enum_suffix}"
                       for e in self.self_loop_reset_events]
        condition_str = " && ".join(conditions)
        buff.append(f"\tif ({condition_str})\n\t\treturn;")
        _else = ""
        for state, constr in sorted(self.invariants.items()):
            buff.append(f"\t{_else}if (next_state == {self.states[state]}{self.enum_suffix})")
            buff.append(f"\t\t{constr};")
            _else = "else "
        for state in self.invariants:
            buff.append(f"\telse if (curr_state == {self.states[state]}{self.enum_suffix})")
            buff.append("\t\tha_cancel_timer(ha_mon);")
        buff.append("}\n")
        return buff
    def __fill_constr_func(self) -> list[str]:
        buff = []
        if not self.constraints:
            return []
        buff.append(
            """/*
 * These functions are used to validate state transitions.
 *
 * They are generated by parsing the model; there is usually no need to change them.
 * If the monitor requires a timer, there are functions responsible for arming it
 * when the next state has a constraint, cancelling it in any other case, and
 * checking that it didn't expire before the callback ran. Transitions to the
 * same state without a reset never affect timers.
 * Due to the different representations of invariants and guards, there is a
 * function to convert between them in case invariants or guards are reachable
 * from another invariant without a reset. Those functions are not present if
 * the model doesn't require them. This is all automatic but worth checking,
 * because it may reveal errors in the model (e.g. missing resets).
 */""")
        buff += self.__fill_verify_invariants_func()
        inv_conflicts = self.__fill_convert_inv_guard_func()
        buff += inv_conflicts
        buff += self.__fill_verify_guards_func()
        buff += self.__fill_setup_invariants_func()
        buff.append(
            f"""static bool ha_verify_constraint(struct ha_monitor *ha_mon,
\t\t\t\t enum {self.enum_states_def} curr_state, enum {self.enum_events_def} event,
\t\t\t\t enum {self.enum_states_def} next_state, u64 time_ns)
{{""")
        if self.invariants:
            buff.append("\tif (!ha_verify_invariants(ha_mon, curr_state, "
                        "event, next_state, time_ns))\n\t\treturn false;\n")
        if inv_conflicts:
            buff.append("\tha_convert_inv_guard(ha_mon, curr_state, event, "
                        "next_state, time_ns);\n")
        if self.guards:
            buff.append("\tif (!ha_verify_guards(ha_mon, curr_state, event, "
                        "next_state, time_ns))\n\t\treturn false;\n")
        if self.invariants:
            buff.append("\tha_setup_invariants(ha_mon, curr_state, event, next_state, time_ns);\n")
        buff.append("\treturn true;\n}\n")
        return buff
    def __fill_env_getter(self, env: str) -> str:
        if env in self.env_types:
            match self.env_types[env]:
                case "ns" | "us" | "ms" | "s":
                    return "ha_get_clk_ns(ha_mon, env, time_ns);"
                case "j":
                    return "ha_get_clk_jiffy(ha_mon, env);"
        return f"/* XXX: how do I read {env}? */"

    def __fill_env_resetter(self, env: str) -> str:
        if env in self.env_types:
            match self.env_types[env]:
                case "ns" | "us" | "ms" | "s":
                    return "ha_reset_clk_ns(ha_mon, env, time_ns);"
                case "j":
                    return "ha_reset_clk_jiffy(ha_mon, env);"
        return f"/* XXX: how do I reset {env}? */"
    def __fill_hybrid_get_reset_functions(self) -> list[str]:
        buff = []
        if self.is_hybrid_automata():
            for var in self.constraint_vars:
                if var.endswith("()"):
                    func_name = var.replace("()", "")
                    if func_name.isupper():
                        buff.append(f"#define {func_name}(ha_mon) "
                                    f"/* XXX: what is {func_name}(ha_mon)? */\n")
                    else:
                        buff.append(f"static inline u64 {func_name}(struct ha_monitor *ha_mon)\n{{")
                        buff.append(f"\treturn /* XXX: what is {func_name}(ha_mon)? */;")
                        buff.append("}\n")
                elif var.isupper():
                    buff.append(f"#define {var} /* XXX: what is {var}? */\n")
                else:
                    buff.append(f"static u64 {var} = /* XXX: default value */;")
                    buff.append(f"module_param({var}, ullong, 0644);\n")
            buff.append("""/*
 * These functions define how to read and reset the environment variables.
 *
 * Common environment variables like ns-based and jiffy-based clocks have
 * pre-defined getters and resetters you can use. The parser can infer the
 * type of an environment variable if you supply a measurement unit in the
 * constraint.
 * If you define your own functions, make sure to add appropriate memory
 * barriers if required.
 * Some environment variables don't require storage, as they read a system
 * state (e.g. preemption count). Those variables are never reset, so no reset
 * function is defined for monitors relying only on this type of variable.
 */""")
            buff.append("static u64 ha_get_env(struct ha_monitor *ha_mon, "
                        f"enum envs{self.enum_suffix} env, u64 time_ns)\n{{")
            _else = ""
            for env in self.envs:
                buff.append(f"\t{_else}if (env == {env}{self.enum_suffix})")
                buff.append(f"\t\treturn {self.__fill_env_getter(env)}")
                _else = "else "
            buff.append("\treturn ENV_INVALID_VALUE;\n}\n")
            if len(self.env_stored):
                buff.append("static void ha_reset_env(struct ha_monitor *ha_mon, "
                            f"enum envs{self.enum_suffix} env, u64 time_ns)\n{{")
                _else = ""
                for env in self.env_stored:
                    buff.append(f"\t{_else}if (env == {env}{self.enum_suffix})")
                    buff.append(f"\t\t{self.__fill_env_resetter(env)}")
                    _else = "else "
                buff.append("}\n")
        return buff
    def _fill_hybrid_definitions(self) -> list[str]:
        return self.__fill_hybrid_get_reset_functions() + self.__fill_constr_func()

    def _fill_timer_type(self) -> list:
        if self.invariants:
            return [
                "/* XXX: If the monitor has several instances, consider HA_TIMER_WHEEL */",
                "#define HA_TIMER_TYPE HA_TIMER_HRTIMER"
            ]
        return []


@@ -3,7 +3,7 @@
 #
 # Copyright (C) 2019-2022 Red Hat, Inc. Daniel Bristot de Oliveira <bristot@kernel.org>
 #
-# Abtract class for generating kernel runtime verification monitors from specification file
+# Abstract class for generating kernel runtime verification monitors from specification file

 import platform
 import os
@@ -40,7 +40,7 @@ class RVGenerator:
         if platform.system() != "Linux":
             raise OSError("I can only run on Linux.")

-        kernel_path = os.path.join("/lib/modules/%s/build" % platform.release(), self.rv_dir)
+        kernel_path = os.path.join(f"/lib/modules/{platform.release()}/build", self.rv_dir)

         # if the current kernel is from a distro this may not be a full kernel tree
         # verify that one of the files we are going to modify is available
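The conversion applied throughout these hunks, %-formatting replaced by f-strings, is purely mechanical: both forms build the same string, but the f-string evaluates the expression inline. A minimal illustration with made-up values (the release string below is not from the source):

```python
release = "6.17.0"

# Old rvgen style: printf-like %-formatting.
old_style = "/lib/modules/%s/build" % release

# New rvgen style: the expression is interpolated in place.
new_style = f"/lib/modules/{release}/build"
```

Both variables hold `/lib/modules/6.17.0/build`; the f-string form is easier to read when several values are interpolated, which is why the series converts the generator scripts wholesale.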
@@ -51,32 +51,26 @@ class RVGenerator:
             raise FileNotFoundError("Could not find the rv directory, do you have the kernel source installed?")

     def _read_file(self, path):
-        try:
-            fd = open(path, 'r')
-        except OSError:
-            raise Exception("Cannot open the file: %s" % path)
-
-        content = fd.read()
-        fd.close()
+        with open(path, 'r') as fd:
+            content = fd.read()
         return content

     def _read_template_file(self, file):
         try:
             path = os.path.join(self.abs_template_dir, file)
             return self._read_file(path)
-        except Exception:
+        except OSError:
             # Specific template file not found. Try the generic template file in the template/
             # directory, which is one level up
             path = os.path.join(self.abs_template_dir, "..", file)
             return self._read_file(path)

     def fill_parent(self):
-        return "&rv_%s" % self.parent if self.parent else "NULL"
+        return f"&rv_{self.parent}" if self.parent else "NULL"

     def fill_include_parent(self):
         if self.parent:
-            return "#include <monitors/%s/%s.h>\n" % (self.parent, self.parent)
+            return f"#include <monitors/{self.parent}/{self.parent}.h>\n"
         return ""

     def fill_tracepoint_handlers_skel(self):
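The `_read_file` rewrite above replaces manual open/close with a context manager: the file is closed even if `read()` raises, and an `OSError` from `open()` now propagates unchanged to the caller. A standalone sketch of the pattern (the scratch file and its contents are illustrative, not part of rvgen):

```python
import os
import tempfile

def read_file(path):
    # The context manager closes the file even if read() raises;
    # an OSError from open() propagates to the caller unchanged.
    with open(path, 'r') as fd:
        return fd.read()

# Usage: write a scratch spec file, then read it back.
fd, path = tempfile.mkstemp(text=True)
with os.fdopen(fd, 'w') as f:
    f.write("RULE = always (x imply y)\n")
content = read_file(path)
os.unlink(path)
```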
@@ -122,7 +116,7 @@ class RVGenerator:
         buff = []
         buff.append(" # XXX: add dependencies if there")
         if self.parent:
-            buff.append(" depends on RV_MON_%s" % self.parent.upper())
+            buff.append(f" depends on RV_MON_{self.parent.upper()}")
         buff.append(" default y")
         return '\n'.join(buff)
@@ -148,31 +142,30 @@ class RVGenerator:
         monitor_class_type = self.fill_monitor_class_type()
         if self.auto_patch:
             self._patch_file("rv_trace.h",
-                             "// Add new monitors based on CONFIG_%s here" % monitor_class_type,
-                             "#include <monitors/%s/%s_trace.h>" % (self.name, self.name))
-            return " - Patching %s/rv_trace.h, double check the result" % self.rv_dir
+                             f"// Add new monitors based on CONFIG_{monitor_class_type} here",
+                             f"#include <monitors/{self.name}/{self.name}_trace.h>")
+            return f" - Patching {self.rv_dir}/rv_trace.h, double check the result"
-        return """ - Edit %s/rv_trace.h:
-Add this line where other tracepoints are included and %s is defined:
-#include <monitors/%s/%s_trace.h>
-""" % (self.rv_dir, monitor_class_type, self.name, self.name)
+        return f""" - Edit {self.rv_dir}/rv_trace.h:
+Add this line where other tracepoints are included and {monitor_class_type} is defined:
+#include <monitors/{self.name}/{self.name}_trace.h>
+"""

     def _kconfig_marker(self, container=None) -> str:
-        return "# Add new %smonitors here" % (container + " "
-                                              if container else "")
+        return f"# Add new {container + ' ' if container else ''}monitors here"

     def fill_kconfig_tooltip(self):
         if self.auto_patch:
             # monitors with a container should stay together in the Kconfig
             self._patch_file("Kconfig",
                              self._kconfig_marker(self.parent),
-                             "source \"kernel/trace/rv/monitors/%s/Kconfig\"" % (self.name))
-            return " - Patching %s/Kconfig, double check the result" % self.rv_dir
+                             f"source \"kernel/trace/rv/monitors/{self.name}/Kconfig\"")
+            return f" - Patching {self.rv_dir}/Kconfig, double check the result"
-        return """ - Edit %s/Kconfig:
+        return f""" - Edit {self.rv_dir}/Kconfig:
 Add this line where other monitors are included:
-source \"kernel/trace/rv/monitors/%s/Kconfig\"
-""" % (self.rv_dir, self.name)
+source \"kernel/trace/rv/monitors/{self.name}/Kconfig\"
+"""

     def fill_makefile_tooltip(self):
         name = self.name
@@ -180,18 +173,18 @@ source \"kernel/trace/rv/monitors/%s/Kconfig\"
         if self.auto_patch:
             self._patch_file("Makefile",
                              "# Add new monitors here",
-                             "obj-$(CONFIG_RV_MON_%s) += monitors/%s/%s.o" % (name_up, name, name))
-            return " - Patching %s/Makefile, double check the result" % self.rv_dir
+                             f"obj-$(CONFIG_RV_MON_{name_up}) += monitors/{name}/{name}.o")
+            return f" - Patching {self.rv_dir}/Makefile, double check the result"
-        return """ - Edit %s/Makefile:
+        return f""" - Edit {self.rv_dir}/Makefile:
 Add this line where other monitors are included:
-obj-$(CONFIG_RV_MON_%s) += monitors/%s/%s.o
-""" % (self.rv_dir, name_up, name, name)
+obj-$(CONFIG_RV_MON_{name_up}) += monitors/{name}/{name}.o
+"""

     def fill_monitor_tooltip(self):
         if self.auto_patch:
-            return " - Monitor created in %s/monitors/%s" % (self.rv_dir, self. name)
-        return " - Move %s/ to the kernel's monitor directory (%s/monitors)" % (self.name, self.rv_dir)
+            return f" - Monitor created in {self.rv_dir}/monitors/{self.name}"
+        return f" - Move {self.name}/ to the kernel's monitor directory ({self.rv_dir}/monitors)"

     def __create_directory(self):
         path = self.name
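The tooltips above rely on `_patch_file`, which is not shown in these hunks: it inserts a line at a well-known marker comment so each newly generated monitor can be wired into `Kconfig`, the `Makefile`, and `rv_trace.h`. A hypothetical, simplified sketch of that marker-based insertion (not the rvgen implementation; the `stall` path is just an example name):

```python
def patch_text(text: str, marker: str, new_line: str) -> str:
    # Insert new_line next to the marker line, preserving the marker
    # so the next generated monitor can be patched in at the same spot.
    out = []
    for line in text.splitlines():
        if line.strip() == marker:
            out.append(new_line)
        out.append(line)
    return "\n".join(out)

kconfig = "menu \"RV monitors\"\n# Add new monitors here\nendmenu"
patched = patch_text(kconfig,
                     "# Add new monitors here",
                     "source \"kernel/trace/rv/monitors/stall/Kconfig\"")
```

Because the marker survives the edit, running the generator for several monitors keeps appending entries at the same place, which is why the tooltips still ask you to double-check the result.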
@@ -201,41 +194,27 @@ obj-$(CONFIG_RV_MON_%s) += monitors/%s/%s.o
             os.mkdir(path)
         except FileExistsError:
             return
-        except:
-            print("Fail creating the output dir: %s" % self.name)

     def __write_file(self, file_name, content):
-        try:
-            file = open(file_name, 'w')
-        except:
-            print("Fail writing to file: %s" % file_name)
-
-        file.write(content)
-        file.close()
+        with open(file_name, 'w') as file:
+            file.write(content)

     def _create_file(self, file_name, content):
-        path = "%s/%s" % (self.name, file_name)
+        path = f"{self.name}/{file_name}"
         if self.auto_patch:
             path = os.path.join(self.rv_dir, "monitors", path)
         self.__write_file(path, content)

-    def __get_main_name(self):
-        path = "%s/%s" % (self.name, "main.c")
-        if not os.path.exists(path):
-            return "main.c"
-        return "__main.c"

     def print_files(self):
         main_c = self.fill_main_c()
         self.__create_directory()
-        path = "%s.c" % self.name
+        path = f"{self.name}.c"
         self._create_file(path, main_c)

         model_h = self.fill_model_h()
-        path = "%s.h" % self.name
+        path = f"{self.name}.h"
         self._create_file(path, model_h)

         kconfig = self.fill_kconfig()
@@ -243,7 +222,7 @@ obj-$(CONFIG_RV_MON_%s) += monitors/%s/%s.o
 class Monitor(RVGenerator):
-    monitor_types = { "global" : 1, "per_cpu" : 2, "per_task" : 3 }
+    monitor_types = {"global": 1, "per_cpu": 2, "per_task": 3, "per_obj": 4}

     def __init__(self, extra_params={}):
         super().__init__(extra_params)
@@ -255,16 +234,18 @@ class Monitor(RVGenerator):
         monitor_class_type = self.fill_monitor_class_type()
         tracepoint_args_skel_event = self.fill_tracepoint_args_skel("event")
         tracepoint_args_skel_error = self.fill_tracepoint_args_skel("error")
+        tracepoint_args_skel_error_env = self.fill_tracepoint_args_skel("error_env")
         trace_h = trace_h.replace("%%MODEL_NAME%%", self.name)
         trace_h = trace_h.replace("%%MODEL_NAME_UP%%", self.name.upper())
         trace_h = trace_h.replace("%%MONITOR_CLASS%%", monitor_class)
         trace_h = trace_h.replace("%%MONITOR_CLASS_TYPE%%", monitor_class_type)
         trace_h = trace_h.replace("%%TRACEPOINT_ARGS_SKEL_EVENT%%", tracepoint_args_skel_event)
         trace_h = trace_h.replace("%%TRACEPOINT_ARGS_SKEL_ERROR%%", tracepoint_args_skel_error)
+        trace_h = trace_h.replace("%%TRACEPOINT_ARGS_SKEL_ERROR_ENV%%", tracepoint_args_skel_error_env)
         return trace_h

     def print_files(self):
         super().print_files()
         trace_h = self.fill_trace_h()
-        path = "%s_trace.h" % self.name
+        path = f"{self.name}_trace.h"
         self._create_file(path, trace_h)


@@ -9,6 +9,7 @@
 from ply.lex import lex
 from ply.yacc import yacc

+from .automata import AutomataError

 # Grammar:
 # ltl ::= opd | ( ltl ) | ltl binop ltl | unop ltl
@@ -62,7 +63,7 @@ t_ignore_COMMENT = r'\#.*'
 t_ignore = ' \t\n'

 def t_error(t):
-    raise ValueError(f"Illegal character '{t.value[0]}'")
+    raise AutomataError(f"Illegal character '{t.value[0]}'")

 lexer = lex()
@@ -394,7 +395,7 @@ class Variable:
     @staticmethod
     def expand(n: ASTNode, node: GraphNode, node_set) -> set[GraphNode]:
         for f in node.old:
-            if isinstance(f, NotOp) and f.op.child is n:
+            if isinstance(f.op, NotOp) and f.op.child is n:
                 return node_set

         node.old |= {n}
         return node.expand(node_set)
@@ -487,7 +488,7 @@ def p_unop(p):
     elif p[1] == "not":
         op = NotOp(p[2])
     else:
-        raise ValueError(f"Invalid unary operator {p[1]}")
+        raise AutomataError(f"Invalid unary operator {p[1]}")

     p[0] = ASTNode(op)
@@ -507,7 +508,7 @@ def p_binop(p):
     elif p[2] == "imply":
         op = ImplyOp(p[1], p[3])
     else:
-        raise ValueError(f"Invalid binary operator {p[2]}")
+        raise AutomataError(f"Invalid binary operator {p[2]}")

     p[0] = ASTNode(op)
@@ -526,7 +527,7 @@ def parse_ltl(s: str) -> ASTNode:
             subexpr[assign[0]] = assign[1]

     if rule is None:
-        raise ValueError("Please define your specification in the \"RULE = <LTL spec>\" format")
+        raise AutomataError("Please define your specification in the \"RULE = <LTL spec>\" format")

     for node in rule:
         if not isinstance(node.op, Variable):


@@ -4,6 +4,7 @@
 from pathlib import Path

 from . import generator
 from . import ltl2ba
+from .automata import AutomataError

 COLUMN_LIMIT = 100
@@ -43,13 +44,17 @@ def abbreviate_atoms(atoms: list[str]) -> list[str]:
         skip = ["is", "by", "or", "and"]
         return '_'.join([x[:2] for x in s.lower().split('_') if x not in skip])

-    abbrs = []
-    for atom in atoms:
+    def find_share_length(atom: str) -> int:
         for i in range(len(atom), -1, -1):
             if sum(a.startswith(atom[:i]) for a in atoms) > 1:
-                break
-        share = atom[:i]
-        unique = atom[i:]
+                return i
+        return 0
+
+    abbrs = []
+    for atom in atoms:
+        share_len = find_share_length(atom)
+        share = atom[:share_len]
+        unique = atom[share_len:]
         abbrs.append((shorten(share) + shorten(unique)))

     return abbrs
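The refactored helper can be exercised standalone. This sketch reproduces the abbreviation scheme from the hunk above on made-up atom names (not atoms from any real model): each atom is split into the prefix it shares with at least one other atom and its unique suffix, and each `_`-separated word is trimmed to two letters.

```python
def abbreviate(atoms):
    skip = ["is", "by", "or", "and"]

    def shorten(s):
        # Keep the first two letters of each '_'-separated word.
        return '_'.join(x[:2] for x in s.lower().split('_') if x not in skip)

    def find_share_length(atom):
        # Longest prefix this atom shares with at least one other atom.
        for i in range(len(atom), -1, -1):
            if sum(a.startswith(atom[:i]) for a in atoms) > 1:
                return i
        return 0

    abbrs = []
    for atom in atoms:
        share_len = find_share_length(atom)
        abbrs.append(shorten(atom[:share_len]) + shorten(atom[share_len:]))
    return abbrs
```

Atoms with a common prefix keep a common abbreviation prefix, so related atoms stay visually grouped in the generated tracepoint output while the names stay short.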
@@ -60,20 +65,23 @@ class ltl2k(generator.Monitor):
         if MonitorType != "per_task":
             raise NotImplementedError("Only per_task monitor is supported for LTL")
         super().__init__(extra_params)
-        with open(file_path) as f:
-            self.atoms, self.ba, self.ltl = ltl2ba.create_graph(f.read())
+        try:
+            with open(file_path) as f:
+                self.atoms, self.ba, self.ltl = ltl2ba.create_graph(f.read())
+        except OSError as exc:
+            raise AutomataError(exc.strerror) from exc
         self.atoms_abbr = abbreviate_atoms(self.atoms)
         self.name = extra_params.get("model_name")
         if not self.name:
             self.name = Path(file_path).stem

-    def _fill_states(self) -> str:
+    def _fill_states(self) -> list[str]:
         buf = [
             "enum ltl_buchi_state {",
         ]
         for node in self.ba:
-            buf.append("\tS%i," % node.id)
+            buf.append(f"\tS{node.id},")
         buf.append("\tRV_NUM_BA_STATES")
         buf.append("};")
         buf.append("static_assert(RV_NUM_BA_STATES <= RV_MAX_BA_STATES);")
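The `OSError`-to-`AutomataError` wrapping shown above lets the rvgen entry point catch a single domain exception type without a backtrace, while `raise ... from exc` keeps the original error chained for debugging. A minimal standalone version (the class body and the failing path below are illustrative; only the `AutomataError` name comes from the source):

```python
class AutomataError(Exception):
    """Domain error the generator front end reports without a backtrace."""

def load_spec(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:
        # 'from exc' preserves the original OSError as __cause__.
        raise AutomataError(exc.strerror) from exc

# Usage: a missing file surfaces as AutomataError, with the
# underlying OSError still reachable for debugging.
try:
    load_spec("/nonexistent/spec.ltl")
except AutomataError as err:
    cause = err.__cause__
```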
@@ -82,7 +90,7 @@ class ltl2k(generator.Monitor):
     def _fill_atoms(self):
         buf = ["enum ltl_atom {"]
         for a in sorted(self.atoms):
-            buf.append("\tLTL_%s," % a)
+            buf.append(f"\tLTL_{a},")
         buf.append("\tLTL_NUM_ATOM")
         buf.append("};")
         buf.append("static_assert(LTL_NUM_ATOM <= RV_MAX_LTL_ATOM);")
@@ -96,7 +104,7 @@ class ltl2k(generator.Monitor):
         ]

         for name in self.atoms_abbr:
-            buf.append("\t\t\"%s\"," % name)
+            buf.append(f"\t\t\"{name}\",")

         buf.extend([
             "\t};",
@@ -113,19 +121,19 @@ class ltl2k(generator.Monitor):
                 continue

             if isinstance(node.op, ltl2ba.AndOp):
-                buf.append("\tbool %s = %s && %s;" % (node, node.op.left, node.op.right))
+                buf.append(f"\tbool {node} = {node.op.left} && {node.op.right};")
                 required_values |= {str(node.op.left), str(node.op.right)}
             elif isinstance(node.op, ltl2ba.OrOp):
-                buf.append("\tbool %s = %s || %s;" % (node, node.op.left, node.op.right))
+                buf.append(f"\tbool {node} = {node.op.left} || {node.op.right};")
                 required_values |= {str(node.op.left), str(node.op.right)}
             elif isinstance(node.op, ltl2ba.NotOp):
-                buf.append("\tbool %s = !%s;" % (node, node.op.child))
+                buf.append(f"\tbool {node} = !{node.op.child};")
                 required_values.add(str(node.op.child))

         for atom in self.atoms:
             if atom.lower() not in required_values:
                 continue
-            buf.append("\tbool %s = test_bit(LTL_%s, mon->atoms);" % (atom.lower(), atom))
+            buf.append(f"\tbool {atom.lower()} = test_bit(LTL_{atom}, mon->atoms);")

         buf.reverse()
@@ -153,7 +161,7 @@ class ltl2k(generator.Monitor):
         ])

         for node in self.ba:
-            buf.append("\tcase S%i:" % node.id)
+            buf.append(f"\tcase S{node.id}:")

             for o in sorted(node.outgoing):
                 line = "\t\tif "
@@ -163,7 +171,7 @@ class ltl2k(generator.Monitor):
                 lines = break_long_line(line, indent)
                 buf.extend(lines)

-                buf.append("\t\t\t__set_bit(S%i, next);" % o.id)
+                buf.append(f"\t\t\t__set_bit(S{o.id}, next);")
             buf.append("\t\tbreak;")

         buf.extend([
             "\t}",
@@ -197,7 +205,7 @@ class ltl2k(generator.Monitor):
             lines = break_long_line(line, indent)
             buf.extend(lines)

-            buf.append("\t\t__set_bit(S%i, mon->states);" % node.id)
+            buf.append(f"\t\t__set_bit(S{node.id}, mon->states);")
         buf.append("}")
         return buf
@@ -205,23 +213,21 @@ class ltl2k(generator.Monitor):
         buff = []
         buff.append("static void handle_example_event(void *data, /* XXX: fill header */)")
         buff.append("{")
-        buff.append("\tltl_atom_update(task, LTL_%s, true/false);" % self.atoms[0])
+        buff.append(f"\tltl_atom_update(task, LTL_{self.atoms[0]}, true/false);")
        buff.append("}")
         buff.append("")
         return '\n'.join(buff)

     def fill_tracepoint_attach_probe(self):
-        return "\trv_attach_trace_probe(\"%s\", /* XXX: tracepoint */, handle_example_event);" \
-            % self.name
+        return f"\trv_attach_trace_probe(\"{self.name}\", /* XXX: tracepoint */, handle_example_event);"

     def fill_tracepoint_detach_helper(self):
-        return "\trv_detach_trace_probe(\"%s\", /* XXX: tracepoint */, handle_sample_event);" \
-            % self.name
+        return f"\trv_detach_trace_probe(\"{self.name}\", /* XXX: tracepoint */, handle_sample_event);"

     def fill_atoms_init(self):
         buff = []
         for a in self.atoms:
-            buff.append("\tltl_atom_set(mon, LTL_%s, true/false);" % a)
+            buff.append(f"\tltl_atom_set(mon, LTL_{a}, true/false);")
         return '\n'.join(buff)

     def fill_model_h(self):


@@ -21,7 +21,7 @@
  */
 #define RV_MON_TYPE RV_MON_%%MONITOR_TYPE%%
 #include "%%MODEL_NAME%%.h"
-#include <rv/da_monitor.h>
+#include <rv/%%MONITOR_CLASS%%_monitor.h>

 /*
  * This is the instrumentation part of the monitor.


@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Snippet to be included in rv_trace.h
+ */
+
+#ifdef CONFIG_RV_MON_%%MODEL_NAME_UP%%
+DEFINE_EVENT(event_%%MONITOR_CLASS%%, event_%%MODEL_NAME%%,
+	     %%TRACEPOINT_ARGS_SKEL_EVENT%%);
+DEFINE_EVENT(error_%%MONITOR_CLASS%%, error_%%MODEL_NAME%%,
+	     %%TRACEPOINT_ARGS_SKEL_ERROR%%);
+DEFINE_EVENT(error_env_%%MONITOR_CLASS%%, error_env_%%MODEL_NAME%%,
+	     %%TRACEPOINT_ARGS_SKEL_ERROR_ENV%%);
+#endif /* CONFIG_RV_MON_%%MODEL_NAME_UP%% */