nvme-multipath: Add visibility for numa io-policy

This patch adds nvme native multipath visibility for the numa io-policy.
It adds a new attribute file named "numa_nodes" under the namespace
gendisk device path node, which prints the list of NUMA nodes preferred
by the given namespace path. The numa nodes value is a comma-delimited
list of nodes or an A-B range of nodes.

For instance, if we have a shared namespace accessible from two different
controllers/paths, then listing the multipath directory of the head node
of the shared namespace would show the following output:

$ ls -l /sys/block/nvme1n1/multipath/
nvme1c1n1 -> ../../../../../pci052e:78/052e:78:00.0/nvme/nvme1/nvme1c1n1
nvme1c3n1 -> ../../../../../pci058e:78/058e:78:00.0/nvme/nvme3/nvme1c3n1

In the above example, nvme1n1 is the head gendisk node created for a
shared namespace, and this namespace is accessible through the nvme1c1n1
and nvme1c3n1 paths. For the numa io-policy we can then refer to the
"numa_nodes" attribute file created under each namespace path:

$ cat /sys/block/nvme1n1/multipath/nvme1c1n1/numa_nodes
0-1

$ cat /sys/block/nvme1n1/multipath/nvme1c3n1/numa_nodes
2-3

From the above output, we infer that an I/O workload targeted at nvme1n1
and running on NUMA nodes 0 and 1 would prefer using path nvme1c1n1.
Similarly, an I/O workload running on NUMA nodes 2 and 3 would prefer
using path nvme1c3n1. Reading the "numa_nodes" file when the configured
io-policy is anything but numa shows no output.
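The attribute is populated only while the subsystem's io-policy is set
to numa. Assuming the shared namespace above belongs to nvme-subsys1
(the subsystem name here is an assumption for this example), the policy
can be inspected and switched through the subsystem's "iopolicy"
sysfs attribute:

```shell
# Subsystem name (nvme-subsys1) is an assumption; adjust for your system.
cat /sys/class/nvme-subsystem/nvme-subsys1/iopolicy    # show current policy
echo numa > /sys/class/nvme-subsystem/nvme-subsys1/iopolicy
```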

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Author:    Nilay Shroff
Date:      2025-01-12 18:11:45 +05:30
Committer: Keith Busch
Parent:    4dbd2b2ebe
Commit:    6546cc4a56

3 changed files with 33 additions and 0 deletions

@@ -976,6 +976,33 @@ static ssize_t ana_state_show(struct device *dev, struct device_attribute *attr,
}
DEVICE_ATTR_RO(ana_state);

static ssize_t numa_nodes_show(struct device *dev, struct device_attribute *attr,
		char *buf)
{
	int node, srcu_idx;
	nodemask_t numa_nodes;
	struct nvme_ns *current_ns;
	struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
	struct nvme_ns_head *head = ns->head;

	if (head->subsys->iopolicy != NVME_IOPOLICY_NUMA)
		return 0;

	nodes_clear(numa_nodes);

	srcu_idx = srcu_read_lock(&head->srcu);
	for_each_node(node) {
		current_ns = srcu_dereference(head->current_path[node],
				&head->srcu);
		if (ns == current_ns)
			node_set(node, numa_nodes);
	}
	srcu_read_unlock(&head->srcu, srcu_idx);

	return sysfs_emit(buf, "%*pbl\n", nodemask_pr_args(&numa_nodes));
}
DEVICE_ATTR_RO(numa_nodes);

static int nvme_lookup_ana_group_desc(struct nvme_ctrl *ctrl,
		struct nvme_ana_group_desc *desc, void *data)
{