Merge branch 'pci/endpoint'

- Free all previously requested IRQs in epf_ntb_db_bar_init_msi_doorbell()
  error path (Koichiro Den)

- Free doorbell IRQ in pci-epf-test only if it has actually been requested
  (Koichiro Den)

- Discard pointer to doorbell message array after freeing it in
  pci_epf_alloc_doorbell() error path (Koichiro Den)

- Advertise dynamic inbound mapping support in pci-epf-test and update host
  pci_endpoint_test to skip doorbell testing if not advertised by endpoint
  (Koichiro Den)

- Constify configfs item and group operations (Christophe JAILLET)

- Use array_index_nospec() on configfs MW show/store attributes (Koichiro
  Den)

- Return -ERANGE (not -EINVAL) for configfs out-of-range MW index (Koichiro
  Den)

- Return 0, not remaining timeout, when MHI eDMA ops complete so
  mhi_ep_ring_add_element() doesn't interpret non-zero as failure (Daniel
  Hodges)

- Remove duplicate resource teardown in vntb and ntb that leads to an oops
  when .allow_link() fails or .drop_link() is called (Koichiro Den)

- Disable vntb delayed work before clearing BAR mappings and doorbells to
  avoid an oops caused by running the work after resources have been torn
  down (Koichiro Den)

- Fix pci_epf_add_vepf() kernel-doc typo (Alok Tiwari)

- Propagate pci_epf_create() errors to pci_epf_make() callers (Alok Tiwari)

- Remove redundant BAR_RESERVED annotation for the high order part of a
  64-bit BAR (Niklas Cassel)

- Add a way to describe reserved subregions within BARs, e.g.,
  platform-owned fixed register windows, and use it for the RK3588 BAR4 DMA
  ctrl window (Koichiro Den)

- Add BAR_DISABLED for BARs that will never be available to an EPF driver,
  and change some BAR_RESERVED annotations to BAR_DISABLED (Niklas Cassel)

- Disable BARs in common code instead of in each glue driver (Niklas
  Cassel)

- Advertise reserved BARs in Capabilities so host-side drivers can skip
  them (Niklas Cassel)

- Skip reserved BARs in selftests (Niklas Cassel)

- Improve error messages and include device name when available (Manivannan
  Sadhasivam)

- Add NTB .get_dma_dev() callback for cases where DMA API requires a
  different device, e.g., vNTB devices (Koichiro Den)

- Return -EINVAL, not -ENOSPC, if the endpoint test determines the subrange
  size is too small (Koichiro Den)

- Add reserved region types for MSI-X Table and PBA so Endpoint controllers
  can describe them as hardware-owned regions in a BAR_RESERVED BAR
  (Manikanta Maddireddy)

- Make Tegra194/234 BAR0 programmable and remove 1MB size limit (Manikanta
  Maddireddy)

- Expose Tegra BAR2 (MSI-X) and BAR4 (DMA) as 64-bit BAR_RESERVED
  (Manikanta Maddireddy)

- Add Tegra194 and Tegra234 device table entries to pci_endpoint_test
  (Manikanta Maddireddy)

- Skip the BAR subrange selftest if there are not enough inbound window
  resources to run the test (Christian Bruel)

* pci/endpoint:
  selftests: pci_endpoint: Skip BAR subrange test on -ENOSPC
  misc: pci_endpoint_test: Add Tegra194 and Tegra234 device table entries
  PCI: tegra194: Expose BAR2 (MSI-X) and BAR4 (DMA) as 64-bit BAR_RESERVED
  PCI: tegra194: Make BAR0 programmable and remove 1MB size limit
  PCI: endpoint: Add reserved region type for MSI-X Table and PBA
  misc: pci_endpoint_test: Use -EINVAL for small subrange size
  PCI: endpoint: pci-epf-vntb: Implement .get_dma_dev()
  NTB: ntb_transport: Use ntb_get_dma_dev() for DMA buffers
  NTB: core: Add .get_dma_dev() callback to ntb_dev_ops
  PCI: endpoint: Improve error messages
  PCI: endpoint: Print the EPF name in the error log of pci_epf_make()
  selftests: pci_endpoint: Skip reserved BARs
  misc: pci_endpoint_test: Give reserved BARs a distinct error code
  PCI: endpoint: pci-epf-test: Advertise reserved BARs
  PCI: dwc: Disable BARs in common code instead of in each glue driver
  PCI: dwc: Replace certain BAR_RESERVED with BAR_DISABLED in glue drivers
  PCI: endpoint: Introduce pci_epc_bar_type BAR_DISABLED
  PCI: dw-rockchip: Describe RK3588 BAR4 DMA ctrl window
  PCI: endpoint: Describe reserved subregions within BARs
  PCI: endpoint: Allow only_64bit on BAR_RESERVED
  PCI: endpoint: Do not mark the BAR succeeding a 64-bit BAR as BAR_RESERVED
  PCI: endpoint: Propagate error from pci_epf_create()
  PCI: endpoint: Fix typo in pci_epf_add_vepf() kernel-doc
  PCI: endpoint: pci-epf-vntb: Stop cmd_handler work in epf_ntb_epc_cleanup
  PCI: endpoint: pci-epf-ntb: Remove duplicate resource teardown
  PCI: endpoint: pci-epf-vntb: Remove duplicate resource teardown
  PCI: epf-mhi: Return 0, not remaining timeout, when eDMA ops complete
  PCI: endpoint: pci-epf-vntb: Return -ERANGE for out-of-range MW index
  PCI: endpoint: pci-epf-vntb: Use array_index_nospec() on mws_size[] access
  PCI: endpoint: Constify struct configfs_item_operations and configfs_group_operations
  selftests: pci_endpoint: Skip doorbell test when unsupported
  misc: pci_endpoint_test: Gate doorbell test on dynamic inbound mapping
  PCI: endpoint: pci-epf-test: Advertise dynamic inbound mapping support
  PCI: endpoint: pci-ep-msi: Fix error unwind and prevent double alloc
  PCI: endpoint: pci-epf-test: Don't free doorbell IRQ unless requested
  PCI: endpoint: pci-epf-vntb: Fix MSI doorbell IRQ unwind
This commit is contained in:
Bjorn Helgaas
2026-04-13 12:50:07 -05:00
28 changed files with 313 additions and 250 deletions

@@ -378,10 +378,6 @@ static void dra7xx_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci);
enum pci_barno bar;
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
dw_pcie_ep_reset_bar(pci, bar);
dra7xx_pcie_enable_wrapper_interrupts(dra7xx);
}

@@ -1401,15 +1401,6 @@ static const struct dw_pcie_ops dw_pcie_ops = {
.stop_link = imx_pcie_stop_link,
};
static void imx_pcie_ep_init(struct dw_pcie_ep *ep)
{
enum pci_barno bar;
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
for (bar = BAR_0; bar <= BAR_5; bar++)
dw_pcie_ep_reset_bar(pci, bar);
}
static int imx_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
unsigned int type, u16 interrupt_num)
{
@@ -1433,19 +1424,19 @@ static int imx_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
static const struct pci_epc_features imx8m_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_3] = { .type = BAR_RESERVED, },
.bar[BAR_1] = { .type = BAR_DISABLED, },
.bar[BAR_3] = { .type = BAR_DISABLED, },
.bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = SZ_256, },
.bar[BAR_5] = { .type = BAR_RESERVED, },
.bar[BAR_5] = { .type = BAR_DISABLED, },
.align = SZ_64K,
};
static const struct pci_epc_features imx8q_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_3] = { .type = BAR_RESERVED, },
.bar[BAR_5] = { .type = BAR_RESERVED, },
.bar[BAR_1] = { .type = BAR_DISABLED, },
.bar[BAR_3] = { .type = BAR_DISABLED, },
.bar[BAR_5] = { .type = BAR_DISABLED, },
.align = SZ_64K,
};
@@ -1478,7 +1469,6 @@ imx_pcie_ep_get_features(struct dw_pcie_ep *ep)
}
static const struct dw_pcie_ep_ops pcie_ep_ops = {
.init = imx_pcie_ep_init,
.raise_irq = imx_pcie_ep_raise_irq,
.get_features = imx_pcie_ep_get_features,
};

@@ -933,6 +933,18 @@ static const struct pci_epc_features ks_pcie_am654_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.msix_capable = true,
/*
* TODO: This driver is the only DWC glue driver that had BAR_RESERVED
* BARs, but did not call dw_pcie_ep_reset_bar() for the reserved BARs.
*
* To not change the existing behavior, these BARs were not migrated to
* BAR_DISABLED. If this driver wants the BAR_RESERVED BARs to be
* disabled, it should migrate them to BAR_DISABLED.
*
* If they actually should be enabled, then the driver must also define
* what is behind these reserved BARs, see the definition of struct
* pci_epc_bar_rsvd_region.
*/
.bar[BAR_0] = { .type = BAR_RESERVED, },
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_2] = { .type = BAR_RESIZABLE, },

@@ -152,15 +152,11 @@ static void ls_pcie_ep_init(struct dw_pcie_ep *ep)
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct ls_pcie_ep *pcie = to_ls_pcie_ep(pci);
struct dw_pcie_ep_func *ep_func;
enum pci_barno bar;
ep_func = dw_pcie_ep_get_func_from_ep(ep, 0);
if (!ep_func)
return;
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
dw_pcie_ep_reset_bar(pci, bar);
pcie->ls_epc->msi_capable = ep_func->msi_cap ? true : false;
pcie->ls_epc->msix_capable = ep_func->msix_cap ? true : false;
}
@@ -251,9 +247,7 @@ static int __init ls_pcie_ep_probe(struct platform_device *pdev)
pci->ops = pcie->drvdata->dw_pcie_ops;
ls_epc->bar[BAR_2].only_64bit = true;
ls_epc->bar[BAR_3].type = BAR_RESERVED;
ls_epc->bar[BAR_4].only_64bit = true;
ls_epc->bar[BAR_5].type = BAR_RESERVED;
ls_epc->linkup_notifier = true;
pcie->pci = pci;

@@ -340,15 +340,11 @@ static void artpec6_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct artpec6_pcie *artpec6_pcie = to_artpec6_pcie(pci);
enum pci_barno bar;
artpec6_pcie_assert_core_reset(artpec6_pcie);
artpec6_pcie_init_phy(artpec6_pcie);
artpec6_pcie_deassert_core_reset(artpec6_pcie);
artpec6_pcie_wait_for_phy(artpec6_pcie);
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
dw_pcie_ep_reset_bar(pci, bar);
}
static int artpec6_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,

@@ -1114,6 +1114,28 @@ static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci)
dw_pcie_dbi_ro_wr_dis(pci);
}
static void dw_pcie_ep_disable_bars(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_epc_bar_type bar_type;
enum pci_barno bar;
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
bar_type = dw_pcie_ep_get_bar_type(ep, bar);
/*
* Reserved BARs should not get disabled by default. All other
* BAR types are disabled by default.
*
* This is in line with the current EPC core design, where all
* BARs are disabled by default, and then the EPF driver enables
* the BARs it wishes to use.
*/
if (bar_type != BAR_RESERVED)
dw_pcie_ep_reset_bar(pci, bar);
}
}
/**
* dw_pcie_ep_init_registers - Initialize DWC EP specific registers
* @ep: DWC EP device
@@ -1196,6 +1218,8 @@ int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
if (ep->ops->init)
ep->ops->init(ep);
dw_pcie_ep_disable_bars(ep);
/*
* PCIe r6.0, section 7.9.15 states that for endpoints that support
* PTM, this capability structure is required in exactly one

@@ -32,15 +32,6 @@ struct dw_plat_pcie_of_data {
static const struct dw_pcie_host_ops dw_plat_pcie_host_ops = {
};
static void dw_plat_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar;
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
dw_pcie_ep_reset_bar(pci, bar);
}
static int dw_plat_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
unsigned int type, u16 interrupt_num)
{
@@ -73,7 +64,6 @@ dw_plat_pcie_get_features(struct dw_pcie_ep *ep)
}
static const struct dw_pcie_ep_ops pcie_ep_ops = {
.init = dw_plat_pcie_ep_init,
.raise_irq = dw_plat_pcie_ep_raise_irq,
.get_features = dw_plat_pcie_get_features,
};

@@ -361,13 +361,9 @@ static void rockchip_pcie_ep_hide_broken_ats_cap_rk3588(struct dw_pcie_ep *ep)
static void rockchip_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar;
rockchip_pcie_enable_l0s(pci);
rockchip_pcie_ep_hide_broken_ats_cap_rk3588(ep);
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
dw_pcie_ep_reset_bar(pci, bar);
};
static int rockchip_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
@@ -403,12 +399,19 @@ static const struct pci_epc_features rockchip_pcie_epc_features_rk3568 = {
.bar[BAR_5] = { .type = BAR_RESIZABLE, },
};
static const struct pci_epc_bar_rsvd_region rk3588_bar4_rsvd[] = {
{
/* DMA_CAP (BAR4: DMA Port Logic Structure) */
.type = PCI_EPC_BAR_RSVD_DMA_CTRL_MMIO,
.offset = 0x0,
.size = 0x2000,
},
};
/*
* BAR4 on rk3588 exposes the ATU Port Logic Structure to the host regardless of
* iATU settings for BAR4. This means that BAR4 cannot be used by an EPF driver,
* so mark it as RESERVED. (rockchip_pcie_ep_init() will disable all BARs by
* default.) If the host could write to BAR4, the iATU settings (for all other
* BARs) would be overwritten, resulting in (all other BARs) no longer working.
* so mark it as RESERVED.
*/
static const struct pci_epc_features rockchip_pcie_epc_features_rk3588 = {
DWC_EPC_COMMON_FEATURES,
@@ -420,7 +423,11 @@ static const struct pci_epc_features rockchip_pcie_epc_features_rk3588 = {
.bar[BAR_1] = { .type = BAR_RESIZABLE, },
.bar[BAR_2] = { .type = BAR_RESIZABLE, },
.bar[BAR_3] = { .type = BAR_RESIZABLE, },
.bar[BAR_4] = { .type = BAR_RESERVED, },
.bar[BAR_4] = {
.type = BAR_RESERVED,
.nr_rsvd_regions = ARRAY_SIZE(rk3588_bar4_rsvd),
.rsvd_regions = rk3588_bar4_rsvd,
},
.bar[BAR_5] = { .type = BAR_RESIZABLE, },
};

@@ -313,11 +313,8 @@ static const struct pci_epc_features keembay_pcie_epc_features = {
.msi_capable = true,
.msix_capable = true,
.bar[BAR_0] = { .only_64bit = true, },
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_2] = { .only_64bit = true, },
.bar[BAR_3] = { .type = BAR_RESERVED, },
.bar[BAR_4] = { .only_64bit = true, },
.bar[BAR_5] = { .type = BAR_RESERVED, },
.align = SZ_16K,
};

@@ -850,9 +850,7 @@ static const struct pci_epc_features qcom_pcie_epc_features = {
.msi_capable = true,
.align = SZ_4K,
.bar[BAR_0] = { .only_64bit = true, },
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_2] = { .only_64bit = true, },
.bar[BAR_3] = { .type = BAR_RESERVED, },
};
static const struct pci_epc_features *
@@ -861,17 +859,7 @@ qcom_pcie_epc_get_features(struct dw_pcie_ep *pci_ep)
return &qcom_pcie_epc_features;
}
static void qcom_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar;
for (bar = BAR_0; bar <= BAR_5; bar++)
dw_pcie_ep_reset_bar(pci, bar);
}
static const struct dw_pcie_ep_ops pci_ep_ops = {
.init = qcom_pcie_ep_init,
.raise_irq = qcom_pcie_ep_raise_irq,
.get_features = qcom_pcie_epc_get_features,
};

@@ -386,15 +386,6 @@ static void rcar_gen4_pcie_ep_pre_init(struct dw_pcie_ep *ep)
writel(PCIEDMAINTSTSEN_INIT, rcar->base + PCIEDMAINTSTSEN);
}
static void rcar_gen4_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar;
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
dw_pcie_ep_reset_bar(pci, bar);
}
static void rcar_gen4_pcie_ep_deinit(struct rcar_gen4_pcie *rcar)
{
writel(0, rcar->base + PCIEDMAINTSTSEN);
@@ -422,10 +413,10 @@ static int rcar_gen4_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
static const struct pci_epc_features rcar_gen4_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.msi_capable = true,
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_3] = { .type = BAR_RESERVED, },
.bar[BAR_1] = { .type = BAR_DISABLED, },
.bar[BAR_3] = { .type = BAR_DISABLED, },
.bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = 256 },
.bar[BAR_5] = { .type = BAR_RESERVED, },
.bar[BAR_5] = { .type = BAR_DISABLED, },
.align = SZ_1M,
};
@@ -449,7 +440,6 @@ static unsigned int rcar_gen4_pcie_ep_get_dbi2_offset(struct dw_pcie_ep *ep,
static const struct dw_pcie_ep_ops pcie_ep_ops = {
.pre_init = rcar_gen4_pcie_ep_pre_init,
.init = rcar_gen4_pcie_ep_init,
.raise_irq = rcar_gen4_pcie_ep_raise_irq,
.get_features = rcar_gen4_pcie_ep_get_features,
.get_dbi_offset = rcar_gen4_pcie_ep_get_dbi_offset,

@@ -28,15 +28,6 @@ struct stm32_pcie {
unsigned int perst_irq;
};
static void stm32_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar;
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
dw_pcie_ep_reset_bar(pci, bar);
}
static int stm32_pcie_start_link(struct dw_pcie *pci)
{
struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci);
@@ -82,7 +73,6 @@ stm32_pcie_get_features(struct dw_pcie_ep *ep)
}
static const struct dw_pcie_ep_ops stm32_pcie_ep_ops = {
.init = stm32_pcie_ep_init,
.raise_irq = stm32_pcie_raise_irq,
.get_features = stm32_pcie_get_features,
};

@@ -1923,15 +1923,6 @@ static irqreturn_t tegra_pcie_ep_pex_rst_irq(int irq, void *arg)
return IRQ_HANDLED;
}
static void tegra_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar;
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
dw_pcie_ep_reset_bar(pci, bar);
};
static int tegra_pcie_ep_raise_intx_irq(struct tegra_pcie_dw *pcie, u16 irq)
{
/* Tegra194 supports only INTA */
@@ -1987,17 +1978,48 @@ static int tegra_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
return 0;
}
static const struct pci_epc_bar_rsvd_region tegra194_bar2_rsvd[] = {
{
/* MSI-X table structure */
.type = PCI_EPC_BAR_RSVD_MSIX_TBL_RAM,
.offset = 0x0,
.size = SZ_64K,
},
{
/* MSI-X PBA structure */
.type = PCI_EPC_BAR_RSVD_MSIX_PBA_RAM,
.offset = 0x10000,
.size = SZ_64K,
},
};
static const struct pci_epc_bar_rsvd_region tegra194_bar4_rsvd[] = {
{
/* DMA_CAP (BAR4: DMA Port Logic Structure) */
.type = PCI_EPC_BAR_RSVD_DMA_CTRL_MMIO,
.offset = 0x0,
.size = SZ_4K,
},
};
/* Tegra EP: BAR0 = 64-bit programmable BAR, BAR2 = 64-bit MSI-X table, BAR4 = 64-bit DMA regs. */
static const struct pci_epc_features tegra_pcie_epc_features = {
DWC_EPC_COMMON_FEATURES,
.linkup_notifier = true,
.msi_capable = true,
.bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M,
.only_64bit = true, },
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_2] = { .type = BAR_RESERVED, },
.bar[BAR_3] = { .type = BAR_RESERVED, },
.bar[BAR_4] = { .type = BAR_RESERVED, },
.bar[BAR_5] = { .type = BAR_RESERVED, },
.bar[BAR_0] = { .only_64bit = true, },
.bar[BAR_2] = {
.type = BAR_RESERVED,
.only_64bit = true,
.nr_rsvd_regions = ARRAY_SIZE(tegra194_bar2_rsvd),
.rsvd_regions = tegra194_bar2_rsvd,
},
.bar[BAR_4] = {
.type = BAR_RESERVED,
.only_64bit = true,
.nr_rsvd_regions = ARRAY_SIZE(tegra194_bar4_rsvd),
.rsvd_regions = tegra194_bar4_rsvd,
},
.align = SZ_64K,
};
@@ -2008,7 +2030,6 @@ tegra_pcie_ep_get_features(struct dw_pcie_ep *ep)
}
static const struct dw_pcie_ep_ops pcie_ep_ops = {
.init = tegra_pcie_ep_init,
.raise_irq = tegra_pcie_ep_raise_irq,
.get_features = tegra_pcie_ep_get_features,
};

@@ -203,15 +203,6 @@ static void uniphier_pcie_stop_link(struct dw_pcie *pci)
uniphier_pcie_ltssm_enable(priv, false);
}
static void uniphier_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar;
for (bar = BAR_0; bar <= BAR_5; bar++)
dw_pcie_ep_reset_bar(pci, bar);
}
static int uniphier_pcie_ep_raise_intx_irq(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
@@ -283,7 +274,6 @@ uniphier_pcie_get_features(struct dw_pcie_ep *ep)
}
static const struct dw_pcie_ep_ops uniphier_pcie_ep_ops = {
.init = uniphier_pcie_ep_init,
.raise_irq = uniphier_pcie_ep_raise_irq,
.get_features = uniphier_pcie_get_features,
};
@@ -426,11 +416,9 @@ static const struct uniphier_pcie_ep_soc_data uniphier_pro5_data = {
.msix_capable = false,
.align = 1 << 16,
.bar[BAR_0] = { .only_64bit = true, },
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_2] = { .only_64bit = true, },
.bar[BAR_3] = { .type = BAR_RESERVED, },
.bar[BAR_4] = { .type = BAR_RESERVED, },
.bar[BAR_5] = { .type = BAR_RESERVED, },
.bar[BAR_4] = { .type = BAR_DISABLED, },
.bar[BAR_5] = { .type = BAR_DISABLED, },
},
};
@@ -445,11 +433,8 @@ static const struct uniphier_pcie_ep_soc_data uniphier_nx1_data = {
.msix_capable = false,
.align = 1 << 12,
.bar[BAR_0] = { .only_64bit = true, },
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_2] = { .only_64bit = true, },
.bar[BAR_3] = { .type = BAR_RESERVED, },
.bar[BAR_4] = { .only_64bit = true, },
.bar[BAR_5] = { .type = BAR_RESERVED, },
},
};

@@ -440,13 +440,10 @@ static const struct pci_epc_features rcar_pcie_epc_features = {
/* use 64-bit BARs so mark BAR[1,3,5] as reserved */
.bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = 128,
.only_64bit = true, },
.bar[BAR_1] = { .type = BAR_RESERVED, },
.bar[BAR_2] = { .type = BAR_FIXED, .fixed_size = 256,
.only_64bit = true, },
.bar[BAR_3] = { .type = BAR_RESERVED, },
.bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = 256,
.only_64bit = true, },
.bar[BAR_5] = { .type = BAR_RESERVED, },
};
static const struct pci_epc_features*

@@ -367,6 +367,8 @@ static int pci_epf_mhi_edma_read(struct mhi_ep_cntrl *mhi_cntrl,
dev_err(dev, "DMA transfer timeout\n");
dmaengine_terminate_sync(chan);
ret = -ETIMEDOUT;
} else {
ret = 0;
}
err_unmap:
@@ -438,6 +440,8 @@ static int pci_epf_mhi_edma_write(struct mhi_ep_cntrl *mhi_cntrl,
dev_err(dev, "DMA transfer timeout\n");
dmaengine_terminate_sync(chan);
ret = -ETIMEDOUT;
} else {
ret = 0;
}
err_unmap:

@@ -1494,47 +1494,6 @@ err_alloc_peer_mem:
return ret;
}
/**
* epf_ntb_epc_destroy_interface() - Cleanup NTB EPC interface
* @ntb: NTB device that facilitates communication between HOST1 and HOST2
* @type: PRIMARY interface or SECONDARY interface
*
* Unbind NTB function device from EPC and relinquish reference to pci_epc
* for each of the interface.
*/
static void epf_ntb_epc_destroy_interface(struct epf_ntb *ntb,
enum pci_epc_interface_type type)
{
struct epf_ntb_epc *ntb_epc;
struct pci_epc *epc;
struct pci_epf *epf;
if (type < 0)
return;
epf = ntb->epf;
ntb_epc = ntb->epc[type];
if (!ntb_epc)
return;
epc = ntb_epc->epc;
pci_epc_remove_epf(epc, epf, type);
pci_epc_put(epc);
}
/**
* epf_ntb_epc_destroy() - Cleanup NTB EPC interface
* @ntb: NTB device that facilitates communication between HOST1 and HOST2
*
* Wrapper for epf_ntb_epc_destroy_interface() to cleanup all the NTB interfaces
*/
static void epf_ntb_epc_destroy(struct epf_ntb *ntb)
{
enum pci_epc_interface_type type;
for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++)
epf_ntb_epc_destroy_interface(ntb, type);
}
/**
* epf_ntb_epc_create_interface() - Create and initialize NTB EPC interface
* @ntb: NTB device that facilitates communication between HOST1 and HOST2
@@ -1614,15 +1573,8 @@ static int epf_ntb_epc_create(struct epf_ntb *ntb)
ret = epf_ntb_epc_create_interface(ntb, epf->sec_epc,
SECONDARY_INTERFACE);
if (ret) {
if (ret)
dev_err(dev, "SECONDARY intf: Fail to create NTB EPC\n");
goto err_epc_create;
}
return 0;
err_epc_create:
epf_ntb_epc_destroy_interface(ntb, PRIMARY_INTERFACE);
return ret;
}
@@ -1887,7 +1839,7 @@ static int epf_ntb_bind(struct pci_epf *epf)
ret = epf_ntb_init_epc_bar(ntb);
if (ret) {
dev_err(dev, "Failed to create NTB EPC\n");
goto err_bar_init;
return ret;
}
ret = epf_ntb_config_spad_bar_alloc_interface(ntb);
@@ -1909,9 +1861,6 @@ static int epf_ntb_bind(struct pci_epf *epf)
err_bar_alloc:
epf_ntb_config_spad_bar_free(ntb);
err_bar_init:
epf_ntb_epc_destroy(ntb);
return ret;
}
@@ -1927,7 +1876,6 @@ static void epf_ntb_unbind(struct pci_epf *epf)
epf_ntb_epc_cleanup(ntb);
epf_ntb_config_spad_bar_free(ntb);
epf_ntb_epc_destroy(ntb);
}
#define EPF_NTB_R(_name) \

@@ -54,6 +54,7 @@
#define STATUS_BAR_SUBRANGE_SETUP_FAIL BIT(15)
#define STATUS_BAR_SUBRANGE_CLEAR_SUCCESS BIT(16)
#define STATUS_BAR_SUBRANGE_CLEAR_FAIL BIT(17)
#define STATUS_NO_RESOURCE BIT(18)
#define FLAG_USE_DMA BIT(0)
@@ -64,6 +65,13 @@
#define CAP_MSIX BIT(2)
#define CAP_INTX BIT(3)
#define CAP_SUBRANGE_MAPPING BIT(4)
#define CAP_DYNAMIC_INBOUND_MAPPING BIT(5)
#define CAP_BAR0_RESERVED BIT(6)
#define CAP_BAR1_RESERVED BIT(7)
#define CAP_BAR2_RESERVED BIT(8)
#define CAP_BAR3_RESERVED BIT(9)
#define CAP_BAR4_RESERVED BIT(10)
#define CAP_BAR5_RESERVED BIT(11)
#define PCI_EPF_TEST_BAR_SUBRANGE_NSUB 2
@@ -715,7 +723,6 @@ static void pci_epf_test_doorbell_cleanup(struct pci_epf_test *epf_test)
struct pci_epf_test_reg *reg = epf_test->reg[epf_test->test_reg_bar];
struct pci_epf *epf = epf_test->epf;
free_irq(epf->db_msg[0].virq, epf_test);
reg->doorbell_bar = cpu_to_le32(NO_BAR);
pci_epf_free_doorbell(epf);
@@ -759,7 +766,7 @@ static void pci_epf_test_enable_doorbell(struct pci_epf_test *epf_test,
&epf_test->db_bar.phys_addr, &offset);
if (ret)
goto err_doorbell_cleanup;
goto err_free_irq;
reg->doorbell_offset = cpu_to_le32(offset);
@@ -769,12 +776,14 @@ static void pci_epf_test_enable_doorbell(struct pci_epf_test *epf_test,
ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, &epf_test->db_bar);
if (ret)
goto err_doorbell_cleanup;
goto err_free_irq;
status |= STATUS_DOORBELL_ENABLE_SUCCESS;
reg->status = cpu_to_le32(status);
return;
err_free_irq:
free_irq(epf->db_msg[0].virq, epf_test);
err_doorbell_cleanup:
pci_epf_test_doorbell_cleanup(epf_test);
set_status_err:
@@ -794,6 +803,7 @@ static void pci_epf_test_disable_doorbell(struct pci_epf_test *epf_test,
if (bar < BAR_0)
goto set_status_err;
free_irq(epf->db_msg[0].virq, epf_test);
pci_epf_test_doorbell_cleanup(epf_test);
/*
@@ -892,6 +902,8 @@ static void pci_epf_test_bar_subrange_setup(struct pci_epf_test *epf_test,
ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, bar);
if (ret) {
dev_err(&epf->dev, "pci_epc_set_bar() failed: %d\n", ret);
if (ret == -ENOSPC)
status |= STATUS_NO_RESOURCE;
bar->submap = old_submap;
bar->num_submap = old_nsub;
kfree(submap);
@@ -1102,10 +1114,31 @@ static void pci_epf_test_set_capabilities(struct pci_epf *epf)
if (epf_test->epc_features->intx_capable)
caps |= CAP_INTX;
if (epf_test->epc_features->dynamic_inbound_mapping)
caps |= CAP_DYNAMIC_INBOUND_MAPPING;
if (epf_test->epc_features->dynamic_inbound_mapping &&
epf_test->epc_features->subrange_mapping)
caps |= CAP_SUBRANGE_MAPPING;
if (epf_test->epc_features->bar[BAR_0].type == BAR_RESERVED)
caps |= CAP_BAR0_RESERVED;
if (epf_test->epc_features->bar[BAR_1].type == BAR_RESERVED)
caps |= CAP_BAR1_RESERVED;
if (epf_test->epc_features->bar[BAR_2].type == BAR_RESERVED)
caps |= CAP_BAR2_RESERVED;
if (epf_test->epc_features->bar[BAR_3].type == BAR_RESERVED)
caps |= CAP_BAR3_RESERVED;
if (epf_test->epc_features->bar[BAR_4].type == BAR_RESERVED)
caps |= CAP_BAR4_RESERVED;
if (epf_test->epc_features->bar[BAR_5].type == BAR_RESERVED)
caps |= CAP_BAR5_RESERVED;
reg->caps = cpu_to_le32(caps);
}

@@ -527,20 +527,20 @@ static int epf_ntb_db_bar_init_msi_doorbell(struct epf_ntb *ntb,
struct msi_msg *msg;
size_t sz;
int ret;
int i;
int i, req;
ret = pci_epf_alloc_doorbell(epf, ntb->db_count);
if (ret)
return ret;
for (i = 0; i < ntb->db_count; i++) {
ret = request_irq(epf->db_msg[i].virq, epf_ntb_doorbell_handler,
for (req = 0; req < ntb->db_count; req++) {
ret = request_irq(epf->db_msg[req].virq, epf_ntb_doorbell_handler,
0, "pci_epf_vntb_db", ntb);
if (ret) {
dev_err(&epf->dev,
"Failed to request doorbell IRQ: %d\n",
epf->db_msg[i].virq);
epf->db_msg[req].virq);
goto err_free_irq;
}
}
@@ -598,8 +598,8 @@ static int epf_ntb_db_bar_init_msi_doorbell(struct epf_ntb *ntb,
return 0;
err_free_irq:
for (i--; i >= 0; i--)
free_irq(epf->db_msg[i].virq, ntb);
for (req--; req >= 0; req--)
free_irq(epf->db_msg[req].virq, ntb);
pci_epf_free_doorbell(ntb->epf);
return ret;
@@ -763,19 +763,6 @@ static void epf_ntb_mw_bar_clear(struct epf_ntb *ntb, int num_mws)
}
}
/**
* epf_ntb_epc_destroy() - Cleanup NTB EPC interface
* @ntb: NTB device that facilitates communication between HOST and VHOST
*
* Wrapper for epf_ntb_epc_destroy_interface() to cleanup all the NTB interfaces
*/
static void epf_ntb_epc_destroy(struct epf_ntb *ntb)
{
pci_epc_remove_epf(ntb->epf->epc, ntb->epf, 0);
pci_epc_put(ntb->epf->epc);
}
/**
* epf_ntb_is_bar_used() - Check if a bar is used in the ntb configuration
* @ntb: NTB device that facilitates communication between HOST and VHOST
@@ -955,6 +942,7 @@ err_config_interrupt:
*/
static void epf_ntb_epc_cleanup(struct epf_ntb *ntb)
{
disable_delayed_work_sync(&ntb->cmd_handler);
epf_ntb_mw_bar_clear(ntb, ntb->num_mws);
epf_ntb_db_bar_clear(ntb);
epf_ntb_config_sspad_bar_clear(ntb);
@@ -995,17 +983,19 @@ static ssize_t epf_ntb_##_name##_show(struct config_item *item, \
struct config_group *group = to_config_group(item); \
struct epf_ntb *ntb = to_epf_ntb(group); \
struct device *dev = &ntb->epf->dev; \
int win_no; \
int win_no, idx; \
\
if (sscanf(#_name, "mw%d", &win_no) != 1) \
return -EINVAL; \
\
if (win_no <= 0 || win_no > ntb->num_mws) { \
dev_err(dev, "Invalid num_nws: %d value\n", ntb->num_mws); \
return -EINVAL; \
idx = win_no - 1; \
if (idx < 0 || idx >= ntb->num_mws) { \
dev_err(dev, "MW%d out of range (num_mws=%d)\n", \
win_no, ntb->num_mws); \
return -ERANGE; \
} \
\
return sprintf(page, "%lld\n", ntb->mws_size[win_no - 1]); \
idx = array_index_nospec(idx, ntb->num_mws); \
return sprintf(page, "%llu\n", ntb->mws_size[idx]); \
}
#define EPF_NTB_MW_W(_name) \
@@ -1015,7 +1005,7 @@ static ssize_t epf_ntb_##_name##_store(struct config_item *item, \
struct config_group *group = to_config_group(item); \
struct epf_ntb *ntb = to_epf_ntb(group); \
struct device *dev = &ntb->epf->dev; \
int win_no; \
int win_no, idx; \
u64 val; \
int ret; \
\
@@ -1026,12 +1016,14 @@ static ssize_t epf_ntb_##_name##_store(struct config_item *item, \
if (sscanf(#_name, "mw%d", &win_no) != 1) \
return -EINVAL; \
\
if (win_no <= 0 || win_no > ntb->num_mws) { \
dev_err(dev, "Invalid num_nws: %d value\n", ntb->num_mws); \
return -EINVAL; \
idx = win_no - 1; \
if (idx < 0 || idx >= ntb->num_mws) { \
dev_err(dev, "MW%d out of range (num_mws=%d)\n", \
win_no, ntb->num_mws); \
return -ERANGE; \
} \
\
ntb->mws_size[win_no - 1] = val; \
idx = array_index_nospec(idx, ntb->num_mws); \
ntb->mws_size[idx] = val; \
\
return len; \
}
@@ -1436,6 +1428,14 @@ static int vntb_epf_link_disable(struct ntb_dev *ntb)
return 0;
}
static struct device *vntb_epf_get_dma_dev(struct ntb_dev *ndev)
{
struct epf_ntb *ntb = ntb_ndev(ndev);
struct pci_epc *epc = ntb->epf->epc;
return epc->dev.parent;
}
static const struct ntb_dev_ops vntb_epf_ops = {
.mw_count = vntb_epf_mw_count,
.spad_count = vntb_epf_spad_count,
@@ -1457,6 +1457,7 @@ static const struct ntb_dev_ops vntb_epf_ops = {
.db_clear_mask = vntb_epf_db_clear_mask,
.db_clear = vntb_epf_db_clear,
.link_disable = vntb_epf_link_disable,
.get_dma_dev = vntb_epf_get_dma_dev,
};
static int pci_vntb_probe(struct pci_dev *pdev, const struct pci_device_id *id)
@@ -1525,7 +1526,7 @@ static int epf_ntb_bind(struct pci_epf *epf)
ret = epf_ntb_init_epc_bar(ntb);
if (ret) {
dev_err(dev, "Failed to create NTB EPC\n");
goto err_bar_init;
return ret;
}
ret = epf_ntb_config_spad_bar_alloc(ntb);
@@ -1565,9 +1566,6 @@ err_epc_cleanup:
err_bar_alloc:
epf_ntb_config_spad_bar_free(ntb);
err_bar_init:
epf_ntb_epc_destroy(ntb);
return ret;
}
@@ -1583,7 +1581,6 @@ static void epf_ntb_unbind(struct pci_epf *epf)
epf_ntb_epc_cleanup(ntb);
epf_ntb_config_spad_bar_free(ntb);
epf_ntb_epc_destroy(ntb);
pci_unregister_driver(&vntb_pci_driver);
}

@@ -84,7 +84,7 @@ static void pci_secondary_epc_epf_unlink(struct config_item *epf_item,
pci_epc_remove_epf(epc, epf, SECONDARY_INTERFACE);
}
static struct configfs_item_operations pci_secondary_epc_item_ops = {
static const struct configfs_item_operations pci_secondary_epc_item_ops = {
.allow_link = pci_secondary_epc_epf_link,
.drop_link = pci_secondary_epc_epf_unlink,
};
@@ -148,7 +148,7 @@ static void pci_primary_epc_epf_unlink(struct config_item *epf_item,
pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
}
static struct configfs_item_operations pci_primary_epc_item_ops = {
static const struct configfs_item_operations pci_primary_epc_item_ops = {
.allow_link = pci_primary_epc_epf_link,
.drop_link = pci_primary_epc_epf_unlink,
};
@@ -256,7 +256,7 @@ static void pci_epc_epf_unlink(struct config_item *epc_item,
pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
}
static struct configfs_item_operations pci_epc_item_ops = {
static const struct configfs_item_operations pci_epc_item_ops = {
.allow_link = pci_epc_epf_link,
.drop_link = pci_epc_epf_unlink,
};
@@ -507,7 +507,7 @@ static void pci_epf_release(struct config_item *item)
kfree(epf_group);
}
static struct configfs_item_operations pci_epf_ops = {
static const struct configfs_item_operations pci_epf_ops = {
.allow_link = pci_epf_vepf_link,
.drop_link = pci_epf_vepf_unlink,
.release = pci_epf_release,
@@ -565,7 +565,8 @@ static void pci_ep_cfs_add_type_group(struct pci_epf_group *epf_group)
if (IS_ERR(group)) {
dev_err(&epf_group->epf->dev,
"failed to create epf type specific attributes\n");
"failed to create epf type specific attributes: %pe\n",
group);
return;
}
@@ -578,13 +579,17 @@ static void pci_epf_cfs_add_sub_groups(struct pci_epf_group *epf_group)
group = pci_ep_cfs_add_primary_group(epf_group);
if (IS_ERR(group)) {
pr_err("failed to create 'primary' EPC interface\n");
dev_err(&epf_group->epf->dev,
"failed to create 'primary' EPC interface: %pe\n",
group);
return;
}
group = pci_ep_cfs_add_secondary_group(epf_group);
if (IS_ERR(group)) {
pr_err("failed to create 'secondary' EPC interface\n");
dev_err(&epf_group->epf->dev,
"failed to create 'secondary' EPC interface: %pe\n",
group);
return;
}
@@ -624,8 +629,9 @@ static struct config_group *pci_epf_make(struct config_group *group,
epf = pci_epf_create(epf_name);
if (IS_ERR(epf)) {
pr_err("failed to create endpoint function device\n");
err = -EINVAL;
err = PTR_ERR(epf);
pr_err("failed to create endpoint function device (%s): %d\n",
epf_name, err);
goto free_name;
}
@@ -657,7 +663,7 @@ static void pci_epf_drop(struct config_group *group, struct config_item *item)
config_item_put(item);
}
static struct configfs_group_operations pci_epf_group_ops = {
static const struct configfs_group_operations pci_epf_group_ops = {
.make_group = &pci_epf_make,
.drop_item = &pci_epf_drop,
};
@@ -674,8 +680,8 @@ struct config_group *pci_ep_cfs_add_epf_group(const char *name)
group = configfs_register_default_group(functions_group, name,
&pci_epf_group_type);
if (IS_ERR(group))
pr_err("failed to register configfs group for %s function\n",
name);
pr_err("failed to register configfs group for %s function: %pe\n",
name, group);
return group;
}

@@ -50,6 +50,9 @@ int pci_epf_alloc_doorbell(struct pci_epf *epf, u16 num_db)
return -EINVAL;
}
if (epf->db_msg)
return -EBUSY;
domain = of_msi_map_get_device_domain(epc->dev.parent, 0,
DOMAIN_BUS_PLATFORM_MSI);
if (!domain) {
@@ -79,6 +82,8 @@ int pci_epf_alloc_doorbell(struct pci_epf *epf, u16 num_db)
if (ret) {
dev_err(dev, "Failed to allocate MSI\n");
kfree(msg);
epf->db_msg = NULL;
epf->num_db = 0;
return ret;
}

@@ -103,8 +103,9 @@ enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features
bar++;
for (i = bar; i < PCI_STD_NUM_BARS; i++) {
/* If the BAR is not reserved, return it. */
if (epc_features->bar[i].type != BAR_RESERVED)
/* If the BAR is not reserved or disabled, return it. */
if (epc_features->bar[i].type != BAR_RESERVED &&
epc_features->bar[i].type != BAR_DISABLED)
return i;
}

@@ -149,7 +149,7 @@ EXPORT_SYMBOL_GPL(pci_epf_bind);
* @epf_vf: the virtual EP function to be added
*
* A physical endpoint function can be associated with multiple virtual
* endpoint functions. Invoke pci_epf_add_epf() to add a virtual PCI endpoint
* endpoint functions. Invoke pci_epf_add_vepf() to add a virtual PCI endpoint
* function to a physical PCI endpoint function.
*/
int pci_epf_add_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf)