[v1,0/3] Add support for inter-domain DMA operations

Message ID: cover.1691768109.git.anatoly.burakov@intel.com

Message

Anatoly Burakov Aug. 11, 2023, 4:14 p.m. UTC
This patchset adds inter-domain DMA operations, and implements driver support
for them in the Intel(R) IDXD driver.

Inter-domain DMA operations are similar to regular DMA operations, except that
the source and/or destination addresses are in the virtual address space of
another process. In this patchset, the DMA device API is extended with two new
data-plane operations: inter-domain copy and inter-domain fill. No
control-plane API is provided in dmadev to set up inter-domain communication
(see below for more info).
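
For context, a rough sketch of what the new data-plane calls could look like,
modeled on the existing rte_dma_copy()/rte_dma_fill() API; the names and
parameters below are illustrative and not necessarily the exact prototypes
added in patch 1/3:

    #include <rte_dmadev.h>

    /* Sketch only: modeled on the existing rte_dma_copy()/rte_dma_fill()
     * fast-path calls, with an opaque handle per direction identifying the
     * other process's address space, and new op flags (passed in 'flags')
     * selecting whether src and/or dst lives in that other address space. */
    int
    rte_dma_copy_inter_dom(int16_t dev_id, uint16_t vchan,
                           rte_iova_t src, rte_iova_t dst, uint32_t length,
                           uint16_t src_handle, uint16_t dst_handle,
                           uint64_t flags);

    int
    rte_dma_fill_inter_dom(int16_t dev_id, uint16_t vchan,
                           uint64_t pattern, rte_iova_t dst, uint32_t length,
                           uint16_t dst_handle, uint64_t flags);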

The DMA device API is extended with the inter-domain operations, along with a
corresponding capability flag. Two new op flags are also added, allowing an
inter-domain operation to select whether the source and/or destination address
is in the address space of another process. Finally, the `rte_dma_info` struct
is extended with a "controller ID" value (set to -1 by default for all drivers
that don't implement it), representing a hardware DMA controller ID. This is
needed because, in the current IDXD implementation, the IDPTE (Inter-Domain
Permission Table Entry) table is global to each device: even though there may
be multiple dmadev devices managed by the IDXD driver, they all share their
IDPTE entries if they belong to the same hardware controller, so some value
indicating which controller each dmadev belongs to was needed.
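
As an illustration, an application could use the existing rte_dma_info_get()
to discover this; the capability flag and controller-ID field names below are
only assumed from the description above and may not match the patch exactly:

    #include <stdio.h>
    #include <rte_dmadev.h>

    /* Sketch: discover inter-domain support and group dmadevs by hardware
     * controller. rte_dma_info_get() and dev_capa are existing dmadev API;
     * RTE_DMA_CAPA_OPS_INTER_DOM and 'controller_id' are illustrative names
     * for what this patchset adds. */
    static void
    print_inter_dom_info(int16_t dev_id)
    {
        struct rte_dma_info info;

        if (rte_dma_info_get(dev_id, &info) < 0) {
            printf("dmadev %d: cannot query device info\n", dev_id);
            return;
        }
        if ((info.dev_capa & RTE_DMA_CAPA_OPS_INTER_DOM) == 0)
            printf("dmadev %d: no inter-domain support\n", dev_id);
        else
            /* -1 means the driver does not report a controller */
            printf("dmadev %d: controller ID %d\n",
                   dev_id, info.controller_id);
    }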

Similarly, the IDXD driver is extended to support the new dmadev API and to
report the new "controller ID" value. The IDXD driver also gains a private API
for the control-plane operations related to creating, and attaching to, memory
regions shared between processes.

In the current implementation, the control-plane operations are exposed as a
private driver API rather than as an extension of the DMA device API. This is
because, technically, only the submitter (the process using the IDXD driver to
perform inter-domain operations) needs to have a DMA device available, while
the owner (the process sharing its memory regions with the submitter) does not
have to manage a DMA device in order to give another process access to its
memory. Another consideration is that this API is currently Linux*-specific
and relies on passing file descriptors over IPC; if implemented on other
vendors' hardware, this mechanism may not map to the same scheme.
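
Since the handshake ultimately amounts to passing a file descriptor between
processes, the underlying Linux mechanism is a plain SCM_RIGHTS transfer over
a Unix domain socket. Below is a minimal, generic sketch of the owner-side
send; the IDXD-specific calls that create the window fd (patch 3/3) are
deliberately not shown:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send one file descriptor (e.g. the shared-window fd) to the submitter
     * over an already-connected AF_UNIX socket using SCM_RIGHTS. */
    static int
    send_window_fd(int sock, int fd)
    {
        char dummy = 0;
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        union {
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;
        } u;
        struct msghdr msg = {
            .msg_iov = &iov,
            .msg_iovlen = 1,
            .msg_control = u.buf,
            .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *cmsg;

        memset(&u, 0, sizeof(u));
        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(fd));

        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }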

NOTE: currently, no publicly released hardware is available to test this
feature or this patchset.

We are seeking community review on the following aspects of the patchset:
- The fact that the control-plane API is intended to be private to specific
  drivers
- The design of the inter-domain data-plane operations API, with respect to
  how "inter-domain handles" are used and whether it's possible to make the
  API more vendor-neutral
- New data-plane ops in dmadev will extend the data-plane struct into the
  second cache line (see the layout sketch after this list) - this should not
  be an issue, since non-inter-domain operations stay in the first cache line
  and the existing fast path is therefore not affected
- Any other feedback is welcome as well!
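
On the cache line point, a rough picture of the layout concern; the field
names below are illustrative and not the actual rte_dma_fp_object definition:

    #include <rte_common.h>

    /* Illustrative only: with 8-byte pointers, the existing fast-path ops
     * fill the first 64-byte cache line, so the new inter-domain ops land
     * in the second one. */
    struct fp_ops_layout_sketch {
        /* first cache line: existing fast path (8 x 8-byte pointers) */
        void *dev_private;
        void *copy;
        void *copy_sg;
        void *fill;
        void *submit;
        void *completed;
        void *completed_status;
        void *burst_capacity;
        /* second cache line: new inter-domain ops */
        void *copy_inter_dom;
        void *fill_inter_dom;
    } __rte_cache_aligned;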

Anatoly Burakov (3):
  dmadev: add inter-domain operations
  dma/idxd: implement inter-domain operations
  dma/idxd: add API to create and attach to window

 doc/guides/dmadevs/idxd.rst           |  52 ++++++++
 doc/guides/prog_guide/dmadev.rst      |  22 ++++
 drivers/dma/idxd/idxd_bus.c           |  35 ++++++
 drivers/dma/idxd/idxd_common.c        | 123 ++++++++++++++++---
 drivers/dma/idxd/idxd_hw_defs.h       |  14 ++-
 drivers/dma/idxd/idxd_inter_dom.c     | 166 ++++++++++++++++++++++++++
 drivers/dma/idxd/idxd_internal.h      |   7 ++
 drivers/dma/idxd/meson.build          |   7 +-
 drivers/dma/idxd/rte_idxd_inter_dom.h |  79 ++++++++++++
 drivers/dma/idxd/version.map          |  11 ++
 lib/dmadev/rte_dmadev.c               |   2 +
 lib/dmadev/rte_dmadev.h               | 133 +++++++++++++++++++++
 lib/dmadev/rte_dmadev_core.h          |  12 ++
 13 files changed, 644 insertions(+), 19 deletions(-)
 create mode 100644 drivers/dma/idxd/idxd_inter_dom.c
 create mode 100644 drivers/dma/idxd/rte_idxd_inter_dom.h
 create mode 100644 drivers/dma/idxd/version.map
  

Comments

Satananda Burla Aug. 15, 2023, 7:20 p.m. UTC | #1
Hi Anatoly

> -----Original Message-----
> From: Anatoly Burakov <anatoly.burakov@intel.com>
> Sent: Friday, August 11, 2023 9:15 AM
> To: dev@dpdk.org
> Cc: bruce.richardson@intel.com
> Subject: [EXT] [PATCH v1 0/3] Add support for inter-domain DMA
> operations
> 
> This patchset adds inter-domain DMA operations, and implements driver
> support for them in the Intel(R) IDXD driver.
> 
> Inter-domain DMA operations are similar to regular DMA operations,
> except that the source and/or destination addresses are in the virtual
> address space of another process. In this patchset, the DMA device API
> is extended with two new data-plane operations: inter-domain copy and
> inter-domain fill. No control-plane API is provided in dmadev to set up
> inter-domain communication (see below for more info).
Thanks for posting this.
Do you have use cases where a process from a third domain sets up a transfer
between memories from two domains, i.e. process 1 is the source, process 2 is
the destination, and process 3 executes the transfer? The SDXI spec also
defines this kind of transfer.
Have you considered extending rte_dma_port_param and rte_dma_vchan_conf to
represent inter-domain memory transfer setup as a separate port type, e.g.
RTE_DMA_PORT_INTER_DOMAIN? We could then have a separate vchan dedicated to
these transfers. The rte_dma_vchan can be set up with a separate struct
rte_dma_port_param each for source and destination, and the union could be
extended to provide the necessary information to the PMD: the set of fields
needed by different architectures, such as controller ID, PASID, SMMU stream
ID and substream ID, etc. If an opaque handle is needed, it could also be
accommodated in the union.
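
For concreteness, the kind of extension suggested above might look roughly
like the sketch below; this is purely illustrative and not a concrete API
proposal (only RTE_DMA_PORT_NONE and RTE_DMA_PORT_PCIE exist in the enum
today):

    #include <stdint.h>

    /* Illustrative sketch only: a new port type for rte_dma_port_param,
     * plus a member in its union carrying whatever a given architecture
     * needs to address the peer domain; each vchan would then configure
     * this per-direction via rte_dma_vchan_conf's src_port/dst_port. */
    enum rte_dma_port_type {
        RTE_DMA_PORT_NONE,
        RTE_DMA_PORT_PCIE,
        RTE_DMA_PORT_INTER_DOMAIN,      /* proposed */
    };

    /* would live inside rte_dma_port_param's existing union */
    struct rte_dma_port_inter_dom_param {
        int16_t  controller_id;         /* hardware DMA controller */
        uint32_t pasid;                 /* process address space ID */
        uint32_t smmu_streamid;
        uint32_t smmu_substreamid;
        uint64_t handle;                /* opaque handle, if one is needed */
    };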

These transfers could also be initiated between 2 processes, each having 2
dmadev VFs from the same PF; Marvell hardware supports this mode.

Since the control plane for this can differ between PMDs, it is better to set
up the memory sharing outside dmadev and only pass the fields of interest to
the PMD for completing the transfer. For instance, for PCIe EP to host DMA
transactions (MEM_TO_DEV and DEV_TO_MEM), the process of setting up shared
memory from the PCIe host is not part of dmadev. If we do wish to make the
memory-sharing interface part of dmadev, then the control plane preferably
has to be abstracted to work for all modes and architectures.

Regards
Satananda