From patchwork Fri Aug 11 16:14:44 2023
X-Patchwork-Submitter: Anatoly Burakov
X-Patchwork-Id: 130169
X-Patchwork-Delegate: thomas@monjalon.net
From: Anatoly Burakov
To: dev@dpdk.org, Chengwen Feng, Kevin Laatz, Bruce Richardson
Cc: Vladimir Medvedkin
Subject: [PATCH v1 1/3] dmadev: add inter-domain operations
Date: Fri, 11 Aug 2023 16:14:44 +0000
Message-Id: <8866a5c7ea36e476b2a92e3e4cea6c2c127ab82f.1691768110.git.anatoly.burakov@intel.com>

Add a flag to indicate that a specific device supports inter-domain
operations, and add an API for inter-domain copy and fill. An inter-domain
operation is very similar to a regular DMA operation, except that either the
source or the destination address can be in a different process's address
space, indicated by source and destination handle values. These handles are
currently meant to be provided by drivers' private APIs.

This commit also adds a controller ID field to the DMA device API. This is an
arbitrary value that may not be implemented by hardware, but it is meant to
represent some kind of device hierarchy.
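For illustration, a minimal sketch of how the new API could be used from an
application (not part of this patch): dev_id, vchan, the buffer addresses and
the two handles are placeholders assumed to have been set up elsewhere, the
handles coming from a driver's private API.

    #include <rte_dmadev.h>

    static int
    inter_dom_copy(int16_t dev_id, uint16_t vchan,
    		rte_iova_t src, rte_iova_t dst, uint32_t len,
    		uint16_t src_handle, uint16_t dst_handle)
    {
    	struct rte_dma_info info;

    	if (rte_dma_info_get(dev_id, &info) != 0)
    		return -1;
    	/* bail out if the device does not advertise inter-domain support */
    	if ((info.dev_capa & RTE_DMA_CAPA_OPS_INTER_DOM) == 0)
    		return -1;

    	/* both handles are meaningful here, so set both handle flags */
    	return rte_dma_copy_inter_dom(dev_id, vchan, src, dst, len,
    			src_handle, dst_handle,
    			RTE_DMA_OP_FLAG_SRC_HANDLE |
    			RTE_DMA_OP_FLAG_DST_HANDLE |
    			RTE_DMA_OP_FLAG_SUBMIT);
    }

On success, the return value is the ring index of the enqueued job, which can
later be matched against completions from rte_dma_completed().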
Signed-off-by: Vladimir Medvedkin Signed-off-by: Anatoly Burakov --- doc/guides/prog_guide/dmadev.rst | 18 +++++ lib/dmadev/rte_dmadev.c | 2 + lib/dmadev/rte_dmadev.h | 133 +++++++++++++++++++++++++++++++ lib/dmadev/rte_dmadev_core.h | 12 +++ 4 files changed, 165 insertions(+) diff --git a/doc/guides/prog_guide/dmadev.rst b/doc/guides/prog_guide/dmadev.rst index 2aa26d33b8..e4e5196416 100644 --- a/doc/guides/prog_guide/dmadev.rst +++ b/doc/guides/prog_guide/dmadev.rst @@ -108,6 +108,24 @@ completed operations along with the status of each operation (filled into the completed operation's ``ring_idx`` which could help user track operations within their own application-defined rings. +.. _dmadev_inter_dom: + + +Inter-domain operations +~~~~~~~~~~~~~~~~~~~~~~~ + +For some devices, inter-domain DMA operations may be supported (indicated by +`RTE_DMA_CAPA_OPS_INTER_DOM` flag being set in DMA device capabilities flag). An +inter-domain operation (such as `rte_dma_copy_inter_dom`) is similar to regular +DMA device operation, except the user also needs to specify source and +destination handles, which the hardware will then use to get source and/or +destination PASID to perform the operation. When `src_handle` value is set, +`RTE_DMA_OP_FLAG_SRC_HANDLE` op flag must also be set. Similarly, when +`dst_handle` value is set, `RTE_DMA_OP_FLAG_DST_HANDLE` op flag must be set. + +Currently, source and destination handles are opaque values the user has to get +from private API's of those DMA device drivers that support the operation. + Querying Device Statistics ~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c index 8c095e1f35..ff00612f84 100644 --- a/lib/dmadev/rte_dmadev.c +++ b/lib/dmadev/rte_dmadev.c @@ -425,6 +425,8 @@ rte_dma_info_get(int16_t dev_id, struct rte_dma_info *dev_info) if (*dev->dev_ops->dev_info_get == NULL) return -ENOTSUP; memset(dev_info, 0, sizeof(struct rte_dma_info)); + /* set to -1 by default, as other drivers may not implement this */ + dev_info->controller_id = -1; ret = (*dev->dev_ops->dev_info_get)(dev, dev_info, sizeof(struct rte_dma_info)); if (ret != 0) diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h index e61d71959e..1cad36f0b6 100644 --- a/lib/dmadev/rte_dmadev.h +++ b/lib/dmadev/rte_dmadev.h @@ -278,6 +278,8 @@ int16_t rte_dma_next_dev(int16_t start_dev_id); #define RTE_DMA_CAPA_OPS_COPY_SG RTE_BIT64(33) /** Support fill operation. */ #define RTE_DMA_CAPA_OPS_FILL RTE_BIT64(34) +/** Support inter-domain operation. */ +#define RTE_DMA_CAPA_OPS_INTER_DOM RTE_BIT64(48) /**@}*/ /** @@ -307,6 +309,8 @@ struct rte_dma_info { int16_t numa_node; /** Number of virtual DMA channel configured. */ uint16_t nb_vchans; + /** Controller ID, -1 if unknown */ + int16_t controller_id; }; /** @@ -819,6 +823,16 @@ struct rte_dma_sge { * capability bit for this, driver should not return error if this flag was set. */ #define RTE_DMA_OP_FLAG_LLC RTE_BIT64(2) +/** Source handle is set. + * Used for inter-domain operations to indicate source handle value will be + * meaningful and can be used by hardware to learn source PASID. + */ +#define RTE_DMA_OP_FLAG_SRC_HANDLE RTE_BIT64(16) +/** Destination handle is set. + * Used for inter-domain operations to indicate destination handle value will be + * meaningful and can be used by hardware to learn destination PASID. 
+ */ +#define RTE_DMA_OP_FLAG_DST_HANDLE RTE_BIT64(17) /**@}*/ /** @@ -1141,6 +1155,125 @@ rte_dma_burst_capacity(int16_t dev_id, uint16_t vchan) return (*obj->burst_capacity)(obj->dev_private, vchan); } +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue an inter-domain copy operation. + * + * This queues up an inter-domain copy operation to be performed by hardware, if + * the 'flags' parameter contains RTE_DMA_OP_FLAG_SUBMIT then trigger doorbell + * to begin this operation, otherwise do not trigger doorbell. + * + * The source and destination handle parameters are arbitrary opaque values, + * currently meant to be provided by private device driver API's. If the source + * handle value is meaningful, RTE_DMA_OP_FLAG_SRC_HANDLE flag must be set. + * Similarly, if the destination handle value is meaningful, + * RTE_DMA_OP_FLAG_DST_HANDLE flag must be set. Source and destination handle + * values are meant to provide information to the hardware about source and/or + * destination PASID for the inter-domain copy operation. + * + * @param dev_id + * The identifier of the device. + * @param vchan + * The identifier of virtual DMA channel. + * @param src + * The address of the source buffer (if `src_handle` is set, source address + * will be in address space of process referred to by source handle). + * @param dst + * The address of the destination buffer (if `dst_handle` is set, destination + * address will be in address space of process referred to by destination + * handle). + * @param length + * The length of the data to be copied. + * @param src_handle + * Source handle value (if used, RTE_DMA_OP_FLAG_SRC_HANDLE flag must be set). + * @param dst_handle + * Destination handle value (if used, RTE_DMA_OP_FLAG_DST_HANDLE flag must be + * set). + * @param flags + * Flags for this operation. + * @return + * - 0..UINT16_MAX: index of enqueued job. + * - -ENOSPC: if no space left to enqueue. + * - other values < 0 on failure. + */ +__rte_experimental +static inline int +rte_dma_copy_inter_dom(int16_t dev_id, uint16_t vchan, rte_iova_t src, + rte_iova_t dst, uint32_t length, uint16_t src_handle, + uint16_t dst_handle, uint64_t flags) +{ + struct rte_dma_fp_object *obj = &rte_dma_fp_objs[dev_id]; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dma_is_valid(dev_id) || length == 0) + return -EINVAL; + if (*obj->copy_inter_dom == NULL) + return -ENOTSUP; +#endif + return (*obj->copy_inter_dom)(obj->dev_private, vchan, src, dst, length, + src_handle, dst_handle, flags); +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue an inter-domain fill operation. + * + * This queues up an inter-domain fill operation to be performed by hardware, if + * the 'flags' parameter contains RTE_DMA_OP_FLAG_SUBMIT then trigger doorbell + * to begin this operation, otherwise do not trigger doorbell. + * + * The source and destination handle parameters are arbitrary opaque values, + * currently meant to be provided by private device driver API's. If the source + * handle value is meaningful, RTE_DMA_OP_FLAG_SRC_HANDLE flag must be set. + * Similarly, if the destination handle value is meaningful, + * RTE_DMA_OP_FLAG_DST_HANDLE flag must be set. Source and destination handle + * values are meant to provide information to the hardware about source and/or + * destination PASID for the inter-domain fill operation. + * + * @param dev_id + * The identifier of the device. + * @param vchan + * The identifier of virtual DMA channel. 
+ * @param pattern + * The pattern to populate the destination buffer with. + * @param dst + * The address of the destination buffer. + * @param length + * The length of the destination buffer. + * @param dst_handle + * Destination handle value (if used, RTE_DMA_OP_FLAG_DST_HANDLE flag must be + * set). + * @param flags + * Flags for this operation. + * @return + * - 0..UINT16_MAX: index of enqueued job. + * - -ENOSPC: if no space left to enqueue. + * - other values < 0 on failure. + */ +__rte_experimental +static inline int +rte_dma_fill_inter_dom(int16_t dev_id, uint16_t vchan, uint64_t pattern, + rte_iova_t dst, uint32_t length, uint16_t dst_handle, + uint64_t flags) +{ + struct rte_dma_fp_object *obj = &rte_dma_fp_objs[dev_id]; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dma_is_valid(dev_id) || length == 0) + return -EINVAL; + if (*obj->fill_inter_dom == NULL) + return -ENOTSUP; +#endif + + return (*obj->fill_inter_dom)(obj->dev_private, vchan, pattern, dst, + length, dst_handle, flags); +} + + #ifdef __cplusplus } #endif diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h index 064785686f..b3a020f9de 100644 --- a/lib/dmadev/rte_dmadev_core.h +++ b/lib/dmadev/rte_dmadev_core.h @@ -50,6 +50,16 @@ typedef uint16_t (*rte_dma_completed_status_t)(void *dev_private, /** @internal Used to check the remaining space in descriptor ring. */ typedef uint16_t (*rte_dma_burst_capacity_t)(const void *dev_private, uint16_t vchan); +/** @internal Used to enqueue an inter-domain copy operation. */ +typedef int (*rte_dma_copy_inter_dom_t)(void *dev_private, uint16_t vchan, + rte_iova_t src, rte_iova_t dst, unsigned int length, + uint16_t src_handle, uint16_t dst_handle, uint64_t flags); +/** @internal Used to enqueue an inter-domain fill operation. */ +typedef int (*rte_dma_fill_inter_dom_t)(void *dev_private, uint16_t vchan, + uint64_t pattern, rte_iova_t dst, uint32_t length, + uint16_t dst_handle, uint64_t flags); + + /** * @internal * Fast-path dmadev functions and related data are hold in a flat array. 
@@ -73,6 +83,8 @@ struct rte_dma_fp_object { rte_dma_completed_t completed; rte_dma_completed_status_t completed_status; rte_dma_burst_capacity_t burst_capacity; + rte_dma_copy_inter_dom_t copy_inter_dom; + rte_dma_fill_inter_dom_t fill_inter_dom; } __rte_aligned(128); extern struct rte_dma_fp_object *rte_dma_fp_objs;

From patchwork Fri Aug 11 16:14:45 2023
X-Patchwork-Submitter: Anatoly Burakov
X-Patchwork-Id: 130170
X-Patchwork-Delegate: thomas@monjalon.net
From: Anatoly Burakov
To: dev@dpdk.org, Chengwen Feng, Kevin Laatz, Bruce Richardson
Cc: Vladimir Medvedkin
Subject: [PATCH v1 2/3] dma/idxd: implement inter-domain operations
Date: Fri, 11 Aug 2023 16:14:45 +0000
Message-Id: <10660b2852115b92ccc6cc193c5b693183217a80.1691768110.git.anatoly.burakov@intel.com>

Implement inter-domain copy and fill operations defined in the newly added
DMA device API.
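As an illustration of what the driver now exposes through the dmadev API, a
minimal fill sketch (not part of this patch): dev_id, vchan, remote_dst and
dst_handle are placeholders assumed to be set up elsewhere, the handle coming
from the driver's private window API.

    #include <stdio.h>
    #include <rte_dmadev.h>

    static int
    inter_dom_fill(int16_t dev_id, uint16_t vchan, rte_iova_t remote_dst,
    		uint16_t dst_handle)
    {
    	/* fill 4 KiB of a buffer living in another process's address space */
    	int ret = rte_dma_fill_inter_dom(dev_id, vchan, 0xababababababababULL,
    			remote_dst, 4096, dst_handle,
    			RTE_DMA_OP_FLAG_DST_HANDLE | RTE_DMA_OP_FLAG_SUBMIT);
    	if (ret < 0)
    		printf("enqueue failed: %d (-ENOSPC means the ring is full)\n", ret);
    	return ret;
    }

Only the destination handle flag is set here, since a fill pattern does not
reference any source memory.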
Signed-off-by: Vladimir Medvedkin Signed-off-by: Anatoly Burakov --- doc/guides/prog_guide/dmadev.rst | 4 + drivers/dma/idxd/idxd_bus.c | 35 +++++++++ drivers/dma/idxd/idxd_common.c | 123 +++++++++++++++++++++++++++---- drivers/dma/idxd/idxd_hw_defs.h | 14 +++- drivers/dma/idxd/idxd_internal.h | 7 ++ 5 files changed, 165 insertions(+), 18 deletions(-) diff --git a/doc/guides/prog_guide/dmadev.rst b/doc/guides/prog_guide/dmadev.rst index e4e5196416..c2a957e971 100644 --- a/doc/guides/prog_guide/dmadev.rst +++ b/doc/guides/prog_guide/dmadev.rst @@ -126,6 +126,10 @@ destination PASID to perform the operation. When `src_handle` value is set, Currently, source and destination handles are opaque values the user has to get from private API's of those DMA device drivers that support the operation. +List of drivers supporting inter-domain operations: + +- Intel(R) IDXD driver + Querying Device Statistics ~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/drivers/dma/idxd/idxd_bus.c b/drivers/dma/idxd/idxd_bus.c index 3b2d4c2b65..787bc4e2d7 100644 --- a/drivers/dma/idxd/idxd_bus.c +++ b/drivers/dma/idxd/idxd_bus.c @@ -7,6 +7,7 @@ #include #include #include +#include #include #include @@ -187,6 +188,31 @@ read_wq_int(struct rte_dsa_device *dev, const char *filename, return ret; } +static int +read_gen_cap(struct rte_dsa_device *dev, uint64_t *gen_cap) +{ + char sysfs_node[PATH_MAX]; + FILE *f; + + snprintf(sysfs_node, sizeof(sysfs_node), "%s/dsa%d/gen_cap", + dsa_get_sysfs_path(), dev->addr.device_id); + f = fopen(sysfs_node, "r"); + if (f == NULL) { + IDXD_PMD_ERR("%s(): opening file '%s' failed: %s", + __func__, sysfs_node, strerror(errno)); + return -1; + } + + if (fscanf(f, "%" PRIx64, gen_cap) != 1) { + IDXD_PMD_ERR("%s(): error reading file '%s': %s", + __func__, sysfs_node, strerror(errno)); + return -1; + } + + fclose(f); + return 0; +} + static int read_device_int(struct rte_dsa_device *dev, const char *filename, int *value) @@ -219,6 +245,7 @@ idxd_probe_dsa(struct rte_dsa_device *dev) { struct idxd_dmadev idxd = {0}; int ret = 0; + uint64_t gen_cap; IDXD_PMD_INFO("Probing device %s on numa node %d", dev->wq_name, dev->device.numa_node); @@ -232,6 +259,14 @@ idxd_probe_dsa(struct rte_dsa_device *dev) idxd.u.bus.dsa_id = dev->addr.device_id; idxd.sva_support = 1; + ret = read_gen_cap(dev, &gen_cap); + if (ret) { + IDXD_PMD_ERR("Failed to read gen_cap for %s", dev->wq_name); + return ret; + } + if (gen_cap & IDXD_INTERDOM_SUPPORT) + idxd.inter_dom_support = 1; + idxd.portal = idxd_bus_mmap_wq(dev); if (idxd.portal == NULL) { IDXD_PMD_ERR("WQ mmap failed"); diff --git a/drivers/dma/idxd/idxd_common.c b/drivers/dma/idxd/idxd_common.c index 83d53942eb..ffe8614d16 100644 --- a/drivers/dma/idxd/idxd_common.c +++ b/drivers/dma/idxd/idxd_common.c @@ -41,7 +41,57 @@ __idxd_movdir64b(volatile void *dst, const struct idxd_hw_desc *src) __use_avx2 static __rte_always_inline void -__submit(struct idxd_dmadev *idxd) +__idxd_enqcmd(volatile void *dst, const struct idxd_hw_desc *src) +{ + asm volatile (".byte 0xf2, 0x0f, 0x38, 0xf8, 0x02" + : + : "a" (dst), "d" (src) + : "memory"); +} + +static inline uint32_t +__idxd_get_inter_dom_flags(const enum rte_idxd_ops op) +{ + switch (op) { + case idxd_op_memmove: + return IDXD_FLAG_SRC_ALT_PASID | IDXD_FLAG_DST_ALT_PASID; + case idxd_op_fill: + return IDXD_FLAG_DST_ALT_PASID; + default: + /* no flags needed */ + return 0; + } +} + +static inline uint32_t +__idxd_get_op_flags(enum rte_idxd_ops op, uint64_t flags, bool inter_dom) +{ + uint32_t op_flags = op; + uint32_t 
flag_mask = IDXD_FLAG_FENCE; + if (inter_dom) { + flag_mask |= __idxd_get_inter_dom_flags(op); + op_flags |= idxd_op_inter_dom; + } + op_flags = op_flags << IDXD_CMD_OP_SHIFT; + return op_flags | (flags & flag_mask) | IDXD_FLAG_CACHE_CONTROL; +} + +static inline uint64_t +__idxd_get_alt_pasid(uint64_t flags, uint64_t src_idpte_id, + uint64_t dst_idpte_id) +{ + /* hardware is intolerant to inactive fields being non-zero */ + if (!(flags & RTE_DMA_OP_FLAG_SRC_HANDLE)) + src_idpte_id = 0; + if (!(flags & RTE_DMA_OP_FLAG_DST_HANDLE)) + dst_idpte_id = 0; + return (src_idpte_id << IDXD_CMD_DST_IDPTE_IDX_SHIFT) | + (dst_idpte_id << IDXD_CMD_DST_IDPTE_IDX_SHIFT); +} + +__use_avx2 +static __rte_always_inline void +__submit(struct idxd_dmadev *idxd, const bool use_enqcmd) { rte_prefetch1(&idxd->batch_comp_ring[idxd->batch_idx_read]); @@ -59,7 +109,10 @@ __submit(struct idxd_dmadev *idxd) desc.completion = comp_addr; desc.op_flags |= IDXD_FLAG_REQUEST_COMPLETION; _mm_sfence(); /* fence before writing desc to device */ - __idxd_movdir64b(idxd->portal, &desc); + if (use_enqcmd) + __idxd_enqcmd(idxd->portal, &desc); + else + __idxd_movdir64b(idxd->portal, &desc); } else { const struct idxd_hw_desc batch_desc = { .op_flags = (idxd_op_batch << IDXD_CMD_OP_SHIFT) | @@ -71,7 +124,10 @@ __submit(struct idxd_dmadev *idxd) .size = idxd->batch_size, }; _mm_sfence(); /* fence before writing desc to device */ - __idxd_movdir64b(idxd->portal, &batch_desc); + if (use_enqcmd) + __idxd_enqcmd(idxd->portal, &batch_desc); + else + __idxd_movdir64b(idxd->portal, &batch_desc); } if (++idxd->batch_idx_write > idxd->max_batches) @@ -93,7 +149,9 @@ __idxd_write_desc(struct idxd_dmadev *idxd, const rte_iova_t src, const rte_iova_t dst, const uint32_t size, - const uint32_t flags) + const uint32_t flags, + const uint64_t alt_pasid, + const bool use_enqcmd) { uint16_t mask = idxd->desc_ring_mask; uint16_t job_id = idxd->batch_start + idxd->batch_size; @@ -113,14 +171,14 @@ __idxd_write_desc(struct idxd_dmadev *idxd, _mm256_store_si256((void *)&idxd->desc_ring[write_idx], _mm256_set_epi64x(dst, src, comp_addr, op_flags64)); _mm256_store_si256((void *)&idxd->desc_ring[write_idx].size, - _mm256_set_epi64x(0, 0, 0, size)); + _mm256_set_epi64x(alt_pasid, 0, 0, size)); idxd->batch_size++; rte_prefetch0_write(&idxd->desc_ring[write_idx + 1]); if (flags & RTE_DMA_OP_FLAG_SUBMIT) - __submit(idxd); + __submit(idxd, use_enqcmd); return job_id; } @@ -134,10 +192,26 @@ idxd_enqueue_copy(void *dev_private, uint16_t qid __rte_unused, rte_iova_t src, * but check it at compile time to be sure. */ RTE_BUILD_BUG_ON(RTE_DMA_OP_FLAG_FENCE != IDXD_FLAG_FENCE); - uint32_t memmove = (idxd_op_memmove << IDXD_CMD_OP_SHIFT) | - IDXD_FLAG_CACHE_CONTROL | (flags & IDXD_FLAG_FENCE); - return __idxd_write_desc(dev_private, memmove, src, dst, length, - flags); + uint32_t op_flags = __idxd_get_op_flags(idxd_op_memmove, flags, false); + return __idxd_write_desc(dev_private, op_flags, src, dst, length, + flags, 0, false); +} + +__use_avx2 +int +idxd_enqueue_copy_inter_dom(void *dev_private, uint16_t qid __rte_unused, rte_iova_t src, + rte_iova_t dst, unsigned int length, + uint16_t src_idpte_id, uint16_t dst_idpte_id, uint64_t flags) +{ + /* we can take advantage of the fact that the fence flag in dmadev and + * DSA are the same, but check it at compile time to be sure. 
+ */ + RTE_BUILD_BUG_ON(RTE_DMA_OP_FLAG_FENCE != IDXD_FLAG_FENCE); + uint32_t op_flags = __idxd_get_op_flags(idxd_op_memmove, flags, true); + uint64_t alt_pasid = __idxd_get_alt_pasid(flags, src_idpte_id, dst_idpte_id); + /* currently, we require inter-domain copies to use enqcmd */ + return __idxd_write_desc(dev_private, op_flags, src, dst, length, + flags, alt_pasid, true); } __use_avx2 @@ -145,17 +219,28 @@ int idxd_enqueue_fill(void *dev_private, uint16_t qid __rte_unused, uint64_t pattern, rte_iova_t dst, unsigned int length, uint64_t flags) { - uint32_t fill = (idxd_op_fill << IDXD_CMD_OP_SHIFT) | - IDXD_FLAG_CACHE_CONTROL | (flags & IDXD_FLAG_FENCE); - return __idxd_write_desc(dev_private, fill, pattern, dst, length, - flags); + uint32_t op_flags = __idxd_get_op_flags(idxd_op_fill, flags, false); + return __idxd_write_desc(dev_private, op_flags, pattern, dst, length, + flags, 0, false); +} + +__use_avx2 +int +idxd_enqueue_fill_inter_dom(void *dev_private, uint16_t qid __rte_unused, + uint64_t pattern, rte_iova_t dst, unsigned int length, + uint16_t dst_idpte_id, uint64_t flags) +{ + uint32_t op_flags = __idxd_get_op_flags(idxd_op_fill, flags, true); + uint64_t alt_pasid = __idxd_get_alt_pasid(flags, 0, dst_idpte_id); + return __idxd_write_desc(dev_private, op_flags, pattern, dst, length, + flags, alt_pasid, true); } __use_avx2 int idxd_submit(void *dev_private, uint16_t qid __rte_unused) { - __submit(dev_private); + __submit(dev_private, false); return 0; } @@ -490,6 +575,12 @@ idxd_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *info, uint32_t }; if (idxd->sva_support) info->dev_capa |= RTE_DMA_CAPA_SVA; + + if (idxd->inter_dom_support) { + info->dev_capa |= RTE_DMA_CAPA_OPS_INTER_DOM; + info->controller_id = idxd->u.bus.dsa_id; + } + return 0; } @@ -600,6 +691,8 @@ idxd_dmadev_create(const char *name, struct rte_device *dev, dmadev->fp_obj->completed_status = idxd_completed_status; dmadev->fp_obj->burst_capacity = idxd_burst_capacity; dmadev->fp_obj->dev_private = dmadev->data->dev_private; + dmadev->fp_obj->copy_inter_dom = idxd_enqueue_copy_inter_dom; + dmadev->fp_obj->fill_inter_dom = idxd_enqueue_fill_inter_dom; if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; diff --git a/drivers/dma/idxd/idxd_hw_defs.h b/drivers/dma/idxd/idxd_hw_defs.h index a38540f283..441e9d29a4 100644 --- a/drivers/dma/idxd/idxd_hw_defs.h +++ b/drivers/dma/idxd/idxd_hw_defs.h @@ -9,18 +9,24 @@ * Defines used in the data path for interacting with IDXD hardware. 
*/ #define IDXD_CMD_OP_SHIFT 24 +#define IDXD_CMD_SRC_IDPTE_IDX_SHIFT 32 +#define IDXD_CMD_DST_IDPTE_IDX_SHIFT 48 enum rte_idxd_ops { idxd_op_nop = 0, idxd_op_batch, idxd_op_drain, idxd_op_memmove, - idxd_op_fill + idxd_op_fill, + idxd_op_inter_dom = 0x20 }; #define IDXD_FLAG_FENCE (1 << 0) #define IDXD_FLAG_COMPLETION_ADDR_VALID (1 << 2) #define IDXD_FLAG_REQUEST_COMPLETION (1 << 3) +#define IDXD_INTERDOM_SUPPORT (1 << 6) #define IDXD_FLAG_CACHE_CONTROL (1 << 8) +#define IDXD_FLAG_SRC_ALT_PASID (1 << 16) +#define IDXD_FLAG_DST_ALT_PASID (1 << 17) /** * Hardware descriptor used by DSA hardware, for both bursts and @@ -42,8 +48,10 @@ struct idxd_hw_desc { uint16_t intr_handle; /* completion interrupt handle */ - /* remaining 26 bytes are reserved */ - uint16_t reserved[13]; + /* next 22 bytes are reserved */ + uint16_t reserved[11]; + uint16_t src_pasid_hndl; /* pasid handle for source */ + uint16_t dest_pasid_hndl; /* pasid handle for destination */ } __rte_aligned(64); #define IDXD_COMP_STATUS_INCOMPLETE 0 diff --git a/drivers/dma/idxd/idxd_internal.h b/drivers/dma/idxd/idxd_internal.h index cd4177721d..fb999d29f7 100644 --- a/drivers/dma/idxd/idxd_internal.h +++ b/drivers/dma/idxd/idxd_internal.h @@ -70,6 +70,7 @@ struct idxd_dmadev { struct rte_dma_dev *dmadev; struct rte_dma_vchan_conf qcfg; uint8_t sva_support; + uint8_t inter_dom_support; uint8_t qid; union { @@ -92,8 +93,14 @@ int idxd_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info, uint32_t size); int idxd_enqueue_copy(void *dev_private, uint16_t qid, rte_iova_t src, rte_iova_t dst, unsigned int length, uint64_t flags); +int idxd_enqueue_copy_inter_dom(void *dev_private, uint16_t qid, rte_iova_t src, + rte_iova_t dst, unsigned int length, + uint16_t src_idpte_id, uint16_t dst_idpte_id, uint64_t flags); int idxd_enqueue_fill(void *dev_private, uint16_t qid, uint64_t pattern, rte_iova_t dst, unsigned int length, uint64_t flags); +int idxd_enqueue_fill_inter_dom(void *dev_private, uint16_t qid, uint64_t pattern, + rte_iova_t dst, unsigned int length, uint16_t dst_idpte_id, + uint64_t flags); int idxd_submit(void *dev_private, uint16_t qid); uint16_t idxd_completed(void *dev_private, uint16_t qid, uint16_t max_ops, uint16_t *last_idx, bool *has_error);

From patchwork Fri Aug 11 16:14:46 2023
X-Patchwork-Submitter: Anatoly Burakov
X-Patchwork-Id: 130171
X-Patchwork-Delegate: thomas@monjalon.net
From: Anatoly Burakov
To: dev@dpdk.org, Bruce Richardson, Kevin Laatz
Cc: Vladimir Medvedkin
Subject: [PATCH v1 3/3] dma/idxd: add API to create and attach to window
Date: Fri, 11 Aug 2023 16:14:46 +0000

This commit implements the functions necessary to use inter-domain operations
with the idxd driver. The process is as follows:

1. Process A, which wishes to share its memory with others, calls
   `rte_idxd_window_create()`, which returns a file descriptor
2. Process A sends that file descriptor to any recipient process (usually over
   kernel IPC) that wishes to attach to the window
3. Process B, after receiving the file descriptor from process A over IPC,
   calls `rte_idxd_window_attach()` and receives an inter-pasid handle
4. Process B uses this handle as an argument for inter-domain operations
   through the DMA device API

Signed-off-by: Vladimir Medvedkin Signed-off-by: Anatoly Burakov --- doc/guides/dmadevs/idxd.rst | 52 ++++++++ drivers/dma/idxd/idxd_inter_dom.c | 166 ++++++++++++++++++++++++++ drivers/dma/idxd/meson.build | 7 +- drivers/dma/idxd/rte_idxd_inter_dom.h | 79 ++++++++++++ drivers/dma/idxd/version.map | 11 ++ 5 files changed, 314 insertions(+), 1 deletion(-) create mode 100644 drivers/dma/idxd/idxd_inter_dom.c create mode 100644 drivers/dma/idxd/rte_idxd_inter_dom.h create mode 100644 drivers/dma/idxd/version.map diff --git a/doc/guides/dmadevs/idxd.rst b/doc/guides/dmadevs/idxd.rst index cb8f1fe729..b0439377f8 100644 --- a/doc/guides/dmadevs/idxd.rst +++ b/doc/guides/dmadevs/idxd.rst @@ -225,3 +225,55 @@ which operation failed and kick off the device to continue processing operations if (error){ status_count = rte_dma_completed_status(dev_id, vchan, COMP_BURST_SZ, &idx, status); } + +Performing Inter-Domain operations +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Refer to the :ref:`Enqueue / Dequeue APIs ` section of the dmadev library +documentation for details on operation enqueue, submission and completion API usage. + +Refer to the :ref:`Inter-domain operations <dmadev_inter_dom>` section of the dmadev library +documentation for details on inter-domain operations.
+ +Intel(R) IDXD currently supports the following inter-domain operations: + +* Copy operation +* Fill operation + +To use these operations with the IDXD driver, the following program flow should +be adhered to: + +* Process A that wishes to share its memory with others, shall call + ``rte_idxd_window_create()``, which will return a file descriptor +* Process A is to send above mentioned file descriptor to any recipient process + (usually over IPC) that wishes to attach to that window +* Process B, after receiving above mentioned file descriptor from process A over + IPC, shall call ``rte_idxd_window_attach()`` and receive an inter-pasid handle +* Process B shall use this handle as an argument for inter-domain operations + using DMA device API + +The controller ID parameter for create/attach functions in this case would be +the controller ID of configured DSA2 devices (located under ``rte_dma_info`` +structure), but which can also be read from ``accel-config`` tool, or from the +DSA2 work queue name (e.g. work queue ``wq0.3`` would have ``0`` as its +controller ID). + +The ``rte_idxd_window_create()`` call will accept a ``flags`` argument, which +can contain the following bits: + +* ``RTE_IDXD_WIN_FLAGS_PROT_READ`` - allow other process to read from memory + region to be shared + - In this case, the remote process will be using the resulting inter-pasid + handle as source handle for inter-domain DMA operations (and set the + ``RTE_DMA_OP_FLAG_SRC_HANDLE`` DMA operation flag) +* ``RTE_IDXD_WIN_FLAGS_PROT_WRITE`` - allow other process to write into memory + region to be shared + - In this case, the remote process will be using the resulting inter-pasid + handle as destination handle for inter-domain DMA operations (and set the + ``RTE_DMA_OP_FLAG_DST_HANDLE`` DMA operation flag) +* ``RTE_IDXD_WIN_FLAGS_WIN_CHECK`` - if this flag is not set, the remote process + will be allowed unrestricted access to entire memory space of the owner + process +* ``RTE_IDXD_WIN_FLAGS_OFFSET_MODE`` - addresses for DMA operations will have to + be specified as offsets from base address of the memory region to be shared +* ``RTE_IDXD_WIN_FLAGS_TYPE_SAMS`` - enable multi-submitter mode. 
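A condensed two-process sketch of the flow described above (illustrative only,
not part of this patch): passing the file descriptor over IPC is not shown,
dev_id, vchan and controller_id are assumed to be configured already, and the
destination address is assumed to lie inside process A's shared region.

    #include <rte_dmadev.h>
    #include <rte_idxd_inter_dom.h>

    /* process A: expose a writable window over its buffer */
    int
    share_buffer(int controller_id, void *buf, unsigned int len)
    {
    	/* returns a file descriptor to be passed to process B over IPC */
    	return rte_idxd_window_create(controller_id, buf, len,
    			RTE_IDXD_WIN_FLAGS_PROT_WRITE |
    			RTE_IDXD_WIN_FLAGS_WIN_CHECK);
    }

    /* process B: attach to the window and copy local data into it */
    int
    copy_to_peer(int controller_id, int idpte_fd, int16_t dev_id,
    		uint16_t vchan, rte_iova_t local_src, rte_iova_t peer_dst,
    		uint32_t len)
    {
    	uint16_t dst_handle;

    	if (rte_idxd_window_attach(controller_id, idpte_fd, &dst_handle) != 0)
    		return -1;

    	/* only the destination is remote, so only the dst handle flag is set */
    	return rte_dma_copy_inter_dom(dev_id, vchan, local_src, peer_dst, len,
    			0, dst_handle,
    			RTE_DMA_OP_FLAG_DST_HANDLE | RTE_DMA_OP_FLAG_SUBMIT);
    }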
diff --git a/drivers/dma/idxd/idxd_inter_dom.c b/drivers/dma/idxd/idxd_inter_dom.c new file mode 100644 index 0000000000..21dcd6980d --- /dev/null +++ b/drivers/dma/idxd/idxd_inter_dom.c @@ -0,0 +1,166 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 Intel Corporation + */ + +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#include "idxd_internal.h" + +#define IDXD_TYPE ('d') +#define IDXD_IOC_BASE 100 +#define IDXD_WIN_BASE 200 + +enum idxd_win_type { + IDXD_WIN_TYPE_SA_SS = 0, + IDXD_WIN_TYPE_SA_MS, +}; + +#define IDXD_WIN_FLAGS_MASK (RTE_IDXD_WIN_FLAGS_PROT_READ | RTE_IDXD_WIN_FLAGS_PROT_WRITE |\ + RTE_IDXD_WIN_FLAGS_WIN_CHECK | RTE_IDXD_WIN_FLAGS_OFFSET_MODE|\ + RTE_IDXD_WIN_FLAGS_TYPE_SAMS) + +struct idxd_win_param { + uint64_t base; /* Window base */ + uint64_t size; /* Window size */ + uint32_t type; /* Window type, see enum idxd_win_type */ + uint16_t flags; /* See IDXD windows flags */ + uint16_t handle; /* Window handle returned by driver */ +} __attribute__((packed)); + +struct idxd_win_attach { + uint32_t fd; /* Window file descriptor returned by IDXD_WIN_CREATE */ + uint16_t handle; /* Window handle returned by driver */ +} __attribute__((packed)); + +struct idxd_win_fault { + uint64_t offset; /* Window offset of faulting address */ + uint64_t len; /* Faulting range */ + uint32_t write_fault; /* Fault generated on write */ +} __attribute__((packed)); + +#define IDXD_WIN_CREATE _IOWR(IDXD_TYPE, IDXD_IOC_BASE + 1, struct idxd_win_param) +#define IDXD_WIN_ATTACH _IOR(IDXD_TYPE, IDXD_IOC_BASE + 2, struct idxd_win_attach) +#define IDXD_WIN_FAULT _IOR(IDXD_TYPE, IDXD_WIN_BASE + 1, struct idxd_win_fault) +#define DSA_DEV_PATH "/dev/dsa" + +static inline const char * +dsa_get_dev_path(void) +{ + const char *path = getenv("DSA_DEV_PATH"); + return path ? path : DSA_DEV_PATH; +} + +static int +dsa_find_work_queue(int controller_id) +{ + char dev_templ[PATH_MAX], path_templ[PATH_MAX]; + const char *path = dsa_get_dev_path(); + struct dirent *wq; + DIR *dev_dir; + int fd = -1; + + /* construct work queue path template */ + snprintf(dev_templ, sizeof(dev_templ), "wq%d.", controller_id); + + /* open the DSA device directory */ + dev_dir = opendir(path); + if (dev_dir == NULL) + return -1; + + /* find any available work queue */ + while ((wq = readdir(dev_dir)) != NULL) { + /* skip things that aren't work queues */ + if (strncmp(wq->d_name, dev_templ, strlen(dev_templ)) != 0) + continue; + + /* try this work queue */ + snprintf(path_templ, sizeof(path_templ), "%s/%s", path, wq->d_name); + + fd = open(path_templ, O_RDWR); + if (fd < 0) + continue; + + break; + } + + return fd; +} + +int +rte_idxd_window_create(int controller_id, void *win_addr, + unsigned int win_len, int flags) +{ + struct idxd_win_param param = {0}; + int idpte_fd, fd; + + fd = dsa_find_work_queue(controller_id); + + /* did we find anything? */ + if (fd < 0) { + IDXD_PMD_ERR("%s(): creatomg idpt window failed", __func__); + return -1; + } + + /* create a wormhole into a parallel reality... */ + param.base = (uint64_t)win_addr; + param.size = win_len; + param.flags = flags & IDXD_WIN_FLAGS_MASK; + param.type = (flags & RTE_IDXD_WIN_FLAGS_TYPE_SAMS) ? 
+ IDXD_WIN_TYPE_SA_MS : IDXD_WIN_TYPE_SA_SS; + + idpte_fd = ioctl(fd, IDXD_WIN_CREATE, ¶m); + + close(fd); + + if (idpte_fd < 0) + rte_errno = idpte_fd; + + return idpte_fd; +} + +int +rte_idxd_window_attach(int controller_id, int idpte_fd, + uint16_t *handle) +{ + + struct idxd_win_attach win_attach = {0}; + int ret, fd; + + if (handle == NULL) { + rte_errno = EINVAL; + return -1; + } + + fd = dsa_find_work_queue(controller_id); + + /* did we find anything? */ + if (fd < 0) { + IDXD_PMD_ERR("%s(): creatomg idpt window failed", __func__); + rte_errno = ENOENT; + return -1; + } + + /* get access to someone else's wormhole */ + win_attach.fd = idpte_fd; + + ret = ioctl(fd, IDXD_WIN_ATTACH, &win_attach); + if (ret != 0) { + IDXD_PMD_ERR("%s(): attaching idpt window failed: %s", + __func__, strerror(ret)); + rte_errno = ret; + return -1; + } + + *handle = win_attach.handle; + + return 0; +} diff --git a/drivers/dma/idxd/meson.build b/drivers/dma/idxd/meson.build index c5403b431c..da73ab340c 100644 --- a/drivers/dma/idxd/meson.build +++ b/drivers/dma/idxd/meson.build @@ -22,5 +22,10 @@ sources = files( ) if is_linux - sources += files('idxd_bus.c') + sources += files( + 'idxd_bus.c', + 'idxd_inter_dom.c', +) endif + +headers = files('rte_idxd_inter_dom.h') diff --git a/drivers/dma/idxd/rte_idxd_inter_dom.h b/drivers/dma/idxd/rte_idxd_inter_dom.h new file mode 100644 index 0000000000..c31f3777c9 --- /dev/null +++ b/drivers/dma/idxd/rte_idxd_inter_dom.h @@ -0,0 +1,79 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 Intel Corporation + */ + +#ifndef _RTE_IDXD_INTER_DOM_H_ +#define _RTE_IDXD_INTER_DOM_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include + +#include + +/** Allow reading from address space. */ +#define RTE_IDXD_WIN_FLAGS_PROT_READ 0x0001 +/** Allow writing to address space. */ +#define RTE_IDXD_WIN_FLAGS_PROT_WRITE 0x0002 +/** If this flag not set, the entire address space will be accessible. */ +#define RTE_IDXD_WIN_FLAGS_WIN_CHECK 0x0004 +/** Destination addresses are offsets from window base address. */ +#define RTE_IDXD_WIN_FLAGS_OFFSET_MODE 0x0008 +/* multiple submitter flag. If not set - single submitter type will be used. */ +#define RTE_IDXD_WIN_FLAGS_TYPE_SAMS 0x0010 + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create an inter-pasid window to allow another process to access this process' + * memory. This function returns a file descriptor for the window, that can be + * used by another process to access this window. + * + * @param controller_id + * IDXD controller device ID. + * @param win_addr + * Base address of memory chunk being shared (ignored if + * `RTE_IDXD_WIN_FLAGS_WIN_CHECK` is not set). + * @param win_len + * Length of memory chunk being shared (ignored if + * `RTE_IDXD_WIN_FLAGS_WIN_CHECK` is not set). + * @param flags + * Flags to configure the window. + * @return + * Non-negative on success. + * Negative on error. + */ +__rte_experimental +int rte_idxd_window_create(int controller_id, void *win_addr, + unsigned int win_len, int flags); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Attach to an inter-pasid window of another process. This function expects a + * file descriptor returned by `rte_idxd_window_create()`, and will set the + * value pointed to by `handle`. This handle can then be used to perform + * inter-domain DMA operations. + * + * @param controller_id + * IDXD controller device ID. 
+ * @param idpte_fd + * File descriptor for another process's window + * @param handle + * Pointer to a variable to receive the handle. + * @return + * 0 on success. + * Negative on error. + */ +__rte_experimental +int rte_idxd_window_attach(int controller_id, int idpte_fd, uint16_t *handle); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_IDXD_INTER_DOM_H_ */ diff --git a/drivers/dma/idxd/version.map b/drivers/dma/idxd/version.map new file mode 100644 index 0000000000..e091bb7c09 --- /dev/null +++ b/drivers/dma/idxd/version.map @@ -0,0 +1,11 @@ +DPDK_23 { + local: *; +}; + + +EXPERIMENTAL { + global: + + rte_idxd_window_create; + rte_idxd_window_attach; +};