From patchwork Tue May 23 09:04:31 2023
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 127212
X-Patchwork-Delegate: jerinj@marvell.com
From: Ashwin Sekhar T K <asekhar@marvell.com>
To: dev@dpdk.org, Ashwin Sekhar T K, Pavan Nikhilesh
Subject: [PATCH v2 5/5] mempool/cnxk: add support for exchanging mbufs between pools
Date: Tue, 23 May 2023 14:34:31 +0530
Message-ID: <20230523090431.717460-5-asekhar@marvell.com>
In-Reply-To: <20230523090431.717460-1-asekhar@marvell.com>
References: <20230411075528.1125799-1-asekhar@marvell.com>
 <20230523090431.717460-1-asekhar@marvell.com>

Add the following cnxk mempool PMD APIs to facilitate exchanging mbufs
between pools.

 * rte_pmd_cnxk_mempool_is_hwpool() - Allows user to check whether a pool
   is hwpool or not.
 * rte_pmd_cnxk_mempool_range_check_disable() - Disables range checking on
   any rte_mempool.
 * rte_pmd_cnxk_mempool_mbuf_exchange() - Exchanges mbufs between any two
   rte_mempool where the range check is disabled.

Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
---
 doc/api/doxy-api-index.md                   |  1 +
 doc/api/doxy-api.conf.in                    |  1 +
 drivers/mempool/cnxk/cn10k_hwpool_ops.c     | 63 ++++++++++++++++++++-
 drivers/mempool/cnxk/cnxk_mempool.h         |  4 ++
 drivers/mempool/cnxk/meson.build            |  1 +
 drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h | 56 ++++++++++++++++++
 drivers/mempool/cnxk/version.map            | 10 ++++
 7 files changed, 135 insertions(+), 1 deletion(-)
 create mode 100644 drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h
 create mode 100644 drivers/mempool/cnxk/version.map

diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index c709fd48ad..a781b8f408 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -49,6 +49,7 @@ The public API headers are grouped by topics:
   [iavf](@ref rte_pmd_iavf.h),
   [bnxt](@ref rte_pmd_bnxt.h),
   [cnxk](@ref rte_pmd_cnxk.h),
+  [cnxk_mempool](@ref rte_pmd_cnxk_mempool.h),
   [dpaa](@ref rte_pmd_dpaa.h),
   [dpaa2](@ref rte_pmd_dpaa2.h),
   [mlx5](@ref rte_pmd_mlx5.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index d230a19e1f..7e68e43c64 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -9,6 +9,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
                           @TOPDIR@/drivers/crypto/scheduler \
                           @TOPDIR@/drivers/dma/dpaa2 \
                           @TOPDIR@/drivers/event/dlb2 \
+                          @TOPDIR@/drivers/mempool/cnxk \
                           @TOPDIR@/drivers/mempool/dpaa2 \
                           @TOPDIR@/drivers/net/ark \
                           @TOPDIR@/drivers/net/bnxt \
diff --git a/drivers/mempool/cnxk/cn10k_hwpool_ops.c b/drivers/mempool/cnxk/cn10k_hwpool_ops.c
index 9238765155..b234481ec1 100644
--- a/drivers/mempool/cnxk/cn10k_hwpool_ops.c
+++ b/drivers/mempool/cnxk/cn10k_hwpool_ops.c
@@ -3,11 +3,14 @@
  */
 
 #include <rte_mempool.h>
+#include <rte_pmd_cnxk_mempool.h>
 
 #include "roc_api.h"
 #include "cnxk_mempool.h"
 
-#define CN10K_HWPOOL_MEM_SIZE 128
+#define CN10K_HWPOOL_MEM_SIZE    128
+#define CN10K_NPA_IOVA_RANGE_MIN 0x0
+#define CN10K_NPA_IOVA_RANGE_MAX 0x1fffffffffff80
 
 static int __rte_hot
 cn10k_hwpool_enq(struct rte_mempool *hp, void *const *obj_table, unsigned int n)
@@ -197,6 +200,64 @@ cn10k_hwpool_populate(struct rte_mempool *hp, unsigned int max_objs,
 	return hp->size;
 }
 
+int
+rte_pmd_cnxk_mempool_mbuf_exchange(struct rte_mbuf *m1, struct rte_mbuf *m2)
+{
+	struct rte_mempool_objhdr *hdr;
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	if (!(CNXK_MEMPOOL_FLAGS(m1->pool) & CNXK_MEMPOOL_F_NO_RANGE_CHECK) ||
+	    !(CNXK_MEMPOOL_FLAGS(m2->pool) & CNXK_MEMPOOL_F_NO_RANGE_CHECK)) {
+		plt_err("Pools must have range check disabled");
+		return -EINVAL;
+	}
+	if (m1->pool->elt_size != m2->pool->elt_size ||
+	    m1->pool->header_size != m2->pool->header_size ||
+	    m1->pool->trailer_size != m2->pool->trailer_size ||
+	    m1->pool->size != m2->pool->size) {
+		plt_err("Parameters of pools involved in exchange does not match");
+		return -EINVAL;
+	}
+#endif
+	RTE_SWAP(m1->pool, m2->pool);
+	hdr = rte_mempool_get_header(m1);
+	hdr->mp = m1->pool;
+	hdr = rte_mempool_get_header(m2);
+	hdr->mp = m2->pool;
+	return 0;
+}
+
+int
+rte_pmd_cnxk_mempool_is_hwpool(struct rte_mempool *mp)
+{
+	return !!(CNXK_MEMPOOL_FLAGS(mp) & CNXK_MEMPOOL_F_IS_HWPOOL);
+}
+
+int
+rte_pmd_cnxk_mempool_range_check_disable(struct rte_mempool *mp)
+{
+	if (rte_pmd_cnxk_mempool_is_hwpool(mp)) {
+		/* Disable only aura range check for hardware pools */
+		roc_npa_aura_op_range_set(mp->pool_id, CN10K_NPA_IOVA_RANGE_MIN,
+					  CN10K_NPA_IOVA_RANGE_MAX);
+		CNXK_MEMPOOL_SET_FLAGS(mp, CNXK_MEMPOOL_F_NO_RANGE_CHECK);
+		mp = CNXK_MEMPOOL_CONFIG(mp);
+	}
+
+	/* No need to disable again if already disabled */
+	if (CNXK_MEMPOOL_FLAGS(mp) & CNXK_MEMPOOL_F_NO_RANGE_CHECK)
+		return 0;
+
+	/* Disable aura/pool range check */
+	roc_npa_pool_op_range_set(mp->pool_id, CN10K_NPA_IOVA_RANGE_MIN,
+				  CN10K_NPA_IOVA_RANGE_MAX);
+	if (roc_npa_pool_range_update_check(mp->pool_id) < 0)
+		return -EBUSY;
+
+	CNXK_MEMPOOL_SET_FLAGS(mp, CNXK_MEMPOOL_F_NO_RANGE_CHECK);
+	return 0;
+}
+
 static struct rte_mempool_ops cn10k_hwpool_ops = {
 	.name = "cn10k_hwpool_ops",
 	.alloc = cn10k_hwpool_alloc,
diff --git a/drivers/mempool/cnxk/cnxk_mempool.h b/drivers/mempool/cnxk/cnxk_mempool.h
index 4ca05d53e1..669e617952 100644
--- a/drivers/mempool/cnxk/cnxk_mempool.h
+++ b/drivers/mempool/cnxk/cnxk_mempool.h
@@ -20,6 +20,10 @@ enum cnxk_mempool_flags {
 	 * This flag is set by the driver.
 	 */
 	CNXK_MEMPOOL_F_IS_HWPOOL = RTE_BIT64(2),
+	/* This flag indicates whether range check has been disabled for
+	 * the pool. This flag is set by the driver.
+	 */
+	CNXK_MEMPOOL_F_NO_RANGE_CHECK = RTE_BIT64(3),
 };
 
 #define CNXK_MEMPOOL_F_MASK 0xFUL
diff --git a/drivers/mempool/cnxk/meson.build b/drivers/mempool/cnxk/meson.build
index ce152bedd2..e388cce26a 100644
--- a/drivers/mempool/cnxk/meson.build
+++ b/drivers/mempool/cnxk/meson.build
@@ -17,5 +17,6 @@ sources = files(
         'cn10k_hwpool_ops.c',
 )
 
+headers = files('rte_pmd_cnxk_mempool.h')
 deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_cnxk', 'mempool']
 require_iova_in_mbuf = false
diff --git a/drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h b/drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h
new file mode 100644
index 0000000000..ada6e7cd4d
--- /dev/null
+++ b/drivers/mempool/cnxk/rte_pmd_cnxk_mempool.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+/**
+ * @file rte_pmd_cnxk_mempool.h
+ * Marvell CNXK Mempool PMD specific functions.
+ *
+ **/
+
+#ifndef _PMD_CNXK_MEMPOOL_H_
+#define _PMD_CNXK_MEMPOOL_H_
+
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+
+/**
+ * Exchange mbufs between two mempools.
+ *
+ * @param m1
+ *   First mbuf
+ * @param m2
+ *   Second mbuf
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+__rte_experimental
+int rte_pmd_cnxk_mempool_mbuf_exchange(struct rte_mbuf *m1,
+				       struct rte_mbuf *m2);
+
+/**
+ * Check whether a mempool is a hwpool.
+ *
+ * @param mp
+ *   Mempool to check.
+ *
+ * @return
+ *   1 if mp is a hwpool, 0 otherwise.
+ */
+__rte_experimental
+int rte_pmd_cnxk_mempool_is_hwpool(struct rte_mempool *mp);
+
+/**
+ * Disable buffer address range check on a mempool.
+ *
+ * @param mp
+ *   Mempool to disable range check on.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+__rte_experimental
+int rte_pmd_cnxk_mempool_range_check_disable(struct rte_mempool *mp);
+
+#endif /* _PMD_CNXK_MEMPOOL_H_ */
diff --git a/drivers/mempool/cnxk/version.map b/drivers/mempool/cnxk/version.map
new file mode 100644
index 0000000000..755731e3b5
--- /dev/null
+++ b/drivers/mempool/cnxk/version.map
@@ -0,0 +1,10 @@
+	DPDK_23 {
+	local: *;
+	};
+
+	EXPERIMENTAL {
+	global:
+	rte_pmd_cnxk_mempool_is_hwpool;
+	rte_pmd_cnxk_mempool_mbuf_exchange;
+	rte_pmd_cnxk_mempool_range_check_disable;
+	};
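
For illustration only (not part of the patch), the sketch below shows how an
application might combine the three new APIs. It assumes mp0 and mp1 are
pktmbuf mempools already created with the cnxk/cn10k mempool driver (one of
them typically being a hwpool paired with the other); the helper name
exchange_one_mbuf() and its error handling are hypothetical.

#include <stdio.h>

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_pmd_cnxk_mempool.h>

/* Hypothetical helper: swap the pool ownership of one mbuf from each pool. */
static int
exchange_one_mbuf(struct rte_mempool *mp0, struct rte_mempool *mp1)
{
	struct rte_mbuf *m0, *m1;

	/* Report which of the two pools is backed by a hardware pool. */
	printf("mp0 hwpool: %d, mp1 hwpool: %d\n",
	       rte_pmd_cnxk_mempool_is_hwpool(mp0),
	       rte_pmd_cnxk_mempool_is_hwpool(mp1));

	/* Range checking must be disabled on both pools before exchanging. */
	if (rte_pmd_cnxk_mempool_range_check_disable(mp0) != 0 ||
	    rte_pmd_cnxk_mempool_range_check_disable(mp1) != 0)
		return -1;

	m0 = rte_pktmbuf_alloc(mp0);
	m1 = rte_pktmbuf_alloc(mp1);
	if (m0 == NULL || m1 == NULL)
		goto fail;

	/* After a successful exchange, m0 belongs to mp1 and m1 to mp0, so
	 * freeing them returns the buffers to their new pools.
	 */
	if (rte_pmd_cnxk_mempool_mbuf_exchange(m0, m1) != 0)
		goto fail;

	rte_pktmbuf_free(m0);
	rte_pktmbuf_free(m1);
	return 0;

fail:
	rte_pktmbuf_free(m0); /* rte_pktmbuf_free() accepts NULL */
	rte_pktmbuf_free(m1);
	return -1;
}

Note that the exchange only swaps pool ownership metadata; as enforced by the
RTE_LIBRTE_MEMPOOL_DEBUG checks in the patch, both pools are expected to have
matching element, header and trailer sizes.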