From patchwork Thu Oct 6 17:53:53 2022
X-Patchwork-Submitter: Hanumanth Pothula
X-Patchwork-Id: 117496
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Hanumanth Pothula
To: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH v6 1/3] ethdev: support multiple mbuf pools per Rx queue
Date: Thu, 6 Oct 2022 23:23:53 +0530
Message-ID: <20221006175355.1327673-1-hpothula@marvell.com>
In-Reply-To: <20221006170126.1322852-1-hpothula@marvell.com>
References: <20221006170126.1322852-1-hpothula@marvell.com>
List-Id: DPDK patches and discussions

This patch adds support for the multiple mempool capability. Some hardware
can choose a memory pool based on the packet's size. This capability allows
a PMD to choose a memory pool based on the packet's length.
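As an application-side illustration (an editorial sketch, not part of the
patch): the new rte_eth_rxconf fields added by this series (rx_mempools,
rx_npool, RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) would be filled in roughly as
follows; pool names, element counts, and buffer sizes are illustrative.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static struct rte_mempool *pools[3];

    static int
    setup_multi_pool_rxq(uint16_t port_id, uint16_t queue_id,
                         uint16_t nb_desc, int socket_id,
                         struct rte_eth_rxconf *rxconf)
    {
            /* Three pools with increasing buffer sizes (2K/4K/9K). */
            pools[0] = rte_pktmbuf_pool_create("pool_2k", 8192, 256, 0,
                            2048 + RTE_PKTMBUF_HEADROOM, socket_id);
            pools[1] = rte_pktmbuf_pool_create("pool_4k", 4096, 256, 0,
                            4096 + RTE_PKTMBUF_HEADROOM, socket_id);
            pools[2] = rte_pktmbuf_pool_create("pool_9k", 2048, 256, 0,
                            9216 + RTE_PKTMBUF_HEADROOM, socket_id);
            if (pools[0] == NULL || pools[1] == NULL || pools[2] == NULL)
                    return -1;

            /* Hand all pools to the PMD instead of a single mempool. */
            rxconf->offloads |= RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL;
            rxconf->rx_mempools = pools;
            rxconf->rx_npool = 3;

            /* The single-pool argument must be NULL in this mode. */
            return rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
                                          socket_id, rxconf, NULL);
    }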
This is often useful for saving memory: the application can create
different pools to steer packets of specific sizes, enabling effective
use of memory.

For example, say the HW supports three pools:
 - pool-1 size is 2K
 - pool-2 size is > 2K and < 4K
 - pool-3 size is > 4K

Here,
 - pool-1 can accommodate packets with sizes < 2K
 - pool-2 can accommodate packets with sizes > 2K and < 4K
 - pool-3 can accommodate packets with sizes > 4K

With the multiple mempool capability enabled in SW, an application may
create three pools of different sizes and pass them to the PMD, allowing
the PMD to program the HW based on the packet lengths. Packets with
lengths less than 2K are then received on pool-1, packets with lengths
between 2K and 4K on pool-2, and packets greater than 4K on pool-3.

Signed-off-by: Hanumanth Pothula

v6:
 - Updated release notes, release_22_11.rst.
v5:
 - Declared memory pools as struct rte_mempool **rx_mempools rather than
   as struct rte_mempool *mp.
 - Added the feature in release notes.
 - Updated conditions and strings as per review comments.
v4:
 - Renamed the offload capability from RTE_ETH_RX_OFFLOAD_BUFFER_SORT to
   RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL.
 - In struct rte_eth_rxconf, defined a new pointer which holds an array of
   type struct rte_eth_rx_mempool (memory pools). This array is used by
   the PMD to program multiple mempools.
v3:
 - Implemented the pool sort capability as a new Rx offload capability,
   RTE_ETH_RX_OFFLOAD_BUFFER_SORT.
v2:
 - Along with spec changes, uploading testpmd and driver changes.
---
 doc/guides/rel_notes/release_22_11.rst |  6 +++
 lib/ethdev/rte_ethdev.c                | 74 ++++++++++++++++++++++----
 lib/ethdev/rte_ethdev.h                | 22 ++++++++
 3 files changed, 92 insertions(+), 10 deletions(-)

diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 2e076ba2ad..8bb19155d9 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -55,6 +55,12 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added ethdev support for multiple mbuf pools per Rx queue.**
+
+  * Added new Rx offload flag ``RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL`` to support
+    multiple mbuf pools per Rx queue. This capability allows a PMD to choose
+    a memory pool based on the packet's length.
+
 * **Updated Wangxun ngbe driver.**
 
   * Added support to set device link down/up.
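Editorial aside before the ethdev changes below: the patch also exports the
per-queue pool limit through struct rte_eth_dev_info, so an application can
validate its pool count up front. A sketch of that check (rte_eth_dev_info_get()
is existing ethdev API; max_pools and RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL are the
additions made by this patch):

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Returns 0 when the port can take `need` mempools on one Rx queue. */
    static int
    check_multi_pool_support(uint16_t port_id, uint16_t need)
    {
            struct rte_eth_dev_info dev_info;
            int ret = rte_eth_dev_info_get(port_id, &dev_info);

            if (ret != 0)
                    return ret;
            if ((dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) == 0)
                    return -ENOTSUP;
            if (need > dev_info.max_pools)
                    return -EINVAL;
            return 0;
    }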
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 1979dc0850..eed4834e6b 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1634,6 +1634,44 @@ rte_eth_dev_is_removed(uint16_t port_id)
 	return ret;
 }
 
+static int
+rte_eth_rx_queue_check_mempool(struct rte_mempool **rx_mempool,
+			       uint16_t n_pool, uint32_t *mbp_buf_size,
+			       const struct rte_eth_dev_info *dev_info)
+{
+	uint16_t pool_idx;
+
+	if (n_pool > dev_info->max_pools) {
+		RTE_ETHDEV_LOG(ERR,
+			       "Too many Rx mempools %u vs maximum %u\n",
+			       n_pool, dev_info->max_pools);
+		return -EINVAL;
+	}
+
+	for (pool_idx = 0; pool_idx < n_pool; pool_idx++) {
+		struct rte_mempool *mpl = rx_mempool[pool_idx];
+
+		if (mpl == NULL) {
+			RTE_ETHDEV_LOG(ERR, "null Rx mempool pointer\n");
+			return -EINVAL;
+		}
+
+		*mbp_buf_size = rte_pktmbuf_data_room_size(mpl);
+		if (*mbp_buf_size < dev_info->min_rx_bufsize +
+				    RTE_PKTMBUF_HEADROOM) {
+			RTE_ETHDEV_LOG(ERR,
+				       "%s mbuf_data_room_size %u < %u (RTE_PKTMBUF_HEADROOM=%u + min_rx_bufsize(dev)=%u)\n",
+				       mpl->name, *mbp_buf_size,
+				       RTE_PKTMBUF_HEADROOM + dev_info->min_rx_bufsize,
+				       RTE_PKTMBUF_HEADROOM,
+				       dev_info->min_rx_bufsize);
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
 static int
 rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg,
 			     uint16_t n_seg, uint32_t *mbp_buf_size,
@@ -1733,9 +1771,12 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 
 	if (mp != NULL) {
 		/* Single pool configuration check. */
-		if (rx_conf != NULL && rx_conf->rx_nseg != 0) {
+		if (rx_conf != NULL &&
+		    (((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) &&
+		      rx_conf->rx_nseg != 0) ||
+		     ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) &&
+		      rx_conf->rx_npool != 0))) {
 			RTE_ETHDEV_LOG(ERR,
-				       "Ambiguous segment configuration\n");
+				       "Ambiguous Rx mempools configuration\n");
 			return -EINVAL;
 		}
 		/*
@@ -1763,30 +1804,43 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 					       dev_info.min_rx_bufsize);
 			return -EINVAL;
 		}
-	} else {
+	} else if (rx_conf == NULL) {
+		RTE_ETHDEV_LOG(ERR,
+			       "Memory pool is null and no Rx configuration provided\n");
+		return -EINVAL;
+	} else if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
 		const struct rte_eth_rxseg_split *rx_seg;
 		uint16_t n_seg;
 
 		/* Extended multi-segment configuration check. */
-		if (rx_conf == NULL || rx_conf->rx_seg == NULL ||
-		    rx_conf->rx_nseg == 0) {
+		if (rx_conf->rx_seg == NULL || rx_conf->rx_nseg == 0) {
 			RTE_ETHDEV_LOG(ERR,
-				       "Memory pool is null and no extended configuration provided\n");
+				       "Memory pool is null and no multi-segment configuration provided\n");
 			return -EINVAL;
 		}
 
 		rx_seg = (const struct rte_eth_rxseg_split *)rx_conf->rx_seg;
 		n_seg = rx_conf->rx_nseg;
 
-		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
-			ret = rte_eth_rx_queue_check_split(rx_seg, n_seg,
-						   &mbp_buf_size,
-						   &dev_info);
-			if (ret != 0)
-				return ret;
-		} else {
-			RTE_ETHDEV_LOG(ERR, "No Rx segmentation offload configured\n");
-			return -EINVAL;
-		}
+		ret = rte_eth_rx_queue_check_split(rx_seg, n_seg,
+						   &mbp_buf_size,
+						   &dev_info);
+		if (ret != 0)
+			return ret;
+	} else if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) {
+		/* Extended multi-pool configuration check. */
+		if (rx_conf->rx_mempools == NULL || rx_conf->rx_npool == 0) {
+			RTE_ETHDEV_LOG(ERR,
+				       "Memory pool is null and no multi-pool configuration provided\n");
+			return -EINVAL;
+		}
+
+		ret = rte_eth_rx_queue_check_mempool(rx_conf->rx_mempools,
+						     rx_conf->rx_npool,
+						     &mbp_buf_size,
+						     &dev_info);
+		if (ret != 0)
+			return ret;
+	} else {
+		RTE_ETHDEV_LOG(ERR, "Missing Rx mempool configuration\n");
+		return -EINVAL;
 	}
 
 	/* Use default specified by driver, if nb_rx_desc is zero */
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index b62ac5bb6f..306c2b3573 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1067,6 +1067,25 @@ struct rte_eth_rxconf {
 	 */
 	union rte_eth_rxseg *rx_seg;
 
+	/**
+	 * Points to an array of mempools.
+	 *
+	 * Valid only when the RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL flag is set in
+	 * Rx offloads.
+	 *
+	 * This provides support for multiple mbuf pools per Rx queue.
+	 *
+	 * This is often useful for saving memory: the application can create
+	 * different pools to steer packets of specific sizes, enabling
+	 * effective use of memory.
+	 *
+	 * Note that when Rx scatter is enabled, a packet may be delivered
+	 * using a chain of mbufs obtained from a single mempool or from
+	 * multiple mempools, depending on the NIC implementation.
+	 */
+	struct rte_mempool **rx_mempools;
+	uint16_t rx_npool; /**< Number of mempools */
+
 	uint64_t reserved_64s[2]; /**< Reserved for future fields */
 	void *reserved_ptrs[2]; /**< Reserved for future fields */
 };
@@ -1395,6 +1414,7 @@ struct rte_eth_conf {
 #define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM  RTE_BIT64(18)
 #define RTE_ETH_RX_OFFLOAD_RSS_HASH         RTE_BIT64(19)
 #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT     RTE_BIT64(20)
+#define RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL      RTE_BIT64(21)
 
 #define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
 				     RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
@@ -1615,6 +1635,8 @@ struct rte_eth_dev_info {
 	/** Configured number of Rx/Tx queues */
 	uint16_t nb_rx_queues; /**< Number of Rx queues. */
 	uint16_t nb_tx_queues; /**< Number of Tx queues. */
+	/** Maximum number of pools supported per Rx queue. */
+	uint16_t max_pools;
 	/** Rx parameter recommendations */
 	struct rte_eth_dev_portconf default_rxportconf;
 	/** Tx parameter recommendations */

From patchwork Thu Oct 6 17:53:54 2022
X-Patchwork-Submitter: Hanumanth Pothula
X-Patchwork-Id: 117497
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Hanumanth Pothula
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Subject: [PATCH v6 2/3] net/cnxk: support multiple mbuf pools per Rx queue
Date: Thu, 6 Oct 2022 23:23:54 +0530
Message-ID: <20221006175355.1327673-2-hpothula@marvell.com>
In-Reply-To: <20221006175355.1327673-1-hpothula@marvell.com>
References: <20221006170126.1322852-1-hpothula@marvell.com> <20221006175355.1327673-1-hpothula@marvell.com>
List-Id: DPDK patches and discussions

Presently, the HW is programmed
only to receive packets from the LPB pool, so all packets land in the LPB
pool. But the CNXK HW supports two pools:

 - SPB -> packets with smaller size (less than 4K)
 - LPB -> packets with bigger size (greater than 4K)

This patch enables the multiple mempool capability, where the pool is
selected based on the packet's length. Essentially, the PMD programs the
HW to receive packets into both the SPB and LPB pools based on the
packet's length.

This is achieved by enabling the Rx multiple mempool offload,
RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL. It allows the application to pass more
than one pool (in our case two) to the driver, with different segment
(packet) lengths, which helps the driver configure both pools based on
the segment lengths.

This is often useful for saving memory: the application can create
different pools to steer packets of specific sizes, enabling effective
use of memory.

Signed-off-by: Hanumanth Pothula
---
 doc/guides/nics/features/cnxk.ini     |  1 +
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 drivers/net/cnxk/cnxk_ethdev.c        | 84 ++++++++++++++++++++++++---
 drivers/net/cnxk/cnxk_ethdev.h        |  4 +-
 drivers/net/cnxk/cnxk_ethdev_ops.c    |  3 +
 5 files changed, 83 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/features/cnxk.ini b/doc/guides/nics/features/cnxk.ini
index 1876fe86c7..ed778ba398 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+multiple mempools    = Y
 Speed capabilities   = Y
 Rx interrupt         = Y
 Lock-free Tx queue   = Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini b/doc/guides/nics/features/cnxk_vec.ini
index 5d0976e6ce..c2270fe338 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+multiple mempools    = Y
 Speed capabilities   = Y
 Rx interrupt         = Y
 Lock-free Tx queue   = Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index ce896338d9..6d525036ff 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -541,6 +541,58 @@ cnxk_nix_tx_queue_release(struct rte_eth_dev *eth_dev, uint16_t qid)
 	plt_free(txq_sp);
 }
 
+static int
+cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
+			 struct rte_mempool **lpb_pool,
+			 struct rte_mempool **spb_pool)
+{
+	struct rte_mempool *pool0;
+	struct rte_mempool *pool1;
+	struct rte_mempool **mp = rx_conf->rx_mempools;
+	const char *platform_ops;
+	struct rte_mempool_ops *ops;
+
+	if (*lpb_pool || rx_conf->rx_npool != CNXK_NIX_NUM_POOLS_MAX) {
+		plt_err("invalid arguments");
+		return -EINVAL;
+	}
+
+	if (mp == NULL || mp[0] == NULL || mp[1] == NULL) {
+		plt_err("invalid memory pools");
+		return -EINVAL;
+	}
+
+	/* The pool with the larger element size serves as LPB pool,
+	 * the other as SPB pool.
+	 */
+	pool0 = mp[0];
+	pool1 = mp[1];
+	if (pool0->elt_size > pool1->elt_size) {
+		*lpb_pool = pool0;
+		*spb_pool = pool1;
+	} else {
+		*lpb_pool = pool1;
+		*spb_pool = pool0;
+	}
+
+	if ((*spb_pool)->pool_id == 0) {
+		plt_err("Invalid pool_id");
+		return -EINVAL;
+	}
+
+	platform_ops = rte_mbuf_platform_mempool_ops();
+	ops = rte_mempool_get_ops((*spb_pool)->ops_index);
+	if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) {
+		plt_err("mempool ops should be of cnxk_npa type");
+		return -EINVAL;
+	}
+
+	plt_info("spb_pool:%s lpb_pool:%s lpb_len:%u spb_len:%u",
+		 (*spb_pool)->name, (*lpb_pool)->name,
+		 (*lpb_pool)->elt_size, (*spb_pool)->elt_size);
+
+	return 0;
+}
+
 int
 cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			uint32_t nb_desc, uint16_t fp_rx_q_sz,
@@ -557,6 +609,8 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	uint16_t first_skip;
 	int rc = -EINVAL;
 	size_t rxq_sz;
+	struct rte_mempool *lpb_pool = mp;
+	struct rte_mempool *spb_pool = NULL;
 
 	/* Sanity checks */
 	if (rx_conf->rx_deferred_start == 1) {
@@ -564,15 +618,21 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 		goto fail;
 	}
 
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) {
+		rc = cnxk_nix_process_rx_conf(rx_conf, &lpb_pool, &spb_pool);
+		if (rc)
+			goto fail;
+	}
+
 	platform_ops = rte_mbuf_platform_mempool_ops();
 	/* This driver needs cnxk_npa mempool ops to work */
-	ops = rte_mempool_get_ops(mp->ops_index);
+	ops = rte_mempool_get_ops(lpb_pool->ops_index);
 	if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) {
 		plt_err("mempool ops should be of cnxk_npa type");
 		goto fail;
 	}
 
-	if (mp->pool_id == 0) {
+	if (lpb_pool->pool_id == 0) {
 		plt_err("Invalid pool_id");
 		goto fail;
 	}
@@ -589,13 +649,13 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	/* Its a no-op when inline device is not used */
 	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY ||
 	    dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
-		roc_nix_inl_dev_xaq_realloc(mp->pool_id);
+		roc_nix_inl_dev_xaq_realloc(lpb_pool->pool_id);
 
 	/* Increase CQ size to Aura size to avoid CQ overflow and
 	 * then CPT buffer leak.
 	 */
 	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
-		nb_desc = nix_inl_cq_sz_clamp_up(nix, mp, nb_desc);
+		nb_desc = nix_inl_cq_sz_clamp_up(nix, lpb_pool, nb_desc);
 
 	/* Setup ROC CQ */
 	cq = &dev->cqs[qid];
@@ -611,17 +671,17 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rq = &dev->rqs[qid];
 	rq->qid = qid;
 	rq->cqid = cq->qid;
-	rq->aura_handle = mp->pool_id;
+	rq->aura_handle = lpb_pool->pool_id;
 	rq->flow_tag_width = 32;
 	rq->sso_ena = false;
 
 	/* Calculate first mbuf skip */
 	first_skip = (sizeof(struct rte_mbuf));
 	first_skip += RTE_PKTMBUF_HEADROOM;
-	first_skip += rte_pktmbuf_priv_size(mp);
+	first_skip += rte_pktmbuf_priv_size(lpb_pool);
 	rq->first_skip = first_skip;
 	rq->later_skip = sizeof(struct rte_mbuf);
-	rq->lpb_size = mp->elt_size;
+	rq->lpb_size = lpb_pool->elt_size;
 	if (roc_errata_nix_no_meta_aura())
 		rq->lpb_drop_ena = !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY);
 
@@ -629,6 +689,12 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	if (roc_nix_inl_inb_is_enabled(nix))
 		rq->ipsech_ena = true;
 
+	if (spb_pool) {
+		rq->spb_ena = 1;
+		rq->spb_aura_handle = spb_pool->pool_id;
+		rq->spb_size = spb_pool->elt_size;
+	}
+
 	rc = roc_nix_rq_init(&dev->nix, rq, !!eth_dev->data->dev_started);
 	if (rc) {
 		plt_err("Failed to init roc rq for rq=%d, rc=%d", qid, rc);
@@ -651,7 +717,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	/* Queue config should reflect global offloads */
 	rxq_sp->qconf.conf.rx.offloads = dev->rx_offloads;
 	rxq_sp->qconf.nb_desc = nb_desc;
-	rxq_sp->qconf.mp = mp;
+	rxq_sp->qconf.mp = lpb_pool;
 	rxq_sp->tc = 0;
 	rxq_sp->tx_pause = (dev->fc_cfg.mode == RTE_ETH_FC_FULL ||
 			    dev->fc_cfg.mode == RTE_ETH_FC_TX_PAUSE);
@@ -670,7 +736,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 		goto free_mem;
 	}
 
-	plt_nix_dbg("rq=%d pool=%s nb_desc=%d->%d", qid, mp->name, nb_desc,
+	plt_nix_dbg("rq=%d pool=%s nb_desc=%d->%d", qid, lpb_pool->name, nb_desc,
 		    cq->nb_desc);
 
 	/* Store start of fast path area */
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index c09e9bff8e..aedbab85b9 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -44,6 +44,8 @@
 #define CNXK_NIX_RX_DEFAULT_RING_SZ 4096
 /* Max supported SQB count */
 #define CNXK_NIX_TX_MAX_SQB 512
+/* LPB & SPB */
+#define CNXK_NIX_NUM_POOLS_MAX 2
 
 /* If PTP is enabled additional SEND MEM DESC is required which
  * takes 2 words, hence max 7 iova address are possible
@@ -83,7 +85,7 @@
 	 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_SCATTER |  \
 	 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_RSS_HASH |  \
 	 RTE_ETH_RX_OFFLOAD_TIMESTAMP | RTE_ETH_RX_OFFLOAD_VLAN_STRIP |      \
-	 RTE_ETH_RX_OFFLOAD_SECURITY)
+	 RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL | RTE_ETH_RX_OFFLOAD_SECURITY)
 
 #define RSS_IPV4_ENABLE \
 	(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 07c744bf64..bfe9199537 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -69,6 +69,9 @@ cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
 	devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 			    RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP |
 			    RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
+	devinfo->max_pools = CNXK_NIX_NUM_POOLS_MAX;
+
 	return 0;
 }

From patchwork Thu Oct 6 17:53:55 2022
X-Patchwork-Submitter: Hanumanth Pothula
X-Patchwork-Id: 117498
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Hanumanth Pothula
To: Aman Singh, Yuying Zhang
Subject: [PATCH v6 3/3] app/testpmd: support multiple mbuf pools per Rx queue
Date: Thu, 6 Oct 2022 23:23:55 +0530
Message-ID: <20221006175355.1327673-3-hpothula@marvell.com>
In-Reply-To: <20221006175355.1327673-1-hpothula@marvell.com>
References: <20221006170126.1322852-1-hpothula@marvell.com> <20221006175355.1327673-1-hpothula@marvell.com>
List-Id: DPDK patches and discussions

This patch adds support for multiple mempools. Some hardware can choose a
memory pool based on the packet's size; the multiple mempool capability
allows a PMD to choose a memory pool based on the packet's length.

With multiple mempool support enabled, testpmd populates the mempool
array and also prints the name of the pool on which each packet is
received.
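For illustration, with the util.c change below, a line of testpmd's verbose
Rx dump would look roughly as follows (the pool name, EtherType, and length
values here are made up; the format comes from the patch's MKDUMPSTR call):

     - pool=mb_pool_0 - type=0x0800 - length=1500 - nb_segs=1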
Signed-off-by: Hanumanth Pothula
---
 app/test-pmd/testpmd.c | 44 ++++++++++++++++++++++++++++++------------
 app/test-pmd/testpmd.h |  3 +++
 app/test-pmd/util.c    |  4 ++--
 3 files changed, 37 insertions(+), 14 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 77741fc41f..1dbddf7b43 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2624,11 +2624,13 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	       struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp)
 {
 	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
+	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
 	unsigned int i, mp_n;
 	int ret;
 
 	if (rx_pkt_nb_segs <= 1 ||
-	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0) {
+	    (rx_conf->offloads & (RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT |
+				  RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL)) == 0) {
 		rx_conf->rx_seg = NULL;
 		rx_conf->rx_nseg = 0;
 		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
@@ -2637,7 +2639,9 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		goto exit;
 	}
 	for (i = 0; i < rx_pkt_nb_segs; i++) {
-		struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
+		struct rte_eth_rxseg_split *rx_split = &rx_useg[i].split;
+		struct rte_mempool *mempool;
 		struct rte_mempool *mpx;
 		/*
 		 * Use last valid pool for the segments with number
@@ -2645,16 +2649,32 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		 */
 		mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i;
 		mpx = mbuf_pool_find(socket_id, mp_n);
-		/* Handle zero as mbuf data buffer size. */
-		rx_seg->length = rx_pkt_seg_lengths[i] ?
-				 rx_pkt_seg_lengths[i] :
-				 mbuf_data_size[mp_n];
-		rx_seg->offset = i < rx_pkt_nb_offs ?
-				 rx_pkt_seg_offsets[i] : 0;
-		rx_seg->mp = mpx ? mpx : mp;
-	}
-	rx_conf->rx_nseg = rx_pkt_nb_segs;
-	rx_conf->rx_seg = rx_useg;
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+			/*
+			 * On zero segment length, use buffer size minus
+			 * headroom size as the length, to make sure enough
+			 * space is accommodated for the header.
+			 */
+			rx_split->length = rx_pkt_seg_lengths[i] ?
+					   rx_pkt_seg_lengths[i] :
+					   mbuf_data_size[mp_n] - RTE_PKTMBUF_HEADROOM;
+			rx_split->offset = i < rx_pkt_nb_offs ?
+					   rx_pkt_seg_offsets[i] : 0;
+			rx_split->mp = mpx ? mpx : mp;
+		}
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) {
+			mempool = mpx ? mpx : mp;
+			rx_mempool[i] = mempool;
+		}
+	}
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+		rx_conf->rx_nseg = rx_pkt_nb_segs;
+		rx_conf->rx_seg = rx_useg;
+	}
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) {
+		rx_conf->rx_mempools = rx_mempool;
+		rx_conf->rx_npool = rx_pkt_nb_segs;
+	}
 	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
 				     nb_rx_desc, socket_id,
 				     rx_conf, NULL);
 	rx_conf->rx_seg = NULL;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index ddf5e21849..15a26171e2 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -82,6 +82,9 @@ extern uint8_t cl_quit;
 
 #define MIN_TOTAL_NUM_MBUFS 1024
 
+/* Maximum number of pools supported per Rx queue */
+#define MAX_MEMPOOL 8
+
 typedef uint8_t lcoreid_t;
 typedef uint16_t portid_t;
 typedef uint16_t queueid_t;
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index fd98e8b51d..f9df5f69ef 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -150,8 +150,8 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 		print_ether_addr(" - dst=", &eth_hdr->dst_addr,
 				 print_buf, buf_size, &cur_len);
 		MKDUMPSTR(print_buf, buf_size, cur_len,
-			  " - type=0x%04x - length=%u - nb_segs=%d",
-			  eth_type, (unsigned int) mb->pkt_len,
+			  " - pool=%s - type=0x%04x - length=%u - nb_segs=%d",
+			  mb->pool->name, eth_type, (unsigned int) mb->pkt_len,
 			  (int)mb->nb_segs);
 		ol_flags = mb->ol_flags;
 		if (ol_flags & RTE_MBUF_F_RX_RSS_HASH) {