From patchwork Fri Sep 2 07:00:45 2022
X-Patchwork-Submitter: Hanumanth Pothula
X-Patchwork-Id: 115776
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Hanumanth Pothula
To: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
CC: Hanumanth Pothula
Subject: [PATCH v3 1/3] ethdev: introduce pool sort capability
Date: Fri, 2 Sep 2022 12:30:45 +0530
Message-ID: <20220902070047.2812906-1-hpothula@marvell.com>
In-Reply-To: <20220812172451.1208933-1-hpothula@marvell.com>
References: <20220812172451.1208933-1-hpothula@marvell.com>
List-Id: DPDK patches and discussions

This patch adds support for the pool sort capability. Some HW can
choose a memory pool for an incoming packet based on its size; the
pool sort capability allows the PMD to choose a memory pool based on
the packet's length.

This is often useful for saving memory: the application can create
different pools to steer packets of specific sizes, enabling
effective use of memory.

For example, let's say the HW supports three pools:
 - pool-1 with a buffer size of 2K
 - pool-2 with a buffer size > 2K and < 4K
 - pool-3 with a buffer size > 4K

Here, pool-1 can accommodate packets with sizes < 2K, pool-2 packets
with sizes between 2K and 4K, and pool-3 packets with sizes > 4K.

With the pool sort capability enabled in SW, an application may create
three pools of different sizes and pass them to the PMD, allowing the
PMD to program the HW based on the packet lengths: packets smaller
than 2K are received on pool-1, packets with lengths between 2K and 4K
on pool-2, and packets greater than 4K on pool-3.

The following two fields are added to the rte_eth_rxseg_capa
structure:
 1. mode_sort  --> set when the pool sort capability is supported by
    the HW.
 2. max_npool --> the maximum number of pools supported by the HW.

A new structure, rte_eth_rxseg_sort, is defined, to be used only when
the pool sort capability is present. If required, this may be extended
further to support more configurations.
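As an illustration of the intended usage, below is a minimal sketch of
an application configuring one Rx queue with three sort pools through
the API introduced by this patch. The helper name, pool variables and
sizes are assumptions made up for the example; only the
rte_eth_rxseg_sort fields and RTE_ETH_RX_OFFLOAD_BUFFER_SORT come from
this series:

#include <string.h>
#include <rte_ethdev.h>

/* Illustrative only: set up one Rx queue with three sort pools,
 * mirroring the 2K/4K/>4K example above. Pools are created by the
 * caller; sizes are assumptions.
 */
static int
setup_sorted_rxq(uint16_t port_id, uint16_t queue_id, uint16_t nb_desc,
		 unsigned int socket_id, const struct rte_eth_rxconf *def,
		 struct rte_mempool *pool_2k, struct rte_mempool *pool_4k,
		 struct rte_mempool *pool_big)
{
	union rte_eth_rxseg rx_useg[3];
	struct rte_eth_rxconf rxconf = *def;

	memset(rx_useg, 0, sizeof(rx_useg));
	rx_useg[0].sort.mp = pool_2k;	/* packets up to 2K land here */
	rx_useg[0].sort.length = 2048;
	rx_useg[1].sort.mp = pool_4k;	/* packets between 2K and 4K */
	rx_useg[1].sort.length = 4096;
	rx_useg[2].sort.mp = pool_big;	/* everything larger */
	rx_useg[2].sort.length = 8192;

	rxconf.rx_seg = rx_useg;
	rxconf.rx_nseg = 3;
	rxconf.offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SORT;

	/* The mp argument is NULL because the pools are passed via rx_seg. */
	return rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
				      socket_id, &rxconf, NULL);
}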
Signed-off-by: Hanumanth Pothula

v3:
 - Implemented the pool sort capability as a new Rx offload capability,
   RTE_ETH_RX_OFFLOAD_BUFFER_SORT.
v2:
 - Along with the spec changes, uploading testpmd and driver changes.
---
 lib/ethdev/rte_ethdev.c | 69 ++++++++++++++++++++++++++++++++++++++---
 lib/ethdev/rte_ethdev.h | 24 +++++++++++++-
 2 files changed, 88 insertions(+), 5 deletions(-)

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 1979dc0850..5152c08f1e 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1634,6 +1634,58 @@ rte_eth_dev_is_removed(uint16_t port_id)
 	return ret;
 }
 
+static int
+rte_eth_rx_queue_check_sort(const struct rte_eth_rxseg_sort *rx_seg,
+			    uint16_t n_seg, uint32_t *mbp_buf_size,
+			    const struct rte_eth_dev_info *dev_info)
+{
+	const struct rte_eth_rxseg_capa *seg_capa = &dev_info->rx_seg_capa;
+	uint16_t seg_idx;
+
+	if (!seg_capa->multi_pools || n_seg > seg_capa->max_npool) {
+		RTE_ETHDEV_LOG(ERR,
+			       "Invalid capabilities, multi_pools:%d, %u pools exceed supported %u\n",
+			       seg_capa->multi_pools, n_seg, seg_capa->max_npool);
+		return -EINVAL;
+	}
+
+	for (seg_idx = 0; seg_idx < n_seg; seg_idx++) {
+		struct rte_mempool *mpl = rx_seg[seg_idx].mp;
+		uint32_t length = rx_seg[seg_idx].length;
+
+		if (mpl == NULL) {
+			RTE_ETHDEV_LOG(ERR, "null mempool pointer\n");
+			return -EINVAL;
+		}
+
+		if (mpl->private_data_size <
+		    sizeof(struct rte_pktmbuf_pool_private)) {
+			RTE_ETHDEV_LOG(ERR,
+				       "%s private_data_size %u < %u\n",
+				       mpl->name, mpl->private_data_size,
+				       (unsigned int)sizeof
+					(struct rte_pktmbuf_pool_private));
+			return -ENOSPC;
+		}
+
+		*mbp_buf_size = rte_pktmbuf_data_room_size(mpl);
+		/* On segment length == 0, update the segment's length with
+		 * the pool's length minus the headroom space, to make sure
+		 * enough space is accommodated for the header.
+		 */
+		length = length != 0 ?
			 length : (*mbp_buf_size - RTE_PKTMBUF_HEADROOM);
+		if (*mbp_buf_size < length + RTE_PKTMBUF_HEADROOM) {
+			RTE_ETHDEV_LOG(ERR,
+				       "%s mbuf_data_room_size %u < %u\n",
+				       mpl->name, *mbp_buf_size,
+				       length);
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
 static int
 rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg,
 			     uint16_t n_seg, uint32_t *mbp_buf_size,
@@ -1764,7 +1816,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 			return -EINVAL;
 		}
 	} else {
-		const struct rte_eth_rxseg_split *rx_seg;
 		uint16_t n_seg;
 
 		/* Extended multi-segment configuration check. */
@@ -1774,13 +1825,23 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 			return -EINVAL;
 		}
 
-		rx_seg = (const struct rte_eth_rxseg_split *)rx_conf->rx_seg;
 		n_seg = rx_conf->rx_nseg;
 
 		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+			const struct rte_eth_rxseg_split *rx_seg =
+				(const struct rte_eth_rxseg_split *)rx_conf->rx_seg;
 			ret = rte_eth_rx_queue_check_split(rx_seg, n_seg,
-							   &mbp_buf_size,
-							   &dev_info);
+							   &mbp_buf_size,
+							   &dev_info);
+			if (ret != 0)
+				return ret;
+		} else if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SORT) {
+			const struct rte_eth_rxseg_sort *rx_seg =
+				(const struct rte_eth_rxseg_sort *)rx_conf->rx_seg;
+			ret = rte_eth_rx_queue_check_sort(rx_seg, n_seg,
+							  &mbp_buf_size,
+							  &dev_info);
+
+			if (ret != 0)
 				return ret;
 		} else {
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index de9e970d4d..f7b5901a40 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1204,6 +1204,21 @@ struct rte_eth_rxseg_split {
 	uint32_t reserved; /**< Reserved field. */
 };
 
+/**
+ * The pool sort capability allows the PMD to choose a memory pool based on
+ * the packet's length. In other words, the PMD programs the HW to receive
+ * packets from different pools based on the packet's length.
+ *
+ * This is often useful for saving memory: the application can create
+ * different pools to steer packets of specific sizes, enabling effective
+ * use of memory.
+ */
+struct rte_eth_rxseg_sort {
+	struct rte_mempool *mp; /**< Memory pool to allocate packets from. */
+	uint16_t length; /**< Packet data length. */
+	uint32_t reserved; /**< Reserved field. */
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this structure may change without prior notice.
@@ -1213,7 +1228,9 @@ struct rte_eth_rxseg_split {
 union rte_eth_rxseg {
 	/* The settings for buffer split offload. */
 	struct rte_eth_rxseg_split split;
-	/* The other features settings should be added here. */
+
+	/* The settings for pool sort offload. */
+	struct rte_eth_rxseg_sort sort;
 };
 
 /**
@@ -1633,6 +1650,7 @@ struct rte_eth_conf {
 #define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM  RTE_BIT64(18)
 #define RTE_ETH_RX_OFFLOAD_RSS_HASH         RTE_BIT64(19)
 #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT     RTE_BIT64(20)
+#define RTE_ETH_RX_OFFLOAD_BUFFER_SORT      RTE_BIT64(21)
 
 #define DEV_RX_OFFLOAD_VLAN_STRIP   RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN_STRIP)   RTE_ETH_RX_OFFLOAD_VLAN_STRIP
 #define DEV_RX_OFFLOAD_IPV4_CKSUM   RTE_DEPRECATED(DEV_RX_OFFLOAD_IPV4_CKSUM)   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
@@ -1827,10 +1845,14 @@ struct rte_eth_switch_info {
  */
 struct rte_eth_rxseg_capa {
 	__extension__
+	uint32_t mode_split:1;
+	/**< Supports buffer split capability @see struct rte_eth_rxseg_split */
+	uint32_t mode_sort:1;
+	/**< Supports pool sort capability @see struct rte_eth_rxseg_sort */
 	uint32_t multi_pools:1; /**< Supports receiving to multiple pools.*/
 	uint32_t offset_allowed:1; /**< Supports buffer offsets. */
 	uint32_t offset_align_log2:4; /**< Required offset alignment. */
 	uint16_t max_nseg; /**< Maximum amount of segments to split. */
+	/** Maximum number of pools that the PMD can sort into, based on
+	 * packet/segment lengths.
+	 */
+	uint16_t max_npool;
 	uint16_t reserved; /**< Reserved field. */
 };
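Before requesting the new offload, an application would typically probe
the capability fields added above. A minimal sketch, assuming the
driver fills rx_seg_capa and advertises the offload in rx_offload_capa
(the helper name is illustrative):

#include <rte_ethdev.h>

/* Sketch: check whether a port can sort Rx packets into at least
 * 'want' pools using the fields introduced by this patch.
 */
static int
port_supports_pool_sort(uint16_t port_id, uint16_t want)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;

	return dev_info.rx_seg_capa.mode_sort &&
	       dev_info.rx_seg_capa.multi_pools &&
	       dev_info.rx_seg_capa.max_npool >= want &&
	       (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_BUFFER_SORT);
}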
From patchwork Fri Sep 2 07:00:46 2022
X-Patchwork-Submitter: Hanumanth Pothula
X-Patchwork-Id: 115777
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Hanumanth Pothula
To: Aman Singh, Yuying Zhang
CC: Hanumanth Pothula
Subject: [PATCH v3 2/3] app/testpmd: add support for pool sort capability
Date: Fri, 2 Sep 2022 12:30:46 +0530
Message-ID: <20220902070047.2812906-2-hpothula@marvell.com>
In-Reply-To: <20220902070047.2812906-1-hpothula@marvell.com>
References: <20220812172451.1208933-1-hpothula@marvell.com>
 <20220902070047.2812906-1-hpothula@marvell.com>
List-Id: DPDK patches and discussions

This patch adds support for the pool sort capability. Some HW can
choose a memory pool for an incoming packet based on its size; the
pool sort capability allows the PMD to choose a memory pool based on
the packet's length.

Populate the Rx sort/split attributes based on the Rx offload value.
Also, print the name of the pool each packet is received on.
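The pool-name printout below gives a simple way to verify sorting. A
minimal sketch of the same idea in an application receive loop (names
are illustrative; mb->pool->name is standard mbuf/mempool API):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch: log which pool each received packet was allocated from,
 * mirroring what the testpmd change below prints per packet.
 */
static void
rx_and_log_pools(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t i, nb_rx;

	nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
	for (i = 0; i < nb_rx; i++) {
		printf("pkt_len=%u pool=%s\n",
		       pkts[i]->pkt_len, pkts[i]->pool->name);
		rte_pktmbuf_free(pkts[i]);
	}
}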
Signed-off-by: Hanumanth Pothula
---
 app/test-pmd/testpmd.c | 31 ++++++++++++++++++++++---------
 app/test-pmd/util.c    |  4 ++--
 2 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index addcbcac85..57f1d806b1 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2661,7 +2661,8 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	int ret;
 
 	if (rx_pkt_nb_segs <= 1 ||
-	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0) {
+	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT ||
+	     rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SORT) == 0) {
 		rx_conf->rx_seg = NULL;
 		rx_conf->rx_nseg = 0;
 		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
@@ -2670,7 +2671,8 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		goto exit;
 	}
 	for (i = 0; i < rx_pkt_nb_segs; i++) {
-		struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
+		struct rte_eth_rxseg_split *rx_split = &rx_useg[i].split;
+		struct rte_eth_rxseg_sort *rx_sort = &rx_useg[i].sort;
 		struct rte_mempool *mpx;
 		/*
 		 * Use last valid pool for the segments with number
@@ -2678,13 +2680,24 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		 */
 		mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i;
 		mpx = mbuf_pool_find(socket_id, mp_n);
-		/* Handle zero as mbuf data buffer size. */
-		rx_seg->length = rx_pkt_seg_lengths[i] ?
-				 rx_pkt_seg_lengths[i] :
-				 mbuf_data_size[mp_n];
-		rx_seg->offset = i < rx_pkt_nb_offs ?
-				 rx_pkt_seg_offsets[i] : 0;
-		rx_seg->mp = mpx ? mpx : mp;
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+			/*
+			 * On segment length zero, update the length as
+			 * buffer size minus headroom size, to make sure
+			 * enough space is accommodated for the header.
+			 */
+			rx_split->length = rx_pkt_seg_lengths[i] ?
+					   rx_pkt_seg_lengths[i] :
+					   mbuf_data_size[mp_n] - RTE_PKTMBUF_HEADROOM;
+			rx_split->offset = i < rx_pkt_nb_offs ?
+					   rx_pkt_seg_offsets[i] : 0;
+			rx_split->mp = mpx ? mpx : mp;
+		} else if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SORT) {
+			rx_sort->length = rx_pkt_seg_lengths[i] ?
+					  rx_pkt_seg_lengths[i] :
+					  mbuf_data_size[mp_n] - RTE_PKTMBUF_HEADROOM;
+			rx_sort->mp = mpx ? mpx : mp;
+		}
 	}
 	rx_conf->rx_nseg = rx_pkt_nb_segs;
 	rx_conf->rx_seg = rx_useg;
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index fd98e8b51d..f9df5f69ef 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -150,8 +150,8 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 		print_ether_addr(" - dst=", &eth_hdr->dst_addr,
 				 print_buf, buf_size, &cur_len);
 		MKDUMPSTR(print_buf, buf_size, cur_len,
-			  " - type=0x%04x - length=%u - nb_segs=%d",
-			  eth_type, (unsigned int) mb->pkt_len,
+			  " - pool=%s - type=0x%04x - length=%u - nb_segs=%d",
+			  mb->pool->name, eth_type, (unsigned int) mb->pkt_len,
 			  (int)mb->nb_segs);
 		ol_flags = mb->ol_flags;
 		if (ol_flags & RTE_MBUF_F_RX_RSS_HASH) {

From patchwork Fri Sep 2 07:00:47 2022
X-Patchwork-Submitter: Hanumanth Pothula
X-Patchwork-Id: 115778
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Hanumanth Pothula
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
CC: Hanumanth Pothula
Subject: [PATCH v3 3/3] net/cnxk: introduce pool sort capability
Date: Fri, 2 Sep 2022 12:30:47 +0530
Message-ID: <20220902070047.2812906-3-hpothula@marvell.com>
In-Reply-To: <20220902070047.2812906-1-hpothula@marvell.com>
References: <20220812172451.1208933-1-hpothula@marvell.com>
 <20220902070047.2812906-1-hpothula@marvell.com>
List-Id: DPDK patches and discussions

Presently, the HW is programmed to receive packets only from the LPB
pool, so all packets are received on the LPB pool. But the CNXK HW
supports two pools:
 - SPB -> for packets with a smaller size (less than 4K)
 - LPB -> for packets with a bigger size (greater than 4K)

This patch enables the pool sort capability, where the pool is selected
based on the packet's length. So, basically, the PMD programs the HW to
receive packets on both the SPB and LPB pools based on the packet's
length.

This is achieved by enabling the Rx buffer sort offload,
RTE_ETH_RX_OFFLOAD_BUFFER_SORT. It allows the application to pass more
than one pool (in our case two) to the driver, with different segment
(packet) lengths, which helps the driver configure both pools based on
the segment lengths.

This is often useful for saving memory: the application can create
different pools to steer packets of specific sizes, enabling effective
use of memory.
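A minimal sketch of the two-pool setup this driver change expects. Pool
names, element counts and buffer sizes are assumptions for the example
(the driver treats the pool with the larger segment length as LPB and
the other as SPB, and requires the platform cnxk_npa mempool ops); the
sort offload may also need to be set in the port-level Rx offloads at
configure time:

#include <errno.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch: pass an SPB-sized and an LPB-sized pool to the PMD. */
static int
setup_cnxk_sorted_rxq(uint16_t port_id, uint16_t qid, uint16_t nb_desc,
		      unsigned int socket_id, struct rte_eth_rxconf *rxconf)
{
	union rte_eth_rxseg rx_useg[2];	/* only read during the setup call */
	struct rte_mempool *spb, *lpb;

	spb = rte_pktmbuf_pool_create("spb_pool", 4096, 256, 0,
				      2048 + RTE_PKTMBUF_HEADROOM, socket_id);
	lpb = rte_pktmbuf_pool_create("lpb_pool", 1024, 256, 0,
				      8192 + RTE_PKTMBUF_HEADROOM, socket_id);
	if (spb == NULL || lpb == NULL)
		return -ENOMEM;

	memset(rx_useg, 0, sizeof(rx_useg));
	rx_useg[0].sort.mp = spb;	/* smaller packets */
	rx_useg[0].sort.length = 2048;
	rx_useg[1].sort.mp = lpb;	/* larger packets */
	rx_useg[1].sort.length = 8192;

	rxconf->rx_seg = rx_useg;
	rxconf->rx_nseg = 2;		/* CNXK_NIX_NUM_POOLS_MAX */
	rxconf->offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SORT;

	return rte_eth_rx_queue_setup(port_id, qid, nb_desc, socket_id,
				      rxconf, NULL);
}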
Signed-off-by: Hanumanth Pothula
---
 doc/guides/nics/features/cnxk.ini     |  1 +
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 drivers/net/cnxk/cnxk_ethdev.c        | 93 ++++++++++++++++++++++++---
 drivers/net/cnxk/cnxk_ethdev.h        |  4 +-
 drivers/net/cnxk/cnxk_ethdev_ops.c    |  7 ++
 5 files changed, 96 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/features/cnxk.ini b/doc/guides/nics/features/cnxk.ini
index 1876fe86c7..e1584ed740 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+pool sort            = Y
 Speed capabilities   = Y
 Rx interrupt         = Y
 Lock-free Tx queue   = Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini b/doc/guides/nics/features/cnxk_vec.ini
index 5d0976e6ce..a63d35aae7 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+pool sort            = Y
 Speed capabilities   = Y
 Rx interrupt         = Y
 Lock-free Tx queue   = Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index cfcc4df916..376c5274d3 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -537,6 +537,64 @@ cnxk_nix_tx_queue_release(struct rte_eth_dev *eth_dev, uint16_t qid)
 	plt_free(txq_sp);
 }
 
+static int
+cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
+			 struct rte_mempool **lpb_pool, struct rte_mempool **spb_pool,
+			 uint16_t *lpb_len, uint16_t *spb_len)
+{
+	struct rte_eth_rxseg_sort rx_seg0;
+	struct rte_eth_rxseg_sort rx_seg1;
+	const char *platform_ops;
+	struct rte_mempool_ops *ops;
+
+	if (*lpb_pool || !rx_conf->rx_seg || rx_conf->rx_nseg != CNXK_NIX_NUM_POOLS_MAX ||
+	    !rx_conf->rx_seg[0].sort.mp || !rx_conf->rx_seg[1].sort.mp) {
+		plt_err("invalid arguments");
+		return -EINVAL;
+	}
+
+	rx_seg0 = rx_conf->rx_seg[0].sort;
+	rx_seg1 = rx_conf->rx_seg[1].sort;
+
+	if (rx_seg0.length >= rx_seg0.mp->elt_size || rx_seg1.length >= rx_seg1.mp->elt_size) {
+		plt_err("mismatch in packet length & pool length seg0_len:%u pool0_len:%u "
+			"seg1_len:%u pool1_len:%u", rx_seg0.length, rx_seg0.mp->elt_size,
+			rx_seg1.length, rx_seg1.mp->elt_size);
+		return -EINVAL;
+	}
+
+	if (rx_seg0.length > rx_seg1.length) {
+		*lpb_pool = rx_seg0.mp;
+		*spb_pool = rx_seg1.mp;
+
+		*lpb_len = rx_seg0.length;
+		*spb_len = rx_seg1.length;
+	} else {
+		*lpb_pool = rx_seg1.mp;
+		*spb_pool = rx_seg0.mp;
+
+		*lpb_len = rx_seg1.length;
+		*spb_len = rx_seg0.length;
+	}
+
+	if ((*spb_pool)->pool_id == 0) {
+		plt_err("Invalid pool_id");
+		return -EINVAL;
+	}
+
+	platform_ops = rte_mbuf_platform_mempool_ops();
+	ops = rte_mempool_get_ops((*spb_pool)->ops_index);
+	if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) {
+		plt_err("mempool ops should be of cnxk_npa type");
+		return -EINVAL;
+	}
+
+	plt_info("spb_pool:%s lpb_pool:%s lpb_len:%u spb_len:%u\n",
+		 (*spb_pool)->name, (*lpb_pool)->name, *lpb_len, *spb_len);
+
+	return 0;
+}
+
 int
 cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			uint32_t nb_desc, uint16_t fp_rx_q_sz,
@@ -553,6 +611,10 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	uint16_t first_skip;
 	int rc = -EINVAL;
 	size_t rxq_sz;
+	uint16_t lpb_len = 0;
+	uint16_t spb_len = 0;
+	struct rte_mempool *lpb_pool = mp;
+	struct rte_mempool *spb_pool = NULL;
 
 	/* Sanity checks */
 	if (rx_conf->rx_deferred_start == 1) {
@@ -560,15 +622,22 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 		goto fail;
 	}
 
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SORT) {
+		rc = cnxk_nix_process_rx_conf(rx_conf, &lpb_pool, &spb_pool,
+					      &lpb_len, &spb_len);
+		if (rc)
+			goto fail;
+	}
+
 	platform_ops = rte_mbuf_platform_mempool_ops();
 	/* This driver needs cnxk_npa mempool ops to work */
-	ops = rte_mempool_get_ops(mp->ops_index);
+	ops = rte_mempool_get_ops(lpb_pool->ops_index);
 	if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) {
 		plt_err("mempool ops should be of cnxk_npa type");
 		goto fail;
 	}
 
-	if (mp->pool_id == 0) {
+	if (lpb_pool->pool_id == 0) {
 		plt_err("Invalid pool_id");
 		goto fail;
 	}
@@ -585,13 +654,13 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	/* Its a no-op when inline device is not used */
 	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY ||
 	    dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
-		roc_nix_inl_dev_xaq_realloc(mp->pool_id);
+		roc_nix_inl_dev_xaq_realloc(lpb_pool->pool_id);
 
 	/* Increase CQ size to Aura size to avoid CQ overflow and
 	 * then CPT buffer leak.
 	 */
 	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
-		nb_desc = nix_inl_cq_sz_clamp_up(nix, mp, nb_desc);
+		nb_desc = nix_inl_cq_sz_clamp_up(nix, lpb_pool, nb_desc);
 
 	/* Setup ROC CQ */
 	cq = &dev->cqs[qid];
@@ -606,23 +675,29 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	/* Setup ROC RQ */
 	rq = &dev->rqs[qid];
 	rq->qid = qid;
-	rq->aura_handle = mp->pool_id;
+	rq->aura_handle = lpb_pool->pool_id;
 	rq->flow_tag_width = 32;
 	rq->sso_ena = false;
 
 	/* Calculate first mbuf skip */
 	first_skip = (sizeof(struct rte_mbuf));
 	first_skip += RTE_PKTMBUF_HEADROOM;
-	first_skip += rte_pktmbuf_priv_size(mp);
+	first_skip += rte_pktmbuf_priv_size(lpb_pool);
 	rq->first_skip = first_skip;
 	rq->later_skip = sizeof(struct rte_mbuf);
-	rq->lpb_size = mp->elt_size;
+	rq->lpb_size = lpb_len ? lpb_len : lpb_pool->elt_size;
 	rq->lpb_drop_ena = !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY);
 
 	/* Enable Inline IPSec on RQ, will not be used for Poll mode */
 	if (roc_nix_inl_inb_is_enabled(nix))
 		rq->ipsech_ena = true;
 
+	if (spb_pool) {
+		rq->spb_ena = 1;
+		rq->spb_aura_handle = spb_pool->pool_id;
+		rq->spb_size = spb_len;
+	}
+
 	rc = roc_nix_rq_init(&dev->nix, rq, !!eth_dev->data->dev_started);
 	if (rc) {
 		plt_err("Failed to init roc rq for rq=%d, rc=%d", qid, rc);
@@ -645,7 +720,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	/* Queue config should reflect global offloads */
 	rxq_sp->qconf.conf.rx.offloads = dev->rx_offloads;
 	rxq_sp->qconf.nb_desc = nb_desc;
-	rxq_sp->qconf.mp = mp;
+	rxq_sp->qconf.mp = lpb_pool;
 	rxq_sp->tc = 0;
 	rxq_sp->tx_pause = (dev->fc_cfg.mode == RTE_ETH_FC_FULL ||
 			    dev->fc_cfg.mode == RTE_ETH_FC_TX_PAUSE);
@@ -664,7 +739,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 		goto free_mem;
 	}
 
-	plt_nix_dbg("rq=%d pool=%s nb_desc=%d->%d", qid, mp->name, nb_desc,
+	plt_nix_dbg("rq=%d pool=%s nb_desc=%d->%d", qid, lpb_pool->name, nb_desc,
 		    cq->nb_desc);
 
 	/* Store start of fast path area */
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index f11a9a0b63..4b0c11b7d2 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -44,6 +44,8 @@
 #define CNXK_NIX_RX_DEFAULT_RING_SZ 4096
 /* Max supported SQB count */
 #define CNXK_NIX_TX_MAX_SQB 512
+/* LPB & SPB */
+#define CNXK_NIX_NUM_POOLS_MAX 2
 
 /* If PTP is enabled additional SEND MEM DESC is required which
  * takes 2 words, hence max 7 iova address are possible
@@ -83,7 +85,7 @@
 	 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_SCATTER |   \
 	 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_RSS_HASH |   \
 	 RTE_ETH_RX_OFFLOAD_TIMESTAMP | RTE_ETH_RX_OFFLOAD_VLAN_STRIP |       \
-	 RTE_ETH_RX_OFFLOAD_SECURITY)
+	 RTE_ETH_RX_OFFLOAD_BUFFER_SORT | RTE_ETH_RX_OFFLOAD_SECURITY)
 
 #define RSS_IPV4_ENABLE                                                        \
 	(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |                            \
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 1592971073..6174a586be 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -69,6 +69,13 @@ cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
 	devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 			    RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP |
 			    RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
+	devinfo->rx_seg_capa = (struct rte_eth_rxseg_capa){
+		.mode_sort = 1,
+		.multi_pools = 1,
+		.max_npool = CNXK_NIX_NUM_POOLS_MAX,
+	};
+
 	return 0;
 }