From patchwork Fri Aug 12 17:24:51 2022
X-Patchwork-Submitter: Hanumanth Pothula
X-Patchwork-Id: 114931
X-Patchwork-Delegate: thomas@monjalon.net
From: Hanumanth Pothula <hpothula@marvell.com>
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Subject: [PATCH v2 3/3] net/cnxk: introduce pool sort capability
Date: Fri, 12 Aug 2022 22:54:51 +0530
Message-ID: <20220812172451.1208933-3-hpothula@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220812172451.1208933-1-hpothula@marvell.com>
References: <20220812104648.1019978-1-hpothula@marvell.com>
 <20220812172451.1208933-1-hpothula@marvell.com>
List-Id: DPDK patches and discussions

Presently, the HW is programmed to receive all packets from the LPB pool
only.
However, the CNXK HW supports two pools:
 - SPB -> packets of smaller size (less than 4K)
 - LPB -> packets of bigger size (greater than 4K)

This patch enables the pool sort capability: the pool is selected based on
the packet's length, i.e. the PMD programs the HW to receive packets into
either the SPB or the LPB pool depending on packet length. This is achieved
by enabling the Rx buffer split offload, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT,
which lets the application pass more than one pool (in our case two) to the
driver, each with a different segment (packet) length, so the driver can
configure both pools based on those segment lengths. This is useful for
saving memory: the application can create a dedicated pool to steer packets
of a specific size, enabling effective use of memory.

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
---
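Notes (not part of the commit): for reviewers, a minimal sketch of the
application-side setup this patch expects. It assumes the rte_eth_rxseg_sort
API proposed in patch 1/3 of this series; the function name, queue id,
nb_desc, pool handles and the 2048/9216 lengths are illustrative only, and
any additional segment-mode selection patch 1/3 may require is omitted:

	#include <string.h>

	#include <rte_ethdev.h>
	#include <rte_mempool.h>

	/* Configure Rx queue 0 of port_id so small packets land in
	 * spb_pool and large packets in lpb_pool. Both pools must use the
	 * cnxk_npa mempool ops, and each sort.length must be smaller than
	 * its pool's elt_size; the driver picks the larger length as LPB
	 * and the smaller as SPB (see cnxk_nix_process_rx_conf() below).
	 */
	static int
	setup_sorted_rx_queue(uint16_t port_id, uint16_t nb_desc,
			      struct rte_mempool *spb_pool,
			      struct rte_mempool *lpb_pool)
	{
		union rte_eth_rxseg rx_segs[2];	/* CNXK_NIX_NUM_POOLS_MAX */
		struct rte_eth_rxconf rx_conf;

		memset(rx_segs, 0, sizeof(rx_segs));
		memset(&rx_conf, 0, sizeof(rx_conf));

		rx_segs[0].sort.mp = spb_pool;	/* e.g. elt_size > 2048 */
		rx_segs[0].sort.length = 2048;
		rx_segs[1].sort.mp = lpb_pool;	/* e.g. elt_size > 9216 */
		rx_segs[1].sort.length = 9216;

		rx_conf.rx_seg = rx_segs;
		rx_conf.rx_nseg = 2;	/* driver requires exactly two */
		rx_conf.offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;

		/* The mp argument must be NULL here: the pools come from
		 * rx_conf, and cnxk_nix_process_rx_conf() rejects a
		 * non-NULL pool argument when buffer split is enabled.
		 */
		return rte_eth_rx_queue_setup(port_id, 0, nb_desc,
					      rte_eth_dev_socket_id(port_id),
					      &rx_conf, NULL);
	}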
 doc/guides/nics/features/cnxk.ini     |  1 +
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 drivers/net/cnxk/cnxk_ethdev.c        | 93 ++++++++++++++++++++++++---
 drivers/net/cnxk/cnxk_ethdev.h        |  4 +-
 drivers/net/cnxk/cnxk_ethdev_ops.c    |  7 ++
 5 files changed, 96 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/features/cnxk.ini b/doc/guides/nics/features/cnxk.ini
index 1876fe86c7..e1584ed740 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+pool sort            = Y
 Speed capabilities   = Y
 Rx interrupt         = Y
 Lock-free Tx queue   = Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini b/doc/guides/nics/features/cnxk_vec.ini
index 5d0976e6ce..a63d35aae7 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+pool sort            = Y
 Speed capabilities   = Y
 Rx interrupt         = Y
 Lock-free Tx queue   = Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 24182909f1..6bf04dde96 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -537,6 +537,64 @@ cnxk_nix_tx_queue_release(struct rte_eth_dev *eth_dev, uint16_t qid)
 	plt_free(txq_sp);
 }
 
+static int
+cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
+			 struct rte_mempool **lpb_pool, struct rte_mempool **spb_pool,
+			 uint16_t *lpb_len, uint16_t *spb_len)
+{
+	struct rte_eth_rxseg_sort rx_seg0;
+	struct rte_eth_rxseg_sort rx_seg1;
+	const char *platform_ops;
+	struct rte_mempool_ops *ops;
+
+	if (*lpb_pool || !rx_conf->rx_seg || rx_conf->rx_nseg != CNXK_NIX_NUM_POOLS_MAX ||
+	    !rx_conf->rx_seg[0].sort.mp || !rx_conf->rx_seg[1].sort.mp) {
+		plt_err("invalid arguments");
+		return -EINVAL;
+	}
+
+	rx_seg0 = rx_conf->rx_seg[0].sort;
+	rx_seg1 = rx_conf->rx_seg[1].sort;
+
+	if (rx_seg0.length >= rx_seg0.mp->elt_size || rx_seg1.length >= rx_seg1.mp->elt_size) {
+		plt_err("mismatch in packet length & pool length seg0_len:%u pool0_len:%u "
+			"seg1_len:%u pool1_len:%u", rx_seg0.length, rx_seg0.mp->elt_size,
+			rx_seg1.length, rx_seg1.mp->elt_size);
+		return -EINVAL;
+	}
+
+	if (rx_seg0.length > rx_seg1.length) {
+		*lpb_pool = rx_seg0.mp;
+		*spb_pool = rx_seg1.mp;
+
+		*lpb_len = rx_seg0.length;
+		*spb_len = rx_seg1.length;
+	} else {
+		*lpb_pool = rx_seg1.mp;
+		*spb_pool = rx_seg0.mp;
+
+		*lpb_len = rx_seg1.length;
+		*spb_len = rx_seg0.length;
+	}
+
+	if ((*spb_pool)->pool_id == 0) {
+		plt_err("Invalid pool_id");
+		return -EINVAL;
+	}
+
+	platform_ops = rte_mbuf_platform_mempool_ops();
+	ops = rte_mempool_get_ops((*spb_pool)->ops_index);
+	if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) {
+		plt_err("mempool ops should be of cnxk_npa type");
+		return -EINVAL;
+	}
+
+	plt_info("spb_pool:%s lpb_pool:%s lpb_len:%u spb_len:%u\n", (*spb_pool)->name,
+		 (*lpb_pool)->name, *lpb_len, *spb_len);
+
+	return 0;
+}
+
 int
 cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			uint32_t nb_desc, uint16_t fp_rx_q_sz,
@@ -553,6 +611,10 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	uint16_t first_skip;
 	int rc = -EINVAL;
 	size_t rxq_sz;
+	uint16_t lpb_len = 0;
+	uint16_t spb_len = 0;
+	struct rte_mempool *lpb_pool = mp;
+	struct rte_mempool *spb_pool = NULL;
 
 	/* Sanity checks */
 	if (rx_conf->rx_deferred_start == 1) {
@@ -560,15 +622,22 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 		goto fail;
 	}
 
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+		rc = cnxk_nix_process_rx_conf(rx_conf, &lpb_pool, &spb_pool,
+					      &lpb_len, &spb_len);
+		if (rc)
+			goto fail;
+	}
+
 	platform_ops = rte_mbuf_platform_mempool_ops();
 	/* This driver needs cnxk_npa mempool ops to work */
-	ops = rte_mempool_get_ops(mp->ops_index);
+	ops = rte_mempool_get_ops(lpb_pool->ops_index);
 	if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) {
 		plt_err("mempool ops should be of cnxk_npa type");
 		goto fail;
 	}
 
-	if (mp->pool_id == 0) {
+	if (lpb_pool->pool_id == 0) {
 		plt_err("Invalid pool_id");
 		goto fail;
 	}
@@ -585,13 +654,13 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	/* Its a no-op when inline device is not used */
 	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY ||
 	    dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
-		roc_nix_inl_dev_xaq_realloc(mp->pool_id);
+		roc_nix_inl_dev_xaq_realloc(lpb_pool->pool_id);
 
 	/* Increase CQ size to Aura size to avoid CQ overflow and
 	 * then CPT buffer leak.
 	 */
 	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
-		nb_desc = nix_inl_cq_sz_clamp_up(nix, mp, nb_desc);
+		nb_desc = nix_inl_cq_sz_clamp_up(nix, lpb_pool, nb_desc);
 
 	/* Setup ROC CQ */
 	cq = &dev->cqs[qid];
@@ -606,23 +675,29 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	/* Setup ROC RQ */
 	rq = &dev->rqs[qid];
 	rq->qid = qid;
-	rq->aura_handle = mp->pool_id;
+	rq->aura_handle = lpb_pool->pool_id;
 	rq->flow_tag_width = 32;
 	rq->sso_ena = false;
 
 	/* Calculate first mbuf skip */
 	first_skip = (sizeof(struct rte_mbuf));
 	first_skip += RTE_PKTMBUF_HEADROOM;
-	first_skip += rte_pktmbuf_priv_size(mp);
+	first_skip += rte_pktmbuf_priv_size(lpb_pool);
 	rq->first_skip = first_skip;
 	rq->later_skip = sizeof(struct rte_mbuf);
-	rq->lpb_size = mp->elt_size;
 	rq->lpb_drop_ena = !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY);
+	rq->lpb_size = lpb_len ? lpb_len : lpb_pool->elt_size;
 
 	/* Enable Inline IPSec on RQ, will not be used for Poll mode */
 	if (roc_nix_inl_inb_is_enabled(nix))
 		rq->ipsech_ena = true;
 
+	if (spb_pool) {
+		rq->spb_ena = 1;
+		rq->spb_aura_handle = spb_pool->pool_id;
+		rq->spb_size = spb_len;
+	}
+
 	rc = roc_nix_rq_init(&dev->nix, rq, !!eth_dev->data->dev_started);
 	if (rc) {
 		plt_err("Failed to init roc rq for rq=%d, rc=%d", qid, rc);
@@ -645,7 +720,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	/* Queue config should reflect global offloads */
 	rxq_sp->qconf.conf.rx.offloads = dev->rx_offloads;
 	rxq_sp->qconf.nb_desc = nb_desc;
-	rxq_sp->qconf.mp = mp;
+	rxq_sp->qconf.mp = lpb_pool;
 	rxq_sp->tc = 0;
 	rxq_sp->tx_pause = (dev->fc_cfg.mode == RTE_ETH_FC_FULL ||
 			    dev->fc_cfg.mode == RTE_ETH_FC_TX_PAUSE);
@@ -664,7 +739,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 		goto free_mem;
 	}
 
-	plt_nix_dbg("rq=%d pool=%s nb_desc=%d->%d", qid, mp->name, nb_desc,
+	plt_nix_dbg("rq=%d pool=%s nb_desc=%d->%d", qid, lpb_pool->name, nb_desc,
 		    cq->nb_desc);
 
 	/* Store start of fast path area */
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 4cb7c9e90c..d60515d50a 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -44,6 +44,8 @@
 #define CNXK_NIX_RX_DEFAULT_RING_SZ 4096
 /* Max supported SQB count */
 #define CNXK_NIX_TX_MAX_SQB 512
+/* LPB & SPB */
+#define CNXK_NIX_NUM_POOLS_MAX 2
 
 /* If PTP is enabled additional SEND MEM DESC is required which
  * takes 2 words, hence max 7 iova address are possible
@@ -83,7 +85,7 @@
 	 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_SCATTER |   \
 	 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_RSS_HASH |   \
 	 RTE_ETH_RX_OFFLOAD_TIMESTAMP | RTE_ETH_RX_OFFLOAD_VLAN_STRIP |       \
-	 RTE_ETH_RX_OFFLOAD_SECURITY)
+	 RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | RTE_ETH_RX_OFFLOAD_SECURITY)
 
 #define RSS_IPV4_ENABLE                                                        \
 	(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |                            \
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 1592971073..6174a586be 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -69,6 +69,13 @@ cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
 	devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 			    RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP |
 			    RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
+	devinfo->rx_seg_capa = (struct rte_eth_rxseg_capa){
+		.mode_sort = 1,
+		.multi_pools = 1,
+		.max_npool = CNXK_NIX_NUM_POOLS_MAX,
+	};
+
 	return 0;
 }
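
For completeness, a sketch of how an application could probe the capability
advertised above before requesting two pools. rx_seg_capa and its
mode_sort/multi_pools/max_npool fields are those introduced by patch 1/3 of
this series; port_id is illustrative:

	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return -1;

	/* With this patch, cnxk reports mode_sort = 1 and
	 * max_npool = 2 (LPB & SPB).
	 */
	if (dev_info.rx_seg_capa.mode_sort &&
	    dev_info.rx_seg_capa.max_npool >= 2) {
		/* Safe to set up the two-pool Rx queue shown earlier. */
	}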