From patchwork Tue Apr 11 09:11:27 2023
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 125909
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
To: Nithin Kumar Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
 Pavan Nikhilesh, Shijith Thotton
CC:
Subject: [PATCH 04/21] common/cnxk: reduce sqes per sqb by one
Date: Tue, 11 Apr 2023 14:41:27 +0530
Message-ID: <20230411091144.1087887-4-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230411091144.1087887-1-ndabilpuram@marvell.com>
References: <20230411091144.1087887-1-ndabilpuram@marvell.com>
List-Id: DPDK patches and discussions

From: Satha Rao

Each SQB reserves its last SQE to store a pointer to the next SQB, so each
SQB holds either 31 or 63 SQEs, depending on the send descriptor size
selected.
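(As a rough aside for reviewers, not part of the patch: the 31/63 figure can
be reproduced with the small standalone calculation below. The 4 KB SQB block
size and the 64-byte/128-byte send descriptor widths are assumptions used for
illustration only.)

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t blk_sz = 4096;			/* assumed SQB block size (4 KB) */

	/* 64-byte (W8) vs 128-byte (W16) send descriptors */
	uint16_t sqes_w8 = blk_sz / 64;		/* 64 SQEs fit in one SQB */
	uint16_t sqes_w16 = blk_sz / 128;	/* 32 SQEs fit in one SQB */

	/* Last SQE of every SQB is reserved for the next-SQB pointer */
	sqes_w8 -= 1;				/* 63 usable */
	sqes_w16 -= 1;				/* 31 usable */

	printf("usable SQEs per SQB: %u or %u\n", sqes_w16, sqes_w8);
	return 0;
}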
This patch also considers sqb_slack when maintaining threshold buffers used
to synchronize between HW and SW. The threshold is the maximum of 30% of the
queue size and sqb_slack.

Signed-off-by: Satha Rao
---
 drivers/common/cnxk/roc_nix.h       |  2 +-
 drivers/common/cnxk/roc_nix_priv.h  |  2 +-
 drivers/common/cnxk/roc_nix_queue.c | 21 ++++++++++-----------
 drivers/event/cnxk/cn10k_eventdev.c |  2 +-
 drivers/event/cnxk/cn9k_eventdev.c  |  2 +-
 5 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 188b8800d3..50aef4fe85 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -13,7 +13,7 @@
 #define ROC_NIX_BPF_STATS_MAX	  12
 #define ROC_NIX_MTR_ID_INVALID	  UINT32_MAX
 #define ROC_NIX_PFC_CLASS_INVALID UINT8_MAX
-#define ROC_NIX_SQB_LOWER_THRESH  70U
+#define ROC_NIX_SQB_THRESH	  30U
 #define ROC_NIX_SQB_SLACK	  12U
 
 /* Reserved interface types for BPID allocation */
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 99e27cdc56..7144d1ee10 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -12,7 +12,7 @@
 #define NIX_MAX_SQB	   ((uint16_t)512)
 #define NIX_DEF_SQB	   ((uint16_t)16)
 #define NIX_MIN_SQB	   ((uint16_t)8)
-#define NIX_SQB_LIST_SPACE ((uint16_t)2)
+#define NIX_SQB_PREFETCH   ((uint16_t)1)
 
 /* Apply BP/DROP when CQ is 95% full */
 #define NIX_CQ_THRESH_LEVEL	(5 * 256 / 100)
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index ac4d9856c1..d29fafa895 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -982,7 +982,7 @@ static int
 sqb_pool_populate(struct roc_nix *roc_nix, struct roc_nix_sq *sq)
 {
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
-	uint16_t sqes_per_sqb, count, nb_sqb_bufs;
+	uint16_t sqes_per_sqb, count, nb_sqb_bufs, thr;
 	struct npa_pool_s pool;
 	struct npa_aura_s aura;
 	uint64_t blk_sz;
@@ -995,22 +995,21 @@ sqb_pool_populate(struct roc_nix *roc_nix, struct roc_nix_sq *sq)
 	else
 		sqes_per_sqb = (blk_sz / 8) / 8;
 
+	/* Reserve One SQE in each SQB to hold pointer for next SQB */
+	sqes_per_sqb -= 1;
+
 	sq->nb_desc = PLT_MAX(512U, sq->nb_desc);
-	nb_sqb_bufs = sq->nb_desc / sqes_per_sqb;
-	nb_sqb_bufs += NIX_SQB_LIST_SPACE;
+	nb_sqb_bufs = PLT_DIV_CEIL(sq->nb_desc, sqes_per_sqb);
+	thr = PLT_DIV_CEIL((nb_sqb_bufs * ROC_NIX_SQB_THRESH), 100);
+	nb_sqb_bufs += NIX_SQB_PREFETCH;
 	/* Clamp up the SQB count */
-	nb_sqb_bufs = PLT_MIN(roc_nix->max_sqb_count,
-			      (uint16_t)PLT_MAX(NIX_DEF_SQB, nb_sqb_bufs));
+	nb_sqb_bufs = PLT_MIN(roc_nix->max_sqb_count, (uint16_t)PLT_MAX(NIX_DEF_SQB, nb_sqb_bufs));
 
 	sq->nb_sqb_bufs = nb_sqb_bufs;
 	sq->sqes_per_sqb_log2 = (uint16_t)plt_log2_u32(sqes_per_sqb);
-	sq->nb_sqb_bufs_adj =
-		nb_sqb_bufs -
-		(PLT_ALIGN_MUL_CEIL(nb_sqb_bufs, sqes_per_sqb) / sqes_per_sqb);
-	sq->nb_sqb_bufs_adj =
-		(sq->nb_sqb_bufs_adj * ROC_NIX_SQB_LOWER_THRESH) / 100;
+	sq->nb_sqb_bufs_adj = nb_sqb_bufs;
 
-	nb_sqb_bufs += roc_nix->sqb_slack;
+	nb_sqb_bufs += PLT_MAX(thr, roc_nix->sqb_slack);
 	/* Explicitly set nat_align alone as by default pool is with both
 	 * nat_align and buf_offset = 1 which we don't want for SQB.
 	 */
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 071ea5a212..afd8e323b8 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -995,7 +995,7 @@ cn10k_sso_txq_fc_update(const struct rte_eth_dev *eth_dev, int32_t tx_queue_id)
 					(sqes_per_sqb - 1));
 		txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
 		txq->nb_sqb_bufs_adj =
-			(ROC_NIX_SQB_LOWER_THRESH * txq->nb_sqb_bufs_adj) / 100;
+			((100 - ROC_NIX_SQB_THRESH) * txq->nb_sqb_bufs_adj) / 100;
 	}
 }
 
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 2d2985f175..b104d19b9b 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -1037,7 +1037,7 @@ cn9k_sso_txq_fc_update(const struct rte_eth_dev *eth_dev, int32_t tx_queue_id)
 					(sqes_per_sqb - 1));
 		txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
 		txq->nb_sqb_bufs_adj =
-			(ROC_NIX_SQB_LOWER_THRESH * txq->nb_sqb_bufs_adj) / 100;
+			((100 - ROC_NIX_SQB_THRESH) * txq->nb_sqb_bufs_adj) / 100;
 	}
 }
 
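
(For reviewers who want to sanity-check the new sizing arithmetic, here is a
small standalone sketch that mirrors the logic of sqb_pool_populate() above.
It is not a drop-in replacement; the queue depth, SQE-per-SQB count, clamp
and slack values below are made up for illustration.)

#include <stdint.h>
#include <stdio.h>

#define SQB_THRESH_PCT	30U	/* mirrors ROC_NIX_SQB_THRESH */
#define SQB_PREFETCH	1U	/* mirrors NIX_SQB_PREFETCH */
#define DEF_SQB		16U	/* mirrors NIX_DEF_SQB */
#define DIV_CEIL(a, b)	(((a) + (b) - 1) / (b))

int
main(void)
{
	uint32_t nb_desc = 1024;	/* assumed queue depth */
	uint32_t sqes_per_sqb = 63;	/* 64-entry SQB minus next-SQB pointer */
	uint32_t max_sqb_count = 512;	/* assumed clamp */
	uint32_t sqb_slack = 12;	/* assumed slack */

	uint32_t nb_sqb_bufs = DIV_CEIL(nb_desc, sqes_per_sqb);		/* 17 */
	uint32_t thr = DIV_CEIL(nb_sqb_bufs * SQB_THRESH_PCT, 100);	/* 6 */

	nb_sqb_bufs += SQB_PREFETCH;					/* 18 */
	if (nb_sqb_bufs < DEF_SQB)
		nb_sqb_bufs = DEF_SQB;
	if (nb_sqb_bufs > max_sqb_count)
		nb_sqb_bufs = max_sqb_count;

	/* Threshold buffers: the larger of ~30% of the queue and sqb_slack */
	uint32_t total = nb_sqb_bufs + (thr > sqb_slack ? thr : sqb_slack);

	printf("SQB pool needs %u buffers (%u before slack/threshold)\n",
	       total, nb_sqb_bufs);
	return 0;
}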