From patchwork Thu May 25 09:58:37 2023
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 127404
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
To: Nithin Kumar Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Pavan Nikhilesh, Shijith Thotton
CC: ,
Subject: [PATCH v3 05/32] common/cnxk: reduce sqes per sqb by one
Date: Thu, 25 May 2023 15:28:37 +0530
Message-ID: <20230525095904.3967080-5-ndabilpuram@marvell.com>
In-Reply-To: <20230525095904.3967080-1-ndabilpuram@marvell.com>
References: <20230411091144.1087887-1-ndabilpuram@marvell.com> <20230525095904.3967080-1-ndabilpuram@marvell.com>
List-Id: DPDK patches and discussions

From: Satha Rao

Each SQB reserves its last SQE to store a pointer to the next SQB, so each SQB holds either 31 or 63 SQEs depending on the send descriptor size selected. This patch also takes sqb_slack into account to maintain threshold buffers for synchronization between HW and SW. The threshold is the maximum of 30% of the queue size and sqb_slack.
Signed-off-by: Satha Rao
---
 drivers/common/cnxk/roc_nix.h       |  2 +-
 drivers/common/cnxk/roc_nix_priv.h  |  2 +-
 drivers/common/cnxk/roc_nix_queue.c | 21 ++++++++++-----------
 drivers/event/cnxk/cn10k_eventdev.c |  2 +-
 drivers/event/cnxk/cn9k_eventdev.c  |  2 +-
 5 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 188b8800d3..50aef4fe85 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -13,7 +13,7 @@
 #define ROC_NIX_BPF_STATS_MAX 12
 #define ROC_NIX_MTR_ID_INVALID UINT32_MAX
 #define ROC_NIX_PFC_CLASS_INVALID UINT8_MAX
-#define ROC_NIX_SQB_LOWER_THRESH 70U
+#define ROC_NIX_SQB_THRESH 30U
 #define ROC_NIX_SQB_SLACK 12U
 
 /* Reserved interface types for BPID allocation */
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 99e27cdc56..7144d1ee10 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -12,7 +12,7 @@
 #define NIX_MAX_SQB ((uint16_t)512)
 #define NIX_DEF_SQB ((uint16_t)16)
 #define NIX_MIN_SQB ((uint16_t)8)
-#define NIX_SQB_LIST_SPACE ((uint16_t)2)
+#define NIX_SQB_PREFETCH ((uint16_t)1)
 
 /* Apply BP/DROP when CQ is 95% full */
 #define NIX_CQ_THRESH_LEVEL (5 * 256 / 100)
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index ac4d9856c1..d29fafa895 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -982,7 +982,7 @@ static int
 sqb_pool_populate(struct roc_nix *roc_nix, struct roc_nix_sq *sq)
 {
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
-	uint16_t sqes_per_sqb, count, nb_sqb_bufs;
+	uint16_t sqes_per_sqb, count, nb_sqb_bufs, thr;
 	struct npa_pool_s pool;
 	struct npa_aura_s aura;
 	uint64_t blk_sz;
@@ -995,22 +995,21 @@ sqb_pool_populate(struct roc_nix *roc_nix, struct roc_nix_sq *sq)
 	else
 		sqes_per_sqb = (blk_sz / 8) / 8;
 
+	/* Reserve One SQE in each SQB to hold pointer for next SQB */
+	sqes_per_sqb -= 1;
+
 	sq->nb_desc = PLT_MAX(512U, sq->nb_desc);
-	nb_sqb_bufs = sq->nb_desc / sqes_per_sqb;
-	nb_sqb_bufs += NIX_SQB_LIST_SPACE;
+	nb_sqb_bufs = PLT_DIV_CEIL(sq->nb_desc, sqes_per_sqb);
+	thr = PLT_DIV_CEIL((nb_sqb_bufs * ROC_NIX_SQB_THRESH), 100);
+	nb_sqb_bufs += NIX_SQB_PREFETCH;
 	/* Clamp up the SQB count */
-	nb_sqb_bufs = PLT_MIN(roc_nix->max_sqb_count,
-			      (uint16_t)PLT_MAX(NIX_DEF_SQB, nb_sqb_bufs));
+	nb_sqb_bufs = PLT_MIN(roc_nix->max_sqb_count, (uint16_t)PLT_MAX(NIX_DEF_SQB, nb_sqb_bufs));
 
 	sq->nb_sqb_bufs = nb_sqb_bufs;
 	sq->sqes_per_sqb_log2 = (uint16_t)plt_log2_u32(sqes_per_sqb);
-	sq->nb_sqb_bufs_adj =
-		nb_sqb_bufs -
-		(PLT_ALIGN_MUL_CEIL(nb_sqb_bufs, sqes_per_sqb) / sqes_per_sqb);
-	sq->nb_sqb_bufs_adj =
-		(sq->nb_sqb_bufs_adj * ROC_NIX_SQB_LOWER_THRESH) / 100;
+	sq->nb_sqb_bufs_adj = nb_sqb_bufs;
 
-	nb_sqb_bufs += roc_nix->sqb_slack;
+	nb_sqb_bufs += PLT_MAX(thr, roc_nix->sqb_slack);
 
 	/* Explicitly set nat_align alone as by default pool is with both
 	 * nat_align and buf_offset = 1 which we don't want for SQB.
 	 */
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 49d205af39..fd71ff15ca 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -995,7 +995,7 @@ cn10k_sso_txq_fc_update(const struct rte_eth_dev *eth_dev, int32_t tx_queue_id)
 			     (sqes_per_sqb - 1));
 		txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
 		txq->nb_sqb_bufs_adj =
-			(ROC_NIX_SQB_LOWER_THRESH * txq->nb_sqb_bufs_adj) / 100;
+			((100 - ROC_NIX_SQB_THRESH) * txq->nb_sqb_bufs_adj) / 100;
 	}
 }
 
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 2d2985f175..b104d19b9b 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -1037,7 +1037,7 @@ cn9k_sso_txq_fc_update(const struct rte_eth_dev *eth_dev, int32_t tx_queue_id)
 			     (sqes_per_sqb - 1));
 		txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
 		txq->nb_sqb_bufs_adj =
-			(ROC_NIX_SQB_LOWER_THRESH * txq->nb_sqb_bufs_adj) / 100;
+			((100 - ROC_NIX_SQB_THRESH) * txq->nb_sqb_bufs_adj) / 100;
 	}
 }