From patchwork Thu Sep 29 09:54:53 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sunil Kumar Kori
X-Patchwork-Id: 117115
X-Patchwork-Delegate: jerinj@marvell.com
From: Sunil Kumar Kori
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
 Ray Kinsella
Subject: [PATCH v2 2/3] common/cnxk: add congestion management ROC APIs
Date: Thu, 29 Sep 2022 15:24:53 +0530
Message-ID: <20220929095455.2173071-2-skori@marvell.com>
In-Reply-To: <20220929095455.2173071-1-skori@marvell.com>
References: <20220919124117.1059642-3-skori@marvell.com>
 <20220929095455.2173071-1-skori@marvell.com>
List-Id: DPDK patches and discussions

From: Sunil Kumar Kori

Add congestion management RoC APIs.
Depends-on: patch-24902 ("ethdev: support congestion management")

Signed-off-by: Sunil Kumar Kori
---
v1..v2:
 - Rebase on top of the dpdk-next-net-mrvl/for-next-net

 drivers/common/cnxk/roc_nix.h       |   5 ++
 drivers/common/cnxk/roc_nix_queue.c | 106 ++++++++++++++++++++++++++++
 drivers/common/cnxk/version.map     |   1 +
 3 files changed, 112 insertions(+)

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 5c2a869eba..34cb2c717c 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -315,6 +315,10 @@ struct roc_nix_rq {
 	/* Average SPB aura level drop threshold for RED */
 	uint8_t spb_red_drop;
 	/* Average SPB aura level pass threshold for RED */
 	uint8_t spb_red_pass;
+	/* Average xqe level pass threshold for RED */
+	uint8_t xqe_red_pass;
+	/* Average xqe level drop threshold for RED */
+	uint8_t xqe_red_drop;
 	/* LPB aura drop enable */
 	bool lpb_drop_ena;
@@ -869,6 +873,7 @@ int __roc_api roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq,
 			      bool ena);
 int __roc_api roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq,
 				bool ena);
+int __roc_api roc_nix_rq_cman_config(struct roc_nix *roc_nix, struct roc_nix_rq *rq);
 int __roc_api roc_nix_rq_ena_dis(struct roc_nix_rq *rq, bool enable);
 int __roc_api roc_nix_rq_is_sso_enable(struct roc_nix *roc_nix, uint32_t qid);
 int __roc_api roc_nix_rq_fini(struct roc_nix_rq *rq);
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 405d9a8274..368f1a52f7 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -235,6 +235,46 @@ nix_rq_aura_buf_type_update(struct roc_nix_rq *rq, bool set)
 	return 0;
 }
 
+static int
+nix_rq_cn9k_cman_cfg(struct dev *dev, struct roc_nix_rq *rq)
+{
+	struct mbox *mbox = dev->mbox;
+	struct nix_aq_enq_req *aq;
+
+	aq = mbox_alloc_msg_nix_aq_enq(mbox);
+	if (!aq)
+		return -ENOSPC;
+
+	aq->qidx = rq->qid;
+	aq->ctype = NIX_AQ_CTYPE_RQ;
+	aq->op = NIX_AQ_INSTOP_WRITE;
+
+	if (rq->red_pass && (rq->red_pass >= rq->red_drop)) {
+		aq->rq.lpb_pool_pass = rq->red_pass;
+		aq->rq.lpb_pool_drop = rq->red_drop;
+		aq->rq_mask.lpb_pool_pass = ~(aq->rq_mask.lpb_pool_pass);
+		aq->rq_mask.lpb_pool_drop = ~(aq->rq_mask.lpb_pool_drop);
+
+	}
+
+	if (rq->spb_red_pass && (rq->spb_red_pass >= rq->spb_red_drop)) {
+		aq->rq.spb_pool_pass = rq->spb_red_pass;
+		aq->rq.spb_pool_drop = rq->spb_red_drop;
+		aq->rq_mask.spb_pool_pass = ~(aq->rq_mask.spb_pool_pass);
+		aq->rq_mask.spb_pool_drop = ~(aq->rq_mask.spb_pool_drop);
+
+	}
+
+	if (rq->xqe_red_pass && (rq->xqe_red_pass >= rq->xqe_red_drop)) {
+		aq->rq.xqe_pass = rq->xqe_red_pass;
+		aq->rq.xqe_drop = rq->xqe_red_drop;
+		aq->rq_mask.xqe_drop = ~(aq->rq_mask.xqe_drop);
+		aq->rq_mask.xqe_pass = ~(aq->rq_mask.xqe_pass);
+	}
+
+	return mbox_process(mbox);
+}
+
 int
 nix_rq_cn9k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
 		bool ena)
@@ -529,6 +569,46 @@ nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
 	return 0;
 }
 
+static int
+nix_rq_cman_cfg(struct dev *dev, struct roc_nix_rq *rq)
+{
+	struct nix_cn10k_aq_enq_req *aq;
+	struct mbox *mbox = dev->mbox;
+
+	aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+	if (!aq)
+		return -ENOSPC;
+
+	aq->qidx = rq->qid;
+	aq->ctype = NIX_AQ_CTYPE_RQ;
+	aq->op = NIX_AQ_INSTOP_WRITE;
+
+	if (rq->red_pass && (rq->red_pass >= rq->red_drop)) {
+		aq->rq.lpb_pool_pass = rq->red_pass;
+		aq->rq.lpb_pool_drop = rq->red_drop;
+		aq->rq_mask.lpb_pool_pass = ~(aq->rq_mask.lpb_pool_pass);
+		aq->rq_mask.lpb_pool_drop = ~(aq->rq_mask.lpb_pool_drop);
+
+	}
+
+	if (rq->spb_red_pass && (rq->spb_red_pass >= rq->spb_red_drop)) {
+		aq->rq.spb_pool_pass = rq->spb_red_pass;
+		aq->rq.spb_pool_drop = rq->spb_red_drop;
+		aq->rq_mask.spb_pool_pass = ~(aq->rq_mask.spb_pool_pass);
+		aq->rq_mask.spb_pool_drop = ~(aq->rq_mask.spb_pool_drop);
+
+	}
+
+	if (rq->xqe_red_pass && (rq->xqe_red_pass >= rq->xqe_red_drop)) {
+		aq->rq.xqe_pass = rq->xqe_red_pass;
+		aq->rq.xqe_drop = rq->xqe_red_drop;
+		aq->rq_mask.xqe_drop = ~(aq->rq_mask.xqe_drop);
+		aq->rq_mask.xqe_pass = ~(aq->rq_mask.xqe_pass);
+	}
+
+	return mbox_process(mbox);
+}
+
 int
 roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 {
@@ -616,6 +696,32 @@ roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 	return nix_tel_node_add_rq(rq);
 }
 
+int
+roc_nix_rq_cman_config(struct roc_nix *roc_nix, struct roc_nix_rq *rq)
+{
+	bool is_cn9k = roc_model_is_cn9k();
+	struct nix *nix;
+	struct dev *dev;
+	int rc;
+
+	if (roc_nix == NULL || rq == NULL)
+		return NIX_ERR_PARAM;
+
+	nix = roc_nix_to_nix_priv(roc_nix);
+
+	if (rq->qid >= nix->nb_rx_queues)
+		return NIX_ERR_QUEUE_INVALID_RANGE;
+
+	dev = &nix->dev;
+
+	if (is_cn9k)
+		rc = nix_rq_cn9k_cman_cfg(dev, rq);
+	else
+		rc = nix_rq_cman_cfg(dev, rq);
+
+	return rc;
+}
+
 int
 roc_nix_rq_fini(struct roc_nix_rq *rq)
 {
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 276fec3660..e935f17c28 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -228,6 +228,7 @@ INTERNAL {
 	roc_nix_reassembly_configure;
 	roc_nix_register_cq_irqs;
 	roc_nix_register_queue_irqs;
+	roc_nix_rq_cman_config;
 	roc_nix_rq_dump;
 	roc_nix_rq_ena_dis;
 	roc_nix_rq_fini;