From patchwork Thu Oct 13 11:41:48 2022
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 118143
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Subject: [PATCH v2 05/13] common/cnxk: fix RQ mask config for cn10kb chip
Date: Thu, 13 Oct 2022 17:11:48 +0530
Message-ID: <20221013114156.996517-5-ndabilpuram@marvell.com>
In-Reply-To: <20221013114156.996517-1-ndabilpuram@marvell.com>
References: <20221011120135.45846-1-ndabilpuram@marvell.com>
 <20221013114156.996517-1-ndabilpuram@marvell.com>
List-Id: DPDK patches and discussions

RQ mask config needs to enable SPB_ENA in order for the zero aura to be
able to override it with the meta aura. Also fix the flow control config
to report an error when the rxchan config is invalid.
Fixes: ddf955d3917e ("common/cnxk: support CPT second pass")
Fixes: da57d4589a6f ("common/cnxk: support NIX flow control")

Signed-off-by: Nithin Dabilpuram
---
 drivers/common/cnxk/roc_nix_fc.c  |  4 ++-
 drivers/common/cnxk/roc_nix_inl.c | 43 +++++++++++++++++--------------
 2 files changed, 26 insertions(+), 21 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix_fc.c b/drivers/common/cnxk/roc_nix_fc.c
index f4cfa11c0f..033e17a4bf 100644
--- a/drivers/common/cnxk/roc_nix_fc.c
+++ b/drivers/common/cnxk/roc_nix_fc.c
@@ -52,8 +52,10 @@ nix_fc_rxchan_bpid_set(struct roc_nix *roc_nix, bool enable)
 		req->bpid_per_chan = true;
 
 		rc = mbox_process_msg(mbox, (void *)&rsp);
-		if (rc || (req->chan_cnt != rsp->chan_cnt))
+		if (rc || (req->chan_cnt != rsp->chan_cnt)) {
+			rc = -EIO;
 			goto exit;
+		}
 
 		nix->chan_cnt = rsp->chan_cnt;
 		for (i = 0; i < rsp->chan_cnt; i++)
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index 213d71e684..0da097c9e9 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -454,27 +454,29 @@ nix_inl_rq_mask_cfg(struct roc_nix *roc_nix, bool enable)
 	msk_req->rq_set.lpb_drop_ena = 0;
 	msk_req->rq_set.spb_drop_ena = 0;
 	msk_req->rq_set.xqe_drop_ena = 0;
+	msk_req->rq_set.spb_ena = 1;
 
-	msk_req->rq_mask.len_ol3_dis = ~(msk_req->rq_set.len_ol3_dis);
-	msk_req->rq_mask.len_ol4_dis = ~(msk_req->rq_set.len_ol4_dis);
-	msk_req->rq_mask.len_il3_dis = ~(msk_req->rq_set.len_il3_dis);
+	msk_req->rq_mask.len_ol3_dis = 0;
+	msk_req->rq_mask.len_ol4_dis = 0;
+	msk_req->rq_mask.len_il3_dis = 0;
 
-	msk_req->rq_mask.len_il4_dis = ~(msk_req->rq_set.len_il4_dis);
-	msk_req->rq_mask.csum_ol4_dis = ~(msk_req->rq_set.csum_ol4_dis);
-	msk_req->rq_mask.csum_il4_dis = ~(msk_req->rq_set.csum_il4_dis);
+	msk_req->rq_mask.len_il4_dis = 0;
+	msk_req->rq_mask.csum_ol4_dis = 0;
+	msk_req->rq_mask.csum_il4_dis = 0;
 
-	msk_req->rq_mask.lenerr_dis = ~(msk_req->rq_set.lenerr_dis);
-	msk_req->rq_mask.port_ol4_dis = ~(msk_req->rq_set.port_ol4_dis);
-	msk_req->rq_mask.port_il4_dis = ~(msk_req->rq_set.port_il4_dis);
+	msk_req->rq_mask.lenerr_dis = 0;
+	msk_req->rq_mask.port_ol4_dis = 0;
+	msk_req->rq_mask.port_il4_dis = 0;
 
-	msk_req->rq_mask.lpb_drop_ena = ~(msk_req->rq_set.lpb_drop_ena);
-	msk_req->rq_mask.spb_drop_ena = ~(msk_req->rq_set.spb_drop_ena);
-	msk_req->rq_mask.xqe_drop_ena = ~(msk_req->rq_set.xqe_drop_ena);
+	msk_req->rq_mask.lpb_drop_ena = 0;
+	msk_req->rq_mask.spb_drop_ena = 0;
+	msk_req->rq_mask.xqe_drop_ena = 0;
+	msk_req->rq_mask.spb_ena = 0;
 
 	aura_handle = roc_npa_zero_aura_handle();
 	msk_req->ipsec_cfg1.spb_cpt_aura = roc_npa_aura_handle_to_aura(aura_handle);
 	msk_req->ipsec_cfg1.rq_mask_enable = enable;
-	msk_req->ipsec_cfg1.spb_cpt_sizem1 = inl_cfg->buf_sz;
+	msk_req->ipsec_cfg1.spb_cpt_sizem1 = (inl_cfg->buf_sz >> 7) - 1;
 	msk_req->ipsec_cfg1.spb_cpt_enable = enable;
 
 	return mbox_process(mbox);
@@ -544,13 +546,6 @@ roc_nix_inl_inb_init(struct roc_nix *roc_nix)
 		idev->inl_cfg.refs++;
 	}
 
-	if (roc_model_is_cn10kb_a0()) {
-		rc = nix_inl_rq_mask_cfg(roc_nix, true);
-		if (rc) {
-			plt_err("Failed to get rq mask rc=%d", rc);
-			return rc;
-		}
-	}
 	nix->inl_inb_ena = true;
 	return 0;
 }
@@ -1043,6 +1038,14 @@ roc_nix_inl_rq_ena_dis(struct roc_nix *roc_nix, bool enable)
 	if (!idev)
 		return -EFAULT;
 
+	if (roc_model_is_cn10kb_a0()) {
+		rc = nix_inl_rq_mask_cfg(roc_nix, true);
+		if (rc) {
+			plt_err("Failed to get rq mask rc=%d", rc);
+			return rc;
+		}
+	}
+
 	if (nix->inb_inl_dev) {
 		if (!inl_rq || !idev->nix_inl_dev)
 			return -EFAULT;
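
Illustrative note (not part of the patch): the nix_inl_rq_mask_cfg() hunk
also changes spb_cpt_sizem1 from the raw buffer size to
(inl_cfg->buf_sz >> 7) - 1, i.e. the buffer size expressed in 128-byte
units, minus one. Below is a minimal standalone sketch of that
arithmetic; the helper name and the example buffer size are made up for
illustration, only the (buf_sz >> 7) - 1 expression comes from the diff.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper mirroring the encoding used for spb_cpt_sizem1. */
    static uint64_t
    spb_cpt_sizem1_encode(uint64_t buf_sz)
    {
            /* Size in 128-byte units, minus one, as in the patch. */
            return (buf_sz >> 7) - 1;
    }

    int
    main(void)
    {
            uint64_t buf_sz = 4096; /* example SPB/CPT buffer size in bytes */

            /* 4096 bytes -> 32 units of 128 bytes -> field value 31 */
            printf("buf_sz=%" PRIu64 " -> sizem1=%" PRIu64 "\n",
                   buf_sz, spb_cpt_sizem1_encode(buf_sz));
            return 0;
    }

The roc_nix_fc.c hunk follows the same intent as the commit message: a
mailbox response that succeeds but reports a channel count different
from the request is now treated as a failure (-EIO) instead of taking
the exit path with rc still 0.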