From patchwork Thu May 25 09:58:55 2023
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 127429
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
To: Nithin Kumar Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Pavan Nikhilesh, Shijith Thotton
Subject: [PATCH v3 23/32] net/cnxk: support for inbound without inline dev mode
Date: Thu, 25 May 2023 15:28:55 +0530
Message-ID: <20230525095904.3967080-23-ndabilpuram@marvell.com>
In-Reply-To: <20230525095904.3967080-1-ndabilpuram@marvell.com>
References: <20230411091144.1087887-1-ndabilpuram@marvell.com>
 <20230525095904.3967080-1-ndabilpuram@marvell.com>
List-Id: DPDK patches and discussions

Add support for inbound Inline IPsec without an inline device RQ,
i.e., both first-pass and second-pass traffic hit the same ethdev RQ
in poll mode. Also remove the switching from inline dev to non-inline
dev mode, since inline dev mode is the default and can only be
overridden by devargs.
Signed-off-by: Nithin Dabilpuram
---
 drivers/common/cnxk/roc_nix_queue.c      |  3 +++
 drivers/event/cnxk/cnxk_eventdev_adptr.c | 15 ---------------
 drivers/net/cnxk/cnxk_ethdev.c           | 15 ++++++++++-----
 3 files changed, 13 insertions(+), 20 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index d29fafa895..08e8bf7ea2 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -473,6 +473,9 @@ nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
 	if (rq->ipsech_ena) {
 		aq->rq.ipsech_ena = 1;
 		aq->rq.ipsecd_drop_en = 1;
+		aq->rq.ena_wqwd = 1;
+		aq->rq.wqe_skip = rq->wqe_skip;
+		aq->rq.wqe_caching = 1;
 	}
 
 	aq->rq.lpb_aura = roc_npa_aura_handle_to_aura(rq->aura_handle);
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
index 8ad84198b9..92aea92389 100644
--- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -273,15 +273,6 @@ cnxk_sso_rx_adapter_queue_add(
 	}
 
 	dev->rx_offloads |= cnxk_eth_dev->rx_offload_flags;
-
-	/* Switch to use PF/VF's NIX LF instead of inline device for inbound
-	 * when all the RQ's are switched to event dev mode. We do this only
-	 * when dev arg no_inl_dev=1 is selected.
-	 */
-	if (cnxk_eth_dev->inb.no_inl_dev &&
-	    cnxk_eth_dev->nb_rxq_sso == cnxk_eth_dev->nb_rxq)
-		cnxk_nix_inb_mode_set(cnxk_eth_dev, false);
-
 	return 0;
 }
 
@@ -309,12 +300,6 @@ cnxk_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
 	if (rc < 0)
 		plt_err("Failed to clear Rx adapter config port=%d, q=%d",
 			eth_dev->data->port_id, rx_queue_id);
-
-	/* Removing RQ from Rx adapter implies need to use
-	 * inline device for CQ/Poll mode.
-	 */
-	cnxk_nix_inb_mode_set(cnxk_eth_dev, true);
-
 	return rc;
 }
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index aaa1014479..916198d802 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -81,9 +81,6 @@ cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev)
 {
 	struct roc_nix *nix = &dev->nix;
 
-	if (dev->inb.inl_dev == use_inl_dev)
-		return 0;
-
 	plt_nix_dbg("Security sessions(%u) still active, inl=%u!!!",
 		    dev->inb.nb_sess, !!dev->inb.inl_dev);
 
@@ -119,7 +116,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 	/* By default pick using inline device for poll mode.
 	 * Will be overridden when event mode rq's are setup.
 	 */
-	cnxk_nix_inb_mode_set(dev, true);
+	cnxk_nix_inb_mode_set(dev, !dev->inb.no_inl_dev);
 
 	/* Allocate memory to be used as dptr for CPT ucode
 	 * WRITE_SA op.
@@ -633,6 +630,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	struct roc_nix_rq *rq;
 	struct roc_nix_cq *cq;
 	uint16_t first_skip;
+	uint16_t wqe_skip;
 	int rc = -EINVAL;
 	size_t rxq_sz;
 	struct rte_mempool *lpb_pool = mp;
@@ -712,8 +710,15 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rq->lpb_drop_ena = !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY);
 
 	/* Enable Inline IPSec on RQ, will not be used for Poll mode */
-	if (roc_nix_inl_inb_is_enabled(nix))
+	if (roc_nix_inl_inb_is_enabled(nix) && !dev->inb.inl_dev) {
 		rq->ipsech_ena = true;
+
+		/* WQE skip is needed when poll mode is enabled in CN10KA_B0 and above
+		 * for Inline IPsec traffic to CQ without inline device.
+		 */
+		wqe_skip = RTE_ALIGN_CEIL(sizeof(struct rte_mbuf), ROC_CACHE_LINE_SZ);
+		wqe_skip = wqe_skip / ROC_CACHE_LINE_SZ;
+		rq->wqe_skip = wqe_skip;
+	}
 
 	if (spb_pool) {
 		rq->spb_ena = 1;