From patchwork Wed May 24 10:03:58 2023
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 127309
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
To: Nithin Kumar Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Pavan Nikhilesh, Shijith Thotton
Subject: [PATCH v2 23/32] net/cnxk: support for inbound without inline dev mode
Date: Wed, 24 May 2023 15:33:58 +0530
Message-ID: <20230524100407.3796139-23-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230524100407.3796139-1-ndabilpuram@marvell.com>
References: <20230411091144.1087887-1-ndabilpuram@marvell.com>
	<20230524100407.3796139-1-ndabilpuram@marvell.com>

Add support for inbound inline IPsec without an inline device RQ, i.e.
with both first-pass and second-pass traffic hitting the same ethdev RQ
in poll mode.

Also remove the switching from inline device to non-inline device mode,
since inline device mode is the default and can only be overridden by
devargs.
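As a reading aid (not part of the patch itself), below is a minimal,
self-contained sketch of the selection policy this change leaves in
place. The struct and helper names are illustrative stand-ins; only the
no_inl_dev field mirrors actual driver state (cnxk_eth_dev->inb.no_inl_dev
in the diff):

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative stand-in for the cnxk inbound security state; only
     * the field the commit message refers to is modeled here.
     */
    struct inb_cfg {
    	bool no_inl_dev; /* set via the no_inl_dev=1 devarg */
    	bool inl_dev;    /* first pass lands on the inline device RQ */
    };

    /* After this patch the decision is made once, at security setup
     * time: use the inline device unless the devarg overrides it.
     * There is no more runtime switching on Rx adapter queue add/del.
     */
    static void
    inb_mode_select(struct inb_cfg *inb)
    {
    	inb->inl_dev = !inb->no_inl_dev;
    }

    int
    main(void)
    {
    	struct inb_cfg inb = { .no_inl_dev = true };

    	inb_mode_select(&inb);
    	/* Prints "no": both passes hit the same ethdev RQ in poll mode */
    	printf("inline dev in use: %s\n", inb.inl_dev ? "yes" : "no");
    	return 0;
    }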
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_nix_queue.c      |  3 +++
 drivers/event/cnxk/cnxk_eventdev_adptr.c | 15 ---------------
 drivers/net/cnxk/cnxk_ethdev.c           | 15 ++++++++++-----
 3 files changed, 13 insertions(+), 20 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index d29fafa895..08e8bf7ea2 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -473,6 +473,9 @@ nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
 	if (rq->ipsech_ena) {
 		aq->rq.ipsech_ena = 1;
 		aq->rq.ipsecd_drop_en = 1;
+		aq->rq.ena_wqwd = 1;
+		aq->rq.wqe_skip = rq->wqe_skip;
+		aq->rq.wqe_caching = 1;
 	}
 
 	aq->rq.lpb_aura = roc_npa_aura_handle_to_aura(rq->aura_handle);
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
index 8ad84198b9..92aea92389 100644
--- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -273,15 +273,6 @@ cnxk_sso_rx_adapter_queue_add(
 	}
 
 	dev->rx_offloads |= cnxk_eth_dev->rx_offload_flags;
-
-	/* Switch to use PF/VF's NIX LF instead of inline device for inbound
-	 * when all the RQ's are switched to event dev mode. We do this only
-	 * when dev arg no_inl_dev=1 is selected.
-	 */
-	if (cnxk_eth_dev->inb.no_inl_dev &&
-	    cnxk_eth_dev->nb_rxq_sso == cnxk_eth_dev->nb_rxq)
-		cnxk_nix_inb_mode_set(cnxk_eth_dev, false);
-
 	return 0;
 }
 
@@ -309,12 +300,6 @@ cnxk_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
 	if (rc < 0)
 		plt_err("Failed to clear Rx adapter config port=%d, q=%d",
 			eth_dev->data->port_id, rx_queue_id);
-
-	/* Removing RQ from Rx adapter implies need to use
-	 * inline device for CQ/Poll mode.
-	 */
-	cnxk_nix_inb_mode_set(cnxk_eth_dev, true);
-
 	return rc;
 }
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index aaa1014479..916198d802 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -81,9 +81,6 @@ cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev)
 {
 	struct roc_nix *nix = &dev->nix;
 
-	if (dev->inb.inl_dev == use_inl_dev)
-		return 0;
-
 	plt_nix_dbg("Security sessions(%u) still active, inl=%u!!!",
 		    dev->inb.nb_sess, !!dev->inb.inl_dev);
 
@@ -119,7 +116,7 @@ nix_security_setup(struct cnxk_eth_dev *dev)
 	/* By default pick using inline device for poll mode.
 	 * Will be overridden when event mode rq's are setup.
 	 */
-	cnxk_nix_inb_mode_set(dev, true);
+	cnxk_nix_inb_mode_set(dev, !dev->inb.no_inl_dev);
 
 	/* Allocate memory to be used as dptr for CPT ucode
 	 * WRITE_SA op.
@@ -633,6 +630,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	struct roc_nix_rq *rq;
 	struct roc_nix_cq *cq;
 	uint16_t first_skip;
+	uint16_t wqe_skip;
 	int rc = -EINVAL;
 	size_t rxq_sz;
 	struct rte_mempool *lpb_pool = mp;
@@ -712,8 +710,15 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rq->lpb_drop_ena = !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY);
 
 	/* Enable Inline IPSec on RQ, will not be used for Poll mode */
-	if (roc_nix_inl_inb_is_enabled(nix))
+	if (roc_nix_inl_inb_is_enabled(nix) && !dev->inb.inl_dev) {
 		rq->ipsech_ena = true;
+		/* WQE skip is needed when poll mode is enabled in CN10KA_B0 and above
+		 * for Inline IPsec traffic to CQ without inline device.
+		 */
+		wqe_skip = RTE_ALIGN_CEIL(sizeof(struct rte_mbuf), ROC_CACHE_LINE_SZ);
+		wqe_skip = wqe_skip / ROC_CACHE_LINE_SZ;
+		rq->wqe_skip = wqe_skip;
+	}
 
 	if (spb_pool) {
 		rq->spb_ena = 1;
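
For reference (again, not part of the patch): rq->wqe_skip expresses, in
cache-line units, how far into the packet buffer the WQE write lands, so
that the mbuf header at the start of the buffer is not clobbered when
ena_wqwd/wqe_caching are set. A standalone sketch of the arithmetic,
where the ROC_CACHE_LINE_SZ and sizeof(struct rte_mbuf) values are
illustrative assumptions rather than taken from a real build:

    #include <stdio.h>

    /* Assumed values for illustration only; the real ones come from the
     * ROC headers and the rte_mbuf layout of the target build.
     */
    #define ROC_CACHE_LINE_SZ 128u
    #define MBUF_HDR_SZ       128u /* stand-in for sizeof(struct rte_mbuf) */

    /* Same rounding as RTE_ALIGN_CEIL: round v up to a multiple of a */
    #define ALIGN_CEIL(v, a) ((((v) + (a) - 1) / (a)) * (a))

    int
    main(void)
    {
    	/* Round the mbuf header up to whole cache lines, then express
    	 * the skip in cache-line units, exactly as the patch does.
    	 */
    	unsigned int wqe_skip =
    		ALIGN_CEIL(MBUF_HDR_SZ, ROC_CACHE_LINE_SZ) / ROC_CACHE_LINE_SZ;

    	/* 128/128 -> 1 cache line here; 64-byte lines would give 2 */
    	printf("rq->wqe_skip = %u\n", wqe_skip);
    	return 0;
    }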