From patchwork Mon Feb 7 06:26:38 2022
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 106928
X-Patchwork-Delegate: gakhil@marvell.com
From: Nithin Dabilpuram
To: Radu Nicolau, Akhil Goyal
Cc: Nithin Dabilpuram
Subject: [PATCH v2 1/4] examples/ipsec-secgw: update error prints to data path log
Date: Mon, 7 Feb 2022 11:56:38 +0530
Message-ID: <20220207062641.26574-1-ndabilpuram@marvell.com>
In-Reply-To: <20220206143022.13098-1-ndabilpuram@marvell.com>
References: <20220206143022.13098-1-ndabilpuram@marvell.com>

Update error prints in the data path to RTE_LOG_DP(). Error prints in the
fast path are not good for performance, as they slow down the application
when a few bad packets are received.
Signed-off-by: Nithin Dabilpuram
Acked-by: Akhil Goyal
---
v2:
- Fixed issue with warning in patch 4/4 by checking for session pool
  initialization instead of mbuf_pool as now mbuf pool is per port.

 examples/ipsec-secgw/ipsec_worker.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 7419e85..e9493c5 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -332,7 +332,8 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt,
 		break;
 
 	default:
-		RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type);
+		RTE_LOG_DP(DEBUG, IPSEC_ESP, "Unsupported packet type = %d\n",
+			   type);
 		goto drop_pkt_and_exit;
 	}
 
@@ -570,7 +571,8 @@ classify_pkt(struct rte_mbuf *pkt, struct ipsec_traffic *t)
 		t->ip6.pkts[(t->ip6.num)++] = pkt;
 		break;
 	default:
-		RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type);
+		RTE_LOG_DP(DEBUG, IPSEC_ESP, "Unsupported packet type = %d\n",
+			   type);
 		free_pkts(&pkt, 1);
 		break;
 	}
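[Not part of the patch above: a minimal sketch of why RTE_LOG_DP() is cheaper
in the packet path. RTE_LOG() is always evaluated at run time, while
RTE_LOG_DP() is removed at compile time when its level is more verbose than
RTE_LOG_DP_LEVEL (RTE_LOG_INFO by default). The MYAPP logtype and the helper
names below are made up for illustration; they are not the example's
IPSEC/IPSEC_ESP logtypes.]

#include <rte_log.h>
#include <rte_mbuf.h>

#define RTE_LOGTYPE_MYAPP RTE_LOGTYPE_USER1	/* placeholder logtype */

static void
classify_one(struct rte_mbuf *m, int type)
{
	switch (type) {
	case 4:		/* stand-in for a supported type, e.g. plain IPv4 */
		/* ... hand the packet to the next stage (elided) ... */
		break;
	default:
		/*
		 * Per-packet path: this DEBUG print compiles to nothing
		 * unless the build raises RTE_LOG_DP_LEVEL to DEBUG, so a
		 * burst of bad packets does not slow the fast path down.
		 */
		RTE_LOG_DP(DEBUG, MYAPP, "Unsupported packet type = %d\n",
			   type);
		rte_pktmbuf_free(m);
		break;
	}
}

static void
report_setup_failure(int rc)
{
	/* Control path: a run-time RTE_LOG() is fine here. */
	RTE_LOG(ERR, MYAPP, "setup failed, rc=%d\n", rc);
}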
From patchwork Mon Feb 7 06:26:39 2022
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 106929
X-Patchwork-Delegate: gakhil@marvell.com
From: Nithin Dabilpuram
To: Radu Nicolau, Akhil Goyal
Cc: Nithin Dabilpuram
Subject: [PATCH v2 2/4] examples/ipsec-secgw: disable Tx chksum offload for inline
Date: Mon, 7 Feb 2022 11:56:39 +0530
Message-ID: <20220207062641.26574-2-ndabilpuram@marvell.com>
In-Reply-To: <20220207062641.26574-1-ndabilpuram@marvell.com>
References: <20220206143022.13098-1-ndabilpuram@marvell.com>
 <20220207062641.26574-1-ndabilpuram@marvell.com>

Enable Tx IPv4 checksum offload only when Tx inline crypto is needed. In
other cases, such as Tx inline protocol offload, checksum computation is
implicitly taken care of by the HW. The advantage of enabling only the
necessary offloads is that the Tx burst function can be as light as
possible.

Signed-off-by: Nithin Dabilpuram
Acked-by: Akhil Goyal
---
 examples/ipsec-secgw/ipsec-secgw.c | 3 ---
 examples/ipsec-secgw/sa.c          | 9 +++++++++
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 21abc0d..d8a9bfa 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2314,9 +2314,6 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 		local_port_conf.txmode.offloads |=
 			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
-	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
-		local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
-
 	printf("port %u configuring rx_offloads=0x%" PRIx64
 		", tx_offloads=0x%" PRIx64 "\n",
 		portid, local_port_conf.rxmode.offloads,
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 1839ac7..b878a48 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -1790,6 +1790,15 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
 				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) &&
 				rule->portid == port_id) {
 			*tx_offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
+
+			/* Checksum offload is not needed for inline protocol as
+			 * all processing for Outbound IPSec packets will be
+			 * implicitly taken care and for non-IPSec packets,
+			 * there is no need of IPv4 Checksum offload.
+			 */
+			if (rule_type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)
+				*tx_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
+
 			if (rule->mss)
 				*tx_offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 		}
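[Not part of the patch above: a minimal sketch of the pattern it introduces,
i.e. asking for RTE_ETH_TX_OFFLOAD_IPV4_CKSUM only when an inline-crypto SA
will actually need it and the port advertises the capability. The
configure_tx_offloads() helper and its parameters are hypothetical, not the
example's sa_check_offloads()/port_init() code.]

#include <stdbool.h>
#include <rte_ethdev.h>

static int
configure_tx_offloads(uint16_t port_id, struct rte_eth_conf *conf,
		      bool have_inline_crypto_sa)
{
	struct rte_eth_dev_info dev_info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/*
	 * Inline protocol offload computes the checksum in HW on its own,
	 * so only inline crypto asks for the IPv4 checksum offload. Fewer
	 * requested offloads lets the PMD pick a lighter Tx burst routine.
	 */
	if (have_inline_crypto_sa &&
	    (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM))
		conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;

	return 0;
}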
From patchwork Mon Feb 7 06:26:40 2022
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 106930
X-Patchwork-Delegate: gakhil@marvell.com
From: Nithin Dabilpuram
To: Radu Nicolau, Akhil Goyal
Cc: Nithin Dabilpuram
Subject: [PATCH v2 3/4] examples/ipsec-secgw: fix buffer free logic in vector mode
Date: Mon, 7 Feb 2022 11:56:40 +0530
Message-ID: <20220207062641.26574-3-ndabilpuram@marvell.com>
In-Reply-To: <20220207062641.26574-1-ndabilpuram@marvell.com>
References: <20220206143022.13098-1-ndabilpuram@marvell.com>
 <20220207062641.26574-1-ndabilpuram@marvell.com>

Fix packet processing to skip the packet after its mbuf is freed, instead
of touching and Tx'ing it.

Also free the vector event buffer in the event worker when, after
processing, there is no packet left to be enqueued to the Tx adapter.

Fixes: 86738ebe1e3d ("examples/ipsec-secgw: support event vector")
Cc: schalla@marvell.com
Cc: stable@dpdk.org

Signed-off-by: Nithin Dabilpuram
Acked-by: Akhil Goyal
---
 examples/ipsec-secgw/ipsec_worker.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index e9493c5..8639426 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -205,12 +205,16 @@ check_sp_sa_bulk(struct sp_ctx *sp, struct sa_ctx *sa_ctx,
 			ip->pkts[j++] = m;
 		else {
 			sa = *(struct ipsec_sa **)rte_security_dynfield(m);
-			if (sa == NULL)
+			if (sa == NULL) {
 				free_pkts(&m, 1);
+				continue;
+			}
 
 			/* SPI on the packet should match with the one in SA */
-			if (unlikely(sa->spi != sa_ctx->sa[res - 1].spi))
+			if (unlikely(sa->spi != sa_ctx->sa[res - 1].spi)) {
 				free_pkts(&m, 1);
+				continue;
+			}
 
 			ip->pkts[j++] = m;
 		}
@@ -536,6 +540,7 @@ ipsec_ev_route_pkts(struct rte_event_vector *vec, struct route_table *rt,
 				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)) {
 			RTE_LOG(ERR, IPSEC, "SA type not supported\n");
 			free_pkts(&pkt, 1);
+			continue;
 		}
 		rte_security_set_pkt_metadata(sess->security.ctx,
 				sess->security.ses, pkt, NULL);
@@ -695,11 +700,13 @@ ipsec_ev_vector_process(struct lcore_conf_ev_tx_int_port_wrkr *lconf,
 		ret = process_ipsec_ev_outbound_vector(&lconf->outbound,
 						       &lconf->rt, vec);
 
-	if (ret > 0) {
+	if (likely(ret > 0)) {
 		vec->nb_elem = ret;
 		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
 						 links[0].event_port_id,
 						 ev, 1, 0);
+	} else {
+		rte_mempool_put(rte_mempool_from_obj(vec), vec);
 	}
 }
 
@@ -720,6 +727,8 @@ ipsec_ev_vector_drv_mode_process(struct eh_event_link_info *links,
 		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
 						 links[0].event_port_id,
 						 ev, 1, 0);
+	else
+		rte_mempool_put(rte_mempool_from_obj(vec), vec);
 }
 
 /*
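[Not part of the patch above: a minimal sketch of the idiom the fix relies
on. An event vector obtained from a pool created with
rte_event_vector_pool_create() can be returned via
rte_mempool_put(rte_mempool_from_obj(vec), vec), because the owning mempool
is recoverable from the object itself. The tx_or_recycle_vector() helper and
its parameters are illustrative, not the example's code.]

#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>
#include <rte_mempool.h>

static void
tx_or_recycle_vector(uint8_t evdev_id, uint8_t tx_port_id,
		     struct rte_event *ev, uint16_t nb_kept)
{
	struct rte_event_vector *vec = ev->vec;

	if (nb_kept > 0) {
		/* Some packets survived processing: shrink and transmit. */
		vec->nb_elem = nb_kept;
		rte_event_eth_tx_adapter_enqueue(evdev_id, tx_port_id,
						 ev, 1, 0);
	} else {
		/*
		 * Every mbuf was dropped; the vector buffer itself must go
		 * back to its pool, otherwise vector objects leak.
		 */
		rte_mempool_put(rte_mempool_from_obj(vec), vec);
	}
}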
From patchwork Mon Feb 7 06:26:41 2022
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 106931
X-Patchwork-Delegate: gakhil@marvell.com
From: Nithin Dabilpuram
To: Radu Nicolau, Akhil Goyal
Cc: Nithin Dabilpuram
Subject: [PATCH v2 4/4] examples/ipsec-secgw: add per port pool and vector pool size
Date: Mon, 7 Feb 2022 11:56:41 +0530
Message-ID: <20220207062641.26574-4-ndabilpuram@marvell.com>
In-Reply-To: <20220207062641.26574-1-ndabilpuram@marvell.com>
References: <20220206143022.13098-1-ndabilpuram@marvell.com>
 <20220207062641.26574-1-ndabilpuram@marvell.com>

Add support to enable a per-port packet pool and also to override the
vector pool size from command-line args. This is useful on some HW to tune
performance based on the use case.
Signed-off-by: Nithin Dabilpuram
Acked-by: Akhil Goyal
---
 examples/ipsec-secgw/event_helper.c | 17 ++++++--
 examples/ipsec-secgw/event_helper.h |  2 +
 examples/ipsec-secgw/ipsec-secgw.c  | 82 ++++++++++++++++++++++++++++---------
 examples/ipsec-secgw/ipsec-secgw.h  |  2 +
 examples/ipsec-secgw/ipsec.h        |  2 +-
 5 files changed, 81 insertions(+), 24 deletions(-)

diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c
index 8947e41..172ab8e 100644
--- a/examples/ipsec-secgw/event_helper.c
+++ b/examples/ipsec-secgw/event_helper.c
@@ -792,8 +792,8 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 	uint32_t service_id, socket_id, nb_elem;
 	struct rte_mempool *vector_pool = NULL;
 	uint32_t lcore_id = rte_lcore_id();
+	int ret, portid, nb_ports = 0;
 	uint8_t eventdev_id;
-	int ret;
 	int j;
 
 	/* Get event dev ID */
@@ -806,10 +806,21 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 		return ret;
 	}
 
+	RTE_ETH_FOREACH_DEV(portid)
+		if ((em_conf->eth_portmask & (1 << portid)))
+			nb_ports++;
+
 	if (em_conf->ext_params.event_vector) {
 		socket_id = rte_lcore_to_socket_id(lcore_id);
-		nb_elem = (nb_bufs_in_pool / em_conf->ext_params.vector_size)
-			  + 1;
+
+		if (em_conf->vector_pool_sz) {
+			nb_elem = em_conf->vector_pool_sz;
+		} else {
+			nb_elem = (nb_bufs_in_pool /
+				   em_conf->ext_params.vector_size) + 1;
+			if (per_port_pool)
+				nb_elem = nb_ports * nb_elem;
+		}
 
 		vector_pool = rte_event_vector_pool_create(
 				"vector_pool", nb_elem, 0,
diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h
index 5be6c62..f3cbe57 100644
--- a/examples/ipsec-secgw/event_helper.h
+++ b/examples/ipsec-secgw/event_helper.h
@@ -183,6 +183,8 @@ struct eventmode_conf {
 		/**< 64 bit field to specify extended params */
 	uint64_t vector_tmo_ns;
 		/**< Max vector timeout in nanoseconds */
+	uint64_t vector_pool_sz;
+		/**< Vector pool size */
 };
 
 /**
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index d8a9bfa..2f3ebea 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -118,6 +118,8 @@ struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS];
 #define CMD_LINE_OPT_EVENT_VECTOR	"event-vector"
 #define CMD_LINE_OPT_VECTOR_SIZE	"vector-size"
 #define CMD_LINE_OPT_VECTOR_TIMEOUT	"vector-tmo"
+#define CMD_LINE_OPT_VECTOR_POOL_SZ	"vector-pool-sz"
+#define CMD_LINE_OPT_PER_PORT_POOL	"per-port-pool"
 
 #define CMD_LINE_ARG_EVENT	"event"
 #define CMD_LINE_ARG_POLL	"poll"
@@ -145,6 +147,8 @@ enum {
 	CMD_LINE_OPT_EVENT_VECTOR_NUM,
 	CMD_LINE_OPT_VECTOR_SIZE_NUM,
 	CMD_LINE_OPT_VECTOR_TIMEOUT_NUM,
+	CMD_LINE_OPT_VECTOR_POOL_SZ_NUM,
+	CMD_LINE_OPT_PER_PORT_POOL_NUM,
 };
 
 static const struct option lgopts[] = {
@@ -161,6 +165,8 @@ static const struct option lgopts[] = {
 	{CMD_LINE_OPT_EVENT_VECTOR, 0, 0, CMD_LINE_OPT_EVENT_VECTOR_NUM},
 	{CMD_LINE_OPT_VECTOR_SIZE, 1, 0, CMD_LINE_OPT_VECTOR_SIZE_NUM},
 	{CMD_LINE_OPT_VECTOR_TIMEOUT, 1, 0, CMD_LINE_OPT_VECTOR_TIMEOUT_NUM},
+	{CMD_LINE_OPT_VECTOR_POOL_SZ, 1, 0, CMD_LINE_OPT_VECTOR_POOL_SZ_NUM},
+	{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PER_PORT_POOL_NUM},
 	{NULL, 0, 0, 0}
 };
 
@@ -234,7 +240,6 @@ struct lcore_conf {
 	struct rt_ctx *rt6_ctx;
 	struct {
 		struct rte_ip_frag_tbl *tbl;
-		struct rte_mempool *pool_dir;
 		struct rte_mempool *pool_indir;
 		struct rte_ip_frag_death_row dr;
 	} frag;
@@ -262,6 +267,8 @@ static struct rte_eth_conf port_conf = {
 
 struct socket_ctx socket_ctx[NB_SOCKETS];
 
+bool per_port_pool;
+
 /*
  * Determine is multi-segment support required:
  * - either frame buffer size is smaller then mtu
@@ -630,12 +637,10 @@ send_fragment_packet(struct lcore_conf *qconf, struct rte_mbuf *m,
 
 	if (proto == IPPROTO_IP)
 		rc = rte_ipv4_fragment_packet(m, tbl->m_table + len,
-			n, mtu_size, qconf->frag.pool_dir,
-			qconf->frag.pool_indir);
+			n, mtu_size, m->pool, qconf->frag.pool_indir);
 	else
 		rc = rte_ipv6_fragment_packet(m, tbl->m_table + len,
-			n, mtu_size, qconf->frag.pool_dir,
-			qconf->frag.pool_indir);
+			n, mtu_size, m->pool, qconf->frag.pool_indir);
 
 	if (rc >= 0)
 		len += rc;
@@ -1256,7 +1261,6 @@ ipsec_poll_mode_worker(void)
 	qconf->outbound.session_pool = socket_ctx[socket_id].session_pool;
 	qconf->outbound.session_priv_pool =
 			socket_ctx[socket_id].session_priv_pool;
-	qconf->frag.pool_dir = socket_ctx[socket_id].mbuf_pool;
 	qconf->frag.pool_indir = socket_ctx[socket_id].mbuf_pool_indir;
 
 	rc = ipsec_sad_lcore_cache_init(app_sa_prm.cache_sz);
@@ -1511,6 +1515,9 @@ print_usage(const char *prgname)
 		" --vector-size Max vector size (default value: 16)\n"
 		" --vector-tmo Max vector timeout in nanoseconds"
 		"             (default value: 102400)\n"
+		" --" CMD_LINE_OPT_PER_PORT_POOL " Enable per port mbuf pool\n"
+		" --" CMD_LINE_OPT_VECTOR_POOL_SZ " Vector pool size\n"
+		"           (default value is based on mbuf count)\n"
 		"\n",
 		prgname);
 }
@@ -1894,6 +1901,15 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf)
 			em_conf = eh_conf->mode_params;
 			em_conf->vector_tmo_ns = ret;
 			break;
+		case CMD_LINE_OPT_VECTOR_POOL_SZ_NUM:
+			ret = parse_decimal(optarg);
+
+			em_conf = eh_conf->mode_params;
+			em_conf->vector_pool_sz = ret;
+			break;
+		case CMD_LINE_OPT_PER_PORT_POOL_NUM:
+			per_port_pool = 1;
+			break;
 		default:
 			print_usage(prgname);
 			return -1;
@@ -2378,6 +2394,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 	/* init RX queues */
 	for (queue = 0; queue < qconf->nb_rx_queue; ++queue) {
 		struct rte_eth_rxconf rxq_conf;
+		struct rte_mempool *pool;
 
 		if (portid != qconf->rx_queue_list[queue].port_id)
 			continue;
@@ -2389,9 +2406,14 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 
 		rxq_conf = dev_info.default_rxconf;
 		rxq_conf.offloads = local_port_conf.rxmode.offloads;
+
+		if (per_port_pool)
+			pool = socket_ctx[socket_id].mbuf_pool[portid];
+		else
+			pool = socket_ctx[socket_id].mbuf_pool[0];
+
 		ret = rte_eth_rx_queue_setup(portid, rx_queueid,
-			nb_rxd, socket_id, &rxq_conf,
-			socket_ctx[socket_id].mbuf_pool);
+			nb_rxd, socket_id, &rxq_conf, pool);
 		if (ret < 0)
 			rte_exit(EXIT_FAILURE,
 				"rte_eth_rx_queue_setup: err=%d, "
@@ -2504,28 +2526,37 @@ session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
 }
 
 static void
-pool_init(struct socket_ctx *ctx, int32_t socket_id, uint32_t nb_mbuf)
+pool_init(struct socket_ctx *ctx, int32_t socket_id, int portid,
+	  uint32_t nb_mbuf)
 {
 	char s[64];
 	int32_t ms;
 
-	snprintf(s, sizeof(s), "mbuf_pool_%d", socket_id);
-	ctx->mbuf_pool = rte_pktmbuf_pool_create(s, nb_mbuf,
-			MEMPOOL_CACHE_SIZE, ipsec_metadata_size(),
-			frame_buf_size, socket_id);
+
+	/* mbuf_pool is initialised by the pool_init() function*/
+	if (socket_ctx[socket_id].mbuf_pool[portid])
+		return;
+
+	snprintf(s, sizeof(s), "mbuf_pool_%d_%d", socket_id, portid);
+	ctx->mbuf_pool[portid] = rte_pktmbuf_pool_create(s, nb_mbuf,
+							 MEMPOOL_CACHE_SIZE,
+							 ipsec_metadata_size(),
+							 frame_buf_size,
+							 socket_id);
 
 	/*
 	 * if multi-segment support is enabled, then create a pool
-	 * for indirect mbufs.
+	 * for indirect mbufs. This is not per-port but global.
 	 */
 	ms = multi_seg_required();
-	if (ms != 0) {
+	if (ms != 0 && !ctx->mbuf_pool_indir) {
		snprintf(s, sizeof(s), "mbuf_pool_indir_%d", socket_id);
 		ctx->mbuf_pool_indir = rte_pktmbuf_pool_create(s, nb_mbuf,
 			MEMPOOL_CACHE_SIZE, 0, 0, socket_id);
 	}
 
-	if (ctx->mbuf_pool == NULL || (ms != 0 && ctx->mbuf_pool_indir == NULL))
+	if (ctx->mbuf_pool[portid] == NULL ||
+	    (ms != 0 && ctx->mbuf_pool_indir == NULL))
 		rte_exit(EXIT_FAILURE, "Cannot init mbuf pool on socket %d\n",
 				socket_id);
 	else
@@ -3338,11 +3369,22 @@ main(int32_t argc, char **argv)
 		else
 			socket_id = 0;
 
-		/* mbuf_pool is initialised by the pool_init() function*/
-		if (socket_ctx[socket_id].mbuf_pool)
+		if (per_port_pool) {
+			RTE_ETH_FOREACH_DEV(portid) {
+				if ((enabled_port_mask & (1 << portid)) == 0)
+					continue;
+
+				pool_init(&socket_ctx[socket_id], socket_id,
+					  portid, nb_bufs_in_pool);
+			}
+		} else {
+			pool_init(&socket_ctx[socket_id], socket_id, 0,
+				  nb_bufs_in_pool);
+		}
+
+		if (socket_ctx[socket_id].session_pool)
 			continue;
 
-		pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool);
 		session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz);
 		session_priv_pool_init(&socket_ctx[socket_id], socket_id,
 			sess_sz);
@@ -3415,7 +3457,7 @@ main(int32_t argc, char **argv)
 	/* Replicate each context per socket */
 	for (i = 0; i < NB_SOCKETS && i < rte_socket_count(); i++) {
 		socket_id = rte_socket_id_by_idx(i);
-		if ((socket_ctx[socket_id].mbuf_pool != NULL) &&
+		if ((socket_ctx[socket_id].session_pool != NULL) &&
 			(socket_ctx[socket_id].sa_in == NULL) &&
 			(socket_ctx[socket_id].sa_out == NULL)) {
 			sa_init(&socket_ctx[socket_id], socket_id);
diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h
index ac4fa5e..24f11ad 100644
--- a/examples/ipsec-secgw/ipsec-secgw.h
+++ b/examples/ipsec-secgw/ipsec-secgw.h
@@ -134,6 +134,8 @@ extern volatile bool force_quit;
 
 extern uint32_t nb_bufs_in_pool;
 
+extern bool per_port_pool;
+
 static inline uint8_t
 is_unprotected_port(uint16_t port_id)
 {
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index bc87b1a..ccfde8e 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -248,7 +248,7 @@ struct socket_ctx {
 	struct sp_ctx *sp_ip6_out;
 	struct rt_ctx *rt_ip4;
 	struct rt_ctx *rt_ip6;
-	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *mbuf_pool[RTE_MAX_ETHPORTS];
 	struct rte_mempool *mbuf_pool_indir;
 	struct rte_mempool *session_pool;
 	struct rte_mempool *session_priv_pool;
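[Not part of the patch above: a rough sketch of the vector-pool sizing logic
the eh_rx_adapter_configure() hunk introduces. The create_vector_pool()
helper, its parameters, and the pool name are hypothetical; the example keeps
this logic inline and uses em_conf/nb_bufs_in_pool directly.]

#include <stdbool.h>
#include <rte_eventdev.h>
#include <rte_mempool.h>

static struct rte_mempool *
create_vector_pool(uint32_t user_pool_sz, uint32_t nb_bufs_in_pool,
		   uint16_t vector_size, uint16_t nb_ports,
		   bool use_per_port_pool, int socket_id)
{
	uint32_t nb_elem;

	if (user_pool_sz != 0) {
		/* Explicit --vector-pool-sz value wins. */
		nb_elem = user_pool_sz;
	} else {
		/* Default: enough vectors to hold every mbuf in the pool. */
		nb_elem = nb_bufs_in_pool / vector_size + 1;
		/* With per-port mbuf pools, scale by the enabled ports. */
		if (use_per_port_pool)
			nb_elem *= nb_ports;
	}

	/*
	 * Each pool element holds a vector header plus room for
	 * vector_size object pointers; 0 means no per-lcore cache.
	 */
	return rte_event_vector_pool_create("vector_pool_sketch", nb_elem, 0,
					    vector_size, socket_id);
}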