From patchwork Wed Feb 23 09:53:51 2022
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 108129
X-Patchwork-Delegate: gakhil@marvell.com
From: Nithin Dabilpuram
To: Radu Nicolau, Akhil Goyal
CC: Nithin Dabilpuram
Subject: [PATCH v3 1/3] examples/ipsec-secgw: update error prints to data path log
Date: Wed, 23 Feb 2022 15:23:51 +0530
Message-ID: <20220223095400.6187-1-ndabilpuram@marvell.com>
In-Reply-To: <20220206143022.13098-1-ndabilpuram@marvell.com>
References: <20220206143022.13098-1-ndabilpuram@marvell.com>
List-Id: DPDK patches and discussions

Update error prints in the data path to use RTE_LOG_DP(). Error prints
in the fast path are bad for performance, as they slow down the
application whenever a few bad packets are received.

Signed-off-by: Nithin Dabilpuram
Acked-by: Akhil Goyal
---
v2:
- Fixed issue with warning in patch 4/4 by checking for session pool
  initialization instead of mbuf_pool, as the mbuf pool is now per port.
v3:
- Removed patch 2/4 from this series. Will send it with another series
  that adds a separate worker thread for ipsec-secgw when all SAs are
  from inline protocol.
- Added documentation for patch 4/4.
 examples/ipsec-secgw/ipsec_worker.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 7419e85..e9493c5 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -332,7 +332,8 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt,
 		break;
 	default:
-		RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type);
+		RTE_LOG_DP(DEBUG, IPSEC_ESP, "Unsupported packet type = %d\n",
+			   type);
 		goto drop_pkt_and_exit;
 	}
@@ -570,7 +571,8 @@ classify_pkt(struct rte_mbuf *pkt, struct ipsec_traffic *t)
 		t->ip6.pkts[(t->ip6.num)++] = pkt;
 		break;
 	default:
-		RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type);
+		RTE_LOG_DP(DEBUG, IPSEC_ESP, "Unsupported packet type = %d\n",
+			   type);
 		free_pkts(&pkt, 1);
 		break;
 	}

From patchwork Wed Feb 23 09:53:52 2022
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 108130
X-Patchwork-Delegate: gakhil@marvell.com
From: Nithin Dabilpuram
To: Radu Nicolau, Akhil Goyal
CC: Nithin Dabilpuram
Subject: [PATCH v3 2/3] examples/ipsec-secgw: fix buffer free logic in vector mode
Date: Wed, 23 Feb 2022 15:23:52 +0530
Message-ID: <20220223095400.6187-2-ndabilpuram@marvell.com>
In-Reply-To: <20220223095400.6187-1-ndabilpuram@marvell.com>
References: <20220206143022.13098-1-ndabilpuram@marvell.com> <20220223095400.6187-1-ndabilpuram@marvell.com>
List-Id: DPDK patches and discussions

Fix packet processing to skip the packet once its mbuf has been freed,
instead of touching and Tx'ing it. Also free the vector event buffer in
the event worker when, after processing, there is no packet left to be
enqueued to the Tx adapter.

Fixes: 86738ebe1e3d ("examples/ipsec-secgw: support event vector")
Cc: schalla@marvell.com
Cc: stable@dpdk.org

Signed-off-by: Nithin Dabilpuram
Acked-by: Akhil Goyal
---
 examples/ipsec-secgw/ipsec_worker.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index e9493c5..8639426 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -205,12 +205,16 @@ check_sp_sa_bulk(struct sp_ctx *sp, struct sa_ctx *sa_ctx,
 			ip->pkts[j++] = m;
 		else {
 			sa = *(struct ipsec_sa **)rte_security_dynfield(m);
-			if (sa == NULL)
+			if (sa == NULL) {
 				free_pkts(&m, 1);
+				continue;
+			}

 			/* SPI on the packet should match with the one in SA */
-			if (unlikely(sa->spi != sa_ctx->sa[res - 1].spi))
+			if (unlikely(sa->spi != sa_ctx->sa[res - 1].spi)) {
 				free_pkts(&m, 1);
+				continue;
+			}

 			ip->pkts[j++] = m;
 		}
@@ -536,6 +540,7 @@ ipsec_ev_route_pkts(struct rte_event_vector *vec, struct route_table *rt,
 				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)) {
 			RTE_LOG(ERR, IPSEC, "SA type not supported\n");
 			free_pkts(&pkt, 1);
+			continue;
 		}
 		rte_security_set_pkt_metadata(sess->security.ctx,
 				sess->security.ses, pkt, NULL);
@@ -695,11 +700,13 @@ ipsec_ev_vector_process(struct lcore_conf_ev_tx_int_port_wrkr *lconf,
 		ret = process_ipsec_ev_outbound_vector(&lconf->outbound,
 						       &lconf->rt, vec);

-	if (ret > 0) {
+	if (likely(ret > 0)) {
 		vec->nb_elem = ret;
 		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
 						 links[0].event_port_id,
 						 ev, 1, 0);
+	} else {
+		rte_mempool_put(rte_mempool_from_obj(vec), vec);
 	}
 }
@@ -720,6 +727,8 @@ ipsec_ev_vector_drv_mode_process(struct eh_event_link_info *links,
 		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
 						 links[0].event_port_id,
 						 ev, 1, 0);
+	else
+		rte_mempool_put(rte_mempool_from_obj(vec), vec);
 }

 /*

From patchwork Wed Feb 23 09:53:53 2022
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 108131
X-Patchwork-Delegate: gakhil@marvell.com
From: Nithin Dabilpuram
To: Radu Nicolau, Akhil Goyal
CC: Nithin Dabilpuram
Subject: [PATCH v3 3/3] examples/ipsec-secgw: add per port pool and vector pool size
Date: Wed, 23 Feb 2022 15:23:53 +0530
Message-ID: <20220223095400.6187-3-ndabilpuram@marvell.com>
In-Reply-To: <20220223095400.6187-1-ndabilpuram@marvell.com>
References: <20220206143022.13098-1-ndabilpuram@marvell.com> <20220223095400.6187-1-ndabilpuram@marvell.com>
List-Id: DPDK patches and discussions

Add support to enable a per-port packet pool and to override the vector
pool size from command-line arguments. This is useful on some hardware
to tune performance based on the use case.
Signed-off-by: Nithin Dabilpuram
Acked-by: Akhil Goyal
---
 doc/guides/sample_app_ug/ipsec_secgw.rst |  7 +++
 examples/ipsec-secgw/event_helper.c      | 17 +++++--
 examples/ipsec-secgw/event_helper.h      |  2 +
 examples/ipsec-secgw/ipsec-secgw.c       | 82 ++++++++++++++++++++++++--------
 examples/ipsec-secgw/ipsec-secgw.h       |  2 +
 examples/ipsec-secgw/ipsec.h             |  2 +-
 6 files changed, 88 insertions(+), 24 deletions(-)

diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index c53ee7c..d93acf0 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -249,6 +249,13 @@ Where:
   Should be lower for low number of reassembly buckets.
   Valid values: from 1 ns to 10 s. Default value: 10000000 (10 s).

+* ``--per-port-pool``: Enable per ethdev port pktmbuf pool.
+  By default one packet mbuf pool per socket is created and configured
+  via Rx queue setup.
+
+* ``--vector-pool-sz``: Number of buffers in vector pool.
+  By default, vector pool size depends on packet pool size
+  and size of each vector.

 The mapping of lcores to port/queues is similar to other l3fwd applications.
diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c
index 8947e41..172ab8e 100644
--- a/examples/ipsec-secgw/event_helper.c
+++ b/examples/ipsec-secgw/event_helper.c
@@ -792,8 +792,8 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 	uint32_t service_id, socket_id, nb_elem;
 	struct rte_mempool *vector_pool = NULL;
 	uint32_t lcore_id = rte_lcore_id();
+	int ret, portid, nb_ports = 0;
 	uint8_t eventdev_id;
-	int ret;
 	int j;

 	/* Get event dev ID */
@@ -806,10 +806,21 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 		return ret;
 	}

+	RTE_ETH_FOREACH_DEV(portid)
+		if ((em_conf->eth_portmask & (1 << portid)))
+			nb_ports++;
+
 	if (em_conf->ext_params.event_vector) {
 		socket_id = rte_lcore_to_socket_id(lcore_id);
-		nb_elem = (nb_bufs_in_pool / em_conf->ext_params.vector_size)
-			  + 1;
+
+		if (em_conf->vector_pool_sz) {
+			nb_elem = em_conf->vector_pool_sz;
+		} else {
+			nb_elem = (nb_bufs_in_pool /
+				   em_conf->ext_params.vector_size) + 1;
+			if (per_port_pool)
+				nb_elem = nb_ports * nb_elem;
+		}

 		vector_pool = rte_event_vector_pool_create(
 				"vector_pool", nb_elem, 0,
diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h
index 5be6c62..f3cbe57 100644
--- a/examples/ipsec-secgw/event_helper.h
+++ b/examples/ipsec-secgw/event_helper.h
@@ -183,6 +183,8 @@ struct eventmode_conf {
 	/**< 64 bit field to specify extended params */
 	uint64_t vector_tmo_ns;
 	/**< Max vector timeout in nanoseconds */
+	uint64_t vector_pool_sz;
+	/**< Vector pool size */
 };

 /**
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 9de1c6d..42b5081 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -118,6 +118,8 @@ struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS];
 #define CMD_LINE_OPT_EVENT_VECTOR	"event-vector"
 #define CMD_LINE_OPT_VECTOR_SIZE	"vector-size"
 #define CMD_LINE_OPT_VECTOR_TIMEOUT	"vector-tmo"
+#define CMD_LINE_OPT_VECTOR_POOL_SZ	"vector-pool-sz"
+#define CMD_LINE_OPT_PER_PORT_POOL	"per-port-pool"

 #define CMD_LINE_ARG_EVENT	"event"
 #define CMD_LINE_ARG_POLL	"poll"
@@ -145,6 +147,8 @@ enum {
 	CMD_LINE_OPT_EVENT_VECTOR_NUM,
 	CMD_LINE_OPT_VECTOR_SIZE_NUM,
 	CMD_LINE_OPT_VECTOR_TIMEOUT_NUM,
+	CMD_LINE_OPT_VECTOR_POOL_SZ_NUM,
+	CMD_LINE_OPT_PER_PORT_POOL_NUM,
 };

 static const struct option lgopts[] = {
@@ -161,6 +165,8 @@ static const struct option lgopts[] = {
 	{CMD_LINE_OPT_EVENT_VECTOR, 0, 0, CMD_LINE_OPT_EVENT_VECTOR_NUM},
 	{CMD_LINE_OPT_VECTOR_SIZE, 1, 0, CMD_LINE_OPT_VECTOR_SIZE_NUM},
 	{CMD_LINE_OPT_VECTOR_TIMEOUT, 1, 0, CMD_LINE_OPT_VECTOR_TIMEOUT_NUM},
+	{CMD_LINE_OPT_VECTOR_POOL_SZ, 1, 0, CMD_LINE_OPT_VECTOR_POOL_SZ_NUM},
+	{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PER_PORT_POOL_NUM},
 	{NULL, 0, 0, 0}
 };

@@ -234,7 +240,6 @@ struct lcore_conf {
 	struct rt_ctx *rt6_ctx;
 	struct {
 		struct rte_ip_frag_tbl *tbl;
-		struct rte_mempool *pool_dir;
 		struct rte_mempool *pool_indir;
 		struct rte_ip_frag_death_row dr;
 	} frag;
@@ -262,6 +267,8 @@ static struct rte_eth_conf port_conf = {

 struct socket_ctx socket_ctx[NB_SOCKETS];

+bool per_port_pool;
+
 /*
  * Determine is multi-segment support required:
  * - either frame buffer size is smaller then mtu
@@ -630,12 +637,10 @@ send_fragment_packet(struct lcore_conf *qconf, struct rte_mbuf *m,

 	if (proto == IPPROTO_IP)
 		rc = rte_ipv4_fragment_packet(m, tbl->m_table + len,
-			n, mtu_size, qconf->frag.pool_dir,
-			qconf->frag.pool_indir);
+			n, mtu_size, m->pool, qconf->frag.pool_indir);
 	else
 		rc = rte_ipv6_fragment_packet(m, tbl->m_table + len,
-			n, mtu_size, qconf->frag.pool_dir,
-			qconf->frag.pool_indir);
+			n, mtu_size, m->pool, qconf->frag.pool_indir);

 	if (rc >= 0)
 		len += rc;
@@ -1256,7 +1261,6 @@ ipsec_poll_mode_worker(void)
 	qconf->outbound.session_pool = socket_ctx[socket_id].session_pool;
 	qconf->outbound.session_priv_pool =
 			socket_ctx[socket_id].session_priv_pool;
-	qconf->frag.pool_dir = socket_ctx[socket_id].mbuf_pool;
 	qconf->frag.pool_indir = socket_ctx[socket_id].mbuf_pool_indir;

 	rc = ipsec_sad_lcore_cache_init(app_sa_prm.cache_sz);
@@ -1511,6 +1515,9 @@ print_usage(const char *prgname)
 		" --vector-size Max vector size (default value: 16)\n"
 		" --vector-tmo Max vector timeout in nanoseconds"
 		" (default value: 102400)\n"
+		" --" CMD_LINE_OPT_PER_PORT_POOL " Enable per port mbuf pool\n"
+		" --" CMD_LINE_OPT_VECTOR_POOL_SZ " Vector pool size\n"
+		"     (default value is based on mbuf count)\n"
 		"\n",
 		prgname);
 }
@@ -1894,6 +1901,15 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf)
 			em_conf = eh_conf->mode_params;
 			em_conf->vector_tmo_ns = ret;
 			break;
+		case CMD_LINE_OPT_VECTOR_POOL_SZ_NUM:
+			ret = parse_decimal(optarg);
+
+			em_conf = eh_conf->mode_params;
+			em_conf->vector_pool_sz = ret;
+			break;
+		case CMD_LINE_OPT_PER_PORT_POOL_NUM:
+			per_port_pool = 1;
+			break;
 		default:
 			print_usage(prgname);
 			return -1;
@@ -2381,6 +2397,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 	/* init RX queues */
 	for (queue = 0; queue < qconf->nb_rx_queue; ++queue) {
 		struct rte_eth_rxconf rxq_conf;
+		struct rte_mempool *pool;

 		if (portid != qconf->rx_queue_list[queue].port_id)
 			continue;
@@ -2392,9 +2409,14 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)

 		rxq_conf = dev_info.default_rxconf;
 		rxq_conf.offloads = local_port_conf.rxmode.offloads;
+
+		if (per_port_pool)
+			pool = socket_ctx[socket_id].mbuf_pool[portid];
+		else
+			pool = socket_ctx[socket_id].mbuf_pool[0];
+
 		ret = rte_eth_rx_queue_setup(portid, rx_queueid,
-				nb_rxd, socket_id, &rxq_conf,
-				socket_ctx[socket_id].mbuf_pool);
+				nb_rxd, socket_id, &rxq_conf, pool);
 		if (ret < 0)
 			rte_exit(EXIT_FAILURE,
 				"rte_eth_rx_queue_setup: err=%d, "
@@ -2507,28 +2529,37 @@ session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
 }

 static void
-pool_init(struct socket_ctx *ctx, int32_t socket_id, uint32_t nb_mbuf)
+pool_init(struct socket_ctx *ctx, int32_t socket_id, int portid,
+	  uint32_t nb_mbuf)
 {
 	char s[64];
 	int32_t ms;

-	snprintf(s, sizeof(s), "mbuf_pool_%d", socket_id);
-	ctx->mbuf_pool = rte_pktmbuf_pool_create(s, nb_mbuf,
-			MEMPOOL_CACHE_SIZE, ipsec_metadata_size(),
-			frame_buf_size, socket_id);
+
+	/* mbuf_pool is initialised by the pool_init() function */
+	if (socket_ctx[socket_id].mbuf_pool[portid])
+		return;
+
+	snprintf(s, sizeof(s), "mbuf_pool_%d_%d", socket_id, portid);
+	ctx->mbuf_pool[portid] = rte_pktmbuf_pool_create(s, nb_mbuf,
+							 MEMPOOL_CACHE_SIZE,
+							 ipsec_metadata_size(),
+							 frame_buf_size,
+							 socket_id);

 	/*
 	 * if multi-segment support is enabled, then create a pool
-	 * for indirect mbufs.
+	 * for indirect mbufs. This is not per-port but global.
 	 */
 	ms = multi_seg_required();
-	if (ms != 0) {
+	if (ms != 0 && !ctx->mbuf_pool_indir) {
 		snprintf(s, sizeof(s), "mbuf_pool_indir_%d", socket_id);
 		ctx->mbuf_pool_indir = rte_pktmbuf_pool_create(s, nb_mbuf,
 			MEMPOOL_CACHE_SIZE, 0, 0, socket_id);
 	}

-	if (ctx->mbuf_pool == NULL || (ms != 0 && ctx->mbuf_pool_indir == NULL))
+	if (ctx->mbuf_pool[portid] == NULL ||
+	    (ms != 0 && ctx->mbuf_pool_indir == NULL))
 		rte_exit(EXIT_FAILURE, "Cannot init mbuf pool on socket %d\n",
 				socket_id);
 	else
@@ -3344,11 +3375,22 @@ main(int32_t argc, char **argv)
 	else
 		socket_id = 0;

-	/* mbuf_pool is initialised by the pool_init() function*/
-	if (socket_ctx[socket_id].mbuf_pool)
+	if (per_port_pool) {
+		RTE_ETH_FOREACH_DEV(portid) {
+			if ((enabled_port_mask & (1 << portid)) == 0)
+				continue;
+
+			pool_init(&socket_ctx[socket_id], socket_id,
+				  portid, nb_bufs_in_pool);
+		}
+	} else {
+		pool_init(&socket_ctx[socket_id], socket_id, 0,
+			  nb_bufs_in_pool);
+	}
+
+	if (socket_ctx[socket_id].session_pool)
 		continue;

-	pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool);
 	session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz);
 	session_priv_pool_init(&socket_ctx[socket_id], socket_id,
 			       sess_sz);
@@ -3421,7 +3463,7 @@ main(int32_t argc, char **argv)
 	/* Replicate each context per socket */
 	for (i = 0; i < NB_SOCKETS && i < rte_socket_count(); i++) {
 		socket_id = rte_socket_id_by_idx(i);
-		if ((socket_ctx[socket_id].mbuf_pool != NULL) &&
+		if ((socket_ctx[socket_id].session_pool != NULL) &&
 		    (socket_ctx[socket_id].sa_in == NULL) &&
 		    (socket_ctx[socket_id].sa_out == NULL)) {
 			sa_init(&socket_ctx[socket_id], socket_id);
diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h
index ac4fa5e..24f11ad 100644
--- a/examples/ipsec-secgw/ipsec-secgw.h
+++ b/examples/ipsec-secgw/ipsec-secgw.h
@@ -134,6 +134,8 @@ extern volatile bool force_quit;

 extern uint32_t nb_bufs_in_pool;

+extern bool per_port_pool;
+
 static inline uint8_t
 is_unprotected_port(uint16_t port_id)
 {
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index bc87b1a..ccfde8e 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -248,7 +248,7 @@ struct socket_ctx {
 	struct sp_ctx *sp_ip6_out;
 	struct rt_ctx *rt_ip4;
 	struct rt_ctx *rt_ip6;
-	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *mbuf_pool[RTE_MAX_ETHPORTS];
 	struct rte_mempool *mbuf_pool_indir;
 	struct rte_mempool *session_pool;
 	struct rte_mempool *session_priv_pool;