From patchwork Fri May 13 17:58:40 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 111136
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh
To: Harry van Haaren, Radu Nicolau, Akhil Goyal, Sunil Kumar Kori,
 Pavan Nikhilesh
Subject: [PATCH v3 2/3] eventdev: update examples to use port quiesce
Date: Fri, 13 May 2022 23:28:40 +0530
Message-ID: <20220513175841.11853-2-pbhagavatula@marvell.com>
In-Reply-To: <20220513175841.11853-1-pbhagavatula@marvell.com>
References: <20220427113715.15509-1-pbhagavatula@marvell.com>
 <20220513175841.11853-1-pbhagavatula@marvell.com>
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Quiesce event ports used by the worker cores on exit to free up any
outstanding resources.
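For illustration, the cleanup path that these examples now share looks
roughly like the sketch below; port_flush_cb, dev_id and port_id are
placeholder names used only for this description and are not part of
the patch itself:

    /* Flush callback invoked by rte_event_port_quiesce() for every
     * event still held inside the port; free whatever backs it.
     */
    static void
    port_flush_cb(uint8_t dev_id __rte_unused, struct rte_event ev,
                  void *args __rte_unused)
    {
            rte_pktmbuf_free(ev.mbuf);
    }

    /* After the worker loop exits and any already-dequeued events have
     * been released, quiesce the port so the driver can drain it
     * through the callback above.
     */
    rte_event_port_quiesce(dev_id, port_id, port_flush_cb, NULL);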
Signed-off-by: Pavan Nikhilesh
---
 app/test-eventdev/test_perf_common.c         |  8 ++++++++
 app/test-eventdev/test_pipeline_common.c     | 12 ++++++++++++
 examples/eventdev_pipeline/pipeline_common.h |  9 +++++++++
 examples/ipsec-secgw/ipsec_worker.c          | 13 +++++++++++++
 examples/l2fwd-event/l2fwd_common.c          | 13 +++++++++++++
 examples/l3fwd/l3fwd_event.c                 | 13 +++++++++++++
 6 files changed, 68 insertions(+)

diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index b51a100425..8e3836280d 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -985,6 +985,13 @@ perf_opt_dump(struct evt_options *opt, uint8_t nb_queues)
 	evt_dump("prod_enq_burst_sz", "%d", opt->prod_enq_burst_sz);
 }
 
+static void
+perf_event_port_flush(uint8_t dev_id __rte_unused, struct rte_event ev,
+		      void *args)
+{
+	rte_mempool_put(args, ev.event_ptr);
+}
+
 void
 perf_worker_cleanup(struct rte_mempool *const pool, uint8_t dev_id,
 		    uint8_t port_id, struct rte_event events[], uint16_t nb_enq,
@@ -1000,6 +1007,7 @@ perf_worker_cleanup(struct rte_mempool *const pool, uint8_t dev_id,
 			events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(dev_id, port_id, events, nb_deq);
 	}
+	rte_event_port_quiesce(dev_id, port_id, perf_event_port_flush, pool);
 }
 
 void
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index d8e80903b2..c66656cd39 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -518,6 +518,16 @@ pipeline_vector_array_free(struct rte_event events[], uint16_t num)
 	}
 }
 
+static void
+pipeline_event_port_flush(uint8_t dev_id __rte_unused, struct rte_event ev,
+			  void *args __rte_unused)
+{
+	if (ev.event_type & RTE_EVENT_TYPE_VECTOR)
+		pipeline_vector_array_free(&ev, 1);
+	else
+		rte_pktmbuf_free(ev.mbuf);
+}
+
 void
 pipeline_worker_cleanup(uint8_t dev, uint8_t port, struct rte_event ev[],
 			uint16_t enq, uint16_t deq)
@@ -542,6 +552,8 @@ pipeline_worker_cleanup(uint8_t dev, uint8_t port, struct rte_event ev[],
 
 		rte_event_enqueue_burst(dev, port, ev, deq);
 	}
+
+	rte_event_port_quiesce(dev, port, pipeline_event_port_flush, NULL);
 }
 
 void
diff --git a/examples/eventdev_pipeline/pipeline_common.h b/examples/eventdev_pipeline/pipeline_common.h
index 9899b257b0..28b6ab85ff 100644
--- a/examples/eventdev_pipeline/pipeline_common.h
+++ b/examples/eventdev_pipeline/pipeline_common.h
@@ -140,6 +140,13 @@ schedule_devices(unsigned int lcore_id)
 	}
 }
 
+static void
+event_port_flush(uint8_t dev_id __rte_unused, struct rte_event ev,
+		 void *args __rte_unused)
+{
+	rte_mempool_put(args, ev.event_ptr);
+}
+
 static inline void
 worker_cleanup(uint8_t dev_id, uint8_t port_id, struct rte_event events[],
 	       uint16_t nb_enq, uint16_t nb_deq)
@@ -160,6 +167,8 @@ worker_cleanup(uint8_t dev_id, uint8_t port_id, struct rte_event events[],
 			events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(dev_id, port_id, events, nb_deq);
 	}
+
+	rte_event_port_quiesce(dev_id, port_id, event_port_flush, NULL);
 }
 
 void set_worker_generic_setup_data(struct setup_data *caps, bool burst);
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 3df5acf384..7f259e4cf3 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -737,6 +737,13 @@ ipsec_ev_vector_drv_mode_process(struct eh_event_link_info *links,
  * selected.
  */
+static void
+ipsec_event_port_flush(uint8_t eventdev_id __rte_unused, struct rte_event ev,
+		       void *args __rte_unused)
+{
+	rte_pktmbuf_free(ev.mbuf);
+}
+
 /* Workers registered */
 #define IPSEC_EVENTMODE_WORKERS 2
 
@@ -861,6 +868,9 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links,
 		rte_event_enqueue_burst(links[0].eventdev_id,
 					links[0].event_port_id, &ev, 1);
 	}
+
+	rte_event_port_quiesce(links[0].eventdev_id, links[0].event_port_id,
+			       ipsec_event_port_flush, NULL);
 }
 
 /*
@@ -974,6 +984,9 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 		rte_event_enqueue_burst(links[0].eventdev_id,
 					links[0].event_port_id, &ev, 1);
 	}
+
+	rte_event_port_quiesce(links[0].eventdev_id, links[0].event_port_id,
+			       ipsec_event_port_flush, NULL);
 }
 
 static uint8_t
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 15bfe790a0..41a0d3f22f 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -128,6 +128,16 @@ l2fwd_event_vector_array_free(struct rte_event events[], uint16_t num)
 	}
 }
 
+static void
+l2fwd_event_port_flush(uint8_t event_d_id __rte_unused, struct rte_event ev,
+		       void *args __rte_unused)
+{
+	if (ev.event_type & RTE_EVENT_TYPE_VECTOR)
+		l2fwd_event_vector_array_free(&ev, 1);
+	else
+		rte_pktmbuf_free(ev.mbuf);
+}
+
 void
 l2fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t port_id,
 			   struct rte_event events[], uint16_t nb_enq,
@@ -147,4 +157,7 @@ l2fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t port_id,
 			events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(event_d_id, port_id, events, nb_deq);
 	}
+
+	rte_event_port_quiesce(event_d_id, port_id, l2fwd_event_port_flush,
+			       NULL);
 }
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index a14a21b414..0b58475c85 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -301,6 +301,16 @@ l3fwd_event_vector_array_free(struct rte_event events[], uint16_t num)
 	}
 }
 
+static void
+l3fwd_event_port_flush(uint8_t event_d_id __rte_unused, struct rte_event ev,
+		       void *args __rte_unused)
+{
+	if (ev.event_type & RTE_EVENT_TYPE_VECTOR)
+		l3fwd_event_vector_array_free(&ev, 1);
+	else
+		rte_pktmbuf_free(ev.mbuf);
+}
+
 void
 l3fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t event_p_id,
 			   struct rte_event events[], uint16_t nb_enq,
@@ -320,4 +330,7 @@ l3fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t event_p_id,
 			events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(event_d_id, event_p_id, events, nb_deq);
 	}
+
+	rte_event_port_quiesce(event_d_id, event_p_id, l3fwd_event_port_flush,
+			       NULL);
 }