From patchwork Wed Apr 27 11:37:13 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 110347
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh
To: , Ray Kinsella
CC: , Pavan Nikhilesh
Subject: [PATCH 1/3 v2] eventdev: add function to quiesce an event port
Date: Wed, 27 Apr 2022 17:07:13 +0530
Message-ID: <20220427113715.15509-1-pbhagavatula@marvell.com>
In-Reply-To: <20220427113223.13948-1-pbhagavatula@marvell.com>
References: <20220427113223.13948-1-pbhagavatula@marvell.com>

Add function to quiesce any core specific resources consumed by the
event port.
When the application decides to migrate the event port to another lcore,
or to tear down the current lcore, it may call `rte_event_port_quiesce()`
to make sure that all the data associated with the event port is released
from the lcore; this might also include any prefetched events.
While releasing the event port from the lcore, this function calls the
user-provided flush callback once per event.

Signed-off-by: Pavan Nikhilesh
---
 v2 Changes:
 - Remove internal Change-Id tag from commit messages.

 lib/eventdev/eventdev_pmd.h | 19 +++++++++++++++++++
 lib/eventdev/rte_eventdev.c | 19 +++++++++++++++++++
 lib/eventdev/rte_eventdev.h | 33 +++++++++++++++++++++++++++++++++
 lib/eventdev/version.map    |  3 +++
 4 files changed, 74 insertions(+)

--
2.35.1

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index ce469d47a6..cf9f2146a1 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -381,6 +381,23 @@ typedef int (*eventdev_port_setup_t)(struct rte_eventdev *dev,
  */
 typedef void (*eventdev_port_release_t)(void *port);
 
+/**
+ * Quiesce any core specific resources consumed by the event port.
+ *
+ * @param dev
+ *   Event device pointer.
+ * @param port
+ *   Event port pointer.
+ * @param flush_cb
+ *   User-provided event flush function.
+ * @param args
+ *   Arguments to be passed to the user-provided event flush function.
+ */
+typedef void (*eventdev_port_quiesce_t)(struct rte_eventdev *dev, void *port,
+					eventdev_port_flush_t flush_cb,
+					void *args);
+
 /**
  * Link multiple source event queues to destination event port.
  *
@@ -1218,6 +1235,8 @@ struct eventdev_ops {
 	/**< Set up an event port. */
 	eventdev_port_release_t port_release;
 	/**< Release an event port. */
+	eventdev_port_quiesce_t port_quiesce;
+	/**< Quiesce an event port. */
 	eventdev_port_link_t port_link;
 	/**< Link event queues to an event port. */
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 532a253553..541fa5dc61 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -730,6 +730,25 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
 	return 0;
 }
 
+void
+rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
+		       eventdev_port_flush_t release_cb, void *args)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id);
+	dev = &rte_eventdevs[dev_id];
+
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return;
+	}
+
+	if (dev->dev_ops->port_quiesce)
+		(*dev->dev_ops->port_quiesce)(dev, dev->data->ports[port_id],
+					      release_cb, args);
+}
+
 int
 rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
 		       uint32_t *attr_value)
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 42a5660169..c86d8a5576 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -830,6 +830,39 @@ int
 rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
 		     const struct rte_event_port_conf *port_conf);
 
+typedef void (*eventdev_port_flush_t)(uint8_t dev_id, struct rte_event event,
+				      void *arg);
+/**< Callback function prototype that can be passed during
+ * rte_event_port_release(), invoked once per released event.
+ */
+
+/**
+ * Quiesce any core specific resources consumed by the event port.
+ *
+ * Event ports are generally coupled with lcores, and a given hardware
+ * implementation might require the PMD to store port specific data in the
+ * lcore.
+ * When the application decides to migrate the event port to another lcore
+ * or to tear down the current lcore, it may call `rte_event_port_quiesce`
+ * to make sure that all the data associated with the event port is released
+ * from the lcore; this might also include any prefetched events.
+ * While releasing the event port from the lcore, this function calls the
+ * user-provided flush callback once per event.
+ *
+ * The event port specific config is not reset.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The index of the event port to quiesce. The value must be in the range
+ *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
+ * @param release_cb
+ *   Callback function invoked once per flushed event.
+ * @param args
+ *   Argument supplied to the callback function.
+ */
+__rte_experimental
+void rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
+			    eventdev_port_flush_t release_cb, void *args);
+
 /**
  * The queue depth of the port on the enqueue side
  */
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index cd5dada07f..1907093539 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -108,6 +108,9 @@ EXPERIMENTAL {
 
 	# added in 22.03
 	rte_event_eth_rx_adapter_event_port_get;
+
+	# added in 22.07
+	rte_event_port_quiesce;
 };
 
 INTERNAL {
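As a usage illustration only (not part of this series; `app_flush_cb`,
`app_worker_teardown` and the mempool argument are assumed names), a worker
lcore might drain its port on teardown like this:

#include <rte_eventdev.h>
#include <rte_mempool.h>

/* Assumed flush callback: return each event still held by the port,
 * including prefetched ones, to the mempool passed through args.
 */
static void
app_flush_cb(uint8_t dev_id __rte_unused, struct rte_event ev, void *args)
{
	rte_mempool_put(args, ev.event_ptr);
}

/* Called on the worker lcore before it exits, or before its port is
 * migrated to another lcore; the port configuration itself is kept.
 */
static void
app_worker_teardown(uint8_t dev_id, uint8_t port_id, struct rte_mempool *mp)
{
	rte_event_port_quiesce(dev_id, port_id, app_flush_cb, mp);
}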
From patchwork Wed Apr 27 11:37:14 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 110346
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh
To: , Harry van Haaren , "Radu Nicolau" , Akhil Goyal ,
 "Sunil Kumar Kori" , Pavan Nikhilesh
CC:
Subject: [PATCH 2/3 v2] eventdev: update examples to use port quiesce
Date: Wed, 27 Apr 2022 17:07:14 +0530
Message-ID: <20220427113715.15509-2-pbhagavatula@marvell.com>
In-Reply-To: <20220427113715.15509-1-pbhagavatula@marvell.com>
References: <20220427113223.13948-1-pbhagavatula@marvell.com>
 <20220427113715.15509-1-pbhagavatula@marvell.com>

Quiesce the event ports used by the worker cores on exit to free up any
outstanding resources.

Signed-off-by: Pavan Nikhilesh
---
 Depends-on: Series-22677

 app/test-eventdev/test_perf_common.c         |  8 ++++++++
 app/test-eventdev/test_pipeline_common.c     | 12 ++++++++++++
 examples/eventdev_pipeline/pipeline_common.h |  9 +++++++++
 examples/ipsec-secgw/ipsec_worker.c          | 13 +++++++++++++
 examples/l2fwd-event/l2fwd_common.c          | 13 +++++++++++++
 examples/l3fwd/l3fwd_event.c                 | 13 +++++++++++++
 6 files changed, 68 insertions(+)

--
2.35.1

diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index f673a9fddd..2016583979 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -985,6 +985,13 @@ perf_opt_dump(struct evt_options *opt, uint8_t nb_queues)
 	evt_dump("prod_enq_burst_sz", "%d", opt->prod_enq_burst_sz);
 }
 
+static void
+perf_event_port_flush(uint8_t dev_id __rte_unused, struct rte_event ev,
+		      void *args)
+{
+	rte_mempool_put(args, ev.event_ptr);
+}
+
 void
 perf_worker_cleanup(struct rte_mempool *const pool, uint8_t dev_id,
 		    uint8_t port_id, struct rte_event events[], uint16_t nb_enq,
@@ -1000,6 +1007,7 @@ perf_worker_cleanup(struct rte_mempool *const pool, uint8_t dev_id,
 		events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(dev_id, port_id, events, nb_deq);
 	}
+	rte_event_port_quiesce(dev_id, port_id, perf_event_port_flush, pool);
 }
 
 void
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index a8dd070000..82e5745071 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -518,6 +518,16 @@ pipeline_vector_array_free(struct rte_event events[], uint16_t num)
 	}
 }
 
+static void
+pipeline_event_port_flush(uint8_t dev_id __rte_unused, struct rte_event ev,
+			  void *args __rte_unused)
+{
+	if (ev.event_type & RTE_EVENT_TYPE_VECTOR)
+		pipeline_vector_array_free(&ev, 1);
+	else
+		rte_pktmbuf_free(ev.mbuf);
+}
+
 void
 pipeline_worker_cleanup(uint8_t dev, uint8_t port, struct rte_event ev[],
 			uint16_t enq, uint16_t deq)
@@ -542,6 +552,8 @@ pipeline_worker_cleanup(uint8_t dev, uint8_t port, struct rte_event ev[],
 		rte_event_enqueue_burst(dev, port, ev, deq);
 	}
+
+	rte_event_port_quiesce(dev, port, pipeline_event_port_flush, NULL);
 }
 
 void
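The vector-aware flush used by `pipeline_event_port_flush` above recurs in
the l2fwd-event and l3fwd examples below. As a standalone sketch (the name
`vector_aware_flush` is assumed; the body relies only on the public
`rte_event_vector` layout), the pattern is:

#include <rte_eventdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static void
vector_aware_flush(uint8_t dev_id __rte_unused, struct rte_event ev,
		   void *args __rte_unused)
{
	uint16_t i;

	if (ev.event_type & RTE_EVENT_TYPE_VECTOR) {
		/* A vector event carries an array of mbufs: free each one,
		 * then return the vector object to the pool it came from.
		 */
		for (i = 0; i < ev.vec->nb_elem; i++)
			rte_pktmbuf_free(ev.vec->mbufs[i]);
		rte_mempool_put(rte_mempool_from_obj(ev.vec), ev.vec);
	} else {
		rte_pktmbuf_free(ev.mbuf);
	}
}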
diff --git a/examples/eventdev_pipeline/pipeline_common.h b/examples/eventdev_pipeline/pipeline_common.h
index 9899b257b0..28b6ab85ff 100644
--- a/examples/eventdev_pipeline/pipeline_common.h
+++ b/examples/eventdev_pipeline/pipeline_common.h
@@ -140,6 +140,13 @@ schedule_devices(unsigned int lcore_id)
 	}
 }
 
+static void
+event_port_flush(uint8_t dev_id __rte_unused, struct rte_event ev,
+		 void *args)
+{
+	rte_mempool_put(args, ev.event_ptr);
+}
+
 static inline void
 worker_cleanup(uint8_t dev_id, uint8_t port_id, struct rte_event events[],
 	       uint16_t nb_enq, uint16_t nb_deq)
@@ -160,6 +167,8 @@ worker_cleanup(uint8_t dev_id, uint8_t port_id, struct rte_event events[],
 		events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(dev_id, port_id, events, nb_deq);
 	}
+
+	rte_event_port_quiesce(dev_id, port_id, event_port_flush, NULL);
 }
 
 void set_worker_generic_setup_data(struct setup_data *caps, bool burst);
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 3df5acf384..7f259e4cf3 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -737,6 +737,13 @@ ipsec_ev_vector_drv_mode_process(struct eh_event_link_info *links,
  * selected.
  */
 
+static void
+ipsec_event_port_flush(uint8_t eventdev_id __rte_unused, struct rte_event ev,
+		       void *args __rte_unused)
+{
+	rte_pktmbuf_free(ev.mbuf);
+}
+
 /* Workers registered */
 #define IPSEC_EVENTMODE_WORKERS		2
 
@@ -861,6 +868,9 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links,
 		rte_event_enqueue_burst(links[0].eventdev_id,
 					links[0].event_port_id, &ev, 1);
 	}
+
+	rte_event_port_quiesce(links[0].eventdev_id, links[0].event_port_id,
+			       ipsec_event_port_flush, NULL);
 }
 
 /*
@@ -974,6 +984,9 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 		rte_event_enqueue_burst(links[0].eventdev_id,
 					links[0].event_port_id, &ev, 1);
 	}
+
+	rte_event_port_quiesce(links[0].eventdev_id, links[0].event_port_id,
+			       ipsec_event_port_flush, NULL);
 }
 
 static uint8_t
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 15bfe790a0..41a0d3f22f 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -128,6 +128,16 @@ l2fwd_event_vector_array_free(struct rte_event events[], uint16_t num)
 	}
 }
 
+static void
+l2fwd_event_port_flush(uint8_t event_d_id __rte_unused, struct rte_event ev,
+		       void *args __rte_unused)
+{
+	if (ev.event_type & RTE_EVENT_TYPE_VECTOR)
+		l2fwd_event_vector_array_free(&ev, 1);
+	else
+		rte_pktmbuf_free(ev.mbuf);
+}
+
 void
 l2fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t port_id,
 			   struct rte_event events[], uint16_t nb_enq,
@@ -147,4 +157,7 @@ l2fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t port_id,
 		events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(event_d_id, port_id, events, nb_deq);
 	}
+
+	rte_event_port_quiesce(event_d_id, port_id, l2fwd_event_port_flush,
+			       NULL);
 }
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index a14a21b414..0b58475c85 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -301,6 +301,16 @@ l3fwd_event_vector_array_free(struct rte_event events[], uint16_t num)
 	}
 }
 
+static void
+l3fwd_event_port_flush(uint8_t event_d_id __rte_unused, struct rte_event ev,
+		       void *args __rte_unused)
+{
+	if (ev.event_type & RTE_EVENT_TYPE_VECTOR)
+		l3fwd_event_vector_array_free(&ev, 1);
+	else
+		rte_pktmbuf_free(ev.mbuf);
+}
+
 void
 l3fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t event_p_id,
 			   struct rte_event events[], uint16_t nb_enq,
@@ -320,4 +330,7 @@ l3fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t event_p_id,
 		events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(event_d_id, event_p_id, events, nb_deq);
 	}
+
+	rte_event_port_quiesce(event_d_id, event_p_id, l3fwd_event_port_flush,
+			       NULL);
 }
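Patch 3 below implements the `port_quiesce` driver hook for the cnxk SSO
hardware. For a purely software PMD the same contract reduces to draining
whatever the port still buffers and handing each event to the user callback
once; a minimal sketch under that assumption (`struct sw_port` and its `rob`
ring are hypothetical, not taken from any DPDK driver):

#include <rte_eventdev.h>
#include <rte_ring.h>
#include <eventdev_pmd.h>

/* Hypothetical software port: events dequeued or prefetched but not yet
 * released are parked in a ring.
 */
struct sw_port {
	struct rte_ring *rob;
};

static void
sw_port_quiesce(struct rte_eventdev *event_dev, void *port,
		eventdev_port_flush_t flush_cb, void *args)
{
	struct sw_port *p = port;
	struct rte_event *ev;

	/* Hand every buffered event to the user's callback, once each. */
	while (rte_ring_dequeue(p->rob, (void **)&ev) == 0) {
		if (flush_cb)
			flush_cb(event_dev->data->dev_id, *ev, args);
	}
}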
From patchwork Wed Apr 27 11:37:15 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 110348
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh
To: , Pavan Nikhilesh , "Shijith Thotton"
CC:
Subject: [PATCH 3/3 v2] event/cnxk: implement event port quiesce function
Date: Wed, 27 Apr 2022 17:07:15 +0530
Message-ID: <20220427113715.15509-3-pbhagavatula@marvell.com>
In-Reply-To: <20220427113715.15509-1-pbhagavatula@marvell.com>
References: <20220427113223.13948-1-pbhagavatula@marvell.com>
 <20220427113715.15509-1-pbhagavatula@marvell.com>

Implement the event port quiesce function to clean up any lcore
resources in use.

Signed-off-by: Pavan Nikhilesh
---
 drivers/event/cnxk/cn10k_eventdev.c | 78 ++++++++++++++++++++++++++---
 drivers/event/cnxk/cn9k_eventdev.c  | 60 +++++++++++++++++++++-
 2 files changed, 130 insertions(+), 8 deletions(-)

--
2.35.1

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 94829e789c..d84c5d2d1e 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -167,15 +167,23 @@ cn10k_sso_hws_reset(void *arg, void *hws)
 		uint64_t u64[2];
 	} gw;
 	uint8_t pend_tt;
+	bool is_pend;
 
 	plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
 	/* Wait till getwork/swtp/waitw/desched completes. */
+	is_pend = false;
+	/* Work in WQE0 is always consumed, unless it's a SWTAG. */
+	pend_state = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+	if (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(54)) ||
+	    ws->swtag_req)
+		is_pend = true;
+
 	do {
 		pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
 	} while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
 			       BIT_ULL(56) | BIT_ULL(54)));
 	pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
-	if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+	if (is_pend && pend_tt != SSO_TT_EMPTY) { /* Work was pending */
 		if (pend_tt == SSO_TT_ATOMIC || pend_tt == SSO_TT_ORDERED)
 			cnxk_sso_hws_swtag_untag(base +
 						 SSOW_LF_GWS_OP_SWTAG_UNTAG);
@@ -189,15 +197,10 @@ cn10k_sso_hws_reset(void *arg, void *hws)
 
 	switch (dev->gw_mode) {
 	case CN10K_GW_MODE_PREF:
+	case CN10K_GW_MODE_PREF_WFE:
 		while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) & BIT_ULL(63))
 			;
 		break;
-	case CN10K_GW_MODE_PREF_WFE:
-		while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) &
-		       SSOW_LF_GWS_TAG_PEND_GET_WORK_BIT)
-			continue;
-		plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
-		break;
 	case CN10K_GW_MODE_NONE:
 	default:
 		break;
@@ -533,6 +536,66 @@ cn10k_sso_port_release(void *port)
 	rte_free(gws_cookie);
 }
 
+static void
+cn10k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
+		       eventdev_port_flush_t flush_cb, void *args)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	struct cn10k_sso_hws *ws = port;
+	struct rte_event ev;
+	uint64_t ptag;
+	bool is_pend;
+
+	is_pend = false;
+	/* Work in WQE0 is always consumed, unless it's a SWTAG. */
+	ptag = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+	if (ptag & (BIT_ULL(62) | BIT_ULL(54)) || ws->swtag_req)
+		is_pend = true;
+	do {
+		ptag = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+	} while (ptag &
+		 (BIT_ULL(62) | BIT_ULL(58) | BIT_ULL(56) | BIT_ULL(54)));
+
+	cn10k_sso_hws_get_work_empty(ws, &ev,
+				     (NIX_RX_OFFLOAD_MAX - 1) | NIX_RX_REAS_F |
+					     NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F);
+	if (is_pend && ev.u64) {
+		if (flush_cb)
+			flush_cb(event_dev->data->dev_id, ev, args);
+		cnxk_sso_hws_swtag_flush(ws->base);
+	}
+
+	/* Check if we have work in PRF_WQE0; if so, extract it. */
+	switch (dev->gw_mode) {
+	case CN10K_GW_MODE_PREF:
+	case CN10K_GW_MODE_PREF_WFE:
+		while (plt_read64(ws->base + SSOW_LF_GWS_PRF_WQE0) &
+		       BIT_ULL(63))
+			;
+		break;
+	case CN10K_GW_MODE_NONE:
+	default:
+		break;
+	}
+
+	if (CNXK_TT_FROM_TAG(plt_read64(ws->base + SSOW_LF_GWS_PRF_WQE0)) !=
+	    SSO_TT_EMPTY) {
+		plt_write64(BIT_ULL(16) | 1,
+			    ws->base + SSOW_LF_GWS_OP_GET_WORK0);
+		cn10k_sso_hws_get_work_empty(
+			ws, &ev,
+			(NIX_RX_OFFLOAD_MAX - 1) | NIX_RX_REAS_F |
+				NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F);
+		if (ev.u64) {
+			if (flush_cb)
+				flush_cb(event_dev->data->dev_id, ev, args);
+			cnxk_sso_hws_swtag_flush(ws->base);
+		}
+	}
+	ws->swtag_req = 0;
+	plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
+}
+
 static int
 cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
 		    const uint8_t queues[], const uint8_t priorities[],
@@ -852,6 +915,7 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn10k_sso_port_setup,
 	.port_release = cn10k_sso_port_release,
+	.port_quiesce = cn10k_sso_port_quiesce,
 	.port_link = cn10k_sso_port_link,
 	.port_unlink = cn10k_sso_port_unlink,
 	.timeout_ticks = cnxk_sso_timeout_ticks,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 987888d3db..46885c5f92 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -186,6 +186,7 @@ cn9k_sso_hws_reset(void *arg, void *hws)
 	uint64_t pend_state;
 	uint8_t pend_tt;
 	uintptr_t base;
+	bool is_pend;
 	uint64_t tag;
 	uint8_t i;
 
@@ -193,6 +194,13 @@ cn9k_sso_hws_reset(void *arg, void *hws)
 	ws = hws;
 	for (i = 0; i < (dev->dual_ws ? CN9K_DUAL_WS_NB_WS : 1); i++) {
 		base = dev->dual_ws ? dws->base[i] : ws->base;
+		is_pend = false;
+		/* Work in WQE0 is always consumed, unless it's a SWTAG. */
+		pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+		if (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(54)) ||
+		    (dev->dual_ws ? (dws->swtag_req && i == !dws->vws) :
+				    ws->swtag_req))
+			is_pend = true;
 		/* Wait till getwork/swtp/waitw/desched completes. */
 		do {
 			pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
@@ -201,7 +209,7 @@ cn9k_sso_hws_reset(void *arg, void *hws)
 
 		tag = plt_read64(base + SSOW_LF_GWS_TAG);
 		pend_tt = (tag >> 32) & 0x3;
-		if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+		if (is_pend && pend_tt != SSO_TT_EMPTY) { /* Work was pending */
 			if (pend_tt == SSO_TT_ATOMIC || pend_tt == SSO_TT_ORDERED)
 				cnxk_sso_hws_swtag_untag(
@@ -213,7 +221,14 @@
 		do {
 			pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
 		} while (pend_state & BIT_ULL(58));
+
+		plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
 	}
+
+	if (dev->dual_ws)
+		dws->swtag_req = 0;
+	else
+		ws->swtag_req = 0;
 }
 
 void
@@ -789,6 +804,48 @@ cn9k_sso_port_release(void *port)
 	rte_free(gws_cookie);
 }
 
+static void
+cn9k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
+		      eventdev_port_flush_t flush_cb, void *args)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	struct cn9k_sso_hws_dual *dws;
+	struct cn9k_sso_hws *ws;
+	struct rte_event ev;
+	uintptr_t base;
+	uint64_t ptag;
+	bool is_pend;
+	uint8_t i;
+
+	dws = port;
+	ws = port;
+	for (i = 0; i < (dev->dual_ws ? CN9K_DUAL_WS_NB_WS : 1); i++) {
+		base = dev->dual_ws ? dws->base[i] : ws->base;
+		is_pend = false;
+		/* Work in WQE0 is always consumed, unless it's a SWTAG. */
+		ptag = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+		if (ptag & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(54)) ||
+		    (dev->dual_ws ? (dws->swtag_req && i == !dws->vws) :
+				    ws->swtag_req))
+			is_pend = true;
+		/* Wait till getwork/swtp/waitw/desched completes. */
+		do {
+			ptag = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+		} while (ptag & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
+				 BIT_ULL(56)));
+
+		cn9k_sso_hws_get_work_empty(
+			base, &ev, dev->rx_offloads,
+			dev->dual_ws ? dws->lookup_mem : ws->lookup_mem,
+			dev->dual_ws ? dws->tstamp : ws->tstamp);
+		if (is_pend && ev.u64) {
+			if (flush_cb)
+				flush_cb(event_dev->data->dev_id, ev, args);
+			cnxk_sso_hws_swtag_flush(base);
+		}
+	}
+}
+
 static int
 cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
 		   const uint8_t queues[], const uint8_t priorities[],
@@ -1090,6 +1147,7 @@ static struct eventdev_ops cn9k_sso_dev_ops = {
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn9k_sso_port_setup,
 	.port_release = cn9k_sso_port_release,
+	.port_quiesce = cn9k_sso_port_quiesce,
 	.port_link = cn9k_sso_port_link,
 	.port_unlink = cn9k_sso_port_unlink,
 	.timeout_ticks = cnxk_sso_timeout_ticks,