From patchwork Fri May 13 17:58:41 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 111137
X-Patchwork-Delegate: jerinj@marvell.com
To: Pavan Nikhilesh, Shijith Thotton
CC: dev@dpdk.org
Subject: [PATCH v3 3/3] event/cnxk: implement event port quiesce function
Date: Fri, 13 May 2022 23:28:41 +0530
Message-ID: <20220513175841.11853-3-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220513175841.11853-1-pbhagavatula@marvell.com>
References: <20220427113715.15509-1-pbhagavatula@marvell.com>
 <20220513175841.11853-1-pbhagavatula@marvell.com>
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Implement the event port quiesce function to clean up any lcore
resources in use.
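For context, a minimal sketch (not part of this patch) of how an
application might drive this hook through the public
rte_event_port_quiesce() API added earlier in this series; the
worker_flush_cb name and the dev_id/port_id variables are assumptions
for illustration:

	#include <rte_common.h>
	#include <rte_eventdev.h>
	#include <rte_mbuf.h>

	/* Free the mbuf carried by each event the port still holds. */
	static void
	worker_flush_cb(uint8_t dev_id, struct rte_event ev, void *args)
	{
		RTE_SET_USED(dev_id);
		RTE_SET_USED(args);
		if (ev.event_type == RTE_EVENT_TYPE_ETHDEV)
			rte_pktmbuf_free(ev.mbuf);
	}

	/* On the exiting lcore, once it has stopped polling its port: */
	rte_event_port_quiesce(dev_id, port_id, worker_flush_cb, NULL);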
Signed-off-by: Pavan Nikhilesh
---
 drivers/event/cnxk/cn10k_eventdev.c | 78 ++++++++++++++++++++++++++---
 drivers/event/cnxk/cn9k_eventdev.c  | 60 +++++++++++++++++++++-
 2 files changed, 130 insertions(+), 8 deletions(-)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 9b4d2895ec..409eb892a7 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -166,15 +166,23 @@ cn10k_sso_hws_reset(void *arg, void *hws)
 		uint64_t u64[2];
 	} gw;
 	uint8_t pend_tt;
+	bool is_pend;

 	plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
 	/* Wait till getwork/swtp/waitw/desched completes. */
+	is_pend = false;
+	/* Work in WQE0 is always consumed, unless it's a SWTAG. */
+	pend_state = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+	if (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(54)) ||
+	    ws->swtag_req)
+		is_pend = true;
+
 	do {
 		pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
 	} while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
			       BIT_ULL(56) | BIT_ULL(54)));
 	pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
-	if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+	if (is_pend && pend_tt != SSO_TT_EMPTY) { /* Work was pending */
 		if (pend_tt == SSO_TT_ATOMIC || pend_tt == SSO_TT_ORDERED)
 			cnxk_sso_hws_swtag_untag(base +
 						 SSOW_LF_GWS_OP_SWTAG_UNTAG);
@@ -188,15 +196,10 @@ cn10k_sso_hws_reset(void *arg, void *hws)

 	switch (dev->gw_mode) {
 	case CN10K_GW_MODE_PREF:
+	case CN10K_GW_MODE_PREF_WFE:
 		while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) & BIT_ULL(63))
 			;
 		break;
-	case CN10K_GW_MODE_PREF_WFE:
-		while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) &
-		       SSOW_LF_GWS_TAG_PEND_GET_WORK_BIT)
-			continue;
-		plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
-		break;
 	case CN10K_GW_MODE_NONE:
 	default:
 		break;
@@ -532,6 +535,66 @@ cn10k_sso_port_release(void *port)
 	rte_free(gws_cookie);
 }

+static void
+cn10k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
+		       rte_eventdev_port_flush_t flush_cb, void *args)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	struct cn10k_sso_hws *ws = port;
+	struct rte_event ev;
+	uint64_t ptag;
+	bool is_pend;
+
+	is_pend = false;
+	/* Work in WQE0 is always consumed, unless it's a SWTAG. */
+	ptag = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+	if (ptag & (BIT_ULL(62) | BIT_ULL(54)) || ws->swtag_req)
+		is_pend = true;
+	do {
+		ptag = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+	} while (ptag &
+		 (BIT_ULL(62) | BIT_ULL(58) | BIT_ULL(56) | BIT_ULL(54)));
+
+	cn10k_sso_hws_get_work_empty(ws, &ev,
+				     (NIX_RX_OFFLOAD_MAX - 1) | NIX_RX_REAS_F |
+					     NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F);
+	if (is_pend && ev.u64) {
+		if (flush_cb)
+			flush_cb(event_dev->data->dev_id, ev, args);
+		cnxk_sso_hws_swtag_flush(ws->base);
+	}
+
+	/* Check if we have work in PRF_WQE0; if so, extract it. */
+	switch (dev->gw_mode) {
+	case CN10K_GW_MODE_PREF:
+	case CN10K_GW_MODE_PREF_WFE:
+		while (plt_read64(ws->base + SSOW_LF_GWS_PRF_WQE0) &
+		       BIT_ULL(63))
+			;
+		break;
+	case CN10K_GW_MODE_NONE:
+	default:
+		break;
+	}
+
+	if (CNXK_TT_FROM_TAG(plt_read64(ws->base + SSOW_LF_GWS_PRF_WQE0)) !=
+	    SSO_TT_EMPTY) {
+		plt_write64(BIT_ULL(16) | 1,
+			    ws->base + SSOW_LF_GWS_OP_GET_WORK0);
+		cn10k_sso_hws_get_work_empty(
+			ws, &ev,
+			(NIX_RX_OFFLOAD_MAX - 1) | NIX_RX_REAS_F |
+				NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F);
+		if (ev.u64) {
+			if (flush_cb)
+				flush_cb(event_dev->data->dev_id, ev, args);
+			cnxk_sso_hws_swtag_flush(ws->base);
+		}
+	}
+	ws->swtag_req = 0;
+	plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
+}
+
 static int
 cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
 		    const uint8_t queues[], const uint8_t priorities[],
@@ -851,6 +914,7 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn10k_sso_port_setup,
 	.port_release = cn10k_sso_port_release,
+	.port_quiesce = cn10k_sso_port_quiesce,
 	.port_link = cn10k_sso_port_link,
 	.port_unlink = cn10k_sso_port_unlink,
 	.timeout_ticks = cnxk_sso_timeout_ticks,

diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 4bba477dd1..dde8497895 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -181,6 +181,7 @@ cn9k_sso_hws_reset(void *arg, void *hws)
 	uint64_t pend_state;
 	uint8_t pend_tt;
 	uintptr_t base;
+	bool is_pend;
 	uint64_t tag;
 	uint8_t i;

@@ -188,6 +189,13 @@ cn9k_sso_hws_reset(void *arg, void *hws)
 	ws = hws;
 	for (i = 0; i < (dev->dual_ws ? CN9K_DUAL_WS_NB_WS : 1); i++) {
 		base = dev->dual_ws ? dws->base[i] : ws->base;
+		is_pend = false;
+		/* Work in WQE0 is always consumed, unless it's a SWTAG. */
+		pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+		if (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(54)) ||
+		    (dev->dual_ws ? (dws->swtag_req && i == !dws->vws) :
+				    ws->swtag_req))
+			is_pend = true;
 		/* Wait till getwork/swtp/waitw/desched completes. */
 		do {
 			pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
@@ -196,7 +204,7 @@ cn9k_sso_hws_reset(void *arg, void *hws)

 		tag = plt_read64(base + SSOW_LF_GWS_TAG);
 		pend_tt = (tag >> 32) & 0x3;
-		if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+		if (is_pend && pend_tt != SSO_TT_EMPTY) { /* Work was pending */
 			if (pend_tt == SSO_TT_ATOMIC ||
 			    pend_tt == SSO_TT_ORDERED)
 				cnxk_sso_hws_swtag_untag(
@@ -208,7 +216,14 @@ cn9k_sso_hws_reset(void *arg, void *hws)
 			do {
 				pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
 			} while (pend_state & BIT_ULL(58));
+
+			plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
 		}
+
+		if (dev->dual_ws)
+			dws->swtag_req = 0;
+		else
+			ws->swtag_req = 0;
 	}

 void
@@ -784,6 +799,48 @@ cn9k_sso_port_release(void *port)
 	rte_free(gws_cookie);
 }

+static void
+cn9k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
+		      rte_eventdev_port_flush_t flush_cb, void *args)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	struct cn9k_sso_hws_dual *dws;
+	struct cn9k_sso_hws *ws;
+	struct rte_event ev;
+	uintptr_t base;
+	uint64_t ptag;
+	bool is_pend;
+	uint8_t i;
+
+	dws = port;
+	ws = port;
+	for (i = 0; i < (dev->dual_ws ? CN9K_DUAL_WS_NB_WS : 1); i++) {
+		base = dev->dual_ws ? dws->base[i] : ws->base;
+		is_pend = false;
+		/* Work in WQE0 is always consumed, unless it's a SWTAG. */
+		ptag = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+		if (ptag & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(54)) ||
+		    (dev->dual_ws ?
+				    (dws->swtag_req && i == !dws->vws) :
+				    ws->swtag_req))
+			is_pend = true;
+		/* Wait till getwork/swtp/waitw/desched completes. */
+		do {
+			ptag = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+		} while (ptag & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
+				 BIT_ULL(56)));
+
+		cn9k_sso_hws_get_work_empty(
+			base, &ev, dev->rx_offloads,
+			dev->dual_ws ? dws->lookup_mem : ws->lookup_mem,
+			dev->dual_ws ? dws->tstamp : ws->tstamp);
+		if (is_pend && ev.u64) {
+			if (flush_cb)
+				flush_cb(event_dev->data->dev_id, ev, args);
+			cnxk_sso_hws_swtag_flush(ws->base);
+		}
+	}
+}
+
 static int
 cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
 		   const uint8_t queues[], const uint8_t priorities[],
@@ -1085,6 +1142,7 @@ static struct eventdev_ops cn9k_sso_dev_ops = {
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn9k_sso_port_setup,
 	.port_release = cn9k_sso_port_release,
+	.port_quiesce = cn9k_sso_port_quiesce,
 	.port_link = cn9k_sso_port_link,
 	.port_unlink = cn9k_sso_port_unlink,
 	.timeout_ticks = cnxk_sso_timeout_ticks,
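
Illustrative usage note (not part of the patch): the quiesce hook is
meant to be called from the lcore that owns the port, after its poll
loop exits, so that any event still held in GWS or the prefetch buffer
is handed back through the flush callback. A rough sketch, assuming a
hypothetical per-lcore worker_ctx and the worker_flush_cb shown in the
commit message above:

	static int
	worker_main(void *arg)
	{
		struct worker_ctx *ctx = arg;	/* hypothetical context */
		struct rte_event ev;

		while (!ctx->stop) {
			if (rte_event_dequeue_burst(ctx->dev_id, ctx->port_id,
						    &ev, 1, 0) == 0)
				continue;
			/* ... process ev, then forward or release it ... */
		}
		/* Hand back any event still held by this port. */
		rte_event_port_quiesce(ctx->dev_id, ctx->port_id,
				       worker_flush_cb, NULL);
		return 0;
	}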