From patchwork Mon Aug 30 11:09:15 2021
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 97546
X-Patchwork-Delegate: gakhil@marvell.com
From: Shijith Thotton
Date: Mon, 30 Aug 2021 16:39:15 +0530
Message-ID: <881b7ee6a32548610779b0ec9cba7d3d81aae04c.1630315730.git.sthotton@marvell.com>
Subject: [dpdk-dev] [PATCH 6/8] event/cnxk: add cn9k crypto adapter fast path ops
List-Id: DPDK patches and discussions

Set crypto adapter enqueue and dequeue operations for CN9K.
Signed-off-by: Shijith Thotton
---
 drivers/event/cnxk/cn9k_eventdev.c           | 94 +++++++++++++++++++-
 drivers/event/cnxk/cn9k_worker.c             | 22 +++++
 drivers/event/cnxk/cn9k_worker.h             | 41 ++++++++-
 drivers/event/cnxk/cn9k_worker_deq_ca.c      | 65 ++++++++++++++
 drivers/event/cnxk/cn9k_worker_dual_deq_ca.c | 75 ++++++++++++++++
 drivers/event/cnxk/meson.build               |  2 +
 6 files changed, 292 insertions(+), 7 deletions(-)
 create mode 100644 drivers/event/cnxk/cn9k_worker_deq_ca.c
 create mode 100644 drivers/event/cnxk/cn9k_worker_dual_deq_ca.c

diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 5851bc17f5..6601c44060 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -357,6 +357,20 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 #undef R
 	};
 
+	const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_##name,
+		NIX_RX_FASTPATH_MODES
+#undef R
+	};
+
+	const event_dequeue_burst_t sso_hws_deq_ca_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_burst_##name,
+		NIX_RX_FASTPATH_MODES
+#undef R
+	};
+
 	const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2] = {
 #define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
 	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_##name,
@@ -384,7 +398,22 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 		[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
-	};
+	};
+
+	const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_##name,
+		NIX_RX_FASTPATH_MODES
+#undef R
+	};
+
+	const event_dequeue_burst_t
+		sso_hws_deq_ca_seg_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+			[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_burst_##name,
+			NIX_RX_FASTPATH_MODES
+#undef R
+	};
 
 	/* Dual WS modes */
 	const event_dequeue_t sso_hws_dual_deq[2][2][2][2][2][2] = {
@@ -414,7 +443,22 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 		[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
-	};
+	};
+
+	const event_dequeue_t sso_hws_dual_deq_ca[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_##name,
+		NIX_RX_FASTPATH_MODES
+#undef R
+	};
+
+	const event_dequeue_burst_t
+		sso_hws_dual_deq_ca_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_burst_##name,
+			NIX_RX_FASTPATH_MODES
+#undef R
+	};
 
 	const event_dequeue_t sso_hws_dual_deq_seg[2][2][2][2][2][2] = {
 #define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
@@ -446,6 +490,21 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 #undef R
 	};
 
+	const event_dequeue_t sso_hws_dual_deq_ca_seg[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_seg_##name,
+		NIX_RX_FASTPATH_MODES
+#undef R
+	};
+
+	const event_dequeue_burst_t
+		sso_hws_dual_deq_ca_seg_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+		[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_seg_burst_##name,
+			NIX_RX_FASTPATH_MODES
+#undef R
+	};
+
 	/* Tx modes */
 	const event_tx_adapter_enqueue
 		sso_hws_tx_adptr_enq[2][2][2][2][2][2] = {
@@ -493,6 +552,12 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 			CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
 					      sso_hws_deq_tmo_seg_burst);
 		}
+		if (dev->is_ca_internal_port) {
+			CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
+					      sso_hws_deq_ca_seg);
+			CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
+					      sso_hws_deq_ca_seg_burst);
+		}
 	} else {
 		CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_deq);
 		CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
@@ -503,7 +568,14 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 			CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
 					      sso_hws_deq_tmo_burst);
 		}
+		if (dev->is_ca_internal_port) {
+			CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
+					      sso_hws_deq_ca);
+			CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
+					      sso_hws_deq_ca_burst);
+		}
 	}
+	event_dev->ca_enqueue = cn9k_sso_hws_ca_enq;
 
 	if (dev->tx_offloads & NIX_TX_MULTI_SEG_F)
 		CN9K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue,
@@ -518,6 +590,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 		event_dev->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst;
 		event_dev->enqueue_forward_burst =
 			cn9k_sso_hws_dual_enq_fwd_burst;
+		event_dev->ca_enqueue = cn9k_sso_hws_dual_ca_enq;
 
 		if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
 			CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
@@ -531,6 +604,13 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 					dev, event_dev->dequeue_burst,
 					sso_hws_dual_deq_tmo_seg_burst);
 			}
+			if (dev->is_ca_internal_port) {
+				CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
+						      sso_hws_dual_deq_ca_seg);
+				CN9K_SET_EVDEV_DEQ_OP(
+					dev, event_dev->dequeue_burst,
+					sso_hws_dual_deq_ca_seg_burst);
+			}
 		} else {
 			CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
 					      sso_hws_dual_deq);
@@ -543,6 +623,13 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 					dev, event_dev->dequeue_burst,
 					sso_hws_dual_deq_tmo_burst);
 			}
+			if (dev->is_ca_internal_port) {
+				CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
+						      sso_hws_dual_deq_ca);
+				CN9K_SET_EVDEV_DEQ_OP(
+					dev, event_dev->dequeue_burst,
+					sso_hws_dual_deq_ca_burst);
+			}
 		}
 
 		if (dev->tx_offloads & NIX_TX_MULTI_SEG_F)
@@ -929,7 +1016,8 @@ cn9k_crypto_adapter_caps_get(const struct rte_eventdev *event_dev,
 	CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn9k");
 	CNXK_VALID_DEV_OR_ERR_RET(cdev->device, "crypto_cn9k");
 
-	*caps = 0;
+	*caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD |
+		RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA;
 
 	return 0;
 }
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index 538bc4b0b3..32f7cc0343 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -5,6 +5,7 @@
 #include "roc_api.h"
 
 #include "cn9k_worker.h"
+#include "cn9k_cryptodev_ops.h"
 
 uint16_t __rte_hot
 cn9k_sso_hws_enq(void *port, const struct rte_event *ev)
@@ -117,3 +118,24 @@ cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
 
 	return 1;
 }
+
+uint16_t __rte_hot
+cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
+{
+	struct cn9k_sso_hws *ws = port;
+
+	RTE_SET_USED(nb_events);
+
+	return cn9k_cpt_crypto_adapter_enqueue(ws->tag_op, ev->event_ptr);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
+{
+	struct cn9k_sso_hws_dual *dws = port;
+
+	RTE_SET_USED(nb_events);
+
+	return cn9k_cpt_crypto_adapter_enqueue(dws->ws_state[!dws->vws].tag_op,
+					       ev->event_ptr);
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 9b2a0bf882..3e8f214904 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -8,6 +8,7 @@
 #include "cnxk_ethdev.h"
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
+#include "cn9k_cryptodev_ops.h"
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
@@ -187,8 +188,12 @@ cn9k_sso_hws_dual_get_work(struct cn9k_sso_hws_state *ws,
 		       (gw.u64[0] & 0xffffffff);
 
 	if (CNXK_TT_FROM_EVENT(gw.u64[0]) != SSO_TT_EMPTY) {
-		if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
-		    RTE_EVENT_TYPE_ETHDEV) {
+		if ((flags & CPT_RX_WQE_F) &&
+		    (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
+		     RTE_EVENT_TYPE_CRYPTODEV)) {
+			gw.u64[1] = cn9k_cpt_crypto_adapter_dequeue(gw.u64[1]);
+		} else if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
+			   RTE_EVENT_TYPE_ETHDEV) {
 			uint8_t port = CNXK_SUB_EVENT_FROM_TAG(gw.u64[0]);
 
 			gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]);
@@ -260,8 +265,12 @@ cn9k_sso_hws_get_work(struct cn9k_sso_hws *ws, struct rte_event *ev,
 		       (gw.u64[0] & 0xffffffff);
 
 	if (CNXK_TT_FROM_EVENT(gw.u64[0]) != SSO_TT_EMPTY) {
-		if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
-		    RTE_EVENT_TYPE_ETHDEV) {
+		if ((flags & CPT_RX_WQE_F) &&
+		    (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
+		     RTE_EVENT_TYPE_CRYPTODEV)) {
+			gw.u64[1] = cn9k_cpt_crypto_adapter_dequeue(gw.u64[1]);
+		} else if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
+			   RTE_EVENT_TYPE_ETHDEV) {
 			uint8_t port = CNXK_SUB_EVENT_FROM_TAG(gw.u64[0]);
 
 			gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]);
@@ -366,6 +375,10 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq_new_burst(void *port,
 uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
 						   const struct rte_event ev[],
 						   uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[],
+				       uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
+					    uint16_t nb_events);
 
 #define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
 	uint16_t __rte_hot cn9k_sso_hws_deq_##name(                            \
@@ -378,6 +391,11 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
 	uint16_t __rte_hot cn9k_sso_hws_deq_tmo_burst_##name(                  \
 		void *port, struct rte_event ev[], uint16_t nb_events,         \
 		uint64_t timeout_ticks);                                       \
+	uint16_t __rte_hot cn9k_sso_hws_deq_ca_##name(                         \
+		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
+	uint16_t __rte_hot cn9k_sso_hws_deq_ca_burst_##name(                   \
+		void *port, struct rte_event ev[], uint16_t nb_events,         \
+		uint64_t timeout_ticks);                                       \
 	uint16_t __rte_hot cn9k_sso_hws_deq_seg_##name(                        \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
 	uint16_t __rte_hot cn9k_sso_hws_deq_seg_burst_##name(                  \
@@ -386,6 +404,11 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
 	uint16_t __rte_hot cn9k_sso_hws_deq_tmo_seg_##name(                    \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
 	uint16_t __rte_hot cn9k_sso_hws_deq_tmo_seg_burst_##name(              \
+		void *port, struct rte_event ev[], uint16_t nb_events,         \
+		uint64_t timeout_ticks);                                       \
+	uint16_t __rte_hot cn9k_sso_hws_deq_ca_seg_##name(                     \
+		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
+	uint16_t __rte_hot cn9k_sso_hws_deq_ca_seg_burst_##name(               \
 		void *port, struct rte_event ev[], uint16_t nb_events,         \
 		uint64_t timeout_ticks);
@@ -403,6 +426,11 @@ NIX_RX_FASTPATH_MODES
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_burst_##name(             \
 		void *port, struct rte_event ev[], uint16_t nb_events,         \
 		uint64_t timeout_ticks);                                       \
+	uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_##name(                    \
+		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
+	uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_burst_##name(              \
+		void *port, struct rte_event ev[], uint16_t nb_events,         \
+		uint64_t timeout_ticks);                                       \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_seg_##name(                   \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_seg_burst_##name(             \
@@ -411,6 +439,11 @@ NIX_RX_FASTPATH_MODES
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_seg_##name(               \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_seg_burst_##name(         \
+		void *port, struct rte_event ev[], uint16_t nb_events,         \
+		uint64_t timeout_ticks);                                       \
+	uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_seg_##name(                \
+		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
+	uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_seg_burst_##name(          \
 		void *port, struct rte_event ev[], uint16_t nb_events,         \
 		uint64_t timeout_ticks);
diff --git a/drivers/event/cnxk/cn9k_worker_deq_ca.c b/drivers/event/cnxk/cn9k_worker_deq_ca.c
new file mode 100644
index 0000000000..dde82883d6
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_worker_deq_ca.c
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cn9k_worker.h"
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
+
+#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+	uint16_t __rte_hot cn9k_sso_hws_deq_ca_##name(                         \
+		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
+	{                                                                      \
+		struct cn9k_sso_hws *ws = port;                                \
+                                                                               \
+		RTE_SET_USED(timeout_ticks);                                   \
+                                                                               \
+		if (ws->swtag_req) {                                           \
+			ws->swtag_req = 0;                                     \
+			cnxk_sso_hws_swtag_wait(ws->tag_op);                   \
+			return 1;                                              \
+		}                                                              \
+                                                                               \
+		return cn9k_sso_hws_get_work(ws, ev, flags | CPT_RX_WQE_F,     \
+					     ws->lookup_mem);                  \
+	}                                                                      \
+                                                                               \
+	uint16_t __rte_hot cn9k_sso_hws_deq_ca_burst_##name(                   \
+		void *port, struct rte_event ev[], uint16_t nb_events,         \
+		uint64_t timeout_ticks)                                        \
+	{                                                                      \
+		RTE_SET_USED(nb_events);                                       \
+                                                                               \
+		return cn9k_sso_hws_deq_ca_##name(port, ev, timeout_ticks);    \
+	}                                                                      \
+                                                                               \
+	uint16_t __rte_hot cn9k_sso_hws_deq_ca_seg_##name(                     \
+		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
+	{                                                                      \
+		struct cn9k_sso_hws *ws = port;                                \
+                                                                               \
+		RTE_SET_USED(timeout_ticks);                                   \
+                                                                               \
+		if (ws->swtag_req) {                                           \
+			ws->swtag_req = 0;                                     \
+			cnxk_sso_hws_swtag_wait(ws->tag_op);                   \
+			return 1;                                              \
+		}                                                              \
+                                                                               \
+		return cn9k_sso_hws_get_work(                                  \
+			ws, ev, flags | NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F,     \
+			ws->lookup_mem);                                       \
+	}                                                                      \
+                                                                               \
+	uint16_t __rte_hot cn9k_sso_hws_deq_ca_seg_burst_##name(               \
+		void *port, struct rte_event ev[], uint16_t nb_events,         \
+		uint64_t timeout_ticks)                                        \
+	{                                                                      \
+		RTE_SET_USED(nb_events);                                       \
+                                                                               \
+		return cn9k_sso_hws_deq_ca_seg_##name(port, ev,                \
+						      timeout_ticks);          \
+	}
+
+NIX_RX_FASTPATH_MODES
+#undef R
diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c b/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c
new file mode 100644
index 0000000000..26cc60fd96
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c
@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cn9k_worker.h"
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
+
+#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+	uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_##name(                    \
+		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
+	{                                                                      \
+		struct cn9k_sso_hws_dual *dws = port;                          \
+		uint16_t gw;                                                   \
+                                                                               \
+		RTE_SET_USED(timeout_ticks);                                   \
+		if (dws->swtag_req) {                                          \
+			dws->swtag_req = 0;                                    \
+			cnxk_sso_hws_swtag_wait(                               \
+				dws->ws_state[!dws->vws].tag_op);              \
+			return 1;                                              \
+		}                                                              \
+                                                                               \
+		gw = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws],      \
+						&dws->ws_state[!dws->vws], ev, \
+						flags | CPT_RX_WQE_F,          \
+						dws->lookup_mem, dws->tstamp); \
+		dws->vws = !dws->vws;                                          \
+		return gw;                                                     \
+	}                                                                      \
+                                                                               \
+	uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_burst_##name(              \
+		void *port, struct rte_event ev[], uint16_t nb_events,         \
+		uint64_t timeout_ticks)                                        \
+	{                                                                      \
+		RTE_SET_USED(nb_events);                                       \
+                                                                               \
+		return cn9k_sso_hws_dual_deq_ca_##name(port, ev,               \
+						       timeout_ticks);         \
+	}                                                                      \
+                                                                               \
+	uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_seg_##name(                \
+		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
+	{                                                                      \
+		struct cn9k_sso_hws_dual *dws = port;                          \
+		uint16_t gw;                                                   \
+                                                                               \
+		RTE_SET_USED(timeout_ticks);                                   \
+		if (dws->swtag_req) {                                          \
+			dws->swtag_req = 0;                                    \
+			cnxk_sso_hws_swtag_wait(                               \
+				dws->ws_state[!dws->vws].tag_op);              \
+			return 1;                                              \
+		}                                                              \
+                                                                               \
+		gw = cn9k_sso_hws_dual_get_work(                               \
+			&dws->ws_state[dws->vws], &dws->ws_state[!dws->vws],   \
+			ev, flags | NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F,         \
+			dws->lookup_mem, dws->tstamp);                         \
+		dws->vws = !dws->vws;                                          \
+		return gw;                                                     \
+	}                                                                      \
+                                                                               \
+	uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_seg_burst_##name(          \
+		void *port, struct rte_event ev[], uint16_t nb_events,         \
+		uint64_t timeout_ticks)                                        \
+	{                                                                      \
+		RTE_SET_USED(nb_events);                                       \
+                                                                               \
+		return cn9k_sso_hws_dual_deq_ca_seg_##name(port, ev,           \
+							   timeout_ticks);     \
+	}
+
+NIX_RX_FASTPATH_MODES
+#undef R
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 1155e18ba7..ffbc0ce0f4 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -13,9 +13,11 @@ sources = files(
         'cn9k_worker.c',
         'cn9k_worker_deq.c',
         'cn9k_worker_deq_burst.c',
+        'cn9k_worker_deq_ca.c',
         'cn9k_worker_deq_tmo.c',
         'cn9k_worker_dual_deq.c',
         'cn9k_worker_dual_deq_burst.c',
+        'cn9k_worker_dual_deq_ca.c',
         'cn9k_worker_dual_deq_tmo.c',
         'cn9k_worker_tx_enq.c',
         'cn9k_worker_tx_enq_seg.c',