From patchwork Thu Sep 2 12:17:22 2021
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 97790
X-Patchwork-Delegate: gakhil@marvell.com
Subject: [dpdk-dev] [PATCH v2 6/8] event/cnxk: add cn9k crypto adapter fast path ops
From: Shijith Thotton
Date: Thu, 2 Sep 2021 17:47:22 +0530
X-Mailer: git-send-email 2.25.1
List-Id: DPDK patches and discussions

Set crypto adapter enqueue and dequeue operations for CN9K.
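Note (illustration only, not part of the patch): the new fast path is not called
directly by applications; it is reached through the generic
rte_event_crypto_adapter API once the adapter is created in
RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, which the capability bits added below
make possible on cn9k. The sketch compresses setup and the per-op fast path into
one hypothetical helper (app_fwd_one_op()), assumes the event device, crypto
device, queue pair and session (carrying the response-event metadata implied by
the SESSION_PRIVATE_DATA capability) are configured elsewhere, and trims error
handling. API names follow the public DPDK 21.x headers.

#include <string.h>

#include <rte_crypto.h>
#include <rte_eventdev.h>
#include <rte_event_crypto_adapter.h>

/* Hypothetical helper: push one prepared crypto op through the adapter in
 * OP_FORWARD mode and wait for its completion event. */
static int
app_fwd_one_op(uint8_t evdev_id, uint8_t cdev_id, uint8_t worker_port,
               struct rte_event_port_conf *port_conf, struct rte_crypto_op *op)
{
        struct rte_event ev;
        uint32_t caps;

        /* This patch makes cn9k advertise INTERNAL_PORT_OP_FWD, so OP_FORWARD
         * mode works without a service core. */
        if (rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &caps) < 0 ||
            !(caps & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD))
                return -1;

        if (rte_event_crypto_adapter_create(0, evdev_id, port_conf,
                                            RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) < 0)
                return -1;
        /* Response event fields are taken from the session private data, so
         * no per-queue-pair event needs to be supplied here (DPDK 21.x API). */
        if (rte_event_crypto_adapter_queue_pair_add(0, cdev_id, 0, NULL) < 0)
                return -1;

        /* Fast path: on cn9k this call resolves to cn9k_sso_hws_ca_enq(). */
        memset(&ev, 0, sizeof(ev));
        ev.event_type = RTE_EVENT_TYPE_CRYPTODEV;
        ev.op = RTE_EVENT_OP_FORWARD;
        ev.event_ptr = op;
        if (rte_event_crypto_adapter_enqueue(evdev_id, worker_port, &ev, 1) != 1)
                return -1;

        /* The completion comes back through the CPT_RX_WQE_F dequeue path as
         * an RTE_EVENT_TYPE_CRYPTODEV event carrying the finished op. */
        while (rte_event_dequeue_burst(evdev_id, worker_port, &ev, 1, 0) == 0)
                ;
        if (ev.event_type == RTE_EVENT_TYPE_CRYPTODEV)
                op = ev.event_ptr; /* processed rte_crypto_op */

        return 0;
}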
Signed-off-by: Shijith Thotton
---
 drivers/event/cnxk/cn9k_eventdev.c           | 94 +++++++++++++++++++-
 drivers/event/cnxk/cn9k_worker.c             | 22 +++++
 drivers/event/cnxk/cn9k_worker.h             | 41 ++++++++-
 drivers/event/cnxk/cn9k_worker_deq_ca.c      | 65 ++++++++++++++
 drivers/event/cnxk/cn9k_worker_dual_deq_ca.c | 75 ++++++++++++++++
 drivers/event/cnxk/meson.build               |  2 +
 6 files changed, 292 insertions(+), 7 deletions(-)
 create mode 100644 drivers/event/cnxk/cn9k_worker_deq_ca.c
 create mode 100644 drivers/event/cnxk/cn9k_worker_dual_deq_ca.c

diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index cab00ed3ca..34247e6c74 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -363,6 +363,20 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 #undef R
         };
 
+        const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+        [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_##name,
+                NIX_RX_FASTPATH_MODES
+#undef R
+        };
+
+        const event_dequeue_burst_t sso_hws_deq_ca_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+        [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_burst_##name,
+                NIX_RX_FASTPATH_MODES
+#undef R
+        };
+
         const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2] = {
 #define R(name, f5, f4, f3, f2, f1, f0, flags) \
         [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_##name,
@@ -390,7 +404,22 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
         [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_burst_##name,
                 NIX_RX_FASTPATH_MODES
 #undef R
-        };
+        };
+
+        const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+        [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_##name,
+                NIX_RX_FASTPATH_MODES
+#undef R
+        };
+
+        const event_dequeue_burst_t
+                sso_hws_deq_ca_seg_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+        [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_burst_##name,
+                NIX_RX_FASTPATH_MODES
+#undef R
+        };
 
         /* Dual WS modes */
         const event_dequeue_t sso_hws_dual_deq[2][2][2][2][2][2] = {
 #define R(name, f5, f4, f3, f2, f1, f0, flags) \
         [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_##name,
@@ -420,7 +449,22 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
         [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_burst_##name,
                 NIX_RX_FASTPATH_MODES
 #undef R
-        };
+        };
+
+        const event_dequeue_t sso_hws_dual_deq_ca[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+        [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_##name,
+                NIX_RX_FASTPATH_MODES
+#undef R
+        };
+
+        const event_dequeue_burst_t
+                sso_hws_dual_deq_ca_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+        [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_burst_##name,
+                NIX_RX_FASTPATH_MODES
+#undef R
+        };
 
         const event_dequeue_t sso_hws_dual_deq_seg[2][2][2][2][2][2] = {
 #define R(name, f5, f4, f3, f2, f1, f0, flags) \
@@ -452,6 +496,21 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 #undef R
         };
 
+        const event_dequeue_t sso_hws_dual_deq_ca_seg[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+        [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_seg_##name,
+                NIX_RX_FASTPATH_MODES
+#undef R
+        };
+
+        const event_dequeue_burst_t
+                sso_hws_dual_deq_ca_seg_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+        [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_seg_burst_##name,
+                NIX_RX_FASTPATH_MODES
+#undef R
+        };
+
         /* Tx modes */
         const event_tx_adapter_enqueue
                 sso_hws_tx_adptr_enq[2][2][2][2][2][2] = {
@@ -499,6 +558,12 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
                         CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
                                               sso_hws_deq_tmo_seg_burst);
                 }
+                if (dev->is_ca_internal_port) {
+                        CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
+                                              sso_hws_deq_ca_seg);
+                        CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
+                                              sso_hws_deq_ca_seg_burst);
+                }
         } else {
                 CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_deq);
                 CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
@@ -509,7 +574,14 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
                         CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
                                               sso_hws_deq_tmo_burst);
                 }
+                if (dev->is_ca_internal_port) {
+                        CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
+                                              sso_hws_deq_ca);
+                        CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
+                                              sso_hws_deq_ca_burst);
+                }
         }
+        event_dev->ca_enqueue = cn9k_sso_hws_ca_enq;
 
         if (dev->tx_offloads & NIX_TX_MULTI_SEG_F)
                 CN9K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue,
@@ -524,6 +596,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
                 event_dev->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst;
                 event_dev->enqueue_forward_burst =
                         cn9k_sso_hws_dual_enq_fwd_burst;
+                event_dev->ca_enqueue = cn9k_sso_hws_dual_ca_enq;
 
                 if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
                         CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
@@ -537,6 +610,13 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
                                         dev, event_dev->dequeue_burst,
                                         sso_hws_dual_deq_tmo_seg_burst);
                         }
+                        if (dev->is_ca_internal_port) {
+                                CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
+                                                      sso_hws_dual_deq_ca_seg);
+                                CN9K_SET_EVDEV_DEQ_OP(
+                                        dev, event_dev->dequeue_burst,
+                                        sso_hws_dual_deq_ca_seg_burst);
+                        }
                 } else {
                         CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
                                               sso_hws_dual_deq);
@@ -549,6 +629,13 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
                                         dev, event_dev->dequeue_burst,
                                         sso_hws_dual_deq_tmo_burst);
                         }
+                        if (dev->is_ca_internal_port) {
+                                CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
+                                                      sso_hws_dual_deq_ca);
+                                CN9K_SET_EVDEV_DEQ_OP(
+                                        dev, event_dev->dequeue_burst,
+                                        sso_hws_dual_deq_ca_burst);
+                        }
                 }
 
         if (dev->tx_offloads & NIX_TX_MULTI_SEG_F)
@@ -935,7 +1022,8 @@ cn9k_crypto_adapter_caps_get(const struct rte_eventdev *event_dev,
         CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn9k");
         CNXK_VALID_DEV_OR_ERR_RET(cdev->device, "crypto_cn9k");
 
-        *caps = 0;
+        *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD |
+                RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA;
 
         return 0;
 }
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index 538bc4b0b3..32f7cc0343 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -5,6 +5,7 @@
 #include "roc_api.h"
 
 #include "cn9k_worker.h"
+#include "cn9k_cryptodev_ops.h"
 
 uint16_t __rte_hot
 cn9k_sso_hws_enq(void *port, const struct rte_event *ev)
@@ -117,3 +118,24 @@ cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
 
         return 1;
 }
+
+uint16_t __rte_hot
+cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
+{
+        struct cn9k_sso_hws *ws = port;
+
+        RTE_SET_USED(nb_events);
+
+        return cn9k_cpt_crypto_adapter_enqueue(ws->tag_op, ev->event_ptr);
+}
+
+uint16_t __rte_hot
+cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
+{
+        struct cn9k_sso_hws_dual *dws = port;
+
+        RTE_SET_USED(nb_events);
+
+        return cn9k_cpt_crypto_adapter_enqueue(dws->ws_state[!dws->vws].tag_op,
+                                               ev->event_ptr);
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 9b2a0bf882..3e8f214904 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -8,6 +8,7 @@
 #include "cnxk_ethdev.h"
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
+#include "cn9k_cryptodev_ops.h"
 
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
@@ -187,8 +188,12 @@ cn9k_sso_hws_dual_get_work(struct cn9k_sso_hws_state *ws,
                     (gw.u64[0] & 0xffffffff);
 
         if (CNXK_TT_FROM_EVENT(gw.u64[0]) != SSO_TT_EMPTY) {
-                if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
-                    RTE_EVENT_TYPE_ETHDEV) {
+                if ((flags & CPT_RX_WQE_F) &&
+                    (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
+                     RTE_EVENT_TYPE_CRYPTODEV)) {
+                        gw.u64[1] = cn9k_cpt_crypto_adapter_dequeue(gw.u64[1]);
+                } else if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
+                           RTE_EVENT_TYPE_ETHDEV) {
                         uint8_t port = CNXK_SUB_EVENT_FROM_TAG(gw.u64[0]);
 
                         gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]);
@@ -260,8 +265,12 @@ cn9k_sso_hws_get_work(struct cn9k_sso_hws *ws, struct rte_event *ev,
                     (gw.u64[0] & 0xffffffff);
 
         if (CNXK_TT_FROM_EVENT(gw.u64[0]) != SSO_TT_EMPTY) {
-                if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
-                    RTE_EVENT_TYPE_ETHDEV) {
+                if ((flags & CPT_RX_WQE_F) &&
+                    (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
+                     RTE_EVENT_TYPE_CRYPTODEV)) {
+                        gw.u64[1] = cn9k_cpt_crypto_adapter_dequeue(gw.u64[1]);
+                } else if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) ==
+                           RTE_EVENT_TYPE_ETHDEV) {
                         uint8_t port = CNXK_SUB_EVENT_FROM_TAG(gw.u64[0]);
 
                         gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]);
@@ -366,6 +375,10 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq_new_burst(void *port,
 uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
                                                    const struct rte_event ev[],
                                                    uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[],
+                                       uint16_t nb_events);
+uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
+                                            uint16_t nb_events);
 
 #define R(name, f5, f4, f3, f2, f1, f0, flags) \
         uint16_t __rte_hot cn9k_sso_hws_deq_##name( \
@@ -378,6 +391,11 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
         uint16_t __rte_hot cn9k_sso_hws_deq_tmo_burst_##name( \
                 void *port, struct rte_event ev[], uint16_t nb_events, \
                 uint64_t timeout_ticks); \
+        uint16_t __rte_hot cn9k_sso_hws_deq_ca_##name( \
+                void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+        uint16_t __rte_hot cn9k_sso_hws_deq_ca_burst_##name( \
+                void *port, struct rte_event ev[], uint16_t nb_events, \
+                uint64_t timeout_ticks); \
         uint16_t __rte_hot cn9k_sso_hws_deq_seg_##name( \
                 void *port, struct rte_event *ev, uint64_t timeout_ticks); \
         uint16_t __rte_hot cn9k_sso_hws_deq_seg_burst_##name( \
@@ -386,6 +404,11 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port,
         uint16_t __rte_hot cn9k_sso_hws_deq_tmo_seg_##name( \
                 void *port, struct rte_event *ev, uint64_t timeout_ticks); \
         uint16_t __rte_hot cn9k_sso_hws_deq_tmo_seg_burst_##name( \
+                void *port, struct rte_event ev[], uint16_t nb_events, \
+                uint64_t timeout_ticks); \
+        uint16_t __rte_hot cn9k_sso_hws_deq_ca_seg_##name( \
+                void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+        uint16_t __rte_hot cn9k_sso_hws_deq_ca_seg_burst_##name( \
                 void *port, struct rte_event ev[], uint16_t nb_events, \
                 uint64_t timeout_ticks);
 
@@ -403,6 +426,11 @@ NIX_RX_FASTPATH_MODES
         uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_burst_##name( \
                 void *port, struct rte_event ev[], uint16_t nb_events, \
                 uint64_t timeout_ticks); \
+        uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_##name( \
+                void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+        uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_burst_##name( \
+                void *port, struct rte_event ev[], uint16_t nb_events, \
+                uint64_t timeout_ticks); \
         uint16_t __rte_hot cn9k_sso_hws_dual_deq_seg_##name( \
                 void *port, struct rte_event *ev, uint64_t timeout_ticks); \
         uint16_t __rte_hot cn9k_sso_hws_dual_deq_seg_burst_##name( \
@@ -411,6 +439,11 @@ NIX_RX_FASTPATH_MODES
         uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_seg_##name( \
                 void *port, struct rte_event *ev, uint64_t timeout_ticks); \
         uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_seg_burst_##name( \
+                void *port, struct rte_event ev[], uint16_t nb_events, \
+                uint64_t timeout_ticks); \
+        uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_seg_##name( \
+                void *port, struct rte_event *ev, uint64_t timeout_ticks); \
+        uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_seg_burst_##name( \
                 void *port, struct rte_event ev[], uint16_t nb_events, \
                 uint64_t timeout_ticks);
diff --git a/drivers/event/cnxk/cn9k_worker_deq_ca.c b/drivers/event/cnxk/cn9k_worker_deq_ca.c
new file mode 100644
index 0000000000..dbdbba17db
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_worker_deq_ca.c
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cn9k_worker.h"
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
+
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+        uint16_t __rte_hot cn9k_sso_hws_deq_ca_##name( \
+                void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+        { \
+                struct cn9k_sso_hws *ws = port; \
+ \
+                RTE_SET_USED(timeout_ticks); \
+ \
+                if (ws->swtag_req) { \
+                        ws->swtag_req = 0; \
+                        cnxk_sso_hws_swtag_wait(ws->tag_op); \
+                        return 1; \
+                } \
+ \
+                return cn9k_sso_hws_get_work(ws, ev, flags | CPT_RX_WQE_F, \
+                                             ws->lookup_mem); \
+        } \
+ \
+        uint16_t __rte_hot cn9k_sso_hws_deq_ca_burst_##name( \
+                void *port, struct rte_event ev[], uint16_t nb_events, \
+                uint64_t timeout_ticks) \
+        { \
+                RTE_SET_USED(nb_events); \
+ \
+                return cn9k_sso_hws_deq_ca_##name(port, ev, timeout_ticks); \
+        } \
+ \
+        uint16_t __rte_hot cn9k_sso_hws_deq_ca_seg_##name( \
+                void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+        { \
+                struct cn9k_sso_hws *ws = port; \
+ \
+                RTE_SET_USED(timeout_ticks); \
+ \
+                if (ws->swtag_req) { \
+                        ws->swtag_req = 0; \
+                        cnxk_sso_hws_swtag_wait(ws->tag_op); \
+                        return 1; \
+                } \
+ \
+                return cn9k_sso_hws_get_work( \
+                        ws, ev, flags | NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F, \
+                        ws->lookup_mem); \
+        } \
+ \
+        uint16_t __rte_hot cn9k_sso_hws_deq_ca_seg_burst_##name( \
+                void *port, struct rte_event ev[], uint16_t nb_events, \
+                uint64_t timeout_ticks) \
+        { \
+                RTE_SET_USED(nb_events); \
+ \
+                return cn9k_sso_hws_deq_ca_seg_##name(port, ev, \
+                                                      timeout_ticks); \
+        }
+
+NIX_RX_FASTPATH_MODES
+#undef R
diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c b/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c
new file mode 100644
index 0000000000..dc9191fe80
--- /dev/null
+++ b/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c
@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "cn9k_worker.h"
+#include "cnxk_eventdev.h"
+#include "cnxk_worker.h"
+
+#define R(name, f5, f4, f3, f2, f1, f0, flags) \
+        uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_##name( \
+                void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+        { \
+                struct cn9k_sso_hws_dual *dws = port; \
+                uint16_t gw; \
+ \
+                RTE_SET_USED(timeout_ticks); \
+                if (dws->swtag_req) { \
+                        dws->swtag_req = 0; \
+                        cnxk_sso_hws_swtag_wait( \
+                                dws->ws_state[!dws->vws].tag_op); \
+                        return 1; \
+                } \
+ \
+                gw = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws], \
+                                                &dws->ws_state[!dws->vws], ev, \
+                                                flags | CPT_RX_WQE_F, \
+                                                dws->lookup_mem, dws->tstamp); \
+                dws->vws = !dws->vws; \
+                return gw; \
+        } \
+ \
+        uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_burst_##name( \
+                void *port, struct rte_event ev[], uint16_t nb_events, \
+                uint64_t timeout_ticks) \
+        { \
+                RTE_SET_USED(nb_events); \
+ \
+                return cn9k_sso_hws_dual_deq_ca_##name(port, ev, \
+                                                       timeout_ticks); \
+        } \
+ \
+        uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_seg_##name( \
+                void *port, struct rte_event *ev, uint64_t timeout_ticks) \
+        { \
+                struct cn9k_sso_hws_dual *dws = port; \
+                uint16_t gw; \
+ \
+                RTE_SET_USED(timeout_ticks); \
+                if (dws->swtag_req) { \
+                        dws->swtag_req = 0; \
+                        cnxk_sso_hws_swtag_wait( \
+                                dws->ws_state[!dws->vws].tag_op); \
+                        return 1; \
+                } \
+ \
+                gw = cn9k_sso_hws_dual_get_work( \
+                        &dws->ws_state[dws->vws], &dws->ws_state[!dws->vws], \
+                        ev, flags | NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F, \
+                        dws->lookup_mem, dws->tstamp); \
+                dws->vws = !dws->vws; \
+                return gw; \
+        } \
+ \
+        uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_seg_burst_##name( \
+                void *port, struct rte_event ev[], uint16_t nb_events, \
+                uint64_t timeout_ticks) \
+        { \
+                RTE_SET_USED(nb_events); \
+ \
+                return cn9k_sso_hws_dual_deq_ca_seg_##name(port, ev, \
+                                                           timeout_ticks); \
+        }
+
+NIX_RX_FASTPATH_MODES
+#undef R
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 1155e18ba7..ffbc0ce0f4 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -13,9 +13,11 @@ sources = files(
         'cn9k_worker.c',
         'cn9k_worker_deq.c',
         'cn9k_worker_deq_burst.c',
+        'cn9k_worker_deq_ca.c',
         'cn9k_worker_deq_tmo.c',
         'cn9k_worker_dual_deq.c',
         'cn9k_worker_dual_deq_burst.c',
+        'cn9k_worker_dual_deq_ca.c',
         'cn9k_worker_dual_deq_tmo.c',
         'cn9k_worker_tx_enq.c',
         'cn9k_worker_tx_enq_seg.c',
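
A note on the code-generation pattern used throughout the patch:
NIX_RX_FASTPATH_MODES is an X-macro list that is expanded once with R()
defined to emit a dequeue function per Rx-offload flag combination, and again
to build the [2][2][2][2][2][2] lookup tables that cn9k_sso_fp_fns_set()
indexes by those flags; the new _ca_ variants only add CPT_RX_WQE_F to the
template flags. The stand-alone toy below (hypothetical DEMO_* names, two flag
bits instead of six) shows the same mechanism in miniature; it is illustrative
only and not part of the driver.

#include <stdint.h>
#include <stdio.h>

/* X-macro list: one R() entry per flag combination (2 bits here, 6 upstream). */
#define DEMO_FASTPATH_MODES \
        R(no_offload, 0, 0) \
        R(rss, 0, 1)        \
        R(ptype, 1, 0)      \
        R(ptype_rss, 1, 1)

/* First expansion: generate one handler per mode. */
#define R(name, f1, f0) \
        static uint16_t demo_deq_##name(void) { return ((f1) << 1) | (f0); }
DEMO_FASTPATH_MODES
#undef R

int
main(void)
{
        /* Second expansion: build the dispatch table indexed by the flag
         * bits, mirroring sso_hws_deq_ca[2][2][2][2][2][2]. */
        uint16_t (*const demo_deq[2][2])(void) = {
#define R(name, f1, f0) [f1][f0] = demo_deq_##name,
                DEMO_FASTPATH_MODES
#undef R
        };
        uint16_t f1 = 1, f0 = 0; /* runtime "offload flags" */

        /* Equivalent of CN9K_SET_EVDEV_DEQ_OP() picking the right variant. */
        printf("selected variant returns %u\n", (unsigned)demo_deq[f1][f0]());
        return 0;
}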