From patchwork Thu Sep 2 14:41:49 2021
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 97826
X-Patchwork-Delegate: gakhil@marvell.com
From: Shijith Thotton
CC: Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Date: Thu, 2 Sep 2021 20:11:49 +0530
Subject: [dpdk-dev] [PATCH v3 1/8] net/cnxk: add flag to show CPT can enqueue events

CPT can be told to submit events to SSO upon completion. The crypto adapter uses this feature, and the new flag can be used to optimize the receive path in those cases.
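As context for how such a flag is meant to be consumed (the consuming code is not part of this patch), here is a minimal sketch. It assumes the usual cnxk template approach where the offload flags are compile-time constants; the helper name sso_process_cpt_work and its call site are illustrative only, and cn9k_cpt_crypto_adapter_dequeue() is the fast-path routine added later in 5/8.

/* Illustrative only: how a template-expanded SSO dequeue path could branch
 * on CPT_RX_WQE_F at compile time, so events produced by CPT are converted
 * back into crypto ops instead of being treated as NIX packets.
 */
static __rte_always_inline uint64_t
sso_process_cpt_work(uint32_t event_type, uint64_t u64, const uint32_t flags)
{
	if ((flags & CPT_RX_WQE_F) && event_type == RTE_EVENT_TYPE_CRYPTODEV)
		return cn9k_cpt_crypto_adapter_dequeue(u64); /* from 5/8 */

	/* Otherwise the existing mbuf/packet conversion path is taken. */
	return u64;
}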
Signed-off-by: Shijith Thotton --- drivers/net/cnxk/cn10k_rx.h | 5 +++-- drivers/net/cnxk/cn9k_rx.h | 3 ++- 2 files changed, 5 insertions(+), 3 deletions(-) diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h index 4c5288b2cc..68219b8c19 100644 --- a/drivers/net/cnxk/cn10k_rx.h +++ b/drivers/net/cnxk/cn10k_rx.h @@ -21,8 +21,9 @@ * Defining it from backwards to denote its been * not used as offload flags to pick function */ -#define NIX_RX_VWQE_F BIT(14) -#define NIX_RX_MULTI_SEG_F BIT(15) +#define NIX_RX_VWQE_F BIT(13) +#define NIX_RX_MULTI_SEG_F BIT(14) +#define CPT_RX_WQE_F BIT(15) #define CNXK_NIX_CQ_ENTRY_SZ 128 #define NIX_DESCS_PER_LOOP 4 diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h index beb52f39d5..a3bf4e0b63 100644 --- a/drivers/net/cnxk/cn9k_rx.h +++ b/drivers/net/cnxk/cn9k_rx.h @@ -22,7 +22,8 @@ * Defining it from backwards to denote its been * not used as offload flags to pick function */ -#define NIX_RX_MULTI_SEG_F BIT(15) +#define NIX_RX_MULTI_SEG_F BIT(14) +#define CPT_RX_WQE_F BIT(15) #define CNXK_NIX_CQ_ENTRY_SZ 128 #define NIX_DESCS_PER_LOOP 4 From patchwork Thu Sep 2 14:41:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shijith Thotton X-Patchwork-Id: 97827 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EF507A0C4C; Thu, 2 Sep 2021 16:43:11 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id DA25A40142; Thu, 2 Sep 2021 16:43:11 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id DFCC440142 for ; Thu, 2 Sep 2021 16:43:09 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id 18280qPh010845 for ; Thu, 2 Sep 2021 07:43:09 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=jv3GvE5GZCGAJjhcDlBbpynn3epMBmdmkWmj7vozTFI=; b=eNLXqhmUdPObzAfcEEkXyOmcDwRZGNixItB8tXG/5FmZEBhV8rmmPZXX5X5A6Bw90DNu UxoW/UeIKgkaieqsDGq4cXZTGTL5/Ac7G8kS62yPvc2UoNMiEZI3dSzI092beQJyapoy 1QEjO2yQF7S1aEp5fOulHr5UzfMQT2THuoxhFshw9PU1O5Ypaeb/ZgQYEDSAUY6mJrSt uy1qHFMUeSHl8X1HQ1od6cs4w+qT3EdCKhPgLk4xz0+zSAUf5RWikrk6Js02UJTZdzak 48IU2Q7HA39qHfj52xyDlKtxxbkESzFwkBkr8ggT+V2Vf+0JpVOIMbyyVq+xnRKvSdaA 4Q== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com with ESMTP id 3attqmhc70-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Thu, 02 Sep 2021 07:43:09 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Thu, 2 Sep 2021 07:43:06 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.18 via Frontend Transport; Thu, 2 Sep 2021 07:43:07 -0700 Received: from localhost.localdomain (unknown [10.28.34.29]) by maili.marvell.com (Postfix) with ESMTP id 0B97E3F705E; Thu, 2 Sep 2021 07:43:04 -0700 (PDT) From: Shijith Thotton To: CC: Shijith Thotton , , , , , Date: Thu, 2 Sep 2021 20:11:50 
+0530 Message-ID: <82d036afcaab2c72a4009794533c564409483f5b.1630593512.git.sthotton@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 X-Proofpoint-GUID: bydgXeb7jIbnWBIhJUJz-z9RAEho1kK7 X-Proofpoint-ORIG-GUID: bydgXeb7jIbnWBIhJUJz-z9RAEho1kK7 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.182.1,Aquarius:18.0.790,Hydra:6.0.391,FMLib:17.0.607.475 definitions=2021-09-02_04,2021-09-02_03,2020-04-07_01 Subject: [dpdk-dev] [PATCH v3 2/8] event/cnxk: add macro to set eventdev ops X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Added a common macro to set eventdev enqueue and dequeue operations to reduce code. Signed-off-by: Shijith Thotton Signed-off-by: Nithin Dabilpuram --- drivers/event/cnxk/cn10k_eventdev.c | 134 +++++--------- drivers/event/cnxk/cn9k_eventdev.c | 268 +++++++--------------------- 2 files changed, 104 insertions(+), 298 deletions(-) diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c index 6f37c5bd23..f14a3edd34 100644 --- a/drivers/event/cnxk/cn10k_eventdev.c +++ b/drivers/event/cnxk/cn10k_eventdev.c @@ -6,6 +6,23 @@ #include "cnxk_eventdev.h" #include "cnxk_worker.h" +#define CN10K_SET_EVDEV_DEQ_OP(dev, deq_op, deq_ops) \ + (deq_op = deq_ops[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] \ + [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] \ + [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] \ + [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] \ + [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] \ + [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]) + +#define CN10K_SET_EVDEV_ENQ_OP(dev, enq_op, enq_ops) \ + (enq_op = \ + enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] \ + [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] \ + [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)] \ + [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)] \ + [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] \ + [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]) + static uint32_t cn10k_sso_gw_mode_wdata(struct cnxk_sso_evdev *dev) { @@ -285,14 +302,14 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) #undef R }; - const event_dequeue_t sso_hws_tmo_deq[2][2][2][2][2][2] = { + const event_dequeue_t sso_hws_deq_tmo[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_##name, NIX_RX_FASTPATH_MODES #undef R }; - const event_dequeue_burst_t sso_hws_tmo_deq_burst[2][2][2][2][2][2] = { + const event_dequeue_burst_t sso_hws_deq_tmo_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_burst_##name, NIX_RX_FASTPATH_MODES @@ -313,7 +330,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) #undef R }; - const event_dequeue_t sso_hws_tmo_deq_seg[2][2][2][2][2][2] = { + const event_dequeue_t sso_hws_deq_tmo_seg[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_##name, NIX_RX_FASTPATH_MODES @@ -321,7 +338,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) }; const event_dequeue_burst_t - sso_hws_tmo_deq_seg_burst[2][2][2][2][2][2] = { + sso_hws_deq_tmo_seg_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_burst_##name, 
NIX_RX_FASTPATH_MODES @@ -350,99 +367,34 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst; event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst; if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) { - event_dev->dequeue = sso_hws_deq_seg - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_deq_seg_burst - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; + CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, + sso_hws_deq_seg); + CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, + sso_hws_deq_seg_burst); if (dev->is_timeout_deq) { - event_dev->dequeue = sso_hws_tmo_deq_seg - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_tmo_deq_seg_burst - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; + CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, + sso_hws_deq_tmo_seg); + CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, + sso_hws_deq_tmo_seg_burst); } } else { - event_dev->dequeue = sso_hws_deq - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_deq_burst - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; + CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_deq); + CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, + sso_hws_deq_burst); if (dev->is_timeout_deq) { - event_dev->dequeue = sso_hws_tmo_deq - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_tmo_deq_burst - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - 
NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; + CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, + sso_hws_deq_tmo); + CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, + sso_hws_deq_tmo_burst); } } - if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) { - /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */ - event_dev->txa_enqueue = sso_hws_tx_adptr_enq_seg - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; - } else { - event_dev->txa_enqueue = sso_hws_tx_adptr_enq - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; - } + if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) + CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, + sso_hws_tx_adptr_enq_seg); + else + CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, + sso_hws_tx_adptr_enq); event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; } @@ -864,7 +816,7 @@ cn10k_sso_init(struct rte_eventdev *event_dev) int rc; if (RTE_CACHE_LINE_SIZE != 64) { - plt_err("Driver not compiled for CN9K"); + plt_err("Driver not compiled for CN10K"); return -EFAULT; } diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c index a69edff195..7dea241fbc 100644 --- a/drivers/event/cnxk/cn9k_eventdev.c +++ b/drivers/event/cnxk/cn9k_eventdev.c @@ -9,6 +9,23 @@ #define CN9K_DUAL_WS_NB_WS 2 #define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id) +#define CN9K_SET_EVDEV_DEQ_OP(dev, deq_op, deq_ops) \ + (deq_op = deq_ops[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] \ + [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] \ + [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] \ + [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] \ + [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] \ + [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]) + +#define CN9K_SET_EVDEV_ENQ_OP(dev, enq_op, enq_ops) \ + (enq_op = \ + enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] \ + [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] \ + [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)] \ + [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)] \ + [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] \ + [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]) + static void cn9k_init_hws_ops(struct cn9k_sso_hws_state *ws, uintptr_t base) { @@ -468,99 +485,33 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst; event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst; if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) { - event_dev->dequeue = sso_hws_deq_seg - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_deq_seg_burst - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] - 
[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_deq_seg); + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, + sso_hws_deq_seg_burst); if (dev->is_timeout_deq) { - event_dev->dequeue = sso_hws_deq_tmo_seg - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_deq_tmo_seg_burst - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, + sso_hws_deq_tmo_seg); + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, + sso_hws_deq_tmo_seg_burst); } } else { - event_dev->dequeue = sso_hws_deq - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_deq_burst - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_deq); + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, + sso_hws_deq_burst); if (dev->is_timeout_deq) { - event_dev->dequeue = sso_hws_deq_tmo - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_deq_tmo_burst - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, + sso_hws_deq_tmo); + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, + sso_hws_deq_tmo_burst); } } - if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) { - /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */ - event_dev->txa_enqueue = sso_hws_tx_adptr_enq_seg - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] - [!!(dev->tx_offloads & 
NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; - } else { - event_dev->txa_enqueue = sso_hws_tx_adptr_enq - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; - } + if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) + CN9K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, + sso_hws_tx_adptr_enq_seg); + else + CN9K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, + sso_hws_tx_adptr_enq); if (dev->dual_ws) { event_dev->enqueue = cn9k_sso_hws_dual_enq; @@ -570,134 +521,37 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) cn9k_sso_hws_dual_enq_fwd_burst; if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) { - event_dev->dequeue = sso_hws_dual_deq_seg - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_dual_deq_seg_burst - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, + sso_hws_dual_deq_seg); + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, + sso_hws_dual_deq_seg_burst); if (dev->is_timeout_deq) { - event_dev->dequeue = sso_hws_dual_deq_tmo_seg - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = - sso_hws_dual_deq_tmo_seg_burst - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_RSS_F)]; + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, + sso_hws_dual_deq_tmo_seg); + CN9K_SET_EVDEV_DEQ_OP( + dev, event_dev->dequeue_burst, + sso_hws_dual_deq_tmo_seg_burst); } } else { - event_dev->dequeue = sso_hws_dual_deq - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_dual_deq_burst - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, + sso_hws_dual_deq); + CN9K_SET_EVDEV_DEQ_OP(dev, 
event_dev->dequeue_burst, + sso_hws_dual_deq_burst); if (dev->is_timeout_deq) { - event_dev->dequeue = sso_hws_dual_deq_tmo - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = - sso_hws_dual_deq_tmo_burst - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_RSS_F)]; + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, + sso_hws_dual_deq_tmo); + CN9K_SET_EVDEV_DEQ_OP( + dev, event_dev->dequeue_burst, + sso_hws_dual_deq_tmo_burst); } } - if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) { - /* [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] - */ - event_dev->txa_enqueue = sso_hws_dual_tx_adptr_enq_seg - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] - [!!(dev->tx_offloads & - NIX_TX_OFFLOAD_MBUF_NOFF_F)] - [!!(dev->tx_offloads & - NIX_TX_OFFLOAD_VLAN_QINQ_F)] - [!!(dev->tx_offloads & - NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] - [!!(dev->tx_offloads & - NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; - } else { - event_dev->txa_enqueue = sso_hws_dual_tx_adptr_enq - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] - [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] - [!!(dev->tx_offloads & - NIX_TX_OFFLOAD_MBUF_NOFF_F)] - [!!(dev->tx_offloads & - NIX_TX_OFFLOAD_VLAN_QINQ_F)] - [!!(dev->tx_offloads & - NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] - [!!(dev->tx_offloads & - NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; - } + if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) + CN9K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, + sso_hws_dual_tx_adptr_enq_seg); + else + CN9K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, + sso_hws_dual_tx_adptr_enq); } event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; From patchwork Thu Sep 2 14:41:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shijith Thotton X-Patchwork-Id: 97828 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 08FAFA0C4C; Thu, 2 Sep 2021 16:43:20 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7B78840E2D; Thu, 2 Sep 2021 16:43:16 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 2E5BF40698 for ; Thu, 2 Sep 2021 16:43:15 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id 18280qPl010845 for ; Thu, 2 Sep 2021 07:43:13 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=1ooP44cncvEiFgrXni0SimD2D09H09GCz47L7qUVpeE=; b=b1RErO0XUZw+aPbxwtYpRTp56CJ/gWxdHUfAwvippxrW7kEVUiPEsr2u9/gG2WJeN/OD 
From: Shijith Thotton
CC: Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Date: Thu, 2 Sep 2021 20:11:51 +0530
Message-ID: <8e365d37c0475a35485b3bce15e8fe749cfe35f6.1630593512.git.sthotton@marvell.com>
Subject: [dpdk-dev] [PATCH v3 3/8] common/cnxk: add API to check CPT IQ is full

Added a flow-control-based check to determine whether the CPT instruction queue (IQ) is full.
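The new helper is consumed later in the series (5/8 calls it before submitting an instruction from the crypto adapter fast path). A minimal usage sketch follows, assuming the cn9k queue-pair and submit helper introduced in 5/8; the wrapper name cpt_try_submit is illustrative.

static inline int
cpt_try_submit(struct cnxk_cpt_qp *qp, struct cpt_inst_s *inst)
{
	/* Back off once the IQ has crossed its flow-control threshold.
	 * fc_thresh is derived from the queue depth in cpt_iq_init():
	 *   fc_hyst_bits = log2(nb_desc) / 2
	 *   fc_thresh    = nb_desc - (nb_desc % (1 << fc_hyst_bits))
	 * so *fc_addr reaching fc_thresh marks the queue as full.
	 */
	if (roc_cpt_is_iq_full(&qp->lf)) {
		rte_errno = EAGAIN;
		return -EAGAIN;
	}

	cn9k_cpt_submit_instruction(inst, qp->lmtline.lmt_base,
				    qp->lmtline.io_addr);
	return 0;
}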
Signed-off-by: Shijith Thotton --- drivers/common/cnxk/roc_cpt.c | 6 ++++-- drivers/common/cnxk/roc_cpt.h | 11 +++++++++++ drivers/common/cnxk/roc_cpt_priv.h | 6 ------ 3 files changed, 15 insertions(+), 8 deletions(-) diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c index c001497f74..5e35d1bdda 100644 --- a/drivers/common/cnxk/roc_cpt.c +++ b/drivers/common/cnxk/roc_cpt.c @@ -464,6 +464,8 @@ cpt_iq_init(struct roc_cpt_lf *lf) plt_write64(lf_q_size.u, lf->rbase + CPT_LF_Q_SIZE); lf->fc_addr = (uint64_t *)addr; + lf->fc_hyst_bits = plt_log2_u32(lf->nb_desc) / 2; + lf->fc_thresh = lf->nb_desc - (lf->nb_desc % (1 << lf->fc_hyst_bits)); } int @@ -809,8 +811,8 @@ roc_cpt_iq_enable(struct roc_cpt_lf *lf) lf_ctl.u = plt_read64(lf->rbase + CPT_LF_CTL); lf_ctl.s.ena = 1; lf_ctl.s.fc_ena = 1; - lf_ctl.s.fc_up_crossing = 1; - lf_ctl.s.fc_hyst_bits = CPT_FC_NUM_HYST_BITS; + lf_ctl.s.fc_up_crossing = 0; + lf_ctl.s.fc_hyst_bits = lf->fc_hyst_bits; plt_write64(lf_ctl.u, lf->rbase + CPT_LF_CTL); cpt_lf_dump(lf); diff --git a/drivers/common/cnxk/roc_cpt.h b/drivers/common/cnxk/roc_cpt.h index 3a2f5b97e1..f0f505a8c2 100644 --- a/drivers/common/cnxk/roc_cpt.h +++ b/drivers/common/cnxk/roc_cpt.h @@ -94,6 +94,8 @@ struct roc_cpt_lf { uint16_t msixoff; uint16_t pf_func; uint64_t *fc_addr; + uint32_t fc_hyst_bits; + uint64_t fc_thresh; uint64_t io_addr; uint8_t *iq_vaddr; struct roc_nix *inl_outb_nix; @@ -121,6 +123,15 @@ struct roc_cpt_rxc_time_cfg { uint16_t zombie_thres; }; +static inline int +roc_cpt_is_iq_full(struct roc_cpt_lf *lf) +{ + if (*lf->fc_addr < lf->fc_thresh) + return 0; + + return 1; +} + int __roc_api roc_cpt_rxc_time_cfg(struct roc_cpt *roc_cpt, struct roc_cpt_rxc_time_cfg *cfg); int __roc_api roc_cpt_dev_init(struct roc_cpt *roc_cpt); diff --git a/drivers/common/cnxk/roc_cpt_priv.h b/drivers/common/cnxk/roc_cpt_priv.h index 0880ec098d..21911e5d79 100644 --- a/drivers/common/cnxk/roc_cpt_priv.h +++ b/drivers/common/cnxk/roc_cpt_priv.h @@ -5,12 +5,6 @@ #ifndef _ROC_CPT_PRIV_H_ #define _ROC_CPT_PRIV_H_ -/* Set number of hystbits to 6. - * This will trigger the FC writes whenever number of outstanding commands in - * the queue becomes multiple of 32. 
- */ -#define CPT_FC_NUM_HYST_BITS 6 - struct cpt { struct plt_pci_device *pci_dev; struct dev dev;
From patchwork Thu Sep 2 14:41:52 2021
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 97829
X-Patchwork-Delegate: gakhil@marvell.com
From: Shijith Thotton
CC: Ankur Dwivedi, Tejasree Kondoj
Date: Thu, 2 Sep 2021 20:11:52 +0530
Message-ID: <0b4d15e3bc64c79ee86b6d54fb322d09264ab2fc.1630593512.git.sthotton@marvell.com>
Subject: [dpdk-dev] [PATCH v3 4/8] drivers: add cnxk crypto adapter eventdev ops

Added the eventdev ops required to initialize the crypto adapter.
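For context, these ops are driven by the rte_event_crypto_adapter library. A hedged application-side sketch of the flow that ends up in cn9k/cn10k_crypto_adapter_qp_add(), which creates the per-queue-pair inflight-request mempool added here; the adapter id, mode and the NULL event argument are illustrative, not taken from the patch.

#include <rte_event_crypto_adapter.h>

static int
bind_cpt_qp(uint8_t evdev_id, uint8_t cdev_id, int32_t qp_id,
	    struct rte_event_port_conf *port_conf)
{
	const uint8_t adapter_id = 0; /* illustrative */
	uint32_t caps = 0;
	int ret;

	ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &caps);
	if (ret)
		return ret;

	ret = rte_event_crypto_adapter_create(adapter_id, evdev_id, port_conf,
					      RTE_EVENT_CRYPTO_ADAPTER_OP_NEW);
	if (ret)
		return ret;

	/* Lands in cnxk_crypto_adapter_qp_add(); passing -1 would add all
	 * queue pairs. A response event must be supplied instead of NULL if
	 * the reported caps require per-queue-pair event binding. */
	return rte_event_crypto_adapter_queue_pair_add(adapter_id, cdev_id,
						       qp_id, NULL);
}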
Signed-off-by: Shijith Thotton --- drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 9 +++ drivers/event/cnxk/cn10k_eventdev.c | 46 ++++++++++++ drivers/event/cnxk/cn9k_eventdev.c | 45 ++++++++++++ drivers/event/cnxk/cnxk_eventdev.c | 94 ++++++++++++++++++++++++ drivers/event/cnxk/cnxk_eventdev.h | 18 +++++ drivers/event/cnxk/meson.build | 2 +- 6 files changed, 213 insertions(+), 1 deletion(-) diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h index c317f4049a..22dc2ab78d 100644 --- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h @@ -55,6 +55,13 @@ struct pending_queue { uint64_t time_out; }; +struct crypto_adpter_info { + bool enabled; + /**< Set if queue pair is added to crypto adapter */ + struct rte_mempool *req_mp; + /**< CPT inflight request mempool */ +}; + struct cnxk_cpt_qp { struct roc_cpt_lf lf; /**< Crypto LF */ @@ -68,6 +75,8 @@ struct cnxk_cpt_qp { /**< Metabuf info required to support operations on the queue pair */ struct roc_cpt_lmtline lmtline; /**< Lmtline information */ + struct crypto_adpter_info ca; + /**< Crypto adapter related info */ }; int cnxk_cpt_dev_config(struct rte_cryptodev *dev, diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c index f14a3edd34..1aacab050c 100644 --- a/drivers/event/cnxk/cn10k_eventdev.c +++ b/drivers/event/cnxk/cn10k_eventdev.c @@ -773,6 +773,48 @@ cn10k_sso_tx_adapter_queue_del(uint8_t id, const struct rte_eventdev *event_dev, return cn10k_sso_updt_tx_adptr_data(event_dev); } +static int +cn10k_crypto_adapter_caps_get(const struct rte_eventdev *event_dev, + const struct rte_cryptodev *cdev, uint32_t *caps) +{ + CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn10k"); + CNXK_VALID_DEV_OR_ERR_RET(cdev->device, "crypto_cn10k"); + + *caps = 0; + + return 0; +} + +static int +cn10k_crypto_adapter_qp_add(const struct rte_eventdev *event_dev, + const struct rte_cryptodev *cdev, + int32_t queue_pair_id, + const struct rte_event *event) +{ + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); + + RTE_SET_USED(event); + + CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn10k"); + CNXK_VALID_DEV_OR_ERR_RET(cdev->device, "crypto_cn10k"); + + dev->is_ca_internal_port = 1; + cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev); + + return cnxk_crypto_adapter_qp_add(event_dev, cdev, queue_pair_id); +} + +static int +cn10k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev, + const struct rte_cryptodev *cdev, + int32_t queue_pair_id) +{ + CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn10k"); + CNXK_VALID_DEV_OR_ERR_RET(cdev->device, "crypto_cn10k"); + + return cnxk_crypto_adapter_qp_del(cdev, queue_pair_id); +} + static struct rte_eventdev_ops cn10k_sso_dev_ops = { .dev_infos_get = cn10k_sso_info_get, .dev_configure = cn10k_sso_dev_configure, @@ -802,6 +844,10 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = { .timer_adapter_caps_get = cnxk_tim_caps_get, + .crypto_adapter_caps_get = cn10k_crypto_adapter_caps_get, + .crypto_adapter_queue_pair_add = cn10k_crypto_adapter_qp_add, + .crypto_adapter_queue_pair_del = cn10k_crypto_adapter_qp_del, + .dump = cnxk_sso_dump, .dev_start = cn10k_sso_start, .dev_stop = cn10k_sso_stop, diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c index 7dea241fbc..c73d81c092 100644 --- a/drivers/event/cnxk/cn9k_eventdev.c +++ b/drivers/event/cnxk/cn9k_eventdev.c @@ -923,6 +923,47 @@ cn9k_sso_tx_adapter_queue_del(uint8_t id, const struct 
rte_eventdev *event_dev, return cn9k_sso_updt_tx_adptr_data(event_dev); } +static int +cn9k_crypto_adapter_caps_get(const struct rte_eventdev *event_dev, + const struct rte_cryptodev *cdev, uint32_t *caps) +{ + CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn9k"); + CNXK_VALID_DEV_OR_ERR_RET(cdev->device, "crypto_cn9k"); + + *caps = 0; + + return 0; +} + +static int +cn9k_crypto_adapter_qp_add(const struct rte_eventdev *event_dev, + const struct rte_cryptodev *cdev, + int32_t queue_pair_id, const struct rte_event *event) +{ + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); + + RTE_SET_USED(event); + + CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn9k"); + CNXK_VALID_DEV_OR_ERR_RET(cdev->device, "crypto_cn9k"); + + dev->is_ca_internal_port = 1; + cn9k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev); + + return cnxk_crypto_adapter_qp_add(event_dev, cdev, queue_pair_id); +} + +static int +cn9k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev, + const struct rte_cryptodev *cdev, + int32_t queue_pair_id) +{ + CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn9k"); + CNXK_VALID_DEV_OR_ERR_RET(cdev->device, "crypto_cn9k"); + + return cnxk_crypto_adapter_qp_del(cdev, queue_pair_id); +} + static struct rte_eventdev_ops cn9k_sso_dev_ops = { .dev_infos_get = cn9k_sso_info_get, .dev_configure = cn9k_sso_dev_configure, @@ -948,6 +989,10 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = { .timer_adapter_caps_get = cnxk_tim_caps_get, + .crypto_adapter_caps_get = cn9k_crypto_adapter_caps_get, + .crypto_adapter_queue_pair_add = cn9k_crypto_adapter_qp_add, + .crypto_adapter_queue_pair_del = cn9k_crypto_adapter_qp_del, + .dump = cnxk_sso_dump, .dev_start = cn9k_sso_start, .dev_stop = cn9k_sso_stop, diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c index cfd7fb971c..9a87239a59 100644 --- a/drivers/event/cnxk/cnxk_eventdev.c +++ b/drivers/event/cnxk/cnxk_eventdev.c @@ -2,8 +2,102 @@ * Copyright(C) 2021 Marvell. 
*/ +#include "cnxk_cryptodev_ops.h" #include "cnxk_eventdev.h" +static int +crypto_adapter_qp_setup(const struct rte_cryptodev *cdev, + struct cnxk_cpt_qp *qp) +{ + char name[RTE_MEMPOOL_NAMESIZE]; + uint32_t cache_size, nb_req; + unsigned int req_size; + + snprintf(name, RTE_MEMPOOL_NAMESIZE, "cnxk_ca_req_%u:%u", + cdev->data->dev_id, qp->lf.lf_id); + req_size = sizeof(struct cpt_inflight_req); + cache_size = RTE_MIN(RTE_MEMPOOL_CACHE_MAX_SIZE, qp->lf.nb_desc / 1.5); + nb_req = RTE_MAX(qp->lf.nb_desc, cache_size * rte_lcore_count()); + qp->ca.req_mp = rte_mempool_create(name, nb_req, req_size, cache_size, + 0, NULL, NULL, NULL, NULL, + rte_socket_id(), 0); + if (qp->ca.req_mp == NULL) + return -ENOMEM; + + qp->ca.enabled = true; + + return 0; +} + +int +cnxk_crypto_adapter_qp_add(const struct rte_eventdev *event_dev, + const struct rte_cryptodev *cdev, + int32_t queue_pair_id) +{ + struct cnxk_sso_evdev *sso_evdev = cnxk_sso_pmd_priv(event_dev); + uint32_t adptr_xae_cnt = 0; + struct cnxk_cpt_qp *qp; + int ret; + + if (queue_pair_id == -1) { + uint16_t qp_id; + + for (qp_id = 0; qp_id < cdev->data->nb_queue_pairs; qp_id++) { + qp = cdev->data->queue_pairs[qp_id]; + ret = crypto_adapter_qp_setup(cdev, qp); + if (ret) { + cnxk_crypto_adapter_qp_del(cdev, -1); + return ret; + } + adptr_xae_cnt += qp->ca.req_mp->size; + } + } else { + qp = cdev->data->queue_pairs[queue_pair_id]; + ret = crypto_adapter_qp_setup(cdev, qp); + if (ret) + return ret; + adptr_xae_cnt = qp->ca.req_mp->size; + } + + /* Update crypto adapter XAE count */ + sso_evdev->adptr_xae_cnt += adptr_xae_cnt; + cnxk_sso_xae_reconfigure((struct rte_eventdev *)(uintptr_t)event_dev); + + return 0; +} + +static int +crypto_adapter_qp_free(struct cnxk_cpt_qp *qp) +{ + rte_mempool_free(qp->ca.req_mp); + qp->ca.enabled = false; + + return 0; +} + +int +cnxk_crypto_adapter_qp_del(const struct rte_cryptodev *cdev, + int32_t queue_pair_id) +{ + struct cnxk_cpt_qp *qp; + + if (queue_pair_id == -1) { + uint16_t qp_id; + + for (qp_id = 0; qp_id < cdev->data->nb_queue_pairs; qp_id++) { + qp = cdev->data->queue_pairs[qp_id]; + if (qp->ca.enabled) + crypto_adapter_qp_free(qp); + } + } else { + qp = cdev->data->queue_pairs[queue_pair_id]; + if (qp->ca.enabled) + crypto_adapter_qp_free(qp); + } + + return 0; +} + void cnxk_sso_info_get(struct cnxk_sso_evdev *dev, struct rte_event_dev_info *dev_info) diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h index fc49b88d6f..8a5c737e4b 100644 --- a/drivers/event/cnxk/cnxk_eventdev.h +++ b/drivers/event/cnxk/cnxk_eventdev.h @@ -5,6 +5,9 @@ #ifndef __CNXK_EVENTDEV_H__ #define __CNXK_EVENTDEV_H__ +#include + +#include #include #include #include @@ -51,6 +54,12 @@ #define CN10K_GW_MODE_PREF 1 #define CN10K_GW_MODE_PREF_WFE 2 +#define CNXK_VALID_DEV_OR_ERR_RET(dev, drv_name) \ + do { \ + if (strncmp(dev->driver->name, drv_name, strlen(drv_name))) \ + return -EINVAL; \ + } while (0) + typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id); typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t *grp_base); typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws); @@ -108,6 +117,8 @@ struct cnxk_sso_evdev { uint8_t dual_ws; /* CN10K */ uint8_t gw_mode; + /* Crypto adapter */ + uint8_t is_ca_internal_port; } __rte_cache_aligned; struct cn10k_sso_hws { @@ -266,6 +277,13 @@ int cnxk_sso_xstats_reset(struct rte_eventdev *event_dev, int16_t queue_port_id, const uint32_t ids[], uint32_t n); +/* Crypto adapter APIs. 
*/ +int cnxk_crypto_adapter_qp_add(const struct rte_eventdev *event_dev, + const struct rte_cryptodev *cdev, + int32_t queue_pair_id); +int cnxk_crypto_adapter_qp_del(const struct rte_cryptodev *cdev, + int32_t queue_pair_id); + /* CN9K */ void cn9k_sso_set_rsrc(void *arg); diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build index 13e0634e86..1155e18ba7 100644 --- a/drivers/event/cnxk/meson.build +++ b/drivers/event/cnxk/meson.build @@ -43,4 +43,4 @@ foreach flag: extra_flags endif endforeach -deps += ['bus_pci', 'common_cnxk', 'net_cnxk'] +deps += ['bus_pci', 'common_cnxk', 'net_cnxk', 'crypto_cnxk'] From patchwork Thu Sep 2 14:41:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shijith Thotton X-Patchwork-Id: 97830 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7D1A4A0C4C; Thu, 2 Sep 2021 16:43:32 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 04ED240698; Thu, 2 Sep 2021 16:43:31 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 7AB8840687 for ; Thu, 2 Sep 2021 16:43:29 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id 1825LInV028455; Thu, 2 Sep 2021 07:43:25 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=Uu/o3UgC+SEMExwrD7a4JsZ0I/6+hF4/nzdMBef3Ikc=; b=FCME3kCQTWfkBBk3EDYTHUCQiZ+jqaG7m5SGS6obUc1K5XzBrkhXjLtsHAmx+qTQ5dR6 4EmRSz174MkEQDYhrmsfMYEnhCHKlmO2TBcIFM3g6bu0v8htnx4dtnV8xau1muTOiftj 0PxF3R9DVbPW3mJ59b0J6yDF0tsC/+VxnlGRMWCzZ1J58+HDHfsCJ0uDRxXO21VFO71J SNBKgLf+GgwSBYLiZvX3pWBA9F3agYXt/GnNcktf2FP/OK9dHzzWu6MAGxC1D3iV3k6T Jn57Bmd5XMw83YzkKQbpJCPOW3b1G+aJB4bKDxzHc43a4V+e48wrpZ4id/UXVigipUl1 1w== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com with ESMTP id 3atrd2j1fb-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 02 Sep 2021 07:43:24 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Thu, 2 Sep 2021 07:43:22 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.18 via Frontend Transport; Thu, 2 Sep 2021 07:43:22 -0700 Received: from localhost.localdomain (unknown [10.28.34.29]) by maili.marvell.com (Postfix) with ESMTP id 1DC633F705E; Thu, 2 Sep 2021 07:43:19 -0700 (PDT) From: Shijith Thotton To: CC: Shijith Thotton , , , , , , Ray Kinsella , Ankur Dwivedi , Tejasree Kondoj Date: Thu, 2 Sep 2021 20:11:53 +0530 Message-ID: <76a88e0ef4296a281ed425ecaebe16b93e71b610.1630593512.git.sthotton@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 X-Proofpoint-GUID: uxOqflJ7Bje11nMwIJMBQjb5GRD9v3gj X-Proofpoint-ORIG-GUID: uxOqflJ7Bje11nMwIJMBQjb5GRD9v3gj X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.182.1,Aquarius:18.0.790,Hydra:6.0.391,FMLib:17.0.607.475 
definitions=2021-09-02_04,2021-09-02_03,2020-04-07_01 Subject: [dpdk-dev] [PATCH v3 5/8] crypto/cnxk: add cn9k crypto adapter fast path ops X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Added crypto adapter enqueue and dequeue operations for CN9K. Signed-off-by: Shijith Thotton Acked-by: Ray Kinsella Acked-by: Anoob Joseph --- drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 235 ++++++++++++++++------- drivers/crypto/cnxk/cn9k_cryptodev_ops.h | 6 + drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 28 +++ drivers/crypto/cnxk/meson.build | 2 +- drivers/crypto/cnxk/version.map | 5 + 5 files changed, 205 insertions(+), 71 deletions(-) diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c index 724965be5b..08f08c8339 100644 --- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c +++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c @@ -4,6 +4,7 @@ #include #include +#include #include "cn9k_cryptodev.h" #include "cn9k_cryptodev_ops.h" @@ -62,27 +63,94 @@ cn9k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op) return NULL; } +static inline int +cn9k_cpt_prepare_instruction(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op, + struct cpt_inflight_req *infl_req, + struct cpt_inst_s *inst) +{ + int ret; + + if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) { + struct rte_crypto_sym_op *sym_op; + struct cnxk_se_sess *sess; + + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + sym_op = op->sym; + sess = get_sym_session_private_data( + sym_op->session, cn9k_cryptodev_driver_id); + ret = cn9k_cpt_sym_inst_fill(qp, op, sess, infl_req, + inst); + } else { + sess = cn9k_cpt_sym_temp_sess_create(qp, op); + if (unlikely(sess == NULL)) { + plt_dp_err("Could not create temp session"); + return -1; + } + + ret = cn9k_cpt_sym_inst_fill(qp, op, sess, infl_req, + inst); + if (unlikely(ret)) { + sym_session_clear(cn9k_cryptodev_driver_id, + op->sym->session); + rte_mempool_put(qp->sess_mp, op->sym->session); + } + } + inst->w7.u64 = sess->cpt_inst_w7; + } else if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) { + struct rte_crypto_asym_op *asym_op; + struct cnxk_ae_sess *sess; + + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + asym_op = op->asym; + sess = get_asym_session_private_data( + asym_op->session, cn9k_cryptodev_driver_id); + ret = cnxk_ae_enqueue(qp, op, infl_req, inst, sess); + inst->w7.u64 = sess->cpt_inst_w7; + } else { + ret = -EINVAL; + } + } else { + ret = -EINVAL; + plt_dp_err("Unsupported op type"); + } + + return ret; +} + +static inline void +cn9k_cpt_submit_instruction(struct cpt_inst_s *inst, uint64_t lmtline, + uint64_t io_addr) +{ + uint64_t lmt_status; + + do { + /* Copy CPT command to LMTLINE */ + roc_lmt_mov((void *)lmtline, inst, 2); + + /* + * Make sure compiler does not reorder memcpy and ldeor. + * LMTST transactions are always flushed from the write + * buffer immediately, a DMB is not required to push out + * LMTSTs. 
+ */ + rte_io_wmb(); + lmt_status = roc_lmt_submit_ldeor(io_addr); + } while (lmt_status == 0); +} + static uint16_t cn9k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) { struct cpt_inflight_req *infl_req; - struct rte_crypto_asym_op *asym_op; - struct rte_crypto_sym_op *sym_op; uint16_t nb_allowed, count = 0; struct cnxk_cpt_qp *qp = qptr; struct pending_queue *pend_q; struct rte_crypto_op *op; struct cpt_inst_s inst; - uint64_t lmt_status; - uint64_t lmtline; - uint64_t io_addr; int ret; pend_q = &qp->pend_q; - lmtline = qp->lmtline.lmt_base; - io_addr = qp->lmtline.io_addr; - inst.w0.u64 = 0; inst.w2.u64 = 0; inst.w3.u64 = 0; @@ -95,77 +163,18 @@ cn9k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) infl_req = &pend_q->req_queue[pend_q->enq_tail]; infl_req->op_flags = 0; - if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) { - struct cnxk_se_sess *sess; - - if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { - sym_op = op->sym; - sess = get_sym_session_private_data( - sym_op->session, - cn9k_cryptodev_driver_id); - ret = cn9k_cpt_sym_inst_fill(qp, op, sess, - infl_req, &inst); - } else { - sess = cn9k_cpt_sym_temp_sess_create(qp, op); - if (unlikely(sess == NULL)) { - plt_dp_err( - "Could not create temp session"); - break; - } - - ret = cn9k_cpt_sym_inst_fill(qp, op, sess, - infl_req, &inst); - if (unlikely(ret)) { - sym_session_clear( - cn9k_cryptodev_driver_id, - op->sym->session); - rte_mempool_put(qp->sess_mp, - op->sym->session); - } - } - inst.w7.u64 = sess->cpt_inst_w7; - } else if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) { - struct cnxk_ae_sess *sess; - - ret = -EINVAL; - if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { - asym_op = op->asym; - sess = get_asym_session_private_data( - asym_op->session, - cn9k_cryptodev_driver_id); - ret = cnxk_ae_enqueue(qp, op, infl_req, &inst, - sess); - inst.w7.u64 = sess->cpt_inst_w7; - } - } else { - plt_dp_err("Unsupported op type"); - break; - } - + ret = cn9k_cpt_prepare_instruction(qp, op, infl_req, &inst); if (unlikely(ret)) { plt_dp_err("Could not process op: %p", op); break; } infl_req->cop = op; - infl_req->res.cn9k.compcode = CPT_COMP_NOT_DONE; inst.res_addr = (uint64_t)&infl_req->res; - do { - /* Copy CPT command to LMTLINE */ - memcpy((void *)lmtline, &inst, sizeof(inst)); - - /* - * Make sure compiler does not reorder memcpy and ldeor. - * LMTST transactions are always flushed from the write - * buffer immediately, a DMB is not required to push out - * LMTSTs. 
- */ - rte_io_wmb(); - lmt_status = roc_lmt_submit_ldeor(io_addr); - } while (lmt_status == 0); - + cn9k_cpt_submit_instruction(&inst, qp->lmtline.lmt_base, + qp->lmtline.io_addr); MOD_INC(pend_q->enq_tail, qp->lf.nb_desc); } @@ -176,6 +185,72 @@ cn9k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) return count; } +uint16_t +cn9k_cpt_crypto_adapter_enqueue(uintptr_t tag_op, struct rte_crypto_op *op) +{ + union rte_event_crypto_metadata *ec_mdata; + struct cpt_inflight_req *infl_req; + struct rte_event *rsp_info; + struct cnxk_cpt_qp *qp; + struct cpt_inst_s inst; + uint8_t cdev_id; + uint16_t qp_id; + int ret; + + ec_mdata = cnxk_event_crypto_mdata_get(op); + if (!ec_mdata) { + rte_errno = EINVAL; + return 0; + } + + cdev_id = ec_mdata->request_info.cdev_id; + qp_id = ec_mdata->request_info.queue_pair_id; + qp = rte_cryptodevs[cdev_id].data->queue_pairs[qp_id]; + rsp_info = &ec_mdata->response_info; + + if (unlikely(!qp->ca.enabled)) { + rte_errno = EINVAL; + return 0; + } + + if (unlikely(rte_mempool_get(qp->ca.req_mp, (void **)&infl_req))) { + rte_errno = ENOMEM; + return 0; + } + infl_req->op_flags = 0; + + ret = cn9k_cpt_prepare_instruction(qp, op, infl_req, &inst); + if (unlikely(ret)) { + plt_dp_err("Could not process op: %p", op); + rte_mempool_put(qp->ca.req_mp, infl_req); + return 0; + } + + infl_req->cop = op; + infl_req->res.cn9k.compcode = CPT_COMP_NOT_DONE; + infl_req->qp = qp; + inst.w0.u64 = 0; + inst.res_addr = (uint64_t)&infl_req->res; + inst.w2.u64 = CNXK_CPT_INST_W2( + (RTE_EVENT_TYPE_CRYPTODEV << 28) | rsp_info->flow_id, + rsp_info->sched_type, rsp_info->queue_id, 0); + inst.w3.u64 = CNXK_CPT_INST_W3(1, infl_req); + + if (roc_cpt_is_iq_full(&qp->lf)) { + rte_mempool_put(qp->ca.req_mp, infl_req); + rte_errno = EAGAIN; + return 0; + } + + if (!rsp_info->sched_type) + roc_sso_hws_head_wait(tag_op); + + cn9k_cpt_submit_instruction(&inst, qp->lmtline.lmt_base, + qp->lmtline.io_addr); + + return 1; +} + static inline void cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop, struct cpt_inflight_req *infl_req) @@ -249,6 +324,26 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop, } } +uintptr_t +cn9k_cpt_crypto_adapter_dequeue(uintptr_t get_work1) +{ + struct cpt_inflight_req *infl_req; + struct rte_crypto_op *cop; + struct cnxk_cpt_qp *qp; + + infl_req = (struct cpt_inflight_req *)(get_work1); + cop = infl_req->cop; + qp = infl_req->qp; + + cn9k_cpt_dequeue_post_process(qp, infl_req->cop, infl_req); + + if (unlikely(infl_req->op_flags & CPT_OP_FLAGS_METABUF)) + rte_mempool_put(qp->meta_info.pool, infl_req->mdata); + + rte_mempool_put(qp->ca.req_mp, infl_req); + return (uintptr_t)cop; +} + static uint16_t cn9k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) { diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.h b/drivers/crypto/cnxk/cn9k_cryptodev_ops.h index 2277f6bcfb..1255de33ae 100644 --- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.h +++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.h @@ -11,4 +11,10 @@ extern struct rte_cryptodev_ops cn9k_cpt_ops; void cn9k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev); +__rte_internal +uint16_t cn9k_cpt_crypto_adapter_enqueue(uintptr_t tag_op, + struct rte_crypto_op *op); +__rte_internal +uintptr_t cn9k_cpt_crypto_adapter_dequeue(uintptr_t get_work1); + #endif /* _CN9K_CRYPTODEV_OPS_H_ */ diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h index 22dc2ab78d..0d02d44799 100644 --- 
a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h @@ -6,6 +6,7 @@ #define _CNXK_CRYPTODEV_OPS_H_ #include +#include #include "roc_api.h" @@ -16,6 +17,13 @@ #define MOD_INC(i, l) ((i) == (l - 1) ? (i) = 0 : (i)++) +/* Macros to form words in CPT instruction */ +#define CNXK_CPT_INST_W2(tag, tt, grp, rvu_pf_func) \ + ((tag) | ((uint64_t)(tt) << 32) | ((uint64_t)(grp) << 34) | \ + ((uint64_t)(rvu_pf_func) << 48)) +#define CNXK_CPT_INST_W3(qord, wqe_ptr) \ + (qord | ((uintptr_t)(wqe_ptr) >> 3) << 3) + struct cpt_qp_meta_info { struct rte_mempool *pool; int mlen; @@ -40,6 +48,7 @@ struct cpt_inflight_req { struct rte_crypto_op *cop; void *mdata; uint8_t op_flags; + void *qp; } __rte_aligned(16); struct pending_queue { @@ -122,4 +131,23 @@ int cnxk_ae_session_cfg(struct rte_cryptodev *dev, struct rte_crypto_asym_xform *xform, struct rte_cryptodev_asym_session *sess, struct rte_mempool *pool); + +static inline union rte_event_crypto_metadata * +cnxk_event_crypto_mdata_get(struct rte_crypto_op *op) +{ + union rte_event_crypto_metadata *ec_mdata; + + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) + ec_mdata = rte_cryptodev_sym_session_get_user_data( + op->sym->session); + else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + op->private_data_offset) + ec_mdata = (union rte_event_crypto_metadata + *)((uint8_t *)op + op->private_data_offset); + else + return NULL; + + return ec_mdata; +} + #endif /* _CNXK_CRYPTODEV_OPS_H_ */ diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build index c56d6cf35d..e076783629 100644 --- a/drivers/crypto/cnxk/meson.build +++ b/drivers/crypto/cnxk/meson.build @@ -20,6 +20,6 @@ sources = files( 'cnxk_cryptodev_sec.c', ) -deps += ['bus_pci', 'common_cnxk', 'security'] +deps += ['bus_pci', 'common_cnxk', 'security', 'eventdev'] includes += include_directories('../../../lib/net') diff --git a/drivers/crypto/cnxk/version.map b/drivers/crypto/cnxk/version.map index ee80c51721..0817743947 100644 --- a/drivers/crypto/cnxk/version.map +++ b/drivers/crypto/cnxk/version.map @@ -1,3 +1,8 @@ INTERNAL { + global: + + cn9k_cpt_crypto_adapter_enqueue; + cn9k_cpt_crypto_adapter_dequeue; + local: *; }; From patchwork Thu Sep 2 14:41:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shijith Thotton X-Patchwork-Id: 97831 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id CB0C9A0C4C; Thu, 2 Sep 2021 16:43:38 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1B07C410F2; Thu, 2 Sep 2021 16:43:32 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 118D140687 for ; Thu, 2 Sep 2021 16:43:29 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id 1825LInX028455 for ; Thu, 2 Sep 2021 07:43:29 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=6DBTdPH7avbE/oRmu1QX0ZJfBFKdEFvzzZLttRJMnNg=; b=Rm+ahy6HU/9BuW92pQ8qie2d6oZIRL84ozP//fxz+KuVsItmyLELuVzqtuPRCO6lLMjK 
qyuj/keLDnSsGkBOtjqEt6to3HZpsYq0tguHf9Ksl4sJ5mW8GleV1vadWzZTAsAPbu8h Zpt6tsdyrQsDku2I2c7pHxfytueBbIUBC59uwxmeEQPwln8gpJOsQkbx5KXv0Hq/ytap 491wVGRUH+8f46k8QpaG5kVAtU/nhNP/ACEkv0VEQZpaX2Fgl6DOJlAVvjtWKNF+fuld D3HRa6QTIgwrNSglrxU9HNB3LPh2u9/6NrxUWjTT+LqczPqOxHLqTDzzRKi/UN8JZgsj rw== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com with ESMTP id 3atrd2j1fn-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Thu, 02 Sep 2021 07:43:28 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Thu, 2 Sep 2021 07:43:27 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.18 via Frontend Transport; Thu, 2 Sep 2021 07:43:27 -0700 Received: from localhost.localdomain (unknown [10.28.34.29]) by maili.marvell.com (Postfix) with ESMTP id 3D6D93F705F; Thu, 2 Sep 2021 07:43:25 -0700 (PDT) From: Shijith Thotton To: CC: Shijith Thotton , , , , , Date: Thu, 2 Sep 2021 20:11:54 +0530 Message-ID: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 X-Proofpoint-GUID: PHIhwFd6NtHCe796M2Bk2lj7PbUZYMpC X-Proofpoint-ORIG-GUID: PHIhwFd6NtHCe796M2Bk2lj7PbUZYMpC X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.182.1,Aquarius:18.0.790,Hydra:6.0.391,FMLib:17.0.607.475 definitions=2021-09-02_04,2021-09-02_03,2020-04-07_01 Subject: [dpdk-dev] [PATCH v3 6/8] event/cnxk: add cn9k crypto adapter fast path ops X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Set crypto adapter enqueue and dequeue operations for CN9K. 
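With the internal port capability now reported by cn9k_crypto_adapter_caps_get(), an application can bind a cryptodev queue pair to the adapter in OP_FORWARD mode and let the driver submit ops to CPT directly. A minimal setup sketch follows; it is illustrative only, uses the generic rte_event_crypto_adapter API of this release, and the device/adapter ids are placeholders, not values taken from this patch:

#include <errno.h>
#include <rte_eventdev.h>
#include <rte_event_crypto_adapter.h>

#define EVDEV_ID   0 /* placeholder ids for illustration */
#define CDEV_ID    0
#define ADAPTER_ID 0

static int
setup_fwd_mode_crypto_adapter(void)
{
	struct rte_event_port_conf port_conf;
	uint32_t caps = 0;
	int ret;

	ret = rte_event_crypto_adapter_caps_get(EVDEV_ID, CDEV_ID, &caps);
	if (ret || !(caps & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD))
		return -ENOTSUP;

	rte_event_port_default_conf_get(EVDEV_ID, 0, &port_conf);
	ret = rte_event_crypto_adapter_create(ADAPTER_ID, EVDEV_ID, &port_conf,
					      RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD);
	if (ret)
		return ret;

	/* -1 binds all queue pairs; no event info is needed since the
	 * QP_EV_BIND capability is not advertised by this driver.
	 */
	ret = rte_event_crypto_adapter_queue_pair_add(ADAPTER_ID, CDEV_ID, -1, NULL);
	if (ret)
		return ret;

	return rte_event_crypto_adapter_start(ADAPTER_ID);
}

Worker cores then submit prepared crypto ops as events through rte_event_crypto_adapter_enqueue(), which resolves to the cn9k_sso_hws_ca_enq()/cn9k_sso_hws_dual_ca_enq() handlers installed below, and CPT completions are picked up by the new _deq_ca dequeue variants.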
Signed-off-by: Shijith Thotton --- drivers/event/cnxk/cn9k_eventdev.c | 94 +++++++++++++++++++- drivers/event/cnxk/cn9k_worker.c | 22 +++++ drivers/event/cnxk/cn9k_worker.h | 41 ++++++++- drivers/event/cnxk/cn9k_worker_deq_ca.c | 65 ++++++++++++++ drivers/event/cnxk/cn9k_worker_dual_deq_ca.c | 75 ++++++++++++++++ drivers/event/cnxk/meson.build | 2 + 6 files changed, 292 insertions(+), 7 deletions(-) create mode 100644 drivers/event/cnxk/cn9k_worker_deq_ca.c create mode 100644 drivers/event/cnxk/cn9k_worker_dual_deq_ca.c diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c index c73d81c092..59a3dc22a3 100644 --- a/drivers/event/cnxk/cn9k_eventdev.c +++ b/drivers/event/cnxk/cn9k_eventdev.c @@ -358,6 +358,20 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) #undef R }; + const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_##name, + NIX_RX_FASTPATH_MODES +#undef R + }; + + const event_dequeue_burst_t sso_hws_deq_ca_burst[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_burst_##name, + NIX_RX_FASTPATH_MODES +#undef R + }; + const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_##name, @@ -385,7 +399,22 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_burst_##name, NIX_RX_FASTPATH_MODES #undef R - }; + }; + + const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_##name, + NIX_RX_FASTPATH_MODES +#undef R + }; + + const event_dequeue_burst_t + sso_hws_deq_ca_seg_burst[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_burst_##name, + NIX_RX_FASTPATH_MODES +#undef R + }; /* Dual WS modes */ const event_dequeue_t sso_hws_dual_deq[2][2][2][2][2][2] = { @@ -415,7 +444,22 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_burst_##name, NIX_RX_FASTPATH_MODES #undef R - }; + }; + + const event_dequeue_t sso_hws_dual_deq_ca[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_##name, + NIX_RX_FASTPATH_MODES +#undef R + }; + + const event_dequeue_burst_t + sso_hws_dual_deq_ca_burst[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_burst_##name, + NIX_RX_FASTPATH_MODES +#undef R + }; const event_dequeue_t sso_hws_dual_deq_seg[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ @@ -447,6 +491,21 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) #undef R }; + const event_dequeue_t sso_hws_dual_deq_ca_seg[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_seg_##name, + NIX_RX_FASTPATH_MODES +#undef R + }; + + const event_dequeue_burst_t + sso_hws_dual_deq_ca_seg_burst[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_seg_burst_##name, + NIX_RX_FASTPATH_MODES +#undef R + }; + /* Tx modes */ const event_tx_adapter_enqueue sso_hws_tx_adptr_enq[2][2][2][2][2][2] = { @@ -494,6 +553,12 @@ 
cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, sso_hws_deq_tmo_seg_burst); } + if (dev->is_ca_internal_port) { + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, + sso_hws_deq_ca_seg); + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, + sso_hws_deq_ca_seg_burst); + } } else { CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_deq); CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, @@ -504,7 +569,14 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, sso_hws_deq_tmo_burst); } + if (dev->is_ca_internal_port) { + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, + sso_hws_deq_ca); + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, + sso_hws_deq_ca_burst); + } } + event_dev->ca_enqueue = cn9k_sso_hws_ca_enq; if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) CN9K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, @@ -519,6 +591,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) event_dev->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst; event_dev->enqueue_forward_burst = cn9k_sso_hws_dual_enq_fwd_burst; + event_dev->ca_enqueue = cn9k_sso_hws_dual_ca_enq; if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) { CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, @@ -532,6 +605,13 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) dev, event_dev->dequeue_burst, sso_hws_dual_deq_tmo_seg_burst); } + if (dev->is_ca_internal_port) { + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, + sso_hws_dual_deq_ca_seg); + CN9K_SET_EVDEV_DEQ_OP( + dev, event_dev->dequeue_burst, + sso_hws_dual_deq_ca_seg_burst); + } } else { CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_dual_deq); @@ -544,6 +624,13 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) dev, event_dev->dequeue_burst, sso_hws_dual_deq_tmo_burst); } + if (dev->is_ca_internal_port) { + CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, + sso_hws_dual_deq_ca); + CN9K_SET_EVDEV_DEQ_OP( + dev, event_dev->dequeue_burst, + sso_hws_dual_deq_ca_burst); + } } if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) @@ -930,7 +1017,8 @@ cn9k_crypto_adapter_caps_get(const struct rte_eventdev *event_dev, CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn9k"); CNXK_VALID_DEV_OR_ERR_RET(cdev->device, "crypto_cn9k"); - *caps = 0; + *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD | + RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA; return 0; } diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c index 538bc4b0b3..32f7cc0343 100644 --- a/drivers/event/cnxk/cn9k_worker.c +++ b/drivers/event/cnxk/cn9k_worker.c @@ -5,6 +5,7 @@ #include "roc_api.h" #include "cn9k_worker.h" +#include "cn9k_cryptodev_ops.h" uint16_t __rte_hot cn9k_sso_hws_enq(void *port, const struct rte_event *ev) @@ -117,3 +118,24 @@ cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[], return 1; } + +uint16_t __rte_hot +cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct cn9k_sso_hws *ws = port; + + RTE_SET_USED(nb_events); + + return cn9k_cpt_crypto_adapter_enqueue(ws->tag_op, ev->event_ptr); +} + +uint16_t __rte_hot +cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct cn9k_sso_hws_dual *dws = port; + + RTE_SET_USED(nb_events); + + return cn9k_cpt_crypto_adapter_enqueue(dws->ws_state[!dws->vws].tag_op, + ev->event_ptr); +} diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h index 9b2a0bf882..3e8f214904 100644 --- 
a/drivers/event/cnxk/cn9k_worker.h +++ b/drivers/event/cnxk/cn9k_worker.h @@ -8,6 +8,7 @@ #include "cnxk_ethdev.h" #include "cnxk_eventdev.h" #include "cnxk_worker.h" +#include "cn9k_cryptodev_ops.h" #include "cn9k_ethdev.h" #include "cn9k_rx.h" @@ -187,8 +188,12 @@ cn9k_sso_hws_dual_get_work(struct cn9k_sso_hws_state *ws, (gw.u64[0] & 0xffffffff); if (CNXK_TT_FROM_EVENT(gw.u64[0]) != SSO_TT_EMPTY) { - if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) == - RTE_EVENT_TYPE_ETHDEV) { + if ((flags & CPT_RX_WQE_F) && + (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) == + RTE_EVENT_TYPE_CRYPTODEV)) { + gw.u64[1] = cn9k_cpt_crypto_adapter_dequeue(gw.u64[1]); + } else if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) == + RTE_EVENT_TYPE_ETHDEV) { uint8_t port = CNXK_SUB_EVENT_FROM_TAG(gw.u64[0]); gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]); @@ -260,8 +265,12 @@ cn9k_sso_hws_get_work(struct cn9k_sso_hws *ws, struct rte_event *ev, (gw.u64[0] & 0xffffffff); if (CNXK_TT_FROM_EVENT(gw.u64[0]) != SSO_TT_EMPTY) { - if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) == - RTE_EVENT_TYPE_ETHDEV) { + if ((flags & CPT_RX_WQE_F) && + (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) == + RTE_EVENT_TYPE_CRYPTODEV)) { + gw.u64[1] = cn9k_cpt_crypto_adapter_dequeue(gw.u64[1]); + } else if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) == + RTE_EVENT_TYPE_ETHDEV) { uint8_t port = CNXK_SUB_EVENT_FROM_TAG(gw.u64[0]); gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]); @@ -366,6 +375,10 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq_new_burst(void *port, uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[], uint16_t nb_events); +uint16_t __rte_hot cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[], + uint16_t nb_events); +uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[], + uint16_t nb_events); #define R(name, f5, f4, f3, f2, f1, f0, flags) \ uint16_t __rte_hot cn9k_sso_hws_deq_##name( \ @@ -378,6 +391,11 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port, uint16_t __rte_hot cn9k_sso_hws_deq_tmo_burst_##name( \ void *port, struct rte_event ev[], uint16_t nb_events, \ uint64_t timeout_ticks); \ + uint16_t __rte_hot cn9k_sso_hws_deq_ca_##name( \ + void *port, struct rte_event *ev, uint64_t timeout_ticks); \ + uint16_t __rte_hot cn9k_sso_hws_deq_ca_burst_##name( \ + void *port, struct rte_event ev[], uint16_t nb_events, \ + uint64_t timeout_ticks); \ uint16_t __rte_hot cn9k_sso_hws_deq_seg_##name( \ void *port, struct rte_event *ev, uint64_t timeout_ticks); \ uint16_t __rte_hot cn9k_sso_hws_deq_seg_burst_##name( \ @@ -386,6 +404,11 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port, uint16_t __rte_hot cn9k_sso_hws_deq_tmo_seg_##name( \ void *port, struct rte_event *ev, uint64_t timeout_ticks); \ uint16_t __rte_hot cn9k_sso_hws_deq_tmo_seg_burst_##name( \ + void *port, struct rte_event ev[], uint16_t nb_events, \ + uint64_t timeout_ticks); \ + uint16_t __rte_hot cn9k_sso_hws_deq_ca_seg_##name( \ + void *port, struct rte_event *ev, uint64_t timeout_ticks); \ + uint16_t __rte_hot cn9k_sso_hws_deq_ca_seg_burst_##name( \ void *port, struct rte_event ev[], uint16_t nb_events, \ uint64_t timeout_ticks); @@ -403,6 +426,11 @@ NIX_RX_FASTPATH_MODES uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_burst_##name( \ void *port, struct rte_event ev[], uint16_t nb_events, \ uint64_t timeout_ticks); \ + uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_##name( \ + void *port, struct rte_event *ev, uint64_t timeout_ticks); \ + uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_burst_##name( \ + void *port, struct rte_event ev[], uint16_t 
nb_events, \ + uint64_t timeout_ticks); \ uint16_t __rte_hot cn9k_sso_hws_dual_deq_seg_##name( \ void *port, struct rte_event *ev, uint64_t timeout_ticks); \ uint16_t __rte_hot cn9k_sso_hws_dual_deq_seg_burst_##name( \ @@ -411,6 +439,11 @@ NIX_RX_FASTPATH_MODES uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_seg_##name( \ void *port, struct rte_event *ev, uint64_t timeout_ticks); \ uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_seg_burst_##name( \ + void *port, struct rte_event ev[], uint16_t nb_events, \ + uint64_t timeout_ticks); \ + uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_seg_##name( \ + void *port, struct rte_event *ev, uint64_t timeout_ticks); \ + uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_seg_burst_##name( \ void *port, struct rte_event ev[], uint16_t nb_events, \ uint64_t timeout_ticks); diff --git a/drivers/event/cnxk/cn9k_worker_deq_ca.c b/drivers/event/cnxk/cn9k_worker_deq_ca.c new file mode 100644 index 0000000000..dbdbba17db --- /dev/null +++ b/drivers/event/cnxk/cn9k_worker_deq_ca.c @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include "cn9k_worker.h" +#include "cnxk_eventdev.h" +#include "cnxk_worker.h" + +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + uint16_t __rte_hot cn9k_sso_hws_deq_ca_##name( \ + void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + { \ + struct cn9k_sso_hws *ws = port; \ + \ + RTE_SET_USED(timeout_ticks); \ + \ + if (ws->swtag_req) { \ + ws->swtag_req = 0; \ + cnxk_sso_hws_swtag_wait(ws->tag_op); \ + return 1; \ + } \ + \ + return cn9k_sso_hws_get_work(ws, ev, flags | CPT_RX_WQE_F, \ + ws->lookup_mem); \ + } \ + \ + uint16_t __rte_hot cn9k_sso_hws_deq_ca_burst_##name( \ + void *port, struct rte_event ev[], uint16_t nb_events, \ + uint64_t timeout_ticks) \ + { \ + RTE_SET_USED(nb_events); \ + \ + return cn9k_sso_hws_deq_ca_##name(port, ev, timeout_ticks); \ + } \ + \ + uint16_t __rte_hot cn9k_sso_hws_deq_ca_seg_##name( \ + void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + { \ + struct cn9k_sso_hws *ws = port; \ + \ + RTE_SET_USED(timeout_ticks); \ + \ + if (ws->swtag_req) { \ + ws->swtag_req = 0; \ + cnxk_sso_hws_swtag_wait(ws->tag_op); \ + return 1; \ + } \ + \ + return cn9k_sso_hws_get_work( \ + ws, ev, flags | NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F, \ + ws->lookup_mem); \ + } \ + \ + uint16_t __rte_hot cn9k_sso_hws_deq_ca_seg_burst_##name( \ + void *port, struct rte_event ev[], uint16_t nb_events, \ + uint64_t timeout_ticks) \ + { \ + RTE_SET_USED(nb_events); \ + \ + return cn9k_sso_hws_deq_ca_seg_##name(port, ev, \ + timeout_ticks); \ + } + +NIX_RX_FASTPATH_MODES +#undef R diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c b/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c new file mode 100644 index 0000000000..dc9191fe80 --- /dev/null +++ b/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. 
+ */ + +#include "cn9k_worker.h" +#include "cnxk_eventdev.h" +#include "cnxk_worker.h" + +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_##name( \ + void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + { \ + struct cn9k_sso_hws_dual *dws = port; \ + uint16_t gw; \ + \ + RTE_SET_USED(timeout_ticks); \ + if (dws->swtag_req) { \ + dws->swtag_req = 0; \ + cnxk_sso_hws_swtag_wait( \ + dws->ws_state[!dws->vws].tag_op); \ + return 1; \ + } \ + \ + gw = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws], \ + &dws->ws_state[!dws->vws], ev, \ + flags | CPT_RX_WQE_F, \ + dws->lookup_mem, dws->tstamp); \ + dws->vws = !dws->vws; \ + return gw; \ + } \ + \ + uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_burst_##name( \ + void *port, struct rte_event ev[], uint16_t nb_events, \ + uint64_t timeout_ticks) \ + { \ + RTE_SET_USED(nb_events); \ + \ + return cn9k_sso_hws_dual_deq_ca_##name(port, ev, \ + timeout_ticks); \ + } \ + \ + uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_seg_##name( \ + void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + { \ + struct cn9k_sso_hws_dual *dws = port; \ + uint16_t gw; \ + \ + RTE_SET_USED(timeout_ticks); \ + if (dws->swtag_req) { \ + dws->swtag_req = 0; \ + cnxk_sso_hws_swtag_wait( \ + dws->ws_state[!dws->vws].tag_op); \ + return 1; \ + } \ + \ + gw = cn9k_sso_hws_dual_get_work( \ + &dws->ws_state[dws->vws], &dws->ws_state[!dws->vws], \ + ev, flags | NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F, \ + dws->lookup_mem, dws->tstamp); \ + dws->vws = !dws->vws; \ + return gw; \ + } \ + \ + uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_seg_burst_##name( \ + void *port, struct rte_event ev[], uint16_t nb_events, \ + uint64_t timeout_ticks) \ + { \ + RTE_SET_USED(nb_events); \ + \ + return cn9k_sso_hws_dual_deq_ca_seg_##name(port, ev, \ + timeout_ticks); \ + } + +NIX_RX_FASTPATH_MODES +#undef R diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build index 1155e18ba7..ffbc0ce0f4 100644 --- a/drivers/event/cnxk/meson.build +++ b/drivers/event/cnxk/meson.build @@ -13,9 +13,11 @@ sources = files( 'cn9k_worker.c', 'cn9k_worker_deq.c', 'cn9k_worker_deq_burst.c', + 'cn9k_worker_deq_ca.c', 'cn9k_worker_deq_tmo.c', 'cn9k_worker_dual_deq.c', 'cn9k_worker_dual_deq_burst.c', + 'cn9k_worker_dual_deq_ca.c', 'cn9k_worker_dual_deq_tmo.c', 'cn9k_worker_tx_enq.c', 'cn9k_worker_tx_enq_seg.c', From patchwork Thu Sep 2 14:41:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shijith Thotton X-Patchwork-Id: 97832 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id DCFCBA0C4C; Thu, 2 Sep 2021 16:43:47 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BC84F4111C; Thu, 2 Sep 2021 16:43:37 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id D57F440687 for ; Thu, 2 Sep 2021 16:43:36 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id 18281Et9011520; Thu, 2 Sep 2021 07:43:34 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : 
content-transfer-encoding : content-type; s=pfpt0220; bh=VGVMlgpYzgcFf/Q4BSQ0srWT/SmYtnyls9knscHUgrQ=; b=VQgtNNYoyJ/jaPwVA2KxhUqQQpEVH6yrxpD51oML9NyCsLupd4yyeDVvQrfQfuGkyK/t YdPfdakHV+Zo+ytFK0vOwH+oik8+x86X4SiY0IX/e04YsmdKKcOGEGoauJB9xsijZ9JG PWGKUucqMeVpMoidOcAEfaZAd96GjCjvGyFGRy6PqV/1U2rBuXYAVAZP/A9wKouKnyml dKmMu1qXHEPAgCgaAx/JhJUQFBI/VapsQ23tbG65oKRkxx7sFCaPnt5kkSrjO5niorGh EWazD67myeWwVBDHsvX4pg2E7qHnokwCK/k/CYZB93JPzT6DV6jTOdUXcs1ctcTSDDci VQ== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com with ESMTP id 3attqmhcah-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 02 Sep 2021 07:43:34 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Thu, 2 Sep 2021 07:43:31 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.18 via Frontend Transport; Thu, 2 Sep 2021 07:43:31 -0700 Received: from localhost.localdomain (unknown [10.28.34.29]) by maili.marvell.com (Postfix) with ESMTP id 263A75B692A; Thu, 2 Sep 2021 07:43:28 -0700 (PDT) From: Shijith Thotton To: CC: Shijith Thotton , , , , , , Ray Kinsella , Ankur Dwivedi , Tejasree Kondoj Date: Thu, 2 Sep 2021 20:11:55 +0530 Message-ID: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 X-Proofpoint-GUID: IPdUWCP7f7F74gSQEIFyXvkfHycCxSDq X-Proofpoint-ORIG-GUID: IPdUWCP7f7F74gSQEIFyXvkfHycCxSDq X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.182.1,Aquarius:18.0.790,Hydra:6.0.391,FMLib:17.0.607.475 definitions=2021-09-02_04,2021-09-02_03,2020-04-07_01 Subject: [dpdk-dev] [PATCH v3 7/8] crypto/cnxk: add cn10k crypto adapter fast path ops X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Added crypto adapter enqueue and dequeue operations for CN10K. 
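On the completion side, CPT posts the result to SSO and the SSO dequeue fast path (wired up in the event/cnxk driver patches of this series) calls cn10k_cpt_crypto_adapter_dequeue() to post-process the CPT result, free the inflight request and replace the WQE pointer with the finished rte_crypto_op before handing the event to the application. A minimal worker sketch follows; it is illustrative only, and the evdev/port ids as well as the final handling of the op are placeholders:

#include <rte_crypto.h>
#include <rte_eventdev.h>

static void
poll_crypto_completions(uint8_t evdev_id, uint8_t port_id)
{
	struct rte_event ev;

	while (rte_event_dequeue_burst(evdev_id, port_id, &ev, 1, 0) != 0) {
		if (ev.event_type != RTE_EVENT_TYPE_CRYPTODEV)
			continue;
		/*
		 * event_ptr already points to the completed rte_crypto_op:
		 * the driver has converted the CPT completion back into the
		 * original op and set op->status accordingly.
		 */
		struct rte_crypto_op *op = ev.event_ptr;

		/* ... consume op->status / op->sym->m_src here ... */
		rte_crypto_op_free(op);
	}
}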
Signed-off-by: Shijith Thotton Acked-by: Ray Kinsella Acked-by: Anoob Joseph --- drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 95 +++++++++++++++++++++++ drivers/crypto/cnxk/cn10k_cryptodev_ops.h | 6 ++ drivers/crypto/cnxk/version.map | 2 + 3 files changed, 103 insertions(+) diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c index 880009605e..28055aceed 100644 --- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c +++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c @@ -4,6 +4,7 @@ #include #include +#include #include #include "cn10k_cryptodev.h" @@ -256,6 +257,80 @@ cn10k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) return count + i; } +uint16_t +cn10k_cpt_crypto_adapter_enqueue(uintptr_t tag_op, struct rte_crypto_op *op) +{ + union rte_event_crypto_metadata *ec_mdata; + struct cpt_inflight_req *infl_req; + struct rte_event *rsp_info; + uint64_t lmt_base, lmt_arg; + struct cpt_inst_s *inst; + struct cnxk_cpt_qp *qp; + uint8_t cdev_id; + uint16_t lmt_id; + uint16_t qp_id; + int ret; + + ec_mdata = cnxk_event_crypto_mdata_get(op); + if (!ec_mdata) { + rte_errno = EINVAL; + return 0; + } + + cdev_id = ec_mdata->request_info.cdev_id; + qp_id = ec_mdata->request_info.queue_pair_id; + qp = rte_cryptodevs[cdev_id].data->queue_pairs[qp_id]; + rsp_info = &ec_mdata->response_info; + + if (unlikely(!qp->ca.enabled)) { + rte_errno = EINVAL; + return 0; + } + + if (unlikely(rte_mempool_get(qp->ca.req_mp, (void **)&infl_req))) { + rte_errno = ENOMEM; + return 0; + } + infl_req->op_flags = 0; + + lmt_base = qp->lmtline.lmt_base; + ROC_LMT_BASE_ID_GET(lmt_base, lmt_id); + inst = (struct cpt_inst_s *)lmt_base; + + ret = cn10k_cpt_fill_inst(qp, &op, inst, infl_req); + if (unlikely(ret != 1)) { + plt_dp_err("Could not process op: %p", op); + rte_mempool_put(qp->ca.req_mp, infl_req); + return 0; + } + + infl_req->cop = op; + infl_req->res.cn10k.compcode = CPT_COMP_NOT_DONE; + infl_req->qp = qp; + inst->w0.u64 = 0; + inst->res_addr = (uint64_t)&infl_req->res; + inst->w2.u64 = CNXK_CPT_INST_W2( + (RTE_EVENT_TYPE_CRYPTODEV << 28) | rsp_info->flow_id, + rsp_info->sched_type, rsp_info->queue_id, 0); + inst->w3.u64 = CNXK_CPT_INST_W3(1, infl_req); + + if (roc_cpt_is_iq_full(&qp->lf)) { + rte_mempool_put(qp->ca.req_mp, infl_req); + rte_errno = EAGAIN; + return 0; + } + + if (!rsp_info->sched_type) + roc_sso_hws_head_wait(tag_op); + + lmt_arg = ROC_CN10K_CPT_LMT_ARG | (uint64_t)lmt_id; + roc_lmt_submit_steorl(lmt_arg, qp->lmtline.io_addr); + + rte_io_wmb(); + + return 1; +} + static inline void cn10k_cpt_sec_post_process(struct rte_crypto_op *cop, struct cpt_cn10k_res_s *res) @@ -347,6 +422,26 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, } } +uintptr_t +cn10k_cpt_crypto_adapter_dequeue(uintptr_t get_work1) +{ + struct cpt_inflight_req *infl_req; + struct rte_crypto_op *cop; + struct cnxk_cpt_qp *qp; + + infl_req = (struct cpt_inflight_req *)(get_work1); + cop = infl_req->cop; + qp = infl_req->qp; + + cn10k_cpt_dequeue_post_process(qp, infl_req->cop, infl_req); + + if (unlikely(infl_req->op_flags & CPT_OP_FLAGS_METABUF)) + rte_mempool_put(qp->meta_info.pool, infl_req->mdata); + + rte_mempool_put(qp->ca.req_mp, infl_req); + return (uintptr_t)cop; +} + static uint16_t cn10k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) { diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h index d500b7d227..b03d2eee14 100644 --- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h +++ 
b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h @@ -12,4 +12,10 @@ extern struct rte_cryptodev_ops cn10k_cpt_ops; void cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev); +__rte_internal +uint16_t cn10k_cpt_crypto_adapter_enqueue(uintptr_t tag_op, + struct rte_crypto_op *op); +__rte_internal +uintptr_t cn10k_cpt_crypto_adapter_dequeue(uintptr_t get_work1); + #endif /* _CN10K_CRYPTODEV_OPS_H_ */ diff --git a/drivers/crypto/cnxk/version.map b/drivers/crypto/cnxk/version.map index 0817743947..0178c416ec 100644 --- a/drivers/crypto/cnxk/version.map +++ b/drivers/crypto/cnxk/version.map @@ -3,6 +3,8 @@ INTERNAL { cn9k_cpt_crypto_adapter_enqueue; cn9k_cpt_crypto_adapter_dequeue; + cn10k_cpt_crypto_adapter_enqueue; + cn10k_cpt_crypto_adapter_dequeue; local: *; }; From patchwork Thu Sep 2 14:41:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shijith Thotton X-Patchwork-Id: 97833 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BCB6FA0C4C; Thu, 2 Sep 2021 16:43:55 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4E04D410D8; Thu, 2 Sep 2021 16:43:41 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 8AEF040687 for ; Thu, 2 Sep 2021 16:43:39 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id 18280S3b010729 for ; Thu, 2 Sep 2021 07:43:39 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=Hxd//BBHg/GIYRB8gMqWB7IDHaPVrE/ft6fmOIf51hg=; b=OjfQ4Vf+JRv0EBg2S+w8DpB17sDJHJORhwUwttZTC3qTM59S+iyKsyxi1FEXDyIenayz j8r1cRlgk3gomDNK1yKeUKLNEzzwQKBJ7tCD71IATrKGGjhvZo43mJBjObFkeNQ5FP14 jr2Nq2o/sDCHBIpRwudYb5Rz2JHJrC1oYQYoMnLX3Ry/yGka3YANWbULMT52UGp2XaGU tFfPylTlGbL6NT05DzcM9IBE/BT7EUSHBrKPptLYtD6w/PwhWCaSWicrfYwi7HUkvGiU x4ltgi1Vl+/JjTq9vgCNcLVO7L72UHi5FnoOx5D68zAe/U0ixWj19kVPLtwoE6lPnxcb Tg== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com with ESMTP id 3attqmhcaw-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Thu, 02 Sep 2021 07:43:38 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Thu, 2 Sep 2021 07:43:36 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.18 via Frontend Transport; Thu, 2 Sep 2021 07:43:36 -0700 Received: from localhost.localdomain (unknown [10.28.34.29]) by maili.marvell.com (Postfix) with ESMTP id 7881C3F705E; Thu, 2 Sep 2021 07:43:34 -0700 (PDT) From: Shijith Thotton To: CC: Shijith Thotton , , , , , Date: Thu, 2 Sep 2021 20:11:56 +0530 Message-ID: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 X-Proofpoint-GUID: 3fux8e3FZGbbvNJdL-_AEJzkvZqiKGjQ X-Proofpoint-ORIG-GUID: 3fux8e3FZGbbvNJdL-_AEJzkvZqiKGjQ X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.182.1,Aquarius:18.0.790,Hydra:6.0.391,FMLib:17.0.607.475 
definitions=2021-09-02_04,2021-09-02_03,2020-04-07_01 Subject: [dpdk-dev] [PATCH v3 8/8] event/cnxk: add cn10k crypto adapter fast path ops X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Set crypto adapter enqueue and dequeue operations for CN10K. Signed-off-by: Shijith Thotton --- doc/guides/rel_notes/release_21_11.rst | 3 ++ drivers/event/cnxk/cn10k_eventdev.c | 45 +++++++++++++++- drivers/event/cnxk/cn10k_worker.c | 11 ++++ drivers/event/cnxk/cn10k_worker.h | 21 +++++++- drivers/event/cnxk/cn10k_worker_deq_ca.c | 65 ++++++++++++++++++++++++ drivers/event/cnxk/meson.build | 1 + 6 files changed, 143 insertions(+), 3 deletions(-) create mode 100644 drivers/event/cnxk/cn10k_worker_deq_ca.c diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index 70dd1c52f7..6d439693f7 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -61,6 +61,9 @@ New Features * Added transport mode in lookaside protocol (IPsec). * Added UDP encapsulation in lookaside protocol (IPsec). +* **Added support for event crypto adapter on Marvell CN10K and CN9K.** + + * Added event crypto adapter OP_FORWARD mode support. Removed Items ------------- diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c index 1aacab050c..8af273a01b 100644 --- a/drivers/event/cnxk/cn10k_eventdev.c +++ b/drivers/event/cnxk/cn10k_eventdev.c @@ -316,6 +316,20 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) #undef R }; + const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_##name, + NIX_RX_FASTPATH_MODES +#undef R + }; + + const event_dequeue_burst_t sso_hws_deq_ca_burst[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_burst_##name, + NIX_RX_FASTPATH_MODES +#undef R + }; + const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_##name, @@ -345,6 +359,21 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) #undef R }; + const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_seg_##name, + NIX_RX_FASTPATH_MODES +#undef R + }; + + const event_dequeue_burst_t + sso_hws_deq_ca_seg_burst[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_seg_burst_##name, + NIX_RX_FASTPATH_MODES +#undef R + }; + /* Tx modes */ const event_tx_adapter_enqueue sso_hws_tx_adptr_enq[2][2][2][2][2][2] = { @@ -377,6 +406,12 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, sso_hws_deq_tmo_seg_burst); } + if (dev->is_ca_internal_port) { + CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, + sso_hws_deq_ca_seg); + CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, + sso_hws_deq_ca_seg_burst); + } } else { CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_deq); CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, @@ -387,7 +422,14 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, 
sso_hws_deq_tmo_burst); } + if (dev->is_ca_internal_port) { + CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, + sso_hws_deq_ca); + CN10K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, + sso_hws_deq_ca_burst); + } } + event_dev->ca_enqueue = cn10k_sso_hws_ca_enq; if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, @@ -780,7 +822,8 @@ cn10k_crypto_adapter_caps_get(const struct rte_eventdev *event_dev, CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn10k"); CNXK_VALID_DEV_OR_ERR_RET(cdev->device, "crypto_cn10k"); - *caps = 0; + *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD | + RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA; return 0; } diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c index c71aa37327..975a22336a 100644 --- a/drivers/event/cnxk/cn10k_worker.c +++ b/drivers/event/cnxk/cn10k_worker.c @@ -60,3 +60,14 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[], return 1; } + +uint16_t __rte_hot +cn10k_sso_hws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct cn10k_sso_hws *ws = port; + + RTE_SET_USED(nb_events); + + return cn10k_cpt_crypto_adapter_enqueue(ws->base + SSOW_LF_GWS_TAG, + ev->event_ptr); +} diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h index 9cc0992063..e5ed043212 100644 --- a/drivers/event/cnxk/cn10k_worker.h +++ b/drivers/event/cnxk/cn10k_worker.h @@ -10,6 +10,7 @@ #include "cnxk_ethdev.h" #include "cnxk_eventdev.h" #include "cnxk_worker.h" +#include "cn10k_cryptodev_ops.h" #include "cn10k_ethdev.h" #include "cn10k_rx.h" @@ -179,8 +180,12 @@ cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev, (gw.u64[0] & 0xffffffff); if (CNXK_TT_FROM_EVENT(gw.u64[0]) != SSO_TT_EMPTY) { - if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) == - RTE_EVENT_TYPE_ETHDEV) { + if ((flags & CPT_RX_WQE_F) && + (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) == + RTE_EVENT_TYPE_CRYPTODEV)) { + gw.u64[1] = cn10k_cpt_crypto_adapter_dequeue(gw.u64[1]); + } else if (CNXK_EVENT_TYPE_FROM_TAG(gw.u64[0]) == + RTE_EVENT_TYPE_ETHDEV) { uint8_t port = CNXK_SUB_EVENT_FROM_TAG(gw.u64[0]); gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]); @@ -282,6 +287,8 @@ uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port, uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[], uint16_t nb_events); +uint16_t __rte_hot cn10k_sso_hws_ca_enq(void *port, struct rte_event ev[], + uint16_t nb_events); #define R(name, f5, f4, f3, f2, f1, f0, flags) \ uint16_t __rte_hot cn10k_sso_hws_deq_##name( \ @@ -294,6 +301,11 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port, uint16_t __rte_hot cn10k_sso_hws_deq_tmo_burst_##name( \ void *port, struct rte_event ev[], uint16_t nb_events, \ uint64_t timeout_ticks); \ + uint16_t __rte_hot cn10k_sso_hws_deq_ca_##name( \ + void *port, struct rte_event *ev, uint64_t timeout_ticks); \ + uint16_t __rte_hot cn10k_sso_hws_deq_ca_burst_##name( \ + void *port, struct rte_event ev[], uint16_t nb_events, \ + uint64_t timeout_ticks); \ uint16_t __rte_hot cn10k_sso_hws_deq_seg_##name( \ void *port, struct rte_event *ev, uint64_t timeout_ticks); \ uint16_t __rte_hot cn10k_sso_hws_deq_seg_burst_##name( \ @@ -302,6 +314,11 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port, uint16_t __rte_hot cn10k_sso_hws_deq_tmo_seg_##name( \ void *port, struct rte_event *ev, uint64_t timeout_ticks); \ uint16_t __rte_hot cn10k_sso_hws_deq_tmo_seg_burst_##name( \ + void *port, struct rte_event ev[], uint16_t 
nb_events, \ + uint64_t timeout_ticks); \ + uint16_t __rte_hot cn10k_sso_hws_deq_ca_seg_##name( \ + void *port, struct rte_event *ev, uint64_t timeout_ticks); \ + uint16_t __rte_hot cn10k_sso_hws_deq_ca_seg_burst_##name( \ void *port, struct rte_event ev[], uint16_t nb_events, \ uint64_t timeout_ticks); diff --git a/drivers/event/cnxk/cn10k_worker_deq_ca.c b/drivers/event/cnxk/cn10k_worker_deq_ca.c new file mode 100644 index 0000000000..c90f6a9588 --- /dev/null +++ b/drivers/event/cnxk/cn10k_worker_deq_ca.c @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include "cn10k_worker.h" +#include "cnxk_eventdev.h" +#include "cnxk_worker.h" + +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + uint16_t __rte_hot cn10k_sso_hws_deq_ca_##name( \ + void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + { \ + struct cn10k_sso_hws *ws = port; \ + \ + RTE_SET_USED(timeout_ticks); \ + \ + if (ws->swtag_req) { \ + ws->swtag_req = 0; \ + cnxk_sso_hws_swtag_wait(ws->base + SSOW_LF_GWS_WQE0); \ + return 1; \ + } \ + \ + return cn10k_sso_hws_get_work(ws, ev, flags | CPT_RX_WQE_F, \ + ws->lookup_mem); \ + } \ + \ + uint16_t __rte_hot cn10k_sso_hws_deq_ca_burst_##name( \ + void *port, struct rte_event ev[], uint16_t nb_events, \ + uint64_t timeout_ticks) \ + { \ + RTE_SET_USED(nb_events); \ + \ + return cn10k_sso_hws_deq_ca_##name(port, ev, timeout_ticks); \ + } \ + \ + uint16_t __rte_hot cn10k_sso_hws_deq_ca_seg_##name( \ + void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + { \ + struct cn10k_sso_hws *ws = port; \ + \ + RTE_SET_USED(timeout_ticks); \ + \ + if (ws->swtag_req) { \ + ws->swtag_req = 0; \ + cnxk_sso_hws_swtag_wait(ws->base + SSOW_LF_GWS_WQE0); \ + return 1; \ + } \ + \ + return cn10k_sso_hws_get_work( \ + ws, ev, flags | NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F, \ + ws->lookup_mem); \ + } \ + \ + uint16_t __rte_hot cn10k_sso_hws_deq_ca_seg_burst_##name( \ + void *port, struct rte_event ev[], uint16_t nb_events, \ + uint64_t timeout_ticks) \ + { \ + RTE_SET_USED(nb_events); \ + \ + return cn10k_sso_hws_deq_ca_seg_##name(port, ev, \ + timeout_ticks); \ + } + +NIX_RX_FASTPATH_MODES +#undef R diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build index ffbc0ce0f4..6f8b23c8e8 100644 --- a/drivers/event/cnxk/meson.build +++ b/drivers/event/cnxk/meson.build @@ -27,6 +27,7 @@ sources = files( 'cn10k_worker.c', 'cn10k_worker_deq.c', 'cn10k_worker_deq_burst.c', + 'cn10k_worker_deq_ca.c', 'cn10k_worker_deq_tmo.c', 'cn10k_worker_tx_enq.c', 'cn10k_worker_tx_enq_seg.c',