From patchwork Tue Apr 11 10:04:07 2023
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 125927
X-Patchwork-Delegate: gakhil@marvell.com
From: Nithin Dabilpuram
To: Thomas Monjalon, Akhil Goyal
Subject: [PATCH 1/3] security: introduce out of place support for inline ingress
Date: Tue, 11 Apr 2023 15:34:07 +0530
Message-ID: <20230411100410.1174495-1-ndabilpuram@marvell.com>
In-Reply-To: <20230309085645.1630826-1-ndabilpuram@marvell.com>
References: <20230309085645.1630826-1-ndabilpuram@marvell.com>

Similar to the out-of-place (OOP) processing support that exists for
lookaside crypto/security sessions, inline ingress security sessions may
also need out-of-place processing in use cases where the original
encrypted packet must be retained for post-processing. So, for NICs that
have such hardware support, a new SA option is provided to indicate
whether OOP needs to be enabled on an inline ingress security session.
Since for inline ingress sessions the packet is not received by the CPU
until processing is done, only a per-SA option is possible, not a
per-packet option as with lookaside sessions.
Signed-off-by: Nithin Dabilpuram
---
 devtools/libabigail.abignore       |  4 +++
 lib/security/rte_security.c        | 17 +++++++++++++
 lib/security/rte_security.h        | 39 +++++++++++++++++++++++++++++-
 lib/security/rte_security_driver.h |  8 ++++++
 lib/security/version.map           |  2 ++
 5 files changed, 69 insertions(+), 1 deletion(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 3ff51509de..414baac060 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -40,3 +40,7 @@
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
 ; Temporary exceptions till next major ABI version ;
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
+; Ignore change to reserved opts for new SA option
+[suppress_type]
+	name = rte_security_ipsec_sa_options
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c
index e102c55e55..c2199dd8db 100644
--- a/lib/security/rte_security.c
+++ b/lib/security/rte_security.c
@@ -27,7 +27,10 @@
 } while (0)

 #define RTE_SECURITY_DYNFIELD_NAME "rte_security_dynfield_metadata"
+#define RTE_SECURITY_OOP_DYNFIELD_NAME "rte_security_oop_dynfield_metadata"
+
 int rte_security_dynfield_offset = -1;
+int rte_security_oop_dynfield_offset = -1;

 int
 rte_security_dynfield_register(void)
@@ -42,6 +45,20 @@ rte_security_dynfield_register(void)
 	return rte_security_dynfield_offset;
 }

+int
+rte_security_oop_dynfield_register(void)
+{
+	static const struct rte_mbuf_dynfield dynfield_desc = {
+		.name = RTE_SECURITY_OOP_DYNFIELD_NAME,
+		.size = sizeof(rte_security_oop_dynfield_t),
+		.align = __alignof__(rte_security_oop_dynfield_t),
+	};
+
+	rte_security_oop_dynfield_offset =
+		rte_mbuf_dynfield_register(&dynfield_desc);
+	return rte_security_oop_dynfield_offset;
+}
+
 void *
 rte_security_session_create(struct rte_security_ctx *instance,
 			    struct rte_security_session_conf *conf,
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 4bacf9fcd9..866cd4e8ee 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -275,6 +275,17 @@ struct rte_security_ipsec_sa_options {
 	 */
 	uint32_t ip_reassembly_en : 1;

+	/** Enable out of place processing on inline inbound packets.
+	 *
+	 * * 1: Enable driver to perform out-of-place (OOP) processing for this
+	 *      inline inbound SA if supported by the driver. The PMD needs to
+	 *      register the mbuf dynamic field using
+	 *      rte_security_oop_dynfield_register(); security session creation
+	 *      would fail if the dynfield is not registered successfully.
+	 * * 0: Disable OOP processing for this session (default).
+	 */
+	uint32_t ingress_oop : 1;
+
 	/** Reserved bit fields for future extension
 	 *
 	 * User should ensure reserved_opts is cleared as it may change in
@@ -282,7 +293,7 @@ struct rte_security_ipsec_sa_options {
 	 *
 	 * Note: Reduce number of bits in reserved_opts for every new option.
 	 */
-	uint32_t reserved_opts : 17;
+	uint32_t reserved_opts : 16;
 };

 /** IPSec security association direction */
@@ -812,6 +823,13 @@ typedef uint64_t rte_security_dynfield_t;
 /** Dynamic mbuf field for device-specific metadata */
 extern int rte_security_dynfield_offset;

+/** Out-of-place (OOP) processing field type */
+typedef struct rte_mbuf *rte_security_oop_dynfield_t;
+/** Dynamic mbuf field for pointer to original mbuf for
+ * OOP processing session.
+ */
+extern int rte_security_oop_dynfield_offset;
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice
@@ -834,6 +852,25 @@ rte_security_dynfield(struct rte_mbuf *mbuf)
 			rte_security_dynfield_t *);
 }

+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get pointer to mbuf field for original mbuf pointer when
+ * out-of-place (OOP) processing is enabled in security session.
+ *
+ * @param mbuf packet to access
+ * @return pointer to mbuf field
+ */
+__rte_experimental
+static inline rte_security_oop_dynfield_t *
+rte_security_oop_dynfield(struct rte_mbuf *mbuf)
+{
+	return RTE_MBUF_DYNFIELD(mbuf,
+			rte_security_oop_dynfield_offset,
+			rte_security_oop_dynfield_t *);
+}
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice
diff --git a/lib/security/rte_security_driver.h b/lib/security/rte_security_driver.h
index 421e6f7780..91e7786ab7 100644
--- a/lib/security/rte_security_driver.h
+++ b/lib/security/rte_security_driver.h
@@ -190,6 +190,14 @@ typedef int (*security_macsec_sa_stats_get_t)(void *device, uint16_t sa_id,
 __rte_internal
 int rte_security_dynfield_register(void);

+/**
+ * @internal
+ * Register mbuf dynamic field for security inline ingress out-of-place (OOP)
+ * processing.
+ */
+__rte_internal
+int rte_security_oop_dynfield_register(void);
+
 /**
  * Update the mbuf with provided metadata.
  *
diff --git a/lib/security/version.map b/lib/security/version.map
index 07dcce9ffb..59a95f40bd 100644
--- a/lib/security/version.map
+++ b/lib/security/version.map
@@ -23,10 +23,12 @@ EXPERIMENTAL {
 	rte_security_macsec_sc_stats_get;
 	rte_security_session_stats_get;
 	rte_security_session_update;
+	rte_security_oop_dynfield_offset;
 };

 INTERNAL {
 	global:

 	rte_security_dynfield_register;
+	rte_security_oop_dynfield_register;
 };

From patchwork Tue Apr 11 10:04:08 2023
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 125928
X-Patchwork-Delegate: gakhil@marvell.com
From: Nithin Dabilpuram
To: Pavan Nikhilesh, Shijith Thotton, Nithin Kumar Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Subject: [PATCH 2/3] net/cnxk: support inline ingress out of place session
Date: Tue, 11 Apr 2023 15:34:08 +0530
Message-ID: <20230411100410.1174495-2-ndabilpuram@marvell.com>
In-Reply-To: <20230411100410.1174495-1-ndabilpuram@marvell.com>
References: <20230309085645.1630826-1-ndabilpuram@marvell.com> <20230411100410.1174495-1-ndabilpuram@marvell.com>

Add support for inline ingress sessions with out-of-place processing.

Signed-off-by: Nithin Dabilpuram
---
Depends-on: series-27660 ("common/cnxk: allocate dynamic BPIDs")

 drivers/event/cnxk/cn10k_worker.h   |  28 ++++-
 drivers/net/cnxk/cn10k_ethdev.c     |  13 +-
 drivers/net/cnxk/cn10k_ethdev_sec.c |  43 +++++++
 drivers/net/cnxk/cn10k_rx.h         | 185 ++++++++++++++++++++++------
 drivers/net/cnxk/cn10k_rxtx.h       |   1 +
 drivers/net/cnxk/cnxk_ethdev.h      |   9 ++
 6 files changed, 233 insertions(+), 46 deletions(-)

diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index 07f0dad97d..75244638d2 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -16,7 +16,7 @@ static __rte_always_inline void
 cn10k_wqe_to_mbuf(uint64_t wqe, const uint64_t __mbuf, uint8_t port_id,
 		  const uint32_t tag, const uint32_t flags,
-		  const void *const lookup_mem)
+		  const void *const lookup_mem, uintptr_t cpth)
 {
 	const uint64_t mbuf_init = 0x100010000ULL | RTE_PKTMBUF_HEADROOM |
				   (flags & NIX_RX_OFFLOAD_TSTAMP_F ?
8 : 0); @@ -27,7 +27,7 @@ cn10k_wqe_to_mbuf(uint64_t wqe, const uint64_t __mbuf, uint8_t port_id, cn10k_nix_cqe_to_mbuf((struct nix_cqe_hdr_s *)wqe, tag, (struct rte_mbuf *)mbuf, lookup_mem, - mbuf_init | ((uint64_t)port_id) << 48, flags); + mbuf_init | ((uint64_t)port_id) << 48, cpth, flags); } static void @@ -62,6 +62,7 @@ cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags, struc uint16_t lmt_id, d_off; struct rte_mbuf **wqe; struct rte_mbuf *mbuf; + uintptr_t cpth = 0; uint8_t loff = 0; uint64_t sa_base; int i; @@ -125,13 +126,20 @@ cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags, struc const uint64_t cq_w1 = *((const uint64_t *)cqe + 1); const uint64_t cq_w5 = *((const uint64_t *)cqe + 5); + cpth = ((uintptr_t)mbuf + (uint16_t)d_off); + + /* Update mempool pointer for full mode pkt */ + if ((flags & NIX_RX_REAS_F) && (cq_w1 & BIT(11)) && + !((*(uint64_t *)cpth) & BIT(15))) + mbuf->pool = mp; + mbuf = nix_sec_meta_to_mbuf_sc(cq_w1, cq_w5, sa_base, laddr, &loff, mbuf, d_off, flags, mbuf_init); } cn10k_nix_cqe_to_mbuf(cqe, cqe->tag, mbuf, lookup_mem, - mbuf_init, flags); + mbuf_init, cpth, flags); if (flags & NIX_RX_OFFLOAD_TSTAMP_F) cn10k_sso_process_tstamp((uint64_t)wqe[0], @@ -162,6 +170,7 @@ cn10k_sso_hws_post_process(struct cn10k_sso_hws *ws, uint64_t *u64, u64[1] = cn10k_cpt_crypto_adapter_vector_dequeue(u64[1]); } else if (CNXK_EVENT_TYPE_FROM_TAG(u64[0]) == RTE_EVENT_TYPE_ETHDEV) { uint8_t port = CNXK_SUB_EVENT_FROM_TAG(u64[0]); + uintptr_t cpth = 0; uint64_t mbuf; mbuf = u64[1] - sizeof(struct rte_mbuf); @@ -191,12 +200,19 @@ cn10k_sso_hws_post_process(struct cn10k_sso_hws *ws, uint64_t *u64, sa_base = cnxk_nix_sa_base_get(port, ws->lookup_mem); sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1); + cpth = ((uintptr_t)mbuf + (uint16_t)d_off); + mp = (struct rte_mempool *)cnxk_nix_inl_metapool_get(port, lookup_mem); + meta_aura = mp ? 
mp->pool_id : m->pool->pool_id; + + /* Update mempool pointer for full mode pkt */ + if ((flags & NIX_RX_REAS_F) && (cq_w1 & BIT(11)) && + !((*(uint64_t *)cpth) & BIT(15))) + ((struct rte_mbuf *)mbuf)->pool = mp; + mbuf = (uint64_t)nix_sec_meta_to_mbuf_sc( cq_w1, cq_w5, sa_base, (uintptr_t)&iova, &loff, (struct rte_mbuf *)mbuf, d_off, flags, mbuf_init | ((uint64_t)port) << 48); - mp = (struct rte_mempool *)cnxk_nix_inl_metapool_get(port, lookup_mem); - meta_aura = mp ? mp->pool_id : m->pool->pool_id; if (loff) roc_npa_aura_op_free(meta_aura, 0, iova); @@ -204,7 +220,7 @@ cn10k_sso_hws_post_process(struct cn10k_sso_hws *ws, uint64_t *u64, u64[0] = CNXK_CLR_SUB_EVENT(u64[0]); cn10k_wqe_to_mbuf(u64[1], mbuf, port, u64[0] & 0xFFFFF, flags, - ws->lookup_mem); + ws->lookup_mem, cpth); if (flags & NIX_RX_OFFLOAD_TSTAMP_F) cn10k_sso_process_tstamp(u64[1], mbuf, ws->tstamp[port]); diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c index 2b4ab8b772..c33646846e 100644 --- a/drivers/net/cnxk/cn10k_ethdev.c +++ b/drivers/net/cnxk/cn10k_ethdev.c @@ -352,11 +352,13 @@ cn10k_nix_rx_queue_meta_aura_update(struct rte_eth_dev *eth_dev) rq = &dev->rqs[i]; rxq = eth_dev->data->rx_queues[i]; rxq->meta_aura = rq->meta_aura_handle; + rxq->meta_pool = dev->nix.meta_mempool; /* Assume meta packet from normal aura if meta aura is not setup */ if (!rxq->meta_aura) { rxq_sp = cnxk_eth_rxq_to_sp(rxq); rxq->meta_aura = rxq_sp->qconf.mp->pool_id; + rxq->meta_pool = (uintptr_t)rxq_sp->qconf.mp; } } /* Store mempool in lookup mem */ @@ -623,14 +625,17 @@ cn10k_nix_reassembly_conf_set(struct rte_eth_dev *eth_dev, if (!conf->flags) { /* Clear offload flags on disable */ - dev->rx_offload_flags &= ~NIX_RX_REAS_F; + if (!dev->inb.nb_oop) + dev->rx_offload_flags &= ~NIX_RX_REAS_F; + dev->inb.reass_en = false; return 0; } - rc = roc_nix_reassembly_configure(conf->timeout_ms, - conf->max_frags); - if (!rc && dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) + rc = 
roc_nix_reassembly_configure(conf->timeout_ms, conf->max_frags); + if (!rc && dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) { dev->rx_offload_flags |= NIX_RX_REAS_F; + dev->inb.reass_en = true; + } return rc; } diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c index 9625704ec1..f6992c8c8f 100644 --- a/drivers/net/cnxk/cn10k_ethdev_sec.c +++ b/drivers/net/cnxk/cn10k_ethdev_sec.c @@ -9,6 +9,7 @@ #include #include +#include #include #include @@ -293,6 +294,7 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = { .l4_csum_enable = 1, .stats = 1, .esn = 1, + .ingress_oop = 1, }, }, .crypto_capabilities = cn10k_eth_sec_crypto_caps, @@ -342,6 +344,7 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = { .l4_csum_enable = 1, .stats = 1, .esn = 1, + .ingress_oop = 1, }, }, .crypto_capabilities = cn10k_eth_sec_crypto_caps, @@ -365,6 +368,7 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = { .l4_csum_enable = 1, .stats = 1, .esn = 1, + .ingress_oop = 1, }, }, .crypto_capabilities = cn10k_eth_sec_crypto_caps, @@ -624,6 +628,20 @@ cn10k_eth_sec_session_create(void *device, return -rte_errno; } + if (conf->ipsec.options.ingress_oop && + rte_security_oop_dynfield_offset < 0) { + /* Register for security OOP dynfield if required */ + if (rte_security_oop_dynfield_register() < 0) + return -rte_errno; + } + + /* We cannot support inbound reassembly and OOP together */ + if (conf->ipsec.options.ip_reassembly_en && + conf->ipsec.options.ingress_oop) { + plt_err("Cannot support Inbound reassembly and OOP together"); + return -ENOTSUP; + } + ipsec = &conf->ipsec; crypto = conf->crypto_xform; inbound = !!(ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS); @@ -710,6 +728,12 @@ cn10k_eth_sec_session_create(void *device, inb_sa_dptr->w0.s.count_mib_bytes = 1; inb_sa_dptr->w0.s.count_mib_pkts = 1; } + + /* Enable out-of-place processing */ + if 
(ipsec->options.ingress_oop) + inb_sa_dptr->w0.s.pkt_format = + ROC_IE_OT_SA_PKT_FMT_FULL; + /* Prepare session priv */ sess_priv.inb_sa = 1; sess_priv.sa_idx = ipsec->spi & spi_mask; @@ -721,6 +745,7 @@ cn10k_eth_sec_session_create(void *device, eth_sec->spi = ipsec->spi; eth_sec->inl_dev = !!dev->inb.inl_dev; eth_sec->inb = true; + eth_sec->inb_oop = !!ipsec->options.ingress_oop; TAILQ_INSERT_TAIL(&dev->inb.list, eth_sec, entry); dev->inb.nb_sess++; @@ -736,6 +761,15 @@ cn10k_eth_sec_session_create(void *device, inb_priv->reass_dynflag_bit = dev->reass_dynflag_bit; } + if (ipsec->options.ingress_oop) + dev->inb.nb_oop++; + + /* Update function pointer to handle OOP sessions */ + if (dev->inb.nb_oop && + !(dev->rx_offload_flags & NIX_RX_REAS_F)) { + dev->rx_offload_flags |= NIX_RX_REAS_F; + cn10k_eth_set_rx_function(eth_dev); + } } else { struct roc_ot_ipsec_outb_sa *outb_sa, *outb_sa_dptr; struct cn10k_outb_priv_data *outb_priv; @@ -880,6 +914,15 @@ cn10k_eth_sec_session_destroy(void *device, struct rte_security_session *sess) sizeof(struct roc_ot_ipsec_inb_sa)); TAILQ_REMOVE(&dev->inb.list, eth_sec, entry); dev->inb.nb_sess--; + if (eth_sec->inb_oop) + dev->inb.nb_oop--; + + /* Clear offload flags if was used by OOP */ + if (!dev->inb.nb_oop && !dev->inb.reass_en && + dev->rx_offload_flags & NIX_RX_REAS_F) { + dev->rx_offload_flags &= ~NIX_RX_REAS_F; + cn10k_eth_set_rx_function(eth_dev); + } } else { /* Disable SA */ sa_dptr = dev->outb.sa_dptr; diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h index 9fdb5565e9..b80e7388bd 100644 --- a/drivers/net/cnxk/cn10k_rx.h +++ b/drivers/net/cnxk/cn10k_rx.h @@ -420,11 +420,46 @@ nix_sec_reassemble_frags(const struct cpt_parse_hdr_s *hdr, uint64_t cq_w1, return head; } +static inline struct rte_mbuf * +nix_sec_oop_process(const struct cpt_parse_hdr_s *hdr, struct rte_mbuf *mbuf, uint64_t *mbuf_init) +{ + uintptr_t wqe = rte_be_to_cpu_64(hdr->wqe_ptr); + union nix_rx_parse_u *inner_rx; + struct rte_mbuf 
*inner; + uint16_t data_off; + + inner = ((struct rte_mbuf *)wqe) - 1; + + inner_rx = (union nix_rx_parse_u *)(wqe + 8); + inner->pkt_len = inner_rx->pkt_lenm1 + 1; + inner->data_len = inner_rx->pkt_lenm1 + 1; + + /* Mark inner mbuf as get */ + RTE_MEMPOOL_CHECK_COOKIES(inner->pool, + (void **)&inner, 1, 1); + /* Update rearm data for full mbuf as it has + * cpt parse header that needs to be skipped. + * + * Since meta pool will not have private area while + * ethdev RQ's first skip would be considering private area + * calculate actual data off and update in meta mbuf. + */ + data_off = (uintptr_t)hdr - (uintptr_t)mbuf->buf_addr; + data_off += sizeof(struct cpt_parse_hdr_s); + data_off += hdr->w0.pad_len; + *mbuf_init &= ~0xFFFFUL; + *mbuf_init |= data_off; + + *rte_security_oop_dynfield(mbuf) = inner; + /* Return outer instead of inner mbuf as inner mbuf would have original encrypted packet */ + return mbuf; +} + static __rte_always_inline struct rte_mbuf * nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base, uintptr_t laddr, uint8_t *loff, struct rte_mbuf *mbuf, uint16_t data_off, const uint16_t flags, - const uint64_t mbuf_init) + uint64_t mbuf_init) { const void *__p = (void *)((uintptr_t)mbuf + (uint16_t)data_off); const struct cpt_parse_hdr_s *hdr = (const struct cpt_parse_hdr_s *)__p; @@ -447,9 +482,13 @@ nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base, if (!hdr->w0.num_frags) { /* No Reassembly or inbound error */ - inner = (struct rte_mbuf *) - (rte_be_to_cpu_64(hdr->wqe_ptr) - - sizeof(struct rte_mbuf)); + if (hdr->w0.pkt_fmt == ROC_IE_OT_SA_PKT_FMT_FULL) { + inner = nix_sec_oop_process(hdr, mbuf, &mbuf_init); + } else { + inner = (struct rte_mbuf *) + (rte_be_to_cpu_64(hdr->wqe_ptr) - + sizeof(struct rte_mbuf)); + } /* Update dynamic field with userdata */ *rte_security_dynfield(inner) = @@ -506,14 +545,18 @@ nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base, /* Store 
meta in lmtline to free * Assume all meta's from same aura. */ - *(uint64_t *)(laddr + (*loff << 3)) = (uint64_t)mbuf; - *loff = *loff + 1; + if (hdr->w0.pkt_fmt != ROC_IE_OT_SA_PKT_FMT_FULL) { + *(uint64_t *)(laddr + (*loff << 3)) = (uint64_t)mbuf; + *loff = *loff + 1; - /* Mark meta mbuf as put */ - RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0); + /* Mark meta mbuf as put */ + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, + 1, 0); - /* Mark inner mbuf as get */ - RTE_MEMPOOL_CHECK_COOKIES(inner->pool, (void **)&inner, 1, 1); + /* Mark inner mbuf as get */ + RTE_MEMPOOL_CHECK_COOKIES(inner->pool, (void **)&inner, + 1, 1); + } return inner; } else if (cq_w1 & BIT(11)) { @@ -602,7 +645,9 @@ nix_sec_meta_to_mbuf(uint64_t cq_w1, uint64_t cq_w5, uintptr_t inb_sa, *rte_security_dynfield(inner) = (uint64_t)inb_priv->userdata; /* Mark inner mbuf as get */ - RTE_MEMPOOL_CHECK_COOKIES(inner->pool, (void **)&inner, 1, 1); + if (!(flags & NIX_RX_REAS_F) || + hdr->w0.pkt_fmt != ROC_IE_OT_SA_PKT_FMT_FULL) + RTE_MEMPOOL_CHECK_COOKIES(inner->pool, (void **)&inner, 1, 1); if (flags & NIX_RX_REAS_F && hdr->w0.num_frags) { if ((!(hdr->w0.err_sum) || roc_ie_ot_ucc_is_success(hdr->w3.uc_ccode)) && @@ -633,6 +678,19 @@ nix_sec_meta_to_mbuf(uint64_t cq_w1, uint64_t cq_w5, uintptr_t inb_sa, *rx_desc_field1 = vsetq_lane_u16(inner->data_len, *rx_desc_field1, 4); } + } else if (flags & NIX_RX_REAS_F) { + /* Without fragmentation but may have to handle OOP session */ + if (hdr->w0.pkt_fmt == ROC_IE_OT_SA_PKT_FMT_FULL) { + uint64_t mbuf_init = 0; + + /* Caller has already prepared to return second pass + * mbuf and inner mbuf is actually outer. + * Store original buffer pointer in dynfield. 
+ */ + nix_sec_oop_process(hdr, inner, &mbuf_init); + /* Clear and update lower 16 bit of data offset */ + *rearm = (*rearm & ~(BIT_ULL(16) - 1)) | mbuf_init; + } } } #endif @@ -689,7 +747,7 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags, static __rte_always_inline void nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf, - uint64_t rearm, const uint16_t flags) + uint64_t rearm, uintptr_t cpth, const uint16_t flags) { const rte_iova_t *iova_list; uint16_t later_skip = 0; @@ -703,8 +761,11 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf, cq_w1 = *(const uint64_t *)rx; /* Use inner rx parse for meta pkts sg list */ if (cq_w1 & BIT(11) && flags & NIX_RX_OFFLOAD_SECURITY_F) { + const struct cpt_parse_hdr_s *hdr = (const struct cpt_parse_hdr_s *)cpth; const uint64_t *wqe = (const uint64_t *)(mbuf + 1); - rx = (const union nix_rx_parse_u *)(wqe + 1); + + if (!(flags & NIX_RX_REAS_F) || hdr->w0.pkt_fmt != ROC_IE_OT_SA_PKT_FMT_FULL) + rx = (const union nix_rx_parse_u *)(wqe + 1); } sg = *(const uint64_t *)(rx + 1); @@ -763,7 +824,7 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf, static __rte_always_inline void cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag, struct rte_mbuf *mbuf, const void *lookup_mem, - const uint64_t val, const uint16_t flag) + const uint64_t val, const uintptr_t cpth, const uint16_t flag) { const union nix_rx_parse_u *rx = (const union nix_rx_parse_u *)((const uint64_t *)cq + 1); @@ -817,7 +878,7 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag, * timestamp data process. * Hence, timestamp flag argument is not required. 
*/ - nix_cqe_xtract_mseg(rx, mbuf, val, flag & ~NIX_RX_OFFLOAD_TSTAMP_F); + nix_cqe_xtract_mseg(rx, mbuf, val, cpth, flag & ~NIX_RX_OFFLOAD_TSTAMP_F); } static inline uint16_t @@ -888,6 +949,7 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts, const uint64_t mbuf_init = rxq->mbuf_initializer; const void *lookup_mem = rxq->lookup_mem; const uint64_t data_off = rxq->data_off; + struct rte_mempool *meta_pool = NULL; const uintptr_t desc = rxq->desc; const uint64_t wdata = rxq->wdata; const uint32_t qmask = rxq->qmask; @@ -898,6 +960,7 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts, struct nix_cqe_hdr_s *cq; struct rte_mbuf *mbuf; uint64_t aura_handle; + uintptr_t cpth = 0; uint64_t sa_base; uint16_t lmt_id; uint64_t laddr; @@ -911,6 +974,8 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts, ROC_LMT_BASE_ID_GET(lbase, lmt_id); laddr = lbase; laddr += 8; + if (flags & NIX_RX_REAS_F) + meta_pool = (struct rte_mempool *)rxq->meta_pool; } while (packets < nb_pkts) { @@ -929,13 +994,20 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts, const uint64_t cq_w1 = *((const uint64_t *)cq + 1); const uint64_t cq_w5 = *((const uint64_t *)cq + 5); + cpth = ((uintptr_t)mbuf + (uint16_t)data_off); + + /* Update mempool pointer for full mode pkt */ + if ((flags & NIX_RX_REAS_F) && (cq_w1 & BIT(11)) && + !((*(uint64_t *)cpth) & BIT(15))) + mbuf->pool = meta_pool; + mbuf = nix_sec_meta_to_mbuf_sc(cq_w1, cq_w5, sa_base, laddr, &loff, mbuf, data_off, flags, mbuf_init); } cn10k_nix_cqe_to_mbuf(cq, cq->tag, mbuf, lookup_mem, mbuf_init, - flags); + cpth, flags); cn10k_nix_mbuf_to_tstamp(mbuf, rxq->tstamp, (flags & NIX_RX_OFFLOAD_TSTAMP_F), (uint64_t *)((uint8_t *)mbuf @@ -1025,6 +1097,7 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts, const uint64_t wdata = flags & NIX_RX_VWQE_F ? 
0 : rxq->wdata;
 	const uintptr_t desc = flags & NIX_RX_VWQE_F ? 0 : rxq->desc;
 	uint64x2_t cq0_w8, cq1_w8, cq2_w8, cq3_w8, mbuf01, mbuf23;
+	uintptr_t cpth0 = 0, cpth1 = 0, cpth2 = 0, cpth3 = 0;
 	uint64_t ol_flags0, ol_flags1, ol_flags2, ol_flags3;
 	uint64x2_t rearm0 = vdupq_n_u64(mbuf_initializer);
 	uint64x2_t rearm1 = vdupq_n_u64(mbuf_initializer);
@@ -1032,6 +1105,7 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 	uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer);
 	struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
 	uint8_t loff = 0, lnum = 0, shft = 0;
+	struct rte_mempool *meta_pool = NULL;
 	uint8x16_t f0, f1, f2, f3;
 	uint16_t lmt_id, d_off;
 	uint64_t lbase, laddr;
@@ -1084,6 +1158,9 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 		/* Get SA Base from lookup tbl using port_id */
 		port = mbuf_initializer >> 48;
 		sa_base = cnxk_nix_sa_base_get(port, lookup_mem);
+		if (flags & NIX_RX_REAS_F)
+			meta_pool = (struct rte_mempool *)cnxk_nix_inl_metapool_get(port,
+										    lookup_mem);
 		lbase = lmt_base;
 	} else {
@@ -1091,6 +1168,8 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 		d_off = rxq->data_off;
 		sa_base = rxq->sa_base;
 		lbase = rxq->lmt_base;
+		if (flags & NIX_RX_REAS_F)
+			meta_pool = (struct rte_mempool *)rxq->meta_pool;
 	}
 	sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
 	ROC_LMT_BASE_ID_GET(lbase, lmt_id);
@@ -1325,10 +1404,6 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			uint64_t cq1_w5 = *CQE_PTR_OFF(cq0, 1, 40, flags);
 			uint64_t cq2_w5 = *CQE_PTR_OFF(cq0, 2, 40, flags);
 			uint64_t cq3_w5 = *CQE_PTR_OFF(cq0, 3, 40, flags);
-			uintptr_t cpth0 = (uintptr_t)mbuf0 + d_off;
-			uintptr_t cpth1 = (uintptr_t)mbuf1 + d_off;
-			uintptr_t cpth2 = (uintptr_t)mbuf2 + d_off;
-			uintptr_t cpth3 = (uintptr_t)mbuf3 + d_off;
 			uint8_t code;
 			uint64x2_t inner0, inner1, inner2, inner3;
@@ -1336,6 +1411,11 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			uint16x4_t lens, l2lens, ltypes;
 			uint8x8_t ucc;
+			cpth0 = (uintptr_t)mbuf0 + d_off;
+			cpth1 = (uintptr_t)mbuf1 + d_off;
+			cpth2 = (uintptr_t)mbuf2 + d_off;
+			cpth3 = (uintptr_t)mbuf3 + d_off;
+
 			inner0 = vld1q_u64((const uint64_t *)cpth0);
 			inner1 = vld1q_u64((const uint64_t *)cpth1);
 			inner2 = vld1q_u64((const uint64_t *)cpth2);
@@ -1488,10 +1568,19 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 				uint16_t len = vget_lane_u16(lens, 0);
 				cpth0 = (uintptr_t)mbuf0 + d_off;
+
 				/* Free meta to aura */
-				NIX_PUSH_META_TO_FREE(mbuf0, laddr, &loff);
-				mbuf01 = vsetq_lane_u64(wqe, mbuf01, 0);
-				mbuf0 = (struct rte_mbuf *)wqe;
+				if (!(flags & NIX_RX_REAS_F) ||
+				    *(uint64_t *)cpth0 & BIT_ULL(15)) {
+					/* Free meta to aura */
+					NIX_PUSH_META_TO_FREE(mbuf0, laddr,
+							      &loff);
+					mbuf01 = vsetq_lane_u64(wqe, mbuf01, 0);
+					mbuf0 = (struct rte_mbuf *)wqe;
+				} else if (flags & NIX_RX_REAS_F) {
+					/* Update meta pool for full mode pkts */
+					mbuf0->pool = meta_pool;
+				}
 
 				/* Update pkt_len and data_len */
 				f0 = vsetq_lane_u16(len, f0, 2);
@@ -1513,10 +1602,18 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 				uint16_t len = vget_lane_u16(lens, 1);
 				cpth1 = (uintptr_t)mbuf1 + d_off;
+
 				/* Free meta to aura */
-				NIX_PUSH_META_TO_FREE(mbuf1, laddr, &loff);
-				mbuf01 = vsetq_lane_u64(wqe, mbuf01, 1);
-				mbuf1 = (struct rte_mbuf *)wqe;
+				if (!(flags & NIX_RX_REAS_F) ||
+				    *(uint64_t *)cpth1 & BIT_ULL(15)) {
+					NIX_PUSH_META_TO_FREE(mbuf1, laddr,
+							      &loff);
+					mbuf01 = vsetq_lane_u64(wqe, mbuf01, 1);
+					mbuf1 = (struct rte_mbuf *)wqe;
+				} else if (flags & NIX_RX_REAS_F) {
+					/* Update meta pool for full mode pkts */
+					mbuf1->pool = meta_pool;
+				}
 
 				/* Update pkt_len and data_len */
 				f1 = vsetq_lane_u16(len, f1, 2);
@@ -1537,10 +1634,18 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 				uint16_t len = vget_lane_u16(lens, 2);
 				cpth2 = (uintptr_t)mbuf2 + d_off;
+
 				/* Free meta to aura */
-				NIX_PUSH_META_TO_FREE(mbuf2, laddr, &loff);
-				mbuf23 = vsetq_lane_u64(wqe, mbuf23, 0);
-				mbuf2 = (struct rte_mbuf *)wqe;
+				if (!(flags & NIX_RX_REAS_F) ||
+				    *(uint64_t *)cpth2 & BIT_ULL(15)) {
+					NIX_PUSH_META_TO_FREE(mbuf2, laddr,
+							      &loff);
+					mbuf23 = vsetq_lane_u64(wqe, mbuf23, 0);
+					mbuf2 = (struct rte_mbuf *)wqe;
+				} else if (flags & NIX_RX_REAS_F) {
+					/* Update meta pool for full mode pkts */
+					mbuf2->pool = meta_pool;
+				}
 
 				/* Update pkt_len and data_len */
 				f2 = vsetq_lane_u16(len, f2, 2);
@@ -1561,10 +1666,18 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 				uint16_t len = vget_lane_u16(lens, 3);
 				cpth3 = (uintptr_t)mbuf3 + d_off;
+
 				/* Free meta to aura */
-				NIX_PUSH_META_TO_FREE(mbuf3, laddr, &loff);
-				mbuf23 = vsetq_lane_u64(wqe, mbuf23, 1);
-				mbuf3 = (struct rte_mbuf *)wqe;
+				if (!(flags & NIX_RX_REAS_F) ||
+				    *(uint64_t *)cpth3 & BIT_ULL(15)) {
+					NIX_PUSH_META_TO_FREE(mbuf3, laddr,
+							      &loff);
+					mbuf23 = vsetq_lane_u64(wqe, mbuf23, 1);
+					mbuf3 = (struct rte_mbuf *)wqe;
+				} else if (flags & NIX_RX_REAS_F) {
+					/* Update meta pool for full mode pkts */
+					mbuf3->pool = meta_pool;
+				}
 
 				/* Update pkt_len and data_len */
 				f3 = vsetq_lane_u16(len, f3, 2);
@@ -1721,16 +1834,16 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			 */
 			nix_cqe_xtract_mseg((union nix_rx_parse_u *)
 					    (CQE_PTR_OFF(cq0, 0, 8, flags)),
-					    mbuf0, mbuf_initializer, flags);
+					    mbuf0, mbuf_initializer, cpth0, flags);
 			nix_cqe_xtract_mseg((union nix_rx_parse_u *)
 					    (CQE_PTR_OFF(cq0, 1, 8, flags)),
-					    mbuf1, mbuf_initializer, flags);
+					    mbuf1, mbuf_initializer, cpth1, flags);
 			nix_cqe_xtract_mseg((union nix_rx_parse_u *)
 					    (CQE_PTR_OFF(cq0, 2, 8, flags)),
-					    mbuf2, mbuf_initializer, flags);
+					    mbuf2, mbuf_initializer, cpth2, flags);
 			nix_cqe_xtract_mseg((union nix_rx_parse_u *)
 					    (CQE_PTR_OFF(cq0, 3, 8, flags)),
-					    mbuf3, mbuf_initializer, flags);
+					    mbuf3, mbuf_initializer, cpth3, flags);
 		}
 
 		/* Store the mbufs to rx_pkts */
diff --git a/drivers/net/cnxk/cn10k_rxtx.h b/drivers/net/cnxk/cn10k_rxtx.h
index c256d54307..b5d8345270 100644
--- a/drivers/net/cnxk/cn10k_rxtx.h
+++ b/drivers/net/cnxk/cn10k_rxtx.h
@@ -77,6 +77,7 @@ struct cn10k_eth_rxq {
 	uint64_t sa_base;
 	uint64_t lmt_base;
 	uint64_t meta_aura;
+	uintptr_t meta_pool;
 	uint16_t rq;
 	struct cnxk_timesync_info *tstamp;
 } __plt_cache_aligned;
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 85287dd66c..2b89ebb9bc 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -217,6 +217,9 @@ struct cnxk_eth_sec_sess {
 	/* Inbound session on inl dev */
 	bool inl_dev;
+
+	/* Out-Of-Place processing */
+	bool inb_oop;
 };
 
 TAILQ_HEAD(cnxk_eth_sec_sess_list, cnxk_eth_sec_sess);
@@ -244,6 +247,12 @@ struct cnxk_eth_dev_sec_inb {
 	/* DPTR for WRITE_SA microcode op */
 	void *sa_dptr;
+	/* Number of oop sessions */
+	uint16_t nb_oop;
+
+	/* Reassembly enabled */
+	bool reass_en;
+
 	/* Lock to synchronize sa setup/release */
 	rte_spinlock_t lock;
 };

From patchwork Tue Apr 11 10:04:09 2023
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
To: Akhil Goyal, Fan Zhang
Subject: [PATCH 3/3] test/security: add unittest for inline ingress oop
Date: Tue, 11 Apr 2023 15:34:09 +0530
Message-ID: <20230411100410.1174495-3-ndabilpuram@marvell.com>
In-Reply-To: <20230411100410.1174495-1-ndabilpuram@marvell.com>

Add unittest for inline ingress out-of-place processing.

Signed-off-by: Nithin Dabilpuram
---
 app/test/test_cryptodev_security_ipsec.c |  8 +++
 app/test/test_cryptodev_security_ipsec.h |  1 +
 app/test/test_security_inline_proto.c    | 85 ++++++++++++++++++++++++
 3 files changed, 94 insertions(+)

diff --git a/app/test/test_cryptodev_security_ipsec.c b/app/test/test_cryptodev_security_ipsec.c
index 7a8688c692..be9e246bfe 100644
--- a/app/test/test_cryptodev_security_ipsec.c
+++ b/app/test/test_cryptodev_security_ipsec.c
@@ -213,6 +213,14 @@ test_ipsec_sec_caps_verify(struct rte_security_ipsec_xform *ipsec_xform,
 		}
 	}
 
+	if (ipsec_xform->options.ingress_oop == 1 &&
+	    sec_cap->ipsec.options.ingress_oop == 0) {
+		if (!silent)
+			RTE_LOG(INFO, USER1,
+				"Inline Ingress OOP processing is not supported\n");
+		return -ENOTSUP;
+	}
+
 	return 0;
 }
diff --git a/app/test/test_cryptodev_security_ipsec.h b/app/test/test_cryptodev_security_ipsec.h
index 92e641ba0b..5606ec056d 100644
--- a/app/test/test_cryptodev_security_ipsec.h
+++ b/app/test/test_cryptodev_security_ipsec.h
@@ -110,6 +110,7 @@ struct ipsec_test_flags {
 	bool ah;
 	uint32_t plaintext_len;
 	int nb_segs_in_mbuf;
+	bool inb_oop;
 };
 
 struct crypto_param {
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 79858e559f..80bcdfc701 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -735,6 +735,51 @@ get_and_verify_incomplete_frags(struct rte_mbuf *mbuf,
 	return ret;
 }
 
+static int
+verify_inbound_oop(struct ipsec_test_data *td,
+		   bool silent, struct rte_mbuf *mbuf)
+{
+	int ret = TEST_SUCCESS, rc;
+	struct rte_mbuf *orig;
+	uint32_t len;
+	void *data;
+
+	orig = *rte_security_oop_dynfield(mbuf);
+	if (!orig) {
+		if (!silent)
+			printf("\nUnable to get orig buffer OOP session");
+		return TEST_FAILED;
+	}
+
+	/* Skip Ethernet header comparison */
+	rte_pktmbuf_adj(orig, RTE_ETHER_HDR_LEN);
+
+	len = td->input_text.len;
+	if (orig->pkt_len != len) {
+		if (!silent)
+			printf("\nOriginal packet length mismatch, expected %u, got %u ",
+			       len, orig->pkt_len);
+		ret = TEST_FAILED;
+	}
+
+	data = rte_pktmbuf_mtod(orig, void *);
+	rc = memcmp(data, td->input_text.data, len);
+	if (rc) {
+		ret = TEST_FAILED;
+		if (silent)
+			goto exit;
+
+		printf("TestCase %s line %d: %s\n", __func__, __LINE__,
+		       "output text not as expected\n");
+
+		rte_hexdump(stdout, "expected", td->input_text.data, len);
+		rte_hexdump(stdout, "actual", data, len);
+	}
+exit:
+	rte_pktmbuf_free(orig);
+	return ret;
+}
+
 static int
 test_ipsec_with_reassembly(struct reassembly_vector *vector,
 			   const struct ipsec_test_flags *flags)
@@ -1115,6 +1160,12 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
 	if (ret)
 		return ret;
 
+	if (flags->inb_oop && rte_security_oop_dynfield_offset < 0) {
+		printf("\nDynamic field not available for inline inbound OOP");
+		ret = TEST_FAILED;
+		goto out;
+	}
+
 	if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
 		ret = create_default_flow(port_id);
 		if (ret)
@@ -1206,6 +1257,15 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
 			goto out;
 		}
 
+		if (flags->inb_oop) {
+			ret = verify_inbound_oop(td, silent, rx_pkts_burst[i]);
+			if (ret != TEST_SUCCESS) {
+				for ( ; i < nb_rx; i++)
+					rte_pktmbuf_free(rx_pkts_burst[i]);
+				goto out;
+			}
+		}
+
 		rte_pktmbuf_free(rx_pkts_burst[i]);
 		rx_pkts_burst[i] = NULL;
 	}
@@ -1994,6 +2054,26 @@ test_ipsec_inline_proto_known_vec_inb(const void *test_data)
 	return test_ipsec_inline_proto_process(&td_inb, NULL, 1, false, &flags);
 }
 
+static int
+test_ipsec_inline_proto_oop_inb(const void *test_data)
+{
+	const struct ipsec_test_data *td = test_data;
+	struct ipsec_test_flags flags;
+	struct ipsec_test_data td_inb;
+
+	memset(&flags, 0, sizeof(flags));
+	flags.inb_oop = true;
+
+	if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		test_ipsec_td_in_from_out(td, &td_inb);
+	else
+		memcpy(&td_inb, td, sizeof(td_inb));
+
+	td_inb.ipsec_xform.options.ingress_oop = true;
+
+	return test_ipsec_inline_proto_process(&td_inb, NULL, 1, false, &flags);
+}
+
 static int
 test_ipsec_inline_proto_display_list(const void *data __rte_unused)
 {
@@ -3086,6 +3166,11 @@ static struct unit_test_suite inline_ipsec_testsuite = {
 			"IPv4 Reassembly with burst of 4 fragments",
 			ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
 			test_inline_ip_reassembly, &ipv4_4frag_burst_vector),
+		TEST_CASE_NAMED_WITH_DATA(
+			"Inbound Out-Of-Place processing",
+			ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+			test_ipsec_inline_proto_oop_inb,
+			&pkt_aes_128_gcm),
 
 	TEST_CASES_END() /**< NULL terminate unit test array */
 	},