From patchwork Fri Aug 11 08:54:38 2023
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 130112
X-Patchwork-Delegate: gakhil@marvell.com
From: Nithin Dabilpuram
To: , Cristian Dumitrescu
CC: , , Nithin Dabilpuram
Subject: [PATCH 1/3] security: introduce out of place support for inline ingress
Date: Fri, 11 Aug 2023 14:24:38 +0530
Message-ID: <20230811085440.415916-1-ndabilpuram@marvell.com>
In-Reply-To: <20230309085645.1630826-1-ndabilpuram@marvell.com>
References: <20230309085645.1630826-1-ndabilpuram@marvell.com>

Similar to the out-of-place (OOP) processing support that already exists for lookaside crypto/security sessions, inline ingress security sessions may also need out-of-place processing in use cases where the original encrypted packet must be retained for post-processing.
So for NIC's which have such a kind of HW support, a new SA option is provided to indicate whether OOP needs to be enabled on that Inline ingress security session or not. Since for inline ingress sessions, packet is not received by CPU until the processing is done, we can only have per-SA option and not per-packet option like Lookaside sessions. Also remove reserved_opts field from the rte_security_ipsec_sa_options struct as mentioned in deprecation notice. Signed-off-by: Nithin Dabilpuram --- v1: - Removed reserved_opts field from sa_options struct lib/pipeline/rte_swx_ipsec.c | 1 - lib/security/rte_security.c | 17 +++++++++++++ lib/security/rte_security.h | 40 +++++++++++++++++++++++++----- lib/security/rte_security_driver.h | 8 ++++++ lib/security/version.map | 2 ++ 5 files changed, 61 insertions(+), 7 deletions(-) diff --git a/lib/pipeline/rte_swx_ipsec.c b/lib/pipeline/rte_swx_ipsec.c index 6c217ee797..28576c2a48 100644 --- a/lib/pipeline/rte_swx_ipsec.c +++ b/lib/pipeline/rte_swx_ipsec.c @@ -1555,7 +1555,6 @@ ipsec_xform_get(struct rte_swx_ipsec_sa_params *p, ipsec_xform->options.ip_csum_enable = 0; ipsec_xform->options.l4_csum_enable = 0; ipsec_xform->options.ip_reassembly_en = 0; - ipsec_xform->options.reserved_opts = 0; ipsec_xform->direction = p->encrypt ? RTE_SECURITY_IPSEC_SA_DIR_EGRESS : diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c index c4d64bb8e9..2391cd0aa2 100644 --- a/lib/security/rte_security.c +++ b/lib/security/rte_security.c @@ -27,7 +27,10 @@ } while (0) #define RTE_SECURITY_DYNFIELD_NAME "rte_security_dynfield_metadata" +#define RTE_SECURITY_OOP_DYNFIELD_NAME "rte_security_oop_dynfield_metadata" + int rte_security_dynfield_offset = -1; +int rte_security_oop_dynfield_offset = -1; int rte_security_dynfield_register(void) @@ -42,6 +45,20 @@ rte_security_dynfield_register(void) return rte_security_dynfield_offset; } +int +rte_security_oop_dynfield_register(void) +{ + static const struct rte_mbuf_dynfield dynfield_desc = { + .name = RTE_SECURITY_OOP_DYNFIELD_NAME, + .size = sizeof(rte_security_oop_dynfield_t), + .align = __alignof__(rte_security_oop_dynfield_t), + }; + + rte_security_oop_dynfield_offset = + rte_mbuf_dynfield_register(&dynfield_desc); + return rte_security_oop_dynfield_offset; +} + void * rte_security_session_create(struct rte_security_ctx *instance, struct rte_security_session_conf *conf, diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h index 3b2df526ba..3996ab21a1 100644 --- a/lib/security/rte_security.h +++ b/lib/security/rte_security.h @@ -274,14 +274,16 @@ struct rte_security_ipsec_sa_options { */ uint32_t ip_reassembly_en : 1; - /** Reserved bit fields for future extension + /** Enable out of place processing on inline inbound packets. * - * User should ensure reserved_opts is cleared as it may change in - * subsequent releases to support new options. - * - * Note: Reduce number of bits in reserved_opts for every new option. + * * 1: Enable driver to perform Out-of-place(OOP) processing for this inline + * inbound SA if supported by driver. PMD need to register mbuf + * dynamic field using rte_security_oop_dynfield_register() + * and security session creation would fail if dynfield is not + * registered successfully. + * * 0: Disable OOP processing for this session (default). 
*/ - uint32_t reserved_opts : 17; + uint32_t ingress_oop : 1; }; /** IPSec security association direction */ @@ -821,6 +823,13 @@ typedef uint64_t rte_security_dynfield_t; /** Dynamic mbuf field for device-specific metadata */ extern int rte_security_dynfield_offset; +/** Out-of-Place(OOP) processing field type */ +typedef struct rte_mbuf *rte_security_oop_dynfield_t; +/** Dynamic mbuf field for pointer to original mbuf for + * OOP processing session. + */ +extern int rte_security_oop_dynfield_offset; + /** * @warning * @b EXPERIMENTAL: this API may change without prior notice @@ -843,6 +852,25 @@ rte_security_dynfield(struct rte_mbuf *mbuf) rte_security_dynfield_t *); } +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Get pointer to mbuf field for original mbuf pointer when + * Out-Of-Place(OOP) processing is enabled in security session. + * + * @param mbuf packet to access + * @return pointer to mbuf field + */ +__rte_experimental +static inline rte_security_oop_dynfield_t * +rte_security_oop_dynfield(struct rte_mbuf *mbuf) +{ + return RTE_MBUF_DYNFIELD(mbuf, + rte_security_oop_dynfield_offset, + rte_security_oop_dynfield_t *); +} + /** * @warning * @b EXPERIMENTAL: this API may change without prior notice diff --git a/lib/security/rte_security_driver.h b/lib/security/rte_security_driver.h index 31444a05d3..d5602650c2 100644 --- a/lib/security/rte_security_driver.h +++ b/lib/security/rte_security_driver.h @@ -197,6 +197,14 @@ typedef int (*security_macsec_sa_stats_get_t)(void *device, uint16_t sa_id, __rte_internal int rte_security_dynfield_register(void); +/** + * @internal + * Register mbuf dynamic field for Security inline ingress Out-of-Place(OOP) + * processing. + */ +__rte_internal +int rte_security_oop_dynfield_register(void); + /** * Update the mbuf with provided metadata. 
 *
diff --git a/lib/security/version.map b/lib/security/version.map
index b2097a969d..86f976a302 100644
--- a/lib/security/version.map
+++ b/lib/security/version.map
@@ -23,10 +23,12 @@ EXPERIMENTAL {
 	rte_security_macsec_sc_stats_get;
 	rte_security_session_stats_get;
 	rte_security_session_update;
+	rte_security_oop_dynfield_offset;
 };
 
 INTERNAL {
 	global:
 
 	rte_security_dynfield_register;
+	rte_security_oop_dynfield_register;
 };

From patchwork Fri Aug 11 08:54:39 2023
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 130113
X-Patchwork-Delegate: gakhil@marvell.com
From: Nithin Dabilpuram
To: , Pavan Nikhilesh , "Shijith Thotton" , Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao
CC: ,
Subject: [PATCH 2/3] net/cnxk: support inline ingress out of place session
Date: Fri, 11 Aug 2023 14:24:39 +0530
Message-ID: <20230811085440.415916-2-ndabilpuram@marvell.com>
In-Reply-To: <20230811085440.415916-1-ndabilpuram@marvell.com>
References: <20230309085645.1630826-1-ndabilpuram@marvell.com> <20230811085440.415916-1-ndabilpuram@marvell.com>
X-Proofpoint-Virus-Version: vendor=baseguard
engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-08-10_20,2023-08-10_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add support for inline ingress session with out-of-place support. Signed-off-by: Nithin Dabilpuram --- drivers/event/cnxk/cn10k_worker.h | 12 +- drivers/net/cnxk/cn10k_ethdev.c | 13 +- drivers/net/cnxk/cn10k_ethdev_sec.c | 43 +++++++ drivers/net/cnxk/cn10k_rx.h | 181 ++++++++++++++++++++++++---- drivers/net/cnxk/cn10k_rxtx.h | 1 + drivers/net/cnxk/cnxk_ethdev.h | 9 ++ 6 files changed, 229 insertions(+), 30 deletions(-) diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h index b4ee023723..46bfa9dd9d 100644 --- a/drivers/event/cnxk/cn10k_worker.h +++ b/drivers/event/cnxk/cn10k_worker.h @@ -59,9 +59,9 @@ cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags, struc uint16_t lmt_id, d_off; struct rte_mbuf **wqe; struct rte_mbuf *mbuf; + uint64_t sa_base = 0; uintptr_t cpth = 0; uint8_t loff = 0; - uint64_t sa_base; int i; mbuf_init |= ((uint64_t)port_id) << 48; @@ -125,6 +125,11 @@ cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags, struc cpth = ((uintptr_t)mbuf + (uint16_t)d_off); + /* Update mempool pointer for full mode pkt */ + if ((flags & NIX_RX_REAS_F) && (cq_w1 & BIT(11)) && + !((*(uint64_t *)cpth) & BIT(15))) + mbuf->pool = mp; + mbuf = nix_sec_meta_to_mbuf_sc(cq_w1, cq_w5, sa_base, laddr, &loff, mbuf, d_off, flags, mbuf_init); @@ -199,6 +204,11 @@ cn10k_sso_hws_post_process(struct cn10k_sso_hws *ws, uint64_t *u64, mp = (struct rte_mempool *)cnxk_nix_inl_metapool_get(port, lookup_mem); meta_aura = mp ? 
mp->pool_id : m->pool->pool_id; + /* Update mempool pointer for full mode pkt */ + if (mp && (flags & NIX_RX_REAS_F) && (cq_w1 & BIT(11)) && + !((*(uint64_t *)cpth) & BIT(15))) + ((struct rte_mbuf *)mbuf)->pool = mp; + mbuf = (uint64_t)nix_sec_meta_to_mbuf_sc( cq_w1, cq_w5, sa_base, (uintptr_t)&iova, &loff, (struct rte_mbuf *)mbuf, d_off, flags, diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c index 4c4acc7cf0..f1504a6873 100644 --- a/drivers/net/cnxk/cn10k_ethdev.c +++ b/drivers/net/cnxk/cn10k_ethdev.c @@ -355,11 +355,13 @@ cn10k_nix_rx_queue_meta_aura_update(struct rte_eth_dev *eth_dev) rq = &dev->rqs[i]; rxq = eth_dev->data->rx_queues[i]; rxq->meta_aura = rq->meta_aura_handle; + rxq->meta_pool = dev->nix.meta_mempool; /* Assume meta packet from normal aura if meta aura is not setup */ if (!rxq->meta_aura) { rxq_sp = cnxk_eth_rxq_to_sp(rxq); rxq->meta_aura = rxq_sp->qconf.mp->pool_id; + rxq->meta_pool = (uintptr_t)rxq_sp->qconf.mp; } } /* Store mempool in lookup mem */ @@ -639,14 +641,17 @@ cn10k_nix_reassembly_conf_set(struct rte_eth_dev *eth_dev, if (!conf->flags) { /* Clear offload flags on disable */ - dev->rx_offload_flags &= ~NIX_RX_REAS_F; + if (!dev->inb.nb_oop) + dev->rx_offload_flags &= ~NIX_RX_REAS_F; + dev->inb.reass_en = false; return 0; } - rc = roc_nix_reassembly_configure(conf->timeout_ms, - conf->max_frags); - if (!rc && dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) + rc = roc_nix_reassembly_configure(conf->timeout_ms, conf->max_frags); + if (!rc && dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) { dev->rx_offload_flags |= NIX_RX_REAS_F; + dev->inb.reass_en = true; + } return rc; } diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c index b98fc9378e..63aa7ffde2 100644 --- a/drivers/net/cnxk/cn10k_ethdev_sec.c +++ b/drivers/net/cnxk/cn10k_ethdev_sec.c @@ -9,6 +9,7 @@ #include #include +#include #include #include #include @@ -324,6 +325,7 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = { .l4_csum_enable = 1, .stats = 1, .esn = 1, + .ingress_oop = 1, }, }, .crypto_capabilities = cn10k_eth_sec_crypto_caps, @@ -373,6 +375,7 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = { .l4_csum_enable = 1, .stats = 1, .esn = 1, + .ingress_oop = 1, }, }, .crypto_capabilities = cn10k_eth_sec_crypto_caps, @@ -396,6 +399,7 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = { .l4_csum_enable = 1, .stats = 1, .esn = 1, + .ingress_oop = 1, }, }, .crypto_capabilities = cn10k_eth_sec_crypto_caps, @@ -657,6 +661,20 @@ cn10k_eth_sec_session_create(void *device, return -rte_errno; } + if (conf->ipsec.options.ingress_oop && + rte_security_oop_dynfield_offset < 0) { + /* Register for security OOP dynfield if required */ + if (rte_security_oop_dynfield_register() < 0) + return -rte_errno; + } + + /* We cannot support inbound reassembly and OOP together */ + if (conf->ipsec.options.ip_reassembly_en && + conf->ipsec.options.ingress_oop) { + plt_err("Cannot support Inbound reassembly and OOP together"); + return -ENOTSUP; + } + ipsec = &conf->ipsec; crypto = conf->crypto_xform; inbound = !!(ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS); @@ -743,6 +761,12 @@ cn10k_eth_sec_session_create(void *device, inb_sa_dptr->w0.s.count_mib_bytes = 1; inb_sa_dptr->w0.s.count_mib_pkts = 1; } + + /* Enable out-of-place processing */ + if (ipsec->options.ingress_oop) + inb_sa_dptr->w0.s.pkt_format = + ROC_IE_OT_SA_PKT_FMT_FULL; + /* Prepare session priv */ 
sess_priv.inb_sa = 1; sess_priv.sa_idx = ipsec->spi & spi_mask; @@ -754,6 +778,7 @@ cn10k_eth_sec_session_create(void *device, eth_sec->spi = ipsec->spi; eth_sec->inl_dev = !!dev->inb.inl_dev; eth_sec->inb = true; + eth_sec->inb_oop = !!ipsec->options.ingress_oop; TAILQ_INSERT_TAIL(&dev->inb.list, eth_sec, entry); dev->inb.nb_sess++; @@ -769,6 +794,15 @@ cn10k_eth_sec_session_create(void *device, inb_priv->reass_dynflag_bit = dev->reass_dynflag_bit; } + if (ipsec->options.ingress_oop) + dev->inb.nb_oop++; + + /* Update function pointer to handle OOP sessions */ + if (dev->inb.nb_oop && + !(dev->rx_offload_flags & NIX_RX_REAS_F)) { + dev->rx_offload_flags |= NIX_RX_REAS_F; + cn10k_eth_set_rx_function(eth_dev); + } } else { struct roc_ot_ipsec_outb_sa *outb_sa, *outb_sa_dptr; struct cn10k_outb_priv_data *outb_priv; @@ -918,6 +952,15 @@ cn10k_eth_sec_session_destroy(void *device, struct rte_security_session *sess) sizeof(struct roc_ot_ipsec_inb_sa)); TAILQ_REMOVE(&dev->inb.list, eth_sec, entry); dev->inb.nb_sess--; + if (eth_sec->inb_oop) + dev->inb.nb_oop--; + + /* Clear offload flags if was used by OOP */ + if (!dev->inb.nb_oop && !dev->inb.reass_en && + dev->rx_offload_flags & NIX_RX_REAS_F) { + dev->rx_offload_flags &= ~NIX_RX_REAS_F; + cn10k_eth_set_rx_function(eth_dev); + } } else { /* Disable SA */ sa_dptr = dev->outb.sa_dptr; diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h index 8148866e44..5f5318a607 100644 --- a/drivers/net/cnxk/cn10k_rx.h +++ b/drivers/net/cnxk/cn10k_rx.h @@ -402,6 +402,41 @@ nix_sec_reassemble_frags(const struct cpt_parse_hdr_s *hdr, struct rte_mbuf *hea return head; } +static inline struct rte_mbuf * +nix_sec_oop_process(const struct cpt_parse_hdr_s *hdr, struct rte_mbuf *mbuf, uint64_t *mbuf_init) +{ + uintptr_t wqe = rte_be_to_cpu_64(hdr->wqe_ptr); + union nix_rx_parse_u *inner_rx; + struct rte_mbuf *inner; + uint16_t data_off; + + inner = ((struct rte_mbuf *)wqe) - 1; + + inner_rx = (union nix_rx_parse_u *)(wqe + 8); + inner->pkt_len = inner_rx->pkt_lenm1 + 1; + inner->data_len = inner_rx->pkt_lenm1 + 1; + + /* Mark inner mbuf as get */ + RTE_MEMPOOL_CHECK_COOKIES(inner->pool, + (void **)&inner, 1, 1); + /* Update rearm data for full mbuf as it has + * cpt parse header that needs to be skipped. + * + * Since meta pool will not have private area while + * ethdev RQ's first skip would be considering private area + * calculate actual data off and update in meta mbuf. + */ + data_off = (uintptr_t)hdr - (uintptr_t)mbuf->buf_addr; + data_off += sizeof(struct cpt_parse_hdr_s); + data_off += hdr->w0.pad_len; + *mbuf_init &= ~0xFFFFUL; + *mbuf_init |= (uint64_t)data_off; + + *rte_security_oop_dynfield(mbuf) = inner; + /* Return outer instead of inner mbuf as inner mbuf would have original encrypted packet */ + return mbuf; +} + static __rte_always_inline struct rte_mbuf * nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base, uintptr_t laddr, uint8_t *loff, struct rte_mbuf *mbuf, @@ -422,14 +457,18 @@ nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base, if (!(cq_w1 & BIT(11))) return mbuf; - inner = (struct rte_mbuf *)(rte_be_to_cpu_64(hdr->wqe_ptr) - - sizeof(struct rte_mbuf)); + if (flags & NIX_RX_REAS_F && hdr->w0.pkt_fmt == ROC_IE_OT_SA_PKT_FMT_FULL) { + inner = nix_sec_oop_process(hdr, mbuf, &mbuf_init); + } else { + inner = (struct rte_mbuf *)(rte_be_to_cpu_64(hdr->wqe_ptr) - + sizeof(struct rte_mbuf)); - /* Store meta in lmtline to free - * Assume all meta's from same aura. 
- */ - *(uint64_t *)(laddr + (*loff << 3)) = (uint64_t)mbuf; - *loff = *loff + 1; + /* Store meta in lmtline to free + * Assume all meta's from same aura. + */ + *(uint64_t *)(laddr + (*loff << 3)) = (uint64_t)mbuf; + *loff = *loff + 1; + } /* Get SPI from CPT_PARSE_S's cookie(already swapped) */ w0 = hdr->w0.u64; @@ -471,11 +510,13 @@ nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base, & 0xFF) << 1 : RTE_MBUF_F_RX_IP_CKSUM_GOOD; } - /* Mark meta mbuf as put */ - RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0); + if (!(flags & NIX_RX_REAS_F) || hdr->w0.pkt_fmt != ROC_IE_OT_SA_PKT_FMT_FULL) { + /* Mark meta mbuf as put */ + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0); - /* Mark inner mbuf as get */ - RTE_MEMPOOL_CHECK_COOKIES(inner->pool, (void **)&inner, 1, 1); + /* Mark inner mbuf as get */ + RTE_MEMPOOL_CHECK_COOKIES(inner->pool, (void **)&inner, 1, 1); + } /* Skip reassembly processing when multi-seg is enabled */ if (!(flags & NIX_RX_MULTI_SEG_F) && (flags & NIX_RX_REAS_F) && hdr->w0.num_frags) { @@ -522,7 +563,9 @@ nix_sec_meta_to_mbuf(uint64_t cq_w1, uint64_t cq_w5, uintptr_t inb_sa, *rte_security_dynfield(inner) = (uint64_t)inb_priv->userdata; /* Mark inner mbuf as get */ - RTE_MEMPOOL_CHECK_COOKIES(inner->pool, (void **)&inner, 1, 1); + if (!(flags & NIX_RX_REAS_F) || + hdr->w0.pkt_fmt != ROC_IE_OT_SA_PKT_FMT_FULL) + RTE_MEMPOOL_CHECK_COOKIES(inner->pool, (void **)&inner, 1, 1); if (!(flags & NIX_RX_MULTI_SEG_F) && flags & NIX_RX_REAS_F && hdr->w0.num_frags) { if ((!(hdr->w0.err_sum) || roc_ie_ot_ucc_is_success(hdr->w3.uc_ccode)) && @@ -552,6 +595,19 @@ nix_sec_meta_to_mbuf(uint64_t cq_w1, uint64_t cq_w5, uintptr_t inb_sa, nix_sec_attach_frags(hdr, inner, inb_priv, mbuf_init); *ol_flags |= inner->ol_flags; } + } else if (flags & NIX_RX_REAS_F) { + /* Without fragmentation but may have to handle OOP session */ + if (hdr->w0.pkt_fmt == ROC_IE_OT_SA_PKT_FMT_FULL) { + uint64_t mbuf_init = 0; + + /* Caller has already prepared to return second pass + * mbuf and inner mbuf is actually outer. + * Store original buffer pointer in dynfield. + */ + nix_sec_oop_process(hdr, inner, &mbuf_init); + /* Clear and update lower 16 bit of data offset */ + *rearm = (*rearm & ~(BIT_ULL(16) - 1)) | mbuf_init; + } } } #endif @@ -628,6 +684,7 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf, uint64_t cq_w1; int64_t len; uint64_t sg; + uintptr_t p; cq_w1 = *(const uint64_t *)rx; if (flags & NIX_RX_REAS_F) @@ -635,7 +692,9 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf, /* Use inner rx parse for meta pkts sg list */ if (cq_w1 & BIT(11) && flags & NIX_RX_OFFLOAD_SECURITY_F) { const uint64_t *wqe = (const uint64_t *)(mbuf + 1); - rx = (const union nix_rx_parse_u *)(wqe + 1); + + if (hdr->w0.pkt_fmt != ROC_IE_OT_SA_PKT_FMT_FULL) + rx = (const union nix_rx_parse_u *)(wqe + 1); } sg = *(const uint64_t *)(rx + 1); @@ -761,6 +820,31 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf, num_frags--; frag_i++; goto again; + } else if ((flags & NIX_RX_REAS_F) && (cq_w1 & BIT(11)) && !reas_success && + hdr->w0.pkt_fmt == ROC_IE_OT_SA_PKT_FMT_FULL) { + uintptr_t wqe = rte_be_to_cpu_64(hdr->wqe_ptr); + + /* Process OOP packet inner buffer mseg. reas_success flag is used here only + * to avoid looping. 
+ */ + mbuf = ((struct rte_mbuf *)wqe) - 1; + rx = (const union nix_rx_parse_u *)(wqe + 8); + eol = ((const rte_iova_t *)(rx + 1) + ((rx->desc_sizem1 + 1) << 1)); + sg = *(const uint64_t *)(rx + 1); + nb_segs = (sg >> 48) & 0x3; + + + len = mbuf->pkt_len; + p = (uintptr_t)&mbuf->rearm_data; + *(uint64_t *)p = rearm; + mbuf->data_len = (sg & 0xFFFF) - + (flags & NIX_RX_OFFLOAD_TSTAMP_F ? + CNXK_NIX_TIMESYNC_RX_OFFSET : 0); + head = mbuf; + head->nb_segs = nb_segs; + /* Using this flag to avoid looping in case of OOP */ + reas_success = true; + goto again; } /* Update for last failure fragment */ @@ -899,6 +983,7 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts, const uint64_t mbuf_init = rxq->mbuf_initializer; const void *lookup_mem = rxq->lookup_mem; const uint64_t data_off = rxq->data_off; + struct rte_mempool *meta_pool = NULL; const uintptr_t desc = rxq->desc; const uint64_t wdata = rxq->wdata; const uint32_t qmask = rxq->qmask; @@ -923,6 +1008,8 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts, ROC_LMT_BASE_ID_GET(lbase, lmt_id); laddr = lbase; laddr += 8; + if (flags & NIX_RX_REAS_F) + meta_pool = (struct rte_mempool *)rxq->meta_pool; } while (packets < nb_pkts) { @@ -943,6 +1030,11 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts, cpth = ((uintptr_t)mbuf + (uint16_t)data_off); + /* Update mempool pointer for full mode pkt */ + if ((flags & NIX_RX_REAS_F) && (cq_w1 & BIT(11)) && + !((*(uint64_t *)cpth) & BIT(15))) + mbuf->pool = meta_pool; + mbuf = nix_sec_meta_to_mbuf_sc(cq_w1, cq_w5, sa_base, laddr, &loff, mbuf, data_off, flags, mbuf_init); @@ -1047,6 +1139,7 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts, uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer); struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3; uint8_t loff = 0, lnum = 0, shft = 0; + struct rte_mempool *meta_pool = NULL; uint8x16_t f0, f1, f2, f3; uint16_t lmt_id, d_off; uint64_t lbase, laddr; @@ -1099,6 +1192,9 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts, /* Get SA Base from lookup tbl using port_id */ port = mbuf_initializer >> 48; sa_base = cnxk_nix_sa_base_get(port, lookup_mem); + if (flags & NIX_RX_REAS_F) + meta_pool = (struct rte_mempool *)cnxk_nix_inl_metapool_get(port, + lookup_mem); lbase = lmt_base; } else { @@ -1106,6 +1202,8 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts, d_off = rxq->data_off; sa_base = rxq->sa_base; lbase = rxq->lmt_base; + if (flags & NIX_RX_REAS_F) + meta_pool = (struct rte_mempool *)rxq->meta_pool; } sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1); ROC_LMT_BASE_ID_GET(lbase, lmt_id); @@ -1510,10 +1608,19 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts, uint16_t len = vget_lane_u16(lens, 0); cpth0 = (uintptr_t)mbuf0 + d_off; + /* Free meta to aura */ - NIX_PUSH_META_TO_FREE(mbuf0, laddr, &loff); - mbuf01 = vsetq_lane_u64(wqe, mbuf01, 0); - mbuf0 = (struct rte_mbuf *)wqe; + if (!(flags & NIX_RX_REAS_F) || + *(uint64_t *)cpth0 & BIT_ULL(15)) { + /* Free meta to aura */ + NIX_PUSH_META_TO_FREE(mbuf0, laddr, + &loff); + mbuf01 = vsetq_lane_u64(wqe, mbuf01, 0); + mbuf0 = (struct rte_mbuf *)wqe; + } else if (flags & NIX_RX_REAS_F) { + /* Update meta pool for full mode pkts */ + mbuf0->pool = meta_pool; + } /* Update pkt_len and data_len */ f0 = vsetq_lane_u16(len, f0, 2); @@ -1535,10 +1642,18 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts, 
uint16_t len = vget_lane_u16(lens, 1); cpth1 = (uintptr_t)mbuf1 + d_off; + /* Free meta to aura */ - NIX_PUSH_META_TO_FREE(mbuf1, laddr, &loff); - mbuf01 = vsetq_lane_u64(wqe, mbuf01, 1); - mbuf1 = (struct rte_mbuf *)wqe; + if (!(flags & NIX_RX_REAS_F) || + *(uint64_t *)cpth1 & BIT_ULL(15)) { + NIX_PUSH_META_TO_FREE(mbuf1, laddr, + &loff); + mbuf01 = vsetq_lane_u64(wqe, mbuf01, 1); + mbuf1 = (struct rte_mbuf *)wqe; + } else if (flags & NIX_RX_REAS_F) { + /* Update meta pool for full mode pkts */ + mbuf1->pool = meta_pool; + } /* Update pkt_len and data_len */ f1 = vsetq_lane_u16(len, f1, 2); @@ -1559,10 +1674,18 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts, uint16_t len = vget_lane_u16(lens, 2); cpth2 = (uintptr_t)mbuf2 + d_off; + /* Free meta to aura */ - NIX_PUSH_META_TO_FREE(mbuf2, laddr, &loff); - mbuf23 = vsetq_lane_u64(wqe, mbuf23, 0); - mbuf2 = (struct rte_mbuf *)wqe; + if (!(flags & NIX_RX_REAS_F) || + *(uint64_t *)cpth2 & BIT_ULL(15)) { + NIX_PUSH_META_TO_FREE(mbuf2, laddr, + &loff); + mbuf23 = vsetq_lane_u64(wqe, mbuf23, 0); + mbuf2 = (struct rte_mbuf *)wqe; + } else if (flags & NIX_RX_REAS_F) { + /* Update meta pool for full mode pkts */ + mbuf2->pool = meta_pool; + } /* Update pkt_len and data_len */ f2 = vsetq_lane_u16(len, f2, 2); @@ -1583,10 +1706,18 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts, uint16_t len = vget_lane_u16(lens, 3); cpth3 = (uintptr_t)mbuf3 + d_off; + /* Free meta to aura */ - NIX_PUSH_META_TO_FREE(mbuf3, laddr, &loff); - mbuf23 = vsetq_lane_u64(wqe, mbuf23, 1); - mbuf3 = (struct rte_mbuf *)wqe; + if (!(flags & NIX_RX_REAS_F) || + *(uint64_t *)cpth3 & BIT_ULL(15)) { + NIX_PUSH_META_TO_FREE(mbuf3, laddr, + &loff); + mbuf23 = vsetq_lane_u64(wqe, mbuf23, 1); + mbuf3 = (struct rte_mbuf *)wqe; + } else if (flags & NIX_RX_REAS_F) { + /* Update meta pool for full mode pkts */ + mbuf3->pool = meta_pool; + } /* Update pkt_len and data_len */ f3 = vsetq_lane_u16(len, f3, 2); diff --git a/drivers/net/cnxk/cn10k_rxtx.h b/drivers/net/cnxk/cn10k_rxtx.h index b4287e2864..aeffc4ac92 100644 --- a/drivers/net/cnxk/cn10k_rxtx.h +++ b/drivers/net/cnxk/cn10k_rxtx.h @@ -78,6 +78,7 @@ struct cn10k_eth_rxq { uint64_t sa_base; uint64_t lmt_base; uint64_t meta_aura; + uintptr_t meta_pool; uint16_t rq; struct cnxk_timesync_info *tstamp; } __plt_cache_aligned; diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h index ed531fb277..2b9ff11a6a 100644 --- a/drivers/net/cnxk/cnxk_ethdev.h +++ b/drivers/net/cnxk/cnxk_ethdev.h @@ -219,6 +219,9 @@ struct cnxk_eth_sec_sess { /* Inbound session on inl dev */ bool inl_dev; + + /* Out-Of-Place processing */ + bool inb_oop; }; TAILQ_HEAD(cnxk_eth_sec_sess_list, cnxk_eth_sec_sess); @@ -246,6 +249,12 @@ struct cnxk_eth_dev_sec_inb { /* DPTR for WRITE_SA microcode op */ void *sa_dptr; + /* Number of oop sessions */ + uint16_t nb_oop; + + /* Reassembly enabled */ + bool reass_en; + /* Lock to synchronize sa setup/release */ rte_spinlock_t lock; }; From patchwork Fri Aug 11 08:54:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 130114 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0B3CD43032; Fri, 11 Aug 2023 10:55:11 +0200 (CEST) Received: from mails.dpdk.org 
(localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id EED864324E; Fri, 11 Aug 2023 10:55:10 +0200 (CEST)
From: Nithin Dabilpuram
To: , Fan Zhang
CC: , , Nithin Dabilpuram
Subject: [PATCH 3/3] test/security: add unittest for inline ingress oop
Date: Fri, 11 Aug 2023 14:24:40 +0530
Message-ID: <20230811085440.415916-3-ndabilpuram@marvell.com>
In-Reply-To: <20230811085440.415916-1-ndabilpuram@marvell.com>
References: <20230309085645.1630826-1-ndabilpuram@marvell.com> <20230811085440.415916-1-ndabilpuram@marvell.com>

Add a unit test for inline ingress out-of-place processing.
Signed-off-by: Nithin Dabilpuram --- app/test/test_cryptodev_security_ipsec.c | 8 +++ app/test/test_cryptodev_security_ipsec.h | 1 + app/test/test_security_inline_proto.c | 85 ++++++++++++++++++++++++ 3 files changed, 94 insertions(+) diff --git a/app/test/test_cryptodev_security_ipsec.c b/app/test/test_cryptodev_security_ipsec.c index 7a8688c692..be9e246bfe 100644 --- a/app/test/test_cryptodev_security_ipsec.c +++ b/app/test/test_cryptodev_security_ipsec.c @@ -213,6 +213,14 @@ test_ipsec_sec_caps_verify(struct rte_security_ipsec_xform *ipsec_xform, } } + if (ipsec_xform->options.ingress_oop == 1 && + sec_cap->ipsec.options.ingress_oop == 0) { + if (!silent) + RTE_LOG(INFO, USER1, + "Inline Ingress OOP processing is not supported\n"); + return -ENOTSUP; + } + return 0; } diff --git a/app/test/test_cryptodev_security_ipsec.h b/app/test/test_cryptodev_security_ipsec.h index 92e641ba0b..5606ec056d 100644 --- a/app/test/test_cryptodev_security_ipsec.h +++ b/app/test/test_cryptodev_security_ipsec.h @@ -110,6 +110,7 @@ struct ipsec_test_flags { bool ah; uint32_t plaintext_len; int nb_segs_in_mbuf; + bool inb_oop; }; struct crypto_param { diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c index 45aa742c6b..6ceb9c5e3a 100644 --- a/app/test/test_security_inline_proto.c +++ b/app/test/test_security_inline_proto.c @@ -784,6 +784,51 @@ event_rx_burst(struct rte_mbuf **rx_pkts, uint16_t nb_pkts_to_rx) return nb_rx; } +static int +verify_inbound_oop(struct ipsec_test_data *td, + bool silent, struct rte_mbuf *mbuf) +{ + int ret = TEST_SUCCESS, rc; + struct rte_mbuf *orig; + uint32_t len; + void *data; + + orig = *rte_security_oop_dynfield(mbuf); + if (!orig) { + if (!silent) + printf("\nUnable to get orig buffer OOP session"); + return TEST_FAILED; + } + + /* Skip Ethernet header comparison */ + rte_pktmbuf_adj(orig, RTE_ETHER_HDR_LEN); + + len = td->input_text.len; + if (orig->pkt_len != len) { + if (!silent) + printf("\nOriginal packet length mismatch, expected %u, got %u ", + len, orig->pkt_len); + ret = TEST_FAILED; + } + + data = rte_pktmbuf_mtod(orig, void *); + rc = memcmp(data, td->input_text.data, len); + if (rc) { + ret = TEST_FAILED; + if (silent) + goto exit; + + printf("TestCase %s line %d: %s\n", __func__, __LINE__, + "output text not as expected\n"); + + rte_hexdump(stdout, "expected", td->input_text.data, len); + rte_hexdump(stdout, "actual", data, len); + } +exit: + rte_pktmbuf_free(orig); + return ret; +} + static int test_ipsec_with_reassembly(struct reassembly_vector *vector, const struct ipsec_test_flags *flags) @@ -1107,6 +1152,12 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td, if (ret) return ret; + if (flags->inb_oop && rte_security_oop_dynfield_offset < 0) { + printf("\nDynamic field not available for inline inbound OOP"); + ret = TEST_FAILED; + goto out; + } + if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) { ret = create_default_flow(port_id); if (ret) @@ -1198,6 +1249,15 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td, goto out; } + if (flags->inb_oop) { + ret = verify_inbound_oop(td, silent, rx_pkts_burst[i]); + if (ret != TEST_SUCCESS) { + for ( ; i < nb_rx; i++) + rte_pktmbuf_free(rx_pkts_burst[i]); + goto out; + } + } + rte_pktmbuf_free(rx_pkts_burst[i]); rx_pkts_burst[i] = NULL; } @@ -2075,6 +2135,26 @@ test_ipsec_inline_proto_known_vec_inb(const void *test_data) return test_ipsec_inline_proto_process(&td_inb, NULL, 1, false, &flags); } +static int 
+test_ipsec_inline_proto_oop_inb(const void *test_data) +{ + const struct ipsec_test_data *td = test_data; + struct ipsec_test_flags flags; + struct ipsec_test_data td_inb; + + memset(&flags, 0, sizeof(flags)); + flags.inb_oop = true; + + if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) + test_ipsec_td_in_from_out(td, &td_inb); + else + memcpy(&td_inb, td, sizeof(td_inb)); + + td_inb.ipsec_xform.options.ingress_oop = true; + + return test_ipsec_inline_proto_process(&td_inb, NULL, 1, false, &flags); +} + static int test_ipsec_inline_proto_display_list(const void *data __rte_unused) { @@ -3165,6 +3245,11 @@ static struct unit_test_suite inline_ipsec_testsuite = { "IPv4 Reassembly with burst of 4 fragments", ut_setup_inline_ipsec_reassembly, ut_teardown_inline_ipsec_reassembly, test_inline_ip_reassembly, &ipv4_4frag_burst_vector), + TEST_CASE_NAMED_WITH_DATA( + "Inbound Out-Of-Place processing", + ut_setup_inline_ipsec, ut_teardown_inline_ipsec, + test_ipsec_inline_proto_oop_inb, + &pkt_aes_128_gcm), TEST_CASES_END() /**< NULL terminate unit test array */ },
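
For context beyond the patches themselves, the intended application-level flow for the new option is: set ingress_oop in the IPsec SA options before creating the inline-protocol session, and then, for packets received on such an SA, read the mbuf pointer stored in the security OOP dynfield to reach the retained original (encrypted) packet. The sketch below is not part of the patch series; it assumes the usual inline-protocol setup (port/queue configuration, crypto and IPsec xforms, a flow rule steering ESP traffic to the SA) is done elsewhere, and the helper names create_inb_oop_session() and handle_rx_from_oop_sa() are illustrative only.

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_security.h>

static void *
create_inb_oop_session(struct rte_security_ctx *sec_ctx,
		       struct rte_security_session_conf *conf,
		       struct rte_mempool *sess_mp)
{
	/* Per-SA option: retain the original encrypted packet for this SA */
	conf->ipsec.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
	conf->ipsec.options.ingress_oop = 1;

	/*
	 * The PMD registers the OOP mbuf dynfield while creating the session;
	 * session creation fails if the dynfield cannot be registered.
	 */
	return rte_security_session_create(sec_ctx, conf, sess_mp);
}

static void
handle_rx_from_oop_sa(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb, i;

	nb = rte_eth_rx_burst(port_id, queue_id, pkts, RTE_DIM(pkts));
	for (i = 0; i < nb; i++) {
		struct rte_mbuf *m = pkts[i];

		/*
		 * The dynfield is meaningful only for packets that matched an
		 * inline SA created with ingress_oop; this loop assumes the
		 * queue carries only such traffic.
		 */
		if ((m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) &&
		    rte_security_oop_dynfield_offset >= 0) {
			struct rte_mbuf *orig = *rte_security_oop_dynfield(m);

			if (orig != NULL) {
				/* ... post-process the original encrypted packet ... */
				rte_pktmbuf_free(orig);
			}
		}

		/* ... handle the decrypted inner packet as usual ... */
		rte_pktmbuf_free(m);
	}
}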