From patchwork Wed Oct 27 13:03:44 2021
X-Patchwork-Submitter: Radu Nicolau
X-Patchwork-Id: 103047
X-Patchwork-Delegate: gakhil@marvell.com
From: Radu Nicolau
To: Konstantin Ananyev, Bernard Iremonger, Vladimir Medvedkin
Cc: dev@dpdk.org, gakhil@marvell.com, anoobj@marvell.com, Radu Nicolau,
 Declan Doherty, Abhijit Sinha, Daniel Martin Buckley, Fan Zhang
Date: Wed, 27 Oct 2021 14:03:44 +0100
Message-Id: <20211027130345.2249987-2-radu.nicolau@intel.com>
In-Reply-To: <20211027130345.2249987-1-radu.nicolau@intel.com>
References: <20211027130345.2249987-1-radu.nicolau@intel.com>
Subject: [dpdk-dev] [PATCH v4 1/2] ipsec: add TSO support

Add support for transmit segmentation offload (TSO) to the inline crypto
processing mode. This offload is not supported by the other processing
modes, since at a minimum it requires inline crypto for IPsec to be
supported on the network interface.

Signed-off-by: Declan Doherty
Signed-off-by: Radu Nicolau
Signed-off-by: Abhijit Sinha
Signed-off-by: Daniel Martin Buckley
Acked-by: Fan Zhang
Acked-by: Konstantin Ananyev
---
 doc/guides/prog_guide/ipsec_lib.rst    |   2 +
 doc/guides/rel_notes/release_21_11.rst |   1 +
 lib/ipsec/esp_outb.c                   | 141 +++++++++++++++++++------
 3 files changed, 112 insertions(+), 32 deletions(-)

diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index 52afdcda9f..0bdbdad1e4 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -315,6 +315,8 @@ Supported features
 
 *  NAT-T / UDP encapsulated ESP.
 
+*  TSO (only for inline crypto mode)
+
 *  algorithms: 3DES-CBC, AES-CBC, AES-CTR, AES-GCM, AES_CCM, CHACHA20_POLY1305,
    AES_GMAC, HMAC-SHA1, NULL.
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 1ccac87b73..b5b5abadee 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -268,6 +268,7 @@ New Features
   * Added support for NAT-T / UDP encapsulated ESP.
   * Added support for SA telemetry.
   * Added support for setting a non default starting ESN value.
+  * Added support for TSO in inline crypto mode.
 
 * **Added multi-process support for testpmd.**
 
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 336d24a6af..b7a70fd001 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -18,7 +18,7 @@
 
 typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
-	union sym_op_data *icv, uint8_t sqh_len);
+	union sym_op_data *icv, uint8_t sqh_len, uint8_t tso);
 
 /*
  * helper function to fill crypto_sym op for cipher+auth algorithms.
@@ -139,7 +139,7 @@ outb_cop_prepare(struct rte_crypto_op *cop,
 static inline int32_t
 outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
-	union sym_op_data *icv, uint8_t sqh_len)
+	union sym_op_data *icv, uint8_t sqh_len, uint8_t tso)
 {
 	uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
 	struct rte_mbuf *ml;
@@ -157,11 +157,19 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 
 	/* number of bytes to encrypt */
 	clen = plen + sizeof(*espt);
-	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
 
-	/* pad length + esp tail */
-	pdlen = clen - plen;
-	tlen = pdlen + sa->icv_len + sqh_len;
+	if (!tso) {
+		clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+		/* pad length + esp tail */
+		pdlen = clen - plen;
+		tlen = pdlen + sa->icv_len + sqh_len;
+	} else {
+		/* We don't need to pad/align packet or append ICV length
+		 * when using TSO offload
+		 */
+		pdlen = clen - plen;
+		tlen = pdlen + sqh_len;
+	}
 
 	/* do append and prepend */
 	ml = rte_pktmbuf_lastseg(mb);
@@ -309,7 +317,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 
 		/* try to update the packet itself */
 		rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv,
-			sa->sqh_len);
+			sa->sqh_len, 0);
 		/* success, setup crypto op */
 		if (rc >= 0) {
 			outb_pkt_xprepare(sa, sqc, &icv);
@@ -336,7 +344,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 static inline int32_t
 outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
-	union sym_op_data *icv, uint8_t sqh_len)
+	union sym_op_data *icv, uint8_t sqh_len, uint8_t tso)
 {
 	uint8_t np;
 	uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
@@ -358,11 +366,19 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
 
 	/* number of bytes to encrypt */
 	clen = plen + sizeof(*espt);
-	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
 
-	/* pad length + esp tail */
-	pdlen = clen - plen;
-	tlen = pdlen + sa->icv_len + sqh_len;
+	if (!tso) {
+		clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+		/* pad length + esp tail */
+		pdlen = clen - plen;
+		tlen = pdlen + sa->icv_len + sqh_len;
+	} else {
+		/* We don't need to pad/align packet or append ICV length
+		 * when using TSO offload
+		 */
+		pdlen = clen - plen;
+		tlen = pdlen + sqh_len;
+	}
 
 	/* do append and insert */
 	ml = rte_pktmbuf_lastseg(mb);
@@ -452,7 +468,7 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 
 		/* try to update the packet itself */
 		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv,
-			sa->sqh_len);
+			sa->sqh_len, 0);
 		/* success, setup crypto op */
 		if (rc >= 0) {
 			outb_pkt_xprepare(sa, sqc, &icv);
@@ -549,7 +565,7 @@ cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss,
 		gen_iv(ivbuf[k], sqc);
 
 		/* try to update the packet itself */
-		rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len);
+		rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len, 0);
 
 		/* success, proceed with preparations */
 		if (rc >= 0) {
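
The two trailer-sizing hunks above (in outb_tun_pkt_prepare() and
outb_trs_pkt_prepare()) implement the same rule: when the NIC will
segment the packet, padding and ICV space cannot be reserved up front,
since each resulting segment is padded and authenticated individually
after segmentation. A minimal standalone sketch of that computation,
using illustrative names that are not part of the patch:

  #include <stdint.h>

  /* round v up to a multiple of align */
  #define ALIGN_CEIL(v, align) ((((v) + (align) - 1) / (align)) * (align))

  /*
   * Bytes appended after the payload: ESP tail (padding + pad_len +
   * next_proto), plus ICV and the optional ESN high word (sqh).
   */
  static uint32_t
  esp_trailer_len(uint32_t plen, uint32_t esp_tail_len, uint32_t pad_align,
	uint32_t icv_len, uint32_t sqh_len, int tso)
  {
	uint32_t clen = plen + esp_tail_len;

	if (!tso) {
		clen = ALIGN_CEIL(clen, pad_align);
		return (clen - plen) + icv_len + sqh_len;
	}
	/* TSO: no up-front padding, no ICV reservation */
	return (clen - plen) + sqh_len;
  }

Note that the inline call sites below pass sqh_len as 0, so for TSO
packets only the ESP tail itself is appended before transmission.
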
@@ -668,6 +684,31 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
 		ss->sa->statistics.bytes += bytes;
 	}
 }
 
+static inline int
+esn_outb_nb_segments(struct rte_mbuf *m)
+{
+	if (m->ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) {
+		uint16_t pkt_l3len = m->pkt_len - m->l2_len;
+		uint16_t segments =
+			(m->tso_segsz > 0 && pkt_l3len > m->tso_segsz) ?
+			(pkt_l3len + m->tso_segsz - 1) / m->tso_segsz : 1;
+		return segments;
+	}
+	return 1; /* no TSO */
+}
+
+/* Compute how many packets can be sent before overflow occurs */
+static inline uint16_t
+esn_outb_nb_valid_packets(uint16_t num, uint32_t n_sqn, uint16_t nb_segs[])
+{
+	uint16_t i;
+	uint32_t seg_cnt = 0;
+
+	for (i = 0; i < num && seg_cnt < n_sqn; i++)
+		seg_cnt += nb_segs[i];
+
+	return i - 1;
+}
+
 /*
  * process group of ESP outbound tunnel packets destined for
  * INLINE_CRYPTO type of device.
@@ -677,29 +718,47 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num)
 {
 	int32_t rc;
-	uint32_t i, k, n;
+	uint32_t i, k, nb_segs_total, n_sqn;
 	uint64_t sqn;
 	rte_be64_t sqc;
 	struct rte_ipsec_sa *sa;
 	union sym_op_data icv;
 	uint64_t iv[IPSEC_MAX_IV_QWORD];
 	uint32_t dr[num];
+	uint16_t nb_segs[num];
 
 	sa = ss->sa;
+	nb_segs_total = 0;
+
+	/* Calculate number of segments */
+	for (i = 0; i != num; i++) {
+		nb_segs[i] = esn_outb_nb_segments(mb[i]);
+		nb_segs_total += nb_segs[i];
+	}
 
-	n = num;
-	sqn = esn_outb_update_sqn(sa, &n);
-	if (n != num)
+	n_sqn = nb_segs_total;
+	sqn = esn_outb_update_sqn(sa, &n_sqn);
+	if (n_sqn != nb_segs_total) {
 		rte_errno = EOVERFLOW;
+		/* if there are segmented packets find out how many can be
+		 * sent until overflow occurs
+		 */
+		if (nb_segs_total > num) /* there is at least 1 */
+			num = esn_outb_nb_valid_packets(num, n_sqn, nb_segs);
+		else
+			num = n_sqn; /* no segmented packets */
+	}
 
 	k = 0;
-	for (i = 0; i != n; i++) {
+	for (i = 0; i != num; i++) {
 
-		sqc = rte_cpu_to_be_64(sqn + i);
+		sqc = rte_cpu_to_be_64(sqn);
 		gen_iv(iv, sqc);
+		sqn += nb_segs[i];
 
 		/* try to update the packet itself */
-		rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+		rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0,
+			(mb[i]->ol_flags &
+			(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) != 0);
 
 		k += (rc >= 0);
@@ -711,8 +770,8 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
 	}
 
 	/* copy not processed mbufs beyond good ones */
-	if (k != n && k != 0)
-		move_bad_mbufs(mb, dr, n, n - k);
+	if (k != num && k != 0)
+		move_bad_mbufs(mb, dr, num, num - k);
 
 	inline_outb_mbuf_prepare(ss, mb, k);
 	return k;
@@ -727,29 +786,47 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
 	struct rte_mbuf *mb[], uint16_t num)
 {
 	int32_t rc;
-	uint32_t i, k, n;
+	uint32_t i, k, nb_segs_total, n_sqn;
 	uint64_t sqn;
 	rte_be64_t sqc;
 	struct rte_ipsec_sa *sa;
 	union sym_op_data icv;
 	uint64_t iv[IPSEC_MAX_IV_QWORD];
 	uint32_t dr[num];
+	uint16_t nb_segs[num];
 
 	sa = ss->sa;
+	nb_segs_total = 0;
+
+	/* Calculate number of segments */
+	for (i = 0; i != num; i++) {
+		nb_segs[i] = esn_outb_nb_segments(mb[i]);
+		nb_segs_total += nb_segs[i];
+	}
 
-	n = num;
-	sqn = esn_outb_update_sqn(sa, &n);
-	if (n != num)
+	n_sqn = nb_segs_total;
+	sqn = esn_outb_update_sqn(sa, &n_sqn);
+	if (n_sqn != nb_segs_total) {
 		rte_errno = EOVERFLOW;
+		/* if there are segmented packets find out how many can be
+		 * sent until overflow occurs
+		 */
+		if (nb_segs_total > num) /* there is at least 1 */
+			num = esn_outb_nb_valid_packets(num, n_sqn, nb_segs);
+		else
+			num = n_sqn; /* no segmented packets */
+	}
 	k = 0;
-	for (i = 0; i != n; i++) {
+	for (i = 0; i != num; i++) {
 
-		sqc = rte_cpu_to_be_64(sqn + i);
+		sqc = rte_cpu_to_be_64(sqn);
 		gen_iv(iv, sqc);
+		sqn += nb_segs[i];
 
 		/* try to update the packet itself */
-		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0);
+		rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0,
+			(mb[i]->ol_flags &
+			(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) != 0);
 
 		k += (rc >= 0);
@@ -761,8 +838,8 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
 	}
 
 	/* copy not processed mbufs beyond good ones */
-	if (k != n && k != 0)
-		move_bad_mbufs(mb, dr, n, n - k);
+	if (k != num && k != 0)
+		move_bad_mbufs(mb, dr, num, num - k);
 
 	inline_outb_mbuf_prepare(ss, mb, k);
 	return k;

From patchwork Wed Oct 27 13:03:45 2021
X-Patchwork-Submitter: Radu Nicolau
X-Patchwork-Id: 103049
X-Patchwork-Delegate: gakhil@marvell.com
From: Radu Nicolau
To: Radu Nicolau, Akhil Goyal
Cc: dev@dpdk.org, anoobj@marvell.com, konstantin.ananyev@intel.com,
 Declan Doherty
Date: Wed, 27 Oct 2021 14:03:45 +0100
Message-Id: <20211027130345.2249987-3-radu.nicolau@intel.com>
In-Reply-To: <20211027130345.2249987-1-radu.nicolau@intel.com>
References: <20211027130345.2249987-1-radu.nicolau@intel.com>
Subject: [dpdk-dev] [PATCH v4 2/2] examples/ipsec-secgw: add support for
 TCP TSO

Add support to allow the user to specify an MSS value for TCP TSO
offload on a per-SA basis. In the context of IPsec, MSS configuration is
only supported for outbound SAs used with an inline IPsec crypto
offload.
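
For context, a hypothetical egress SA rule using the new keyword,
following the rule syntax documented in the hunks below; the algorithms,
keys, addresses and port numbers are placeholder values, not taken from
this patch:

  sa out 5 cipher_algo aes-128-cbc cipher_key 0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0
  auth_algo sha1-hmac auth_key 0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0
  mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5
  type inline-crypto-offload port_id 0 mss 1400

With "mss 1400", a packet carrying 9000 bytes above the Ethernet header
would be cut into ceil(9000/1400) = 7 segments by the NIC, and would
therefore reserve 7 ESP sequence numbers, matching
esn_outb_nb_segments() in patch 1/2.
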
Signed-off-by: Declan Doherty
Signed-off-by: Radu Nicolau
Acked-by: Konstantin Ananyev
---
 doc/guides/rel_notes/release_21_11.rst   |  4 ++++
 doc/guides/sample_app_ug/ipsec_secgw.rst | 11 +++++++++++
 examples/ipsec-secgw/ipsec-secgw.c       |  4 ++++
 examples/ipsec-secgw/ipsec.h             |  1 +
 examples/ipsec-secgw/ipsec_process.c     | 22 ++++++++++++++++++++++
 examples/ipsec-secgw/sa.c                | 25 +++++++++++++++++++++---
 6 files changed, 64 insertions(+), 3 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index b5b5abadee..35ececc3f2 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -306,6 +306,10 @@ New Features
   * Pcapng format with timestamps and meta-data.
   * Fixes packet capture with stripped VLAN tags.
 
+* **IPsec Security Gateway sample application new features.**
+
+  * Added support for TSO (only for inline crypto TCP packets).
+
 
 Removed Items
 -------------
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 782574dd39..639d309a6e 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -720,6 +720,17 @@ where each options means:
 
    * *udp-encap*
 
+ ``<mss>``
+
+ * Maximum segment size for TSO offload, available for egress SAs only.
+
+ * Optional: Yes, TSO offload not set by default.
+
+ * Syntax:
+
+   * *mss N* N is the segment size in bytes
+
+
 Example SA rules:
 
 .. code-block:: console
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 4bdf99b62b..5fcf424efe 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -398,6 +398,10 @@ prepare_one_packet(struct rte_mbuf *pkt, struct ipsec_traffic *t)
 		pkt->l2_len = 0;
 		pkt->l3_len = sizeof(*iph4);
 		pkt->packet_type |= RTE_PTYPE_L3_IPV4;
+		if (pkt->packet_type & RTE_PTYPE_L4_TCP)
+			pkt->l4_len = sizeof(struct rte_tcp_hdr);
+		else if (pkt->packet_type & RTE_PTYPE_L4_UDP)
+			pkt->l4_len = sizeof(struct rte_udp_hdr);
 	} else if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) {
 		int next_proto;
 		size_t l3len, ext_len;
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index 8405c48171..2c3640833d 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -137,6 +137,7 @@ struct ipsec_sa {
 	enum rte_security_ipsec_sa_direction direction;
 	uint8_t udp_encap;
 	uint16_t portid;
+	uint16_t mss;
 	uint8_t fdir_qid;
 	uint8_t fdir_flag;
 
diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c
index 5012e1a6a4..bb56e97ad7 100644
--- a/examples/ipsec-secgw/ipsec_process.c
+++ b/examples/ipsec-secgw/ipsec_process.c
@@ -222,6 +222,28 @@ prep_process_group(void *sa, struct rte_mbuf *mb[], uint32_t cnt)
 	for (j = 0; j != cnt; j++) {
 		priv = get_priv(mb[j]);
 		priv->sa = sa;
+		/* setup TSO related fields if TSO enabled */
+		if (priv->sa->mss) {
+			/* TCP only */
+			uint32_t ptype = mb[j]->packet_type;
+			if ((ptype & RTE_PTYPE_L4_TCP) == 0)
+				continue;
+
+			mb[j]->tso_segsz = priv->sa->mss;
+			if (IS_TUNNEL(priv->sa->flags)) {
+				mb[j]->outer_l3_len = mb[j]->l3_len;
+				mb[j]->outer_l2_len = mb[j]->l2_len;
+				mb[j]->ol_flags |=
+					(RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+					RTE_MBUF_F_TX_TUNNEL_ESP);
+			}
+			mb[j]->ol_flags |= (RTE_MBUF_F_TX_TCP_SEG |
+				RTE_MBUF_F_TX_TCP_CKSUM);
+			if (RTE_ETH_IS_IPV4_HDR(ptype))
+				mb[j]->ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4;
+			else
+				mb[j]->ol_flags |= RTE_MBUF_F_TX_OUTER_IPV6;
+		}
 	}
 }
 
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 88dd30464f..97f265cc7b 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -677,6 +677,16 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
 			continue;
 		}
 
+		if (strcmp(tokens[ti], "mss") == 0) {
+			INCREMENT_TOKEN_INDEX(ti, n_tokens, status);
+			if (status->status < 0)
+				return;
+			rule->mss = atoi(tokens[ti]);
+			if (status->status < 0)
+				return;
+			continue;
+		}
+
 		if (strcmp(tokens[ti], "fallback") == 0) {
 			struct rte_ipsec_session *fb;
 
@@ -970,7 +980,7 @@ sa_create(const char *name, int32_t socket_id, uint32_t nb_sa)
 }
 
 static int
-check_eth_dev_caps(uint16_t portid, uint32_t inbound)
+check_eth_dev_caps(uint16_t portid, uint32_t inbound, uint32_t tso)
 {
 	struct rte_eth_dev_info dev_info;
 	int retval;
@@ -999,6 +1009,12 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
 				"hardware TX IPSec offload is not supported\n");
 			return -EINVAL;
 		}
+		if (tso && (dev_info.tx_offload_capa &
+				RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
+			RTE_LOG(WARNING, PORT,
+				"hardware TCP TSO offload is not supported\n");
+			return -EINVAL;
+		}
 	}
 	return 0;
 }
@@ -1127,7 +1143,7 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
 
 		if (ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL ||
 			ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) {
-			if (check_eth_dev_caps(sa->portid, inbound))
+			if (check_eth_dev_caps(sa->portid, inbound, sa->mss))
 				return -EINVAL;
 		}
 
@@ -1638,8 +1654,11 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
 
 		if ((rule_type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||
 				rule_type ==
-				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
-				&& rule->portid == port_id)
+				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) &&
+				rule->portid == port_id) {
 			*tx_offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
+			if (rule->mss)
+				*tx_offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
+		}
 	}
 	return 0;
 }
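
To summarize the data path added by this series, here is a minimal
standalone sketch (not part of the patch) of how the TSO-related mbuf
fields end up populated for an outbound inline-IPsec packet, mirroring
prep_process_group() above; mss, is_tunnel and is_ipv4 stand in for the
state the real code reads from struct ipsec_sa:

  #include <rte_mbuf.h>

  static void
  set_ipsec_tso(struct rte_mbuf *m, uint16_t mss, int is_tunnel, int is_ipv4)
  {
	/* TSO is applied to TCP packets only */
	if ((m->packet_type & RTE_PTYPE_L4_TCP) == 0)
		return;

	m->tso_segsz = mss;
	if (is_tunnel) {
		/* per the ipsec_process.c hunk, the pre-encapsulation
		 * L2/L3 lengths are recorded as outer lengths for the
		 * ESP tunnel offload */
		m->outer_l3_len = m->l3_len;
		m->outer_l2_len = m->l2_len;
		m->ol_flags |= RTE_MBUF_F_TX_OUTER_IP_CKSUM |
				RTE_MBUF_F_TX_TUNNEL_ESP;
	}
	m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_TCP_CKSUM;
	m->ol_flags |= is_ipv4 ? RTE_MBUF_F_TX_OUTER_IPV4 :
			RTE_MBUF_F_TX_OUTER_IPV6;
  }

The port must also advertise RTE_ETH_TX_OFFLOAD_TCP_TSO alongside
RTE_ETH_TX_OFFLOAD_SECURITY, which is what the check_eth_dev_caps() and
sa_check_offloads() changes above enforce.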