From patchwork Mon Aug 22 14:38:12 2022
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 115336
X-Patchwork-Delegate: gakhil@marvell.com
From: Nithin Dabilpuram
To: Ruifeng Wang, Radu Nicolau, Akhil Goyal
CC: Nithin Dabilpuram
Subject: [PATCH v3 5/5] examples/ipsec-secgw: update ether type using tunnel info
Date: Mon, 22 Aug 2022 20:08:12 +0530
Message-ID: <20220822143812.30010-5-ndabilpuram@marvell.com>
In-Reply-To: <20220822143812.30010-1-ndabilpuram@marvell.com>
References: <20220707072921.13448-1-ndabilpuram@marvell.com>
 <20220822143812.30010-1-ndabilpuram@marvell.com>
List-Id: DPDK patches and discussions

Update ether type for outbound SA processing based on tunnel header
information in both NEON functions for poll mode and event mode worker
functions.
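For reference, the NEON hunks below fold the tunnel-derived ether type into the last 32-bit lane of the Ethernet-header vector: mask off the old type, OR in the big-endian value at the byte offset dictated by CPU endianness. The following standalone sketch is not part of the patch; it mirrors that lane-3 update in plain C, with `htons` standing in for DPDK's `rte_cpu_to_be_16` and a runtime endianness probe standing in for the compile-time `RTE_BYTE_ORDER` check:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>	/* htons */

#define ETHER_TYPE_IPV4 0x0800
#define ETHER_TYPE_IPV6 0x86DD

/* Rewrite the ether type inside the 32-bit word spanning bytes 12..15 of
 * an Ethernet header (2 type bytes + first 2 payload bytes), preserving
 * the payload bytes -- the scalar equivalent of the patched lane-3 code. */
uint32_t
set_ether_type(uint32_t lane3, int is_ipv4, int is_little_endian)
{
	uint16_t be_type = htons(is_ipv4 ? ETHER_TYPE_IPV4 : ETHER_TYPE_IPV6);

	if (is_little_endian) {
		lane3 &= 0xFFFFUL << 16;	/* keep payload bytes (high half) */
		lane3 |= be_type;		/* type lands in bytes 12..13 */
	} else {
		lane3 &= 0xFFFFUL;		/* keep payload bytes (low half) */
		lane3 |= (uint32_t)be_type << 16;
	}
	return lane3;
}

/* Load/modify/store helper operating on the raw header bytes. */
void
rewrite_type_bytes(uint8_t last4[4], int is_ipv4)
{
	union { uint32_t u; uint8_t b[4]; } probe = { .u = 1 };
	uint32_t lane3;

	memcpy(&lane3, last4, 4);
	lane3 = set_ether_type(lane3, is_ipv4, probe.b[0] == 1);
	memcpy(last4, &lane3, 4);
}
```

Either way, the big-endian type ends up in bytes 12..13 of the header while the two trailing payload bytes are untouched, which is exactly what the vector code must preserve.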
Signed-off-by: Nithin Dabilpuram
Reviewed-by: Ruifeng Wang
Acked-by: Akhil Goyal
---
 examples/ipsec-secgw/ipsec_neon.h   | 41 +++++++++++++++++++++++++------------
 examples/ipsec-secgw/ipsec_worker.c | 30 +++++++++++++++++++--------
 2 files changed, 49 insertions(+), 22 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec_neon.h b/examples/ipsec-secgw/ipsec_neon.h
index 3f2d0a0..9c0498b 100644
--- a/examples/ipsec-secgw/ipsec_neon.h
+++ b/examples/ipsec-secgw/ipsec_neon.h
@@ -18,12 +18,13 @@ extern xmm_t val_eth[RTE_MAX_ETHPORTS];
  */
 static inline void
 processx4_step3(struct rte_mbuf *pkts[FWDSTEP], uint16_t dst_port[FWDSTEP],
-		uint64_t tx_offloads, bool ip_cksum, uint8_t *l_pkt)
+		uint64_t tx_offloads, bool ip_cksum, bool is_ipv4, uint8_t *l_pkt)
 {
 	uint32x4_t te[FWDSTEP];
 	uint32x4_t ve[FWDSTEP];
 	uint32_t *p[FWDSTEP];
 	struct rte_mbuf *pkt;
+	uint32_t val;
 	uint8_t i;
 
 	for (i = 0; i < FWDSTEP; i++) {
@@ -38,7 +39,15 @@ processx4_step3(struct rte_mbuf *pkts[FWDSTEP], uint16_t dst_port[FWDSTEP],
 		te[i] = vld1q_u32(p[i]);
 
 		/* Update last 4 bytes */
-		ve[i] = vsetq_lane_u32(vgetq_lane_u32(te[i], 3), ve[i], 3);
+		val = vgetq_lane_u32(te[i], 3);
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+		val &= 0xFFFFUL << 16;
+		val |= rte_cpu_to_be_16(is_ipv4 ? RTE_ETHER_TYPE_IPV4 : RTE_ETHER_TYPE_IPV6);
+#else
+		val &= 0xFFFFUL;
+		val |= rte_cpu_to_be_16(is_ipv4 ? RTE_ETHER_TYPE_IPV4 : RTE_ETHER_TYPE_IPV6) << 16;
+#endif
+		ve[i] = vsetq_lane_u32(val, ve[i], 3);
 		vst1q_u32(p[i], ve[i]);
 
 		if (ip_cksum) {
@@ -64,10 +73,11 @@ processx4_step3(struct rte_mbuf *pkts[FWDSTEP], uint16_t dst_port[FWDSTEP],
  */
 static inline void
 process_packet(struct rte_mbuf *pkt, uint16_t *dst_port, uint64_t tx_offloads,
-	       bool ip_cksum, uint8_t *l_pkt)
+	       bool ip_cksum, bool is_ipv4, uint8_t *l_pkt)
 {
 	struct rte_ether_hdr *eth_hdr;
 	uint32x4_t te, ve;
+	uint32_t val;
 
 	/* Check if it is a large packet */
 	if (pkt->pkt_len - RTE_ETHER_HDR_LEN > mtu_size)
@@ -78,7 +88,15 @@ process_packet(struct rte_mbuf *pkt, uint16_t *dst_port, uint64_t tx_offloads,
 	te = vld1q_u32((uint32_t *)eth_hdr);
 	ve = vreinterpretq_u32_s32(val_eth[dst_port[0]]);
 
-	ve = vcopyq_laneq_u32(ve, 3, te, 3);
+	val = vgetq_lane_u32(te, 3);
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+	val &= 0xFFFFUL << 16;
+	val |= rte_cpu_to_be_16(is_ipv4 ? RTE_ETHER_TYPE_IPV4 : RTE_ETHER_TYPE_IPV6);
+#else
+	val &= 0xFFFFUL;
+	val |= rte_cpu_to_be_16(is_ipv4 ? RTE_ETHER_TYPE_IPV4 : RTE_ETHER_TYPE_IPV6) << 16;
+#endif
+	ve = vsetq_lane_u32(val, ve, 3);
 	vst1q_u32((uint32_t *)eth_hdr, ve);
 
 	if (ip_cksum) {
@@ -223,14 +241,14 @@ send_multi_pkts(struct rte_mbuf **pkts, uint16_t dst_port[MAX_PKT_BURST],
 	lp = pnum;
 	lp[0] = 1;
 
-	processx4_step3(pkts, dst_port, tx_offloads, ip_cksum, &l_pkt);
+	processx4_step3(pkts, dst_port, tx_offloads, ip_cksum, is_ipv4, &l_pkt);
 
 	/* dp1: */
 	dp1 = vld1q_u16(dst_port);
 
 	for (i = FWDSTEP; i != k; i += FWDSTEP) {
-		processx4_step3(&pkts[i], &dst_port[i], tx_offloads,
-				ip_cksum, &l_pkt);
+		processx4_step3(&pkts[i], &dst_port[i], tx_offloads, ip_cksum, is_ipv4,
+				&l_pkt);
 
 		/*
 		 * dp2:
@@ -268,20 +286,17 @@ send_multi_pkts(struct rte_mbuf **pkts, uint16_t dst_port[MAX_PKT_BURST],
 	/* Process up to last 3 packets one by one.
 	 */
 	switch (nb_rx % FWDSTEP) {
 	case 3:
-		process_packet(pkts[i], dst_port + i, tx_offloads, ip_cksum,
-			       &l_pkt);
+		process_packet(pkts[i], dst_port + i, tx_offloads, ip_cksum, is_ipv4, &l_pkt);
 		GROUP_PORT_STEP(dlp, dst_port, lp, pnum, i);
 		i++;
 		/* fallthrough */
 	case 2:
-		process_packet(pkts[i], dst_port + i, tx_offloads, ip_cksum,
-			       &l_pkt);
+		process_packet(pkts[i], dst_port + i, tx_offloads, ip_cksum, is_ipv4, &l_pkt);
 		GROUP_PORT_STEP(dlp, dst_port, lp, pnum, i);
 		i++;
 		/* fallthrough */
 	case 1:
-		process_packet(pkts[i], dst_port + i, tx_offloads, ip_cksum,
-			       &l_pkt);
+		process_packet(pkts[i], dst_port + i, tx_offloads, ip_cksum, is_ipv4, &l_pkt);
 		GROUP_PORT_STEP(dlp, dst_port, lp, pnum, i);
 	}
 
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 803157d..5e69450 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -53,11 +50,8 @@ process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp)
 }
 
 static inline void
-update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid)
+update_mac_addrs(struct rte_ether_hdr *ethhdr, uint16_t portid)
 {
-	struct rte_ether_hdr *ethhdr;
-
-	ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
 	memcpy(&ethhdr->src_addr, &ethaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN);
 	memcpy(&ethhdr->dst_addr, &ethaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN);
 }
@@ -374,7 +371,7 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt,
 	/* else, we have a matching route */
 
 	/* Update mac addresses */
-	update_mac_addrs(pkt, port_id);
+	update_mac_addrs(rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *), port_id);
 
 	/* Update the event with the dest port */
 	ipsec_event_pre_forward(pkt, port_id);
@@ -392,6 +389,7 @@ process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt,
 		struct rte_event *ev)
 {
 	struct rte_ipsec_session *sess;
+	struct rte_ether_hdr *ethhdr;
 	struct sa_ctx *sa_ctx;
 	struct rte_mbuf *pkt;
 	uint16_t port_id = 0;
@@ -430,6 +428,7 @@ process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt,
 		goto drop_pkt_and_exit;
 	}
 
+	ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
 	/* Check if the packet has to be bypassed */
 	if (sa_idx == BYPASS) {
 		port_id = get_route(pkt, rt, type);
@@ -467,6 +466,9 @@ process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt,
 	/* Mark the packet for Tx security offload */
 	pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 
+	/* Update ether type */
+	ethhdr->ether_type = (IS_IP4(sa->flags) ? rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4) :
+			      rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6));
 	/* Get the port to which this pkt need to be submitted */
 	port_id = sa->portid;
 
@@ -476,7 +478,7 @@ process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt,
 	pkt->l2_len = RTE_ETHER_HDR_LEN;
 
 	/* Update mac addresses */
-	update_mac_addrs(pkt, port_id);
+	update_mac_addrs(ethhdr, port_id);
 
 	/* Update the event with the dest port */
 	ipsec_event_pre_forward(pkt, port_id);
@@ -494,6 +496,7 @@ ipsec_ev_route_pkts(struct rte_event_vector *vec, struct route_table *rt,
 		    struct ipsec_traffic *t, struct sa_ctx *sa_ctx)
 {
 	struct rte_ipsec_session *sess;
+	struct rte_ether_hdr *ethhdr;
 	uint32_t sa_idx, i, j = 0;
 	uint16_t port_id = 0;
 	struct rte_mbuf *pkt;
@@ -505,7 +508,8 @@ ipsec_ev_route_pkts(struct rte_event_vector *vec, struct route_table *rt,
 			port_id = route4_pkt(pkt, rt->rt4_ctx);
 			if (port_id != RTE_MAX_ETHPORTS) {
 				/* Update mac addresses */
-				update_mac_addrs(pkt, port_id);
+				ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
+				update_mac_addrs(ethhdr, port_id);
 				/* Update the event with the dest port */
 				ipsec_event_pre_forward(pkt, port_id);
 				ev_vector_attr_update(vec, pkt);
@@ -520,7 +524,8 @@ ipsec_ev_route_pkts(struct rte_event_vector *vec, struct route_table *rt,
 			port_id = route6_pkt(pkt, rt->rt6_ctx);
 			if (port_id != RTE_MAX_ETHPORTS) {
 				/* Update mac addresses */
-				update_mac_addrs(pkt, port_id);
+				ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
+				update_mac_addrs(ethhdr, port_id);
 				/* Update the event with the dest port */
 				ipsec_event_pre_forward(pkt, port_id);
 				ev_vector_attr_update(vec, pkt);
@@ -553,7 +558,14 @@ ipsec_ev_route_pkts(struct rte_event_vector *vec, struct route_table *rt,
 
 			pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 			port_id = sa->portid;
-			update_mac_addrs(pkt, port_id);
+
+			/* Fetch outer ip type and update */
+			ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
+			ethhdr->ether_type = (IS_IP4(sa->flags) ?
+					rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4) :
+					rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6));
+			update_mac_addrs(ethhdr, port_id);
+
 			ipsec_event_pre_forward(pkt, port_id);
 			ev_vector_attr_update(vec, pkt);
 			vec->mbufs[j++] = pkt;
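On the event-worker side the change reduces to two scalar steps through one header pointer: resolve the Ethernet header once, then rewrite the MAC addresses and the ether type in place. A minimal sketch of that shape, not the patch itself: the `eaddr`/`ehdr` structs below are simplified stand-ins for `rte_ether_addr`/`rte_ether_hdr`, and `htons` stands in for `rte_cpu_to_be_16`:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>	/* htons */

/* Simplified stand-ins for rte_ether_addr / rte_ether_hdr. */
struct eaddr { uint8_t bytes[6]; };
struct ehdr {
	struct eaddr dst_addr;
	struct eaddr src_addr;
	uint16_t ether_type;	/* big-endian on the wire */
};

/* Post-patch shape of update_mac_addrs(): the caller resolves the header
 * pointer once (rte_pktmbuf_mtod in the real code) and passes it in. */
void
update_mac_addrs(struct ehdr *ethhdr, const struct eaddr *src,
		 const struct eaddr *dst)
{
	memcpy(&ethhdr->src_addr, src, sizeof(*src));
	memcpy(&ethhdr->dst_addr, dst, sizeof(*dst));
}

/* Ether type picked from the SA's outer tunnel header, as in
 * process_ipsec_ev_outbound() and ipsec_ev_route_pkts(). */
void
update_ether_type(struct ehdr *ethhdr, int outer_is_ipv4)
{
	ethhdr->ether_type = htons(outer_is_ipv4 ? 0x0800 : 0x86DD);
}
```

Passing the header pointer instead of the mbuf avoids a second `mtod()` inside the helper and lets the ether-type write reuse the pointer the caller already has.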