From patchwork Tue Oct 11 09:08:04 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 117879
X-Patchwork-Delegate: thomas@monjalon.net
Subject: [PATCH v4 4/5] examples/l3fwd: fix event vector processing in fib
Date: Tue, 11 Oct 2022 14:38:04 +0530
Message-ID: <20221011090805.3602-4-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20221011090805.3602-1-pbhagavatula@marvell.com>
References: <20220911181250.2286-1-pbhagavatula@marvell.com>
 <20221011090805.3602-1-pbhagavatula@marvell.com>
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Fix stack overflow when event vector size is greater than
MAX_BURST_SIZE.
Add missing mac swap and rfc1812 stage.
Fixes: e8adca1951d4 ("examples/l3fwd: support event vector")

Signed-off-by: Pavan Nikhilesh
---
 examples/l3fwd/l3fwd_fib.c | 130 ++++++++++++++++++++++++++-----------
 1 file changed, 91 insertions(+), 39 deletions(-)

diff --git a/examples/l3fwd/l3fwd_fib.c b/examples/l3fwd/l3fwd_fib.c
index b82e0c0354..407e9def71 100644
--- a/examples/l3fwd/l3fwd_fib.c
+++ b/examples/l3fwd/l3fwd_fib.c
@@ -77,27 +77,37 @@ fib_parse_packet(struct rte_mbuf *mbuf,
  */
 #if !defined FIB_SEND_MULTI
 static inline void
-fib_send_single(int nb_tx, struct lcore_conf *qconf,
-		struct rte_mbuf **pkts_burst, uint16_t hops[nb_tx])
+process_packet(struct rte_mbuf *pkt, uint16_t *hop)
 {
-	int32_t j;
 	struct rte_ether_hdr *eth_hdr;
 
-	for (j = 0; j < nb_tx; j++) {
-		/* Run rfc1812 if packet is ipv4 and checks enabled. */
+	/* Run rfc1812 if packet is ipv4 and checks enabled. */
 #if defined DO_RFC_1812_CHECKS
-		rfc1812_process((struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(
-					pkts_burst[j], struct rte_ether_hdr *) + 1),
-				&hops[j], pkts_burst[j]->packet_type);
+	rfc1812_process(
+		(struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(
+						pkt, struct rte_ether_hdr *) +
+					1),
+		hop, pkt->packet_type);
 #endif
-		/* Set MAC addresses. */
-		eth_hdr = rte_pktmbuf_mtod(pkts_burst[j],
-					   struct rte_ether_hdr *);
-		*(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[hops[j]];
-		rte_ether_addr_copy(&ports_eth_addr[hops[j]],
-				    &eth_hdr->src_addr);
+	/* Set MAC addresses. */
+	eth_hdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
+	*(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[*hop];
+	rte_ether_addr_copy(&ports_eth_addr[*hop], &eth_hdr->src_addr);
+}
+
+static inline void
+fib_send_single(int nb_tx, struct lcore_conf *qconf,
+		struct rte_mbuf **pkts_burst, uint16_t hops[nb_tx])
+{
+	int32_t j;
+
+	for (j = 0; j < nb_tx; j++) {
+		process_packet(pkts_burst[j], &hops[j]);
+		if (hops[j] == BAD_PORT) {
+			rte_pktmbuf_free(pkts_burst[j]);
+			continue;
+		}
 		/* Send single packet. */
 		send_single_packet(qconf, pkts_burst[j], hops[j]);
 	}
@@ -261,7 +271,7 @@ fib_event_loop(struct l3fwd_event_resources *evt_rsrc,
 	uint32_t ipv4_arr[MAX_PKT_BURST];
 	uint8_t ipv6_arr[MAX_PKT_BURST][RTE_FIB6_IPV6_ADDR_SIZE];
 	uint64_t hopsv4[MAX_PKT_BURST], hopsv6[MAX_PKT_BURST];
-	uint16_t nh;
+	uint16_t nh, hops[MAX_PKT_BURST];
 	uint8_t type_arr[MAX_PKT_BURST];
 	uint32_t ipv4_cnt, ipv6_cnt;
 	uint32_t ipv4_arr_assem, ipv6_arr_assem;
@@ -350,7 +360,13 @@ fib_event_loop(struct l3fwd_event_resources *evt_rsrc,
 			else
 				nh = (uint16_t)hopsv6[ipv6_arr_assem++];
 			if (nh != FIB_DEFAULT_HOP)
-				events[i].mbuf->port = nh;
+				hops[i] = nh != FIB_DEFAULT_HOP ?
+						  nh :
+						  events[i].mbuf->port;
+			process_packet(events[i].mbuf, &hops[i]);
+			events[i].mbuf->port = hops[i] != BAD_PORT ?
+						       hops[i] :
+						       events[i].mbuf->port;
 		}
 
 		if (flags & L3FWD_EVENT_TX_ENQ) {
@@ -418,14 +434,12 @@ fib_event_main_loop_tx_q_burst(__rte_unused void *dummy)
 }
 
 static __rte_always_inline void
-fib_process_event_vector(struct rte_event_vector *vec)
+fib_process_event_vector(struct rte_event_vector *vec, uint8_t *type_arr,
+			 uint8_t **ipv6_arr, uint64_t *hopsv4, uint64_t *hopsv6,
+			 uint32_t *ipv4_arr, uint16_t *hops)
 {
-	uint8_t ipv6_arr[MAX_PKT_BURST][RTE_FIB6_IPV6_ADDR_SIZE];
-	uint64_t hopsv4[MAX_PKT_BURST], hopsv6[MAX_PKT_BURST];
 	uint32_t ipv4_arr_assem, ipv6_arr_assem;
 	struct rte_mbuf **mbufs = vec->mbufs;
-	uint32_t ipv4_arr[MAX_PKT_BURST];
-	uint8_t type_arr[MAX_PKT_BURST];
 	uint32_t ipv4_cnt, ipv6_cnt;
 	struct lcore_conf *lconf;
 	uint16_t nh;
@@ -463,16 +477,10 @@ fib_process_event_vector(struct rte_event_vector *vec)
 
 	/* Lookup IPv6 hops if IPv6 packets are present. */
 	if (ipv6_cnt > 0)
-		rte_fib6_lookup_bulk(lconf->ipv6_lookup_struct, ipv6_arr,
-				     hopsv6, ipv6_cnt);
-
-	if (vec->attr_valid) {
-		nh = type_arr[0] ? (uint16_t)hopsv4[0] : (uint16_t)hopsv6[0];
-		if (nh != FIB_DEFAULT_HOP)
-			vec->port = nh;
-		else
-			vec->attr_valid = 0;
-	}
+		rte_fib6_lookup_bulk(
+			lconf->ipv6_lookup_struct,
+			(uint8_t(*)[RTE_FIB6_IPV6_ADDR_SIZE])ipv6_arr, hopsv6,
+			ipv6_cnt);
 
 	/* Assign ports looked up in fib depending on IPv4 or IPv6 */
 	for (i = 0; i < vec->nb_elem; i++) {
@@ -481,9 +489,26 @@
 		else
 			nh = (uint16_t)hopsv6[ipv6_arr_assem++];
 		if (nh != FIB_DEFAULT_HOP)
-			mbufs[i]->port = nh;
-		event_vector_attr_validate(vec, mbufs[i]);
+			hops[i] = nh;
+		else
+			hops[i] = vec->attr_valid ? vec->port :
+						    vec->mbufs[i]->port;
 	}
+
+#if defined FIB_SEND_MULTI
+	uint16_t k;
+	k = RTE_ALIGN_FLOOR(vec->nb_elem, FWDSTEP);
+
+	for (i = 0; i != k; i += FWDSTEP)
+		processx4_step3(&vec->mbufs[i], &hops[i]);
+	for (; i < vec->nb_elem; i++)
+		process_packet(vec->mbufs[i], &hops[i]);
+#else
+	for (i = 0; i < vec->nb_elem; i++)
+		process_packet(vec->mbufs[i], &hops[i]);
+#endif
+
+	process_event_vector(vec, hops);
 }
 
 static __rte_always_inline void
@@ -496,10 +521,37 @@ fib_event_loop_vector(struct l3fwd_event_resources *evt_rsrc,
 	const uint8_t event_d_id = evt_rsrc->event_d_id;
 	const uint16_t deq_len = evt_rsrc->deq_depth;
 	struct rte_event events[MAX_PKT_BURST];
+	uint8_t *type_arr, **ipv6_arr, *ptr;
 	int nb_enq = 0, nb_deq = 0, i;
-
-	if (event_p_id < 0)
+	uint64_t *hopsv4, *hopsv6;
+	uint32_t *ipv4_arr;
+	uint16_t *hops;
+	uintptr_t mem;
+
+	mem = (uintptr_t)rte_zmalloc(
+		"vector_fib",
+		(sizeof(uint32_t) + sizeof(uint8_t) + sizeof(uint64_t) +
+		 sizeof(uint64_t) + sizeof(uint16_t) + sizeof(uint8_t *) +
+		 (sizeof(uint8_t) * RTE_FIB6_IPV6_ADDR_SIZE)) *
+			evt_rsrc->vector_size,
+		RTE_CACHE_LINE_SIZE);
+	if (mem == 0)
 		return;
+	ipv4_arr = (uint32_t *)mem;
+	type_arr = (uint8_t *)&ipv4_arr[evt_rsrc->vector_size];
+	hopsv4 = (uint64_t *)&type_arr[evt_rsrc->vector_size];
+	hopsv6 = (uint64_t *)&hopsv4[evt_rsrc->vector_size];
+	hops = (uint16_t *)&hopsv6[evt_rsrc->vector_size];
+	ipv6_arr = (uint8_t **)&hops[evt_rsrc->vector_size];
+
+	ptr = (uint8_t *)&ipv6_arr[evt_rsrc->vector_size];
+	for (i = 0; i < evt_rsrc->vector_size; i++)
+		ipv6_arr[i] = &ptr[RTE_FIB6_IPV6_ADDR_SIZE + i];
+
+	if (event_p_id < 0) {
+		rte_free(mem);
+		return;
+	}
 
 	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__,
 		rte_lcore_id());
@@ -519,10 +571,9 @@ fib_event_loop_vector(struct l3fwd_event_resources *evt_rsrc,
 				events[i].op = RTE_EVENT_OP_FORWARD;
 			}
 
-			fib_process_event_vector(events[i].vec);
-
-			if (flags & L3FWD_EVENT_TX_DIRECT)
-				event_vector_txq_set(events[i].vec, 0);
+			fib_process_event_vector(events[i].vec, type_arr,
+						 ipv6_arr, hopsv4, hopsv6,
+						 ipv4_arr, hops);
 		}
 
 		if (flags & L3FWD_EVENT_TX_ENQ) {
@@ -546,6 +597,7 @@
 
 	l3fwd_event_worker_cleanup(event_d_id, event_p_id, events, nb_enq,
 				   nb_deq, 1);
+	rte_free(mem);
 }
 
 int __rte_noinline