From patchwork Sun Sep 11 18:12:49 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 116168
X-Patchwork-Delegate: thomas@monjalon.net
From:
To:
CC: , Pavan Nikhilesh
Subject: [PATCH v3 5/5] examples/l3fwd: use em vector path for event vector
Date: Sun, 11 Sep 2022 23:42:49 +0530
Message-ID: <20220911181250.2286-5-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220911181250.2286-1-pbhagavatula@marvell.com>
References: <20220902091833.9074-1-pbhagavatula@marvell.com>
 <20220911181250.2286-1-pbhagavatula@marvell.com>

From: Pavan Nikhilesh

Use the em vector path to process event vectors.
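The per-mbuf attribute fix-ups (event_vector_attr_validate()/event_vector_txq_set())
are dropped: each lookup result now goes into a per-worker dst_ports[] array and the
vector is finalised once through process_event_vector(). A minimal sketch of the
resulting flow is below; it is illustrative only (prefetching omitted, the
em_vector_flow_sketch() wrapper name is hypothetical), assuming
l3fwd_em_simple_process() returns the destination port as changed in this patch:

	static inline void
	em_vector_flow_sketch(struct rte_event_vector *vec,
			      struct lcore_conf *qconf, uint16_t *dst_ports)
	{
		uint16_t i;

		/* One destination port per packet, collected out of band. */
		for (i = 0; i < vec->nb_elem; i++)
			dst_ports[i] = l3fwd_em_simple_process(vec->mbufs[i], qconf);

		/* Single place that sets vector/mbuf Tx attributes. */
		process_event_vector(vec, dst_ports);
	}

The dst_ports[] array is allocated once per worker with rte_zmalloc(), sized to
evt_rsrc->vector_size, instead of keeping a MAX_PKT_BURST array on the stack.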
Signed-off-by: Pavan Nikhilesh
---
 examples/l3fwd/l3fwd_em.c            | 12 +++--
 examples/l3fwd/l3fwd_em.h            | 29 +++++------
 examples/l3fwd/l3fwd_em_hlm.h        | 72 +++++-----------------------
 examples/l3fwd/l3fwd_em_sequential.h | 25 ++++++----
 examples/l3fwd/l3fwd_event.h         | 21 --------
 5 files changed, 47 insertions(+), 112 deletions(-)

diff --git a/examples/l3fwd/l3fwd_em.c b/examples/l3fwd/l3fwd_em.c
index 10be24c61d..e7b35cfbd9 100644
--- a/examples/l3fwd/l3fwd_em.c
+++ b/examples/l3fwd/l3fwd_em.c
@@ -852,10 +852,15 @@ em_event_loop_vector(struct l3fwd_event_resources *evt_rsrc,
 	int i, nb_enq = 0, nb_deq = 0;
 	struct lcore_conf *lconf;
 	unsigned int lcore_id;
+	uint16_t *dst_ports;
 
 	if (event_p_id < 0)
 		return;
 
+	dst_ports = rte_zmalloc("", sizeof(uint16_t) * evt_rsrc->vector_size,
+				RTE_CACHE_LINE_SIZE);
+	if (dst_ports == NULL)
+		return;
 	lcore_id = rte_lcore_id();
 	lconf = &lcore_conf[lcore_id];
 
@@ -877,13 +882,12 @@ em_event_loop_vector(struct l3fwd_event_resources *evt_rsrc,
 		}
 
 #if defined RTE_ARCH_X86 || defined __ARM_NEON
-			l3fwd_em_process_event_vector(events[i].vec, lconf);
+			l3fwd_em_process_event_vector(events[i].vec, lconf,
+						      dst_ports);
 #else
 			l3fwd_em_no_opt_process_event_vector(events[i].vec,
-							     lconf);
+							     lconf, dst_ports);
 #endif
-			if (flags & L3FWD_EVENT_TX_DIRECT)
-				event_vector_txq_set(events[i].vec, 0);
 		}
 
 		if (flags & L3FWD_EVENT_TX_ENQ) {
diff --git a/examples/l3fwd/l3fwd_em.h b/examples/l3fwd/l3fwd_em.h
index fe2ee59f6a..7d051fc076 100644
--- a/examples/l3fwd/l3fwd_em.h
+++ b/examples/l3fwd/l3fwd_em.h
@@ -100,7 +100,7 @@ l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid,
 	}
 }
 
-static __rte_always_inline void
+static __rte_always_inline uint16_t
 l3fwd_em_simple_process(struct rte_mbuf *m, struct lcore_conf *qconf)
 {
 	struct rte_ether_hdr *eth_hdr;
@@ -117,6 +117,8 @@ l3fwd_em_simple_process(struct rte_mbuf *m, struct lcore_conf *qconf)
 		m->port = l3fwd_em_handle_ipv6(m, m->port, eth_hdr, qconf);
 	else
 		m->port = BAD_PORT;
+
+	return m->port;
 }
 
 /*
@@ -179,7 +181,8 @@ l3fwd_em_no_opt_process_events(int nb_rx, struct rte_event **events,
 
 static inline void
 l3fwd_em_no_opt_process_event_vector(struct rte_event_vector *vec,
-				     struct lcore_conf *qconf)
+				     struct lcore_conf *qconf,
+				     uint16_t *dst_ports)
 {
 	struct rte_mbuf **mbufs = vec->mbufs;
 	int32_t i;
@@ -188,30 +191,20 @@ l3fwd_em_no_opt_process_event_vector(struct rte_event_vector *vec,
 	for (i = 0; i < PREFETCH_OFFSET && i < vec->nb_elem; i++)
 		rte_prefetch0(rte_pktmbuf_mtod(mbufs[i], void *));
 
-	/* Process first packet to init vector attributes */
-	l3fwd_em_simple_process(mbufs[0], qconf);
-	if (vec->attr_valid) {
-		if (mbufs[0]->port != BAD_PORT)
-			vec->port = mbufs[0]->port;
-		else
-			vec->attr_valid = 0;
-	}
-
 	/*
 	 * Prefetch and forward already prefetched packets.
 	 */
-	for (i = 1; i < (vec->nb_elem - PREFETCH_OFFSET); i++) {
+	for (i = 0; i < (vec->nb_elem - PREFETCH_OFFSET); i++) {
 		rte_prefetch0(
 			rte_pktmbuf_mtod(mbufs[i + PREFETCH_OFFSET], void *));
-		l3fwd_em_simple_process(mbufs[i], qconf);
-		event_vector_attr_validate(vec, mbufs[i]);
+		dst_ports[i] = l3fwd_em_simple_process(mbufs[i], qconf);
 	}
 
 	/* Forward remaining prefetched packets */
-	for (; i < vec->nb_elem; i++) {
-		l3fwd_em_simple_process(mbufs[i], qconf);
-		event_vector_attr_validate(vec, mbufs[i]);
-	}
+	for (; i < vec->nb_elem; i++)
+		dst_ports[i] = l3fwd_em_simple_process(mbufs[i], qconf);
+
+	process_event_vector(vec, dst_ports);
 }
 
 #endif /* __L3FWD_EM_H__ */
diff --git a/examples/l3fwd/l3fwd_em_hlm.h b/examples/l3fwd/l3fwd_em_hlm.h
index 12b997e477..2e11eefad7 100644
--- a/examples/l3fwd/l3fwd_em_hlm.h
+++ b/examples/l3fwd/l3fwd_em_hlm.h
@@ -332,70 +332,20 @@ l3fwd_em_process_events(int nb_rx, struct rte_event **ev,
 
 static inline void
 l3fwd_em_process_event_vector(struct rte_event_vector *vec,
-			      struct lcore_conf *qconf)
+			      struct lcore_conf *qconf, uint16_t *dst_port)
 {
-	struct rte_mbuf **mbufs = vec->mbufs;
-	uint16_t dst_port[MAX_PKT_BURST];
-	int32_t i, j, n, pos;
-
-	for (j = 0; j < EM_HASH_LOOKUP_COUNT && j < vec->nb_elem; j++)
-		rte_prefetch0(
-			rte_pktmbuf_mtod(mbufs[j], struct rte_ether_hdr *) + 1);
+	uint16_t i;
 
 	if (vec->attr_valid)
-		vec->port = em_get_dst_port(qconf, mbufs[0], mbufs[0]->port);
-
-	n = RTE_ALIGN_FLOOR(vec->nb_elem, EM_HASH_LOOKUP_COUNT);
-	for (j = 0; j < n; j += EM_HASH_LOOKUP_COUNT) {
-		uint32_t pkt_type =
-			RTE_PTYPE_L3_MASK | RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP;
-		uint32_t l3_type, tcp_or_udp;
-
-		for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++)
-			pkt_type &= mbufs[j + i]->packet_type;
-
-		l3_type = pkt_type & RTE_PTYPE_L3_MASK;
-		tcp_or_udp = pkt_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
-
-		for (i = 0, pos = j + EM_HASH_LOOKUP_COUNT;
-		     i < EM_HASH_LOOKUP_COUNT && pos < vec->nb_elem;
-		     i++, pos++) {
-			rte_prefetch0(rte_pktmbuf_mtod(mbufs[pos],
-						       struct rte_ether_hdr *) +
-				      1);
-		}
-
-		if (tcp_or_udp && (l3_type == RTE_PTYPE_L3_IPV4)) {
-			em_get_dst_port_ipv4xN_events(qconf, &mbufs[j],
-						      &dst_port[j]);
-		} else if (tcp_or_udp && (l3_type == RTE_PTYPE_L3_IPV6)) {
-			em_get_dst_port_ipv6xN_events(qconf, &mbufs[j],
-						      &dst_port[j]);
-		} else {
-			for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
-				mbufs[j + i]->port =
-					em_get_dst_port(qconf, mbufs[j + i],
-							mbufs[j + i]->port);
-				process_packet(mbufs[j + i],
-					       &mbufs[j + i]->port);
-				event_vector_attr_validate(vec, mbufs[j + i]);
-			}
-			continue;
-		}
-		processx4_step3(&mbufs[j], &dst_port[j]);
-
-		for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
-			mbufs[j + i]->port = dst_port[j + i];
-			event_vector_attr_validate(vec, mbufs[j + i]);
-		}
-	}
-
-	for (; j < vec->nb_elem; j++) {
-		mbufs[j]->port =
-			em_get_dst_port(qconf, mbufs[j], mbufs[j]->port);
-		process_packet(mbufs[j], &mbufs[j]->port);
-		event_vector_attr_validate(vec, mbufs[j]);
-	}
+		l3fwd_em_process_packets(vec->nb_elem, vec->mbufs, dst_port,
+					 vec->port, qconf, 1);
+	else
+		for (i = 0; i < vec->nb_elem; i++)
+			l3fwd_em_process_packets(1, &vec->mbufs[i],
+						 &dst_port[i],
+						 vec->mbufs[i]->port, qconf, 1);
+
+	process_event_vector(vec, dst_port);
 }
 
 #endif /* __L3FWD_EM_HLM_H__ */
diff --git a/examples/l3fwd/l3fwd_em_sequential.h b/examples/l3fwd/l3fwd_em_sequential.h
index d2f75edb8a..067f23889a 100644
--- a/examples/l3fwd/l3fwd_em_sequential.h
+++ b/examples/l3fwd/l3fwd_em_sequential.h
@@ -113,39 +113,48 @@ l3fwd_em_process_events(int nb_rx, struct rte_event **events,
 	for (i = 1, j = 0; j < nb_rx; i++, j++) {
 		struct rte_mbuf *mbuf = events[j]->mbuf;
+		uint16_t port;
 
 		if (i < nb_rx) {
 			rte_prefetch0(rte_pktmbuf_mtod(
 				events[i]->mbuf, struct rte_ether_hdr *) + 1);
 		}
+		port = mbuf->port;
 		mbuf->port = em_get_dst_port(qconf, mbuf, mbuf->port);
 		process_packet(mbuf, &mbuf->port);
+		if (mbuf->port == BAD_PORT)
+			mbuf->port = port;
 	}
 }
 
 static inline void
 l3fwd_em_process_event_vector(struct rte_event_vector *vec,
-			      struct lcore_conf *qconf)
+			      struct lcore_conf *qconf, uint16_t *dst_ports)
 {
+	const uint8_t attr_valid = vec->attr_valid;
 	struct rte_mbuf **mbufs = vec->mbufs;
 	int32_t i, j;
 
 	rte_prefetch0(rte_pktmbuf_mtod(mbufs[0], struct rte_ether_hdr *) + 1);
 
-	if (vec->attr_valid)
-		vec->port = em_get_dst_port(qconf, mbufs[0], mbufs[0]->port);
-
 	for (i = 0, j = 1; i < vec->nb_elem; i++, j++) {
 		if (j < vec->nb_elem)
 			rte_prefetch0(rte_pktmbuf_mtod(mbufs[j],
 						       struct rte_ether_hdr *) +
 				      1);
-		mbufs[i]->port =
-			em_get_dst_port(qconf, mbufs[i], mbufs[i]->port);
-		process_packet(mbufs[i], &mbufs[i]->port);
-		event_vector_attr_validate(vec, mbufs[i]);
+		dst_ports[i] = em_get_dst_port(qconf, mbufs[i],
+					       attr_valid ? vec->port :
+							    mbufs[i]->port);
 	}
+	j = RTE_ALIGN_FLOOR(vec->nb_elem, FWDSTEP);
+
+	for (i = 0; i != j; i += FWDSTEP)
+		processx4_step3(&vec->mbufs[i], &dst_ports[i]);
+	for (; i < vec->nb_elem; i++)
+		process_packet(vec->mbufs[i], &dst_ports[i]);
+
+	process_event_vector(vec, dst_ports);
 }
 
 #endif /* __L3FWD_EM_SEQUENTIAL_H__ */
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index 3fe38aada0..e21817c36b 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -103,27 +103,6 @@ process_dst_port(uint16_t *dst_ports, uint16_t nb_elem)
 }
 #endif
 
-static inline void
-event_vector_attr_validate(struct rte_event_vector *vec, struct rte_mbuf *mbuf)
-{
-	/* l3fwd application only changes mbuf port while processing */
-	if (vec->attr_valid && (vec->port != mbuf->port))
-		vec->attr_valid = 0;
-}
-
-static inline void
-event_vector_txq_set(struct rte_event_vector *vec, uint16_t txq)
-{
-	if (vec->attr_valid) {
-		vec->queue = txq;
-	} else {
-		int i;
-
-		for (i = 0; i < vec->nb_elem; i++)
-			rte_event_eth_tx_adapter_txq_set(vec->mbufs[i], txq);
-	}
-}
-
 static inline uint16_t
 filter_bad_packets(struct rte_mbuf **mbufs, uint16_t *dst_port,
 		   uint16_t nb_pkts)
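
Note: the two helpers removed above are superseded by process_event_vector(),
whose body is not part of this diff. A rough sketch of the work it centralises
is below, assuming process_dst_port() (referenced in the hunk above) returns the
common destination port of dst_port[] or BAD_PORT when the ports differ; this is
an illustration, not the actual implementation:

	static inline void
	process_event_vector_sketch(struct rte_event_vector *vec,
				    uint16_t *dst_port)
	{
		uint16_t port, i;

		port = process_dst_port(dst_port, vec->nb_elem);
		if (port != BAD_PORT) {
			/* All packets share a destination: keep vector attributes. */
			vec->attr_valid = 1;
			vec->port = port;
			vec->queue = 0;
			return;
		}

		/* Mixed destinations: fall back to per-mbuf port and Tx queue. */
		vec->attr_valid = 0;
		for (i = 0; i < vec->nb_elem; i++) {
			vec->mbufs[i]->port = dst_port[i];
			rte_event_eth_tx_adapter_txq_set(vec->mbufs[i], 0);
		}
	}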