From patchwork Fri Nov 11 09:04:23 2022
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 119792
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Andrew Rybchenko
To: Ferruh Yigit, Aman Singh, Yuying Zhang
Cc: dev@dpdk.org, Georgiy Levashov, Ivan Ilchenko
Subject: [PATCH v3 2/2] app/testpmd: support TCP TSO in Tx only mode
Date: Fri, 11 Nov 2022 12:04:23 +0300
Message-Id: <20221111090423.1600091-3-andrew.rybchenko@oktetlabs.ru>
In-Reply-To: <20221111090423.1600091-1-andrew.rybchenko@oktetlabs.ru>
References: <20221017144133.1899052-1-andrew.rybchenko@oktetlabs.ru>
 <20221111090423.1600091-1-andrew.rybchenko@oktetlabs.ru>
List-Id: DPDK patches and discussions
Add '--txonly-tso-mss=N' option that enables TSO offload and generates
packets with the specified MSS in txonly mode.

Signed-off-by: Georgiy Levashov
Signed-off-by: Ivan Ilchenko
Signed-off-by: Andrew Rybchenko
---
 app/test-pmd/parameters.c             | 10 ++++++++
 app/test-pmd/testpmd.c                | 12 ++++++++++
 app/test-pmd/testpmd.h                |  1 +
 app/test-pmd/txonly.c                 | 34 ++++++++++++++++++++++++++-
 doc/guides/testpmd_app_ug/run_app.rst |  4 ++++
 5 files changed, 60 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 2ed8afedfd..e71cb3e139 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -156,6 +156,7 @@ usage(char* progname)
 	printf("  --txpkts=X[,Y]*: set TX segment sizes"
 		" or total packet length.\n");
 	printf("  --txonly-multi-flow: generate multiple flows in txonly mode\n");
+	printf("  --txonly-tso-mss=N: enable TSO offload and generate packets with specified MSS in txonly mode\n");
 	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
 	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
 	printf("  --eth-link-speed: force link speed.\n");
@@ -670,6 +671,7 @@ launch_args_parse(int argc, char** argv)
 		{ "rxhdrs",			1, 0, 0 },
 		{ "txpkts",			1, 0, 0 },
 		{ "txonly-multi-flow",		0, 0, 0 },
+		{ "txonly-tso-mss",		1, 0, 0 },
 		{ "rxq-share",			2, 0, 0 },
 		{ "eth-link-speed",		1, 0, 0 },
 		{ "disable-link-check",		0, 0, 0 },
@@ -1297,6 +1299,14 @@ launch_args_parse(int argc, char** argv)
 			}
 			if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
 				txonly_multi_flow = 1;
+			if (!strcmp(lgopts[opt_idx].name, "txonly-tso-mss")) {
+				n = atoi(optarg);
+				if (n >= 0 && n <= UINT16_MAX)
+					txonly_tso_segsz = n;
+				else
+					rte_exit(EXIT_FAILURE,
+						 "TSO MSS must be >= 0 and <= UINT16_MAX\n");
+			}
 			if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
 				if (optarg == NULL) {
 					rxq_share = UINT32_MAX;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index ef281ccd20..94d37be692 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -264,6 +264,9 @@ enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF;
 uint8_t txonly_multi_flow;
 /**< Whether multiple flows are generated in TXONLY mode. */
 
+uint16_t txonly_tso_segsz;
+/**< TSO MSS for generated packets in TXONLY mode. */
+
 uint32_t tx_pkt_times_inter;
 /**< Timings for send scheduling in TXONLY mode, time between bursts. */
 
@@ -1615,6 +1618,15 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
 		port->dev_conf.txmode.offloads &=
 			~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
+	if (txonly_tso_segsz > 0) {
+		if ((ports[pid].dev_info.tx_offload_capa &
+		     RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
+			rte_exit(EXIT_FAILURE,
+				 "TSO isn't supported for port %d\n", pid);
+		}
+		port->dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
+	}
+
 	/* Apply Rx offloads configuration */
 	for (i = 0; i < port->dev_info.max_rx_queues; i++)
 		port->rxq[i].conf.offloads = port->dev_conf.rxmode.offloads;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 976f4f83dd..fbe1839a8f 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -611,6 +611,7 @@ enum tx_pkt_split {
 extern enum tx_pkt_split tx_pkt_split;
 
 extern uint8_t txonly_multi_flow;
+extern uint16_t txonly_tso_segsz;
 
 extern uint32_t rxq_share;
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index b304bd4bf8..59fdb5f953 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -60,6 +60,7 @@ RTE_DEFINE_PER_LCORE(uint8_t, _ip_var); /**< IP address variation */
 static union pkt_l4_hdr_t {
 	struct rte_udp_hdr udp; /**< UDP header of tx packets. */
+	struct rte_tcp_hdr tcp; /**< TCP header of tx packets. */
 } pkt_l4_hdr; /**< Layer 4 header of tx packets. */
 
 static uint64_t timestamp_mask; /**< Timestamp dynamic flag mask */
@@ -112,8 +113,19 @@ setup_pkt_l4_ip_headers(uint8_t ip_proto, struct rte_ipv4_hdr *ip_hdr,
 	uint32_t ip_cksum;
 	uint16_t pkt_len;
 	struct rte_udp_hdr *udp_hdr;
+	struct rte_tcp_hdr *tcp_hdr;
 
 	switch (ip_proto) {
+	case IPPROTO_TCP:
+		/*
+		 * Initialize TCP header.
+		 */
+		pkt_len = (uint16_t)(pkt_data_len + sizeof(struct rte_tcp_hdr));
+		tcp_hdr = &l4_hdr->tcp;
+		tcp_hdr->src_port = rte_cpu_to_be_16(tx_l4_src_port);
+		tcp_hdr->dst_port = rte_cpu_to_be_16(tx_l4_dst_port);
+		tcp_hdr->data_off = (sizeof(struct rte_tcp_hdr) << 2) & 0xF0;
+		break;
 	case IPPROTO_UDP:
 		/*
 		 * Initialize UDP header.
@@ -189,6 +201,8 @@ update_pkt_header(struct rte_mbuf *pkt, uint32_t total_pkt_len)
 	ip_hdr->hdr_checksum = rte_ipv4_cksum(ip_hdr);
 
 	switch (ip_hdr->next_proto_id) {
+	case IPPROTO_TCP:
+		break;
 	case IPPROTO_UDP:
 		/* update UDP packet length */
 		udp_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_udp_hdr *,
@@ -232,6 +246,12 @@ pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
 	pkt->l2_len = sizeof(struct rte_ether_hdr);
 	pkt->l3_len = sizeof(struct rte_ipv4_hdr);
 
+	if (txonly_tso_segsz > 0) {
+		pkt->tso_segsz = txonly_tso_segsz;
+		pkt->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 |
+				 RTE_MBUF_F_TX_IP_CKSUM;
+	}
+
 	pkt_len = pkt->data_len;
 	pkt_seg = pkt;
 	for (i = 1; i < nb_segs; i++) {
@@ -267,6 +287,12 @@ pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
 		RTE_PER_LCORE(_ip_var) = ip_var;
 	}
 	switch (ip_hdr->next_proto_id) {
+	case IPPROTO_TCP:
+		copy_buf_to_pkt(&pkt_l4_hdr.tcp, sizeof(pkt_l4_hdr.tcp), pkt,
+				sizeof(struct rte_ether_hdr) +
+				sizeof(struct rte_ipv4_hdr));
+		l4_hdr_size = sizeof(pkt_l4_hdr.tcp);
+		break;
 	case IPPROTO_UDP:
 		copy_buf_to_pkt(&pkt_l4_hdr.udp, sizeof(pkt_l4_hdr.udp), pkt,
 				sizeof(struct rte_ether_hdr) +
@@ -277,6 +303,7 @@ pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
 		l4_hdr_size = 0;
 		break;
 	}
+	pkt->l4_len = l4_hdr_size;
 
 	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) || txonly_multi_flow)
 		update_pkt_header(pkt, pkt_len);
@@ -459,11 +486,16 @@ tx_only_begin(portid_t pi)
 {
 	uint16_t pkt_hdr_len, pkt_data_len;
 	int dynf;
-	uint8_t ip_proto = IPPROTO_UDP;
+	uint8_t ip_proto;
 
 	pkt_hdr_len = (uint16_t)(sizeof(struct rte_ether_hdr) +
 				 sizeof(struct rte_ipv4_hdr));
+
+	ip_proto = txonly_tso_segsz > 0 ? IPPROTO_TCP : IPPROTO_UDP;
 	switch (ip_proto) {
+	case IPPROTO_TCP:
+		pkt_hdr_len += sizeof(struct rte_tcp_hdr);
+		break;
 	case IPPROTO_UDP:
 		pkt_hdr_len += sizeof(struct rte_udp_hdr);
 		break;
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 610e442924..01d6852fd3 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -369,6 +369,10 @@ The command line options are:
 
     Generate multiple flows in txonly mode.
 
+*   ``--txonly-tso-mss=N``
+
+    Enable TSO offload and generate TCP packets with specified MSS in txonly mode.
+
 *   ``--rxq-share=[X]``
 
     Create queues in shared Rx queue mode if device supports.