From patchwork Sun Apr 24 06:07:39 2022
X-Patchwork-Submitter: fengchengwen
X-Patchwork-Id: 110179
X-Patchwork-Delegate: thomas@monjalon.net
From: Chengwen Feng
Subject: [PATCH v3 1/3] examples/dma: fix MTU configuration
Date: Sun, 24 Apr 2022 14:07:39 +0800
Message-ID: <20220424060741.33214-2-fengchengwen@huawei.com>
In-Reply-To: <20220424060741.33214-1-fengchengwen@huawei.com>
References: <20220411025634.33032-1-fengchengwen@huawei.com>
 <20220424060741.33214-1-fengchengwen@huawei.com>
List-Id: DPDK patches and discussions

From: Huisong Li

The MTU of the dma application can be configured via the
'max_frame_size' parameter, which has a default value of 1518. It is
not correct to use this frame size directly as the MTU, because the
MTU excludes the Ethernet header and CRC overhead. This patch fixes
it by converting the configured frame size into an MTU using the
overhead length reported by the device.

Fixes: 1bb4a528c41f ("ethdev: fix max Rx packet length")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li
---
 examples/dma/dmafwd.c | 43 +++++++++++++++++++++++++++++++++++++++----
 1 file changed, 39 insertions(+), 4 deletions(-)

diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index 608487e35c..a03ca05129 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -117,7 +117,7 @@ static uint16_t nb_txd = TX_DEFAULT_RINGSIZE;
 static volatile bool force_quit;
 
 static uint32_t dma_batch_sz = MAX_PKT_BURST;
-static uint32_t max_frame_size = RTE_ETHER_MAX_LEN;
+static uint32_t max_frame_size;
 
 /* ethernet addresses of ports */
 static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS];
@@ -851,6 +851,38 @@ assign_rings(void)
 }
 /* >8 End of assigning ring structures for packet exchanging. */
 
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+	uint32_t overhead_len;
+
+	if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+		overhead_len = max_rx_pktlen - max_mtu;
+	else
+		overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+	return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+		struct rte_eth_dev_info *dev_info)
+{
+	uint32_t overhead_len;
+
+	if (max_frame_size == 0)
+		return 0;
+
+	if (max_frame_size < RTE_ETHER_MIN_LEN)
+		return -1;
+
+	overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+			dev_info->max_mtu);
+	conf->rxmode.mtu = max_frame_size - overhead_len;
+
+	return 0;
+}
+
 /*
  * Initializes a given port using global settings and with the RX buffers
  * coming from the mbuf_pool passed as a parameter.
@@ -878,9 +910,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 	struct rte_eth_dev_info dev_info;
 	int ret, i;
 
-	if (max_frame_size > local_port_conf.rxmode.mtu)
-		local_port_conf.rxmode.mtu = max_frame_size;
-
 	/* Skip ports that are not enabled */
 	if ((dma_enabled_port_mask & (1 << portid)) == 0) {
 		printf("Skipping disabled port %u\n", portid);
@@ -895,6 +924,12 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 		rte_exit(EXIT_FAILURE,
 			"Cannot get device info: %s, port=%u\n",
 			rte_strerror(-ret), portid);
 
+	ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+	if (ret != 0)
+		rte_exit(EXIT_FAILURE,
+			"Invalid max frame size: %u (port %u)\n",
+			max_frame_size, portid);
+
 	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
 	ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);

From patchwork Sun Apr 24 06:07:40 2022
X-Patchwork-Submitter: fengchengwen
X-Patchwork-Id: 110177
From: Chengwen Feng
Subject: [PATCH v3 2/3] examples/dma: fix Tx drop statistic not being
 collected
Date: Sun, 24 Apr 2022 14:07:40 +0800
Message-ID: <20220424060741.33214-3-fengchengwen@huawei.com>
In-Reply-To: <20220424060741.33214-1-fengchengwen@huawei.com>
References: <20220411025634.33032-1-fengchengwen@huawei.com>
 <20220424060741.33214-1-fengchengwen@huawei.com>

The Tx drop statistic was designed to be collected by the
rte_eth_dev_tx_buffer mechanism, but the application sends packets
with rte_eth_tx_burst, so the statistic was never updated. This patch
removes the rte_eth_dev_tx_buffer mechanism and counts the drops
directly after rte_eth_tx_burst to fix the problem.

Fixes: 632bcd9b5d4f ("examples/ioat: print statistics")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng
Acked-by: Bruce Richardson
Acked-by: Kevin Laatz
---
 examples/dma/dmafwd.c | 27 +++++----------------------
 1 file changed, 5 insertions(+), 22 deletions(-)

diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index a03ca05129..dd576bcf77 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -122,7 +122,6 @@ static uint32_t max_frame_size;
 
 /* ethernet addresses of ports */
 static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS];
-static struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS];
 
 struct rte_mempool *dma_pktmbuf_pool;
 
 /* Print out statistics for one port. */
@@ -484,10 +483,13 @@ dma_tx_port(struct rxtx_port_config *tx_config)
 
 		port_statistics.tx[tx_config->rxtx_port] += nb_tx;
 
-		/* Free any unsent packets. */
-		if (unlikely(nb_tx < nb_dq))
+		if (unlikely(nb_tx < nb_dq)) {
+			port_statistics.tx_dropped[tx_config->rxtx_port] +=
+				(nb_dq - nb_tx);
+			/* Free any unsent packets. */
 			rte_mempool_put_bulk(dma_pktmbuf_pool,
 			(void *)&mbufs[nb_tx], nb_dq - nb_tx);
+		}
 	}
 }
 /* >8 End of transmitting packets from dmadev. */
@@ -970,25 +972,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 			"rte_eth_tx_queue_setup:err=%d,port=%u\n",
 			ret, portid);
 
-	/* Initialize TX buffers */
-	tx_buffer[portid] = rte_zmalloc_socket("tx_buffer",
-			RTE_ETH_TX_BUFFER_SIZE(MAX_PKT_BURST), 0,
-			rte_eth_dev_socket_id(portid));
-	if (tx_buffer[portid] == NULL)
-		rte_exit(EXIT_FAILURE,
-			"Cannot allocate buffer for tx on port %u\n",
-			portid);
-
-	rte_eth_tx_buffer_init(tx_buffer[portid], MAX_PKT_BURST);
-
-	ret = rte_eth_tx_buffer_set_err_callback(tx_buffer[portid],
-			rte_eth_tx_buffer_count_callback,
-			&port_statistics.tx_dropped[portid]);
-	if (ret < 0)
-		rte_exit(EXIT_FAILURE,
-			"Cannot set error callback for tx buffer on port %u\n",
-			portid);
-
 	/* Start device. 8< */
 	ret = rte_eth_dev_start(portid);
 	if (ret < 0)

From patchwork Sun Apr 24 06:07:41 2022
X-Patchwork-Submitter: fengchengwen
X-Patchwork-Id: 110178
From: Chengwen Feng
Subject: [PATCH v3 3/3] examples/dma: add force minimal copy size parameter
Date: Sun, 24 Apr 2022 14:07:41 +0800
Message-ID: <20220424060741.33214-4-fengchengwen@huawei.com>
In-Reply-To: <20220424060741.33214-1-fengchengwen@huawei.com>
References: <20220411025634.33032-1-fengchengwen@huawei.com>
 <20220424060741.33214-1-fengchengwen@huawei.com>
This patch adds a force minimal copy size parameter
(-m/--force-min-copy-size): when copying by CPU or DMA, the actual
copy size is the maximum of the mbuf's data_len and this parameter.
The parameter is intended for comparing the performance of CPU copy
and DMA copy; the user can send small packets at a high rate to drive
the performance test.

Signed-off-by: Chengwen Feng
Acked-by: Bruce Richardson
Acked-by: Kevin Laatz
---
 examples/dma/dmafwd.c | 30 +++++++++++++++++++++++++++---
 1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index dd576bcf77..67b5a9b22b 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -25,6 +25,7 @@
 #define CMD_LINE_OPT_RING_SIZE "ring-size"
 #define CMD_LINE_OPT_BATCH_SIZE "dma-batch-size"
 #define CMD_LINE_OPT_FRAME_SIZE "max-frame-size"
+#define CMD_LINE_OPT_FORCE_COPY_SIZE "force-min-copy-size"
 #define CMD_LINE_OPT_STATS_INTERVAL "stats-interval"
 
 /* configurable number of RX/TX ring descriptors */
@@ -118,6 +119,7 @@ static volatile bool force_quit;
 
 static uint32_t dma_batch_sz = MAX_PKT_BURST;
 static uint32_t max_frame_size;
+static uint32_t force_min_copy_size;
 
 /* ethernet addresses of ports */
 static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS];
@@ -205,7 +207,13 @@ print_stats(char *prgname)
 			"Rx Queues = %d, ", nb_queues);
 	status_strlen += snprintf(status_string + status_strlen,
 			sizeof(status_string) - status_strlen,
-			"Ring Size = %d", ring_size);
+			"Ring Size = %d\n", ring_size);
+	status_strlen += snprintf(status_string + status_strlen,
+			sizeof(status_string) - status_strlen,
+			"Force Min Copy Size = %u Packet Data Room Size = %u",
+			force_min_copy_size,
+			rte_pktmbuf_data_room_size(dma_pktmbuf_pool) -
+			RTE_PKTMBUF_HEADROOM);
 
 	memset(&ts, 0, sizeof(struct total_statistics));
 
@@ -303,7 +311,8 @@ static inline void
 pktmbuf_sw_copy(struct rte_mbuf *src, struct rte_mbuf *dst)
 {
 	rte_memcpy(rte_pktmbuf_mtod(dst, char *),
-		rte_pktmbuf_mtod(src, char *), src->data_len);
+		rte_pktmbuf_mtod(src, char *),
+		RTE_MAX(src->data_len, force_min_copy_size));
 }
 /* >8 End of perform packet copy there is a user-defined function. */
@@ -320,7 +329,9 @@ dma_enqueue_packets(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[],
 		ret = rte_dma_copy(dev_id, 0,
 			rte_pktmbuf_iova(pkts[i]),
 			rte_pktmbuf_iova(pkts_copy[i]),
-			rte_pktmbuf_data_len(pkts[i]), 0);
+			RTE_MAX(rte_pktmbuf_data_len(pkts[i]),
+				force_min_copy_size),
+			0);
 
 		if (ret < 0)
 			break;
@@ -572,6 +583,7 @@ dma_usage(const char *prgname)
 	printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
 		"  -b --dma-batch-size: number of requests per DMA batch\n"
 		"  -f --max-frame-size: max frame size\n"
+		"  -m --force-min-copy-size: force a minimum copy length, even for smaller packets\n"
 		"  -p --portmask: hexadecimal bitmask of ports to configure\n"
 		"  -q NQ: number of RX queues per port (default is 1)\n"
 		"  --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
@@ -617,6 +629,7 @@ dma_parse_args(int argc, char **argv, unsigned int nb_ports)
 		"b:"  /* dma batch size */
 		"c:"  /* copy type (sw|hw) */
 		"f:"  /* max frame size */
+		"m:"  /* force min copy size */
 		"p:"  /* portmask */
 		"q:"  /* number of RX queues per port */
 		"s:"  /* ring size */
@@ -632,6 +645,7 @@ dma_parse_args(int argc, char **argv, unsigned int nb_ports)
 		{CMD_LINE_OPT_RING_SIZE, required_argument, NULL, 's'},
 		{CMD_LINE_OPT_BATCH_SIZE, required_argument, NULL, 'b'},
 		{CMD_LINE_OPT_FRAME_SIZE, required_argument, NULL, 'f'},
+		{CMD_LINE_OPT_FORCE_COPY_SIZE, required_argument, NULL, 'm'},
 		{CMD_LINE_OPT_STATS_INTERVAL, required_argument, NULL, 'i'},
 		{NULL, 0, 0, 0}
 	};
@@ -666,6 +680,10 @@ dma_parse_args(int argc, char **argv, unsigned int nb_ports)
 			}
 			break;
 
+		case 'm':
+			force_min_copy_size = atoi(optarg);
+			break;
+
 		/* portmask */
 		case 'p':
 			dma_enabled_port_mask = dma_parse_portmask(optarg);
@@ -1064,6 +1082,12 @@ main(int argc, char **argv)
 		rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
 	/* >8 End of allocates mempool to hold the mbufs. */
 
+	if (force_min_copy_size >
+	    (uint32_t)(rte_pktmbuf_data_room_size(dma_pktmbuf_pool) -
+		       RTE_PKTMBUF_HEADROOM))
+		rte_exit(EXIT_FAILURE,
+			 "Force min copy size > packet mbuf size\n");
+
 	/* Initialize each port. 8< */
 	cfg.nb_ports = 0;
 	RTE_ETH_FOREACH_DEV(portid)