From patchwork Mon Oct 5 06:26:43 2020
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 79594
X-Patchwork-Delegate: rasland@nvidia.com
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
To: dev@dpdk.org
Cc: thomasm@monjalon.net, stephen@networkplumber.org, ferruh.yigit@intel.com, olivier.matz@6wind.com, jerinjacobk@gmail.com, maxime.coquelin@redhat.com, david.marchand@redhat.com, arybchenko@solarflare.com
Date: Mon, 5 Oct 2020 06:26:43 +0000
Message-Id: <1601879207-6504-2-git-send-email-viacheslavo@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1601879207-6504-1-git-send-email-viacheslavo@nvidia.com>
References: <1601879207-6504-1-git-send-email-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH 1/5] ethdev: introduce Rx buffer split

The DPDK datapath in the transmit direction is very flexible. An application can build multi-segment packets and manage almost all data aspects: the memory pools the segments are allocated from, the segment lengths, memory attributes such as external buffers registered for DMA, and so on. In the receive direction the datapath is much less flexible: an application can only specify the memory pool to configure the receiving queue, and nothing more. To extend the receive datapath capabilities, it is proposed to add a way to provide extended information on how to split the packets being received.
The following structure is introduced to specify the Rx packet segment:

struct rte_eth_rxseg {
    struct rte_mempool *mp; /* memory pool to allocate segment from */
    uint16_t length; /* segment maximal data length */
    uint16_t offset; /* data offset from beginning of mbuf data buffer */
    uint32_t reserved; /* reserved field */
};

The new routine rte_eth_rx_queue_setup_ex() is introduced to set up the given Rx queue using the new extended Rx packet segment description:

int rte_eth_rx_queue_setup_ex(uint16_t port_id, uint16_t rx_queue_id,
                              uint16_t nb_rx_desc, unsigned int socket_id,
                              const struct rte_eth_rxconf *rx_conf,
                              const struct rte_eth_rxseg *rx_seg,
                              uint16_t n_seg)

This routine introduces two new parameters:

rx_seg - pointer to the array of segment descriptions; each element describes the memory pool, the maximal data length, and the initial data offset from the beginning of the data buffer in the mbuf
n_seg  - number of elements in the array

The new offload flag DEV_RX_OFFLOAD_BUFFER_SPLIT in the device capabilities is introduced to present the way for a PMD to report to the application that it supports splitting Rx packets into configurable segments. The application should check the DEV_RX_OFFLOAD_BUFFER_SPLIT flag prior to invoking the rte_eth_rx_queue_setup_ex() routine.

If the Rx queue is configured with the new routine, the packets being received will be split into multiple segments pushed to mbufs with the specified attributes. The PMD will allocate the first mbuf from the pool specified in the first segment descriptor and put the data starting at the specified offset in the allocated mbuf data buffer. If the packet length exceeds the specified segment length, the next mbuf will be allocated according to the next segment descriptor (if any), and data will be put in its data buffer at the specified offset, not exceeding the specified length. If there is no next descriptor, the next mbuf will be allocated and filled in the same way (from the same pool and with the same buffer offset/length) as the current one.

For example, suppose we configured the Rx queue with the following segments:

seg0 - pool0, len0=14B, off0=RTE_PKTMBUF_HEADROOM
seg1 - pool1, len1=20B, off1=0B
seg2 - pool2, len2=20B, off2=0B
seg3 - pool3, len3=512B, off3=0B

A packet 46 bytes long will look like the following:

seg0 - 14B long @ RTE_PKTMBUF_HEADROOM in mbuf from pool0
seg1 - 20B long @ 0 in mbuf from pool1
seg2 - 12B long @ 0 in mbuf from pool2

A packet 1500 bytes long will look like the following:

seg0 - 14B @ RTE_PKTMBUF_HEADROOM in mbuf from pool0
seg1 - 20B @ 0 in mbuf from pool1
seg2 - 20B @ 0 in mbuf from pool2
seg3 - 512B @ 0 in mbuf from pool3
seg4 - 512B @ 0 in mbuf from pool3
seg5 - 422B @ 0 in mbuf from pool3

The offload DEV_RX_OFFLOAD_SCATTER must be present and configured to support the new buffer split feature (if n_seg is greater than one).

The new approach allows splitting the ingress packets into multiple parts pushed to memory with different attributes. For example, the packet headers can be pushed to the embedded data buffers within mbufs, and the application data to external buffers attached to mbufs allocated from different memory pools. The memory attributes for the split parts may differ as well - for example, the application data may be pushed into external memory located on a dedicated physical device, say a GPU or NVMe. This improves the flexibility of the DPDK receive datapath while preserving compatibility with the existing API. Also, the proposed segment description might be used to specify Rx packet split for some other features. For example, it could provide a way to specify an extra memory pool for the header split feature of some Intel PMDs.
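Below is a minimal usage sketch for the example above. It is illustrative only and not part of the patch: pool0..pool3 are assumed to be mempools created beforehand (e.g. with rte_pktmbuf_pool_create()), the queue and descriptor numbers are arbitrary, and DEV_RX_OFFLOAD_SCATTER is expected to be enabled in rxmode.offloads at rte_eth_dev_configure() time:

    /* Hypothetical helper: set up Rx queue 0 to split packets
     * as in the example above (requires rte_ethdev.h). */
    static int
    setup_split_rxq(uint16_t port_id, struct rte_mempool *pool0,
                    struct rte_mempool *pool1, struct rte_mempool *pool2,
                    struct rte_mempool *pool3)
    {
        struct rte_eth_rxseg seg[] = {
            { .mp = pool0, .length = 14,  .offset = RTE_PKTMBUF_HEADROOM },
            { .mp = pool1, .length = 20,  .offset = 0 },
            { .mp = pool2, .length = 20,  .offset = 0 },
            { .mp = pool3, .length = 512, .offset = 0 },
        };
        struct rte_eth_dev_info dev_info;
        struct rte_eth_rxconf rxconf;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;
        /* The PMD must report buffer split support. */
        if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_BUFFER_SPLIT))
            return -ENOTSUP;
        rxconf = dev_info.default_rxconf;
        rxconf.offloads |= DEV_RX_OFFLOAD_BUFFER_SPLIT;
        return rte_eth_rx_queue_setup_ex(port_id, 0 /* queue */,
                                         512 /* descriptors */,
                                         rte_socket_id(), &rxconf,
                                         seg, RTE_DIM(seg));
    }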
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> --- doc/guides/nics/features.rst | 15 +++ doc/guides/rel_notes/release_20_11.rst | 6 ++ lib/librte_ethdev/rte_ethdev.c | 172 +++++++++++++++++++++++++++++++++ lib/librte_ethdev/rte_ethdev.h | 16 +++ lib/librte_ethdev/rte_ethdev_driver.h | 10 ++ 5 files changed, 219 insertions(+) diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst index dd8c955..ac9dfd7 100644 --- a/doc/guides/nics/features.rst +++ b/doc/guides/nics/features.rst @@ -185,6 +185,21 @@ Supports receiving segmented mbufs. * **[related] eth_dev_ops**: ``rx_pkt_burst``. +.. _nic_features_buffer_split: + +Buffer Split on Rx +------------------ + +Scatters the packets being received on specified boundaries to segmented mbufs. + +* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_BUFFER_SPLIT``. +* **[implements] datapath**: ``Buffer Split functionality``. +* **[implements] rte_eth_dev_data**: ``buffer_split``. +* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_BUFFER_SPLIT``. +* **[provides] eth_dev_ops**: ``rxq_info_get:buffer_split``. +* **[related] API**: ``rte_eth_rx_queue_setup_ex()``. + + .. _nic_features_lro: LRO diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst index 4bcf220..8da5cc9 100644 --- a/doc/guides/rel_notes/release_20_11.rst +++ b/doc/guides/rel_notes/release_20_11.rst @@ -55,6 +55,12 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Introduced extended buffer description for receiving.** + + * Added extended Rx queue setup routine + * Added description for Rx segment sizes + * Added capability to specify the memory pool for each segment + * **Updated Cisco enic driver.** * Added support for VF representors with single-queue Tx/Rx and flow API diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c index dfe5c1b..ace7567 100644 --- a/lib/librte_ethdev/rte_ethdev.c +++ b/lib/librte_ethdev/rte_ethdev.c @@ -128,6 +128,7 @@ struct rte_eth_xstats_name_off { RTE_RX_OFFLOAD_BIT2STR(SCTP_CKSUM), RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM), RTE_RX_OFFLOAD_BIT2STR(RSS_HASH), + RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT), }; #undef RTE_RX_OFFLOAD_BIT2STR @@ -1933,6 +1934,177 @@ struct rte_eth_dev * } int +rte_eth_rx_queue_setup_ex(uint16_t port_id, uint16_t rx_queue_id, + uint16_t nb_rx_desc, unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + const struct rte_eth_rxseg *rx_seg, uint16_t n_seg) +{ + int ret; + uint16_t seg_idx; + uint32_t mbp_buf_size; + struct rte_eth_dev *dev; + struct rte_eth_dev_info dev_info; + struct rte_eth_rxconf local_conf; + void **rxq; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL); + + dev = &rte_eth_devices[port_id]; + if (rx_queue_id >= dev->data->nb_rx_queues) { + RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", rx_queue_id); + return -EINVAL; + } + + if (rx_seg == NULL) { + RTE_ETHDEV_LOG(ERR, "Invalid null description pointer\n"); + return -EINVAL; + } + + if (n_seg == 0) { + RTE_ETHDEV_LOG(ERR, "Invalid zero description number\n"); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup_ex, -ENOTSUP); + + /* + * Check the size of the mbuf data buffer. + * This value must be provided in the private data of the memory pool. + * First check that the memory pool has a valid private data.
+ */ + ret = rte_eth_dev_info_get(port_id, &dev_info); + if (ret != 0) + return ret; + + for (seg_idx = 0; seg_idx < n_seg; seg_idx++) { + struct rte_mempool *mp = rx_seg[seg_idx].mp; + + if (mp->private_data_size < + sizeof(struct rte_pktmbuf_pool_private)) { + RTE_ETHDEV_LOG(ERR, "%s private_data_size %d < %d\n", + mp->name, (int)mp->private_data_size, + (int)sizeof(struct rte_pktmbuf_pool_private)); + return -ENOSPC; + } + + mbp_buf_size = rte_pktmbuf_data_room_size(mp); + if (mbp_buf_size < + rx_seg[seg_idx].length + rx_seg[seg_idx].offset) { + RTE_ETHDEV_LOG(ERR, + "%s mbuf_data_room_size %d < %d" + " (segment length=%d + segment offset=%d)\n", + mp->name, (int)mbp_buf_size, + (int)(rx_seg[seg_idx].length + + rx_seg[seg_idx].offset), + (int)rx_seg[seg_idx].length, + (int)rx_seg[seg_idx].offset); + return -EINVAL; + } + } + + /* Use default specified by driver, if nb_rx_desc is zero */ + if (nb_rx_desc == 0) { + nb_rx_desc = dev_info.default_rxportconf.ring_size; + /* If driver default is also zero, fall back on EAL default */ + if (nb_rx_desc == 0) + nb_rx_desc = RTE_ETH_DEV_FALLBACK_RX_RINGSIZE; + } + + if (nb_rx_desc > dev_info.rx_desc_lim.nb_max || + nb_rx_desc < dev_info.rx_desc_lim.nb_min || + nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) { + + RTE_ETHDEV_LOG(ERR, + "Invalid value for nb_rx_desc(=%hu), should be: " + "<= %hu, >= %hu, and a product of %hu\n", + nb_rx_desc, dev_info.rx_desc_lim.nb_max, + dev_info.rx_desc_lim.nb_min, + dev_info.rx_desc_lim.nb_align); + return -EINVAL; + } + + if (dev->data->dev_started && + !(dev_info.dev_capa & + RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP)) + return -EBUSY; + + if (dev->data->dev_started && + (dev->data->rx_queue_state[rx_queue_id] != + RTE_ETH_QUEUE_STATE_STOPPED)) + return -EBUSY; + + rxq = dev->data->rx_queues; + if (rxq[rx_queue_id]) { + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, + -ENOTSUP); + (*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]); + rxq[rx_queue_id] = NULL; + } + + if (rx_conf == NULL) + rx_conf = &dev_info.default_rxconf; + + local_conf = *rx_conf; + + /* + * If an offloading has already been enabled in + * rte_eth_dev_configure(), it has been enabled on all queues, + * so there is no need to enable it in this queue again. + * The local_conf.offloads input to underlying PMD only carries + * those offloadings which are only enabled on this queue and + * not enabled on all queues. + */ + local_conf.offloads &= ~dev->data->dev_conf.rxmode.offloads; + + /* + * New added offloadings for this queue are those not enabled in + * rte_eth_dev_configure() and they must be per-queue type. + * A pure per-port offloading can't be enabled on a queue while + * disabled on another queue. A pure per-port offloading can't + * be enabled for any queue as new added one if it hasn't been + * enabled in rte_eth_dev_configure(). + */ + if ((local_conf.offloads & dev_info.rx_queue_offload_capa) != + local_conf.offloads) { + RTE_ETHDEV_LOG(ERR, + "Ethdev port_id=%d rx_queue_id=%d, new added offloads" + " 0x%"PRIx64" must be within per-queue offload" + " capabilities 0x%"PRIx64" in %s()\n", + port_id, rx_queue_id, local_conf.offloads, + dev_info.rx_queue_offload_capa, + __func__); + return -EINVAL; + } + + /* + * If LRO is enabled, check that the maximum aggregated packet + * size is supported by the configured device. 
+ */ + if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) { + if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0) + dev->data->dev_conf.rxmode.max_lro_pkt_size = + dev->data->dev_conf.rxmode.max_rx_pkt_len; + int ret = check_lro_pkt_size(port_id, + dev->data->dev_conf.rxmode.max_lro_pkt_size, + dev->data->dev_conf.rxmode.max_rx_pkt_len, + dev_info.max_lro_pkt_size); + if (ret != 0) + return ret; + } + + ret = (*dev->dev_ops->rx_queue_setup_ex)(dev, rx_queue_id, nb_rx_desc, + socket_id, &local_conf, + rx_seg, n_seg); + if (!ret) { + if (!dev->data->min_rx_buf_size || + dev->data->min_rx_buf_size > mbp_buf_size) + dev->data->min_rx_buf_size = mbp_buf_size; + } + + return eth_err(port_id, ret); +} + +int rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, uint16_t nb_rx_desc, const struct rte_eth_hairpin_conf *conf) diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h index 645a186..553900b 100644 --- a/lib/librte_ethdev/rte_ethdev.h +++ b/lib/librte_ethdev/rte_ethdev.h @@ -970,6 +970,16 @@ struct rte_eth_txmode { }; /** + * A structure used to configure an RX packet segment to split. + */ +struct rte_eth_rxseg { + struct rte_mempool *mp; /**< Memory pool to allocate segment from */ + uint16_t length; /**< Segment maximal data length */ + uint16_t offset; /**< Data offset from beginning of mbuf data buffer */ + uint32_t reserved; /**< Reserved field */ +}; + +/** * A structure used to configure an RX ring of an Ethernet port. */ struct rte_eth_rxconf { @@ -1260,6 +1270,7 @@ struct rte_eth_conf { #define DEV_RX_OFFLOAD_SCTP_CKSUM 0x00020000 #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM 0x00040000 #define DEV_RX_OFFLOAD_RSS_HASH 0x00080000 +#define DEV_RX_OFFLOAD_BUFFER_SPLIT 0x00100000 #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \ DEV_RX_OFFLOAD_UDP_CKSUM | \ @@ -2020,6 +2031,11 @@ int rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mb_pool); +int rte_eth_rx_queue_setup_ex(uint16_t port_id, uint16_t rx_queue_id, + uint16_t nb_rx_desc, unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + const struct rte_eth_rxseg *rx_seg, uint16_t n_seg); + /** * @warning * @b EXPERIMENTAL: this API may change, or be removed, without prior notice diff --git a/lib/librte_ethdev/rte_ethdev_driver.h b/lib/librte_ethdev/rte_ethdev_driver.h index 04ac8e9..de4d7de 100644 --- a/lib/librte_ethdev/rte_ethdev_driver.h +++ b/lib/librte_ethdev/rte_ethdev_driver.h @@ -264,6 +264,15 @@ typedef int (*eth_rx_queue_setup_t)(struct rte_eth_dev *dev, struct rte_mempool *mb_pool); /**< @internal Set up a receive queue of an Ethernet device. */ +typedef int (*eth_rx_queue_setup_ex_t)(struct rte_eth_dev *dev, + uint16_t rx_queue_id, + uint16_t nb_rx_desc, + unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + const struct rte_eth_rxseg *rx_seg, + uint16_t n_seg); +/**< @internal Extended set up of a receive queue of an Ethernet device. */ + typedef int (*eth_tx_queue_setup_t)(struct rte_eth_dev *dev, uint16_t tx_queue_id, uint16_t nb_tx_desc, @@ -630,6 +639,7 @@ struct eth_dev_ops { eth_queue_start_t tx_queue_start;/**< Start TX for a queue. */ eth_queue_stop_t tx_queue_stop; /**< Stop TX for a queue. */ eth_rx_queue_setup_t rx_queue_setup;/**< Set up device RX queue. */ + eth_rx_queue_setup_ex_t rx_queue_setup_ex;/**< Extended RX setup. */ eth_queue_release_t rx_queue_release; /**< Release RX queue. */ eth_rx_enable_intr_t rx_queue_intr_enable; /**< Enable Rx queue interrupt.
*/
From patchwork Mon Oct 5 06:26:44 2020
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 79593
X-Patchwork-Delegate: rasland@nvidia.com
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
To: dev@dpdk.org
Cc: thomasm@monjalon.net, stephen@networkplumber.org, ferruh.yigit@intel.com, olivier.matz@6wind.com, jerinjacobk@gmail.com, maxime.coquelin@redhat.com, david.marchand@redhat.com, arybchenko@solarflare.com
Date: Mon, 5 Oct 2020 06:26:44 +0000
Message-Id: <1601879207-6504-3-git-send-email-viacheslavo@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1601879207-6504-1-git-send-email-viacheslavo@nvidia.com>
References: <1601879207-6504-1-git-send-email-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH 2/5] app/testpmd: add multiple pools per core creation

The command line parameter --mbuf-size is updated: it can now handle multiple values, like the following:

    --mbuf-size=2176,512,768,4096

specifying the creation of extra memory pools with the requested mbuf data buffer sizes. If a buffer split feature is engaged, the extra memory pools can be used to configure the Rx queues with rte_eth_rx_queue_setup_ex().

The extra pools are created with the requested sizes, and the pool names are assigned with an appended index: mbuf_pool_socket_%socket_%index. Index zero is used to specify the first mandatory pool, maintaining compatibility with the existing code.
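For illustration, a hypothetical run on socket 0 with the option above would create four pools named according to this scheme:

    mbuf_pool_socket_0    - 2176B data buffer, index 0 (the mandatory pool)
    mbuf_pool_socket_0_1  - 512B data buffer
    mbuf_pool_socket_0_2  - 768B data buffer
    mbuf_pool_socket_0_3  - 4096B data buffer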
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> --- app/test-pmd/bpf_cmd.c | 4 +-- app/test-pmd/cmdline.c | 2 +- app/test-pmd/config.c | 6 ++-- app/test-pmd/parameters.c | 24 +++++++++---- app/test-pmd/testpmd.c | 63 +++++++++++++++++++---------------- app/test-pmd/testpmd.h | 24 ++++++++++--- doc/guides/testpmd_app_ug/run_app.rst | 7 ++-- 7 files changed, 83 insertions(+), 47 deletions(-) diff --git a/app/test-pmd/bpf_cmd.c b/app/test-pmd/bpf_cmd.c index 0f984cc..d8bb7ca 100644 --- a/app/test-pmd/bpf_cmd.c +++ b/app/test-pmd/bpf_cmd.c @@ -69,7 +69,7 @@ struct cmd_bpf_ld_result { *flags = RTE_BPF_ETH_F_NONE; arg->type = RTE_BPF_ARG_PTR; - arg->size = mbuf_data_size; + arg->size = mbuf_data_size[0]; for (i = 0; str[i] != 0; i++) { v = toupper(str[i]); @@ -78,7 +78,7 @@ struct cmd_bpf_ld_result { else if (v == 'M') { arg->type = RTE_BPF_ARG_PTR_MBUF; arg->size = sizeof(struct rte_mbuf); - arg->buf_size = mbuf_data_size; + arg->buf_size = mbuf_data_size[0]; } else if (v == '-') continue; else diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 08e123f..3f57182 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -2898,7 +2898,7 @@ struct cmd_setup_rxtx_queue { if (!numa_support || socket_id == NUMA_NO_CONFIG) socket_id = port->socket_id; - mp = mbuf_pool_find(socket_id); + mp = mbuf_pool_find(socket_id, 0); if (mp == NULL) { printf("Failed to setup RX queue: " "No mempool allocation" diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 17a6efe..7048288 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -625,7 +625,7 @@ static int bus_match_all(const struct rte_bus *bus, const void *data) printf("\nConnect to socket: %u", port->socket_id); if (port_numa[port_id] != NUMA_NO_CONFIG) { - mp = mbuf_pool_find(port_numa[port_id]); + mp = mbuf_pool_find(port_numa[port_id], 0); if (mp) printf("\nmemory allocation on the socket: %d", port_numa[port_id]); @@ -3124,9 +3124,9 @@ struct igb_ring_desc_16_bytes { */ tx_pkt_len = 0; for (i = 0; i < nb_segs; i++) { - if (seg_lengths[i] > (unsigned) mbuf_data_size) { + if (seg_lengths[i] > mbuf_data_size[0]) { printf("length[%u]=%u > mbuf_data_size=%u - give up\n", - i, seg_lengths[i], (unsigned) mbuf_data_size); + i, seg_lengths[i], mbuf_data_size[0]); return; } tx_pkt_len = (uint16_t)(tx_pkt_len + seg_lengths[i]); diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c index 1ead595..1f40d73 100644 --- a/app/test-pmd/parameters.c +++ b/app/test-pmd/parameters.c @@ -106,7 +106,9 @@ "(flag: 1 for RX; 2 for TX; 3 for RX and TX).\n"); printf(" --socket-num=N: set socket from which all memory is allocated " "in NUMA mode.\n"); - printf(" --mbuf-size=N: set the data size of mbuf to N bytes.\n"); + printf(" --mbuf-size=N[,N1[,...Nn]]: set the data size of mbuf to " "N bytes.
If multiple numbers are specified, the extra pools " + "will be created to receive packets with split features.\n"); printf(" --total-num-mbufs=N: set the number of mbufs to be allocated " "in mbuf pools.\n"); printf(" --max-pkt-len=N: set the maximum size of packet to N bytes.\n"); @@ -890,12 +892,22 @@ } } if (!strcmp(lgopts[opt_idx].name, "mbuf-size")) { - n = atoi(optarg); - if (n > 0 && n <= 0xFFFF) - mbuf_data_size = (uint16_t) n; - else + unsigned int mb_sz[MAX_SEGS_BUFFER_SPLIT]; + unsigned int nb_segs, i; + + nb_segs = parse_item_list(optarg, "mbuf-size", + MAX_SEGS_BUFFER_SPLIT, mb_sz, 0); + if (nb_segs == 0) rte_exit(EXIT_FAILURE, - "mbuf-size should be > 0 and < 65536\n"); + "bad mbuf-size\n"); + for (i = 0; i < nb_segs; i++) { + if (mb_sz[i] == 0 || mb_sz[i] > 0xFFFF) + rte_exit(EXIT_FAILURE, + "mbuf-size should be " + "> 0 and < 65536\n"); + mbuf_data_size[i] = (uint16_t) mb_sz[i]; + } + mbuf_data_size_n = nb_segs; } if (!strcmp(lgopts[opt_idx].name, "total-num-mbufs")) { n = atoi(optarg); diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index fe6450c..f5060ee 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -186,7 +186,7 @@ struct fwd_engine * fwd_engines[] = { NULL, }; -struct rte_mempool *mempools[RTE_MAX_NUMA_NODES]; +struct rte_mempool *mempools[RTE_MAX_NUMA_NODES * MAX_SEGS_BUFFER_SPLIT]; uint16_t mempool_flags; struct fwd_config cur_fwd_config; @@ -195,7 +195,10 @@ struct fwd_engine * fwd_engines[] = { uint32_t burst_tx_delay_time = BURST_TX_WAIT_US; uint32_t burst_tx_retry_num = BURST_TX_RETRIES; -uint16_t mbuf_data_size = DEFAULT_MBUF_DATA_SIZE; /**< Mbuf data space size. */ +uint32_t mbuf_data_size_n = 1; /* Number of specified mbuf sizes. */ +uint16_t mbuf_data_size[MAX_SEGS_BUFFER_SPLIT] = { + DEFAULT_MBUF_DATA_SIZE +}; /**< Mbuf data space size. */ uint32_t param_total_num_mbufs = 0; /**< number of mbufs in all pools - if * specified on command-line. */ uint16_t stats_period; /**< Period to show statistics (disabled by default) */ @@ -955,14 +958,14 @@ struct extmem_param { */ static struct rte_mempool * mbuf_pool_create(uint16_t mbuf_seg_size, unsigned nb_mbuf, - unsigned int socket_id) + unsigned int socket_id, unsigned int size_idx) { char pool_name[RTE_MEMPOOL_NAMESIZE]; struct rte_mempool *rte_mp = NULL; uint32_t mb_size; mb_size = sizeof(struct rte_mbuf) + mbuf_seg_size; - mbuf_poolname_build(socket_id, pool_name, sizeof(pool_name)); + mbuf_poolname_build(socket_id, pool_name, sizeof(pool_name), size_idx); TESTPMD_LOG(INFO, "create a new mbuf pool <%s>: n=%u, size=%u, socket=%u\n", @@ -1485,8 +1488,8 @@ struct extmem_param { port->dev_info.rx_desc_lim.nb_mtu_seg_max; if ((data_size + RTE_PKTMBUF_HEADROOM) > - mbuf_data_size) { - mbuf_data_size = data_size + + mbuf_data_size[0]) { + mbuf_data_size[0] = data_size + RTE_PKTMBUF_HEADROOM; warning = 1; } @@ -1494,9 +1497,9 @@ struct extmem_param { } if (warning) - TESTPMD_LOG(WARNING, "Configured mbuf size %hu\n", - mbuf_data_size); - + TESTPMD_LOG(WARNING, + "Configured mbuf size of the first segment %hu\n", + mbuf_data_size[0]); /* * Create pools of mbuf.
* If NUMA support is disabled, create a single pool of mbuf in @@ -1516,21 +1519,23 @@ struct extmem_param { } if (numa_support) { - uint8_t i; + uint8_t i, j; for (i = 0; i < num_sockets; i++) - mempools[i] = mbuf_pool_create(mbuf_data_size, - nb_mbuf_per_pool, - socket_ids[i]); + for (j = 0; j < mbuf_data_size_n; j++) + mempools[i * MAX_SEGS_BUFFER_SPLIT + j] = + mbuf_pool_create(mbuf_data_size[j], + nb_mbuf_per_pool, + socket_ids[i], j); } else { - if (socket_num == UMA_NO_CONFIG) - mempools[0] = mbuf_pool_create(mbuf_data_size, - nb_mbuf_per_pool, 0); - else - mempools[socket_num] = mbuf_pool_create - (mbuf_data_size, - nb_mbuf_per_pool, - socket_num); + uint8_t i; + + for (i = 0; i < mbuf_data_size_n; i++) + mempools[i] = mbuf_pool_create + (mbuf_data_size[i], + nb_mbuf_per_pool, + socket_num == UMA_NO_CONFIG ? + 0 : socket_num, i); } init_port_config(); @@ -1542,10 +1547,10 @@ struct extmem_param { */ for (lc_id = 0; lc_id < nb_lcores; lc_id++) { mbp = mbuf_pool_find( - rte_lcore_to_socket_id(fwd_lcores_cpuids[lc_id])); + rte_lcore_to_socket_id(fwd_lcores_cpuids[lc_id]), 0); if (mbp == NULL) - mbp = mbuf_pool_find(0); + mbp = mbuf_pool_find(0, 0); fwd_lcores[lc_id]->mbp = mbp; /* initialize GSO context */ fwd_lcores[lc_id]->gso_ctx.direct_pool = mbp; @@ -2498,7 +2503,8 @@ struct extmem_param { if ((numa_support) && (rxring_numa[pi] != NUMA_NO_CONFIG)) { struct rte_mempool * mp = - mbuf_pool_find(rxring_numa[pi]); + mbuf_pool_find + (rxring_numa[pi], 0); if (mp == NULL) { printf("Failed to setup RX queue:" "No mempool allocation" @@ -2514,7 +2520,8 @@ struct extmem_param { mp); } else { struct rte_mempool *mp = - mbuf_pool_find(port->socket_id); + mbuf_pool_find + (port->socket_id, 0); if (mp == NULL) { printf("Failed to setup RX queue:" "No mempool allocation" @@ -2928,13 +2935,13 @@ struct extmem_param { pmd_test_exit(void) { portid_t pt_id; + unsigned int i; int ret; - int i; if (test_done == 0) stop_packet_forwarding(); - for (i = 0 ; i < RTE_MAX_NUMA_NODES ; i++) { + for (i = 0 ; i < RTE_DIM(mempools) ; i++) { if (mempools[i]) { if (mp_alloc_type == MP_ALLOC_ANON) rte_mempool_mem_iter(mempools[i], dma_unmap_cb, @@ -2978,7 +2985,7 @@ struct extmem_param { return; } } - for (i = 0 ; i < RTE_MAX_NUMA_NODES ; i++) { + for (i = 0 ; i < RTE_DIM(mempools) ; i++) { if (mempools[i]) rte_mempool_free(mempools[i]); } diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index c7e7e41..e5cdd12 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -42,6 +42,13 @@ */ #define RTE_MAX_SEGS_PER_PKT 255 /**< nb_segs is a 8-bit unsigned char. */ +/* + * The maximum number of segments per packet is used to configure + * the buffer split feature; it also specifies the maximum number of + * optional Rx pools used to allocate the mbufs to split. + */ +#define MAX_SEGS_BUFFER_SPLIT 8 /**< Maximum number of Rx packet segments. */ + #define MAX_PKT_BURST 512 #define DEF_PKT_BURST 32 @@ -393,7 +400,9 @@ struct queue_stats_mappings { extern uint8_t dcb_config; extern uint8_t dcb_test; -extern uint16_t mbuf_data_size; /**< Mbuf data space size. */ +extern uint32_t mbuf_data_size_n; +extern uint16_t mbuf_data_size[MAX_SEGS_BUFFER_SPLIT]; +/**< Mbuf data space size.
*/ extern uint32_t param_total_num_mbufs; extern uint16_t stats_period; @@ -604,17 +613,22 @@ struct mplsoudp_decap_conf { /* Mbuf Pools */ static inline void -mbuf_poolname_build(unsigned int sock_id, char* mp_name, int name_size) +mbuf_poolname_build(unsigned int sock_id, char *mp_name, + int name_size, unsigned int idx) { - snprintf(mp_name, name_size, "mbuf_pool_socket_%u", sock_id); + if (!idx) + snprintf(mp_name, name_size, "mbuf_pool_socket_%u", sock_id); + else + snprintf(mp_name, name_size, "mbuf_pool_socket_%u_%u", + sock_id, idx); } static inline struct rte_mempool * -mbuf_pool_find(unsigned int sock_id) +mbuf_pool_find(unsigned int sock_id, unsigned int idx) { char pool_name[RTE_MEMPOOL_NAMESIZE]; - mbuf_poolname_build(sock_id, pool_name, sizeof(pool_name)); + mbuf_poolname_build(sock_id, pool_name, sizeof(pool_name), idx); return rte_mempool_lookup((const char *)pool_name); } diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst index e2539f6..2d5a263 100644 --- a/doc/guides/testpmd_app_ug/run_app.rst +++ b/doc/guides/testpmd_app_ug/run_app.rst @@ -107,9 +107,12 @@ The command line options are: Set the socket from which all memory is allocated in NUMA mode, where 0 <= N < number of sockets on the board. -* ``--mbuf-size=N`` +* ``--mbuf-size=N[,N1[,...Nn]]`` - Set the data size of the mbufs used to N bytes, where N < 65536. The default value is 2048. + Set the data size of the mbufs used to N bytes, where N < 65536. + The default value is 2048. If multiple mbuf-size values are specified, the + extra memory pools will be created for allocating mbufs to receive packets + with buffer splitting features. * ``--total-num-mbufs=N``
From patchwork Mon Oct 5 06:26:45 2020
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 79592
X-Patchwork-Delegate: rasland@nvidia.com
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
To: dev@dpdk.org
Cc: thomasm@monjalon.net, stephen@networkplumber.org, ferruh.yigit@intel.com, olivier.matz@6wind.com, jerinjacobk@gmail.com, maxime.coquelin@redhat.com, david.marchand@redhat.com, arybchenko@solarflare.com
Date: Mon, 5 Oct 2020 06:26:45 +0000
Message-Id: <1601879207-6504-4-git-send-email-viacheslavo@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1601879207-6504-1-git-send-email-viacheslavo@nvidia.com>
References: <1601879207-6504-1-git-send-email-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH 3/5] app/testpmd: add buffer split offload configuration
This patch adds support for DEV_RX_OFFLOAD_BUFFER_SPLIT, providing per-queue configuration for this offload.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> --- app/test-pmd/cmdline.c | 21 +++++++++++---------- app/test-pmd/config.c | 9 +++++++++ 2 files changed, 20 insertions(+), 10 deletions(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 3f57182..24ca56a 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -874,16 +874,16 @@ static void cmd_help_long_parsed(void *parsed_result, "port config rx_offload vlan_strip|" "ipv4_cksum|udp_cksum|tcp_cksum|tcp_lro|qinq_strip|" "outer_ipv4_cksum|macsec_strip|header_split|" - "vlan_filter|vlan_extend|jumbo_frame|" - "scatter|timestamp|security|keep_crc on|off\n" + "vlan_filter|vlan_extend|jumbo_frame|scatter|" + "buffer_split|timestamp|security|keep_crc on|off\n" " Enable or disable a per port Rx offloading" " on all Rx queues of a port\n\n" "port (port_id) rxq (queue_id) rx_offload vlan_strip|" "ipv4_cksum|udp_cksum|tcp_cksum|tcp_lro|qinq_strip|" "outer_ipv4_cksum|macsec_strip|header_split|" - "vlan_filter|vlan_extend|jumbo_frame|" - "scatter|timestamp|security|keep_crc on|off\n" + "vlan_filter|vlan_extend|jumbo_frame|scatter|" + "buffer_split|timestamp|security|keep_crc on|off\n" " Enable or disable a per queue Rx offloading" " only on a specific Rx queue\n\n" @@ -18399,7 +18399,8 @@ struct cmd_config_per_port_rx_offload_result { offload, "vlan_strip#ipv4_cksum#udp_cksum#tcp_cksum#tcp_lro#" "qinq_strip#outer_ipv4_cksum#macsec_strip#" "header_split#vlan_filter#vlan_extend#jumbo_frame#" - "scatter#timestamp#security#keep_crc#rss_hash"); + "scatter#buffer_split#timestamp#security#" + "keep_crc#rss_hash"); cmdline_parse_token_string_t cmd_config_per_port_rx_offload_result_on_off = TOKEN_STRING_INITIALIZER (struct cmd_config_per_port_rx_offload_result, @@ -18479,8 +18480,8 @@ struct cmd_config_per_port_rx_offload_result { .help_str = "port config rx_offload vlan_strip|ipv4_cksum|" "udp_cksum|tcp_cksum|tcp_lro|qinq_strip|outer_ipv4_cksum|" "macsec_strip|header_split|vlan_filter|vlan_extend|" - "jumbo_frame|scatter|timestamp|security|keep_crc|rss_hash " - "on|off", + "jumbo_frame|scatter|buffer_split|timestamp|security|" + "keep_crc|rss_hash on|off", .tokens = { (void *)&cmd_config_per_port_rx_offload_result_port, (void *)&cmd_config_per_port_rx_offload_result_config, @@ -18529,7 +18530,7 @@ struct cmd_config_per_queue_rx_offload_result { offload, "vlan_strip#ipv4_cksum#udp_cksum#tcp_cksum#tcp_lro#" "qinq_strip#outer_ipv4_cksum#macsec_strip#" "header_split#vlan_filter#vlan_extend#jumbo_frame#" - "scatter#timestamp#security#keep_crc"); + "scatter#buffer_split#timestamp#security#keep_crc"); cmdline_parse_token_string_t cmd_config_per_queue_rx_offload_result_on_off = TOKEN_STRING_INITIALIZER (struct cmd_config_per_queue_rx_offload_result, @@ -18585,8 +18586,8 @@ struct cmd_config_per_queue_rx_offload_result { "vlan_strip|ipv4_cksum|" "udp_cksum|tcp_cksum|tcp_lro|qinq_strip|outer_ipv4_cksum|" "macsec_strip|header_split|vlan_filter|vlan_extend|" - "jumbo_frame|scatter|timestamp|security|keep_crc " - "on|off", + "jumbo_frame|scatter|buffer_split|timestamp|security|" + "keep_crc on|off", .tokens = { (void *)&cmd_config_per_queue_rx_offload_result_port, (void *)&cmd_config_per_queue_rx_offload_result_port_id, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 7048288..395ea6b 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1027,6 +1027,15 @@ static int
bus_match_all(const struct rte_bus *bus, const void *data) printf("off\n"); } + if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_BUFFER_SPLIT) { + printf("RX offload buffer split: "); + if (ports[port_id].dev_conf.rxmode.offloads & + DEV_RX_OFFLOAD_BUFFER_SPLIT) + printf("on\n"); + else + printf("off\n"); + } + if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT) { printf("VLAN insert: "); if (ports[port_id].dev_conf.txmode.offloads &
From patchwork Mon Oct 5 06:26:46 2020
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 79596
X-Patchwork-Delegate: rasland@nvidia.com
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
To: dev@dpdk.org
Cc: thomasm@monjalon.net, stephen@networkplumber.org, ferruh.yigit@intel.com, olivier.matz@6wind.com, jerinjacobk@gmail.com, maxime.coquelin@redhat.com, david.marchand@redhat.com, arybchenko@solarflare.com
Date: Mon, 5 Oct 2020 06:26:46 +0000
Message-Id: <1601879207-6504-5-git-send-email-viacheslavo@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1601879207-6504-1-git-send-email-viacheslavo@nvidia.com>
References: <1601879207-6504-1-git-send-email-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH 4/5] app/testpmd: add rxpkts commands and parameters

Add command line parameter:

    --rxpkts=X[,Y]

Sets the length of segments to scatter packets on receiving if the split feature is engaged. Affects only the queues configured with split offloads (currently only BUFFER_SPLIT is supported).

Add interactive mode command:

    testpmd> set rxpkts (x[,y]*)

Where x[,y]* represents a CSV list of values, without white space. Sets the length of segments to scatter packets on receiving if the split feature is engaged. Affects only the queues configured with split offloads (currently only BUFFER_SPLIT is supported).

Optionally, multiple memory pools can be specified with the --mbuf-size command line parameter, and the mbufs to receive will be allocated sequentially from these extra memory pools.
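A hypothetical invocation combining both parameters (values are illustrative only):

    testpmd ... --mbuf-size=2048,4096 --rxpkts=64,0 ...

With the BUFFER_SPLIT offload enabled on the queues, the first 64 bytes of each packet would be placed in an mbuf from the first (2048B) pool and the remaining bytes in mbufs from the second (4096B) pool; the zero length means to use that pool's data buffer size.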
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> --- app/test-pmd/cmdline.c | 61 +++++++++++++++++++++++++++-- app/test-pmd/config.c | 48 ++++++++++++++++++++++- app/test-pmd/parameters.c | 15 +++++++ app/test-pmd/testpmd.c | 7 ++++ app/test-pmd/testpmd.h | 11 +++++- doc/guides/testpmd_app_ug/run_app.rst | 9 +++++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 21 +++++++++- 7 files changed, 165 insertions(+), 7 deletions(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 24ca56a..e0ac76e 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -183,7 +183,7 @@ static void cmd_help_long_parsed(void *parsed_result, "show (rxq|txq) info (port_id) (queue_id)\n" " Display information for configured RX/TX queue.\n\n" - "show config (rxtx|cores|fwd|txpkts)\n" + "show config (rxtx|cores|fwd|rxpkts|txpkts)\n" " Display the given configuration.\n\n" "read rxd (port_id) (queue_id) (rxd_id)\n" @@ -288,6 +288,12 @@ static void cmd_help_long_parsed(void *parsed_result, " Set the transmit delay time and number of retries," " effective when retry is enabled.\n\n" + "set rxpkts (x[,y]*)\n" + " Set the length of each segment to scatter" + " packets on receiving if split feature is engaged." + " Affects only the queues configured with split" + " offloads.\n\n" + "set txpkts (x[,y]*)\n" " Set the length of each segment of TXONLY" " and optionally CSUM packets.\n\n" @@ -3880,6 +3886,52 @@ struct cmd_set_log_result { }, }; +/* *** SET SEGMENT LENGTHS OF RX PACKETS SPLIT *** */ + +struct cmd_set_rxpkts_result { + cmdline_fixed_string_t cmd_keyword; + cmdline_fixed_string_t rxpkts; + cmdline_fixed_string_t seg_lengths; +}; + +static void +cmd_set_rxpkts_parsed(void *parsed_result, + __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct cmd_set_rxpkts_result *res; + unsigned int seg_lengths[MAX_SEGS_BUFFER_SPLIT]; + unsigned int nb_segs; + + res = parsed_result; + nb_segs = parse_item_list(res->seg_lengths, "segment lengths", + MAX_SEGS_BUFFER_SPLIT, seg_lengths, 0); + if (nb_segs > 0) + set_rx_pkt_segments(seg_lengths, nb_segs); +} + +cmdline_parse_token_string_t cmd_set_rxpkts_keyword = + TOKEN_STRING_INITIALIZER(struct cmd_set_rxpkts_result, + cmd_keyword, "set"); +cmdline_parse_token_string_t cmd_set_rxpkts_name = + TOKEN_STRING_INITIALIZER(struct cmd_set_rxpkts_result, + rxpkts, "rxpkts"); +cmdline_parse_token_string_t cmd_set_rxpkts_lengths = + TOKEN_STRING_INITIALIZER(struct cmd_set_rxpkts_result, + seg_lengths, NULL); + +cmdline_parse_inst_t cmd_set_rxpkts = { + .f = cmd_set_rxpkts_parsed, + .data = NULL, + .help_str = "set rxpkts <len0[,len1]*>", + .tokens = { + (void *)&cmd_set_rxpkts_keyword, + (void *)&cmd_set_rxpkts_name, + (void *)&cmd_set_rxpkts_lengths, + NULL, + }, +}; + /* *** SET SEGMENT LENGTHS OF TXONLY PACKETS *** */ struct cmd_set_txpkts_result { @@ -7499,6 +7551,8 @@ static void cmd_showcfg_parsed(void *parsed_result, fwd_lcores_config_display(); else if (!strcmp(res->what, "fwd")) pkt_fwd_config_display(&cur_fwd_config); + else if (!strcmp(res->what, "rxpkts")) + show_rx_pkt_segments(); else if (!strcmp(res->what, "txpkts")) show_tx_pkt_segments(); else if (!strcmp(res->what, "txtimes")) @@ -7511,12 +7565,12 @@ static void cmd_showcfg_parsed(void *parsed_result, TOKEN_STRING_INITIALIZER(struct cmd_showcfg_result, cfg, "config"); cmdline_parse_token_string_t cmd_showcfg_what = TOKEN_STRING_INITIALIZER(struct cmd_showcfg_result, what, - "rxtx#cores#fwd#txpkts#txtimes"); + "rxtx#cores#fwd#rxpkts#txpkts#txtimes"); cmdline_parse_inst_t cmd_showcfg = { .f =
cmd_showcfg_parsed, .data = NULL, - .help_str = "show config rxtx|cores|fwd|txpkts|txtimes", + .help_str = "show config rxtx|cores|fwd|rxpkts|txpkts|txtimes", .tokens = { (void *)&cmd_showcfg_show, (void *)&cmd_showcfg_port, @@ -19569,6 +19623,7 @@ struct cmd_showport_macs_result { (cmdline_parse_inst_t *)&cmd_reset, (cmdline_parse_inst_t *)&cmd_set_numbers, (cmdline_parse_inst_t *)&cmd_set_log, + (cmdline_parse_inst_t *)&cmd_set_rxpkts, (cmdline_parse_inst_t *)&cmd_set_txpkts, (cmdline_parse_inst_t *)&cmd_set_txsplit, (cmdline_parse_inst_t *)&cmd_set_txtimes, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 395ea6b..ff09ead 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -3096,6 +3096,50 @@ struct igb_ring_desc_16_bytes { } void +show_rx_pkt_segments(void) +{ + uint32_t i, n; + + n = rx_pkt_nb_segs; + printf("Number of segments: %u\n", n); + if (n) { + printf("Segment sizes: "); + for (i = 0; i != n - 1; i++) + printf("%hu,", rx_pkt_seg_lengths[i]); + printf("%hu\n", rx_pkt_seg_lengths[i]); + } +} + +void +set_rx_pkt_segments(unsigned int *seg_lengths, unsigned int nb_segs) +{ + unsigned int i; + + if (nb_segs >= MAX_SEGS_BUFFER_SPLIT) { + printf("nb segments per RX packets=%u >= " "MAX_SEGS_BUFFER_SPLIT - ignored\n", nb_segs); + return; + } + + /* + * No extra check here, the segment length will be checked by PMD + * in the extended queue setup. + */ + for (i = 0; i < nb_segs; i++) { + if (seg_lengths[i] >= UINT16_MAX) { + printf("length[%u]=%u >= UINT16_MAX - give up\n", + i, seg_lengths[i]); + return; + } + } + + for (i = 0; i < nb_segs; i++) + rx_pkt_seg_lengths[i] = (uint16_t) seg_lengths[i]; + + rx_pkt_nb_segs = (uint8_t) nb_segs; +} + +void show_tx_pkt_segments(void) { uint32_t i, n; @@ -3113,10 +3157,10 @@ struct igb_ring_desc_16_bytes { } void -set_tx_pkt_segments(unsigned *seg_lengths, unsigned nb_segs) +set_tx_pkt_segments(unsigned int *seg_lengths, unsigned int nb_segs) { uint16_t tx_pkt_len; - unsigned i; + unsigned int i; if (nb_segs >= (unsigned) nb_txd) { printf("nb segments per TX packets=%u >= nb_txd=%u - ignored\n", diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c index 1f40d73..99f0223 100644 --- a/app/test-pmd/parameters.c +++ b/app/test-pmd/parameters.c @@ -184,6 +184,7 @@ "(0 <= mapping <= %d).\n", RTE_ETHDEV_QUEUE_STAT_CNTRS - 1); printf(" --no-flush-rx: Don't flush RX streams before forwarding.
" Used mainly with PCAP drivers.\n"); + printf(" --rxpkts=X[,Y]*: set RX segment sizes to split.\n"); printf(" --txpkts=X[,Y]*: set TX segment sizes" " or total packet length.\n"); printf(" --txonly-multi-flow: generate multiple flows in txonly mode\n"); @@ -661,6 +662,7 @@ { "rx-queue-stats-mapping", 1, 0, 0 }, { "no-flush-rx", 0, 0, 0 }, { "flow-isolate-all", 0, 0, 0 }, + { "rxpkts", 1, 0, 0 }, { "txpkts", 1, 0, 0 }, { "txonly-multi-flow", 0, 0, 0 }, { "disable-link-check", 0, 0, 0 }, @@ -1270,6 +1272,19 @@ "invalid RX queue statistics mapping config entered\n"); } } + if (!strcmp(lgopts[opt_idx].name, "rxpkts")) { + unsigned int seg_len[MAX_SEGS_BUFFER_SPLIT]; + unsigned int nb_segs; + + nb_segs = parse_item_list + (optarg, "rxpkt segments", + MAX_SEGS_BUFFER_SPLIT, + seg_len, 0); + if (nb_segs > 0) + set_rx_pkt_segments(seg_len, nb_segs); + else + rte_exit(EXIT_FAILURE, "bad rxpkts\n"); + } if (!strcmp(lgopts[opt_idx].name, "txpkts")) { unsigned seg_lengths[RTE_MAX_SEGS_PER_PKT]; unsigned int nb_segs; diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index f5060ee..3c88ca7 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -210,6 +210,13 @@ struct fwd_engine * fwd_engines[] = { uint8_t f_quit; /* + * Configuration of packet segments used to scatter received packets + * if some of split features is configured. + */ +uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT]; +uint8_t rx_pkt_nb_segs; /**< Number of segments to split */ + +/* * Configuration of packet segments used by the "txonly" processing engine. */ uint16_t tx_pkt_length = TXONLY_DEF_PACKET_LEN; /**< TXONLY packet length. */ diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index e5cdd12..0576b7c 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -420,6 +420,13 @@ struct queue_stats_mappings { extern struct rte_fdir_conf fdir_conf; /* + * Configuration of packet segments used to scatter received packets + * if some of split features is configured. + */ +extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT]; +extern uint8_t rx_pkt_nb_segs; /**< Number of segments to split */ + +/* * Configuration of packet segments used by the "txonly" processing engine. */ #define TXONLY_DEF_PACKET_LEN 64 @@ -815,7 +822,9 @@ void vlan_tpid_set(portid_t port_id, enum rte_vlan_type vlan_type, void set_record_core_cycles(uint8_t on_off); void set_record_burst_stats(uint8_t on_off); void set_verbose_level(uint16_t vb_level); -void set_tx_pkt_segments(unsigned *seg_lengths, unsigned nb_segs); +void set_rx_pkt_segments(unsigned int *seg_lengths, unsigned int nb_segs); +void show_rx_pkt_segments(void); +void set_tx_pkt_segments(unsigned int *seg_lengths, unsigned int nb_segs); void show_tx_pkt_segments(void); void set_tx_pkt_times(unsigned int *tx_times); void show_tx_pkt_times(void); diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst index 2d5a263..9286281 100644 --- a/doc/guides/testpmd_app_ug/run_app.rst +++ b/doc/guides/testpmd_app_ug/run_app.rst @@ -361,6 +361,15 @@ The command line options are: Don't flush the RX streams before starting forwarding. Used mainly with the PCAP PMD. +* ``--rxpkts=X[,Y]`` + + Set the length of segments to scatter packets on receiving if split + feature is engaged. Affects only the queues configured + with split offloads (currently BUFFER_SPLIT is supported only). 
+ Optionally, multiple memory pools can be specified with the --mbuf-size + command line parameter, and the mbufs to receive will be allocated + sequentially from these extra memory pools. + * ``--txpkts=X[,Y]`` Set TX segment sizes or total packet length. Valid for ``tx-only`` diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 7f067af..0466920 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -273,7 +273,7 @@ show config Displays the configuration of the application. The configuration comes from the command-line, the runtime or the application defaults:: - testpmd> show config (rxtx|cores|fwd|txpkts|txtimes) + testpmd> show config (rxtx|cores|fwd|rxpkts|txpkts|txtimes) The available information categories are: @@ -283,6 +283,8 @@ The available information categories are: * ``fwd``: Packet forwarding configuration. +* ``rxpkts``: RX packet split configuration. + * ``txpkts``: Packets to TX configuration. * ``txtimes``: Burst time pattern for Tx only mode. @@ -760,6 +762,23 @@ When retry is enabled, the transmit delay time and number of retries can also be testpmd> set burst tx delay (microseconds) retry (num) +set rxpkts +~~~~~~~~~~ + +Set the length of segments to scatter packets on receiving if the split +feature is engaged. Affects only the queues configured with split offloads +(currently only BUFFER_SPLIT is supported). Optionally, multiple memory +pools can be specified with the --mbuf-size command line parameter, and the +mbufs to receive will be allocated sequentially from these extra memory pools +(the mbuf for the first segment is allocated from the first pool, the second +one from the second pool, and so on; if the segment number is greater than +the number of pools, the mbufs for the remaining segments will be allocated +from the last valid pool):: + + testpmd> set rxpkts (x[,y]*) + +Where x[,y]* represents a CSV list of values, without white space. A zero +value means to use the corresponding memory pool data buffer size.
+ set txpkts ~~~~~~~~~~
From patchwork Mon Oct 5 06:26:47 2020
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 79595
X-Patchwork-Delegate: rasland@nvidia.com
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
To: dev@dpdk.org
Cc: thomasm@monjalon.net, stephen@networkplumber.org, ferruh.yigit@intel.com, olivier.matz@6wind.com, jerinjacobk@gmail.com, maxime.coquelin@redhat.com, david.marchand@redhat.com, arybchenko@solarflare.com
Date: Mon, 5 Oct 2020 06:26:47 +0000
Message-Id: <1601879207-6504-6-git-send-email-viacheslavo@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1601879207-6504-1-git-send-email-viacheslavo@nvidia.com>
References: <1601879207-6504-1-git-send-email-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH 5/5] app/testpmd: add extended Rx queue setup

If the Rx queue is configured with the split feature, the extended setup with the specified segment sizes and pools will be performed.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> --- app/test-pmd/cmdline.c | 12 ++++++------ app/test-pmd/testpmd.c | 38 ++++++++++++++++++++++++++++++++++++-- app/test-pmd/testpmd.h | 6 ++++++ 3 files changed, 48 insertions(+), 8 deletions(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index e0ac76e..1c65499 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -2912,12 +2912,12 @@ struct cmd_setup_rxtx_queue { rxring_numa[res->portid]); return; } - ret = rte_eth_rx_queue_setup(res->portid, - res->qid, - port->nb_rx_desc[res->qid], - socket_id, - &port->rx_conf[res->qid], - mp); + ret = rx_queue_setup(res->portid, + res->qid, + port->nb_rx_desc[res->qid], + socket_id, + &port->rx_conf[res->qid], + mp); if (ret) printf("Failed to setup RX queue\n"); } else { diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 3c88ca7..cd17cb0 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -2412,6 +2412,40 @@ struct extmem_param { return 0; } +/* Configure the Rx with optional split.
*/ +int +rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, + uint16_t nb_rx_desc, unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct rte_eth_rxseg rx_seg[MAX_SEGS_BUFFER_SPLIT] = {}; + unsigned int i, mp_n; + + if (rx_pkt_nb_segs <= 1 || + (rx_conf->offloads & DEV_RX_OFFLOAD_BUFFER_SPLIT) == 0) + return rte_eth_rx_queue_setup(port_id, rx_queue_id, + nb_rx_desc, socket_id, + rx_conf, mp); + for (i = 0; i < rx_pkt_nb_segs; i++) { + struct rte_mempool *mpx; + /* + * Use the last valid pool for the segments whose number + * exceeds the number of pools. + */ + mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i; + mpx = mbuf_pool_find(socket_id, mp_n); + /* Handle zero as mbuf data buffer size. */ + rx_seg[i].length = rx_pkt_seg_lengths[i] ? + rx_pkt_seg_lengths[i] : + mbuf_data_size[mp_n]; + rx_seg[i].mp = mpx ? mpx : mp; + } + return rte_eth_rx_queue_setup_ex(port_id, rx_queue_id, + nb_rx_desc, socket_id, rx_conf, + rx_seg, rx_pkt_nb_segs); +} + int start_port(portid_t pid) { @@ -2520,7 +2554,7 @@ struct extmem_param { return -1; } - diag = rte_eth_rx_queue_setup(pi, qi, + diag = rx_queue_setup(pi, qi, port->nb_rx_desc[qi], rxring_numa[pi], &(port->rx_conf[qi]), @@ -2536,7 +2570,7 @@ struct extmem_param { port->socket_id); return -1; } - diag = rte_eth_rx_queue_setup(pi, qi, + diag = rx_queue_setup(pi, qi, port->nb_rx_desc[qi], port->socket_id, &(port->rx_conf[qi]), diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 0576b7c..1953c11 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -869,6 +869,12 @@ void port_rss_reta_info(portid_t port_id, void set_vf_traffic(portid_t port_id, uint8_t is_rx, uint16_t vf, uint8_t on); +int +rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, + uint16_t nb_rx_desc, unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); + int set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint16_t rate); int set_vf_rate_limit(portid_t port_id, uint16_t vf, uint16_t rate, uint64_t q_msk);
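For reference, a possible interactive sequence to exercise the whole series (hypothetical; assumes the PMD reports the buffer_split capability and testpmd was started with multiple --mbuf-size pools):

    testpmd> port stop all
    testpmd> port config 0 rx_offload buffer_split on
    testpmd> set rxpkts 64,0
    testpmd> port start all
    testpmd> show config rxpkts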