From patchwork Mon Jun 13 10:25:47 2022
X-Patchwork-Submitter: "Wu, WenxuanX"
X-Patchwork-Id: 112692
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: wenxuanx.wu@intel.com
To: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, xiaoyun.li@intel.com,
 ferruh.yigit@xilinx.com, aman.deep.singh@intel.com, dev@dpdk.org,
 yuying.zhang@intel.com, qi.z.zhang@intel.com, jerinjacobk@gmail.com
Cc: stephen@networkplumber.org, Wenxuan Wu
Subject: [PATCH v9 1/4] ethdev: introduce protocol header API
Date: Mon, 13 Jun 2022 10:25:47 +0000
Message-Id: <20220613102550.241759-2-wenxuanx.wu@intel.com>
In-Reply-To: <20220613102550.241759-1-wenxuanx.wu@intel.com>
References: <20220303060136.36427-1-xuan.ding@intel.com>
 <20220613102550.241759-1-wenxuanx.wu@intel.com>

From: Wenxuan Wu

This patch adds a new ethdev API to retrieve the protocol header mask
supported by a PMD, which helps to configure protocol-header-based
buffer split.
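For illustration (not part of the patch), a minimal sketch of how an
application might probe this capability before configuring buffer split.
The helper name can_split_after_udp is hypothetical; the series treats the
RTE_PTYPE_* values as a bit mask, and the sketch follows that convention:

	#include <rte_ethdev.h>
	#include <rte_mbuf_ptype.h>

	/* Return non-zero if the PMD can split packets after the UDP header. */
	static int
	can_split_after_udp(uint16_t port_id)
	{
		uint32_t ptypes = 0;
		int ret = rte_eth_supported_hdrs_get(port_id, &ptypes);

		if (ret != 0)
			return 0; /* -ENOTSUP, -ENODEV, -EIO: no header-based split */
		return (ptypes & RTE_PTYPE_L4_UDP) != 0;
	}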
Signed-off-by: Wenxuan Wu
---
 doc/guides/rel_notes/release_22_07.rst |  2 ++
 lib/ethdev/ethdev_driver.h             | 18 +++++++++++++++
 lib/ethdev/rte_ethdev.c                | 31 +++++++++++++++++---------
 lib/ethdev/rte_ethdev.h                | 22 ++++++++++++++++++
 lib/ethdev/version.map                 |  3 +++
 5 files changed, 65 insertions(+), 11 deletions(-)

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 42a5f2d990..a9b8ed3494 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -54,7 +54,9 @@ New Features
      This section is a comment. Do not overwrite or remove it.
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added new ethdev API for PMD to get buffer split supported protocol types.**
+  Added ``rte_eth_supported_hdrs_get()`` to get the header protocol mask a PMD supports splitting on.
 
 Removed Items
 -------------

diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 69d9dc21d8..7b19842582 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1054,6 +1054,21 @@ typedef int (*eth_ip_reassembly_conf_get_t)(struct rte_eth_dev *dev,
 typedef int (*eth_ip_reassembly_conf_set_t)(struct rte_eth_dev *dev,
 		const struct rte_eth_ip_reassembly_params *conf);
 
+/**
+ * @internal
+ * Get the protocol header flags a PMD supports splitting on.
+ *
+ * @param dev
+ *   ethdev handle of port.
+ * @param[out] ptype
+ *   Supported ptype mask of the PMD.
+ *
+ * @return
+ *   Negative errno value on error, zero on success.
+ */
+typedef int (*eth_buffer_split_hdr_ptype_get_t)(struct rte_eth_dev *dev, uint32_t *ptype);
+
 /**
  * @internal
  * Dump private info from device to a file.
@@ -1281,6 +1296,9 @@ struct eth_dev_ops {
 	/** Set IP reassembly configuration */
 	eth_ip_reassembly_conf_set_t ip_reassembly_conf_set;
 
+	/** Get supported ptypes to split */
+	eth_buffer_split_hdr_ptype_get_t hdrs_supported_ptypes_get;
+
 	/** Dump private info from device */
 	eth_dev_priv_dump_t eth_dev_priv_dump;
 };

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 29a3d80466..e1f2a0ffe3 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1636,9 +1636,10 @@ rte_eth_dev_is_removed(uint16_t port_id)
 }
 
 static int
-rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg,
-			     uint16_t n_seg, uint32_t *mbp_buf_size,
-			     const struct rte_eth_dev_info *dev_info)
+rte_eth_rx_queue_check_split(uint16_t port_id,
+			     const struct rte_eth_rxseg_split *rx_seg,
+			     int16_t n_seg, uint32_t *mbp_buf_size,
+			     const struct rte_eth_dev_info *dev_info)
 {
 	const struct rte_eth_rxseg_capa *seg_capa = &dev_info->rx_seg_capa;
 	struct rte_mempool *mp_first;
@@ -1694,13 +1695,7 @@ rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg,
 		}
 		offset += seg_idx != 0 ? 0 : RTE_PKTMBUF_HEADROOM;
 		*mbp_buf_size = rte_pktmbuf_data_room_size(mpl);
-		length = length != 0 ? length : *mbp_buf_size;
-		if (*mbp_buf_size < length + offset) {
-			RTE_ETHDEV_LOG(ERR,
-				       "%s mbuf_data_room_size %u < %u (segment length=%u + segment offset=%u)\n",
-				       mpl->name, *mbp_buf_size,
-				       length + offset, length, offset);
-			return -EINVAL;
+		}
 	}
 	return 0;
@@ -1779,7 +1774,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		n_seg = rx_conf->rx_nseg;
 
 		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
-			ret = rte_eth_rx_queue_check_split(rx_seg, n_seg,
+			ret = rte_eth_rx_queue_check_split(port_id, rx_seg, n_seg,
 							   &mbp_buf_size,
 							   &dev_info);
 			if (ret != 0)
@@ -5844,6 +5839,20 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id,
 		       (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
 }
 
+int
+rte_eth_supported_hdrs_get(uint16_t port_id, uint32_t *ptypes)
+{
+	struct rte_eth_dev *dev;
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	dev = &rte_eth_devices[port_id];
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->hdrs_supported_ptypes_get,
+				-ENOTSUP);
+
+	return eth_err(port_id,
+		       (*dev->dev_ops->hdrs_supported_ptypes_get)(dev, ptypes));
+}
+
 int
 rte_eth_dev_priv_dump(uint16_t port_id, FILE *file)
 {

diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 04cff8ee10..72cac1518e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -6152,6 +6152,28 @@ rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id,
 	return rte_eth_tx_buffer_flush(port_id, queue_id, buffer);
 }
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get the header protocols a PMD supports splitting on.
+ * The API returns an error if the device is not valid.
+ *
+ * @param port_id
+ *   The port identifier of the device.
+ * @param ptype
+ *   Supported protocol headers of the driver.
+ * @return
+ *   - (-ENOTSUP) if header protocol split is not supported by the device.
+ *   - (-ENODEV) if *port_id* is invalid.
+ *   - (-EIO) if the device is removed.
+ *   - (0) on success.
+ */
+__rte_experimental
+int rte_eth_supported_hdrs_get(uint16_t port_id,
+	uint32_t *ptype);
 
 #ifdef __cplusplus
 }
 #endif

diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 20391ab29e..7705c0364a 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -279,6 +279,9 @@ EXPERIMENTAL {
 	rte_flow_async_action_handle_create;
 	rte_flow_async_action_handle_destroy;
 	rte_flow_async_action_handle_update;
+
+	# added in 22.07
+	rte_eth_supported_hdrs_get;
 };
 
 INTERNAL {

From patchwork Mon Jun 13 10:25:48 2022
X-Patchwork-Submitter: "Wu, WenxuanX"
X-Patchwork-Id: 112693
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: wenxuanx.wu@intel.com
To: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, xiaoyun.li@intel.com,
 ferruh.yigit@xilinx.com, aman.deep.singh@intel.com, dev@dpdk.org,
 yuying.zhang@intel.com, qi.z.zhang@intel.com, jerinjacobk@gmail.com
Cc: stephen@networkplumber.org, Wenxuan Wu, Xuan Ding, Yuan Wang, Ray Kinsella
Subject: [PATCH v9 2/4] ethdev: introduce protocol hdr based buffer split
Date: Mon, 13 Jun 2022 10:25:48 +0000
Message-Id: <20220613102550.241759-3-wenxuanx.wu@intel.com>
In-Reply-To: <20220613102550.241759-1-wenxuanx.wu@intel.com>
References: <20220303060136.36427-1-xuan.ding@intel.com>
 <20220613102550.241759-1-wenxuanx.wu@intel.com>

From: Wenxuan Wu

Currently, Rx buffer split supports length-based split.
With the Rx queue offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT enabled and Rx
packet segments configured, a PMD can split received packets into multiple
segments. However, length-based buffer split is not suitable for NICs that
split based on protocol headers. Given an arbitrarily variable length in an
Rx packet segment, it is almost impossible to pass a fixed protocol header
to the driver. Besides, tunneling means packets can be composed in many
different ways, which makes the situation even worse.

This patch extends the current buffer split to support protocol-header-based
buffer split. A new proto_hdr field is introduced in the reserved field of
the rte_eth_rxseg_split structure to specify the protocol header. The
proto_hdr field defines the split position of a packet: splitting always
happens after the protocol header defined in the Rx packet segment. When the
Rx queue offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is enabled and a
corresponding protocol header is configured, the driver splits the ingress
packets into multiple segments.

struct rte_eth_rxseg_split {
	struct rte_mempool *mp; /* memory pools to allocate segment from */
	uint16_t length; /* segment maximal data length,
			    configures "split point" */
	uint16_t offset; /* data offset from beginning
			    of mbuf data buffer */
	uint32_t proto_hdr; /* inner/outer L2/L3/L4 protocol header,
			       configures "split point" */
};

If a PMD supports both inner and outer L2/L3/L4 protocol header split, the
corresponding protocol header capabilities are RTE_PTYPE_L2_ETHER,
RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV6, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP,
RTE_PTYPE_L4_SCTP, RTE_PTYPE_INNER_L2_ETHER, RTE_PTYPE_INNER_L3_IPV4,
RTE_PTYPE_INNER_L3_IPV6, RTE_PTYPE_INNER_L4_TCP, RTE_PTYPE_INNER_L4_UDP
and RTE_PTYPE_INNER_L4_SCTP.

For example, let's suppose we configured the Rx queue with the following
segments:
	seg0 - pool0, proto_hdr0=RTE_PTYPE_L3_IPV4, off0=2B
	seg1 - pool1, proto_hdr1=RTE_PTYPE_L4_UDP, off1=128B
	seg2 - pool2, off2=0B

A packet consisting of MAC_IPV4_UDP_PAYLOAD will be split as follows:
	seg0 - ipv4 header @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
	seg1 - udp header @ 128 in mbuf from pool1
	seg2 - payload @ 0 in mbuf from pool2

Now buffer split can be configured in two modes. For length-based buffer
split, the mp, length and offset fields in an Rx packet segment should be
configured, while the proto_hdr field must be left unconfigured. For
protocol-header-based buffer split, the mp, offset and proto_hdr fields
should be configured, while the length field must be left unconfigured.

The split limitations imposed by the underlying driver are reported in the
rte_eth_dev_info->rx_seg_capa field. The memory attributes of the split
parts may differ as well, e.g. DPDK memory for one part and external
memory for another.
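For illustration (not part of the patch), a configuration sketch matching
the example above. pool0/pool1/pool2 are assumed to be mempools created
beforehand, dev_info is assumed to come from rte_eth_dev_info_get(), and
error handling is omitted:

	union rte_eth_rxseg rx_useg[3] = {
		{ .split = { .mp = pool0, .proto_hdr = RTE_PTYPE_L3_IPV4, .offset = 2 } },
		{ .split = { .mp = pool1, .proto_hdr = RTE_PTYPE_L4_UDP, .offset = 128 } },
		{ .split = { .mp = pool2 } },	/* payload segment, no split point */
	};
	struct rte_eth_rxconf rxconf = dev_info.default_rxconf;

	rxconf.offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
	rxconf.rx_nseg = RTE_DIM(rx_useg);
	rxconf.rx_seg = rx_useg;
	/* mp is NULL: buffers come from the per-segment pools above. */
	ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id, &rxconf, NULL);

Note that length is left at zero in every segment, which selects the
protocol-header-based mode described above.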
Signed-off-by: Xuan Ding
Signed-off-by: Wenxuan Wu
Signed-off-by: Yuan Wang
Reviewed-by: Qi Zhang
Acked-by: Ray Kinsella
---
 lib/ethdev/rte_ethdev.c | 32 +++++++++++++++++++++++++++++++-
 lib/ethdev/rte_ethdev.h | 14 +++++++++++++-
 2 files changed, 44 insertions(+), 2 deletions(-)

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index e1f2a0ffe3..b89e30296f 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1662,6 +1662,7 @@ rte_eth_rx_queue_check_split(uint16_t port_id,
 		struct rte_mempool *mpl = rx_seg[seg_idx].mp;
 		uint32_t length = rx_seg[seg_idx].length;
 		uint32_t offset = rx_seg[seg_idx].offset;
+		uint32_t proto_hdr = rx_seg[seg_idx].proto_hdr;
 
 		if (mpl == NULL) {
 			RTE_ETHDEV_LOG(ERR, "null mempool pointer\n");
@@ -1695,7 +1696,36 @@ rte_eth_rx_queue_check_split(uint16_t port_id,
 		}
 		offset += seg_idx != 0 ? 0 : RTE_PKTMBUF_HEADROOM;
 		*mbp_buf_size = rte_pktmbuf_data_room_size(mpl);
-
+		uint32_t ptypes_mask;
+		int ret = rte_eth_supported_hdrs_get(port_id, &ptypes_mask);
+
+		if (ret == -ENOTSUP || ptypes_mask == RTE_PTYPE_UNKNOWN) {
+			/* Split at fixed length. */
+			length = length != 0 ? length : *mbp_buf_size;
+			if (*mbp_buf_size < length + offset) {
+				RTE_ETHDEV_LOG(ERR,
+					"%s mbuf_data_room_size %u < %u (segment length=%u + segment offset=%u)\n",
+					mpl->name, *mbp_buf_size,
+					length + offset, length, offset);
+				return -EINVAL;
+			}
+		} else if (ret == 0) {
+			/* Split after specified protocol header. */
+			if (proto_hdr & ~ptypes_mask) {
+				RTE_ETHDEV_LOG(ERR,
+					"Protocol header 0x%x is not supported.\n",
+					proto_hdr & ~ptypes_mask);
+				return -EINVAL;
+			}
+			if (*mbp_buf_size < offset) {
+				RTE_ETHDEV_LOG(ERR,
+					"%s mbuf_data_room_size %u < %u (segment offset)\n",
+					mpl->name, *mbp_buf_size,
+					offset);
+				return -EINVAL;
+			}
+		} else {
+			return ret;
+		}
 	}
 	return 0;

diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 72cac1518e..7df40f9f9b 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1176,6 +1176,9 @@ struct rte_eth_txmode {
  *   specified in the first array element, the second buffer, from the
  *   pool in the second element, and so on.
  *
+ * - The proto_hdrs in the elements define the split position of
+ *   received packets.
+ *
  * - The offsets from the segment description elements specify
  *   the data offset from the buffer beginning except the first mbuf.
  *   The first segment offset is added with RTE_PKTMBUF_HEADROOM.
@@ -1197,12 +1200,21 @@ struct rte_eth_txmode {
  *     - pool from the last valid element
  *     - the buffer size from this pool
  *     - zero offset
+ *
+ * - Length based buffer split:
+ *     - mp, length, offset should be configured.
+ *     - The proto_hdr field should not be configured.
+ *
+ * - Protocol header based buffer split:
+ *     - mp, offset, proto_hdr should be configured.
+ *     - The length field should not be configured.
  */
 struct rte_eth_rxseg_split {
 	struct rte_mempool *mp; /**< Memory pool to allocate segment from. */
 	uint16_t length; /**< Segment data length, configures split point. */
 	uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */
-	uint32_t reserved; /**< Reserved field. */
+	/** Inner/outer L2/L3/L4 protocol header, configures "split point". */
+	uint32_t proto_hdr;
 };
 
 /**

From patchwork Mon Jun 13 10:25:49 2022
X-Patchwork-Submitter: "Wu, WenxuanX"
X-Patchwork-Id: 112694
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: wenxuanx.wu@intel.com
To: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, xiaoyun.li@intel.com,
 ferruh.yigit@xilinx.com, aman.deep.singh@intel.com, dev@dpdk.org,
 yuying.zhang@intel.com, qi.z.zhang@intel.com, jerinjacobk@gmail.com
Cc: stephen@networkplumber.org, Wenxuan Wu, Xuan Ding, Yuan Wang
Subject: [PATCH v9 3/4] app/testpmd: add rxhdrs commands and parameters
Date: Mon, 13 Jun 2022 10:25:49 +0000
Message-Id: <20220613102550.241759-4-wenxuanx.wu@intel.com>
In-Reply-To: <20220613102550.241759-1-wenxuanx.wu@intel.com>
References: <20220303060136.36427-1-xuan.ding@intel.com>
 <20220613102550.241759-1-wenxuanx.wu@intel.com>

From: Wenxuan Wu

Add the command line parameter --rxhdrs=mac[,ipv4,udp]. It sets the
protocol_hdr of the segments used to scatter packets on receiving when the
split feature is engaged, for the queues configured with the BUFFER_SPLIT
flag.

Add the interactive mode command:
testpmd>set rxhdrs mac,ipv4,l3,tcp,udp,sctp
(the protocol sequence should be valid)

The protocol split feature is off by default. To enable protocol split,
you need to:
1. Start testpmd with two mempools, e.g. --mbuf-size=2048,2048
2. Configure the Rx queue with the buffer split Rx offload on.
3. Set the protocol types for buffer split. E.g.
set rxhdrs mac,ipv4 (default protocols of testpmd : mac|icmp|ipv4|ipv6|l3|tcp|udp| sctp|l4|inner_mac|inner_ipv4|inner_ipv6| inner_l3|inner_tcp|inner_udp|inner_sctp| inner_l4) Above protocols can be configured in testpmd. But the configuration can only be applied when it is supported by specific pmd. Signed-off-by: Wenxuan Wu Signed-off-by: Xuan Ding Signed-off-by: Yuan Wang Reviewed-by: Qi Zhang --- app/test-pmd/cmdline.c | 133 +++++++++++++++++++++++++++++++++++++- app/test-pmd/config.c | 75 +++++++++++++++++++++ app/test-pmd/parameters.c | 15 ++++- app/test-pmd/testpmd.c | 6 +- app/test-pmd/testpmd.h | 6 ++ 5 files changed, 229 insertions(+), 6 deletions(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 6ffea8e21a..474235bc91 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -183,7 +183,7 @@ static void cmd_help_long_parsed(void *parsed_result, "show (rxq|txq) info (port_id) (queue_id)\n" " Display information for configured RX/TX queue.\n\n" - "show config (rxtx|cores|fwd|rxoffs|rxpkts|txpkts)\n" + "show config (rxtx|cores|fwd|rxoffs|rxpkts|rxhdrs|txpkts)\n" " Display the given configuration.\n\n" "read rxd (port_id) (queue_id) (rxd_id)\n" @@ -316,6 +316,15 @@ static void cmd_help_long_parsed(void *parsed_result, " Affects only the queues configured with split" " offloads.\n\n" + "set rxhdrs (mac[,ipv4])*\n" + " Set the protocol hdr of each segment to scatter" + " packets on receiving if split feature is engaged." + " Affects only the queues configured with split" + " offloads.\n\n" + " Supported proto header: mac|ipv4||qinq|gre|ipv6|l3|tcp|udp|sctp|l4|" + "inner_mac|inner_ipv4|inner_ipv6|inner_l3|inner_tcp|" + "inner_udp|inner_sctp\n" + "set txpkts (x[,y]*)\n" " Set the length of each segment of TXONLY" " and optionally CSUM packets.\n\n" @@ -3617,6 +3626,78 @@ cmdline_parse_inst_t cmd_stop = { }, }; +static unsigned int +get_ptype(char *value) +{ + uint32_t protocol; + if (!strcmp(value, "mac")) + protocol = RTE_PTYPE_L2_ETHER; + else if (!strcmp(value, "ipv4")) + protocol = RTE_PTYPE_L3_IPV4; + else if (!strcmp(value, "ipv6")) + protocol = RTE_PTYPE_L3_IPV6; + else if (!strcmp(value, "l3")) + protocol = RTE_PTYPE_L3_IPV4|RTE_PTYPE_L3_IPV6; + else if (!strcmp(value, "tcp")) + protocol = RTE_PTYPE_L4_TCP; + else if (!strcmp(value, "udp")) + protocol = RTE_PTYPE_L4_UDP; + else if (!strcmp(value, "sctp")) + protocol = RTE_PTYPE_L4_SCTP; + else if (!strcmp(value, "l4")) + protocol = RTE_PTYPE_L4_TCP|RTE_PTYPE_L4_UDP|RTE_PTYPE_L4_SCTP; + else if (!strcmp(value, "inner_mac")) + protocol = RTE_PTYPE_INNER_L2_ETHER; + else if (!strcmp(value, "inner_ipv4")) + protocol = RTE_PTYPE_INNER_L3_IPV4; + else if (!strcmp(value, "inner_ipv6")) + protocol = RTE_PTYPE_INNER_L3_IPV6; + else if (!strcmp(value, "inner_l3")) + protocol = RTE_PTYPE_INNER_L3_IPV4|RTE_PTYPE_INNER_L3_IPV6; + else if (!strcmp(value, "inner_tcp")) + protocol = RTE_PTYPE_INNER_L4_TCP; + else if (!strcmp(value, "inner_udp")) + protocol = RTE_PTYPE_INNER_L4_UDP; + else if (!strcmp(value, "inner_sctp")) + protocol = RTE_PTYPE_INNER_L4_SCTP; + else if (!strcmp(value, "unknown")) + protocol = RTE_PTYPE_UNKNOWN; + else if (!strcmp(value, "gre")) + protocol = RTE_PTYPE_TUNNEL_GRE; + else if (!strcmp(value, "qinq")) + protocol = RTE_PTYPE_L2_ETHER_QINQ; + else { + fprintf(stderr, "Unsupported protocol name: %s\n", value); + return 0; + } + return protocol; +} +/* *** SET RXHDRSLIST *** */ + +unsigned int +parse_hdrs_list(const char *str, const char *item_name, unsigned int max_items, + unsigned int 
*parsed_items, int check_hdrs_sequence) +{ + unsigned int nb_item; + char *cur; + char *tmp; + nb_item = 0; + char *str2 = strdup(str); + cur = strtok_r(str2, ",", &tmp); + while (cur != NULL) { + parsed_items[nb_item] = get_ptype(cur); + cur = strtok_r(NULL, ",", &tmp); + nb_item++; + } + if (nb_item > max_items) + fprintf(stderr, "Number of %s = %u > %u (maximum items)\n", + item_name, nb_item + 1, max_items); + set_rx_pkt_hdrs(parsed_items, nb_item); + free(str2); + if (!check_hdrs_sequence) + return nb_item; + return nb_item; +} /* *** SET CORELIST and PORTLIST CONFIGURATION *** */ unsigned int @@ -3986,6 +4067,49 @@ cmdline_parse_inst_t cmd_set_rxpkts = { }, }; +/* *** SET SEGMENT HEADERS OF RX PACKETS SPLIT *** */ +struct cmd_set_rxhdrs_result { + cmdline_fixed_string_t cmd_keyword; + cmdline_fixed_string_t rxhdrs; + cmdline_fixed_string_t seg_hdrs; +}; + +static void +cmd_set_rxhdrs_parsed(void *parsed_result, + __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct cmd_set_rxhdrs_result *res; + unsigned int seg_hdrs[MAX_SEGS_BUFFER_SPLIT]; + unsigned int nb_segs; + + res = parsed_result; + nb_segs = parse_hdrs_list(res->seg_hdrs, "segment hdrs", + MAX_SEGS_BUFFER_SPLIT, seg_hdrs, 0); + if (nb_segs >= 1) + set_rx_pkt_hdrs(seg_hdrs, nb_segs); + cmd_reconfig_device_queue(RTE_PORT_ALL, 0, 1); +} +cmdline_parse_token_string_t cmd_set_rxhdrs_keyword = + TOKEN_STRING_INITIALIZER(struct cmd_set_rxhdrs_result, + cmd_keyword, "set"); +cmdline_parse_token_string_t cmd_set_rxhdrs_name = + TOKEN_STRING_INITIALIZER(struct cmd_set_rxhdrs_result, + rxhdrs, "rxhdrs"); +cmdline_parse_token_string_t cmd_set_rxhdrs_seg_hdrs = + TOKEN_STRING_INITIALIZER(struct cmd_set_rxhdrs_result, + seg_hdrs, NULL); +cmdline_parse_inst_t cmd_set_rxhdrs = { + .f = cmd_set_rxhdrs_parsed, + .data = NULL, + .help_str = "set rxhdrs ", + .tokens = { + (void *)&cmd_set_rxhdrs_keyword, + (void *)&cmd_set_rxhdrs_name, + (void *)&cmd_set_rxhdrs_seg_hdrs, + NULL, + }, +}; /* *** SET SEGMENT LENGTHS OF TXONLY PACKETS *** */ struct cmd_set_txpkts_result { @@ -8058,6 +8182,8 @@ static void cmd_showcfg_parsed(void *parsed_result, show_rx_pkt_offsets(); else if (!strcmp(res->what, "rxpkts")) show_rx_pkt_segments(); + else if (!strcmp(res->what, "rxhdrs")) + show_rx_pkt_hdrs(); else if (!strcmp(res->what, "txpkts")) show_tx_pkt_segments(); else if (!strcmp(res->what, "txtimes")) @@ -8070,12 +8196,12 @@ cmdline_parse_token_string_t cmd_showcfg_port = TOKEN_STRING_INITIALIZER(struct cmd_showcfg_result, cfg, "config"); cmdline_parse_token_string_t cmd_showcfg_what = TOKEN_STRING_INITIALIZER(struct cmd_showcfg_result, what, - "rxtx#cores#fwd#rxoffs#rxpkts#txpkts#txtimes"); + "rxtx#cores#fwd#rxoffs#rxpkts#rxhdrs#txpkts#txtimes"); cmdline_parse_inst_t cmd_showcfg = { .f = cmd_showcfg_parsed, .data = NULL, - .help_str = "show config rxtx|cores|fwd|rxoffs|rxpkts|txpkts|txtimes", + .help_str = "show config rxtx|cores|fwd|rxoffs|rxpkts|rxhdrs|txpkts|txtimes", .tokens = { (void *)&cmd_showcfg_show, (void *)&cmd_showcfg_port, @@ -17833,6 +17959,7 @@ cmdline_parse_ctx_t main_ctx[] = { (cmdline_parse_inst_t *)&cmd_set_log, (cmdline_parse_inst_t *)&cmd_set_rxoffs, (cmdline_parse_inst_t *)&cmd_set_rxpkts, + (cmdline_parse_inst_t *)&cmd_set_rxhdrs, (cmdline_parse_inst_t *)&cmd_set_txpkts, (cmdline_parse_inst_t *)&cmd_set_txsplit, (cmdline_parse_inst_t *)&cmd_set_txtimes, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index cc8e7aa138..90ac5cfa68 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -4757,6 
+4757,81 @@ show_rx_pkt_segments(void) printf("%hu\n", rx_pkt_seg_lengths[i]); } } +static const char *get_ptype_str(uint32_t ptype) +{ + switch (ptype) { + case RTE_PTYPE_INNER_L2_ETHER_QINQ: + return "qinq"; + case RTE_PTYPE_TUNNEL_GRE: + return "gre"; + case RTE_PTYPE_UNKNOWN: + return "unknown"; + case RTE_PTYPE_L2_ETHER: + return "outer_mac"; + case RTE_PTYPE_L3_IPV4: + return "ipv4"; + case RTE_PTYPE_L3_IPV6: + return "ipv6"; + case RTE_PTYPE_L3_IPV6|RTE_PTYPE_L3_IPV4: + return "ip"; + case RTE_PTYPE_L4_TCP: + return "tcp"; + case RTE_PTYPE_L4_UDP: + return "udp"; + case RTE_PTYPE_L4_SCTP: + return "sctp"; + case RTE_PTYPE_L4_TCP|RTE_PTYPE_L4_UDP|RTE_PTYPE_L4_SCTP: + return "l4"; + case RTE_PTYPE_INNER_L2_ETHER: + return "inner_mac"; + case RTE_PTYPE_INNER_L3_IPV4: + return "inner_ipv4"; + case RTE_PTYPE_INNER_L3_IPV6: + return "inner_ipv6"; + case RTE_PTYPE_INNER_L4_TCP: + return "inner_tcp"; + case RTE_PTYPE_INNER_L4_UDP: + return "inner_udp"; + case RTE_PTYPE_INNER_L4_SCTP: + return "inner_sctp"; + default: + return "unsupported"; + } +} +void +show_rx_pkt_hdrs(void) +{ + uint32_t i, n; + + n = rx_pkt_nb_segs; + printf("Number of segments: %u\n", n); + if (n) { + printf("Packet segs: "); + for (i = 0; i != n - 1; i++) + printf("%s, ", get_ptype_str(rx_pkt_hdr_protos[i])); + printf("%s\n", rx_pkt_hdr_protos[i] == 0 ? "payload" : + get_ptype_str(rx_pkt_hdr_protos[i])); + } +} +void +set_rx_pkt_hdrs(unsigned int *seg_hdrs, unsigned int nb_segs) +{ + unsigned int i; + + if (nb_segs >= MAX_SEGS_BUFFER_SPLIT) { + printf("nb segments per RX packets=%u >= " + "MAX_SEGS_BUFFER_SPLIT - ignored\n", nb_segs); + return; + } + + for (i = 0; i < nb_segs; i++) + rx_pkt_hdr_protos[i] = (uint32_t) seg_hdrs[i]; + /* + * We calculate the number of hdrs, but payload is not included, + * so rx_pkt_nb_segs would increase 1. + */ + rx_pkt_nb_segs = (rx_pkt_nb_segs == 0) ? 
(uint8_t) nb_segs + 1 : rx_pkt_nb_segs;
+}
 
 void
 set_rx_pkt_segments(unsigned int *seg_lengths, unsigned int nb_segs)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index daf6a31b2b..f86d626276 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -161,6 +161,7 @@ usage(char* progname)
 	       " Used mainly with PCAP drivers.\n");
 	printf("  --rxoffs=X[,Y]*: set RX segment offsets for split.\n");
 	printf("  --rxpkts=X[,Y]*: set RX segment sizes to split.\n");
+	printf("  --rxhdrs=mac[,ipv4]*: set RX segment protocol to split.\n");
 	printf("  --txpkts=X[,Y]*: set TX segment sizes"
 	       " or total packet length.\n");
 	printf("  --txonly-multi-flow: generate multiple flows in txonly mode\n");
@@ -673,6 +674,7 @@ launch_args_parse(int argc, char** argv)
 		{ "flow-isolate-all",	0, 0, 0 },
 		{ "rxoffs",		1, 0, 0 },
 		{ "rxpkts",		1, 0, 0 },
+		{ "rxhdrs",		1, 0, 0 },
 		{ "txpkts",		1, 0, 0 },
 		{ "txonly-multi-flow",	0, 0, 0 },
 		{ "rxq-share",		2, 0, 0 },
@@ -1327,7 +1329,6 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "rxpkts")) {
 				unsigned int seg_len[MAX_SEGS_BUFFER_SPLIT];
 				unsigned int nb_segs;
-
 				nb_segs = parse_item_list
 						(optarg, "rxpkt segments",
 						 MAX_SEGS_BUFFER_SPLIT,
@@ -1337,6 +1338,18 @@ launch_args_parse(int argc, char** argv)
 				else
 					rte_exit(EXIT_FAILURE, "bad rxpkts\n");
 			}
+			if (!strcmp(lgopts[opt_idx].name, "rxhdrs")) {
+				unsigned int seg_hdrs[MAX_SEGS_BUFFER_SPLIT];
+				unsigned int nb_segs;
+				nb_segs = parse_hdrs_list
+						(optarg, "rxhdr segments",
+						 MAX_SEGS_BUFFER_SPLIT,
+						 seg_hdrs, 0);
+				if (nb_segs >= 1)
+					set_rx_pkt_hdrs(seg_hdrs, nb_segs);
+				else
+					rte_exit(EXIT_FAILURE, "bad rxhdrs\n");
+			}
 			if (!strcmp(lgopts[opt_idx].name, "txpkts")) {
 				unsigned seg_lengths[RTE_MAX_SEGS_PER_PKT];
 				unsigned int nb_segs;

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index fe2ce19f99..ef679c70be 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -240,6 +240,7 @@ uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
 uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
+uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
 
 /*
  * Configuration of packet segments used by the "txonly" processing engine.
@@ -2587,11 +2588,12 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 			mpx = mbuf_pool_find(socket_id, mp_n);
 			/* Handle zero as mbuf data buffer size. */
 			rx_seg->length = rx_pkt_seg_lengths[i] ?
-				rx_pkt_seg_lengths[i] :
-				mbuf_data_size[mp_n];
+					 rx_pkt_seg_lengths[i] :
+					 mbuf_data_size[mp_n];
 			rx_seg->offset = i < rx_pkt_nb_offs ?
 					 rx_pkt_seg_offsets[i] : 0;
 			rx_seg->mp = mpx ? mpx : mp;
+			rx_seg->proto_hdr = rx_pkt_hdr_protos[i];
 		}
 		rx_conf->rx_nseg = rx_pkt_nb_segs;
 		rx_conf->rx_seg = rx_useg;

diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 31f766c965..e791b9becd 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -534,6 +534,7 @@ extern uint32_t max_rx_pkt_len;
  * Configuration of packet segments used to scatter received packets
  * if some of split features is configured.
 */
+extern uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
 extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
 extern uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
 extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
@@ -864,6 +865,9 @@ inc_tx_burst_stats(struct fwd_stream *fs, uint16_t nb_tx)
 unsigned int parse_item_list(const char *str, const char *item_name,
 			unsigned int max_items,
 			unsigned int *parsed_items, int check_unique_values);
+unsigned int parse_hdrs_list(const char *str, const char *item_name,
+			unsigned int max_item,
+			unsigned int *parsed_items, int check_unique_values);
 void launch_args_parse(int argc, char** argv);
 void cmdline_read_from_file(const char *filename);
 void prompt(void);
@@ -1018,6 +1022,8 @@ void set_record_core_cycles(uint8_t on_off);
 void set_record_burst_stats(uint8_t on_off);
 void set_verbose_level(uint16_t vb_level);
 void set_rx_pkt_segments(unsigned int *seg_lengths, unsigned int nb_segs);
+void set_rx_pkt_hdrs(unsigned int *seg_protos, unsigned int nb_segs);
+void show_rx_pkt_hdrs(void);
 void show_rx_pkt_segments(void);
 void set_rx_pkt_offsets(unsigned int *seg_offsets, unsigned int nb_offs);
 void show_rx_pkt_offsets(void);

From patchwork Mon Jun 13 10:25:50 2022
X-Patchwork-Submitter: "Wu, WenxuanX"
X-Patchwork-Id: 112695
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: wenxuanx.wu@intel.com
To: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, xiaoyun.li@intel.com,
 ferruh.yigit@xilinx.com, aman.deep.singh@intel.com, dev@dpdk.org,
 yuying.zhang@intel.com, qi.z.zhang@intel.com, jerinjacobk@gmail.com
Cc: stephen@networkplumber.org, Wenxuan Wu, Xuan Ding, Yuan Wang
Subject: [PATCH v9 4/4] net/ice: support buffer split in Rx path
Date: Mon, 13 Jun 2022 10:25:50 +0000
Message-Id: <20220613102550.241759-5-wenxuanx.wu@intel.com>
In-Reply-To: <20220613102550.241759-1-wenxuanx.wu@intel.com>
References: <20220303060136.36427-1-xuan.ding@intel.com>
 <20220613102550.241759-1-wenxuanx.wu@intel.com>

From: Wenxuan Wu

This patch adds support for protocol-based buffer split in the normal Rx
data paths. When an Rx queue is configured with a specific protocol type,
received packets are split directly into protocol header and payload parts,
within the limitations of the ice PMD, and the two parts are put into
different mempools. Currently, protocol-based buffer split is not supported
in the vectorized paths.

A new API, ice_get_supported_split_hdrs(), has been introduced; it returns
the header protocols the ice PMD supports splitting on to the application.

Signed-off-by: Xuan Ding
Signed-off-by: Wenxuan Wu
Signed-off-by: Yuan Wang
Reviewed-by: Qi Zhang
---
 drivers/net/ice/ice_ethdev.c          |  38 ++++-
 drivers/net/ice/ice_rxtx.c            | 220 ++++++++++++++++++++++----
 drivers/net/ice/ice_rxtx.h            |  16 ++
 drivers/net/ice/ice_rxtx_vec_common.h |   3 +
 4 files changed, 245 insertions(+), 32 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 73e550f5fb..dcd4ad2eb4 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -169,6 +169,8 @@ static int ice_timesync_read_time(struct rte_eth_dev *dev,
 static int ice_timesync_write_time(struct rte_eth_dev *dev,
 				   const struct timespec *timestamp);
 static int ice_timesync_disable(struct rte_eth_dev *dev);
+static int ice_get_supported_split_hdrs(struct rte_eth_dev *dev,
+					uint32_t *ptypes);
 
 static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E823L_BACKPLANE) },
@@ -267,6 +269,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
 	.timesync_read_time           = ice_timesync_read_time,
 	.timesync_write_time          = ice_timesync_write_time,
 	.timesync_disable             = ice_timesync_disable,
+	.hdrs_supported_ptypes_get    = ice_get_supported_split_hdrs,
 };
 
 /* store statistics names and its offset in stats structure */
@@ -3713,7 +3716,8 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 			RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
 			RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
 			RTE_ETH_RX_OFFLOAD_RSS_HASH |
-			RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+			RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+			RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
 		dev_info->tx_offload_capa |=
 			RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
 			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
@@ -3725,7 +3729,7 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL;
 	}
 
-	dev_info->rx_queue_offload_capa = 0;
+	dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
 	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->reta_size = pf->hash_lut_size;
@@ -3794,6 +3798,11 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->default_rxportconf.ring_size = ICE_BUF_SIZE_MIN;
 	dev_info->default_txportconf.ring_size = ICE_BUF_SIZE_MIN;
 
+	dev_info->rx_seg_capa.max_nseg = ICE_RX_MAX_NSEG;
+	dev_info->rx_seg_capa.multi_pools = 1;
+	dev_info->rx_seg_capa.offset_allowed = 0;
+	dev_info->rx_seg_capa.offset_align_log2 = 0;
+
 	return 0;
 }
 
@@ -5840,6 +5849,31
@@ ice_timesync_disable(struct rte_eth_dev *dev) return 0; } +static int +ice_get_supported_split_hdrs(struct rte_eth_dev *dev, uint32_t *ptypes) +{ + if (!dev) + return -EINVAL; +/* Buffer split protocol header capability. */ +#define RTE_BUFFER_SPLIT_PROTO_HDR_MASK ( \ + RTE_PTYPE_L2_ETHER | \ + RTE_PTYPE_L3_IPV4 | \ + RTE_PTYPE_L3_IPV6 | \ + RTE_PTYPE_L4_TCP | \ + RTE_PTYPE_L4_UDP | \ + RTE_PTYPE_L4_SCTP | \ + RTE_PTYPE_INNER_L2_ETHER | \ + RTE_PTYPE_INNER_L3_IPV4 | \ + RTE_PTYPE_INNER_L3_IPV6 | \ + RTE_PTYPE_INNER_L4_TCP | \ + RTE_PTYPE_INNER_L4_UDP | \ + RTE_PTYPE_INNER_L4_SCTP) + + *ptypes = RTE_BUFFER_SPLIT_PROTO_HDR_MASK; + + return 0; +} + static int ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev) diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index 2dd2637fbb..47ef5bbe35 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -282,7 +282,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq) /* Set buffer size as the head split is disabled. */ buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM); - rxq->rx_hdr_len = 0; rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S)); rxq->max_pkt_len = RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len, @@ -311,11 +310,53 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq) memset(&rx_ctx, 0, sizeof(rx_ctx)); + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + switch (rxq->rxseg[0].proto_hdr) { + case RTE_PTYPE_L2_ETHER: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_1 = ICE_RLAN_RX_HSPLIT_1_SPLIT_L2; + break; + case RTE_PTYPE_INNER_L2_ETHER: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_L2; + break; + case RTE_PTYPE_L3_IPV4: + case RTE_PTYPE_L3_IPV6: + case RTE_PTYPE_INNER_L3_IPV4: + case RTE_PTYPE_INNER_L3_IPV6: + case RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV6: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_IP; + break; + case RTE_PTYPE_L4_TCP: + case RTE_PTYPE_L4_UDP: + case RTE_PTYPE_INNER_L4_TCP: + case RTE_PTYPE_INNER_L4_UDP: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_TCP_UDP; + break; + case RTE_PTYPE_L4_SCTP: + case RTE_PTYPE_INNER_L4_SCTP: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_SCTP; + break; + case RTE_PTYPE_UNKNOWN: + PMD_DRV_LOG(ERR, "Buffer split protocol must be configured"); + return -EINVAL; + default: + PMD_DRV_LOG(ERR, "Buffer split protocol is not supported"); + return -EINVAL; + } + rxq->rx_hdr_len = ICE_RX_HDR_BUF_SIZE; + } else { + rxq->rx_hdr_len = 0; + rx_ctx.dtype = 0; /* No Protocol Based Buffer Split mode */ + } + rx_ctx.base = rxq->rx_ring_dma / ICE_QUEUE_BASE_ADDR_UNIT; rx_ctx.qlen = rxq->nb_rx_desc; rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S; rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S; - rx_ctx.dtype = 0; /* No Header Split mode */ #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC rx_ctx.dsize = 1; /* 32B descriptors */ #endif @@ -401,6 +442,7 @@ ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) for (i = 0; i < rxq->nb_rx_desc; i++) { volatile union ice_rx_flex_desc *rxd; + rxd = &rxq->rx_ring[i]; struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mp); if (unlikely(!mbuf)) { @@ -408,8 +450,6 @@ ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) return -ENOMEM; } - rte_mbuf_refcnt_set(mbuf, 1); - mbuf->next = NULL; mbuf->data_off = RTE_PKTMBUF_HEADROOM; mbuf->nb_segs = 1; 
mbuf->port = rxq->port_id; @@ -417,9 +457,33 @@ ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); - rxd = &rxq->rx_ring[i]; - rxd->read.pkt_addr = dma_addr; - rxd->read.hdr_addr = 0; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + struct rte_mbuf *mbuf_pay; + mbuf_pay = rte_mbuf_raw_alloc(rxq->rxseg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_DRV_LOG(ERR, "Failed to allocate payload mbuf for RX"); + return -ENOMEM; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + rxd->read.hdr_addr = dma_addr; + /* The LS bit should be set to zero regardless of + * buffer split enablement. + */ + rxd->read.pkt_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + + } else { + rte_mbuf_refcnt_set(mbuf, 1); + mbuf->next = NULL; + rxd->read.hdr_addr = 0; + rxd->read.pkt_addr = dma_addr; + } + #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC rxd->read.rsvd1 = 0; rxd->read.rsvd2 = 0; @@ -443,14 +507,14 @@ _ice_rx_queue_release_mbufs(struct ice_rx_queue *rxq) for (i = 0; i < rxq->nb_rx_desc; i++) { if (rxq->sw_ring[i].mbuf) { - rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf); + rte_pktmbuf_free(rxq->sw_ring[i].mbuf); rxq->sw_ring[i].mbuf = NULL; } } if (rxq->rx_nb_avail == 0) return; for (i = 0; i < rxq->rx_nb_avail; i++) - rte_pktmbuf_free_seg(rxq->rx_stage[rxq->rx_next_avail + i]); + rte_pktmbuf_free(rxq->rx_stage[rxq->rx_next_avail + i]); rxq->rx_nb_avail = 0; } @@ -742,7 +806,7 @@ ice_fdir_program_hw_rx_queue(struct ice_rx_queue *rxq) rx_ctx.qlen = rxq->nb_rx_desc; rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S; rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S; - rx_ctx.dtype = 0; /* No Header Split mode */ + rx_ctx.dtype = 0; /* No Buffer Split mode */ rx_ctx.dsize = 1; /* 32B descriptors */ rx_ctx.rxmax = ICE_ETH_MAX_LEN; /* TPH: Transaction Layer Packet (TLP) processing hints */ @@ -1076,6 +1140,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, uint16_t len; int use_def_burst_func = 1; uint64_t offloads; + uint16_t n_seg = rx_conf->rx_nseg; if (nb_desc % ICE_ALIGN_RING_DESC != 0 || nb_desc > ICE_MAX_RING_DESC || @@ -1087,6 +1152,17 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + if (mp) + n_seg = 1; + + if (n_seg > 1) { + if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_INIT_LOG(ERR, "port %u queue index %u split offload not configured", + dev->data->port_id, queue_idx); + return -EINVAL; + } + } + /* Free memory if needed */ if (dev->data->rx_queues[queue_idx]) { ice_rx_queue_release(dev->data->rx_queues[queue_idx]); @@ -1098,12 +1174,22 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, sizeof(struct ice_rx_queue), RTE_CACHE_LINE_SIZE, socket_id); + if (!rxq) { PMD_INIT_LOG(ERR, "Failed to allocate memory for " "rx queue data structure"); return -ENOMEM; } - rxq->mp = mp; + + rxq->rxseg_nb = n_seg; + if (n_seg > 1) { + rte_memcpy(rxq->rxseg, rx_conf->rx_seg, + sizeof(struct rte_eth_rxseg_split) * n_seg); + rxq->mp = rxq->rxseg[0].mp; + } else { + rxq->mp = mp; + } + rxq->nb_rx_desc = nb_desc; rxq->rx_free_thresh = rx_conf->rx_free_thresh; rxq->queue_id = queue_idx; @@ -1568,7 +1654,7 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq) struct ice_rx_entry *rxep; struct rte_mbuf *mb; uint16_t stat_err0; - uint16_t pkt_len; + uint16_t pkt_len, hdr_len; int32_t s[ICE_LOOK_AHEAD], nb_dd; int32_t i, j, nb_rx = 0; uint64_t pkt_flags = 0; @@ -1623,6 
+1709,24 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq) ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; mb->data_len = pkt_len; mb->pkt_len = pkt_len; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + mb->nb_segs = (uint16_t)(mb->nb_segs + mb->next->nb_segs); + mb->next->next = NULL; + hdr_len = rte_le_to_cpu_16(rxdp[j].wb.hdr_len_sph_flex_flags1) & + ICE_RX_FLEX_DESC_HEADER_LEN_M; + pkt_len = (rte_le_to_cpu_16(rxdp[j].wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + mb->data_len = hdr_len; + mb->pkt_len = hdr_len + pkt_len; + mb->next->data_len = pkt_len; + } else { + pkt_len = (rte_le_to_cpu_16(rxdp[j].wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + mb->data_len = pkt_len; + mb->pkt_len = pkt_len; + } + mb->ol_flags = 0; stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0); pkt_flags = ice_rxd_error_to_pkt_flags(stat_err0); @@ -1714,7 +1818,9 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) struct rte_mbuf *mb; uint16_t alloc_idx, i; uint64_t dma_addr; - int diag; + int diag, diag_pay; + uint64_t pay_addr; + struct rte_mbuf *mbufs_pay[rxq->rx_free_thresh]; /* Allocate buffers in bulk */ alloc_idx = (uint16_t)(rxq->rx_free_trigger - @@ -1727,6 +1833,15 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) return -ENOMEM; } + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + diag_pay = rte_mempool_get_bulk(rxq->rxseg[1].mp, + (void *)mbufs_pay, rxq->rx_free_thresh); + if (unlikely(diag_pay != 0)) { + PMD_RX_LOG(ERR, "Failed to get payload mbufs in bulk"); + return -ENOMEM; + } + } + rxdp = &rxq->rx_ring[alloc_idx]; for (i = 0; i < rxq->rx_free_thresh; i++) { if (likely(i < (rxq->rx_free_thresh - 1))) @@ -1735,13 +1850,21 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) mb = rxep[i].mbuf; rte_mbuf_refcnt_set(mb, 1); - mb->next = NULL; mb->data_off = RTE_PKTMBUF_HEADROOM; mb->nb_segs = 1; mb->port = rxq->port_id; dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb)); - rxdp[i].read.hdr_addr = 0; - rxdp[i].read.pkt_addr = dma_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + mb->next = mbufs_pay[i]; + pay_addr = rte_mbuf_data_iova_default(mbufs_pay[i]); + rxdp[i].read.hdr_addr = dma_addr; + rxdp[i].read.pkt_addr = rte_cpu_to_le_64(pay_addr); + } else { + mb->next = NULL; + rxdp[i].read.hdr_addr = 0; + rxdp[i].read.pkt_addr = dma_addr; + } } /* Update Rx tail register */ @@ -2350,11 +2473,13 @@ ice_recv_pkts(void *rx_queue, struct ice_rx_entry *sw_ring = rxq->sw_ring; struct ice_rx_entry *rxe; struct rte_mbuf *nmb; /* new allocated mbuf */ + struct rte_mbuf *nmb_pay; /* new allocated payload mbuf */ struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */ uint16_t rx_id = rxq->rx_tail; uint16_t nb_rx = 0; uint16_t nb_hold = 0; uint16_t rx_packet_len; + uint16_t rx_header_len; uint16_t rx_stat_err0; uint64_t dma_addr; uint64_t pkt_flags; @@ -2382,12 +2507,16 @@ ice_recv_pkts(void *rx_queue, if (!(rx_stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_DD_S))) break; - /* allocate mbuf */ + if (rx_stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_HBO_S)) + break; + + /* allocate header mbuf */ nmb = rte_mbuf_raw_alloc(rxq->mp); if (unlikely(!nmb)) { rxq->vsi->adapter->pf.dev_data->rx_mbuf_alloc_failed++; break; } + rxd = *rxdp; /* copy descriptor in ring to temp variable*/ nb_hold++; @@ -2400,24 +2529,55 @@ ice_recv_pkts(void *rx_queue, dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb)); - /** - * fill the read format of descriptor with physic address in - * new allocated mbuf: nmb - */ - rxdp->read.hdr_addr = 0; - rxdp->read.pkt_addr = 
dma_addr; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + /* allocate payload mbuf */ + nmb_pay = rte_mbuf_raw_alloc(rxq->rxseg[1].mp); + if (unlikely(!nmb_pay)) { + rxq->vsi->adapter->pf.dev_data->rx_mbuf_alloc_failed++; + break; + } + + nmb->next = nmb_pay; + nmb_pay->next = NULL; - /* calculate rx_packet_len of the received pkt */ - rx_packet_len = (rte_le_to_cpu_16(rxd.wb.pkt_len) & - ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + /** + * fill the read format of descriptor with physic address in + * new allocated mbuf: nmb + */ + rxdp->read.hdr_addr = dma_addr; + rxdp->read.pkt_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb_pay)); + } else { + /** + * fill the read format of descriptor with physic address in + * new allocated mbuf: nmb + */ + rxdp->read.hdr_addr = 0; + rxdp->read.pkt_addr = dma_addr; + } /* fill old mbuf with received descriptor: rxd */ rxm->data_off = RTE_PKTMBUF_HEADROOM; rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM)); - rxm->nb_segs = 1; - rxm->next = NULL; - rxm->pkt_len = rx_packet_len; - rxm->data_len = rx_packet_len; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + rxm->nb_segs = (uint16_t)(rxm->nb_segs + rxm->next->nb_segs); + rxm->next->next = NULL; + /* calculate rx_packet_len of the received pkt */ + rx_header_len = rte_le_to_cpu_16(rxd.wb.hdr_len_sph_flex_flags1) & + ICE_RX_FLEX_DESC_HEADER_LEN_M; + rx_packet_len = (rte_le_to_cpu_16(rxd.wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + rxm->data_len = rx_header_len; + rxm->pkt_len = rx_header_len + rx_packet_len; + rxm->next->data_len = rx_packet_len; + } else { + rxm->nb_segs = 1; + rxm->next = NULL; + /* calculate rx_packet_len of the received pkt */ + rx_packet_len = (rte_le_to_cpu_16(rxd.wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + rxm->data_len = rx_packet_len; + rxm->pkt_len = rx_packet_len; + } rxm->port = rxq->port_id; rxm->packet_type = ptype_tbl[ICE_RX_FLEX_DESC_PTYPE_M & rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)]; diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h index bb18a01951..611dbc8503 100644 --- a/drivers/net/ice/ice_rxtx.h +++ b/drivers/net/ice/ice_rxtx.h @@ -16,6 +16,9 @@ #define ICE_RX_MAX_BURST 32 #define ICE_TX_MAX_BURST 32 +/* Maximal number of segments to split. 
*/ +#define ICE_RX_MAX_NSEG 2 + #define ICE_CHK_Q_ENA_COUNT 100 #define ICE_CHK_Q_ENA_INTERVAL_US 100 @@ -43,6 +46,11 @@ extern uint64_t ice_timestamp_dynflag; extern int ice_timestamp_dynfield_offset; +/* Max header size can be 2K - 64 bytes */ +#define ICE_RX_HDR_BUF_SIZE (2048 - 64) + +#define ICE_HEADER_SPLIT_ENA BIT(0) + typedef void (*ice_rx_release_mbufs_t)(struct ice_rx_queue *rxq); typedef void (*ice_tx_release_mbufs_t)(struct ice_tx_queue *txq); typedef void (*ice_rxd_to_pkt_fields_t)(struct ice_rx_queue *rxq, @@ -53,6 +61,12 @@ struct ice_rx_entry { struct rte_mbuf *mbuf; }; +enum ice_rx_dtype { + ICE_RX_DTYPE_NO_SPLIT = 0, + ICE_RX_DTYPE_HEADER_SPLIT = 1, + ICE_RX_DTYPE_SPLIT_ALWAYS = 2, +}; + struct ice_rx_queue { struct rte_mempool *mp; /* mbuf pool to populate RX ring */ volatile union ice_rx_flex_desc *rx_ring;/* RX ring virtual address */ @@ -95,6 +109,8 @@ struct ice_rx_queue { uint32_t time_high; uint32_t hw_register_set; const struct rte_memzone *mz; + struct rte_eth_rxseg_split rxseg[ICE_RX_MAX_NSEG]; + uint32_t rxseg_nb; }; struct ice_tx_entry { diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h index 2dd2d83650..eec6ea2134 100644 --- a/drivers/net/ice/ice_rxtx_vec_common.h +++ b/drivers/net/ice/ice_rxtx_vec_common.h @@ -291,6 +291,9 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq) if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) return -1; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) + return -1; + if (rxq->offloads & ICE_RX_VECTOR_OFFLOAD) return ICE_VECTOR_OFFLOAD_PATH;