From patchwork Sun Oct 9 20:25:38 2022
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 117735
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Yuan Wang
To: dev@dpdk.org, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ray Kinsella
Cc: ferruh.yigit@xilinx.com, xiaoyun.li@intel.com, aman.deep.singh@intel.com,
 yuying.zhang@intel.com, qi.z.zhang@intel.com, qiming.yang@intel.com,
 jerinjacobk@gmail.com, viacheslavo@nvidia.com, stephen@networkplumber.org,
 xuan.ding@intel.com, hpothula@marvell.com, yaqi.tang@intel.com,
 Yuan Wang, Wenxuan Wu
Subject: [PATCH v9 1/4] ethdev: introduce protocol header API
Date: Mon, 10 Oct 2022 04:25:38 +0800
Message-Id: <20221009202541.352724-2-yuanx.wang@intel.com>
In-Reply-To: <20221009202541.352724-1-yuanx.wang@intel.com>
References: <20220812181552.2908067-1-yuanx.wang@intel.com>
 <20221009202541.352724-1-yuanx.wang@intel.com>

Add a new ethdev API to retrieve supported protocol headers of a PMD,
which helps to configure protocol header based buffer split.
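A minimal usage sketch of the new API (illustration only, not part of the
patch; the helper name show_split_hdr_ptypes is made up and port_id is
assumed to be a valid configured port):

	#include <stdio.h>
	#include <stdlib.h>
	#include <rte_ethdev.h>

	static void
	show_split_hdr_ptypes(uint16_t port_id)
	{
		int cnt, i;
		uint32_t *ptypes;

		/* A first call with (NULL, 0) only returns the count. */
		cnt = rte_eth_buffer_split_get_supported_hdr_ptypes(port_id, NULL, 0);
		if (cnt <= 0) {
			printf("port %u: no buffer split header ptypes (%d)\n",
			       port_id, cnt);
			return;
		}

		ptypes = malloc(sizeof(*ptypes) * cnt);
		if (ptypes == NULL)
			return;

		cnt = rte_eth_buffer_split_get_supported_hdr_ptypes(port_id,
								    ptypes, cnt);
		for (i = 0; i < cnt; i++)
			printf("supported split ptype: 0x%08x\n", ptypes[i]);
		free(ptypes);
	}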
Signed-off-by: Yuan Wang Signed-off-by: Xuan Ding Signed-off-by: Wenxuan Wu Reviewed-by: Andrew Rybchenko --- doc/guides/nics/features.rst | 2 +- doc/guides/rel_notes/release_22_11.rst | 5 ++++ lib/ethdev/ethdev_driver.h | 15 ++++++++++++ lib/ethdev/rte_ethdev.c | 33 ++++++++++++++++++++++++++ lib/ethdev/rte_ethdev.h | 30 +++++++++++++++++++++++ lib/ethdev/version.map | 1 + 6 files changed, 85 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst index 6aa1085c5b..fea604e77f 100644 --- a/doc/guides/nics/features.rst +++ b/doc/guides/nics/features.rst @@ -183,7 +183,7 @@ Scatters the packets being received on specified boundaries to segmented mbufs. * **[uses] rte_eth_rxconf**: ``rx_conf.rx_seg, rx_conf.rx_nseg``. * **[implements] datapath**: ``Buffer Split functionality``. * **[provides] rte_eth_dev_info**: ``rx_offload_capa:RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT``. -* **[related] API**: ``rte_eth_rx_queue_setup()``. +* **[related] API**: ``rte_eth_rx_queue_setup()``, ``rte_eth_buffer_split_get_supported_hdr_ptypes()``. .. _nic_features_lro: diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index c560dbdab7..16aca14bab 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -189,6 +189,11 @@ New Features into single event containing ``rte_event_vector`` whose event type is ``RTE_EVENT_TYPE_CRYPTODEV_VECTOR``. +* **Added protocol header based buffer split.** + + * Added ``rte_eth_buffer_split_get_supported_hdr_ptypes()``, to get supported + header protocols of a PMD to split. + Removed Items ------------- diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h index e2bd4642b9..1300acc95d 100644 --- a/lib/ethdev/ethdev_driver.h +++ b/lib/ethdev/ethdev_driver.h @@ -1055,6 +1055,18 @@ typedef int (*eth_ip_reassembly_conf_get_t)(struct rte_eth_dev *dev, typedef int (*eth_ip_reassembly_conf_set_t)(struct rte_eth_dev *dev, const struct rte_eth_ip_reassembly_params *conf); +/** + * @internal + * Get supported header protocols of a PMD to split. + * + * @param dev + * Ethdev handle of port. + * + * @return + * An array pointer to store supported protocol headers. + */ +typedef const uint32_t *(*eth_buffer_split_supported_hdr_ptypes_get_t)(struct rte_eth_dev *dev); + /** * @internal * Dump private info from device to a file. 
@@ -1366,6 +1378,9 @@ struct eth_dev_ops { /** Set IP reassembly configuration */ eth_ip_reassembly_conf_set_t ip_reassembly_conf_set; + /** Get supported header ptypes to split */ + eth_buffer_split_supported_hdr_ptypes_get_t buffer_split_supported_hdr_ptypes_get; + /** Dump private info from device */ eth_dev_priv_dump_t eth_dev_priv_dump; diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 4703ab0caf..79d1f9b993 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -6209,6 +6209,39 @@ rte_eth_tx_descriptor_dump(uint16_t port_id, uint16_t queue_id, queue_id, offset, num, file)); } +int +rte_eth_buffer_split_get_supported_hdr_ptypes(uint16_t port_id, uint32_t *ptypes, int num) +{ + int i, j; + struct rte_eth_dev *dev; + const uint32_t *all_types; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + dev = &rte_eth_devices[port_id]; + + if (ptypes == NULL && num > 0) { + RTE_ETHDEV_LOG(ERR, + "Cannot get ethdev port %u supported header protocol types to NULL when array size is non zero\n", + port_id); + return -EINVAL; + } + + if (*dev->dev_ops->buffer_split_supported_hdr_ptypes_get == NULL) + return -ENOTSUP; + all_types = (*dev->dev_ops->buffer_split_supported_hdr_ptypes_get)(dev); + + if (all_types == NULL) + return 0; + + for (i = 0, j = 0; all_types[i] != RTE_PTYPE_UNKNOWN; ++i) { + if (j < num) + ptypes[j] = all_types[i]; + j++; + } + + return j; +} + RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO); RTE_INIT(ethdev_init_telemetry) diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 8c4a35cc1f..f9da569179 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -6337,6 +6337,36 @@ rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id, return rte_eth_tx_buffer_flush(port_id, queue_id, buffer); } +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Get supported header protocols to split on Rx. + * + * When a packet type is announced to be split, it *must* be supported by + * the PMD. For instance, if eth-ipv4, eth-ipv4-udp is announced, the PMD must + * return the following packet types for these packets: + * - Ether/IPv4 -> RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 + * - Ether/IPv4/UDP -> RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP + * + * @param port_id + * The port identifier of the device. + * @param[out] ptypes + * An array pointer to store supported protocol headers, allocated by caller. + * These ptypes are composed with RTE_PTYPE_*. + * @param num + * Size of the array pointed by param ptypes. + * @return + * - (>=0) Number of supported ptypes. If the number of types exceeds num, + * only num entries will be filled into the ptypes array, but the full + * count of supported ptypes will be returned. + * - (-ENOTSUP) if header protocol is not supported by device. + * - (-ENODEV) if *port_id* invalid. + * - (-EINVAL) if bad parameter. 
+ */ +__rte_experimental +int rte_eth_buffer_split_get_supported_hdr_ptypes(uint16_t port_id, uint32_t *ptypes, int num); + #ifdef __cplusplus } #endif diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 3205556ce7..30b067e0b6 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -296,6 +296,7 @@ EXPERIMENTAL { rte_flow_async_action_handle_query; rte_mtr_meter_policy_get; rte_mtr_meter_profile_get; + rte_eth_buffer_split_get_supported_hdr_ptypes; }; INTERNAL {

From patchwork Sun Oct 9 20:25:39 2022
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 117736
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Yuan Wang
To: dev@dpdk.org, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Cc: ferruh.yigit@xilinx.com, mdr@ashroe.eu, xiaoyun.li@intel.com,
 aman.deep.singh@intel.com, yuying.zhang@intel.com, qi.z.zhang@intel.com,
 qiming.yang@intel.com, jerinjacobk@gmail.com, viacheslavo@nvidia.com,
 stephen@networkplumber.org, xuan.ding@intel.com, hpothula@marvell.com,
 yaqi.tang@intel.com, Yuan Wang, Wenxuan Wu
Subject: [PATCH v9 2/4] ethdev: introduce protocol hdr based buffer split
Date: Mon, 10 Oct 2022 04:25:39 +0800
Message-Id: <20221009202541.352724-3-yuanx.wang@intel.com>
In-Reply-To: <20221009202541.352724-1-yuanx.wang@intel.com>
References: <20220812181552.2908067-1-yuanx.wang@intel.com>
 <20221009202541.352724-1-yuanx.wang@intel.com>
Currently, Rx buffer split supports length-based split. With the Rx queue
offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT enabled and Rx packet segments
configured, the PMD is able to split the received packets into multiple
segments.

However, length-based buffer split is not suitable for NICs that do the
split based on protocol headers. Given an arbitrarily variable length in an
Rx packet segment, it is almost impossible to pass a fixed protocol header
to the driver. Besides, tunneling makes the composition of a packet
variable, which makes the situation even worse.

This patch extends the current buffer split to support protocol header
based buffer split. A new proto_hdr field is introduced in the reserved
field of the rte_eth_rxseg_split structure to specify the protocol header.
The proto_hdr field defines the split position of the packet: splitting
always happens after the protocol header defined in the Rx packet segment.
When the Rx queue offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is enabled and a
corresponding protocol header is configured, the driver splits the ingress
packets into multiple segments.

Examples of proto_hdr field definitions:

To split after ETH-IPV4-UDP, it should be defined as
proto_hdr = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
            RTE_PTYPE_L4_UDP

For inner ETH-IPV4-UDP, it should be defined as
proto_hdr = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
            RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP

If a protocol header repeats one defined in a previous segment, the
repeated part should be omitted. For example, to split after ETH, ETH-IPV4
and ETH-IPV4-UDP, the segments should be defined as
proto_hdr0 = RTE_PTYPE_L2_ETHER
proto_hdr1 = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN
proto_hdr2 = RTE_PTYPE_L4_UDP

struct rte_eth_rxseg_split {
        struct rte_mempool *mp;
        uint16_t length;
        uint16_t offset;
        uint32_t proto_hdr;
};

If protocol header split is supported by a PMD, the
rte_eth_buffer_split_get_supported_hdr_ptypes function can be used to
obtain the list of supported protocol headers.

For example, let's suppose we configured the Rx queue with the
following segments:
    seg0 - pool0, proto_hdr0=RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4, off0=2B
    seg1 - pool1, proto_hdr1=RTE_PTYPE_L4_UDP, off1=128B
    seg2 - pool2, proto_hdr2=0, off2=0B

A packet consisting of ETH_IPV4_UDP_PAYLOAD will be split as follows:
    seg0 - ipv4 header @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
    seg1 - udp header @ 128 in mbuf from pool1
    seg2 - payload @ 0 in mbuf from pool2

Now buffer split can be configured in two modes. The user can choose length
or protocol header to configure buffer split according to the NIC's
capability. For length-based buffer split, the mp, length and offset fields
in an Rx packet segment should be configured, while the proto_hdr field
must be 0. For protocol header based buffer split, the mp, offset and
proto_hdr fields should be configured, while the length field must be 0.

Note: When protocol header split is enabled, the NIC may receive packets
which do not match all the protocol headers within the Rx segments. In this
case, the NIC has two possible split behaviors according to the matching
result: exact match and longest match. The split result of the NIC must
belong to one of them.

The exact match means the NIC only does the split when the packet exactly
matches all the protocol headers in the segments. Otherwise, the whole
packet is put into the last valid mempool. The longest match means the NIC
does the split until a packet mismatches a protocol header in the segments.
The rest is put into the last valid pool.
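As an illustration only (not part of this patch), the three-segment example
above could be configured roughly as follows. pool0/pool1/pool2, port_id,
queue_id, nb_rxd and socket_id are assumed to exist, the ptype constants
mirror the example values above, and the ptypes actually accepted by a PMD
must be taken from rte_eth_buffer_split_get_supported_hdr_ptypes():

	#include <string.h>
	#include <rte_ethdev.h>

	static int
	setup_proto_split_queue(uint16_t port_id, uint16_t queue_id,
				uint16_t nb_rxd, unsigned int socket_id,
				struct rte_mempool *pool0,
				struct rte_mempool *pool1,
				struct rte_mempool *pool2)
	{
		union rte_eth_rxseg rx_useg[3];
		struct rte_eth_rxconf rx_conf;

		memset(rx_useg, 0, sizeof(rx_useg));
		memset(&rx_conf, 0, sizeof(rx_conf));

		/* seg0: split after Ether/IPv4, extra 2B data offset */
		rx_useg[0].split.mp = pool0;
		rx_useg[0].split.proto_hdr = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4;
		rx_useg[0].split.offset = 2;
		/* seg1: split after the (non-repeated) UDP header, 128B offset */
		rx_useg[1].split.mp = pool1;
		rx_useg[1].split.proto_hdr = RTE_PTYPE_L4_UDP;
		rx_useg[1].split.offset = 128;
		/* seg2: payload, proto_hdr stays 0 in the last segment */
		rx_useg[2].split.mp = pool2;

		/* The buffer split offload must be enabled for this queue. */
		rx_conf.offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
		rx_conf.rx_seg = rx_useg;
		rx_conf.rx_nseg = 3;

		/* mb_pool is NULL: buffers come from the per-segment pools. */
		return rte_eth_rx_queue_setup(port_id, queue_id, nb_rxd,
					      socket_id, &rx_conf, NULL);
	}

The exact match and longest match behaviors mentioned above are summarized
by the pseudo-code below.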
Pseudo-code for exact match:
        FOR each seg in segs except last one
            IF proto_hdr is not matched THEN
                BREAK
            END IF
        END FOR
        IF the loop was broken THEN
            put whole pkt in last seg
        ELSE
            put protocol header in each seg
            put everything else in last seg
        END IF

Pseudo-code for longest match:
        FOR each seg in segs except last one
            IF proto_hdr is matched THEN
                put protocol header in seg
            ELSE
                BREAK
            END IF
        END FOR
        put everything else in last seg

The split limitations imposed by the underlying driver are reported in the
rte_eth_dev_info->rx_seg_capa field. The memory attributes of the split
parts may also differ, e.g. DPDK memory for one part and external memory
for another.

Signed-off-by: Yuan Wang
Signed-off-by: Xuan Ding
Signed-off-by: Wenxuan Wu
---
 doc/guides/rel_notes/release_22_11.rst | 7 ++
 lib/ethdev/rte_ethdev.c | 95 +++++++++++++++++++++++---
 lib/ethdev/rte_ethdev.h | 37 +++++++++-
 3 files changed, 127 insertions(+), 12 deletions(-)

diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index 16aca14bab..b4329d4cb0 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -193,6 +193,8 @@ New Features * Added ``rte_eth_buffer_split_get_supported_hdr_ptypes()``, to get supported header protocols of a PMD to split. + * Supported protocol-based buffer split using added ``proto_hdr`` + in structure ``rte_eth_rxseg_split``. Removed Items @@ -338,6 +340,11 @@ API Changes for per-queue packet split offload, which is configured by ``rte_eth_rxseg_split``. +* ethdev: The ``reserved`` field in the ``rte_eth_rxseg_split`` structure is + replaced with ``proto_hdr`` to support protocol header based buffer split. + User can choose length or protocol header to configure buffer split + according to NIC's capability. + * ethdev: Changed the type of the parameter ``rate`` of the function ``rte_eth_set_queue_rate_limit()`` from ``uint16_t`` to ``uint32_t`` to support more than 64 Gbps.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 79d1f9b993..3696d4f044 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -1687,15 +1687,38 @@ rte_eth_check_rx_mempool(struct rte_mempool *mp, uint16_t offset, } static int -rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg, - uint16_t n_seg, uint32_t *mbp_buf_size, - const struct rte_eth_dev_info *dev_info) +eth_dev_buffer_split_get_supported_hdrs_helper(uint16_t port_id, uint32_t **ptypes) +{ + int cnt; + + cnt = rte_eth_buffer_split_get_supported_hdr_ptypes(port_id, NULL, 0); + if (cnt <= 0) + return cnt; + + *ptypes = malloc(sizeof(uint32_t) * cnt); + if (*ptypes == NULL) + return -ENOMEM; + + return rte_eth_buffer_split_get_supported_hdr_ptypes(port_id, *ptypes, cnt); +} + +static int +rte_eth_rx_queue_check_split(uint16_t port_id, + const struct rte_eth_rxseg_split *rx_seg, + uint16_t n_seg, uint32_t *mbp_buf_size, + const struct rte_eth_dev_info *dev_info) { const struct rte_eth_rxseg_capa *seg_capa = &dev_info->rx_seg_capa; struct rte_mempool *mp_first; uint32_t offset_mask; uint16_t seg_idx; int ret; + int ptype_cnt; + uint32_t *ptypes, prev_proto_hdrs; + int i; + + ret = 0; + prev_proto_hdrs = RTE_PTYPE_UNKNOWN; if (n_seg > seg_capa->max_nseg) { RTE_ETHDEV_LOG(ERR, @@ -1709,42 +1732,92 @@ rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg, */ mp_first = rx_seg[0].mp; offset_mask = RTE_BIT32(seg_capa->offset_align_log2) - 1; + + ptypes = NULL; + ptype_cnt = eth_dev_buffer_split_get_supported_hdrs_helper(port_id, &ptypes); + for (seg_idx = 0; seg_idx < n_seg; seg_idx++) { struct rte_mempool *mpl = rx_seg[seg_idx].mp; uint32_t length = rx_seg[seg_idx].length; uint32_t offset = rx_seg[seg_idx].offset; + uint32_t proto_hdr = rx_seg[seg_idx].proto_hdr; if (mpl == NULL) { RTE_ETHDEV_LOG(ERR, "null mempool pointer\n"); - return -EINVAL; + ret = -EINVAL; + goto out; } if (seg_idx != 0 && mp_first != mpl && seg_capa->multi_pools == 0) { RTE_ETHDEV_LOG(ERR, "Receiving to multiple pools is not supported\n"); - return -ENOTSUP; + ret = -ENOTSUP; + goto out; } if (offset != 0) { if (seg_capa->offset_allowed == 0) { RTE_ETHDEV_LOG(ERR, "Rx segmentation with offset is not supported\n"); - return -ENOTSUP; + ret = -ENOTSUP; + goto out; } if (offset & offset_mask) { RTE_ETHDEV_LOG(ERR, "Rx segmentation invalid offset alignment %u, %u\n", offset, seg_capa->offset_align_log2); - return -EINVAL; + ret = -EINVAL; + goto out; } } offset += seg_idx != 0 ? 0 : RTE_PKTMBUF_HEADROOM; *mbp_buf_size = rte_pktmbuf_data_room_size(mpl); - length = length != 0 ? length : *mbp_buf_size; + if (proto_hdr != 0) { + /* Split based on protocol headers. */ + if (length != 0) { + RTE_ETHDEV_LOG(ERR, + "Do not set length split and protocol split within a segment\n" + ); + ret = -EINVAL; + goto out; + } + if ((proto_hdr & prev_proto_hdrs) != 0) { + RTE_ETHDEV_LOG(ERR, + "Repeat with previous protocol headers or proto-split after length-based split\n" + ); + ret = -EINVAL; + goto out; + } + if (ptype_cnt <= 0) { + RTE_ETHDEV_LOG(ERR, + "Port %u failed to get supported buffer split header protocols\n", + port_id); + ret = -ENOTSUP; + goto out; + } + for (i = 0; i < ptype_cnt; i++) { + if ((prev_proto_hdrs | proto_hdr) == ptypes[i]) + break; + } + if (i == ptype_cnt) { + RTE_ETHDEV_LOG(ERR, + "Requested Rx split header protocols 0x%x is not supported.\n", + proto_hdr); + ret = -EINVAL; + goto out; + } + prev_proto_hdrs |= proto_hdr; + } else { + /* Split at fixed length. */ + length = length != 0 ? 
length : *mbp_buf_size; + prev_proto_hdrs = RTE_PTYPE_ALL_MASK; + } ret = rte_eth_check_rx_mempool(mpl, offset, length); if (ret != 0) - return ret; + goto out; } - return 0; +out: + free(ptypes); + return ret; } static int @@ -1846,7 +1919,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, n_seg = rx_conf->rx_nseg; if (rx_offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { - ret = rte_eth_rx_queue_check_split(rx_seg, n_seg, + ret = rte_eth_rx_queue_check_split(port_id, rx_seg, n_seg, &mbp_buf_size, &dev_info); if (ret != 0) diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index f9da569179..811c029bf8 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -994,6 +994,9 @@ struct rte_eth_txmode { * specified in the first array element, the second buffer, from the * pool in the second element, and so on. * + * - The proto_hdrs in the elements define the split position of + * received packets. + * * - The offsets from the segment description elements specify * the data offset from the buffer beginning except the first mbuf. * The first segment offset is added with RTE_PKTMBUF_HEADROOM. @@ -1015,12 +1018,44 @@ struct rte_eth_txmode { * - pool from the last valid element * - the buffer size from this pool * - zero offset + * + * - Length based buffer split: + * - mp, length, offset should be configured. + * - The proto_hdr field must be 0. + * + * - Protocol header based buffer split: + * - mp, offset, proto_hdr should be configured. + * - The length field must be 0. + * - The proto_hdr field in the last segment should be 0. + * + * - When protocol header split is enabled, NIC may receive packets + * which do not match all the protocol headers within the Rx segments. + * At this point, NIC will have two possible split behaviors according to + * matching results, one is exact match, another is longest match. + * The split result of NIC must belong to one of them. + * The exact match means NIC only do split when the packets exactly match all + * the protocol headers in the segments. Otherwise, the whole packet will be + * put into the last valid mempool. The longest match means NIC will do split + * until packets mismatch the protocol header in the segments. The rest will + * be put into the last valid pool. */ struct rte_eth_rxseg_split { struct rte_mempool *mp; /**< Memory pool to allocate segment from. */ uint16_t length; /**< Segment data length, configures split point. */ uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */ - uint32_t reserved; /**< Reserved field. */ + /** + * Proto_hdr defines a bit mask of the protocol sequence as RTE_PTYPE_*, + * configures split point. The last RTE_PTYPE* in the mask indicates the + * split position. + * + * If one protocol header is defined to split packets into two segments, + * for non-tunneling packets, the complete protocol sequence should be defined. + * For tunneling packets, for simplicity, only the tunnel and inner part of + * comple protocol sequence is required. + * If several protocol headers are defined to split packets into multi-segments, + * the repeated parts of adjacent segments should be omitted. 
+ */ + uint32_t proto_hdr; }; /**

From patchwork Sun Oct 9 20:25:40 2022
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 117737
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Yuan Wang
To: dev@dpdk.org, Aman Singh, Yuying Zhang
Cc: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, ferruh.yigit@xilinx.com,
 mdr@ashroe.eu, xiaoyun.li@intel.com, qi.z.zhang@intel.com, qiming.yang@intel.com,
 jerinjacobk@gmail.com, viacheslavo@nvidia.com, stephen@networkplumber.org,
 xuan.ding@intel.com, hpothula@marvell.com, yaqi.tang@intel.com,
 Yuan Wang, Wenxuan Wu
Subject: [PATCH v9 3/4] app/testpmd: add rxhdrs commands and parameters
Date: Mon, 10 Oct 2022 04:25:40 +0800
Message-Id: <20221009202541.352724-4-yuanx.wang@intel.com>
In-Reply-To: <20221009202541.352724-1-yuanx.wang@intel.com>
References: <20220812181552.2908067-1-yuanx.wang@intel.com>
 <20221009202541.352724-1-yuanx.wang@intel.com>

Add the command line parameter: --rxhdrs=eth[,ipv4]
Set the protocol_hdr of segments to scatter packets on receiving if the
split feature is engaged. It affects only the queues configured with the
BUFFER_SPLIT flag.

Add the interactive mode command: testpmd> set rxhdrs eth,ipv4,ipv4-udp
(the protocol sequence should be valid)

The protocol split feature is off by default. To enable protocol split,
you need to:
1. Start testpmd with multiple mempools. E.g. --mbuf-size=2048,2048
2. Configure the Rx queue with the buffer split Rx offload on.
3.
Set the protocol type of buffer split. E.g. set rxhdrs eth,eth-ipv4 (default protocols of testpmd : eth|ipv4|ipv6|ipv4-tcp|ipv6-tcp| ipv4-udp|ipv6-udp|ipv4-sctp|ipv6-sctp|grenat|inner-eth| inner-ipv4|inner-ipv6|inner-ipv4-tcp|inner-ipv6-tcp| inner-ipv4-udp|inner-ipv6-udp|inner-ipv4-sctp|inner-ipv6-sctp) Above protocols can be configured in testpmd. But the configuration can only be applied when it is supported by specific pmd. Signed-off-by: Yuan Wang Signed-off-by: Xuan Ding Signed-off-by: Wenxuan Wu --- app/test-pmd/cmdline.c | 152 +++++++++++++++++++- app/test-pmd/config.c | 108 ++++++++++++++ app/test-pmd/parameters.c | 16 ++- app/test-pmd/testpmd.c | 11 +- app/test-pmd/testpmd.h | 6 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 19 ++- 6 files changed, 303 insertions(+), 9 deletions(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 4565a3953a..57ac6828d0 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -181,7 +181,7 @@ static void cmd_help_long_parsed(void *parsed_result, "show (rxq|txq) info (port_id) (queue_id)\n" " Display information for configured RX/TX queue.\n\n" - "show config (rxtx|cores|fwd|rxoffs|rxpkts|txpkts)\n" + "show config (rxtx|cores|fwd|rxoffs|rxpkts|rxhdrs|txpkts)\n" " Display the given configuration.\n\n" "read rxd (port_id) (queue_id) (rxd_id)\n" @@ -305,6 +305,17 @@ static void cmd_help_long_parsed(void *parsed_result, " Affects only the queues configured with split" " offloads.\n\n" + "set rxhdrs (eth[,ipv4])*\n" + " Set the protocol hdr of each segment to scatter" + " packets on receiving if split feature is engaged." + " Affects only the queues configured with split" + " offloads.\n" + " Supported values: eth|ipv4|ipv6|ipv4-tcp|ipv6-tcp|" + "ipv4-udp|ipv6-udp|ipv4-sctp|ipv6-sctp|" + "grenat|inner-eth|inner-ipv4|inner-ipv6|inner-ipv4-tcp|" + "inner-ipv6-tcp|inner-ipv4-udp|inner-ipv6-udp|" + "inner-ipv4-sctp|inner-ipv6-sctp\n\n" + "set txpkts (x[,y]*)\n" " Set the length of each segment of TXONLY" " and optionally CSUM packets.\n\n" @@ -3366,6 +3377,94 @@ static cmdline_parse_inst_t cmd_stop = { }, }; +static unsigned int +get_ptype(char *value) +{ + uint32_t protocol; + + if (!strcmp(value, "eth")) + protocol = RTE_PTYPE_L2_ETHER; + else if (!strcmp(value, "ipv4")) + protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN; + else if (!strcmp(value, "ipv6")) + protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN; + else if (!strcmp(value, "ipv4-tcp")) + protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_TCP; + else if (!strcmp(value, "ipv4-udp")) + protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP; + else if (!strcmp(value, "ipv4-sctp")) + protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_SCTP; + else if (!strcmp(value, "ipv6-tcp")) + protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_L4_TCP; + else if (!strcmp(value, "ipv6-udp")) + protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_L4_UDP; + else if (!strcmp(value, "ipv6-sctp")) + protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_L4_SCTP; + else if (!strcmp(value, "grenat")) + protocol = RTE_PTYPE_TUNNEL_GRENAT; + else if (!strcmp(value, "inner-eth")) + protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER; + else if (!strcmp(value, "inner-ipv4")) + protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN; + else if (!strcmp(value, "inner-ipv6")) + 
protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN; + else if (!strcmp(value, "inner-ipv4-tcp")) + protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_TCP; + else if (!strcmp(value, "inner-ipv4-udp")) + protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP; + else if (!strcmp(value, "inner-ipv4-sctp")) + protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_SCTP; + else if (!strcmp(value, "inner-ipv6-tcp")) + protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_TCP; + else if (!strcmp(value, "inner-ipv6-udp")) + protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP; + else if (!strcmp(value, "inner-ipv6-sctp")) + protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_SCTP; + else { + fprintf(stderr, "Unsupported protocol: %s\n", value); + protocol = RTE_PTYPE_UNKNOWN; + } + + return protocol; +} +/* *** SET RXHDRSLIST *** */ + +unsigned int +parse_hdrs_list(const char *str, const char *item_name, unsigned int max_items, + unsigned int *parsed_items, int check_hdrs_sequence) +{ + unsigned int nb_item; + char *cur; + char *tmp; + unsigned int cur_item, prev_items = 0; + + nb_item = 0; + char *str2 = strdup(str); + cur = strtok_r(str2, ",", &tmp); + while (cur != NULL) { + cur_item = get_ptype(cur); + cur_item &= ~prev_items; + parsed_items[nb_item] = cur_item; + cur = strtok_r(NULL, ",", &tmp); + nb_item++; + prev_items |= cur_item; + } + if (nb_item > max_items) + fprintf(stderr, "Number of %s = %u > %u (maximum items)\n", + item_name, nb_item + 1, max_items); + free(str2); + if (!check_hdrs_sequence) + return nb_item; + return nb_item; +} /* *** SET CORELIST and PORTLIST CONFIGURATION *** */ unsigned int @@ -3735,6 +3834,50 @@ static cmdline_parse_inst_t cmd_set_rxpkts = { }, }; +/* *** SET SEGMENT HEADERS OF RX PACKETS SPLIT *** */ +struct cmd_set_rxhdrs_result { + cmdline_fixed_string_t set; + cmdline_fixed_string_t rxhdrs; + cmdline_fixed_string_t values; +}; + +static void +cmd_set_rxhdrs_parsed(void *parsed_result, + __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct cmd_set_rxhdrs_result *res; + unsigned int seg_hdrs[MAX_SEGS_BUFFER_SPLIT]; + unsigned int nb_segs; + + res = parsed_result; + nb_segs = parse_hdrs_list(res->values, "segment hdrs", + MAX_SEGS_BUFFER_SPLIT, seg_hdrs, 0); + if (nb_segs > 0) + set_rx_pkt_hdrs(seg_hdrs, nb_segs); + cmd_reconfig_device_queue(RTE_PORT_ALL, 0, 1); +} +static cmdline_parse_token_string_t cmd_set_rxhdrs_set = + TOKEN_STRING_INITIALIZER(struct cmd_set_rxhdrs_result, + set, "set"); +static cmdline_parse_token_string_t cmd_set_rxhdrs_rxhdrs = + TOKEN_STRING_INITIALIZER(struct cmd_set_rxhdrs_result, + rxhdrs, "rxhdrs"); +static cmdline_parse_token_string_t cmd_set_rxhdrs_values = + TOKEN_STRING_INITIALIZER(struct cmd_set_rxhdrs_result, + values, NULL); + +static cmdline_parse_inst_t cmd_set_rxhdrs = { + .f = cmd_set_rxhdrs_parsed, + .data = NULL, + .help_str = "set rxhdrs ", + .tokens = { + (void *)&cmd_set_rxhdrs_set, + (void *)&cmd_set_rxhdrs_rxhdrs, + (void *)&cmd_set_rxhdrs_values, + NULL, + }, +}; /* *** SET SEGMENT LENGTHS OF TXONLY PACKETS *** */ struct 
cmd_set_txpkts_result { @@ -6487,6 +6630,8 @@ static void cmd_showcfg_parsed(void *parsed_result, show_rx_pkt_offsets(); else if (!strcmp(res->what, "rxpkts")) show_rx_pkt_segments(); + else if (!strcmp(res->what, "rxhdrs")) + show_rx_pkt_hdrs(); else if (!strcmp(res->what, "txpkts")) show_tx_pkt_segments(); else if (!strcmp(res->what, "txtimes")) @@ -6499,12 +6644,12 @@ static cmdline_parse_token_string_t cmd_showcfg_port = TOKEN_STRING_INITIALIZER(struct cmd_showcfg_result, cfg, "config"); static cmdline_parse_token_string_t cmd_showcfg_what = TOKEN_STRING_INITIALIZER(struct cmd_showcfg_result, what, - "rxtx#cores#fwd#rxoffs#rxpkts#txpkts#txtimes"); + "rxtx#cores#fwd#rxoffs#rxpkts#rxhdrs#txpkts#txtimes"); static cmdline_parse_inst_t cmd_showcfg = { .f = cmd_showcfg_parsed, .data = NULL, - .help_str = "show config rxtx|cores|fwd|rxoffs|rxpkts|txpkts|txtimes", + .help_str = "show config rxtx|cores|fwd|rxoffs|rxpkts|rxhdrs|txpkts|txtimes", .tokens = { (void *)&cmd_showcfg_show, (void *)&cmd_showcfg_port, @@ -12455,6 +12600,7 @@ static cmdline_parse_ctx_t builtin_ctx[] = { (cmdline_parse_inst_t *)&cmd_set_log, (cmdline_parse_inst_t *)&cmd_set_rxoffs, (cmdline_parse_inst_t *)&cmd_set_rxpkts, + (cmdline_parse_inst_t *)&cmd_set_rxhdrs, (cmdline_parse_inst_t *)&cmd_set_txpkts, (cmdline_parse_inst_t *)&cmd_set_txsplit, (cmdline_parse_inst_t *)&cmd_set_txtimes, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 841e8efe78..dec16a9049 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -4889,6 +4889,114 @@ show_rx_pkt_segments(void) } } +static const char *get_ptype_str(uint32_t ptype) +{ + if ((ptype & (RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_TCP)) == + (RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_TCP)) + return "ipv4-tcp"; + else if ((ptype & (RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP)) == + (RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP)) + return "ipv4-udp"; + else if ((ptype & (RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_SCTP)) == + (RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_SCTP)) + return "ipv4-sctp"; + else if ((ptype & (RTE_PTYPE_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_L4_TCP)) == + (RTE_PTYPE_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_L4_TCP)) + return "ipv6-tcp"; + else if ((ptype & (RTE_PTYPE_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_L4_UDP)) == + (RTE_PTYPE_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_L4_UDP)) + return "ipv6-udp"; + else if ((ptype & (RTE_PTYPE_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_L4_SCTP)) == + (RTE_PTYPE_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_L4_SCTP)) + return "ipv6-sctp"; + else if ((ptype & RTE_PTYPE_L4_TCP) == RTE_PTYPE_L4_TCP) + return "tcp"; + else if ((ptype & RTE_PTYPE_L4_UDP) == RTE_PTYPE_L4_UDP) + return "udp"; + else if ((ptype & RTE_PTYPE_L4_SCTP) == RTE_PTYPE_L4_SCTP) + return "sctp"; + else if ((ptype & RTE_PTYPE_L3_IPV4_EXT_UNKNOWN) == RTE_PTYPE_L3_IPV4_EXT_UNKNOWN) + return "ipv4"; + else if ((ptype & RTE_PTYPE_L3_IPV6_EXT_UNKNOWN) == RTE_PTYPE_L3_IPV6_EXT_UNKNOWN) + return "ipv6"; + else if ((ptype & RTE_PTYPE_L2_ETHER) == RTE_PTYPE_L2_ETHER) + return "eth"; + + else if ((ptype & (RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_TCP)) == + (RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_TCP)) + return "inner-ipv4-tcp"; + else if ((ptype & (RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP)) == + (RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP)) + return "inner-ipv4-udp"; + else if ((ptype & (RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_SCTP)) == + (RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | 
RTE_PTYPE_INNER_L4_SCTP)) + return "inner-ipv4-sctp"; + else if ((ptype & (RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_TCP)) == + (RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_TCP)) + return "inner-ipv6-tcp"; + else if ((ptype & (RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP)) == + (RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP)) + return "inner-ipv6-udp"; + else if ((ptype & (RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_SCTP)) == + (RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_SCTP)) + return "inner-ipv6-sctp"; + else if ((ptype & RTE_PTYPE_INNER_L4_TCP) == RTE_PTYPE_INNER_L4_TCP) + return "inner-tcp"; + else if ((ptype & RTE_PTYPE_INNER_L4_UDP) == RTE_PTYPE_INNER_L4_UDP) + return "inner-udp"; + else if ((ptype & RTE_PTYPE_INNER_L4_SCTP) == RTE_PTYPE_INNER_L4_SCTP) + return "inner-sctp"; + else if ((ptype & RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN) == + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN) + return "inner-ipv4"; + else if ((ptype & RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN) == + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN) + return "inner-ipv6"; + else if ((ptype & RTE_PTYPE_INNER_L2_ETHER) == RTE_PTYPE_INNER_L2_ETHER) + return "inner-eth"; + else if ((ptype & RTE_PTYPE_TUNNEL_GRENAT) == RTE_PTYPE_TUNNEL_GRENAT) + return "grenat"; + else + return "unsupported"; +} + +void +show_rx_pkt_hdrs(void) +{ + uint32_t i, n; + + n = rx_pkt_nb_segs; + printf("Number of segments: %u\n", n); + if (n) { + printf("Packet segs: "); + for (i = 0; i < n - 1; i++) + printf("%s, ", get_ptype_str(rx_pkt_hdr_protos[i])); + printf("payload\n"); + } +} + +void +set_rx_pkt_hdrs(unsigned int *seg_hdrs, unsigned int nb_segs) +{ + unsigned int i; + + if (nb_segs + 1 > MAX_SEGS_BUFFER_SPLIT) { + printf("nb segments per RX packets=%u > " + "MAX_SEGS_BUFFER_SPLIT - ignored\n", nb_segs + 1); + return; + } + + memset(rx_pkt_hdr_protos, 0, sizeof(rx_pkt_hdr_protos)); + + for (i = 0; i < nb_segs; i++) + rx_pkt_hdr_protos[i] = (uint32_t)seg_hdrs[i]; + /* + * We calculate the number of hdrs, but payload is not included, + * so rx_pkt_nb_segs would increase 1. 
+ */ + rx_pkt_nb_segs = nb_segs + 1; +} + void set_rx_pkt_segments(unsigned int *seg_lengths, unsigned int nb_segs) { diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c index 14752f9571..ff760460ec 100644 --- a/app/test-pmd/parameters.c +++ b/app/test-pmd/parameters.c @@ -152,6 +152,7 @@ usage(char* progname) " Used mainly with PCAP drivers.\n"); printf(" --rxoffs=X[,Y]*: set RX segment offsets for split.\n"); printf(" --rxpkts=X[,Y]*: set RX segment sizes to split.\n"); + printf(" --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n"); printf(" --txpkts=X[,Y]*: set TX segment sizes" " or total packet length.\n"); printf(" --txonly-multi-flow: generate multiple flows in txonly mode\n"); @@ -660,6 +661,7 @@ launch_args_parse(int argc, char** argv) { "flow-isolate-all", 0, 0, 0 }, { "rxoffs", 1, 0, 0 }, { "rxpkts", 1, 0, 0 }, + { "rxhdrs", 1, 0, 0 }, { "txpkts", 1, 0, 0 }, { "txonly-multi-flow", 0, 0, 0 }, { "rxq-share", 2, 0, 0 }, @@ -1254,7 +1256,6 @@ launch_args_parse(int argc, char** argv) if (!strcmp(lgopts[opt_idx].name, "rxpkts")) { unsigned int seg_len[MAX_SEGS_BUFFER_SPLIT]; unsigned int nb_segs; - nb_segs = parse_item_list (optarg, "rxpkt segments", MAX_SEGS_BUFFER_SPLIT, @@ -1264,6 +1265,19 @@ launch_args_parse(int argc, char** argv) else rte_exit(EXIT_FAILURE, "bad rxpkts\n"); } + if (!strcmp(lgopts[opt_idx].name, "rxhdrs")) { + unsigned int seg_hdrs[MAX_SEGS_BUFFER_SPLIT]; + unsigned int nb_segs; + + nb_segs = parse_hdrs_list + (optarg, "rxpkt segments", + MAX_SEGS_BUFFER_SPLIT, + seg_hdrs, 0); + if (nb_segs > 0) + set_rx_pkt_hdrs(seg_hdrs, nb_segs); + else + rte_exit(EXIT_FAILURE, "bad rxpkts\n"); + } if (!strcmp(lgopts[opt_idx].name, "txpkts")) { unsigned seg_lengths[RTE_MAX_SEGS_PER_PKT]; unsigned int nb_segs; diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index bb1c901742..5b0f0838dc 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -247,6 +247,7 @@ uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT]; uint8_t rx_pkt_nb_segs; /**< Number of segments to split */ uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT]; uint8_t rx_pkt_nb_offs; /**< Number of specified offsets */ +uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT]; /* * Configuration of packet segments used by the "txonly" processing engine. @@ -2668,12 +2669,16 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i; mpx = mbuf_pool_find(socket_id, mp_n); /* Handle zero as mbuf data buffer size. */ - rx_seg->length = rx_pkt_seg_lengths[i] ? - rx_pkt_seg_lengths[i] : - mbuf_data_size[mp_n]; rx_seg->offset = i < rx_pkt_nb_offs ? rx_pkt_seg_offsets[i] : 0; rx_seg->mp = mpx ? mpx : mp; + if (rx_pkt_hdr_protos[i] != 0 && rx_pkt_seg_lengths[i] == 0) { + rx_seg->proto_hdr = rx_pkt_hdr_protos[i]; + } else { + rx_seg->length = rx_pkt_seg_lengths[i] ? + rx_pkt_seg_lengths[i] : + mbuf_data_size[mp_n]; + } } rx_conf->rx_nseg = rx_pkt_nb_segs; rx_conf->rx_seg = rx_useg; diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index ca2408cb6b..e65be323b8 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -580,6 +580,7 @@ extern uint32_t max_rx_pkt_len; * Configuration of packet segments used to scatter received packets * if some of split features is configured. 
*/ +extern uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT]; extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT]; extern uint8_t rx_pkt_nb_segs; /**< Number of segments to split */ extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT]; @@ -851,6 +852,9 @@ inc_tx_burst_stats(struct fwd_stream *fs, uint16_t nb_tx) unsigned int parse_item_list(const char *str, const char *item_name, unsigned int max_items, unsigned int *parsed_items, int check_unique_values); +unsigned int parse_hdrs_list(const char *str, const char *item_name, + unsigned int max_item, + unsigned int *parsed_items, int check_unique_values); void launch_args_parse(int argc, char** argv); void cmd_reconfig_device_queue(portid_t id, uint8_t dev, uint8_t queue); void cmdline_read_from_file(const char *filename); @@ -1006,6 +1010,8 @@ void set_record_core_cycles(uint8_t on_off); void set_record_burst_stats(uint8_t on_off); void set_verbose_level(uint16_t vb_level); void set_rx_pkt_segments(unsigned int *seg_lengths, unsigned int nb_segs); +void set_rx_pkt_hdrs(unsigned int *seg_protos, unsigned int nb_segs); +void show_rx_pkt_hdrs(void); void show_rx_pkt_segments(void); void set_rx_pkt_offsets(unsigned int *seg_offsets, unsigned int nb_offs); void show_rx_pkt_offsets(void); diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 1cf814ae89..fdad100944 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -278,7 +278,7 @@ show config Displays the configuration of the application. The configuration comes from the command-line, the runtime or the application defaults:: - testpmd> show config (rxtx|cores|fwd|rxoffs|rxpkts|txpkts|txtimes) + testpmd> show config (rxtx|cores|fwd|rxoffs|rxpkts|rxhdrs|txpkts|txtimes) The available information categories are: @@ -290,7 +290,9 @@ The available information categories are: * ``rxoffs``: Packet offsets for RX split. -* ``rxpkts``: Packets to RX split configuration. +* ``rxpkts``: Packets to RX length-based split configuration. + +* ``rxhdrs``: Packets to RX proto-based split configuration. * ``txpkts``: Packets to TX configuration. @@ -799,6 +801,19 @@ mbuf for remaining segments will be allocated from the last valid pool). Where x[,y]* represents a CSV list of values, without white space. Zero value means to use the corresponding memory pool data buffer size. +set rxhdrs +~~~~~~~~~~ + +Set the protocol headers of segments to scatter packets on receiving if split +feature is engaged. Affects only the queues configured with split +offloads (currently BUFFER_SPLIT is supported only). + + testpmd> set rxhdrs (eth[,ipv4]*) + +Where eth[,ipv4]* represents a CSV list of values, without white space. If the list +of offsets is shorter than the list of segments the zero offsets will be used +for the remaining segments. 
+ set txpkts ~~~~~~~~~~

From patchwork Sun Oct 9 20:25:41 2022
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 117738
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Yuan Wang
To: dev@dpdk.org, Ferruh Yigit, Qiming Yang, Qi Zhang
Cc: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, ferruh.yigit@xilinx.com,
 mdr@ashroe.eu, xiaoyun.li@intel.com, aman.deep.singh@intel.com,
 yuying.zhang@intel.com, jerinjacobk@gmail.com, viacheslavo@nvidia.com,
 stephen@networkplumber.org, xuan.ding@intel.com, hpothula@marvell.com,
 yaqi.tang@intel.com, Yuan Wang, Wenxuan Wu
Subject: [PATCH v9 4/4] net/ice: support buffer split in Rx path
Date: Mon, 10 Oct 2022 04:25:41 +0800
Message-Id: <20221009202541.352724-5-yuanx.wang@intel.com>
In-Reply-To: <20221009202541.352724-1-yuanx.wang@intel.com>
References: <20220812181552.2908067-1-yuanx.wang@intel.com>
 <20221009202541.352724-1-yuanx.wang@intel.com>

Add support for protocol-based buffer split in the normal Rx data paths.
When the Rx queue is configured with a specific protocol type, received
packets are split into protocol header and payload parts, and the two
parts are put into different mempools.

Currently, protocol-based buffer split is not supported in the vectorized
paths.

A new API, ice_buffer_split_supported_hdr_ptypes_get(), has been
introduced; it returns the header protocols the ice PMD supports for
splitting to the application.
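For illustration only (not part of the patch), a typical two-pool
header/payload split setup of the kind this patch enables could look as
follows. The pool names, sizes and the chosen split point are assumptions;
the split points a port really supports must be queried with
rte_eth_buffer_split_get_supported_hdr_ptypes():

	#include <string.h>
	#include <rte_ethdev.h>
	#include <rte_lcore.h>
	#include <rte_mbuf.h>

	static int
	setup_hdr_payload_split(uint16_t port_id, uint16_t queue_id,
				uint16_t nb_rxd)
	{
		struct rte_mempool *hdr_pool, *pay_pool;
		union rte_eth_rxseg segs[2];
		struct rte_eth_rxconf rxconf;

		/* Small buffers for protocol headers, regular ones for payload. */
		hdr_pool = rte_pktmbuf_pool_create("hdr_pool", 4096, 250, 0,
						   256 + RTE_PKTMBUF_HEADROOM,
						   rte_socket_id());
		pay_pool = rte_pktmbuf_pool_create("pay_pool", 4096, 250, 0,
						   RTE_MBUF_DEFAULT_BUF_SIZE,
						   rte_socket_id());
		if (hdr_pool == NULL || pay_pool == NULL)
			return -1;

		memset(segs, 0, sizeof(segs));
		memset(&rxconf, 0, sizeof(rxconf));

		/* Split after the outer Ether/IPv4/UDP headers. */
		segs[0].split.mp = hdr_pool;
		segs[0].split.proto_hdr = RTE_PTYPE_L2_ETHER |
					  RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
					  RTE_PTYPE_L4_UDP;
		/* Payload lands in the second pool; proto_hdr stays 0. */
		segs[1].split.mp = pay_pool;

		rxconf.offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
		rxconf.rx_seg = segs;
		rxconf.rx_nseg = 2;

		return rte_eth_rx_queue_setup(port_id, queue_id, nb_rxd,
					      rte_socket_id(), &rxconf, NULL);
	}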
Signed-off-by: Yuan Wang Signed-off-by: Xuan Ding Signed-off-by: Wenxuan Wu --- doc/guides/nics/features/default.ini | 1 + doc/guides/nics/features/ice.ini | 1 + doc/guides/rel_notes/release_22_11.rst | 4 + drivers/net/ice/ice_ethdev.c | 58 +++++- drivers/net/ice/ice_rxtx.c | 263 ++++++++++++++++++++++--- drivers/net/ice/ice_rxtx.h | 16 ++ drivers/net/ice/ice_rxtx_vec_common.h | 3 + 7 files changed, 314 insertions(+), 32 deletions(-) diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini index 05e47d7552..1c736ca1aa 100644 --- a/doc/guides/nics/features/default.ini +++ b/doc/guides/nics/features/default.ini @@ -7,6 +7,7 @@ ; string should not exceed feature_str_len defined in conf.py. ; [Features] +Buffer Split on Rx = Speed capabilities = Link status = Link status event = diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini index 2f4a5a9a30..b72e83e42e 100644 --- a/doc/guides/nics/features/ice.ini +++ b/doc/guides/nics/features/ice.ini @@ -7,6 +7,7 @@ ; is selected. ; [Features] +Buffer Split on Rx = P Speed capabilities = Y Link status = Y Link status event = Y diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index b4329d4cb0..537cdeee61 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -196,6 +196,10 @@ New Features * Supported protocol-based buffer split using added ``proto_hdr`` in structure ``rte_eth_rxseg_split``. +* **Updated Intel ice driver.** + + * Added protocol based buffer split support in scalar path. + Removed Items ------------- diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index 6e21c38152..8618a3e6b7 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -161,6 +161,7 @@ static int ice_timesync_read_time(struct rte_eth_dev *dev, static int ice_timesync_write_time(struct rte_eth_dev *dev, const struct timespec *timestamp); static int ice_timesync_disable(struct rte_eth_dev *dev); +static const uint32_t *ice_buffer_split_supported_hdr_ptypes_get(struct rte_eth_dev *dev); static const struct rte_pci_id pci_id_ice_map[] = { { RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E823L_BACKPLANE) }, @@ -275,6 +276,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = { .timesync_write_time = ice_timesync_write_time, .timesync_disable = ice_timesync_disable, .tm_ops_get = ice_tm_ops_get, + .buffer_split_supported_hdr_ptypes_get = ice_buffer_split_supported_hdr_ptypes_get, }; /* store statistics names and its offset in stats structure */ @@ -3802,7 +3804,8 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | RTE_ETH_RX_OFFLOAD_RSS_HASH | - RTE_ETH_RX_OFFLOAD_TIMESTAMP; + RTE_ETH_RX_OFFLOAD_TIMESTAMP | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT; dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT | RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | @@ -3814,7 +3817,7 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL; } - dev_info->rx_queue_offload_capa = 0; + dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT; dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE; dev_info->reta_size = pf->hash_lut_size; @@ -3883,6 +3886,11 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->default_rxportconf.ring_size = ICE_BUF_SIZE_MIN; 
dev_info->default_txportconf.ring_size = ICE_BUF_SIZE_MIN; + dev_info->rx_seg_capa.max_nseg = ICE_RX_MAX_NSEG; + dev_info->rx_seg_capa.multi_pools = 1; + dev_info->rx_seg_capa.offset_allowed = 0; + dev_info->rx_seg_capa.offset_align_log2 = 0; + return 0; } @@ -5960,6 +5968,52 @@ ice_timesync_disable(struct rte_eth_dev *dev) return 0; } +static const uint32_t * +ice_buffer_split_supported_hdr_ptypes_get(struct rte_eth_dev *dev __rte_unused) +{ + /* Buffer split protocol header capability. */ + static const uint32_t ptypes[] = { + /* Non tunneled */ + RTE_PTYPE_L2_ETHER, + RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP, + RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_TCP, + RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_SCTP, + RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN, + RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_L4_UDP, + RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_L4_TCP, + RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_L4_SCTP, + + /* Tunneled */ + RTE_PTYPE_TUNNEL_GRENAT, + + RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER, + + RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN, + + RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP, + RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_TCP, + RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_SCTP, + + RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP, + RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_TCP, + RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | + RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_SCTP, + + RTE_PTYPE_UNKNOWN + }; + + return ptypes; +} + static int ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev) diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index d1e1fadf9d..697251c603 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -259,7 +259,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq) /* Set buffer size as the head split is disabled. 
*/ buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM); - rxq->rx_hdr_len = 0; rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S)); rxq->max_pkt_len = RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len, @@ -288,11 +287,91 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq) memset(&rx_ctx, 0, sizeof(rx_ctx)); + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + uint32_t proto_hdr; + proto_hdr = rxq->rxseg[0].proto_hdr; + + if (proto_hdr == RTE_PTYPE_UNKNOWN) { + PMD_DRV_LOG(ERR, "Buffer split protocol must be configured"); + return -EINVAL; + } + + switch (proto_hdr & RTE_PTYPE_L4_MASK) { + case RTE_PTYPE_L4_TCP: + case RTE_PTYPE_L4_UDP: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_TCP_UDP; + goto set_hsplit_finish; + case RTE_PTYPE_L4_SCTP: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_SCTP; + goto set_hsplit_finish; + } + + switch (proto_hdr & RTE_PTYPE_L3_MASK) { + case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN: + case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_IP; + goto set_hsplit_finish; + } + + switch (proto_hdr & RTE_PTYPE_L2_MASK) { + case RTE_PTYPE_L2_ETHER: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_L2; + rx_ctx.hsplit_1 = ICE_RLAN_RX_HSPLIT_1_SPLIT_L2; + goto set_hsplit_finish; + } + + switch (proto_hdr & RTE_PTYPE_TUNNEL_MASK) { + case RTE_PTYPE_TUNNEL_GRENAT: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_1 = ICE_RLAN_RX_HSPLIT_1_SPLIT_ALWAYS; + goto set_hsplit_finish; + } + + switch (proto_hdr & RTE_PTYPE_INNER_L4_MASK) { + case RTE_PTYPE_INNER_L4_TCP: + case RTE_PTYPE_INNER_L4_UDP: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_TCP_UDP; + goto set_hsplit_finish; + case RTE_PTYPE_INNER_L4_SCTP: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_SCTP; + goto set_hsplit_finish; + } + + switch (proto_hdr & RTE_PTYPE_INNER_L3_MASK) { + case RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN: + case RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_IP; + goto set_hsplit_finish; + } + + switch (proto_hdr & RTE_PTYPE_INNER_L2_MASK) { + case RTE_PTYPE_INNER_L2_ETHER: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_L2; + goto set_hsplit_finish; + } + + PMD_DRV_LOG(ERR, "Buffer split protocol is not supported"); + return -EINVAL; + +set_hsplit_finish: + rxq->rx_hdr_len = ICE_RX_HDR_BUF_SIZE; + } else { + rxq->rx_hdr_len = 0; + rx_ctx.dtype = 0; /* No Protocol Based Buffer Split mode */ + } + rx_ctx.base = rxq->rx_ring_dma / ICE_QUEUE_BASE_ADDR_UNIT; rx_ctx.qlen = rxq->nb_rx_desc; rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S; rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S; - rx_ctx.dtype = 0; /* No Header Split mode */ #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC rx_ctx.dsize = 1; /* 32B descriptors */ #endif @@ -378,6 +457,7 @@ ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) for (i = 0; i < rxq->nb_rx_desc; i++) { volatile union ice_rx_flex_desc *rxd; + rxd = &rxq->rx_ring[i]; struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mp); if (unlikely(!mbuf)) { @@ -385,8 +465,6 @@ ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) return -ENOMEM; } - rte_mbuf_refcnt_set(mbuf, 1); - mbuf->next = NULL; mbuf->data_off = 
RTE_PKTMBUF_HEADROOM; mbuf->nb_segs = 1; mbuf->port = rxq->port_id; @@ -394,9 +472,32 @@ ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); - rxd = &rxq->rx_ring[i]; - rxd->read.pkt_addr = dma_addr; - rxd->read.hdr_addr = 0; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + rte_mbuf_refcnt_set(mbuf, 1); + mbuf->next = NULL; + rxd->read.hdr_addr = 0; + rxd->read.pkt_addr = dma_addr; + } else { + struct rte_mbuf *mbuf_pay; + mbuf_pay = rte_mbuf_raw_alloc(rxq->rxseg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_DRV_LOG(ERR, "Failed to allocate payload mbuf for RX"); + return -ENOMEM; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + rxd->read.hdr_addr = dma_addr; + /* The LS bit should be set to zero regardless of + * buffer split enablement. + */ + rxd->read.pkt_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + } + #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC rxd->read.rsvd1 = 0; rxd->read.rsvd2 = 0; @@ -420,14 +521,14 @@ _ice_rx_queue_release_mbufs(struct ice_rx_queue *rxq) for (i = 0; i < rxq->nb_rx_desc; i++) { if (rxq->sw_ring[i].mbuf) { - rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf); + rte_pktmbuf_free(rxq->sw_ring[i].mbuf); rxq->sw_ring[i].mbuf = NULL; } } if (rxq->rx_nb_avail == 0) return; for (i = 0; i < rxq->rx_nb_avail; i++) - rte_pktmbuf_free_seg(rxq->rx_stage[rxq->rx_next_avail + i]); + rte_pktmbuf_free(rxq->rx_stage[rxq->rx_next_avail + i]); rxq->rx_nb_avail = 0; } @@ -719,7 +820,7 @@ ice_fdir_program_hw_rx_queue(struct ice_rx_queue *rxq) rx_ctx.qlen = rxq->nb_rx_desc; rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S; rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S; - rx_ctx.dtype = 0; /* No Header Split mode */ + rx_ctx.dtype = 0; /* No Buffer Split mode */ rx_ctx.dsize = 1; /* 32B descriptors */ rx_ctx.rxmax = ICE_ETH_MAX_LEN; /* TPH: Transaction Layer Packet (TLP) processing hints */ @@ -1053,6 +1154,8 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, uint16_t len; int use_def_burst_func = 1; uint64_t offloads; + uint16_t n_seg = rx_conf->rx_nseg; + uint16_t i; if (nb_desc % ICE_ALIGN_RING_DESC != 0 || nb_desc > ICE_MAX_RING_DESC || @@ -1064,6 +1167,15 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + if (mp) + n_seg = 1; + + if (n_seg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_INIT_LOG(ERR, "port %u queue index %u split offload not configured", + dev->data->port_id, queue_idx); + return -EINVAL; + } + /* Free memory if needed */ if (dev->data->rx_queues[queue_idx]) { ice_rx_queue_release(dev->data->rx_queues[queue_idx]); @@ -1075,12 +1187,24 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, sizeof(struct ice_rx_queue), RTE_CACHE_LINE_SIZE, socket_id); + if (!rxq) { PMD_INIT_LOG(ERR, "Failed to allocate memory for " "rx queue data structure"); return -ENOMEM; } - rxq->mp = mp; + + rxq->rxseg_nb = n_seg; + if (n_seg > 1) { + for (i = 0; i < n_seg; i++) + memcpy(&rxq->rxseg[i], &rx_conf->rx_seg[i].split, + sizeof(struct rte_eth_rxseg_split)); + + rxq->mp = rxq->rxseg[0].mp; + } else { + rxq->mp = mp; + } + rxq->nb_rx_desc = nb_desc; rxq->rx_free_thresh = rx_conf->rx_free_thresh; rxq->queue_id = queue_idx; @@ -1551,7 +1675,7 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq) struct ice_rx_entry *rxep; struct rte_mbuf *mb; uint16_t stat_err0; - uint16_t pkt_len; + uint16_t pkt_len, hdr_len; int32_t 
s[ICE_LOOK_AHEAD], nb_dd; int32_t i, j, nb_rx = 0; uint64_t pkt_flags = 0; @@ -1606,6 +1730,27 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq) ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; mb->data_len = pkt_len; mb->pkt_len = pkt_len; + + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + pkt_len = (rte_le_to_cpu_16(rxdp[j].wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + mb->data_len = pkt_len; + mb->pkt_len = pkt_len; + } else { + mb->nb_segs = (uint16_t)(mb->nb_segs + mb->next->nb_segs); + mb->next->next = NULL; + hdr_len = rte_le_to_cpu_16(rxdp[j].wb.hdr_len_sph_flex_flags1) & + ICE_RX_FLEX_DESC_HEADER_LEN_M; + pkt_len = (rte_le_to_cpu_16(rxdp[j].wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + mb->data_len = hdr_len; + mb->pkt_len = hdr_len + pkt_len; + mb->next->data_len = pkt_len; +#ifdef RTE_ETHDEV_DEBUG_RX + rte_pktmbuf_dump(stdout, mb, rte_pktmbuf_pkt_len(mb)); +#endif + } + mb->ol_flags = 0; stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0); pkt_flags = ice_rxd_error_to_pkt_flags(stat_err0); @@ -1697,7 +1842,9 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) struct rte_mbuf *mb; uint16_t alloc_idx, i; uint64_t dma_addr; - int diag; + int diag, diag_pay; + uint64_t pay_addr; + struct rte_mbuf *mbufs_pay[rxq->rx_free_thresh]; /* Allocate buffers in bulk */ alloc_idx = (uint16_t)(rxq->rx_free_trigger - @@ -1710,6 +1857,15 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) return -ENOMEM; } + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + diag_pay = rte_mempool_get_bulk(rxq->rxseg[1].mp, + (void *)mbufs_pay, rxq->rx_free_thresh); + if (unlikely(diag_pay != 0)) { + PMD_RX_LOG(ERR, "Failed to get payload mbufs in bulk"); + return -ENOMEM; + } + } + rxdp = &rxq->rx_ring[alloc_idx]; for (i = 0; i < rxq->rx_free_thresh; i++) { if (likely(i < (rxq->rx_free_thresh - 1))) @@ -1718,13 +1874,21 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) mb = rxep[i].mbuf; rte_mbuf_refcnt_set(mb, 1); - mb->next = NULL; mb->data_off = RTE_PKTMBUF_HEADROOM; mb->nb_segs = 1; mb->port = rxq->port_id; dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb)); - rxdp[i].read.hdr_addr = 0; - rxdp[i].read.pkt_addr = dma_addr; + + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + mb->next = NULL; + rxdp[i].read.hdr_addr = 0; + rxdp[i].read.pkt_addr = dma_addr; + } else { + mb->next = mbufs_pay[i]; + pay_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbufs_pay[i])); + rxdp[i].read.hdr_addr = dma_addr; + rxdp[i].read.pkt_addr = pay_addr; + } } /* Update Rx tail register */ @@ -2333,11 +2497,13 @@ ice_recv_pkts(void *rx_queue, struct ice_rx_entry *sw_ring = rxq->sw_ring; struct ice_rx_entry *rxe; struct rte_mbuf *nmb; /* new allocated mbuf */ + struct rte_mbuf *nmb_pay; /* new allocated payload mbuf */ struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */ uint16_t rx_id = rxq->rx_tail; uint16_t nb_rx = 0; uint16_t nb_hold = 0; uint16_t rx_packet_len; + uint16_t rx_header_len; uint16_t rx_stat_err0; uint64_t dma_addr; uint64_t pkt_flags; @@ -2365,12 +2531,13 @@ ice_recv_pkts(void *rx_queue, if (!(rx_stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_DD_S))) break; - /* allocate mbuf */ + /* allocate header mbuf */ nmb = rte_mbuf_raw_alloc(rxq->mp); if (unlikely(!nmb)) { rxq->vsi->adapter->pf.dev_data->rx_mbuf_alloc_failed++; break; } + rxd = *rxdp; /* copy descriptor in ring to temp variable*/ nb_hold++; @@ -2383,24 +2550,60 @@ ice_recv_pkts(void *rx_queue, dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb)); - /** - * fill the read format of 
descriptor with physic address in - * new allocated mbuf: nmb - */ - rxdp->read.hdr_addr = 0; - rxdp->read.pkt_addr = dma_addr; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + /** + * fill the read format of descriptor with physic address in + * new allocated mbuf: nmb + */ + rxdp->read.hdr_addr = 0; + rxdp->read.pkt_addr = dma_addr; + } else { + /* allocate payload mbuf */ + nmb_pay = rte_mbuf_raw_alloc(rxq->rxseg[1].mp); + if (unlikely(!nmb_pay)) { + rxq->vsi->adapter->pf.dev_data->rx_mbuf_alloc_failed++; + break; + } - /* calculate rx_packet_len of the received pkt */ - rx_packet_len = (rte_le_to_cpu_16(rxd.wb.pkt_len) & - ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + nmb->next = nmb_pay; + nmb_pay->next = NULL; + + /** + * fill the read format of descriptor with physic address in + * new allocated mbuf: nmb + */ + rxdp->read.hdr_addr = dma_addr; + rxdp->read.pkt_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb_pay)); + } /* fill old mbuf with received descriptor: rxd */ rxm->data_off = RTE_PKTMBUF_HEADROOM; rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM)); - rxm->nb_segs = 1; - rxm->next = NULL; - rxm->pkt_len = rx_packet_len; - rxm->data_len = rx_packet_len; + if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + rxm->nb_segs = 1; + rxm->next = NULL; + /* calculate rx_packet_len of the received pkt */ + rx_packet_len = (rte_le_to_cpu_16(rxd.wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + rxm->data_len = rx_packet_len; + rxm->pkt_len = rx_packet_len; + } else { + rxm->nb_segs = (uint16_t)(rxm->nb_segs + rxm->next->nb_segs); + rxm->next->next = NULL; + /* calculate rx_packet_len of the received pkt */ + rx_header_len = rte_le_to_cpu_16(rxd.wb.hdr_len_sph_flex_flags1) & + ICE_RX_FLEX_DESC_HEADER_LEN_M; + rx_packet_len = (rte_le_to_cpu_16(rxd.wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + rxm->data_len = rx_header_len; + rxm->pkt_len = rx_header_len + rx_packet_len; + rxm->next->data_len = rx_packet_len; + +#ifdef RTE_ETHDEV_DEBUG_RX + rte_pktmbuf_dump(stdout, rxm, rte_pktmbuf_pkt_len(rxm)); +#endif + } + rxm->port = rxq->port_id; rxm->packet_type = ptype_tbl[ICE_RX_FLEX_DESC_PTYPE_M & rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)]; diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h index e1d4fe8e47..4947d5c25f 100644 --- a/drivers/net/ice/ice_rxtx.h +++ b/drivers/net/ice/ice_rxtx.h @@ -16,6 +16,9 @@ #define ICE_RX_MAX_BURST 32 #define ICE_TX_MAX_BURST 32 +/* Maximal number of segments to split. 
*/ +#define ICE_RX_MAX_NSEG 2 + #define ICE_CHK_Q_ENA_COUNT 100 #define ICE_CHK_Q_ENA_INTERVAL_US 100 @@ -45,6 +48,11 @@ extern uint64_t ice_timestamp_dynflag; extern int ice_timestamp_dynfield_offset; +/* Max header size can be 2K - 64 bytes */ +#define ICE_RX_HDR_BUF_SIZE (2048 - 64) + +#define ICE_HEADER_SPLIT_ENA BIT(0) + typedef void (*ice_rx_release_mbufs_t)(struct ice_rx_queue *rxq); typedef void (*ice_tx_release_mbufs_t)(struct ice_tx_queue *txq); typedef void (*ice_rxd_to_pkt_fields_t)(struct ice_rx_queue *rxq, @@ -55,6 +63,12 @@ struct ice_rx_entry { struct rte_mbuf *mbuf; }; +enum ice_rx_dtype { + ICE_RX_DTYPE_NO_SPLIT = 0, + ICE_RX_DTYPE_HEADER_SPLIT = 1, + ICE_RX_DTYPE_SPLIT_ALWAYS = 2, +}; + struct ice_rx_queue { struct rte_mempool *mp; /* mbuf pool to populate RX ring */ volatile union ice_rx_flex_desc *rx_ring;/* RX ring virtual address */ @@ -101,6 +115,8 @@ struct ice_rx_queue { uint32_t hw_time_high; /* high 32 bits of timestamp */ uint32_t hw_time_low; /* low 32 bits of timestamp */ uint64_t hw_time_update; /* SW time of HW record updating */ + struct rte_eth_rxseg_split rxseg[ICE_RX_MAX_NSEG]; + uint32_t rxseg_nb; }; struct ice_tx_entry { diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h index 2dd2d83650..eec6ea2134 100644 --- a/drivers/net/ice/ice_rxtx_vec_common.h +++ b/drivers/net/ice/ice_rxtx_vec_common.h @@ -291,6 +291,9 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq) if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) return -1; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) + return -1; + if (rxq->offloads & ICE_RX_VECTOR_OFFLOAD) return ICE_VECTOR_OFFLOAD_PATH;
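
Editor's note, not part of the patch: a minimal application-side sketch of how the capability added above could be exercised. It assumes the rte_eth_buffer_split_get_supported_hdr_ptypes() ethdev API introduced earlier in this series, assumes RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT was enabled in rxmode.offloads at rte_eth_dev_configure() time, and the mempool arguments (hdr_pool, pay_pool) and queue parameters are illustrative only.

/*
 * Usage sketch (assumption, not part of the patch): set up a two-segment,
 * protocol-header based buffer split Rx queue, splitting in front of the
 * UDP payload as advertised by ice_buffer_split_supported_hdr_ptypes_get().
 */
#include <errno.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int
setup_hdr_split_queue(uint16_t port_id, uint16_t queue_id, uint16_t nb_desc,
		      struct rte_mempool *hdr_pool, struct rte_mempool *pay_pool)
{
	uint32_t want = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
			RTE_PTYPE_L4_UDP;
	uint32_t ptypes[32];
	union rte_eth_rxseg rx_seg[2];
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;
	int i, num, ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* Check that the PMD can split in front of the UDP payload. */
	num = rte_eth_buffer_split_get_supported_hdr_ptypes(port_id, ptypes,
							     (int)RTE_DIM(ptypes));
	if (num <= 0)
		return -ENOTSUP;
	for (i = 0; i < num; i++)
		if (ptypes[i] == want)
			break;
	if (i == num)
		return -ENOTSUP;

	/* Segment 0 receives the headers, segment 1 the payload. */
	memset(rx_seg, 0, sizeof(rx_seg));
	rx_seg[0].split.mp = hdr_pool;
	rx_seg[0].split.proto_hdr = want;
	rx_seg[1].split.mp = pay_pool;

	rxconf = dev_info.default_rxconf;
	rxconf.rx_seg = rx_seg;
	rxconf.rx_nseg = RTE_DIM(rx_seg);

	/* mb_pool must be NULL when the segments are described via rx_seg. */
	return rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
				      rte_eth_dev_socket_id(port_id),
				      &rxconf, NULL);
}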
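
A second illustrative fragment (again an assumption, not part of the patch) showing the receive-side view that follows from the ice_recv_pkts()/ice_rx_scan_hw_ring() changes above: the head mbuf of each chain carries only the parsed protocol headers (capped by the hardware header buffer, ICE_RX_HDR_BUF_SIZE = 2048 - 64 bytes), and the payload starts in the second segment drawn from rxseg[1].mp.

/*
 * Sketch: inspect split packets after Rx. Burst size and logging are
 * illustrative; data_len of the head segment is the header length, the
 * payload length is in the next segment.
 */
#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
show_split(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t i, nb;

	nb = rte_eth_rx_burst(port_id, queue_id, pkts, RTE_DIM(pkts));
	for (i = 0; i < nb; i++) {
		struct rte_mbuf *m = pkts[i];

		printf("pkt %u: hdr %u bytes, payload %u bytes, %u segs\n",
		       (unsigned int)i, (unsigned int)m->data_len,
		       m->next != NULL ? (unsigned int)m->next->data_len : 0,
		       (unsigned int)m->nb_segs);
		rte_pktmbuf_free(m);
	}
}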