From patchwork Tue Mar 29 06:49:43 2022
X-Patchwork-Submitter: "Ding, Xuan"
X-Patchwork-Id: 108986
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: xuan.ding@intel.com
To: thomas@monjalon.net, ferruh.yigit@intel.com, andrew.rybchenko@oktetlabs.ru
Cc: dev@dpdk.org, stephen@networkplumber.org, mb@smartsharesystems.com,
 viacheslavo@nvidia.com, qi.z.zhang@intel.com, ping.yu@intel.com,
 wenxuanx.wu@intel.com, Xuan Ding , Yuan Wang
Subject: [RFC,v3 1/3] ethdev: introduce protocol type based header split
Date: Tue, 29 Mar 2022 06:49:43 +0000
Message-Id: <20220329064945.54777-2-xuan.ding@intel.com>
In-Reply-To: <20220329064945.54777-1-xuan.ding@intel.com>
References: <20220303060136.36427-1-xuan.ding@intel.com>
 <20220329064945.54777-1-xuan.ding@intel.com>

From: Xuan Ding

Header split consists of splitting a received packet into two separate
regions based on the packet content. The split happens after the packet
header and before the packet payload: the header can be posted to a
dedicated buffer and the payload to a different buffer.

Currently, Rx buffer split supports length- and offset-based packet split.
Although header split is a subset of buffer split, configuring buffer split
based on length is not suitable for NICs that do the split based on header
protocol types, because tunneling makes the conversion from length to
protocol type impossible. This patch extends the current buffer split to
support protocol type and offset based header split.

A new proto field is introduced in the rte_eth_rxseg_split structure reserved field to specify header protocol type. With Rx offload flag RTE_ETH_RX_OFFLOAD_HEADER_SPLIT enabled and protocol type configured, PMD will split the ingress packets into two separate regions. Currently, both inner and outer L2/L3/L4 level header split can be supported. For example, let's suppose we configured the Rx queue with the following segments: seg0 - pool0, off0=2B seg1 - pool1, off1=128B With header split type configured with RTE_ETH_RX_HEADER_SPLIT_UDP, the packet consists of MAC_IP_UDP_PAYLOAD will be split like following: seg0 - udp header @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0 seg1 - payload @ 128 in mbuf from pool1 The memory attributes for the split parts may differ either - for example the mempool0 and mempool1 belong to dpdk memory and external memory, respectively. Signed-off-by: Xuan Ding Signed-off-by: Yuan Wang Reviewed-by: Qi Zhang --- lib/ethdev/rte_ethdev.c | 34 ++++++++++++++++++++++------- lib/ethdev/rte_ethdev.h | 48 +++++++++++++++++++++++++++++++++++++++-- 2 files changed, 72 insertions(+), 10 deletions(-) diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 29a3d80466..144a43588c 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -1661,6 +1661,7 @@ rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg, struct rte_mempool *mpl = rx_seg[seg_idx].mp; uint32_t length = rx_seg[seg_idx].length; uint32_t offset = rx_seg[seg_idx].offset; + uint16_t proto = rx_seg[seg_idx].proto; if (mpl == NULL) { RTE_ETHDEV_LOG(ERR, "null mempool pointer\n"); @@ -1694,13 +1695,29 @@ rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg, } offset += seg_idx != 0 ? 0 : RTE_PKTMBUF_HEADROOM; *mbp_buf_size = rte_pktmbuf_data_room_size(mpl); - length = length != 0 ? length : *mbp_buf_size; - if (*mbp_buf_size < length + offset) { - RTE_ETHDEV_LOG(ERR, - "%s mbuf_data_room_size %u < %u (segment length=%u + segment offset=%u)\n", - mpl->name, *mbp_buf_size, - length + offset, length, offset); - return -EINVAL; + if (proto == 0) { + /* Check buffer split. */ + length = length != 0 ? length : *mbp_buf_size; + if (*mbp_buf_size < length + offset) { + RTE_ETHDEV_LOG(ERR, + "%s mbuf_data_room_size %u < %u (segment length=%u + segment offset=%u)\n", + mpl->name, *mbp_buf_size, + length + offset, length, offset); + return -EINVAL; + } + } else { + /* Check header split. */ + if (length != 0) { + RTE_ETHDEV_LOG(ERR, "segment length should be set to zero in header split\n"); + return -EINVAL; + } + if (*mbp_buf_size < offset) { + RTE_ETHDEV_LOG(ERR, + "%s mbuf_data_room_size %u < %u segment offset)\n", + mpl->name, *mbp_buf_size, + offset); + return -EINVAL; + } } } return 0; @@ -1778,7 +1795,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, rx_seg = (const struct rte_eth_rxseg_split *)rx_conf->rx_seg; n_seg = rx_conf->rx_nseg; - if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT || + rx_conf->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT) { ret = rte_eth_rx_queue_check_split(rx_seg, n_seg, &mbp_buf_size, &dev_info); diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 04cff8ee10..e8371b98ed 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -1197,12 +1197,31 @@ struct rte_eth_txmode { * - pool from the last valid element * - the buffer size from this pool * - zero offset + * + * Header split is a subset of buffer split. 
The split happens after the + * packet header and before the packet payload. For PMDs that do not + * support header split configuration by length, the location of the split + * needs to be specified by the header protocol type. While for buffer split, + * this field should not be configured. + * + * If RTE_ETH_RX_OFFLOAD_HEADER_SPLIT flag is set in offloads field, + * the PMD will split the received packets into two separate regions: + * - The header buffer will be allocated from the memory pool, + * specified in the first array element, the second buffer, from the + * pool in the second element. + * + * - The lengths do not need to be configured in header split. + * + * - The offsets from the segment description elements specify + * the data offset from the buffer beginning except the first mbuf. + * The first segment offset is added with RTE_PKTMBUF_HEADROOM. */ struct rte_eth_rxseg_split { struct rte_mempool *mp; /**< Memory pool to allocate segment from. */ uint16_t length; /**< Segment data length, configures split point. */ uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */ - uint32_t reserved; /**< Reserved field. */ + uint16_t proto; /**< header protocol type, configures header split point. */ + uint16_t reserved; /**< Reserved field. */ }; /** @@ -1212,7 +1231,7 @@ struct rte_eth_rxseg_split { * A common structure used to describe Rx packet segment properties. */ union rte_eth_rxseg { - /* The settings for buffer split offload. */ + /* The settings for buffer split and header split offload. */ struct rte_eth_rxseg_split split; /* The other features settings should be added here. */ }; @@ -1664,6 +1683,31 @@ struct rte_eth_conf { RTE_ETH_RX_OFFLOAD_QINQ_STRIP) #define DEV_RX_OFFLOAD_VLAN RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN) RTE_ETH_RX_OFFLOAD_VLAN +/** + * @warning + * @b EXPERIMENTAL: this enum may change without prior notice. + * This enum indicates the header split protocol type + */ +enum rte_eth_rx_header_split_protocol_type { + RTE_ETH_RX_HEADER_SPLIT_NONE = 0, + RTE_ETH_RX_HEADER_SPLIT_MAC, + RTE_ETH_RX_HEADER_SPLIT_IPV4, + RTE_ETH_RX_HEADER_SPLIT_IPV6, + RTE_ETH_RX_HEADER_SPLIT_L3, + RTE_ETH_RX_HEADER_SPLIT_TCP, + RTE_ETH_RX_HEADER_SPLIT_UDP, + RTE_ETH_RX_HEADER_SPLIT_SCTP, + RTE_ETH_RX_HEADER_SPLIT_L4, + RTE_ETH_RX_HEADER_SPLIT_INNER_MAC, + RTE_ETH_RX_HEADER_SPLIT_INNER_IPV4, + RTE_ETH_RX_HEADER_SPLIT_INNER_IPV6, + RTE_ETH_RX_HEADER_SPLIT_INNER_L3, + RTE_ETH_RX_HEADER_SPLIT_INNER_TCP, + RTE_ETH_RX_HEADER_SPLIT_INNER_UDP, + RTE_ETH_RX_HEADER_SPLIT_INNER_SCTP, + RTE_ETH_RX_HEADER_SPLIT_INNER_L4, +}; + /* * If new Rx offload capabilities are defined, they also must be * mentioned in rte_rx_offload_names in rte_ethdev.c file. 
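
For illustration, a queue setup using the extended rte_eth_rxseg_split might
look like the sketch below. This is not part of the patch: hdr_pool and
pay_pool are placeholder mempools, error handling is minimal, and it assumes
the PMD advertises RTE_ETH_RX_OFFLOAD_HEADER_SPLIT for the queue.

#include <string.h>
#include <rte_ethdev.h>

/*
 * Illustrative sketch only: configure one Rx queue so that UDP headers land
 * in hdr_pool and payloads land in pay_pool.
 */
static int
setup_header_split_queue(uint16_t port_id, uint16_t queue_id, uint16_t nb_rxd,
			 unsigned int socket_id, struct rte_mempool *hdr_pool,
			 struct rte_mempool *pay_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;
	union rte_eth_rxseg rx_useg[2];
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	memset(rx_useg, 0, sizeof(rx_useg));
	rx_useg[0].split.mp = hdr_pool;		/* header part */
	rx_useg[0].split.length = 0;		/* length stays 0 for header split */
	rx_useg[0].split.proto = RTE_ETH_RX_HEADER_SPLIT_UDP;
	rx_useg[1].split.mp = pay_pool;		/* payload part */
	rx_useg[1].split.length = 0;
	rx_useg[1].split.proto = RTE_ETH_RX_HEADER_SPLIT_UDP; /* testpmd sets it on all segments */

	rxconf = dev_info.default_rxconf;
	rxconf.offloads |= RTE_ETH_RX_OFFLOAD_HEADER_SPLIT;
	rxconf.rx_seg = rx_useg;
	rxconf.rx_nseg = 2;

	/* No single mempool argument: each segment carries its own pool. */
	return rte_eth_rx_queue_setup(port_id, queue_id, nb_rxd, socket_id,
				      &rxconf, NULL);
}

Unlike plain buffer split, the split point here is taken from the proto field
rather than from fixed segment lengths, which is what lets the PMD place the
boundary right after the (possibly tunneled) UDP header.
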
From patchwork Tue Mar 29 06:49:44 2022
X-Patchwork-Submitter: "Ding, Xuan"
X-Patchwork-Id: 108987
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: xuan.ding@intel.com
To: thomas@monjalon.net, ferruh.yigit@intel.com, andrew.rybchenko@oktetlabs.ru
Cc: dev@dpdk.org, stephen@networkplumber.org, mb@smartsharesystems.com,
 viacheslavo@nvidia.com, qi.z.zhang@intel.com, ping.yu@intel.com,
 wenxuanx.wu@intel.com, Xuan Ding , Yuan Wang
Subject: [RFC,v3 2/3] app/testpmd: add header split configuration
Date: Tue, 29 Mar 2022 06:49:44 +0000
Message-Id: <20220329064945.54777-3-xuan.ding@intel.com>
In-Reply-To: <20220329064945.54777-1-xuan.ding@intel.com>
References: <20220303060136.36427-1-xuan.ding@intel.com>
 <20220329064945.54777-1-xuan.ding@intel.com>

From: Xuan Ding

This patch adds header split configuration to testpmd. The header split
feature is off by default. To enable header split, you need to:
1. Configure the Rx queue with the rx_offload header_split on.
2. Set the protocol type of header split (see the example below).

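For illustration only (this block is not part of the patch), a possible
interactive sequence enabling UDP header split on port 0 could be the
following; it assumes the port is stopped first and that this testpmd build
accepts header_split as a per-port rx_offload keyword:

testpmd> port stop 0
testpmd> port config 0 rx_offload header_split on
testpmd> port config 0 header_split udp
testpmd> port start 0

The handler added by this patch stores the protocol in
rx_pkt_header_split_proto and marks the port for reconfiguration, so the
selected split point takes effect when the port is started again.
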
Command for set header split protocol type: testpmd> port config header_split mac|ipv4|ipv6|l3|tcp|udp|sctp| l4|inner_mac|inner_ipv4|inner_ipv6|inner_l3|inner_tcp| inner_udp|inner_sctp|inner_l4 Signed-off-by: Xuan Ding Signed-off-by: Yuan Wang --- app/test-pmd/cmdline.c | 117 +++++++++++++++++++++++++++++++++++++++++ app/test-pmd/testpmd.c | 6 ++- app/test-pmd/testpmd.h | 2 + 3 files changed, 124 insertions(+), 1 deletion(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 6ffea8e21a..abda81b4bc 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -866,6 +866,12 @@ static void cmd_help_long_parsed(void *parsed_result, " Enable or disable a per port Rx offloading" " on all Rx queues of a port\n\n" + "port config header_split mac|ipv4|ipv6|l3|tcp|udp|sctp|l4|" + "inner_mac|inner_ipv4|inner_ipv6|inner_l3|inner_tcp|" + "inner_udp|inner_sctp|inner_l4\n" + " Configure protocol for header split" + " on all Rx queues of a port\n\n" + "port (port_id) rxq (queue_id) rx_offload vlan_strip|" "ipv4_cksum|udp_cksum|tcp_cksum|tcp_lro|qinq_strip|" "outer_ipv4_cksum|macsec_strip|header_split|" @@ -16353,6 +16359,116 @@ cmdline_parse_inst_t cmd_config_per_port_rx_offload = { } }; +/* config a per port header split protocol */ +struct cmd_config_per_port_headersplit_protocol_result { + cmdline_fixed_string_t port; + cmdline_fixed_string_t config; + uint16_t port_id; + cmdline_fixed_string_t headersplit; + cmdline_fixed_string_t protocol; +}; + +cmdline_parse_token_string_t cmd_config_per_port_headersplit_protocol_result_port = + TOKEN_STRING_INITIALIZER + (struct cmd_config_per_port_headersplit_protocol_result, + port, "port"); +cmdline_parse_token_string_t cmd_config_per_port_headersplit_protocol_result_config = + TOKEN_STRING_INITIALIZER + (struct cmd_config_per_port_headersplit_protocol_result, + config, "config"); +cmdline_parse_token_num_t cmd_config_per_port_headersplit_protocol_result_port_id = + TOKEN_NUM_INITIALIZER + (struct cmd_config_per_port_headersplit_protocol_result, + port_id, RTE_UINT16); +cmdline_parse_token_string_t cmd_config_per_port_headersplit_protocol_result_headersplit = + TOKEN_STRING_INITIALIZER + (struct cmd_config_per_port_headersplit_protocol_result, + headersplit, "header_split"); +cmdline_parse_token_string_t cmd_config_per_port_headersplit_protocol_result_protocol = + TOKEN_STRING_INITIALIZER + (struct cmd_config_per_port_headersplit_protocol_result, + protocol, "mac#ipv4#ipv6#l3#tcp#udp#sctp#l4#" + "inner_mac#inner_ipv4#inner_ipv6#inner_l3#inner_tcp#" + "inner_udp#inner_sctp#inner_l4"); + +static void +cmd_config_per_port_headersplit_protocol_parsed(void *parsed_result, + __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct cmd_config_per_port_headersplit_protocol_result *res = parsed_result; + portid_t port_id = res->port_id; + struct rte_port *port = &ports[port_id]; + uint16_t protocol; + + if (port_id_is_invalid(port_id, ENABLED_WARN)) + return; + + if (port->port_status != RTE_PORT_STOPPED) { + fprintf(stderr, + "Error: Can't config offload when Port %d is not stopped\n", + port_id); + return; + } + + if (!strcmp(res->protocol, "mac")) + protocol = RTE_ETH_RX_HEADER_SPLIT_MAC; + else if (!strcmp(res->protocol, "ipv4")) + protocol = RTE_ETH_RX_HEADER_SPLIT_IPV4; + else if (!strcmp(res->protocol, "ipv6")) + protocol = RTE_ETH_RX_HEADER_SPLIT_IPV6; + else if (!strcmp(res->protocol, "l3")) + protocol = RTE_ETH_RX_HEADER_SPLIT_L3; + else if (!strcmp(res->protocol, "tcp")) + protocol = RTE_ETH_RX_HEADER_SPLIT_TCP; + else if 
(!strcmp(res->protocol, "udp")) + protocol = RTE_ETH_RX_HEADER_SPLIT_UDP; + else if (!strcmp(res->protocol, "sctp")) + protocol = RTE_ETH_RX_HEADER_SPLIT_SCTP; + else if (!strcmp(res->protocol, "l4")) + protocol = RTE_ETH_RX_HEADER_SPLIT_L4; + else if (!strcmp(res->protocol, "inner_mac")) + protocol = RTE_ETH_RX_HEADER_SPLIT_INNER_MAC; + else if (!strcmp(res->protocol, "inner_ipv4")) + protocol = RTE_ETH_RX_HEADER_SPLIT_INNER_IPV4; + else if (!strcmp(res->protocol, "inner_ipv6")) + protocol = RTE_ETH_RX_HEADER_SPLIT_INNER_IPV6; + else if (!strcmp(res->protocol, "inner_l3")) + protocol = RTE_ETH_RX_HEADER_SPLIT_INNER_L3; + else if (!strcmp(res->protocol, "inner_tcp")) + protocol = RTE_ETH_RX_HEADER_SPLIT_INNER_TCP; + else if (!strcmp(res->protocol, "inner_udp")) + protocol = RTE_ETH_RX_HEADER_SPLIT_INNER_UDP; + else if (!strcmp(res->protocol, "inner_sctp")) + protocol = RTE_ETH_RX_HEADER_SPLIT_INNER_SCTP; + else if (!strcmp(res->protocol, "inner_l4")) + protocol = RTE_ETH_RX_HEADER_SPLIT_INNER_L4; + else { + fprintf(stderr, "Unknown protocol name: %s\n", res->protocol); + return; + } + + rx_pkt_header_split_proto = protocol; + + cmd_reconfig_device_queue(port_id, 1, 1); +} + +cmdline_parse_inst_t cmd_config_per_port_headersplit_protocol = { + .f = cmd_config_per_port_headersplit_protocol_parsed, + .data = NULL, + .help_str = "port config header_split mac|ipv4|ipv6|l3|tcp|udp|sctp|l4|" + "inner_mac|inner_ipv4|inner_ipv6|inner_l3|inner_tcp|" + "inner_udp|inner_sctp|inner_l4", + .tokens = { + (void *)&cmd_config_per_port_headersplit_protocol_result_port, + (void *)&cmd_config_per_port_headersplit_protocol_result_config, + (void *)&cmd_config_per_port_headersplit_protocol_result_port_id, + (void *)&cmd_config_per_port_headersplit_protocol_result_headersplit, + (void *)&cmd_config_per_port_headersplit_protocol_result_protocol, + NULL, + } +}; + /* Enable/Disable a per queue offloading */ struct cmd_config_per_queue_rx_offload_result { cmdline_fixed_string_t port; @@ -18071,6 +18187,7 @@ cmdline_parse_ctx_t main_ctx[] = { (cmdline_parse_inst_t *)&cmd_rx_offload_get_capa, (cmdline_parse_inst_t *)&cmd_rx_offload_get_configuration, (cmdline_parse_inst_t *)&cmd_config_per_port_rx_offload, + (cmdline_parse_inst_t *)&cmd_config_per_port_headersplit_protocol, (cmdline_parse_inst_t *)&cmd_config_per_queue_rx_offload, (cmdline_parse_inst_t *)&cmd_tx_offload_get_capa, (cmdline_parse_inst_t *)&cmd_tx_offload_get_configuration, diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index fe2ce19f99..a00fa0e236 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -253,6 +253,8 @@ uint8_t tx_pkt_nb_segs = 1; /**< Number of segments in TXONLY packets */ enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF; /**< Split policy for packets to TX. */ +uint8_t rx_pkt_header_split_proto; + uint8_t txonly_multi_flow; /**< Whether multiple flows are generated in TXONLY mode. */ @@ -2568,7 +2570,8 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, int ret; if (rx_pkt_nb_segs <= 1 || - (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0) { + (((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0) && + ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT) == 0))) { rx_conf->rx_seg = NULL; rx_conf->rx_nseg = 0; ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, @@ -2592,6 +2595,7 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, rx_seg->offset = i < rx_pkt_nb_offs ? rx_pkt_seg_offsets[i] : 0; rx_seg->mp = mpx ? 
mpx : mp; + rx_seg->proto = rx_pkt_header_split_proto; } rx_conf->rx_nseg = rx_pkt_nb_segs; rx_conf->rx_seg = rx_useg; diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 31f766c965..021e2768be 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -557,6 +557,8 @@ enum tx_pkt_split { extern enum tx_pkt_split tx_pkt_split; +extern uint8_t rx_pkt_header_split_proto; + extern uint8_t txonly_multi_flow; extern uint32_t rxq_share;

From patchwork Tue Mar 29 06:49:45 2022
X-Patchwork-Submitter: "Ding, Xuan"
X-Patchwork-Id: 108988
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: xuan.ding@intel.com
To: thomas@monjalon.net, ferruh.yigit@intel.com, andrew.rybchenko@oktetlabs.ru
Cc: dev@dpdk.org, stephen@networkplumber.org, mb@smartsharesystems.com,
 viacheslavo@nvidia.com, qi.z.zhang@intel.com, ping.yu@intel.com,
 wenxuanx.wu@intel.com, Xuan Ding , Yuan Wang
Subject: [RFC,v3 3/3] net/ice: support header split in Rx data path
Date: Tue, 29 Mar 2022 06:49:45 +0000
Message-Id: <20220329064945.54777-4-xuan.ding@intel.com>
In-Reply-To: <20220329064945.54777-1-xuan.ding@intel.com>
References: <20220303060136.36427-1-xuan.ding@intel.com>
 <20220329064945.54777-1-xuan.ding@intel.com>

From: Xuan Ding

This patch adds support for header split in the normal Rx data paths.
When an Rx queue is configured with header split for a specific protocol
type, received packets are split into header and payload parts, and the
two parts are put into different mempools.

Currently, header split is not supported in vectorized paths. Signed-off-by: Xuan Ding Signed-off-by: Yuan Wang --- drivers/net/ice/ice_ethdev.c | 10 +- drivers/net/ice/ice_rxtx.c | 223 ++++++++++++++++++++++---- drivers/net/ice/ice_rxtx.h | 16 ++ drivers/net/ice/ice_rxtx_vec_common.h | 3 + 4 files changed, 221 insertions(+), 31 deletions(-) diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index 13adcf90ed..cb32265dbe 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -3713,7 +3713,8 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | RTE_ETH_RX_OFFLOAD_RSS_HASH | - RTE_ETH_RX_OFFLOAD_TIMESTAMP; + RTE_ETH_RX_OFFLOAD_TIMESTAMP | + RTE_ETH_RX_OFFLOAD_HEADER_SPLIT; dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT | RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | @@ -3725,7 +3726,7 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL; } - dev_info->rx_queue_offload_capa = 0; + dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_HEADER_SPLIT; dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE; dev_info->reta_size = pf->hash_lut_size; @@ -3794,6 +3795,11 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->default_rxportconf.ring_size = ICE_BUF_SIZE_MIN; dev_info->default_txportconf.ring_size = ICE_BUF_SIZE_MIN; + dev_info->rx_seg_capa.max_nseg = ICE_RX_MAX_NSEG; + dev_info->rx_seg_capa.multi_pools = 1; + dev_info->rx_seg_capa.offset_allowed = 0; + dev_info->rx_seg_capa.offset_align_log2 = 0; + return 0; } diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index 041f4bc91f..1f245c853b 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -282,7 +282,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq) /* Set buffer size as the head split is disabled. 
*/ buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM); - rxq->rx_hdr_len = 0; rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S)); rxq->max_pkt_len = RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len, @@ -311,11 +310,54 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq) memset(&rx_ctx, 0, sizeof(rx_ctx)); + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT) { + switch (rxq->rxseg[0].proto) { + case RTE_ETH_RX_HEADER_SPLIT_MAC: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_1 = ICE_RLAN_RX_HSPLIT_1_SPLIT_L2; + break; + case RTE_ETH_RX_HEADER_SPLIT_INNER_MAC: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_L2; + break; + case RTE_ETH_RX_HEADER_SPLIT_IPV4: + case RTE_ETH_RX_HEADER_SPLIT_IPV6: + case RTE_ETH_RX_HEADER_SPLIT_L3: + case RTE_ETH_RX_HEADER_SPLIT_INNER_IPV4: + case RTE_ETH_RX_HEADER_SPLIT_INNER_IPV6: + case RTE_ETH_RX_HEADER_SPLIT_INNER_L3: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_IP; + break; + case RTE_ETH_RX_HEADER_SPLIT_TCP: + case RTE_ETH_RX_HEADER_SPLIT_UDP: + case RTE_ETH_RX_HEADER_SPLIT_INNER_TCP: + case RTE_ETH_RX_HEADER_SPLIT_INNER_UDP: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_TCP_UDP; + break; + case RTE_ETH_RX_HEADER_SPLIT_SCTP: + case RTE_ETH_RX_HEADER_SPLIT_INNER_SCTP: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_SCTP; + break; + case RTE_ETH_RX_HEADER_SPLIT_NONE: + PMD_DRV_LOG(ERR, "Header split protocol must be configured"); + return -EINVAL; + default: + PMD_DRV_LOG(ERR, "Header split protocol is not supported"); + return -EINVAL; + } + rxq->rx_hdr_len = ICE_RX_HDR_BUF_SIZE; + } else { + rxq->rx_hdr_len = 0; + rx_ctx.dtype = 0; /* No Header Split mode */ + } + rx_ctx.base = rxq->rx_ring_dma / ICE_QUEUE_BASE_ADDR_UNIT; rx_ctx.qlen = rxq->nb_rx_desc; rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S; rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S; - rx_ctx.dtype = 0; /* No Header Split mode */ #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC rx_ctx.dsize = 1; /* 32B descriptors */ #endif @@ -401,6 +443,7 @@ ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) for (i = 0; i < rxq->nb_rx_desc; i++) { volatile union ice_rx_flex_desc *rxd; + rxd = &rxq->rx_ring[i]; struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mp); if (unlikely(!mbuf)) { @@ -408,8 +451,6 @@ ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) return -ENOMEM; } - rte_mbuf_refcnt_set(mbuf, 1); - mbuf->next = NULL; mbuf->data_off = RTE_PKTMBUF_HEADROOM; mbuf->nb_segs = 1; mbuf->port = rxq->port_id; @@ -417,9 +458,32 @@ ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); - rxd = &rxq->rx_ring[i]; - rxd->read.pkt_addr = dma_addr; - rxd->read.hdr_addr = 0; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT) { + struct rte_mbuf *mbuf_pay; + mbuf_pay = rte_mbuf_raw_alloc(rxq->rxseg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_DRV_LOG(ERR, "Failed to allocate payload mbuf for RX"); + return -ENOMEM; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + rxd->read.hdr_addr = dma_addr; + /* The LS bit should be set to zero regardless of + * header split enablement. 
+ */ + rxd->read.pkt_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + } else { + rte_mbuf_refcnt_set(mbuf, 1); + mbuf->next = NULL; + rxd->read.hdr_addr = 0; + rxd->read.pkt_addr = dma_addr; + } + #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC rxd->read.rsvd1 = 0; rxd->read.rsvd2 = 0; @@ -443,14 +507,14 @@ _ice_rx_queue_release_mbufs(struct ice_rx_queue *rxq) for (i = 0; i < rxq->nb_rx_desc; i++) { if (rxq->sw_ring[i].mbuf) { - rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf); + rte_pktmbuf_free(rxq->sw_ring[i].mbuf); rxq->sw_ring[i].mbuf = NULL; } } if (rxq->rx_nb_avail == 0) return; for (i = 0; i < rxq->rx_nb_avail; i++) - rte_pktmbuf_free_seg(rxq->rx_stage[rxq->rx_next_avail + i]); + rte_pktmbuf_free(rxq->rx_stage[rxq->rx_next_avail + i]); rxq->rx_nb_avail = 0; } @@ -1076,6 +1140,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, uint16_t len; int use_def_burst_func = 1; uint64_t offloads; + uint16_t n_seg = rx_conf->rx_nseg; if (nb_desc % ICE_ALIGN_RING_DESC != 0 || nb_desc > ICE_MAX_RING_DESC || @@ -1087,6 +1152,22 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + if (mp) + n_seg = 1; + + if (n_seg > 1) { + if (!(offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)) { + PMD_INIT_LOG(ERR, "port %u queue index %u split offload not configured", + dev->data->port_id, queue_idx); + return -EINVAL; + } + if (n_seg > ICE_RX_MAX_NSEG) { + PMD_INIT_LOG(ERR, "port %u queue index %u split seg exceed maximum", + dev->data->port_id, queue_idx); + return -EINVAL; + } + } + /* Free memory if needed */ if (dev->data->rx_queues[queue_idx]) { ice_rx_queue_release(dev->data->rx_queues[queue_idx]); @@ -1098,12 +1179,22 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, sizeof(struct ice_rx_queue), RTE_CACHE_LINE_SIZE, socket_id); + if (!rxq) { PMD_INIT_LOG(ERR, "Failed to allocate memory for " "rx queue data structure"); return -ENOMEM; } - rxq->mp = mp; + + rxq->rxseg_nb = n_seg; + if (n_seg > 1) { + rte_memcpy(rxq->rxseg, rx_conf->rx_seg, + sizeof(struct rte_eth_rxseg_split) * n_seg); + rxq->mp = rxq->rxseg[0].mp; + } else { + rxq->mp = mp; + } + rxq->nb_rx_desc = nb_desc; rxq->rx_free_thresh = rx_conf->rx_free_thresh; rxq->queue_id = queue_idx; @@ -1568,7 +1659,7 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq) struct ice_rx_entry *rxep; struct rte_mbuf *mb; uint16_t stat_err0; - uint16_t pkt_len; + uint16_t pkt_len, hdr_len; int32_t s[ICE_LOOK_AHEAD], nb_dd; int32_t i, j, nb_rx = 0; uint64_t pkt_flags = 0; @@ -1616,6 +1707,24 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq) ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; mb->data_len = pkt_len; mb->pkt_len = pkt_len; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT) { + mb->nb_segs = (uint16_t)(mb->nb_segs + mb->next->nb_segs); + mb->next->next = NULL; + hdr_len = rte_le_to_cpu_16(rxdp[j].wb.hdr_len_sph_flex_flags1) & + ICE_RX_FLEX_DESC_HEADER_LEN_M; + pkt_len = (rte_le_to_cpu_16(rxdp[j].wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + mb->data_len = hdr_len; + mb->pkt_len = hdr_len + pkt_len; + mb->next->data_len = pkt_len; + } else { + pkt_len = (rte_le_to_cpu_16(rxdp[j].wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + mb->data_len = pkt_len; + mb->pkt_len = pkt_len; + } + mb->ol_flags = 0; stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0); pkt_flags = ice_rxd_error_to_pkt_flags(stat_err0); @@ -1695,7 +1804,9 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) struct rte_mbuf *mb; uint16_t alloc_idx, i; uint64_t dma_addr; - int diag; + int diag, diag_pay; + uint64_t 
pay_addr; + struct rte_mbuf *mbufs_pay[rxq->rx_free_thresh]; /* Allocate buffers in bulk */ alloc_idx = (uint16_t)(rxq->rx_free_trigger - @@ -1708,6 +1819,15 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) return -ENOMEM; } + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT) { + diag_pay = rte_mempool_get_bulk(rxq->rxseg[1].mp, + (void *)mbufs_pay, rxq->rx_free_thresh); + if (unlikely(diag_pay != 0)) { + PMD_RX_LOG(ERR, "Failed to get payload mbufs in bulk"); + return -ENOMEM; + } + } + rxdp = &rxq->rx_ring[alloc_idx]; for (i = 0; i < rxq->rx_free_thresh; i++) { if (likely(i < (rxq->rx_free_thresh - 1))) @@ -1716,13 +1836,21 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) mb = rxep[i].mbuf; rte_mbuf_refcnt_set(mb, 1); - mb->next = NULL; mb->data_off = RTE_PKTMBUF_HEADROOM; mb->nb_segs = 1; mb->port = rxq->port_id; dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb)); - rxdp[i].read.hdr_addr = 0; - rxdp[i].read.pkt_addr = dma_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT) { + mb->next = mbufs_pay[i]; + pay_addr = rte_mbuf_data_iova_default(mbufs_pay[i]); + rxdp[i].read.hdr_addr = dma_addr; + rxdp[i].read.pkt_addr = rte_cpu_to_le_64(pay_addr); + } else { + mb->next = NULL; + rxdp[i].read.hdr_addr = 0; + rxdp[i].read.pkt_addr = dma_addr; + } } /* Update Rx tail register */ @@ -2315,11 +2443,13 @@ ice_recv_pkts(void *rx_queue, struct ice_rx_entry *sw_ring = rxq->sw_ring; struct ice_rx_entry *rxe; struct rte_mbuf *nmb; /* new allocated mbuf */ + struct rte_mbuf *nmb_pay; /* new allocated payload mbuf */ struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */ uint16_t rx_id = rxq->rx_tail; uint16_t nb_rx = 0; uint16_t nb_hold = 0; uint16_t rx_packet_len; + uint16_t rx_header_len; uint16_t rx_stat_err0; uint64_t dma_addr; uint64_t pkt_flags; @@ -2342,12 +2472,16 @@ ice_recv_pkts(void *rx_queue, if (!(rx_stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_DD_S))) break; - /* allocate mbuf */ + if (rx_stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_HBO_S)) + break; + + /* allocate header mbuf */ nmb = rte_mbuf_raw_alloc(rxq->mp); if (unlikely(!nmb)) { rxq->vsi->adapter->pf.dev_data->rx_mbuf_alloc_failed++; break; } + rxd = *rxdp; /* copy descriptor in ring to temp variable*/ nb_hold++; @@ -2360,24 +2494,55 @@ ice_recv_pkts(void *rx_queue, dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb)); - /** - * fill the read format of descriptor with physic address in - * new allocated mbuf: nmb - */ - rxdp->read.hdr_addr = 0; - rxdp->read.pkt_addr = dma_addr; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT) { + /* allocate payload mbuf */ + nmb_pay = rte_mbuf_raw_alloc(rxq->rxseg[1].mp); + if (unlikely(!nmb_pay)) { + rxq->vsi->adapter->pf.dev_data->rx_mbuf_alloc_failed++; + break; + } + + nmb->next = nmb_pay; + nmb_pay->next = NULL; - /* calculate rx_packet_len of the received pkt */ - rx_packet_len = (rte_le_to_cpu_16(rxd.wb.pkt_len) & - ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + /** + * fill the read format of descriptor with physic address in + * new allocated mbuf: nmb + */ + rxdp->read.hdr_addr = dma_addr; + rxdp->read.pkt_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb_pay)); + } else { + /** + * fill the read format of descriptor with physic address in + * new allocated mbuf: nmb + */ + rxdp->read.hdr_addr = 0; + rxdp->read.pkt_addr = dma_addr; + } /* fill old mbuf with received descriptor: rxd */ rxm->data_off = RTE_PKTMBUF_HEADROOM; rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM)); - rxm->nb_segs = 1; - rxm->next = NULL; - rxm->pkt_len = 
rx_packet_len; - rxm->data_len = rx_packet_len; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT) { + rxm->nb_segs = (uint16_t)(rxm->nb_segs + rxm->next->nb_segs); + rxm->next->next = NULL; + /* calculate rx_packet_len of the received pkt */ + rx_header_len = rte_le_to_cpu_16(rxd.wb.hdr_len_sph_flex_flags1) & + ICE_RX_FLEX_DESC_HEADER_LEN_M; + rx_packet_len = (rte_le_to_cpu_16(rxd.wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + rxm->data_len = rx_header_len; + rxm->pkt_len = rx_header_len + rx_packet_len; + rxm->next->data_len = rx_packet_len; + } else { + rxm->nb_segs = 1; + rxm->next = NULL; + /* calculate rx_packet_len of the received pkt */ + rx_packet_len = (rte_le_to_cpu_16(rxd.wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + rxm->data_len = rx_packet_len; + rxm->pkt_len = rx_packet_len; + } rxm->port = rxq->port_id; rxm->packet_type = ptype_tbl[ICE_RX_FLEX_DESC_PTYPE_M & rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)]; diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h index bb18a01951..611dbc8503 100644 --- a/drivers/net/ice/ice_rxtx.h +++ b/drivers/net/ice/ice_rxtx.h @@ -16,6 +16,9 @@ #define ICE_RX_MAX_BURST 32 #define ICE_TX_MAX_BURST 32 +/* Maximal number of segments to split. */ +#define ICE_RX_MAX_NSEG 2 + #define ICE_CHK_Q_ENA_COUNT 100 #define ICE_CHK_Q_ENA_INTERVAL_US 100 @@ -43,6 +46,11 @@ extern uint64_t ice_timestamp_dynflag; extern int ice_timestamp_dynfield_offset; +/* Max header size can be 2K - 64 bytes */ +#define ICE_RX_HDR_BUF_SIZE (2048 - 64) + +#define ICE_HEADER_SPLIT_ENA BIT(0) + typedef void (*ice_rx_release_mbufs_t)(struct ice_rx_queue *rxq); typedef void (*ice_tx_release_mbufs_t)(struct ice_tx_queue *txq); typedef void (*ice_rxd_to_pkt_fields_t)(struct ice_rx_queue *rxq, @@ -53,6 +61,12 @@ struct ice_rx_entry { struct rte_mbuf *mbuf; }; +enum ice_rx_dtype { + ICE_RX_DTYPE_NO_SPLIT = 0, + ICE_RX_DTYPE_HEADER_SPLIT = 1, + ICE_RX_DTYPE_SPLIT_ALWAYS = 2, +}; + struct ice_rx_queue { struct rte_mempool *mp; /* mbuf pool to populate RX ring */ volatile union ice_rx_flex_desc *rx_ring;/* RX ring virtual address */ @@ -95,6 +109,8 @@ struct ice_rx_queue { uint32_t time_high; uint32_t hw_register_set; const struct rte_memzone *mz; + struct rte_eth_rxseg_split rxseg[ICE_RX_MAX_NSEG]; + uint32_t rxseg_nb; }; struct ice_tx_entry { diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h index 2dd2d83650..7a155a66f2 100644 --- a/drivers/net/ice/ice_rxtx_vec_common.h +++ b/drivers/net/ice/ice_rxtx_vec_common.h @@ -291,6 +291,9 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq) if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) return -1; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT) + return -1; + if (rxq->offloads & ICE_RX_VECTOR_OFFLOAD) return ICE_VECTOR_OFFLOAD_PATH;
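
For completeness, here is a minimal, hypothetical sketch (not part of this
series) showing how an application would observe the split on such a queue;
it relies only on the mbuf layout produced by the driver code above, i.e. the
header in the first segment and the payload chained through m->next:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/*
 * Hypothetical helper: receive a burst from a header-split queue and print
 * how each packet was split. The header mbuf is the first segment and the
 * payload mbuf is chained behind it, as filled in by ice_recv_pkts().
 */
static void
show_header_split_burst(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb, i;

	nb = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
	for (i = 0; i < nb; i++) {
		struct rte_mbuf *m = pkts[i];

		printf("pkt_len=%u nb_segs=%u hdr_len=%u pay_len=%u\n",
		       (unsigned int)m->pkt_len, (unsigned int)m->nb_segs,
		       (unsigned int)m->data_len,
		       m->next != NULL ? (unsigned int)m->next->data_len : 0U);
		rte_pktmbuf_free(m);	/* frees the whole header+payload chain */
	}
}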