From patchwork Tue Apr 26 11:13:36 2022
X-Patchwork-Submitter: "Wu, WenxuanX"
X-Patchwork-Id: 110277
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: wenxuanx.wu@intel.com
To: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, xiaoyun.li@intel.com, ferruh.yigit@xilinx.com, aman.deep.singh@intel.com, dev@dpdk.org, yuying.zhang@intel.com, qi.z.zhang@intel.com, jerinjacobk@gmail.com
Cc: stephen@networkplumber.org, mb@smartsharesystems.com, viacheslavo@nvidia.com, ping.yu@intel.com, xuan.ding@intel.com, yuanx.wang@intel.com, wenxuanx.wu@intel.com
Subject: [PATCH v5 1/4] lib/ethdev: introduce protocol type based buffer split
Date: Tue, 26 Apr 2022 11:13:36 +0000
Message-Id: <20220426111338.1074785-2-wenxuanx.wu@intel.com>
In-Reply-To: <20220426111338.1074785-1-wenxuanx.wu@intel.com>
References: <20220402104109.472078-2-wenxuanx.wu@intel.com> <20220426111338.1074785-1-wenxuanx.wu@intel.com>

From: Wenxuan Wu

Protocol based buffer split consists of splitting a received packet into two separate regions based on its content. The split happens after the packet protocol header and before the packet payload. Splitting is usually between the packet protocol header that can be posted to a dedicated buffer and the packet payload that can be posted to a different buffer. Currently, Rx buffer split supports length and offset based packet split.
Protocol split builds on buffer split, but configuring a split length is not suitable for NICs that split based on protocol types, because tunneling makes it impossible to map a length to a protocol type. This patch extends the current buffer split to support protocol and offset based buffer split. A new proto field is introduced in the reserved field of the rte_eth_rxseg_split structure to specify the header protocol type. With the Rx queue offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT enabled and the corresponding protocol type configured, the PMD will split the ingress packets into two separate regions. Currently, both inner and outer L2/L3/L4 level protocol based buffer split are supported.

For example, let's suppose we configured the Rx queue with the following segments:
seg0 - pool0, off0=2B
seg1 - pool1, off1=128B

With the protocol split type configured as RTE_PTYPE_L4_UDP, a packet consisting of MAC_IP_UDP_PAYLOAD will be split as follows:
seg0 - udp header @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
seg1 - payload @ 128 in mbuf from pool1

The memory attributes of the split parts may differ - for example, mempool0 and mempool1 may belong to DPDK memory and external memory, respectively.

Signed-off-by: Xuan Ding Signed-off-by: Yuan Wang Signed-off-by: Wenxuan Wu Reviewed-by: Qi Zhang --- lib/ethdev/rte_ethdev.c | 36 +++++++++++++++++++++++++++++------- lib/ethdev/rte_ethdev.h | 15 ++++++++++++++- 2 files changed, 43 insertions(+), 8 deletions(-) diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 29a3d80466..1a2bc172ab 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -1661,6 +1661,7 @@ rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg, struct rte_mempool *mpl = rx_seg[seg_idx].mp; uint32_t length = rx_seg[seg_idx].length; uint32_t offset = rx_seg[seg_idx].offset; + uint32_t proto = rx_seg[seg_idx].proto; if (mpl == NULL) { RTE_ETHDEV_LOG(ERR, "null mempool pointer\n"); @@ -1694,13 +1695,34 @@ rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg, } offset += seg_idx != 0 ? 0 : RTE_PKTMBUF_HEADROOM; *mbp_buf_size = rte_pktmbuf_data_room_size(mpl); - length = length != 0 ? length : *mbp_buf_size; - if (*mbp_buf_size < length + offset) { - RTE_ETHDEV_LOG(ERR, - "%s mbuf_data_room_size %u < %u (segment length=%u + segment offset=%u)\n", - mpl->name, *mbp_buf_size, - length + offset, length, offset); - return -EINVAL; + if (proto == 0) { + length = length != 0 ? length : *mbp_buf_size; + if (*mbp_buf_size < length + offset) { + RTE_ETHDEV_LOG(ERR, + "%s mbuf_data_room_size %u < %u (segment length=%u + segment offset=%u)\n", + mpl->name, *mbp_buf_size, + length + offset, length, offset); + return -EINVAL; + } + } else { + /* Ensure n_seg is 2 in protocol based buffer split. */ + if (n_seg != 2) { + RTE_ETHDEV_LOG(ERR, "number of buffer split protocol segments should be 2.\n"); + return -EINVAL; + } + /* Length and protocol are exclusive here, so make sure length is 0 in protocol based buffer split. 
*/ + if (length != 0) { + RTE_ETHDEV_LOG(ERR, "segment length should be set to zero in buffer split\n"); + return -EINVAL; + } + if (*mbp_buf_size < offset) { + RTE_ETHDEV_LOG(ERR, + "%s mbuf_data_room_size %u < %u segment offset)\n", + mpl->name, *mbp_buf_size, + offset); + return -EINVAL; + } } } return 0; diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 04cff8ee10..ef7f59aae6 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -1187,6 +1187,9 @@ struct rte_eth_txmode { * mbuf) the following data will be pushed to the next segment * up to its own length, and so on. * + * + * - The proto in the elements defines the split position of received packets. + * * - If the length in the segment description element is zero * the actual buffer size will be deduced from the appropriate * memory pool properties. @@ -1197,12 +1200,21 @@ struct rte_eth_txmode { * - pool from the last valid element * - the buffer size from this pool * - zero offset + * + * - Length based buffer split: + * - mp, length, offset should be configured. + * - The proto should not be configured in length split. Zero default. + * + * - Protocol based buffer split: + * - mp, offset, proto should be configured. + * - The length should not be configured in protocol split. Zero default. + * */ struct rte_eth_rxseg_split { struct rte_mempool *mp; /**< Memory pool to allocate segment from. */ uint16_t length; /**< Segment data length, configures split point. */ uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */ - uint32_t reserved; /**< Reserved field. */ + uint32_t proto; /**< Protocol of buffer split, determines protocol split point. */ }; /** @@ -1664,6 +1676,7 @@ struct rte_eth_conf { RTE_ETH_RX_OFFLOAD_QINQ_STRIP) #define DEV_RX_OFFLOAD_VLAN RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN) RTE_ETH_RX_OFFLOAD_VLAN + /* * If new Rx offload capabilities are defined, they also must be * mentioned in rte_rx_offload_names in rte_ethdev.c file. 
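To make the intended use of the extended API concrete, the following is a minimal sketch (not part of the patch) of an application setting up a protocol based split Rx queue. It assumes this series is applied and that the PMD reports RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT (as net/ice does in patch 3/4, possibly also requiring the offload in rte_eth_conf.rxmode.offloads for other PMDs); the mempools hdr_pool and pay_pool, the queue index and the ring size are placeholders:

#include <string.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Sketch: split incoming packets at the (outer) UDP header on Rx queue 0. */
static int
setup_proto_split_rxq(uint16_t port_id, struct rte_mempool *hdr_pool,
		      struct rte_mempool *pay_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;
	union rte_eth_rxseg rx_segs[2];
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	memset(rx_segs, 0, sizeof(rx_segs));
	/* seg0: protocol header; length stays 0 for protocol based split. */
	rx_segs[0].split.mp = hdr_pool;
	rx_segs[0].split.length = 0;
	rx_segs[0].split.offset = 0;
	rx_segs[0].split.proto = RTE_PTYPE_L4_UDP;
	/* seg1: payload mbufs come from the second mempool. */
	rx_segs[1].split.mp = pay_pool;
	rx_segs[1].split.length = 0;
	rx_segs[1].split.offset = 0;
	rx_segs[1].split.proto = RTE_PTYPE_L4_UDP; /* testpmd sets proto on every segment */

	rxconf = dev_info.default_rxconf;
	rxconf.offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
	rxconf.rx_nseg = 2;
	rxconf.rx_seg = rx_segs;

	/* mb_pool is NULL: the per-segment mempools above are used instead. */
	return rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(),
				      &rxconf, NULL);
}

After the queue is started, each received packet is delivered as a two-segment mbuf chain: the header part from hdr_pool and the payload part from pay_pool.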
From patchwork Tue Apr 26 11:13:37 2022
X-Patchwork-Submitter: "Wu, WenxuanX"
X-Patchwork-Id: 110278
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: wenxuanx.wu@intel.com
To: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, xiaoyun.li@intel.com, ferruh.yigit@xilinx.com, aman.deep.singh@intel.com, dev@dpdk.org, yuying.zhang@intel.com, qi.z.zhang@intel.com, jerinjacobk@gmail.com
Cc: stephen@networkplumber.org, mb@smartsharesystems.com, viacheslavo@nvidia.com, ping.yu@intel.com, xuan.ding@intel.com, yuanx.wang@intel.com, wenxuanx.wu@intel.com
Subject: [PATCH v5 2/4] app/testpmd: add proto based buffer split config
Date: Tue, 26 Apr 2022 11:13:37 +0000
Message-Id: <20220426111338.1074785-3-wenxuanx.wu@intel.com>
In-Reply-To: <20220426111338.1074785-1-wenxuanx.wu@intel.com>
References: <20220402104109.472078-2-wenxuanx.wu@intel.com> <20220426111338.1074785-1-wenxuanx.wu@intel.com>

From: Wenxuan Wu

This patch adds protocol based buffer split configuration in testpmd. The protocol split feature is off by default. To enable protocol split, you need:
1. Start testpmd with two mempools. e.g. --mbuf-size=2048,2048
2. Configure Rx queue with rx_offload buffer split on.
3. Set the protocol type of buffer split.
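As a concrete example of the three steps above, an interactive session that splits received packets at the UDP header could look as follows (a sketch, not part of the patch: the EAL core/device arguments and the PCI address are placeholders, port 0 is assumed, and the port must be stopped while the offload and protocol are reconfigured; the general command forms are listed under "Testpmd View" just below):

dpdk-testpmd -l 0-3 -n 4 -a 0000:18:00.0 -- -i --mbuf-size=2048,2048
testpmd> port stop 0
testpmd> port config 0 rx_offload buffer_split on
testpmd> port config 0 buffer_split udp
testpmd> port start 0
testpmd> set fwd rxonly
testpmd> start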
Testpmd View: testpmd>port config rx_offload buffer_split on testpmd>port config buffer_split mac|ipv4|ipv6|l3|tcp|udp|sctp| l4|inner_mac|inner_ipv4|inner_ipv6|inner_l3|inner_tcp| inner_udp|inner_sctp|inner_l4 Signed-off-by: Xuan Ding Signed-off-by: Yuan Wang Signed-off-by: Wenxuan Wu Reviewed-by: Qi Zhang --- app/test-pmd/cmdline.c | 118 +++++++++++++++++++++++++++++++++++++++++ app/test-pmd/testpmd.c | 7 +-- app/test-pmd/testpmd.h | 2 + 3 files changed, 124 insertions(+), 3 deletions(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 6ffea8e21a..5cd4beca95 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -866,6 +866,12 @@ static void cmd_help_long_parsed(void *parsed_result, " Enable or disable a per port Rx offloading" " on all Rx queues of a port\n\n" + "port config buffer_split mac|ipv4|ipv6|l3|tcp|udp|sctp|l4|" + "inner_mac|inner_ipv4|inner_ipv6|inner_l3|inner_tcp|" + "inner_udp|inner_sctp|inner_l4\n" + " Configure protocol type for buffer split" + " on all Rx queues of a port\n\n" + "port (port_id) rxq (queue_id) rx_offload vlan_strip|" "ipv4_cksum|udp_cksum|tcp_cksum|tcp_lro|qinq_strip|" "outer_ipv4_cksum|macsec_strip|header_split|" @@ -16353,6 +16359,117 @@ cmdline_parse_inst_t cmd_config_per_port_rx_offload = { } }; +/* config a per port buffer split protocol */ +struct cmd_config_per_port_buffer_split_protocol_result { + cmdline_fixed_string_t port; + cmdline_fixed_string_t config; + uint16_t port_id; + cmdline_fixed_string_t buffer_split; + cmdline_fixed_string_t protocol; +}; + +cmdline_parse_token_string_t cmd_config_per_port_buffer_split_protocol_result_port = + TOKEN_STRING_INITIALIZER + (struct cmd_config_per_port_buffer_split_protocol_result, + port, "port"); +cmdline_parse_token_string_t cmd_config_per_port_buffer_split_protocol_result_config = + TOKEN_STRING_INITIALIZER + (struct cmd_config_per_port_buffer_split_protocol_result, + config, "config"); +cmdline_parse_token_num_t cmd_config_per_port_buffer_split_protocol_result_port_id = + TOKEN_NUM_INITIALIZER + (struct cmd_config_per_port_buffer_split_protocol_result, + port_id, RTE_UINT16); +cmdline_parse_token_string_t cmd_config_per_port_buffer_split_protocol_result_buffer_split = + TOKEN_STRING_INITIALIZER + (struct cmd_config_per_port_buffer_split_protocol_result, + buffer_split, "buffer_split"); +cmdline_parse_token_string_t cmd_config_per_port_buffer_split_protocol_result_protocol = + TOKEN_STRING_INITIALIZER + (struct cmd_config_per_port_buffer_split_protocol_result, + protocol, "mac#ipv4#ipv6#l3#tcp#udp#sctp#l4#" + "inner_mac#inner_ipv4#inner_ipv6#inner_l3#inner_tcp#" + "inner_udp#inner_sctp#inner_l4"); + +static void +cmd_config_per_port_buffer_split_protocol_parsed(void *parsed_result, + __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct cmd_config_per_port_buffer_split_protocol_result *res = parsed_result; + portid_t port_id = res->port_id; + struct rte_port *port = &ports[port_id]; + uint32_t protocol; + + if (port_id_is_invalid(port_id, ENABLED_WARN)) + return; + + if (port->port_status != RTE_PORT_STOPPED) { + fprintf(stderr, + "Error: Can't config offload when Port %d is not stopped\n", + port_id); + return; + } + + if (!strcmp(res->protocol, "mac")) + protocol = RTE_PTYPE_L2_ETHER; + else if (!strcmp(res->protocol, "ipv4")) + protocol = RTE_PTYPE_L3_IPV4; + else if (!strcmp(res->protocol, "ipv6")) + protocol = RTE_PTYPE_L3_IPV6; + else if (!strcmp(res->protocol, "l3")) + protocol = RTE_PTYPE_L3_IPV4|RTE_PTYPE_L3_IPV6; + else if 
(!strcmp(res->protocol, "tcp")) + protocol = RTE_PTYPE_L4_TCP; + else if (!strcmp(res->protocol, "udp")) + protocol = RTE_PTYPE_L4_UDP; + else if (!strcmp(res->protocol, "sctp")) + protocol = RTE_PTYPE_L4_SCTP; + else if (!strcmp(res->protocol, "l4")) + protocol = RTE_PTYPE_L4_TCP|RTE_PTYPE_L4_UDP|RTE_PTYPE_L4_SCTP; + else if (!strcmp(res->protocol, "inner_mac")) + protocol = RTE_PTYPE_INNER_L2_ETHER; + else if (!strcmp(res->protocol, "inner_ipv4")) + protocol = RTE_PTYPE_INNER_L3_IPV4; + else if (!strcmp(res->protocol, "inner_ipv6")) + protocol = RTE_PTYPE_INNER_L3_IPV6; + else if (!strcmp(res->protocol, "inner_l3")) + protocol = RTE_PTYPE_INNER_L3_IPV4|RTE_PTYPE_INNER_L3_IPV6; + else if (!strcmp(res->protocol, "inner_tcp")) + protocol = RTE_PTYPE_INNER_L4_TCP; + else if (!strcmp(res->protocol, "inner_udp")) + protocol = RTE_PTYPE_INNER_L4_UDP; + else if (!strcmp(res->protocol, "inner_sctp")) + protocol = RTE_PTYPE_INNER_L4_SCTP; + else if (!strcmp(res->protocol, "inner_l4")) + protocol = RTE_PTYPE_INNER_L4_TCP|RTE_PTYPE_INNER_L4_UDP|RTE_PTYPE_INNER_L4_SCTP; + else { + fprintf(stderr, "Unknown protocol name: %s\n", res->protocol); + return; + } + + rx_pkt_buffer_split_proto = protocol; + rx_pkt_nb_segs = 2; + + cmd_reconfig_device_queue(port_id, 1, 1); +} + +cmdline_parse_inst_t cmd_config_per_port_buffer_split_protocol = { + .f = cmd_config_per_port_buffer_split_protocol_parsed, + .data = NULL, + .help_str = "port config buffer_split mac|ipv4|ipv6|l3|tcp|udp|sctp|l4|" + "inner_mac|inner_ipv4|inner_ipv6|inner_l3|inner_tcp|" + "inner_udp|inner_sctp|inner_l4", + .tokens = { + (void *)&cmd_config_per_port_buffer_split_protocol_result_port, + (void *)&cmd_config_per_port_buffer_split_protocol_result_config, + (void *)&cmd_config_per_port_buffer_split_protocol_result_port_id, + (void *)&cmd_config_per_port_buffer_split_protocol_result_buffer_split, + (void *)&cmd_config_per_port_buffer_split_protocol_result_protocol, + NULL, + } +}; + /* Enable/Disable a per queue offloading */ struct cmd_config_per_queue_rx_offload_result { cmdline_fixed_string_t port; @@ -18071,6 +18188,7 @@ cmdline_parse_ctx_t main_ctx[] = { (cmdline_parse_inst_t *)&cmd_rx_offload_get_capa, (cmdline_parse_inst_t *)&cmd_rx_offload_get_configuration, (cmdline_parse_inst_t *)&cmd_config_per_port_rx_offload, + (cmdline_parse_inst_t *)&cmd_config_per_port_buffer_split_protocol, (cmdline_parse_inst_t *)&cmd_config_per_queue_rx_offload, (cmdline_parse_inst_t *)&cmd_tx_offload_get_capa, (cmdline_parse_inst_t *)&cmd_tx_offload_get_configuration, diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index fe2ce19f99..bd77d6bf10 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -253,6 +253,8 @@ uint8_t tx_pkt_nb_segs = 1; /**< Number of segments in TXONLY packets */ enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF; /**< Split policy for packets to TX. */ +uint32_t rx_pkt_buffer_split_proto; + uint8_t txonly_multi_flow; /**< Whether multiple flows are generated in TXONLY mode. */ @@ -2586,12 +2588,11 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, mp_n = (i > mbuf_data_size_n) ? mbuf_data_size_n - 1 : i; mpx = mbuf_pool_find(socket_id, mp_n); /* Handle zero as mbuf data buffer size. */ - rx_seg->length = rx_pkt_seg_lengths[i] ? - rx_pkt_seg_lengths[i] : - mbuf_data_size[mp_n]; + rx_seg->length = rx_pkt_seg_lengths[i]; rx_seg->offset = i < rx_pkt_nb_offs ? rx_pkt_seg_offsets[i] : 0; rx_seg->mp = mpx ? 
mpx : mp; + rx_seg->proto = rx_pkt_buffer_split_proto; } rx_conf->rx_nseg = rx_pkt_nb_segs; rx_conf->rx_seg = rx_useg; diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 31f766c965..707e1781d4 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -557,6 +557,8 @@ enum tx_pkt_split { extern enum tx_pkt_split tx_pkt_split; +extern uint32_t rx_pkt_buffer_split_proto; + extern uint8_t txonly_multi_flow; extern uint32_t rxq_share;

From patchwork Tue Apr 26 11:13:38 2022
X-Patchwork-Submitter: "Wu, WenxuanX"
X-Patchwork-Id: 110279
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: wenxuanx.wu@intel.com
To: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, xiaoyun.li@intel.com, ferruh.yigit@xilinx.com, aman.deep.singh@intel.com, dev@dpdk.org, yuying.zhang@intel.com, qi.z.zhang@intel.com, jerinjacobk@gmail.com
Cc: stephen@networkplumber.org, mb@smartsharesystems.com, viacheslavo@nvidia.com, ping.yu@intel.com, xuan.ding@intel.com, yuanx.wang@intel.com, wenxuanx.wu@intel.com
Subject: [PATCH v5 3/4] net/ice: support proto based buf split in Rx path
Date: Tue, 26 Apr 2022 11:13:38 +0000
Message-Id: <20220426111338.1074785-4-wenxuanx.wu@intel.com>
In-Reply-To: <20220426111338.1074785-1-wenxuanx.wu@intel.com>
References: <20220402104109.472078-2-wenxuanx.wu@intel.com> <20220426111338.1074785-1-wenxuanx.wu@intel.com>

From: Wenxuan Wu

This patch adds support for proto based buffer split in normal Rx data paths.
When the Rx queue is configured with a specific protocol type, received packets are split into a protocol header part and a payload part, and the two parts are put into different mempools. Currently, protocol based buffer split is not supported in the vectorized paths.

Signed-off-by: Xuan Ding Signed-off-by: Yuan Wang Signed-off-by: Wenxuan Wu Reviewed-by: Qi Zhang --- drivers/net/ice/ice_ethdev.c | 10 +- drivers/net/ice/ice_rxtx.c | 219 ++++++++++++++++++++++---- drivers/net/ice/ice_rxtx.h | 16 ++ drivers/net/ice/ice_rxtx_vec_common.h | 3 + 4 files changed, 216 insertions(+), 32 deletions(-) diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index 73e550f5fb..ce3f49c863 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -3713,7 +3713,8 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | RTE_ETH_RX_OFFLOAD_RSS_HASH | - RTE_ETH_RX_OFFLOAD_TIMESTAMP; + RTE_ETH_RX_OFFLOAD_TIMESTAMP | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT; dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT | RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | @@ -3725,7 +3726,7 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL; } - dev_info->rx_queue_offload_capa = 0; + dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT; dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE; dev_info->reta_size = pf->hash_lut_size; @@ -3794,6 +3795,11 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->default_rxportconf.ring_size = ICE_BUF_SIZE_MIN; dev_info->default_txportconf.ring_size = ICE_BUF_SIZE_MIN; + dev_info->rx_seg_capa.max_nseg = ICE_RX_MAX_NSEG; + dev_info->rx_seg_capa.multi_pools = 1; + dev_info->rx_seg_capa.offset_allowed = 0; + dev_info->rx_seg_capa.offset_align_log2 = 0; + return 0; } diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index 2dd2637fbb..8cbcee3543 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -282,7 +282,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq) /* Set buffer size as the head split is disabled. 
*/ buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM); - rxq->rx_hdr_len = 0; rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S)); rxq->max_pkt_len = RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len, @@ -311,11 +310,52 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq) memset(&rx_ctx, 0, sizeof(rx_ctx)); + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + switch (rxq->rxseg[0].proto) { + case RTE_PTYPE_L2_ETHER: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_1 = ICE_RLAN_RX_HSPLIT_1_SPLIT_L2; + break; + case RTE_PTYPE_INNER_L2_ETHER: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_L2; + break; + case RTE_PTYPE_L3_IPV4: + case RTE_PTYPE_L3_IPV6: + case RTE_PTYPE_INNER_L3_IPV4: + case RTE_PTYPE_INNER_L3_IPV6: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_IP; + break; + case RTE_PTYPE_L4_TCP: + case RTE_PTYPE_L4_UDP: + case RTE_PTYPE_INNER_L4_TCP: + case RTE_PTYPE_INNER_L4_UDP: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_TCP_UDP; + break; + case RTE_PTYPE_L4_SCTP: + case RTE_PTYPE_INNER_L4_SCTP: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_SCTP; + break; + case 0: + PMD_DRV_LOG(ERR, "Buffer split protocol must be configured"); + return -EINVAL; + default: + PMD_DRV_LOG(ERR, "Buffer split protocol is not supported"); + return -EINVAL; + } + rxq->rx_hdr_len = ICE_RX_HDR_BUF_SIZE; + } else { + rxq->rx_hdr_len = 0; + rx_ctx.dtype = 0; /* No Protocol Based Buffer Split mode */ + } + rx_ctx.base = rxq->rx_ring_dma / ICE_QUEUE_BASE_ADDR_UNIT; rx_ctx.qlen = rxq->nb_rx_desc; rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S; rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S; - rx_ctx.dtype = 0; /* No Header Split mode */ #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC rx_ctx.dsize = 1; /* 32B descriptors */ #endif @@ -401,6 +441,7 @@ ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) for (i = 0; i < rxq->nb_rx_desc; i++) { volatile union ice_rx_flex_desc *rxd; + rxd = &rxq->rx_ring[i]; struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mp); if (unlikely(!mbuf)) { @@ -408,8 +449,6 @@ ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) return -ENOMEM; } - rte_mbuf_refcnt_set(mbuf, 1); - mbuf->next = NULL; mbuf->data_off = RTE_PKTMBUF_HEADROOM; mbuf->nb_segs = 1; mbuf->port = rxq->port_id; @@ -417,9 +456,33 @@ ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); - rxd = &rxq->rx_ring[i]; - rxd->read.pkt_addr = dma_addr; - rxd->read.hdr_addr = 0; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + struct rte_mbuf *mbuf_pay; + mbuf_pay = rte_mbuf_raw_alloc(rxq->rxseg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_DRV_LOG(ERR, "Failed to allocate payload mbuf for RX"); + return -ENOMEM; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + rxd->read.hdr_addr = dma_addr; + /* The LS bit should be set to zero regardless of + * buffer split enablement. 
+ */ + rxd->read.pkt_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + + } else { + rte_mbuf_refcnt_set(mbuf, 1); + mbuf->next = NULL; + rxd->read.hdr_addr = 0; + rxd->read.pkt_addr = dma_addr; + } + #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC rxd->read.rsvd1 = 0; rxd->read.rsvd2 = 0; @@ -443,14 +506,14 @@ _ice_rx_queue_release_mbufs(struct ice_rx_queue *rxq) for (i = 0; i < rxq->nb_rx_desc; i++) { if (rxq->sw_ring[i].mbuf) { - rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf); + rte_pktmbuf_free(rxq->sw_ring[i].mbuf); rxq->sw_ring[i].mbuf = NULL; } } if (rxq->rx_nb_avail == 0) return; for (i = 0; i < rxq->rx_nb_avail; i++) - rte_pktmbuf_free_seg(rxq->rx_stage[rxq->rx_next_avail + i]); + rte_pktmbuf_free(rxq->rx_stage[rxq->rx_next_avail + i]); rxq->rx_nb_avail = 0; } @@ -742,7 +805,7 @@ ice_fdir_program_hw_rx_queue(struct ice_rx_queue *rxq) rx_ctx.qlen = rxq->nb_rx_desc; rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S; rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S; - rx_ctx.dtype = 0; /* No Header Split mode */ + rx_ctx.dtype = 0; /* No Buffer Split mode */ rx_ctx.dsize = 1; /* 32B descriptors */ rx_ctx.rxmax = ICE_ETH_MAX_LEN; /* TPH: Transaction Layer Packet (TLP) processing hints */ @@ -1076,6 +1139,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, uint16_t len; int use_def_burst_func = 1; uint64_t offloads; + uint16_t n_seg = rx_conf->rx_nseg; if (nb_desc % ICE_ALIGN_RING_DESC != 0 || nb_desc > ICE_MAX_RING_DESC || @@ -1087,6 +1151,17 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + if (mp) + n_seg = 1; + + if (n_seg > 1) { + if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_INIT_LOG(ERR, "port %u queue index %u split offload not configured", + dev->data->port_id, queue_idx); + return -EINVAL; + } + } + /* Free memory if needed */ if (dev->data->rx_queues[queue_idx]) { ice_rx_queue_release(dev->data->rx_queues[queue_idx]); @@ -1098,12 +1173,22 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, sizeof(struct ice_rx_queue), RTE_CACHE_LINE_SIZE, socket_id); + if (!rxq) { PMD_INIT_LOG(ERR, "Failed to allocate memory for " "rx queue data structure"); return -ENOMEM; } - rxq->mp = mp; + + rxq->rxseg_nb = n_seg; + if (n_seg > 1) { + rte_memcpy(rxq->rxseg, rx_conf->rx_seg, + sizeof(struct rte_eth_rxseg_split) * n_seg); + rxq->mp = rxq->rxseg[0].mp; + } else { + rxq->mp = mp; + } + rxq->nb_rx_desc = nb_desc; rxq->rx_free_thresh = rx_conf->rx_free_thresh; rxq->queue_id = queue_idx; @@ -1568,7 +1653,7 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq) struct ice_rx_entry *rxep; struct rte_mbuf *mb; uint16_t stat_err0; - uint16_t pkt_len; + uint16_t pkt_len, hdr_len; int32_t s[ICE_LOOK_AHEAD], nb_dd; int32_t i, j, nb_rx = 0; uint64_t pkt_flags = 0; @@ -1623,6 +1708,24 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq) ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; mb->data_len = pkt_len; mb->pkt_len = pkt_len; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + mb->nb_segs = (uint16_t)(mb->nb_segs + mb->next->nb_segs); + mb->next->next = NULL; + hdr_len = rte_le_to_cpu_16(rxdp[j].wb.hdr_len_sph_flex_flags1) & + ICE_RX_FLEX_DESC_HEADER_LEN_M; + pkt_len = (rte_le_to_cpu_16(rxdp[j].wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + mb->data_len = hdr_len; + mb->pkt_len = hdr_len + pkt_len; + mb->next->data_len = pkt_len; + } else { + pkt_len = (rte_le_to_cpu_16(rxdp[j].wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + mb->data_len = pkt_len; + mb->pkt_len = pkt_len; + } + mb->ol_flags = 0; 
stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0); pkt_flags = ice_rxd_error_to_pkt_flags(stat_err0); @@ -1714,7 +1817,9 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) struct rte_mbuf *mb; uint16_t alloc_idx, i; uint64_t dma_addr; - int diag; + int diag, diag_pay; + uint64_t pay_addr; + struct rte_mbuf *mbufs_pay[rxq->rx_free_thresh]; /* Allocate buffers in bulk */ alloc_idx = (uint16_t)(rxq->rx_free_trigger - @@ -1727,6 +1832,15 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) return -ENOMEM; } + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + diag_pay = rte_mempool_get_bulk(rxq->rxseg[1].mp, + (void *)mbufs_pay, rxq->rx_free_thresh); + if (unlikely(diag_pay != 0)) { + PMD_RX_LOG(ERR, "Failed to get payload mbufs in bulk"); + return -ENOMEM; + } + } + rxdp = &rxq->rx_ring[alloc_idx]; for (i = 0; i < rxq->rx_free_thresh; i++) { if (likely(i < (rxq->rx_free_thresh - 1))) @@ -1735,13 +1849,21 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) mb = rxep[i].mbuf; rte_mbuf_refcnt_set(mb, 1); - mb->next = NULL; mb->data_off = RTE_PKTMBUF_HEADROOM; mb->nb_segs = 1; mb->port = rxq->port_id; dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb)); - rxdp[i].read.hdr_addr = 0; - rxdp[i].read.pkt_addr = dma_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + mb->next = mbufs_pay[i]; + pay_addr = rte_mbuf_data_iova_default(mbufs_pay[i]); + rxdp[i].read.hdr_addr = dma_addr; + rxdp[i].read.pkt_addr = rte_cpu_to_le_64(pay_addr); + } else { + mb->next = NULL; + rxdp[i].read.hdr_addr = 0; + rxdp[i].read.pkt_addr = dma_addr; + } } /* Update Rx tail register */ @@ -2350,11 +2472,13 @@ ice_recv_pkts(void *rx_queue, struct ice_rx_entry *sw_ring = rxq->sw_ring; struct ice_rx_entry *rxe; struct rte_mbuf *nmb; /* new allocated mbuf */ + struct rte_mbuf *nmb_pay; /* new allocated payload mbuf */ struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */ uint16_t rx_id = rxq->rx_tail; uint16_t nb_rx = 0; uint16_t nb_hold = 0; uint16_t rx_packet_len; + uint16_t rx_header_len; uint16_t rx_stat_err0; uint64_t dma_addr; uint64_t pkt_flags; @@ -2382,12 +2506,16 @@ ice_recv_pkts(void *rx_queue, if (!(rx_stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_DD_S))) break; - /* allocate mbuf */ + if (rx_stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_HBO_S)) + break; + + /* allocate header mbuf */ nmb = rte_mbuf_raw_alloc(rxq->mp); if (unlikely(!nmb)) { rxq->vsi->adapter->pf.dev_data->rx_mbuf_alloc_failed++; break; } + rxd = *rxdp; /* copy descriptor in ring to temp variable*/ nb_hold++; @@ -2400,24 +2528,55 @@ ice_recv_pkts(void *rx_queue, dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb)); - /** - * fill the read format of descriptor with physic address in - * new allocated mbuf: nmb - */ - rxdp->read.hdr_addr = 0; - rxdp->read.pkt_addr = dma_addr; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + /* allocate payload mbuf */ + nmb_pay = rte_mbuf_raw_alloc(rxq->rxseg[1].mp); + if (unlikely(!nmb_pay)) { + rxq->vsi->adapter->pf.dev_data->rx_mbuf_alloc_failed++; + break; + } + + nmb->next = nmb_pay; + nmb_pay->next = NULL; - /* calculate rx_packet_len of the received pkt */ - rx_packet_len = (rte_le_to_cpu_16(rxd.wb.pkt_len) & - ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + /** + * fill the read format of descriptor with physic address in + * new allocated mbuf: nmb + */ + rxdp->read.hdr_addr = dma_addr; + rxdp->read.pkt_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb_pay)); + } else { + /** + * fill the read format of descriptor with physic address in + * new allocated mbuf: nmb + 
*/ + rxdp->read.hdr_addr = 0; + rxdp->read.pkt_addr = dma_addr; + } /* fill old mbuf with received descriptor: rxd */ rxm->data_off = RTE_PKTMBUF_HEADROOM; rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM)); - rxm->nb_segs = 1; - rxm->next = NULL; - rxm->pkt_len = rx_packet_len; - rxm->data_len = rx_packet_len; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + rxm->nb_segs = (uint16_t)(rxm->nb_segs + rxm->next->nb_segs); + rxm->next->next = NULL; + /* calculate rx_packet_len of the received pkt */ + rx_header_len = rte_le_to_cpu_16(rxd.wb.hdr_len_sph_flex_flags1) & + ICE_RX_FLEX_DESC_HEADER_LEN_M; + rx_packet_len = (rte_le_to_cpu_16(rxd.wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + rxm->data_len = rx_header_len; + rxm->pkt_len = rx_header_len + rx_packet_len; + rxm->next->data_len = rx_packet_len; + } else { + rxm->nb_segs = 1; + rxm->next = NULL; + /* calculate rx_packet_len of the received pkt */ + rx_packet_len = (rte_le_to_cpu_16(rxd.wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + rxm->data_len = rx_packet_len; + rxm->pkt_len = rx_packet_len; + } rxm->port = rxq->port_id; rxm->packet_type = ptype_tbl[ICE_RX_FLEX_DESC_PTYPE_M & rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)]; diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h index bb18a01951..611dbc8503 100644 --- a/drivers/net/ice/ice_rxtx.h +++ b/drivers/net/ice/ice_rxtx.h @@ -16,6 +16,9 @@ #define ICE_RX_MAX_BURST 32 #define ICE_TX_MAX_BURST 32 +/* Maximal number of segments to split. */ +#define ICE_RX_MAX_NSEG 2 + #define ICE_CHK_Q_ENA_COUNT 100 #define ICE_CHK_Q_ENA_INTERVAL_US 100 @@ -43,6 +46,11 @@ extern uint64_t ice_timestamp_dynflag; extern int ice_timestamp_dynfield_offset; +/* Max header size can be 2K - 64 bytes */ +#define ICE_RX_HDR_BUF_SIZE (2048 - 64) + +#define ICE_HEADER_SPLIT_ENA BIT(0) + typedef void (*ice_rx_release_mbufs_t)(struct ice_rx_queue *rxq); typedef void (*ice_tx_release_mbufs_t)(struct ice_tx_queue *txq); typedef void (*ice_rxd_to_pkt_fields_t)(struct ice_rx_queue *rxq, @@ -53,6 +61,12 @@ struct ice_rx_entry { struct rte_mbuf *mbuf; }; +enum ice_rx_dtype { + ICE_RX_DTYPE_NO_SPLIT = 0, + ICE_RX_DTYPE_HEADER_SPLIT = 1, + ICE_RX_DTYPE_SPLIT_ALWAYS = 2, +}; + struct ice_rx_queue { struct rte_mempool *mp; /* mbuf pool to populate RX ring */ volatile union ice_rx_flex_desc *rx_ring;/* RX ring virtual address */ @@ -95,6 +109,8 @@ struct ice_rx_queue { uint32_t time_high; uint32_t hw_register_set; const struct rte_memzone *mz; + struct rte_eth_rxseg_split rxseg[ICE_RX_MAX_NSEG]; + uint32_t rxseg_nb; }; struct ice_tx_entry { diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h index 2dd2d83650..eec6ea2134 100644 --- a/drivers/net/ice/ice_rxtx_vec_common.h +++ b/drivers/net/ice/ice_rxtx_vec_common.h @@ -291,6 +291,9 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq) if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) return -1; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) + return -1; + if (rxq->offloads & ICE_RX_VECTOR_OFFLOAD) return ICE_VECTOR_OFFLOAD_PATH;
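For completeness, here is a sketch (not part of the patch) of how an application consuming such a queue sees the result: each received packet is a two-segment mbuf chain whose first segment holds the protocol header, sized by the hardware write-back, and comes from the first configured mempool, while the second segment holds the payload and comes from the second mempool. Only standard mbuf/ethdev APIs are used; the port and queue numbers are placeholders.

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch: drain one burst from a protocol-split Rx queue and dump the
 * header/payload segment sizes reported by the hardware.
 */
static void
drain_split_rxq(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb_rx, i;

	nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, RTE_DIM(pkts));
	for (i = 0; i < nb_rx; i++) {
		struct rte_mbuf *hdr = pkts[i];   /* segment 0: protocol header */
		struct rte_mbuf *pay = hdr->next; /* segment 1: payload         */

		printf("pkt %u: %u header bytes from %s, %u payload bytes from %s\n",
		       i, (unsigned int)hdr->data_len, hdr->pool->name,
		       pay != NULL ? (unsigned int)pay->data_len : 0,
		       pay != NULL ? pay->pool->name : "-");

		rte_pktmbuf_free(hdr); /* frees the whole header+payload chain */
	}
}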