From patchwork Tue Apr 26 11:13:37 2022
X-Patchwork-Submitter: "Wu, WenxuanX"
X-Patchwork-Id: 110278
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: wenxuanx.wu@intel.com
To: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, xiaoyun.li@intel.com,
 ferruh.yigit@xilinx.com, aman.deep.singh@intel.com, dev@dpdk.org,
 yuying.zhang@intel.com, qi.z.zhang@intel.com, jerinjacobk@gmail.com
Cc: stephen@networkplumber.org, mb@smartsharesystems.com,
 viacheslavo@nvidia.com, ping.yu@intel.com, xuan.ding@intel.com,
 yuanx.wang@intel.com, wenxuanx.wu@intel.com
Subject: [PATCH v5 2/4] app/testpmd: add proto based buffer split config
Date: Tue, 26 Apr 2022 11:13:37 +0000
Message-Id: <20220426111338.1074785-3-wenxuanx.wu@intel.com>
In-Reply-To: <20220426111338.1074785-1-wenxuanx.wu@intel.com>
References: <20220402104109.472078-2-wenxuanx.wu@intel.com>
 <20220426111338.1074785-1-wenxuanx.wu@intel.com>
List-Id: DPDK patches and discussions

From: Wenxuan Wu

This patch adds protocol-based buffer split configuration to testpmd.
The protocol split feature is off by default. To enable protocol split:

1. Start testpmd with two mempools, e.g. --mbuf-size=2048,2048.
2. Configure the Rx queue with the buffer split Rx offload enabled.
3. Set the protocol type for buffer split.
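The three steps above could look like the following testpmd session. This is an illustrative config fragment only: the EAL core list, memory channels, port id, and the choice of "udp" as the split protocol are assumptions, and the NIC must support protocol-based buffer split.

```shell
# 1. Start testpmd with two mempools, one per split segment (options assumed).
./dpdk-testpmd -l 0-3 -n 4 -- -i --mbuf-size=2048,2048
# 2. Enable the buffer split Rx offload while the port is stopped.
testpmd> port stop 0
testpmd> port config 0 rx_offload buffer_split on
# 3. Pick the protocol boundary for the split, then restart the port.
testpmd> port config 0 buffer_split udp
testpmd> port start 0
```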
Testpmd View:
  testpmd> port config rx_offload buffer_split on
  testpmd> port config buffer_split mac|ipv4|ipv6|l3|tcp|udp|sctp|
           l4|inner_mac|inner_ipv4|inner_ipv6|inner_l3|inner_tcp|
           inner_udp|inner_sctp|inner_l4

Signed-off-by: Xuan Ding
Signed-off-by: Yuan Wang
Signed-off-by: Wenxuan Wu
Reviewed-by: Qi Zhang
---
 app/test-pmd/cmdline.c | 118 +++++++++++++++++++++++++++++++++++++++++
 app/test-pmd/testpmd.c |   7 +--
 app/test-pmd/testpmd.h |   2 +
 3 files changed, 124 insertions(+), 3 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 6ffea8e21a..5cd4beca95 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -866,6 +866,12 @@ static void cmd_help_long_parsed(void *parsed_result,
 			" Enable or disable a per port Rx offloading"
 			" on all Rx queues of a port\n\n"
 
+			"port config buffer_split mac|ipv4|ipv6|l3|tcp|udp|sctp|l4|"
+			"inner_mac|inner_ipv4|inner_ipv6|inner_l3|inner_tcp|"
+			"inner_udp|inner_sctp|inner_l4\n"
+			"    Configure protocol type for buffer split"
+			" on all Rx queues of a port\n\n"
+
 			"port (port_id) rxq (queue_id) rx_offload vlan_strip|"
 			"ipv4_cksum|udp_cksum|tcp_cksum|tcp_lro|qinq_strip|"
 			"outer_ipv4_cksum|macsec_strip|header_split|"
@@ -16353,6 +16359,117 @@ cmdline_parse_inst_t cmd_config_per_port_rx_offload = {
 	}
 };
 
+/* config a per port buffer split protocol */
+struct cmd_config_per_port_buffer_split_protocol_result {
+	cmdline_fixed_string_t port;
+	cmdline_fixed_string_t config;
+	uint16_t port_id;
+	cmdline_fixed_string_t buffer_split;
+	cmdline_fixed_string_t protocol;
+};
+
+cmdline_parse_token_string_t cmd_config_per_port_buffer_split_protocol_result_port =
+	TOKEN_STRING_INITIALIZER
+		(struct cmd_config_per_port_buffer_split_protocol_result,
+		 port, "port");
+cmdline_parse_token_string_t cmd_config_per_port_buffer_split_protocol_result_config =
+	TOKEN_STRING_INITIALIZER
+		(struct cmd_config_per_port_buffer_split_protocol_result,
+		 config, "config");
+cmdline_parse_token_num_t cmd_config_per_port_buffer_split_protocol_result_port_id =
+	TOKEN_NUM_INITIALIZER
+		(struct cmd_config_per_port_buffer_split_protocol_result,
+		 port_id, RTE_UINT16);
+cmdline_parse_token_string_t cmd_config_per_port_buffer_split_protocol_result_buffer_split =
+	TOKEN_STRING_INITIALIZER
+		(struct cmd_config_per_port_buffer_split_protocol_result,
+		 buffer_split, "buffer_split");
+cmdline_parse_token_string_t cmd_config_per_port_buffer_split_protocol_result_protocol =
+	TOKEN_STRING_INITIALIZER
+		(struct cmd_config_per_port_buffer_split_protocol_result,
+		 protocol, "mac#ipv4#ipv6#l3#tcp#udp#sctp#l4#"
+		 "inner_mac#inner_ipv4#inner_ipv6#inner_l3#inner_tcp#"
+		 "inner_udp#inner_sctp#inner_l4");
+
+static void
+cmd_config_per_port_buffer_split_protocol_parsed(void *parsed_result,
+				__rte_unused struct cmdline *cl,
+				__rte_unused void *data)
+{
+	struct cmd_config_per_port_buffer_split_protocol_result *res = parsed_result;
+	portid_t port_id = res->port_id;
+	struct rte_port *port = &ports[port_id];
+	uint32_t protocol;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN))
+		return;
+
+	if (port->port_status != RTE_PORT_STOPPED) {
+		fprintf(stderr,
+			"Error: Can't config offload when Port %d is not stopped\n",
+			port_id);
+		return;
+	}
+
+	if (!strcmp(res->protocol, "mac"))
+		protocol = RTE_PTYPE_L2_ETHER;
+	else if (!strcmp(res->protocol, "ipv4"))
+		protocol = RTE_PTYPE_L3_IPV4;
+	else if (!strcmp(res->protocol, "ipv6"))
+		protocol = RTE_PTYPE_L3_IPV6;
+	else if (!strcmp(res->protocol, "l3"))
+		protocol = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV6;
+	else if (!strcmp(res->protocol, "tcp"))
+		protocol = RTE_PTYPE_L4_TCP;
+	else if (!strcmp(res->protocol, "udp"))
+		protocol = RTE_PTYPE_L4_UDP;
+	else if (!strcmp(res->protocol, "sctp"))
+		protocol = RTE_PTYPE_L4_SCTP;
+	else if (!strcmp(res->protocol, "l4"))
+		protocol = RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP | RTE_PTYPE_L4_SCTP;
+	else if (!strcmp(res->protocol, "inner_mac"))
+		protocol = RTE_PTYPE_INNER_L2_ETHER;
+	else if (!strcmp(res->protocol, "inner_ipv4"))
+		protocol = RTE_PTYPE_INNER_L3_IPV4;
+	else if (!strcmp(res->protocol, "inner_ipv6"))
+		protocol = RTE_PTYPE_INNER_L3_IPV6;
+	else if (!strcmp(res->protocol, "inner_l3"))
+		protocol = RTE_PTYPE_INNER_L3_IPV4 | RTE_PTYPE_INNER_L3_IPV6;
+	else if (!strcmp(res->protocol, "inner_tcp"))
+		protocol = RTE_PTYPE_INNER_L4_TCP;
+	else if (!strcmp(res->protocol, "inner_udp"))
+		protocol = RTE_PTYPE_INNER_L4_UDP;
+	else if (!strcmp(res->protocol, "inner_sctp"))
+		protocol = RTE_PTYPE_INNER_L4_SCTP;
+	else if (!strcmp(res->protocol, "inner_l4"))
+		protocol = RTE_PTYPE_INNER_L4_TCP | RTE_PTYPE_INNER_L4_UDP |
+			   RTE_PTYPE_INNER_L4_SCTP;
+	else {
+		fprintf(stderr, "Unknown protocol name: %s\n", res->protocol);
+		return;
+	}
+
+	rx_pkt_buffer_split_proto = protocol;
+	rx_pkt_nb_segs = 2;
+
+	cmd_reconfig_device_queue(port_id, 1, 1);
+}
+
+cmdline_parse_inst_t cmd_config_per_port_buffer_split_protocol = {
+	.f = cmd_config_per_port_buffer_split_protocol_parsed,
+	.data = NULL,
+	.help_str = "port config buffer_split mac|ipv4|ipv6|l3|tcp|udp|sctp|l4|"
+		    "inner_mac|inner_ipv4|inner_ipv6|inner_l3|inner_tcp|"
+		    "inner_udp|inner_sctp|inner_l4",
+	.tokens = {
+		(void *)&cmd_config_per_port_buffer_split_protocol_result_port,
+		(void *)&cmd_config_per_port_buffer_split_protocol_result_config,
+		(void *)&cmd_config_per_port_buffer_split_protocol_result_port_id,
+		(void *)&cmd_config_per_port_buffer_split_protocol_result_buffer_split,
+		(void *)&cmd_config_per_port_buffer_split_protocol_result_protocol,
+		NULL,
+	}
+};
+
 /* Enable/Disable a per queue offloading */
 struct cmd_config_per_queue_rx_offload_result {
 	cmdline_fixed_string_t port;
@@ -18071,6 +18188,7 @@ cmdline_parse_ctx_t main_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_rx_offload_get_capa,
 	(cmdline_parse_inst_t *)&cmd_rx_offload_get_configuration,
 	(cmdline_parse_inst_t *)&cmd_config_per_port_rx_offload,
+	(cmdline_parse_inst_t *)&cmd_config_per_port_buffer_split_protocol,
 	(cmdline_parse_inst_t *)&cmd_config_per_queue_rx_offload,
 	(cmdline_parse_inst_t *)&cmd_tx_offload_get_capa,
 	(cmdline_parse_inst_t *)&cmd_tx_offload_get_configuration,

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index fe2ce19f99..bd77d6bf10 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -253,6 +253,8 @@ uint8_t tx_pkt_nb_segs = 1; /**< Number of segments in TXONLY packets */
 enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF;
 /**< Split policy for packets to TX. */
 
+uint32_t rx_pkt_buffer_split_proto;
+
 uint8_t txonly_multi_flow;
 /**< Whether multiple flows are generated in TXONLY mode. */
@@ -2586,12 +2588,11 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 			mp_n = (i > mbuf_data_size_n) ? mbuf_data_size_n - 1 : i;
 			mpx = mbuf_pool_find(socket_id, mp_n);
 			/* Handle zero as mbuf data buffer size. */
-			rx_seg->length = rx_pkt_seg_lengths[i] ?
-					rx_pkt_seg_lengths[i] :
-					mbuf_data_size[mp_n];
+			rx_seg->length = rx_pkt_seg_lengths[i];
 			rx_seg->offset = i < rx_pkt_nb_offs ?
 					rx_pkt_seg_offsets[i] : 0;
 			rx_seg->mp = mpx ? mpx : mp;
+			rx_seg->proto = rx_pkt_buffer_split_proto;
 		}
 		rx_conf->rx_nseg = rx_pkt_nb_segs;
 		rx_conf->rx_seg = rx_useg;

diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 31f766c965..707e1781d4 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -557,6 +557,8 @@ enum tx_pkt_split {
 
 extern enum tx_pkt_split tx_pkt_split;
 
+extern uint32_t rx_pkt_buffer_split_proto;
+
 extern uint8_t txonly_multi_flow;
 
 extern uint32_t rxq_share;