From patchwork Wed Apr 22 17:21:01 2020
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 69113
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nithin Dabilpuram
To: Jasvinder Singh, Cristian Dumitrescu, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Cc: dev@dpdk.org, jerinj@marvell.com, kkanas@marvell.com, Nithin Dabilpuram
Date: Wed, 22 Apr 2020 22:51:01 +0530
Message-Id: <20200422172104.23099-1-nithind1988@gmail.com>
In-Reply-To: <20200330160019.29674-1-ndabilpuram@marvell.com>
References: <20200330160019.29674-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v4 1/4] ethdev: add tm support for shaper config in pkt mode

Some NIC hardware supports shapers that work in packet mode, i.e. shaping
or rate-limiting traffic in packets per second (PPS) as opposed to the
default bytes per second (BPS). Hence this patch adds support for
configuring a shared or private shaper in packet mode, providing the rate
in PPS, and adds the related tm capabilities to the port/level/node
capability structures.
This patch also updates the tm port/level/node capability structures with
the existing features: scheduler WFQ packet mode, scheduler WFQ byte mode,
and private/shared shaper byte mode. The SoftNIC PMD is also updated with
the new capabilities.

Signed-off-by: Nithin Dabilpuram
Acked-by: Cristian Dumitrescu
---
v3..v4:
- Update text under packet_mode as per Cristian.
- Update rte_eth_softnic_tm.c based on Jasvinder's comments.
- Add error enum RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PACKET_MODE
- Fix shaper_profile_check() with packet mode check
- Fix typos

v2..v3:
- Fix typos
- Add shaper_shared_(packet, byte)_mode_supported in level and node cap
- Fix comment in pkt_length_adjust.
- Move rte_eth_softnic_tm.c capability update to patch 1/4 to avoid
  compilation issues in the node and level cap arrays in the SoftNIC PMD:
  ../drivers/net/softnic/rte_eth_softnic_tm.c:782:3: warning: braces around scalar initializer {.nonleaf = {
  ../drivers/net/softnic/rte_eth_softnic_tm.c:782:3: note: (near initialization for 'tm_node_cap[0].shaper_shared_byte_mode_supported')
  ../drivers/net/softnic/rte_eth_softnic_tm.c:782:4: error: field name not in record or union initializer {.nonleaf = {

v1..v2:
- Add separate capabilities for shaper and scheduler packet mode and
  byte mode.
- Add packet_mode field in struct rte_tm_shaper_params to indicate a
  packet mode shaper profile.
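The validation flow this patch adds (reject a packet-mode profile when the capability is absent) can be sketched in isolation. Note this is an illustrative standalone sketch, not the SoftNIC code: the struct fields mirror the new rte_tm.h capability/params fields, but the types and the helper below are local stand-ins, and the -1 return stands in for the real `-rte_tm_error_set(..., RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PACKET_MODE, ...)` path.

```c
#include <assert.h>
#include <stdint.h>

/* Local mirrors of the fields this patch adds; the real definitions
 * live in lib/librte_ethdev/rte_tm.h and the SoftNIC PMD. */
struct tm_shaper_caps {
	int shaper_private_packet_mode_supported;
	int shaper_private_byte_mode_supported;
};

struct tm_shaper_params {
	uint64_t rate;   /* PPS when packet_mode != 0, else BPS */
	int packet_mode; /* 0 = byte mode, non-zero = packet mode */
};

/* Hypothetical check mirroring the new shaper_profile_check() logic:
 * a profile is rejected when it asks for a mode the port cannot do. */
static int
shaper_profile_check(const struct tm_shaper_caps *caps,
		     const struct tm_shaper_params *params)
{
	if (params->packet_mode &&
	    !caps->shaper_private_packet_mode_supported)
		return -1; /* RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PACKET_MODE */
	if (!params->packet_mode &&
	    !caps->shaper_private_byte_mode_supported)
		return -1;
	return 0;
}
```

With the SoftNIC capabilities advertised in this patch (byte mode only), a packet-mode profile fails the check while a byte-mode profile passes.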
 drivers/net/softnic/rte_eth_softnic_tm.c |  72 +++++++++++
 lib/librte_ethdev/rte_tm.h               | 197 ++++++++++++++++++++++++++++++-
 2 files changed, 267 insertions(+), 2 deletions(-)

diff --git a/drivers/net/softnic/rte_eth_softnic_tm.c b/drivers/net/softnic/rte_eth_softnic_tm.c
index 80a470c..d309763 100644
--- a/drivers/net/softnic/rte_eth_softnic_tm.c
+++ b/drivers/net/softnic/rte_eth_softnic_tm.c
@@ -447,6 +447,8 @@ static const struct rte_tm_capabilities tm_cap = {
 	.shaper_private_dual_rate_n_max = 0,
 	.shaper_private_rate_min = 1,
 	.shaper_private_rate_max = UINT32_MAX,
+	.shaper_private_packet_mode_supported = 0,
+	.shaper_private_byte_mode_supported = 1,
 
 	.shaper_shared_n_max = UINT32_MAX,
 	.shaper_shared_n_nodes_per_shaper_max = UINT32_MAX,
@@ -454,6 +456,8 @@ static const struct rte_tm_capabilities tm_cap = {
 	.shaper_shared_dual_rate_n_max = 0,
 	.shaper_shared_rate_min = 1,
 	.shaper_shared_rate_max = UINT32_MAX,
+	.shaper_shared_packet_mode_supported = 0,
+	.shaper_shared_byte_mode_supported = 1,
 
 	.shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD_FCS,
 	.shaper_pkt_length_adjust_max = RTE_TM_ETH_FRAMING_OVERHEAD_FCS,
@@ -463,6 +467,8 @@ static const struct rte_tm_capabilities tm_cap = {
 	.sched_wfq_n_children_per_group_max = UINT32_MAX,
 	.sched_wfq_n_groups_max = 1,
 	.sched_wfq_weight_max = UINT32_MAX,
+	.sched_wfq_packet_mode_supported = 0,
+	.sched_wfq_byte_mode_supported = 1,
 
 	.cman_wred_packet_mode_supported = WRED_SUPPORTED,
 	.cman_wred_byte_mode_supported = 0,
@@ -548,13 +554,19 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_private_dual_rate_supported = 0,
 			.shaper_private_rate_min = 1,
 			.shaper_private_rate_max = UINT32_MAX,
+			.shaper_private_packet_mode_supported = 0,
+			.shaper_private_byte_mode_supported = 1,
 
 			.shaper_shared_n_max = 0,
+			.shaper_shared_packet_mode_supported = 0,
+			.shaper_shared_byte_mode_supported = 0,
 
 			.sched_n_children_max = UINT32_MAX,
 			.sched_sp_n_priorities_max = 1,
 			.sched_wfq_n_children_per_group_max = UINT32_MAX,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = 1,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 1,
 
 			.stats_mask = STATS_MASK_DEFAULT,
 		} },
@@ -572,7 +584,11 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_private_dual_rate_supported = 0,
 			.shaper_private_rate_min = 1,
 			.shaper_private_rate_max = UINT32_MAX,
+			.shaper_private_packet_mode_supported = 0,
+			.shaper_private_byte_mode_supported = 1,
 
 			.shaper_shared_n_max = 0,
+			.shaper_shared_packet_mode_supported = 0,
+			.shaper_shared_byte_mode_supported = 0,
 
 			.sched_n_children_max = UINT32_MAX,
 			.sched_sp_n_priorities_max = 1,
@@ -580,9 +596,14 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.sched_wfq_n_groups_max = 1,
 #ifdef RTE_SCHED_SUBPORT_TC_OV
 			.sched_wfq_weight_max = UINT32_MAX,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 1,
 #else
 			.sched_wfq_weight_max = 1,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 1,
 #endif
+
 			.stats_mask = STATS_MASK_DEFAULT,
 		} },
 	},
@@ -599,7 +620,11 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_private_dual_rate_supported = 0,
 			.shaper_private_rate_min = 1,
 			.shaper_private_rate_max = UINT32_MAX,
+			.shaper_private_packet_mode_supported = 0,
+			.shaper_private_byte_mode_supported = 1,
 
 			.shaper_shared_n_max = 0,
+			.shaper_shared_packet_mode_supported = 0,
+			.shaper_shared_byte_mode_supported = 0,
 
 			.sched_n_children_max =
 				RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE,
@@ -608,6 +633,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.sched_wfq_n_children_per_group_max = 1,
 			.sched_wfq_n_groups_max = 0,
 			.sched_wfq_weight_max = 1,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 0,
 
 			.stats_mask = STATS_MASK_DEFAULT,
 		} },
@@ -625,7 +652,11 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_private_dual_rate_supported = 0,
 			.shaper_private_rate_min = 1,
 			.shaper_private_rate_max = UINT32_MAX,
+			.shaper_private_packet_mode_supported = 0,
+			.shaper_private_byte_mode_supported = 1,
 
 			.shaper_shared_n_max = 1,
+			.shaper_shared_packet_mode_supported = 0,
+			.shaper_shared_byte_mode_supported = 1,
 
 			.sched_n_children_max =
 				RTE_SCHED_BE_QUEUES_PER_PIPE,
@@ -634,6 +665,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 				RTE_SCHED_BE_QUEUES_PER_PIPE,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = UINT32_MAX,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 1,
 
 			.stats_mask = STATS_MASK_DEFAULT,
 		} },
@@ -651,7 +684,11 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_private_dual_rate_supported = 0,
 			.shaper_private_rate_min = 0,
 			.shaper_private_rate_max = 0,
+			.shaper_private_packet_mode_supported = 0,
+			.shaper_private_byte_mode_supported = 0,
 
 			.shaper_shared_n_max = 0,
+			.shaper_shared_packet_mode_supported = 0,
+			.shaper_shared_byte_mode_supported = 0,
 
 			.cman_head_drop_supported = 0,
 			.cman_wred_packet_mode_supported = WRED_SUPPORTED,
@@ -736,7 +773,11 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		.shaper_private_dual_rate_supported = 0,
 		.shaper_private_rate_min = 1,
 		.shaper_private_rate_max = UINT32_MAX,
+		.shaper_private_packet_mode_supported = 0,
+		.shaper_private_byte_mode_supported = 1,
 
 		.shaper_shared_n_max = 0,
+		.shaper_shared_packet_mode_supported = 0,
+		.shaper_shared_byte_mode_supported = 0,
 
 		{.nonleaf = {
 			.sched_n_children_max = UINT32_MAX,
@@ -744,6 +785,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 			.sched_wfq_n_children_per_group_max = UINT32_MAX,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = 1,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 1,
 		} },
 
 		.stats_mask = STATS_MASK_DEFAULT,
@@ -754,7 +797,11 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		.shaper_private_dual_rate_supported = 0,
 		.shaper_private_rate_min = 1,
 		.shaper_private_rate_max = UINT32_MAX,
+		.shaper_private_packet_mode_supported = 0,
+		.shaper_private_byte_mode_supported = 1,
 
 		.shaper_shared_n_max = 0,
+		.shaper_shared_packet_mode_supported = 0,
+		.shaper_shared_byte_mode_supported = 0,
 
 		{.nonleaf = {
 			.sched_n_children_max = UINT32_MAX,
@@ -762,6 +809,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 			.sched_wfq_n_children_per_group_max = UINT32_MAX,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = UINT32_MAX,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 1,
 		} },
 
 		.stats_mask = STATS_MASK_DEFAULT,
@@ -772,7 +821,11 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		.shaper_private_dual_rate_supported = 0,
 		.shaper_private_rate_min = 1,
 		.shaper_private_rate_max = UINT32_MAX,
+		.shaper_private_packet_mode_supported = 0,
+		.shaper_private_byte_mode_supported = 1,
 
 		.shaper_shared_n_max = 0,
+		.shaper_shared_packet_mode_supported = 0,
+		.shaper_shared_byte_mode_supported = 0,
 
 		{.nonleaf = {
 			.sched_n_children_max =
@@ -782,6 +835,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 			.sched_wfq_n_children_per_group_max = 1,
 			.sched_wfq_n_groups_max = 0,
 			.sched_wfq_weight_max = 1,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 0,
 		} },
 
 		.stats_mask = STATS_MASK_DEFAULT,
@@ -792,7 +847,11 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		.shaper_private_dual_rate_supported = 0,
 		.shaper_private_rate_min = 1,
 		.shaper_private_rate_max = UINT32_MAX,
+		.shaper_private_packet_mode_supported = 0,
+		.shaper_private_byte_mode_supported = 1,
 
 		.shaper_shared_n_max = 1,
+		.shaper_shared_packet_mode_supported = 0,
+		.shaper_shared_byte_mode_supported = 1,
 
 		{.nonleaf = {
 			.sched_n_children_max =
@@ -802,6 +861,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 				RTE_SCHED_BE_QUEUES_PER_PIPE,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = UINT32_MAX,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 1,
 		} },
 
 		.stats_mask = STATS_MASK_DEFAULT,
@@ -812,7 +873,11 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		.shaper_private_dual_rate_supported = 0,
 		.shaper_private_rate_min = 0,
 		.shaper_private_rate_max = 0,
+		.shaper_private_packet_mode_supported = 0,
+		.shaper_private_byte_mode_supported = 0,
 
 		.shaper_shared_n_max = 0,
+		.shaper_shared_packet_mode_supported = 0,
+		.shaper_shared_byte_mode_supported = 0,
 
 		{.leaf = {
@@ -947,6 +1012,13 @@ shaper_profile_check(struct rte_eth_dev *dev,
 			NULL,
 			rte_strerror(EINVAL));
 
+	/* Packet mode is not supported. */
+	if (profile->packet_mode != 0)
+		return -rte_tm_error_set(error,
+			EINVAL,
+			RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PACKET_MODE,
+			NULL,
+			rte_strerror(EINVAL));
 
 	return 0;
 }
diff --git a/lib/librte_ethdev/rte_tm.h b/lib/librte_ethdev/rte_tm.h
index f9c0cf3..110c263 100644
--- a/lib/librte_ethdev/rte_tm.h
+++ b/lib/librte_ethdev/rte_tm.h
@@ -250,6 +250,23 @@ struct rte_tm_capabilities {
 	 */
 	uint64_t shaper_private_rate_max;
 
+	/** Shaper private packet mode supported. When non-zero, this parameter
+	 * indicates that there is at least one node that can be configured
+	 * with packet mode in its private shaper. When shaper is configured
+	 * in packet mode, committed/peak rate provided is interpreted
+	 * in packets per second.
+	 */
+	int shaper_private_packet_mode_supported;
+
+	/** Shaper private byte mode supported. When non-zero, this parameter
+	 * indicates that there is at least one node that can be configured
+	 * with byte mode in its private shaper. When shaper is configured
+	 * in byte mode, committed/peak rate provided is interpreted in
+	 * bytes per second.
+	 */
+	int shaper_private_byte_mode_supported;
+
 	/** Maximum number of shared shapers. The value of zero indicates that
 	 * shared shapers are not supported.
 	 */
@@ -284,6 +301,21 @@ struct rte_tm_capabilities {
 	 */
 	uint64_t shaper_shared_rate_max;
 
+	/** Shaper shared packet mode supported. When non-zero, this parameter
+	 * indicates a shared shaper can be configured with packet mode.
+	 * When shared shaper is configured in packet mode, committed/peak rate
+	 * provided is interpreted in packets per second.
+	 */
+	int shaper_shared_packet_mode_supported;
+
+	/** Shaper shared byte mode supported. When non-zero, this parameter
+	 * indicates that a shared shaper can be configured with byte mode.
+	 * When shared shaper is configured in byte mode, committed/peak rate
+	 * provided is interpreted in bytes per second.
+	 */
+	int shaper_shared_byte_mode_supported;
+
 	/** Minimum value allowed for packet length adjustment for any private
 	 * or shared shaper.
 	 */
@@ -339,6 +371,22 @@ struct rte_tm_capabilities {
 	 */
 	uint32_t sched_wfq_weight_max;
 
+	/** WFQ packet mode supported. When non-zero, this parameter indicates
+	 * that there is at least one non-leaf node that supports packet mode
+	 * for WFQ among its children. WFQ weights will be applied against
+	 * packet count for scheduling children when a non-leaf node
+	 * is configured appropriately.
+	 */
+	int sched_wfq_packet_mode_supported;
+
+	/** WFQ byte mode supported. When non-zero, this parameter indicates
+	 * that there is at least one non-leaf node that supports byte mode
+	 * for WFQ among its children. WFQ weights will be applied against
+	 * bytes for scheduling children when a non-leaf node is configured
+	 * appropriately.
+	 */
+	int sched_wfq_byte_mode_supported;
+
 	/** WRED packet mode support. When non-zero, this parameter indicates
 	 * that there is at least one leaf node that supports the WRED packet
 	 * mode, which might not be true for all the leaf nodes. In packet
@@ -485,6 +533,24 @@ struct rte_tm_level_capabilities {
 			 */
 			uint64_t shaper_private_rate_max;
 
+			/** Shaper private packet mode supported. When non-zero,
+			 * this parameter indicates there is at least one
+			 * non-leaf node at this level that can be configured
+			 * with packet mode in its private shaper. When private
+			 * shaper is configured in packet mode, committed/peak
+			 * rate provided is interpreted in packets per second.
+			 */
+			int shaper_private_packet_mode_supported;
+
+			/** Shaper private byte mode supported. When non-zero,
+			 * this parameter indicates there is at least one
+			 * non-leaf node at this level that can be configured
+			 * with byte mode in its private shaper. When private
+			 * shaper is configured in byte mode, committed/peak
+			 * rate provided is interpreted in bytes per second.
+			 */
+			int shaper_private_byte_mode_supported;
+
 			/** Maximum number of shared shapers that any non-leaf
 			 * node on this level can be part of. The value of zero
 			 * indicates that shared shapers are not supported by
@@ -495,6 +561,20 @@ struct rte_tm_level_capabilities {
 			 */
 			uint32_t shaper_shared_n_max;
 
+			/** Shaper shared packet mode supported. When non-zero,
+			 * this parameter indicates that there is at least one
+			 * non-leaf node on this level that can be part of
+			 * shared shapers which work in packet mode.
+			 */
+			int shaper_shared_packet_mode_supported;
+
+			/** Shaper shared byte mode supported. When non-zero,
+			 * this parameter indicates that there is at least one
+			 * non-leaf node on this level that can be part of
+			 * shared shapers which work in byte mode.
+			 */
+			int shaper_shared_byte_mode_supported;
+
 			/** Maximum number of children nodes. This parameter
 			 * indicates that there is at least one non-leaf node on
 			 * this level that can be configured with this many
@@ -554,6 +634,25 @@ struct rte_tm_level_capabilities {
 			 */
 			uint32_t sched_wfq_weight_max;
 
+			/** WFQ packet mode supported. When non-zero, this
+			 * parameter indicates that there is at least one
+			 * non-leaf node at this level that supports packet
+			 * mode for WFQ among its children. WFQ weights will
+			 * be applied against packet count for scheduling
+			 * children when a non-leaf node is configured
+			 * appropriately.
+			 */
+			int sched_wfq_packet_mode_supported;
+
+			/** WFQ byte mode supported. When non-zero, this
+			 * parameter indicates that there is at least one
+			 * non-leaf node at this level that supports byte
+			 * mode for WFQ among its children. WFQ weights will
+			 * be applied against bytes for scheduling children
+			 * when a non-leaf node is configured appropriately.
+			 */
+			int sched_wfq_byte_mode_supported;
+
 			/** Mask of statistics counter types supported by the
 			 * non-leaf nodes on this level. Every supported
 			 * statistics counter type is supported by at least one
@@ -596,6 +695,24 @@ struct rte_tm_level_capabilities {
 			 */
 			uint64_t shaper_private_rate_max;
 
+			/** Shaper private packet mode supported. When non-zero,
+			 * this parameter indicates there is at least one leaf
+			 * node at this level that can be configured with
+			 * packet mode in its private shaper. When private
+			 * shaper is configured in packet mode, committed/peak
+			 * rate provided is interpreted in packets per second.
+			 */
+			int shaper_private_packet_mode_supported;
+
+			/** Shaper private byte mode supported. When non-zero,
+			 * this parameter indicates there is at least one leaf
+			 * node at this level that can be configured with
+			 * byte mode in its private shaper. When private shaper
+			 * is configured in byte mode, committed/peak rate
+			 * provided is interpreted in bytes per second.
+			 */
+			int shaper_private_byte_mode_supported;
+
 			/** Maximum number of shared shapers that any leaf node
 			 * on this level can be part of. The value of zero
 			 * indicates that shared shapers are not supported by
@@ -606,6 +723,20 @@ struct rte_tm_level_capabilities {
 			 */
 			uint32_t shaper_shared_n_max;
 
+			/** Shaper shared packet mode supported. When non-zero,
+			 * this parameter indicates that there is at least one
+			 * leaf node on this level that can be part of
+			 * shared shapers which work in packet mode.
+			 */
+			int shaper_shared_packet_mode_supported;
+
+			/** Shaper shared byte mode supported. When non-zero,
+			 * this parameter indicates that there is at least one
+			 * leaf node on this level that can be part of
+			 * shared shapers which work in byte mode.
+			 */
+			int shaper_shared_byte_mode_supported;
+
 			/** WRED packet mode support. When non-zero, this
 			 * parameter indicates that there is at least one leaf
 			 * node on this level that supports the WRED packet
@@ -686,12 +817,38 @@ struct rte_tm_node_capabilities {
 	 */
 	uint64_t shaper_private_rate_max;
 
+	/** Shaper private packet mode supported. When non-zero, this parameter
+	 * indicates private shaper of current node can be configured with
+	 * packet mode. When configured in packet mode, committed/peak rate
+	 * provided is interpreted in packets per second.
+	 */
+	int shaper_private_packet_mode_supported;
+
+	/** Shaper private byte mode supported. When non-zero, this parameter
+	 * indicates private shaper of current node can be configured with
+	 * byte mode. When configured in byte mode, committed/peak rate
+	 * provided is interpreted in bytes per second.
+	 */
+	int shaper_private_byte_mode_supported;
+
 	/** Maximum number of shared shapers the current node can be part of.
 	 * The value of zero indicates that shared shapers are not supported by
 	 * the current node.
 	 */
 	uint32_t shaper_shared_n_max;
 
+	/** Shaper shared packet mode supported. When non-zero,
+	 * this parameter indicates that current node can be part of
+	 * shared shapers which work in packet mode.
+	 */
+	int shaper_shared_packet_mode_supported;
+
+	/** Shaper shared byte mode supported. When non-zero,
+	 * this parameter indicates that current node can be part of
+	 * shared shapers which work in byte mode.
+	 */
+	int shaper_shared_byte_mode_supported;
+
 	RTE_STD_C11
 	union {
 		/** Items valid only for non-leaf nodes. */
@@ -735,6 +892,23 @@ struct rte_tm_node_capabilities {
 			 * WFQ weight, so WFQ is reduced to FQ.
 			 */
 			uint32_t sched_wfq_weight_max;
+
+			/** WFQ packet mode supported. When non-zero, this
+			 * parameter indicates that current node supports packet
+			 * mode for WFQ among its children. WFQ weights will be
+			 * applied against packet count for scheduling children
+			 * when configured appropriately.
+			 */
+			int sched_wfq_packet_mode_supported;
+
+			/** WFQ byte mode supported. When non-zero, this
+			 * parameter indicates that current node supports byte
+			 * mode for WFQ among its children. WFQ weights will be
+			 * applied against bytes for scheduling children when
+			 * configured appropriately.
+			 */
+			int sched_wfq_byte_mode_supported;
+
 		} nonleaf;
 
 		/** Items valid only for leaf nodes. */
@@ -836,10 +1010,10 @@ struct rte_tm_wred_params {
  * Token bucket
  */
 struct rte_tm_token_bucket {
-	/** Token bucket rate (bytes per second) */
+	/** Token bucket rate (bytes per second or packets per second) */
 	uint64_t rate;
 
-	/** Token bucket size (bytes), a.k.a. max burst size */
+	/** Token bucket size (bytes or packets), a.k.a. max burst size */
 	uint64_t size;
 };
 
@@ -860,6 +1034,11 @@ struct rte_tm_token_bucket {
 * Dual rate shapers use both the committed and the peak token buckets. The
 * rate of the peak bucket has to be bigger than zero, as well as greater than
 * or equal to the rate of the committed bucket.
+ *
+ * @see struct rte_tm_capabilities::shaper_private_packet_mode_supported
+ * @see struct rte_tm_capabilities::shaper_private_byte_mode_supported
+ * @see struct rte_tm_capabilities::shaper_shared_packet_mode_supported
+ * @see struct rte_tm_capabilities::shaper_shared_byte_mode_supported
 */
 struct rte_tm_shaper_params {
 	/** Committed token bucket */
@@ -872,8 +1051,19 @@ struct rte_tm_shaper_params {
 	 * purpose of shaping. Can be used to correct the packet length with
 	 * the framing overhead bytes that are also consumed on the wire (e.g.
 	 * RTE_TM_ETH_FRAMING_OVERHEAD_FCS).
+	 * This field is ignored when the profile enables packet mode.
 	 */
 	int32_t pkt_length_adjust;
+
+	/** When zero, the byte mode is enabled for the current profile, so the
+	 * *rate* and *size* fields in both the committed and peak token buckets
+	 * are specified in bytes per second and bytes, respectively.
+	 * When non-zero, the packet mode is enabled for the current profile,
+	 * so the *rate* and *size* fields in both the committed and peak token
+	 * buckets are specified in packets per second and packets,
+	 * respectively.
+	 */
+	int packet_mode;
 };
 
 /**
@@ -925,6 +1115,8 @@ struct rte_tm_node_params {
 	 * When non-NULL, it points to a pre-allocated array of
 	 * *n_sp_priorities* values, with non-zero value for
 	 * byte-mode and zero for packet-mode.
+	 * @see struct rte_tm_node_capabilities::sched_wfq_packet_mode_supported
+	 * @see struct rte_tm_node_capabilities::sched_wfq_byte_mode_supported
 	 */
 	int *wfq_weight_mode;
 
@@ -997,6 +1189,7 @@ enum rte_tm_error_type {
 	RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_RATE,
 	RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE,
 	RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN,
+	RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PACKET_MODE,
 	RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID,
 	RTE_TM_ERROR_TYPE_SHARED_SHAPER_ID,
 	RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID,

From patchwork Wed Apr 22 17:21:02 2020
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 69114
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nithin Dabilpuram
To: Beilei Xing, Qi Zhang, Rosen Xu, Wenzhuo Lu, Konstantin Ananyev, Liron Himi
Cc: dev@dpdk.org, jerinj@marvell.com, kkanas@marvell.com, Nithin Dabilpuram
Date: Wed, 22 Apr 2020 22:51:02 +0530
Message-Id: <20200422172104.23099-2-nithind1988@gmail.com>
In-Reply-To: <20200422172104.23099-1-nithind1988@gmail.com>
References: <20200330160019.29674-1-ndabilpuram@marvell.com> <20200422172104.23099-1-nithind1988@gmail.com>
Subject: [dpdk-dev] [PATCH v4 2/4] drivers/net: update tm capability for existing pmds

Since the existing PMDs support shaper byte mode and scheduler WFQ byte
mode, advertise the same in the port/level/node capabilities that were
added. The SoftNIC PMD is already up to date with the new capabilities.

Signed-off-by: Nithin Dabilpuram
---
v3..v4:
- No change

v2..v3:
- Update node/level cap with shaper_shared_(packet, byte)_mode_supported.

v2:
- Newly included patch to change the existing PMDs with tm support of byte
  mode to show the same in port/level/node cap.
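All of the capability updates below advertise byte mode only. The distinction matters because, per the new `packet_mode` field, the same token-bucket `rate`/`size` fields change units. A minimal standalone token-bucket sketch (the struct and helpers are illustrative, not the rte_tm API) shows how the cost of sending one frame differs between the two modes:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative token bucket; the field semantics follow
 * rte_tm_token_bucket, where rate and size are in bytes for byte mode
 * and in packets for packet mode. */
struct token_bucket {
	uint64_t rate;   /* tokens credited per second */
	uint64_t size;   /* maximum tokens, a.k.a. max burst */
	uint64_t tokens; /* current fill level */
};

/* Credit tokens for an elapsed interval, capped at the bucket size. */
static void tb_credit(struct token_bucket *tb, uint64_t elapsed_ms)
{
	uint64_t add = tb->rate * elapsed_ms / 1000;

	tb->tokens = (tb->tokens + add > tb->size) ?
		tb->size : tb->tokens + add;
}

/* Try to send one frame of pkt_len bytes: the cost is pkt_len tokens
 * in byte mode but exactly one token in packet mode, which is why a
 * PPS shaper is insensitive to frame size. Returns 1 on success. */
static int tb_send(struct token_bucket *tb, uint32_t pkt_len,
		   int packet_mode)
{
	uint64_t cost = packet_mode ? 1 : pkt_len;

	if (tb->tokens < cost)
		return 0;
	tb->tokens -= cost;
	return 1;
}
```

For example, a packet-mode bucket credited with 10 tokens passes ten 1500-byte frames, while a byte-mode bucket holding 1500 tokens passes only one such frame before running dry.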
 drivers/net/i40e/i40e_tm.c     | 22 ++++++++++++++++++++++
 drivers/net/ipn3ke/ipn3ke_tm.c | 38 ++++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_tm.c   | 22 ++++++++++++++++++++++
 drivers/net/mvpp2/mrvl_tm.c    | 14 ++++++++++++++
 4 files changed, 96 insertions(+)

diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index c76760c..5d722f9 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -160,12 +160,16 @@ i40e_tm_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_rate_min = 0;
 	/* 40Gbps -> 5GBps */
 	cap->shaper_private_rate_max = 5000000000ull;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 	cap->shaper_shared_n_max = 0;
 	cap->shaper_shared_n_nodes_per_shaper_max = 0;
 	cap->shaper_shared_n_shapers_per_node_max = 0;
 	cap->shaper_shared_dual_rate_n_max = 0;
 	cap->shaper_shared_rate_min = 0;
 	cap->shaper_shared_rate_max = 0;
+	cap->shaper_shared_packet_mode_supported = 0;
+	cap->shaper_shared_byte_mode_supported = 0;
 	cap->sched_n_children_max = hw->func_caps.num_tx_qp;
 	/**
 	 * HW supports SP. But no plan to support it now.
@@ -179,6 +183,8 @@ i40e_tm_capabilities_get(struct rte_eth_dev *dev,
 	 * So, all the nodes should have the same weight.
 	 */
 	cap->sched_wfq_weight_max = 1;
+	cap->sched_wfq_packet_mode_supported = 0;
+	cap->sched_wfq_byte_mode_supported = 0;
 	cap->cman_head_drop_supported = 0;
 	cap->dynamic_update_mask = 0;
 	cap->shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD;
@@ -754,7 +760,11 @@ i40e_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_rate_min = 0;
 		/* 40Gbps -> 5GBps */
 		cap->nonleaf.shaper_private_rate_max = 5000000000ull;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 		cap->nonleaf.shaper_shared_n_max = 0;
+		cap->nonleaf.shaper_shared_packet_mode_supported = 0;
+		cap->nonleaf.shaper_shared_byte_mode_supported = 0;
 		if (level_id == I40E_TM_NODE_TYPE_PORT)
 			cap->nonleaf.sched_n_children_max =
 				I40E_MAX_TRAFFIC_CLASS;
@@ -765,6 +775,8 @@ i40e_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 1;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 		cap->nonleaf.stats_mask = 0;
 
 		return 0;
@@ -776,7 +788,11 @@ i40e_level_capabilities_get(struct rte_eth_dev *dev,
 	cap->leaf.shaper_private_rate_min = 0;
 	/* 40Gbps -> 5GBps */
 	cap->leaf.shaper_private_rate_max = 5000000000ull;
+	cap->leaf.shaper_private_packet_mode_supported = 0;
+	cap->leaf.shaper_private_byte_mode_supported = 1;
 	cap->leaf.shaper_shared_n_max = 0;
+	cap->leaf.shaper_shared_packet_mode_supported = 0;
+	cap->leaf.shaper_shared_byte_mode_supported = 0;
 	cap->leaf.cman_head_drop_supported = false;
 	cap->leaf.cman_wred_context_private_supported = true;
 	cap->leaf.cman_wred_context_shared_n_max = 0;
@@ -817,7 +833,11 @@ i40e_node_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_rate_min = 0;
 	/* 40Gbps -> 5GBps */
 	cap->shaper_private_rate_max = 5000000000ull;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 	cap->shaper_shared_n_max = 0;
+	cap->shaper_shared_packet_mode_supported = 0;
+	cap->shaper_shared_byte_mode_supported = 0;
 
 	if (node_type == I40E_TM_NODE_TYPE_QUEUE) {
 		cap->leaf.cman_head_drop_supported = false;
@@ -834,6 +854,8 @@ i40e_node_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 1;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 	}
 
 	cap->stats_mask = 0;
diff --git a/drivers/net/ipn3ke/ipn3ke_tm.c b/drivers/net/ipn3ke/ipn3ke_tm.c
index 5a16c5f..17ac026 100644
--- a/drivers/net/ipn3ke/ipn3ke_tm.c
+++ b/drivers/net/ipn3ke/ipn3ke_tm.c
@@ -440,6 +440,8 @@ ipn3ke_tm_capabilities_get(__rte_unused struct rte_eth_dev *dev,
 	cap->shaper_private_dual_rate_n_max = 0;
 	cap->shaper_private_rate_min = 1;
 	cap->shaper_private_rate_max = 1 + IPN3KE_TM_VT_NODE_NUM;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 
 	cap->shaper_shared_n_max = 0;
 	cap->shaper_shared_n_nodes_per_shaper_max = 0;
@@ -447,6 +449,8 @@ ipn3ke_tm_capabilities_get(__rte_unused struct rte_eth_dev *dev,
 	cap->shaper_shared_dual_rate_n_max = 0;
 	cap->shaper_shared_rate_min = 0;
 	cap->shaper_shared_rate_max = 0;
+	cap->shaper_shared_packet_mode_supported = 0;
+	cap->shaper_shared_byte_mode_supported = 0;
 
 	cap->shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD_FCS;
 	cap->shaper_pkt_length_adjust_max = RTE_TM_ETH_FRAMING_OVERHEAD_FCS;
@@ -456,6 +460,8 @@ ipn3ke_tm_capabilities_get(__rte_unused struct rte_eth_dev *dev,
 	cap->sched_wfq_n_children_per_group_max = UINT32_MAX;
 	cap->sched_wfq_n_groups_max = 1;
 	cap->sched_wfq_weight_max = UINT32_MAX;
+	cap->sched_wfq_packet_mode_supported = 0;
+	cap->sched_wfq_byte_mode_supported = 1;
 
 	cap->cman_wred_packet_mode_supported = 0;
 	cap->cman_wred_byte_mode_supported = 0;
@@ -517,13 +523,19 @@ ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_dual_rate_supported = 0;
 		cap->nonleaf.shaper_private_rate_min = 1;
 		cap->nonleaf.shaper_private_rate_max = UINT32_MAX;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 
 		cap->nonleaf.shaper_shared_n_max = 0;
+		cap->nonleaf.shaper_shared_packet_mode_supported = 0;
+		cap->nonleaf.shaper_shared_byte_mode_supported = 0;
 
 		cap->nonleaf.sched_n_children_max = IPN3KE_TM_VT_NODE_NUM;
 		cap->nonleaf.sched_sp_n_priorities_max = 1;
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 0;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 
 		cap->nonleaf.stats_mask = STATS_MASK_DEFAULT;
 		break;
@@ -539,13 +551,19 @@ ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_dual_rate_supported = 0;
 		cap->nonleaf.shaper_private_rate_min = 1;
 		cap->nonleaf.shaper_private_rate_max = UINT32_MAX;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 
 		cap->nonleaf.shaper_shared_n_max = 0;
+		cap->nonleaf.shaper_shared_packet_mode_supported = 0;
+		cap->nonleaf.shaper_shared_byte_mode_supported = 0;
 
 		cap->nonleaf.sched_n_children_max = IPN3KE_TM_COS_NODE_NUM;
 		cap->nonleaf.sched_sp_n_priorities_max = 1;
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 0;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 
 		cap->nonleaf.stats_mask = STATS_MASK_DEFAULT;
 		break;
@@ -561,7 +579,11 @@ ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->leaf.shaper_private_dual_rate_supported = 0;
 		cap->leaf.shaper_private_rate_min = 0;
 		cap->leaf.shaper_private_rate_max = 0;
+		cap->leaf.shaper_private_packet_mode_supported = 0;
+		cap->leaf.shaper_private_byte_mode_supported = 1;
cap->leaf.shaper_shared_n_max = 0; + cap->leaf.shaper_shared_packet_mode_supported = 0; + cap->leaf.shaper_shared_byte_mode_supported = 0; cap->leaf.cman_head_drop_supported = 0; cap->leaf.cman_wred_packet_mode_supported = WRED_SUPPORTED; @@ -632,7 +654,11 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev, cap->shaper_private_dual_rate_supported = 0; cap->shaper_private_rate_min = 1; cap->shaper_private_rate_max = UINT32_MAX; + cap->shaper_private_packet_mode_supported = 0; + cap->shaper_private_byte_mode_supported = 1; cap->shaper_shared_n_max = 0; + cap->shaper_shared_packet_mode_supported = 0; + cap->shaper_shared_byte_mode_supported = 0; cap->nonleaf.sched_n_children_max = IPN3KE_TM_VT_NODE_NUM; cap->nonleaf.sched_sp_n_priorities_max = 1; @@ -640,6 +666,8 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev, IPN3KE_TM_VT_NODE_NUM; cap->nonleaf.sched_wfq_n_groups_max = 1; cap->nonleaf.sched_wfq_weight_max = 1; + cap->nonleaf.sched_wfq_packet_mode_supported = 0; + cap->nonleaf.sched_wfq_byte_mode_supported = 0; cap->stats_mask = STATS_MASK_DEFAULT; break; @@ -649,7 +677,11 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev, cap->shaper_private_dual_rate_supported = 0; cap->shaper_private_rate_min = 1; cap->shaper_private_rate_max = UINT32_MAX; + cap->shaper_private_packet_mode_supported = 0; + cap->shaper_private_byte_mode_supported = 1; cap->shaper_shared_n_max = 0; + cap->shaper_shared_packet_mode_supported = 0; + cap->shaper_shared_byte_mode_supported = 0; cap->nonleaf.sched_n_children_max = IPN3KE_TM_COS_NODE_NUM; cap->nonleaf.sched_sp_n_priorities_max = 1; @@ -657,6 +689,8 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev, IPN3KE_TM_COS_NODE_NUM; cap->nonleaf.sched_wfq_n_groups_max = 1; cap->nonleaf.sched_wfq_weight_max = 1; + cap->nonleaf.sched_wfq_packet_mode_supported = 0; + cap->nonleaf.sched_wfq_byte_mode_supported = 0; cap->stats_mask = STATS_MASK_DEFAULT; break; @@ -666,7 +700,11 @@ 
ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev, cap->shaper_private_dual_rate_supported = 0; cap->shaper_private_rate_min = 0; cap->shaper_private_rate_max = 0; + cap->shaper_private_packet_mode_supported = 0; + cap->shaper_private_byte_mode_supported = 0; cap->shaper_shared_n_max = 0; + cap->shaper_shared_packet_mode_supported = 0; + cap->shaper_shared_byte_mode_supported = 0; cap->leaf.cman_head_drop_supported = 0; cap->leaf.cman_wred_packet_mode_supported = WRED_SUPPORTED; diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c index 73845a7..a8407e7 100644 --- a/drivers/net/ixgbe/ixgbe_tm.c +++ b/drivers/net/ixgbe/ixgbe_tm.c @@ -168,12 +168,16 @@ ixgbe_tm_capabilities_get(struct rte_eth_dev *dev, cap->shaper_private_rate_min = 0; /* 10Gbps -> 1.25GBps */ cap->shaper_private_rate_max = 1250000000ull; + cap->shaper_private_packet_mode_supported = 0; + cap->shaper_private_byte_mode_supported = 1; cap->shaper_shared_n_max = 0; cap->shaper_shared_n_nodes_per_shaper_max = 0; cap->shaper_shared_n_shapers_per_node_max = 0; cap->shaper_shared_dual_rate_n_max = 0; cap->shaper_shared_rate_min = 0; cap->shaper_shared_rate_max = 0; + cap->shaper_shared_packet_mode_supported = 0; + cap->shaper_shared_byte_mode_supported = 0; cap->sched_n_children_max = hw->mac.max_tx_queues; /** * HW supports SP. But no plan to support it now. @@ -182,6 +186,8 @@ ixgbe_tm_capabilities_get(struct rte_eth_dev *dev, cap->sched_sp_n_priorities_max = 1; cap->sched_wfq_n_children_per_group_max = 0; cap->sched_wfq_n_groups_max = 0; + cap->sched_wfq_packet_mode_supported = 0; + cap->sched_wfq_byte_mode_supported = 0; /** * SW only supports fair round robin now. * So, all the nodes should have the same weight. 
@@ -875,7 +881,11 @@ ixgbe_level_capabilities_get(struct rte_eth_dev *dev, cap->nonleaf.shaper_private_rate_min = 0; /* 10Gbps -> 1.25GBps */ cap->nonleaf.shaper_private_rate_max = 1250000000ull; + cap->nonleaf.shaper_private_packet_mode_supported = 0; + cap->nonleaf.shaper_private_byte_mode_supported = 1; cap->nonleaf.shaper_shared_n_max = 0; + cap->nonleaf.shaper_shared_packet_mode_supported = 0; + cap->nonleaf.shaper_shared_byte_mode_supported = 0; if (level_id == IXGBE_TM_NODE_TYPE_PORT) cap->nonleaf.sched_n_children_max = IXGBE_DCB_MAX_TRAFFIC_CLASS; @@ -886,6 +896,8 @@ ixgbe_level_capabilities_get(struct rte_eth_dev *dev, cap->nonleaf.sched_wfq_n_children_per_group_max = 0; cap->nonleaf.sched_wfq_n_groups_max = 0; cap->nonleaf.sched_wfq_weight_max = 1; + cap->nonleaf.sched_wfq_packet_mode_supported = 0; + cap->nonleaf.sched_wfq_byte_mode_supported = 0; cap->nonleaf.stats_mask = 0; return 0; @@ -897,7 +909,11 @@ ixgbe_level_capabilities_get(struct rte_eth_dev *dev, cap->leaf.shaper_private_rate_min = 0; /* 10Gbps -> 1.25GBps */ cap->leaf.shaper_private_rate_max = 1250000000ull; + cap->leaf.shaper_private_packet_mode_supported = 0; + cap->leaf.shaper_private_byte_mode_supported = 1; cap->leaf.shaper_shared_n_max = 0; + cap->leaf.shaper_shared_packet_mode_supported = 0; + cap->leaf.shaper_shared_byte_mode_supported = 0; cap->leaf.cman_head_drop_supported = false; cap->leaf.cman_wred_context_private_supported = true; cap->leaf.cman_wred_context_shared_n_max = 0; @@ -938,7 +954,11 @@ ixgbe_node_capabilities_get(struct rte_eth_dev *dev, cap->shaper_private_rate_min = 0; /* 10Gbps -> 1.25GBps */ cap->shaper_private_rate_max = 1250000000ull; + cap->shaper_private_packet_mode_supported = 0; + cap->shaper_private_byte_mode_supported = 1; cap->shaper_shared_n_max = 0; + cap->shaper_shared_packet_mode_supported = 0; + cap->shaper_shared_byte_mode_supported = 0; if (node_type == IXGBE_TM_NODE_TYPE_QUEUE) { cap->leaf.cman_head_drop_supported = false; @@ -955,6 +975,8 @@ 
ixgbe_node_capabilities_get(struct rte_eth_dev *dev, cap->nonleaf.sched_wfq_n_children_per_group_max = 0; cap->nonleaf.sched_wfq_n_groups_max = 0; cap->nonleaf.sched_wfq_weight_max = 1; + cap->nonleaf.sched_wfq_packet_mode_supported = 0; + cap->nonleaf.sched_wfq_byte_mode_supported = 0; } cap->stats_mask = 0; diff --git a/drivers/net/mvpp2/mrvl_tm.c b/drivers/net/mvpp2/mrvl_tm.c index 3de8997..e98f576 100644 --- a/drivers/net/mvpp2/mrvl_tm.c +++ b/drivers/net/mvpp2/mrvl_tm.c @@ -193,12 +193,16 @@ mrvl_capabilities_get(struct rte_eth_dev *dev, cap->shaper_private_n_max = cap->shaper_n_max; cap->shaper_private_rate_min = MRVL_RATE_MIN; cap->shaper_private_rate_max = priv->rate_max; + cap->shaper_private_packet_mode_supported = 0; + cap->shaper_private_byte_mode_supported = 1; cap->sched_n_children_max = dev->data->nb_tx_queues; cap->sched_sp_n_priorities_max = dev->data->nb_tx_queues; cap->sched_wfq_n_children_per_group_max = dev->data->nb_tx_queues; cap->sched_wfq_n_groups_max = 1; cap->sched_wfq_weight_max = MRVL_WEIGHT_MAX; + cap->sched_wfq_packet_mode_supported = 0; + cap->sched_wfq_byte_mode_supported = 1; cap->dynamic_update_mask = RTE_TM_UPDATE_NODE_SUSPEND_RESUME | RTE_TM_UPDATE_NODE_STATS; @@ -244,6 +248,8 @@ mrvl_level_capabilities_get(struct rte_eth_dev *dev, cap->nonleaf.shaper_private_supported = 1; cap->nonleaf.shaper_private_rate_min = MRVL_RATE_MIN; cap->nonleaf.shaper_private_rate_max = priv->rate_max; + cap->nonleaf.shaper_private_packet_mode_supported = 0; + cap->nonleaf.shaper_private_byte_mode_supported = 1; cap->nonleaf.sched_n_children_max = dev->data->nb_tx_queues; cap->nonleaf.sched_sp_n_priorities_max = 1; @@ -251,6 +257,8 @@ mrvl_level_capabilities_get(struct rte_eth_dev *dev, dev->data->nb_tx_queues; cap->nonleaf.sched_wfq_n_groups_max = 1; cap->nonleaf.sched_wfq_weight_max = MRVL_WEIGHT_MAX; + cap->nonleaf.sched_wfq_packet_mode_supported = 0; + cap->nonleaf.sched_wfq_byte_mode_supported = 1; cap->nonleaf.stats_mask = RTE_TM_STATS_N_PKTS | 
RTE_TM_STATS_N_BYTES; } else { /* level_id == MRVL_NODE_QUEUE */ @@ -261,6 +269,8 @@ mrvl_level_capabilities_get(struct rte_eth_dev *dev, cap->leaf.shaper_private_supported = 1; cap->leaf.shaper_private_rate_min = MRVL_RATE_MIN; cap->leaf.shaper_private_rate_max = priv->rate_max; + cap->leaf.shaper_private_packet_mode_supported = 0; + cap->leaf.shaper_private_byte_mode_supported = 1; cap->leaf.stats_mask = RTE_TM_STATS_N_PKTS; } @@ -300,6 +310,8 @@ mrvl_node_capabilities_get(struct rte_eth_dev *dev, uint32_t node_id, cap->shaper_private_supported = 1; cap->shaper_private_rate_min = MRVL_RATE_MIN; cap->shaper_private_rate_max = priv->rate_max; + cap->shaper_private_packet_mode_supported = 0; + cap->shaper_private_byte_mode_supported = 1; if (node->type == MRVL_NODE_PORT) { cap->nonleaf.sched_n_children_max = dev->data->nb_tx_queues; @@ -308,6 +320,8 @@ mrvl_node_capabilities_get(struct rte_eth_dev *dev, uint32_t node_id, dev->data->nb_tx_queues; cap->nonleaf.sched_wfq_n_groups_max = 1; cap->nonleaf.sched_wfq_weight_max = MRVL_WEIGHT_MAX; + cap->nonleaf.sched_wfq_packet_mode_supported = 0; + cap->nonleaf.sched_wfq_byte_mode_supported = 1; cap->stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES; } else { cap->stats_mask = RTE_TM_STATS_N_PKTS;

From patchwork Wed Apr 22 17:21:03 2020
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 69115
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nithin Dabilpuram
To: Wenzhuo Lu, Jingjing Wu, Bernard Iremonger, John McNamara, Marko Kovacevic
Cc: dev@dpdk.org, jerinj@marvell.com, kkanas@marvell.com, Nithin Dabilpuram
Date: Wed, 22 Apr 2020 22:51:03 +0530
Message-Id: <20200422172104.23099-3-nithind1988@gmail.com>
In-Reply-To: <20200422172104.23099-1-nithind1988@gmail.com>
References: <20200330160019.29674-1-ndabilpuram@marvell.com> <20200422172104.23099-1-nithind1988@gmail.com>
Subject: [dpdk-dev] [PATCH v4 3/4] app/testpmd: add tm cmd for non leaf and shaper pktmode

From: Nithin Dabilpuram

Add a TM command to enable packet mode for all SP children of a non-leaf node. This is exposed as a new command, "add tm nonleaf node pktmode". Also extend the shaper profile add command to take a packet mode parameter used to set up the shaper in packet mode: "packet_mode" becomes the last argument of "add port tm node shaper profile". This patch also dumps the new tm port/level/node capabilities sched_wfq_packet_mode_supported, sched_wfq_byte_mode_supported, shaper_private_packet_mode_supported, shaper_private_byte_mode_supported, shaper_shared_packet_mode_supported and shaper_shared_byte_mode_supported.

Signed-off-by: Nithin Dabilpuram
---
v3..v4:
- Add packet mode error string.
v2..v3:
- Update cmdline dump of node/level cap.
v1..v2:
- Update tm capability show cmd to dump latest pktmode/bytemode fields of v2.
- Update existing shaper profile add command to take the last argument as pkt_mode and update struct rte_tm_shaper_params::packet_mode with the same.
- Update documentation with latest command changes.
app/test-pmd/cmdline.c | 9 +- app/test-pmd/cmdline_tm.c | 222 ++++++++++++++++++++++++++++ app/test-pmd/cmdline_tm.h | 1 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 40 ++++- 4 files changed, 266 insertions(+), 6 deletions(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 22fb23a..880ec61 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -1189,7 +1189,7 @@ static void cmd_help_long_parsed(void *parsed_result, "add port tm node shaper profile (port_id) (shaper_profile_id)" " (cmit_tb_rate) (cmit_tb_size) (peak_tb_rate) (peak_tb_size)" - " (packet_length_adjust)\n" + " (packet_length_adjust) (packet_mode)\n" " Add port tm node private shaper profile.\n\n" "del port tm node shaper profile (port_id) (shaper_profile_id)\n" @@ -1221,6 +1221,12 @@ static void cmd_help_long_parsed(void *parsed_result, " [(shared_shaper_id_0) (shared_shaper_id_1)...]\n" " Add port tm nonleaf node.\n\n" + "add port tm nonleaf node pktmode (port_id) (node_id) (parent_node_id)" + " (priority) (weight) (level_id) (shaper_profile_id)" + " (n_sp_priorities) (stats_mask) (n_shared_shapers)" + " [(shared_shaper_id_0) (shared_shaper_id_1)...]\n" + " Add port tm nonleaf node with pkt mode enabled.\n\n" + "add port tm leaf node (port_id) (node_id) (parent_node_id)" " (priority) (weight) (level_id) (shaper_profile_id)" " (cman_mode) (wred_profile_id) (stats_mask) (n_shared_shapers)" @@ -19655,6 +19661,7 @@ cmdline_parse_ctx_t main_ctx[] = { (cmdline_parse_inst_t *)&cmd_del_port_tm_node_wred_profile, (cmdline_parse_inst_t *)&cmd_set_port_tm_node_shaper_profile, (cmdline_parse_inst_t *)&cmd_add_port_tm_nonleaf_node, + (cmdline_parse_inst_t *)&cmd_add_port_tm_nonleaf_node_pmode, (cmdline_parse_inst_t *)&cmd_add_port_tm_leaf_node, (cmdline_parse_inst_t *)&cmd_del_port_tm_node, (cmdline_parse_inst_t *)&cmd_set_port_tm_node_parent, diff --git a/app/test-pmd/cmdline_tm.c b/app/test-pmd/cmdline_tm.c index 6951beb..52bdbd6 100644 --- a/app/test-pmd/cmdline_tm.c +++ 
b/app/test-pmd/cmdline_tm.c @@ -54,6 +54,8 @@ print_err_msg(struct rte_tm_error *error) = "peak size field (shaper profile)", [RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN] = "packet adjust length field (shaper profile)", + [RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PACKET_MODE] + = "packet mode field (shaper profile)", [RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID] = "shaper profile id", [RTE_TM_ERROR_TYPE_SHARED_SHAPER_ID] = "shared shaper id", [RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID] = "parent node id", @@ -257,6 +259,10 @@ static void cmd_show_port_tm_cap_parsed(void *parsed_result, cap.shaper_private_rate_min); printf("cap.shaper_private_rate_max %" PRIu64 "\n", cap.shaper_private_rate_max); + printf("cap.shaper_private_packet_mode_supported %" PRId32 "\n", + cap.shaper_private_packet_mode_supported); + printf("cap.shaper_private_byte_mode_supported %" PRId32 "\n", + cap.shaper_private_byte_mode_supported); printf("cap.shaper_shared_n_max %" PRIu32 "\n", cap.shaper_shared_n_max); printf("cap.shaper_shared_n_nodes_per_shaper_max %" PRIu32 "\n", @@ -269,6 +275,10 @@ static void cmd_show_port_tm_cap_parsed(void *parsed_result, cap.shaper_shared_rate_min); printf("cap.shaper_shared_rate_max %" PRIu64 "\n", cap.shaper_shared_rate_max); + printf("cap.shaper_shared_packet_mode_supported %" PRId32 "\n", + cap.shaper_shared_packet_mode_supported); + printf("cap.shaper_shared_byte_mode_supported %" PRId32 "\n", + cap.shaper_shared_byte_mode_supported); printf("cap.shaper_pkt_length_adjust_min %" PRId32 "\n", cap.shaper_pkt_length_adjust_min); printf("cap.shaper_pkt_length_adjust_max %" PRId32 "\n", @@ -283,6 +293,10 @@ static void cmd_show_port_tm_cap_parsed(void *parsed_result, cap.sched_wfq_n_groups_max); printf("cap.sched_wfq_weight_max %" PRIu32 "\n", cap.sched_wfq_weight_max); + printf("cap.sched_wfq_packet_mode_supported %" PRId32 "\n", + cap.sched_wfq_packet_mode_supported); + printf("cap.sched_wfq_byte_mode_supported %" PRId32 "\n", + cap.sched_wfq_byte_mode_supported); 
printf("cap.cman_head_drop_supported %" PRId32 "\n", cap.cman_head_drop_supported); printf("cap.cman_wred_context_n_max %" PRIu32 "\n", @@ -401,8 +415,19 @@ static void cmd_show_port_tm_level_cap_parsed(void *parsed_result, lcap.nonleaf.shaper_private_rate_min); printf("cap.nonleaf.shaper_private_rate_max %" PRIu64 "\n", lcap.nonleaf.shaper_private_rate_max); + printf("cap.nonleaf.shaper_private_packet_mode_supported %" + PRId32 "\n", + lcap.nonleaf.shaper_private_packet_mode_supported); + printf("cap.nonleaf.shaper_private_byte_mode_supported %" PRId32 + "\n", lcap.nonleaf.shaper_private_byte_mode_supported); printf("cap.nonleaf.shaper_shared_n_max %" PRIu32 "\n", lcap.nonleaf.shaper_shared_n_max); + printf("cap.nonleaf.shaper_shared_packet_mode_supported %" + PRId32 "\n", + lcap.nonleaf.shaper_shared_packet_mode_supported); + printf("cap.nonleaf.shaper_shared_byte_mode_supported %" + PRId32 "\n", + lcap.nonleaf.shaper_shared_byte_mode_supported); printf("cap.nonleaf.sched_n_children_max %" PRIu32 "\n", lcap.nonleaf.sched_n_children_max); printf("cap.nonleaf.sched_sp_n_priorities_max %" PRIu32 "\n", @@ -413,6 +438,10 @@ static void cmd_show_port_tm_level_cap_parsed(void *parsed_result, lcap.nonleaf.sched_wfq_n_groups_max); printf("cap.nonleaf.sched_wfq_weight_max %" PRIu32 "\n", lcap.nonleaf.sched_wfq_weight_max); + printf("cap.nonleaf.sched_wfq_packet_mode_supported %" PRId32 "\n", + lcap.nonleaf.sched_wfq_packet_mode_supported); + printf("cap.nonleaf.sched_wfq_byte_mode_supported %" PRId32 + "\n", lcap.nonleaf.sched_wfq_byte_mode_supported); printf("cap.nonleaf.stats_mask %" PRIx64 "\n", lcap.nonleaf.stats_mask); } else { @@ -424,8 +453,16 @@ static void cmd_show_port_tm_level_cap_parsed(void *parsed_result, lcap.leaf.shaper_private_rate_min); printf("cap.leaf.shaper_private_rate_max %" PRIu64 "\n", lcap.leaf.shaper_private_rate_max); + printf("cap.leaf.shaper_private_packet_mode_supported %" PRId32 + "\n", lcap.leaf.shaper_private_packet_mode_supported); + 
printf("cap.leaf.shaper_private_byte_mode_supported %" PRId32 "\n", + lcap.leaf.shaper_private_byte_mode_supported); printf("cap.leaf.shaper_shared_n_max %" PRIu32 "\n", lcap.leaf.shaper_shared_n_max); + printf("cap.leaf.shaper_shared_packet_mode_supported %" PRId32 "\n", + lcap.leaf.shaper_shared_packet_mode_supported); + printf("cap.leaf.shaper_shared_byte_mode_supported %" PRId32 "\n", + lcap.leaf.shaper_shared_byte_mode_supported); printf("cap.leaf.cman_head_drop_supported %" PRId32 "\n", lcap.leaf.cman_head_drop_supported); printf("cap.leaf.cman_wred_context_private_supported %" PRId32 @@ -524,8 +561,16 @@ static void cmd_show_port_tm_node_cap_parsed(void *parsed_result, ncap.shaper_private_rate_min); printf("cap.shaper_private_rate_max %" PRIu64 "\n", ncap.shaper_private_rate_max); + printf("cap.shaper_private_packet_mode_supported %" PRId32 "\n", + ncap.shaper_private_packet_mode_supported); + printf("cap.shaper_private_byte_mode_supported %" PRId32 "\n", + ncap.shaper_private_byte_mode_supported); printf("cap.shaper_shared_n_max %" PRIu32 "\n", ncap.shaper_shared_n_max); + printf("cap.shaper_shared_packet_mode_supported %" PRId32 "\n", + ncap.shaper_shared_packet_mode_supported); + printf("cap.shaper_shared_byte_mode_supported %" PRId32 "\n", + ncap.shaper_shared_byte_mode_supported); if (!is_leaf) { printf("cap.nonleaf.sched_n_children_max %" PRIu32 "\n", ncap.nonleaf.sched_n_children_max); @@ -537,6 +582,10 @@ static void cmd_show_port_tm_node_cap_parsed(void *parsed_result, ncap.nonleaf.sched_wfq_n_groups_max); printf("cap.nonleaf.sched_wfq_weight_max %" PRIu32 "\n", ncap.nonleaf.sched_wfq_weight_max); + printf("cap.nonleaf.sched_wfq_packet_mode_supported %" PRId32 "\n", + ncap.nonleaf.sched_wfq_packet_mode_supported); + printf("cap.nonleaf.sched_wfq_byte_mode_supported %" PRId32 "\n", + ncap.nonleaf.sched_wfq_byte_mode_supported); } else { printf("cap.leaf.cman_head_drop_supported %" PRId32 "\n", ncap.leaf.cman_head_drop_supported); @@ -776,6 +825,7 @@ 
struct cmd_add_port_tm_node_shaper_profile_result { uint64_t peak_tb_rate; uint64_t peak_tb_size; uint32_t pktlen_adjust; + int pkt_mode; }; cmdline_parse_token_string_t cmd_add_port_tm_node_shaper_profile_add = @@ -829,6 +879,10 @@ cmdline_parse_token_num_t cmd_add_port_tm_node_shaper_profile_pktlen_adjust = TOKEN_NUM_INITIALIZER( struct cmd_add_port_tm_node_shaper_profile_result, pktlen_adjust, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_node_shaper_profile_packet_mode = + TOKEN_NUM_INITIALIZER( + struct cmd_add_port_tm_node_shaper_profile_result, + pkt_mode, UINT32); static void cmd_add_port_tm_node_shaper_profile_parsed(void *parsed_result, __rte_unused struct cmdline *cl, @@ -853,6 +907,7 @@ static void cmd_add_port_tm_node_shaper_profile_parsed(void *parsed_result, sp.peak.rate = res->peak_tb_rate; sp.peak.size = res->peak_tb_size; sp.pkt_length_adjust = pkt_len_adjust; + sp.packet_mode = res->pkt_mode; ret = rte_tm_shaper_profile_add(port_id, shaper_id, &sp, &error); if (ret != 0) { @@ -879,6 +934,7 @@ cmdline_parse_inst_t cmd_add_port_tm_node_shaper_profile = { (void *)&cmd_add_port_tm_node_shaper_profile_peak_tb_rate, (void *)&cmd_add_port_tm_node_shaper_profile_peak_tb_size, (void *)&cmd_add_port_tm_node_shaper_profile_pktlen_adjust, + (void *)&cmd_add_port_tm_node_shaper_profile_packet_mode, NULL, }, }; @@ -1671,6 +1727,172 @@ cmdline_parse_inst_t cmd_add_port_tm_nonleaf_node = { }, }; +/* *** Add Port TM nonleaf node pkt mode *** */ +struct cmd_add_port_tm_nonleaf_node_pmode_result { + cmdline_fixed_string_t add; + cmdline_fixed_string_t port; + cmdline_fixed_string_t tm; + cmdline_fixed_string_t nonleaf; + cmdline_fixed_string_t node; + uint16_t port_id; + uint32_t node_id; + int32_t parent_node_id; + uint32_t priority; + uint32_t weight; + uint32_t level_id; + int32_t shaper_profile_id; + uint32_t n_sp_priorities; + uint64_t stats_mask; + cmdline_multi_string_t multi_shared_shaper_id; +}; + +cmdline_parse_token_string_t 
cmd_add_port_tm_nonleaf_node_pmode_add = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, add, "add"); +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_port = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, port, "port"); +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_tm = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, tm, "tm"); +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_nonleaf = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, nonleaf, "nonleaf"); +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_node = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, node, "node"); +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_pktmode = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, node, "pktmode"); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_port_id = + TOKEN_NUM_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, + port_id, UINT16); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_node_id = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + node_id, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_parent_node_id = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + parent_node_id, INT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_priority = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + priority, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_weight = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + weight, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_level_id = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + level_id, UINT32); 
+cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_shaper_profile_id = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + shaper_profile_id, INT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_n_sp_priorities = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + n_sp_priorities, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_stats_mask = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + stats_mask, UINT64); +cmdline_parse_token_string_t + cmd_add_port_tm_nonleaf_node_pmode_multi_shrd_shpr_id = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, + multi_shared_shaper_id, TOKEN_STRING_MULTI); + +static void cmd_add_port_tm_nonleaf_node_pmode_parsed(void *parsed_result, + __attribute__((unused)) struct cmdline *cl, + __attribute__((unused)) void *data) +{ + struct cmd_add_port_tm_nonleaf_node_pmode_result *res = parsed_result; + uint32_t parent_node_id, n_shared_shapers = 0; + char *s_str = res->multi_shared_shaper_id; + portid_t port_id = res->port_id; + struct rte_tm_node_params np; + int *wfq_weight_mode = NULL; + uint32_t *shared_shaper_id; + struct rte_tm_error error; + int ret; + + if (port_id_is_invalid(port_id, ENABLED_WARN)) + return; + + memset(&np, 0, sizeof(struct rte_tm_node_params)); + memset(&error, 0, sizeof(struct rte_tm_error)); + + /* Node parameters */ + if (res->parent_node_id < 0) + parent_node_id = UINT32_MAX; + else + parent_node_id = res->parent_node_id; + + shared_shaper_id = (uint32_t *)malloc(MAX_NUM_SHARED_SHAPERS * + sizeof(uint32_t)); + if (shared_shaper_id == NULL) { + printf(" Memory not allocated for shared shapers (error)\n"); + return; + } + + /* Parse multi shared shaper id string */ + ret = parse_multi_ss_id_str(s_str, &n_shared_shapers, shared_shaper_id); + if (ret) { + printf(" Shared shapers params string parse error\n"); + free(shared_shaper_id); + return; + } + + if 
(res->shaper_profile_id < 0) + np.shaper_profile_id = UINT32_MAX; + else + np.shaper_profile_id = res->shaper_profile_id; + + np.n_shared_shapers = n_shared_shapers; + if (np.n_shared_shapers) { + np.shared_shaper_id = &shared_shaper_id[0]; + } else { + free(shared_shaper_id); + shared_shaper_id = NULL; + } + + if (res->n_sp_priorities) + wfq_weight_mode = calloc(res->n_sp_priorities, sizeof(int)); + np.nonleaf.n_sp_priorities = res->n_sp_priorities; + np.stats_mask = res->stats_mask; + np.nonleaf.wfq_weight_mode = wfq_weight_mode; + + ret = rte_tm_node_add(port_id, res->node_id, parent_node_id, + res->priority, res->weight, res->level_id, + &np, &error); + if (ret != 0) { + print_err_msg(&error); + free(shared_shaper_id); + free(wfq_weight_mode); + return; + } +} + +cmdline_parse_inst_t cmd_add_port_tm_nonleaf_node_pmode = { + .f = cmd_add_port_tm_nonleaf_node_pmode_parsed, + .data = NULL, + .help_str = "Add port tm nonleaf node pktmode", + .tokens = { + (void *)&cmd_add_port_tm_nonleaf_node_pmode_add, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_port, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_tm, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_nonleaf, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_node, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_pktmode, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_port_id, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_node_id, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_parent_node_id, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_priority, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_weight, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_level_id, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_shaper_profile_id, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_n_sp_priorities, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_stats_mask, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_multi_shrd_shpr_id, + NULL, + }, +}; /* *** Add Port TM leaf node *** */ struct cmd_add_port_tm_leaf_node_result { cmdline_fixed_string_t add; 
diff --git a/app/test-pmd/cmdline_tm.h b/app/test-pmd/cmdline_tm.h index 950cb75..e59c15c 100644 --- a/app/test-pmd/cmdline_tm.h +++ b/app/test-pmd/cmdline_tm.h @@ -19,6 +19,7 @@ extern cmdline_parse_inst_t cmd_add_port_tm_node_wred_profile; extern cmdline_parse_inst_t cmd_del_port_tm_node_wred_profile; extern cmdline_parse_inst_t cmd_set_port_tm_node_shaper_profile; extern cmdline_parse_inst_t cmd_add_port_tm_nonleaf_node; +extern cmdline_parse_inst_t cmd_add_port_tm_nonleaf_node_pmode; extern cmdline_parse_inst_t cmd_add_port_tm_leaf_node; extern cmdline_parse_inst_t cmd_del_port_tm_node; extern cmdline_parse_inst_t cmd_set_port_tm_node_parent; diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index a360ecc..7513a97 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -2842,19 +2842,22 @@ Add the port traffic management private shaper profile:: testpmd> add port tm node shaper profile (port_id) (shaper_profile_id) \ (cmit_tb_rate) (cmit_tb_size) (peak_tb_rate) (peak_tb_size) \ - (packet_length_adjust) + (packet_length_adjust) (packet_mode) where: * ``shaper_profile id``: Shaper profile ID for the new profile. -* ``cmit_tb_rate``: Committed token bucket rate (bytes per second). -* ``cmit_tb_size``: Committed token bucket size (bytes). -* ``peak_tb_rate``: Peak token bucket rate (bytes per second). -* ``peak_tb_size``: Peak token bucket size (bytes). +* ``cmit_tb_rate``: Committed token bucket rate (bytes per second or packets per second). +* ``cmit_tb_size``: Committed token bucket size (bytes or packets). +* ``peak_tb_rate``: Peak token bucket rate (bytes per second or packets per second). +* ``peak_tb_size``: Peak token bucket size (bytes or packets). * ``packet_length_adjust``: The value (bytes) to be added to the length of each packet for the purpose of shaping. 
This parameter value can be used to correct the packet length with the framing overhead bytes that are consumed on the wire.
+* ``packet_mode``: Shaper configured in packet mode. If this parameter is
+  zero, the shaper is configured in byte mode; if it is non-zero, the shaper
+  is configured in packet mode.

Delete port traffic management private shaper profile
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -2979,6 +2982,33 @@ where:

 * ``n_shared_shapers``: Number of shared shapers.
 * ``shared_shaper_id``: Shared shaper id.

+Add port traffic management hierarchy nonleaf node with packet mode
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Add nonleaf node with packet mode to port traffic management hierarchy::
+
+   testpmd> add port tm nonleaf node pktmode (port_id) (node_id) (parent_node_id) \
+   (priority) (weight) (level_id) (shaper_profile_id) \
+   (n_sp_priorities) (stats_mask) (n_shared_shapers) \
+   [(shared_shaper_0) (shared_shaper_1) ...]
+
+where:
+
+* ``parent_node_id``: Node ID of the parent.
+* ``priority``: Node priority (highest node priority is zero). This is used by
+  the SP algorithm running on the parent node for scheduling this node.
+* ``weight``: Node weight (lowest weight is one). The node weight is relative
+  to the weight sum of all siblings that have the same priority. It is used by
+  the WFQ algorithm running on the parent node for scheduling this node.
+* ``level_id``: Hierarchy level of the node.
+* ``shaper_profile_id``: Shaper profile ID of the private shaper to be used by
+  the node.
+* ``n_sp_priorities``: Number of strict priorities. Packet mode is enabled on
+  all of them.
+* ``stats_mask``: Mask of statistics counter types to be enabled for this node.
+* ``n_shared_shapers``: Number of shared shapers.
+* ``shared_shaper_id``: Shared shaper id.
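As an illustration of the syntax above, the following hypothetical testpmd session first adds a shaper profile in packet mode (rates given in packets per second, ``packet_mode`` = 1) and then attaches a packet-mode nonleaf node that uses it. All IDs, rates, and sizes are example values, not taken from the patch:

```
testpmd> add port tm node shaper profile 0 10 10000 48 100000 96 0 1
testpmd> add port tm nonleaf node pktmode 0 100 -1 0 1 0 10 1 0 0
```

Here profile 10 commits 10000 pkt/s (burst 48 packets) with a 100000 pkt/s peak, and node 100 is a root node (parent ``-1``) with one strict priority and no shared shapers.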
+
 Add port traffic management hierarchy leaf node
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From patchwork Wed Apr 22 17:21:04 2020
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 69116
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nithin Dabilpuram
To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: dev@dpdk.org, kkanas@marvell.com
Date: Wed, 22 Apr 2020 22:51:04 +0530
Message-Id: <20200422172104.23099-4-nithind1988@gmail.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20200422172104.23099-1-nithind1988@gmail.com>
References: <20200330160019.29674-1-ndabilpuram@marvell.com> <20200422172104.23099-1-nithind1988@gmail.com>
Subject: [dpdk-dev] [PATCH v4 4/4] net/octeontx2: support tm length adjust and pkt mode

From: Nithin Dabilpuram

This patch adds support for the packet length adjust TM feature of the
private shaper. It also adds support for the packet mode feature, which
applies both to the private shaper and to node DWRR scheduling of SP
children.

Signed-off-by: Nithin Dabilpuram
---
v3..v4:
- No change.

v2..v3:
- No change.

v1..v2:
- Newly included patch.
drivers/net/octeontx2/otx2_tm.c | 140 +++++++++++++++++++++++++++++++++------- drivers/net/octeontx2/otx2_tm.h | 5 ++ 2 files changed, 122 insertions(+), 23 deletions(-) diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c index f94618d..fa7d21b 100644 --- a/drivers/net/octeontx2/otx2_tm.c +++ b/drivers/net/octeontx2/otx2_tm.c @@ -336,18 +336,25 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node, { struct shaper_params cir, pir; uint32_t schq = tm_node->hw_id; + uint64_t adjust = 0; uint8_t k = 0; memset(&cir, 0, sizeof(cir)); memset(&pir, 0, sizeof(pir)); shaper_config_to_nix(profile, &cir, &pir); - otx2_tm_dbg("Shaper config node %s(%u) lvl %u id %u, " - "pir %" PRIu64 "(%" PRIu64 "B)," - " cir %" PRIu64 "(%" PRIu64 "B) (%p)", - nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl, - tm_node->id, pir.rate, pir.burst, - cir.rate, cir.burst, tm_node); + /* Packet length adjust */ + if (tm_node->pkt_mode) + adjust = 1; + else if (profile) + adjust = profile->params.pkt_length_adjust & 0x1FF; + + otx2_tm_dbg("Shaper config node %s(%u) lvl %u id %u, pir %" PRIu64 + "(%" PRIu64 "B), cir %" PRIu64 "(%" PRIu64 "B)" + "adjust 0x%" PRIx64 "(pktmode %u) (%p)", + nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl, + tm_node->id, pir.rate, pir.burst, cir.rate, cir.burst, + adjust, tm_node->pkt_mode, tm_node); switch (tm_node->hw_lvl) { case NIX_TXSCH_LVL_SMQ: @@ -364,7 +371,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node, /* Configure RED ALG */ reg[k] = NIX_AF_MDQX_SHAPE(schq); - regval[k] = ((uint64_t)tm_node->red_algo << 9); + regval[k] = (adjust | + (uint64_t)tm_node->red_algo << 9 | + (uint64_t)tm_node->pkt_mode << 24); k++; break; case NIX_TXSCH_LVL_TL4: @@ -381,7 +390,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node, /* Configure RED algo */ reg[k] = NIX_AF_TL4X_SHAPE(schq); - regval[k] = ((uint64_t)tm_node->red_algo << 9); + regval[k] = (adjust | + (uint64_t)tm_node->red_algo << 9 | + (uint64_t)tm_node->pkt_mode << 
24); k++; break; case NIX_TXSCH_LVL_TL3: @@ -398,7 +409,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node, /* Configure RED algo */ reg[k] = NIX_AF_TL3X_SHAPE(schq); - regval[k] = ((uint64_t)tm_node->red_algo << 9); + regval[k] = (adjust | + (uint64_t)tm_node->red_algo << 9 | + (uint64_t)tm_node->pkt_mode << 24); k++; break; @@ -416,7 +429,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node, /* Configure RED algo */ reg[k] = NIX_AF_TL2X_SHAPE(schq); - regval[k] = ((uint64_t)tm_node->red_algo << 9); + regval[k] = (adjust | + (uint64_t)tm_node->red_algo << 9 | + (uint64_t)tm_node->pkt_mode << 24); k++; break; @@ -426,6 +441,12 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node, regval[k] = (cir.rate && cir.burst) ? (shaper2regval(&cir) | 1) : 0; k++; + + /* Configure length disable and adjust */ + reg[k] = NIX_AF_TL1X_SHAPE(schq); + regval[k] = (adjust | + (uint64_t)tm_node->pkt_mode << 24); + k++; break; } @@ -773,6 +794,15 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id, tm_node->flags = 0; if (user) tm_node->flags = NIX_TM_NODE_USER; + + /* Packet mode */ + if (!nix_tm_is_leaf(dev, lvl) && + ((profile && profile->params.packet_mode) || + (params->nonleaf.wfq_weight_mode && + params->nonleaf.n_sp_priorities && + !params->nonleaf.wfq_weight_mode[0]))) + tm_node->pkt_mode = 1; + rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params)); if (profile) @@ -1873,8 +1903,10 @@ otx2_nix_tm_capa_get(struct rte_eth_dev *eth_dev, cap->shaper_private_dual_rate_n_max = max_nr_nodes; cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8; cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8; - cap->shaper_pkt_length_adjust_min = 0; - cap->shaper_pkt_length_adjust_max = 0; + cap->shaper_private_packet_mode_supported = 1; + cap->shaper_private_byte_mode_supported = 1; + cap->shaper_pkt_length_adjust_min = NIX_LENGTH_ADJUST_MIN; + cap->shaper_pkt_length_adjust_max = NIX_LENGTH_ADJUST_MAX; /* Schedule Capabilities */ 
cap->sched_n_children_max = rsp->schq[NIX_TXSCH_LVL_MDQ]; @@ -1882,6 +1914,8 @@ otx2_nix_tm_capa_get(struct rte_eth_dev *eth_dev, cap->sched_wfq_n_children_per_group_max = cap->sched_n_children_max; cap->sched_wfq_n_groups_max = 1; cap->sched_wfq_weight_max = MAX_SCHED_WEIGHT; + cap->sched_wfq_packet_mode_supported = 1; + cap->sched_wfq_byte_mode_supported = 1; cap->dynamic_update_mask = RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL | @@ -1944,12 +1978,16 @@ otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl, nix_tm_have_tl1_access(dev) ? false : true; cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8; cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8; + cap->nonleaf.shaper_private_packet_mode_supported = 1; + cap->nonleaf.shaper_private_byte_mode_supported = 1; cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1]; cap->nonleaf.sched_sp_n_priorities_max = nix_max_prio(dev, hw_lvl) + 1; cap->nonleaf.sched_wfq_n_groups_max = 1; cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT; + cap->nonleaf.sched_wfq_packet_mode_supported = 1; + cap->nonleaf.sched_wfq_byte_mode_supported = 1; if (nix_tm_have_tl1_access(dev)) cap->nonleaf.stats_mask = @@ -1966,6 +2004,8 @@ otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl, cap->nonleaf.shaper_private_dual_rate_supported = true; cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8; cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8; + cap->nonleaf.shaper_private_packet_mode_supported = 1; + cap->nonleaf.shaper_private_byte_mode_supported = 1; /* MDQ doesn't support Strict Priority */ if (hw_lvl == NIX_TXSCH_LVL_MDQ) @@ -1977,6 +2017,8 @@ otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl, nix_max_prio(dev, hw_lvl) + 1; cap->nonleaf.sched_wfq_n_groups_max = 1; cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT; + cap->nonleaf.sched_wfq_packet_mode_supported = 1; + cap->nonleaf.sched_wfq_byte_mode_supported = 1; } else { /* unsupported level */ 
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; @@ -2029,6 +2071,8 @@ otx2_nix_tm_node_capa_get(struct rte_eth_dev *eth_dev, uint32_t node_id, (hw_lvl == NIX_TXSCH_LVL_TL1) ? false : true; cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8; cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8; + cap->shaper_private_packet_mode_supported = 1; + cap->shaper_private_byte_mode_supported = 1; /* Non Leaf Scheduler */ if (hw_lvl == NIX_TXSCH_LVL_MDQ) @@ -2041,6 +2085,8 @@ otx2_nix_tm_node_capa_get(struct rte_eth_dev *eth_dev, uint32_t node_id, cap->nonleaf.sched_n_children_max; cap->nonleaf.sched_wfq_n_groups_max = 1; cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT; + cap->nonleaf.sched_wfq_packet_mode_supported = 1; + cap->nonleaf.sched_wfq_byte_mode_supported = 1; if (hw_lvl == NIX_TXSCH_LVL_TL1) cap->stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED | @@ -2096,6 +2142,13 @@ otx2_nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev, } } + if (params->pkt_length_adjust < NIX_LENGTH_ADJUST_MIN || + params->pkt_length_adjust > NIX_LENGTH_ADJUST_MAX) { + error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN; + error->message = "length adjust invalid"; + return -EINVAL; + } + profile = rte_zmalloc("otx2_nix_tm_shaper_profile", sizeof(struct otx2_nix_tm_shaper_profile), 0); if (!profile) @@ -2108,13 +2161,14 @@ otx2_nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev, otx2_tm_dbg("Added TM shaper profile %u, " " pir %" PRIu64 " , pbs %" PRIu64 ", cir %" PRIu64 - ", cbs %" PRIu64 " , adj %u", + ", cbs %" PRIu64 " , adj %u, pkt mode %d", profile_id, params->peak.rate * 8, params->peak.size, params->committed.rate * 8, params->committed.size, - params->pkt_length_adjust); + params->pkt_length_adjust, + params->packet_mode); /* Translate rate as bits per second */ profile->params.peak.rate = profile->params.peak.rate * 8; @@ -2170,9 +2224,11 @@ otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id, struct rte_tm_error *error) { struct otx2_eth_dev *dev = 
otx2_eth_pmd_priv(eth_dev); + struct otx2_nix_tm_shaper_profile *profile = NULL; struct otx2_nix_tm_node *parent_node; - int rc, clear_on_fail = 0; - uint32_t exp_next_lvl; + int rc, pkt_mode, clear_on_fail = 0; + uint32_t exp_next_lvl, i; + uint32_t profile_id; uint16_t hw_lvl; /* we don't support dynamic updates */ @@ -2234,13 +2290,45 @@ otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id, return -EINVAL; } - /* Check if shaper profile exists for non leaf node */ - if (!nix_tm_is_leaf(dev, lvl) && - params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE && - !nix_tm_shaper_profile_search(dev, params->shaper_profile_id)) { - error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID; - error->message = "invalid shaper profile"; - return -EINVAL; + if (!nix_tm_is_leaf(dev, lvl)) { + /* Check if shaper profile exists for non leaf node */ + profile_id = params->shaper_profile_id; + profile = nix_tm_shaper_profile_search(dev, profile_id); + if (profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE && !profile) { + error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID; + error->message = "invalid shaper profile"; + return -EINVAL; + } + + /* Minimum static priority count is 1 */ + if (!params->nonleaf.n_sp_priorities || + params->nonleaf.n_sp_priorities > TXSCH_TLX_SP_PRIO_MAX) { + error->type = + RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES; + error->message = "invalid sp priorities"; + return -EINVAL; + } + + pkt_mode = 0; + /* Validate weight mode */ + for (i = 0; i < params->nonleaf.n_sp_priorities && + params->nonleaf.wfq_weight_mode; i++) { + pkt_mode = !params->nonleaf.wfq_weight_mode[i]; + if (pkt_mode == !params->nonleaf.wfq_weight_mode[0]) + continue; + + error->type = + RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE; + error->message = "unsupported weight mode"; + return -EINVAL; + } + + if (profile && params->nonleaf.n_sp_priorities && + pkt_mode != profile->params.packet_mode) { + error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE; + error->message = "shaper wfq 
packet mode mismatch"; + return -EINVAL; + } } /* Check if there is second DWRR already in siblings or holes in prio */ @@ -2482,6 +2570,12 @@ otx2_nix_tm_node_shaper_update(struct rte_eth_dev *eth_dev, } } + if (profile && profile->params.packet_mode != tm_node->pkt_mode) { + error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID; + error->message = "shaper profile pkt mode mismatch"; + return -EINVAL; + } + tm_node->params.shaper_profile_id = profile_id; /* Nothing to do if not yet committed */ diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h index 9675182..cdca987 100644 --- a/drivers/net/octeontx2/otx2_tm.h +++ b/drivers/net/octeontx2/otx2_tm.h @@ -48,6 +48,7 @@ struct otx2_nix_tm_node { #define NIX_TM_NODE_USER BIT_ULL(2) /* Shaper algorithm for RED state @NIX_REDALG_E */ uint32_t red_algo:2; + uint32_t pkt_mode:1; struct otx2_nix_tm_node *parent; struct rte_tm_node_params params; @@ -114,6 +115,10 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile); #define MAX_SHAPER_RATE \ SHAPER_RATE(MAX_RATE_EXPONENT, MAX_RATE_MANTISSA, 0) +/* Min is limited so that NIX_AF_SMQX_CFG[MINLEN]+ADJUST is not -ve */ +#define NIX_LENGTH_ADJUST_MIN ((int)-NIX_MIN_HW_FRS + 1) +#define NIX_LENGTH_ADJUST_MAX 255 + /** TM Shaper - low level operations */ /** NIX burst limits */