From patchwork Wed Apr 22 07:59:44 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 69098
From: Nithin Dabilpuram
To: Jasvinder Singh, Cristian Dumitrescu, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Cc: dev@dpdk.org, jerinj@marvell.com, kkanas@marvell.com, Nithin Dabilpuram
Date: Wed, 22 Apr 2020 13:29:44 +0530
Message-Id: <20200422075948.10051-1-nithind1988@gmail.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20200330160019.29674-1-ndabilpuram@marvell.com>
References: <20200330160019.29674-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3] ethdev: add tm support for shaper config in pkt mode
List-Id: DPDK patches and discussions

From: Nithin Dabilpuram

Some NIC hardware supports shapers that work in packet mode, i.e. shaping or
rate-limiting traffic in packets per second (PPS) as opposed to the default
bytes per second (BPS). Hence this patch adds support for configuring a shared
or private shaper in packet mode, providing the rate in PPS, and adds the
related tm capabilities to the port/level/node capability structures.

This patch also updates the tm port/level/node capability structures with the
existing features: scheduler wfq packet mode, scheduler wfq byte mode, and
private/shared shaper byte mode.
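To make the new rate semantics concrete, here is a minimal, self-contained C sketch of the profile shape this patch introduces. The `tm_`-prefixed structs are simplified stand-ins for `struct rte_tm_token_bucket` and `struct rte_tm_shaper_params` from rte_tm.h (only the fields discussed here are mirrored), and `shaper_rate_unit()` is a hypothetical helper, not part of the API:

```c
#include <stdint.h>

/*
 * Simplified stand-ins for the rte_tm.h structures touched by this patch;
 * only the fields discussed in the commit message are mirrored here.
 */
struct tm_token_bucket {
	uint64_t rate;	/* bytes/s in byte mode, packets/s in packet mode */
	uint64_t size;	/* max burst: bytes in byte mode, packets in packet mode */
};

struct tm_shaper_params {
	struct tm_token_bucket committed;	/* committed token bucket */
	struct tm_token_bucket peak;		/* peak token bucket (dual-rate only) */
	int32_t pkt_length_adjust;	/* ignored when packet_mode is set */
	int packet_mode;		/* 0 = byte mode (default), 1 = packet mode */
};

/* Hypothetical helper: how a driver would interpret the rate fields. */
static inline const char *
shaper_rate_unit(const struct tm_shaper_params *p)
{
	return p->packet_mode ? "packets/s" : "bytes/s";
}

/* Example: a 50k PPS single-rate profile with a 64-packet max burst. */
static const struct tm_shaper_params pps_profile = {
	.committed = { .rate = 50000, .size = 64 },
	.peak = { .rate = 0, .size = 0 },
	.pkt_length_adjust = 0,	/* not applicable in packet mode */
	.packet_mode = 1,
};
```

With the real API, such a profile would be registered via `rte_tm_shaper_profile_add()` after checking the corresponding `*_packet_mode_supported` capability flag reported by the driver.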
SoftNIC PMD is also updated with the new capabilities.

Signed-off-by: Nithin Dabilpuram
---
v2..v3:
- Fix typos
- Add shaper_shared_(packet, byte)_mode_supported in level and node cap
- Fix comment in pkt_length_adjust.
- Move rte_eth_softnic_tm.c capability update to patch 1/4 to avoid
  compilation issues in the node and level cap arrays in the SoftNIC PMD:
  ../drivers/net/softnic/rte_eth_softnic_tm.c:782:3: warning: braces around scalar initializer {.nonleaf = {
  ../drivers/net/softnic/rte_eth_softnic_tm.c:782:3: note: (near initialization for ‘tm_node_cap[0].shaper_shared_byte_mode_supported’)
  ../drivers/net/softnic/rte_eth_softnic_tm.c:782:4: error: field name not in record or union initializer {.nonleaf = {

v1..v2:
- Add separate capabilities for shaper and scheduler packet mode and byte mode.
- Add packet_mode field in struct rte_tm_shaper_params to indicate a packet
  mode shaper profile.

 drivers/net/softnic/rte_eth_softnic_tm.c |  65 ++++++++++
 lib/librte_ethdev/rte_tm.h               | 196 ++++++++++++++++++++++++++++++-
 2 files changed, 259 insertions(+), 2 deletions(-)

diff --git a/drivers/net/softnic/rte_eth_softnic_tm.c b/drivers/net/softnic/rte_eth_softnic_tm.c
index 80a470c..344819f 100644
--- a/drivers/net/softnic/rte_eth_softnic_tm.c
+++ b/drivers/net/softnic/rte_eth_softnic_tm.c
@@ -447,6 +447,8 @@ static const struct rte_tm_capabilities tm_cap = {
 	.shaper_private_dual_rate_n_max = 0,
 	.shaper_private_rate_min = 1,
 	.shaper_private_rate_max = UINT32_MAX,
+	.shaper_private_packet_mode_supported = 0,
+	.shaper_private_byte_mode_supported = 1,

 	.shaper_shared_n_max = UINT32_MAX,
 	.shaper_shared_n_nodes_per_shaper_max = UINT32_MAX,
@@ -454,6 +456,8 @@ static const struct rte_tm_capabilities tm_cap = {
 	.shaper_shared_dual_rate_n_max = 0,
 	.shaper_shared_rate_min = 1,
 	.shaper_shared_rate_max = UINT32_MAX,
+	.shaper_shared_packet_mode_supported = 0,
+	.shaper_shared_byte_mode_supported = 1,

 	.shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD_FCS,
 	.shaper_pkt_length_adjust_max =
 		RTE_TM_ETH_FRAMING_OVERHEAD_FCS,
@@ -463,6 +467,8 @@ static const struct rte_tm_capabilities tm_cap = {
 	.sched_wfq_n_children_per_group_max = UINT32_MAX,
 	.sched_wfq_n_groups_max = 1,
 	.sched_wfq_weight_max = UINT32_MAX,
+	.sched_wfq_packet_mode_supported = 0,
+	.sched_wfq_byte_mode_supported = 1,

 	.cman_wred_packet_mode_supported = WRED_SUPPORTED,
 	.cman_wred_byte_mode_supported = 0,
@@ -548,13 +554,19 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_private_dual_rate_supported = 0,
 			.shaper_private_rate_min = 1,
 			.shaper_private_rate_max = UINT32_MAX,
+			.shaper_private_packet_mode_supported = 0,
+			.shaper_private_byte_mode_supported = 1,
 			.shaper_shared_n_max = 0,
+			.shaper_shared_packet_mode_supported = 0,
+			.shaper_shared_byte_mode_supported = 0,

 			.sched_n_children_max = UINT32_MAX,
 			.sched_sp_n_priorities_max = 1,
 			.sched_wfq_n_children_per_group_max = UINT32_MAX,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = 1,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 0,

 			.stats_mask = STATS_MASK_DEFAULT,
 		} },
@@ -572,7 +584,11 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_private_dual_rate_supported = 0,
 			.shaper_private_rate_min = 1,
 			.shaper_private_rate_max = UINT32_MAX,
+			.shaper_private_packet_mode_supported = 0,
+			.shaper_private_byte_mode_supported = 1,
 			.shaper_shared_n_max = 0,
+			.shaper_shared_packet_mode_supported = 0,
+			.shaper_shared_byte_mode_supported = 0,

 			.sched_n_children_max = UINT32_MAX,
 			.sched_sp_n_priorities_max = 1,
@@ -580,9 +596,14 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.sched_wfq_n_groups_max = 1,
 #ifdef RTE_SCHED_SUBPORT_TC_OV
 			.sched_wfq_weight_max = UINT32_MAX,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 1,
 #else
 			.sched_wfq_weight_max = 1,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 0,
 #endif
+
 			.stats_mask = STATS_MASK_DEFAULT,
 		} },
 	},
@@ -599,7 +620,11 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_private_dual_rate_supported = 0,
 			.shaper_private_rate_min = 1,
 			.shaper_private_rate_max = UINT32_MAX,
+			.shaper_private_packet_mode_supported = 0,
+			.shaper_private_byte_mode_supported = 1,
 			.shaper_shared_n_max = 0,
+			.shaper_shared_packet_mode_supported = 0,
+			.shaper_shared_byte_mode_supported = 0,

 			.sched_n_children_max =
 				RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE,
@@ -608,6 +633,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.sched_wfq_n_children_per_group_max = 1,
 			.sched_wfq_n_groups_max = 0,
 			.sched_wfq_weight_max = 1,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 0,

 			.stats_mask = STATS_MASK_DEFAULT,
 		} },
@@ -625,7 +652,11 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_private_dual_rate_supported = 0,
 			.shaper_private_rate_min = 1,
 			.shaper_private_rate_max = UINT32_MAX,
+			.shaper_private_packet_mode_supported = 0,
+			.shaper_private_byte_mode_supported = 1,
 			.shaper_shared_n_max = 1,
+			.shaper_shared_packet_mode_supported = 0,
+			.shaper_shared_byte_mode_supported = 1,

 			.sched_n_children_max =
 				RTE_SCHED_BE_QUEUES_PER_PIPE,
@@ -634,6 +665,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 				RTE_SCHED_BE_QUEUES_PER_PIPE,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = UINT32_MAX,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 1,

 			.stats_mask = STATS_MASK_DEFAULT,
 		} },
@@ -651,7 +684,11 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_private_dual_rate_supported = 0,
 			.shaper_private_rate_min = 0,
 			.shaper_private_rate_max = 0,
+			.shaper_private_packet_mode_supported = 0,
+			.shaper_private_byte_mode_supported = 0,
 			.shaper_shared_n_max = 0,
+			.shaper_shared_packet_mode_supported = 0,
+			.shaper_shared_byte_mode_supported = 0,

 			.cman_head_drop_supported = 0,
 			.cman_wred_packet_mode_supported = WRED_SUPPORTED,
@@ -736,7 +773,11 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		.shaper_private_dual_rate_supported = 0,
 		.shaper_private_rate_min = 1,
 		.shaper_private_rate_max = UINT32_MAX,
+		.shaper_private_packet_mode_supported = 0,
+		.shaper_private_byte_mode_supported = 1,
 		.shaper_shared_n_max = 0,
+		.shaper_shared_packet_mode_supported = 0,
+		.shaper_shared_byte_mode_supported = 0,

 		{.nonleaf = {
 			.sched_n_children_max = UINT32_MAX,
@@ -744,6 +785,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 			.sched_wfq_n_children_per_group_max = UINT32_MAX,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = 1,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 0,
 		} },

 		.stats_mask = STATS_MASK_DEFAULT,
@@ -754,7 +797,11 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		.shaper_private_dual_rate_supported = 0,
 		.shaper_private_rate_min = 1,
 		.shaper_private_rate_max = UINT32_MAX,
+		.shaper_private_packet_mode_supported = 0,
+		.shaper_private_byte_mode_supported = 1,
 		.shaper_shared_n_max = 0,
+		.shaper_shared_packet_mode_supported = 0,
+		.shaper_shared_byte_mode_supported = 0,

 		{.nonleaf = {
 			.sched_n_children_max = UINT32_MAX,
@@ -762,6 +809,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 			.sched_wfq_n_children_per_group_max = UINT32_MAX,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = UINT32_MAX,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 0,
 		} },

 		.stats_mask = STATS_MASK_DEFAULT,
@@ -772,7 +821,11 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		.shaper_private_dual_rate_supported = 0,
 		.shaper_private_rate_min = 1,
 		.shaper_private_rate_max = UINT32_MAX,
+		.shaper_private_packet_mode_supported = 0,
+		.shaper_private_byte_mode_supported = 1,
 		.shaper_shared_n_max = 0,
+		.shaper_shared_packet_mode_supported = 0,
+		.shaper_shared_byte_mode_supported = 0,

 		{.nonleaf = {
 			.sched_n_children_max =
@@ -782,6 +835,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 			.sched_wfq_n_children_per_group_max = 1,
 			.sched_wfq_n_groups_max = 0,
 			.sched_wfq_weight_max = 1,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 0,
 		} },

 		.stats_mask = STATS_MASK_DEFAULT,
@@ -792,7 +847,11 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		.shaper_private_dual_rate_supported = 0,
 		.shaper_private_rate_min = 1,
 		.shaper_private_rate_max = UINT32_MAX,
+		.shaper_private_packet_mode_supported = 0,
+		.shaper_private_byte_mode_supported = 1,
 		.shaper_shared_n_max = 1,
+		.shaper_shared_packet_mode_supported = 0,
+		.shaper_shared_byte_mode_supported = 1,

 		{.nonleaf = {
 			.sched_n_children_max =
@@ -802,6 +861,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 				RTE_SCHED_BE_QUEUES_PER_PIPE,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = UINT32_MAX,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 1,
 		} },

 		.stats_mask = STATS_MASK_DEFAULT,
@@ -812,7 +873,11 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		.shaper_private_dual_rate_supported = 0,
 		.shaper_private_rate_min = 0,
 		.shaper_private_rate_max = 0,
+		.shaper_private_packet_mode_supported = 0,
+		.shaper_private_byte_mode_supported = 0,
 		.shaper_shared_n_max = 0,
+		.shaper_shared_packet_mode_supported = 0,
+		.shaper_shared_byte_mode_supported = 0,

 		{.leaf = {
diff --git a/lib/librte_ethdev/rte_tm.h b/lib/librte_ethdev/rte_tm.h
index f9c0cf3..b3865af 100644
--- a/lib/librte_ethdev/rte_tm.h
+++ b/lib/librte_ethdev/rte_tm.h
@@ -250,6 +250,23 @@ struct rte_tm_capabilities {
 	 */
 	uint64_t shaper_private_rate_max;

+	/** Shaper private packet mode supported. When non-zero, this parameter
+	 * indicates that there is at least one node that can be configured
+	 * with packet mode in its private shaper. When shaper is configured
+	 * in packet mode, committed/peak rate provided is interpreted
+	 * in packets per second.
+	 */
+	int shaper_private_packet_mode_supported;
+
+	/** Shaper private byte mode supported. When non-zero, this parameter
+	 * indicates that there is at least one node that can be configured
+	 * with byte mode in its private shaper. When shaper is configured
+	 * in byte mode, committed/peak rate provided is interpreted in
+	 * bytes per second.
+	 */
+	int shaper_private_byte_mode_supported;
+
 	/** Maximum number of shared shapers. The value of zero indicates that
 	 * shared shapers are not supported.
 	 */
@@ -284,6 +301,21 @@ struct rte_tm_capabilities {
 	 */
 	uint64_t shaper_shared_rate_max;

+	/** Shaper shared packet mode supported. When non-zero, this parameter
+	 * indicates a shared shaper can be configured with packet mode.
+	 * When shared shaper is configured in packet mode, committed/peak rate
+	 * provided is interpreted in packets per second.
+	 */
+	int shaper_shared_packet_mode_supported;
+
+	/** Shaper shared byte mode supported. When non-zero, this parameter
+	 * indicates that a shared shaper can be configured with byte mode.
+	 * When shared shaper is configured in byte mode, committed/peak rate
+	 * provided is interpreted in bytes per second.
+	 */
+	int shaper_shared_byte_mode_supported;
+
 	/** Minimum value allowed for packet length adjustment for any private
 	 * or shared shaper.
 	 */
@@ -339,6 +371,22 @@ struct rte_tm_capabilities {
 	 */
 	uint32_t sched_wfq_weight_max;

+	/** WFQ packet mode supported. When non-zero, this parameter indicates
+	 * that there is at least one non-leaf node that supports packet mode
+	 * for WFQ among its children. WFQ weights will be applied against
+	 * packet count for scheduling children when a non-leaf node
+	 * is configured appropriately.
+	 */
+	int sched_wfq_packet_mode_supported;
+
+	/** WFQ byte mode supported. When non-zero, this parameter indicates
+	 * that there is at least one non-leaf node that supports byte mode
+	 * for WFQ among its children. WFQ weights will be applied against
+	 * bytes for scheduling children when a non-leaf node is configured
+	 * appropriately.
+	 */
+	int sched_wfq_byte_mode_supported;
+
 	/** WRED packet mode support. When non-zero, this parameter indicates
 	 * that there is at least one leaf node that supports the WRED packet
 	 * mode, which might not be true for all the leaf nodes. In packet
@@ -485,6 +533,24 @@ struct rte_tm_level_capabilities {
 			 */
 			uint64_t shaper_private_rate_max;

+			/** Shaper private packet mode supported. When non-zero,
+			 * this parameter indicates there is at least one
+			 * non-leaf node at this level that can be configured
+			 * with packet mode in its private shaper. When private
+			 * shaper is configured in packet mode, committed/peak
+			 * rate provided is interpreted in packets per second.
+			 */
+			int shaper_private_packet_mode_supported;
+
+			/** Shaper private byte mode supported. When non-zero,
+			 * this parameter indicates there is at least one
+			 * non-leaf node at this level that can be configured
+			 * with byte mode in its private shaper. When private
+			 * shaper is configured in byte mode, committed/peak
+			 * rate provided is interpreted in bytes per second.
+			 */
+			int shaper_private_byte_mode_supported;
+
 			/** Maximum number of shared shapers that any non-leaf
 			 * node on this level can be part of. The value of zero
 			 * indicates that shared shapers are not supported by
@@ -495,6 +561,20 @@ struct rte_tm_level_capabilities {
 			 */
 			uint32_t shaper_shared_n_max;

+			/** Shaper shared packet mode supported. When non-zero,
+			 * this parameter indicates that there is at least one
+			 * non-leaf node on this level that can be part of
+			 * shared shapers which work in packet mode.
+			 */
+			int shaper_shared_packet_mode_supported;
+
+			/** Shaper shared byte mode supported. When non-zero,
+			 * this parameter indicates that there is at least one
+			 * non-leaf node on this level that can be part of
+			 * shared shapers which work in byte mode.
+			 */
+			int shaper_shared_byte_mode_supported;
+
 			/** Maximum number of children nodes. This parameter
 			 * indicates that there is at least one non-leaf node on
 			 * this level that can be configured with this many
@@ -554,6 +634,25 @@ struct rte_tm_level_capabilities {
 			 */
 			uint32_t sched_wfq_weight_max;

+			/** WFQ packet mode supported. When non-zero, this
+			 * parameter indicates that there is at least one
+			 * non-leaf node at this level that supports packet
+			 * mode for WFQ among its children. WFQ weights will
+			 * be applied against packet count for scheduling
+			 * children when a non-leaf node is configured
+			 * appropriately.
+			 */
+			int sched_wfq_packet_mode_supported;
+
+			/** WFQ byte mode supported. When non-zero, this
+			 * parameter indicates that there is at least one
+			 * non-leaf node at this level that supports byte
+			 * mode for WFQ among its children. WFQ weights will
+			 * be applied against bytes for scheduling children
+			 * when a non-leaf node is configured appropriately.
+			 */
+			int sched_wfq_byte_mode_supported;
+
 			/** Mask of statistics counter types supported by the
 			 * non-leaf nodes on this level. Every supported
 			 * statistics counter type is supported by at least one
@@ -596,6 +695,24 @@ struct rte_tm_level_capabilities {
 			 */
 			uint64_t shaper_private_rate_max;

+			/** Shaper private packet mode supported. When non-zero,
+			 * this parameter indicates there is at least one leaf
+			 * node at this level that can be configured with
+			 * packet mode in its private shaper. When private
+			 * shaper is configured in packet mode, committed/peak
+			 * rate provided is interpreted in packets per second.
+			 */
+			int shaper_private_packet_mode_supported;
+
+			/** Shaper private byte mode supported. When non-zero,
+			 * this parameter indicates there is at least one leaf
+			 * node at this level that can be configured with
+			 * byte mode in its private shaper. When private shaper
+			 * is configured in byte mode, committed/peak rate
+			 * provided is interpreted in bytes per second.
+			 */
+			int shaper_private_byte_mode_supported;
+
 			/** Maximum number of shared shapers that any leaf node
 			 * on this level can be part of. The value of zero
 			 * indicates that shared shapers are not supported by
@@ -606,6 +723,20 @@ struct rte_tm_level_capabilities {
 			 */
 			uint32_t shaper_shared_n_max;

+			/** Shaper shared packet mode supported. When non-zero,
+			 * this parameter indicates that there is at least one
+			 * leaf node on this level that can be part of
+			 * shared shapers which work in packet mode.
+			 */
+			int shaper_shared_packet_mode_supported;
+
+			/** Shaper shared byte mode supported. When non-zero,
+			 * this parameter indicates that there is at least one
+			 * leaf node on this level that can be part of
+			 * shared shapers which work in byte mode.
+			 */
+			int shaper_shared_byte_mode_supported;
+
 			/** WRED packet mode support. When non-zero, this
 			 * parameter indicates that there is at least one leaf
 			 * node on this level that supports the WRED packet
@@ -686,12 +817,38 @@ struct rte_tm_node_capabilities {
 	 */
 	uint64_t shaper_private_rate_max;

+	/** Shaper private packet mode supported. When non-zero, this parameter
+	 * indicates the private shaper of the current node can be configured
+	 * with packet mode. When configured in packet mode, committed/peak
+	 * rate provided is interpreted in packets per second.
+	 */
+	int shaper_private_packet_mode_supported;
+
+	/** Shaper private byte mode supported. When non-zero, this parameter
+	 * indicates the private shaper of the current node can be configured
+	 * with byte mode. When configured in byte mode, committed/peak rate
+	 * provided is interpreted in bytes per second.
+	 */
+	int shaper_private_byte_mode_supported;
+
 	/** Maximum number of shared shapers the current node can be part of.
 	 * The value of zero indicates that shared shapers are not supported by
 	 * the current node.
 	 */
 	uint32_t shaper_shared_n_max;

+	/** Shaper shared packet mode supported. When non-zero,
+	 * this parameter indicates that the current node can be part of
+	 * shared shapers which work in packet mode.
+	 */
+	int shaper_shared_packet_mode_supported;
+
+	/** Shaper shared byte mode supported. When non-zero,
+	 * this parameter indicates that the current node can be part of
+	 * shared shapers which work in byte mode.
+	 */
+	int shaper_shared_byte_mode_supported;
+
 	RTE_STD_C11
 	union {
 		/** Items valid only for non-leaf nodes. */
@@ -735,6 +892,23 @@ struct rte_tm_node_capabilities {
 			 * WFQ weight, so WFQ is reduced to FQ.
 			 */
 			uint32_t sched_wfq_weight_max;
+
+			/** WFQ packet mode supported. When non-zero, this
+			 * parameter indicates that the current node supports
+			 * packet mode for WFQ among its children. WFQ weights
+			 * will be applied against packet count for scheduling
+			 * children when configured appropriately.
+			 */
+			int sched_wfq_packet_mode_supported;
+
+			/** WFQ byte mode supported. When non-zero, this
+			 * parameter indicates that the current node supports
+			 * byte mode for WFQ among its children. WFQ weights
+			 * will be applied against bytes for scheduling
+			 * children when configured appropriately.
+			 */
+			int sched_wfq_byte_mode_supported;
+
 		} nonleaf;

 		/** Items valid only for leaf nodes. */
@@ -836,10 +1010,10 @@ struct rte_tm_wred_params {
  * Token bucket
  */
 struct rte_tm_token_bucket {
-	/** Token bucket rate (bytes per second) */
+	/** Token bucket rate (bytes per second or packets per second) */
 	uint64_t rate;

-	/** Token bucket size (bytes), a.k.a. max burst size */
+	/** Token bucket size (bytes or packets), a.k.a. max burst size */
 	uint64_t size;
 };

@@ -860,6 +1034,11 @@ struct rte_tm_token_bucket {
  * Dual rate shapers use both the committed and the peak token buckets. The
  * rate of the peak bucket has to be bigger than zero, as well as greater than
  * or equal to the rate of the committed bucket.
+ *
+ * @see struct rte_tm_capabilities::shaper_private_packet_mode_supported
+ * @see struct rte_tm_capabilities::shaper_private_byte_mode_supported
+ * @see struct rte_tm_capabilities::shaper_shared_packet_mode_supported
+ * @see struct rte_tm_capabilities::shaper_shared_byte_mode_supported
  */
 struct rte_tm_shaper_params {
 	/** Committed token bucket */
@@ -872,8 +1051,19 @@ struct rte_tm_shaper_params {
 	 * purpose of shaping. Can be used to correct the packet length with
 	 * the framing overhead bytes that are also consumed on the wire (e.g.
 	 * RTE_TM_ETH_FRAMING_OVERHEAD_FCS).
+	 * This field is ignored when the profile enables packet mode.
 	 */
 	int32_t pkt_length_adjust;
+
+	/** When zero, the private or shared shaper that is associated to this
+	 * profile works in byte mode and hence *rate* and *size* fields in
+	 * both token bucket configurations are specified in bytes per second
+	 * and bytes respectively.
+	 * When non-zero, that private or shared shaper works in packet mode
+	 * and hence *rate* and *size* fields in both token bucket
+	 * configurations are specified in packets per second and packets
+	 * respectively.
+	 */
+	int packet_mode;
 };

 /**
@@ -925,6 +1115,8 @@ struct rte_tm_node_params {
 	 * When non-NULL, it points to a pre-allocated array of
 	 * *n_sp_priorities* values, with non-zero value for
 	 * byte-mode and zero for packet-mode.
+	 * @see struct rte_tm_node_capabilities::sched_wfq_packet_mode_supported
+	 * @see struct rte_tm_node_capabilities::sched_wfq_byte_mode_supported
 	 */
 	int *wfq_weight_mode;

From patchwork Wed Apr 22 07:59:45 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 69099
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nithin Dabilpuram
To: Beilei Xing, Qi Zhang, Rosen Xu, Wenzhuo Lu, Konstantin Ananyev, Liron Himi
Cc: dev@dpdk.org, jerinj@marvell.com, kkanas@marvell.com, Nithin Dabilpuram
Date: Wed, 22 Apr 2020 13:29:45 +0530
Message-Id: <20200422075948.10051-2-nithind1988@gmail.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20200422075948.10051-1-nithind1988@gmail.com>
References: <20200330160019.29674-1-ndabilpuram@marvell.com> <20200422075948.10051-1-nithind1988@gmail.com>
Subject: [dpdk-dev] [PATCH v3 2/4] drivers/net: update tm capability for existing pmds
List-Id: DPDK patches and discussions

From: Nithin Dabilpuram

Since the existing PMDs support shaper byte mode and scheduler wfq byte mode,
advertise the same in the port/level/node capabilities that are newly added.
SoftNIC PMD is already up to date with the new capabilities.

Signed-off-by: Nithin Dabilpuram
---
v2..v3:
- Update node/level cap with shaper_shared_(packet, byte)_mode_supported.
v2:
- Newly included patch to change the existing PMDs with tm support for byte
  mode to advertise the same in their port/level/node capabilities.

 drivers/net/i40e/i40e_tm.c     | 22 ++++++++++++++++++++++
 drivers/net/ipn3ke/ipn3ke_tm.c | 38 ++++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_tm.c   | 22 ++++++++++++++++++++++
 drivers/net/mvpp2/mrvl_tm.c    | 14 ++++++++++++++
 4 files changed, 96 insertions(+)

diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index c76760c..5d722f9 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -160,12 +160,16 @@ i40e_tm_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_rate_min = 0;
 	/* 40Gbps -> 5GBps */
 	cap->shaper_private_rate_max = 5000000000ull;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 	cap->shaper_shared_n_max = 0;
 	cap->shaper_shared_n_nodes_per_shaper_max = 0;
 	cap->shaper_shared_n_shapers_per_node_max = 0;
 	cap->shaper_shared_dual_rate_n_max = 0;
 	cap->shaper_shared_rate_min = 0;
 	cap->shaper_shared_rate_max = 0;
+	cap->shaper_shared_packet_mode_supported = 0;
+	cap->shaper_shared_byte_mode_supported = 0;
 	cap->sched_n_children_max = hw->func_caps.num_tx_qp;
 	/**
 	 * HW supports SP. But no plan to support it now.
@@ -179,6 +183,8 @@ i40e_tm_capabilities_get(struct rte_eth_dev *dev,
 	 * So, all the nodes should have the same weight.
 	 */
 	cap->sched_wfq_weight_max = 1;
+	cap->sched_wfq_packet_mode_supported = 0;
+	cap->sched_wfq_byte_mode_supported = 0;
 	cap->cman_head_drop_supported = 0;
 	cap->dynamic_update_mask = 0;
 	cap->shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD;
@@ -754,7 +760,11 @@ i40e_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_rate_min = 0;
 		/* 40Gbps -> 5GBps */
 		cap->nonleaf.shaper_private_rate_max = 5000000000ull;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 		cap->nonleaf.shaper_shared_n_max = 0;
+		cap->nonleaf.shaper_shared_packet_mode_supported = 0;
+		cap->nonleaf.shaper_shared_byte_mode_supported = 0;
 		if (level_id == I40E_TM_NODE_TYPE_PORT)
 			cap->nonleaf.sched_n_children_max =
 				I40E_MAX_TRAFFIC_CLASS;
@@ -765,6 +775,8 @@ i40e_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 1;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 		cap->nonleaf.stats_mask = 0;

 		return 0;
@@ -776,7 +788,11 @@ i40e_level_capabilities_get(struct rte_eth_dev *dev,
 	cap->leaf.shaper_private_rate_min = 0;
 	/* 40Gbps -> 5GBps */
 	cap->leaf.shaper_private_rate_max = 5000000000ull;
+	cap->leaf.shaper_private_packet_mode_supported = 0;
+	cap->leaf.shaper_private_byte_mode_supported = 1;
 	cap->leaf.shaper_shared_n_max = 0;
+	cap->leaf.shaper_shared_packet_mode_supported = 0;
+	cap->leaf.shaper_shared_byte_mode_supported = 0;
 	cap->leaf.cman_head_drop_supported = false;
 	cap->leaf.cman_wred_context_private_supported = true;
 	cap->leaf.cman_wred_context_shared_n_max = 0;
@@ -817,7 +833,11 @@ i40e_node_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_rate_min = 0;
 	/* 40Gbps -> 5GBps */
 	cap->shaper_private_rate_max = 5000000000ull;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 	cap->shaper_shared_n_max = 0;
+	cap->shaper_shared_packet_mode_supported = 0;
+	cap->shaper_shared_byte_mode_supported = 0;
 	if (node_type == I40E_TM_NODE_TYPE_QUEUE) {
 		cap->leaf.cman_head_drop_supported = false;
@@ -834,6 +854,8 @@ i40e_node_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 1;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 	}

 	cap->stats_mask = 0;
diff --git a/drivers/net/ipn3ke/ipn3ke_tm.c b/drivers/net/ipn3ke/ipn3ke_tm.c
index 5a16c5f..17ac026 100644
--- a/drivers/net/ipn3ke/ipn3ke_tm.c
+++ b/drivers/net/ipn3ke/ipn3ke_tm.c
@@ -440,6 +440,8 @@ ipn3ke_tm_capabilities_get(__rte_unused struct rte_eth_dev *dev,
 	cap->shaper_private_dual_rate_n_max = 0;
 	cap->shaper_private_rate_min = 1;
 	cap->shaper_private_rate_max = 1 + IPN3KE_TM_VT_NODE_NUM;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;

 	cap->shaper_shared_n_max = 0;
 	cap->shaper_shared_n_nodes_per_shaper_max = 0;
@@ -447,6 +449,8 @@ ipn3ke_tm_capabilities_get(__rte_unused struct rte_eth_dev *dev,
 	cap->shaper_shared_dual_rate_n_max = 0;
 	cap->shaper_shared_rate_min = 0;
 	cap->shaper_shared_rate_max = 0;
+	cap->shaper_shared_packet_mode_supported = 0;
+	cap->shaper_shared_byte_mode_supported = 0;

 	cap->shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD_FCS;
 	cap->shaper_pkt_length_adjust_max = RTE_TM_ETH_FRAMING_OVERHEAD_FCS;
@@ -456,6 +460,8 @@ ipn3ke_tm_capabilities_get(__rte_unused struct rte_eth_dev *dev,
 	cap->sched_wfq_n_children_per_group_max = UINT32_MAX;
 	cap->sched_wfq_n_groups_max = 1;
 	cap->sched_wfq_weight_max = UINT32_MAX;
+	cap->sched_wfq_packet_mode_supported = 0;
+	cap->sched_wfq_byte_mode_supported = 1;

 	cap->cman_wred_packet_mode_supported = 0;
 	cap->cman_wred_byte_mode_supported = 0;
@@ -517,13 +523,19 @@ ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_dual_rate_supported = 0;
 		cap->nonleaf.shaper_private_rate_min = 1;
 		cap->nonleaf.shaper_private_rate_max = UINT32_MAX;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 		cap->nonleaf.shaper_shared_n_max = 0;
+		cap->nonleaf.shaper_shared_packet_mode_supported = 0;
+		cap->nonleaf.shaper_shared_byte_mode_supported = 0;

 		cap->nonleaf.sched_n_children_max = IPN3KE_TM_VT_NODE_NUM;
 		cap->nonleaf.sched_sp_n_priorities_max = 1;
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 0;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;

 		cap->nonleaf.stats_mask = STATS_MASK_DEFAULT;
 		break;
@@ -539,13 +551,19 @@ ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_dual_rate_supported = 0;
 		cap->nonleaf.shaper_private_rate_min = 1;
 		cap->nonleaf.shaper_private_rate_max = UINT32_MAX;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 		cap->nonleaf.shaper_shared_n_max = 0;
+		cap->nonleaf.shaper_shared_packet_mode_supported = 0;
+		cap->nonleaf.shaper_shared_byte_mode_supported = 0;

 		cap->nonleaf.sched_n_children_max = IPN3KE_TM_COS_NODE_NUM;
 		cap->nonleaf.sched_sp_n_priorities_max = 1;
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 0;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;

 		cap->nonleaf.stats_mask = STATS_MASK_DEFAULT;
 		break;
@@ -561,7 +579,11 @@ ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->leaf.shaper_private_dual_rate_supported = 0;
 		cap->leaf.shaper_private_rate_min = 0;
 		cap->leaf.shaper_private_rate_max = 0;
+		cap->leaf.shaper_private_packet_mode_supported = 0;
+		cap->leaf.shaper_private_byte_mode_supported = 1;
cap->leaf.shaper_shared_n_max = 0; + cap->leaf.shaper_shared_packet_mode_supported = 0; + cap->leaf.shaper_shared_byte_mode_supported = 0; cap->leaf.cman_head_drop_supported = 0; cap->leaf.cman_wred_packet_mode_supported = WRED_SUPPORTED; @@ -632,7 +654,11 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev, cap->shaper_private_dual_rate_supported = 0; cap->shaper_private_rate_min = 1; cap->shaper_private_rate_max = UINT32_MAX; + cap->shaper_private_packet_mode_supported = 0; + cap->shaper_private_byte_mode_supported = 1; cap->shaper_shared_n_max = 0; + cap->shaper_shared_packet_mode_supported = 0; + cap->shaper_shared_byte_mode_supported = 0; cap->nonleaf.sched_n_children_max = IPN3KE_TM_VT_NODE_NUM; cap->nonleaf.sched_sp_n_priorities_max = 1; @@ -640,6 +666,8 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev, IPN3KE_TM_VT_NODE_NUM; cap->nonleaf.sched_wfq_n_groups_max = 1; cap->nonleaf.sched_wfq_weight_max = 1; + cap->nonleaf.sched_wfq_packet_mode_supported = 0; + cap->nonleaf.sched_wfq_byte_mode_supported = 0; cap->stats_mask = STATS_MASK_DEFAULT; break; @@ -649,7 +677,11 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev, cap->shaper_private_dual_rate_supported = 0; cap->shaper_private_rate_min = 1; cap->shaper_private_rate_max = UINT32_MAX; + cap->shaper_private_packet_mode_supported = 0; + cap->shaper_private_byte_mode_supported = 1; cap->shaper_shared_n_max = 0; + cap->shaper_shared_packet_mode_supported = 0; + cap->shaper_shared_byte_mode_supported = 0; cap->nonleaf.sched_n_children_max = IPN3KE_TM_COS_NODE_NUM; cap->nonleaf.sched_sp_n_priorities_max = 1; @@ -657,6 +689,8 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev, IPN3KE_TM_COS_NODE_NUM; cap->nonleaf.sched_wfq_n_groups_max = 1; cap->nonleaf.sched_wfq_weight_max = 1; + cap->nonleaf.sched_wfq_packet_mode_supported = 0; + cap->nonleaf.sched_wfq_byte_mode_supported = 0; cap->stats_mask = STATS_MASK_DEFAULT; break; @@ -666,7 +700,11 @@ 
ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev, cap->shaper_private_dual_rate_supported = 0; cap->shaper_private_rate_min = 0; cap->shaper_private_rate_max = 0; + cap->shaper_private_packet_mode_supported = 0; + cap->shaper_private_byte_mode_supported = 0; cap->shaper_shared_n_max = 0; + cap->shaper_shared_packet_mode_supported = 0; + cap->shaper_shared_byte_mode_supported = 0; cap->leaf.cman_head_drop_supported = 0; cap->leaf.cman_wred_packet_mode_supported = WRED_SUPPORTED; diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c index 73845a7..a8407e7 100644 --- a/drivers/net/ixgbe/ixgbe_tm.c +++ b/drivers/net/ixgbe/ixgbe_tm.c @@ -168,12 +168,16 @@ ixgbe_tm_capabilities_get(struct rte_eth_dev *dev, cap->shaper_private_rate_min = 0; /* 10Gbps -> 1.25GBps */ cap->shaper_private_rate_max = 1250000000ull; + cap->shaper_private_packet_mode_supported = 0; + cap->shaper_private_byte_mode_supported = 1; cap->shaper_shared_n_max = 0; cap->shaper_shared_n_nodes_per_shaper_max = 0; cap->shaper_shared_n_shapers_per_node_max = 0; cap->shaper_shared_dual_rate_n_max = 0; cap->shaper_shared_rate_min = 0; cap->shaper_shared_rate_max = 0; + cap->shaper_shared_packet_mode_supported = 0; + cap->shaper_shared_byte_mode_supported = 0; cap->sched_n_children_max = hw->mac.max_tx_queues; /** * HW supports SP. But no plan to support it now. @@ -182,6 +186,8 @@ ixgbe_tm_capabilities_get(struct rte_eth_dev *dev, cap->sched_sp_n_priorities_max = 1; cap->sched_wfq_n_children_per_group_max = 0; cap->sched_wfq_n_groups_max = 0; + cap->sched_wfq_packet_mode_supported = 0; + cap->sched_wfq_byte_mode_supported = 0; /** * SW only supports fair round robin now. * So, all the nodes should have the same weight. 
@@ -875,7 +881,11 @@ ixgbe_level_capabilities_get(struct rte_eth_dev *dev, cap->nonleaf.shaper_private_rate_min = 0; /* 10Gbps -> 1.25GBps */ cap->nonleaf.shaper_private_rate_max = 1250000000ull; + cap->nonleaf.shaper_private_packet_mode_supported = 0; + cap->nonleaf.shaper_private_byte_mode_supported = 1; cap->nonleaf.shaper_shared_n_max = 0; + cap->nonleaf.shaper_shared_packet_mode_supported = 0; + cap->nonleaf.shaper_shared_byte_mode_supported = 0; if (level_id == IXGBE_TM_NODE_TYPE_PORT) cap->nonleaf.sched_n_children_max = IXGBE_DCB_MAX_TRAFFIC_CLASS; @@ -886,6 +896,8 @@ ixgbe_level_capabilities_get(struct rte_eth_dev *dev, cap->nonleaf.sched_wfq_n_children_per_group_max = 0; cap->nonleaf.sched_wfq_n_groups_max = 0; cap->nonleaf.sched_wfq_weight_max = 1; + cap->nonleaf.sched_wfq_packet_mode_supported = 0; + cap->nonleaf.sched_wfq_byte_mode_supported = 0; cap->nonleaf.stats_mask = 0; return 0; @@ -897,7 +909,11 @@ ixgbe_level_capabilities_get(struct rte_eth_dev *dev, cap->leaf.shaper_private_rate_min = 0; /* 10Gbps -> 1.25GBps */ cap->leaf.shaper_private_rate_max = 1250000000ull; + cap->leaf.shaper_private_packet_mode_supported = 0; + cap->leaf.shaper_private_byte_mode_supported = 1; cap->leaf.shaper_shared_n_max = 0; + cap->leaf.shaper_shared_packet_mode_supported = 0; + cap->leaf.shaper_shared_byte_mode_supported = 0; cap->leaf.cman_head_drop_supported = false; cap->leaf.cman_wred_context_private_supported = true; cap->leaf.cman_wred_context_shared_n_max = 0; @@ -938,7 +954,11 @@ ixgbe_node_capabilities_get(struct rte_eth_dev *dev, cap->shaper_private_rate_min = 0; /* 10Gbps -> 1.25GBps */ cap->shaper_private_rate_max = 1250000000ull; + cap->shaper_private_packet_mode_supported = 0; + cap->shaper_private_byte_mode_supported = 1; cap->shaper_shared_n_max = 0; + cap->shaper_shared_packet_mode_supported = 0; + cap->shaper_shared_byte_mode_supported = 0; if (node_type == IXGBE_TM_NODE_TYPE_QUEUE) { cap->leaf.cman_head_drop_supported = false; @@ -955,6 +975,8 @@ 
ixgbe_node_capabilities_get(struct rte_eth_dev *dev, cap->nonleaf.sched_wfq_n_children_per_group_max = 0; cap->nonleaf.sched_wfq_n_groups_max = 0; cap->nonleaf.sched_wfq_weight_max = 1; + cap->nonleaf.sched_wfq_packet_mode_supported = 0; + cap->nonleaf.sched_wfq_byte_mode_supported = 0; } cap->stats_mask = 0; diff --git a/drivers/net/mvpp2/mrvl_tm.c b/drivers/net/mvpp2/mrvl_tm.c index 3de8997..e98f576 100644 --- a/drivers/net/mvpp2/mrvl_tm.c +++ b/drivers/net/mvpp2/mrvl_tm.c @@ -193,12 +193,16 @@ mrvl_capabilities_get(struct rte_eth_dev *dev, cap->shaper_private_n_max = cap->shaper_n_max; cap->shaper_private_rate_min = MRVL_RATE_MIN; cap->shaper_private_rate_max = priv->rate_max; + cap->shaper_private_packet_mode_supported = 0; + cap->shaper_private_byte_mode_supported = 1; cap->sched_n_children_max = dev->data->nb_tx_queues; cap->sched_sp_n_priorities_max = dev->data->nb_tx_queues; cap->sched_wfq_n_children_per_group_max = dev->data->nb_tx_queues; cap->sched_wfq_n_groups_max = 1; cap->sched_wfq_weight_max = MRVL_WEIGHT_MAX; + cap->sched_wfq_packet_mode_supported = 0; + cap->sched_wfq_byte_mode_supported = 1; cap->dynamic_update_mask = RTE_TM_UPDATE_NODE_SUSPEND_RESUME | RTE_TM_UPDATE_NODE_STATS; @@ -244,6 +248,8 @@ mrvl_level_capabilities_get(struct rte_eth_dev *dev, cap->nonleaf.shaper_private_supported = 1; cap->nonleaf.shaper_private_rate_min = MRVL_RATE_MIN; cap->nonleaf.shaper_private_rate_max = priv->rate_max; + cap->nonleaf.shaper_private_packet_mode_supported = 0; + cap->nonleaf.shaper_private_byte_mode_supported = 1; cap->nonleaf.sched_n_children_max = dev->data->nb_tx_queues; cap->nonleaf.sched_sp_n_priorities_max = 1; @@ -251,6 +257,8 @@ mrvl_level_capabilities_get(struct rte_eth_dev *dev, dev->data->nb_tx_queues; cap->nonleaf.sched_wfq_n_groups_max = 1; cap->nonleaf.sched_wfq_weight_max = MRVL_WEIGHT_MAX; + cap->nonleaf.sched_wfq_packet_mode_supported = 0; + cap->nonleaf.sched_wfq_byte_mode_supported = 1; cap->nonleaf.stats_mask = RTE_TM_STATS_N_PKTS | 
RTE_TM_STATS_N_BYTES; } else { /* level_id == MRVL_NODE_QUEUE */ @@ -261,6 +269,8 @@ mrvl_level_capabilities_get(struct rte_eth_dev *dev, cap->leaf.shaper_private_supported = 1; cap->leaf.shaper_private_rate_min = MRVL_RATE_MIN; cap->leaf.shaper_private_rate_max = priv->rate_max; + cap->leaf.shaper_private_packet_mode_supported = 0; + cap->leaf.shaper_private_byte_mode_supported = 1; cap->leaf.stats_mask = RTE_TM_STATS_N_PKTS; } @@ -300,6 +310,8 @@ mrvl_node_capabilities_get(struct rte_eth_dev *dev, uint32_t node_id, cap->shaper_private_supported = 1; cap->shaper_private_rate_min = MRVL_RATE_MIN; cap->shaper_private_rate_max = priv->rate_max; + cap->shaper_private_packet_mode_supported = 0; + cap->shaper_private_byte_mode_supported = 1; if (node->type == MRVL_NODE_PORT) { cap->nonleaf.sched_n_children_max = dev->data->nb_tx_queues; @@ -308,6 +320,8 @@ mrvl_node_capabilities_get(struct rte_eth_dev *dev, uint32_t node_id, dev->data->nb_tx_queues; cap->nonleaf.sched_wfq_n_groups_max = 1; cap->nonleaf.sched_wfq_weight_max = MRVL_WEIGHT_MAX; + cap->nonleaf.sched_wfq_packet_mode_supported = 0; + cap->nonleaf.sched_wfq_byte_mode_supported = 1; cap->stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES; } else { cap->stats_mask = RTE_TM_STATS_N_PKTS; From patchwork Wed Apr 22 07:59:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nithin Dabilpuram X-Patchwork-Id: 69100 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 42DD2A00C2; Wed, 22 Apr 2020 10:00:44 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 92D421D407; Wed, 22 Apr 2020 10:00:26 +0200 (CEST) Received: from mail-pl1-f196.google.com (mail-pl1-f196.google.com [209.85.214.196]) by dpdk.org (Postfix) with ESMTP 
From: Nithin Dabilpuram
To: Wenzhuo Lu, Jingjing Wu, Bernard Iremonger, John McNamara,
 Marko Kovacevic
Cc: dev@dpdk.org, jerinj@marvell.com,
 kkanas@marvell.com, Nithin Dabilpuram
Date: Wed, 22 Apr 2020 13:29:46 +0530
Message-Id: <20200422075948.10051-3-nithind1988@gmail.com>
In-Reply-To: <20200422075948.10051-1-nithind1988@gmail.com>
References: <20200330160019.29674-1-ndabilpuram@marvell.com>
 <20200422075948.10051-1-nithind1988@gmail.com>
Subject: [dpdk-dev] [PATCH v3 3/4] app/testpmd: add tm cmd for non leaf and
 shaper pktmode

From: Nithin Dabilpuram

Add a TM command to enable packet mode for all SP children of a
non-leaf node. This is a new command: "add port tm nonleaf node
pktmode".

Also extend the shaper profile add command to take a packet mode
parameter used to set up the shaper in packet mode. This adds an
extra argument "packet_mode" as the last argument of the existing
command "add port tm node shaper profile".

This patch also dumps the new TM port/level/node capabilities
sched_wfq_packet_mode_supported, sched_wfq_byte_mode_supported,
shaper_private_packet_mode_supported,
shaper_private_byte_mode_supported,
shaper_shared_packet_mode_supported and
shaper_shared_byte_mode_supported.

Signed-off-by: Nithin Dabilpuram
---
v2..v3:
- Update cmdline dump of node/level cap.

v1..v2:
- Update tm capability show cmd to dump latest pktmode/bytemode fields
  of v2.
- Update existing shaper profile add command to take the last argument
  as pkt_mode and update struct rte_tm_shaper_params:packet_mode with
  the same.
- Update documentation with latest command changes.
app/test-pmd/cmdline.c | 9 +- app/test-pmd/cmdline_tm.c | 220 ++++++++++++++++++++++++++++ app/test-pmd/cmdline_tm.h | 1 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 40 ++++- 4 files changed, 264 insertions(+), 6 deletions(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 22fb23a..880ec61 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -1189,7 +1189,7 @@ static void cmd_help_long_parsed(void *parsed_result, "add port tm node shaper profile (port_id) (shaper_profile_id)" " (cmit_tb_rate) (cmit_tb_size) (peak_tb_rate) (peak_tb_size)" - " (packet_length_adjust)\n" + " (packet_length_adjust) (packet_mode)\n" " Add port tm node private shaper profile.\n\n" "del port tm node shaper profile (port_id) (shaper_profile_id)\n" @@ -1221,6 +1221,12 @@ static void cmd_help_long_parsed(void *parsed_result, " [(shared_shaper_id_0) (shared_shaper_id_1)...]\n" " Add port tm nonleaf node.\n\n" + "add port tm nonleaf node pktmode (port_id) (node_id) (parent_node_id)" + " (priority) (weight) (level_id) (shaper_profile_id)" + " (n_sp_priorities) (stats_mask) (n_shared_shapers)" + " [(shared_shaper_id_0) (shared_shaper_id_1)...]\n" + " Add port tm nonleaf node with pkt mode enabled.\n\n" + "add port tm leaf node (port_id) (node_id) (parent_node_id)" " (priority) (weight) (level_id) (shaper_profile_id)" " (cman_mode) (wred_profile_id) (stats_mask) (n_shared_shapers)" @@ -19655,6 +19661,7 @@ cmdline_parse_ctx_t main_ctx[] = { (cmdline_parse_inst_t *)&cmd_del_port_tm_node_wred_profile, (cmdline_parse_inst_t *)&cmd_set_port_tm_node_shaper_profile, (cmdline_parse_inst_t *)&cmd_add_port_tm_nonleaf_node, + (cmdline_parse_inst_t *)&cmd_add_port_tm_nonleaf_node_pmode, (cmdline_parse_inst_t *)&cmd_add_port_tm_leaf_node, (cmdline_parse_inst_t *)&cmd_del_port_tm_node, (cmdline_parse_inst_t *)&cmd_set_port_tm_node_parent, diff --git a/app/test-pmd/cmdline_tm.c b/app/test-pmd/cmdline_tm.c index 6951beb..1f35fa7 100644 --- a/app/test-pmd/cmdline_tm.c +++ 
b/app/test-pmd/cmdline_tm.c @@ -257,6 +257,10 @@ static void cmd_show_port_tm_cap_parsed(void *parsed_result, cap.shaper_private_rate_min); printf("cap.shaper_private_rate_max %" PRIu64 "\n", cap.shaper_private_rate_max); + printf("cap.shaper_private_packet_mode_supported %" PRId32 "\n", + cap.shaper_private_packet_mode_supported); + printf("cap.shaper_private_byte_mode_supported %" PRId32 "\n", + cap.shaper_private_byte_mode_supported); printf("cap.shaper_shared_n_max %" PRIu32 "\n", cap.shaper_shared_n_max); printf("cap.shaper_shared_n_nodes_per_shaper_max %" PRIu32 "\n", @@ -269,6 +273,10 @@ static void cmd_show_port_tm_cap_parsed(void *parsed_result, cap.shaper_shared_rate_min); printf("cap.shaper_shared_rate_max %" PRIu64 "\n", cap.shaper_shared_rate_max); + printf("cap.shaper_shared_packet_mode_supported %" PRId32 "\n", + cap.shaper_shared_packet_mode_supported); + printf("cap.shaper_shared_byte_mode_supported %" PRId32 "\n", + cap.shaper_shared_byte_mode_supported); printf("cap.shaper_pkt_length_adjust_min %" PRId32 "\n", cap.shaper_pkt_length_adjust_min); printf("cap.shaper_pkt_length_adjust_max %" PRId32 "\n", @@ -283,6 +291,10 @@ static void cmd_show_port_tm_cap_parsed(void *parsed_result, cap.sched_wfq_n_groups_max); printf("cap.sched_wfq_weight_max %" PRIu32 "\n", cap.sched_wfq_weight_max); + printf("cap.sched_wfq_packet_mode_supported %" PRId32 "\n", + cap.sched_wfq_packet_mode_supported); + printf("cap.sched_wfq_byte_mode_supported %" PRId32 "\n", + cap.sched_wfq_byte_mode_supported); printf("cap.cman_head_drop_supported %" PRId32 "\n", cap.cman_head_drop_supported); printf("cap.cman_wred_context_n_max %" PRIu32 "\n", @@ -401,8 +413,19 @@ static void cmd_show_port_tm_level_cap_parsed(void *parsed_result, lcap.nonleaf.shaper_private_rate_min); printf("cap.nonleaf.shaper_private_rate_max %" PRIu64 "\n", lcap.nonleaf.shaper_private_rate_max); + printf("cap.nonleaf.shaper_private_packet_mode_supported %" + PRId32 "\n", + 
lcap.nonleaf.shaper_private_packet_mode_supported); + printf("cap.nonleaf.shaper_private_byte_mode_supported %" PRId32 + "\n", lcap.nonleaf.shaper_private_byte_mode_supported); printf("cap.nonleaf.shaper_shared_n_max %" PRIu32 "\n", lcap.nonleaf.shaper_shared_n_max); + printf("cap.nonleaf.shaper_shared_packet_mode_supported %" + PRId32 "\n", + lcap.nonleaf.shaper_shared_packet_mode_supported); + printf("cap.nonleaf.shaper_shared_byte_mode_supported %" + PRId32 "\n", + lcap.nonleaf.shaper_shared_byte_mode_supported); printf("cap.nonleaf.sched_n_children_max %" PRIu32 "\n", lcap.nonleaf.sched_n_children_max); printf("cap.nonleaf.sched_sp_n_priorities_max %" PRIu32 "\n", @@ -413,6 +436,10 @@ static void cmd_show_port_tm_level_cap_parsed(void *parsed_result, lcap.nonleaf.sched_wfq_n_groups_max); printf("cap.nonleaf.sched_wfq_weight_max %" PRIu32 "\n", lcap.nonleaf.sched_wfq_weight_max); + printf("cap.nonleaf.sched_wfq_packet_mode_supported %" PRId32 "\n", + lcap.nonleaf.sched_wfq_packet_mode_supported); + printf("cap.nonleaf.sched_wfq_byte_mode_supported %" PRId32 + "\n", lcap.nonleaf.sched_wfq_byte_mode_supported); printf("cap.nonleaf.stats_mask %" PRIx64 "\n", lcap.nonleaf.stats_mask); } else { @@ -424,8 +451,16 @@ static void cmd_show_port_tm_level_cap_parsed(void *parsed_result, lcap.leaf.shaper_private_rate_min); printf("cap.leaf.shaper_private_rate_max %" PRIu64 "\n", lcap.leaf.shaper_private_rate_max); + printf("cap.leaf.shaper_private_packet_mode_supported %" PRId32 + "\n", lcap.leaf.shaper_private_packet_mode_supported); + printf("cap.leaf.shaper_private_byte_mode_supported %" PRId32 "\n", + lcap.leaf.shaper_private_byte_mode_supported); printf("cap.leaf.shaper_shared_n_max %" PRIu32 "\n", lcap.leaf.shaper_shared_n_max); + printf("cap.leaf.shaper_shared_packet_mode_supported %" PRId32 "\n", + lcap.leaf.shaper_shared_packet_mode_supported); + printf("cap.leaf.shaper_shared_byte_mode_supported %" PRId32 "\n", + lcap.leaf.shaper_shared_byte_mode_supported); 
printf("cap.leaf.cman_head_drop_supported %" PRId32 "\n", lcap.leaf.cman_head_drop_supported); printf("cap.leaf.cman_wred_context_private_supported %" PRId32 @@ -524,8 +559,16 @@ static void cmd_show_port_tm_node_cap_parsed(void *parsed_result, ncap.shaper_private_rate_min); printf("cap.shaper_private_rate_max %" PRIu64 "\n", ncap.shaper_private_rate_max); + printf("cap.shaper_private_packet_mode_supported %" PRId32 "\n", + ncap.shaper_private_packet_mode_supported); + printf("cap.shaper_private_byte_mode_supported %" PRId32 "\n", + ncap.shaper_private_byte_mode_supported); printf("cap.shaper_shared_n_max %" PRIu32 "\n", ncap.shaper_shared_n_max); + printf("cap.shaper_shared_packet_mode_supported %" PRId32 "\n", + ncap.shaper_shared_packet_mode_supported); + printf("cap.shaper_shared_byte_mode_supported %" PRId32 "\n", + ncap.shaper_shared_byte_mode_supported); if (!is_leaf) { printf("cap.nonleaf.sched_n_children_max %" PRIu32 "\n", ncap.nonleaf.sched_n_children_max); @@ -537,6 +580,10 @@ static void cmd_show_port_tm_node_cap_parsed(void *parsed_result, ncap.nonleaf.sched_wfq_n_groups_max); printf("cap.nonleaf.sched_wfq_weight_max %" PRIu32 "\n", ncap.nonleaf.sched_wfq_weight_max); + printf("cap.nonleaf.sched_wfq_packet_mode_supported %" PRId32 "\n", + ncap.nonleaf.sched_wfq_packet_mode_supported); + printf("cap.nonleaf.sched_wfq_byte_mode_supported %" PRId32 "\n", + ncap.nonleaf.sched_wfq_byte_mode_supported); } else { printf("cap.leaf.cman_head_drop_supported %" PRId32 "\n", ncap.leaf.cman_head_drop_supported); @@ -776,6 +823,7 @@ struct cmd_add_port_tm_node_shaper_profile_result { uint64_t peak_tb_rate; uint64_t peak_tb_size; uint32_t pktlen_adjust; + int pkt_mode; }; cmdline_parse_token_string_t cmd_add_port_tm_node_shaper_profile_add = @@ -829,6 +877,10 @@ cmdline_parse_token_num_t cmd_add_port_tm_node_shaper_profile_pktlen_adjust = TOKEN_NUM_INITIALIZER( struct cmd_add_port_tm_node_shaper_profile_result, pktlen_adjust, UINT32); +cmdline_parse_token_num_t 
cmd_add_port_tm_node_shaper_profile_packet_mode = + TOKEN_NUM_INITIALIZER( + struct cmd_add_port_tm_node_shaper_profile_result, + pkt_mode, UINT32); static void cmd_add_port_tm_node_shaper_profile_parsed(void *parsed_result, __rte_unused struct cmdline *cl, @@ -853,6 +905,7 @@ static void cmd_add_port_tm_node_shaper_profile_parsed(void *parsed_result, sp.peak.rate = res->peak_tb_rate; sp.peak.size = res->peak_tb_size; sp.pkt_length_adjust = pkt_len_adjust; + sp.packet_mode = res->pkt_mode; ret = rte_tm_shaper_profile_add(port_id, shaper_id, &sp, &error); if (ret != 0) { @@ -879,6 +932,7 @@ cmdline_parse_inst_t cmd_add_port_tm_node_shaper_profile = { (void *)&cmd_add_port_tm_node_shaper_profile_peak_tb_rate, (void *)&cmd_add_port_tm_node_shaper_profile_peak_tb_size, (void *)&cmd_add_port_tm_node_shaper_profile_pktlen_adjust, + (void *)&cmd_add_port_tm_node_shaper_profile_packet_mode, NULL, }, }; @@ -1671,6 +1725,172 @@ cmdline_parse_inst_t cmd_add_port_tm_nonleaf_node = { }, }; +/* *** Add Port TM nonleaf node pkt mode *** */ +struct cmd_add_port_tm_nonleaf_node_pmode_result { + cmdline_fixed_string_t add; + cmdline_fixed_string_t port; + cmdline_fixed_string_t tm; + cmdline_fixed_string_t nonleaf; + cmdline_fixed_string_t node; + uint16_t port_id; + uint32_t node_id; + int32_t parent_node_id; + uint32_t priority; + uint32_t weight; + uint32_t level_id; + int32_t shaper_profile_id; + uint32_t n_sp_priorities; + uint64_t stats_mask; + cmdline_multi_string_t multi_shared_shaper_id; +}; + +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_add = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, add, "add"); +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_port = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, port, "port"); +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_tm = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, tm, 
"tm"); +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_nonleaf = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, nonleaf, "nonleaf"); +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_node = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, node, "node"); +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_pktmode = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, node, "pktmode"); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_port_id = + TOKEN_NUM_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, + port_id, UINT16); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_node_id = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + node_id, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_parent_node_id = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + parent_node_id, INT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_priority = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + priority, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_weight = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + weight, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_level_id = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + level_id, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_shaper_profile_id = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + shaper_profile_id, INT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_n_sp_priorities = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + n_sp_priorities, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_stats_mask = + 
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + stats_mask, UINT64); +cmdline_parse_token_string_t + cmd_add_port_tm_nonleaf_node_pmode_multi_shrd_shpr_id = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, + multi_shared_shaper_id, TOKEN_STRING_MULTI); + +static void cmd_add_port_tm_nonleaf_node_pmode_parsed(void *parsed_result, + __attribute__((unused)) struct cmdline *cl, + __attribute__((unused)) void *data) +{ + struct cmd_add_port_tm_nonleaf_node_pmode_result *res = parsed_result; + uint32_t parent_node_id, n_shared_shapers = 0; + char *s_str = res->multi_shared_shaper_id; + portid_t port_id = res->port_id; + struct rte_tm_node_params np; + int *wfq_weight_mode = NULL; + uint32_t *shared_shaper_id; + struct rte_tm_error error; + int ret; + + if (port_id_is_invalid(port_id, ENABLED_WARN)) + return; + + memset(&np, 0, sizeof(struct rte_tm_node_params)); + memset(&error, 0, sizeof(struct rte_tm_error)); + + /* Node parameters */ + if (res->parent_node_id < 0) + parent_node_id = UINT32_MAX; + else + parent_node_id = res->parent_node_id; + + shared_shaper_id = (uint32_t *)malloc(MAX_NUM_SHARED_SHAPERS * + sizeof(uint32_t)); + if (shared_shaper_id == NULL) { + printf(" Memory not allocated for shared shapers (error)\n"); + return; + } + + /* Parse multi shared shaper id string */ + ret = parse_multi_ss_id_str(s_str, &n_shared_shapers, shared_shaper_id); + if (ret) { + printf(" Shared shapers params string parse error\n"); + free(shared_shaper_id); + return; + } + + if (res->shaper_profile_id < 0) + np.shaper_profile_id = UINT32_MAX; + else + np.shaper_profile_id = res->shaper_profile_id; + + np.n_shared_shapers = n_shared_shapers; + if (np.n_shared_shapers) { + np.shared_shaper_id = &shared_shaper_id[0]; + } else { + free(shared_shaper_id); + shared_shaper_id = NULL; + } + + if (res->n_sp_priorities) + wfq_weight_mode = calloc(res->n_sp_priorities, sizeof(int)); + np.nonleaf.n_sp_priorities = 
res->n_sp_priorities; + np.stats_mask = res->stats_mask; + np.nonleaf.wfq_weight_mode = wfq_weight_mode; + + ret = rte_tm_node_add(port_id, res->node_id, parent_node_id, + res->priority, res->weight, res->level_id, + &np, &error); + if (ret != 0) { + print_err_msg(&error); + free(shared_shaper_id); + free(wfq_weight_mode); + return; + } +} + +cmdline_parse_inst_t cmd_add_port_tm_nonleaf_node_pmode = { + .f = cmd_add_port_tm_nonleaf_node_pmode_parsed, + .data = NULL, + .help_str = "Add port tm nonleaf node pktmode", + .tokens = { + (void *)&cmd_add_port_tm_nonleaf_node_pmode_add, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_port, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_tm, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_nonleaf, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_node, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_pktmode, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_port_id, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_node_id, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_parent_node_id, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_priority, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_weight, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_level_id, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_shaper_profile_id, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_n_sp_priorities, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_stats_mask, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_multi_shrd_shpr_id, + NULL, + }, +}; /* *** Add Port TM leaf node *** */ struct cmd_add_port_tm_leaf_node_result { cmdline_fixed_string_t add; diff --git a/app/test-pmd/cmdline_tm.h b/app/test-pmd/cmdline_tm.h index 950cb75..e59c15c 100644 --- a/app/test-pmd/cmdline_tm.h +++ b/app/test-pmd/cmdline_tm.h @@ -19,6 +19,7 @@ extern cmdline_parse_inst_t cmd_add_port_tm_node_wred_profile; extern cmdline_parse_inst_t cmd_del_port_tm_node_wred_profile; extern cmdline_parse_inst_t cmd_set_port_tm_node_shaper_profile; extern cmdline_parse_inst_t cmd_add_port_tm_nonleaf_node; +extern 
cmdline_parse_inst_t cmd_add_port_tm_nonleaf_node_pmode; extern cmdline_parse_inst_t cmd_add_port_tm_leaf_node; extern cmdline_parse_inst_t cmd_del_port_tm_node; extern cmdline_parse_inst_t cmd_set_port_tm_node_parent; diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index a360ecc..7513a97 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -2842,19 +2842,22 @@ Add the port traffic management private shaper profile:: testpmd> add port tm node shaper profile (port_id) (shaper_profile_id) \ (cmit_tb_rate) (cmit_tb_size) (peak_tb_rate) (peak_tb_size) \ - (packet_length_adjust) + (packet_length_adjust) (packet_mode) where: * ``shaper_profile id``: Shaper profile ID for the new profile. -* ``cmit_tb_rate``: Committed token bucket rate (bytes per second). -* ``cmit_tb_size``: Committed token bucket size (bytes). -* ``peak_tb_rate``: Peak token bucket rate (bytes per second). -* ``peak_tb_size``: Peak token bucket size (bytes). +* ``cmit_tb_rate``: Committed token bucket rate (bytes per second or packets per second). +* ``cmit_tb_size``: Committed token bucket size (bytes or packets). +* ``peak_tb_rate``: Peak token bucket rate (bytes per second or packets per second). +* ``peak_tb_size``: Peak token bucket size (bytes or packets). * ``packet_length_adjust``: The value (bytes) to be added to the length of each packet for the purpose of shaping. This parameter value can be used to correct the packet length with the framing overhead bytes that are consumed on the wire. +* ``packet_mode``: Shaper mode selector. If this parameter is zero, the +  shaper is configured in byte mode; if it is non-zero, the shaper is +  configured in packet mode. Delete port traffic management private shaper profile ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -2979,6 +2982,33 @@ where: * ``n_shared_shapers``: Number of shared shapers. * ``shared_shaper_id``: Shared shaper id.
+Add port traffic management hierarchy nonleaf node with packet mode +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Add nonleaf node with packet mode to port traffic management hierarchy:: + + testpmd> add port tm nonleaf node pktmode (port_id) (node_id) (parent_node_id) \ + (priority) (weight) (level_id) (shaper_profile_id) \ + (n_sp_priorities) (stats_mask) (n_shared_shapers) \ + [(shared_shaper_0) (shared_shaper_1) ...] + +where: + +* ``parent_node_id``: Node ID of the parent. +* ``priority``: Node priority (highest node priority is zero). This is used by + the SP algorithm running on the parent node for scheduling this node. +* ``weight``: Node weight (lowest weight is one). The node weight is relative + to the weight sum of all siblings that have the same priority. It is used by + the WFQ algorithm running on the parent node for scheduling this node. +* ``level_id``: Hierarchy level of the node. +* ``shaper_profile_id``: Shaper profile ID of the private shaper to be used by + the node. +* ``n_sp_priorities``: Number of strict priorities. Packet mode is enabled on + all of them. +* ``stats_mask``: Mask of statistics counter types to be enabled for this node. +* ``n_shared_shapers``: Number of shared shapers. +* ``shared_shaper_id``: Shared shaper id.
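[Editor's note: as an illustrative session for the two commands documented above — all port, node, and profile IDs below are hypothetical — a packet-mode shaper profile and a nonleaf node using it might be created as follows.]

```console
testpmd> add port tm node shaper profile 0 50 0 0 1000 100 0 1
testpmd> add port tm nonleaf node pktmode 0 200 100 0 1 1 50 1 0 0
```

Here the profile's trailing `1` selects packet mode, so the peak rate of 1000 and burst of 100 are interpreted as packets per second and packets rather than bytes; the node command then attaches profile 50 to node 200 under parent 100 with a single strict priority and no shared shapers.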
+ Add port traffic management hierarchy leaf node ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From patchwork Wed Apr 22 07:59:47 2020
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 69101
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nithin Dabilpuram
To: Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K
Cc: dev@dpdk.org, kkanas@marvell.com
Date: Wed, 22 Apr 2020 13:29:47 +0530
Message-Id: <20200422075948.10051-4-nithind1988@gmail.com>
In-Reply-To: <20200422075948.10051-1-nithind1988@gmail.com>
References: <20200330160019.29674-1-ndabilpuram@marvell.com> <20200422075948.10051-1-nithind1988@gmail.com>
Subject: [dpdk-dev] [PATCH v3 4/4] net/octeontx2: support tm length adjust and pkt mode

From: Nithin Dabilpuram

This patch adds support for the packet length adjust TM feature for private shapers. It also adds support for the packet mode feature, which applies both to private shapers and to node DWRR scheduling of SP children.

Signed-off-by: Nithin Dabilpuram --- v2..v3: - No change. v1..v2: - Newly included patch.
drivers/net/octeontx2/otx2_tm.c | 140 +++++++++++++++++++++++++++++++++------- drivers/net/octeontx2/otx2_tm.h | 5 ++ 2 files changed, 122 insertions(+), 23 deletions(-) diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c index f94618d..fa7d21b 100644 --- a/drivers/net/octeontx2/otx2_tm.c +++ b/drivers/net/octeontx2/otx2_tm.c @@ -336,18 +336,25 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node, { struct shaper_params cir, pir; uint32_t schq = tm_node->hw_id; + uint64_t adjust = 0; uint8_t k = 0; memset(&cir, 0, sizeof(cir)); memset(&pir, 0, sizeof(pir)); shaper_config_to_nix(profile, &cir, &pir); - otx2_tm_dbg("Shaper config node %s(%u) lvl %u id %u, " - "pir %" PRIu64 "(%" PRIu64 "B)," - " cir %" PRIu64 "(%" PRIu64 "B) (%p)", - nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl, - tm_node->id, pir.rate, pir.burst, - cir.rate, cir.burst, tm_node); + /* Packet length adjust */ + if (tm_node->pkt_mode) + adjust = 1; + else if (profile) + adjust = profile->params.pkt_length_adjust & 0x1FF; + + otx2_tm_dbg("Shaper config node %s(%u) lvl %u id %u, pir %" PRIu64 + "(%" PRIu64 "B), cir %" PRIu64 "(%" PRIu64 "B), " + "adjust 0x%" PRIx64 " (pktmode %u) (%p)", + nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl, + tm_node->id, pir.rate, pir.burst, cir.rate, cir.burst, + adjust, tm_node->pkt_mode, tm_node); switch (tm_node->hw_lvl) { case NIX_TXSCH_LVL_SMQ: @@ -364,7 +371,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node, /* Configure RED ALG */ reg[k] = NIX_AF_MDQX_SHAPE(schq); - regval[k] = ((uint64_t)tm_node->red_algo << 9); + regval[k] = (adjust | + (uint64_t)tm_node->red_algo << 9 | + (uint64_t)tm_node->pkt_mode << 24); k++; break; case NIX_TXSCH_LVL_TL4: @@ -381,7 +390,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node, /* Configure RED algo */ reg[k] = NIX_AF_TL4X_SHAPE(schq); - regval[k] = ((uint64_t)tm_node->red_algo << 9); + regval[k] = (adjust | + (uint64_t)tm_node->red_algo << 9 | + (uint64_t)tm_node->pkt_mode << 
24); k++; break; case NIX_TXSCH_LVL_TL3: @@ -398,7 +409,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node, /* Configure RED algo */ reg[k] = NIX_AF_TL3X_SHAPE(schq); - regval[k] = ((uint64_t)tm_node->red_algo << 9); + regval[k] = (adjust | + (uint64_t)tm_node->red_algo << 9 | + (uint64_t)tm_node->pkt_mode << 24); k++; break; @@ -416,7 +429,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node, /* Configure RED algo */ reg[k] = NIX_AF_TL2X_SHAPE(schq); - regval[k] = ((uint64_t)tm_node->red_algo << 9); + regval[k] = (adjust | + (uint64_t)tm_node->red_algo << 9 | + (uint64_t)tm_node->pkt_mode << 24); k++; break; @@ -426,6 +441,12 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node, regval[k] = (cir.rate && cir.burst) ? (shaper2regval(&cir) | 1) : 0; k++; + + /* Configure length disable and adjust */ + reg[k] = NIX_AF_TL1X_SHAPE(schq); + regval[k] = (adjust | + (uint64_t)tm_node->pkt_mode << 24); + k++; break; } @@ -773,6 +794,15 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id, tm_node->flags = 0; if (user) tm_node->flags = NIX_TM_NODE_USER; + + /* Packet mode */ + if (!nix_tm_is_leaf(dev, lvl) && + ((profile && profile->params.packet_mode) || + (params->nonleaf.wfq_weight_mode && + params->nonleaf.n_sp_priorities && + !params->nonleaf.wfq_weight_mode[0]))) + tm_node->pkt_mode = 1; + rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params)); if (profile) @@ -1873,8 +1903,10 @@ otx2_nix_tm_capa_get(struct rte_eth_dev *eth_dev, cap->shaper_private_dual_rate_n_max = max_nr_nodes; cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8; cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8; - cap->shaper_pkt_length_adjust_min = 0; - cap->shaper_pkt_length_adjust_max = 0; + cap->shaper_private_packet_mode_supported = 1; + cap->shaper_private_byte_mode_supported = 1; + cap->shaper_pkt_length_adjust_min = NIX_LENGTH_ADJUST_MIN; + cap->shaper_pkt_length_adjust_max = NIX_LENGTH_ADJUST_MAX; /* Schedule Capabilities */ 
cap->sched_n_children_max = rsp->schq[NIX_TXSCH_LVL_MDQ]; @@ -1882,6 +1914,8 @@ otx2_nix_tm_capa_get(struct rte_eth_dev *eth_dev, cap->sched_wfq_n_children_per_group_max = cap->sched_n_children_max; cap->sched_wfq_n_groups_max = 1; cap->sched_wfq_weight_max = MAX_SCHED_WEIGHT; + cap->sched_wfq_packet_mode_supported = 1; + cap->sched_wfq_byte_mode_supported = 1; cap->dynamic_update_mask = RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL | @@ -1944,12 +1978,16 @@ otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl, nix_tm_have_tl1_access(dev) ? false : true; cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8; cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8; + cap->nonleaf.shaper_private_packet_mode_supported = 1; + cap->nonleaf.shaper_private_byte_mode_supported = 1; cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1]; cap->nonleaf.sched_sp_n_priorities_max = nix_max_prio(dev, hw_lvl) + 1; cap->nonleaf.sched_wfq_n_groups_max = 1; cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT; + cap->nonleaf.sched_wfq_packet_mode_supported = 1; + cap->nonleaf.sched_wfq_byte_mode_supported = 1; if (nix_tm_have_tl1_access(dev)) cap->nonleaf.stats_mask = @@ -1966,6 +2004,8 @@ otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl, cap->nonleaf.shaper_private_dual_rate_supported = true; cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8; cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8; + cap->nonleaf.shaper_private_packet_mode_supported = 1; + cap->nonleaf.shaper_private_byte_mode_supported = 1; /* MDQ doesn't support Strict Priority */ if (hw_lvl == NIX_TXSCH_LVL_MDQ) @@ -1977,6 +2017,8 @@ otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl, nix_max_prio(dev, hw_lvl) + 1; cap->nonleaf.sched_wfq_n_groups_max = 1; cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT; + cap->nonleaf.sched_wfq_packet_mode_supported = 1; + cap->nonleaf.sched_wfq_byte_mode_supported = 1; } else { /* unsupported level */ 
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; @@ -2029,6 +2071,8 @@ otx2_nix_tm_node_capa_get(struct rte_eth_dev *eth_dev, uint32_t node_id, (hw_lvl == NIX_TXSCH_LVL_TL1) ? false : true; cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8; cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8; + cap->shaper_private_packet_mode_supported = 1; + cap->shaper_private_byte_mode_supported = 1; /* Non Leaf Scheduler */ if (hw_lvl == NIX_TXSCH_LVL_MDQ) @@ -2041,6 +2085,8 @@ otx2_nix_tm_node_capa_get(struct rte_eth_dev *eth_dev, uint32_t node_id, cap->nonleaf.sched_n_children_max; cap->nonleaf.sched_wfq_n_groups_max = 1; cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT; + cap->nonleaf.sched_wfq_packet_mode_supported = 1; + cap->nonleaf.sched_wfq_byte_mode_supported = 1; if (hw_lvl == NIX_TXSCH_LVL_TL1) cap->stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED | @@ -2096,6 +2142,13 @@ otx2_nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev, } } + if (params->pkt_length_adjust < NIX_LENGTH_ADJUST_MIN || + params->pkt_length_adjust > NIX_LENGTH_ADJUST_MAX) { + error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN; + error->message = "length adjust invalid"; + return -EINVAL; + } + profile = rte_zmalloc("otx2_nix_tm_shaper_profile", sizeof(struct otx2_nix_tm_shaper_profile), 0); if (!profile) @@ -2108,13 +2161,14 @@ otx2_nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev, otx2_tm_dbg("Added TM shaper profile %u, " " pir %" PRIu64 " , pbs %" PRIu64 ", cir %" PRIu64 - ", cbs %" PRIu64 " , adj %u", + ", cbs %" PRIu64 " , adj %u, pkt mode %d", profile_id, params->peak.rate * 8, params->peak.size, params->committed.rate * 8, params->committed.size, - params->pkt_length_adjust); + params->pkt_length_adjust, + params->packet_mode); /* Translate rate as bits per second */ profile->params.peak.rate = profile->params.peak.rate * 8; @@ -2170,9 +2224,11 @@ otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id, struct rte_tm_error *error) { struct otx2_eth_dev *dev = 
otx2_eth_pmd_priv(eth_dev); + struct otx2_nix_tm_shaper_profile *profile = NULL; struct otx2_nix_tm_node *parent_node; - int rc, clear_on_fail = 0; - uint32_t exp_next_lvl; + int rc, pkt_mode, clear_on_fail = 0; + uint32_t exp_next_lvl, i; + uint32_t profile_id; uint16_t hw_lvl; /* we don't support dynamic updates */ @@ -2234,13 +2290,45 @@ otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id, return -EINVAL; } - /* Check if shaper profile exists for non leaf node */ - if (!nix_tm_is_leaf(dev, lvl) && - params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE && - !nix_tm_shaper_profile_search(dev, params->shaper_profile_id)) { - error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID; - error->message = "invalid shaper profile"; - return -EINVAL; + if (!nix_tm_is_leaf(dev, lvl)) { + /* Check if shaper profile exists for non leaf node */ + profile_id = params->shaper_profile_id; + profile = nix_tm_shaper_profile_search(dev, profile_id); + if (profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE && !profile) { + error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID; + error->message = "invalid shaper profile"; + return -EINVAL; + } + + /* Minimum static priority count is 1 */ + if (!params->nonleaf.n_sp_priorities || + params->nonleaf.n_sp_priorities > TXSCH_TLX_SP_PRIO_MAX) { + error->type = + RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES; + error->message = "invalid sp priorities"; + return -EINVAL; + } + + pkt_mode = 0; + /* Validate weight mode */ + for (i = 0; i < params->nonleaf.n_sp_priorities && + params->nonleaf.wfq_weight_mode; i++) { + pkt_mode = !params->nonleaf.wfq_weight_mode[i]; + if (pkt_mode == !params->nonleaf.wfq_weight_mode[0]) + continue; + + error->type = + RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE; + error->message = "unsupported weight mode"; + return -EINVAL; + } + + if (profile && params->nonleaf.n_sp_priorities && + pkt_mode != profile->params.packet_mode) { + error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE; + error->message = "shaper wfq 
packet mode mismatch"; + return -EINVAL; + } } /* Check if there is second DWRR already in siblings or holes in prio */ @@ -2482,6 +2570,12 @@ otx2_nix_tm_node_shaper_update(struct rte_eth_dev *eth_dev, } } + if (profile && profile->params.packet_mode != tm_node->pkt_mode) { + error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID; + error->message = "shaper profile pkt mode mismatch"; + return -EINVAL; + } + tm_node->params.shaper_profile_id = profile_id; /* Nothing to do if not yet committed */ diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h index 9675182..cdca987 100644 --- a/drivers/net/octeontx2/otx2_tm.h +++ b/drivers/net/octeontx2/otx2_tm.h @@ -48,6 +48,7 @@ struct otx2_nix_tm_node { #define NIX_TM_NODE_USER BIT_ULL(2) /* Shaper algorithm for RED state @NIX_REDALG_E */ uint32_t red_algo:2; + uint32_t pkt_mode:1; struct otx2_nix_tm_node *parent; struct rte_tm_node_params params; @@ -114,6 +115,10 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile); #define MAX_SHAPER_RATE \ SHAPER_RATE(MAX_RATE_EXPONENT, MAX_RATE_MANTISSA, 0) +/* Min is limited so that NIX_AF_SMQX_CFG[MINLEN]+ADJUST is not -ve */ +#define NIX_LENGTH_ADJUST_MIN ((int)-NIX_MIN_HW_FRS + 1) +#define NIX_LENGTH_ADJUST_MAX 255 + /** TM Shaper - low level operations */ /** NIX burst limits */