From patchwork Sat Apr 11 11:44:27 2020
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 68186
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nithin Dabilpuram
To: Cristian Dumitrescu, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Cc: dev@dpdk.org, jerinj@marvell.com, kkanas@marvell.com, Nithin Dabilpuram
Date: Sat, 11 Apr 2020 17:14:27 +0530
Message-Id: <20200411114430.18506-1-nithind1988@gmail.com>
In-Reply-To: <20200330160019.29674-1-ndabilpuram@marvell.com>
References: <20200330160019.29674-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v2 1/4] ethdev: add tm support for shaper config in pkt mode

From: Nithin Dabilpuram

Some NIC hardware supports shapers that work in packet mode, i.e. shaping
or rate-limiting traffic in packets per second (PPS), as opposed to the
default bytes per second (BPS). Hence this patch adds support to configure
a shared or private shaper in packet mode, to provide the rate in PPS, and
adds the related tm capabilities to the port/level/node capability
structures.

This patch also updates the tm port/level/node capability structures with
the existing features: scheduler WFQ packet mode, scheduler WFQ byte mode
and private/shared shaper byte mode.
Signed-off-by: Nithin Dabilpuram
---
v1..v2:
- Add separate capability for shaper and scheduler pktmode and bytemode.
- Add packet_mode field in struct rte_tm_shaper_params to indicate
  packet mode shaper profile.

 lib/librte_ethdev/rte_tm.h | 156 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 154 insertions(+), 2 deletions(-)

diff --git a/lib/librte_ethdev/rte_tm.h b/lib/librte_ethdev/rte_tm.h
index f9c0cf3..38fff4c 100644
--- a/lib/librte_ethdev/rte_tm.h
+++ b/lib/librte_ethdev/rte_tm.h
@@ -250,6 +250,23 @@ struct rte_tm_capabilities {
 	 */
 	uint64_t shaper_private_rate_max;

+	/** Shaper private packet mode supported. When non-zero, this parameter
+	 * indicates that there is at least one node that can be configured
+	 * with packet mode in its private shaper. When the shaper is
+	 * configured in packet mode, the committed/peak rate provided is
+	 * interpreted in packets per second.
+	 */
+	int shaper_private_packet_mode_supported;
+
+	/** Shaper private byte mode supported. When non-zero, this parameter
+	 * indicates that there is at least one node that can be configured
+	 * with byte mode in its private shaper. When the shaper is configured
+	 * in byte mode, the committed/peak rate provided is interpreted in
+	 * bytes per second.
+	 */
+	int shaper_private_byte_mode_supported;
+
 	/** Maximum number of shared shapers. The value of zero indicates that
 	 * shared shapers are not supported.
 	 */
@@ -284,6 +301,21 @@ struct rte_tm_capabilities {
 	 */
 	uint64_t shaper_shared_rate_max;

+	/** Shaper shared packet mode supported. When non-zero, this parameter
+	 * indicates that a shared shaper can be configured with packet mode.
+	 * When a shared shaper is configured in packet mode, the
+	 * committed/peak rate provided is interpreted in packets per second.
+	 */
+	int shaper_shared_packet_mode_supported;
+
+	/** Shaper shared byte mode supported. When non-zero, this parameter
+	 * indicates that a shared shaper can be configured with byte mode.
+	 * When a shared shaper is configured in byte mode, the committed/peak
+	 * rate provided is interpreted in bytes per second.
+	 */
+	int shaper_shared_byte_mode_supported;
+
 	/** Minimum value allowed for packet length adjustment for any private
 	 * or shared shaper.
 	 */
@@ -339,6 +371,22 @@ struct rte_tm_capabilities {
 	 */
 	uint32_t sched_wfq_weight_max;

+	/** WFQ packet mode supported. When non-zero, this parameter indicates
+	 * that there is at least one non-leaf node that supports packet mode
+	 * for WFQ among its children. WFQ weights will be applied against
+	 * packet count for scheduling children when a non-leaf node
+	 * is configured appropriately.
+	 */
+	int sched_wfq_packet_mode_supported;
+
+	/** WFQ byte mode supported. When non-zero, this parameter indicates
+	 * that there is at least one non-leaf node that supports byte mode
+	 * for WFQ among its children. WFQ weights will be applied against
+	 * bytes for scheduling children when a non-leaf node is configured
+	 * appropriately.
+	 */
+	int sched_wfq_byte_mode_supported;
+
 	/** WRED packet mode support. When non-zero, this parameter indicates
 	 * that there is at least one leaf node that supports the WRED packet
 	 * mode, which might not be true for all the leaf nodes. In packet
@@ -485,6 +533,24 @@ struct rte_tm_level_capabilities {
 	 */
 	uint64_t shaper_private_rate_max;

+	/** Shaper private packet mode supported. When non-zero,
+	 * this parameter indicates that there is at least one
+	 * non-leaf node at this level that can be configured
+	 * with packet mode in its private shaper. When the
+	 * private shaper is configured in packet mode, the
+	 * committed/peak rate provided is interpreted in
+	 * packets per second.
+	 */
+	int shaper_private_packet_mode_supported;
+
+	/** Shaper private byte mode supported. When non-zero,
+	 * this parameter indicates that there is at least one
+	 * non-leaf node at this level that can be configured
+	 * with byte mode in its private shaper.
+	 * When the private
+	 * shaper is configured in byte mode, the committed/peak
+	 * rate provided is interpreted in bytes per second.
+	 */
+	int shaper_private_byte_mode_supported;
+
 	/** Maximum number of shared shapers that any non-leaf
 	 * node on this level can be part of. The value of zero
 	 * indicates that shared shapers are not supported by
@@ -554,6 +620,25 @@ struct rte_tm_level_capabilities {
 	 */
 	uint32_t sched_wfq_weight_max;

+	/** WFQ packet mode supported. When non-zero, this
+	 * parameter indicates that there is at least one
+	 * non-leaf node at this level that supports packet
+	 * mode for WFQ among its children. WFQ weights will
+	 * be applied against packet count for scheduling
+	 * children when a non-leaf node is configured
+	 * appropriately.
+	 */
+	int sched_wfq_packet_mode_supported;
+
+	/** WFQ byte mode supported. When non-zero, this
+	 * parameter indicates that there is at least one
+	 * non-leaf node at this level that supports byte
+	 * mode for WFQ among its children. WFQ weights will
+	 * be applied against bytes for scheduling children
+	 * when a non-leaf node is configured appropriately.
+	 */
+	int sched_wfq_byte_mode_supported;
+
 	/** Mask of statistics counter types supported by the
 	 * non-leaf nodes on this level. Every supported
 	 * statistics counter type is supported by at least one
@@ -596,6 +681,24 @@ struct rte_tm_level_capabilities {
 	 */
 	uint64_t shaper_private_rate_max;

+	/** Shaper private packet mode supported. When non-zero,
+	 * this parameter indicates that there is at least one
+	 * leaf node at this level that can be configured with
+	 * packet mode in its private shaper. When the private
+	 * shaper is configured in packet mode, the
+	 * committed/peak rate provided is interpreted in
+	 * packets per second.
+	 */
+	int shaper_private_packet_mode_supported;
+
+	/** Shaper private byte mode supported. When non-zero,
+	 * this parameter indicates that there is at least one
+	 * leaf node at this level that can be configured with
+	 * byte mode in its private shaper.
+	 * When the private shaper
+	 * is configured in byte mode, the committed/peak rate
+	 * provided is interpreted in bytes per second.
+	 */
+	int shaper_private_byte_mode_supported;
+
 	/** Maximum number of shared shapers that any leaf node
 	 * on this level can be part of. The value of zero
 	 * indicates that shared shapers are not supported by
@@ -686,6 +789,20 @@ struct rte_tm_node_capabilities {
 	 */
 	uint64_t shaper_private_rate_max;

+	/** Shaper private packet mode supported. When non-zero, this parameter
+	 * indicates that the private shaper of the current node can be
+	 * configured with packet mode. When configured in packet mode, the
+	 * committed/peak rate provided is interpreted in packets per second.
+	 */
+	int shaper_private_packet_mode_supported;
+
+	/** Shaper private byte mode supported. When non-zero, this parameter
+	 * indicates that the private shaper of the current node can be
+	 * configured with byte mode. When configured in byte mode, the
+	 * committed/peak rate provided is interpreted in bytes per second.
+	 */
+	int shaper_private_byte_mode_supported;
+
 	/** Maximum number of shared shapers the current node can be part of.
 	 * The value of zero indicates that shared shapers are not supported by
 	 * the current node.
@@ -735,6 +852,23 @@ struct rte_tm_node_capabilities {
 	 * WFQ weight, so WFQ is reduced to FQ.
 	 */
 	uint32_t sched_wfq_weight_max;
+
+	/** WFQ packet mode supported. When non-zero, this
+	 * parameter indicates that the current node supports
+	 * packet mode for WFQ among its children. WFQ weights
+	 * will be applied against packet count for scheduling
+	 * children when configured appropriately.
+	 */
+	int sched_wfq_packet_mode_supported;
+
+	/** WFQ byte mode supported. When non-zero, this
+	 * parameter indicates that the current node supports
+	 * byte mode for WFQ among its children. WFQ weights
+	 * will be applied against bytes for scheduling children
+	 * when configured appropriately.
+	 */
+	int sched_wfq_byte_mode_supported;
+
 		} nonleaf;

 		/** Items valid only for leaf nodes.
 		 */
@@ -836,10 +970,10 @@ struct rte_tm_wred_params {
  * Token bucket
  */
 struct rte_tm_token_bucket {
-	/** Token bucket rate (bytes per second) */
+	/** Token bucket rate (bytes per second or packets per second) */
 	uint64_t rate;

-	/** Token bucket size (bytes), a.k.a. max burst size */
+	/** Token bucket size (bytes or packets), a.k.a. max burst size */
 	uint64_t size;
 };

@@ -860,6 +994,11 @@ struct rte_tm_token_bucket {
  * Dual rate shapers use both the committed and the peak token buckets. The
  * rate of the peak bucket has to be bigger than zero, as well as greater than
  * or equal to the rate of the committed bucket.
+ *
+ * @see struct rte_tm_capabilities::shaper_private_packet_mode_supported
+ * @see struct rte_tm_capabilities::shaper_private_byte_mode_supported
+ * @see struct rte_tm_capabilities::shaper_shared_packet_mode_supported
+ * @see struct rte_tm_capabilities::shaper_shared_byte_mode_supported
  */
 struct rte_tm_shaper_params {
 	/** Committed token bucket */
@@ -874,6 +1013,17 @@ struct rte_tm_shaper_params {
 	 * RTE_TM_ETH_FRAMING_OVERHEAD_FCS).
 	 */
 	int32_t pkt_length_adjust;
+
+	/** When zero, the private or shared shaper that is associated with
+	 * this profile works in byte mode and hence the *rate* and *size*
+	 * fields in both token bucket configurations are specified in bytes
+	 * per second and bytes respectively.
+	 * When non-zero, that private or shared shaper works in packet mode
+	 * and hence the *rate* and *size* fields in both token bucket
+	 * configurations are specified in packets per second and packets
+	 * respectively. In packet mode, *pkt_length_adjust* is ignored.
+	 */
+	int packet_mode;
 };

 /**
@@ -925,6 +1075,8 @@ struct rte_tm_node_params {
 	 * When non-NULL, it points to a pre-allocated array of
 	 * *n_sp_priorities* values, with non-zero value for
 	 * byte-mode and zero for packet-mode.
+	 * @see struct rte_tm_node_capabilities::sched_wfq_packet_mode_supported
+	 * @see struct rte_tm_node_capabilities::sched_wfq_byte_mode_supported
 	 */
 	int *wfq_weight_mode;

From patchwork Sat Apr 11 11:44:28 2020
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 68187
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nithin Dabilpuram
To: Beilei Xing, Qi Zhang, Rosen Xu, Wenzhuo Lu, Konstantin Ananyev, Tomasz Duszynski, Liron Himi, Jasvinder Singh, Cristian Dumitrescu
Cc: dev@dpdk.org, jerinj@marvell.com, kkanas@marvell.com, Nithin Dabilpuram
Date: Sat, 11 Apr 2020 17:14:28 +0530
Message-Id: <20200411114430.18506-2-nithind1988@gmail.com>
In-Reply-To: <20200411114430.18506-1-nithind1988@gmail.com>
References: <20200330160019.29674-1-ndabilpuram@marvell.com> <20200411114430.18506-1-nithind1988@gmail.com>
Subject: [dpdk-dev] [PATCH v2 2/4] drivers/net: update tm capability for existing pmds

From: Nithin Dabilpuram

Since the existing PMDs support shaper byte mode and scheduler WFQ byte
mode, report the same in the newly added port/level/node capability
fields.

Signed-off-by: Nithin Dabilpuram
---
v1..v2:
- Newly included patch to change existing PMDs with tm support of byte
  mode to report the same in the port/level/node capabilities.
 drivers/net/i40e/i40e_tm.c               | 16 ++++++++++++
 drivers/net/ipn3ke/ipn3ke_tm.c           | 26 ++++++++++++++++++
 drivers/net/ixgbe/ixgbe_tm.c             | 16 ++++++++++++
 drivers/net/mvpp2/mrvl_tm.c              | 14 ++++++++++
 drivers/net/softnic/rte_eth_softnic_tm.c | 45 ++++++++++++++++++++++++++++++++
 5 files changed, 117 insertions(+)

diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index c76760c..ab272e9 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -160,12 +160,16 @@ i40e_tm_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_rate_min = 0;
 	/* 40Gbps -> 5GBps */
 	cap->shaper_private_rate_max = 5000000000ull;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 	cap->shaper_shared_n_max = 0;
 	cap->shaper_shared_n_nodes_per_shaper_max = 0;
 	cap->shaper_shared_n_shapers_per_node_max = 0;
 	cap->shaper_shared_dual_rate_n_max = 0;
 	cap->shaper_shared_rate_min = 0;
 	cap->shaper_shared_rate_max = 0;
+	cap->shaper_shared_packet_mode_supported = 0;
+	cap->shaper_shared_byte_mode_supported = 0;
 	cap->sched_n_children_max = hw->func_caps.num_tx_qp;
 	/**
 	 * HW supports SP. But no plan to support it now.
@@ -179,6 +183,8 @@ i40e_tm_capabilities_get(struct rte_eth_dev *dev,
 	 * So, all the nodes should have the same weight.
 	 */
 	cap->sched_wfq_weight_max = 1;
+	cap->sched_wfq_packet_mode_supported = 0;
+	cap->sched_wfq_byte_mode_supported = 0;
 	cap->cman_head_drop_supported = 0;
 	cap->dynamic_update_mask = 0;
 	cap->shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD;
@@ -754,6 +760,8 @@ i40e_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_rate_min = 0;
 		/* 40Gbps -> 5GBps */
 		cap->nonleaf.shaper_private_rate_max = 5000000000ull;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 		cap->nonleaf.shaper_shared_n_max = 0;
 		if (level_id == I40E_TM_NODE_TYPE_PORT)
 			cap->nonleaf.sched_n_children_max =
@@ -765,6 +773,8 @@ i40e_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 1;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 		cap->nonleaf.stats_mask = 0;

 		return 0;
@@ -776,6 +786,8 @@ i40e_level_capabilities_get(struct rte_eth_dev *dev,
 	cap->leaf.shaper_private_rate_min = 0;
 	/* 40Gbps -> 5GBps */
 	cap->leaf.shaper_private_rate_max = 5000000000ull;
+	cap->leaf.shaper_private_packet_mode_supported = 0;
+	cap->leaf.shaper_private_byte_mode_supported = 1;
 	cap->leaf.shaper_shared_n_max = 0;
 	cap->leaf.cman_head_drop_supported = false;
 	cap->leaf.cman_wred_context_private_supported = true;
@@ -817,6 +829,8 @@ i40e_node_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_rate_min = 0;
 	/* 40Gbps -> 5GBps */
 	cap->shaper_private_rate_max = 5000000000ull;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 	cap->shaper_shared_n_max = 0;

 	if (node_type == I40E_TM_NODE_TYPE_QUEUE) {
@@ -834,6 +848,8 @@ i40e_node_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 1;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 	}

 	cap->stats_mask = 0;

diff --git a/drivers/net/ipn3ke/ipn3ke_tm.c b/drivers/net/ipn3ke/ipn3ke_tm.c
index 5a16c5f..35c90b8 100644
--- a/drivers/net/ipn3ke/ipn3ke_tm.c
+++ b/drivers/net/ipn3ke/ipn3ke_tm.c
@@ -440,6 +440,8 @@ ipn3ke_tm_capabilities_get(__rte_unused struct rte_eth_dev *dev,
 	cap->shaper_private_dual_rate_n_max = 0;
 	cap->shaper_private_rate_min = 1;
 	cap->shaper_private_rate_max = 1 + IPN3KE_TM_VT_NODE_NUM;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;

 	cap->shaper_shared_n_max = 0;
 	cap->shaper_shared_n_nodes_per_shaper_max = 0;
@@ -447,6 +449,8 @@ ipn3ke_tm_capabilities_get(__rte_unused struct rte_eth_dev *dev,
 	cap->shaper_shared_dual_rate_n_max = 0;
 	cap->shaper_shared_rate_min = 0;
 	cap->shaper_shared_rate_max = 0;
+	cap->shaper_shared_packet_mode_supported = 0;
+	cap->shaper_shared_byte_mode_supported = 0;

 	cap->shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD_FCS;
 	cap->shaper_pkt_length_adjust_max = RTE_TM_ETH_FRAMING_OVERHEAD_FCS;
@@ -456,6 +460,8 @@ ipn3ke_tm_capabilities_get(__rte_unused struct rte_eth_dev *dev,
 	cap->sched_wfq_n_children_per_group_max = UINT32_MAX;
 	cap->sched_wfq_n_groups_max = 1;
 	cap->sched_wfq_weight_max = UINT32_MAX;
+	cap->sched_wfq_packet_mode_supported = 0;
+	cap->sched_wfq_byte_mode_supported = 1;

 	cap->cman_wred_packet_mode_supported = 0;
 	cap->cman_wred_byte_mode_supported = 0;
@@ -517,6 +523,8 @@ ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_dual_rate_supported = 0;
 		cap->nonleaf.shaper_private_rate_min = 1;
 		cap->nonleaf.shaper_private_rate_max = UINT32_MAX;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 		cap->nonleaf.shaper_shared_n_max = 0;

 		cap->nonleaf.sched_n_children_max = IPN3KE_TM_VT_NODE_NUM;
@@ -524,6 +532,8 @@ ipn3ke_tm_level_capabilities_get(struct
 	rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 0;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;

 		cap->nonleaf.stats_mask = STATS_MASK_DEFAULT;
 		break;
@@ -539,6 +549,8 @@ ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_dual_rate_supported = 0;
 		cap->nonleaf.shaper_private_rate_min = 1;
 		cap->nonleaf.shaper_private_rate_max = UINT32_MAX;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 		cap->nonleaf.shaper_shared_n_max = 0;

 		cap->nonleaf.sched_n_children_max = IPN3KE_TM_COS_NODE_NUM;
@@ -546,6 +558,8 @@ ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 0;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;

 		cap->nonleaf.stats_mask = STATS_MASK_DEFAULT;
 		break;
@@ -561,6 +575,8 @@ ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->leaf.shaper_private_dual_rate_supported = 0;
 		cap->leaf.shaper_private_rate_min = 0;
 		cap->leaf.shaper_private_rate_max = 0;
+		cap->leaf.shaper_private_packet_mode_supported = 0;
+		cap->leaf.shaper_private_byte_mode_supported = 1;
 		cap->leaf.shaper_shared_n_max = 0;

 		cap->leaf.cman_head_drop_supported = 0;
@@ -632,6 +648,8 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_dual_rate_supported = 0;
 	cap->shaper_private_rate_min = 1;
 	cap->shaper_private_rate_max = UINT32_MAX;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 	cap->shaper_shared_n_max = 0;

 	cap->nonleaf.sched_n_children_max = IPN3KE_TM_VT_NODE_NUM;
@@ -640,6 +658,8 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev,
 		IPN3KE_TM_VT_NODE_NUM;
 	cap->nonleaf.sched_wfq_n_groups_max = 1;
 	cap->nonleaf.sched_wfq_weight_max = 1;
+	cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+	cap->nonleaf.sched_wfq_byte_mode_supported = 0;

 	cap->stats_mask = STATS_MASK_DEFAULT;
 	break;
@@ -649,6 +669,8 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_dual_rate_supported = 0;
 	cap->shaper_private_rate_min = 1;
 	cap->shaper_private_rate_max = UINT32_MAX;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 	cap->shaper_shared_n_max = 0;

 	cap->nonleaf.sched_n_children_max = IPN3KE_TM_COS_NODE_NUM;
@@ -657,6 +679,8 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev,
 		IPN3KE_TM_COS_NODE_NUM;
 	cap->nonleaf.sched_wfq_n_groups_max = 1;
 	cap->nonleaf.sched_wfq_weight_max = 1;
+	cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+	cap->nonleaf.sched_wfq_byte_mode_supported = 0;

 	cap->stats_mask = STATS_MASK_DEFAULT;
 	break;
@@ -666,6 +690,8 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_dual_rate_supported = 0;
 	cap->shaper_private_rate_min = 0;
 	cap->shaper_private_rate_max = 0;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 0;
 	cap->shaper_shared_n_max = 0;

 	cap->leaf.cman_head_drop_supported = 0;

diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 73845a7..c067109 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -168,12 +168,16 @@ ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_rate_min = 0;
 	/* 10Gbps -> 1.25GBps */
 	cap->shaper_private_rate_max = 1250000000ull;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 	cap->shaper_shared_n_max = 0;
 	cap->shaper_shared_n_nodes_per_shaper_max = 0;
 	cap->shaper_shared_n_shapers_per_node_max = 0;
 	cap->shaper_shared_dual_rate_n_max = 0;
 	cap->shaper_shared_rate_min = 0;
 	cap->shaper_shared_rate_max = 0;
+	cap->shaper_shared_packet_mode_supported = 0;
+	cap->shaper_shared_byte_mode_supported = 0;
 	cap->sched_n_children_max = hw->mac.max_tx_queues;
 	/**
 	 * HW supports SP. But no plan to support it now.
@@ -182,6 +186,8 @@ ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
 	cap->sched_sp_n_priorities_max = 1;
 	cap->sched_wfq_n_children_per_group_max = 0;
 	cap->sched_wfq_n_groups_max = 0;
+	cap->sched_wfq_packet_mode_supported = 0;
+	cap->sched_wfq_byte_mode_supported = 0;
 	/**
 	 * SW only supports fair round robin now.
 	 * So, all the nodes should have the same weight.
@@ -875,6 +881,8 @@ ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_rate_min = 0;
 		/* 10Gbps -> 1.25GBps */
 		cap->nonleaf.shaper_private_rate_max = 1250000000ull;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 		cap->nonleaf.shaper_shared_n_max = 0;
 		if (level_id == IXGBE_TM_NODE_TYPE_PORT)
 			cap->nonleaf.sched_n_children_max =
@@ -886,6 +894,8 @@ ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 1;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 		cap->nonleaf.stats_mask = 0;

 		return 0;
@@ -897,6 +907,8 @@ ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
 	cap->leaf.shaper_private_rate_min = 0;
 	/* 10Gbps -> 1.25GBps */
 	cap->leaf.shaper_private_rate_max = 1250000000ull;
+	cap->leaf.shaper_private_packet_mode_supported = 0;
+	cap->leaf.shaper_private_byte_mode_supported = 1;
 	cap->leaf.shaper_shared_n_max = 0;
 	cap->leaf.cman_head_drop_supported = false;
 	cap->leaf.cman_wred_context_private_supported = true;
@@ -938,6 +950,8 @@ ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_rate_min = 0;
 	/* 10Gbps -> 1.25GBps */
 	cap->shaper_private_rate_max = 1250000000ull;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 	cap->shaper_shared_n_max = 0;

 	if (node_type == IXGBE_TM_NODE_TYPE_QUEUE) {
@@ -955,6 +969,8 @@ ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 1;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 	}

 	cap->stats_mask = 0;

diff --git a/drivers/net/mvpp2/mrvl_tm.c b/drivers/net/mvpp2/mrvl_tm.c
index 3de8997..e98f576 100644
--- a/drivers/net/mvpp2/mrvl_tm.c
+++ b/drivers/net/mvpp2/mrvl_tm.c
@@ -193,12 +193,16 @@ mrvl_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_n_max = cap->shaper_n_max;
 	cap->shaper_private_rate_min = MRVL_RATE_MIN;
 	cap->shaper_private_rate_max = priv->rate_max;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;

 	cap->sched_n_children_max = dev->data->nb_tx_queues;
 	cap->sched_sp_n_priorities_max = dev->data->nb_tx_queues;
 	cap->sched_wfq_n_children_per_group_max = dev->data->nb_tx_queues;
 	cap->sched_wfq_n_groups_max = 1;
 	cap->sched_wfq_weight_max = MRVL_WEIGHT_MAX;
+	cap->sched_wfq_packet_mode_supported = 0;
+	cap->sched_wfq_byte_mode_supported = 1;

 	cap->dynamic_update_mask = RTE_TM_UPDATE_NODE_SUSPEND_RESUME |
 				   RTE_TM_UPDATE_NODE_STATS;
@@ -244,6 +248,8 @@ mrvl_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_supported = 1;
 		cap->nonleaf.shaper_private_rate_min = MRVL_RATE_MIN;
 		cap->nonleaf.shaper_private_rate_max = priv->rate_max;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;

 		cap->nonleaf.sched_n_children_max = dev->data->nb_tx_queues;
 		cap->nonleaf.sched_sp_n_priorities_max = 1;
@@ -251,6 +257,8 @@ mrvl_level_capabilities_get(struct rte_eth_dev *dev,
 			dev->data->nb_tx_queues;
 		cap->nonleaf.sched_wfq_n_groups_max = 1;
 		cap->nonleaf.sched_wfq_weight_max = MRVL_WEIGHT_MAX;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 1;

 		cap->nonleaf.stats_mask = RTE_TM_STATS_N_PKTS |
 					  RTE_TM_STATS_N_BYTES;
 	} else { /* level_id == MRVL_NODE_QUEUE */
@@ -261,6 +269,8 @@ mrvl_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->leaf.shaper_private_supported = 1;
 		cap->leaf.shaper_private_rate_min = MRVL_RATE_MIN;
 		cap->leaf.shaper_private_rate_max = priv->rate_max;
+		cap->leaf.shaper_private_packet_mode_supported = 0;
+		cap->leaf.shaper_private_byte_mode_supported = 1;

 		cap->leaf.stats_mask = RTE_TM_STATS_N_PKTS;
 	}
@@ -300,6 +310,8 @@ mrvl_node_capabilities_get(struct rte_eth_dev *dev, uint32_t node_id,
 	cap->shaper_private_supported = 1;
 	cap->shaper_private_rate_min = MRVL_RATE_MIN;
 	cap->shaper_private_rate_max = priv->rate_max;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;

 	if (node->type == MRVL_NODE_PORT) {
 		cap->nonleaf.sched_n_children_max = dev->data->nb_tx_queues;
@@ -308,6 +320,8 @@ mrvl_node_capabilities_get(struct rte_eth_dev *dev, uint32_t node_id,
 			dev->data->nb_tx_queues;
 		cap->nonleaf.sched_wfq_n_groups_max = 1;
 		cap->nonleaf.sched_wfq_weight_max = MRVL_WEIGHT_MAX;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 1;

 		cap->stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES;
 	} else {
 		cap->stats_mask = RTE_TM_STATS_N_PKTS;

diff --git a/drivers/net/softnic/rte_eth_softnic_tm.c b/drivers/net/softnic/rte_eth_softnic_tm.c
index 80a470c..ac14fe1 100644
--- a/drivers/net/softnic/rte_eth_softnic_tm.c
+++ b/drivers/net/softnic/rte_eth_softnic_tm.c
@@ -447,6 +447,8 @@ static const struct rte_tm_capabilities tm_cap = {
 	.shaper_private_dual_rate_n_max = 0,
 	.shaper_private_rate_min = 1,
 	.shaper_private_rate_max = UINT32_MAX,
+	.shaper_private_packet_mode_supported = 0,
+	.shaper_private_byte_mode_supported = 1,

 	.shaper_shared_n_max = UINT32_MAX,
 	.shaper_shared_n_nodes_per_shaper_max = UINT32_MAX,
@@ -454,6
+456,8 @@ static const struct rte_tm_capabilities tm_cap = { .shaper_shared_dual_rate_n_max = 0, .shaper_shared_rate_min = 1, .shaper_shared_rate_max = UINT32_MAX, + .shaper_shared_packet_mode_supported = 0, + .shaper_shared_byte_mode_supported = 1, .shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD_FCS, .shaper_pkt_length_adjust_max = RTE_TM_ETH_FRAMING_OVERHEAD_FCS, @@ -463,6 +467,8 @@ static const struct rte_tm_capabilities tm_cap = { .sched_wfq_n_children_per_group_max = UINT32_MAX, .sched_wfq_n_groups_max = 1, .sched_wfq_weight_max = UINT32_MAX, + .sched_wfq_packet_mode_supported = 0, + .sched_wfq_byte_mode_supported = 1, .cman_wred_packet_mode_supported = WRED_SUPPORTED, .cman_wred_byte_mode_supported = 0, @@ -548,6 +554,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = { .shaper_private_dual_rate_supported = 0, .shaper_private_rate_min = 1, .shaper_private_rate_max = UINT32_MAX, + .shaper_private_packet_mode_supported = 0, + .shaper_private_byte_mode_supported = 1, .shaper_shared_n_max = 0, .sched_n_children_max = UINT32_MAX, @@ -555,6 +563,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = { .sched_wfq_n_children_per_group_max = UINT32_MAX, .sched_wfq_n_groups_max = 1, .sched_wfq_weight_max = 1, + .sched_wfq_packet_mode_supported = 0, + .sched_wfq_byte_mode_supported = 0, .stats_mask = STATS_MASK_DEFAULT, } }, @@ -572,6 +582,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = { .shaper_private_dual_rate_supported = 0, .shaper_private_rate_min = 1, .shaper_private_rate_max = UINT32_MAX, + .shaper_private_packet_mode_supported = 0, + .shaper_private_byte_mode_supported = 1, .shaper_shared_n_max = 0, .sched_n_children_max = UINT32_MAX, @@ -580,9 +592,14 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = { .sched_wfq_n_groups_max = 1, #ifdef RTE_SCHED_SUBPORT_TC_OV .sched_wfq_weight_max = UINT32_MAX, + .sched_wfq_packet_mode_supported = 0, + .sched_wfq_byte_mode_supported = 1, #else 
.sched_wfq_weight_max = 1, + .sched_wfq_packet_mode_supported = 0, + .sched_wfq_byte_mode_supported = 0, #endif + .stats_mask = STATS_MASK_DEFAULT, } }, }, @@ -599,6 +616,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = { .shaper_private_dual_rate_supported = 0, .shaper_private_rate_min = 1, .shaper_private_rate_max = UINT32_MAX, + .shaper_private_packet_mode_supported = 0, + .shaper_private_byte_mode_supported = 1, .shaper_shared_n_max = 0, .sched_n_children_max = @@ -608,6 +627,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = { .sched_wfq_n_children_per_group_max = 1, .sched_wfq_n_groups_max = 0, .sched_wfq_weight_max = 1, + .sched_wfq_packet_mode_supported = 0, + .sched_wfq_byte_mode_supported = 0, .stats_mask = STATS_MASK_DEFAULT, } }, @@ -625,6 +646,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = { .shaper_private_dual_rate_supported = 0, .shaper_private_rate_min = 1, .shaper_private_rate_max = UINT32_MAX, + .shaper_private_packet_mode_supported = 0, + .shaper_private_byte_mode_supported = 1, .shaper_shared_n_max = 1, .sched_n_children_max = @@ -634,6 +657,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = { RTE_SCHED_BE_QUEUES_PER_PIPE, .sched_wfq_n_groups_max = 1, .sched_wfq_weight_max = UINT32_MAX, + .sched_wfq_packet_mode_supported = 0, + .sched_wfq_byte_mode_supported = 1, .stats_mask = STATS_MASK_DEFAULT, } }, @@ -651,6 +676,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = { .shaper_private_dual_rate_supported = 0, .shaper_private_rate_min = 0, .shaper_private_rate_max = 0, + .shaper_private_packet_mode_supported = 0, + .shaper_private_byte_mode_supported = 0, .shaper_shared_n_max = 0, .cman_head_drop_supported = 0, @@ -736,6 +763,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = { .shaper_private_dual_rate_supported = 0, .shaper_private_rate_min = 1, .shaper_private_rate_max = UINT32_MAX, + .shaper_private_packet_mode_supported = 0, + 
.shaper_private_byte_mode_supported = 1, .shaper_shared_n_max = 0, {.nonleaf = { @@ -744,6 +773,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = { .sched_wfq_n_children_per_group_max = UINT32_MAX, .sched_wfq_n_groups_max = 1, .sched_wfq_weight_max = 1, + .sched_wfq_packet_mode_supported = 0, + .sched_wfq_byte_mode_supported = 0, } }, .stats_mask = STATS_MASK_DEFAULT, @@ -754,6 +785,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = { .shaper_private_dual_rate_supported = 0, .shaper_private_rate_min = 1, .shaper_private_rate_max = UINT32_MAX, + .shaper_private_packet_mode_supported = 0, + .shaper_private_byte_mode_supported = 1, .shaper_shared_n_max = 0, {.nonleaf = { @@ -762,6 +795,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = { .sched_wfq_n_children_per_group_max = UINT32_MAX, .sched_wfq_n_groups_max = 1, .sched_wfq_weight_max = UINT32_MAX, + .sched_wfq_packet_mode_supported = 0, + .sched_wfq_byte_mode_supported = 0, } }, .stats_mask = STATS_MASK_DEFAULT, @@ -772,6 +807,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = { .shaper_private_dual_rate_supported = 0, .shaper_private_rate_min = 1, .shaper_private_rate_max = UINT32_MAX, + .shaper_private_packet_mode_supported = 0, + .shaper_private_byte_mode_supported = 1, .shaper_shared_n_max = 0, {.nonleaf = { @@ -782,6 +819,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = { .sched_wfq_n_children_per_group_max = 1, .sched_wfq_n_groups_max = 0, .sched_wfq_weight_max = 1, + .sched_wfq_packet_mode_supported = 0, + .sched_wfq_byte_mode_supported = 0, } }, .stats_mask = STATS_MASK_DEFAULT, @@ -792,6 +831,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = { .shaper_private_dual_rate_supported = 0, .shaper_private_rate_min = 1, .shaper_private_rate_max = UINT32_MAX, + .shaper_private_packet_mode_supported = 0, + .shaper_private_byte_mode_supported = 1, .shaper_shared_n_max = 1, {.nonleaf = { @@ -802,6 +843,8 @@ static const 
struct rte_tm_node_capabilities tm_node_cap[] = { RTE_SCHED_BE_QUEUES_PER_PIPE, .sched_wfq_n_groups_max = 1, .sched_wfq_weight_max = UINT32_MAX, + .sched_wfq_packet_mode_supported = 0, + .sched_wfq_byte_mode_supported = 1, } }, .stats_mask = STATS_MASK_DEFAULT, @@ -812,6 +855,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = { .shaper_private_dual_rate_supported = 0, .shaper_private_rate_min = 0, .shaper_private_rate_max = 0, + .shaper_private_packet_mode_supported = 0, + .shaper_private_byte_mode_supported = 0, .shaper_shared_n_max = 0,

From patchwork Sat Apr 11 11:44:29 2020
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 68188
From: Nithin Dabilpuram
To: Wenzhuo Lu, Jingjing Wu, Bernard Iremonger, John McNamara, Marko Kovacevic
Cc: dev@dpdk.org, jerinj@marvell.com, kkanas@marvell.com, Nithin Dabilpuram
Date: Sat, 11 Apr 2020 17:14:29 +0530
Message-Id: <20200411114430.18506-3-nithind1988@gmail.com>
In-Reply-To: <20200411114430.18506-1-nithind1988@gmail.com>
References: <20200330160019.29674-1-ndabilpuram@marvell.com> <20200411114430.18506-1-nithind1988@gmail.com>
Subject: [dpdk-dev] [PATCH v2 3/4] app/testpmd: add tm cmd for non leaf and shaper pktmode

From: Nithin Dabilpuram

Add TM command to enable packet mode for
all SP children in a non-leaf node. This is a new command, "add tm nonleaf node pktmode". Also add support to the shaper profile add command to take a packet mode parameter used to set up the shaper in packet mode. This adds an extra argument "packet_mode" to the shaper profile add command "add port tm node shaper profile" as the last argument. This patch also dumps the new tm port/level/node capabilities sched_wfq_packet_mode_supported, sched_wfq_byte_mode_supported, shaper_private_packet_mode_supported, shaper_private_byte_mode_supported, shaper_shared_packet_mode_supported, shaper_shared_byte_mode_supported. Signed-off-by: Nithin Dabilpuram --- v1..v2: - Update tm capability show cmd to dump latest pktmode/bytemode fields of v2. - Update existing shaper profile add command to take last argument as pkt_mode and update struct rte_tm_shaper_params:packet_mode with the same. - Update documentation with latest command changes. app/test-pmd/cmdline.c | 9 +- app/test-pmd/cmdline_tm.c | 206 ++++++++++++++++++++++++++++ app/test-pmd/cmdline_tm.h | 1 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 40 +++++- 4 files changed, 250 insertions(+), 6 deletions(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 863b567..af90242 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -1189,7 +1189,7 @@ static void cmd_help_long_parsed(void *parsed_result, "add port tm node shaper profile (port_id) (shaper_profile_id)" " (cmit_tb_rate) (cmit_tb_size) (peak_tb_rate) (peak_tb_size)" - " (packet_length_adjust)\n" + " (packet_length_adjust) (packet_mode)\n" " Add port tm node private shaper profile.\n\n" "del port tm node shaper profile (port_id) (shaper_profile_id)\n" @@ -1221,6 +1221,12 @@ static void cmd_help_long_parsed(void *parsed_result, " [(shared_shaper_id_0) (shared_shaper_id_1)...]\n" " Add port tm nonleaf node.\n\n" + "add port tm nonleaf node pktmode (port_id) (node_id) (parent_node_id)" + " (priority) (weight) (level_id) (shaper_profile_id)" + " 
(n_sp_priorities) (stats_mask) (n_shared_shapers)" + " [(shared_shaper_id_0) (shared_shaper_id_1)...]\n" + " Add port tm nonleaf node with pkt mode enabled.\n\n" + "add port tm leaf node (port_id) (node_id) (parent_node_id)" " (priority) (weight) (level_id) (shaper_profile_id)" " (cman_mode) (wred_profile_id) (stats_mask) (n_shared_shapers)" @@ -19636,6 +19642,7 @@ cmdline_parse_ctx_t main_ctx[] = { (cmdline_parse_inst_t *)&cmd_del_port_tm_node_wred_profile, (cmdline_parse_inst_t *)&cmd_set_port_tm_node_shaper_profile, (cmdline_parse_inst_t *)&cmd_add_port_tm_nonleaf_node, + (cmdline_parse_inst_t *)&cmd_add_port_tm_nonleaf_node_pmode, (cmdline_parse_inst_t *)&cmd_add_port_tm_leaf_node, (cmdline_parse_inst_t *)&cmd_del_port_tm_node, (cmdline_parse_inst_t *)&cmd_set_port_tm_node_parent, diff --git a/app/test-pmd/cmdline_tm.c b/app/test-pmd/cmdline_tm.c index d62a4f5..c9a2813 100644 --- a/app/test-pmd/cmdline_tm.c +++ b/app/test-pmd/cmdline_tm.c @@ -257,6 +257,10 @@ static void cmd_show_port_tm_cap_parsed(void *parsed_result, cap.shaper_private_rate_min); printf("cap.shaper_private_rate_max %" PRIu64 "\n", cap.shaper_private_rate_max); + printf("cap.shaper_private_packet_mode_supported %" PRId32 "\n", + cap.shaper_private_packet_mode_supported); + printf("cap.shaper_private_byte_mode_supported %" PRId32 "\n", + cap.shaper_private_byte_mode_supported); printf("cap.shaper_shared_n_max %" PRIu32 "\n", cap.shaper_shared_n_max); printf("cap.shaper_shared_n_nodes_per_shaper_max %" PRIu32 "\n", @@ -269,6 +273,10 @@ static void cmd_show_port_tm_cap_parsed(void *parsed_result, cap.shaper_shared_rate_min); printf("cap.shaper_shared_rate_max %" PRIu64 "\n", cap.shaper_shared_rate_max); + printf("cap.shaper_shared_packet_mode_supported %" PRId32 "\n", + cap.shaper_shared_packet_mode_supported); + printf("cap.shaper_shared_byte_mode_supported %" PRId32 "\n", + cap.shaper_shared_byte_mode_supported); printf("cap.shaper_pkt_length_adjust_min %" PRId32 "\n", 
cap.shaper_pkt_length_adjust_min); printf("cap.shaper_pkt_length_adjust_max %" PRId32 "\n", @@ -283,6 +291,10 @@ static void cmd_show_port_tm_cap_parsed(void *parsed_result, cap.sched_wfq_n_groups_max); printf("cap.sched_wfq_weight_max %" PRIu32 "\n", cap.sched_wfq_weight_max); + printf("cap.sched_wfq_packet_mode_supported %" PRId32 "\n", + cap.sched_wfq_packet_mode_supported); + printf("cap.sched_wfq_byte_mode_supported %" PRId32 "\n", + cap.sched_wfq_byte_mode_supported); printf("cap.cman_head_drop_supported %" PRId32 "\n", cap.cman_head_drop_supported); printf("cap.cman_wred_context_n_max %" PRIu32 "\n", @@ -401,6 +413,11 @@ static void cmd_show_port_tm_level_cap_parsed(void *parsed_result, lcap.nonleaf.shaper_private_rate_min); printf("cap.nonleaf.shaper_private_rate_max %" PRIu64 "\n", lcap.nonleaf.shaper_private_rate_max); + printf("cap.nonleaf.shaper_private_packet_mode_supported %" + PRId32 "\n", + lcap.nonleaf.shaper_private_packet_mode_supported); + printf("cap.nonleaf.shaper_private_byte_mode_supported %" PRId32 + "\n", lcap.nonleaf.shaper_private_byte_mode_supported); printf("cap.nonleaf.shaper_shared_n_max %" PRIu32 "\n", lcap.nonleaf.shaper_shared_n_max); printf("cap.nonleaf.sched_n_children_max %" PRIu32 "\n", @@ -413,6 +430,10 @@ static void cmd_show_port_tm_level_cap_parsed(void *parsed_result, lcap.nonleaf.sched_wfq_n_groups_max); printf("cap.nonleaf.sched_wfq_weight_max %" PRIu32 "\n", lcap.nonleaf.sched_wfq_weight_max); + printf("cap.nonleaf.sched_wfq_packet_mode_supported %" PRId32 "\n", + lcap.nonleaf.sched_wfq_packet_mode_supported); + printf("cap.nonleaf.sched_wfq_byte_mode_supported %" PRId32 + "\n", lcap.nonleaf.sched_wfq_byte_mode_supported); printf("cap.nonleaf.stats_mask %" PRIx64 "\n", lcap.nonleaf.stats_mask); } else { @@ -424,6 +445,10 @@ static void cmd_show_port_tm_level_cap_parsed(void *parsed_result, lcap.leaf.shaper_private_rate_min); printf("cap.leaf.shaper_private_rate_max %" PRIu64 "\n", lcap.leaf.shaper_private_rate_max); + 
printf("cap.leaf.shaper_private_packet_mode_supported %" PRId32 + "\n", lcap.leaf.shaper_private_packet_mode_supported); + printf("cap.leaf.shaper_private_byte_mode_supported %" PRId32 "\n", + lcap.leaf.shaper_private_byte_mode_supported); printf("cap.leaf.shaper_shared_n_max %" PRIu32 "\n", lcap.leaf.shaper_shared_n_max); printf("cap.leaf.cman_head_drop_supported %" PRId32 "\n", @@ -524,6 +549,10 @@ static void cmd_show_port_tm_node_cap_parsed(void *parsed_result, ncap.shaper_private_rate_min); printf("cap.shaper_private_rate_max %" PRIu64 "\n", ncap.shaper_private_rate_max); + printf("cap.shaper_private_packet_mode_supported %" PRId32 "\n", + ncap.shaper_private_packet_mode_supported); + printf("cap.shaper_private_byte_mode_supported %" PRId32 "\n", + ncap.shaper_private_byte_mode_supported); printf("cap.shaper_shared_n_max %" PRIu32 "\n", ncap.shaper_shared_n_max); if (!is_leaf) { @@ -537,6 +566,10 @@ static void cmd_show_port_tm_node_cap_parsed(void *parsed_result, ncap.nonleaf.sched_wfq_n_groups_max); printf("cap.nonleaf.sched_wfq_weight_max %" PRIu32 "\n", ncap.nonleaf.sched_wfq_weight_max); + printf("cap.nonleaf.sched_wfq_packet_mode_supported %" PRId32 "\n", + ncap.nonleaf.sched_wfq_packet_mode_supported); + printf("cap.nonleaf.sched_wfq_byte_mode_supported %" PRId32 "\n", + ncap.nonleaf.sched_wfq_byte_mode_supported); } else { printf("cap.leaf.cman_head_drop_supported %" PRId32 "\n", ncap.leaf.cman_head_drop_supported); @@ -776,6 +809,7 @@ struct cmd_add_port_tm_node_shaper_profile_result { uint64_t peak_tb_rate; uint64_t peak_tb_size; uint32_t pktlen_adjust; + int pkt_mode; }; cmdline_parse_token_string_t cmd_add_port_tm_node_shaper_profile_add = @@ -829,6 +863,10 @@ cmdline_parse_token_num_t cmd_add_port_tm_node_shaper_profile_pktlen_adjust = TOKEN_NUM_INITIALIZER( struct cmd_add_port_tm_node_shaper_profile_result, pktlen_adjust, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_node_shaper_profile_packet_mode = + TOKEN_NUM_INITIALIZER( + struct 
cmd_add_port_tm_node_shaper_profile_result, + pkt_mode, UINT32); static void cmd_add_port_tm_node_shaper_profile_parsed(void *parsed_result, __attribute__((unused)) struct cmdline *cl, @@ -853,6 +891,7 @@ static void cmd_add_port_tm_node_shaper_profile_parsed(void *parsed_result, sp.peak.rate = res->peak_tb_rate; sp.peak.size = res->peak_tb_size; sp.pkt_length_adjust = pkt_len_adjust; + sp.packet_mode = res->pkt_mode; ret = rte_tm_shaper_profile_add(port_id, shaper_id, &sp, &error); if (ret != 0) { @@ -879,6 +918,7 @@ cmdline_parse_inst_t cmd_add_port_tm_node_shaper_profile = { (void *)&cmd_add_port_tm_node_shaper_profile_peak_tb_rate, (void *)&cmd_add_port_tm_node_shaper_profile_peak_tb_size, (void *)&cmd_add_port_tm_node_shaper_profile_pktlen_adjust, + (void *)&cmd_add_port_tm_node_shaper_profile_packet_mode, NULL, }, }; @@ -1671,6 +1711,172 @@ cmdline_parse_inst_t cmd_add_port_tm_nonleaf_node = { }, }; +/* *** Add Port TM nonleaf node pkt mode *** */ +struct cmd_add_port_tm_nonleaf_node_pmode_result { + cmdline_fixed_string_t add; + cmdline_fixed_string_t port; + cmdline_fixed_string_t tm; + cmdline_fixed_string_t nonleaf; + cmdline_fixed_string_t node; + uint16_t port_id; + uint32_t node_id; + int32_t parent_node_id; + uint32_t priority; + uint32_t weight; + uint32_t level_id; + int32_t shaper_profile_id; + uint32_t n_sp_priorities; + uint64_t stats_mask; + cmdline_multi_string_t multi_shared_shaper_id; +}; + +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_add = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, add, "add"); +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_port = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, port, "port"); +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_tm = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, tm, "tm"); +cmdline_parse_token_string_t 
cmd_add_port_tm_nonleaf_node_pmode_nonleaf = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, nonleaf, "nonleaf"); +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_node = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, node, "node"); +cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_pmode_pktmode = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, node, "pktmode"); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_port_id = + TOKEN_NUM_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, + port_id, UINT16); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_node_id = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + node_id, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_parent_node_id = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + parent_node_id, INT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_priority = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + priority, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_weight = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + weight, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_level_id = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + level_id, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_shaper_profile_id = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + shaper_profile_id, INT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_n_sp_priorities = + TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_pmode_result, + n_sp_priorities, UINT32); +cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_pmode_stats_mask = + TOKEN_NUM_INITIALIZER(struct 
cmd_add_port_tm_nonleaf_node_pmode_result, + stats_mask, UINT64); +cmdline_parse_token_string_t + cmd_add_port_tm_nonleaf_node_pmode_multi_shrd_shpr_id = + TOKEN_STRING_INITIALIZER( + struct cmd_add_port_tm_nonleaf_node_pmode_result, + multi_shared_shaper_id, TOKEN_STRING_MULTI); + +static void cmd_add_port_tm_nonleaf_node_pmode_parsed(void *parsed_result, + __attribute__((unused)) struct cmdline *cl, + __attribute__((unused)) void *data) +{ + struct cmd_add_port_tm_nonleaf_node_pmode_result *res = parsed_result; + uint32_t parent_node_id, n_shared_shapers = 0; + char *s_str = res->multi_shared_shaper_id; + portid_t port_id = res->port_id; + struct rte_tm_node_params np; + int *wfq_weight_mode = NULL; + uint32_t *shared_shaper_id; + struct rte_tm_error error; + int ret; + + if (port_id_is_invalid(port_id, ENABLED_WARN)) + return; + + memset(&np, 0, sizeof(struct rte_tm_node_params)); + memset(&error, 0, sizeof(struct rte_tm_error)); + + /* Node parameters */ + if (res->parent_node_id < 0) + parent_node_id = UINT32_MAX; + else + parent_node_id = res->parent_node_id; + + shared_shaper_id = (uint32_t *)malloc(MAX_NUM_SHARED_SHAPERS * + sizeof(uint32_t)); + if (shared_shaper_id == NULL) { + printf(" Memory not allocated for shared shapers (error)\n"); + return; + } + + /* Parse multi shared shaper id string */ + ret = parse_multi_ss_id_str(s_str, &n_shared_shapers, shared_shaper_id); + if (ret) { + printf(" Shared shapers params string parse error\n"); + free(shared_shaper_id); + return; + } + + if (res->shaper_profile_id < 0) + np.shaper_profile_id = UINT32_MAX; + else + np.shaper_profile_id = res->shaper_profile_id; + + np.n_shared_shapers = n_shared_shapers; + if (np.n_shared_shapers) { + np.shared_shaper_id = &shared_shaper_id[0]; + } else { + free(shared_shaper_id); + shared_shaper_id = NULL; + } + + if (res->n_sp_priorities) + wfq_weight_mode = calloc(res->n_sp_priorities, sizeof(int)); + np.nonleaf.n_sp_priorities = res->n_sp_priorities; + np.stats_mask = 
res->stats_mask; + np.nonleaf.wfq_weight_mode = wfq_weight_mode; + + ret = rte_tm_node_add(port_id, res->node_id, parent_node_id, + res->priority, res->weight, res->level_id, + &np, &error); + if (ret != 0) { + print_err_msg(&error); + free(shared_shaper_id); + free(wfq_weight_mode); + return; + } +} + +cmdline_parse_inst_t cmd_add_port_tm_nonleaf_node_pmode = { + .f = cmd_add_port_tm_nonleaf_node_pmode_parsed, + .data = NULL, + .help_str = "Add port tm nonleaf node pktmode", + .tokens = { + (void *)&cmd_add_port_tm_nonleaf_node_pmode_add, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_port, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_tm, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_nonleaf, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_node, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_pktmode, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_port_id, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_node_id, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_parent_node_id, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_priority, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_weight, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_level_id, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_shaper_profile_id, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_n_sp_priorities, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_stats_mask, + (void *)&cmd_add_port_tm_nonleaf_node_pmode_multi_shrd_shpr_id, + NULL, + }, +}; /* *** Add Port TM leaf node *** */ struct cmd_add_port_tm_leaf_node_result { cmdline_fixed_string_t add; diff --git a/app/test-pmd/cmdline_tm.h b/app/test-pmd/cmdline_tm.h index 950cb75..e59c15c 100644 --- a/app/test-pmd/cmdline_tm.h +++ b/app/test-pmd/cmdline_tm.h @@ -19,6 +19,7 @@ extern cmdline_parse_inst_t cmd_add_port_tm_node_wred_profile; extern cmdline_parse_inst_t cmd_del_port_tm_node_wred_profile; extern cmdline_parse_inst_t cmd_set_port_tm_node_shaper_profile; extern cmdline_parse_inst_t cmd_add_port_tm_nonleaf_node; +extern cmdline_parse_inst_t 
cmd_add_port_tm_nonleaf_node_pmode; extern cmdline_parse_inst_t cmd_add_port_tm_leaf_node; extern cmdline_parse_inst_t cmd_del_port_tm_node; extern cmdline_parse_inst_t cmd_set_port_tm_node_parent; diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index dcee5de..a058f75 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -2840,19 +2840,22 @@ Add the port traffic management private shaper profile:: testpmd> add port tm node shaper profile (port_id) (shaper_profile_id) \ (cmit_tb_rate) (cmit_tb_size) (peak_tb_rate) (peak_tb_size) \ - (packet_length_adjust) + (packet_length_adjust) (packet_mode) where: * ``shaper_profile id``: Shaper profile ID for the new profile. -* ``cmit_tb_rate``: Committed token bucket rate (bytes per second). -* ``cmit_tb_size``: Committed token bucket size (bytes). -* ``peak_tb_rate``: Peak token bucket rate (bytes per second). -* ``peak_tb_size``: Peak token bucket size (bytes). +* ``cmit_tb_rate``: Committed token bucket rate (bytes per second or packets per second). +* ``cmit_tb_size``: Committed token bucket size (bytes or packets). +* ``peak_tb_rate``: Peak token bucket rate (bytes per second or packets per second). +* ``peak_tb_size``: Peak token bucket size (bytes or packets). * ``packet_length_adjust``: The value (bytes) to be added to the length of each packet for the purpose of shaping. This parameter value can be used to correct the packet length with the framing overhead bytes that are consumed on the wire. +* ``packet_mode``: Shaper configured in packet mode. If this parameter is + zero, the shaper is configured in byte mode; if it is non-zero, the shaper + is configured in packet mode. Delete port traffic management private shaper profile ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -2977,6 +2980,33 @@ where: * ``n_shared_shapers``: Number of shared shapers. * ``shared_shaper_id``: Shared shaper id. 
+Add port traffic management hierarchy nonleaf node with packet mode +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Add nonleaf node with packet mode to port traffic management hierarchy:: + + testpmd> add port tm nonleaf node pktmode (port_id) (node_id) (parent_node_id) \ + (priority) (weight) (level_id) (shaper_profile_id) \ + (n_sp_priorities) (stats_mask) (n_shared_shapers) \ + [(shared_shaper_0) (shared_shaper_1) ...] \ + +where: + +* ``parent_node_id``: Node ID of the parent. +* ``priority``: Node priority (highest node priority is zero). This is used by + the SP algorithm running on the parent node for scheduling this node. +* ``weight``: Node weight (lowest weight is one). The node weight is relative + to the weight sum of all siblings that have the same priority. It is used by + the WFQ algorithm running on the parent node for scheduling this node. +* ``level_id``: Hierarchy level of the node. +* ``shaper_profile_id``: Shaper profile ID of the private shaper to be used by + the node. +* ``n_sp_priorities``: Number of strict priorities. Packet mode is enabled on + all of them. +* ``stats_mask``: Mask of statistics counter types to be enabled for this node. +* ``n_shared_shapers``: Number of shared shapers. +* ``shared_shaper_id``: Shared shaper id. 
+

Add port traffic management hierarchy leaf node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From patchwork Sat Apr 11 11:44:30 2020
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 68189
From: Nithin Dabilpuram
To: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: dev@dpdk.org, kkanas@marvell.com
Date: Sat, 11 Apr 2020 17:14:30 +0530
Message-Id: <20200411114430.18506-4-nithind1988@gmail.com>
In-Reply-To: <20200411114430.18506-1-nithind1988@gmail.com>
References: <20200330160019.29674-1-ndabilpuram@marvell.com> <20200411114430.18506-1-nithind1988@gmail.com>
Subject: [dpdk-dev] [PATCH v2 4/4] net/octeontx2: support tm length adjust and pkt mode

From: Nithin Dabilpuram

This patch adds support for the packet length adjust TM feature on private shapers. It also adds support for the packet mode feature, which applies both to private shapers and to node DWRR scheduling of SP children.

Signed-off-by: Nithin Dabilpuram
---
v1..v2:
- Newly included patch.
 drivers/net/octeontx2/otx2_tm.c | 140 +++++++++++++++++++++++++++++++++-------
 drivers/net/octeontx2/otx2_tm.h |   5 ++
 2 files changed, 122 insertions(+), 23 deletions(-)

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index f94618d..fa7d21b 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -336,18 +336,25 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
 {
 	struct shaper_params cir, pir;
 	uint32_t schq = tm_node->hw_id;
+	uint64_t adjust = 0;
 	uint8_t k = 0;
 
 	memset(&cir, 0, sizeof(cir));
 	memset(&pir, 0, sizeof(pir));
 	shaper_config_to_nix(profile, &cir, &pir);
 
-	otx2_tm_dbg("Shaper config node %s(%u) lvl %u id %u, "
-		    "pir %" PRIu64 "(%" PRIu64 "B),"
-		    " cir %" PRIu64 "(%" PRIu64 "B) (%p)",
-		    nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
-		    tm_node->id, pir.rate, pir.burst,
-		    cir.rate, cir.burst, tm_node);
+	/* Packet length adjust */
+	if (tm_node->pkt_mode)
+		adjust = 1;
+	else if (profile)
+		adjust = profile->params.pkt_length_adjust & 0x1FF;
+
+	otx2_tm_dbg("Shaper config node %s(%u) lvl %u id %u, pir %" PRIu64
+		    "(%" PRIu64 "B), cir %" PRIu64 "(%" PRIu64 "B)"
+		    "adjust 0x%" PRIx64 "(pktmode %u) (%p)",
+		    nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
+		    tm_node->id, pir.rate, pir.burst, cir.rate, cir.burst,
+		    adjust, tm_node->pkt_mode, tm_node);
 
 	switch (tm_node->hw_lvl) {
 	case NIX_TXSCH_LVL_SMQ:
@@ -364,7 +371,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
 
 		/* Configure RED ALG */
 		reg[k] = NIX_AF_MDQX_SHAPE(schq);
-		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		regval[k] = (adjust |
+			     (uint64_t)tm_node->red_algo << 9 |
+			     (uint64_t)tm_node->pkt_mode << 24);
 		k++;
 		break;
 	case NIX_TXSCH_LVL_TL4:
@@ -381,7 +390,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
 
 		/* Configure RED algo */
 		reg[k] = NIX_AF_TL4X_SHAPE(schq);
-		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		regval[k] = (adjust |
+			     (uint64_t)tm_node->red_algo << 9 |
+			     (uint64_t)tm_node->pkt_mode << 24);
 		k++;
 		break;
 	case NIX_TXSCH_LVL_TL3:
@@ -398,7 +409,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
 
 		/* Configure RED algo */
 		reg[k] = NIX_AF_TL3X_SHAPE(schq);
-		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		regval[k] = (adjust |
+			     (uint64_t)tm_node->red_algo << 9 |
+			     (uint64_t)tm_node->pkt_mode << 24);
 		k++;
 		break;
@@ -416,7 +429,9 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
 
 		/* Configure RED algo */
 		reg[k] = NIX_AF_TL2X_SHAPE(schq);
-		regval[k] = ((uint64_t)tm_node->red_algo << 9);
+		regval[k] = (adjust |
+			     (uint64_t)tm_node->red_algo << 9 |
+			     (uint64_t)tm_node->pkt_mode << 24);
 		k++;
 		break;
@@ -426,6 +441,12 @@ prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
 		regval[k] = (cir.rate && cir.burst) ?
 				(shaper2regval(&cir) | 1) : 0;
 		k++;
+
+		/* Configure length disable and adjust */
+		reg[k] = NIX_AF_TL1X_SHAPE(schq);
+		regval[k] = (adjust |
+			     (uint64_t)tm_node->pkt_mode << 24);
+		k++;
 		break;
 	}
 
@@ -773,6 +794,15 @@ nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
 	tm_node->flags = 0;
 	if (user)
 		tm_node->flags = NIX_TM_NODE_USER;
+
+	/* Packet mode */
+	if (!nix_tm_is_leaf(dev, lvl) &&
+	    ((profile && profile->params.packet_mode) ||
+	     (params->nonleaf.wfq_weight_mode &&
+	      params->nonleaf.n_sp_priorities &&
+	      !params->nonleaf.wfq_weight_mode[0])))
+		tm_node->pkt_mode = 1;
+
 	rte_memcpy(&tm_node->params, params,
 		   sizeof(struct rte_tm_node_params));
 
 	if (profile)
@@ -1873,8 +1903,10 @@ otx2_nix_tm_capa_get(struct rte_eth_dev *eth_dev,
 	cap->shaper_private_dual_rate_n_max = max_nr_nodes;
 	cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
 	cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
-	cap->shaper_pkt_length_adjust_min = 0;
-	cap->shaper_pkt_length_adjust_max = 0;
+	cap->shaper_private_packet_mode_supported = 1;
+	cap->shaper_private_byte_mode_supported = 1;
+	cap->shaper_pkt_length_adjust_min = NIX_LENGTH_ADJUST_MIN;
+	cap->shaper_pkt_length_adjust_max = NIX_LENGTH_ADJUST_MAX;
 
 	/* Schedule Capabilities */
 	cap->sched_n_children_max = rsp->schq[NIX_TXSCH_LVL_MDQ];
@@ -1882,6 +1914,8 @@ otx2_nix_tm_capa_get(struct rte_eth_dev *eth_dev,
 	cap->sched_wfq_n_children_per_group_max = cap->sched_n_children_max;
 	cap->sched_wfq_n_groups_max = 1;
 	cap->sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+	cap->sched_wfq_packet_mode_supported = 1;
+	cap->sched_wfq_byte_mode_supported = 1;
 
 	cap->dynamic_update_mask =
 		RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL |
@@ -1944,12 +1978,16 @@ otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl,
 			nix_tm_have_tl1_access(dev) ? false : true;
 		cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
 		cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+		cap->nonleaf.shaper_private_packet_mode_supported = 1;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 
 		cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
 		cap->nonleaf.sched_sp_n_priorities_max =
					nix_max_prio(dev, hw_lvl) + 1;
 		cap->nonleaf.sched_wfq_n_groups_max = 1;
 		cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 1;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 1;
 
 		if (nix_tm_have_tl1_access(dev))
 			cap->nonleaf.stats_mask =
@@ -1966,6 +2004,8 @@ otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl,
 		cap->nonleaf.shaper_private_dual_rate_supported = true;
 		cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
 		cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+		cap->nonleaf.shaper_private_packet_mode_supported = 1;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 
 		/* MDQ doesn't support Strict Priority */
 		if (hw_lvl == NIX_TXSCH_LVL_MDQ)
@@ -1977,6 +2017,8 @@ otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl,
 					nix_max_prio(dev, hw_lvl) + 1;
 		cap->nonleaf.sched_wfq_n_groups_max = 1;
 		cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 1;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 1;
 	} else {
 		/* unsupported level */
 		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
@@ -2029,6 +2071,8 @@ otx2_nix_tm_node_capa_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		(hw_lvl == NIX_TXSCH_LVL_TL1) ? false : true;
 	cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
 	cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
+	cap->shaper_private_packet_mode_supported = 1;
+	cap->shaper_private_byte_mode_supported = 1;
 
 	/* Non Leaf Scheduler */
 	if (hw_lvl == NIX_TXSCH_LVL_MDQ)
@@ -2041,6 +2085,8 @@ otx2_nix_tm_node_capa_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
 			cap->nonleaf.sched_n_children_max;
 	cap->nonleaf.sched_wfq_n_groups_max = 1;
 	cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
+	cap->nonleaf.sched_wfq_packet_mode_supported = 1;
+	cap->nonleaf.sched_wfq_byte_mode_supported = 1;
 
 	if (hw_lvl == NIX_TXSCH_LVL_TL1)
 		cap->stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
@@ -2096,6 +2142,13 @@ otx2_nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev,
 		}
 	}
 
+	if (params->pkt_length_adjust < NIX_LENGTH_ADJUST_MIN ||
+	    params->pkt_length_adjust > NIX_LENGTH_ADJUST_MAX) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+		error->message = "length adjust invalid";
+		return -EINVAL;
+	}
+
 	profile = rte_zmalloc("otx2_nix_tm_shaper_profile",
			      sizeof(struct otx2_nix_tm_shaper_profile), 0);
 	if (!profile)
@@ -2108,13 +2161,14 @@ otx2_nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev,
 
 	otx2_tm_dbg("Added TM shaper profile %u, "
 		    " pir %" PRIu64 " , pbs %" PRIu64 ", cir %" PRIu64
-		    ", cbs %" PRIu64 " , adj %u",
+		    ", cbs %" PRIu64 " , adj %u, pkt mode %d",
 		    profile_id,
 		    params->peak.rate * 8,
 		    params->peak.size,
 		    params->committed.rate * 8,
 		    params->committed.size,
-		    params->pkt_length_adjust);
+		    params->pkt_length_adjust,
+		    params->packet_mode);
 
 	/* Translate rate as bits per second */
 	profile->params.peak.rate = profile->params.peak.rate * 8;
@@ -2170,9 +2224,11 @@ otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
		     struct rte_tm_error *error)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_nix_tm_shaper_profile *profile = NULL;
 	struct otx2_nix_tm_node *parent_node;
-	int rc, clear_on_fail = 0;
-	uint32_t exp_next_lvl;
+	int rc, pkt_mode, clear_on_fail = 0;
+	uint32_t exp_next_lvl, i;
+	uint32_t profile_id;
 	uint16_t hw_lvl;
 
 	/* we don't support dynamic updates */
@@ -2234,13 +2290,45 @@ otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
 		return -EINVAL;
 	}
 
-	/* Check if shaper profile exists for non leaf node */
-	if (!nix_tm_is_leaf(dev, lvl) &&
-	    params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE &&
-	    !nix_tm_shaper_profile_search(dev, params->shaper_profile_id)) {
-		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
-		error->message = "invalid shaper profile";
-		return -EINVAL;
+	if (!nix_tm_is_leaf(dev, lvl)) {
+		/* Check if shaper profile exists for non leaf node */
+		profile_id = params->shaper_profile_id;
+		profile = nix_tm_shaper_profile_search(dev, profile_id);
+		if (profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE && !profile) {
+			error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+			error->message = "invalid shaper profile";
+			return -EINVAL;
+		}
+
+		/* Minimum static priority count is 1 */
+		if (!params->nonleaf.n_sp_priorities ||
+		    params->nonleaf.n_sp_priorities > TXSCH_TLX_SP_PRIO_MAX) {
+			error->type =
+				RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+			error->message = "invalid sp priorities";
+			return -EINVAL;
+		}
+
+		pkt_mode = 0;
+		/* Validate weight mode */
+		for (i = 0; i < params->nonleaf.n_sp_priorities &&
+		     params->nonleaf.wfq_weight_mode; i++) {
+			pkt_mode = !params->nonleaf.wfq_weight_mode[i];
+			if (pkt_mode == !params->nonleaf.wfq_weight_mode[0])
+				continue;
+
+			error->type =
+				RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+			error->message = "unsupported weight mode";
+			return -EINVAL;
+		}
+
+		if (profile && params->nonleaf.n_sp_priorities &&
+		    pkt_mode != profile->params.packet_mode) {
+			error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+			error->message = "shaper wfq packet mode mismatch";
+			return -EINVAL;
+		}
 	}
 
 	/* Check if there is second DWRR already in siblings or holes in prio */
@@ -2482,6 +2570,12 @@ otx2_nix_tm_node_shaper_update(struct rte_eth_dev *eth_dev,
 		}
 	}
 
+	if (profile && profile->params.packet_mode != tm_node->pkt_mode) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+		error->message = "shaper profile pkt mode mismatch";
+		return -EINVAL;
+	}
+
 	tm_node->params.shaper_profile_id = profile_id;
 
 	/* Nothing to do if not yet committed */
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
index 9675182..cdca987 100644
--- a/drivers/net/octeontx2/otx2_tm.h
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -48,6 +48,7 @@ struct otx2_nix_tm_node {
 #define NIX_TM_NODE_USER	BIT_ULL(2)
 	/* Shaper algorithm for RED state @NIX_REDALG_E */
 	uint32_t red_algo:2;
+	uint32_t pkt_mode:1;
 
 	struct otx2_nix_tm_node *parent;
 	struct rte_tm_node_params params;
@@ -114,6 +115,10 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
 #define MAX_SHAPER_RATE \
	SHAPER_RATE(MAX_RATE_EXPONENT, MAX_RATE_MANTISSA, 0)
 
+/* Min is limited so that NIX_AF_SMQX_CFG[MINLEN]+ADJUST is not -ve */
+#define NIX_LENGTH_ADJUST_MIN ((int)-NIX_MIN_HW_FRS + 1)
+#define NIX_LENGTH_ADJUST_MAX 255
+
 /** TM Shaper - low level operations */
 
 /** NIX burst limits */
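The SHAPE register value composed for each of the MDQ/TL4/TL3/TL2 levels in the diff follows one pattern: length adjust in the low 9 bits, RED algorithm at bit 9, packet mode at bit 24 (TL1 carries only adjust and packet mode). A standalone sketch of that composition, with the field positions taken from the diff and the helper name ours:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Compose a NIX_AF_*X_SHAPE-style register value as this patch does:
 *   bits [8:0]  - packet length adjust (masked as in the driver)
 *   bits [10:9] - RED algorithm (2-bit field in the tm node)
 *   bit  24     - packet mode enable
 * Illustrative helper; the driver builds this inline per level.
 */
static uint64_t
nix_tm_shape_regval(uint64_t adjust, uint64_t red_algo, uint64_t pkt_mode)
{
	return (adjust & 0x1FF) | (red_algo << 9) | (pkt_mode << 24);
}
```

Note the diff forces `adjust = 1` whenever the node is in packet mode, so in that configuration each packet counts as one unit regardless of the profile's `pkt_length_adjust`.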