From patchwork Fri Jan 5 13:59:04 2024
X-Patchwork-Submitter: Qi Zhang <qi.z.zhang@intel.com>
X-Patchwork-Id: 135739
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang <qi.z.zhang@intel.com>
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang <qi.z.zhang@intel.com>
Subject: [PATCH 1/3] net/ice: hide port and TC layer in Tx sched tree
Date: Fri, 5 Jan 2024 08:59:04 -0500
Message-Id: <20240105135906.383394-2-qi.z.zhang@intel.com>
In-Reply-To: <20240105135906.383394-1-qi.z.zhang@intel.com>
References: <20240105135906.383394-1-qi.z.zhang@intel.com>

In the current 5-layer tree implementation, the port and TC layers are
not configurable, so there is no need to expose them to the
application. Hide the top two layers and represent the root of the
tree at the VSI layer. From the application's point of view, it is a
3-layer scheduler tree:

  Port -> Queue Group -> Queue.
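For reference, below is a minimal sketch of how an application can build
this 3-layer hierarchy through the generic rte_tm API. The node IDs,
priority, and weight values are illustrative assumptions, not values
required by the driver; leaf node IDs map to Tx queue IDs.

    #include <rte_tm.h>

    /* Minimal sketch: build Port -> Queue Group -> Queue via rte_tm.
     * Node IDs 1000/900 and the single-group shape are example choices.
     */
    static int
    build_tx_sched_tree(uint16_t port_id)
    {
        struct rte_tm_node_params np = {0};
        struct rte_tm_error err;
        int ret;

        np.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;

        /* Level 0: the port (root) node has no parent. */
        ret = rte_tm_node_add(port_id, 1000, RTE_TM_NODE_ID_NULL,
                              0, 1, 0, &np, &err);
        if (ret != 0)
            return ret;

        /* Level 1: one queue group under the port. */
        ret = rte_tm_node_add(port_id, 900, 1000, 0, 1, 1, &np, &err);
        if (ret != 0)
            return ret;

        /* Level 2: Tx queue 0 as a leaf under the queue group. */
        ret = rte_tm_node_add(port_id, 0, 900, 0, 1, 2, &np, &err);
        if (ret != 0)
            return ret;

        /* Apply the hierarchy without clearing it on failure. */
        return rte_tm_hierarchy_commit(port_id, 0, &err);
    }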
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_ethdev.h |  7 ----
 drivers/net/ice/ice_tm.c     | 79 ++++--------------------------------
 2 files changed, 7 insertions(+), 79 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index fa4981ed14..ae22c29ffc 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -470,7 +470,6 @@ struct ice_tm_shaper_profile {
 struct ice_tm_node {
 	TAILQ_ENTRY(ice_tm_node) node;
 	uint32_t id;
-	uint32_t tc;
 	uint32_t priority;
 	uint32_t weight;
 	uint32_t reference_count;
@@ -484,8 +483,6 @@ struct ice_tm_node {
 /* node type of Traffic Manager */
 enum ice_tm_node_type {
 	ICE_TM_NODE_TYPE_PORT,
-	ICE_TM_NODE_TYPE_TC,
-	ICE_TM_NODE_TYPE_VSI,
 	ICE_TM_NODE_TYPE_QGROUP,
 	ICE_TM_NODE_TYPE_QUEUE,
 	ICE_TM_NODE_TYPE_MAX,
@@ -495,12 +492,8 @@ enum ice_tm_node_type {
 struct ice_tm_conf {
 	struct ice_shaper_profile_list shaper_profile_list;
 	struct ice_tm_node *root; /* root node - port */
-	struct ice_tm_node_list tc_list; /* node list for all the TCs */
-	struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
 	struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
 	struct ice_tm_node_list queue_list; /* node list for all the queues */
-	uint32_t nb_tc_node;
-	uint32_t nb_vsi_node;
 	uint32_t nb_qgroup_node;
 	uint32_t nb_queue_node;
 	bool committed;
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index b570798f07..7ae68c683b 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -43,12 +43,8 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
 	/* initialize node configuration */
 	TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
 	pf->tm_conf.root = NULL;
-	TAILQ_INIT(&pf->tm_conf.tc_list);
-	TAILQ_INIT(&pf->tm_conf.vsi_list);
 	TAILQ_INIT(&pf->tm_conf.qgroup_list);
 	TAILQ_INIT(&pf->tm_conf.queue_list);
-	pf->tm_conf.nb_tc_node = 0;
-	pf->tm_conf.nb_vsi_node = 0;
 	pf->tm_conf.nb_qgroup_node = 0;
 	pf->tm_conf.nb_queue_node = 0;
 	pf->tm_conf.committed = false;
@@ -72,16 +68,6 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
 		rte_free(tm_node);
 	}
 	pf->tm_conf.nb_qgroup_node = 0;
-	while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
-		TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
-		rte_free(tm_node);
-	}
-	pf->tm_conf.nb_vsi_node = 0;
-	while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
-		TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
-		rte_free(tm_node);
-	}
-	pf->tm_conf.nb_tc_node = 0;
 	if (pf->tm_conf.root) {
 		rte_free(pf->tm_conf.root);
 		pf->tm_conf.root = NULL;
@@ -93,8 +79,6 @@ ice_tm_node_search(struct rte_eth_dev *dev,
 		uint32_t node_id, enum ice_tm_node_type *node_type)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
-	struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
 	struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
 	struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
 	struct ice_tm_node *tm_node;
@@ -104,20 +88,6 @@ ice_tm_node_search(struct rte_eth_dev *dev,
 		return pf->tm_conf.root;
 	}
 
-	TAILQ_FOREACH(tm_node, tc_list, node) {
-		if (tm_node->id == node_id) {
-			*node_type = ICE_TM_NODE_TYPE_TC;
-			return tm_node;
-		}
-	}
-
-	TAILQ_FOREACH(tm_node, vsi_list, node) {
-		if (tm_node->id == node_id) {
-			*node_type = ICE_TM_NODE_TYPE_VSI;
-			return tm_node;
-		}
-	}
-
 	TAILQ_FOREACH(tm_node, qgroup_list, node) {
 		if (tm_node->id == node_id) {
 			*node_type = ICE_TM_NODE_TYPE_QGROUP;
@@ -371,6 +341,8 @@ ice_shaper_profile_del(struct rte_eth_dev *dev,
 	return 0;
 }
 
+#define MAX_QUEUE_PER_GROUP	8
+
 static int
 ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	      uint32_t parent_node_id, uint32_t priority,
@@ -384,8 +356,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	struct ice_tm_shaper_profile *shaper_profile = NULL;
 	struct ice_tm_node *tm_node;
 	struct ice_tm_node *parent_node;
-	uint16_t tc_nb = 1;
-	uint16_t vsi_nb = 1;
 	int ret;
 
 	if (!params || !error)
@@ -440,6 +410,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		tm_node->id = node_id;
 		tm_node->parent = NULL;
 		tm_node->reference_count = 0;
+		tm_node->shaper_profile = shaper_profile;
 		tm_node->children = (struct ice_tm_node **)
 			rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
 		rte_memcpy(&tm_node->params, params,
@@ -448,7 +419,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		return 0;
 	}
 
-	/* TC or queue node */
 	/* check the parent node */
 	parent_node = ice_tm_node_search(dev, parent_node_id,
 					 &parent_node_type);
@@ -458,8 +428,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		return -EINVAL;
 	}
 	if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
-	    parent_node_type != ICE_TM_NODE_TYPE_TC &&
-	    parent_node_type != ICE_TM_NODE_TYPE_VSI &&
 	    parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
 		error->message = "parent is not valid";
@@ -475,20 +443,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 
 	/* check the node number */
 	if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
-		/* check the TC number */
-		if (pf->tm_conf.nb_tc_node >= tc_nb) {
-			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
-			error->message = "too many TCs";
-			return -EINVAL;
-		}
-	} else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
-		/* check the VSI number */
-		if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
-			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
-			error->message = "too many VSIs";
-			return -EINVAL;
-		}
-	} else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
 		/* check the queue group number */
 		if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
 			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
@@ -497,7 +451,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		}
 	} else {
 		/* check the queue number */
-		if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+		if (parent_node->reference_count >= MAX_QUEUE_PER_GROUP) {
 			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
 			error->message = "too many queues";
 			return -EINVAL;
@@ -509,7 +463,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		}
 	}
 
-	/* add the TC or VSI or queue group or queue node */
 	tm_node = rte_zmalloc("ice_tm_node",
 			      sizeof(struct ice_tm_node),
 			      0);
@@ -538,24 +491,12 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	rte_memcpy(&tm_node->params, params,
 			sizeof(struct rte_tm_node_params));
 	if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
-		TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
-				  tm_node, node);
-		tm_node->tc = pf->tm_conf.nb_tc_node;
-		pf->tm_conf.nb_tc_node++;
-	} else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
-		TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
-				  tm_node, node);
-		tm_node->tc = parent_node->tc;
-		pf->tm_conf.nb_vsi_node++;
-	} else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
 		TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
 				  tm_node, node);
-		tm_node->tc = parent_node->parent->tc;
 		pf->tm_conf.nb_qgroup_node++;
 	} else {
 		TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
 				  tm_node, node);
-		tm_node->tc = parent_node->parent->parent->tc;
 		pf->tm_conf.nb_queue_node++;
 	}
 	tm_node->parent->reference_count++;
@@ -603,15 +544,9 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 		return 0;
 	}
 
-	/* TC or VSI or queue group or queue node */
+	/* queue group or queue node */
 	tm_node->parent->reference_count--;
-	if (node_type == ICE_TM_NODE_TYPE_TC) {
-		TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
-		pf->tm_conf.nb_tc_node--;
-	} else if (node_type == ICE_TM_NODE_TYPE_VSI) {
-		TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
-		pf->tm_conf.nb_vsi_node--;
-	} else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+	if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
 		TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
 		pf->tm_conf.nb_qgroup_node--;
 	} else {
@@ -872,7 +807,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
 
 	/* config vsi node */
 	vsi_node = ice_get_vsi_node(hw);
-	tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list);
+	tm_node = pf->tm_conf.root;
 
 	ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
 	if (ret_val) {

From patchwork Fri Jan 5 13:59:05 2024
X-Patchwork-Submitter: Qi Zhang <qi.z.zhang@intel.com>
X-Patchwork-Id: 135740
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang <qi.z.zhang@intel.com>
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang <qi.z.zhang@intel.com>
Subject: [PATCH 2/3] net/ice: refactor tm config data structure
Date: Fri, 5 Jan 2024 08:59:05 -0500
Message-Id: <20240105135906.383394-3-qi.z.zhang@intel.com>
In-Reply-To: <20240105135906.383394-1-qi.z.zhang@intel.com>
References: <20240105135906.383394-1-qi.z.zhang@intel.com>

Simplify struct ice_tm_conf by removing the per-level node lists;
nodes are now tracked only through the tree rooted at tm_conf.root
and reached by walking each node's children array.
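The following standalone sketch (with a simplified node type mirroring
the driver's children[]/reference_count layout) illustrates the
depth-first lookup that replaces the per-level TAILQ searches; the
driver's find_node() and free_node() below follow the same shape:

    #include <stdint.h>
    #include <stddef.h>

    /* Simplified stand-in for struct ice_tm_node. */
    struct tm_node {
        uint32_t id;
        uint32_t reference_count;   /* number of valid children */
        struct tm_node **children;
    };

    /* Depth-first search over the children array. */
    static struct tm_node *
    tm_find(struct tm_node *root, uint32_t id)
    {
        uint32_t i;

        if (root == NULL || root->id == id)
            return root;

        for (i = 0; i < root->reference_count; i++) {
            struct tm_node *n = tm_find(root->children[i], id);

            if (n != NULL)
                return n;
        }
        return NULL;
    }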
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_ethdev.h |   5 +-
 drivers/net/ice/ice_tm.c     | 210 +++++++++++++++--------------------
 2 files changed, 88 insertions(+), 127 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index ae22c29ffc..008a7a23b9 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -472,6 +472,7 @@ struct ice_tm_node {
 	uint32_t id;
 	uint32_t priority;
 	uint32_t weight;
+	uint32_t level;
 	uint32_t reference_count;
 	struct ice_tm_node *parent;
 	struct ice_tm_node **children;
@@ -492,10 +493,6 @@ enum ice_tm_node_type {
 struct ice_tm_conf {
 	struct ice_shaper_profile_list shaper_profile_list;
 	struct ice_tm_node *root; /* root node - port */
-	struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
-	struct ice_tm_node_list queue_list; /* node list for all the queues */
-	uint32_t nb_qgroup_node;
-	uint32_t nb_queue_node;
 	bool committed;
 	bool clear_on_fail;
 };
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 7ae68c683b..7c662f8a85 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -43,66 +43,30 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
 	/* initialize node configuration */
 	TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
 	pf->tm_conf.root = NULL;
-	TAILQ_INIT(&pf->tm_conf.qgroup_list);
-	TAILQ_INIT(&pf->tm_conf.queue_list);
-	pf->tm_conf.nb_qgroup_node = 0;
-	pf->tm_conf.nb_queue_node = 0;
 	pf->tm_conf.committed = false;
 	pf->tm_conf.clear_on_fail = false;
 }
 
-void
-ice_tm_conf_uninit(struct rte_eth_dev *dev)
+static void free_node(struct ice_tm_node *root)
 {
-	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	struct ice_tm_node *tm_node;
+	uint32_t i;
 
-	/* clear node configuration */
-	while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
-		TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
-		rte_free(tm_node);
-	}
-	pf->tm_conf.nb_queue_node = 0;
-	while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
-		TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
-		rte_free(tm_node);
-	}
-	pf->tm_conf.nb_qgroup_node = 0;
-	if (pf->tm_conf.root) {
-		rte_free(pf->tm_conf.root);
-		pf->tm_conf.root = NULL;
-	}
+	if (root == NULL)
+		return;
+
+	for (i = 0; i < root->reference_count; i++)
+		free_node(root->children[i]);
+
+	rte_free(root);
 }
 
-static inline struct ice_tm_node *
-ice_tm_node_search(struct rte_eth_dev *dev,
-		uint32_t node_id, enum ice_tm_node_type *node_type)
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
-	struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
-	struct ice_tm_node *tm_node;
-
-	if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
-		*node_type = ICE_TM_NODE_TYPE_PORT;
-		return pf->tm_conf.root;
-	}
 
-	TAILQ_FOREACH(tm_node, qgroup_list, node) {
-		if (tm_node->id == node_id) {
-			*node_type = ICE_TM_NODE_TYPE_QGROUP;
-			return tm_node;
-		}
-	}
-
-	TAILQ_FOREACH(tm_node, queue_list, node) {
-		if (tm_node->id == node_id) {
-			*node_type = ICE_TM_NODE_TYPE_QUEUE;
-			return tm_node;
-		}
-	}
-
-	return NULL;
+	free_node(pf->tm_conf.root);
+	pf->tm_conf.root = NULL;
 }
 
 static int
@@ -195,11 +159,29 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
 	return 0;
 }
 
+static struct ice_tm_node *
+find_node(struct ice_tm_node *root, uint32_t id)
+{
+	uint32_t i;
+
+	if (root == NULL || root->id == id)
+		return root;
+
+	for (i = 0; i < root->reference_count; i++) {
+		struct ice_tm_node *node = find_node(root->children[i], id);
+
+		if (node)
+			return node;
+	}
+
+	return NULL;
+}
+
 static int
 ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
 		int *is_leaf, struct rte_tm_error *error)
 {
-	enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_tm_node *tm_node;
 
 	if (!is_leaf || !error)
@@ -212,14 +194,14 @@ ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
 	}
 
 	/* check if the node id exists */
-	tm_node = ice_tm_node_search(dev, node_id, &node_type);
+	tm_node = find_node(pf->tm_conf.root, node_id);
 	if (!tm_node) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
 		error->message = "no such node";
 		return -EINVAL;
 	}
 
-	if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+	if (tm_node->level == ICE_TM_NODE_TYPE_QUEUE)
 		*is_leaf = true;
 	else
 		*is_leaf = false;
@@ -351,8 +333,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		struct rte_tm_error *error)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
-	enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
 	struct ice_tm_shaper_profile *shaper_profile = NULL;
 	struct ice_tm_node *tm_node;
 	struct ice_tm_node *parent_node;
@@ -367,7 +347,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		return ret;
 
 	/* check if the node is already existed */
-	if (ice_tm_node_search(dev, node_id, &node_type)) {
+	if (find_node(pf->tm_conf.root, node_id)) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
 		error->message = "node id already used";
 		return -EINVAL;
@@ -408,6 +388,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		if (!tm_node)
 			return -ENOMEM;
 		tm_node->id = node_id;
+		tm_node->level = ICE_TM_NODE_TYPE_PORT;
 		tm_node->parent = NULL;
 		tm_node->reference_count = 0;
 		tm_node->shaper_profile = shaper_profile;
@@ -420,29 +401,28 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	}
 
 	/* check the parent node */
-	parent_node = ice_tm_node_search(dev, parent_node_id,
-					 &parent_node_type);
+	parent_node = find_node(pf->tm_conf.root, parent_node_id);
 	if (!parent_node) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
 		error->message = "parent not exist";
 		return -EINVAL;
 	}
-	if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
-	    parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
+	if (parent_node->level != ICE_TM_NODE_TYPE_PORT &&
+	    parent_node->level != ICE_TM_NODE_TYPE_QGROUP) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
 		error->message = "parent is not valid";
 		return -EINVAL;
 	}
 	/* check level */
 	if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
-	    level_id != (uint32_t)parent_node_type + 1) {
+	    level_id != parent_node->level + 1) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
 		error->message = "Wrong level";
 		return -EINVAL;
 	}
 
 	/* check the node number */
-	if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+	if (parent_node->level == ICE_TM_NODE_TYPE_PORT) {
 		/* check the queue group number */
 		if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
 			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
@@ -473,6 +453,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	tm_node->weight = weight;
 	tm_node->reference_count = 0;
 	tm_node->parent = parent_node;
+	tm_node->level = parent_node->level + 1;
 	tm_node->shaper_profile = shaper_profile;
 	tm_node->children = (struct ice_tm_node **)
 			rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
@@ -490,15 +471,6 @@
 	rte_memcpy(&tm_node->params, params,
 			sizeof(struct rte_tm_node_params));
-	if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
-		TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
-				  tm_node, node);
-		pf->tm_conf.nb_qgroup_node++;
-	} else {
-		TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
-				  tm_node, node);
-		pf->tm_conf.nb_queue_node++;
-	}
 	tm_node->parent->reference_count++;
 
 	return 0;
@@ -509,7 +481,6 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 		 struct rte_tm_error *error)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
 	struct ice_tm_node *tm_node;
 
 	if (!error)
@@ -522,7 +493,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 	}
 
 	/* check if the node id exists */
-	tm_node = ice_tm_node_search(dev, node_id, &node_type);
+	tm_node = find_node(pf->tm_conf.root, node_id);
 	if (!tm_node) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
 		error->message = "no such node";
@@ -538,7 +509,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 	}
 
 	/* root node */
-	if (node_type == ICE_TM_NODE_TYPE_PORT) {
+	if (tm_node->level == ICE_TM_NODE_TYPE_PORT) {
 		rte_free(tm_node);
 		pf->tm_conf.root = NULL;
 		return 0;
@@ -546,13 +517,6 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 
 	/* queue group or queue node */
 	tm_node->parent->reference_count--;
-	if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
-		TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
-		pf->tm_conf.nb_qgroup_node--;
-	} else {
-		TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
-		pf->tm_conf.nb_queue_node--;
-	}
 	rte_free(tm_node);
 
 	return 0;
@@ -708,9 +672,9 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
 	struct ice_sched_node *vsi_node = ice_get_vsi_node(hw);
-	struct ice_tm_node *tm_node;
+	struct ice_tm_node *root = pf->tm_conf.root;
+	uint32_t i;
 	int ret;
 
 	/* reset vsi_node */
@@ -720,8 +684,12 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	/* reset queue group nodes */
-	TAILQ_FOREACH(tm_node, qgroup_list, node) {
+	if (root == NULL)
+		return 0;
+
+	for (i = 0; i < root->reference_count; i++) {
+		struct ice_tm_node *tm_node = root->children[i];
+
 		if (tm_node->sched_node == NULL)
 			continue;
 
@@ -774,9 +742,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
-	struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
-	struct ice_tm_node *tm_node;
+	struct ice_tm_node *root;
 	struct ice_sched_node *vsi_node = NULL;
 	struct ice_sched_node *queue_node;
 	struct ice_tx_queue *txq;
@@ -807,14 +773,14 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
 
 	/* config vsi node */
 	vsi_node = ice_get_vsi_node(hw);
-	tm_node = pf->tm_conf.root;
+	root = pf->tm_conf.root;
 
-	ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
+	ret_val = ice_set_node_rate(hw, root, vsi_node);
 	if (ret_val) {
 		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 		PMD_DRV_LOG(ERR,
 			    "configure vsi node %u bandwidth failed",
-			    tm_node->id);
+			    root->id);
 		goto add_leaf;
 	}
 
@@ -825,13 +791,27 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
 	idx_vsi_child = 0;
 	idx_qg = 0;
 
-	TAILQ_FOREACH(tm_node, qgroup_list, node) {
+	if (root == NULL)
+		goto commit;
+
+	for (i = 0; i < root->reference_count; i++) {
+		struct ice_tm_node *tm_node = root->children[i];
 		struct ice_tm_node *tm_child_node;
 		struct ice_sched_node *qgroup_sched_node =
 			vsi_node->children[idx_vsi_child]->children[idx_qg];
+		uint32_t j;
 
-		for (i = 0; i < tm_node->reference_count; i++) {
-			tm_child_node = tm_node->children[i];
+		ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
+		if (ret_val) {
+			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+			PMD_DRV_LOG(ERR,
+				    "configure queue group node %u failed",
+				    tm_node->id);
+			goto reset_leaf;
+		}
+
+		for (j = 0; j < tm_node->reference_count; j++) {
+			tm_child_node = tm_node->children[j];
 			qid = tm_child_node->id;
 			ret_val = ice_tx_queue_start(dev, qid);
 			if (ret_val) {
@@ -847,25 +827,25 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
 				PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
 				goto reset_leaf;
 			}
-			if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
-				continue;
-			ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+			if (queue_node->info.parent_teid != qgroup_sched_node->info.node_teid) {
+				ret_val = ice_move_recfg_lan_txq(dev, queue_node,
+								 qgroup_sched_node, qid);
+				if (ret_val) {
+					error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+					PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+					goto reset_leaf;
+				}
+			}
+			ret_val = ice_cfg_hw_node(hw, tm_child_node, queue_node);
 			if (ret_val) {
 				error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-				PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+				PMD_DRV_LOG(ERR,
+					    "configure queue node %u failed",
+					    tm_child_node->id);
 				goto reset_leaf;
 			}
 		}
-		ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-			PMD_DRV_LOG(ERR,
-				    "configure queue group node %u failed",
-				    tm_node->id);
-			goto reset_leaf;
-		}
-
 		idx_qg++;
 		if (idx_qg >= nb_qg) {
 			idx_qg = 0;
@@ -878,23 +858,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
 		}
 	}
 
-	/* config queue nodes */
-	TAILQ_FOREACH(tm_node, queue_list, node) {
-		qid = tm_node->id;
-		txq = dev->data->tx_queues[qid];
-		q_teid = txq->q_teid;
-		queue_node = ice_sched_get_node(hw->port_info, q_teid);
-
-		ret_val = ice_cfg_hw_node(hw, tm_node, queue_node);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-			PMD_DRV_LOG(ERR,
-				    "configure queue group node %u failed",
-				    tm_node->id);
-			goto reset_leaf;
-		}
-	}
-
+commit:
 	pf->tm_conf.committed = true;
 	pf->tm_conf.clear_on_fail = clear_on_fail;

From patchwork Fri Jan 5 13:59:06 2024
X-Patchwork-Submitter: Qi Zhang <qi.z.zhang@intel.com>
X-Patchwork-Id: 135741
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang <qi.z.zhang@intel.com>
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang <qi.z.zhang@intel.com>
Subject: [PATCH 3/3] doc: update ice document for qos
Date: Fri, 5 Jan 2024 08:59:06 -0500
Message-Id: <20240105135906.383394-4-qi.z.zhang@intel.com>
In-Reply-To: <20240105135906.383394-1-qi.z.zhang@intel.com>
References: <20240105135906.383394-1-qi.z.zhang@intel.com>

Add a description of the ice PMD's rte_tm capabilities.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 doc/guides/nics/ice.rst | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index bafb3ba022..1f737a009c 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -352,6 +352,25 @@ queue 3 using a raw pattern::
 
 Currently, raw pattern support is limited to the FDIR and Hash engines.
 
+Traffic Management Support
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ice PMD supports the Traffic Management API (RTE_TM), allowing
+users to offload a 3-layer Tx scheduler on the E810 NIC:
+
+- ``Port Layer``
+
+  This is the root layer. It supports peak bandwidth configuration, with up to 32 children.
+
+- ``Queue Group Layer``
+
+  This is the middle layer. It supports peak/committed bandwidth, weight and priority
+  configurations, with up to 8 children.
+
+- ``Queue Layer``
+
+  This is the leaf layer. It supports peak/committed bandwidth, weight and priority configurations.
+
 Additional Options
 ++++++++++++++++++
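As a usage sketch for the capabilities documented above: peak and
committed bandwidth are expressed through rte_tm shaper profiles and
attached to nodes when they are added. The profile ID, node IDs, and
rates below are illustrative assumptions; rte_tm token-bucket rates are
given in bytes per second.

    #include <rte_tm.h>

    /* Minimal sketch: rate-limit a queue group with a shaper profile.
     * Profile ID 1 and node IDs 900/1000 are arbitrary example values.
     */
    static int
    limit_queue_group(uint16_t port_id)
    {
        struct rte_tm_shaper_params sp = {0};
        struct rte_tm_node_params np = {0};
        struct rte_tm_error err;
        int ret;

        sp.peak.rate = 1000000000 / 8;      /* 1 Gbit/s peak */
        sp.committed.rate = 500000000 / 8;  /* 500 Mbit/s committed */

        ret = rte_tm_shaper_profile_add(port_id, 1, &sp, &err);
        if (ret != 0)
            return ret;

        /* Attach the profile to a queue group node (level 1) added
         * under an existing root node with ID 1000. */
        np.shaper_profile_id = 1;
        return rte_tm_node_add(port_id, 900, 1000, 0, 1, 1, &np, &err);
    }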