From patchwork Fri Jan 5 14:11:19 2024
X-Patchwork-Submitter: Qi Zhang <qi.z.zhang@intel.com>
X-Patchwork-Id: 135743
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang <qi.z.zhang@intel.com>
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang <qi.z.zhang@intel.com>
Subject: [PATCH v2 2/3] net/ice: refactor tm config data structure
Date: Fri, 5 Jan 2024 09:11:19 -0500
Message-Id: <20240105141120.384681-3-qi.z.zhang@intel.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20240105141120.384681-1-qi.z.zhang@intel.com>
References: <20240105135906.383394-1-qi.z.zhang@intel.com>
 <20240105141120.384681-1-qi.z.zhang@intel.com>

Simplify struct ice_tm_conf by removing the per-level node lists: every
node is now reached from the root (port) node through its parent's
children array, so the separate queue group and queue lists and their
counters are no longer needed.
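To illustrate the shape of the new configuration (a minimal standalone
sketch with simplified names, not the driver code itself): every node
records its level and its children, so a single recursive walk from the
port (root) node replaces the per-level list scans, and freeing the root
releases the whole hierarchy.

#include <stdint.h>
#include <stdlib.h>

/* Trimmed-down stand-in for ice_tm_node (illustrative only). */
struct tm_node {
	uint32_t id;
	uint32_t level;            /* port / queue group / queue */
	uint32_t reference_count;  /* number of attached children */
	struct tm_node **children;
};

/* Depth-first lookup from the root, replacing the per-level searches. */
static struct tm_node *
find_tm_node(struct tm_node *root, uint32_t id)
{
	if (root == NULL || root->id == id)
		return root;

	for (uint32_t i = 0; i < root->reference_count; i++) {
		struct tm_node *node = find_tm_node(root->children[i], id);

		if (node != NULL)
			return node;
	}

	return NULL;
}

/* Post-order free of the whole tree, replacing the two list drains. */
static void
free_tm_node(struct tm_node *root)
{
	if (root == NULL)
		return;

	for (uint32_t i = 0; i < root->reference_count; i++)
		free_tm_node(root->children[i]);

	free(root->children);	/* the sketch also releases the child array */
	free(root);
}

Either layout is linear to search, but the tree form drops the duplicated
bookkeeping (per-level TAILQ links and counters) and lets type checks read
the node's level field directly instead of threading a node_type out
parameter through every lookup.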
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Wenjun Wu <wenjun1.wu@intel.com>
---
 drivers/net/ice/ice_ethdev.h |   5 +-
 drivers/net/ice/ice_tm.c     | 210 +++++++++++++++--------------------
 2 files changed, 88 insertions(+), 127 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index ae22c29ffc..008a7a23b9 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -472,6 +472,7 @@ struct ice_tm_node {
 	uint32_t id;
 	uint32_t priority;
 	uint32_t weight;
+	uint32_t level;
 	uint32_t reference_count;
 	struct ice_tm_node *parent;
 	struct ice_tm_node **children;
@@ -492,10 +493,6 @@ enum ice_tm_node_type {
 struct ice_tm_conf {
 	struct ice_shaper_profile_list shaper_profile_list;
 	struct ice_tm_node *root; /* root node - port */
-	struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
-	struct ice_tm_node_list queue_list; /* node list for all the queues */
-	uint32_t nb_qgroup_node;
-	uint32_t nb_queue_node;
 	bool committed;
 	bool clear_on_fail;
 };
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 7ae68c683b..7c662f8a85 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -43,66 +43,30 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
 	/* initialize node configuration */
 	TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
 	pf->tm_conf.root = NULL;
-	TAILQ_INIT(&pf->tm_conf.qgroup_list);
-	TAILQ_INIT(&pf->tm_conf.queue_list);
-	pf->tm_conf.nb_qgroup_node = 0;
-	pf->tm_conf.nb_queue_node = 0;
 	pf->tm_conf.committed = false;
 	pf->tm_conf.clear_on_fail = false;
 }
 
-void
-ice_tm_conf_uninit(struct rte_eth_dev *dev)
+static void free_node(struct ice_tm_node *root)
 {
-	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	struct ice_tm_node *tm_node;
+	uint32_t i;
 
-	/* clear node configuration */
-	while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
-		TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
-		rte_free(tm_node);
-	}
-	pf->tm_conf.nb_queue_node = 0;
-	while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
-		TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
-		rte_free(tm_node);
-	}
-	pf->tm_conf.nb_qgroup_node = 0;
-	if (pf->tm_conf.root) {
-		rte_free(pf->tm_conf.root);
-		pf->tm_conf.root = NULL;
-	}
+	if (root == NULL)
+		return;
+
+	for (i = 0; i < root->reference_count; i++)
+		free_node(root->children[i]);
+
+	rte_free(root);
 }
 
-static inline struct ice_tm_node *
-ice_tm_node_search(struct rte_eth_dev *dev,
-		    uint32_t node_id, enum ice_tm_node_type *node_type)
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
-	struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
-	struct ice_tm_node *tm_node;
-
-	if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
-		*node_type = ICE_TM_NODE_TYPE_PORT;
-		return pf->tm_conf.root;
-	}
 
-	TAILQ_FOREACH(tm_node, qgroup_list, node) {
-		if (tm_node->id == node_id) {
-			*node_type = ICE_TM_NODE_TYPE_QGROUP;
-			return tm_node;
-		}
-	}
-
-	TAILQ_FOREACH(tm_node, queue_list, node) {
-		if (tm_node->id == node_id) {
-			*node_type = ICE_TM_NODE_TYPE_QUEUE;
-			return tm_node;
-		}
-	}
-
-	return NULL;
+	free_node(pf->tm_conf.root);
+	pf->tm_conf.root = NULL;
 }
 
 static int
@@ -195,11 +159,29 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
 	return 0;
 }
 
+static struct ice_tm_node *
+find_node(struct ice_tm_node *root, uint32_t id)
+{
+	uint32_t i;
+
+	if (root == NULL || root->id == id)
+		return root;
+
+	for (i = 0; i < root->reference_count; i++) {
+		struct ice_tm_node *node = find_node(root->children[i], id);
+
+		if (node)
+			return node;
+	}
+
+	return NULL;
+}
+
 static int
 ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
 		  int *is_leaf, struct rte_tm_error *error)
 {
-	enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_tm_node *tm_node;
 
 	if (!is_leaf || !error)
@@ -212,14 +194,14 @@ ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
 	}
 
 	/* check if the node id exists */
-	tm_node = ice_tm_node_search(dev, node_id, &node_type);
+	tm_node = find_node(pf->tm_conf.root, node_id);
 	if (!tm_node) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
 		error->message = "no such node";
 		return -EINVAL;
 	}
 
-	if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+	if (tm_node->level == ICE_TM_NODE_TYPE_QUEUE)
 		*is_leaf = true;
 	else
 		*is_leaf = false;
@@ -351,8 +333,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	      struct rte_tm_error *error)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
-	enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
 	struct ice_tm_shaper_profile *shaper_profile = NULL;
 	struct ice_tm_node *tm_node;
 	struct ice_tm_node *parent_node;
@@ -367,7 +347,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		return ret;
 
 	/* check if the node is already existed */
-	if (ice_tm_node_search(dev, node_id, &node_type)) {
+	if (find_node(pf->tm_conf.root, node_id)) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
 		error->message = "node id already used";
 		return -EINVAL;
@@ -408,6 +388,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		if (!tm_node)
 			return -ENOMEM;
 		tm_node->id = node_id;
+		tm_node->level = ICE_TM_NODE_TYPE_PORT;
 		tm_node->parent = NULL;
 		tm_node->reference_count = 0;
 		tm_node->shaper_profile = shaper_profile;
@@ -420,29 +401,28 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	}
 
 	/* check the parent node */
-	parent_node = ice_tm_node_search(dev, parent_node_id,
-					 &parent_node_type);
+	parent_node = find_node(pf->tm_conf.root, parent_node_id);
 	if (!parent_node) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
 		error->message = "parent not exist";
 		return -EINVAL;
 	}
-	if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
-	    parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
+	if (parent_node->level != ICE_TM_NODE_TYPE_PORT &&
+	    parent_node->level != ICE_TM_NODE_TYPE_QGROUP) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
 		error->message = "parent is not valid";
 		return -EINVAL;
 	}
 	/* check level */
 	if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
-	    level_id != (uint32_t)parent_node_type + 1) {
+	    level_id != parent_node->level + 1) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
 		error->message = "Wrong level";
 		return -EINVAL;
	}
 
 	/* check the node number */
-	if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+	if (parent_node->level == ICE_TM_NODE_TYPE_PORT) {
 		/* check the queue group number */
 		if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
 			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
@@ -473,6 +453,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	tm_node->weight = weight;
 	tm_node->reference_count = 0;
 	tm_node->parent = parent_node;
+	tm_node->level = parent_node->level + 1;
 	tm_node->shaper_profile = shaper_profile;
 	tm_node->children = (struct ice_tm_node **)
 			rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
@@ -490,15 +471,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	rte_memcpy(&tm_node->params, params,
 		   sizeof(struct rte_tm_node_params));
 
-	if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
-		TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
-				  tm_node, node);
-		pf->tm_conf.nb_qgroup_node++;
-	} else {
-		TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
-				  tm_node, node);
-		pf->tm_conf.nb_queue_node++;
-	}
 	tm_node->parent->reference_count++;
 
 	return 0;
@@ -509,7 +481,6 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 		 struct rte_tm_error *error)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
 	struct ice_tm_node *tm_node;
 
 	if (!error)
@@ -522,7 +493,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 	}
 
 	/* check if the node id exists */
-	tm_node = ice_tm_node_search(dev, node_id, &node_type);
+	tm_node = find_node(pf->tm_conf.root, node_id);
 	if (!tm_node) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
 		error->message = "no such node";
@@ -538,7 +509,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 	}
 
 	/* root node */
-	if (node_type == ICE_TM_NODE_TYPE_PORT) {
+	if (tm_node->level == ICE_TM_NODE_TYPE_PORT) {
 		rte_free(tm_node);
 		pf->tm_conf.root = NULL;
 		return 0;
@@ -546,13 +517,6 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 
 	/* queue group or queue node */
 	tm_node->parent->reference_count--;
-	if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
-		TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
-		pf->tm_conf.nb_qgroup_node--;
-	} else {
-		TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
-		pf->tm_conf.nb_queue_node--;
-	}
 	rte_free(tm_node);
 
 	return 0;
@@ -708,9 +672,9 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
 	struct ice_sched_node *vsi_node = ice_get_vsi_node(hw);
-	struct ice_tm_node *tm_node;
+	struct ice_tm_node *root = pf->tm_conf.root;
+	uint32_t i;
 	int ret;
 
 	/* reset vsi_node */
@@ -720,8 +684,12 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	/* reset queue group nodes */
-	TAILQ_FOREACH(tm_node, qgroup_list, node) {
+	if (root == NULL)
+		return 0;
+
+	for (i = 0; i < root->reference_count; i++) {
+		struct ice_tm_node *tm_node = root->children[i];
+
 		if (tm_node->sched_node == NULL)
 			continue;
 
@@ -774,9 +742,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
-	struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
-	struct ice_tm_node *tm_node;
+	struct ice_tm_node *root;
 	struct ice_sched_node *vsi_node = NULL;
 	struct ice_sched_node *queue_node;
 	struct ice_tx_queue *txq;
@@ -807,14 +773,14 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
 
 	/* config vsi node */
 	vsi_node = ice_get_vsi_node(hw);
-	tm_node = pf->tm_conf.root;
+	root = pf->tm_conf.root;
 
-	ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
+	ret_val = ice_set_node_rate(hw, root, vsi_node);
 	if (ret_val) {
 		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 		PMD_DRV_LOG(ERR,
 			    "configure vsi node %u bandwidth failed",
-			    tm_node->id);
+			    root->id);
 		goto add_leaf;
 	}
 
@@ -825,13 +791,27 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
 	idx_vsi_child = 0;
 	idx_qg = 0;
 
-	TAILQ_FOREACH(tm_node, qgroup_list, node) {
+	if (root == NULL)
+		goto commit;
+
+	for (i = 0; i < root->reference_count; i++) {
+		struct ice_tm_node *tm_node = root->children[i];
 		struct ice_tm_node *tm_child_node;
 		struct ice_sched_node *qgroup_sched_node =
 			vsi_node->children[idx_vsi_child]->children[idx_qg];
+		uint32_t j;
 
-		for (i = 0; i < tm_node->reference_count; i++) {
-			tm_child_node = tm_node->children[i];
+		ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
+		if (ret_val) {
+			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+			PMD_DRV_LOG(ERR,
+				    "configure queue group node %u failed",
+				    tm_node->id);
+			goto reset_leaf;
+		}
+
+		for (j = 0; j < tm_node->reference_count; j++) {
+			tm_child_node = tm_node->children[j];
 			qid = tm_child_node->id;
 			ret_val = ice_tx_queue_start(dev, qid);
 			if (ret_val) {
@@ -847,25 +827,25 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
 				PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
 				goto reset_leaf;
 			}
-			if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
-				continue;
-			ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+			if (queue_node->info.parent_teid != qgroup_sched_node->info.node_teid) {
+				ret_val = ice_move_recfg_lan_txq(dev, queue_node,
+								 qgroup_sched_node, qid);
+				if (ret_val) {
+					error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+					PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+					goto reset_leaf;
+				}
+			}
+			ret_val = ice_cfg_hw_node(hw, tm_child_node, queue_node);
 			if (ret_val) {
 				error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-				PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+				PMD_DRV_LOG(ERR,
+					    "configure queue group node %u failed",
+					    tm_node->id);
 				goto reset_leaf;
 			}
 		}
 
-		ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-			PMD_DRV_LOG(ERR,
-				    "configure queue group node %u failed",
-				    tm_node->id);
-			goto reset_leaf;
-		}
-
 		idx_qg++;
 		if (idx_qg >= nb_qg) {
 			idx_qg = 0;
@@ -878,23 +858,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
 		}
 	}
 
-	/* config queue nodes */
-	TAILQ_FOREACH(tm_node, queue_list, node) {
-		qid = tm_node->id;
-		txq = dev->data->tx_queues[qid];
-		q_teid = txq->q_teid;
-		queue_node = ice_sched_get_node(hw->port_info, q_teid);
-
-		ret_val = ice_cfg_hw_node(hw, tm_node, queue_node);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-			PMD_DRV_LOG(ERR,
-				    "configure queue group node %u failed",
-				    tm_node->id);
-			goto reset_leaf;
-		}
-	}
-
+commit:
 	pf->tm_conf.committed = true;
 	pf->tm_conf.clear_on_fail = clear_on_fail;
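For reference, the refactored add/delete/commit path above is driven from an
application through the generic rte_tm API. A hedged sketch (the node IDs and
the single queue group layout are invented for the example, and error handling
is minimal): add a port (root) node, one queue group under it, a leaf per Tx
queue whose node id equals the queue id, then commit the hierarchy, which ends
up in ice_do_hierarchy_commit().

#include <string.h>
#include <rte_tm.h>

/* IDs picked arbitrarily for the example; leaf node ids map to Tx queue ids. */
#define EX_PORT_NODE_ID   1000
#define EX_QGROUP_NODE_ID  900

static int
build_min_hierarchy(uint16_t port_id, struct rte_tm_error *err)
{
	struct rte_tm_node_params np;
	int ret;

	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;

	/* port (root) node: no parent */
	ret = rte_tm_node_add(port_id, EX_PORT_NODE_ID, RTE_TM_NODE_ID_NULL,
			      0, 1, RTE_TM_NODE_LEVEL_ID_ANY, &np, err);
	if (ret != 0)
		return ret;

	/* one queue group under the port node */
	ret = rte_tm_node_add(port_id, EX_QGROUP_NODE_ID, EX_PORT_NODE_ID,
			      0, 1, RTE_TM_NODE_LEVEL_ID_ANY, &np, err);
	if (ret != 0)
		return ret;

	/* leaf node for Tx queue 0 */
	ret = rte_tm_node_add(port_id, 0, EX_QGROUP_NODE_ID,
			      0, 1, RTE_TM_NODE_LEVEL_ID_ANY, &np, err);
	if (ret != 0)
		return ret;

	/* hand the tree over to the PMD, clearing it on failure */
	return rte_tm_hierarchy_commit(port_id, 1, err);
}

With the v2 layout, each rte_tm_node_add() call lands in ice_tm_node_add(),
which now just links the new node into its parent's children array and derives
its level from the parent instead of appending it to a per-level list.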