From patchwork Fri Jan 5 21:12:35 2024
X-Patchwork-Submitter: Qi Zhang
X-Patchwork-Id: 135758
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang
Subject: [PATCH v3 1/3] net/ice: hide port and TC layer in Tx sched tree
Date: Fri, 5 Jan 2024 16:12:35 -0500
Message-Id: <20240105211237.394105-2-qi.z.zhang@intel.com>
In-Reply-To: <20240105211237.394105-1-qi.z.zhang@intel.com>
References: <20240105135906.383394-1-qi.z.zhang@intel.com>
 <20240105211237.394105-1-qi.z.zhang@intel.com>

In the current 5-layer tree implementation, the port and TC layers are
not configurable, so there is no need to expose them to the application.
This patch hides the top two layers and represents the root of the tree
at the VSI layer. From the application's point of view, it is a 3-layer
scheduler tree: Port -> Queue Group -> Queue.
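For readers unfamiliar with the generic traffic management API, below is
a minimal sketch of how an application could build the 3-layer tree
described above. It is illustrative only and not part of the patch: the
node IDs, the helper name, and the priority/weight values are
hypothetical, and the rte_tm_error details are not inspected.

#include <string.h>
#include <rte_tm.h>

/* Hypothetical node IDs -- any otherwise unused IDs work, except that a
 * leaf node's ID must equal the Tx queue ID it represents. */
#define ROOT_NODE_ID   100
#define QGROUP_NODE_ID 200

static int
build_3_layer_tree(uint16_t port_id, uint16_t txq_id)
{
	struct rte_tm_node_params np;
	struct rte_tm_error err;
	int ret;

	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;

	/* Port layer: the root node has no parent. */
	ret = rte_tm_node_add(port_id, ROOT_NODE_ID, RTE_TM_NODE_ID_NULL,
			      0, 1, RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
	if (ret != 0)
		return ret;

	/* Queue group layer: a child of the root. */
	ret = rte_tm_node_add(port_id, QGROUP_NODE_ID, ROOT_NODE_ID,
			      0, 1, RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
	if (ret != 0)
		return ret;

	/* Queue layer: the leaf node ID equals the Tx queue ID. */
	return rte_tm_node_add(port_id, txq_id, QGROUP_NODE_ID,
			       0, 1, RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
}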
Signed-off-by: Qi Zhang
Acked-by: Wenjun Wu
---
 drivers/net/ice/ice_ethdev.h |  7 ----
 drivers/net/ice/ice_tm.c     | 79 ++++--------------------------
 2 files changed, 7 insertions(+), 79 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index fa4981ed14..ae22c29ffc 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -470,7 +470,6 @@ struct ice_tm_shaper_profile {
 struct ice_tm_node {
 	TAILQ_ENTRY(ice_tm_node) node;
 	uint32_t id;
-	uint32_t tc;
 	uint32_t priority;
 	uint32_t weight;
 	uint32_t reference_count;
@@ -484,8 +483,6 @@ struct ice_tm_node {
 /* node type of Traffic Manager */
 enum ice_tm_node_type {
 	ICE_TM_NODE_TYPE_PORT,
-	ICE_TM_NODE_TYPE_TC,
-	ICE_TM_NODE_TYPE_VSI,
 	ICE_TM_NODE_TYPE_QGROUP,
 	ICE_TM_NODE_TYPE_QUEUE,
 	ICE_TM_NODE_TYPE_MAX,
@@ -495,12 +492,8 @@ enum ice_tm_node_type {
 struct ice_tm_conf {
 	struct ice_shaper_profile_list shaper_profile_list;
 	struct ice_tm_node *root; /* root node - port */
-	struct ice_tm_node_list tc_list; /* node list for all the TCs */
-	struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
 	struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
 	struct ice_tm_node_list queue_list; /* node list for all the queues */
-	uint32_t nb_tc_node;
-	uint32_t nb_vsi_node;
 	uint32_t nb_qgroup_node;
 	uint32_t nb_queue_node;
 	bool committed;
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index b570798f07..7ae68c683b 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -43,12 +43,8 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
 	/* initialize node configuration */
 	TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
 	pf->tm_conf.root = NULL;
-	TAILQ_INIT(&pf->tm_conf.tc_list);
-	TAILQ_INIT(&pf->tm_conf.vsi_list);
 	TAILQ_INIT(&pf->tm_conf.qgroup_list);
 	TAILQ_INIT(&pf->tm_conf.queue_list);
-	pf->tm_conf.nb_tc_node = 0;
-	pf->tm_conf.nb_vsi_node = 0;
 	pf->tm_conf.nb_qgroup_node = 0;
 	pf->tm_conf.nb_queue_node = 0;
 	pf->tm_conf.committed = false;
@@ -72,16 +68,6 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
 		rte_free(tm_node);
 	}
 	pf->tm_conf.nb_qgroup_node = 0;
-	while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
-		TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
-		rte_free(tm_node);
-	}
-	pf->tm_conf.nb_vsi_node = 0;
-	while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
-		TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
-		rte_free(tm_node);
-	}
-	pf->tm_conf.nb_tc_node = 0;
 	if (pf->tm_conf.root) {
 		rte_free(pf->tm_conf.root);
 		pf->tm_conf.root = NULL;
@@ -93,8 +79,6 @@ ice_tm_node_search(struct rte_eth_dev *dev,
 		uint32_t node_id, enum ice_tm_node_type *node_type)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
-	struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
 	struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
 	struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
 	struct ice_tm_node *tm_node;
@@ -104,20 +88,6 @@ ice_tm_node_search(struct rte_eth_dev *dev,
 		return pf->tm_conf.root;
 	}

-	TAILQ_FOREACH(tm_node, tc_list, node) {
-		if (tm_node->id == node_id) {
-			*node_type = ICE_TM_NODE_TYPE_TC;
-			return tm_node;
-		}
-	}
-
-	TAILQ_FOREACH(tm_node, vsi_list, node) {
-		if (tm_node->id == node_id) {
-			*node_type = ICE_TM_NODE_TYPE_VSI;
-			return tm_node;
-		}
-	}
-
 	TAILQ_FOREACH(tm_node, qgroup_list, node) {
 		if (tm_node->id == node_id) {
 			*node_type = ICE_TM_NODE_TYPE_QGROUP;
@@ -371,6 +341,8 @@ ice_shaper_profile_del(struct rte_eth_dev *dev,
 	return 0;
 }

+#define MAX_QUEUE_PER_GROUP 8
+
 static int
 ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	      uint32_t parent_node_id, uint32_t priority,
@@ -384,8 +356,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	struct ice_tm_shaper_profile *shaper_profile = NULL;
 	struct ice_tm_node *tm_node;
 	struct ice_tm_node *parent_node;
-	uint16_t tc_nb = 1;
-	uint16_t vsi_nb = 1;
 	int ret;

 	if (!params || !error)
@@ -440,6 +410,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	tm_node->id = node_id;
 	tm_node->parent = NULL;
 	tm_node->reference_count = 0;
+	tm_node->shaper_profile = shaper_profile;
 	tm_node->children = (struct ice_tm_node **)
 			rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
 	rte_memcpy(&tm_node->params, params,
@@ -448,7 +419,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		return 0;
 	}

-	/* TC or queue node */
 	/* check the parent node */
 	parent_node = ice_tm_node_search(dev, parent_node_id,
 					 &parent_node_type);
@@ -458,8 +428,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		return -EINVAL;
 	}
 	if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
-	    parent_node_type != ICE_TM_NODE_TYPE_TC &&
-	    parent_node_type != ICE_TM_NODE_TYPE_VSI &&
 	    parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
 		error->message = "parent is not valid";
 		return -EINVAL;
@@ -475,20 +443,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,

 	/* check the node number */
 	if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
-		/* check the TC number */
-		if (pf->tm_conf.nb_tc_node >= tc_nb) {
-			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
-			error->message = "too many TCs";
-			return -EINVAL;
-		}
-	} else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
-		/* check the VSI number */
-		if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
-			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
-			error->message = "too many VSIs";
-			return -EINVAL;
-		}
-	} else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
 		/* check the queue group number */
 		if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
 			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
@@ -497,7 +451,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		}
 	} else {
 		/* check the queue number */
-		if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+		if (parent_node->reference_count >= MAX_QUEUE_PER_GROUP) {
 			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
 			error->message = "too many queues";
 			return -EINVAL;
@@ -509,7 +463,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		}
 	}

-	/* add the TC or VSI or queue group or queue node */
 	tm_node = rte_zmalloc("ice_tm_node",
 			      sizeof(struct ice_tm_node),
 			      0);
@@ -538,24 +491,12 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	rte_memcpy(&tm_node->params, params,
 		   sizeof(struct rte_tm_node_params));
 	if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
-		TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
-				  tm_node, node);
-		tm_node->tc = pf->tm_conf.nb_tc_node;
-		pf->tm_conf.nb_tc_node++;
-	} else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
-		TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
-				  tm_node, node);
-		tm_node->tc = parent_node->tc;
-		pf->tm_conf.nb_vsi_node++;
-	} else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
 		TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
 				  tm_node, node);
-		tm_node->tc = parent_node->parent->tc;
 		pf->tm_conf.nb_qgroup_node++;
 	} else {
 		TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
 				  tm_node, node);
-		tm_node->tc = parent_node->parent->parent->tc;
 		pf->tm_conf.nb_queue_node++;
 	}
 	tm_node->parent->reference_count++;
@@ -603,15 +544,9 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 		return 0;
 	}

-	/* TC or VSI or queue group or queue node */
+	/* queue group or queue node */
 	tm_node->parent->reference_count--;
-	if (node_type == ICE_TM_NODE_TYPE_TC) {
-		TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
-		pf->tm_conf.nb_tc_node--;
-	} else if (node_type == ICE_TM_NODE_TYPE_VSI) {
-		TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
-		pf->tm_conf.nb_vsi_node--;
-	} else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+	if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
 		TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
 		pf->tm_conf.nb_qgroup_node--;
 	} else {
@@ -872,7 +807,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,

 	/* config vsi node */
 	vsi_node = ice_get_vsi_node(hw);
-	tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list);
+	tm_node = pf->tm_conf.root;

 	ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
 	if (ret_val) {

From patchwork Fri Jan 5 21:12:36 2024
X-Patchwork-Submitter: Qi Zhang
X-Patchwork-Id: 135759
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang
Subject: [PATCH v3 2/3] net/ice: refactor tm config data structure
Date: Fri, 5 Jan 2024 16:12:36 -0500
Message-Id: <20240105211237.394105-3-qi.z.zhang@intel.com>
In-Reply-To: <20240105211237.394105-1-qi.z.zhang@intel.com>
References: <20240105135906.383394-1-qi.z.zhang@intel.com>
 <20240105211237.394105-1-qi.z.zhang@intel.com>

Simplify struct ice_tm_conf by removing the per-level node lists.

Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_ethdev.h |   5 +-
 drivers/net/ice/ice_tm.c     | 244 ++++++++++++++++-------------------
 2 files changed, 111 insertions(+), 138 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index ae22c29ffc..008a7a23b9 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -472,6 +472,7 @@ struct ice_tm_node {
 	uint32_t id;
 	uint32_t priority;
 	uint32_t weight;
+	uint32_t level;
 	uint32_t reference_count;
 	struct ice_tm_node *parent;
 	struct ice_tm_node **children;
@@ -492,10 +493,6 @@ enum ice_tm_node_type {
 struct ice_tm_conf {
 	struct ice_shaper_profile_list shaper_profile_list;
 	struct ice_tm_node *root; /* root node - port */
-	struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
-	struct ice_tm_node_list queue_list; /* node list for all the queues */
-	uint32_t nb_qgroup_node;
-	uint32_t nb_queue_node;
 	bool committed;
 	bool clear_on_fail;
 };
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 7ae68c683b..c579662843 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -6,6 +6,9 @@
 #include "ice_ethdev.h"
 #include "ice_rxtx.h"

+#define MAX_CHILDREN_PER_SCHED_NODE 8
+#define MAX_CHILDREN_PER_TM_NODE 256
+
 static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 				int clear_on_fail,
 				__rte_unused struct rte_tm_error *error);
@@ -43,66 +46,30 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
 	/* initialize node configuration */
 	TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
 	pf->tm_conf.root = NULL;
-	TAILQ_INIT(&pf->tm_conf.qgroup_list);
-	TAILQ_INIT(&pf->tm_conf.queue_list);
-	pf->tm_conf.nb_qgroup_node = 0;
-	pf->tm_conf.nb_queue_node = 0;
 	pf->tm_conf.committed = false;
 	pf->tm_conf.clear_on_fail = false;
 }

-void
-ice_tm_conf_uninit(struct rte_eth_dev *dev)
+static void free_node(struct ice_tm_node *root)
 {
-	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	struct ice_tm_node *tm_node;
+	uint32_t i;

-	/* clear node configuration */
-	while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
-		TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
-		rte_free(tm_node);
-	}
-	pf->tm_conf.nb_queue_node = 0;
-	while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
-		TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
-		rte_free(tm_node);
-	}
-	pf->tm_conf.nb_qgroup_node = 0;
-	if (pf->tm_conf.root) {
-		rte_free(pf->tm_conf.root);
-		pf->tm_conf.root = NULL;
-	}
+	if (root == NULL)
+		return;
+
+	for (i = 0; i < root->reference_count; i++)
+		free_node(root->children[i]);
+
+	rte_free(root);
 }

-static inline struct ice_tm_node *
-ice_tm_node_search(struct rte_eth_dev *dev,
-		uint32_t node_id, enum ice_tm_node_type *node_type)
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
-	struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
-	struct ice_tm_node *tm_node;
-
-	if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
-		*node_type = ICE_TM_NODE_TYPE_PORT;
-		return pf->tm_conf.root;
-	}

-	TAILQ_FOREACH(tm_node, qgroup_list, node) {
-		if (tm_node->id == node_id) {
-			*node_type = ICE_TM_NODE_TYPE_QGROUP;
-			return tm_node;
-		}
-	}
-
-	TAILQ_FOREACH(tm_node, queue_list, node) {
-		if (tm_node->id == node_id) {
-			*node_type = ICE_TM_NODE_TYPE_QUEUE;
-			return tm_node;
-		}
-	}
-
-	return NULL;
+	free_node(pf->tm_conf.root);
+	pf->tm_conf.root = NULL;
 }

@@ -195,11 +162,29 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
 	return 0;
 }

+static struct ice_tm_node *
+find_node(struct ice_tm_node *root, uint32_t id)
+{
+	uint32_t i;
+
+	if (root == NULL || root->id == id)
+		return root;
+
+	for (i = 0; i < root->reference_count; i++) {
+		struct ice_tm_node *node = find_node(root->children[i], id);
+
+		if (node)
+			return node;
+	}
+
+	return NULL;
+}
+
 static int
 ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
 		int *is_leaf, struct rte_tm_error *error)
 {
-	enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_tm_node *tm_node;

 	if (!is_leaf || !error)
@@ -212,14 +197,14 @@ ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
 	}

 	/* check if the node id exists */
-	tm_node = ice_tm_node_search(dev, node_id, &node_type);
+	tm_node = find_node(pf->tm_conf.root, node_id);
 	if (!tm_node) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
 		error->message = "no such node";
 		return -EINVAL;
 	}

-	if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+	if (tm_node->level == ICE_TM_NODE_TYPE_QUEUE)
 		*is_leaf = true;
 	else
 		*is_leaf = false;
@@ -341,8 +326,6 @@ ice_shaper_profile_del(struct rte_eth_dev *dev,
 	return 0;
 }

-#define MAX_QUEUE_PER_GROUP 8
-
 static int
 ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	      uint32_t parent_node_id, uint32_t priority,
@@ -351,8 +334,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		struct rte_tm_error *error)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
-	enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
 	struct ice_tm_shaper_profile *shaper_profile = NULL;
 	struct ice_tm_node *tm_node;
 	struct ice_tm_node *parent_node;
@@ -367,7 +348,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		return ret;

 	/* check if the node is already existed */
-	if (ice_tm_node_search(dev, node_id, &node_type)) {
+	if (find_node(pf->tm_conf.root, node_id)) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
 		error->message = "node id already used";
 		return -EINVAL;
@@ -402,17 +383,19 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	}

 	/* add the root node */
-	tm_node = rte_zmalloc("ice_tm_node",
-			      sizeof(struct ice_tm_node),
+	tm_node = rte_zmalloc(NULL,
+			      sizeof(struct ice_tm_node) +
+			      sizeof(struct ice_tm_node *) * MAX_CHILDREN_PER_TM_NODE,
 			      0);
 	if (!tm_node)
 		return -ENOMEM;
 	tm_node->id = node_id;
+	tm_node->level = ICE_TM_NODE_TYPE_PORT;
 	tm_node->parent = NULL;
 	tm_node->reference_count = 0;
 	tm_node->shaper_profile = shaper_profile;
-	tm_node->children = (struct ice_tm_node **)
-			rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+	tm_node->children =
+		(void *)((uint8_t *)tm_node + sizeof(struct ice_tm_node));
 	rte_memcpy(&tm_node->params, params,
 		   sizeof(struct rte_tm_node_params));
 	pf->tm_conf.root = tm_node;
@@ -420,29 +403,28 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	}

 	/* check the parent node */
-	parent_node = ice_tm_node_search(dev, parent_node_id,
-					 &parent_node_type);
+	parent_node = find_node(pf->tm_conf.root, parent_node_id);
 	if (!parent_node) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
 		error->message = "parent not exist";
 		return -EINVAL;
 	}
-	if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
-	    parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
+	if (parent_node->level != ICE_TM_NODE_TYPE_PORT &&
+	    parent_node->level != ICE_TM_NODE_TYPE_QGROUP) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
 		error->message = "parent is not valid";
 		return -EINVAL;
 	}
 	/* check level */
 	if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
-	    level_id != (uint32_t)parent_node_type + 1) {
+	    level_id != parent_node->level + 1) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
 		error->message = "Wrong level";
 		return -EINVAL;
 	}

 	/* check the node number */
-	if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+	if (parent_node->level == ICE_TM_NODE_TYPE_PORT) {
 		/* check the queue group number */
 		if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
 			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
@@ -451,7 +433,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		}
 	} else {
 		/* check the queue number */
-		if (parent_node->reference_count >= MAX_QUEUE_PER_GROUP) {
+		if (parent_node->reference_count >=
+		    MAX_CHILDREN_PER_SCHED_NODE) {
 			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
 			error->message = "too many queues";
 			return -EINVAL;
@@ -463,8 +446,9 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		}
 	}

-	tm_node = rte_zmalloc("ice_tm_node",
-			      sizeof(struct ice_tm_node),
+	tm_node = rte_zmalloc(NULL,
+			      sizeof(struct ice_tm_node) +
+			      sizeof(struct ice_tm_node *) * MAX_CHILDREN_PER_TM_NODE,
 			      0);
 	if (!tm_node)
 		return -ENOMEM;
@@ -473,9 +457,10 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	tm_node->weight = weight;
 	tm_node->reference_count = 0;
 	tm_node->parent = parent_node;
+	tm_node->level = parent_node->level + 1;
 	tm_node->shaper_profile = shaper_profile;
-	tm_node->children = (struct ice_tm_node **)
-			rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+	tm_node->children =
+		(void *)((uint8_t *)tm_node + sizeof(struct ice_tm_node));
 	tm_node->parent->children[tm_node->parent->reference_count] = tm_node;

 	if (tm_node->priority != 0 && level_id != ICE_TM_NODE_TYPE_QUEUE &&
@@ -490,15 +475,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	rte_memcpy(&tm_node->params, params,
 		   sizeof(struct rte_tm_node_params));

-	if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
-		TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
-				  tm_node, node);
-		pf->tm_conf.nb_qgroup_node++;
-	} else {
-		TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
-				  tm_node, node);
-		pf->tm_conf.nb_queue_node++;
-	}
 	tm_node->parent->reference_count++;

 	return 0;
@@ -509,8 +485,8 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 		   struct rte_tm_error *error)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
 	struct ice_tm_node *tm_node;
+	uint32_t i, j;

 	if (!error)
 		return -EINVAL;
@@ -522,7 +498,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 	}

 	/* check if the node id exists */
-	tm_node = ice_tm_node_search(dev, node_id, &node_type);
+	tm_node = find_node(pf->tm_conf.root, node_id);
 	if (!tm_node) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
 		error->message = "no such node";
@@ -538,21 +514,21 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 	}

 	/* root node */
-	if (node_type == ICE_TM_NODE_TYPE_PORT) {
+	if (tm_node->level == ICE_TM_NODE_TYPE_PORT) {
 		rte_free(tm_node);
 		pf->tm_conf.root = NULL;
 		return 0;
 	}

 	/* queue group or queue node */
+	for (i = 0; i < tm_node->parent->reference_count; i++)
+		if (tm_node->parent->children[i] == tm_node)
+			break;
+
+	for (j = i ; j < tm_node->parent->reference_count - 1; j++)
+		tm_node->parent->children[j] = tm_node->parent->children[j + 1];
+
 	tm_node->parent->reference_count--;
-	if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
-		TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
-		pf->tm_conf.nb_qgroup_node--;
-	} else {
-		TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
-		pf->tm_conf.nb_queue_node--;
-	}
 	rte_free(tm_node);

 	return 0;
@@ -708,9 +684,9 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
 	struct ice_sched_node *vsi_node = ice_get_vsi_node(hw);
-	struct ice_tm_node *tm_node;
+	struct ice_tm_node *root = pf->tm_conf.root;
+	uint32_t i;
 	int ret;

 	/* reset vsi_node */
@@ -720,8 +696,12 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
 		return ret;
 	}

-	/* reset queue group nodes */
-	TAILQ_FOREACH(tm_node, qgroup_list, node) {
+	if (root == NULL)
+		return 0;
+
+	for (i = 0; i < root->reference_count; i++) {
+		struct ice_tm_node *tm_node = root->children[i];
+
 		if (tm_node->sched_node == NULL)
 			continue;

@@ -774,9 +754,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
-	struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
-	struct ice_tm_node *tm_node;
+	struct ice_tm_node *root;
 	struct ice_sched_node *vsi_node = NULL;
 	struct ice_sched_node *queue_node;
 	struct ice_tx_queue *txq;
@@ -807,14 +785,14 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,

 	/* config vsi node */
 	vsi_node = ice_get_vsi_node(hw);
-	tm_node = pf->tm_conf.root;
+	root = pf->tm_conf.root;

-	ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
+	ret_val = ice_set_node_rate(hw, root, vsi_node);
 	if (ret_val) {
 		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 		PMD_DRV_LOG(ERR,
 			    "configure vsi node %u bandwidth failed",
-			    tm_node->id);
+			    root->id);
 		goto add_leaf;
 	}

@@ -825,13 +803,27 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
 	idx_vsi_child = 0;
 	idx_qg = 0;

-	TAILQ_FOREACH(tm_node, qgroup_list, node) {
+	if (root == NULL)
+		goto commit;
+
+	for (i = 0; i < root->reference_count; i++) {
+		struct ice_tm_node *tm_node = root->children[i];
 		struct ice_tm_node *tm_child_node;
 		struct ice_sched_node *qgroup_sched_node =
 			vsi_node->children[idx_vsi_child]->children[idx_qg];
+		uint32_t j;
+
+		ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
+		if (ret_val) {
+			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+			PMD_DRV_LOG(ERR,
+				    "configure queue group node %u failed",
+				    tm_node->id);
+			goto reset_leaf;
+		}

-		for (i = 0; i < tm_node->reference_count; i++) {
-			tm_child_node = tm_node->children[i];
+		for (j = 0; j < tm_node->reference_count; j++) {
+			tm_child_node = tm_node->children[j];
 			qid = tm_child_node->id;
 			ret_val = ice_tx_queue_start(dev, qid);
 			if (ret_val) {
@@ -847,25 +839,25 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
 				PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
 				goto reset_leaf;
 			}
-			if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
-				continue;
-			ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+			if (queue_node->info.parent_teid != qgroup_sched_node->info.node_teid) {
+				ret_val = ice_move_recfg_lan_txq(dev, queue_node,
+								 qgroup_sched_node, qid);
+				if (ret_val) {
+					error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+					PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+					goto reset_leaf;
+				}
+			}
+			ret_val = ice_cfg_hw_node(hw, tm_child_node, queue_node);
 			if (ret_val) {
 				error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-				PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+				PMD_DRV_LOG(ERR,
+					    "configure queue group node %u failed",
+					    tm_node->id);
 				goto reset_leaf;
 			}
 		}
-		ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-			PMD_DRV_LOG(ERR,
-				    "configure queue group node %u failed",
-				    tm_node->id);
-			goto reset_leaf;
-		}
-
 		idx_qg++;
 		if (idx_qg >= nb_qg) {
 			idx_qg = 0;
@@ -878,23 +870,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
 		}
 	}

-	/* config queue nodes */
-	TAILQ_FOREACH(tm_node, queue_list, node) {
-		qid = tm_node->id;
-		txq = dev->data->tx_queues[qid];
-		q_teid = txq->q_teid;
-		queue_node = ice_sched_get_node(hw->port_info, q_teid);
-
-		ret_val = ice_cfg_hw_node(hw, tm_node, queue_node);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-			PMD_DRV_LOG(ERR,
-				    "configure queue group node %u failed",
-				    tm_node->id);
-			goto reset_leaf;
-		}
-	}
-
+commit:
 	pf->tm_conf.committed = true;
 	pf->tm_conf.clear_on_fail = clear_on_fail;

From patchwork Fri Jan 5 21:12:37 2024
X-Patchwork-Submitter: Qi Zhang
X-Patchwork-Id: 135760
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang
Subject: [PATCH v3 3/3] doc: update ice document for qos
Date: Fri, 5 Jan 2024 16:12:37 -0500
Message-Id: <20240105211237.394105-4-qi.z.zhang@intel.com>
In-Reply-To: <20240105211237.394105-1-qi.z.zhang@intel.com>
References: <20240105135906.383394-1-qi.z.zhang@intel.com>
 <20240105211237.394105-1-qi.z.zhang@intel.com>

Add a description of the ice PMD's rte_tm capabilities.

Signed-off-by: Qi Zhang
Acked-by: Wenjun Wu
---
 doc/guides/nics/ice.rst | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index bafb3ba022..3d381a266b 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -352,6 +352,25 @@ queue 3 using a raw pattern::

 Currently, raw pattern support is limited to the FDIR and Hash engines.

+Traffic Management Support
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ice PMD provides support for the Traffic Management API (RTE_TM),
+allowing users to offload a 3-layer Tx scheduler on the E810 NIC:
+
+- ``Port Layer``
+
+  This is the root layer. It supports peak bandwidth configuration and up to 32 children.
+
+- ``Queue Group Layer``
+
+  This is the middle layer. It supports peak/committed bandwidth, weight, and priority
+  configurations, and up to 8 children.
+
+- ``Queue Layer``
+
+  This is the leaf layer. It supports peak/committed bandwidth, weight, and priority configurations.
+
 Additional Options
 ++++++++++++++++++
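
To round out the documentation above, here is a small sketch of how the
queue group layer's bandwidth knobs could be exercised through the
generic API: create a shaper profile, attach it to a queue group node,
then commit the hierarchy. It is illustrative only and not part of the
patch; the IDs, rates, and bucket sizes are hypothetical, and it assumes
a root node with ID 100 has already been added (as in the sketch after
patch 1/3).

#include <string.h>
#include <rte_tm.h>

/* Hypothetical IDs -- not defined anywhere in this patch set. */
#define QGROUP_SHAPER_ID 1
#define QGROUP_NODE_ID   200

static int
shape_queue_group(uint16_t port_id)
{
	struct rte_tm_shaper_params sp;
	struct rte_tm_node_params np;
	struct rte_tm_error err;
	int ret;

	/* 1 Gbps peak, 500 Mbps committed; rte_tm rates are in bytes/sec. */
	memset(&sp, 0, sizeof(sp));
	sp.peak.rate = 1000000000 / 8;
	sp.peak.size = 4096;
	sp.committed.rate = 500000000 / 8;
	sp.committed.size = 4096;
	ret = rte_tm_shaper_profile_add(port_id, QGROUP_SHAPER_ID, &sp, &err);
	if (ret != 0)
		return ret;

	/* Attach the profile to a queue group node under root node 100. */
	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = QGROUP_SHAPER_ID;
	ret = rte_tm_node_add(port_id, QGROUP_NODE_ID, 100, 0, 1,
			      RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
	if (ret != 0)
		return ret;

	/* Apply the whole hierarchy to hardware. */
	return rte_tm_hierarchy_commit(port_id, 1 /* clear_on_fail */, &err);
}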