From patchwork Tue Jan 2 19:42:27 2024
X-Patchwork-Submitter: Qi Zhang
X-Patchwork-Id: 135680
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang
Subject: [PATCH 1/6] net/ice: remove redundant code
Date: Tue, 2 Jan 2024 14:42:27 -0500
Message-Id: <20240102194232.3614305-2-qi.z.zhang@intel.com>
In-Reply-To: <20240102194232.3614305-1-qi.z.zhang@intel.com>
References: <20240102194232.3614305-1-qi.z.zhang@intel.com>

The committed flag for Tx scheduler configuration is not used in
PF-only mode; remove the redundant code.

Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_tm.c | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index f5ea47ae83..9e2f981fa3 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -390,13 +390,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	if (!params || !error)
 		return -EINVAL;
 
-	/* if already committed */
-	if (pf->tm_conf.committed) {
-		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-		error->message = "already committed";
-		return -EINVAL;
-	}
-
 	ret = ice_node_param_check(pf, node_id, priority, weight,
 				   params, error);
 	if (ret)
@@ -579,13 +572,6 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 	if (!error)
 		return -EINVAL;
 
-	/* if already committed */
-	if (pf->tm_conf.committed) {
-		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-		error->message = "already committed";
-		return -EINVAL;
-	}
-
 	if (node_id == RTE_TM_NODE_ID_NULL) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
 		error->message = "invalid node id";

From patchwork Tue Jan 2 19:42:28 2024
X-Patchwork-Submitter: Qi Zhang
X-Patchwork-Id: 135681
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang
Subject: [PATCH 2/6] net/ice: support VSI level bandwidth config
Date: Tue, 2 Jan 2024 14:42:28 -0500
Message-Id: <20240102194232.3614305-3-qi.z.zhang@intel.com>
In-Reply-To: <20240102194232.3614305-1-qi.z.zhang@intel.com>
References: <20240102194232.3614305-1-qi.z.zhang@intel.com>

Enable the configuration of peak and committed rates for a Tx scheduler
node at the VSI level. This patch also consolidates rate configuration
across the various levels into a single function, ice_set_node_rate().

Signed-off-by: Qi Zhang
---
 drivers/net/ice/base/ice_sched.c |   2 +-
 drivers/net/ice/base/ice_sched.h |   4 +-
 drivers/net/ice/ice_tm.c         | 142 +++++++++++++++++++------------
 3 files changed, 91 insertions(+), 57 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index a4d31647fe..23cc1ee50a 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4429,7 +4429,7 @@ ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node,
  * NOTE: Caller provides the correct SRL node in case of shared profile
  * settings.
  */
-static enum ice_status
+enum ice_status
 ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
 			  enum ice_rl_type rl_type, u32 bw)
 {
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 4b68f3f535..a600ff9a24 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -237,5 +237,7 @@ enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status ice_sched_replay_root_node_bw(struct ice_port_info *pi);
 enum ice_status
 ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx);
-
+enum ice_status
+ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
+			  enum ice_rl_type rl_type, u32 bw);
 #endif /* _ICE_SCHED_H_ */
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 9e2f981fa3..d9187af8af 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -663,6 +663,55 @@ static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int ice_set_node_rate(struct ice_hw *hw,
+			     struct ice_tm_node *tm_node,
+			     struct ice_sched_node *sched_node)
+{
+	enum ice_status status;
+	bool reset = false;
+	uint32_t peak = 0;
+	uint32_t committed = 0;
+	uint32_t rate;
+
+	if (tm_node == NULL || tm_node->shaper_profile == NULL) {
+		reset = true;
+	} else {
+		peak = (uint32_t)tm_node->shaper_profile->profile.peak.rate;
+		committed = (uint32_t)tm_node->shaper_profile->profile.committed.rate;
+	}
+
+	if (reset || peak == 0)
+		rate = ICE_SCHED_DFLT_BW;
+	else
+		rate = peak / 1000 * BITS_PER_BYTE;
+
+
+	status = ice_sched_set_node_bw_lmt(hw->port_info,
+					   sched_node,
+					   ICE_MAX_BW,
+					   rate);
+	if (status) {
+		PMD_DRV_LOG(ERR, "Failed to set max bandwidth for node %u", tm_node->id);
+		return -EINVAL;
+	}
+
+	if (reset || committed == 0)
+		rate = ICE_SCHED_DFLT_BW;
+	else
+		rate = committed / 1000 * BITS_PER_BYTE;
+
+	status = ice_sched_set_node_bw_lmt(hw->port_info,
+					   sched_node,
+					   ICE_MIN_BW,
+					   rate);
+	if (status) {
+		PMD_DRV_LOG(ERR, "Failed to set min bandwidth for node %u", tm_node->id);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 				int clear_on_fail,
 				__rte_unused struct rte_tm_error *error)
@@ -673,13 +722,11 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
 	struct ice_tm_node *tm_node;
 	struct ice_sched_node *node;
-	struct ice_sched_node *vsi_node;
+	struct ice_sched_node *vsi_node = NULL;
 	struct ice_sched_node *queue_node;
 	struct ice_tx_queue *txq;
 	struct ice_vsi *vsi;
 	int ret_val = ICE_SUCCESS;
-	uint64_t peak = 0;
-	uint64_t committed = 0;
 	uint8_t priority;
 	uint32_t i;
 	uint32_t idx_vsi_child;
@@ -704,6 +751,18 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	for (i = 0; i < vsi_layer; i++)
 		node = node->children[0];
 	vsi_node = node;
+
+	tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list);
+
+	ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
+	if (ret_val) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		PMD_DRV_LOG(ERR,
+			    "configure vsi node %u bandwidth failed",
+			    tm_node->id);
+		goto reset_vsi;
+	}
+
 	nb_vsi_child = vsi_node->num_children;
 	nb_qg = vsi_node->children[0]->num_children;
 
@@ -722,7 +781,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		if (ret_val) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			PMD_DRV_LOG(ERR, "start queue %u failed", qid);
-			goto fail_clear;
+			goto reset_vsi;
 		}
 		txq = dev->data->tx_queues[qid];
 		q_teid = txq->q_teid;
@@ -730,7 +789,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		if (queue_node == NULL) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
-			goto fail_clear;
+			goto reset_vsi;
 		}
 		if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
 			continue;
@@ -738,28 +797,19 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		if (ret_val) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			PMD_DRV_LOG(ERR, "move queue %u failed", qid);
-			goto fail_clear;
+			goto reset_vsi;
 		}
 	}
-	if (tm_node->reference_count != 0 && tm_node->shaper_profile) {
-		uint32_t node_teid = qgroup_sched_node->info.node_teid;
-		/* Transfer from Byte per seconds to Kbps */
-		peak = tm_node->shaper_profile->profile.peak.rate;
-		peak = peak / 1000 * BITS_PER_BYTE;
-		ret_val = ice_sched_set_node_bw_lmt_per_tc(hw->port_info,
-							   node_teid,
-							   ICE_AGG_TYPE_Q,
-							   tm_node->tc,
-							   ICE_MAX_BW,
-							   (u32)peak);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-			PMD_DRV_LOG(ERR,
-				    "configure queue group %u bandwidth failed",
-				    tm_node->id);
-			goto fail_clear;
-		}
+
+	ret_val = ice_set_node_rate(hw, tm_node, qgroup_sched_node);
+	if (ret_val) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		PMD_DRV_LOG(ERR,
+			    "configure queue group %u bandwidth failed",
+			    tm_node->id);
+		goto reset_vsi;
 	}
+
 	priority = 7 - tm_node->priority;
 	ret_val = ice_sched_cfg_sibl_node_prio_lock(hw->port_info, qgroup_sched_node,
 						    priority);
@@ -777,7 +827,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		if (idx_vsi_child >= nb_vsi_child) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			PMD_DRV_LOG(ERR, "too many queues");
-			goto fail_clear;
+			goto reset_vsi;
 		}
 	}
 
@@ -786,37 +836,17 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		txq = dev->data->tx_queues[qid];
 		vsi = txq->vsi;
 		q_teid = txq->q_teid;
-		if (tm_node->shaper_profile) {
-			/* Transfer from Byte per seconds to Kbps */
-			if (tm_node->shaper_profile->profile.peak.rate > 0) {
-				peak = tm_node->shaper_profile->profile.peak.rate;
-				peak = peak / 1000 * BITS_PER_BYTE;
-				ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
-						tm_node->tc, tm_node->id,
-						ICE_MAX_BW, (u32)peak);
-				if (ret_val) {
-					error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-					PMD_DRV_LOG(ERR,
-						    "configure queue %u peak bandwidth failed",
-						    tm_node->id);
-					goto fail_clear;
-				}
-			}
-			if (tm_node->shaper_profile->profile.committed.rate > 0) {
-				committed = tm_node->shaper_profile->profile.committed.rate;
-				committed = committed / 1000 * BITS_PER_BYTE;
-				ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
-						tm_node->tc, tm_node->id,
-						ICE_MIN_BW, (u32)committed);
-				if (ret_val) {
-					error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-					PMD_DRV_LOG(ERR,
-						    "configure queue %u committed bandwidth failed",
-						    tm_node->id);
-					goto fail_clear;
-				}
-			}
+
+		queue_node = ice_sched_get_node(hw->port_info, q_teid);
+		ret_val = ice_set_node_rate(hw, tm_node, queue_node);
+		if (ret_val) {
+			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+			PMD_DRV_LOG(ERR,
+				    "configure queue %u bandwidth failed",
+				    tm_node->id);
+			goto reset_vsi;
 		}
+
 		priority = 7 - tm_node->priority;
 		ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
 						 &q_teid, &priority);
@@ -838,6 +868,8 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 
 	return ret_val;
 
+reset_vsi:
+	ice_set_node_rate(hw, NULL, vsi_node);
 fail_clear:
 	/* clear all the traffic manager configuration */
 	if (clear_on_fail) {

From patchwork Tue Jan 2 19:42:29 2024
X-Patchwork-Submitter: Qi Zhang
X-Patchwork-Id: 135682
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang
Subject: [PATCH 3/6] net/ice: support queue group weight configuration
Date: Tue, 2 Jan 2024 14:42:29 -0500
Message-Id: <20240102194232.3614305-4-qi.z.zhang@intel.com>
In-Reply-To: <20240102194232.3614305-1-qi.z.zhang@intel.com>
References: <20240102194232.3614305-1-qi.z.zhang@intel.com>

Enable the configuration of weight for a Tx scheduler node at the
queue group level. This patch also consolidates weight configuration
across the various levels by exposing the base code API
ice_sched_cfg_node_bw_alloc().
Signed-off-by: Qi Zhang
---
 drivers/net/ice/base/ice_sched.c |  2 +-
 drivers/net/ice/base/ice_sched.h |  3 +++
 drivers/net/ice/ice_tm.c         | 27 ++++++++++++++++++++-------
 3 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 23cc1ee50a..a1dd0c6ace 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3020,7 +3020,7 @@ ice_sched_update_elem(struct ice_hw *hw, struct ice_sched_node *node,
  *
  * This function configures node element's BW allocation.
  */
-static enum ice_status
+enum ice_status
 ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node,
 			    enum ice_rl_type rl_type, u16 bw_alloc)
 {
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index a600ff9a24..5b35fd564e 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -240,4 +240,7 @@ ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx);
 enum ice_status
 ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
 			  enum ice_rl_type rl_type, u32 bw);
+enum ice_status
+ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node,
+			    enum ice_rl_type rl_type, u16 bw_alloc);
 #endif /* _ICE_SCHED_H_ */
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index d9187af8af..604d045e2c 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -529,7 +529,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		PMD_DRV_LOG(WARNING, "priority != 0 not supported in level %d",
 			    level_id);
 
-	if (tm_node->weight != 1 && level_id != ICE_TM_NODE_TYPE_QUEUE)
+	if (tm_node->weight != 1 &&
+	    level_id != ICE_TM_NODE_TYPE_QUEUE && level_id != ICE_TM_NODE_TYPE_QGROUP)
 		PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d",
 			    level_id);
 
@@ -725,7 +726,6 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	struct ice_sched_node *vsi_node = NULL;
 	struct ice_sched_node *queue_node;
 	struct ice_tx_queue *txq;
-	struct ice_vsi *vsi;
 	int ret_val = ICE_SUCCESS;
 	uint8_t priority;
 	uint32_t i;
@@ -819,6 +819,18 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 				    tm_node->priority);
 			goto fail_clear;
 		}
+
+		ret_val = ice_sched_cfg_node_bw_alloc(hw, qgroup_sched_node,
+						      ICE_MAX_BW,
+						      (uint16_t)tm_node->weight);
+		if (ret_val) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+			PMD_DRV_LOG(ERR, "configure queue group %u weight %u failed",
+				    tm_node->id,
+				    tm_node->weight);
+			goto fail_clear;
+		}
+
 		idx_qg++;
 		if (idx_qg >= nb_qg) {
 			idx_qg = 0;
@@ -834,7 +846,6 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	TAILQ_FOREACH(tm_node, queue_list, node) {
 		qid = tm_node->id;
 		txq = dev->data->tx_queues[qid];
-		vsi = txq->vsi;
 		q_teid = txq->q_teid;
 
@@ -856,12 +867,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			goto fail_clear;
 		}
 
-		ret_val = ice_cfg_q_bw_alloc(hw->port_info, vsi->idx,
-					     tm_node->tc, tm_node->id,
-					     ICE_MAX_BW, (u32)tm_node->weight);
+		queue_node = ice_sched_get_node(hw->port_info, q_teid);
+		ret_val = ice_sched_cfg_node_bw_alloc(hw, queue_node, ICE_MAX_BW,
+						      (uint16_t)tm_node->weight);
 		if (ret_val) {
 			error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
-			PMD_DRV_LOG(ERR, "configure queue %u weight failed", tm_node->weight);
+			PMD_DRV_LOG(ERR, "configure queue %u weight %u failed",
+				    tm_node->id,
+				    tm_node->weight);
 			goto fail_clear;
 		}
 	}

From patchwork Tue Jan 2 19:42:30 2024
X-Patchwork-Submitter: Qi Zhang
X-Patchwork-Id: 135683
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang
Subject: [PATCH 4/6] net/ice: refactor hardware Tx sched node config
Date: Tue, 2 Jan 2024 14:42:30 -0500
Message-Id: <20240102194232.3614305-5-qi.z.zhang@intel.com>
In-Reply-To: <20240102194232.3614305-1-qi.z.zhang@intel.com>
References: <20240102194232.3614305-1-qi.z.zhang@intel.com>
Consolidate Tx scheduler node configuration into a single function,
ice_cfg_hw_node(), where the rate limit, weight, and priority are
configured for both the queue group level and the queue level.

Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_tm.c | 97 ++++++++++++++++++++--------------------
 1 file changed, 49 insertions(+), 48 deletions(-)

diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 604d045e2c..20cc47fff1 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -713,6 +713,49 @@ static int ice_set_node_rate(struct ice_hw *hw,
 	return 0;
 }
 
+static int ice_cfg_hw_node(struct ice_hw *hw,
+			   struct ice_tm_node *tm_node,
+			   struct ice_sched_node *sched_node)
+{
+	enum ice_status status;
+	uint8_t priority;
+	uint16_t weight;
+	int ret;
+
+	ret = ice_set_node_rate(hw, tm_node, sched_node);
+	if (ret) {
+		PMD_DRV_LOG(ERR,
+			    "configure queue group %u bandwidth failed",
+			    sched_node->info.node_teid);
+		return ret;
+	}
+
+	priority = tm_node ? (7 - tm_node->priority) : 0;
+	status = ice_sched_cfg_sibl_node_prio(hw->port_info,
+					      sched_node,
+					      priority);
+	if (status) {
+		PMD_DRV_LOG(ERR, "configure node %u priority %u failed",
+			    sched_node->info.node_teid,
+			    priority);
+		return -EINVAL;
+	}
+
+	weight = tm_node ? (uint16_t)tm_node->weight : 4;
+
+	status = ice_sched_cfg_node_bw_alloc(hw, sched_node,
+					     ICE_MAX_BW,
+					     weight);
+	if (status) {
+		PMD_DRV_LOG(ERR, "configure node %u weight %u failed",
+			    sched_node->info.node_teid,
+			    weight);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 				int clear_on_fail,
 				__rte_unused struct rte_tm_error *error)
@@ -726,8 +769,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	struct ice_sched_node *vsi_node = NULL;
 	struct ice_sched_node *queue_node;
 	struct ice_tx_queue *txq;
-	int ret_val = ICE_SUCCESS;
-	uint8_t priority;
+	int ret_val = 0;
 	uint32_t i;
 	uint32_t idx_vsi_child;
 	uint32_t idx_qg;
@@ -801,36 +843,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			}
 		}
 
-		ret_val = ice_set_node_rate(hw, tm_node, qgroup_sched_node);
+		ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
 		if (ret_val) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			PMD_DRV_LOG(ERR,
-				    "configure queue group %u bandwidth failed",
+				    "configure queue group node %u failed",
 				    tm_node->id);
 			goto reset_vsi;
 		}
 
-		priority = 7 - tm_node->priority;
-		ret_val = ice_sched_cfg_sibl_node_prio_lock(hw->port_info, qgroup_sched_node,
-							    priority);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
-			PMD_DRV_LOG(ERR, "configure queue group %u priority failed",
-				    tm_node->priority);
-			goto fail_clear;
-		}
-
-		ret_val = ice_sched_cfg_node_bw_alloc(hw, qgroup_sched_node,
-						      ICE_MAX_BW,
-						      (uint16_t)tm_node->weight);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
-			PMD_DRV_LOG(ERR, "configure queue group %u weight %u failed",
-				    tm_node->id,
-				    tm_node->weight);
-			goto fail_clear;
-		}
-
 		idx_qg++;
 		if (idx_qg >= nb_qg) {
 			idx_qg = 0;
@@ -847,36 +868,16 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		qid = tm_node->id;
 		txq = dev->data->tx_queues[qid];
 		q_teid = txq->q_teid;
-		queue_node = ice_sched_get_node(hw->port_info, q_teid);
-		ret_val = ice_set_node_rate(hw, tm_node, queue_node);
+
+		ret_val = ice_cfg_hw_node(hw, tm_node, queue_node);
 		if (ret_val) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			PMD_DRV_LOG(ERR,
-				    "configure queue %u bandwidth failed",
+				    "configure queue group node %u failed",
 				    tm_node->id);
 			goto reset_vsi;
 		}
-
-		priority = 7 - tm_node->priority;
-		ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
-						 &q_teid, &priority);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
-			PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
-			goto fail_clear;
-		}
-
-		queue_node = ice_sched_get_node(hw->port_info, q_teid);
-		ret_val = ice_sched_cfg_node_bw_alloc(hw, queue_node, ICE_MAX_BW,
-						      (uint16_t)tm_node->weight);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
-			PMD_DRV_LOG(ERR, "configure queue %u weight %u failed",
-				    tm_node->id,
-				    tm_node->weight);
-			goto fail_clear;
-		}
 	}
 
 	return ret_val;

From patchwork Tue Jan 2 19:42:31 2024
X-Patchwork-Submitter: Qi Zhang
X-Patchwork-Id: 135684
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang
Subject: [PATCH 5/6] net/ice: reset Tx sched node during commit
Date: Tue, 2 Jan 2024 14:42:31 -0500
Message-Id: <20240102194232.3614305-6-qi.z.zhang@intel.com>
In-Reply-To: <20240102194232.3614305-1-qi.z.zhang@intel.com>
References: <20240102194232.3614305-1-qi.z.zhang@intel.com>

1. Always reset all Tx scheduler nodes at the beginning of a commit
   action. This prevents unexpected leftovers from a previous commit.

2. Reset all Tx scheduler nodes if a commit fails. For a leaf node,
   stop the queues, which removes the sched node from the scheduler
   tree, then start the queues, which adds the sched node back to the
   default topology. For a non-leaf node, simply reset it to the
   default parameters.
Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_ethdev.h |   1 +
 drivers/net/ice/ice_tm.c     | 130 ++++++++++++++++++++++++++++-------
 2 files changed, 107 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 1338c80d14..3b2db6aaa6 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -478,6 +478,7 @@ struct ice_tm_node {
 	struct ice_tm_node **children;
 	struct ice_tm_shaper_profile *shaper_profile;
 	struct rte_tm_node_params params;
+	struct ice_sched_node *sched_node;
 };
 
 /* node type of Traffic Manager */
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 20cc47fff1..4d8dbff2dc 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -756,16 +756,91 @@ static int ice_cfg_hw_node(struct ice_hw *hw,
 	return 0;
 }
 
+static struct ice_sched_node *ice_get_vsi_node(struct ice_hw *hw)
+{
+	struct ice_sched_node *node = hw->port_info->root;
+	uint32_t vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+	uint32_t i;
+
+	for (i = 0; i < vsi_layer; i++)
+		node = node->children[0];
+
+	return node;
+}
+
+static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
+	struct ice_sched_node *vsi_node = ice_get_vsi_node(hw);
+	struct ice_tm_node *tm_node;
+	int ret;
+
+	/* reset vsi_node */
+	ret = ice_set_node_rate(hw, NULL, vsi_node);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "reset vsi node failed");
+		return ret;
+	}
+
+	/* reset queue group nodes */
+	TAILQ_FOREACH(tm_node, qgroup_list, node) {
+		if (tm_node->sched_node == NULL)
+			continue;
+
+		ret = ice_cfg_hw_node(hw, NULL, tm_node->sched_node);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "reset queue group node %u failed", tm_node->id);
+			return ret;
+		}
+		tm_node->sched_node = NULL;
+	}
+
+	return 0;
+}
+
+static int ice_remove_leaf_nodes(struct rte_eth_dev *dev)
+{
+	int ret = 0;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		ret = ice_tx_queue_stop(dev, i);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "stop queue %u failed", i);
+			break;
+		}
+	}
+
+	return ret;
+}
+
+static int ice_add_leaf_nodes(struct rte_eth_dev *dev)
+{
+	int ret = 0;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		ret = ice_tx_queue_start(dev, i);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "start queue %u failed", i);
+			break;
+		}
+	}
+
+	return ret;
+}
+
 static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 				int clear_on_fail,
-				__rte_unused struct rte_tm_error *error)
+				struct rte_tm_error *error)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
 	struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
 	struct ice_tm_node *tm_node;
-	struct ice_sched_node *node;
 	struct ice_sched_node *vsi_node = NULL;
 	struct ice_sched_node *queue_node;
 	struct ice_tx_queue *txq;
@@ -777,23 +852,25 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	uint32_t nb_qg;
 	uint32_t qid;
 	uint32_t q_teid;
-	uint32_t vsi_layer;
 
-	for (i = 0; i < dev->data->nb_tx_queues; i++) {
-		ret_val = ice_tx_queue_stop(dev, i);
-		if (ret_val) {
-			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-			PMD_DRV_LOG(ERR, "stop queue %u failed", i);
-			goto fail_clear;
-		}
+	/* remove leaf nodes */
+	ret_val = ice_remove_leaf_nodes(dev);
+	if (ret_val) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		PMD_DRV_LOG(ERR, "remove leaf nodes failed");
+		goto fail_clear;
 	}
-	node = hw->port_info->root;
-	vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
-	for (i = 0; i < vsi_layer; i++)
-		node = node->children[0];
-	vsi_node = node;
+
+	/* reset no-leaf nodes. */
+	ret_val = ice_reset_noleaf_nodes(dev);
+	if (ret_val) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		PMD_DRV_LOG(ERR, "reset no-leaf nodes failed");
+		goto add_leaf;
+	}
+
+	/* config vsi node */
+	vsi_node = ice_get_vsi_node(hw);
 	tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list);
 
 	ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
@@ -802,9 +879,10 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		PMD_DRV_LOG(ERR,
 			    "configure vsi node %u bandwidth failed",
 			    tm_node->id);
-		goto reset_vsi;
+		goto add_leaf;
 	}
 
+	/* config queue group nodes */
 	nb_vsi_child = vsi_node->num_children;
 	nb_qg = vsi_node->children[0]->num_children;
 
@@ -823,7 +901,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		if (ret_val) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			PMD_DRV_LOG(ERR, "start queue %u failed", qid);
-			goto reset_vsi;
+			goto reset_leaf;
 		}
 		txq = dev->data->tx_queues[qid];
 		q_teid = txq->q_teid;
@@ -831,7 +909,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		if (queue_node == NULL) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
-			goto reset_vsi;
+			goto reset_leaf;
 		}
 		if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
 			continue;
@@ -839,7 +917,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		if (ret_val) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			PMD_DRV_LOG(ERR, "move queue %u failed", qid);
-			goto reset_vsi;
+			goto reset_leaf;
 		}
 	}
 
@@ -849,7 +927,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			PMD_DRV_LOG(ERR,
 				    "configure queue group node %u failed",
 				    tm_node->id);
-			goto reset_vsi;
+			goto reset_leaf;
 		}
 		idx_qg++;
@@ -860,10 +938,11 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		if (idx_vsi_child >= nb_vsi_child) {
 			error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
 			PMD_DRV_LOG(ERR, "too many queues");
-			goto reset_vsi;
+			goto reset_leaf;
 		}
 	}
 
+	/* config queue nodes */
 	TAILQ_FOREACH(tm_node, queue_list, node) {
 		qid = tm_node->id;
 		txq = dev->data->tx_queues[qid];
@@ -876,14 +955,17 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 			PMD_DRV_LOG(ERR,
 				    "configure queue group node %u failed",
 				    tm_node->id);
-			goto reset_vsi;
+			goto reset_leaf;
 		}
 	}
 
 	return ret_val;
 
-reset_vsi:
-	ice_set_node_rate(hw, NULL, vsi_node);
+reset_leaf:
+	ice_remove_leaf_nodes(dev);
+add_leaf:
+	ice_add_leaf_nodes(dev);
+	ice_reset_noleaf_nodes(dev);
 fail_clear:
 	/* clear all the traffic manager configuration */
 	if (clear_on_fail) {

From patchwork Tue Jan 2 19:42:32 2024
X-Patchwork-Submitter: Qi Zhang
X-Patchwork-Id: 135685
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang
Subject: [PATCH 6/6] net/ice: support Tx sched commit before device start
Date: Tue, 2 Jan 2024 14:42:32 -0500
Message-Id: <20240102194232.3614305-7-qi.z.zhang@intel.com>
In-Reply-To: <20240102194232.3614305-1-qi.z.zhang@intel.com>
References: <20240102194232.3614305-1-qi.z.zhang@intel.com>
List-Id: DPDK patches and discussions

Currently, a Tx hierarchy commit only takes effect if the device has
already been started: after a dev start / stop cycle, the queues are
removed and added back, which returns the Tx scheduler tree to its
original topology. With this patch, the hierarchy commit function
simply returns if the device has not been started yet, and all commit
actions are deferred to dev_start.
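The deferred-commit flow can be modelled in isolation. The names below (`struct dev`, `do_hierarchy_commit`, `dev_start`) are simplified stand-ins for the driver's `rte_eth_dev`, `ice_do_hierarchy_commit()`, and `ice_dev_start()`, not the actual DPDK API:

```c
/* Minimal model of deferring a Tx scheduler hierarchy commit until
 * the device is started. Hypothetical names, not the ice PMD's API. */
#include <assert.h>
#include <stdbool.h>

struct tm_conf {
	bool committed;      /* user requested a hierarchy commit */
	bool clear_on_fail;  /* remembered argument for the replay */
};

struct dev {
	bool started;
	struct tm_conf tm;
	int commits_applied; /* how many times the real commit ran */
};

/* stand-in for the real commit that programs the hardware */
static int do_hierarchy_commit(struct dev *d, bool clear_on_fail)
{
	(void)clear_on_fail;
	d->commits_applied++;
	d->tm.committed = true;
	return 0;
}

/* commit entry point: defer until the device is started */
static int hierarchy_commit(struct dev *d, bool clear_on_fail)
{
	if (!d->started) {
		/* remember the request; replay it in dev_start() */
		d->tm.committed = true;
		d->tm.clear_on_fail = clear_on_fail;
		return 0;
	}
	return do_hierarchy_commit(d, clear_on_fail);
}

static int dev_start(struct dev *d)
{
	d->started = true;
	/* replay a commit that was requested before start */
	if (d->tm.committed)
		return do_hierarchy_commit(d, d->tm.clear_on_fail);
	return 0;
}
```

Storing `clear_on_fail` alongside the `committed` flag is what lets the replayed commit behave exactly as the original request would have.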
Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_ethdev.c |  9 +++++++++
 drivers/net/ice/ice_ethdev.h |  4 ++++
 drivers/net/ice/ice_tm.c     | 25 ++++++++++++++++++++++---
 3 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3c3bc49dc2..72e13f95f8 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3717,6 +3717,7 @@ ice_dev_start(struct rte_eth_dev *dev)
 	int mask, ret;
 	uint8_t timer = hw->func_caps.ts_func_info.tmr_index_owned;
 	uint32_t pin_idx = ad->devargs.pin_idx;
+	struct rte_tm_error tm_err;
 
 	/* program Tx queues' context in hardware */
 	for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
@@ -3746,6 +3747,14 @@ ice_dev_start(struct rte_eth_dev *dev)
 		}
 	}
 
+	if (pf->tm_conf.committed) {
+		ret = ice_do_hierarchy_commit(dev, pf->tm_conf.clear_on_fail, &tm_err);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "fail to commit Tx scheduler");
+			goto rx_err;
+		}
+	}
+
 	ice_set_rx_function(dev);
 	ice_set_tx_function(dev);
 
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3b2db6aaa6..fa4981ed14 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -504,6 +504,7 @@ struct ice_tm_conf {
 	uint32_t nb_qgroup_node;
 	uint32_t nb_queue_node;
 	bool committed;
+	bool clear_on_fail;
 };
 
 struct ice_pf {
@@ -686,6 +687,9 @@ int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
 			 struct ice_rss_hash_cfg *cfg);
 void ice_tm_conf_init(struct rte_eth_dev *dev);
 void ice_tm_conf_uninit(struct rte_eth_dev *dev);
+int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
+			    int clear_on_fail,
+			    struct rte_tm_error *error);
 extern const struct rte_tm_ops ice_tm_ops;
 
 static inline int
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 4d8dbff2dc..aa012897ed 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -52,6 +52,7 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
 	pf->tm_conf.nb_qgroup_node = 0;
 	pf->tm_conf.nb_queue_node = 0;
 	pf->tm_conf.committed = false;
+	pf->tm_conf.clear_on_fail = false;
 }
 
 void
@@ -832,9 +833,9 @@ static int ice_add_leaf_nodes(struct rte_eth_dev *dev)
 	return ret;
 }
 
-static int ice_hierarchy_commit(struct rte_eth_dev *dev,
-				int clear_on_fail,
-				struct rte_tm_error *error)
+int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
+			    int clear_on_fail,
+			    struct rte_tm_error *error)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -959,6 +960,8 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 		}
 	}
 
+	pf->tm_conf.committed = true;
+
 	return ret_val;
 
 reset_leaf:
@@ -974,3 +977,19 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
 	}
 	return ret_val;
 }
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+				int clear_on_fail,
+				struct rte_tm_error *error)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+	/* if device not started, simply set committed flag and return. */
+	if (!dev->data->dev_started) {
+		pf->tm_conf.committed = true;
+		pf->tm_conf.clear_on_fail = clear_on_fail;
+		return 0;
+	}
+
+	return ice_do_hierarchy_commit(dev, clear_on_fail, error);
+}