From patchwork Wed Aug 7 09:46:52 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 142987
X-Patchwork-Delegate: bruce.richardson@intel.com
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v2 01/15] net/ice: add traffic management node query function
Date: Wed, 7 Aug 2024 10:46:52 +0100
Message-ID: <20240807094706.459822-2-bruce.richardson@intel.com>
In-Reply-To: <20240807094706.459822-1-bruce.richardson@intel.com>
References: <20240807093407.452784-1-bruce.richardson@intel.com>
 <20240807094706.459822-1-bruce.richardson@intel.com>
List-Id: DPDK patches and discussions

Implement the new node querying function for the "ice" net driver.

Signed-off-by: Bruce Richardson
---
 drivers/net/ice/ice_tm.c | 48 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 8a29a9e744..459446a6b0 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -17,6 +17,11 @@ static int ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	      uint32_t weight, uint32_t level_id,
 	      const struct rte_tm_node_params *params,
 	      struct rte_tm_error *error);
+static int ice_node_query(const struct rte_eth_dev *dev, uint32_t node_id,
+	      uint32_t *parent_node_id, uint32_t *priority,
+	      uint32_t *weight, uint32_t *level_id,
+	      struct rte_tm_node_params *params,
+	      struct rte_tm_error *error);
 static int ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 	      struct rte_tm_error *error);
 static int ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
@@ -35,6 +40,7 @@ const struct rte_tm_ops ice_tm_ops = {
 	.node_add = ice_tm_node_add,
 	.node_delete = ice_tm_node_delete,
 	.node_type_get = ice_node_type_get,
+	.node_query = ice_node_query,
 	.hierarchy_commit = ice_hierarchy_commit,
 };
 
@@ -219,6 +225,48 @@ ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
 	return 0;
 }
 
+static int
+ice_node_query(const struct rte_eth_dev *dev, uint32_t node_id,
+	      uint32_t *parent_node_id, uint32_t *priority,
+	      uint32_t *weight, uint32_t *level_id,
+	      struct rte_tm_node_params *params,
+	      struct rte_tm_error *error)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_tm_node *tm_node;
+
+	if (node_id == RTE_TM_NODE_ID_NULL) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "invalid node id";
+		return -EINVAL;
+	}
+
+	/* check if the node id exists */
+	tm_node = find_node(pf->tm_conf.root, node_id);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EEXIST;
+	}
+
+	if (parent_node_id != NULL)
+		*parent_node_id = tm_node->parent->id;
+
+	if (priority != NULL)
+		*priority = tm_node->priority;
+
+	if (weight != NULL)
+		*weight = tm_node->weight;
+
+	if (level_id != NULL)
+		*level_id = tm_node->level;
+
+	if (params != NULL)
+		*params = tm_node->params;
+
+	return 0;
+}
+
 static inline struct ice_tm_shaper_profile *
 ice_shaper_profile_search(struct rte_eth_dev *dev,
 	      uint32_t shaper_profile_id)
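[Editor's note] The callback above follows a lookup-then-fill-optional-out-parameters pattern: every out pointer may be NULL, and only non-NULL ones are written. The sketch below is a self-contained toy model of that contract; the `tm_node`, `find_node` and `node_query` names here are hypothetical stand-ins, not the driver's actual types.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for the driver's scheduler-node tree (hypothetical). */
struct tm_node {
	uint32_t id;
	uint32_t priority;
	uint32_t weight;
	struct tm_node *parent;
	struct tm_node *children[4];
	int num_children;
};

/* Depth-first search by node id, as the driver does from its root node. */
static struct tm_node *
find_node(struct tm_node *root, uint32_t id)
{
	if (root == NULL)
		return NULL;
	if (root->id == id)
		return root;
	for (int i = 0; i < root->num_children; i++) {
		struct tm_node *n = find_node(root->children[i], id);
		if (n != NULL)
			return n;
	}
	return NULL;
}

/* Each out-parameter is optional: filled in only when non-NULL,
 * mirroring the contract of the new node_query callback. */
static int
node_query(struct tm_node *root, uint32_t id,
	   uint32_t *parent_id, uint32_t *priority, uint32_t *weight)
{
	struct tm_node *n = find_node(root, id);

	if (n == NULL)
		return -1; /* no such node */
	if (parent_id != NULL)
		*parent_id = n->parent != NULL ? n->parent->id : UINT32_MAX;
	if (priority != NULL)
		*priority = n->priority;
	if (weight != NULL)
		*weight = n->weight;
	return 0;
}
```

A caller that only needs the parent id can pass NULL for every other out pointer, which is what makes the NULL checks in the real callback necessary.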
From patchwork Wed Aug 7 09:46:53 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 142988
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v2 02/15] net/ice: detect stopping a flow-director queue twice
Date: Wed, 7 Aug 2024 10:46:53 +0100
Message-ID: <20240807094706.459822-3-bruce.richardson@intel.com>
In-Reply-To: <20240807094706.459822-1-bruce.richardson@intel.com>

If the flow-director queue is stopped at some point during the running
of an application, the shutdown procedure for the port issues an error
as it tries to stop the queue a second time, and fails to do so.

We can eliminate this error by setting the tail-register pointer to
NULL on stop, and checking for that condition in subsequent stop calls.
Since the register pointer is set on start, any restarting of the queue
will allow a stop call to progress as normal.

Signed-off-by: Bruce Richardson
---
 drivers/net/ice/ice_rxtx.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index f270498ed1..a150d28e73 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1139,6 +1139,10 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 			    tx_queue_id);
 		return -EINVAL;
 	}
+	if (txq->qtx_tail == NULL) {
+		PMD_DRV_LOG(INFO, "TX queue %u not started\n", tx_queue_id);
+		return 0;
+	}
 	vsi = txq->vsi;
 
 	q_ids[0] = txq->reg_idx;
@@ -1153,6 +1157,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	txq->tx_rel_mbufs(txq);
+	txq->qtx_tail = NULL;
 
 	return 0;
 }
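[Editor's note] The fix above is an instance of a common idempotent-teardown idiom: an existing pointer that is only valid while the queue is running doubles as a "started" flag, so a repeated stop becomes a no-op. A minimal standalone model of the idiom (hypothetical `toy_txq` names, not the driver's code):

```c
#include <assert.h>
#include <stddef.h>

/* The saved tail-register pointer doubles as a "started" flag:
 * NULL means the queue is not running, so stop() can return early
 * instead of issuing a queue-disable command that would fail. */
struct toy_txq {
	volatile int *qtx_tail; /* set on queue start, cleared on stop */
	int hw_stops;           /* counts "real" disable commands issued */
};

static volatile int fake_tail_reg; /* stands in for the mapped register */

static void
queue_start(struct toy_txq *q)
{
	q->qtx_tail = &fake_tail_reg; /* driver sets this on start */
}

static int
queue_stop(struct toy_txq *q)
{
	if (q->qtx_tail == NULL) /* never started, or already stopped */
		return 0;
	q->hw_stops++;           /* stands in for the hardware disable */
	q->qtx_tail = NULL;      /* mark stopped: repeat calls are no-ops */
	return 0;
}
```

Because start() re-arms the pointer, a stop after a restart still reaches the hardware path, matching the behaviour described in the commit message.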
From patchwork Wed Aug 7 09:46:54 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 142989
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v2 03/15] net/ice: improve Tx scheduler graph output
Date: Wed, 7 Aug 2024 10:46:54 +0100
Message-ID: <20240807094706.459822-4-bruce.richardson@intel.com>
In-Reply-To: <20240807094706.459822-1-bruce.richardson@intel.com>

The function to dump the Tx scheduler topology only adds to the chart
nodes connected to Tx queues or for the flow director VSI. Change the
function to work recursively from the root node and thereby include all
scheduler nodes, whether in use or not, in the dump.

Also, improve the output of the Tx scheduler graphing function:

* Add VSI details to each node in graph
* When number of children is >16, skip middle nodes to reduce size of
  the graph, otherwise dot output is unviewable for large hierarchies
* For VSIs other than zero, use dot's clustering method to put those
  VSIs into subgraphs with borders
* For leaf nodes, display queue numbers for any nodes assigned to
  ethdev NIC Tx queues

Signed-off-by: Bruce Richardson
---
 drivers/net/ice/ice_diagnose.c | 196 ++++++++++++---------------------
 1 file changed, 69 insertions(+), 127 deletions(-)

diff --git a/drivers/net/ice/ice_diagnose.c b/drivers/net/ice/ice_diagnose.c
index c357554707..623d84e37d 100644
--- a/drivers/net/ice/ice_diagnose.c
+++ b/drivers/net/ice/ice_diagnose.c
@@ -545,29 +545,15 @@ static void print_rl_profile(struct ice_aqc_rl_profile_elem *prof,
 	fprintf(stream, "\t\t\t\t\t\n");
 }
 
-static
-void print_elem_type(FILE *stream, u8 type)
+static const char *
+get_elem_type(u8 type)
 {
-	switch (type) {
-	case 1:
-		fprintf(stream, "root");
-		break;
-	case 2:
-		fprintf(stream, "tc");
-		break;
-	case 3:
-		fprintf(stream, "se_generic");
-		break;
-	case 4:
-		fprintf(stream, "entry_point");
-		break;
-	case 5:
-		fprintf(stream, "leaf");
-		break;
-	default:
-		fprintf(stream, "%d", type);
-		break;
-	}
+	static const char * const ice_sched_node_types[] = {
+		"Undefined", "Root", "TC", "SE Generic", "SW Entry", "Leaf"
+	};
+	if (type < RTE_DIM(ice_sched_node_types))
+		return ice_sched_node_types[type];
+	return "*UNKNOWN*";
 }
 
 static
@@ -602,7 +588,9 @@ void print_priority_mode(FILE *stream, bool flag)
 }
 
 static
-void print_node(struct ice_aqc_txsched_elem_data *data,
+void print_node(struct ice_sched_node *node,
+		struct rte_eth_dev_data *ethdata,
+		struct ice_aqc_txsched_elem_data *data,
 		struct ice_aqc_rl_profile_elem *cir_prof,
 		struct ice_aqc_rl_profile_elem *eir_prof,
 		struct ice_aqc_rl_profile_elem *shared_prof,
@@ -613,17 +601,19 @@ void print_node(struct ice_aqc_txsched_elem_data *data,
 
 	fprintf(stream, "\t\t\t\n");
 
-	fprintf(stream, "\t\t\t\t\n");
-	fprintf(stream, "\t\t\t\t\t\n");
-	fprintf(stream, "\t\t\t\t\t\n", data->node_teid);
-	fprintf(stream, "\t\t\t\t\n");
-
-	fprintf(stream, "\t\t\t\t\n");
-	fprintf(stream, "\t\t\t\t\t\n");
-	fprintf(stream, "\t\t\t\t\t\n");
-	fprintf(stream, "\t\t\t\t\n");
+	fprintf(stream, "\t\t\t\t\n", data->node_teid);
+	fprintf(stream, "\t\t\t\t\n",
+			get_elem_type(data->data.elem_type));
+	fprintf(stream, "\t\t\t\t\n", node->vsi_handle);
+	if (data->data.elem_type == ICE_AQC_ELEM_TYPE_LEAF) {
+		for (uint16_t i = 0; i < ethdata->nb_tx_queues; i++) {
+			struct ice_tx_queue *q = ethdata->tx_queues[i];
+			if (q->q_teid == data->node_teid) {
+				fprintf(stream, "\t\t\t\t\n", i);
+				break;
+			}
+		}
+	}
 
 	if (!detail)
 		goto brief;
@@ -705,8 +695,6 @@ void print_node(struct ice_aqc_txsched_elem_data *data,
 	fprintf(stream, "\t\tshape=plain\n");
 	fprintf(stream, "\t]\n");
 
-	if (data->parent_teid != 0xFFFFFFFF)
-		fprintf(stream, "\tNODE_%d -> NODE_%d\n", data->parent_teid, data->node_teid);
 }
 
 static
@@ -731,112 +719,92 @@ int query_rl_profile(struct ice_hw *hw,
 	return 0;
 }
 
-static
-int query_node(struct ice_hw *hw, uint32_t child, uint32_t *parent,
-	       uint8_t level, bool detail, FILE *stream)
+static int
+query_node(struct ice_hw *hw, struct rte_eth_dev_data *ethdata,
+	   struct ice_sched_node *node, bool detail, FILE *stream)
 {
-	struct ice_aqc_txsched_elem_data data;
+	struct ice_aqc_txsched_elem_data *data = &node->info;
 	struct ice_aqc_rl_profile_elem cir_prof;
 	struct ice_aqc_rl_profile_elem eir_prof;
 	struct ice_aqc_rl_profile_elem shared_prof;
 	struct ice_aqc_rl_profile_elem *cp = NULL;
 	struct ice_aqc_rl_profile_elem *ep = NULL;
 	struct ice_aqc_rl_profile_elem *sp = NULL;
-	int status, ret;
-
-	status = ice_sched_query_elem(hw, child, &data);
-	if (status != ICE_SUCCESS) {
-		if (level == hw->num_tx_sched_layers) {
-			/* ignore the error when a queue has been stopped. */
-			PMD_DRV_LOG(WARNING, "Failed to query queue node %d.", child);
-			*parent = 0xffffffff;
-			return 0;
-		}
-		PMD_DRV_LOG(ERR, "Failed to query scheduling node %d.", child);
-		return -EINVAL;
-	}
-
-	*parent = data.parent_teid;
+	u8 level = node->tx_sched_layer;
+	int ret;
 
-	if (data.data.cir_bw.bw_profile_idx != 0) {
-		ret = query_rl_profile(hw, level, 0, data.data.cir_bw.bw_profile_idx, &cir_prof);
+	if (data->data.cir_bw.bw_profile_idx != 0) {
+		ret = query_rl_profile(hw, level, 0, data->data.cir_bw.bw_profile_idx, &cir_prof);
 		if (ret)
 			return ret;
 		cp = &cir_prof;
 	}
 
-	if (data.data.eir_bw.bw_profile_idx != 0) {
-		ret = query_rl_profile(hw, level, 1, data.data.eir_bw.bw_profile_idx, &eir_prof);
+	if (data->data.eir_bw.bw_profile_idx != 0) {
+		ret = query_rl_profile(hw, level, 1, data->data.eir_bw.bw_profile_idx, &eir_prof);
 		if (ret)
 			return ret;
 		ep = &eir_prof;
 	}
 
-	if (data.data.srl_id != 0) {
-		ret = query_rl_profile(hw, level, 2, data.data.srl_id, &shared_prof);
+	if (data->data.srl_id != 0) {
+		ret = query_rl_profile(hw, level, 2, data->data.srl_id, &shared_prof);
 		if (ret)
 			return ret;
 		sp = &shared_prof;
 	}
 
-	print_node(&data, cp, ep, sp, detail, stream);
+	print_node(node, ethdata, data, cp, ep, sp, detail, stream);
 
 	return 0;
 }
 
-static
-int query_nodes(struct ice_hw *hw,
-		uint32_t *children, int child_num,
-		uint32_t *parents, int *parent_num,
-		uint8_t level, bool detail,
-		FILE *stream)
+static int
+query_node_recursive(struct ice_hw *hw, struct rte_eth_dev_data *ethdata,
+		     struct ice_sched_node *node, bool detail, FILE *stream)
 {
-	uint32_t parent;
-	int i;
-	int j;
-
-	*parent_num = 0;
-	for (i = 0; i < child_num; i++) {
-		bool exist = false;
-		int ret;
+	bool close = false;
+	if (node->parent != NULL && node->vsi_handle != node->parent->vsi_handle) {
+		fprintf(stream, "subgraph cluster_%u {\n", node->vsi_handle);
+		fprintf(stream, "\tlabel = \"VSI %u\";\n", node->vsi_handle);
+		close = true;
+	}
 
-		ret = query_node(hw, children[i], &parent, level, detail, stream);
-		if (ret)
-			return -EINVAL;
+	int ret = query_node(hw, ethdata, node, detail, stream);
+	if (ret != 0)
+		return ret;
 
-		for (j = 0; j < *parent_num; j++) {
-			if (parents[j] == parent) {
-				exist = true;
-				break;
-			}
+	for (uint16_t i = 0; i < node->num_children; i++) {
+		ret = query_node_recursive(hw, ethdata, node->children[i], detail, stream);
+		if (ret != 0)
+			return ret;
+		/* if we have a lot of nodes, skip a bunch in the middle */
+		if (node->num_children > 16 && i == 2) {
+			uint16_t inc = node->num_children - 5;
+			fprintf(stream, "\tn%d_children [label=\"... +%d child nodes ...\"];\n",
+					node->info.node_teid, inc);
+			fprintf(stream, "\tNODE_%d -> n%d_children;\n",
+					node->info.node_teid, node->info.node_teid);
+			i += inc;
 		}
-
-		if (!exist && parent != 0xFFFFFFFF)
-			parents[(*parent_num)++] = parent;
 	}
 
+	if (close)
+		fprintf(stream, "}\n");
+	if (node->info.parent_teid != 0xFFFFFFFF)
+		fprintf(stream, "\tNODE_%d -> NODE_%d\n",
+				node->info.parent_teid, node->info.node_teid);
 	return 0;
 }
 
-int rte_pmd_ice_dump_txsched(uint16_t port, bool detail, FILE *stream)
+int
+rte_pmd_ice_dump_txsched(uint16_t port, bool detail, FILE *stream)
 {
 	struct rte_eth_dev *dev;
 	struct ice_hw *hw;
-	struct ice_pf *pf;
-	struct ice_q_ctx *q_ctx;
-	uint16_t q_num;
-	uint16_t i;
-	struct ice_tx_queue *txq;
-	uint32_t buf1[256];
-	uint32_t buf2[256];
-	uint32_t *children = buf1;
-	uint32_t *parents = buf2;
-	int child_num = 0;
-	int parent_num = 0;
-	uint8_t level;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port, -ENODEV);
 
@@ -846,35 +814,9 @@ int rte_pmd_ice_dump_txsched(uint16_t port, bool detail, FILE *stream)
 
 	dev = &rte_eth_devices[port];
 	hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	level = hw->num_tx_sched_layers;
-
-	q_num = dev->data->nb_tx_queues;
-
-	/* main vsi */
-	for (i = 0; i < q_num; i++) {
-		txq = dev->data->tx_queues[i];
-		q_ctx = ice_get_lan_q_ctx(hw, txq->vsi->idx, 0, i);
-		children[child_num++] = q_ctx->q_teid;
-	}
-
-	/* fdir vsi */
-	q_ctx = ice_get_lan_q_ctx(hw, pf->fdir.fdir_vsi->idx, 0, 0);
-	children[child_num++] = q_ctx->q_teid;
 
 	fprintf(stream, "digraph tx_sched {\n");
-	while (child_num > 0) {
-		int ret;
-		ret = query_nodes(hw, children, child_num,
-				  parents, &parent_num,
-				  level, detail, stream);
-		if (ret)
-			return ret;
-
-		children = parents;
-		child_num = parent_num;
-		level--;
-	}
+	query_node_recursive(hw, dev->data, hw->port_info->root, detail, stream);
 	fprintf(stream, "}\n");
 
 	return 0;
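[Editor's note] The "skip middle nodes" rule in the patch above is easy to misread: after printing the child at index 2, the index jumps by `num_children - 5`, so only the first three and last two children appear in the graph. The sketch below isolates that index arithmetic in a standalone function (hypothetical `emit_children` name) so it can be checked independently of the driver.

```c
#include <assert.h>

/* Model of the child-skipping rule in the recursive scheduler dump:
 * when a node has more than 16 children, keep the first three and the
 * last two, eliding the middle (the real code prints one "+N child
 * nodes" placeholder for the elided range). Records which child
 * indices would be emitted and returns how many were kept. */
static int
emit_children(int num_children, int printed[], int cap)
{
	int count = 0;

	for (int i = 0; i < num_children; i++) {
		if (count < cap)
			printed[count] = i; /* this child appears in the graph */
		count++;
		if (num_children > 16 && i == 2) {
			int inc = num_children - 5; /* children elided */
			i += inc; /* jump so the loop resumes at the last two */
		}
	}
	return count;
}
```

For 20 children this keeps indices 0, 1, 2, 18 and 19; nodes with 16 or fewer children are emitted in full, which keeps small hierarchies exact while bounding the size of the dot output for large ones.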
From patchwork Wed Aug 7 09:46:55 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 142990
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v2 04/15] net/ice: add option to choose DDP package file
Date: Wed, 7 Aug 2024 10:46:55 +0100
Message-ID: <20240807094706.459822-5-bruce.richardson@intel.com>
In-Reply-To: <20240807094706.459822-1-bruce.richardson@intel.com>

The "Dynamic Device Personalization" package is loaded at
initialization time by the driver, but the specific package file loaded
depends upon what package file is found first by searching through a
hard-coded list of firmware paths.

To enable greater control over the package loading, we can add a device
option to choose a specific DDP package file to load.

Signed-off-by: Bruce Richardson
---
 doc/guides/nics/ice.rst      |  9 +++++++++
 drivers/net/ice/ice_ethdev.c | 34 ++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_ethdev.h |  1 +
 3 files changed, 44 insertions(+)

diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index ae975d19ad..58ccfbd1a5 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -108,6 +108,15 @@ Runtime Configuration
 
     -a 80:00.0,default-mac-disable=1
 
+- ``DDP Package File``
+
+  Rather than have the driver search for the DDP package to load,
+  or to override what package is used,
+  the ``ddp_pkg_file`` option can be used to provide the path to a specific package file.
+  For example::
+
+    -a 80:00.0,ddp_pkg_file=/path/to/ice-version.pkg
+
 - ``Protocol extraction for per queue``
 
   Configure the RX queues to do protocol extraction into mbuf for protocol
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 304f959b7e..3e7ceda9ce 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -36,6 +36,7 @@
 #define ICE_ONE_PPS_OUT_ARG       "pps_out"
 #define ICE_RX_LOW_LATENCY_ARG    "rx_low_latency"
 #define ICE_MBUF_CHECK_ARG       "mbuf_check"
+#define ICE_DDP_FILENAME          "ddp_pkg_file"
 
 #define ICE_CYCLECOUNTER_MASK  0xffffffffffffffffULL
 
@@ -52,6 +53,7 @@ static const char * const ice_valid_args[] = {
 	ICE_RX_LOW_LATENCY_ARG,
 	ICE_DEFAULT_MAC_DISABLE,
 	ICE_MBUF_CHECK_ARG,
+	ICE_DDP_FILENAME,
 	NULL
 };
 
@@ -692,6 +694,18 @@ handle_field_name_arg(__rte_unused const char *key, const char *value,
 	return 0;
 }
 
+static int
+handle_ddp_filename_arg(__rte_unused const char *key, const char *value, void *name_args)
+{
+	const char **filename = name_args;
+	if (strlen(value) >= ICE_MAX_PKG_FILENAME_SIZE) {
+		PMD_DRV_LOG(ERR, "The DDP package filename is too long : '%s'", value);
+		return -1;
+	}
+	*filename = strdup(value);
+	return 0;
+}
+
 static void
 ice_check_proto_xtr_support(struct ice_hw *hw)
 {
@@ -1882,6 +1896,16 @@ int ice_load_pkg(struct ice_adapter *adapter, bool use_dsn, uint64_t dsn)
 	size_t bufsz;
 	int err;
 
+	if (adapter->devargs.ddp_filename != NULL) {
+		strlcpy(pkg_file, adapter->devargs.ddp_filename, sizeof(pkg_file));
+		if (rte_firmware_read(pkg_file, &buf, &bufsz) == 0) {
+			goto load_fw;
+		} else {
+			PMD_INIT_LOG(ERR, "Cannot load DDP file: %s\n", pkg_file);
+			return -1;
+		}
+	}
+
 	if (!use_dsn)
 		goto no_dsn;
 
@@ -2216,6 +2240,13 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
 
 	ret = rte_kvargs_process(kvlist, ICE_RX_LOW_LATENCY_ARG,
 				 &parse_bool, &ad->devargs.rx_low_latency);
+	if (ret)
+		goto bail;
+
+	ret = rte_kvargs_process(kvlist, ICE_DDP_FILENAME,
+				 &handle_ddp_filename_arg, &ad->devargs.ddp_filename);
+	if (ret)
+		goto bail;
 
 bail:
 	rte_kvargs_free(kvlist);
@@ -2689,6 +2720,8 @@ ice_dev_close(struct rte_eth_dev *dev)
 	ice_free_hw_tbls(hw);
 	rte_free(hw->port_info);
 	hw->port_info = NULL;
+	free((void *)(uintptr_t)ad->devargs.ddp_filename);
+	ad->devargs.ddp_filename = NULL;
 	ice_shutdown_all_ctrlq(hw, true);
 	rte_free(pf->proto_xtr);
 	pf->proto_xtr = NULL;
@@ -6981,6 +7014,7 @@ RTE_PMD_REGISTER_PARAM_STRING(net_ice,
 			      ICE_PROTO_XTR_ARG "=[queue:]"
 			      ICE_SAFE_MODE_SUPPORT_ARG "=<0|1>"
 			      ICE_DEFAULT_MAC_DISABLE "=<0|1>"
+			      ICE_DDP_FILENAME "="
 			      ICE_RX_LOW_LATENCY_ARG "=<0|1>");
 
 RTE_LOG_REGISTER_SUFFIX(ice_logtype_init, init, NOTICE);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3ea9f37dc8..c211b5b9cc 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -568,6 +568,7 @@ struct ice_devargs {
	/* Name of the field. */
 	char xtr_field_name[RTE_MBUF_DYN_NAMESIZE];
 	uint64_t mbuf_check;
+	const char *ddp_filename;
 };
 
 /**
From patchwork Wed Aug 7 09:46:56 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 142991
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v2 05/15] net/ice: add option to download scheduler topology
Date: Wed, 7 Aug 2024 10:46:56 +0100
Message-ID: <20240807094706.459822-6-bruce.richardson@intel.com>
In-Reply-To: <20240807094706.459822-1-bruce.richardson@intel.com>

The DDP package file being loaded at init time may contain an
alternative Tx scheduler topology in it. Add driver option to load this
topology at init time.

Signed-off-by: Bruce Richardson
---
 drivers/net/ice/base/ice_ddp.c | 18 +++++++++++++++---
 drivers/net/ice/base/ice_ddp.h |  4 ++--
 drivers/net/ice/ice_ethdev.c   | 24 +++++++++++++++---------
 drivers/net/ice/ice_ethdev.h   |  1 +
 4 files changed, 33 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ice/base/ice_ddp.c b/drivers/net/ice/base/ice_ddp.c
index 24506dfaea..e6c42c5274 100644
--- a/drivers/net/ice/base/ice_ddp.c
+++ b/drivers/net/ice/base/ice_ddp.c
@@ -1326,7 +1326,7 @@ ice_fill_hw_ptype(struct ice_hw *hw)
  * ice_copy_and_init_pkg() instead of directly calling ice_init_pkg() in this
  * case.
  */
-enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
+enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len, bool load_sched)
 {
 	bool already_loaded = false;
 	enum ice_ddp_state state;
@@ -1344,6 +1344,18 @@ enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
 		return state;
 	}
 
+	if (load_sched) {
+		enum ice_status res = ice_cfg_tx_topo(hw, buf, len);
+		if (res != ICE_SUCCESS) {
+			ice_debug(hw, ICE_DBG_INIT, "failed to apply sched topology (err: %d)\n",
+				  res);
+			return ICE_DDP_PKG_ERR;
+		}
+		ice_debug(hw, ICE_DBG_INIT, "Topology download successful, reinitializing device\n");
+		ice_deinit_hw(hw);
+		ice_init_hw(hw);
+	}
+
 	/* initialize package info */
 	state = ice_init_pkg_info(hw, pkg);
 	if (state)
@@ -1416,7 +1428,7 @@ enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
  * related routines.
  */
 enum ice_ddp_state
-ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len)
+ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len, bool load_sched)
 {
 	enum ice_ddp_state state;
 	u8 *buf_copy;
@@ -1426,7 +1438,7 @@ ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len)
 
 	buf_copy = (u8 *)ice_memdup(hw, buf, len, ICE_NONDMA_TO_NONDMA);
 
-	state = ice_init_pkg(hw, buf_copy, len);
+	state = ice_init_pkg(hw, buf_copy, len, load_sched);
 	if (!ice_is_init_pkg_successful(state)) {
 		/* Free the copy, since we failed to initialize the package */
 		ice_free(hw, buf_copy);
diff --git a/drivers/net/ice/base/ice_ddp.h b/drivers/net/ice/base/ice_ddp.h
index 5761920207..2feba2e91d 100644
--- a/drivers/net/ice/base/ice_ddp.h
+++ b/drivers/net/ice/base/ice_ddp.h
@@ -451,9 +451,9 @@ ice_pkg_enum_entry(struct ice_seg *ice_seg, struct ice_pkg_enum *state,
 void *
 ice_pkg_enum_section(struct ice_seg *ice_seg, struct ice_pkg_enum *state,
 		     u32 sect_type);
-enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buff, u32 len);
+enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buff, u32 len, bool load_sched);
 enum ice_ddp_state
-ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len);
+ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len, bool load_sched);
 bool ice_is_init_pkg_successful(enum ice_ddp_state state);
 void ice_free_seg(struct ice_hw *hw);
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3e7ceda9ce..0d2445a317 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -37,6 +37,7 @@
 #define ICE_RX_LOW_LATENCY_ARG    "rx_low_latency"
 #define ICE_MBUF_CHECK_ARG       "mbuf_check"
 #define ICE_DDP_FILENAME          "ddp_pkg_file"
+#define ICE_DDP_LOAD_SCHED        "ddp_load_sched_topo"
 
 #define ICE_CYCLECOUNTER_MASK  0xffffffffffffffffULL
 
@@ -54,6 +55,7 @@ static const char * const ice_valid_args[] = {
 	ICE_DEFAULT_MAC_DISABLE,
 	ICE_MBUF_CHECK_ARG,
 	ICE_DDP_FILENAME,
+	ICE_DDP_LOAD_SCHED,
 	NULL
 };
 
@@ -1938,7 +1940,7 @@ int ice_load_pkg(struct ice_adapter *adapter, bool use_dsn, uint64_t dsn)
 load_fw:
 	PMD_INIT_LOG(DEBUG, "DDP package name: %s", pkg_file);
 
-	err = ice_copy_and_init_pkg(hw, buf, bufsz);
+	err = ice_copy_and_init_pkg(hw, buf, bufsz, adapter->devargs.ddp_load_sched);
 	if (!ice_is_init_pkg_successful(err)) {
 		PMD_INIT_LOG(ERR, "ice_copy_and_init_hw failed: %d\n", err);
 		free(buf);
@@ -1971,19 +1973,18 @@ static int
 parse_bool(const char *key, const char *value, void *args)
 {
 	int *i = (int *)args;
-	char *end;
-	int num;
 
-	num = strtoul(value, &end, 10);
-
-	if (num != 0 && num != 1) {
-		PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", "
-			"value must be 0 or 1",
+	if (value == NULL || value[0] == '\0') {
+		PMD_DRV_LOG(WARNING, "key:\"%s\", requires a value, which must be 0 or 1",
 			key);
+		return -1;
+	}
+	if (value[1] != '\0' || (value[0] != '0' && value[0] != '1')) {
+		PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", value must be 0 or 1",
 			value, key);
 		return -1;
 	}
-	*i = num;
+	*i = value[0] - '0';
 	return 0;
 }
 
@@ -2248,6 +2249,10 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
 
 	if (ret)
 		goto 
bail; + ret = rte_kvargs_process(kvlist, ICE_DDP_LOAD_SCHED, + &parse_bool, &ad->devargs.ddp_load_sched); + if (ret) + goto bail; bail: rte_kvargs_free(kvlist); return ret; @@ -7014,6 +7019,7 @@ RTE_PMD_REGISTER_PARAM_STRING(net_ice, ICE_PROTO_XTR_ARG "=[queue:]" ICE_SAFE_MODE_SUPPORT_ARG "=<0|1>" ICE_DEFAULT_MAC_DISABLE "=<0|1>" + ICE_DDP_LOAD_SCHED "=<0|1>" ICE_DDP_FILENAME "=" ICE_RX_LOW_LATENCY_ARG "=<0|1>"); diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h index c211b5b9cc..f31addb122 100644 --- a/drivers/net/ice/ice_ethdev.h +++ b/drivers/net/ice/ice_ethdev.h @@ -563,6 +563,7 @@ struct ice_devargs { uint8_t proto_xtr[ICE_MAX_QUEUE_NUM]; uint8_t pin_idx; uint8_t pps_out_ena; + int ddp_load_sched; int xtr_field_offs; uint8_t xtr_flag_offs[PROTO_XTR_MAX]; /* Name of the field. */ From patchwork Wed Aug 7 09:46:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 142992 X-Patchwork-Delegate: bruce.richardson@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6DFA84575B; Wed, 7 Aug 2024 11:47:53 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id EF8AA42798; Wed, 7 Aug 2024 11:47:23 +0200 (CEST) Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.14]) by mails.dpdk.org (Postfix) with ESMTP id 852DF41101 for ; Wed, 7 Aug 2024 11:47:18 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1723024039; x=1754560039; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Rgp3nCjBnYGk6+bYR0XFomTHqEgLOuvbBBKSe7KP9LY=; b=DTJwoSfccKTUpOst0N0s+db9RNW+QqmfhPqiGXkmsStUKd1O5fQtF4qo 
paIz+H2LUB/fdxjUAgB9p6mPpt7Uh+Ce41lkKTY/7TkCI2YPfDv/AYSMq /2/AiRqjsYSCyd8+x1irwdGHlkt+8mnleiCyeq22znB05l+3gncqTGgls sLexyFUb6onQLDBulX+xDgk3Nfd8cqkFGJ6OTcV9ran8rshJBpxB5I7+p yhAutQPw7CMZy2oGRiFzJG/OOoUn0YU3i14C952tN/yDYDTlp5vm1BKEA VltXCxtQSY+Qf5RHKnM9UJ54E2dz7wBbMEDnTkw56BbSmy8qU0C6975nF A==; X-CSE-ConnectionGUID: 7oalksl8QtCPYQ/M++zsLw== X-CSE-MsgGUID: eedAvOTlR7qiC+g492ekNQ== X-IronPort-AV: E=McAfee;i="6700,10204,11156"; a="21257938" X-IronPort-AV: E=Sophos;i="6.09,269,1716274800"; d="scan'208";a="21257938" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa108.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Aug 2024 02:47:18 -0700 X-CSE-ConnectionGUID: gpA7PvuFTjmXDpXzrP3HdA== X-CSE-MsgGUID: 67lhtxQXQKC2gTDr/0WVQA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,269,1716274800"; d="scan'208";a="87467378" Received: from silpixa00400562.ir.intel.com (HELO silpixa00401385.ir.intel.com) ([10.237.214.39]) by orviesa002.jf.intel.com with ESMTP; 07 Aug 2024 02:47:17 -0700 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson Subject: [PATCH v2 06/15] net/ice/base: allow init without TC class sched nodes Date: Wed, 7 Aug 2024 10:46:57 +0100 Message-ID: <20240807094706.459822-7-bruce.richardson@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240807094706.459822-1-bruce.richardson@intel.com> References: <20240807093407.452784-1-bruce.richardson@intel.com> <20240807094706.459822-1-bruce.richardson@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org If DCB support is disabled via DDP image, there will not be any traffic class (TC) nodes in the scheduler tree immediately above the root level. To allow the driver to work with this scenario, we allow use of the root node as a dummy TC0 node in case where there are no TC nodes in the tree. 
For use of any TC other than 0 (used by default in the driver), the
existing behaviour of returning a NULL pointer is maintained.

Signed-off-by: Bruce Richardson
---
 drivers/net/ice/base/ice_sched.c | 6 ++++++
 drivers/net/ice/base/ice_type.h  | 1 +
 2 files changed, 7 insertions(+)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 373c32a518..f75e5ae599 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -292,6 +292,10 @@ struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc)
 	if (!pi || !pi->root)
 		return NULL;
 
+	/* if no TC nodes, use root as TC node 0 */
+	if (pi->has_tc == 0)
+		return tc == 0 ? pi->root : NULL;
+
 	for (i = 0; i < pi->root->num_children; i++)
 		if (pi->root->children[i]->tc_num == tc)
 			return pi->root->children[i];
@@ -1306,6 +1310,8 @@ int ice_sched_init_port(struct ice_port_info *pi)
 			    ICE_AQC_ELEM_TYPE_ENTRY_POINT)
 				hw->sw_entry_point_layer = j;
 
+			if (buf[0].generic[j].data.elem_type == ICE_AQC_ELEM_TYPE_TC)
+				pi->has_tc = 1;
 			status = ice_sched_add_node(pi, j, &buf[i].generic[j], NULL);
 			if (status)
 				goto err_init_port;
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 598a80155b..a70e4a8afa 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -1260,6 +1260,7 @@ struct ice_port_info {
 	struct ice_qos_cfg qos_cfg;
 	u8 is_vf:1;
 	u8 is_custom_tx_enabled:1;
+	u8 has_tc:1;
 };
 
 struct ice_switch_info {

From patchwork Wed Aug 7 09:46:58 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 142993
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v2 07/15] net/ice/base: set VSI index on newly created nodes
Date: Wed, 7 Aug 2024 10:46:58 +0100
Message-ID: <20240807094706.459822-8-bruce.richardson@intel.com>

The ice_sched_node type has a field for the VSI to which the node
belongs. This field was not being set in "ice_sched_add_node", so add a
line configuring this field for each node from its parent node.
Similarly, when searching for a qgroup node, we can check for each node
that the VSI information is correct.

Signed-off-by: Bruce Richardson
---
 drivers/net/ice/base/ice_sched.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index f75e5ae599..f6dc5ae173 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -200,6 +200,7 @@ ice_sched_add_node(struct ice_port_info *pi, u8 layer,
 	node->in_use = true;
 	node->parent = parent;
 	node->tx_sched_layer = layer;
+	node->vsi_handle = parent->vsi_handle;
 	parent->children[parent->num_children++] = node;
 	node->info = elem;
 	return 0;
@@ -1581,7 +1582,7 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 		/* make sure the qgroup node is part of the VSI subtree */
 		if (ice_sched_find_node_in_subtree(pi->hw, vsi_node, qgrp_node))
 			if (qgrp_node->num_children < max_children &&
-			    qgrp_node->owner == owner)
+			    qgrp_node->owner == owner && qgrp_node->vsi_handle == vsi_handle)
 				break;
 		qgrp_node = qgrp_node->sibling;
 	}

From patchwork Wed Aug 7 09:46:59 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 142994
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v2 08/15] net/ice/base: read VSI layer info from VSI
Date: Wed, 7 Aug 2024 10:46:59 +0100
Message-ID:
<20240807094706.459822-9-bruce.richardson@intel.com>

Rather than computing the layer of the VSI from the number of HW layers,
we can instead just read that info from the VSI node itself. This allows
the layer to be changed at runtime.

Signed-off-by: Bruce Richardson
---
 drivers/net/ice/base/ice_sched.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index f6dc5ae173..e398984bf2 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -1559,7 +1559,6 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 	u16 max_children;
 
 	qgrp_layer = ice_sched_get_qgrp_layer(pi->hw);
-	vsi_layer = ice_sched_get_vsi_layer(pi->hw);
 	max_children = pi->hw->max_children[qgrp_layer];
 
 	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
@@ -1569,6 +1568,7 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 	/* validate invalid VSI ID */
 	if (!vsi_node)
 		return NULL;
+	vsi_layer = vsi_node->tx_sched_layer;
 
 	/* If the queue group and vsi layer are same then queues
 	 * are all attached directly to VSI

From patchwork Wed Aug 7 09:47:00 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 142995
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v2 09/15] net/ice/base: remove 255 limit on sched child nodes
Date: Wed, 7 Aug 2024 10:47:00 +0100
Message-ID: <20240807094706.459822-10-bruce.richardson@intel.com>
The Tx scheduler in the ice driver can be configured to have large
numbers of child nodes at a given layer, but the driver code implicitly
limited the number of nodes to 255 by using a u8 datatype for the number
of children. Increase this to a 16-bit value throughout the code.

Signed-off-by: Bruce Richardson
---
 drivers/net/ice/base/ice_sched.c | 25 ++++++++++++++-----------
 drivers/net/ice/base/ice_type.h  |  2 +-
 2 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index e398984bf2..be13833e1e 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -289,7 +289,7 @@ ice_sched_get_first_node(struct ice_port_info *pi,
  */
 struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc)
 {
-	u8 i;
+	u16 i;
 
 	if (!pi || !pi->root)
 		return NULL;
@@ -316,7 +316,7 @@ void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
 {
 	struct ice_sched_node *parent;
 	struct ice_hw *hw = pi->hw;
-	u8 i, j;
+	u16 i, j;
 
 	/* Free the children before freeing up the parent node
 	 * The parent array is updated below and that shifts the nodes
@@ -1473,7 +1473,7 @@ bool
 ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base,
 			       struct ice_sched_node *node)
 {
-	u8 i;
+	u16 i;
 
 	for (i = 0; i < base->num_children; i++) {
 		struct ice_sched_node *child = base->children[i];
@@ -1510,7 +1510,7 @@ ice_sched_get_free_qgrp(struct ice_port_info *pi,
 			struct ice_sched_node *qgrp_node, u8 owner)
 {
 	struct ice_sched_node *min_qgrp;
-	u8 min_children;
+	u16 min_children;
 
 	if (!qgrp_node)
 		return qgrp_node;
@@ -2070,7 +2070,7 @@ static void ice_sched_rm_agg_vsi_info(struct ice_port_info *pi, u16 vsi_handle)
  */
 static bool ice_sched_is_leaf_node_present(struct ice_sched_node *node)
 {
-	u8 i;
+	u16 i;
 
 	for (i = 0; i < node->num_children; i++)
 		if (ice_sched_is_leaf_node_present(node->children[i]))
@@ -2105,7 +2105,7 @@ ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
 
 	ice_for_each_traffic_class(i) {
 		struct ice_sched_node *vsi_node, *tc_node;
-		u8 j = 0;
+		u16 j = 0;
 
 		tc_node = ice_sched_get_tc_node(pi, i);
 		if (!tc_node)
@@ -2173,7 +2173,7 @@ int ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle)
  */
 bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node)
 {
-	u8 i;
+	u16 i;
 
 	/* start from the leaf node */
 	for (i = 0; i < node->num_children; i++)
@@ -2247,7 +2247,8 @@ ice_sched_get_free_vsi_parent(struct ice_hw *hw, struct ice_sched_node *node,
 			      u16 *num_nodes)
 {
 	u8 l = node->tx_sched_layer;
-	u8 vsil, i;
+	u8 vsil;
+	u16 i;
 
 	vsil = ice_sched_get_vsi_layer(hw);
 
@@ -2289,7 +2290,7 @@ ice_sched_update_parent(struct ice_sched_node *new_parent,
 			struct ice_sched_node *node)
 {
 	struct ice_sched_node *old_parent;
-	u8 i, j;
+	u16 i, j;
 
 	old_parent = node->parent;
 
@@ -2389,7 +2390,8 @@ ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id,
 	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
 	u32 first_node_teid, vsi_teid;
 	u16 num_nodes_added;
-	u8 aggl, vsil, i;
+	u8 aggl, vsil;
+	u16 i;
 	int status;
 
 	tc_node = ice_sched_get_tc_node(pi, tc);
@@ -2505,7 +2507,8 @@ ice_move_all_vsi_to_dflt_agg(struct ice_port_info *pi,
 static bool
 ice_sched_is_agg_inuse(struct ice_port_info *pi, struct ice_sched_node *node)
 {
-	u8 vsil, i;
+	u8 vsil;
+	u16 i;
 
 	vsil = ice_sched_get_vsi_layer(pi->hw);
 	if (node->tx_sched_layer < vsil - 1) {
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index a70e4a8afa..35f832eb9f 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -1030,9 +1030,9 @@ struct ice_sched_node {
 	struct ice_aqc_txsched_elem_data info;
 	u32 agg_id;			/* aggregator group ID */
 	u16 vsi_handle;
+	u16 num_children;
 	u8 in_use;			/* suspended or in use */
 	u8 tx_sched_layer;		/* Logical Layer (1-9) */
-	u8 num_children;
 	u8 tc_num;
 	u8 owner;
 #define ICE_SCHED_NODE_OWNER_LAN 0

From patchwork Wed Aug 7 09:47:01 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 142996
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v2 10/15] net/ice/base: optimize subtree searches
Date: Wed, 7 Aug 2024 10:47:01 +0100
Message-ID: <20240807094706.459822-11-bruce.richardson@intel.com>

In a number of places throughout the driver code, we want to confirm
that a scheduler node is indeed a child of another node. Currently, this
is confirmed by searching down the tree from the base until the desired
node is hit, a search which may hit many irrelevant tree nodes when
recursing down wrong branches. By switching the direction of the search,
to check upwards from the node to the parent, we can avoid any incorrect
paths, and so speed up processing.
Signed-off-by: Bruce Richardson
---
 drivers/net/ice/base/ice_sched.c | 18 +++++-------------
 1 file changed, 5 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index be13833e1e..f7d5f8f415 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -1475,20 +1475,12 @@ ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base,
 {
 	u16 i;
 
-	for (i = 0; i < base->num_children; i++) {
-		struct ice_sched_node *child = base->children[i];
-
-		if (node == child)
-			return true;
-
-		if (child->tx_sched_layer > node->tx_sched_layer)
-			return false;
-
-		/* this recursion is intentional, and wouldn't
-		 * go more than 8 calls
-		 */
-		if (ice_sched_find_node_in_subtree(hw, child, node))
+	if (base == node)
+		return true;
+	while (node->tx_sched_layer != 0 && node->parent != NULL) {
+		if (node->parent == base)
 			return true;
+		node = node->parent;
 	}
 	return false;
 }

From patchwork Wed Aug 7 09:47:02 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 142997
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v2 11/15] net/ice/base: make functions non-static
Date: Wed, 7 Aug 2024 10:47:02 +0100
Message-ID: <20240807094706.459822-12-bruce.richardson@intel.com>

We will need to allocate more LAN queue contexts after a scheduler
rework, so make that function non-static, so it is accessible outside
the file. For similar reasons, make the function to add a Tx scheduler
node non-static.

Signed-off-by: Bruce Richardson
---
 drivers/net/ice/base/ice_sched.c | 2 +-
 drivers/net/ice/base/ice_sched.h | 8 ++++++++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index f7d5f8f415..d88b836c38 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -570,7 +570,7 @@ ice_sched_suspend_resume_elems(struct ice_hw *hw, u8 num_nodes, u32 *node_teids,
  * @tc: TC number
  * @new_numqs: number of queues
  */
-static int
+int
 ice_alloc_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 new_numqs)
 {
 	struct ice_vsi_ctx *vsi_ctx;
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 9f78516dfb..c7eb794963 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -270,4 +270,12 @@ int ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx);
 int
 ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node,
 			    enum ice_rl_type rl_type, u16 bw_alloc);
+
+int
+ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
+		    struct ice_sched_node *parent, u8 layer, u16 num_nodes,
+		    u16 *num_nodes_added, u32 *first_node_teid,
+		    struct ice_sched_node **prealloc_nodes);
+int
+ice_alloc_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 new_numqs);
 #endif /* _ICE_SCHED_H_ */

From patchwork Wed Aug 7 09:47:03 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 142998
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v2 12/15] net/ice/base: remove flag checks before topology upload
Date: Wed, 7 Aug 2024 10:47:03 +0100
Message-ID: <20240807094706.459822-13-bruce.richardson@intel.com>
References:
<20240807093407.452784-1-bruce.richardson@intel.com> <20240807094706.459822-1-bruce.richardson@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org DPDK should support more than just 9-level or 5-level topologies, so remove the checks for those particular settings. Signed-off-by: Bruce Richardson --- drivers/net/ice/base/ice_ddp.c | 33 --------------------------------- 1 file changed, 33 deletions(-) diff --git a/drivers/net/ice/base/ice_ddp.c b/drivers/net/ice/base/ice_ddp.c index e6c42c5274..744f015fe5 100644 --- a/drivers/net/ice/base/ice_ddp.c +++ b/drivers/net/ice/base/ice_ddp.c @@ -2373,38 +2373,6 @@ int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len) return status; } - /* Is default topology already applied ? */ - if (!(flags & ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW) && - hw->num_tx_sched_layers == 9) { - ice_debug(hw, ICE_DBG_INIT, "Loaded default topology\n"); - /* Already default topology is loaded */ - return ICE_ERR_ALREADY_EXISTS; - } - - /* Is new topology already applied ? */ - if ((flags & ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW) && - hw->num_tx_sched_layers == 5) { - ice_debug(hw, ICE_DBG_INIT, "Loaded new topology\n"); - /* Already new topology is loaded */ - return ICE_ERR_ALREADY_EXISTS; - } - - /* Is set topology issued already ? 
*/ - if (flags & ICE_AQC_TX_TOPO_FLAGS_ISSUED) { - ice_debug(hw, ICE_DBG_INIT, "Update tx topology was done by another PF\n"); - /* add a small delay before exiting */ - for (i = 0; i < 20; i++) - ice_msec_delay(100, true); - return ICE_ERR_ALREADY_EXISTS; - } - - /* Change the topology from new to default (5 to 9) */ - if (!(flags & ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW) && - hw->num_tx_sched_layers == 5) { - ice_debug(hw, ICE_DBG_INIT, "Change topology from 5 to 9 layers\n"); - goto update_topo; - } - pkg_hdr = (struct ice_pkg_hdr *)buf; state = ice_verify_pkg(pkg_hdr, len); if (state) { @@ -2451,7 +2419,6 @@ int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len) /* Get the new topology buffer */ new_topo = ((u8 *)section) + offset; -update_topo: /* acquire global lock to make sure that set topology issued * by one PF */
From patchwork Wed Aug 7 09:47:04 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 142999 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson Subject: [PATCH v2 13/15] net/ice: limit the number of queues to sched capabilities Date: Wed, 7 Aug 2024 10:47:04 +0100 Message-ID: <20240807094706.459822-14-bruce.richardson@intel.com> In-Reply-To: <20240807094706.459822-1-bruce.richardson@intel.com> References: <20240807093407.452784-1-bruce.richardson@intel.com> <20240807094706.459822-1-bruce.richardson@intel.com>
Rather than assuming that each VSI can hold up to 256 queue pairs, or the reported device limit, query the available nodes in the scheduler tree to check that we are not overflowing the limit for number of child scheduling nodes at each level.
Do this by multiplying max_children for each level beyond the VSI and using that as an additional cap on the number of queues. Signed-off-by: Bruce Richardson --- drivers/net/ice/ice_ethdev.c | 25 ++++++++++++++++++++----- 1 file changed, 20 insertions(+), 5 deletions(-) diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index 0d2445a317..ab3f88fd7d 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -913,7 +913,7 @@ ice_vsi_config_default_rss(struct ice_aqc_vsi_props *info) } static int -ice_vsi_config_tc_queue_mapping(struct ice_vsi *vsi, +ice_vsi_config_tc_queue_mapping(struct ice_hw *hw, struct ice_vsi *vsi, struct ice_aqc_vsi_props *info, uint8_t enabled_tcmap) { @@ -929,13 +929,28 @@ ice_vsi_config_tc_queue_mapping(struct ice_vsi *vsi, } /* vector 0 is reserved and 1 vector for ctrl vsi */ - if (vsi->adapter->hw.func_caps.common_cap.num_msix_vectors < 2) + if (vsi->adapter->hw.func_caps.common_cap.num_msix_vectors < 2) { vsi->nb_qps = 0; - else + } else { vsi->nb_qps = RTE_MIN ((uint16_t)vsi->adapter->hw.func_caps.common_cap.num_msix_vectors - 2, RTE_MIN(vsi->nb_qps, ICE_MAX_Q_PER_TC)); + /* cap max QPs to what the HW reports as num-children for each layer. + * Multiply num_children for each layer from the entry_point layer to + * the qgroup, or second-last layer. + * Avoid any potential overflow by using uint32_t type and breaking loop + * once we have a number greater than the already configured max. 
+ */ + uint32_t max_sched_vsi_nodes = 1; + for (uint8_t i = hw->sw_entry_point_layer; i < hw->num_tx_sched_layers - 1; i++) { + max_sched_vsi_nodes *= hw->max_children[i]; + if (max_sched_vsi_nodes >= vsi->nb_qps) + break; + } + vsi->nb_qps = RTE_MIN(vsi->nb_qps, max_sched_vsi_nodes); + } + /* nb_qps(hex) -> fls */ /* 0000 -> 0 */ /* 0001 -> 0 */ @@ -1707,7 +1722,7 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type) rte_cpu_to_le_16(hw->func_caps.fd_fltr_best_effort); /* Enable VLAN/UP trip */ - ret = ice_vsi_config_tc_queue_mapping(vsi, + ret = ice_vsi_config_tc_queue_mapping(hw, vsi, &vsi_ctx.info, ICE_DEFAULT_TCMAP); if (ret) { @@ -1731,7 +1746,7 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type) vsi_ctx.info.fd_options = rte_cpu_to_le_16(cfg); vsi_ctx.info.sw_id = hw->port_info->sw_id; vsi_ctx.info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA; - ret = ice_vsi_config_tc_queue_mapping(vsi, + ret = ice_vsi_config_tc_queue_mapping(hw, vsi, &vsi_ctx.info, ICE_DEFAULT_TCMAP); if (ret) {
From patchwork Wed Aug 7 09:47:05 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 143000 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson Subject: [PATCH v2 14/15] net/ice: enhance Tx scheduler hierarchy support Date: Wed, 7 Aug 2024 10:47:05 +0100 Message-ID: <20240807094706.459822-15-bruce.richardson@intel.com> In-Reply-To: <20240807094706.459822-1-bruce.richardson@intel.com> References: <20240807093407.452784-1-bruce.richardson@intel.com> <20240807094706.459822-1-bruce.richardson@intel.com>
Increase the flexibility of the Tx scheduler hierarchy support in the driver.
If the HW/firmware allows it, allow creating up to 2k child nodes per scheduler node. Also expand the number of supported layers to the max available, rather than always just having 3 layers. One restriction on this change is that the topology needs to be configured and enabled before port queue setup, in many cases, and before port start in all cases. Signed-off-by: Bruce Richardson --- drivers/net/ice/ice_ethdev.c | 9 - drivers/net/ice/ice_ethdev.h | 15 +- drivers/net/ice/ice_rxtx.c | 10 + drivers/net/ice/ice_tm.c | 495 ++++++++++++++--------------------- 4 files changed, 212 insertions(+), 317 deletions(-) diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index ab3f88fd7d..5a5967ff71 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -3832,7 +3832,6 @@ ice_dev_start(struct rte_eth_dev *dev) int mask, ret; uint8_t timer = hw->func_caps.ts_func_info.tmr_index_owned; uint32_t pin_idx = ad->devargs.pin_idx; - struct rte_tm_error tm_err; ice_declare_bitmap(pmask, ICE_PROMISC_MAX); ice_zero_bitmap(pmask, ICE_PROMISC_MAX); @@ -3864,14 +3863,6 @@ ice_dev_start(struct rte_eth_dev *dev) } } - if (pf->tm_conf.committed) { - ret = ice_do_hierarchy_commit(dev, pf->tm_conf.clear_on_fail, &tm_err); - if (ret) { - PMD_DRV_LOG(ERR, "fail to commit Tx scheduler"); - goto rx_err; - } - } - ice_set_rx_function(dev); ice_set_tx_function(dev); diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h index f31addb122..cb1a7e8e0d 100644 --- a/drivers/net/ice/ice_ethdev.h +++ b/drivers/net/ice/ice_ethdev.h @@ -479,14 +479,6 @@ struct ice_tm_node { struct ice_sched_node *sched_node; }; -/* node type of Traffic Manager */ -enum ice_tm_node_type { - ICE_TM_NODE_TYPE_PORT, - ICE_TM_NODE_TYPE_QGROUP, - ICE_TM_NODE_TYPE_QUEUE, - ICE_TM_NODE_TYPE_MAX, -}; - /* Struct to store all the Traffic Manager configuration. 
*/ struct ice_tm_conf { struct ice_shaper_profile_list shaper_profile_list; @@ -690,9 +682,6 @@ int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id, struct ice_rss_hash_cfg *cfg); void ice_tm_conf_init(struct rte_eth_dev *dev); void ice_tm_conf_uninit(struct rte_eth_dev *dev); -int ice_do_hierarchy_commit(struct rte_eth_dev *dev, - int clear_on_fail, - struct rte_tm_error *error); extern const struct rte_tm_ops ice_tm_ops; static inline int @@ -750,4 +739,8 @@ int rte_pmd_ice_dump_switch(uint16_t port, uint8_t **buff, uint32_t *size); __rte_experimental int rte_pmd_ice_dump_txsched(uint16_t port, bool detail, FILE *stream); + +int +ice_tm_setup_txq_node(struct ice_pf *pf, struct ice_hw *hw, uint16_t qid, uint32_t node_teid); + #endif /* _ICE_ETHDEV_H_ */ diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index a150d28e73..7a421bb364 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -747,6 +747,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) int err; struct ice_vsi *vsi; struct ice_hw *hw; + struct ice_pf *pf; struct ice_aqc_add_tx_qgrp *txq_elem; struct ice_tlan_ctx tx_ctx; int buf_len; @@ -777,6 +778,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) vsi = txq->vsi; hw = ICE_VSI_TO_HW(vsi); + pf = ICE_VSI_TO_PF(vsi); memset(&tx_ctx, 0, sizeof(tx_ctx)); txq_elem->num_txqs = 1; @@ -812,6 +814,14 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) /* store the schedule node id */ txq->q_teid = txq_elem->txqs[0].q_teid; + /* move the queue to correct position in hierarchy, if explicit hierarchy configured */ + if (pf->tm_conf.committed) + if (ice_tm_setup_txq_node(pf, hw, tx_queue_id, txq->q_teid) != 0) { + PMD_DRV_LOG(ERR, "Failed to set up txq traffic management node"); + rte_free(txq_elem); + return -EIO; + } + dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; rte_free(txq_elem); diff --git a/drivers/net/ice/ice_tm.c 
b/drivers/net/ice/ice_tm.c index 459446a6b0..a86943a5b2 100644 --- a/drivers/net/ice/ice_tm.c +++ b/drivers/net/ice/ice_tm.c @@ -1,17 +1,17 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright(c) 2022 Intel Corporation */ +#include #include #include "ice_ethdev.h" #include "ice_rxtx.h" -#define MAX_CHILDREN_PER_SCHED_NODE 8 -#define MAX_CHILDREN_PER_TM_NODE 256 +#define MAX_CHILDREN_PER_TM_NODE 2048 static int ice_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail, - __rte_unused struct rte_tm_error *error); + struct rte_tm_error *error); static int ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id, uint32_t parent_node_id, uint32_t priority, uint32_t weight, uint32_t level_id, @@ -86,9 +86,10 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev) } static int -ice_node_param_check(struct ice_pf *pf, uint32_t node_id, +ice_node_param_check(uint32_t node_id, uint32_t priority, uint32_t weight, const struct rte_tm_node_params *params, + bool is_leaf, struct rte_tm_error *error) { /* checked all the unsupported parameter */ @@ -123,7 +124,7 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id, } /* for non-leaf node */ - if (node_id >= pf->dev_data->nb_tx_queues) { + if (!is_leaf) { if (params->nonleaf.wfq_weight_mode) { error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE; @@ -147,6 +148,11 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id, } /* for leaf node */ + if (node_id >= RTE_MAX_QUEUES_PER_PORT) { + error->type = RTE_TM_ERROR_TYPE_NODE_ID; + error->message = "Node ID out of range for a leaf node."; + return -EINVAL; + } if (params->leaf.cman) { error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN; error->message = "Congestion management not supported"; @@ -193,11 +199,18 @@ find_node(struct ice_tm_node *root, uint32_t id) return NULL; } +static inline uint8_t +ice_get_leaf_level(struct ice_hw *hw) +{ + return hw->num_tx_sched_layers - 1 - hw->port_info->has_tc; +} + static int ice_node_type_get(struct rte_eth_dev *dev, uint32_t 
node_id, int *is_leaf, struct rte_tm_error *error) { struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct ice_tm_node *tm_node; if (!is_leaf || !error) @@ -217,7 +230,7 @@ ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id, return -EINVAL; } - if (tm_node->level == ICE_TM_NODE_TYPE_QUEUE) + if (tm_node->level == ice_get_leaf_level(hw)) *is_leaf = true; else *is_leaf = false; @@ -389,16 +402,28 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id, struct rte_tm_error *error) { struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct ice_tm_shaper_profile *shaper_profile = NULL; struct ice_tm_node *tm_node; - struct ice_tm_node *parent_node; + struct ice_tm_node *parent_node = NULL; int ret; if (!params || !error) return -EINVAL; - ret = ice_node_param_check(pf, node_id, priority, weight, - params, error); + if (parent_node_id != RTE_TM_NODE_ID_NULL) { + parent_node = find_node(pf->tm_conf.root, parent_node_id); + if (!parent_node) { + error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID; + error->message = "parent not exist"; + return -EINVAL; + } + } + if (level_id == RTE_TM_NODE_LEVEL_ID_ANY && parent_node != NULL) + level_id = parent_node->level + 1; + + ret = ice_node_param_check(node_id, priority, weight, + params, level_id == ice_get_leaf_level(hw), error); if (ret) return ret; @@ -424,9 +449,9 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id, /* root node if not have a parent */ if (parent_node_id == RTE_TM_NODE_ID_NULL) { /* check level */ - if (level_id != ICE_TM_NODE_TYPE_PORT) { + if (level_id != 0) { error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS; - error->message = "Wrong level"; + error->message = "Wrong level, root node (NULL parent) must be at level 0"; return -EINVAL; } @@ -445,7 +470,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id, if 
(!tm_node) return -ENOMEM; tm_node->id = node_id; - tm_node->level = ICE_TM_NODE_TYPE_PORT; + tm_node->level = 0; tm_node->parent = NULL; tm_node->reference_count = 0; tm_node->shaper_profile = shaper_profile; @@ -458,48 +483,21 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id, } /* check the parent node */ - parent_node = find_node(pf->tm_conf.root, parent_node_id); - if (!parent_node) { - error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID; - error->message = "parent not exist"; - return -EINVAL; - } - if (parent_node->level != ICE_TM_NODE_TYPE_PORT && - parent_node->level != ICE_TM_NODE_TYPE_QGROUP) { + /* for n-level hierarchy, level n-1 is leaf, so last level with children is n-2 */ + if ((int)parent_node->level > hw->num_tx_sched_layers - 2) { error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID; error->message = "parent is not valid"; return -EINVAL; } /* check level */ - if (level_id != RTE_TM_NODE_LEVEL_ID_ANY && - level_id != parent_node->level + 1) { + if (level_id != parent_node->level + 1) { error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS; error->message = "Wrong level"; return -EINVAL; } /* check the node number */ - if (parent_node->level == ICE_TM_NODE_TYPE_PORT) { - /* check the queue group number */ - if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) { - error->type = RTE_TM_ERROR_TYPE_NODE_ID; - error->message = "too many queue groups"; - return -EINVAL; - } - } else { - /* check the queue number */ - if (parent_node->reference_count >= - MAX_CHILDREN_PER_SCHED_NODE) { - error->type = RTE_TM_ERROR_TYPE_NODE_ID; - error->message = "too many queues"; - return -EINVAL; - } - if (node_id >= pf->dev_data->nb_tx_queues) { - error->type = RTE_TM_ERROR_TYPE_NODE_ID; - error->message = "too large queue id"; - return -EINVAL; - } - } + /* TODO, check max children allowed and max nodes at this level */ tm_node = rte_zmalloc(NULL, sizeof(struct ice_tm_node) + @@ -518,13 +516,12 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t 
node_id, (void *)((uint8_t *)tm_node + sizeof(struct ice_tm_node)); tm_node->parent->children[tm_node->parent->reference_count] = tm_node; - if (tm_node->priority != 0 && level_id != ICE_TM_NODE_TYPE_QUEUE && - level_id != ICE_TM_NODE_TYPE_QGROUP) + if (tm_node->priority != 0) + /* TODO fixme, some levels may support this perhaps? */ PMD_DRV_LOG(WARNING, "priority != 0 not supported in level %d", level_id); - if (tm_node->weight != 1 && - level_id != ICE_TM_NODE_TYPE_QUEUE && level_id != ICE_TM_NODE_TYPE_QGROUP) + if (tm_node->weight != 1 && level_id == 0) PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d", level_id); @@ -569,7 +566,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id, } /* root node */ - if (tm_node->level == ICE_TM_NODE_TYPE_PORT) { + if (tm_node->level == 0) { rte_free(tm_node); pf->tm_conf.root = NULL; return 0; @@ -589,53 +586,6 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id, return 0; } -static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev, - struct ice_sched_node *queue_sched_node, - struct ice_sched_node *dst_node, - uint16_t queue_id) -{ - struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct ice_aqc_move_txqs_data *buf; - struct ice_sched_node *queue_parent_node; - uint8_t txqs_moved; - int ret = ICE_SUCCESS; - uint16_t buf_size = ice_struct_size(buf, txqs, 1); - - buf = (struct ice_aqc_move_txqs_data *)ice_malloc(hw, sizeof(*buf)); - if (buf == NULL) - return -ENOMEM; - - queue_parent_node = queue_sched_node->parent; - buf->src_teid = queue_parent_node->info.node_teid; - buf->dest_teid = dst_node->info.node_teid; - buf->txqs[0].q_teid = queue_sched_node->info.node_teid; - buf->txqs[0].txq_id = queue_id; - - ret = ice_aq_move_recfg_lan_txq(hw, 1, true, false, false, false, 50, - NULL, buf, buf_size, &txqs_moved, NULL); - if (ret || txqs_moved == 0) { - PMD_DRV_LOG(ERR, "move lan queue %u failed", queue_id); - rte_free(buf); - return ICE_ERR_PARAM; - } - - if 
(queue_parent_node->num_children > 0) { - queue_parent_node->num_children--; - queue_parent_node->children[queue_parent_node->num_children] = NULL; - } else { - PMD_DRV_LOG(ERR, "invalid children number %d for queue %u", - queue_parent_node->num_children, queue_id); - rte_free(buf); - return ICE_ERR_PARAM; - } - dst_node->children[dst_node->num_children++] = queue_sched_node; - queue_sched_node->parent = dst_node; - ice_sched_query_elem(hw, queue_sched_node->info.node_teid, &queue_sched_node->info); - - rte_free(buf); - return ret; -} - static int ice_set_node_rate(struct ice_hw *hw, struct ice_tm_node *tm_node, struct ice_sched_node *sched_node) @@ -723,240 +673,191 @@ static int ice_cfg_hw_node(struct ice_hw *hw, return 0; } -static struct ice_sched_node *ice_get_vsi_node(struct ice_hw *hw) +int +ice_tm_setup_txq_node(struct ice_pf *pf, struct ice_hw *hw, uint16_t qid, uint32_t teid) { - struct ice_sched_node *node = hw->port_info->root; - uint32_t vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET; - uint32_t i; + struct ice_sched_node *hw_node = ice_sched_find_node_by_teid(hw->port_info->root, teid); + struct ice_tm_node *sw_node = find_node(pf->tm_conf.root, qid); - for (i = 0; i < vsi_layer; i++) - node = node->children[0]; - - return node; -} - -static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev) -{ - struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct ice_sched_node *vsi_node = ice_get_vsi_node(hw); - struct ice_tm_node *root = pf->tm_conf.root; - uint32_t i; - int ret; - - /* reset vsi_node */ - ret = ice_set_node_rate(hw, NULL, vsi_node); - if (ret) { - PMD_DRV_LOG(ERR, "reset vsi node failed"); - return ret; - } - - if (root == NULL) + /* not configured in hierarchy */ + if (sw_node == NULL) return 0; - for (i = 0; i < root->reference_count; i++) { - struct ice_tm_node *tm_node = root->children[i]; + sw_node->sched_node = hw_node; - if 
(tm_node->sched_node == NULL) - continue; + /* if the queue node has been put in the wrong place in hierarchy */ + if (hw_node->parent != sw_node->parent->sched_node) { + struct ice_aqc_move_txqs_data *buf; + uint8_t txqs_moved = 0; + uint16_t buf_size = ice_struct_size(buf, txqs, 1); + + buf = ice_malloc(hw, buf_size); + if (buf == NULL) + return -ENOMEM; - ret = ice_cfg_hw_node(hw, NULL, tm_node->sched_node); - if (ret) { - PMD_DRV_LOG(ERR, "reset queue group node %u failed", tm_node->id); - return ret; + struct ice_sched_node *parent = hw_node->parent; + struct ice_sched_node *new_parent = sw_node->parent->sched_node; + buf->src_teid = parent->info.node_teid; + buf->dest_teid = new_parent->info.node_teid; + buf->txqs[0].q_teid = hw_node->info.node_teid; + buf->txqs[0].txq_id = qid; + + int ret = ice_aq_move_recfg_lan_txq(hw, 1, true, false, false, false, 50, + NULL, buf, buf_size, &txqs_moved, NULL); + if (ret || txqs_moved == 0) { + PMD_DRV_LOG(ERR, "move lan queue %u failed", qid); + ice_free(hw, buf); + return ICE_ERR_PARAM; } - tm_node->sched_node = NULL; + + /* now update the ice_sched_nodes to match physical layout */ + new_parent->children[new_parent->num_children++] = hw_node; + hw_node->parent = new_parent; + ice_sched_query_elem(hw, hw_node->info.node_teid, &hw_node->info); + for (uint16_t i = 0; i < parent->num_children; i++) + if (parent->children[i] == hw_node) { + /* to remove, just overwrite the old node slot with the last ptr */ + parent->children[i] = parent->children[--parent->num_children]; + break; + } } - return 0; + return ice_cfg_hw_node(hw, sw_node, hw_node); } -static int ice_remove_leaf_nodes(struct rte_eth_dev *dev) +/* from a given node, recursively deletes all the nodes that belong to that vsi. 
+ * Any nodes which can't be deleted, because they have children belonging to a different + * VSI, are instead reassigned to that VSI. + */ +static int +free_sched_node_recursive(struct ice_port_info *pi, const struct ice_sched_node *root, + struct ice_sched_node *node, uint8_t vsi_id) { - int ret = 0; - int i; uint16_t i = 0; - for (i = 0; i < dev->data->nb_tx_queues; i++) { - ret = ice_tx_queue_stop(dev, i); - if (ret) { - PMD_DRV_LOG(ERR, "stop queue %u failed", i); - break; + while (i < node->num_children) { + if (node->children[i]->vsi_handle != vsi_id) { + i++; + continue; } + free_sched_node_recursive(pi, root, node->children[i], vsi_id); } - return ret; -} - -static int ice_add_leaf_nodes(struct rte_eth_dev *dev) -{ - int ret = 0; - int i; - - for (i = 0; i < dev->data->nb_tx_queues; i++) { - ret = ice_tx_queue_start(dev, i); - if (ret) { - PMD_DRV_LOG(ERR, "start queue %u failed", i); - break; - } + if (node != root) { + if (node->num_children == 0) + ice_free_sched_node(pi, node); + else + node->vsi_handle = node->children[0]->vsi_handle; } - return ret; + return 0; } -int ice_do_hierarchy_commit(struct rte_eth_dev *dev, - int clear_on_fail, - struct rte_tm_error *error) +static int +create_sched_node_recursive(struct ice_port_info *pi, struct ice_tm_node *sw_node, + struct ice_sched_node *hw_root, uint16_t *created) { - struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct ice_tm_node *root; - struct ice_sched_node *vsi_node = NULL; - struct ice_sched_node *queue_node; - struct ice_tx_queue *txq; - int ret_val = 0; - uint32_t i; - uint32_t idx_vsi_child; - uint32_t idx_qg; - uint32_t nb_vsi_child; - uint32_t nb_qg; - uint32_t qid; - uint32_t q_teid; - - /* remove leaf nodes */ - ret_val = ice_remove_leaf_nodes(dev); - if (ret_val) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, "reset no-leaf nodes failed"); - goto fail_clear; - }
- - /* reset no-leaf nodes. */ - ret_val = ice_reset_noleaf_nodes(dev); - if (ret_val) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, "reset leaf nodes failed"); - goto add_leaf; - } - - /* config vsi node */ - vsi_node = ice_get_vsi_node(hw); - root = pf->tm_conf.root; - - ret_val = ice_set_node_rate(hw, root, vsi_node); - if (ret_val) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, - "configure vsi node %u bandwidth failed", - root->id); - goto add_leaf; - } - - /* config queue group nodes */ - nb_vsi_child = vsi_node->num_children; - nb_qg = vsi_node->children[0]->num_children; - - idx_vsi_child = 0; - idx_qg = 0; - - if (root == NULL) - goto commit; - - for (i = 0; i < root->reference_count; i++) { - struct ice_tm_node *tm_node = root->children[i]; - struct ice_tm_node *tm_child_node; - struct ice_sched_node *qgroup_sched_node = - vsi_node->children[idx_vsi_child]->children[idx_qg]; - uint32_t j; - - ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node); - if (ret_val) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, - "configure queue group node %u failed", - tm_node->id); - goto reset_leaf; - } - - for (j = 0; j < tm_node->reference_count; j++) { - tm_child_node = tm_node->children[j]; - qid = tm_child_node->id; - ret_val = ice_tx_queue_start(dev, qid); - if (ret_val) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, "start queue %u failed", qid); - goto reset_leaf; - } - txq = dev->data->tx_queues[qid]; - q_teid = txq->q_teid; - queue_node = ice_sched_get_node(hw->port_info, q_teid); - if (queue_node == NULL) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, "get queue %u node failed", qid); - goto reset_leaf; - } - if (queue_node->info.parent_teid != qgroup_sched_node->info.node_teid) { - ret_val = ice_move_recfg_lan_txq(dev, queue_node, - qgroup_sched_node, qid); - if (ret_val) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, "move queue 
%u failed", qid); - goto reset_leaf; - } - } - ret_val = ice_cfg_hw_node(hw, tm_child_node, queue_node); - if (ret_val) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, - "configure queue group node %u failed", - tm_node->id); - goto reset_leaf; - } - } - - idx_qg++; - if (idx_qg >= nb_qg) { - idx_qg = 0; - idx_vsi_child++; + struct ice_sched_node *parent = sw_node->sched_node; + uint32_t teid; + uint16_t added; + + /* first create all child nodes */ + for (uint16_t i = 0; i < sw_node->reference_count; i++) { + struct ice_tm_node *tm_node = sw_node->children[i]; + int res = ice_sched_add_elems(pi, hw_root, + parent, parent->tx_sched_layer + 1, + 1 /* num nodes */, &added, &teid, + NULL /* no pre-alloc */); + if (res != 0) { + PMD_DRV_LOG(ERR, "Error with ice_sched_add_elems, adding child node to teid %u\n", + parent->info.node_teid); + return -1; } - if (idx_vsi_child >= nb_vsi_child) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, "too many queues"); - goto reset_leaf; + struct ice_sched_node *hw_node = ice_sched_find_node_by_teid(parent, teid); + if (ice_cfg_hw_node(pi->hw, tm_node, hw_node) != 0) { + PMD_DRV_LOG(ERR, "Error configuring node %u at layer %u", + teid, parent->tx_sched_layer + 1); + return -1; } + tm_node->sched_node = hw_node; + created[hw_node->tx_sched_layer]++; } -commit: - pf->tm_conf.committed = true; - pf->tm_conf.clear_on_fail = clear_on_fail; + /* if we have just created the child nodes in the q-group, i.e. last non-leaf layer, + * then just return, rather than trying to create leaf nodes. + * That is done later at queue start. 
+ */ + if (sw_node->level + 2 == ice_get_leaf_level(pi->hw)) + return 0; - return ret_val; + for (uint16_t i = 0; i < sw_node->reference_count; i++) { + if (sw_node->children[i]->reference_count == 0) + continue; -reset_leaf: - ice_remove_leaf_nodes(dev); -add_leaf: - ice_add_leaf_nodes(dev); - ice_reset_noleaf_nodes(dev); -fail_clear: - /* clear all the traffic manager configuration */ - if (clear_on_fail) { - ice_tm_conf_uninit(dev); - ice_tm_conf_init(dev); + if (create_sched_node_recursive(pi, sw_node->children[i], hw_root, created) < 0) + return -1; } - return ret_val; + return 0; } -static int ice_hierarchy_commit(struct rte_eth_dev *dev, - int clear_on_fail, - struct rte_tm_error *error) +static int +apply_topology_updates(struct rte_eth_dev *dev __rte_unused) { + return 0; +} + +static int +commit_new_hierarchy(struct rte_eth_dev *dev) +{ + struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct ice_port_info *pi = hw->port_info; + struct ice_tm_node *sw_root = pf->tm_conf.root; + struct ice_sched_node *new_vsi_root = (pi->has_tc) ? 
			pi->root->children[0] : pi->root;
+	uint16_t nodes_created_per_level[10] = {0}; /* counted per hw level, not per logical */
+	uint8_t q_lvl = ice_get_leaf_level(hw);
+	uint8_t qg_lvl = q_lvl - 1;
+
+	/* check if we have a previously applied topology */
+	if (sw_root->sched_node != NULL)
+		return apply_topology_updates(dev);
+
+	free_sched_node_recursive(pi, new_vsi_root, new_vsi_root, new_vsi_root->vsi_handle);
+
+	sw_root->sched_node = new_vsi_root;
+	if (create_sched_node_recursive(pi, sw_root, new_vsi_root, nodes_created_per_level) < 0)
+		return -1;
+	for (uint16_t i = 0; i < RTE_DIM(nodes_created_per_level); i++)
+		PMD_DRV_LOG(DEBUG, "Created %u nodes at level %u\n",
+				nodes_created_per_level[i], i);
+	hw->vsi_ctx[pf->main_vsi->idx]->sched.vsi_node[0] = new_vsi_root;
+
+	pf->main_vsi->nb_qps =
+			RTE_MIN(nodes_created_per_level[qg_lvl] * hw->max_children[qg_lvl],
+				hw->layer_info[q_lvl].max_device_nodes);
+
+	pf->tm_conf.committed = true; /* set flag to be checks on queue start */
+
+	return ice_alloc_lan_q_ctx(hw, 0, 0, pf->main_vsi->nb_qps);
+}
 
-	/* if device not started, simply set committed flag and return. */
-	if (!dev->data->dev_started) {
-		pf->tm_conf.committed = true;
-		pf->tm_conf.clear_on_fail = clear_on_fail;
-		return 0;
+static int
+ice_hierarchy_commit(struct rte_eth_dev *dev,
+		int clear_on_fail,
+		struct rte_tm_error *error)
+{
+	RTE_SET_USED(error);
+	/* TODO - commit should only be done to topology before start!
+	 */
+	if (dev->data->dev_started)
+		return -1;
+
+	uint64_t start = rte_rdtsc();
+	int ret = commit_new_hierarchy(dev);
+	if (ret < 0 && clear_on_fail) {
+		ice_tm_conf_uninit(dev);
+		ice_tm_conf_init(dev);
 	}
-
-	return ice_do_hierarchy_commit(dev, clear_on_fail, error);
+	uint64_t time = rte_rdtsc() - start;
+	PMD_DRV_LOG(DEBUG, "Time to apply hierarchy = %.1f\n", (float)time / rte_get_timer_hz());
+	return ret;
 }

From patchwork Wed Aug 7 09:47:06 2024
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v2 15/15] net/ice: add minimal capability reporting API
Date: Wed, 7 Aug 2024 10:47:06 +0100
Message-ID: <20240807094706.459822-16-bruce.richardson@intel.com>
In-Reply-To: <20240807094706.459822-1-bruce.richardson@intel.com>
References: <20240807093407.452784-1-bruce.richardson@intel.com>
 <20240807094706.459822-1-bruce.richardson@intel.com>

Incomplete, but reports the number of available layers.

Signed-off-by: Bruce Richardson
---
 drivers/net/ice/ice_ethdev.h |  1 +
 drivers/net/ice/ice_tm.c     | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index cb1a7e8e0d..6bebc511e4 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -682,6 +682,7 @@ int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
 		struct ice_rss_hash_cfg *cfg);
 void ice_tm_conf_init(struct rte_eth_dev *dev);
 void ice_tm_conf_uninit(struct rte_eth_dev *dev);
+
 extern const struct rte_tm_ops ice_tm_ops;
 
 static inline int
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 
a86943a5b2..d7def61756 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -33,8 +33,12 @@ static int ice_shaper_profile_add(struct rte_eth_dev *dev,
 static int ice_shaper_profile_del(struct rte_eth_dev *dev,
 		uint32_t shaper_profile_id,
 		struct rte_tm_error *error);
+static int ice_tm_capabilities_get(struct rte_eth_dev *dev,
+		struct rte_tm_capabilities *cap,
+		struct rte_tm_error *error);
 
 const struct rte_tm_ops ice_tm_ops = {
+	.capabilities_get = ice_tm_capabilities_get,
 	.shaper_profile_add = ice_shaper_profile_add,
 	.shaper_profile_delete = ice_shaper_profile_del,
 	.node_add = ice_tm_node_add,
@@ -861,3 +865,16 @@ ice_hierarchy_commit(struct rte_eth_dev *dev,
 	PMD_DRV_LOG(DEBUG, "Time to apply hierarchy = %.1f\n", (float)time / rte_get_timer_hz());
 	return ret;
 }
+
+static int
+ice_tm_capabilities_get(struct rte_eth_dev *dev, struct rte_tm_capabilities *cap,
+		struct rte_tm_error *error)
+{
+	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	*cap = (struct rte_tm_capabilities){
+		.n_levels_max = hw->num_tx_sched_layers - hw->port_info->has_tc,
+	};
+	if (error)
+		error->type = RTE_TM_ERROR_TYPE_NONE;
+	return 0;
+}