From patchwork Thu Feb 29 11:51:49 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dariusz Sosnowski
X-Patchwork-Id: 137471
X-Patchwork-Delegate: rasland@nvidia.com
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
CC: Raslan Darawsheh, Bing Zhao
Subject: [PATCH v2 04/11] net/mlx5: skip the unneeded resource index allocation
Date: Thu, 29 Feb 2024 12:51:49 +0100
Message-ID: <20240229115157.201671-5-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240229115157.201671-1-dsosnowski@nvidia.com>
References: <20240228170046.176600-1-dsosnowski@nvidia.com>
 <20240229115157.201671-1-dsosnowski@nvidia.com>

From: Bing Zhao

The resource index was introduced to decouple a flow rule from the
resources it uses in hardware steering. It is needed only when rule
update is supported. In some cases, update is not supported on a
table (matcher), e.g. when:

* the table is resizable,
* FW gets involved,
* it is a root table,
* it is not index based or optimized (not applicable).

Likewise, when only one STE entry is required per rule, an update
operation is atomic by nature, so the extra resource index is not
needed either.

If the matcher does not support rule update, or at most one STE entry
is used per rule, there is no need to manage the resource index
allocation and release from the pool.

Signed-off-by: Bing Zhao
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow_hw.c | 129 +++++++++++++++++++-------------
 1 file changed, 76 insertions(+), 53 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index ef91a23a9b..1fe8f42618 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3383,9 +3383,6 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
 	if (!flow)
 		goto error;
-	mlx5_ipool_malloc(table->resource, &res_idx);
-	if (!res_idx)
-		goto error;
 	rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue);
 	/*
 	 * Set the table here in order to know the destination table
@@ -3394,7 +3391,14 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	flow->table = table;
 	flow->mt_idx = pattern_template_index;
 	flow->idx = flow_idx;
-	flow->res_idx = res_idx;
+	if (table->resource) {
+		mlx5_ipool_malloc(table->resource, &res_idx);
+		if (!res_idx)
+			goto error;
+		flow->res_idx = res_idx;
+	} else {
+		flow->res_idx = flow_idx;
+	}
 	/*
 	 * Set the job type here in order to know if the flow memory
 	 * should be freed or not when get the result from dequeue.
@@ -3404,11 +3408,10 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	job->user_data = user_data;
 	rule_attr.user_data = job;
 	/*
-	 * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices for rule
-	 * insertion hints.
+	 * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices
+	 * for rule insertion hints.
 	 */
-	MLX5_ASSERT(res_idx > 0);
-	flow->rule_idx = res_idx - 1;
+	flow->rule_idx = flow->res_idx - 1;
 	rule_attr.rule_idx = flow->rule_idx;
 	/*
 	 * Construct the flow actions based on the input actions.
@@ -3451,12 +3454,12 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	if (likely(!ret))
 		return (struct rte_flow *)flow;
 error:
-	if (job)
-		flow_hw_job_put(priv, job, queue);
+	if (table->resource && res_idx)
+		mlx5_ipool_free(table->resource, res_idx);
 	if (flow_idx)
 		mlx5_ipool_free(table->flow, flow_idx);
-	if (res_idx)
-		mlx5_ipool_free(table->resource, res_idx);
+	if (job)
+		flow_hw_job_put(priv, job, queue);
 	rte_flow_error_set(error, rte_errno,
 			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
 			   "fail to create rte flow");
@@ -3527,9 +3530,6 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
 	if (!flow)
 		goto error;
-	mlx5_ipool_malloc(table->resource, &res_idx);
-	if (!res_idx)
-		goto error;
 	rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue);
 	/*
 	 * Set the table here in order to know the destination table
@@ -3538,7 +3538,14 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	flow->table = table;
 	flow->mt_idx = 0;
 	flow->idx = flow_idx;
-	flow->res_idx = res_idx;
+	if (table->resource) {
+		mlx5_ipool_malloc(table->resource, &res_idx);
+		if (!res_idx)
+			goto error;
+		flow->res_idx = res_idx;
+	} else {
+		flow->res_idx = flow_idx;
+	}
 	/*
 	 * Set the job type here in order to know if the flow memory
 	 * should be freed or not when get the result from dequeue.
@@ -3547,9 +3554,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	job->flow = flow;
 	job->user_data = user_data;
 	rule_attr.user_data = job;
-	/*
-	 * Set the rule index.
-	 */
+	/* Set the rule index. */
 	flow->rule_idx = rule_index;
 	rule_attr.rule_idx = flow->rule_idx;
 	/*
@@ -3585,12 +3590,12 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	if (likely(!ret))
 		return (struct rte_flow *)flow;
 error:
-	if (job)
-		flow_hw_job_put(priv, job, queue);
-	if (res_idx)
+	if (table->resource && res_idx)
 		mlx5_ipool_free(table->resource, res_idx);
 	if (flow_idx)
 		mlx5_ipool_free(table->flow, flow_idx);
+	if (job)
+		flow_hw_job_put(priv, job, queue);
 	rte_flow_error_set(error, rte_errno,
 			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
 			   "fail to create rte flow");
@@ -3653,9 +3658,6 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 		rte_errno = ENOMEM;
 		goto error;
 	}
-	mlx5_ipool_malloc(table->resource, &res_idx);
-	if (!res_idx)
-		goto error;
 	nf = job->upd_flow;
 	memset(nf, 0, sizeof(struct rte_flow_hw));
 	rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue);
@@ -3666,7 +3668,14 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	nf->table = table;
 	nf->mt_idx = of->mt_idx;
 	nf->idx = of->idx;
-	nf->res_idx = res_idx;
+	if (table->resource) {
+		mlx5_ipool_malloc(table->resource, &res_idx);
+		if (!res_idx)
+			goto error;
+		nf->res_idx = res_idx;
+	} else {
+		nf->res_idx = of->res_idx;
+	}
 	/*
 	 * Set the job type here in order to know if the flow memory
 	 * should be freed or not when get the result from dequeue.
@@ -3676,11 +3685,11 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	job->user_data = user_data;
 	rule_attr.user_data = job;
 	/*
-	 * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices for rule
-	 * insertion hints.
+	 * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices
+	 * for rule insertion hints.
+	 * If there is only one STE, the update will be atomic by nature.
 	 */
-	MLX5_ASSERT(res_idx > 0);
-	nf->rule_idx = res_idx - 1;
+	nf->rule_idx = nf->res_idx - 1;
 	rule_attr.rule_idx = nf->rule_idx;
 	/*
 	 * Construct the flow actions based on the input actions.
@@ -3706,14 +3715,14 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	if (likely(!ret))
 		return 0;
 error:
+	if (table->resource && res_idx)
+		mlx5_ipool_free(table->resource, res_idx);
 	/* Flow created fail, return the descriptor and flow memory. */
 	if (job)
 		flow_hw_job_put(priv, job, queue);
-	if (res_idx)
-		mlx5_ipool_free(table->resource, res_idx);
 	return rte_flow_error_set(error, rte_errno,
-			RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-			"fail to update rte flow");
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  "fail to update rte flow");
 }
 
 /**
@@ -3968,13 +3977,15 @@ hw_cmpl_flow_update_or_destroy(struct rte_eth_dev *dev,
 	}
 	if (job->type != MLX5_HW_Q_JOB_TYPE_UPDATE) {
 		if (table) {
-			mlx5_ipool_free(table->resource, res_idx);
+			if (table->resource)
+				mlx5_ipool_free(table->resource, res_idx);
 			mlx5_ipool_free(table->flow, flow->idx);
 		}
 	} else {
 		rte_memcpy(flow, job->upd_flow,
 			   offsetof(struct rte_flow_hw, rule));
-		mlx5_ipool_free(table->resource, res_idx);
+		if (table->resource)
+			mlx5_ipool_free(table->resource, res_idx);
 	}
 }
 
@@ -4474,6 +4485,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	uint32_t i = 0, max_tpl = MLX5_HW_TBL_MAX_ITEM_TEMPLATE;
 	uint32_t nb_flows = rte_align32pow2(attr->nb_flows);
 	bool port_started = !!dev->data->dev_started;
+	bool rpool_needed;
 	size_t tbl_mem_size;
 	int err;
 
@@ -4511,13 +4523,6 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	tbl->flow = mlx5_ipool_create(&cfg);
 	if (!tbl->flow)
 		goto error;
-	/* Allocate rule indexed pool. */
-	cfg.size = 0;
-	cfg.type = "mlx5_hw_table_rule";
-	cfg.max_idx += priv->hw_q[0].size;
-	tbl->resource = mlx5_ipool_create(&cfg);
-	if (!tbl->resource)
-		goto error;
 	/* Register the flow group. */
 	ge = mlx5_hlist_register(priv->sh->groups, attr->flow_attr.group, &ctx);
 	if (!ge)
@@ -4597,12 +4602,30 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	tbl->type = attr->flow_attr.transfer ? MLX5DR_TABLE_TYPE_FDB :
 		    (attr->flow_attr.egress ? MLX5DR_TABLE_TYPE_NIC_TX :
 		    MLX5DR_TABLE_TYPE_NIC_RX);
+	/*
+	 * Only the matcher supports update and needs more than 1 WQE, an additional
+	 * index is needed. Or else the flow index can be reused.
+	 */
+	rpool_needed = mlx5dr_matcher_is_updatable(tbl->matcher_info[0].matcher) &&
+		       mlx5dr_matcher_is_dependent(tbl->matcher_info[0].matcher);
+	if (rpool_needed) {
+		/* Allocate rule indexed pool. */
+		cfg.size = 0;
+		cfg.type = "mlx5_hw_table_rule";
+		cfg.max_idx += priv->hw_q[0].size;
+		tbl->resource = mlx5_ipool_create(&cfg);
+		if (!tbl->resource)
+			goto res_error;
+	}
 	if (port_started)
 		LIST_INSERT_HEAD(&priv->flow_hw_tbl, tbl, next);
 	else
 		LIST_INSERT_HEAD(&priv->flow_hw_tbl_ongo, tbl, next);
 	rte_rwlock_init(&tbl->matcher_replace_rwlk);
 	return tbl;
+res_error:
+	if (tbl->matcher_info[0].matcher)
+		(void)mlx5dr_matcher_destroy(tbl->matcher_info[0].matcher);
 at_error:
 	for (i = 0; i < nb_action_templates; i++) {
 		__flow_hw_action_template_destroy(dev, &tbl->ats[i].acts);
@@ -4620,8 +4643,6 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	if (tbl->grp)
 		mlx5_hlist_unregister(priv->sh->groups,
 				      &tbl->grp->entry);
-	if (tbl->resource)
-		mlx5_ipool_destroy(tbl->resource);
 	if (tbl->flow)
 		mlx5_ipool_destroy(tbl->flow);
 	mlx5_free(tbl);
@@ -4830,12 +4851,13 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 	uint32_t ridx = 1;
 
 	/* Build ipool allocated object bitmap. */
-	mlx5_ipool_flush_cache(table->resource);
+	if (table->resource)
+		mlx5_ipool_flush_cache(table->resource);
 	mlx5_ipool_flush_cache(table->flow);
 	/* Check if ipool has allocated objects. */
 	if (table->refcnt ||
 	    mlx5_ipool_get_next(table->flow, &fidx) ||
-	    mlx5_ipool_get_next(table->resource, &ridx)) {
+	    (table->resource && mlx5_ipool_get_next(table->resource, &ridx))) {
 		DRV_LOG(WARNING, "Table %p is still in use.", (void *)table);
 		return rte_flow_error_set(error, EBUSY,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -4857,7 +4879,8 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 	if (table->matcher_info[1].matcher)
 		mlx5dr_matcher_destroy(table->matcher_info[1].matcher);
 	mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry);
-	mlx5_ipool_destroy(table->resource);
+	if (table->resource)
+		mlx5_ipool_destroy(table->resource);
 	mlx5_ipool_destroy(table->flow);
 	mlx5_free(table);
 	return 0;
@@ -12476,11 +12499,11 @@ flow_hw_table_resize(struct rte_eth_dev *dev,
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  table, "cannot resize flows pool");
-	ret = mlx5_ipool_resize(table->resource, nb_flows);
-	if (ret)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					  table, "cannot resize resources pool");
+	/*
+	 * A resizable matcher doesn't support rule update. In this case, the ipool
+	 * for the resource is not created and there is no need to resize it.
+	 */
+	MLX5_ASSERT(!table->resource);
 	if (mlx5_is_multi_pattern_active(&table->mpctx)) {
 		ret = flow_hw_table_resize_multi_pattern_actions(dev, table, nb_flows, error);
 		if (ret < 0)
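
For readers following the control flow: the index-selection pattern this patch
adds to the three async rule paths (create, create-by-index, update) can be
reduced to the self-contained sketch below. It compiles on its own; "struct
pool", pool_alloc() and flow_pick_res_idx() are simplified, hypothetical
stand-ins for the mlx5 indexed-pool types and helpers, not the driver's
actual API.

/*
 * Illustrative sketch only: names below are stand-ins, not mlx5 APIs.
 */
#include <stdint.h>
#include <stdio.h>

/* Toy 1-based indexed pool standing in for struct mlx5_indexed_pool. */
struct pool {
	uint32_t next; /* next free 1-based index */
	uint32_t max;  /* pool capacity */
};

static uint32_t
pool_alloc(struct pool *p)
{
	return p->next <= p->max ? p->next++ : 0; /* 0 means failure */
}

struct flow {
	uint32_t idx;      /* 1-based index from the per-table flow pool */
	uint32_t res_idx;  /* 1-based resource index */
	uint32_t rule_idx; /* 0-based insertion hint passed to mlx5dr */
};

/*
 * Mirrors the pattern added by the patch: take a dedicated resource
 * index only when the table keeps a resource pool (updatable matcher
 * needing more than one STE per rule); otherwise reuse the flow index.
 */
static int
flow_pick_res_idx(struct flow *flow, struct pool *resource)
{
	if (resource) {
		flow->res_idx = pool_alloc(resource);
		if (!flow->res_idx)
			return -1; /* caller rolls back flow->idx */
	} else {
		flow->res_idx = flow->idx;
	}
	/* Indexed pools are 1-based, mlx5dr hints are 0-based. */
	flow->rule_idx = flow->res_idx - 1;
	return 0;
}

int
main(void)
{
	struct pool rpool = { .next = 1, .max = 16 };
	struct flow f1 = { .idx = 1 }, f2 = { .idx = 2 };

	flow_pick_res_idx(&f1, &rpool); /* matcher supports update */
	flow_pick_res_idx(&f2, NULL);   /* update unsupported or single STE */
	printf("f1: res=%u rule=%u; f2: res=%u rule=%u\n",
	       f1.res_idx, f1.rule_idx, f2.res_idx, f2.rule_idx);
	return 0;
}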
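Two consequences of the change are worth noting. First, the error paths now
release the resource index (when a pool exists) before the flow index and put
the job descriptor back last, mirroring the reversed allocation order. Second,
since a resizable matcher does not support rule update, a resizable table
never owns a resource ipool, which is why flow_hw_table_resize() can replace
the resource-pool resize with MLX5_ASSERT(!table->resource).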