From patchwork Thu Feb 29 11:51:55 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dariusz Sosnowski
X-Patchwork-Id: 137478
X-Patchwork-Delegate: rasland@nvidia.com
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
CC: Raslan Darawsheh, Bing Zhao
Subject: [PATCH v2 10/11] net/mlx5: reuse flow fields
Date: Thu, 29 Feb 2024 12:51:55 +0100
Message-ID: <20240229115157.201671-11-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240229115157.201671-1-dsosnowski@nvidia.com>
References: <20240228170046.176600-1-dsosnowski@nvidia.com>
 <20240229115157.201671-1-dsosnowski@nvidia.com>

Each time a flow is allocated in the mlx5 PMD, the whole buffer, both
the rte_flow_hw and mlx5dr_rule parts, is zeroed. This introduces
wasted work, because:

- the mlx5dr layer does not assume that mlx5dr_rule must be
  pre-initialized,
- flow action translation in the mlx5 PMD does not need most of the
  rte_flow_hw fields to be zeroed.

To reduce this wasted work, this patch introduces a flags field in the
flow definition. Each flow field which is not always initialized during
flow creation has a corresponding flag, which is set if and only if the
field holds a valid value (in other words, it was set during flow
creation). This mechanism allows the PMD to:

- remove zeroing from flow allocation,
- access some fields (especially those in rte_flow_hw_aux) only when
  the corresponding flag is set.

Signed-off-by: Dariusz Sosnowski
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow.h    | 24 ++++++++-
 drivers/net/mlx5/mlx5_flow_hw.c | 93 +++++++++++++++++++++------------
 2 files changed, 83 insertions(+), 34 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index e8f4d2cb16..db65825eab 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1279,6 +1279,26 @@ enum {
 	MLX5_FLOW_HW_FLOW_OP_TYPE_RSZ_TBL_MOVE,
 };
 
+enum {
+	MLX5_FLOW_HW_FLOW_FLAG_CNT_ID = RTE_BIT32(0),
+	MLX5_FLOW_HW_FLOW_FLAG_FATE_JUMP = RTE_BIT32(1),
+	MLX5_FLOW_HW_FLOW_FLAG_FATE_HRXQ = RTE_BIT32(2),
+	MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX = RTE_BIT32(3),
+	MLX5_FLOW_HW_FLOW_FLAG_MTR_ID = RTE_BIT32(4),
+	MLX5_FLOW_HW_FLOW_FLAG_MATCHER_SELECTOR = RTE_BIT32(5),
+	MLX5_FLOW_HW_FLOW_FLAG_UPD_FLOW = RTE_BIT32(6),
+};
+
+#define MLX5_FLOW_HW_FLOW_FLAGS_ALL ( \
+		MLX5_FLOW_HW_FLOW_FLAG_CNT_ID | \
+		MLX5_FLOW_HW_FLOW_FLAG_FATE_JUMP | \
+		MLX5_FLOW_HW_FLOW_FLAG_FATE_HRXQ | \
+		MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX | \
+		MLX5_FLOW_HW_FLOW_FLAG_MTR_ID | \
+		MLX5_FLOW_HW_FLOW_FLAG_MATCHER_SELECTOR | \
+		MLX5_FLOW_HW_FLOW_FLAG_UPD_FLOW \
+	)
+
 #ifdef PEDANTIC
 #pragma GCC diagnostic ignored "-Wpedantic"
 #endif
@@ -1295,8 +1315,8 @@ struct rte_flow_hw {
 	uint32_t res_idx;
 	/** HWS flow rule index passed to mlx5dr. */
 	uint32_t rule_idx;
-	/** Fate action type. */
-	uint32_t fate_type;
+	/** Which flow fields (inline or in auxiliary struct) are used. */
+	uint32_t flags;
 	/** Ongoing flow operation type. */
 	uint8_t operation_type;
 	/** Index of pattern template this flow is based on. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 025f04ddde..979be4764a 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2845,6 +2845,7 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
 				     &rule_act->action,
 				     &rule_act->counter.offset))
 			return -1;
+		flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_CNT_ID;
 		flow->cnt_id = act_idx;
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -2854,6 +2855,7 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
 		 * it in flow destroy.
 		 */
 		mlx5_flow_hw_aux_set_age_idx(flow, aux, act_idx);
+		flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX;
 		if (action_flags & MLX5_FLOW_ACTION_INDIRECT_COUNT)
 			/*
 			 * The mutual update for idirect AGE & COUNT will be
@@ -2869,6 +2871,7 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
 				     &param->queue_id,
 				     &age_cnt, idx) < 0)
 			return -1;
+		flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_CNT_ID;
 		flow->cnt_id = age_cnt;
 		param->nb_cnts++;
 	} else {
@@ -3174,7 +3177,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			rule_acts[act_data->action_dst].action = (!!attr.group) ?
 					jump->hws_action : jump->root_action;
 			flow->jump = jump;
-			flow->fate_type = MLX5_FLOW_FATE_JUMP;
+			flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_FATE_JUMP;
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
@@ -3185,7 +3188,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				return -1;
 			rule_acts[act_data->action_dst].action = hrxq->action;
 			flow->hrxq = hrxq;
-			flow->fate_type = MLX5_FLOW_FATE_QUEUE;
+			flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_FATE_HRXQ;
 			break;
 		case MLX5_RTE_FLOW_ACTION_TYPE_RSS:
 			item_flags = table->its[it_idx]->item_flags;
@@ -3264,7 +3267,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				(!!attr.group) ? jump->hws_action :
 						 jump->root_action;
 			flow->jump = jump;
-			flow->fate_type = MLX5_FLOW_FATE_JUMP;
+			flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_FATE_JUMP;
 			if (mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr))
 				return -1;
 			break;
@@ -3284,6 +3287,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			if (age_idx == 0)
 				return -rte_errno;
 			mlx5_flow_hw_aux_set_age_idx(flow, aux, age_idx);
+			flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX;
 			if (at->action_flags & MLX5_FLOW_ACTION_INDIRECT_COUNT)
 				/*
 				 * When AGE uses indirect counter, no need to
@@ -3306,6 +3310,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 					);
 			if (ret != 0)
 				return ret;
+			flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_CNT_ID;
 			flow->cnt_id = cnt_id;
 			break;
 		case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
@@ -3317,6 +3322,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 					);
 			if (ret != 0)
 				return ret;
+			flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_CNT_ID;
 			flow->cnt_id = act_data->shared_counter.id;
 			break;
 		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
@@ -3349,6 +3355,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				return ret;
 			aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
 			mlx5_flow_hw_aux_set_mtr_id(flow, aux, mtr_idx);
+			flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_MTR_ID;
 			break;
 		case RTE_FLOW_ACTION_TYPE_NAT64:
 			nat64_c = action->conf;
@@ -3360,7 +3367,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 		}
 	}
 	if (at->action_flags & MLX5_FLOW_ACTION_INDIRECT_COUNT) {
+		/* If indirect count is used, then CNT_ID flag should be set. */
+		MLX5_ASSERT(flow->flags & MLX5_FLOW_HW_FLOW_FLAG_CNT_ID);
 		if (at->action_flags & MLX5_FLOW_ACTION_INDIRECT_AGE) {
+			/* If indirect AGE is used, then AGE_IDX flag should be set. */
+			MLX5_ASSERT(flow->flags & MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX);
 			aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
 			age_idx = mlx5_flow_hw_aux_get_age_idx(flow, aux) &
 				  MLX5_HWS_AGE_IDX_MASK;
@@ -3398,8 +3409,10 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 						flow->res_idx - 1;
 		rule_acts[hw_acts->push_remove_pos].ipv6_ext.header =
 				ap->ipv6_push_data;
 	}
-	if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id))
+	if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id)) {
+		flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_CNT_ID;
 		flow->cnt_id = hw_acts->cnt_id;
+	}
 	return 0;
 }
 
@@ -3512,7 +3525,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 				   "Port must be started before enqueueing flow operations");
 		return NULL;
 	}
-	flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
+	flow = mlx5_ipool_malloc(table->flow, &flow_idx);
 	if (!flow)
 		goto error;
 	rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue);
@@ -3531,6 +3544,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	} else {
 		flow->res_idx = flow_idx;
 	}
+	flow->flags = 0;
 	/*
 	 * Set the flow operation type here in order to know if the flow memory
 	 * should be freed or not when get the result from dequeue.
@@ -3582,6 +3596,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 					(struct mlx5dr_rule *)flow->rule);
 		rte_rwlock_read_unlock(&table->matcher_replace_rwlk);
 		aux->matcher_selector = selector;
+		flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_MATCHER_SELECTOR;
 	}
 	if (likely(!ret)) {
 		flow_hw_q_inc_flow_ops(priv, queue);
@@ -3655,7 +3670,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 				   "Flow rule index exceeds table size");
 		return NULL;
 	}
-	flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
+	flow = mlx5_ipool_malloc(table->flow, &flow_idx);
 	if (!flow)
 		goto error;
 	rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue);
@@ -3674,6 +3689,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	} else {
 		flow->res_idx = flow_idx;
 	}
+	flow->flags = 0;
 	/*
 	 * Set the flow operation type here in order to know if the flow memory
 	 * should be freed or not when get the result from dequeue.
@@ -3715,6 +3731,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 					(struct mlx5dr_rule *)flow->rule);
 		rte_rwlock_read_unlock(&table->matcher_replace_rwlk);
 		aux->matcher_selector = selector;
+		flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_MATCHER_SELECTOR;
 	}
 	if (likely(!ret)) {
 		flow_hw_q_inc_flow_ops(priv, queue);
@@ -3802,6 +3819,7 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	} else {
 		nf->res_idx = of->res_idx;
 	}
+	nf->flags = 0;
 	/* Indicate the construction function to set the proper fields. */
 	nf->operation_type = MLX5_FLOW_HW_FLOW_OP_TYPE_UPDATE;
 	/*
@@ -3831,6 +3849,7 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	 */
 	of->operation_type = MLX5_FLOW_HW_FLOW_OP_TYPE_UPDATE;
 	of->user_data = user_data;
+	of->flags |= MLX5_FLOW_HW_FLOW_FLAG_UPD_FLOW;
 	rule_attr.user_data = of;
 	ret = mlx5dr_rule_action_update((struct mlx5dr_rule *)of->rule,
 					action_template_index, rule_acts, &rule_attr);
@@ -3925,13 +3944,14 @@ flow_hw_age_count_release(struct mlx5_priv *priv, uint32_t queue,
 	uint32_t *cnt_queue;
 	uint32_t age_idx = aux->orig.age_idx;
 
+	MLX5_ASSERT(flow->flags & MLX5_FLOW_HW_FLOW_FLAG_CNT_ID);
 	if (mlx5_hws_cnt_is_shared(priv->hws_cpool, flow->cnt_id)) {
-		if (age_idx && !mlx5_hws_age_is_indirect(age_idx)) {
+		if ((flow->flags & MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX) &&
+		    !mlx5_hws_age_is_indirect(age_idx)) {
 			/* Remove this AGE parameter from indirect counter. */
 			mlx5_hws_cnt_age_set(priv->hws_cpool, flow->cnt_id, 0);
 			/* Release the AGE parameter. */
 			mlx5_hws_age_action_destroy(priv, age_idx, error);
-			mlx5_flow_hw_aux_set_age_idx(flow, aux, 0);
 		}
 		return;
 	}
@@ -3939,8 +3959,7 @@ flow_hw_age_count_release(struct mlx5_priv *priv, uint32_t queue,
 	cnt_queue = mlx5_hws_cnt_is_pool_shared(priv) ? NULL : &queue;
 	/* Put the counter first to reduce the race risk in BG thread. */
 	mlx5_hws_cnt_pool_put(priv->hws_cpool, cnt_queue, &flow->cnt_id);
-	flow->cnt_id = 0;
-	if (age_idx) {
+	if (flow->flags & MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX) {
 		if (mlx5_hws_age_is_indirect(age_idx)) {
 			uint32_t idx = age_idx & MLX5_HWS_AGE_IDX_MASK;
 
@@ -3949,7 +3968,6 @@ flow_hw_age_count_release(struct mlx5_priv *priv, uint32_t queue,
 			/* Release the AGE parameter. */
 			mlx5_hws_age_action_destroy(priv, age_idx, error);
 		}
-		mlx5_flow_hw_aux_set_age_idx(flow, aux, age_idx);
 	}
 }
 
@@ -4079,34 +4097,35 @@ hw_cmpl_flow_update_or_destroy(struct rte_eth_dev *dev,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
 	struct rte_flow_template_table *table = flow->table;
-	struct rte_flow_hw_aux *aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
 	/* Release the original resource index in case of update. */
 	uint32_t res_idx = flow->res_idx;
 
-	if (flow->fate_type == MLX5_FLOW_FATE_JUMP)
-		flow_hw_jump_release(dev, flow->jump);
-	else if (flow->fate_type == MLX5_FLOW_FATE_QUEUE)
-		mlx5_hrxq_obj_release(dev, flow->hrxq);
-	if (mlx5_hws_cnt_id_valid(flow->cnt_id))
-		flow_hw_age_count_release(priv, queue,
-					  flow, error);
-	if (aux->orig.mtr_id) {
-		mlx5_ipool_free(pool->idx_pool, aux->orig.mtr_id);
-		aux->orig.mtr_id = 0;
-	}
-	if (flow->operation_type != MLX5_FLOW_HW_FLOW_OP_TYPE_UPDATE) {
-		if (table->resource)
-			mlx5_ipool_free(table->resource, res_idx);
-		mlx5_ipool_free(table->flow, flow->idx);
-	} else {
+	if (flow->flags & MLX5_FLOW_HW_FLOW_FLAGS_ALL) {
 		struct rte_flow_hw_aux *aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
-		struct rte_flow_hw *upd_flow = &aux->upd_flow;
 
-		rte_memcpy(flow, upd_flow, offsetof(struct rte_flow_hw, rule));
-		aux->orig = aux->upd;
-		flow->operation_type = MLX5_FLOW_HW_FLOW_OP_TYPE_CREATE;
+		if (flow->flags & MLX5_FLOW_HW_FLOW_FLAG_FATE_JUMP)
+			flow_hw_jump_release(dev, flow->jump);
+		else if (flow->flags & MLX5_FLOW_HW_FLOW_FLAG_FATE_HRXQ)
+			mlx5_hrxq_obj_release(dev, flow->hrxq);
+		if (flow->flags & MLX5_FLOW_HW_FLOW_FLAG_CNT_ID)
+			flow_hw_age_count_release(priv, queue, flow, error);
+		if (flow->flags & MLX5_FLOW_HW_FLOW_FLAG_MTR_ID)
+			mlx5_ipool_free(pool->idx_pool, aux->orig.mtr_id);
+		if (flow->flags & MLX5_FLOW_HW_FLOW_FLAG_UPD_FLOW) {
+			struct rte_flow_hw *upd_flow = &aux->upd_flow;
+
+			rte_memcpy(flow, upd_flow, offsetof(struct rte_flow_hw, rule));
+			aux->orig = aux->upd;
+			flow->operation_type = MLX5_FLOW_HW_FLOW_OP_TYPE_CREATE;
+			if (table->resource)
+				mlx5_ipool_free(table->resource, res_idx);
+		}
+	}
+	if (flow->operation_type == MLX5_FLOW_HW_FLOW_OP_TYPE_DESTROY ||
+	    flow->operation_type == MLX5_FLOW_HW_FLOW_OP_TYPE_RSZ_TBL_DESTROY) {
 		if (table->resource)
 			mlx5_ipool_free(table->resource, res_idx);
+		mlx5_ipool_free(table->flow, flow->idx);
 	}
 }
 
@@ -4121,6 +4140,7 @@ hw_cmpl_resizable_tbl(struct rte_eth_dev *dev,
 	uint32_t selector = aux->matcher_selector;
 	uint32_t other_selector = (selector + 1) & 1;
 
+	MLX5_ASSERT(flow->flags & MLX5_FLOW_HW_FLOW_FLAG_MATCHER_SELECTOR);
 	switch (flow->operation_type) {
 	case MLX5_FLOW_HW_FLOW_OP_TYPE_RSZ_TBL_CREATE:
 		rte_atomic_fetch_add_explicit
@@ -11411,10 +11431,18 @@ flow_hw_query(struct rte_eth_dev *dev, struct rte_flow *flow,
 		case RTE_FLOW_ACTION_TYPE_VOID:
 			break;
 		case RTE_FLOW_ACTION_TYPE_COUNT:
+			if (!(hw_flow->flags & MLX5_FLOW_HW_FLOW_FLAG_CNT_ID))
+				return rte_flow_error_set(error, EINVAL,
+						RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+						"counter not defined in the rule");
 			ret = flow_hw_query_counter(dev, hw_flow->cnt_id, data,
 						    error);
 			break;
 		case RTE_FLOW_ACTION_TYPE_AGE:
+			if (!(hw_flow->flags & MLX5_FLOW_HW_FLOW_FLAG_AGE_IDX))
+				return rte_flow_error_set(error, EINVAL,
+						RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+						"age data not available");
 			aux = mlx5_flow_hw_aux(dev->data->port_id, hw_flow);
 			ret = flow_hw_query_age(dev, mlx5_flow_hw_aux_get_age_idx(hw_flow, aux),
 						data, error);
@@ -12707,6 +12735,7 @@ flow_hw_update_resized(struct rte_eth_dev *dev, uint32_t queue,
 		.burst = attr->postpone,
 	};
 
+	MLX5_ASSERT(hw_flow->flags & MLX5_FLOW_HW_FLOW_FLAG_MATCHER_SELECTOR);
 	/**
 	 * mlx5dr_matcher_resize_rule_move() accepts original table matcher -
 	 * the one that was used BEFORE table resize.
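
The core idea of the patch, stripped of PMD details, is a validity
bitmask guarding optional fields. Below is a minimal, self-contained
sketch of that pattern; the struct, FLOW_FLAG_* bits, and helpers are
illustrative stand-ins invented for this example, not the driver's
rte_flow_hw or MLX5_FLOW_HW_FLOW_FLAG_* definitions:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* One validity bit per field that is not always initialized. */
    #define FLOW_FLAG_CNT_ID  (UINT32_C(1) << 0)
    #define FLOW_FLAG_AGE_IDX (UINT32_C(1) << 1)

    struct flow {
    	uint32_t flags;   /* Which optional fields below hold valid data. */
    	uint32_t cnt_id;  /* Valid only if FLOW_FLAG_CNT_ID is set. */
    	uint32_t age_idx; /* Valid only if FLOW_FLAG_AGE_IDX is set. */
    };

    /* Allocation does not zero the whole struct; only flags are reset. */
    static void flow_init(struct flow *f)
    {
    	f->flags = 0;
    }

    /* Writers set the field and its validity bit together. */
    static void flow_set_cnt_id(struct flow *f, uint32_t cnt_id)
    {
    	f->cnt_id = cnt_id;
    	f->flags |= FLOW_FLAG_CNT_ID;
    }

    /* Readers test the flag instead of relying on a zeroed field. */
    static int flow_query_cnt(const struct flow *f, uint32_t *cnt_id)
    {
    	if (!(f->flags & FLOW_FLAG_CNT_ID))
    		return -1; /* No counter was attached to this flow. */
    	*cnt_id = f->cnt_id;
    	return 0;
    }

    int main(void)
    {
    	struct flow f;
    	uint32_t cnt;

    	flow_init(&f);
    	assert(flow_query_cnt(&f, &cnt) == -1); /* Flag clear: no access. */
    	flow_set_cnt_id(&f, 42);
    	assert(flow_query_cnt(&f, &cnt) == 0 && cnt == 42);
    	printf("cnt_id=%u\n", cnt);
    	return 0;
    }

The same reasoning explains the switch from mlx5_ipool_zmalloc() to
mlx5_ipool_malloc() in the patch: once every optional field is guarded
by a flag, clearing flow->flags alone is sufficient, and the release
paths test flags rather than sentinel zero values.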