net/mlx5: fix pattern template size validation

Message ID 20240306073856.950136-1-getelson@nvidia.com (mailing list archive)
State Accepted, archived
Delegated to: Raslan Darawsheh
Headers
Series net/mlx5: fix pattern template size validation

Checks

Context Check Description
ci/checkpatch warning coding style issues
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/intel-Functional success Functional PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-abi-testing success Testing PASS
ci/iol-compile-amd64-testing success Testing PASS
ci/iol-unit-arm64-testing success Testing PASS
ci/iol-unit-amd64-testing success Testing PASS
ci/iol-compile-arm64-testing success Testing PASS
ci/iol-sample-apps-testing success Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS

Commit Message

Gregory Etelson March 6, 2024, 7:38 a.m. UTC
  A PMD running in HWS FDB mode can be configured to steer group 0 to FW.
In that case, the PMD activates legacy DV pattern processing.
However, some control flows require HWS pattern processing
in group 0.

Pattern template validation tried to create a matcher both in group 0
and in the HWS group.
As a result, during group 0 validation an HWS-tuned pattern was
processed as DV.

This patch removes pattern validation for group 0.

Fixes: f3aadd103358 ("net/mlx5: improve pattern template validation")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 49 +++++++++++++++++++--------------
 1 file changed, 29 insertions(+), 20 deletions(-)
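
The fix narrows validation to a single table-create attempt in an HWS group. A simplified, stand-alone sketch of the resulting control flow is shown below; all names (`pattern_attr`, `stub_table_create`, the 64-byte limit) are illustrative stand-ins, not the PMD's actual types or sizes:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the template's flow attributes. */
struct pattern_attr {
	bool ingress, egress, transfer;
};

static int stub_errno; /* mimics rte_errno for this sketch */

/* Stubbed table creation: NULL signals failure, as with
 * flow_hw_table_create() in the patch; 64 bytes plays the
 * role of the STE size limit here. */
static void *
stub_table_create(unsigned int group, size_t pattern_size)
{
	(void)group;
	if (pattern_size > 64) {
		stub_errno = E2BIG;
		return NULL;
	}
	return (void *)1; /* any non-NULL "table" */
}

/* Mirrors the patched contract: validate in HWS group 1 only.
 * Returns 0 (pattern fits), -EINVAL (bad attributes), or
 * -E2BIG (pattern exceeds the STE size limit). */
static int
pattern_template_validate_sketch(const struct pattern_attr *attr,
				 size_t pattern_size)
{
	if (!attr->ingress && !attr->egress && !attr->transfer)
		return -EINVAL;
	if (stub_table_create(1, pattern_size) != NULL)
		return 0; /* table created: pattern fits */
	/* Any failure other than E2BIG is not a size problem,
	 * so validation treats it as a pass. */
	return stub_errno == E2BIG ? -E2BIG : 0;
}
```

Note how this drops the old `do { ... } while (++group <= 1)` loop: with group 0 possibly steered to FW, only the HWS group is a meaningful place to size-check the pattern.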
  

Comments

Raslan Darawsheh March 13, 2024, 7:44 a.m. UTC | #1
Hi,

> -----Original Message-----
> From: Gregory Etelson <getelson@nvidia.com>
> Sent: Wednesday, March 6, 2024 9:39 AM
> To: dev@dpdk.org
> Cc: Gregory Etelson <getelson@nvidia.com>; Maayan Kashani
> <mkashani@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Dariusz
> Sosnowski <dsosnowski@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; Suanming Mou
> <suanmingm@nvidia.com>; Matan Azrad <matan@nvidia.com>
> Subject: [PATCH] net/mlx5: fix pattern template size validation
> 
> PMD running in HWS FDB mode can be configured to steer group 0 to FW.
> In that case PMD activates legacy DV pattern processing.
> There are control flows that require HWS pattern processing in group 0.
> 
> Pattern template validation tried to create a matcher both in group 0 and HWS
> group.
> As the result, during group 0 validation HWS tuned pattern was processed as
> DV.
> 
> The patch removed pattern validation for group 0.
> 
> Fixes: f3aadd103358 ("net/mlx5: improve pattern template validation")
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
  

Patch

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 4216433c6e..b37348c972 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7668,48 +7668,57 @@  flow_hw_pattern_has_sq_match(const struct rte_flow_item *items)
 	return false;
 }
 
+/*
+ * Verify that the tested flow patterns fit the STE size limit in the HWS group.
+ *
+ *
+ * Return values:
+ * 0       : Tested patterns fit STE size limit
+ * -EINVAL : Invalid parameters detected
+ * -E2BIG  : Tested patterns exceed STE size limit
+ */
 static int
 pattern_template_validate(struct rte_eth_dev *dev,
 			  struct rte_flow_pattern_template *pt[], uint32_t pt_num)
 {
-	uint32_t group = 0;
+	struct rte_flow_error error;
 	struct mlx5_flow_template_table_cfg tbl_cfg = {
-		.attr = (struct rte_flow_template_table_attr) {
+		.attr = {
 			.nb_flows = 64,
 			.insertion_type = RTE_FLOW_TABLE_INSERTION_TYPE_PATTERN,
 			.hash_func = RTE_FLOW_TABLE_HASH_FUNC_DEFAULT,
 			.flow_attr = {
+				.group = 1,
 				.ingress = pt[0]->attr.ingress,
 				.egress = pt[0]->attr.egress,
 				.transfer = pt[0]->attr.transfer
 			}
-		},
-		.external = true
+		}
 	};
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_actions_template *action_template;
+	struct rte_flow_template_table *tmpl_tbl;
+	int ret;
 
-	if (pt[0]->attr.ingress) {
+	if (pt[0]->attr.ingress)
 		action_template = priv->action_template_drop[MLX5DR_TABLE_TYPE_NIC_RX];
-	} else if (pt[0]->attr.egress) {
+	else if (pt[0]->attr.egress)
 		action_template = priv->action_template_drop[MLX5DR_TABLE_TYPE_NIC_TX];
-	} else if (pt[0]->attr.transfer) {
+	else if (pt[0]->attr.transfer)
 		action_template = priv->action_template_drop[MLX5DR_TABLE_TYPE_FDB];
+	else
+		return -EINVAL;
+	if (pt[0]->item_flags & MLX5_FLOW_ITEM_COMPARE)
+		tbl_cfg.attr.nb_flows = 1;
+	tmpl_tbl = flow_hw_table_create(dev, &tbl_cfg, pt, pt_num,
+					&action_template, 1, NULL);
+	if (tmpl_tbl) {
+		ret = 0;
+		flow_hw_table_destroy(dev, tmpl_tbl, &error);
 	} else {
-		rte_errno = EINVAL;
-		return rte_errno;
+		ret = rte_errno == E2BIG ? -E2BIG : 0;
 	}
-	do {
-		struct rte_flow_template_table *tmpl_tbl;
-
-		tbl_cfg.attr.flow_attr.group = group;
-		tmpl_tbl = flow_hw_table_create(dev, &tbl_cfg, pt, pt_num,
-						&action_template, 1, NULL);
-		if (!tmpl_tbl)
-			return rte_errno;
-		flow_hw_table_destroy(dev, tmpl_tbl, NULL);
-	} while (++group <= 1);
-	return 0;
+	return ret;
 }
 
 /**
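
Per the new function's documented contract, a caller only has to distinguish three outcomes. A hypothetical helper (illustrative only, not part of the PMD) that maps them to diagnostics:

```c
#include <errno.h>
#include <string.h> /* for the usage check below */

/* Classify the return value of the patched validation routine.
 * Only a definitive size overflow (-E2BIG) or an invalid attribute
 * set (-EINVAL) rejects the template; any other creation failure is
 * treated as a pass at validation time (best-effort check). */
static const char *
classify_validation(int ret)
{
	switch (ret) {
	case 0:
		return "ok";
	case -EINVAL:
		return "invalid flow attributes";
	case -E2BIG:
		return "pattern exceeds STE size limit";
	default:
		return "unexpected return value";
	}
}
```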