[V1] net/mlx5: store IPv6 TC detection result in physical device

Message ID 20240130064913.1916709-1-gavinl@nvidia.com (mailing list archive)
State Accepted, archived
Delegated to: Raslan Darawsheh
Headers
Series [V1] net/mlx5: store IPv6 TC detection result in physical device |

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/intel-Functional success Functional PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-abi-testing success Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-unit-amd64-testing success Testing PASS
ci/iol-sample-apps-testing success Testing PASS
ci/iol-compile-amd64-testing success Testing PASS
ci/iol-compile-arm64-testing success Testing PASS
ci/iol-unit-arm64-testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS

Commit Message

Gavin Li Jan. 30, 2024, 6:49 a.m. UTC
  Previously, discovery of IPv6 traffic class support would happen on each
device that does not share context with others. However, it is not
necessary to repeat it on devices belonging to the same physical device. A
flow is created and destroyed during the detection, which may trigger
cache allocation and consume more memory at scale.

To solve the problem, store the IPv6 traffic class discovery result in
the physical device, and perform the detection only once per physical
device.

Fixes: 569b8340a012 ("net/mlx5: discover IPv6 traffic class support in RDMA core")
Signed-off-by: Gavin Li <gavinl@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_os.c | 12 +++++++-----
 drivers/net/mlx5/mlx5.h          | 13 ++++++++++++-
 drivers/net/mlx5/mlx5_flow_dv.c  |  2 +-
 3 files changed, 20 insertions(+), 7 deletions(-)
  

Comments

Raslan Darawsheh Feb. 25, 2024, 2:45 p.m. UTC | #1
Hi,
> -----Original Message-----
> From: Gavin Li <gavinl@nvidia.com>
> Sent: Tuesday, January 30, 2024 8:49 AM
> To: Dariusz Sosnowski <dsosnowski@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; Suanming Mou
> <suanmingm@nvidia.com>; Matan Azrad <matan@nvidia.com>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>; Jiawei(Jonny)
> Wang <jiaweiw@nvidia.com>
> Subject: [PATCH V1] net/mlx5: store IPv6 TC detection result in physical device
> 
> Previously, discovery of IPv6 traffic class support would happen on each
> device that does not share context with others. However, it is not
> necessary to repeat it on devices belonging to the same physical device.
> A flow is created and destroyed during the detection, which may trigger
> cache allocation and consume more memory at scale.
> 
> To solve the problem, store the IPv6 traffic class discovery result in
> the physical device, and perform the detection only once per physical
> device.
> 
> Fixes: 569b8340a012 ("net/mlx5: discover IPv6 traffic class support in RDMA
> core")
> Signed-off-by: Gavin Li <gavinl@nvidia.com>
> Acked-by: Suanming Mou <suanmingm@nvidia.com>
Patch applied to next-net-mlx

Kindest regards
Raslan Darawsheh
  

Patch

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index e47d0d0238..dd140e9934 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1602,11 +1602,13 @@  mlx5_dev_spawn(struct rte_device *dpdk_dev,
 			goto error;
 	}
 	rte_rwlock_init(&priv->ind_tbls_lock);
-	if (!priv->sh->cdev->config.hca_attr.modify_outer_ipv6_traffic_class ||
-	    (sh->config.dv_flow_en == 1 &&
-	    !priv->sh->ipv6_tc_fallback &&
-	    mlx5_flow_discover_ipv6_tc_support(eth_dev)))
-		priv->sh->ipv6_tc_fallback = 1;
+	if (sh->phdev->config.ipv6_tc_fallback == MLX5_IPV6_TC_UNKNOWN) {
+		if (!sh->cdev->config.hca_attr.modify_outer_ipv6_traffic_class ||
+		    (sh->config.dv_flow_en == 1 && mlx5_flow_discover_ipv6_tc_support(eth_dev)))
+			sh->phdev->config.ipv6_tc_fallback = MLX5_IPV6_TC_FALLBACK;
+		else
+			sh->phdev->config.ipv6_tc_fallback = MLX5_IPV6_TC_OK;
+	}
 	if (priv->sh->config.dv_flow_en == 2) {
 #ifdef HAVE_MLX5_HWS_SUPPORT
 		if (priv->sh->config.dv_esw_en) {
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 683029023e..ce9aa64a1d 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1421,6 +1421,17 @@  struct mlx5_dev_registers {
 
 struct mlx5_geneve_tlv_options;
 
+enum mlx5_ipv6_tc_support {
+	MLX5_IPV6_TC_UNKNOWN = 0,
+	MLX5_IPV6_TC_FALLBACK,
+	MLX5_IPV6_TC_OK,
+};
+
+struct mlx5_common_nic_config {
+	enum mlx5_ipv6_tc_support ipv6_tc_fallback;
+	/* Whether ipv6 traffic class should use old value. */
+};
+
 /**
  * Physical device structure.
  * This device is created once per NIC to manage recourses shared by all ports
@@ -1431,6 +1442,7 @@  struct mlx5_physical_device {
 	struct mlx5_dev_ctx_shared *sh; /* Created on sherd context. */
 	uint64_t guid; /* System image guid, the uniq ID of physical device. */
 	struct mlx5_geneve_tlv_options *tlv_options;
+	struct mlx5_common_nic_config config;
 	uint32_t refcnt;
 };
 
@@ -1459,7 +1471,6 @@  struct mlx5_dev_ctx_shared {
 	uint32_t lag_rx_port_affinity_en:1;
 	/* lag_rx_port_affinity is supported. */
 	uint32_t hws_max_log_bulk_sz:5;
-	uint32_t ipv6_tc_fallback:1;
 	/* Log of minimal HWS counters created hard coded. */
 	uint32_t hws_max_nb_counters; /* Maximal number for HWS counters. */
 	uint32_t max_port; /* Maximal IB device port index. */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 6998be107f..1d2fdd3391 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1608,7 +1608,7 @@  mlx5_modify_flex_item(const struct rte_eth_dev *dev,
 static inline bool
 mlx5_dv_modify_ipv6_traffic_class_supported(struct mlx5_priv *priv)
 {
-	return !priv->sh->ipv6_tc_fallback;
+	return priv->sh->phdev->config.ipv6_tc_fallback == MLX5_IPV6_TC_OK;
 }
 
 void