net/mlx5: reduce the counter pool name length

Message ID 20230630125730.435542-1-bingz@nvidia.com (mailing list archive)
State Accepted, archived
Delegated to: Raslan Darawsheh
Series: net/mlx5: reduce the counter pool name length

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/github-robot: build success github build: passed
ci/intel-Functional success Functional PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-abi-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-aarch-unit-testing success Testing PASS
ci/iol-unit-testing success Testing PASS
ci/iol-testing success Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-intel-Performance success Performance Testing PASS

Commit Message

Bing Zhao June 30, 2023, 12:57 p.m. UTC
The name of an rte_ring is limited to RTE_MEMZONE_NAMESIZE, which is
32 by default. When creating a HWS counter pool cache, the final name
format was "RG_MLX5_HWS_CNT_POOL_%u_cache/%u". Its fixed part already
takes 28 characters, leaving only 3 digits for the two %u fields
together, i.e., fewer than 1000 variants. For example, when the first
%u, the port ID, is 100, it consumes all the remaining characters and
the second %u, the queue index, is discarded. With more than one rule
creation queue, the rte_ring could then not be created.

By reducing the number of fixed characters and formatting the IDs in
hexadecimal, the issue is overcome, under the assumption that the
queue index does not span the full integer range.

Fixes: 13ea6bdcc7ee ("net/mlx5: support counters in cross port shared mode")
Fixes: 4d368e1da3a4 ("net/mlx5: support flow counter action for HWS")
Cc: jackmin@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_hws_cnt.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)
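
The truncation is easy to reproduce with plain snprintf(). Below is a
minimal standalone sketch (not part of the patch); the "RG_" prefix
models the one rte_ring_create() prepends internally, and NAMESIZE
stands in for RTE_MEMZONE_NAMESIZE:

#include <stdio.h>

#define NAMESIZE 32	/* stands in for RTE_MEMZONE_NAMESIZE */

int main(void)
{
	char name[NAMESIZE];
	/* Old format: the fixed part plus a 3-digit port ID already
	 * needs 32 bytes before the queue index is even reached. */
	int need = snprintf(name, sizeof(name),
			    "RG_MLX5_HWS_CNT_POOL_%u_cache/%u", 100u, 1u);

	/* snprintf() returns the length it wanted to write, so
	 * need >= NAMESIZE means the name was cut short; queues 0
	 * and 1 would end up with the same truncated name. */
	printf("needed %d, got \"%s\"\n", need, name);
	return 0;
}

Running this prints: needed 32, got "RG_MLX5_HWS_CNT_POOL_100_cache/"
— the queue index is dropped entirely.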
  

Comments

Raslan Darawsheh July 3, 2023, 2:02 p.m. UTC | #1
Hi,

> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Friday, June 30, 2023 3:58 PM
> To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; Suanming Mou
> <suanmingm@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
> Cc: dev@dpdk.org; Jack Min <jackmin@nvidia.com>; stable@dpdk.org
> Subject: [PATCH] net/mlx5: reduce the counter pool name length
> 
> The name of an rte_ring is limited to RTE_MEMZONE_NAMESIZE, which is
> 32 by default. When creating a HWS counter pool cache, the final name
> format was "RG_MLX5_HWS_CNT_POOL_%u_cache/%u". Its fixed part already
> takes 28 characters, leaving only 3 digits for the two %u fields
> together, i.e., fewer than 1000 variants. For example, when the first
> %u, the port ID, is 100, it consumes all the remaining characters and
> the second %u, the queue index, is discarded. With more than one rule
> creation queue, the rte_ring could then not be created.
> 
> By reducing the number of fixed characters and formatting the IDs in
> hexadecimal, the issue is overcome, under the assumption that the
> queue index does not span the full integer range.
> 
> Fixes: 13ea6bdcc7ee ("net/mlx5: support counters in cross port shared
> mode")
> Fixes: 4d368e1da3a4 ("net/mlx5: support flow counter action for HWS")
> Cc: jackmin@nvidia.com
> Cc: stable@dpdk.org

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
  

Patch

diff --git a/drivers/net/mlx5/mlx5_hws_cnt.c b/drivers/net/mlx5/mlx5_hws_cnt.c
index d98df68f39..18d80f34ba 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.c
+++ b/drivers/net/mlx5/mlx5_hws_cnt.c
@@ -419,8 +419,7 @@ mlx5_hws_cnt_pool_init(struct mlx5_dev_ctx_shared *sh,
 		goto error;
 	}
 	for (qidx = 0; qidx < ccfg->q_num; qidx++) {
-		snprintf(mz_name, sizeof(mz_name), "%s_cache/%u", pcfg->name,
-				qidx);
+		snprintf(mz_name, sizeof(mz_name), "%s_qc/%x", pcfg->name, qidx);
 		cntp->cache->qcache[qidx] = rte_ring_create(mz_name, ccfg->size,
 				SOCKET_ID_ANY,
 				RING_F_SP_ENQ | RING_F_SC_DEQ |
@@ -612,12 +611,10 @@ mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
 	int ret = 0;
 	size_t sz;
 
-	mp_name = mlx5_malloc(MLX5_MEM_ZERO, RTE_MEMZONE_NAMESIZE, 0,
-			SOCKET_ID_ANY);
+	mp_name = mlx5_malloc(MLX5_MEM_ZERO, RTE_MEMZONE_NAMESIZE, 0, SOCKET_ID_ANY);
 	if (mp_name == NULL)
 		goto error;
-	snprintf(mp_name, RTE_MEMZONE_NAMESIZE, "MLX5_HWS_CNT_POOL_%u",
-			dev->data->port_id);
+	snprintf(mp_name, RTE_MEMZONE_NAMESIZE, "MLX5_HWS_CNT_P_%x", dev->data->port_id);
 	pcfg.name = mp_name;
 	pcfg.request_num = pattr->nb_counters;
 	pcfg.alloc_factor = HWS_CNT_ALLOC_FACTOR_DEFAULT;
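
As a quick sanity check on the new format (again a standalone sketch,
not from the patch): the port ID is a 16-bit value, so "%x" emits at
most four digits, and five hex digits are left for the queue index
within the 32-byte limit, matching the commit message's assumption
that the queue index does not need the full integer range:

#include <stdio.h>

int main(void)
{
	char name[32];	/* RTE_MEMZONE_NAMESIZE */
	/* Final ring name after the patch: the "RG_" prefix added by
	 * rte_ring_create(), the pool name "MLX5_HWS_CNT_P_%x" and
	 * the per-queue suffix "_qc/%x". Worst case: port 0xffff and
	 * a 5-hex-digit queue index (up to 0xfffff). */
	int need = snprintf(name, sizeof(name),
			    "RG_MLX5_HWS_CNT_P_%x_qc/%x", 0xffffu, 0xfffffu);

	printf("needed %d, got \"%s\"\n", need, name);
	return 0;
}

This prints needed 31: the worst case fills the buffer exactly (31
characters plus the terminating NUL), so queue indexes up to 0xfffff
can be named without truncation or collision.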