net/mlx5: fix Tx queue size adjustment

Message ID 20210205124318.18650-1-viacheslavo@nvidia.com (mailing list archive)
State Accepted, archived
Delegated to: Ferruh Yigit
Series: net/mlx5: fix Tx queue size adjustment

Checks

Context Check Description
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/checkpatch success coding style OK

Commit Message

Slava Ovsiienko Feb. 5, 2021, 12:43 p.m. UTC
  The inline data size alignment should be taken into account
as well, in order to conform to the rdma-core implementation
of the send queue size calculation.

Fixes: 7e14d144f2ea ("net/mlx5: fix Tx queue size created with DevX")

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_devx.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)
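
As a rough illustration of the effect (the numbers below are hypothetical and
assume MLX5_WQE_SIZE = 64 bytes, MLX5_WSEG_SIZE = 16 bytes, and 16-byte
cseg/eseg/dseg segments), consider a queue with inlen_send = 96 and TSO
disabled:

    before: wqe_size = max(48, 96 + 16 + 16)                = 128 -> 128 / 64 = 2 WQEBBs
    after:  wqe_size = max(48, 16 + 16 + ALIGN(96 + 4, 16)) = 144 -> 192 / 64 = 3 WQEBBs

Without the padding, the DevX path can size the SQ one WQEBB per descriptor
smaller than what the rdma-core calculation would reserve.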
  

Comments

Thomas Monjalon Feb. 5, 2021, 1:02 p.m. UTC | #1
05/02/2021 13:43, Viacheslav Ovsiienko:
> The inline data size alignment should be taken into account
> as well, in order to conform to the rdma-core implementation
> of the send queue size calculation.
> 
> Fixes: 7e14d144f2ea ("net/mlx5: fix Tx queue size created with DevX")
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

Squashed while pulling next-net, thanks.
  

Patch

diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 1b1a72dd07..e4acab90c8 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -1077,15 +1077,18 @@  mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	 * internally in the mlx5_calc_sq_size(), we do the same
 	 * for the queue being created with DevX at this point.
 	 */
-	wqe_size = txq_data->tso_en ? txq_ctrl->max_tso_header : 0;
+	wqe_size = txq_data->tso_en ?
+		   RTE_ALIGN(txq_ctrl->max_tso_header, MLX5_WSEG_SIZE) : 0;
 	wqe_size += sizeof(struct mlx5_wqe_cseg) +
 		    sizeof(struct mlx5_wqe_eseg) +
 		    sizeof(struct mlx5_wqe_dseg);
 	if (txq_data->inlen_send)
-		wqe_size = RTE_MAX(wqe_size, txq_data->inlen_send +
-					     sizeof(struct mlx5_wqe_cseg) +
-					     sizeof(struct mlx5_wqe_eseg));
-	wqe_size = RTE_ALIGN_CEIL(wqe_size, MLX5_WQE_SIZE) / MLX5_WQE_SIZE;
+		wqe_size = RTE_MAX(wqe_size, sizeof(struct mlx5_wqe_cseg) +
+					     sizeof(struct mlx5_wqe_eseg) +
+					     RTE_ALIGN(txq_data->inlen_send +
+						       sizeof(uint32_t),
+						       MLX5_WSEG_SIZE));
+	wqe_size = RTE_ALIGN(wqe_size, MLX5_WQE_SIZE) / MLX5_WQE_SIZE;
 	/* Create Send Queue object with DevX. */
 	wqe_n = RTE_MIN((1UL << txq_data->elts_n) * wqe_size,
 			(uint32_t)priv->sh->device_attr.max_qp_wr);
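
For reference, a minimal, self-contained sketch of the per-descriptor WQEBB
calculation as it stands after this patch. The constant values and the helper
name are assumptions for illustration only; the driver takes the real values
from its PRM definitions and the queue configuration.

#include <stdint.h>

/* Assumed values for illustration only. */
#define WQE_SIZE   64u                 /* one WQEBB, assumed MLX5_WQE_SIZE */
#define WSEG_SIZE  16u                 /* assumed MLX5_WSEG_SIZE */
#define SEG_SIZE   16u                 /* assumed sizeof(cseg/eseg/dseg) */

#define ALIGN_UP(v, a)  ((((v) + (a) - 1u) / (a)) * (a))
#define MAX(a, b)       ((a) > (b) ? (a) : (b))

/* Number of WQEBBs needed per Tx descriptor, mirroring the DevX path
 * after the fix: the TSO header and the inline data (plus its 4-byte
 * length word) are padded to WSEG_SIZE before rounding up to whole
 * WQEBBs, matching the rdma-core style of SQ size calculation. */
static inline uint32_t
sq_wqebbs_per_desc(uint32_t max_tso_header, uint32_t inlen_send, int tso_en)
{
	uint32_t wqe_size;

	wqe_size = tso_en ? ALIGN_UP(max_tso_header, WSEG_SIZE) : 0u;
	wqe_size += 3u * SEG_SIZE;              /* cseg + eseg + dseg */
	if (inlen_send)
		wqe_size = MAX(wqe_size,
			       2u * SEG_SIZE +  /* cseg + eseg */
			       ALIGN_UP(inlen_send + (uint32_t)sizeof(uint32_t),
					WSEG_SIZE));
	return ALIGN_UP(wqe_size, WQE_SIZE) / WQE_SIZE;
}

With the assumed constants, sq_wqebbs_per_desc(0, 96, 0) returns 3, while the
pre-fix arithmetic would have yielded 2; the result is then multiplied by the
number of Tx descriptors and clamped to max_qp_wr, as in the hunk above.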