From patchwork Thu Feb  4 12:04:09 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 87744
X-Patchwork-Delegate: rasland@nvidia.com
From: Viacheslav Ovsiienko
To: dev@dpdk.org
Cc: rasland@nvidia.com, matan@nvidia.com, orika@nvidia.com, thomas@monjalon.net,
 stable@dpdk.org
Date: Thu, 4 Feb 2021 14:04:09 +0200
Message-Id: <20210204120409.1194-1-viacheslavo@nvidia.com>
X-Mailer: git-send-email 2.18.1
Subject: [dpdk-dev] [PATCH] net/mlx5: fix Tx queue size created with DevX
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

The number of descriptors specified for queue creation implies the queue
should be able to contain the specified number of packets being sent.
Typically, one packet takes one queue descriptor (WQE) to be handled.
If the inline data option is enabled, one packet might require more WQEs
to embrace the inline data, and the overall queue size (the number of
queue descriptors) should be adjusted accordingly.

In the mlx5 PMD the queues can be created either via Verbs, using the
rdma-core library, or via DevX, as a direct kernel/firmware call. The
rdma-core library adjusts the queue size internally, depending on the TSO
and inline settings. The DevX approach missed this adjustment, causing a
queue size discrepancy and performance variations.

The patch adjusts the Tx queue size for the DevX approach in the same
way as it is done in the rdma-core implementation.

Fixes: 86d259cec852 ("net/mlx5: separate Tx queue object creations")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_devx.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 935cbd03ab..ef34c38580 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -1036,7 +1036,7 @@ mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	};
 	void *reg_addr;
 	uint32_t cqe_n, log_desc_n;
-	uint32_t wqe_n;
+	uint32_t wqe_n, wqe_size;
 	int ret = 0;
 
 	MLX5_ASSERT(txq_data);
@@ -1069,8 +1069,25 @@ mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	txq_data->cq_pi = 0;
 	txq_data->cq_db = txq_obj->cq_obj.db_rec;
 	*txq_data->cq_db = 0;
+	/*
+	 * Adjust the amount of WQEs depending on inline settings.
+	 * The number of descriptors should be enough to handle
+	 * the specified number of packets. If queue is being created
+	 * with Verbs the rdma-core does queue size adjustment
+	 * internally in the mlx5_calc_sq_size(), we do the same
+	 * for the queue being created with DevX at this point.
+	 */
+	wqe_size = txq_data->tso_en ?
+		   txq_ctrl->max_tso_header : 0;
+	wqe_size += sizeof(struct mlx5_wqe_cseg) +
+		    sizeof(struct mlx5_wqe_eseg) +
+		    sizeof(struct mlx5_wqe_dseg);
+	if (txq_data->inlen_send)
+		wqe_size = RTE_MAX(wqe_size, txq_data->inlen_send +
+					     sizeof(struct mlx5_wqe_cseg) +
+					     sizeof(struct mlx5_wqe_eseg));
+	wqe_size = RTE_ALIGN_CEIL(wqe_size, MLX5_WQE_SIZE) / MLX5_WQE_SIZE;
 	/* Create Send Queue object with DevX. */
-	wqe_n = RTE_MIN(1UL << txq_data->elts_n,
+	wqe_n = RTE_MIN((1UL << txq_data->elts_n) * wqe_size,
 			(uint32_t)priv->sh->device_attr.max_qp_wr);
 	log_desc_n = log2above(wqe_n);
 	ret = mlx5_txq_create_devx_sq_resources(dev, idx, log_desc_n);