From patchwork Wed Aug 14 10:49:14 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 143156
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: ferruh.yigit@amd.com, thomas@monjalon.net, mb@smartsharesystems.com, Bruce Richardson
Subject: [PATCH v3 08/26] pdump: use separate Rx and Tx queue limits
Date: Wed, 14 Aug 2024 11:49:14 +0100
Message-ID: <20240814104933.14062-9-bruce.richardson@intel.com>
In-Reply-To: <20240814104933.14062-1-bruce.richardson@intel.com>
References: <20240812132910.162252-1-bruce.richardson@intel.com>
 <20240814104933.14062-1-bruce.richardson@intel.com>

Update library to use the new defines RTE_MAX_ETHPORT_TX_QUEUES and
RTE_MAX_ETHPORT_RX_QUEUES rather than the old define
RTE_MAX_QUEUES_PER_PORT.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/pdump/rte_pdump.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 679c3dd0b5..0e0f1088f5 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -61,8 +61,8 @@ static struct pdump_rxtx_cbs {
 	const struct rte_bpf *filter;
 	enum pdump_version ver;
 	uint32_t snaplen;
-} rx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
-tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+} rx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_ETHPORT_RX_QUEUES],
+tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_ETHPORT_TX_QUEUES];
 
 
 /*
@@ -72,8 +72,8 @@ tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
  */
 static const char MZ_RTE_PDUMP_STATS[] = "rte_pdump_stats";
 static struct {
-	struct rte_pdump_stats rx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
-	struct rte_pdump_stats tx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+	struct rte_pdump_stats rx[RTE_MAX_ETHPORTS][RTE_MAX_ETHPORT_RX_QUEUES];
+	struct rte_pdump_stats tx[RTE_MAX_ETHPORTS][RTE_MAX_ETHPORT_TX_QUEUES];
 	const struct rte_memzone *mz;
 } *pdump_stats;
 
@@ -708,8 +708,8 @@ rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
 }
 
 static void
-pdump_sum_stats(uint16_t port, uint16_t nq,
-		struct rte_pdump_stats stats[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
+pdump_sum_stats(uint16_t nq,
+		struct rte_pdump_stats *stats,
 		struct rte_pdump_stats *total)
 {
 	uint64_t *sum = (uint64_t *)total;
@@ -718,7 +718,7 @@ pdump_sum_stats(uint16_t port, uint16_t nq,
 	uint16_t qid;
 
 	for (qid = 0; qid < nq; qid++) {
-		const RTE_ATOMIC(uint64_t) *perq = (const uint64_t __rte_atomic *)&stats[port][qid];
+		const RTE_ATOMIC(uint64_t) *perq = (const uint64_t __rte_atomic *)&stats[qid];
 
 		for (i = 0; i < sizeof(*total) / sizeof(uint64_t); i++) {
 			val = rte_atomic_load_explicit(&perq[i], rte_memory_order_relaxed);
@@ -762,7 +762,7 @@ rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats)
 		pdump_stats = mz->addr;
 	}
 
-	pdump_sum_stats(port, dev_info.nb_rx_queues, pdump_stats->rx, stats);
-	pdump_sum_stats(port, dev_info.nb_tx_queues, pdump_stats->tx, stats);
+	pdump_sum_stats(dev_info.nb_rx_queues, pdump_stats->rx[port], stats);
+	pdump_sum_stats(dev_info.nb_tx_queues, pdump_stats->tx[port], stats);
 	return 0;
 }
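
For readers skimming the diff, the interesting shape change is in pdump_sum_stats():
the caller now passes the per-port row of the stats array (e.g. pdump_stats->rx[port]),
so the helper walks a one-dimensional slice and no longer needs the port index or the
array's second dimension. That is what allows the Rx and Tx arrays to use different
queue limits. The following is a minimal, self-contained sketch of that pattern, not
the library's code; the names demo_stats and demo_sum_stats are hypothetical, and the
real helper reads each counter with a relaxed atomic load, which is omitted here for
brevity.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical stand-in for struct rte_pdump_stats: a block of
     * 64-bit per-queue counters. */
    struct demo_stats {
            uint64_t accepted;
            uint64_t filtered;
            uint64_t ringfull;
    };

    /* Sum nq per-queue entries into *total. Because the caller hands in
     * the per-port slice, this helper is independent of the outer array
     * dimension and of the RTE_MAX_ETHPORT_RX/TX_QUEUES limits. */
    static void
    demo_sum_stats(uint16_t nq, const struct demo_stats *stats,
                   struct demo_stats *total)
    {
            uint64_t *sum = (uint64_t *)total;

            for (uint16_t qid = 0; qid < nq; qid++) {
                    /* Treat each per-queue struct as an array of counters. */
                    const uint64_t *perq = (const uint64_t *)&stats[qid];

                    for (size_t i = 0; i < sizeof(*total) / sizeof(uint64_t); i++)
                            sum[i] += perq[i];
            }
    }

With that signature the same helper serves both directions even though the rx[][] and
tx[][] arrays now have different second dimensions, e.g.
demo_sum_stats(nb_rx_queues, &rx[port][0], &totals) for the Rx side.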