From patchwork Mon Apr 22 19:07:54 2024
X-Patchwork-Submitter: Hernan Vargas
X-Patchwork-Id: 139630
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Hernan Vargas
To: dev@dpdk.org, gakhil@marvell.com, trix@redhat.com,
 maxime.coquelin@redhat.com
Cc: nicolas.chautru@intel.com, qi.z.zhang@intel.com, Hernan Vargas,
 stable@dpdk.org
Subject: [PATCH v1 3/9] test/bbdev: fix interrupt tests
Date: Mon, 22 Apr 2024 12:07:54 -0700
Message-Id: <20240422190800.123027-4-hernan.vargas@intel.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20240422190800.123027-1-hernan.vargas@intel.com>
References: <20240422190800.123027-1-hernan.vargas@intel.com>
List-Id: DPDK patches and discussions

Fix a possible error when setting the burst size from the enqueue
thread: write burst_sz before enqueueing the operations, so that the
dequeue callback reads the number of descriptors that was actually
enqueued rather than a stale value.
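The change is mechanical across all six lcore functions, so the
ordering it enforces is easy to state in isolation. Below is a minimal
sketch of the corrected producer side; thread_params and enqueue_ops()
here are simplified stand-ins for the test's real structures and the
rte_bbdev_enqueue_*_ops() calls, not code from test_bbdev_perf.c:

#include <stdint.h>

struct thread_params {
	uint16_t burst_sz;	/* read by the dequeue interrupt callback */
};

/* Simplified stand-in for rte_bbdev_enqueue_*_ops(). */
extern uint16_t enqueue_ops(void **ops, uint16_t num);

static void
enqueue_batch(struct thread_params *tp, void **ops,
		uint16_t enqueued, uint16_t num_to_enq)
{
	uint16_t enq = 0;

	/* Publish the burst size BEFORE enqueueing: completion of the
	 * burst raises the interrupt whose callback dequeues burst_sz
	 * descriptors, so storing it only after the loop races with
	 * that callback.
	 */
	__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);

	do {
		enq += enqueue_ops(&ops[enqueued + enq], num_to_enq - enq);
	} while (enq != num_to_enq);
}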
Fixes: b2e2aec3239e ("app/bbdev: enhance interrupt test")
Cc: stable@dpdk.org

Signed-off-by: Hernan Vargas
---
 app/test-bbdev/test_bbdev_perf.c | 98 ++++++++++++++++----------------
 1 file changed, 49 insertions(+), 49 deletions(-)

diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
index 9ed0c4648d24..28d78e73a9c1 100644
--- a/app/test-bbdev/test_bbdev_perf.c
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -3406,15 +3406,6 @@ throughput_intr_lcore_ldpc_dec(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_ldpc_dec_ops(
-					tp->dev_id,
-					queue_id, &ops[enqueued],
-					num_to_enq);
-		} while (unlikely(num_to_enq != enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3424,6 +3415,15 @@
 		 */
 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_ldpc_dec_ops(
+					tp->dev_id,
+					queue_id, &ops[enqueued],
+					num_to_enq);
+		} while (unlikely(num_to_enq != enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
@@ -3498,14 +3498,6 @@ throughput_intr_lcore_dec(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_dec_ops(tp->dev_id,
-					queue_id, &ops[enqueued],
-					num_to_enq);
-		} while (unlikely(num_to_enq != enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3515,6 +3507,14 @@
 		 */
 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_dec_ops(tp->dev_id,
+					queue_id, &ops[enqueued],
+					num_to_enq);
+		} while (unlikely(num_to_enq != enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
@@ -3584,14 +3584,6 @@ throughput_intr_lcore_enc(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_enc_ops(tp->dev_id,
-					queue_id, &ops[enqueued],
-					num_to_enq);
-		} while (unlikely(enq != num_to_enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3601,6 +3593,14 @@
 		 */
 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_enc_ops(tp->dev_id,
+					queue_id, &ops[enqueued],
+					num_to_enq);
+		} while (unlikely(enq != num_to_enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
@@ -3672,15 +3672,6 @@ throughput_intr_lcore_ldpc_enc(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_ldpc_enc_ops(
-					tp->dev_id,
-					queue_id, &ops[enqueued],
-					num_to_enq);
-		} while (unlikely(enq != num_to_enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3690,6 +3681,15 @@ throughput_intr_lcore_ldpc_enc(void *arg)
 		 */
 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_ldpc_enc_ops(
+					tp->dev_id,
+					queue_id, &ops[enqueued],
+					num_to_enq);
+		} while (unlikely(enq != num_to_enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
@@ -3761,14 +3761,6 @@ throughput_intr_lcore_fft(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_fft_ops(tp->dev_id,
-					queue_id, &ops[enqueued],
-					num_to_enq);
-		} while (unlikely(enq != num_to_enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3778,6 +3770,14 @@
 		 */
 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_fft_ops(tp->dev_id,
+					queue_id, &ops[enqueued],
+					num_to_enq);
+		} while (unlikely(enq != num_to_enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
@@ -3844,13 +3844,6 @@ throughput_intr_lcore_mldts(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_mldts_ops(tp->dev_id,
-					queue_id, &ops[enqueued], num_to_enq);
-		} while (unlikely(enq != num_to_enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3860,6 +3853,13 @@
 		 */
 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_mldts_ops(tp->dev_id,
+					queue_id, &ops[enqueued], num_to_enq);
+		} while (unlikely(enq != num_to_enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
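
For context on why the store must come first, here is the matching
consumer side as a hedged sketch, using the same kind of simplified
placeholders as above rather than the actual dequeue callback in
test_bbdev_perf.c: the interrupt callback loads burst_sz and dequeues
exactly that many descriptors, so it must observe the value for the
burst that triggered it.

#include <stdint.h>

struct thread_params {
	uint16_t burst_sz;	/* written by the enqueue thread */
};

/* Simplified stand-in for rte_bbdev_dequeue_*_ops(). */
extern uint16_t dequeue_ops(void **ops, uint16_t num);

/* Sketch of an interrupt-driven dequeue callback. */
static void
dequeue_callback_sketch(struct thread_params *tp, void **ops_deq)
{
	/* If the enqueue thread stored burst_sz only after enqueueing,
	 * the interrupt could fire first and this load would return a
	 * stale value, dequeuing the wrong number of descriptors.
	 */
	uint16_t burst_sz = __atomic_load_n(&tp->burst_sz, __ATOMIC_RELAXED);

	(void)dequeue_ops(ops_deq, burst_sz);
}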