From patchwork Tue Nov 16 09:42:00 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Joyce Kong
X-Patchwork-Id: 104386
X-Patchwork-Delegate: david.marchand@redhat.com
From: Joyce Kong
To: Jerin Jacob
Cc: dev@dpdk.org, honnappa.nagarahalli@arm.com, nd@arm.com, Joyce Kong,
 Ruifeng Wang
Subject: [PATCH v2 07/12] app/eventdev: use compiler atomics for shared data sync
Date: Tue, 16 Nov 2021 09:42:00 +0000
Message-Id: <20211116094205.750359-8-joyce.kong@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211116094205.750359-1-joyce.kong@arm.com>
References: <20211116094205.750359-1-joyce.kong@arm.com>
List-Id: DPDK patches and discussions

Convert rte_atomic usages to compiler atomic built-ins for shared data
sync in eventdev cases.
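
Note for illustration only, not part of the patch: the conversion swaps the
rte_atomic64_* wrappers for the GCC/Clang __atomic built-ins with relaxed
ordering, which appears sufficient here because outstand_pkts is only a
progress counter and is not used to order other memory accesses. A minimal
stand-alone sketch of the pattern, with hypothetical function names, assuming
a GCC/Clang toolchain:

	#include <stdint.h>

	/* hypothetical shared counter, mirroring t->outstand_pkts */
	static uint64_t outstand_pkts;

	static void counter_init(uint64_t nb_pkts)
	{
		/* was: rte_atomic64_set(&outstand_pkts, nb_pkts); */
		__atomic_store_n(&outstand_pkts, nb_pkts, __ATOMIC_RELAXED);
	}

	static void counter_consume_one(void)
	{
		/* was: rte_atomic64_sub(&outstand_pkts, 1); */
		__atomic_sub_fetch(&outstand_pkts, 1, __ATOMIC_RELAXED);
	}

	static int counter_drained(void)
	{
		/* was: rte_atomic64_read(&outstand_pkts) <= 0 */
		return __atomic_load_n(&outstand_pkts, __ATOMIC_RELAXED) == 0;
	}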
Signed-off-by: Joyce Kong
Reviewed-by: Ruifeng Wang
---
 app/test-eventdev/evt_main.c          | 1 -
 app/test-eventdev/test_order_atq.c    | 4 ++--
 app/test-eventdev/test_order_common.c | 4 ++--
 app/test-eventdev/test_order_common.h | 8 ++++----
 app/test-eventdev/test_order_queue.c  | 4 ++--
 5 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/app/test-eventdev/evt_main.c b/app/test-eventdev/evt_main.c
index 3534aabca7..194c980c7a 100644
--- a/app/test-eventdev/evt_main.c
+++ b/app/test-eventdev/evt_main.c
@@ -6,7 +6,6 @@
 #include
 #include
-#include <rte_atomic.h>
 #include
 #include
 #include
diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 71215a07b6..2fee4b4daa 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -28,7 +28,7 @@ order_atq_worker(void *arg, const bool flow_id_cap)
 		uint16_t event = rte_event_dequeue_burst(dev_id, port, &ev, 1, 0);
 
 		if (!event) {
-			if (rte_atomic64_read(outstand_pkts) <= 0)
+			if (__atomic_load_n(outstand_pkts, __ATOMIC_RELAXED) <= 0)
 				break;
 			rte_pause();
 			continue;
@@ -64,7 +64,7 @@ order_atq_worker_burst(void *arg, const bool flow_id_cap)
 				BURST_SIZE, 0);
 
 		if (nb_rx == 0) {
-			if (rte_atomic64_read(outstand_pkts) <= 0)
+			if (__atomic_load_n(outstand_pkts, __ATOMIC_RELAXED) <= 0)
 				break;
 			rte_pause();
 			continue;
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index d7760061ba..ff7813f9c2 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -187,7 +187,7 @@ order_test_setup(struct evt_test *test, struct evt_options *opt)
 		evt_err("failed to allocate t->expected_flow_seq memory");
 		goto exp_nomem;
 	}
-	rte_atomic64_set(&t->outstand_pkts, opt->nb_pkts);
+	__atomic_store_n(&t->outstand_pkts, opt->nb_pkts, __ATOMIC_RELAXED);
 	t->err = false;
 	t->nb_pkts = opt->nb_pkts;
 	t->nb_flows = opt->nb_flows;
@@ -294,7 +294,7 @@ order_launch_lcores(struct evt_test *test, struct evt_options *opt,
 	while (t->err == false) {
 		uint64_t new_cycles = rte_get_timer_cycles();
-		int64_t remaining = rte_atomic64_read(&t->outstand_pkts);
+		int64_t remaining = __atomic_load_n(&t->outstand_pkts, __ATOMIC_RELAXED);
 
 		if (remaining <= 0) {
 			t->result = EVT_TEST_SUCCESS;
diff --git a/app/test-eventdev/test_order_common.h b/app/test-eventdev/test_order_common.h
index cd9d6009ec..92781d9587 100644
--- a/app/test-eventdev/test_order_common.h
+++ b/app/test-eventdev/test_order_common.h
@@ -48,7 +48,7 @@ struct test_order {
 	 * The atomic_* is an expensive operation,Since it is a functional test,
 	 * We are using the atomic_ operation to reduce the code complexity.
 	 */
-	rte_atomic64_t outstand_pkts;
+	uint64_t outstand_pkts;
 	enum evt_test_result result;
 	uint32_t nb_flows;
 	uint64_t nb_pkts;
@@ -95,7 +95,7 @@ static __rte_always_inline void
 order_process_stage_1(struct test_order *const t,
 		      struct rte_event *const ev, const uint32_t nb_flows,
 		      uint32_t *const expected_flow_seq,
-		      rte_atomic64_t *const outstand_pkts)
+		      uint64_t *const outstand_pkts)
 {
 	const uint32_t flow = (uintptr_t)ev->mbuf % nb_flows;
 	/* compare the seqn against expected value */
@@ -113,7 +113,7 @@ order_process_stage_1(struct test_order *const t,
 	 */
 	expected_flow_seq[flow]++;
 	rte_pktmbuf_free(ev->mbuf);
-	rte_atomic64_sub(outstand_pkts, 1);
+	__atomic_sub_fetch(outstand_pkts, 1, __ATOMIC_RELAXED);
 }
 
 static __rte_always_inline void
@@ -132,7 +132,7 @@ order_process_stage_invalid(struct test_order *const t,
 	const uint8_t port = w->port_id;\
 	const uint32_t nb_flows = t->nb_flows;\
 	uint32_t *expected_flow_seq = t->expected_flow_seq;\
-	rte_atomic64_t *outstand_pkts = &t->outstand_pkts;\
+	uint64_t *outstand_pkts = &t->outstand_pkts;\
 	if (opt->verbose_level > 1)\
 		printf("%s(): lcore %d dev_id %d port=%d\n",\
 			__func__, rte_lcore_id(), dev_id, port)
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index 621367805a..80eaea5cf5 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -28,7 +28,7 @@ order_queue_worker(void *arg, const bool flow_id_cap)
 		uint16_t event = rte_event_dequeue_burst(dev_id, port, &ev, 1, 0);
 
 		if (!event) {
-			if (rte_atomic64_read(outstand_pkts) <= 0)
+			if (__atomic_load_n(outstand_pkts, __ATOMIC_RELAXED) <= 0)
 				break;
 			rte_pause();
 			continue;
@@ -64,7 +64,7 @@ order_queue_worker_burst(void *arg, const bool flow_id_cap)
 				BURST_SIZE, 0);
 
 		if (nb_rx == 0) {
-			if (rte_atomic64_read(outstand_pkts) <= 0)
+			if (__atomic_load_n(outstand_pkts, __ATOMIC_RELAXED) <= 0)
 				break;
 			rte_pause();
 			continue;