From patchwork Fri Apr 19 23:06:40 2024
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 139600
X-Patchwork-Delegate: thomas@monjalon.net
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: Mattias Rönnblom, Morten Brørup, Abdullah Sevincer, Ajit Khaparde,
 Alok Prasad, Anatoly Burakov, Andrew Rybchenko, Anoob Joseph,
 Bruce Richardson, Byron Marohn, Chenbo Xia, Chengwen Feng, Ciara Loftus,
 Ciara Power, Dariusz Sosnowski, David Hunt, Devendra Singh Rawat,
 Erik Gabriel Carrillo, Guoyang Zhou, Harman Kalra, Harry van Haaren,
 Honnappa Nagarahalli, Jakub Grajciar, Jerin Jacob, Jeroen de Borst,
 Jian Wang, Jiawen Wu, Jie Hai, Jingjing Wu, Joshua Washington, Joyce Kong,
 Junfeng Guo, Kevin Laatz, Konstantin Ananyev, Liang Ma, Long Li,
 Maciej Czekaj, Matan Azrad, Maxime Coquelin, Nicolas Chautru, Ori Kam,
 Pavan Nikhilesh, Peter Mccarthy, Rahul Lakkireddy, Reshma Pattan, Rosen Xu,
 Ruifeng Wang, Rushil Gupta, Sameh Gobriel, Sivaprasad Tummala,
 Somnath Kotur, Stephen Hemminger, Suanming Mou, Sunil Kumar Kori,
 Sunil Uttarwar, Tetsuya Mukawa, Vamsi Attunuru, Viacheslav Ovsiienko,
 Vladimir Medvedkin, Xiaoyun Wang, Yipeng Wang, Yisen Zhuang, Yuying Zhang,
 Yuying Zhang, Ziyang Xuan, Tyler Retzlaff
Subject: [PATCH v4 42/45] app/test-eventdev: use rte stdatomic API
Date: Fri, 19 Apr 2024 16:06:40 -0700
Message-Id: <1713568003-30453-43-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1713568003-30453-1-git-send-email-roretzla@linux.microsoft.com>
References: <1710967892-7046-1-git-send-email-roretzla@linux.microsoft.com>
 <1713568003-30453-1-git-send-email-roretzla@linux.microsoft.com>

Replace the use of the gcc builtin __atomic_xxx intrinsics with the
corresponding rte_atomic_xxx optional rte stdatomic API.
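For reviewers unfamiliar with the API, the conversion is mechanical. Here
is a minimal sketch of the mapping applied throughout this patch (the
counter name, initial value, and demo() function are illustrative only,
not taken from the code):

#include <stdint.h>
#include <rte_stdatomic.h>

/* was: static uint64_t outstand_pkts; */
static RTE_ATOMIC(uint64_t) outstand_pkts;

static void
demo(void)
{
	/* was: __atomic_store_n(&outstand_pkts, 64, __ATOMIC_RELAXED); */
	rte_atomic_store_explicit(&outstand_pkts, 64, rte_memory_order_relaxed);

	/* was: __atomic_fetch_sub(&outstand_pkts, 1, __ATOMIC_RELAXED); */
	rte_atomic_fetch_sub_explicit(&outstand_pkts, 1, rte_memory_order_relaxed);

	/* was: __atomic_load_n(&outstand_pkts, __ATOMIC_RELAXED); */
	uint64_t v = rte_atomic_load_explicit(&outstand_pkts, rte_memory_order_relaxed);
	(void)v;

	/* was: rte_atomic_thread_fence(__ATOMIC_RELEASE); */
	rte_atomic_thread_fence(rte_memory_order_release);
}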
Signed-off-by: Tyler Retzlaff
Acked-by: Stephen Hemminger
---
 app/test-eventdev/test_order_atq.c    | 4 ++--
 app/test-eventdev/test_order_common.c | 5 +++--
 app/test-eventdev/test_order_common.h | 8 ++++----
 app/test-eventdev/test_order_queue.c  | 4 ++--
 app/test-eventdev/test_perf_common.h  | 6 +++---
 5 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 2fee4b4..128d3f2 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -28,7 +28,7 @@
 	uint16_t event = rte_event_dequeue_burst(dev_id, port, &ev, 1, 0);
 
 	if (!event) {
-		if (__atomic_load_n(outstand_pkts, __ATOMIC_RELAXED) <= 0)
+		if (rte_atomic_load_explicit(outstand_pkts, rte_memory_order_relaxed) <= 0)
 			break;
 		rte_pause();
 		continue;
@@ -64,7 +64,7 @@
 			BURST_SIZE, 0);
 
 	if (nb_rx == 0) {
-		if (__atomic_load_n(outstand_pkts, __ATOMIC_RELAXED) <= 0)
+		if (rte_atomic_load_explicit(outstand_pkts, rte_memory_order_relaxed) <= 0)
 			break;
 		rte_pause();
 		continue;
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index a9894c6..0fceace 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -189,7 +189,7 @@
 		evt_err("failed to allocate t->expected_flow_seq memory");
 		goto exp_nomem;
 	}
-	__atomic_store_n(&t->outstand_pkts, opt->nb_pkts, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&t->outstand_pkts, opt->nb_pkts, rte_memory_order_relaxed);
 	t->err = false;
 	t->nb_pkts = opt->nb_pkts;
 	t->nb_flows = opt->nb_flows;
@@ -296,7 +296,8 @@
 	while (t->err == false) {
 		uint64_t new_cycles = rte_get_timer_cycles();
-		int64_t remaining = __atomic_load_n(&t->outstand_pkts, __ATOMIC_RELAXED);
+		int64_t remaining = rte_atomic_load_explicit(&t->outstand_pkts,
+				rte_memory_order_relaxed);
 
 		if (remaining <= 0) {
 			t->result = EVT_TEST_SUCCESS;
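Note: the rte_atomic_store_explicit() above writes through
&t->outstand_pkts, which only type-checks once the field itself carries
the atomic qualifier; that is what the test_order_common.h hunks below
add. A minimal sketch of the declaration-plus-pointer pattern (struct and
function names here are hypothetical):

#include <stdint.h>
#include <rte_stdatomic.h>

struct demo_state {
	/* RTE_ATOMIC() marks the type; with the enable_stdatomic build
	 * option this expands to _Atomic uint64_t, so a plain uint64_t *
	 * no longer converts to the pointer type the API expects. */
	RTE_ATOMIC(uint64_t) outstand_pkts;
};

/* Workers take a pointer to the atomic-qualified type, as
 * order_process_stage_1() does after this patch. */
static inline void
demo_consume_one(RTE_ATOMIC(uint64_t) *outstand_pkts)
{
	rte_atomic_fetch_sub_explicit(outstand_pkts, 1, rte_memory_order_relaxed);
}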
diff --git a/app/test-eventdev/test_order_common.h b/app/test-eventdev/test_order_common.h
index d4cbc5c..7177fd8 100644
--- a/app/test-eventdev/test_order_common.h
+++ b/app/test-eventdev/test_order_common.h
@@ -48,7 +48,7 @@ struct __rte_cache_aligned test_order {
 	 * The atomic_* is an expensive operation,Since it is a functional test,
 	 * We are using the atomic_ operation to reduce the code complexity.
 	 */
-	uint64_t outstand_pkts;
+	RTE_ATOMIC(uint64_t) outstand_pkts;
 	enum evt_test_result result;
 	uint32_t nb_flows;
 	uint64_t nb_pkts;
@@ -95,7 +95,7 @@ struct __rte_cache_aligned test_order {
 order_process_stage_1(struct test_order *const t,
 		struct rte_event *const ev, const uint32_t nb_flows,
 		uint32_t *const expected_flow_seq,
-		uint64_t *const outstand_pkts)
+		RTE_ATOMIC(uint64_t) *const outstand_pkts)
 {
 	const uint32_t flow = (uintptr_t)ev->mbuf % nb_flows;
 	/* compare the seqn against expected value */
@@ -113,7 +113,7 @@ struct __rte_cache_aligned test_order {
 	 */
 	expected_flow_seq[flow]++;
 	rte_pktmbuf_free(ev->mbuf);
-	__atomic_fetch_sub(outstand_pkts, 1, __ATOMIC_RELAXED);
+	rte_atomic_fetch_sub_explicit(outstand_pkts, 1, rte_memory_order_relaxed);
 }
 
 static __rte_always_inline void
@@ -132,7 +132,7 @@ struct __rte_cache_aligned test_order {
 	const uint8_t port = w->port_id;\
 	const uint32_t nb_flows = t->nb_flows;\
 	uint32_t *expected_flow_seq = t->expected_flow_seq;\
-	uint64_t *outstand_pkts = &t->outstand_pkts;\
+	RTE_ATOMIC(uint64_t) *outstand_pkts = &t->outstand_pkts;\
 	if (opt->verbose_level > 1)\
 		printf("%s(): lcore %d dev_id %d port=%d\n",\
 			__func__, rte_lcore_id(), dev_id, port)
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index 80eaea5..a282ab2 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -28,7 +28,7 @@
 	uint16_t event = rte_event_dequeue_burst(dev_id, port, &ev, 1, 0);
 
 	if (!event) {
-		if (__atomic_load_n(outstand_pkts, __ATOMIC_RELAXED) <= 0)
+		if (rte_atomic_load_explicit(outstand_pkts, rte_memory_order_relaxed) <= 0)
 			break;
 		rte_pause();
 		continue;
@@ -64,7 +64,7 @@
 			BURST_SIZE, 0);
 
 	if (nb_rx == 0) {
-		if (__atomic_load_n(outstand_pkts, __ATOMIC_RELAXED) <= 0)
+		if (rte_atomic_load_explicit(outstand_pkts, rte_memory_order_relaxed) <= 0)
 			break;
 		rte_pause();
 		continue;
diff --git a/app/test-eventdev/test_perf_common.h b/app/test-eventdev/test_perf_common.h
index bc627de..d60b873 100644
--- a/app/test-eventdev/test_perf_common.h
+++ b/app/test-eventdev/test_perf_common.h
@@ -225,7 +225,7 @@ struct __rte_cache_aligned perf_elt {
 	 * stored before updating the number of
 	 * processed packets for worker lcores
 	 */
-	rte_atomic_thread_fence(__ATOMIC_RELEASE);
+	rte_atomic_thread_fence(rte_memory_order_release);
 	w->processed_pkts++;
 
 	if (prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR) {
@@ -270,7 +270,7 @@ struct __rte_cache_aligned perf_elt {
 	/* Release fence here ensures event_prt is stored before updating the number of processed
 	 * packets for worker lcores.
 	 */
-	rte_atomic_thread_fence(__ATOMIC_RELEASE);
+	rte_atomic_thread_fence(rte_memory_order_release);
 	w->processed_pkts++;
 
 	if (prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR) {
@@ -325,7 +325,7 @@ struct __rte_cache_aligned perf_elt {
 	/* Release fence here ensures event_prt is stored before updating the number of processed
 	 * packets for worker lcores.
 	 */
-	rte_atomic_thread_fence(__ATOMIC_RELEASE);
+	rte_atomic_thread_fence(rte_memory_order_release);
 	w->processed_pkts += vec->nb_elem;
 
 	if (enable_fwd_latency) {
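Note on the test_perf_common.h hunks: only the memory-order token changes
(__ATOMIC_RELEASE becomes rte_memory_order_release); the fence placement
and its release semantics are untouched. A minimal sketch of the pattern
being preserved (struct and function names are hypothetical):

#include <stdint.h>
#include <rte_stdatomic.h>

static struct {
	void *event_ptr;         /* written by the worker lcore */
	uint64_t processed_pkts; /* polled by the main lcore */
} demo_w;

static inline void
demo_complete_one(void *ev)
{
	demo_w.event_ptr = ev;
	/* Release fence: makes the event pointer store above visible
	 * before the counter update below, mirroring the patched code. */
	rte_atomic_thread_fence(rte_memory_order_release);
	demo_w.processed_pkts++;
}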