From patchwork Fri Aug 11 01:31:56 2023
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 130099
X-Patchwork-Delegate: thomas@monjalon.net
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: techboard@dpdk.org, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong, David Christensen, Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand, Tyler Retzlaff
Subject: [PATCH 1/6] eal: provide rte stdatomics optional atomics API
Date: Thu, 10 Aug 2023 18:31:56 -0700
Message-Id: <1691717521-1025-2-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1691717521-1025-1-git-send-email-roretzla@linux.microsoft.com>
References:
<1691717521-1025-1-git-send-email-roretzla@linux.microsoft.com>

Provide an API for atomic operations in the rte namespace that may
optionally be configured to use standard C11 atomics with the meson
option enable_stdatomic=true.

Signed-off-by: Tyler Retzlaff
---
 config/meson.build              |   1 +
 config/rte_config.h             |   1 +
 lib/eal/include/meson.build     |   1 +
 lib/eal/include/rte_stdatomic.h | 162 ++++++++++++++++++++++++++++++++++++++++
 meson_options.txt               |   1 +
 5 files changed, 166 insertions(+)
 create mode 100644 lib/eal/include/rte_stdatomic.h

diff --git a/config/meson.build b/config/meson.build
index d822371..ec49964 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -303,6 +303,7 @@ endforeach
 # set other values pulled from the build options
 dpdk_conf.set('RTE_MAX_ETHPORTS', get_option('max_ethports'))
 dpdk_conf.set('RTE_LIBEAL_USE_HPET', get_option('use_hpet'))
+dpdk_conf.set('RTE_ENABLE_STDATOMIC', get_option('enable_stdatomic'))
 dpdk_conf.set('RTE_ENABLE_TRACE_FP', get_option('enable_trace_fp'))
 # values which have defaults which may be overridden
 dpdk_conf.set('RTE_MAX_VFIO_GROUPS', 64)
diff --git a/config/rte_config.h b/config/rte_config.h
index 400e44e..f17b6ae 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -13,6 +13,7 @@
 #define _RTE_CONFIG_H_

 #include <rte_build_config.h>
+#include <rte_stdatomic.h>

 /* legacy defines */
 #ifdef RTE_EXEC_ENV_LINUX
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index b0db9b3..f8a47b3 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -43,6 +43,7 @@ headers += files(
         'rte_seqlock.h',
         'rte_service.h',
         'rte_service_component.h',
+        'rte_stdatomic.h',
         'rte_string_fns.h',
         'rte_tailq.h',
         'rte_thread.h',
diff --git a/lib/eal/include/rte_stdatomic.h b/lib/eal/include/rte_stdatomic.h
new file mode 100644
index
0000000..832fd07
--- /dev/null
+++ b/lib/eal/include/rte_stdatomic.h
@@ -0,0 +1,162 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Microsoft Corporation
+ */
+
+#ifndef _RTE_STDATOMIC_H_
+#define _RTE_STDATOMIC_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+typedef int rte_memory_order;
+
+#ifdef RTE_ENABLE_STDATOMIC
+#ifdef __STDC_NO_ATOMICS__
+#error enable_stdatomic=true but atomics not supported by toolchain
+#endif
+
+#include <stdatomic.h>
+
+#define __rte_atomic _Atomic
+
+#define rte_memory_order_relaxed memory_order_relaxed
+#ifdef __ATOMIC_RELAXED
+_Static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
+	"rte_memory_order_relaxed == __ATOMIC_RELAXED");
+#endif
+
+#define rte_memory_order_consume memory_order_consume
+#ifdef __ATOMIC_CONSUME
+_Static_assert(rte_memory_order_consume == __ATOMIC_CONSUME,
+	"rte_memory_order_consume == __ATOMIC_CONSUME");
+#endif
+
+#define rte_memory_order_acquire memory_order_acquire
+#ifdef __ATOMIC_ACQUIRE
+_Static_assert(rte_memory_order_acquire == __ATOMIC_ACQUIRE,
+	"rte_memory_order_acquire == __ATOMIC_ACQUIRE");
+#endif
+
+#define rte_memory_order_release memory_order_release
+#ifdef __ATOMIC_RELEASE
+_Static_assert(rte_memory_order_release == __ATOMIC_RELEASE,
+	"rte_memory_order_release == __ATOMIC_RELEASE");
+#endif
+
+#define rte_memory_order_acq_rel memory_order_acq_rel
+#ifdef __ATOMIC_ACQ_REL
+_Static_assert(rte_memory_order_acq_rel == __ATOMIC_ACQ_REL,
+	"rte_memory_order_acq_rel == __ATOMIC_ACQ_REL");
+#endif
+
+#define rte_memory_order_seq_cst memory_order_seq_cst
+#ifdef __ATOMIC_SEQ_CST
+_Static_assert(rte_memory_order_seq_cst == __ATOMIC_SEQ_CST,
+	"rte_memory_order_seq_cst == __ATOMIC_SEQ_CST");
+#endif
+
+#define rte_atomic_load_explicit(ptr, memorder) \
+	atomic_load_explicit(ptr, memorder)
+
+#define rte_atomic_store_explicit(ptr, val, memorder) \
+	atomic_store_explicit(ptr, val, memorder)
+
+#define rte_atomic_exchange_explicit(ptr, val, memorder) \
+	atomic_exchange_explicit(ptr,
val, memorder)
+
+#define rte_atomic_compare_exchange_strong_explicit( \
+	ptr, expected, desired, succ_memorder, fail_memorder) \
+	atomic_compare_exchange_strong_explicit( \
+	ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_compare_exchange_weak_explicit( \
+	ptr, expected, desired, succ_memorder, fail_memorder) \
+	atomic_compare_exchange_weak_explicit( \
+	ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
+	atomic_fetch_add_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
+	atomic_fetch_sub_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
+	atomic_fetch_and_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
+	atomic_fetch_xor_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
+	atomic_fetch_or_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
+	atomic_fetch_nand_explicit(ptr, val, memorder)
+
+#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
+	atomic_flag_test_and_set_explicit(ptr, memorder)
+
+#define rte_atomic_flag_clear_explicit(ptr, memorder) \
+	atomic_flag_clear_explicit(ptr, memorder)
+
+#else
+
+#define __rte_atomic
+
+#define rte_memory_order_relaxed __ATOMIC_RELAXED
+#define rte_memory_order_consume __ATOMIC_CONSUME
+#define rte_memory_order_acquire __ATOMIC_ACQUIRE
+#define rte_memory_order_release __ATOMIC_RELEASE
+#define rte_memory_order_acq_rel __ATOMIC_ACQ_REL
+#define rte_memory_order_seq_cst __ATOMIC_SEQ_CST
+
+#define rte_atomic_load_explicit(ptr, memorder) \
+	__atomic_load_n(ptr, memorder)
+
+#define rte_atomic_store_explicit(ptr, val, memorder) \
+	__atomic_store_n(ptr, val, memorder)
+
+#define rte_atomic_exchange_explicit(ptr, val, memorder) \
+	__atomic_exchange_n(ptr, val, memorder)
+
+#define
rte_atomic_compare_exchange_strong_explicit( \
+	ptr, expected, desired, succ_memorder, fail_memorder) \
+	__atomic_compare_exchange_n( \
+	ptr, expected, desired, 0, succ_memorder, fail_memorder)
+
+#define rte_atomic_compare_exchange_weak_explicit( \
+	ptr, expected, desired, succ_memorder, fail_memorder) \
+	__atomic_compare_exchange_n( \
+	ptr, expected, desired, 1, succ_memorder, fail_memorder)
+
+#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
+	__atomic_fetch_add(ptr, val, memorder)
+
+#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
+	__atomic_fetch_sub(ptr, val, memorder)
+
+#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
+	__atomic_fetch_and(ptr, val, memorder)
+
+#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
+	__atomic_fetch_xor(ptr, val, memorder)
+
+#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
+	__atomic_fetch_or(ptr, val, memorder)
+
+#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
+	__atomic_fetch_nand(ptr, val, memorder)
+
+#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
+	__atomic_test_and_set(ptr, memorder)
+
+#define rte_atomic_flag_clear_explicit(ptr, memorder) \
+	__atomic_clear(ptr, memorder)
+
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STDATOMIC_H_ */
diff --git a/meson_options.txt b/meson_options.txt
index 621e1ca..7d6784d 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -46,6 +46,7 @@ option('mbuf_refcnt_atomic', type: 'boolean', value: true, description:
        'Atomically access the mbuf refcnt.')
 option('platform', type: 'string', value: 'native', description:
        'Platform to build, either "native", "generic" or a SoC.
Please refer to the Linux build guide for more information.')
+option('enable_stdatomic', type: 'boolean', value: false, description:
+       'enable use of C11 stdatomic')
 option('enable_trace_fp', type: 'boolean', value: false, description:
        'enable fast path trace points.')
 option('tests', type: 'boolean', value: true, description:

From patchwork Fri Aug 11 01:31:57 2023
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 130100
X-Patchwork-Delegate: thomas@monjalon.net
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: techboard@dpdk.org, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong, David Christensen, Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand, Tyler Retzlaff
Subject: [PATCH 2/6] eal: adapt EAL to present rte optional atomics API
Date: Thu, 10 Aug 2023 18:31:57 -0700
Message-Id: <1691717521-1025-3-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1691717521-1025-1-git-send-email-roretzla@linux.microsoft.com>
References: <1691717521-1025-1-git-send-email-roretzla@linux.microsoft.com>

Adapt the EAL public headers to use the rte optional atomics API instead
of directly using and exposing toolchain-specific atomic builtin
intrinsics.

Signed-off-by: Tyler Retzlaff
---
 app/test/test_mcslock.c                |  6 ++--
 lib/eal/arm/include/rte_atomic_64.h    | 32 +++++++++++-----------
 lib/eal/arm/include/rte_pause_64.h     | 26 +++++++++---------
 lib/eal/arm/rte_power_intrinsics.c     |  8 +++---
 lib/eal/common/eal_common_trace.c      | 16 ++++++-----
 lib/eal/include/generic/rte_atomic.h   | 50 +++++++++++++++++-----------------
 lib/eal/include/generic/rte_pause.h    | 38 +++++++++++++-------------
 lib/eal/include/generic/rte_rwlock.h   | 47 +++++++++++++++++---------------
 lib/eal/include/generic/rte_spinlock.h | 19 ++++++-------
 lib/eal/include/rte_mcslock.h          | 50 +++++++++++++++++-----------------
 lib/eal/include/rte_pflock.h           | 24 ++++++++--------
 lib/eal/include/rte_seqcount.h         | 18 ++++++------
 lib/eal/include/rte_ticketlock.h       | 42 ++++++++++++++--------------
 lib/eal/include/rte_trace_point.h      |  4 +--
 lib/eal/ppc/include/rte_atomic.h       | 50 +++++++++++++++++-----------------
 lib/eal/x86/include/rte_atomic.h       |  4 +--
 lib/eal/x86/include/rte_spinlock.h     |  2 +-
 lib/eal/x86/rte_power_intrinsics.c     |  6 ++--
 18 files changed, 225 insertions(+), 217 deletions(-)

diff --git a/app/test/test_mcslock.c b/app/test/test_mcslock.c
index 52e45e7..cc25970 100644
--- a/app/test/test_mcslock.c
+++ b/app/test/test_mcslock.c
@@ -36,9 +36,9 @@
* lock multiple times. */ -rte_mcslock_t *p_ml; -rte_mcslock_t *p_ml_try; -rte_mcslock_t *p_ml_perf; +rte_mcslock_t * __rte_atomic p_ml; +rte_mcslock_t * __rte_atomic p_ml_try; +rte_mcslock_t * __rte_atomic p_ml_perf; static unsigned int count; diff --git a/lib/eal/arm/include/rte_atomic_64.h b/lib/eal/arm/include/rte_atomic_64.h index 6047911..ac3cec9 100644 --- a/lib/eal/arm/include/rte_atomic_64.h +++ b/lib/eal/arm/include/rte_atomic_64.h @@ -107,33 +107,33 @@ */ RTE_SET_USED(failure); /* Find invalid memory order */ - RTE_ASSERT(success == __ATOMIC_RELAXED || - success == __ATOMIC_ACQUIRE || - success == __ATOMIC_RELEASE || - success == __ATOMIC_ACQ_REL || - success == __ATOMIC_SEQ_CST); + RTE_ASSERT(success == rte_memory_order_relaxed || + success == rte_memory_order_acquire || + success == rte_memory_order_release || + success == rte_memory_order_acq_rel || + success == rte_memory_order_seq_cst); rte_int128_t expected = *exp; rte_int128_t desired = *src; rte_int128_t old; #if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS) - if (success == __ATOMIC_RELAXED) + if (success == rte_memory_order_relaxed) __cas_128_relaxed(dst, exp, desired); - else if (success == __ATOMIC_ACQUIRE) + else if (success == rte_memory_order_acquire) __cas_128_acquire(dst, exp, desired); - else if (success == __ATOMIC_RELEASE) + else if (success == rte_memory_order_release) __cas_128_release(dst, exp, desired); else __cas_128_acq_rel(dst, exp, desired); old = *exp; #else -#define __HAS_ACQ(mo) ((mo) != __ATOMIC_RELAXED && (mo) != __ATOMIC_RELEASE) -#define __HAS_RLS(mo) ((mo) == __ATOMIC_RELEASE || (mo) == __ATOMIC_ACQ_REL || \ - (mo) == __ATOMIC_SEQ_CST) +#define __HAS_ACQ(mo) ((mo) != rte_memory_order_relaxed && (mo) != rte_memory_order_release) +#define __HAS_RLS(mo) ((mo) == rte_memory_order_release || (mo) == rte_memory_order_acq_rel || \ + (mo) == rte_memory_order_seq_cst) - int ldx_mo = __HAS_ACQ(success) ? 
__ATOMIC_ACQUIRE : __ATOMIC_RELAXED; - int stx_mo = __HAS_RLS(success) ? __ATOMIC_RELEASE : __ATOMIC_RELAXED; + int ldx_mo = __HAS_ACQ(success) ? rte_memory_order_acquire : rte_memory_order_relaxed; + int stx_mo = __HAS_RLS(success) ? rte_memory_order_release : rte_memory_order_relaxed; #undef __HAS_ACQ #undef __HAS_RLS @@ -153,7 +153,7 @@ : "Q" (src->val[0]) \ : "memory"); } - if (ldx_mo == __ATOMIC_RELAXED) + if (ldx_mo == rte_memory_order_relaxed) __LOAD_128("ldxp", dst, old) else __LOAD_128("ldaxp", dst, old) @@ -170,7 +170,7 @@ : "memory"); } if (likely(old.int128 == expected.int128)) { - if (stx_mo == __ATOMIC_RELAXED) + if (stx_mo == rte_memory_order_relaxed) __STORE_128("stxp", dst, desired, ret) else __STORE_128("stlxp", dst, desired, ret) @@ -181,7 +181,7 @@ * needs to be stored back to ensure it was read * atomically. */ - if (stx_mo == __ATOMIC_RELAXED) + if (stx_mo == rte_memory_order_relaxed) __STORE_128("stxp", dst, old, ret) else __STORE_128("stlxp", dst, old, ret) diff --git a/lib/eal/arm/include/rte_pause_64.h b/lib/eal/arm/include/rte_pause_64.h index 5f70e97..d4daafc 100644 --- a/lib/eal/arm/include/rte_pause_64.h +++ b/lib/eal/arm/include/rte_pause_64.h @@ -41,7 +41,7 @@ static inline void rte_pause(void) * implicitly to exit WFE. */ #define __RTE_ARM_LOAD_EXC_8(src, dst, memorder) { \ - if (memorder == __ATOMIC_RELAXED) { \ + if (memorder == rte_memory_order_relaxed) { \ asm volatile("ldxrb %w[tmp], [%x[addr]]" \ : [tmp] "=&r" (dst) \ : [addr] "r" (src) \ @@ -60,7 +60,7 @@ static inline void rte_pause(void) * implicitly to exit WFE. */ #define __RTE_ARM_LOAD_EXC_16(src, dst, memorder) { \ - if (memorder == __ATOMIC_RELAXED) { \ + if (memorder == rte_memory_order_relaxed) { \ asm volatile("ldxrh %w[tmp], [%x[addr]]" \ : [tmp] "=&r" (dst) \ : [addr] "r" (src) \ @@ -79,7 +79,7 @@ static inline void rte_pause(void) * implicitly to exit WFE. 
*/ #define __RTE_ARM_LOAD_EXC_32(src, dst, memorder) { \ - if (memorder == __ATOMIC_RELAXED) { \ + if (memorder == rte_memory_order_relaxed) { \ asm volatile("ldxr %w[tmp], [%x[addr]]" \ : [tmp] "=&r" (dst) \ : [addr] "r" (src) \ @@ -98,7 +98,7 @@ static inline void rte_pause(void) * implicitly to exit WFE. */ #define __RTE_ARM_LOAD_EXC_64(src, dst, memorder) { \ - if (memorder == __ATOMIC_RELAXED) { \ + if (memorder == rte_memory_order_relaxed) { \ asm volatile("ldxr %x[tmp], [%x[addr]]" \ : [tmp] "=&r" (dst) \ : [addr] "r" (src) \ @@ -118,7 +118,7 @@ static inline void rte_pause(void) */ #define __RTE_ARM_LOAD_EXC_128(src, dst, memorder) { \ volatile rte_int128_t *dst_128 = (volatile rte_int128_t *)&dst; \ - if (memorder == __ATOMIC_RELAXED) { \ + if (memorder == rte_memory_order_relaxed) { \ asm volatile("ldxp %x[tmp0], %x[tmp1], [%x[addr]]" \ : [tmp0] "=&r" (dst_128->val[0]), \ [tmp1] "=&r" (dst_128->val[1]) \ @@ -153,8 +153,8 @@ static inline void rte_pause(void) { uint16_t value; - RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && - memorder != __ATOMIC_RELAXED); + RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire && + memorder != rte_memory_order_relaxed); __RTE_ARM_LOAD_EXC_16(addr, value, memorder) if (value != expected) { @@ -172,8 +172,8 @@ static inline void rte_pause(void) { uint32_t value; - RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && - memorder != __ATOMIC_RELAXED); + RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire && + memorder != rte_memory_order_relaxed); __RTE_ARM_LOAD_EXC_32(addr, value, memorder) if (value != expected) { @@ -191,8 +191,8 @@ static inline void rte_pause(void) { uint64_t value; - RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && - memorder != __ATOMIC_RELAXED); + RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire && + memorder != rte_memory_order_relaxed); __RTE_ARM_LOAD_EXC_64(addr, value, memorder) if (value != expected) { @@ -206,8 +206,8 @@ static inline void rte_pause(void) #define RTE_WAIT_UNTIL_MASKED(addr, 
mask, cond, expected, memorder) do { \ RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \ - RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \ - memorder != __ATOMIC_RELAXED); \ + RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire && \ + memorder != rte_memory_order_relaxed); \ const uint32_t size = sizeof(*(addr)) << 3; \ typeof(*(addr)) expected_value = (expected); \ typeof(*(addr)) value; \ diff --git a/lib/eal/arm/rte_power_intrinsics.c b/lib/eal/arm/rte_power_intrinsics.c index 77b96e4..f54cf59 100644 --- a/lib/eal/arm/rte_power_intrinsics.c +++ b/lib/eal/arm/rte_power_intrinsics.c @@ -33,19 +33,19 @@ switch (pmc->size) { case sizeof(uint8_t): - __RTE_ARM_LOAD_EXC_8(pmc->addr, cur_value, __ATOMIC_RELAXED) + __RTE_ARM_LOAD_EXC_8(pmc->addr, cur_value, rte_memory_order_relaxed) __RTE_ARM_WFE() break; case sizeof(uint16_t): - __RTE_ARM_LOAD_EXC_16(pmc->addr, cur_value, __ATOMIC_RELAXED) + __RTE_ARM_LOAD_EXC_16(pmc->addr, cur_value, rte_memory_order_relaxed) __RTE_ARM_WFE() break; case sizeof(uint32_t): - __RTE_ARM_LOAD_EXC_32(pmc->addr, cur_value, __ATOMIC_RELAXED) + __RTE_ARM_LOAD_EXC_32(pmc->addr, cur_value, rte_memory_order_relaxed) __RTE_ARM_WFE() break; case sizeof(uint64_t): - __RTE_ARM_LOAD_EXC_64(pmc->addr, cur_value, __ATOMIC_RELAXED) + __RTE_ARM_LOAD_EXC_64(pmc->addr, cur_value, rte_memory_order_relaxed) __RTE_ARM_WFE() break; default: diff --git a/lib/eal/common/eal_common_trace.c b/lib/eal/common/eal_common_trace.c index cb980af..c6628dd 100644 --- a/lib/eal/common/eal_common_trace.c +++ b/lib/eal/common/eal_common_trace.c @@ -103,11 +103,11 @@ struct trace_point_head * trace_mode_set(rte_trace_point_t *t, enum rte_trace_mode mode) { if (mode == RTE_TRACE_MODE_OVERWRITE) - __atomic_fetch_and(t, ~__RTE_TRACE_FIELD_ENABLE_DISCARD, - __ATOMIC_RELEASE); + rte_atomic_fetch_and_explicit(t, ~__RTE_TRACE_FIELD_ENABLE_DISCARD, + rte_memory_order_release); else - __atomic_fetch_or(t, __RTE_TRACE_FIELD_ENABLE_DISCARD, - __ATOMIC_RELEASE); + 
rte_atomic_fetch_or_explicit(t, __RTE_TRACE_FIELD_ENABLE_DISCARD, + rte_memory_order_release); } void @@ -141,7 +141,7 @@ rte_trace_mode rte_trace_mode_get(void) if (trace_point_is_invalid(t)) return false; - val = __atomic_load_n(t, __ATOMIC_ACQUIRE); + val = rte_atomic_load_explicit(t, rte_memory_order_acquire); return (val & __RTE_TRACE_FIELD_ENABLE_MASK) != 0; } @@ -153,7 +153,8 @@ rte_trace_mode rte_trace_mode_get(void) if (trace_point_is_invalid(t)) return -ERANGE; - prev = __atomic_fetch_or(t, __RTE_TRACE_FIELD_ENABLE_MASK, __ATOMIC_RELEASE); + prev = rte_atomic_fetch_or_explicit(t, __RTE_TRACE_FIELD_ENABLE_MASK, + rte_memory_order_release); if ((prev & __RTE_TRACE_FIELD_ENABLE_MASK) == 0) __atomic_fetch_add(&trace.status, 1, __ATOMIC_RELEASE); return 0; @@ -167,7 +168,8 @@ rte_trace_mode rte_trace_mode_get(void) if (trace_point_is_invalid(t)) return -ERANGE; - prev = __atomic_fetch_and(t, ~__RTE_TRACE_FIELD_ENABLE_MASK, __ATOMIC_RELEASE); + prev = rte_atomic_fetch_and_explicit(t, ~__RTE_TRACE_FIELD_ENABLE_MASK, + rte_memory_order_release); if ((prev & __RTE_TRACE_FIELD_ENABLE_MASK) != 0) __atomic_fetch_sub(&trace.status, 1, __ATOMIC_RELEASE); return 0; diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h index aef44e2..15a36f3 100644 --- a/lib/eal/include/generic/rte_atomic.h +++ b/lib/eal/include/generic/rte_atomic.h @@ -62,7 +62,7 @@ * but has different syntax and memory ordering semantic. Hence * deprecated for the simplicity of memory ordering semantics in use. * - * rte_atomic_thread_fence(__ATOMIC_ACQ_REL) should be used instead. + * rte_atomic_thread_fence(rte_memory_order_acq_rel) should be used instead. */ static inline void rte_smp_mb(void); @@ -79,7 +79,7 @@ * but has different syntax and memory ordering semantic. Hence * deprecated for the simplicity of memory ordering semantics in use. * - * rte_atomic_thread_fence(__ATOMIC_RELEASE) should be used instead. 
+ * rte_atomic_thread_fence(rte_memory_order_release) should be used instead. * The fence also guarantees LOAD operations that precede the call * are globally visible across the lcores before the STORE operations * that follows it. @@ -99,7 +99,7 @@ * but has different syntax and memory ordering semantic. Hence * deprecated for the simplicity of memory ordering semantics in use. * - * rte_atomic_thread_fence(__ATOMIC_ACQUIRE) should be used instead. + * rte_atomic_thread_fence(rte_memory_order_acquire) should be used instead. * The fence also guarantees LOAD operations that precede the call * are globally visible across the lcores before the STORE operations * that follows it. @@ -153,7 +153,7 @@ /** * Synchronization fence between threads based on the specified memory order. */ -static inline void rte_atomic_thread_fence(int memorder); +static inline void rte_atomic_thread_fence(rte_memory_order memorder); /*------------------------- 16 bit atomic operations -------------------------*/ @@ -206,7 +206,7 @@ static inline uint16_t rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val) { - return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST); + return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst); } #endif @@ -273,7 +273,7 @@ static inline void rte_atomic16_add(rte_atomic16_t *v, int16_t inc) { - __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST); + rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst); } /** @@ -287,7 +287,7 @@ static inline void rte_atomic16_sub(rte_atomic16_t *v, int16_t dec) { - __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST); + rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst); } /** @@ -340,7 +340,7 @@ static inline int16_t rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc) { - return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc; + return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc; } /** @@ -360,7 +360,7 @@ static inline int16_t 
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec) { - return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec; + return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec; } /** @@ -379,7 +379,7 @@ #ifdef RTE_FORCE_INTRINSICS static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v) { - return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_SEQ_CST) + 1 == 0; + return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0; } #endif @@ -399,7 +399,7 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v) #ifdef RTE_FORCE_INTRINSICS static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v) { - return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_SEQ_CST) - 1 == 0; + return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0; } #endif @@ -485,7 +485,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v) static inline uint32_t rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val) { - return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST); + return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst); } #endif @@ -552,7 +552,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v) static inline void rte_atomic32_add(rte_atomic32_t *v, int32_t inc) { - __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST); + rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst); } /** @@ -566,7 +566,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v) static inline void rte_atomic32_sub(rte_atomic32_t *v, int32_t dec) { - __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST); + rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst); } /** @@ -619,7 +619,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v) static inline int32_t rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc) { - return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc; + return rte_atomic_fetch_add_explicit(&v->cnt, inc, 
rte_memory_order_seq_cst) + inc; } /** @@ -639,7 +639,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v) static inline int32_t rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec) { - return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec; + return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec; } /** @@ -658,7 +658,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v) #ifdef RTE_FORCE_INTRINSICS static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v) { - return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_SEQ_CST) + 1 == 0; + return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0; } #endif @@ -678,7 +678,7 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v) #ifdef RTE_FORCE_INTRINSICS static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v) { - return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_SEQ_CST) - 1 == 0; + return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0; } #endif @@ -763,7 +763,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v) static inline uint64_t rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val) { - return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST); + return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst); } #endif @@ -884,7 +884,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v) static inline void rte_atomic64_add(rte_atomic64_t *v, int64_t inc) { - __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST); + rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst); } #endif @@ -903,7 +903,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v) static inline void rte_atomic64_sub(rte_atomic64_t *v, int64_t dec) { - __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST); + rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst); } #endif @@ -961,7 +961,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v) static inline int64_t 
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc) { - return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc; + return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc; } #endif @@ -985,7 +985,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v) static inline int64_t rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec) { - return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec; + return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec; } #endif @@ -1116,8 +1116,8 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v) * stronger) model. * @param failure * If unsuccessful, the operation's memory behavior conforms to this (or a - * stronger) model. This argument cannot be __ATOMIC_RELEASE, - * __ATOMIC_ACQ_REL, or a stronger model than success. + * stronger) model. This argument cannot be rte_memory_order_release, + * rte_memory_order_acq_rel, or a stronger model than success. * @return * Non-zero on success; 0 on failure. */ diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h index ec1f418..3ea1553 100644 --- a/lib/eal/include/generic/rte_pause.h +++ b/lib/eal/include/generic/rte_pause.h @@ -35,13 +35,13 @@ * A 16-bit expected value to be in the memory location. * @param memorder * Two different memory orders that can be specified: - * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to + * rte_memory_order_acquire and rte_memory_order_relaxed. These map to * C++11 memory orders with the same names, see the C++11 standard or * the GCC wiki on atomic synchronization for detailed definition. */ static __rte_always_inline void rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected, - int memorder); + rte_memory_order memorder); /** * Wait for *addr to be updated with a 32-bit expected value, with a relaxed @@ -53,13 +53,13 @@ * A 32-bit expected value to be in the memory location. 
* @param memorder * Two different memory orders that can be specified: - * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to + * rte_memory_order_acquire and rte_memory_order_relaxed. These map to * C++11 memory orders with the same names, see the C++11 standard or * the GCC wiki on atomic synchronization for detailed definition. */ static __rte_always_inline void rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected, - int memorder); + rte_memory_order memorder); /** * Wait for *addr to be updated with a 64-bit expected value, with a relaxed @@ -71,42 +71,42 @@ * A 64-bit expected value to be in the memory location. * @param memorder * Two different memory orders that can be specified: - * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to + * rte_memory_order_acquire and rte_memory_order_relaxed. These map to * C++11 memory orders with the same names, see the C++11 standard or * the GCC wiki on atomic synchronization for detailed definition. */ static __rte_always_inline void rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected, - int memorder); + rte_memory_order memorder); #ifndef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED static __rte_always_inline void rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected, - int memorder) + rte_memory_order memorder) { - assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED); + assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed); - while (__atomic_load_n(addr, memorder) != expected) + while (rte_atomic_load_explicit(addr, memorder) != expected) rte_pause(); } static __rte_always_inline void rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected, - int memorder) + rte_memory_order memorder) { - assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED); + assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed); - while (__atomic_load_n(addr, memorder) != expected) + while 
(rte_atomic_load_explicit(addr, memorder) != expected) rte_pause(); } static __rte_always_inline void rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected, - int memorder) + rte_memory_order memorder) { - assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED); + assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed); - while (__atomic_load_n(addr, memorder) != expected) + while (rte_atomic_load_explicit(addr, memorder) != expected) rte_pause(); } @@ -124,16 +124,16 @@ * An expected value to be in the memory location. * @param memorder * Two different memory orders that can be specified: - * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to + * rte_memory_order_acquire and rte_memory_order_relaxed. These map to * C++11 memory orders with the same names, see the C++11 standard or * the GCC wiki on atomic synchronization for detailed definition. */ #define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) do { \ RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \ - RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \ - memorder != __ATOMIC_RELAXED); \ + RTE_BUILD_BUG_ON((memorder) != rte_memory_order_acquire && \ + (memorder) != rte_memory_order_relaxed); \ typeof(*(addr)) expected_value = (expected); \ - while (!((__atomic_load_n((addr), (memorder)) & (mask)) cond \ + while (!((rte_atomic_load_explicit((addr), (memorder)) & (mask)) cond \ expected_value)) \ rte_pause(); \ } while (0) diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h index 9e083bb..fc0d5fd 100644 --- a/lib/eal/include/generic/rte_rwlock.h +++ b/lib/eal/include/generic/rte_rwlock.h @@ -57,7 +57,7 @@ #define RTE_RWLOCK_READ 0x4 /* Reader increment */ typedef struct __rte_lockable { - int32_t cnt; + int32_t __rte_atomic cnt; } rte_rwlock_t; /** @@ -92,21 +92,21 @@ while (1) { /* Wait while writer is present or pending */ - while (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) + while 
(rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed) & RTE_RWLOCK_MASK) rte_pause(); /* Try to get read lock */ - x = __atomic_fetch_add(&rwl->cnt, RTE_RWLOCK_READ, - __ATOMIC_ACQUIRE) + RTE_RWLOCK_READ; + x = rte_atomic_fetch_add_explicit(&rwl->cnt, RTE_RWLOCK_READ, + rte_memory_order_acquire) + RTE_RWLOCK_READ; /* If no writer, then acquire was successful */ if (likely(!(x & RTE_RWLOCK_MASK))) return; /* Lost race with writer, backout the change. */ - __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ, - __ATOMIC_RELAXED); + rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ, + rte_memory_order_relaxed); } } @@ -127,20 +127,20 @@ { int32_t x; - x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED); + x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed); /* fail if write lock is held or writer is pending */ if (x & RTE_RWLOCK_MASK) return -EBUSY; /* Try to get read lock */ - x = __atomic_fetch_add(&rwl->cnt, RTE_RWLOCK_READ, - __ATOMIC_ACQUIRE) + RTE_RWLOCK_READ; + x = rte_atomic_fetch_add_explicit(&rwl->cnt, RTE_RWLOCK_READ, + rte_memory_order_acquire) + RTE_RWLOCK_READ; /* Back out if writer raced in */ if (unlikely(x & RTE_RWLOCK_MASK)) { - __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ, - __ATOMIC_RELEASE); + rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ, + rte_memory_order_release); return -EBUSY; } @@ -158,7 +158,7 @@ __rte_unlock_function(rwl) __rte_no_thread_safety_analysis { - __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ, __ATOMIC_RELEASE); + rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ, rte_memory_order_release); } /** @@ -178,10 +178,10 @@ { int32_t x; - x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED); + x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed); if (x < RTE_RWLOCK_WRITE && - __atomic_compare_exchange_n(&rwl->cnt, &x, x + RTE_RWLOCK_WRITE, - 1, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) + rte_atomic_compare_exchange_weak_explicit(&rwl->cnt, &x, x + RTE_RWLOCK_WRITE, + rte_memory_order_acquire, 
rte_memory_order_relaxed)) return 0; else return -EBUSY; @@ -201,22 +201,25 @@ int32_t x; while (1) { - x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED); + x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed); /* No readers or writers? */ if (likely(x < RTE_RWLOCK_WRITE)) { /* Turn off RTE_RWLOCK_WAIT, turn on RTE_RWLOCK_WRITE */ - if (__atomic_compare_exchange_n(&rwl->cnt, &x, RTE_RWLOCK_WRITE, 1, - __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) + if (rte_atomic_compare_exchange_weak_explicit( + &rwl->cnt, &x, RTE_RWLOCK_WRITE, + rte_memory_order_acquire, rte_memory_order_relaxed)) return; } /* Turn on writer wait bit */ if (!(x & RTE_RWLOCK_WAIT)) - __atomic_fetch_or(&rwl->cnt, RTE_RWLOCK_WAIT, __ATOMIC_RELAXED); + rte_atomic_fetch_or_explicit(&rwl->cnt, RTE_RWLOCK_WAIT, + rte_memory_order_relaxed); /* Wait until no readers before trying again */ - while (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) > RTE_RWLOCK_WAIT) + while (rte_atomic_load_explicit(&rwl->cnt, + rte_memory_order_relaxed) > RTE_RWLOCK_WAIT) rte_pause(); } @@ -233,7 +236,7 @@ __rte_unlock_function(rwl) __rte_no_thread_safety_analysis { - __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_WRITE, __ATOMIC_RELEASE); + rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_WRITE, rte_memory_order_release); } /** @@ -247,7 +250,7 @@ static inline int rte_rwlock_write_is_locked(rte_rwlock_t *rwl) { - if (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) & RTE_RWLOCK_WRITE) + if (rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed) & RTE_RWLOCK_WRITE) return 1; return 0; diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h index c50ebaa..e5ff348 100644 --- a/lib/eal/include/generic/rte_spinlock.h +++ b/lib/eal/include/generic/rte_spinlock.h @@ -28,7 +28,7 @@ * The rte_spinlock_t type. 
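The read-side logic converted above (optimistic reader increment with acquire, back-out with release when a writer races in) can be sketched standalone in C11. The my_rwlock_* names and bit layout below mirror the RTE_RWLOCK_* scheme but are hypothetical, not the DPDK API:

```c
#include <stdatomic.h>
#include <stdint.h>
#include <errno.h>

#define MY_RWLOCK_WAIT  0x1                 /* writer is waiting */
#define MY_RWLOCK_WRITE 0x2                 /* writer holds the lock */
#define MY_RWLOCK_MASK  (MY_RWLOCK_WAIT | MY_RWLOCK_WRITE)
#define MY_RWLOCK_READ  0x4                 /* reader increment */

typedef struct { _Atomic int32_t cnt; } my_rwlock_t;

static inline int
my_rwlock_read_trylock(my_rwlock_t *rwl)
{
	int32_t x;

	x = atomic_load_explicit(&rwl->cnt, memory_order_relaxed);
	if (x & MY_RWLOCK_MASK)		/* writer present or pending */
		return -EBUSY;
	/* Optimistically add a reader; acquire pairs with the writer's
	 * release so the protected data is visible after this point. */
	x = atomic_fetch_add_explicit(&rwl->cnt, MY_RWLOCK_READ,
			memory_order_acquire) + MY_RWLOCK_READ;
	if (x & MY_RWLOCK_MASK) {	/* writer raced in: back out */
		atomic_fetch_sub_explicit(&rwl->cnt, MY_RWLOCK_READ,
				memory_order_release);
		return -EBUSY;
	}
	return 0;
}

static inline void
my_rwlock_read_unlock(my_rwlock_t *rwl)
{
	atomic_fetch_sub_explicit(&rwl->cnt, MY_RWLOCK_READ,
			memory_order_release);
}
```

The back-out path is what makes the optimistic fetch_add safe: a reader that loses the race undoes its increment rather than blocking the writer.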
*/ typedef struct __rte_lockable { - volatile int locked; /**< lock status 0 = unlocked, 1 = locked */ + volatile int __rte_atomic locked; /**< lock status 0 = unlocked, 1 = locked */ } rte_spinlock_t; /** @@ -65,10 +65,10 @@ { int exp = 0; - while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0, - __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) { - rte_wait_until_equal_32((volatile uint32_t *)&sl->locked, - 0, __ATOMIC_RELAXED); + while (!rte_atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1, + rte_memory_order_acquire, rte_memory_order_relaxed)) { + rte_wait_until_equal_32((volatile uint32_t *)(uintptr_t)&sl->locked, + 0, rte_memory_order_relaxed); exp = 0; } } @@ -89,7 +89,7 @@ rte_spinlock_unlock(rte_spinlock_t *sl) __rte_no_thread_safety_analysis { - __atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE); + rte_atomic_store_explicit(&sl->locked, 0, rte_memory_order_release); } #endif @@ -112,9 +112,8 @@ __rte_no_thread_safety_analysis { int exp = 0; - return __atomic_compare_exchange_n(&sl->locked, &exp, 1, - 0, /* disallow spurious failure */ - __ATOMIC_ACQUIRE, __ATOMIC_RELAXED); + return rte_atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1, + rte_memory_order_acquire, rte_memory_order_relaxed); } #endif @@ -128,7 +127,7 @@ */ static inline int rte_spinlock_is_locked (rte_spinlock_t *sl) { - return __atomic_load_n(&sl->locked, __ATOMIC_ACQUIRE); + return rte_atomic_load_explicit(&sl->locked, rte_memory_order_acquire); } /** diff --git a/lib/eal/include/rte_mcslock.h b/lib/eal/include/rte_mcslock.h index a805cb2..982fd81 100644 --- a/lib/eal/include/rte_mcslock.h +++ b/lib/eal/include/rte_mcslock.h @@ -32,8 +32,8 @@ * The rte_mcslock_t type. */ typedef struct rte_mcslock { - struct rte_mcslock *next; - int locked; /* 1 if the queue locked, 0 otherwise */ + struct rte_mcslock * __rte_atomic next; + int __rte_atomic locked; /* 1 if the queue locked, 0 otherwise */ } rte_mcslock_t; /** @@ -48,13 +48,13 @@ * lock should use its 'own node'. 
*/ static inline void -rte_mcslock_lock(rte_mcslock_t **msl, rte_mcslock_t *me) +rte_mcslock_lock(rte_mcslock_t * __rte_atomic *msl, rte_mcslock_t *me) { rte_mcslock_t *prev; /* Init me node */ - __atomic_store_n(&me->locked, 1, __ATOMIC_RELAXED); - __atomic_store_n(&me->next, NULL, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&me->locked, 1, rte_memory_order_relaxed); + rte_atomic_store_explicit(&me->next, NULL, rte_memory_order_relaxed); /* If the queue is empty, the exchange operation is enough to acquire * the lock. Hence, the exchange operation requires acquire semantics. @@ -62,7 +62,7 @@ * visible to other CPUs/threads. Hence, the exchange operation requires * release semantics as well. */ - prev = __atomic_exchange_n(msl, me, __ATOMIC_ACQ_REL); + prev = rte_atomic_exchange_explicit(msl, me, rte_memory_order_acq_rel); if (likely(prev == NULL)) { /* Queue was empty, no further action required, * proceed with lock taken. @@ -76,19 +76,19 @@ * strong as a release fence and is not sufficient to enforce the * desired order here. */ - __atomic_store_n(&prev->next, me, __ATOMIC_RELEASE); + rte_atomic_store_explicit(&prev->next, me, rte_memory_order_release); /* The while-load of me->locked should not move above the previous * store to prev->next. Otherwise it will cause a deadlock. Need a * store-load barrier. */ - __atomic_thread_fence(__ATOMIC_ACQ_REL); + rte_atomic_thread_fence(rte_memory_order_acq_rel); /* If the lock has already been acquired, it first atomically * places the node at the end of the queue and then proceeds * to spin on me->locked until the previous lock holder resets * the me->locked using mcslock_unlock(). */ - rte_wait_until_equal_32((uint32_t *)&me->locked, 0, __ATOMIC_ACQUIRE); + rte_wait_until_equal_32((uint32_t *)(uintptr_t)&me->locked, 0, rte_memory_order_acquire); } /** @@ -100,34 +100,34 @@ * A pointer to the node of MCS lock passed in rte_mcslock_lock.
*/ static inline void -rte_mcslock_unlock(rte_mcslock_t **msl, rte_mcslock_t *me) +rte_mcslock_unlock(rte_mcslock_t * __rte_atomic *msl, rte_mcslock_t * __rte_atomic me) { /* Check if there are more nodes in the queue. */ - if (likely(__atomic_load_n(&me->next, __ATOMIC_RELAXED) == NULL)) { + if (likely(rte_atomic_load_explicit(&me->next, rte_memory_order_relaxed) == NULL)) { /* No, last member in the queue. */ - rte_mcslock_t *save_me = __atomic_load_n(&me, __ATOMIC_RELAXED); + rte_mcslock_t *save_me = rte_atomic_load_explicit(&me, rte_memory_order_relaxed); /* Release the lock by setting it to NULL */ - if (likely(__atomic_compare_exchange_n(msl, &save_me, NULL, 0, - __ATOMIC_RELEASE, __ATOMIC_RELAXED))) + if (likely(rte_atomic_compare_exchange_strong_explicit(msl, &save_me, NULL, + rte_memory_order_release, rte_memory_order_relaxed))) return; /* Speculative execution would be allowed to read in the * while-loop first. This has the potential to cause a * deadlock. Need a load barrier. */ - __atomic_thread_fence(__ATOMIC_ACQUIRE); + rte_atomic_thread_fence(rte_memory_order_acquire); /* More nodes added to the queue by other CPUs. * Wait until the next pointer is set. */ - uintptr_t *next; - next = (uintptr_t *)&me->next; + uintptr_t __rte_atomic *next; + next = (uintptr_t __rte_atomic *)&me->next; RTE_WAIT_UNTIL_MASKED(next, UINTPTR_MAX, !=, 0, - __ATOMIC_RELAXED); + rte_memory_order_relaxed); } /* Pass lock to next waiter. */ - __atomic_store_n(&me->next->locked, 0, __ATOMIC_RELEASE); + rte_atomic_store_explicit(&me->next->locked, 0, rte_memory_order_release); } /** @@ -141,10 +141,10 @@ * 1 if the lock is successfully taken; 0 otherwise.
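The MCS queue-lock handoff these hunks convert (tail exchange with acq_rel, then spinning on a per-thread node) can be sketched standalone in C11. The my_mcs_* names are hypothetical; note the DPDK implementation additionally issues a store-load fence between linking into the queue and spinning, which this minimal sketch leaves out:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Minimal MCS queue lock: each waiter spins on its own node. */
typedef struct my_mcs_node {
	struct my_mcs_node *_Atomic next;
	_Atomic int locked;
} my_mcs_node_t;

static inline void
my_mcs_lock(my_mcs_node_t *_Atomic *msl, my_mcs_node_t *me)
{
	my_mcs_node_t *prev;

	atomic_store_explicit(&me->locked, 1, memory_order_relaxed);
	atomic_store_explicit(&me->next, NULL, memory_order_relaxed);

	/* Swap ourselves in as the tail; acq_rel covers both acquiring
	 * the lock (empty queue) and publishing our node init. */
	prev = atomic_exchange_explicit(msl, me, memory_order_acq_rel);
	if (prev == NULL)
		return;			/* queue was empty: lock taken */

	/* Link behind the predecessor, then wait for the handoff. */
	atomic_store_explicit(&prev->next, me, memory_order_release);
	while (atomic_load_explicit(&me->locked, memory_order_acquire))
		;			/* spin on our own cache line */
}

static inline void
my_mcs_unlock(my_mcs_node_t *_Atomic *msl, my_mcs_node_t *me)
{
	my_mcs_node_t *next =
	    atomic_load_explicit(&me->next, memory_order_relaxed);

	if (next == NULL) {
		my_mcs_node_t *save = me;
		/* No visible successor: try to empty the queue. */
		if (atomic_compare_exchange_strong_explicit(msl, &save,
		    NULL, memory_order_release, memory_order_relaxed))
			return;
		/* A successor is linking in: wait for next to be set. */
		while ((next = atomic_load_explicit(&me->next,
		    memory_order_relaxed)) == NULL)
			;
	}
	atomic_store_explicit(&next->locked, 0, memory_order_release);
}
```

Each waiter spins only on its own node, which is the point of MCS: no shared cache line is hammered by all contenders.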
*/ static inline int -rte_mcslock_trylock(rte_mcslock_t **msl, rte_mcslock_t *me) +rte_mcslock_trylock(rte_mcslock_t * __rte_atomic *msl, rte_mcslock_t *me) { /* Init me node */ - __atomic_store_n(&me->next, NULL, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&me->next, NULL, rte_memory_order_relaxed); /* Try to lock */ rte_mcslock_t *expected = NULL; @@ -155,8 +155,8 @@ * is visible to other CPUs/threads. Hence, the compare-exchange * operation requires release semantics as well. */ - return __atomic_compare_exchange_n(msl, &expected, me, 0, - __ATOMIC_ACQ_REL, __ATOMIC_RELAXED); + return rte_atomic_compare_exchange_strong_explicit(msl, &expected, me, + rte_memory_order_acq_rel, rte_memory_order_relaxed); } /** @@ -168,9 +168,9 @@ * 1 if the lock is currently taken; 0 otherwise. */ static inline int -rte_mcslock_is_locked(rte_mcslock_t *msl) +rte_mcslock_is_locked(rte_mcslock_t * __rte_atomic msl) { - return (__atomic_load_n(&msl, __ATOMIC_RELAXED) != NULL); + return (rte_atomic_load_explicit(&msl, rte_memory_order_relaxed) != NULL); } #ifdef __cplusplus diff --git a/lib/eal/include/rte_pflock.h b/lib/eal/include/rte_pflock.h index a3f7291..7d51fc9 100644 --- a/lib/eal/include/rte_pflock.h +++ b/lib/eal/include/rte_pflock.h @@ -40,8 +40,8 @@ */ struct rte_pflock { struct { - uint16_t in; - uint16_t out; + uint16_t __rte_atomic in; + uint16_t __rte_atomic out; } rd, wr; }; typedef struct rte_pflock rte_pflock_t; @@ -116,14 +116,14 @@ struct rte_pflock { * If no writer is present, then the operation has completed * successfully. */ - w = __atomic_fetch_add(&pf->rd.in, RTE_PFLOCK_RINC, __ATOMIC_ACQUIRE) + w = rte_atomic_fetch_add_explicit(&pf->rd.in, RTE_PFLOCK_RINC, rte_memory_order_acquire) & RTE_PFLOCK_WBITS; if (w == 0) return; /* Wait for current write phase to complete. 
*/ RTE_WAIT_UNTIL_MASKED(&pf->rd.in, RTE_PFLOCK_WBITS, !=, w, - __ATOMIC_ACQUIRE); + rte_memory_order_acquire); } /** @@ -139,7 +139,7 @@ struct rte_pflock { static inline void rte_pflock_read_unlock(rte_pflock_t *pf) { - __atomic_fetch_add(&pf->rd.out, RTE_PFLOCK_RINC, __ATOMIC_RELEASE); + rte_atomic_fetch_add_explicit(&pf->rd.out, RTE_PFLOCK_RINC, rte_memory_order_release); } /** @@ -160,8 +160,9 @@ struct rte_pflock { /* Acquire ownership of write-phase. * This is same as rte_ticketlock_lock(). */ - ticket = __atomic_fetch_add(&pf->wr.in, 1, __ATOMIC_RELAXED); - rte_wait_until_equal_16(&pf->wr.out, ticket, __ATOMIC_ACQUIRE); + ticket = rte_atomic_fetch_add_explicit(&pf->wr.in, 1, rte_memory_order_relaxed); + rte_wait_until_equal_16((uint16_t *)(uintptr_t)&pf->wr.out, ticket, + rte_memory_order_acquire); /* * Acquire ticket on read-side in order to allow them @@ -172,10 +173,11 @@ struct rte_pflock { * speculatively. */ w = RTE_PFLOCK_PRES | (ticket & RTE_PFLOCK_PHID); - ticket = __atomic_fetch_add(&pf->rd.in, w, __ATOMIC_RELAXED); + ticket = rte_atomic_fetch_add_explicit(&pf->rd.in, w, rte_memory_order_relaxed); /* Wait for any pending readers to flush. */ - rte_wait_until_equal_16(&pf->rd.out, ticket, __ATOMIC_ACQUIRE); + rte_wait_until_equal_16((uint16_t *)(uintptr_t)&pf->rd.out, ticket, + rte_memory_order_acquire); } /** @@ -192,10 +194,10 @@ struct rte_pflock { rte_pflock_write_unlock(rte_pflock_t *pf) { /* Migrate from write phase to read phase. */ - __atomic_fetch_and(&pf->rd.in, RTE_PFLOCK_LSB, __ATOMIC_RELEASE); + rte_atomic_fetch_and_explicit(&pf->rd.in, RTE_PFLOCK_LSB, rte_memory_order_release); /* Allow other writers to continue. 
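The write-phase acquisition above is the classic ticket-lock handshake (fetch_add a ticket, spin until it is served), the same protocol rte_ticketlock_lock() uses. A standalone C11 sketch, with hypothetical my_ticketlock_* names:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Ticket lock sketch: FIFO-fair, one fetch_add per acquisition. */
typedef struct {
	_Atomic uint16_t current;	/* ticket now being served */
	_Atomic uint16_t next;		/* next ticket to hand out */
} my_ticketlock_t;

static inline void
my_ticketlock_lock(my_ticketlock_t *tl)
{
	uint16_t me = atomic_fetch_add_explicit(&tl->next, 1,
			memory_order_relaxed);
	/* acquire pairs with the release store in unlock */
	while (atomic_load_explicit(&tl->current,
			memory_order_acquire) != me)
		;	/* spin until our ticket comes up */
}

static inline void
my_ticketlock_unlock(my_ticketlock_t *tl)
{
	uint16_t i = atomic_load_explicit(&tl->current,
			memory_order_relaxed);
	atomic_store_explicit(&tl->current, i + 1, memory_order_release);
}
```

Fairness falls out of the counters: tickets are granted in fetch_add order, so waiters are served FIFO.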
*/ - __atomic_fetch_add(&pf->wr.out, 1, __ATOMIC_RELEASE); + rte_atomic_fetch_add_explicit(&pf->wr.out, 1, rte_memory_order_release); } #ifdef __cplusplus diff --git a/lib/eal/include/rte_seqcount.h b/lib/eal/include/rte_seqcount.h index ff62708..f581908 100644 --- a/lib/eal/include/rte_seqcount.h +++ b/lib/eal/include/rte_seqcount.h @@ -31,7 +31,7 @@ * The RTE seqcount type. */ typedef struct { - uint32_t sn; /**< A sequence number for the protected data. */ + uint32_t __rte_atomic sn; /**< A sequence number for the protected data. */ } rte_seqcount_t; /** @@ -105,11 +105,11 @@ static inline uint32_t rte_seqcount_read_begin(const rte_seqcount_t *seqcount) { - /* __ATOMIC_ACQUIRE to prevent loads after (in program order) + /* rte_memory_order_acquire to prevent loads after (in program order) * from happening before the sn load. Synchronizes-with the * store release in rte_seqcount_write_end(). */ - return __atomic_load_n(&seqcount->sn, __ATOMIC_ACQUIRE); + return rte_atomic_load_explicit(&seqcount->sn, rte_memory_order_acquire); } /** @@ -160,9 +160,9 @@ return true; /* make sure the data loads happens before the sn load */ - rte_atomic_thread_fence(__ATOMIC_ACQUIRE); + rte_atomic_thread_fence(rte_memory_order_acquire); - end_sn = __atomic_load_n(&seqcount->sn, __ATOMIC_RELAXED); + end_sn = rte_atomic_load_explicit(&seqcount->sn, rte_memory_order_relaxed); /* A writer incremented the sequence number during this read * critical section. @@ -204,12 +204,12 @@ sn = seqcount->sn + 1; - __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&seqcount->sn, sn, rte_memory_order_relaxed); - /* __ATOMIC_RELEASE to prevent stores after (in program order) + /* rte_memory_order_release to prevent stores after (in program order) * from happening before the sn store. 
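The seqcount protocol being converted here (readers retry when the sequence number is odd or has changed; writers bracket updates with fences) can be sketched standalone in C11. The my_seqcount_* names are hypothetical, and the fence placement deliberately mirrors the rte_seqcount code and comments above:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Seqcount sketch: an odd sn means a writer is mid-update. */
typedef struct { _Atomic uint32_t sn; } my_seqcount_t;

static inline uint32_t
my_seqcount_read_begin(my_seqcount_t *sc)
{
	/* acquire pairs with the store-release in write_end */
	return atomic_load_explicit(&sc->sn, memory_order_acquire);
}

static inline bool
my_seqcount_read_retry(my_seqcount_t *sc, uint32_t begin_sn)
{
	if (begin_sn & 1)		/* writer was active at begin */
		return true;
	/* keep the data loads before the sn reload */
	atomic_thread_fence(memory_order_acquire);
	return atomic_load_explicit(&sc->sn,
	    memory_order_relaxed) != begin_sn;
}

static inline void
my_seqcount_write_begin(my_seqcount_t *sc)
{
	uint32_t sn = atomic_load_explicit(&sc->sn,
	    memory_order_relaxed) + 1;
	atomic_store_explicit(&sc->sn, sn, memory_order_relaxed);
	/* as in the DPDK code: fence so the data stores that follow
	 * are not observed before the (now odd) sn store */
	atomic_thread_fence(memory_order_release);
}

static inline void
my_seqcount_write_end(my_seqcount_t *sc)
{
	uint32_t sn = atomic_load_explicit(&sc->sn,
	    memory_order_relaxed) + 1;
	/* store-release publishes the data to read_begin's acquire */
	atomic_store_explicit(&sc->sn, sn, memory_order_release);
}
```

A reader loops on read_begin/read_retry until retry returns false, at which point the data it copied is known to be consistent.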
*/ - rte_atomic_thread_fence(__ATOMIC_RELEASE); + rte_atomic_thread_fence(rte_memory_order_release); } /** @@ -236,7 +236,7 @@ sn = seqcount->sn + 1; /* Synchronizes-with the load acquire in rte_seqcount_read_begin(). */ - __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELEASE); + rte_atomic_store_explicit(&seqcount->sn, sn, rte_memory_order_release); } #ifdef __cplusplus diff --git a/lib/eal/include/rte_ticketlock.h b/lib/eal/include/rte_ticketlock.h index 5db0d8a..31b5193 100644 --- a/lib/eal/include/rte_ticketlock.h +++ b/lib/eal/include/rte_ticketlock.h @@ -29,10 +29,10 @@ * The rte_ticketlock_t type. */ typedef union { - uint32_t tickets; + uint32_t __rte_atomic tickets; struct { - uint16_t current; - uint16_t next; + uint16_t __rte_atomic current; + uint16_t __rte_atomic next; } s; } rte_ticketlock_t; @@ -50,7 +50,7 @@ static inline void rte_ticketlock_init(rte_ticketlock_t *tl) { - __atomic_store_n(&tl->tickets, 0, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&tl->tickets, 0, rte_memory_order_relaxed); } /** @@ -62,8 +62,9 @@ static inline void rte_ticketlock_lock(rte_ticketlock_t *tl) { - uint16_t me = __atomic_fetch_add(&tl->s.next, 1, __ATOMIC_RELAXED); - rte_wait_until_equal_16(&tl->s.current, me, __ATOMIC_ACQUIRE); + uint16_t me = rte_atomic_fetch_add_explicit(&tl->s.next, 1, rte_memory_order_relaxed); + rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tl->s.current, me, + rte_memory_order_acquire); } /** @@ -75,8 +76,8 @@ static inline void rte_ticketlock_unlock(rte_ticketlock_t *tl) { - uint16_t i = __atomic_load_n(&tl->s.current, __ATOMIC_RELAXED); - __atomic_store_n(&tl->s.current, i + 1, __ATOMIC_RELEASE); + uint16_t i = rte_atomic_load_explicit(&tl->s.current, rte_memory_order_relaxed); + rte_atomic_store_explicit(&tl->s.current, i + 1, rte_memory_order_release); } /** @@ -91,12 +92,13 @@ rte_ticketlock_trylock(rte_ticketlock_t *tl) { rte_ticketlock_t oldl, newl; - oldl.tickets = __atomic_load_n(&tl->tickets, __ATOMIC_RELAXED); + oldl.tickets = 
rte_atomic_load_explicit(&tl->tickets, rte_memory_order_relaxed); newl.tickets = oldl.tickets; newl.s.next++; if (oldl.s.next == oldl.s.current) { - if (__atomic_compare_exchange_n(&tl->tickets, &oldl.tickets, - newl.tickets, 0, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) + if (rte_atomic_compare_exchange_strong_explicit(&tl->tickets, + (uint32_t *)(uintptr_t)&oldl.tickets, + newl.tickets, rte_memory_order_acquire, rte_memory_order_relaxed)) return 1; } @@ -115,7 +117,7 @@ rte_ticketlock_is_locked(rte_ticketlock_t *tl) { rte_ticketlock_t tic; - tic.tickets = __atomic_load_n(&tl->tickets, __ATOMIC_ACQUIRE); + tic.tickets = rte_atomic_load_explicit(&tl->tickets, rte_memory_order_acquire); return (tic.s.current != tic.s.next); } @@ -126,7 +128,7 @@ typedef struct { rte_ticketlock_t tl; /**< the actual ticketlock */ - int user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */ + int __rte_atomic user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */ unsigned int count; /**< count of time this lock has been called */ } rte_ticketlock_recursive_t; @@ -146,7 +148,7 @@ rte_ticketlock_recursive_init(rte_ticketlock_recursive_t *tlr) { rte_ticketlock_init(&tlr->tl); - __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&tlr->user, TICKET_LOCK_INVALID_ID, rte_memory_order_relaxed); tlr->count = 0; } @@ -161,9 +163,9 @@ { int id = rte_gettid(); - if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) { + if (rte_atomic_load_explicit(&tlr->user, rte_memory_order_relaxed) != id) { rte_ticketlock_lock(&tlr->tl); - __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&tlr->user, id, rte_memory_order_relaxed); } tlr->count++; } @@ -178,8 +180,8 @@ rte_ticketlock_recursive_unlock(rte_ticketlock_recursive_t *tlr) { if (--(tlr->count) == 0) { - __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID, - __ATOMIC_RELAXED); + rte_atomic_store_explicit(&tlr->user, TICKET_LOCK_INVALID_ID, + 
rte_memory_order_relaxed); rte_ticketlock_unlock(&tlr->tl); } } @@ -197,10 +199,10 @@ { int id = rte_gettid(); - if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) { + if (rte_atomic_load_explicit(&tlr->user, rte_memory_order_relaxed) != id) { if (rte_ticketlock_trylock(&tlr->tl) == 0) return 0; - __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&tlr->user, id, rte_memory_order_relaxed); } tlr->count++; return 1; diff --git a/lib/eal/include/rte_trace_point.h b/lib/eal/include/rte_trace_point.h index c6b6fcc..2bcf954 100644 --- a/lib/eal/include/rte_trace_point.h +++ b/lib/eal/include/rte_trace_point.h @@ -32,7 +32,7 @@ #include /** The tracepoint object. */ -typedef uint64_t rte_trace_point_t; +typedef uint64_t __rte_atomic rte_trace_point_t; /** * Macro to define the tracepoint arguments in RTE_TRACE_POINT macro. @@ -358,7 +358,7 @@ struct __rte_trace_header { #define __rte_trace_point_emit_header_generic(t) \ void *mem; \ do { \ - const uint64_t val = __atomic_load_n(t, __ATOMIC_ACQUIRE); \ + const uint64_t val = rte_atomic_load_explicit(t, rte_memory_order_acquire); \ if (likely(!(val & __RTE_TRACE_FIELD_ENABLE_MASK))) \ return; \ mem = __rte_trace_mem_get(val); \ diff --git a/lib/eal/ppc/include/rte_atomic.h b/lib/eal/ppc/include/rte_atomic.h index ec8d8a2..44822db 100644 --- a/lib/eal/ppc/include/rte_atomic.h +++ b/lib/eal/ppc/include/rte_atomic.h @@ -48,8 +48,8 @@ static inline int rte_atomic16_cmpset(volatile uint16_t *dst, uint16_t exp, uint16_t src) { - return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE, - __ATOMIC_ACQUIRE) ? 1 : 0; + return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire, + rte_memory_order_acquire) ? 
1 : 0; } static inline int rte_atomic16_test_and_set(rte_atomic16_t *v) @@ -60,29 +60,29 @@ static inline int rte_atomic16_test_and_set(rte_atomic16_t *v) static inline void rte_atomic16_inc(rte_atomic16_t *v) { - __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE); + rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire); } static inline void rte_atomic16_dec(rte_atomic16_t *v) { - __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE); + rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire); } static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v) { - return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0; + return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0; } static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v) { - return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0; + return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0; } static inline uint16_t rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val) { - return __atomic_exchange_2(dst, val, __ATOMIC_SEQ_CST); + return __atomic_exchange_2(dst, val, rte_memory_order_seq_cst); } /*------------------------- 32 bit atomic operations -------------------------*/ @@ -90,8 +90,8 @@ static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v) static inline int rte_atomic32_cmpset(volatile uint32_t *dst, uint32_t exp, uint32_t src) { - return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE, - __ATOMIC_ACQUIRE) ? 1 : 0; + return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire, + rte_memory_order_acquire) ? 
1 : 0; } static inline int rte_atomic32_test_and_set(rte_atomic32_t *v) @@ -102,29 +102,29 @@ static inline int rte_atomic32_test_and_set(rte_atomic32_t *v) static inline void rte_atomic32_inc(rte_atomic32_t *v) { - __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE); + rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire); } static inline void rte_atomic32_dec(rte_atomic32_t *v) { - __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE); + rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire); } static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v) { - return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0; + return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0; } static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v) { - return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0; + return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0; } static inline uint32_t rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val) { - return __atomic_exchange_4(dst, val, __ATOMIC_SEQ_CST); + return __atomic_exchange_4(dst, val, rte_memory_order_seq_cst); } /*------------------------- 64 bit atomic operations -------------------------*/ @@ -132,8 +132,8 @@ static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v) static inline int rte_atomic64_cmpset(volatile uint64_t *dst, uint64_t exp, uint64_t src) { - return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE, - __ATOMIC_ACQUIRE) ? 1 : 0; + return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire, + rte_memory_order_acquire) ? 
1 : 0; } static inline void @@ -157,47 +157,47 @@ static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v) static inline void rte_atomic64_add(rte_atomic64_t *v, int64_t inc) { - __atomic_fetch_add(&v->cnt, inc, __ATOMIC_ACQUIRE); + rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_acquire); } static inline void rte_atomic64_sub(rte_atomic64_t *v, int64_t dec) { - __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_ACQUIRE); + rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_acquire); } static inline void rte_atomic64_inc(rte_atomic64_t *v) { - __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE); + rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire); } static inline void rte_atomic64_dec(rte_atomic64_t *v) { - __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE); + rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire); } static inline int64_t rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc) { - return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_ACQUIRE) + inc; + return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_acquire) + inc; } static inline int64_t rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec) { - return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_ACQUIRE) - dec; + return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_acquire) - dec; } static inline int rte_atomic64_inc_and_test(rte_atomic64_t *v) { - return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0; + return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0; } static inline int rte_atomic64_dec_and_test(rte_atomic64_t *v) { - return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0; + return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0; } static inline int rte_atomic64_test_and_set(rte_atomic64_t *v) @@ -213,7 +213,7 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v) static inline uint64_t rte_atomic64_exchange(volatile uint64_t *dst, 
uint64_t val) { - return __atomic_exchange_8(dst, val, __ATOMIC_SEQ_CST); + return __atomic_exchange_8(dst, val, rte_memory_order_seq_cst); } #endif diff --git a/lib/eal/x86/include/rte_atomic.h b/lib/eal/x86/include/rte_atomic.h index f2ee1a9..aedce9b 100644 --- a/lib/eal/x86/include/rte_atomic.h +++ b/lib/eal/x86/include/rte_atomic.h @@ -82,14 +82,14 @@ /** * Synchronization fence between threads based on the specified memory order. * - * On x86 the __atomic_thread_fence(__ATOMIC_SEQ_CST) generates full 'mfence' + * On x86 the __atomic_thread_fence(rte_memory_order_seq_cst) generates full 'mfence' * which is quite expensive. The optimized implementation of rte_smp_mb is * used instead. */ static __rte_always_inline void rte_atomic_thread_fence(int memorder) { - if (memorder == __ATOMIC_SEQ_CST) + if (memorder == rte_memory_order_seq_cst) rte_smp_mb(); else __atomic_thread_fence(memorder); diff --git a/lib/eal/x86/include/rte_spinlock.h b/lib/eal/x86/include/rte_spinlock.h index 0b20ddf..c76218a 100644 --- a/lib/eal/x86/include/rte_spinlock.h +++ b/lib/eal/x86/include/rte_spinlock.h @@ -78,7 +78,7 @@ static inline int rte_tm_supported(void) } static inline int -rte_try_tm(volatile int *lock) +rte_try_tm(volatile int __rte_atomic *lock) { int i, retries; diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c index f749da9..cf70e33 100644 --- a/lib/eal/x86/rte_power_intrinsics.c +++ b/lib/eal/x86/rte_power_intrinsics.c @@ -23,9 +23,9 @@ uint64_t val; /* trigger a write but don't change the value */ - val = __atomic_load_n((volatile uint64_t *)addr, __ATOMIC_RELAXED); - __atomic_compare_exchange_n((volatile uint64_t *)addr, &val, val, 0, - __ATOMIC_RELAXED, __ATOMIC_RELAXED); + val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed); + rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val, + rte_memory_order_relaxed, rte_memory_order_relaxed); } static bool wait_supported; From 
patchwork Fri Aug 11 01:31:58 2023
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: techboard@dpdk.org, Bruce Richardson , Honnappa Nagarahalli , Ruifeng Wang , Jerin Jacob , Sunil Kumar Kori , Mattias Rönnblom , Joyce Kong , David Christensen , Konstantin Ananyev , David Hunt , Thomas Monjalon , David Marchand , Tyler Retzlaff
Subject: [PATCH 3/6] eal: add rte atomic qualifier with casts
Date: Thu, 10 Aug 2023 18:31:58 -0700
Message-Id: <1691717521-1025-4-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1691717521-1025-1-git-send-email-roretzla@linux.microsoft.com>
References:
<1691717521-1025-1-git-send-email-roretzla@linux.microsoft.com>

Introduce __rte_atomic qualifying casts in the rte optional atomics inline functions to avoid cascading the requirement that callers pass __rte_atomic qualified arguments.

Warning: this is implementation dependent and is done temporarily to avoid converting more of the DPDK libraries and tests in the initial series that introduces the API. The assumption that the qualified and unqualified types share the same ABI carries a risk that is only realized when enable_stdatomic=true.

Signed-off-by: Tyler Retzlaff --- lib/eal/include/generic/rte_atomic.h | 48 ++++++++++++++++++++++++------------ lib/eal/include/generic/rte_pause.h | 9 ++++--- lib/eal/x86/rte_power_intrinsics.c | 7 +++--- 3 files changed, 42 insertions(+), 22 deletions(-) diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h index 15a36f3..2c65304 100644 --- a/lib/eal/include/generic/rte_atomic.h +++ b/lib/eal/include/generic/rte_atomic.h @@ -273,7 +273,8 @@ static inline void rte_atomic16_add(rte_atomic16_t *v, int16_t inc) { - rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst); + rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, inc, + rte_memory_order_seq_cst); } /** @@ -287,7 +288,8 @@ static inline void rte_atomic16_sub(rte_atomic16_t *v, int16_t dec) { - rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst); + rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, dec, + rte_memory_order_seq_cst); } /** @@ -340,7 +342,8 @@ static inline int16_t rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc) { - return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc; + return
rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, inc, + rte_memory_order_seq_cst) + inc; } /** @@ -360,7 +363,8 @@ static inline int16_t rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec) { - return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec; + return rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, dec, + rte_memory_order_seq_cst) - dec; } /** @@ -379,7 +383,8 @@ #ifdef RTE_FORCE_INTRINSICS static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v) { - return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0; + return rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, 1, + rte_memory_order_seq_cst) + 1 == 0; } #endif @@ -399,7 +404,8 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v) #ifdef RTE_FORCE_INTRINSICS static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v) { - return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0; + return rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, 1, + rte_memory_order_seq_cst) - 1 == 0; } #endif @@ -552,7 +558,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v) static inline void rte_atomic32_add(rte_atomic32_t *v, int32_t inc) { - rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst); + rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, inc, + rte_memory_order_seq_cst); } /** @@ -566,7 +573,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v) static inline void rte_atomic32_sub(rte_atomic32_t *v, int32_t dec) { - rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst); + rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, dec, + rte_memory_order_seq_cst); } /** @@ -619,7 +627,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v) static inline int32_t rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc) { - return 
rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc; + return rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, inc, + rte_memory_order_seq_cst) + inc; } /** @@ -639,7 +648,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v) static inline int32_t rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec) { - return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec; + return rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, dec, + rte_memory_order_seq_cst) - dec; } /** @@ -658,7 +668,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v) #ifdef RTE_FORCE_INTRINSICS static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v) { - return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0; + return rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, 1, + rte_memory_order_seq_cst) + 1 == 0; } #endif @@ -678,7 +689,8 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v) #ifdef RTE_FORCE_INTRINSICS static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v) { - return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0; + return rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, 1, + rte_memory_order_seq_cst) - 1 == 0; } #endif @@ -884,7 +896,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v) static inline void rte_atomic64_add(rte_atomic64_t *v, int64_t inc) { - rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst); + rte_atomic_fetch_add_explicit((volatile int64_t __rte_atomic *)&v->cnt, inc, + rte_memory_order_seq_cst); } #endif @@ -903,7 +916,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v) static inline void rte_atomic64_sub(rte_atomic64_t *v, int64_t dec) { - rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst); + rte_atomic_fetch_sub_explicit((volatile int64_t __rte_atomic *)&v->cnt, dec, + 
rte_memory_order_seq_cst); } #endif @@ -961,7 +975,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v) static inline int64_t rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc) { - return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc; + return rte_atomic_fetch_add_explicit((volatile int64_t __rte_atomic *)&v->cnt, inc, + rte_memory_order_seq_cst) + inc; } #endif @@ -985,7 +1000,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v) static inline int64_t rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec) { - return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec; + return rte_atomic_fetch_sub_explicit((volatile int64_t __rte_atomic *)&v->cnt, dec, + rte_memory_order_seq_cst) - dec; } #endif diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h index 3ea1553..db8a1f8 100644 --- a/lib/eal/include/generic/rte_pause.h +++ b/lib/eal/include/generic/rte_pause.h @@ -86,7 +86,8 @@ { assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed); - while (rte_atomic_load_explicit(addr, memorder) != expected) + while (rte_atomic_load_explicit((volatile uint16_t __rte_atomic *)addr, memorder) + != expected) rte_pause(); } @@ -96,7 +97,8 @@ { assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed); - while (rte_atomic_load_explicit(addr, memorder) != expected) + while (rte_atomic_load_explicit((volatile uint32_t __rte_atomic *)addr, memorder) + != expected) rte_pause(); } @@ -106,7 +108,8 @@ { assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed); - while (rte_atomic_load_explicit(addr, memorder) != expected) + while (rte_atomic_load_explicit((volatile uint64_t __rte_atomic *)addr, memorder) + != expected) rte_pause(); } diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c index cf70e33..6c192f0 100644 --- a/lib/eal/x86/rte_power_intrinsics.c +++ 
b/lib/eal/x86/rte_power_intrinsics.c @@ -23,9 +23,10 @@ uint64_t val; /* trigger a write but don't change the value */ - val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed); - rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val, - rte_memory_order_relaxed, rte_memory_order_relaxed); + val = rte_atomic_load_explicit((volatile uint64_t __rte_atomic *)addr, + rte_memory_order_relaxed); + rte_atomic_compare_exchange_strong_explicit((volatile uint64_t __rte_atomic *)addr, + &val, val, rte_memory_order_relaxed, rte_memory_order_relaxed); } static bool wait_supported; From patchwork Fri Aug 11 01:31:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tyler Retzlaff X-Patchwork-Id: 130098 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 85EAE43028; Fri, 11 Aug 2023 03:32:14 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 53A3B43255; Fri, 11 Aug 2023 03:32:09 +0200 (CEST) Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by mails.dpdk.org (Postfix) with ESMTP id BDB7340F16; Fri, 11 Aug 2023 03:32:05 +0200 (CEST) Received: by linux.microsoft.com (Postfix, from userid 1086) id 0D19D20FD06B; Thu, 10 Aug 2023 18:32:04 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 0D19D20FD06B DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1691717525; bh=4FDPEEXdI5egawCMhiKoUo3G76TdCHE5VK5pSea7en8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TqeytUTru/uIvylRACFVZ3snJUbvP4lfPxSxOkzsc/sQKQKKqqLAmyEdRb+6wXnYi UFrWO4aZ6qkqc4tvqzuvlS7bzbMyh3lmv5erYOtPeR8thG03T7SV82qeXn07v5Jjn6 eq8CxoiiRsDCZcECkTGrXRInTa27c6mC1XUYb9/U= 
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: techboard@dpdk.org, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong, David Christensen, Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand, Tyler Retzlaff
Subject: [PATCH 4/6] distributor: adapt for EAL optional atomics API changes
Date: Thu, 10 Aug 2023 18:31:59 -0700
Message-Id: <1691717521-1025-5-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1691717521-1025-1-git-send-email-roretzla@linux.microsoft.com>
References: <1691717521-1025-1-git-send-email-roretzla@linux.microsoft.com>
List-Id: DPDK patches and discussions

Adapt the distributor library for the EAL optional atomics API changes.

Signed-off-by: Tyler Retzlaff
---
 lib/distributor/distributor_private.h    |  2 +-
 lib/distributor/rte_distributor_single.c | 44 ++++++++++++++++----------------
 2 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/lib/distributor/distributor_private.h b/lib/distributor/distributor_private.h
index 7101f63..ffbdae5 100644
--- a/lib/distributor/distributor_private.h
+++ b/lib/distributor/distributor_private.h
@@ -52,7 +52,7 @@
  * Only 64-bits of the memory is actually used though.
  */
 union rte_distributor_buffer_single {
-	volatile int64_t bufptr64;
+	volatile int64_t __rte_atomic bufptr64;
 	char pad[RTE_CACHE_LINE_SIZE*3];
 } __rte_cache_aligned;
 
diff --git a/lib/distributor/rte_distributor_single.c b/lib/distributor/rte_distributor_single.c
index 2c77ac4..ad43c13 100644
--- a/lib/distributor/rte_distributor_single.c
+++ b/lib/distributor/rte_distributor_single.c
@@ -32,10 +32,10 @@
 	int64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
 		| RTE_DISTRIB_GET_BUF;
 	RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
-		==, 0, __ATOMIC_RELAXED);
+		==, 0, rte_memory_order_relaxed);
 
 	/* Sync with distributor on GET_BUF flag. */
-	__atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+	rte_atomic_store_explicit(&buf->bufptr64, req, rte_memory_order_release);
 }
 
 struct rte_mbuf *
@@ -44,7 +44,7 @@ struct rte_mbuf *
 {
 	union rte_distributor_buffer_single *buf = &d->bufs[worker_id];
 	/* Sync with distributor. Acquire bufptr64. */
-	if (__atomic_load_n(&buf->bufptr64, __ATOMIC_ACQUIRE)
+	if (rte_atomic_load_explicit(&buf->bufptr64, rte_memory_order_acquire)
 		& RTE_DISTRIB_GET_BUF)
 		return NULL;
 
@@ -72,10 +72,10 @@ struct rte_mbuf *
 	uint64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
 		| RTE_DISTRIB_RETURN_BUF;
 	RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
-		==, 0, __ATOMIC_RELAXED);
+		==, 0, rte_memory_order_relaxed);
 
 	/* Sync with distributor on RETURN_BUF flag. */
-	__atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+	rte_atomic_store_explicit(&buf->bufptr64, req, rte_memory_order_release);
 	return 0;
 }
 
@@ -119,7 +119,7 @@ struct rte_mbuf *
 	d->in_flight_tags[wkr] = 0;
 	d->in_flight_bitmask &= ~(1UL << wkr);
 	/* Sync with worker. Release bufptr64. */
-	__atomic_store_n(&(d->bufs[wkr].bufptr64), 0, __ATOMIC_RELEASE);
+	rte_atomic_store_explicit(&d->bufs[wkr].bufptr64, 0, rte_memory_order_release);
 	if (unlikely(d->backlog[wkr].count != 0)) {
 		/* On return of a packet, we need to move the
 		 * queued packets for this core elsewhere.
@@ -165,21 +165,21 @@ struct rte_mbuf *
 	for (wkr = 0; wkr < d->num_workers; wkr++) {
 		uintptr_t oldbuf = 0;
 		/* Sync with worker. Acquire bufptr64. */
-		const int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
-			__ATOMIC_ACQUIRE);
+		const int64_t data = rte_atomic_load_explicit(&d->bufs[wkr].bufptr64,
+			rte_memory_order_acquire);
 
 		if (data & RTE_DISTRIB_GET_BUF) {
 			flushed++;
 			if (d->backlog[wkr].count)
 				/* Sync with worker. Release bufptr64. */
-				__atomic_store_n(&(d->bufs[wkr].bufptr64),
+				rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
 					backlog_pop(&d->backlog[wkr]),
-					__ATOMIC_RELEASE);
+					rte_memory_order_release);
 			else {
 				/* Sync with worker on GET_BUF flag. */
-				__atomic_store_n(&(d->bufs[wkr].bufptr64),
+				rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
 					RTE_DISTRIB_GET_BUF,
-					__ATOMIC_RELEASE);
+					rte_memory_order_release);
 				d->in_flight_tags[wkr] = 0;
 				d->in_flight_bitmask &= ~(1UL << wkr);
 			}
@@ -217,8 +217,8 @@ struct rte_mbuf *
 	while (next_idx < num_mbufs || next_mb != NULL) {
 		uintptr_t oldbuf = 0;
 		/* Sync with worker. Acquire bufptr64. */
-		int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
-			__ATOMIC_ACQUIRE);
+		int64_t data = rte_atomic_load_explicit(&(d->bufs[wkr].bufptr64),
+			rte_memory_order_acquire);
 
 		if (!next_mb) {
 			next_mb = mbufs[next_idx++];
@@ -264,15 +264,15 @@ struct rte_mbuf *
 			if (d->backlog[wkr].count)
 				/* Sync with worker. Release bufptr64. */
-				__atomic_store_n(&(d->bufs[wkr].bufptr64),
+				rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
 					backlog_pop(&d->backlog[wkr]),
-					__ATOMIC_RELEASE);
+					rte_memory_order_release);
 			else {
 				/* Sync with worker. Release bufptr64. */
-				__atomic_store_n(&(d->bufs[wkr].bufptr64),
+				rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
 					next_value,
-					__ATOMIC_RELEASE);
+					rte_memory_order_release);
 				d->in_flight_tags[wkr] = new_tag;
 				d->in_flight_bitmask |= (1UL << wkr);
 				next_mb = NULL;
@@ -294,8 +294,8 @@ struct rte_mbuf *
 	for (wkr = 0; wkr < d->num_workers; wkr++)
 		if (d->backlog[wkr].count &&
 				/* Sync with worker. Acquire bufptr64. */
-				(__atomic_load_n(&(d->bufs[wkr].bufptr64),
-				__ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF)) {
+				(rte_atomic_load_explicit(&d->bufs[wkr].bufptr64,
+				rte_memory_order_acquire) & RTE_DISTRIB_GET_BUF)) {
 			int64_t oldbuf = d->bufs[wkr].bufptr64 >>
 					RTE_DISTRIB_FLAG_BITS;
 
@@ -303,9 +303,9 @@ struct rte_mbuf *
 			store_return(oldbuf, d, &ret_start, &ret_count);
 
 			/* Sync with worker. Release bufptr64. */
-			__atomic_store_n(&(d->bufs[wkr].bufptr64),
+			rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
 				backlog_pop(&d->backlog[wkr]),
-				__ATOMIC_RELEASE);
+				rte_memory_order_release);
 		}
 
 	d->returns.start = ret_start;

From patchwork Fri Aug 11 01:32:00 2023
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 130102
X-Patchwork-Delegate: thomas@monjalon.net
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: techboard@dpdk.org, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong, David Christensen, Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand, Tyler Retzlaff
Subject: [PATCH 5/6] bpf: adapt for EAL optional atomics API changes
Date: Thu, 10 Aug 2023 18:32:00 -0700
Message-Id: <1691717521-1025-6-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1691717521-1025-1-git-send-email-roretzla@linux.microsoft.com>
References: <1691717521-1025-1-git-send-email-roretzla@linux.microsoft.com>
List-Id: DPDK patches and discussions

Adapt the bpf library for the EAL optional atomics API changes.

Signed-off-by: Tyler Retzlaff
---
 lib/bpf/bpf_pkt.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c
index ffd2db7..b300447 100644
--- a/lib/bpf/bpf_pkt.c
+++ b/lib/bpf/bpf_pkt.c
@@ -25,7 +25,7 @@
 
 struct bpf_eth_cbi {
 	/* used by both data & control path */
-	uint32_t use;	/*usage counter */
+	uint32_t __rte_atomic use;	/*usage counter */
 	const struct rte_eth_rxtx_callback *cb;	/* callback handle */
 	struct rte_bpf *bpf;
 	struct rte_bpf_jit jit;
@@ -110,8 +110,8 @@ struct bpf_eth_cbh {
 
 	/* in use, busy wait till current RX/TX iteration is finished */
 	if ((puse & BPF_ETH_CBI_INUSE) != 0) {
-		RTE_WAIT_UNTIL_MASKED((uint32_t *)(uintptr_t)&cbi->use,
-			UINT32_MAX, !=, puse, __ATOMIC_RELAXED);
+		RTE_WAIT_UNTIL_MASKED((uint32_t __rte_atomic *)(uintptr_t)&cbi->use,
+			UINT32_MAX, !=, puse, rte_memory_order_relaxed);
 	}
 }

From patchwork Fri Aug 11 01:32:01 2023
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 130103
X-Patchwork-Delegate: thomas@monjalon.net
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: techboard@dpdk.org, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong, David Christensen, Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand, Tyler Retzlaff
Subject: [PATCH 6/6] devtools: forbid new direct use of GCC atomic builtins
Date: Thu, 10 Aug 2023 18:32:01 -0700
Message-Id: <1691717521-1025-7-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1691717521-1025-1-git-send-email-roretzla@linux.microsoft.com>
References: <1691717521-1025-1-git-send-email-roretzla@linux.microsoft.com>
List-Id: DPDK patches and discussions

Refrain from using compiler __atomic_xxx builtins. DPDK now requires the use
of the rte_atomic_xxx_explicit macros when operating on DPDK atomic
variables.

Signed-off-by: Tyler Retzlaff
Acked-by: Morten Brørup
Acked-by: Bruce Richardson
---
 devtools/checkpatches.sh | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh
index 43f5e36..a32f02e 100755
--- a/devtools/checkpatches.sh
+++ b/devtools/checkpatches.sh
@@ -102,6 +102,14 @@ check_forbidden_additions() { #
 		-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
 		"$1" || res=1
 
+	# refrain from using compiler __atomic_xxx builtins
+	awk -v FOLDERS="lib drivers app examples" \
+		-v EXPRESSIONS="__atomic_.*\\\(" \
+		-v RET_ON_FAIL=1 \
+		-v MESSAGE='Using __atomic_xxx builtins' \
+		-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
+		"$1" || res=1
+
 	# refrain from using compiler __atomic_thread_fence()
 	# It should be avoided on x86 for SMP case.
 	awk -v FOLDERS="lib drivers app examples" \