From patchwork Mon Jul 13 06:23:40 2020
X-Patchwork-Submitter: Phil Yang
X-Patchwork-Id: 73873
X-Patchwork-Delegate: david.marchand@redhat.com
From: Phil Yang
To: thomas@monjalon.net, john.mcnamara@intel.com,
 Honnappa.Nagarahalli@arm.com, drc@linux.vnet.ibm.com, dev@dpdk.org
Cc: david.marchand@redhat.com, jerinj@marvell.com,
 konstantin.ananyev@intel.com, Ola.Liljedahl@arm.com,
 bruce.richardson@intel.com, Ruifeng.Wang@arm.com, nd@arm.com,
 Honnappa Nagarahalli, Marko Kovacevic
Date: Mon, 13 Jul 2020 14:23:40 +0800
Message-Id: <1594621423-14796-2-git-send-email-phil.yang@arm.com>
In-Reply-To: <1594621423-14796-1-git-send-email-phil.yang@arm.com>
References: <1594115449-13750-1-git-send-email-phil.yang@arm.com>
 <1594621423-14796-1-git-send-email-phil.yang@arm.com>
Subject: [dpdk-dev] [PATCH v7 1/3] doc: add generic atomic deprecation section

Add a guide, with examples, on deprecating the generic rte_atomic_xx
APIs in favor of C11 atomic built-ins.

Signed-off-by: Phil Yang
Signed-off-by: Honnappa Nagarahalli
---
 doc/guides/prog_guide/writing_efficient_code.rst | 64 +++++++++++++++++++++++-
 1 file changed, 63 insertions(+), 1 deletion(-)

diff --git a/doc/guides/prog_guide/writing_efficient_code.rst b/doc/guides/prog_guide/writing_efficient_code.rst
index 849f63e..16d6188 100644
--- a/doc/guides/prog_guide/writing_efficient_code.rst
+++ b/doc/guides/prog_guide/writing_efficient_code.rst
@@ -167,7 +167,13 @@ but with the added cost of lower throughput.
 Locks and Atomic Operations
 ---------------------------
 
-Atomic operations imply a lock prefix before the instruction,
+This section describes some key considerations when using locks and atomic
+operations in the DPDK environment.
+
+Locks
+~~~~~
+
+On x86, atomic operations imply a lock prefix before the instruction,
 causing the processor's LOCK# signal to be asserted during execution
 of the following instruction.
 This has a big impact on performance in a multicore environment.
@@ -176,6 +182,62 @@ It can often be replaced by other solutions like per-lcore variables.
 Also, some locking techniques are more efficient than others.
 For instance, the Read-Copy-Update (RCU) algorithm can frequently replace
 simple rwlocks.
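+
+As an illustration of the per-lcore alternative mentioned above, a
+statistics counter can be kept in a per-lcore variable and updated without
+any locking or atomics on the fast path. A minimal sketch using the EAL
+per-lcore macros (the counter and function names are illustrative):
+
+.. code-block:: c
+
+    #include <stdint.h>
+    #include <rte_per_lcore.h>
+
+    /* One private counter per lcore: updates need no synchronization. */
+    static RTE_DEFINE_PER_LCORE(uint64_t, tx_pkts);
+
+    static inline void
+    count_tx(uint64_t n)
+    {
+        RTE_PER_LCORE(tx_pkts) += n;
+    }
+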
+Atomic Operations: Use C11 Atomic Built-ins
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The DPDK generic rte_atomic operations are implemented by `__sync built-ins
+`_.
+These __sync built-ins result in full barriers on aarch64, which are
+unnecessary in many use cases. They can be replaced by `__atomic built-ins
+`_
+that conform to the C11 memory model and provide finer memory order control.
+
+Replacing the rte_atomic operations with __atomic built-ins might therefore
+improve performance on aarch64 machines.
+
+Some typical optimization cases are listed below; illustrative code sketches
+follow at the end of this section.
+
+Atomicity
+^^^^^^^^^
+
+Some use cases require atomicity alone; the ordering of the memory
+operations does not matter. An example is the packet statistics in the
+``virtio_xmit()`` function of the ``vhost`` example application. It only
+updates the number of transmitted packets, and no subsequent logic depends
+on these counters, so the RELAXED memory ordering is sufficient.
+
+One-way Barrier
+^^^^^^^^^^^^^^^
+
+Some use cases allow memory reordering in one direction while requiring
+memory ordering in the other direction.
+
+For example, the memory operations before ``rte_spinlock_lock()`` can move
+into the critical section, but the memory operations in the critical section
+cannot move above the lock. In this case, the full memory barrier in the
+compare-and-swap operation can be replaced with ACQUIRE ordering. On the
+other hand, the memory operations after ``rte_spinlock_unlock()`` can move
+into the critical section, but the memory operations in the critical section
+cannot move below the unlock, so the full barrier in the STORE operation can
+be replaced with RELEASE ordering.
+
+Reader-Writer Concurrency
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Lock-free reader-writer concurrency is one of the common use cases in DPDK.
+
+The payload, i.e. the data that the writer wants to communicate to the
+reader, can be written with RELAXED memory order. However, the guard
+variable should be written with RELEASE memory order. This ensures that the
+store to the guard variable is observable only after the store to the
+payload is observable.
+Refer to ``rte_hash_cuckoo_insert_mw()`` for an example.
+
+Correspondingly, on the reader side, the guard variable should be read with
+ACQUIRE memory order, while the payload can be read with RELAXED memory
+order. This ensures that, if the store to the guard variable is observable,
+the store to the payload is also observable.
+Refer to rte_hash ``search_one_bucket_lf()`` for an example.
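+
+Illustrative Examples
+^^^^^^^^^^^^^^^^^^^^^
+
+The sketches below are minimal illustrations of the three patterns above;
+the variable and function names are placeholders, not code taken from the
+DPDK sources.
+
+A statistics update needs atomicity only, so RELAXED ordering is enough:
+
+.. code-block:: c
+
+    /* Atomic counter update; no other memory operation depends on it. */
+    __atomic_fetch_add(&nb_tx, count, __ATOMIC_RELAXED);
+
+A one-way barrier pairs an ACQUIRE compare-and-swap in the lock with a
+RELEASE store in the unlock (a simplified spinlock, not the exact DPDK
+implementation):
+
+.. code-block:: c
+
+    static inline void
+    lock(int *lck)
+    {
+        int exp = 0;
+
+        /* ACQUIRE: operations after the lock cannot move above it. */
+        while (!__atomic_compare_exchange_n(lck, &exp, 1, 0,
+                    __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
+            while (__atomic_load_n(lck, __ATOMIC_RELAXED) != 0)
+                ;
+            exp = 0;
+        }
+    }
+
+    static inline void
+    unlock(int *lck)
+    {
+        /* RELEASE: operations before the unlock cannot move below it. */
+        __atomic_store_n(lck, 0, __ATOMIC_RELEASE);
+    }
+
+Reader-writer concurrency pairs a RELEASE store to the guard variable with
+an ACQUIRE load of it:
+
+.. code-block:: c
+
+    static uint32_t payload;
+    static uint32_t guard;
+
+    /* Writer: store the payload first, then the guard with RELEASE. */
+    __atomic_store_n(&payload, value, __ATOMIC_RELAXED);
+    __atomic_store_n(&guard, 1, __ATOMIC_RELEASE);
+
+    /* Reader: load the guard with ACQUIRE, then the payload. */
+    while (__atomic_load_n(&guard, __ATOMIC_ACQUIRE) == 0)
+        ;
+    value = __atomic_load_n(&payload, __ATOMIC_RELAXED);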
+
 Coding Considerations
 ---------------------

From patchwork Mon Jul 13 06:23:41 2020
X-Patchwork-Submitter: Phil Yang
X-Patchwork-Id: 73874
X-Patchwork-Delegate: david.marchand@redhat.com
From: Phil Yang
To: thomas@monjalon.net, john.mcnamara@intel.com,
 Honnappa.Nagarahalli@arm.com, drc@linux.vnet.ibm.com, dev@dpdk.org
Cc: david.marchand@redhat.com, jerinj@marvell.com,
 konstantin.ananyev@intel.com, Ola.Liljedahl@arm.com,
 bruce.richardson@intel.com, Ruifeng.Wang@arm.com, nd@arm.com
Date: Mon, 13 Jul 2020 14:23:41 +0800
Message-Id: <1594621423-14796-3-git-send-email-phil.yang@arm.com>
In-Reply-To: <1594621423-14796-1-git-send-email-phil.yang@arm.com>
References: <1594115449-13750-1-git-send-email-phil.yang@arm.com>
 <1594621423-14796-1-git-send-email-phil.yang@arm.com>
Subject: [dpdk-dev] [PATCH v7 2/3] devtools: prevent use of rte atomic APIs in future patches

In order to deprecate the rte_atomic and rte_smp barrier APIs, prevent
new patches from using these APIs in the already converted modules, and
from using the compiler __sync built-ins in any module.

The converted modules:
lib/librte_distributor
lib/librte_hash
lib/librte_kni
lib/librte_lpm
lib/librte_rcu
lib/librte_ring
lib/librte_stack
lib/librte_vhost
lib/librte_timer
lib/librte_ipsec
drivers/event/octeontx
drivers/event/octeontx2
drivers/event/opdl
drivers/net/bnx2x
drivers/net/hinic
drivers/net/hns3
drivers/net/memif
drivers/net/thunderx
drivers/net/virtio
examples/l2fwd-event

On x86, __atomic_thread_fence(__ATOMIC_SEQ_CST) is quite expensive for
the SMP case, so also flag new code that uses the __atomic_thread_fence
API.
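
For illustration, this is the kind of change the checks steer new code
towards (a hypothetical snippet, not taken from this patch; the RELAXED
order assumes no ordering dependency on the counter):

	/* Flagged: generic API (counter is an rte_atomic32_t). */
	rte_atomic32_inc(&s->refcnt);

	/* Preferred: C11 built-in (counter becomes a plain int32_t),
	 * with the memory order chosen per use case.
	 */
	__atomic_fetch_add(&s->refcnt, 1, __ATOMIC_RELAXED);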
Signed-off-by: Phil Yang
Reviewed-by: Ruifeng Wang
---
 devtools/checkpatches.sh | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh
index 58021aa..6d0452f 100755
--- a/devtools/checkpatches.sh
+++ b/devtools/checkpatches.sh
@@ -51,6 +51,13 @@ print_usage () {
 check_forbidden_additions() { #
 	res=0
 
+	c11_atomics_dir="lib/librte_distributor lib/librte_hash lib/librte_kni
+			lib/librte_lpm lib/librte_rcu lib/librte_ring
+			lib/librte_stack lib/librte_vhost
+			drivers/event/octeontx drivers/event/octeontx2
+			drivers/event/opdl drivers/net/bnx2x drivers/net/hinic
+			drivers/net/hns3 drivers/net/memif drivers/net/thunderx
+			drivers/net/virtio examples/l2fwd-event"
 
 	# refrain from new additions of rte_panic() and rte_exit()
 	# multiple folders and expressions are separated by spaces
@@ -74,6 +81,39 @@ check_forbidden_additions() { #
 		-v EXPRESSIONS='for[[:space:]]*\\((char|u?int|unsigned|s?size_t)' \
 		-v RET_ON_FAIL=1 \
 		-v MESSAGE='Declaring a variable inside for()' \
+
+	# refrain from new additions of 16/32/64-bit rte_atomic_xxx()
+	# multiple folders and expressions are separated by spaces
+	awk -v FOLDERS="$c11_atomics_dir" \
+		-v EXPRESSIONS="rte_atomic[0-9][0-9]_.*\\\(" \
+		-v RET_ON_FAIL=1 \
+		-v MESSAGE='Use of rte_atomicNN_xxx APIs not allowed, use __atomic_xxx built-ins' \
+		-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
+		"$1" || res=1
+
+	# refrain from new additions of rte_smp_XXmb()
+	# multiple folders and expressions are separated by spaces
+	awk -v FOLDERS="$c11_atomics_dir" \
+		-v EXPRESSIONS="rte_smp_(r|w)?mb\\\(" \
+		-v RET_ON_FAIL=1 \
+		-v MESSAGE='Use of rte_smp_r/wmb not allowed, use __atomic_xxx built-ins' \
+		-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
+		"$1" || res=1
+
+	# refrain from using compiler __sync built-ins
+	awk -v FOLDERS="lib drivers app examples" \
+		-v EXPRESSIONS="__sync_.*\\\(" \
+		-v RET_ON_FAIL=1 \
+		-v MESSAGE='Use of __sync_xxx built-ins not allowed, use __atomic_xxx built-ins' \
+		-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
+		"$1" || res=1
+
+	# refrain from using compiler __atomic_thread_fence()
+	# it should be avoided on x86 for the SMP case
+	awk -v FOLDERS="lib drivers app examples" \
+		-v EXPRESSIONS="__atomic_thread_fence\\\(" \
+		-v RET_ON_FAIL=1 \
+		-v MESSAGE='The __atomic_thread_fence is not allowed, use rte_atomic_thread_fence' \
 		-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
 		"$1" || res=1

From patchwork Mon Jul 13 06:23:42 2020
X-Patchwork-Submitter: Phil Yang
X-Patchwork-Id: 73875
X-Patchwork-Delegate: david.marchand@redhat.com
From: Phil Yang
To: thomas@monjalon.net, john.mcnamara@intel.com,
 Honnappa.Nagarahalli@arm.com, drc@linux.vnet.ibm.com, dev@dpdk.org
Cc: david.marchand@redhat.com, jerinj@marvell.com,
 konstantin.ananyev@intel.com, Ola.Liljedahl@arm.com,
 bruce.richardson@intel.com, Ruifeng.Wang@arm.com, nd@arm.com,
 Jan Viktorin, Ruifeng Wang
Date: Mon, 13 Jul 2020 14:23:42 +0800
Message-Id: <1594621423-14796-4-git-send-email-phil.yang@arm.com>
In-Reply-To: <1594621423-14796-1-git-send-email-phil.yang@arm.com>
References: <1594115449-13750-1-git-send-email-phil.yang@arm.com>
 <1594621423-14796-1-git-send-email-phil.yang@arm.com>
Subject: [dpdk-dev] [PATCH v7 3/3] eal/atomic: add wrapper for C11 atomic thread fence

Provide a wrapper for the __atomic_thread_fence built-in so that an
optimized path can be used for the __ATOMIC_SEQ_CST memory order on x86
platforms.
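
An illustrative (hypothetical) call site:

	/* Full barrier: rte_smp_mb() on x86, a SEQ_CST fence elsewhere. */
	rte_atomic_thread_fence(__ATOMIC_SEQ_CST);

	/* Weaker memory orders map directly to the built-in. */
	rte_atomic_thread_fence(__ATOMIC_RELEASE);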
Suggested-by: Honnappa Nagarahalli
Signed-off-by: Phil Yang
Reviewed-by: Ola Liljedahl
Acked-by: Konstantin Ananyev
---
 lib/librte_eal/arm/include/rte_atomic_32.h  |  6 ++++++
 lib/librte_eal/arm/include/rte_atomic_64.h  |  6 ++++++
 lib/librte_eal/include/generic/rte_atomic.h |  6 ++++++
 lib/librte_eal/ppc/include/rte_atomic.h     |  6 ++++++
 lib/librte_eal/x86/include/rte_atomic.h     | 17 +++++++++++++++++
 5 files changed, 41 insertions(+)

diff --git a/lib/librte_eal/arm/include/rte_atomic_32.h b/lib/librte_eal/arm/include/rte_atomic_32.h
index 7dc0d06..dbe7cc6 100644
--- a/lib/librte_eal/arm/include/rte_atomic_32.h
+++ b/lib/librte_eal/arm/include/rte_atomic_32.h
@@ -37,6 +37,12 @@ extern "C" {
 
 #define rte_cio_rmb() rte_rmb()
 
+static __rte_always_inline void
+rte_atomic_thread_fence(int mo)
+{
+	__atomic_thread_fence(mo);
+}
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/arm/include/rte_atomic_64.h b/lib/librte_eal/arm/include/rte_atomic_64.h
index e42f69e..bf885ad 100644
--- a/lib/librte_eal/arm/include/rte_atomic_64.h
+++ b/lib/librte_eal/arm/include/rte_atomic_64.h
@@ -41,6 +41,12 @@ extern "C" {
 
 #define rte_cio_rmb() rte_rmb()
 
+static __rte_always_inline void
+rte_atomic_thread_fence(int mo)
+{
+	__atomic_thread_fence(mo);
+}
+
 /*------------------------ 128 bit atomic operations -------------------------*/
 
 #if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
diff --git a/lib/librte_eal/include/generic/rte_atomic.h b/lib/librte_eal/include/generic/rte_atomic.h
index e6ab15a..5b941db 100644
--- a/lib/librte_eal/include/generic/rte_atomic.h
+++ b/lib/librte_eal/include/generic/rte_atomic.h
@@ -158,6 +158,12 @@ static inline void rte_cio_rmb(void);
 	asm volatile ("" : : : "memory"); \
 } while(0)
 
+/**
+ * Synchronization fence between threads based on the specified
+ * memory order.
+ */
+static inline void rte_atomic_thread_fence(int mo);
+
 /*------------------------- 16 bit atomic operations -------------------------*/
 
 /**
diff --git a/lib/librte_eal/ppc/include/rte_atomic.h b/lib/librte_eal/ppc/include/rte_atomic.h
index 7e3e131..91c5f30 100644
--- a/lib/librte_eal/ppc/include/rte_atomic.h
+++ b/lib/librte_eal/ppc/include/rte_atomic.h
@@ -40,6 +40,12 @@ extern "C" {
 
 #define rte_cio_rmb() rte_rmb()
 
+static __rte_always_inline void
+rte_atomic_thread_fence(int mo)
+{
+	__atomic_thread_fence(mo);
+}
+
 /*------------------------- 16 bit atomic operations -------------------------*/
 
 /* To be compatible with Power7, use GCC built-in functions for 16 bit
  * operations */
diff --git a/lib/librte_eal/x86/include/rte_atomic.h b/lib/librte_eal/x86/include/rte_atomic.h
index b9dcd30..bd256e7 100644
--- a/lib/librte_eal/x86/include/rte_atomic.h
+++ b/lib/librte_eal/x86/include/rte_atomic.h
@@ -83,6 +83,23 @@ rte_smp_mb(void)
 
 #define rte_cio_rmb() rte_compiler_barrier()
 
+/**
+ * Synchronization fence between threads based on the specified
+ * memory order.
+ *
+ * On x86, __atomic_thread_fence(__ATOMIC_SEQ_CST) generates a full
+ * 'mfence', which is quite expensive. The optimized implementation
+ * of rte_smp_mb() is used instead.
+ */
+static __rte_always_inline void
+rte_atomic_thread_fence(int mo)
+{
+	if (mo == __ATOMIC_SEQ_CST)
+		rte_smp_mb();
+	else
+		__atomic_thread_fence(mo);
+}
+
 /*------------------------- 16 bit atomic operations -------------------------*/
 
 #ifndef RTE_FORCE_INTRINSICS