From patchwork Mon Jan 28 18:14:03 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Eads, Gage" X-Patchwork-Id: 50073 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 88C7D7EC7; Mon, 28 Jan 2019 19:15:12 +0100 (CET) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by dpdk.org (Postfix) with ESMTP id 34F2C695D for ; Mon, 28 Jan 2019 19:15:07 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Jan 2019 10:15:06 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,534,1539673200"; d="scan'208";a="139508008" Received: from txasoft-yocto.an.intel.com ([10.123.72.192]) by fmsmga004.fm.intel.com with ESMTP; 28 Jan 2019 10:15:04 -0800 From: Gage Eads To: dev@dpdk.org Cc: olivier.matz@6wind.com, arybchenko@solarflare.com, bruce.richardson@intel.com, konstantin.ananyev@intel.com, stephen@networkplumber.org, jerinj@marvell.com, mczekaj@marvell.com, nd@arm.com, Ola.Liljedahl@arm.com Date: Mon, 28 Jan 2019 12:14:03 -0600 Message-Id: <20190128181407.32739-2-gage.eads@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20190128181407.32739-1-gage.eads@intel.com> References: <20190118152326.22686-1-gage.eads@intel.com> <20190128181407.32739-1-gage.eads@intel.com> Subject: [dpdk-dev] [PATCH v4 1/5] ring: add 64-bit headtail structure X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" 64-bit head and tail index widths greatly increases the time it takes for them to wrap-around (with current CPU speeds, it won't happen within the author's lifetime). This is fundamental to avoiding the ABA problem -- in which a thread mistakes reading the same tail index in two accesses to mean that the ring was not modified in the intervening time -- in the upcoming non-blocking ring implementation. Using a 64-bit index makes the possibility of this occurring effectively zero. This commit places the new producer and consumer structures in the same location in struct rte_ring as their 32-bit counterparts. Since the 32-bit versions are padded out to a cache line, there is space for the new structure without affecting the layout of struct rte_ring. Thus, the ABI is preserved. Signed-off-by: Gage Eads --- lib/librte_ring/rte_ring.h | 23 +++++- lib/librte_ring/rte_ring_c11_mem.h | 153 +++++++++++++++++++++++++++++++++++++ lib/librte_ring/rte_ring_generic.h | 139 +++++++++++++++++++++++++++++++++ 3 files changed, 312 insertions(+), 3 deletions(-) diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h index af5444a9f..00dfb5b85 100644 --- a/lib/librte_ring/rte_ring.h +++ b/lib/librte_ring/rte_ring.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * - * Copyright (c) 2010-2017 Intel Corporation + * Copyright (c) 2010-2019 Intel Corporation * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org * All rights reserved. 
* Derived from FreeBSD's bufring.h @@ -70,6 +70,15 @@ struct rte_ring_headtail { uint32_t single; /**< True if single prod/cons */ }; +/* 64-bit version of rte_ring_headtail, for use by rings that need to avoid + * head/tail wrap-around. + */ +struct rte_ring_headtail_64 { + volatile uint64_t head; /**< Prod/consumer head. */ + volatile uint64_t tail; /**< Prod/consumer tail. */ + uint32_t single; /**< True if single prod/cons */ +}; + /** * An RTE ring structure. * @@ -97,11 +106,19 @@ struct rte_ring { char pad0 __rte_cache_aligned; /**< empty cache line */ /** Ring producer status. */ - struct rte_ring_headtail prod __rte_cache_aligned; + RTE_STD_C11 + union { + struct rte_ring_headtail prod __rte_cache_aligned; + struct rte_ring_headtail_64 prod_64 __rte_cache_aligned; + }; char pad1 __rte_cache_aligned; /**< empty cache line */ /** Ring consumer status. */ - struct rte_ring_headtail cons __rte_cache_aligned; + RTE_STD_C11 + union { + struct rte_ring_headtail cons __rte_cache_aligned; + struct rte_ring_headtail_64 cons_64 __rte_cache_aligned; + }; char pad2 __rte_cache_aligned; /**< empty cache line */ }; diff --git a/lib/librte_ring/rte_ring_c11_mem.h b/lib/librte_ring/rte_ring_c11_mem.h index 0fb73a337..47acd4c7c 100644 --- a/lib/librte_ring/rte_ring_c11_mem.h +++ b/lib/librte_ring/rte_ring_c11_mem.h @@ -178,4 +178,157 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc, return n; } +/** + * @internal This function updates the producer head for enqueue using + * 64-bit head/tail values. + * + * @param r + * A pointer to the ring structure + * @param is_sp + * Indicates whether multi-producer path is needed or not + * @param n + * The number of elements we will want to enqueue, i.e. how far should the + * head be moved + * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring + * @param old_head + * Returns head value as it was before the move, i.e. where enqueue starts + * @param new_head + * Returns the current/new head value i.e. where enqueue finishes + * @param free_entries + * Returns the amount of free space in the ring BEFORE head was moved + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_move_prod_head_64(struct rte_ring *r, unsigned int is_sp, + unsigned int n, enum rte_ring_queue_behavior behavior, + uint64_t *old_head, uint64_t *new_head, + uint32_t *free_entries) +{ + const uint32_t capacity = r->capacity; + uint64_t cons_tail; + unsigned int max = n; + int success; + + *old_head = __atomic_load_n(&r->prod_64.head, __ATOMIC_RELAXED); + do { + /* Reset n to the initial burst count */ + n = max; + + /* Ensure the head is read before tail */ + __atomic_thread_fence(__ATOMIC_ACQUIRE); + + /* load-acquire synchronize with store-release of ht->tail + * in update_tail. + */ + cons_tail = __atomic_load_n(&r->cons_64.tail, + __ATOMIC_ACQUIRE); + + /* The subtraction is done between two unsigned 32bits value + * (the result is always modulo 32 bits even if we have + * *old_head > cons_tail). So 'free_entries' is always between 0 + * and capacity (which is < size). + */ + *free_entries = (capacity + cons_tail - *old_head); + + /* check that we have enough room in ring */ + if (unlikely(n > *free_entries)) + n = (behavior == RTE_RING_QUEUE_FIXED) ? 
+ 0 : *free_entries; + + if (n == 0) + return 0; + + *new_head = *old_head + n; + if (is_sp) + r->prod_64.head = *new_head, success = 1; + else + /* on failure, *old_head is updated */ + success = __atomic_compare_exchange_n(&r->prod_64.head, + old_head, *new_head, + 0, __ATOMIC_RELAXED, + __ATOMIC_RELAXED); + } while (unlikely(success == 0)); + return n; +} + +/** + * @internal This function updates the consumer head for dequeue using + * 64-bit head/tail values. + * + * @param r + * A pointer to the ring structure + * @param is_sc + * Indicates whether multi-consumer path is needed or not + * @param n + * The number of elements we will want to enqueue, i.e. how far should the + * head be moved + * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring + * @param old_head + * Returns head value as it was before the move, i.e. where dequeue starts + * @param new_head + * Returns the current/new head value i.e. where dequeue finishes + * @param entries + * Returns the number of entries in the ring BEFORE head was moved + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_move_cons_head_64(struct rte_ring *r, unsigned int is_sc, + unsigned int n, enum rte_ring_queue_behavior behavior, + uint64_t *old_head, uint64_t *new_head, + uint32_t *entries) +{ + unsigned int max = n; + uint64_t prod_tail; + int success; + + /* move cons.head atomically */ + *old_head = __atomic_load_n(&r->cons_64.head, __ATOMIC_RELAXED); + do { + /* Restore n as it may change every loop */ + n = max; + + /* Ensure the head is read before tail */ + __atomic_thread_fence(__ATOMIC_ACQUIRE); + + /* this load-acquire synchronize with store-release of ht->tail + * in update_tail. + */ + prod_tail = __atomic_load_n(&r->prod_64.tail, + __ATOMIC_ACQUIRE); + + /* The subtraction is done between two unsigned 32bits value + * (the result is always modulo 32 bits even if we have + * cons_head > prod_tail). So 'entries' is always between 0 + * and size(ring)-1. + */ + *entries = (prod_tail - *old_head); + + /* Set the actual entries for dequeue */ + if (n > *entries) + n = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : *entries; + + if (unlikely(n == 0)) + return 0; + + *new_head = *old_head + n; + if (is_sc) + r->cons_64.head = *new_head, success = 1; + else + /* on failure, *old_head will be updated */ + success = __atomic_compare_exchange_n(&r->cons_64.head, + old_head, *new_head, + 0, __ATOMIC_RELAXED, + __ATOMIC_RELAXED); + } while (unlikely(success == 0)); + return n; +} + #endif /* _RTE_RING_C11_MEM_H_ */ diff --git a/lib/librte_ring/rte_ring_generic.h b/lib/librte_ring/rte_ring_generic.h index ea7dbe5b9..2158e092a 100644 --- a/lib/librte_ring/rte_ring_generic.h +++ b/lib/librte_ring/rte_ring_generic.h @@ -167,4 +167,143 @@ __rte_ring_move_cons_head(struct rte_ring *r, unsigned int is_sc, return n; } +/** + * @internal This function updates the producer head for enqueue using + * 64-bit head/tail values. + * + * @param r + * A pointer to the ring structure + * @param is_sp + * Indicates whether multi-producer path is needed or not + * @param n + * The number of elements we will want to enqueue, i.e. 
how far should the + * head be moved + * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring + * @param old_head + * Returns head value as it was before the move, i.e. where enqueue starts + * @param new_head + * Returns the current/new head value i.e. where enqueue finishes + * @param free_entries + * Returns the amount of free space in the ring BEFORE head was moved + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_move_prod_head_64(struct rte_ring *r, unsigned int is_sp, + unsigned int n, enum rte_ring_queue_behavior behavior, + uint64_t *old_head, uint64_t *new_head, + uint32_t *free_entries) +{ + const uint32_t capacity = r->capacity; + unsigned int max = n; + int success; + + do { + /* Reset n to the initial burst count */ + n = max; + + *old_head = r->prod_64.head; + + /* add rmb barrier to avoid load/load reorder in weak + * memory model. It is noop on x86 + */ + rte_smp_rmb(); + + /* + * The subtraction is done between two unsigned 64bits value + * (the result is always modulo 64 bits even if we have + * *old_head > cons_tail). So 'free_entries' is always between 0 + * and capacity (which is < size). + */ + *free_entries = (capacity + r->cons_64.tail - *old_head); + + /* check that we have enough room in ring */ + if (unlikely(n > *free_entries)) + n = (behavior == RTE_RING_QUEUE_FIXED) ? + 0 : *free_entries; + + if (n == 0) + return 0; + + *new_head = *old_head + n; + if (is_sp) + r->prod_64.head = *new_head, success = 1; + else + success = rte_atomic64_cmpset(&r->prod_64.head, + *old_head, *new_head); + } while (unlikely(success == 0)); + return n; +} + +/** + * @internal This function updates the consumer head for dequeue using + * 64-bit head/tail values. + * + * @param r + * A pointer to the ring structure + * @param is_sc + * Indicates whether multi-consumer path is needed or not + * @param n + * The number of elements we will want to enqueue, i.e. how far should the + * head be moved + * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring + * @param old_head + * Returns head value as it was before the move, i.e. where dequeue starts + * @param new_head + * Returns the current/new head value i.e. where dequeue finishes + * @param entries + * Returns the number of entries in the ring BEFORE head was moved + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_move_cons_head_64(struct rte_ring *r, unsigned int is_sc, + unsigned int n, enum rte_ring_queue_behavior behavior, + uint64_t *old_head, uint64_t *new_head, + uint32_t *entries) +{ + unsigned int max = n; + int success; + + do { + /* Restore n as it may change every loop */ + n = max; + + *old_head = r->cons_64.head; + + /* add rmb barrier to avoid load/load reorder in weak + * memory model. It is noop on x86 + */ + rte_smp_rmb(); + + /* The subtraction is done between two unsigned 64bits value + * (the result is always modulo 64 bits even if we have + * cons_head > prod_tail). So 'entries' is always between 0 + * and size(ring)-1. 
+ */ + *entries = (r->prod_64.tail - *old_head); + + /* Set the actual entries for dequeue */ + if (n > *entries) + n = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : *entries; + + if (unlikely(n == 0)) + return 0; + + *new_head = *old_head + n; + if (is_sc) + r->cons_64.head = *new_head, success = 1; + else + success = rte_atomic64_cmpset(&r->cons_64.head, + *old_head, *new_head); + } while (unlikely(success == 0)); + return n; +} + #endif /* _RTE_RING_GENERIC_H_ */ From patchwork Mon Jan 28 18:14:04 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Eads, Gage" X-Patchwork-Id: 50074 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 011091B0FC; Mon, 28 Jan 2019 19:15:14 +0100 (CET) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by dpdk.org (Postfix) with ESMTP id 0FAA26904 for ; Mon, 28 Jan 2019 19:15:07 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Jan 2019 10:15:07 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,534,1539673200"; d="scan'208";a="139508023" Received: from txasoft-yocto.an.intel.com ([10.123.72.192]) by fmsmga004.fm.intel.com with ESMTP; 28 Jan 2019 10:15:06 -0800 From: Gage Eads To: dev@dpdk.org Cc: olivier.matz@6wind.com, arybchenko@solarflare.com, bruce.richardson@intel.com, konstantin.ananyev@intel.com, stephen@networkplumber.org, jerinj@marvell.com, mczekaj@marvell.com, nd@arm.com, Ola.Liljedahl@arm.com Date: Mon, 28 Jan 2019 12:14:04 -0600 Message-Id: <20190128181407.32739-3-gage.eads@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20190128181407.32739-1-gage.eads@intel.com> References: <20190118152326.22686-1-gage.eads@intel.com> <20190128181407.32739-1-gage.eads@intel.com> Subject: [dpdk-dev] [PATCH v4 2/5] ring: add a non-blocking implementation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This commit adds support for non-blocking circular ring enqueue and dequeue functions. The ring uses a 128-bit compare-and-swap instruction, and thus is currently limited to x86_64. The algorithm is based on the original rte ring (derived from FreeBSD's bufring.h) and inspired by Michael and Scott's non-blocking concurrent queue. Importantly, it adds a modification counter to each ring entry to ensure only one thread can write to an unused entry. ----- Algorithm: Multi-producer non-blocking enqueue: 1. Move the producer head index 'n' locations forward, effectively reserving 'n' locations. 2. For each pointer: a. Read the producer tail index, then ring[tail]. If ring[tail]'s modification counter isn't 'tail', retry. b. Construct the new entry: {pointer, tail + ring size} c. Compare-and-swap the old entry with the new. If unsuccessful, the next loop iteration will try to enqueue this pointer again. d. Compare-and-swap the tail index with 'tail + 1', whether or not step 2c succeeded. This guarantees threads can make forward progress. Multi-consumer non-blocking dequeue: 1. Move the consumer head index 'n' locations forward, effectively reserving 'n' pointers to be dequeued. 2. 
Copy 'n' pointers into the caller's object table (ignoring the
   modification counter), starting from ring[tail], then compare-and-swap
   the tail index with 'tail + n'. If unsuccessful, repeat step 2.

-----

Discussion:

There are two cases where the ABA problem is mitigated:
1. Enqueueing a pointer to the ring: without a modification counter tied
   to the tail index, the index could become stale by the time the enqueue
   happens, causing it to overwrite valid data. Tying the counter to the
   tail index gives us an expected value (as opposed to, say, a
   monotonically incrementing counter). Since the counter will eventually
   wrap, there is potential for the ABA problem. However, using a 64-bit
   counter makes this likelihood effectively zero.
2. Updating a tail index: the ABA problem can occur if the thread is
   preempted and the tail index wraps around. However, using 64-bit
   indexes makes this likelihood effectively zero.

With no contention, an enqueue of n pointers uses (1 + 2n) CAS operations
and a dequeue of n pointers uses 2.

This algorithm has worse average-case performance than the regular rte
ring (particularly for a highly contended ring with large bulk accesses);
however:
- For applications with preemptible pthreads, the regular rte ring's
  worst-case performance (i.e. one thread being preempted in the
  update_tail() critical section) is much worse than the non-blocking
  ring's.
- Software caching can mitigate the average-case performance penalty for
  ring-based algorithms; for example, a non-blocking-ring-based mempool (a
  likely use case for this ring) with per-thread caching.

The non-blocking ring is enabled via a new flag, RING_F_NB. Because the
ring's memsize is now a function of its flags (the non-blocking ring
requires 128b for each entry), this commit adds a new argument ('flags')
to rte_ring_get_memsize(). An API deprecation notice will be sent in a
separate commit.

For ease-of-use, existing ring enqueue and dequeue functions work on both
regular and non-blocking rings. This introduces an additional branch in
the datapath, but it should be a highly predictable branch.

ring_perf_autotest shows a negligible performance impact; it's hard to
distinguish a real difference versus system noise.

Test                                  | ring_perf_autotest cycles with branch -
                                      |   ring_perf_autotest cycles without
--------------------------------------------------------------------------
SP/SC single enq/dequeue              | 0.33
MP/MC single enq/dequeue              | -4.00
SP/SC burst enq/dequeue (size 8)      | 0.00
MP/MC burst enq/dequeue (size 8)      | 0.00
SP/SC burst enq/dequeue (size 32)     | 0.00
MP/MC burst enq/dequeue (size 32)     | 0.00
SC empty dequeue                      | 1.00
MC empty dequeue                      | 0.00

Single lcore:
SP/SC bulk enq/dequeue (size 8)       | 0.49
MP/MC bulk enq/dequeue (size 8)       | 0.08
SP/SC bulk enq/dequeue (size 32)      | 0.07
MP/MC bulk enq/dequeue (size 32)      | 0.09

Two physical cores:
SP/SC bulk enq/dequeue (size 8)       | 0.19
MP/MC bulk enq/dequeue (size 8)       | -0.37
SP/SC bulk enq/dequeue (size 32)      | 0.09
MP/MC bulk enq/dequeue (size 32)      | -0.05

Two NUMA nodes:
SP/SC bulk enq/dequeue (size 8)       | -1.96
MP/MC bulk enq/dequeue (size 8)       | 0.88
SP/SC bulk enq/dequeue (size 32)      | 0.10
MP/MC bulk enq/dequeue (size 32)      | 0.46

Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4,
running on isolcpus cores with a tickless scheduler. Each test was run
three times and the results averaged.
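
For context, here is a minimal usage sketch (not part of this patch) of how
an application would opt in to the new mode, assuming this series is
applied. The ring name, size, and object values below are arbitrary; the
only difference from a regular ring is passing RING_F_NB at creation time,
after which the existing burst enqueue/dequeue calls take the non-blocking
path internally.

#include <stdint.h>
#include <stdio.h>
#include <rte_ring.h>
#include <rte_lcore.h>
#include <rte_errno.h>

static int
nb_ring_example(void)
{
        void *objs[8], *deq[8];
        unsigned int i, n, free_space, avail;
        struct rte_ring *r;

        /* RING_F_NB selects the non-blocking implementation (x86_64 only).
         * Without RING_F_SP_ENQ/RING_F_SC_DEQ the ring defaults to
         * multi-producer/multi-consumer.
         */
        r = rte_ring_create("nb_example", 1024, rte_socket_id(), RING_F_NB);
        if (r == NULL) {
                printf("ring creation failed: %d\n", rte_errno);
                return -1;
        }

        for (i = 0; i < 8; i++)
                objs[i] = (void *)(uintptr_t)(i + 1);

        /* The regular enqueue/dequeue API dispatches on the RING_F_NB flag. */
        n = rte_ring_enqueue_burst(r, objs, 8, &free_space);
        printf("enqueued %u objects, %u slots free\n", n, free_space);

        n = rte_ring_dequeue_burst(r, deq, 8, &avail);
        printf("dequeued %u objects, %u still available\n", n, avail);

        rte_ring_free(r);
        return 0;
}

Because the standard entry points branch on r->flags at run time, the same
application can mix regular and non-blocking rings.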
Signed-off-by: Gage Eads --- lib/librte_ring/rte_ring.c | 72 ++++++-- lib/librte_ring/rte_ring.h | 313 +++++++++++++++++++++++++++++++---- lib/librte_ring/rte_ring_c11_mem.h | 282 ++++++++++++++++++++++++++++++- lib/librte_ring/rte_ring_generic.h | 269 ++++++++++++++++++++++++++++++ lib/librte_ring/rte_ring_version.map | 7 + 5 files changed, 896 insertions(+), 47 deletions(-) diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c index d215acecc..f3378dccd 100644 --- a/lib/librte_ring/rte_ring.c +++ b/lib/librte_ring/rte_ring.c @@ -45,9 +45,9 @@ EAL_REGISTER_TAILQ(rte_ring_tailq) /* return the size of memory occupied by a ring */ ssize_t -rte_ring_get_memsize(unsigned count) +rte_ring_get_memsize_v1905(unsigned int count, unsigned int flags) { - ssize_t sz; + ssize_t sz, elt_sz; /* count must be a power of 2 */ if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK )) { @@ -57,10 +57,23 @@ rte_ring_get_memsize(unsigned count) return -EINVAL; } - sz = sizeof(struct rte_ring) + count * sizeof(void *); + elt_sz = (flags & RING_F_NB) ? 2 * sizeof(void *) : sizeof(void *); + + sz = sizeof(struct rte_ring) + count * elt_sz; sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE); return sz; } +BIND_DEFAULT_SYMBOL(rte_ring_get_memsize, _v1905, 19.05); +MAP_STATIC_SYMBOL(ssize_t rte_ring_get_memsize(unsigned int count, + unsigned int flags), + rte_ring_get_memsize_v1905); + +ssize_t +rte_ring_get_memsize_v20(unsigned int count) +{ + return rte_ring_get_memsize_v1905(count, 0); +} +VERSION_SYMBOL(rte_ring_get_memsize, _v20, 2.0); int rte_ring_init(struct rte_ring *r, const char *name, unsigned count, @@ -82,8 +95,6 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count, if (ret < 0 || ret >= (int)sizeof(r->name)) return -ENAMETOOLONG; r->flags = flags; - r->prod.single = (flags & RING_F_SP_ENQ) ? __IS_SP : __IS_MP; - r->cons.single = (flags & RING_F_SC_DEQ) ? __IS_SC : __IS_MC; if (flags & RING_F_EXACT_SZ) { r->size = rte_align32pow2(count + 1); @@ -100,8 +111,30 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count, r->mask = count - 1; r->capacity = r->mask; } - r->prod.head = r->cons.head = 0; - r->prod.tail = r->cons.tail = 0; + + if (flags & RING_F_NB) { + uint64_t i; + + r->prod_64.single = (flags & RING_F_SP_ENQ) ? __IS_SP : __IS_MP; + r->cons_64.single = (flags & RING_F_SC_DEQ) ? __IS_SC : __IS_MC; + r->prod_64.head = r->cons_64.head = 0; + r->prod_64.tail = r->cons_64.tail = 0; + + for (i = 0; i < r->size; i++) { + struct nb_ring_entry *ring_ptr, *base; + + base = ((struct nb_ring_entry *)&r[1]); + + ring_ptr = &base[i & r->mask]; + + ring_ptr->cnt = i; + } + } else { + r->prod.single = (flags & RING_F_SP_ENQ) ? __IS_SP : __IS_MP; + r->cons.single = (flags & RING_F_SC_DEQ) ? 
__IS_SC : __IS_MC; + r->prod.head = r->cons.head = 0; + r->prod.tail = r->cons.tail = 0; + } return 0; } @@ -123,11 +156,19 @@ rte_ring_create(const char *name, unsigned count, int socket_id, ring_list = RTE_TAILQ_CAST(rte_ring_tailq.head, rte_ring_list); +#if !defined(RTE_ARCH_X86_64) + if (flags & RING_F_NB) { + printf("RING_F_NB is only supported on x86-64 platforms\n"); + rte_errno = EINVAL; + return NULL; + } +#endif + /* for an exact size ring, round up from count to a power of two */ if (flags & RING_F_EXACT_SZ) count = rte_align32pow2(count + 1); - ring_size = rte_ring_get_memsize(count); + ring_size = rte_ring_get_memsize(count, flags); if (ring_size < 0) { rte_errno = ring_size; return NULL; @@ -227,10 +268,17 @@ rte_ring_dump(FILE *f, const struct rte_ring *r) fprintf(f, " flags=%x\n", r->flags); fprintf(f, " size=%"PRIu32"\n", r->size); fprintf(f, " capacity=%"PRIu32"\n", r->capacity); - fprintf(f, " ct=%"PRIu32"\n", r->cons.tail); - fprintf(f, " ch=%"PRIu32"\n", r->cons.head); - fprintf(f, " pt=%"PRIu32"\n", r->prod.tail); - fprintf(f, " ph=%"PRIu32"\n", r->prod.head); + if (r->flags & RING_F_NB) { + fprintf(f, " ct=%"PRIu64"\n", r->cons_64.tail); + fprintf(f, " ch=%"PRIu64"\n", r->cons_64.head); + fprintf(f, " pt=%"PRIu64"\n", r->prod_64.tail); + fprintf(f, " ph=%"PRIu64"\n", r->prod_64.head); + } else { + fprintf(f, " ct=%"PRIu32"\n", r->cons.tail); + fprintf(f, " ch=%"PRIu32"\n", r->cons.head); + fprintf(f, " pt=%"PRIu32"\n", r->prod.tail); + fprintf(f, " ph=%"PRIu32"\n", r->prod.head); + } fprintf(f, " used=%u\n", rte_ring_count(r)); fprintf(f, " avail=%u\n", rte_ring_free_count(r)); } diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h index 00dfb5b85..c3d388c95 100644 --- a/lib/librte_ring/rte_ring.h +++ b/lib/librte_ring/rte_ring.h @@ -20,12 +20,16 @@ * * - FIFO (First In First Out) * - Maximum size is fixed; the pointers are stored in a table. - * - Lockless implementation. + * - Lockless (and optionally, non-blocking) implementation. * - Multi- or single-consumer dequeue. * - Multi- or single-producer enqueue. * - Bulk dequeue. * - Bulk enqueue. * + * The non-blocking ring algorithm is based on the original rte ring (derived + * from FreeBSD's bufring.h) and inspired by Michael and Scott's non-blocking + * concurrent queue. + * * Note: the ring implementation is not preemptible. Refer to Programmer's * guide/Environment Abstraction Layer/Multiple pthread/Known Issues/rte_ring * for more information. @@ -134,6 +138,18 @@ struct rte_ring { */ #define RING_F_EXACT_SZ 0x0004 #define RTE_RING_SZ_MASK (0x7fffffffU) /**< Ring size mask */ +/** + * The ring uses non-blocking enqueue and dequeue functions. These functions + * do not have the "non-preemptive" constraint of a regular rte ring, and thus + * are suited for applications using preemptible pthreads. However, the + * non-blocking functions have worse average-case performance than their + * regular rte ring counterparts. When used as the handler for a mempool, + * per-thread caching can mitigate the performance difference by reducing the + * number (and contention) of ring accesses. + * + * This flag is only supported on x86_64 platforms. + */ +#define RING_F_NB 0x0008 /* @internal defines for passing to the enqueue dequeue worker functions */ #define __IS_SP 1 @@ -151,11 +167,15 @@ struct rte_ring { * * @param count * The number of elements in the ring (must be a power of 2). + * @param flags + * The flags the ring will be created with. * @return * - The memory size needed for the ring on success. 
* - -EINVAL if count is not a power of 2. */ -ssize_t rte_ring_get_memsize(unsigned count); +ssize_t rte_ring_get_memsize(unsigned int count, unsigned int flags); +ssize_t rte_ring_get_memsize_v20(unsigned int count); +ssize_t rte_ring_get_memsize_v1905(unsigned int count, unsigned int flags); /** * Initialize a ring structure. @@ -188,6 +208,10 @@ ssize_t rte_ring_get_memsize(unsigned count); * - RING_F_SC_DEQ: If this flag is set, the default behavior when * using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()`` * is "single-consumer". Otherwise, it is "multi-consumers". + * - RING_F_EXACT_SZ: If this flag is set, count can be a non-power-of-2 + * number, but up to half the ring space may be wasted. + * - RING_F_NB: (x86_64 only) If this flag is set, the ring uses + * non-blocking variants of the dequeue and enqueue functions. * @return * 0 on success, or a negative value on error. */ @@ -223,12 +247,17 @@ int rte_ring_init(struct rte_ring *r, const char *name, unsigned count, * - RING_F_SC_DEQ: If this flag is set, the default behavior when * using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()`` * is "single-consumer". Otherwise, it is "multi-consumers". + * - RING_F_EXACT_SZ: If this flag is set, count can be a non-power-of-2 + * number, but up to half the ring space may be wasted. + * - RING_F_NB: (x86_64 only) If this flag is set, the ring uses + * non-blocking variants of the dequeue and enqueue functions. * @return * On success, the pointer to the new allocated ring. NULL on error with * rte_errno set appropriately. Possible errno values include: * - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure * - E_RTE_SECONDARY - function was called from a secondary process instance - * - EINVAL - count provided is not a power of 2 + * - EINVAL - count provided is not a power of 2, or RING_F_NB is used on an + * unsupported platform * - ENOSPC - the maximum number of memzones has already been allocated * - EEXIST - a memzone with the same name already exists * - ENOMEM - no appropriate memory area found in which to create memzone @@ -284,6 +313,50 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r); } \ } while (0) +/* The actual enqueue of pointers on the ring. + * Used only by the single-producer non-blocking enqueue function, but + * out-lined here for code readability. 
+ */ +#define ENQUEUE_PTRS_NB(r, ring_start, prod_head, obj_table, n) do { \ + unsigned int i; \ + const uint32_t size = (r)->size; \ + size_t idx = prod_head & (r)->mask; \ + size_t new_cnt = prod_head + size; \ + struct nb_ring_entry *ring = (struct nb_ring_entry *)ring_start; \ + if (likely(idx + n < size)) { \ + for (i = 0; i < (n & ((~(unsigned)0x3))); i += 4, idx += 4) { \ + ring[idx].ptr = obj_table[i]; \ + ring[idx].cnt = new_cnt + i; \ + ring[idx + 1].ptr = obj_table[i + 1]; \ + ring[idx + 1].cnt = new_cnt + i + 1; \ + ring[idx + 2].ptr = obj_table[i + 2]; \ + ring[idx + 2].cnt = new_cnt + i + 2; \ + ring[idx + 3].ptr = obj_table[i + 3]; \ + ring[idx + 3].cnt = new_cnt + i + 3; \ + } \ + switch (n & 0x3) { \ + case 3: \ + ring[idx].cnt = new_cnt + i; \ + ring[idx++].ptr = obj_table[i++]; /* fallthrough */ \ + case 2: \ + ring[idx].cnt = new_cnt + i; \ + ring[idx++].ptr = obj_table[i++]; /* fallthrough */ \ + case 1: \ + ring[idx].cnt = new_cnt + i; \ + ring[idx++].ptr = obj_table[i++]; \ + } \ + } else { \ + for (i = 0; idx < size; i++, idx++) { \ + ring[idx].cnt = new_cnt + i; \ + ring[idx].ptr = obj_table[i]; \ + } \ + for (idx = 0; i < n; i++, idx++) { \ + ring[idx].cnt = new_cnt + i; \ + ring[idx].ptr = obj_table[i]; \ + } \ + } \ +} while (0) + /* the actual copy of pointers on the ring to obj_table. * Placed here since identical code needed in both * single and multi consumer dequeue functions */ @@ -315,6 +388,45 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r); } \ } while (0) +/* The actual copy of pointers on the ring to obj_table. + * Placed here since identical code needed in both + * single and multi consumer non-blocking dequeue functions. + */ +#define DEQUEUE_PTRS_NB(r, ring_start, cons_head, obj_table, n) do { \ + unsigned int i; \ + size_t idx = cons_head & (r)->mask; \ + const uint32_t size = (r)->size; \ + struct nb_ring_entry *ring = (struct nb_ring_entry *)ring_start; \ + if (likely(idx + n < size)) { \ + for (i = 0; i < (n & (~(unsigned)0x3)); i += 4, idx += 4) {\ + obj_table[i] = ring[idx].ptr; \ + obj_table[i + 1] = ring[idx + 1].ptr; \ + obj_table[i + 2] = ring[idx + 2].ptr; \ + obj_table[i + 3] = ring[idx + 3].ptr; \ + } \ + switch (n & 0x3) { \ + case 3: \ + obj_table[i++] = ring[idx++].ptr; /* fallthrough */ \ + case 2: \ + obj_table[i++] = ring[idx++].ptr; /* fallthrough */ \ + case 1: \ + obj_table[i++] = ring[idx++].ptr; \ + } \ + } else { \ + for (i = 0; idx < size; i++, idx++) \ + obj_table[i] = ring[idx].ptr; \ + for (idx = 0; i < n; i++, idx++) \ + obj_table[i] = ring[idx].ptr; \ + } \ +} while (0) + + +/* @internal 128-bit structure used by the non-blocking ring */ +struct nb_ring_entry { + void *ptr; /**< Data pointer */ + uint64_t cnt; /**< Modification counter */ +}; + /* Between load and load. there might be cpu reorder in weak model * (powerpc/arm). * There are 2 choices for the users @@ -331,6 +443,70 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r); #endif /** + * @internal Enqueue several objects on the non-blocking ring + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to add in the ring from the obj_table. 
+ * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items to the ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible to the ring + * @param is_sp + * Indicates whether to use single producer or multi-producer head update + * @param free_space + * returns the amount of space after the enqueue operation has finished + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_enqueue(struct rte_ring *r, void * const *obj_table, + unsigned int n, enum rte_ring_queue_behavior behavior, + unsigned int is_sp, unsigned int *free_space) +{ + if (is_sp) + return __rte_ring_do_nb_enqueue_sp(r, obj_table, n, + behavior, free_space); + else + return __rte_ring_do_nb_enqueue_mp(r, obj_table, n, + behavior, free_space); +} + +/** + * @internal Dequeue several objects from the non-blocking ring + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to pull from the ring. + * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from the ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from the ring + * @param available + * returns the number of remaining ring entries after the dequeue has finished + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_dequeue(struct rte_ring *r, void **obj_table, + unsigned int n, enum rte_ring_queue_behavior behavior, + unsigned int is_sc, unsigned int *available) +{ + if (is_sc) + return __rte_ring_do_nb_dequeue_sc(r, obj_table, n, + behavior, available); + else + return __rte_ring_do_nb_dequeue_mc(r, obj_table, n, + behavior, available); +} + +/** * @internal Enqueue several objects on the ring * * @param r @@ -437,8 +613,14 @@ static __rte_always_inline unsigned int rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table, unsigned int n, unsigned int *free_space) { - return __rte_ring_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED, - __IS_MP, free_space); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_enqueue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_MP, + free_space); + else + return __rte_ring_do_enqueue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_MP, + free_space); } /** @@ -460,8 +642,14 @@ static __rte_always_inline unsigned int rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table, unsigned int n, unsigned int *free_space) { - return __rte_ring_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED, - __IS_SP, free_space); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_enqueue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_SP, + free_space); + else + return __rte_ring_do_enqueue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_SP, + free_space); } /** @@ -487,8 +675,14 @@ static __rte_always_inline unsigned int rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table, unsigned int n, unsigned int *free_space) { - return __rte_ring_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED, - r->prod.single, free_space); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_enqueue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, + r->prod_64.single, free_space); + else + return __rte_ring_do_enqueue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, + r->prod.single, free_space); } /** @@ -571,8 +765,14 @@ 
static __rte_always_inline unsigned int rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned int n, unsigned int *available) { - return __rte_ring_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED, - __IS_MC, available); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_dequeue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_MC, + available); + else + return __rte_ring_do_dequeue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_MC, + available); } /** @@ -595,8 +795,14 @@ static __rte_always_inline unsigned int rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned int n, unsigned int *available) { - return __rte_ring_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED, - __IS_SC, available); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_dequeue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_SC, + available); + else + return __rte_ring_do_dequeue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, __IS_SC, + available); } /** @@ -622,8 +828,14 @@ static __rte_always_inline unsigned int rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned int n, unsigned int *available) { - return __rte_ring_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED, - r->cons.single, available); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_dequeue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, + r->cons_64.single, available); + else + return __rte_ring_do_dequeue(r, obj_table, n, + RTE_RING_QUEUE_FIXED, + r->cons.single, available); } /** @@ -698,9 +910,13 @@ rte_ring_dequeue(struct rte_ring *r, void **obj_p) static inline unsigned rte_ring_count(const struct rte_ring *r) { - uint32_t prod_tail = r->prod.tail; - uint32_t cons_tail = r->cons.tail; - uint32_t count = (prod_tail - cons_tail) & r->mask; + uint32_t count; + + if (r->flags & RING_F_NB) + count = (r->prod_64.tail - r->cons_64.tail) & r->mask; + else + count = (r->prod.tail - r->cons.tail) & r->mask; + return (count > r->capacity) ? 
r->capacity : count; } @@ -820,8 +1036,14 @@ static __rte_always_inline unsigned rte_ring_mp_enqueue_burst(struct rte_ring *r, void * const *obj_table, unsigned int n, unsigned int *free_space) { - return __rte_ring_do_enqueue(r, obj_table, n, - RTE_RING_QUEUE_VARIABLE, __IS_MP, free_space); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_enqueue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_MP, free_space); + else + return __rte_ring_do_enqueue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_MP, free_space); } /** @@ -843,8 +1065,14 @@ static __rte_always_inline unsigned rte_ring_sp_enqueue_burst(struct rte_ring *r, void * const *obj_table, unsigned int n, unsigned int *free_space) { - return __rte_ring_do_enqueue(r, obj_table, n, - RTE_RING_QUEUE_VARIABLE, __IS_SP, free_space); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_enqueue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_SP, free_space); + else + return __rte_ring_do_enqueue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_SP, free_space); } /** @@ -870,8 +1098,14 @@ static __rte_always_inline unsigned rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table, unsigned int n, unsigned int *free_space) { - return __rte_ring_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE, - r->prod.single, free_space); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_enqueue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + r->prod_64.single, free_space); + else + return __rte_ring_do_enqueue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + r->prod.single, free_space); } /** @@ -898,8 +1132,14 @@ static __rte_always_inline unsigned rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned int n, unsigned int *available) { - return __rte_ring_do_dequeue(r, obj_table, n, - RTE_RING_QUEUE_VARIABLE, __IS_MC, available); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_dequeue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_MC, available); + else + return __rte_ring_do_dequeue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_MC, available); } /** @@ -923,8 +1163,14 @@ static __rte_always_inline unsigned rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned int n, unsigned int *available) { - return __rte_ring_do_dequeue(r, obj_table, n, - RTE_RING_QUEUE_VARIABLE, __IS_SC, available); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_dequeue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_SC, available); + else + return __rte_ring_do_dequeue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + __IS_SC, available); } /** @@ -950,9 +1196,14 @@ static __rte_always_inline unsigned rte_ring_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned int n, unsigned int *available) { - return __rte_ring_do_dequeue(r, obj_table, n, - RTE_RING_QUEUE_VARIABLE, - r->cons.single, available); + if (r->flags & RING_F_NB) + return __rte_ring_do_nb_dequeue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + r->cons_64.single, available); + else + return __rte_ring_do_dequeue(r, obj_table, n, + RTE_RING_QUEUE_VARIABLE, + r->cons.single, available); } #ifdef __cplusplus diff --git a/lib/librte_ring/rte_ring_c11_mem.h b/lib/librte_ring/rte_ring_c11_mem.h index 47acd4c7c..7f83a5dc9 100644 --- a/lib/librte_ring/rte_ring_c11_mem.h +++ b/lib/librte_ring/rte_ring_c11_mem.h @@ -221,8 +221,8 @@ __rte_ring_move_prod_head_64(struct rte_ring *r, unsigned int is_sp, /* Ensure the head is read before tail */ __atomic_thread_fence(__ATOMIC_ACQUIRE); - /* load-acquire synchronize with store-release of ht->tail 
- * in update_tail. + /* load-acquire synchronize with store-release of tail in + * do_nb_dequeue_{sc, mc}. */ cons_tail = __atomic_load_n(&r->cons_64.tail, __ATOMIC_ACQUIRE); @@ -252,6 +252,7 @@ __rte_ring_move_prod_head_64(struct rte_ring *r, unsigned int is_sp, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED); } while (unlikely(success == 0)); + return n; } @@ -298,8 +299,8 @@ __rte_ring_move_cons_head_64(struct rte_ring *r, unsigned int is_sc, /* Ensure the head is read before tail */ __atomic_thread_fence(__ATOMIC_ACQUIRE); - /* this load-acquire synchronize with store-release of ht->tail - * in update_tail. + /* load-acquire synchronize with store-release of tail in + * do_nb_enqueue_{sp, mp}. */ prod_tail = __atomic_load_n(&r->prod_64.tail, __ATOMIC_ACQUIRE); @@ -328,6 +329,279 @@ __rte_ring_move_cons_head_64(struct rte_ring *r, unsigned int is_sc, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED); } while (unlikely(success == 0)); + + return n; +} + +/** + * @internal + * Enqueue several objects on the non-blocking ring (single-producer only) + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to add in the ring from the obj_table. + * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items to the ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible to the ring + * @param free_space + * returns the amount of space after the enqueue operation has finished + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_enqueue_sp(struct rte_ring *r, void * const *obj_table, + unsigned int n, + enum rte_ring_queue_behavior behavior, + unsigned int *free_space) +{ + uint32_t free_entries; + uint64_t head, next; + + n = __rte_ring_move_prod_head_64(r, 1, n, behavior, + &head, &next, &free_entries); + if (n == 0) + goto end; + + ENQUEUE_PTRS_NB(r, &r[1], head, obj_table, n); + + __atomic_store_n(&r->prod_64.tail, + r->prod_64.tail + n, + __ATOMIC_RELEASE); +end: + if (free_space != NULL) + *free_space = free_entries - n; + return n; +} + +/** + * @internal + * Enqueue several objects on the non-blocking ring (multi-producer safe) + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to add in the ring from the obj_table. + * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items to the ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible to the ring + * @param free_space + * returns the amount of space after the enqueue operation has finished + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_enqueue_mp(struct rte_ring *r, void * const *obj_table, + unsigned int n, + enum rte_ring_queue_behavior behavior, + unsigned int *free_space) +{ +#if !defined(RTE_ARCH_X86_64) || !defined(ALLOW_EXPERIMENTAL_API) + RTE_SET_USED(r); + RTE_SET_USED(obj_table); + RTE_SET_USED(n); + RTE_SET_USED(behavior); + RTE_SET_USED(free_space); +#ifndef ALLOW_EXPERIMENTAL_API + printf("[%s()] RING_F_NB requires an experimental API." 
+ " Recompile with ALLOW_EXPERIMENTAL_API to use it.\n" + , __func__); +#endif + return 0; +#endif +#if defined(RTE_ARCH_X86_64) && defined(ALLOW_EXPERIMENTAL_API) + uint64_t head, next, tail; + uint32_t free_entries; + unsigned int i; + + n = __rte_ring_move_prod_head_64(r, 0, n, behavior, + &head, &next, &free_entries); + if (n == 0) + goto end; + + tail = __atomic_load_n(&r->prod_64.tail, __ATOMIC_RELAXED); + + for (i = 0; i < n; /* i incremented if enqueue succeeds */) { + struct nb_ring_entry old_value, new_value; + struct nb_ring_entry *base, *ring_ptr; + + /* Enqueue to the tail entry. If another thread wins the race, + * retry with the new tail. + */ + base = (struct nb_ring_entry *)&r[1]; + + ring_ptr = &base[tail & r->mask]; + + old_value = *ring_ptr; + + /* If the tail entry's modification counter doesn't match the + * producer tail index, it's already been updated. + * + * Attempt to update the tail here, so this thread doesn't + * depend on the forward progress of the thread that + * successfully enqueued. + */ + if (old_value.cnt != tail) { + /* Use a release memmodel to ensure the tail entry is + * visible to dequeueing threads before updating the + * tail. (tail is updated on failure.) + */ + __atomic_compare_exchange_n(&r->prod_64.tail, + &tail, tail + 1, + 0, __ATOMIC_RELEASE, + __ATOMIC_RELAXED); + continue; + } + + /* Prepare the new entry. The cnt field mitigates the ABA + * problem on the ring write. + */ + new_value.ptr = obj_table[i]; + new_value.cnt = tail + r->size; + + if (rte_atomic128_cmpset((volatile rte_int128_t *)ring_ptr, + (rte_int128_t *)&old_value, + (rte_int128_t *)&new_value, + 0, RTE_ATOMIC_RELAXED, + RTE_ATOMIC_RELAXED)) + i++; + + /* Use a release memmodel to ensure the tail entry is visible + * to dequeueing threads before updating the tail. Every thread + * attempts the cmpset, so they don't have to wait for the + * thread that successfully enqueued to the ring. Using a + * 64-bit tail mitigates the ABA problem here. (tail is updated + * on failure.) + */ + __atomic_compare_exchange_n(&r->prod_64.tail, + &tail, tail + 1, + 0, __ATOMIC_RELEASE, + __ATOMIC_RELAXED); + } + +end: + if (free_space != NULL) + *free_space = free_entries - n; + return n; +#endif +} + +/** + * @internal + * Dequeue several objects from the non-blocking ring (single-consumer only) + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to pull from the ring. + * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from the ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from the ring + * @param available + * returns the number of remaining ring entries after the dequeue has finished + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. 
+ */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_dequeue_sc(struct rte_ring *r, void **obj_table, + unsigned int n, + enum rte_ring_queue_behavior behavior, + unsigned int *available) +{ + uint64_t head, next; + uint32_t entries; + + n = __rte_ring_move_cons_head_64(r, 1, n, behavior, + &head, &next, &entries); + if (n == 0) + goto end; + + DEQUEUE_PTRS_NB(r, &r[1], head, obj_table, n); + + __atomic_store_n(&r->cons_64.tail, + r->cons_64.tail + n, + __ATOMIC_RELEASE); +end: + if (available != NULL) + *available = entries - n; + return n; +} + +/** + * @internal + * Dequeue several objects from the non-blocking ring (multi-consumer safe) + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to pull from the ring. + * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from the ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from the ring + * @param available + * returns the number of remaining ring entries after the dequeue has finished + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_dequeue_mc(struct rte_ring *r, void **obj_table, + unsigned int n, + enum rte_ring_queue_behavior behavior, + unsigned int *available) +{ + uint64_t head, tail, next; + uint32_t entries; + int success; + + n = __rte_ring_move_cons_head_64(r, 0, n, behavior, + &head, &next, &entries); + if (n == 0) + goto end; + + /* The acquire-release synchronization on prod_64.tail ensures that + * this thread correctly observes the ring entries up to prod_64.tail. + * However by the time this thread reads cons_64.tail, or if its CAS + * fails, cons_64.tail may have passed the previously read value of + * prod_64.tail. Acquire-release synchronization on cons_64.tail is + * necessary to ensure that dequeue threads always observe the correct + * values of the ring entries. + */ + tail = __atomic_load_n(&r->cons_64.tail, __ATOMIC_ACQUIRE); + do { + /* Dequeue from the cons tail onwards. If multiple threads read + * the same pointers, the thread that successfully performs the + * CAS will keep them and the other(s) will retry. + */ + DEQUEUE_PTRS_NB(r, &r[1], tail, obj_table, n); + + next = tail + n; + + /* There is potential for the ABA problem here, but that is + * mitigated by the large (64-bit) tail. Use a release memmodel + * to ensure the dequeue operations and CAS are properly + * ordered. (tail is updated on failure.) + */ + success = __atomic_compare_exchange_n(&r->cons_64.tail, + &tail, next, + 0, __ATOMIC_RELEASE, + __ATOMIC_ACQUIRE); + } while (success == 0); + +end: + if (available != NULL) + *available = entries - n; return n; } diff --git a/lib/librte_ring/rte_ring_generic.h b/lib/librte_ring/rte_ring_generic.h index 2158e092a..87c9a09ce 100644 --- a/lib/librte_ring/rte_ring_generic.h +++ b/lib/librte_ring/rte_ring_generic.h @@ -306,4 +306,273 @@ __rte_ring_move_cons_head_64(struct rte_ring *r, unsigned int is_sc, return n; } +/** + * @internal + * Enqueue several objects on the non-blocking ring (single-producer only) + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to add in the ring from the obj_table. 
+ * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items to the ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible to the ring + * @param free_space + * returns the amount of space after the enqueue operation has finished + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_enqueue_sp(struct rte_ring *r, void * const *obj_table, + unsigned int n, + enum rte_ring_queue_behavior behavior, + unsigned int *free_space) +{ + uint32_t free_entries; + uint64_t head, next; + + n = __rte_ring_move_prod_head_64(r, 1, n, behavior, + &head, &next, &free_entries); + if (n == 0) + goto end; + + ENQUEUE_PTRS_NB(r, &r[1], head, obj_table, n); + + rte_smp_wmb(); + + r->prod_64.tail += n; + +end: + if (free_space != NULL) + *free_space = free_entries - n; + return n; +} + +/** + * @internal + * Enqueue several objects on the non-blocking ring (multi-producer safe) + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to add in the ring from the obj_table. + * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items to the ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible to the ring + * @param free_space + * returns the amount of space after the enqueue operation has finished + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_enqueue_mp(struct rte_ring *r, void * const *obj_table, + unsigned int n, + enum rte_ring_queue_behavior behavior, + unsigned int *free_space) +{ +#if !defined(RTE_ARCH_X86_64) || !defined(ALLOW_EXPERIMENTAL_API) + RTE_SET_USED(r); + RTE_SET_USED(obj_table); + RTE_SET_USED(n); + RTE_SET_USED(behavior); + RTE_SET_USED(free_space); +#ifndef ALLOW_EXPERIMENTAL_API + printf("[%s()] RING_F_NB requires an experimental API." + " Recompile with ALLOW_EXPERIMENTAL_API to use it.\n" + , __func__); +#endif + return 0; +#endif +#if defined(RTE_ARCH_X86_64) && defined(ALLOW_EXPERIMENTAL_API) + uint64_t head, next, tail; + uint32_t free_entries; + unsigned int i; + + n = __rte_ring_move_prod_head_64(r, 0, n, behavior, + &head, &next, &free_entries); + if (n == 0) + goto end; + + for (i = 0; i < n; /* i incremented if enqueue succeeds */) { + struct nb_ring_entry old_value, new_value; + struct nb_ring_entry *base, *ring_ptr; + + /* Enqueue to the tail entry. If another thread wins the race, + * retry with the new tail. + */ + tail = r->prod_64.tail; + + base = (struct nb_ring_entry *)&r[1]; + + ring_ptr = &base[tail & r->mask]; + + old_value = *ring_ptr; + + /* If the tail entry's modification counter doesn't match the + * producer tail index, it's already been updated. + * + * Attempt to update the tail here, so this thread doesn't + * depend on the forward progress of the thread that + * successfully enqueued. + */ + if (old_value.cnt != tail) { + /* Ensure the tail entry is visible to dequeueing + * threads before updating the tail. + */ + rte_smp_wmb(); + + rte_atomic64_cmpset(&r->prod_64.tail, tail, tail + 1); + continue; + } + + /* Prepare the new entry. The cnt field mitigates the ABA + * problem on the ring write. 
+ */ + new_value.ptr = obj_table[i]; + new_value.cnt = tail + r->size; + + if (rte_atomic128_cmpset((volatile rte_int128_t *)ring_ptr, + (rte_int128_t *)&old_value, + (rte_int128_t *)&new_value, + 0, RTE_ATOMIC_RELAXED, + RTE_ATOMIC_RELAXED)) + i++; + + /* Ensure the tail entry is visible to dequeueing threads + * before updating the tail. + */ + rte_smp_wmb(); + + /* Every thread attempts the cmpset, so they don't have to wait + * for the thread that successfully enqueued to the ring. + * Using a 64-bit tail mitigates the ABA problem here. + */ + rte_atomic64_cmpset(&r->prod_64.tail, tail, tail + 1); + } + +end: + if (free_space != NULL) + *free_space = free_entries - n; + return n; +#endif +} + +/** + * @internal + * Dequeue several objects from the non-blocking ring (single-consumer only) + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to pull from the ring. + * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from the ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from the ring + * @param available + * returns the number of remaining ring entries after the dequeue has finished + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_dequeue_sc(struct rte_ring *r, void **obj_table, + unsigned int n, + enum rte_ring_queue_behavior behavior, + unsigned int *available) +{ + uint64_t head, next; + uint32_t entries; + + n = __rte_ring_move_cons_head_64(r, 1, n, behavior, + &head, &next, &entries); + if (n == 0) + goto end; + + DEQUEUE_PTRS_NB(r, &r[1], head, obj_table, n); + + rte_smp_rmb(); + + r->cons_64.tail += n; + +end: + if (available != NULL) + *available = entries - n; + return n; +} + +/** + * @internal + * Dequeue several objects from the non-blocking ring (multi-consumer safe) + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param n + * The number of objects to pull from the ring. + * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from the ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from the ring + * @param available + * returns the number of remaining ring entries after the dequeue has finished + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_nb_dequeue_mc(struct rte_ring *r, void **obj_table, + unsigned int n, + enum rte_ring_queue_behavior behavior, + unsigned int *available) +{ + uint64_t head, next; + uint32_t entries; + int success; + + n = __rte_ring_move_cons_head_64(r, 0, n, behavior, + &head, &next, &entries); + if (n == 0) + goto end; + + do { + uint64_t tail = r->cons_64.tail; + + /* Ensure that the correct ring entry values are read by this + * thread. + */ + rte_smp_rmb(); + + /* Dequeue from the cons tail onwards. If multiple threads read + * the same pointers, the thread that successfully performs the + * CAS will keep them and the other(s) will retry. + */ + DEQUEUE_PTRS_NB(r, &r[1], tail, obj_table, n); + + next = tail + n; + + /* Ensure the dequeue operations and CAS are properly + * ordered. 
+ */ + rte_smp_rmb(); + + /* There is potential for the ABA problem here, but that is + * mitigated by the large (64-bit) tail. + */ + success = rte_atomic64_cmpset(&r->cons_64.tail, tail, next); + } while (success == 0); + +end: + if (available != NULL) + *available = entries - n; + return n; +} + #endif /* _RTE_RING_GENERIC_H_ */ diff --git a/lib/librte_ring/rte_ring_version.map b/lib/librte_ring/rte_ring_version.map index d935efd0d..8969467af 100644 --- a/lib/librte_ring/rte_ring_version.map +++ b/lib/librte_ring/rte_ring_version.map @@ -17,3 +17,10 @@ DPDK_2.2 { rte_ring_free; } DPDK_2.0; + +DPDK_19.05 { + global: + + rte_ring_get_memsize; + +} DPDK_2.2; From patchwork Mon Jan 28 18:14:05 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Eads, Gage" X-Patchwork-Id: 50075 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 7AE9D1B10C; Mon, 28 Jan 2019 19:15:15 +0100 (CET) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by dpdk.org (Postfix) with ESMTP id 9B35F7CB0 for ; Mon, 28 Jan 2019 19:15:08 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Jan 2019 10:15:08 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,534,1539673200"; d="scan'208";a="139508038" Received: from txasoft-yocto.an.intel.com ([10.123.72.192]) by fmsmga004.fm.intel.com with ESMTP; 28 Jan 2019 10:15:07 -0800 From: Gage Eads To: dev@dpdk.org Cc: olivier.matz@6wind.com, arybchenko@solarflare.com, bruce.richardson@intel.com, konstantin.ananyev@intel.com, stephen@networkplumber.org, jerinj@marvell.com, mczekaj@marvell.com, nd@arm.com, Ola.Liljedahl@arm.com Date: Mon, 28 Jan 2019 12:14:05 -0600 Message-Id: <20190128181407.32739-4-gage.eads@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20190128181407.32739-1-gage.eads@intel.com> References: <20190118152326.22686-1-gage.eads@intel.com> <20190128181407.32739-1-gage.eads@intel.com> Subject: [dpdk-dev] [PATCH v4 3/5] test_ring: add non-blocking ring autotest X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" ring_nb_autotest re-uses the ring_autotest code by wrapping its top-level function with one that takes a 'flags' argument. 
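For illustration, the behavior the new ring_nb_autotest command exercises boils down to creating a ring with RING_F_NB and driving it through the ordinary rte_ring API. A minimal standalone sketch (not part of the patch; it assumes EAL is already initialized, as it is in the test harness, and the ring name and size are arbitrary):

#include <stdint.h>
#include <rte_ring.h>

/* Create a non-blocking ring and do one enqueue/dequeue round trip. */
static int
nb_ring_smoke_test(void)
{
	struct rte_ring *r;
	void *obj = (void *)(uintptr_t)0x1, *ret = NULL;

	r = rte_ring_create("nb_smoke", 1024, SOCKET_ID_ANY, RING_F_NB);
	if (r == NULL)
		return -1;

	if (rte_ring_enqueue(r, obj) != 0 ||
	    rte_ring_dequeue(r, &ret) != 0 ||
	    ret != obj) {
		rte_ring_free(r);
		return -1;
	}

	rte_ring_free(r);
	return 0;
}

Both ring_autotest and ring_nb_autotest run the same test body through the shared __test_ring() helper below, differing only in the flags value passed to rte_ring_create().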
Signed-off-by: Gage Eads --- test/test/test_ring.c | 57 ++++++++++++++++++++++++++++++++------------------- 1 file changed, 36 insertions(+), 21 deletions(-) diff --git a/test/test/test_ring.c b/test/test/test_ring.c index aaf1e70ad..ff410d978 100644 --- a/test/test/test_ring.c +++ b/test/test/test_ring.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2010-2014 Intel Corporation + * Copyright(c) 2010-2019 Intel Corporation */ #include @@ -601,18 +601,20 @@ test_ring_burst_basic(struct rte_ring *r) * it will always fail to create ring with a wrong ring size number in this function */ static int -test_ring_creation_with_wrong_size(void) +test_ring_creation_with_wrong_size(unsigned int flags) { struct rte_ring * rp = NULL; /* Test if ring size is not power of 2 */ - rp = rte_ring_create("test_bad_ring_size", RING_SIZE + 1, SOCKET_ID_ANY, 0); + rp = rte_ring_create("test_bad_ring_size", RING_SIZE + 1, + SOCKET_ID_ANY, flags); if (NULL != rp) { return -1; } /* Test if ring size is exceeding the limit */ - rp = rte_ring_create("test_bad_ring_size", (RTE_RING_SZ_MASK + 1), SOCKET_ID_ANY, 0); + rp = rte_ring_create("test_bad_ring_size", (RTE_RING_SZ_MASK + 1), + SOCKET_ID_ANY, flags); if (NULL != rp) { return -1; } @@ -623,11 +625,11 @@ test_ring_creation_with_wrong_size(void) * it tests if it would always fail to create ring with an used ring name */ static int -test_ring_creation_with_an_used_name(void) +test_ring_creation_with_an_used_name(unsigned int flags) { struct rte_ring * rp; - rp = rte_ring_create("test", RING_SIZE, SOCKET_ID_ANY, 0); + rp = rte_ring_create("test", RING_SIZE, SOCKET_ID_ANY, flags); if (NULL != rp) return -1; @@ -639,10 +641,10 @@ test_ring_creation_with_an_used_name(void) * function to fail correctly */ static int -test_create_count_odd(void) +test_create_count_odd(unsigned int flags) { struct rte_ring *r = rte_ring_create("test_ring_count", - 4097, SOCKET_ID_ANY, 0 ); + 4097, SOCKET_ID_ANY, flags); if(r != NULL){ return -1; } @@ -665,7 +667,7 @@ test_lookup_null(void) * it tests some more basic ring operations */ static int -test_ring_basic_ex(void) +test_ring_basic_ex(unsigned int flags) { int ret = -1; unsigned i; @@ -679,7 +681,7 @@ test_ring_basic_ex(void) } rp = rte_ring_create("test_ring_basic_ex", RING_SIZE, SOCKET_ID_ANY, - RING_F_SP_ENQ | RING_F_SC_DEQ); + RING_F_SP_ENQ | RING_F_SC_DEQ | flags); if (rp == NULL) { printf("test_ring_basic_ex fail to create ring\n"); goto fail_test; @@ -737,7 +739,7 @@ test_ring_basic_ex(void) } static int -test_ring_with_exact_size(void) +test_ring_with_exact_size(unsigned int flags) { struct rte_ring *std_ring = NULL, *exact_sz_ring = NULL; void *ptr_array[16]; @@ -746,13 +748,13 @@ test_ring_with_exact_size(void) int ret = -1; std_ring = rte_ring_create("std", ring_sz, rte_socket_id(), - RING_F_SP_ENQ | RING_F_SC_DEQ); + RING_F_SP_ENQ | RING_F_SC_DEQ | flags); if (std_ring == NULL) { printf("%s: error, can't create std ring\n", __func__); goto end; } exact_sz_ring = rte_ring_create("exact sz", ring_sz, rte_socket_id(), - RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ); + RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ | flags); if (exact_sz_ring == NULL) { printf("%s: error, can't create exact size ring\n", __func__); goto end; @@ -808,17 +810,17 @@ test_ring_with_exact_size(void) } static int -test_ring(void) +__test_ring(unsigned int flags) { struct rte_ring *r = NULL; /* some more basic operations */ - if (test_ring_basic_ex() < 0) + if (test_ring_basic_ex(flags) < 0) goto test_fail; 
rte_atomic32_init(&synchro); - r = rte_ring_create("test", RING_SIZE, SOCKET_ID_ANY, 0); + r = rte_ring_create("test", RING_SIZE, SOCKET_ID_ANY, flags); if (r == NULL) goto test_fail; @@ -837,27 +839,27 @@ test_ring(void) goto test_fail; /* basic operations */ - if ( test_create_count_odd() < 0){ + if (test_create_count_odd(flags) < 0) { printf("Test failed to detect odd count\n"); goto test_fail; } else printf("Test detected odd count\n"); - if ( test_lookup_null() < 0){ + if (test_lookup_null() < 0) { printf("Test failed to detect NULL ring lookup\n"); goto test_fail; } else printf("Test detected NULL ring lookup\n"); /* test of creating ring with wrong size */ - if (test_ring_creation_with_wrong_size() < 0) + if (test_ring_creation_with_wrong_size(flags) < 0) goto test_fail; /* test of creation ring with an used name */ - if (test_ring_creation_with_an_used_name() < 0) + if (test_ring_creation_with_an_used_name(flags) < 0) goto test_fail; - if (test_ring_with_exact_size() < 0) + if (test_ring_with_exact_size(flags) < 0) goto test_fail; /* dump the ring status */ @@ -873,4 +875,17 @@ test_ring(void) return -1; } +static int +test_ring(void) +{ + return __test_ring(0); +} + +static int +test_nb_ring(void) +{ + return __test_ring(RING_F_NB); +} + REGISTER_TEST_COMMAND(ring_autotest, test_ring); +REGISTER_TEST_COMMAND(ring_nb_autotest, test_nb_ring); From patchwork Mon Jan 28 18:14:06 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Eads, Gage" X-Patchwork-Id: 50076 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id DFD841B118; Mon, 28 Jan 2019 19:15:16 +0100 (CET) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by dpdk.org (Postfix) with ESMTP id EFB8A7CB0 for ; Mon, 28 Jan 2019 19:15:09 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Jan 2019 10:15:09 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,534,1539673200"; d="scan'208";a="139508048" Received: from txasoft-yocto.an.intel.com ([10.123.72.192]) by fmsmga004.fm.intel.com with ESMTP; 28 Jan 2019 10:15:08 -0800 From: Gage Eads To: dev@dpdk.org Cc: olivier.matz@6wind.com, arybchenko@solarflare.com, bruce.richardson@intel.com, konstantin.ananyev@intel.com, stephen@networkplumber.org, jerinj@marvell.com, mczekaj@marvell.com, nd@arm.com, Ola.Liljedahl@arm.com Date: Mon, 28 Jan 2019 12:14:06 -0600 Message-Id: <20190128181407.32739-5-gage.eads@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20190128181407.32739-1-gage.eads@intel.com> References: <20190118152326.22686-1-gage.eads@intel.com> <20190128181407.32739-1-gage.eads@intel.com> Subject: [dpdk-dev] [PATCH v4 4/5] test_ring_perf: add non-blocking ring perf test X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" nb_ring_perf_autotest re-uses the ring_perf_autotest code by wrapping its top-level function with one that takes a 'flags' argument. 
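For illustration, the core of what the wrapped perf test measures is the cycle cost of bulk enqueue/dequeue on a ring created with the given flags. A simplified single-core sketch (not part of the patch; the ring name, burst size, and iteration count are arbitrary, and the real test also measures core-pair throughput):

#include <stdint.h>
#include <stdio.h>
#include <rte_cycles.h>
#include <rte_lcore.h>
#include <rte_ring.h>

static void
measure_bulk_cycles(unsigned int flags)
{
	const unsigned int bulk = 32, iters = 1 << 16;
	void *objs[32] = { NULL };
	struct rte_ring *r;
	uint64_t start, cycles;
	unsigned int i;

	r = rte_ring_create("perf_sketch", 1024, rte_socket_id(), flags);
	if (r == NULL)
		return;

	start = rte_rdtsc();
	for (i = 0; i < iters; i++) {
		/* single-threaded, so the bulk calls cannot fail */
		rte_ring_enqueue_bulk(r, objs, bulk, NULL);
		rte_ring_dequeue_bulk(r, objs, bulk, NULL);
	}
	cycles = rte_rdtsc() - start;

	printf("flags=%#x: %.2f cycles/object (enqueue+dequeue)\n",
	       flags, (double)cycles / ((double)iters * bulk));

	rte_ring_free(r);
}

Calling measure_bulk_cycles(0) and measure_bulk_cycles(RING_F_NB) back to back gives a rough single-core comparison of the two implementations.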
Signed-off-by: Gage Eads --- test/test/test_ring_perf.c | 19 ++++++++++++++++--- 1 file changed, 16 insertions(+), 3 deletions(-) diff --git a/test/test/test_ring_perf.c b/test/test/test_ring_perf.c index ebb3939f5..380c4b4a1 100644 --- a/test/test/test_ring_perf.c +++ b/test/test/test_ring_perf.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2010-2014 Intel Corporation + * Copyright(c) 2010-2019 Intel Corporation */ @@ -363,12 +363,12 @@ test_bulk_enqueue_dequeue(struct rte_ring *r) } static int -test_ring_perf(void) +__test_ring_perf(unsigned int flags) { struct lcore_pair cores; struct rte_ring *r = NULL; - r = rte_ring_create(RING_NAME, RING_SIZE, rte_socket_id(), 0); + r = rte_ring_create(RING_NAME, RING_SIZE, rte_socket_id(), flags); if (r == NULL) return -1; @@ -398,4 +398,17 @@ test_ring_perf(void) return 0; } +static int +test_ring_perf(void) +{ + return __test_ring_perf(0); +} + +static int +test_nb_ring_perf(void) +{ + return __test_ring_perf(RING_F_NB); +} + REGISTER_TEST_COMMAND(ring_perf_autotest, test_ring_perf); +REGISTER_TEST_COMMAND(ring_nb_perf_autotest, test_nb_ring_perf); From patchwork Mon Jan 28 18:14:07 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Eads, Gage" X-Patchwork-Id: 50077 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 6C5211B123; Mon, 28 Jan 2019 19:15:18 +0100 (CET) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by dpdk.org (Postfix) with ESMTP id A35567CE2 for ; Mon, 28 Jan 2019 19:15:10 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Jan 2019 10:15:10 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,534,1539673200"; d="scan'208";a="139508057" Received: from txasoft-yocto.an.intel.com ([10.123.72.192]) by fmsmga004.fm.intel.com with ESMTP; 28 Jan 2019 10:15:09 -0800 From: Gage Eads To: dev@dpdk.org Cc: olivier.matz@6wind.com, arybchenko@solarflare.com, bruce.richardson@intel.com, konstantin.ananyev@intel.com, stephen@networkplumber.org, jerinj@marvell.com, mczekaj@marvell.com, nd@arm.com, Ola.Liljedahl@arm.com Date: Mon, 28 Jan 2019 12:14:07 -0600 Message-Id: <20190128181407.32739-6-gage.eads@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20190128181407.32739-1-gage.eads@intel.com> References: <20190118152326.22686-1-gage.eads@intel.com> <20190128181407.32739-1-gage.eads@intel.com> Subject: [dpdk-dev] [PATCH v4 5/5] mempool/ring: add non-blocking ring handlers X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" These handlers allow an application to create a mempool based on the non-blocking ring, with any combination of single/multi producer/consumer. Also, add a note to the programmer's guide's "known issues" section. 
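For illustration, an application opts into these handlers by name, exactly as with the existing ring-based ops. A minimal sketch (not part of the patch; the pool name and sizing are arbitrary), using the "ring_mp_mc_nb" ops registered below:

#include <rte_lcore.h>
#include <rte_mempool.h>

static struct rte_mempool *
create_nb_mempool(void)
{
	struct rte_mempool *mp;

	/* 8192 objects of 2048 B, with a 256-object per-lcore cache */
	mp = rte_mempool_create_empty("nb_pool", 8192, 2048, 256, 0,
				      rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	/* Select the non-blocking ring handler before populating */
	if (rte_mempool_set_ops_byname(mp, "ring_mp_mc_nb", NULL) != 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}

The non-zero cache size matters here: as the documentation note added below says, the per-lcore mempool cache reduces the number of handler operations and so offsets the handler's higher average cost.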
Signed-off-by: Gage Eads Acked-by: Andrew Rybchenko --- doc/guides/prog_guide/env_abstraction_layer.rst | 5 +++ drivers/mempool/ring/Makefile | 1 + drivers/mempool/ring/meson.build | 2 + drivers/mempool/ring/rte_mempool_ring.c | 58 +++++++++++++++++++++++-- 4 files changed, 63 insertions(+), 3 deletions(-) diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst index 929d76dba..fcafd1cff 100644 --- a/doc/guides/prog_guide/env_abstraction_layer.rst +++ b/doc/guides/prog_guide/env_abstraction_layer.rst @@ -541,6 +541,11 @@ Known Issues 5. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR. + Alternatively, x86_64 applications can use the non-blocking ring mempool handler. When considering it, note that: + + - it is currently limited to the x86_64 platform, because it uses a function (16-byte compare-and-swap) that is not yet available on other platforms. + - it has worse average-case performance than the non-preemptive rte_ring, but software caching (e.g. the mempool cache) can mitigate this by reducing the number of handler operations. + + rte_timer Running ``rte_timer_manage()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed. diff --git a/drivers/mempool/ring/Makefile b/drivers/mempool/ring/Makefile index ddab522fe..012ba6966 100644 --- a/drivers/mempool/ring/Makefile +++ b/drivers/mempool/ring/Makefile @@ -10,6 +10,7 @@ LIB = librte_mempool_ring.a CFLAGS += -O3 CFLAGS += $(WERROR_FLAGS) +CFLAGS += -DALLOW_EXPERIMENTAL_API LDLIBS += -lrte_eal -lrte_mempool -lrte_ring EXPORT_MAP := rte_mempool_ring_version.map diff --git a/drivers/mempool/ring/meson.build b/drivers/mempool/ring/meson.build index a021e908c..b1cb673cc 100644 --- a/drivers/mempool/ring/meson.build +++ b/drivers/mempool/ring/meson.build @@ -1,4 +1,6 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2017 Intel Corporation +allow_experimental_apis = true + sources = files('rte_mempool_ring.c') diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c index bc123fc52..013dac3bc 100644 --- a/drivers/mempool/ring/rte_mempool_ring.c +++ b/drivers/mempool/ring/rte_mempool_ring.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2010-2016 Intel Corporation + * Copyright(c) 2010-2019 Intel Corporation */ #include @@ -47,11 +47,11 @@ common_ring_get_count(const struct rte_mempool *mp) static int -common_ring_alloc(struct rte_mempool *mp) +__common_ring_alloc(struct rte_mempool *mp, int rg_flags) { - int rg_flags = 0, ret; char rg_name[RTE_RING_NAMESIZE]; struct rte_ring *r; + int ret; ret = snprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT, mp->name); @@ -82,6 +82,18 @@ common_ring_alloc(struct rte_mempool *mp) return 0; } +static int +common_ring_alloc(struct rte_mempool *mp) +{ + return __common_ring_alloc(mp, 0); +} + +static int +common_ring_alloc_nb(struct rte_mempool *mp) +{ + return __common_ring_alloc(mp, RING_F_NB); +} + static void common_ring_free(struct rte_mempool *mp) { @@ -130,7 +142,47 @@ static const struct rte_mempool_ops ops_sp_mc = { .get_count = common_ring_get_count, }; +static const struct rte_mempool_ops ops_mp_mc_nb = { + .name = "ring_mp_mc_nb", + .alloc = common_ring_alloc_nb, + .free = common_ring_free, + .enqueue = common_ring_mp_enqueue, + .dequeue = common_ring_mc_dequeue, + .get_count = common_ring_get_count, +}; + +static const struct rte_mempool_ops 
ops_sp_sc_nb = {
+	.name = "ring_sp_sc_nb",
+	.alloc = common_ring_alloc_nb,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_mp_sc_nb = {
+	.name = "ring_mp_sc_nb",
+	.alloc = common_ring_alloc_nb,
+	.free = common_ring_free,
+	.enqueue = common_ring_mp_enqueue,
+	.dequeue = common_ring_sc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_mc_nb = {
+	.name = "ring_sp_mc_nb",
+	.alloc = common_ring_alloc_nb,
+	.free = common_ring_free,
+	.enqueue = common_ring_sp_enqueue,
+	.dequeue = common_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
 MEMPOOL_REGISTER_OPS(ops_mp_mc);
 MEMPOOL_REGISTER_OPS(ops_sp_sc);
 MEMPOOL_REGISTER_OPS(ops_mp_sc);
 MEMPOOL_REGISTER_OPS(ops_sp_mc);
+MEMPOOL_REGISTER_OPS(ops_mp_mc_nb);
+MEMPOOL_REGISTER_OPS(ops_sp_sc_nb);
+MEMPOOL_REGISTER_OPS(ops_mp_sc_nb);
+MEMPOOL_REGISTER_OPS(ops_sp_mc_nb);
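Taken together, the series rests on one mechanism: each slot of a RING_F_NB ring is a 16-byte {pointer, modification counter} pair, and a producer claims a slot with a 16-byte compare-and-swap, failing cleanly if the counter shows the slot was already written for the current lap. A simplified, self-contained sketch of that slot update (using GCC/Clang __atomic builtins rather than the rte_atomic128_cmpset() proposed by the series, with hypothetical names, and requiring -mcx16 or libatomic on x86-64):

#include <stdbool.h>
#include <stdint.h>

/* One ring slot: the enqueued pointer plus a counter holding the tail
 * index at which the slot was last written.
 */
struct slot {
	void *ptr;
	uint64_t cnt;
} __attribute__((aligned(16)));

/* Try to claim slots[tail & mask] for 'obj'. Returns false if another
 * producer already wrote this slot for the current lap, in which case
 * the caller should advance the tail and retry.
 */
static bool
try_write_slot(struct slot *slots, uint64_t mask, uint64_t size,
	       uint64_t tail, void *obj)
{
	struct slot *s = &slots[tail & mask];
	struct slot expected = *s;
	struct slot desired = { .ptr = obj, .cnt = tail + size };

	if (expected.cnt != tail)
		return false;

	/* 16-byte CAS: succeeds only if the slot is unchanged since the
	 * read above, so pointer and counter are updated atomically.
	 */
	return __atomic_compare_exchange(s, &expected, &desired,
					 false, __ATOMIC_RELEASE,
					 __ATOMIC_RELAXED);
}

Because the counter is written together with the pointer, a slow producer holding a stale tail cannot overwrite a newer entry, and the 64-bit indices keep the counter from wrapping back to a previously observed value within any realistic run time.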