From patchwork Sun Jan 13 14:46:31 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gavin Hu
X-Patchwork-Id: 49770
X-Patchwork-Delegate: thomas@monjalon.net
From: Gavin Hu
To: dev@dpdk.org
Cc: thomas@monjalon.net, jerinj@marvell.com, hemant.agrawal@nxp.com,
 stephen@networkplumber.org, nd@arm.com, Honnappa.Nagarahalli@arm.com,
 gavin.hu@arm.com, Joyce Kong
Date: Sun, 13 Jan 2019 22:46:31 +0800
Message-Id: <20190113144631.23493-1-gavin.hu@arm.com>
X-Mailer: git-send-email 2.11.0
Subject: [dpdk-dev] [PATCH v1] ticketlock: introduce ticket-based locks to improve fairness

From: Joyce Kong

The spinlock implementation is unfair: some threads may take the lock
aggressively while leaving other threads to starve for a long time. As
the test below shows, within the same period of time some cores take
the lock far more often than others.

The ticketlock hands each waiting thread a ticket, and threads take the
lock one by one in ticket order: first come, first served. This bounds
how long any thread can starve and makes lock acquisition latency more
predictable.

*** spinlock_autotest without this patch ***
Core [0] count = 89
Core [1] count = 84
Core [2] count = 94
...
Core [208] count = 171
Core [209] count = 152
Core [210] count = 161
Core [211] count = 187

*** spinlock_autotest with this patch ***
Core [0] count = 534
Core [1] count = 533
Core [2] count = 542
...
Core [208] count = 554
Core [209] count = 556
Core [210] count = 555
Core [211] count = 551

Signed-off-by: Joyce Kong
Signed-off-by: Gavin Hu
---
 .../common/include/generic/rte_ticketlock.h | 207 +++++++++++++++++++++
 lib/librte_eal/rte_eal_version.map          |   9 +
 2 files changed, 216 insertions(+)
 create mode 100644 lib/librte_eal/common/include/generic/rte_ticketlock.h
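
For reviewers, a minimal usage sketch of the API this patch adds
(illustrative only, not part of the diff; the counting loop mirrors
what spinlock_autotest measures, and names such as lcore_loop and
lcount are invented for the example):

#include <rte_cycles.h>
#include <rte_lcore.h>
#include <rte_ticketlock.h>

static rte_ticketlock_t tl = RTE_TICKETLOCK_INITIALIZER;
static uint64_t lcount[RTE_MAX_LCORE];

/* Launched on each worker lcore (e.g. via rte_eal_remote_launch());
 * every core takes the lock in a tight loop for one second. With a
 * ticket lock the per-core counts come out nearly equal, as shown
 * in the test output above. */
static int
lcore_loop(void *arg __rte_unused)
{
	uint64_t deadline = rte_get_timer_cycles() + rte_get_timer_hz();

	while (rte_get_timer_cycles() < deadline) {
		rte_ticketlock_lock(&tl);
		lcount[rte_lcore_id()]++;
		rte_ticketlock_unlock(&tl);
	}
	return 0;
}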
diff --git a/lib/librte_eal/common/include/generic/rte_ticketlock.h b/lib/librte_eal/common/include/generic/rte_ticketlock.h
new file mode 100644
index 000000000..9d044bb31
--- /dev/null
+++ b/lib/librte_eal/common/include/generic/rte_ticketlock.h
@@ -0,0 +1,207 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2019 Arm Limited
+ */
+
+#ifndef _RTE_TICKETLOCK_H_
+#define _RTE_TICKETLOCK_H_
+
+/**
+ * @file
+ *
+ * RTE ticketlocks
+ *
+ * This file defines an API for ticket locks, which are implemented
+ * in an architecture-independent way. A waiting thread takes a
+ * ticket and spins until the owner counter reaches its ticket, so
+ * the lock is granted in first-come, first-served order.
+ *
+ * All locks must be initialized before use, and only initialized once.
+ */
+
+#include <rte_common.h>
+#include <rte_lcore.h>
+#include <rte_pause.h>
+
+/**
+ * The rte_ticketlock_t type.
+ */
+typedef struct {
+	uint16_t current;
+	uint16_t next;
+} rte_ticketlock_t;
+
+/**
+ * A static ticketlock initializer.
+ */
+#define RTE_TICKETLOCK_INITIALIZER { 0 }
+
+/**
+ * Initialize the ticketlock to an unlocked state.
+ *
+ * @param tl
+ *   A pointer to the ticketlock.
+ */
+static inline __rte_experimental void
+rte_ticketlock_init(rte_ticketlock_t *tl)
+{
+	__atomic_store_n(&tl->current, 0, __ATOMIC_RELAXED);
+	__atomic_store_n(&tl->next, 0, __ATOMIC_RELAXED);
+}
+
+/**
+ * Take the ticketlock.
+ *
+ * @param tl
+ *   A pointer to the ticketlock.
+ */
+static inline __rte_experimental void
+rte_ticketlock_lock(rte_ticketlock_t *tl);
+
+static inline __rte_experimental void
+rte_ticketlock_lock(rte_ticketlock_t *tl)
+{
+	uint16_t me = __atomic_fetch_add(&tl->next, 1, __ATOMIC_RELAXED);
+	while (__atomic_load_n(&tl->current, __ATOMIC_ACQUIRE) != me)
+		rte_pause();
+}
+
+/**
+ * Release the ticketlock.
+ *
+ * @param tl
+ *   A pointer to the ticketlock.
+ */
+static inline __rte_experimental void
+rte_ticketlock_unlock(rte_ticketlock_t *tl);
+
+static inline __rte_experimental void
+rte_ticketlock_unlock(rte_ticketlock_t *tl)
+{
+	uint16_t i = __atomic_load_n(&tl->current, __ATOMIC_RELAXED);
+	i++;
+	__atomic_store_n(&tl->current, i, __ATOMIC_RELEASE);
+}
+
+/**
+ * Try to take the lock.
+ *
+ * @param tl
+ *   A pointer to the ticketlock.
+ * @return
+ *   1 if the lock is successfully taken; 0 otherwise.
+ */
+static inline __rte_experimental int
+rte_ticketlock_trylock(rte_ticketlock_t *tl)
+{
+	/* Take a ticket only if the lock is free; undoing a speculative
+	 * fetch-add on tl->next would race with concurrent lockers, so
+	 * claim the ticket with a compare-and-swap instead.
+	 */
+	uint16_t me = __atomic_load_n(&tl->next, __ATOMIC_RELAXED);
+	if (me == __atomic_load_n(&tl->current, __ATOMIC_RELAXED) &&
+	    __atomic_compare_exchange_n(&tl->next, &me, me + 1, 0,
+			__ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+		return 1;
+
+	return 0;
+}
+
+/**
+ * Test if the lock is taken.
+ *
+ * @param tl
+ *   A pointer to the ticketlock.
+ * @return
+ *   1 if the lock is currently taken; 0 otherwise.
+ */
+static inline __rte_experimental int
+rte_ticketlock_is_locked(rte_ticketlock_t *tl)
+{
+	return (__atomic_load_n(&tl->current, __ATOMIC_RELAXED) !=
+			__atomic_load_n(&tl->next, __ATOMIC_RELAXED));
+}
+
+/**
+ * The rte_ticketlock_recursive_t type.
+ */
+typedef struct {
+	rte_ticketlock_t tl; /**< the actual ticketlock */
+	volatile int user; /**< core id using lock, -1 for unused */
+	volatile int count; /**< count of times this lock has been taken */
+} rte_ticketlock_recursive_t;
+
+/**
+ * A static recursive ticketlock initializer.
+ */
+#define RTE_TICKETLOCK_RECURSIVE_INITIALIZER {RTE_TICKETLOCK_INITIALIZER, -1, 0}
+
+/**
+ * Initialize the recursive ticketlock to an unlocked state.
+ *
+ * @param tlr
+ *   A pointer to the recursive ticketlock.
+ */
+static inline __rte_experimental void rte_ticketlock_recursive_init(
+	rte_ticketlock_recursive_t *tlr)
+{
+	rte_ticketlock_init(&tlr->tl);
+	tlr->user = -1;
+	tlr->count = 0;
+}
+
+/**
+ * Take the recursive ticketlock.
+ *
+ * @param tlr
+ *   A pointer to the recursive ticketlock.
+ */
+static inline __rte_experimental void rte_ticketlock_recursive_lock(
+	rte_ticketlock_recursive_t *tlr)
+{
+	int id = rte_gettid();
+
+	if (tlr->user != id) {
+		rte_ticketlock_lock(&tlr->tl);
+		tlr->user = id;
+	}
+	tlr->count++;
+}
+
+/**
+ * Release the recursive ticketlock.
+ *
+ * @param tlr
+ *   A pointer to the recursive ticketlock.
+ */
+static inline __rte_experimental void rte_ticketlock_recursive_unlock(
+	rte_ticketlock_recursive_t *tlr)
+{
+	if (--(tlr->count) == 0) {
+		tlr->user = -1;
+		rte_ticketlock_unlock(&tlr->tl);
+	}
+}
+
+/**
+ * Try to take the recursive lock.
+ *
+ * @param tlr
+ *   A pointer to the recursive ticketlock.
+ * @return
+ *   1 if the lock is successfully taken; 0 otherwise.
+ */
+static inline __rte_experimental int rte_ticketlock_recursive_trylock(
+	rte_ticketlock_recursive_t *tlr)
+{
+	int id = rte_gettid();
+
+	if (tlr->user != id) {
+		if (rte_ticketlock_trylock(&tlr->tl) == 0)
+			return 0;
+		tlr->user = id;
+	}
+	tlr->count++;
+	return 1;
+}
+
+#endif /* _RTE_TICKETLOCK_H_ */
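
To illustrate the recursive variant (again illustrative, not part of
the diff; update_tree and update_leaf are invented names): the same
thread may retake a lock it already holds, and the lock is released
only when unlock has been called as many times as lock:

#include <rte_ticketlock.h>

static rte_ticketlock_recursive_t rtl = RTE_TICKETLOCK_RECURSIVE_INITIALIZER;

static void
update_leaf(void)
{
	rte_ticketlock_recursive_lock(&rtl);	/* same tid: count 1 -> 2 */
	/* ... touch shared state ... */
	rte_ticketlock_recursive_unlock(&rtl);	/* count 2 -> 1, still held */
}

static void
update_tree(void)
{
	rte_ticketlock_recursive_lock(&rtl);	/* count 0 -> 1, lock taken */
	update_leaf();	/* would deadlock with a plain rte_ticketlock_t */
	rte_ticketlock_recursive_unlock(&rtl);	/* count 1 -> 0, released */
}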
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index eb5f7b9cb..f1841effa 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -364,4 +364,13 @@ EXPERIMENTAL {
 	rte_service_may_be_active;
 	rte_socket_count;
 	rte_socket_id_by_idx;
+	rte_ticketlock_init;
+	rte_ticketlock_is_locked;
+	rte_ticketlock_lock;
+	rte_ticketlock_recursive_init;
+	rte_ticketlock_recursive_lock;
+	rte_ticketlock_recursive_trylock;
+	rte_ticketlock_recursive_unlock;
+	rte_ticketlock_trylock;
+	rte_ticketlock_unlock;
 };
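
A single-threaded sanity check of the intended semantics, in case it
helps review (illustrative only; a real ticketlock_autotest would be
a follow-up patch):

#include <assert.h>
#include <rte_ticketlock.h>

static void
ticketlock_sanity(void)
{
	rte_ticketlock_t tl;

	rte_ticketlock_init(&tl);
	assert(!rte_ticketlock_is_locked(&tl));
	assert(rte_ticketlock_trylock(&tl) == 1);	/* free -> taken */
	assert(rte_ticketlock_is_locked(&tl));
	assert(rte_ticketlock_trylock(&tl) == 0);	/* already held */
	rte_ticketlock_unlock(&tl);
	assert(!rte_ticketlock_is_locked(&tl));
}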