From patchwork Mon Apr 11 11:00:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 109580 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6FC67A0093; Mon, 11 Apr 2022 13:00:35 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5E39D40F35; Mon, 11 Apr 2022 13:00:35 +0200 (CEST) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mails.dpdk.org (Postfix) with ESMTP id E400C406B4 for ; Mon, 11 Apr 2022 13:00:33 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649674833; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=t4dHM24GSyxpEMw7Q+9nkSMr3qkmihsGUhbxGBUm27k=; b=B62DqgRapZBv1Os+PpH1gaP2Z2QpWoa+ohhjO6KwDJ/DpHMQsXf8fprKgR64yZDwBVFsk4 tyDOmYc4lBf2Kia1ZJMYD1O6hP66zZUy7Hhgr40xzp9W5e77+ugPz36aVMMq09sRrBDKrw 9ZHToeDCVo0axItff+geLpcpr2300+8= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-96-WZmRWY07OXyQ5PyiptD5pQ-1; Mon, 11 Apr 2022 07:00:27 -0400 X-MC-Unique: WZmRWY07OXyQ5PyiptD5pQ-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com [10.11.54.7]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id D5A1A1C0BF84; Mon, 11 Apr 2022 11:00:26 +0000 (UTC) Received: from dmarchan.remote.csb (unknown [10.40.192.83]) by smtp.corp.redhat.com (Postfix) with ESMTP id 816541454538; Mon, 11 Apr 2022 11:00:24 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com, Ruifeng Wang , Jan Viktorin , David Christensen , Bruce Richardson , Konstantin Ananyev Subject: [RFC PATCH v3 1/8] eal: annotate spinlock and rwlock Date: Mon, 11 Apr 2022 13:00:06 +0200 Message-Id: <20220411110013.18624-2-david.marchand@redhat.com> In-Reply-To: <20220411110013.18624-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20220411110013.18624-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david.marchand@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org clang offers some thread safety checks, statically verifying that locks are taken and released in the code. To use those checks, the full code leading to taking or releasing locks must be annotated with some attributes. Wrap those attributes into our own set of macros. rwlock and the "normal" spinlock are instrumented. 
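For illustration only (this snippet is not part of the patch, and the struct and function names are hypothetical), a component compiled with -Wthread-safety could annotate its own data and helpers with the new macros like this:

    #include <stdint.h>
    #include <rte_spinlock.h>
    #include <rte_lock_annotations.h>

    struct counter {
            rte_spinlock_t lock;
            uint64_t value __rte_guarded_by(lock);
    };

    /* clang reports any call site that does not hold cnt->lock. */
    static void
    counter_add(struct counter *cnt, uint64_t v)
            __rte_exclusive_locks_required(cnt->lock)
    {
            cnt->value += v;
    }

    static void
    counter_update(struct counter *cnt, uint64_t v)
    {
            rte_spinlock_lock(&cnt->lock);
            counter_add(cnt, v);
            /* Omitting this unlock triggers a -Wthread-safety warning. */
            rte_spinlock_unlock(&cnt->lock);
    }

Dropping the unlock, or touching cnt->value without the lock, produces a build-time warning; when RTE_ANNOTATE_LOCKS is not defined the macros expand to nothing, so other compilers are unaffected.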
A component may enable this check by setting annotate_locks = true in its meson.build. Note: those checks might be of interest out of DPDK, but it requires that the including application locks are annotated. On the other hand, applications out there might have been using those same checks. To be on the safe side, keep this instrumentation under a RTE_ANNOTATE_LOCKS internal build flag. Signed-off-by: David Marchand Acked-by: Maxime Coquelin --- Changes since RFC v2: - fixed rwlock trylock, - instrumented _tm spinlocks, - aligned attribute names to clang, --- drivers/meson.build | 5 ++ lib/eal/arm/include/rte_rwlock.h | 4 ++ lib/eal/arm/include/rte_spinlock.h | 6 +++ lib/eal/include/generic/rte_rwlock.h | 27 +++++++++-- lib/eal/include/generic/rte_spinlock.h | 40 ++++++++++----- lib/eal/include/meson.build | 1 + lib/eal/include/rte_lock_annotations.h | 67 ++++++++++++++++++++++++++ lib/eal/ppc/include/rte_rwlock.h | 4 ++ lib/eal/ppc/include/rte_spinlock.h | 9 ++++ lib/eal/x86/include/rte_rwlock.h | 4 ++ lib/eal/x86/include/rte_spinlock.h | 9 ++++ lib/meson.build | 5 ++ 12 files changed, 164 insertions(+), 17 deletions(-) create mode 100644 lib/eal/include/rte_lock_annotations.h diff --git a/drivers/meson.build b/drivers/meson.build index 1d8123b00c..c81c8c0eff 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -87,6 +87,7 @@ foreach subpath:subdirs build = true # set to false to disable, e.g. missing deps reason = '' # set if build == false to explain name = drv + annotate_locks = false sources = [] headers = [] objs = [] @@ -152,6 +153,10 @@ foreach subpath:subdirs enabled_drivers += name lib_name = '_'.join(['rte', class, name]) cflags += '-DRTE_LOG_DEFAULT_LOGTYPE=' + '.'.join([log_prefix, name]) + if annotate_locks and cc.has_argument('-Wthread-safety') + cflags += '-DRTE_ANNOTATE_LOCKS' + cflags += '-Wthread-safety' + endif dpdk_conf.set(lib_name.to_upper(), 1) dpdk_extra_ldflags += pkgconfig_extra_libs diff --git a/lib/eal/arm/include/rte_rwlock.h b/lib/eal/arm/include/rte_rwlock.h index 18bb37b036..575f171670 100644 --- a/lib/eal/arm/include/rte_rwlock.h +++ b/lib/eal/arm/include/rte_rwlock.h @@ -13,24 +13,28 @@ extern "C" { static inline void rte_rwlock_read_lock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { rte_rwlock_read_lock(rwl); } static inline void rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { rte_rwlock_read_unlock(rwl); } static inline void rte_rwlock_write_lock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { rte_rwlock_write_lock(rwl); } static inline void rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { rte_rwlock_write_unlock(rwl); } diff --git a/lib/eal/arm/include/rte_spinlock.h b/lib/eal/arm/include/rte_spinlock.h index a973763c23..2970de5c3c 100644 --- a/lib/eal/arm/include/rte_spinlock.h +++ b/lib/eal/arm/include/rte_spinlock.h @@ -23,36 +23,42 @@ static inline int rte_tm_supported(void) static inline void rte_spinlock_lock_tm(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { rte_spinlock_lock(sl); /* fall-back */ } static inline int rte_spinlock_trylock_tm(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { return rte_spinlock_trylock(sl); } static inline void rte_spinlock_unlock_tm(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { rte_spinlock_unlock(sl); } static inline void rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { rte_spinlock_recursive_lock(slr); /* fall-back */ } static inline void 
rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { rte_spinlock_recursive_unlock(slr); } static inline int rte_spinlock_recursive_trylock_tm(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { return rte_spinlock_recursive_trylock(slr); } diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h index da9bc3e9c0..145851f9c0 100644 --- a/lib/eal/include/generic/rte_rwlock.h +++ b/lib/eal/include/generic/rte_rwlock.h @@ -23,6 +23,7 @@ extern "C" { #include #include +#include #include /** @@ -30,7 +31,7 @@ extern "C" { * * cnt is -1 when write lock is held, and > 0 when read locks are held. */ -typedef struct { +typedef struct __rte_lockable { volatile int32_t cnt; /**< -1 when W lock held, > 0 when R locks held. */ } rte_rwlock_t; @@ -59,6 +60,8 @@ rte_rwlock_init(rte_rwlock_t *rwl) */ static inline void rte_rwlock_read_lock(rte_rwlock_t *rwl) + __rte_shared_lock_function(rwl) + __rte_no_thread_safety_analysis { int32_t x; int success = 0; @@ -91,6 +94,8 @@ rte_rwlock_read_lock(rte_rwlock_t *rwl) __rte_experimental static inline int rte_rwlock_read_trylock(rte_rwlock_t *rwl) + __rte_shared_trylock_function(0, rwl) + __rte_no_thread_safety_analysis { int32_t x; int success = 0; @@ -115,6 +120,8 @@ rte_rwlock_read_trylock(rte_rwlock_t *rwl) */ static inline void rte_rwlock_read_unlock(rte_rwlock_t *rwl) + __rte_unlock_function(rwl) + __rte_no_thread_safety_analysis { __atomic_fetch_sub(&rwl->cnt, 1, __ATOMIC_RELEASE); } @@ -135,6 +142,8 @@ rte_rwlock_read_unlock(rte_rwlock_t *rwl) __rte_experimental static inline int rte_rwlock_write_trylock(rte_rwlock_t *rwl) + __rte_exclusive_trylock_function(0, rwl) + __rte_no_thread_safety_analysis { int32_t x; @@ -154,6 +163,8 @@ rte_rwlock_write_trylock(rte_rwlock_t *rwl) */ static inline void rte_rwlock_write_lock(rte_rwlock_t *rwl) + __rte_exclusive_lock_function(rwl) + __rte_no_thread_safety_analysis { int32_t x; int success = 0; @@ -178,6 +189,8 @@ rte_rwlock_write_lock(rte_rwlock_t *rwl) */ static inline void rte_rwlock_write_unlock(rte_rwlock_t *rwl) + __rte_unlock_function(rwl) + __rte_no_thread_safety_analysis { __atomic_store_n(&rwl->cnt, 0, __ATOMIC_RELEASE); } @@ -196,7 +209,8 @@ rte_rwlock_write_unlock(rte_rwlock_t *rwl) * A pointer to a rwlock structure. */ static inline void -rte_rwlock_read_lock_tm(rte_rwlock_t *rwl); +rte_rwlock_read_lock_tm(rte_rwlock_t *rwl) + __rte_shared_lock_function(rwl); /** * Commit hardware memory transaction or release the read lock if the lock is used as a fall-back @@ -205,7 +219,8 @@ rte_rwlock_read_lock_tm(rte_rwlock_t *rwl); * A pointer to the rwlock structure. */ static inline void -rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl); +rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl) + __rte_unlock_function(rwl); /** * Try to execute critical section in a hardware memory transaction, if it @@ -221,7 +236,8 @@ rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl); * A pointer to a rwlock structure. */ static inline void -rte_rwlock_write_lock_tm(rte_rwlock_t *rwl); +rte_rwlock_write_lock_tm(rte_rwlock_t *rwl) + __rte_exclusive_lock_function(rwl); /** * Commit hardware memory transaction or release the write lock if the lock is used as a fall-back @@ -230,7 +246,8 @@ rte_rwlock_write_lock_tm(rte_rwlock_t *rwl); * A pointer to a rwlock structure. 
*/ static inline void -rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl); +rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl) + __rte_unlock_function(rwl); #ifdef __cplusplus } diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h index 40fe49d5ad..684bfac96c 100644 --- a/lib/eal/include/generic/rte_spinlock.h +++ b/lib/eal/include/generic/rte_spinlock.h @@ -22,12 +22,13 @@ #ifdef RTE_FORCE_INTRINSICS #include #endif +#include #include /** * The rte_spinlock_t type. */ -typedef struct { +typedef struct __rte_lockable { volatile int locked; /**< lock status 0 = unlocked, 1 = locked */ } rte_spinlock_t; @@ -55,11 +56,13 @@ rte_spinlock_init(rte_spinlock_t *sl) * A pointer to the spinlock. */ static inline void -rte_spinlock_lock(rte_spinlock_t *sl); +rte_spinlock_lock(rte_spinlock_t *sl) + __rte_exclusive_lock_function(sl); #ifdef RTE_FORCE_INTRINSICS static inline void rte_spinlock_lock(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { int exp = 0; @@ -79,11 +82,13 @@ rte_spinlock_lock(rte_spinlock_t *sl) * A pointer to the spinlock. */ static inline void -rte_spinlock_unlock (rte_spinlock_t *sl); +rte_spinlock_unlock(rte_spinlock_t *sl) + __rte_unlock_function(sl); #ifdef RTE_FORCE_INTRINSICS static inline void -rte_spinlock_unlock (rte_spinlock_t *sl) +rte_spinlock_unlock(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { __atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE); } @@ -98,11 +103,13 @@ rte_spinlock_unlock (rte_spinlock_t *sl) * 1 if the lock is successfully taken; 0 otherwise. */ static inline int -rte_spinlock_trylock (rte_spinlock_t *sl); +rte_spinlock_trylock(rte_spinlock_t *sl) + __rte_exclusive_trylock_function(1, sl); #ifdef RTE_FORCE_INTRINSICS static inline int -rte_spinlock_trylock (rte_spinlock_t *sl) +rte_spinlock_trylock(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { int exp = 0; return __atomic_compare_exchange_n(&sl->locked, &exp, 1, @@ -146,7 +153,8 @@ static inline int rte_tm_supported(void); * A pointer to the spinlock. */ static inline void -rte_spinlock_lock_tm(rte_spinlock_t *sl); +rte_spinlock_lock_tm(rte_spinlock_t *sl) + __rte_exclusive_lock_function(sl); /** * Commit hardware memory transaction or release the spinlock if @@ -156,7 +164,8 @@ rte_spinlock_lock_tm(rte_spinlock_t *sl); * A pointer to the spinlock. */ static inline void -rte_spinlock_unlock_tm(rte_spinlock_t *sl); +rte_spinlock_unlock_tm(rte_spinlock_t *sl) + __rte_unlock_function(sl); /** * Try to execute critical section in a hardware memory transaction, @@ -175,7 +184,8 @@ rte_spinlock_unlock_tm(rte_spinlock_t *sl); * or lock is successfully taken; 0 otherwise. */ static inline int -rte_spinlock_trylock_tm(rte_spinlock_t *sl); +rte_spinlock_trylock_tm(rte_spinlock_t *sl) + __rte_exclusive_trylock_function(1, sl); /** * The rte_spinlock_recursive_t type. @@ -211,6 +221,7 @@ static inline void rte_spinlock_recursive_init(rte_spinlock_recursive_t *slr) * A pointer to the recursive spinlock. */ static inline void rte_spinlock_recursive_lock(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { int id = rte_gettid(); @@ -227,6 +238,7 @@ static inline void rte_spinlock_recursive_lock(rte_spinlock_recursive_t *slr) * A pointer to the recursive spinlock. 
*/ static inline void rte_spinlock_recursive_unlock(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { if (--(slr->count) == 0) { slr->user = -1; @@ -244,6 +256,7 @@ static inline void rte_spinlock_recursive_unlock(rte_spinlock_recursive_t *slr) * 1 if the lock is successfully taken; 0 otherwise. */ static inline int rte_spinlock_recursive_trylock(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { int id = rte_gettid(); @@ -271,7 +284,8 @@ static inline int rte_spinlock_recursive_trylock(rte_spinlock_recursive_t *slr) * A pointer to the recursive spinlock. */ static inline void rte_spinlock_recursive_lock_tm( - rte_spinlock_recursive_t *slr); + rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis; /** * Commit hardware memory transaction or release the recursive spinlock @@ -281,7 +295,8 @@ static inline void rte_spinlock_recursive_lock_tm( * A pointer to the recursive spinlock. */ static inline void rte_spinlock_recursive_unlock_tm( - rte_spinlock_recursive_t *slr); + rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis; /** * Try to execute critical section in a hardware memory transaction, @@ -300,6 +315,7 @@ static inline void rte_spinlock_recursive_unlock_tm( * or lock is successfully taken; 0 otherwise. */ static inline int rte_spinlock_recursive_trylock_tm( - rte_spinlock_recursive_t *slr); + rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis; #endif /* _RTE_SPINLOCK_H_ */ diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build index 9700494816..a20b8d7e23 100644 --- a/lib/eal/include/meson.build +++ b/lib/eal/include/meson.build @@ -27,6 +27,7 @@ headers += files( 'rte_keepalive.h', 'rte_launch.h', 'rte_lcore.h', + 'rte_lock_annotations.h', 'rte_log.h', 'rte_malloc.h', 'rte_memory.h', diff --git a/lib/eal/include/rte_lock_annotations.h b/lib/eal/include/rte_lock_annotations.h new file mode 100644 index 0000000000..bcaf4193f7 --- /dev/null +++ b/lib/eal/include/rte_lock_annotations.h @@ -0,0 +1,67 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2022 Red Hat, Inc. + */ + +#ifndef RTE_LOCK_ANNOTATIONS_H +#define RTE_LOCK_ANNOTATIONS_H + +#ifdef __cplusplus +extern "C" { +#endif + +#ifdef RTE_ANNOTATE_LOCKS + +#define __rte_lockable \ + __attribute__((lockable)) + +#define __rte_guarded_by(...) \ + __attribute__((guarded_by(__VA_ARGS__))) +#define __rte_guarded_var \ + __attribute__((guarded_var)) + +#define __rte_exclusive_locks_required(...) \ + __attribute__((exclusive_locks_required(__VA_ARGS__))) +#define __rte_exclusive_lock_function(...) \ + __attribute__((exclusive_lock_function(__VA_ARGS__))) +#define __rte_exclusive_trylock_function(ret, ...) \ + __attribute__((exclusive_trylock_function(ret, __VA_ARGS__))) + +#define __rte_shared_locks_required(...) \ + __attribute__((shared_locks_required(__VA_ARGS__))) +#define __rte_shared_lock_function(...) \ + __attribute__((shared_lock_function(__VA_ARGS__))) +#define __rte_shared_trylock_function(ret, ...) \ + __attribute__((shared_trylock_function(ret, __VA_ARGS__))) + +#define __rte_unlock_function(...) \ + __attribute__((unlock_function(__VA_ARGS__))) + +#define __rte_no_thread_safety_analysis \ + __attribute__((no_thread_safety_analysis)) + +#else /* ! RTE_ANNOTATE_LOCKS */ + +#define __rte_lockable + +#define __rte_guarded_by(...) +#define __rte_guarded_var + +#define __rte_exclusive_locks_required(...) +#define __rte_exclusive_lock_function(...) +#define __rte_exclusive_trylock_function(...) 
+ +#define __rte_shared_locks_required(...) +#define __rte_shared_lock_function(...) +#define __rte_shared_trylock_function(...) + +#define __rte_unlock_function(...) + +#define __rte_no_thread_safety_analysis + +#endif /* RTE_ANNOTATE_LOCKS */ + +#ifdef __cplusplus +} +#endif + +#endif /* RTE_LOCK_ANNOTATIONS_H */ diff --git a/lib/eal/ppc/include/rte_rwlock.h b/lib/eal/ppc/include/rte_rwlock.h index 9fadc04076..d7a5b61b8e 100644 --- a/lib/eal/ppc/include/rte_rwlock.h +++ b/lib/eal/ppc/include/rte_rwlock.h @@ -11,24 +11,28 @@ extern "C" { static inline void rte_rwlock_read_lock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { rte_rwlock_read_lock(rwl); } static inline void rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { rte_rwlock_read_unlock(rwl); } static inline void rte_rwlock_write_lock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { rte_rwlock_write_lock(rwl); } static inline void rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { rte_rwlock_write_unlock(rwl); } diff --git a/lib/eal/ppc/include/rte_spinlock.h b/lib/eal/ppc/include/rte_spinlock.h index 149ec245c7..979abea16d 100644 --- a/lib/eal/ppc/include/rte_spinlock.h +++ b/lib/eal/ppc/include/rte_spinlock.h @@ -20,6 +20,7 @@ extern "C" { static inline void rte_spinlock_lock(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { while (__sync_lock_test_and_set(&sl->locked, 1)) while (sl->locked) @@ -28,12 +29,14 @@ rte_spinlock_lock(rte_spinlock_t *sl) static inline void rte_spinlock_unlock(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { __sync_lock_release(&sl->locked); } static inline int rte_spinlock_trylock(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { return __sync_lock_test_and_set(&sl->locked, 1) == 0; } @@ -47,36 +50,42 @@ static inline int rte_tm_supported(void) static inline void rte_spinlock_lock_tm(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { rte_spinlock_lock(sl); /* fall-back */ } static inline int rte_spinlock_trylock_tm(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { return rte_spinlock_trylock(sl); } static inline void rte_spinlock_unlock_tm(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { rte_spinlock_unlock(sl); } static inline void rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { rte_spinlock_recursive_lock(slr); /* fall-back */ } static inline void rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { rte_spinlock_recursive_unlock(slr); } static inline int rte_spinlock_recursive_trylock_tm(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { return rte_spinlock_recursive_trylock(slr); } diff --git a/lib/eal/x86/include/rte_rwlock.h b/lib/eal/x86/include/rte_rwlock.h index eec4c7123c..1796b69265 100644 --- a/lib/eal/x86/include/rte_rwlock.h +++ b/lib/eal/x86/include/rte_rwlock.h @@ -14,6 +14,7 @@ extern "C" { static inline void rte_rwlock_read_lock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { if (likely(rte_try_tm(&rwl->cnt))) return; @@ -22,6 +23,7 @@ rte_rwlock_read_lock_tm(rte_rwlock_t *rwl) static inline void rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { if (unlikely(rwl->cnt)) rte_rwlock_read_unlock(rwl); @@ -31,6 +33,7 @@ rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl) static inline void rte_rwlock_write_lock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { if (likely(rte_try_tm(&rwl->cnt))) return; @@ -39,6 +42,7 @@ 
rte_rwlock_write_lock_tm(rte_rwlock_t *rwl) static inline void rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { if (unlikely(rwl->cnt)) rte_rwlock_write_unlock(rwl); diff --git a/lib/eal/x86/include/rte_spinlock.h b/lib/eal/x86/include/rte_spinlock.h index e2e2b2643c..0b20ddfd73 100644 --- a/lib/eal/x86/include/rte_spinlock.h +++ b/lib/eal/x86/include/rte_spinlock.h @@ -23,6 +23,7 @@ extern "C" { #ifndef RTE_FORCE_INTRINSICS static inline void rte_spinlock_lock(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { int lock_val = 1; asm volatile ( @@ -43,6 +44,7 @@ rte_spinlock_lock(rte_spinlock_t *sl) static inline void rte_spinlock_unlock (rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { int unlock_val = 0; asm volatile ( @@ -54,6 +56,7 @@ rte_spinlock_unlock (rte_spinlock_t *sl) static inline int rte_spinlock_trylock (rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { int lockval = 1; @@ -121,6 +124,7 @@ rte_try_tm(volatile int *lock) static inline void rte_spinlock_lock_tm(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { if (likely(rte_try_tm(&sl->locked))) return; @@ -130,6 +134,7 @@ rte_spinlock_lock_tm(rte_spinlock_t *sl) static inline int rte_spinlock_trylock_tm(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { if (likely(rte_try_tm(&sl->locked))) return 1; @@ -139,6 +144,7 @@ rte_spinlock_trylock_tm(rte_spinlock_t *sl) static inline void rte_spinlock_unlock_tm(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { if (unlikely(sl->locked)) rte_spinlock_unlock(sl); @@ -148,6 +154,7 @@ rte_spinlock_unlock_tm(rte_spinlock_t *sl) static inline void rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { if (likely(rte_try_tm(&slr->sl.locked))) return; @@ -157,6 +164,7 @@ rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr) static inline void rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { if (unlikely(slr->sl.locked)) rte_spinlock_recursive_unlock(slr); @@ -166,6 +174,7 @@ rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr) static inline int rte_spinlock_recursive_trylock_tm(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { if (likely(rte_try_tm(&slr->sl.locked))) return 1; diff --git a/lib/meson.build b/lib/meson.build index 24adbe44c9..909133ea64 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -112,6 +112,7 @@ foreach l:libraries reason = '' # set if build == false to explain why name = l use_function_versioning = false + annotate_locks = false sources = [] headers = [] indirect_headers = [] # public headers not directly included by apps @@ -184,6 +185,10 @@ foreach l:libraries cflags += '-DRTE_USE_FUNCTION_VERSIONING' endif cflags += '-DRTE_LOG_DEFAULT_LOGTYPE=lib.' 
+ l + if annotate_locks and cc.has_argument('-Wthread-safety') + cflags += '-DRTE_ANNOTATE_LOCKS' + cflags += '-Wthread-safety' + endif # first build static lib static_lib = static_library(libname, From patchwork Mon Apr 11 11:00:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 109581 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 49316A0093; Mon, 11 Apr 2022 13:00:43 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E06D2427EE; Mon, 11 Apr 2022 13:00:39 +0200 (CEST) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mails.dpdk.org (Postfix) with ESMTP id DDCC44161A for ; Mon, 11 Apr 2022 13:00:37 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649674837; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=rnGMuCAQVClq74Wd6lcHEc2DXYy3mVbTEZ9a4m59z04=; b=FjNs84Y/98VH+ogvkYxfGXP31rgnHKjeq8Zk3QogY4c2oioOorhxiZjZB8FLywLj2pSDyq snDaG8eMq5Du/0+a8FScHDAN5DNODH3wtrBPKl70aS6Ya5KQ4rugX7rXsu0mDPUhT5/J5E 1JBoRvVMSCYiuk6huQkXbk3FeDszksw= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-154-f0cup77SNY2fvF6Jox6wfg-1; Mon, 11 Apr 2022 07:00:31 -0400 X-MC-Unique: f0cup77SNY2fvF6Jox6wfg-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com [10.11.54.7]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 0B7E03C18524; Mon, 11 Apr 2022 11:00:31 +0000 (UTC) Received: from dmarchan.remote.csb (unknown [10.40.192.83]) by smtp.corp.redhat.com (Postfix) with ESMTP id 9BEE1145B989; Mon, 11 Apr 2022 11:00:29 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com Subject: [RFC PATCH v3 2/8] vhost: annotate virtqueue access lock Date: Mon, 11 Apr 2022 13:00:07 +0200 Message-Id: <20220411110013.18624-3-david.marchand@redhat.com> In-Reply-To: <20220411110013.18624-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20220411110013.18624-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david.marchand@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This change simply annotates existing paths of the code leading to manipulations of the vq->access_lock. 
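In condensed form, the pattern applied below looks like this (a simplified sketch, not the actual vhost code; the struct and function names are stand-ins):

    #include <rte_spinlock.h>
    #include <rte_lock_annotations.h>

    struct vq {
            rte_spinlock_t access_lock;
            unsigned int inflight __rte_guarded_by(access_lock);
    };

    /* Internal helper: clang verifies every caller holds vq->access_lock. */
    static unsigned int
    poll_completed(struct vq *vq)
            __rte_exclusive_locks_required(vq->access_lock)
    {
            return vq->inflight;
    }

    /* Public entry point: takes the lock, then calls the annotated helper. */
    static unsigned int
    poll_completed_safe(struct vq *vq)
    {
            unsigned int n;

            rte_spinlock_lock(&vq->access_lock);
            n = poll_completed(vq);
            rte_spinlock_unlock(&vq->access_lock);

            return n;
    }

Because the annotation names vq->access_lock, the vq pointer must appear in the annotated prototype, which is why the helper below now takes vq directly.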
One small change is required: vhost_poll_enqueue_completed was getting a queue_id to get hold of the vq, while its callers already knew of the vq. For the annotation sake, vq is now directly passed. vhost_user_lock_all_queue_pairs and vhost_user_unlock_all_queue_pairs are skipped since vq->access_lock are conditionally held. Signed-off-by: David Marchand Reviewed-by: Maxime Coquelin --- lib/vhost/vhost.h | 2 ++ lib/vhost/vhost_user.c | 2 ++ lib/vhost/virtio_net.c | 16 ++++++++++++---- 3 files changed, 16 insertions(+), 4 deletions(-) diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index a9edc271aa..4a0aa35306 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -834,6 +834,7 @@ vhost_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old) static __rte_always_inline void vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq) + __rte_exclusive_locks_required(vq->access_lock) { /* Flush used->idx update before we read avail->flags. */ rte_atomic_thread_fence(__ATOMIC_SEQ_CST); @@ -872,6 +873,7 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq) static __rte_always_inline void vhost_vring_call_packed(struct virtio_net *dev, struct vhost_virtqueue *vq) + __rte_exclusive_locks_required(vq->access_lock) { uint16_t old, new, off, off_wrap; bool signalled_used_valid, kick = false; diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index 1d390677fa..dfa24fec09 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -2909,6 +2909,7 @@ vhost_user_check_and_alloc_queue_pair(struct virtio_net *dev, static void vhost_user_lock_all_queue_pairs(struct virtio_net *dev) + __rte_no_thread_safety_analysis { unsigned int i = 0; unsigned int vq_num = 0; @@ -2926,6 +2927,7 @@ vhost_user_lock_all_queue_pairs(struct virtio_net *dev) static void vhost_user_unlock_all_queue_pairs(struct virtio_net *dev) + __rte_no_thread_safety_analysis { unsigned int i = 0; unsigned int vq_num = 0; diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 5f432b0d77..a8b91d4b20 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -1246,6 +1246,7 @@ vhost_enqueue_single_packed(struct virtio_net *dev, static __rte_noinline uint32_t virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint32_t count) + __rte_exclusive_locks_required(vq->access_lock) { uint32_t pkt_idx = 0; uint16_t num_buffers; @@ -1441,6 +1442,7 @@ virtio_dev_rx_packed(struct virtio_net *dev, struct vhost_virtqueue *__rte_restrict vq, struct rte_mbuf **__rte_restrict pkts, uint32_t count) + __rte_exclusive_locks_required(vq->access_lock) { uint32_t pkt_idx = 0; @@ -1955,11 +1957,11 @@ write_back_completed_descs_packed(struct vhost_virtqueue *vq, } static __rte_always_inline uint16_t -vhost_poll_enqueue_completed(struct virtio_net *dev, uint16_t queue_id, +vhost_poll_enqueue_completed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id) + __rte_exclusive_locks_required(vq->access_lock) { - struct vhost_virtqueue *vq = dev->virtqueue[queue_id]; struct vhost_async *async = vq->async; struct async_inflight_info *pkts_info = async->pkts_info; uint16_t nr_cpl_pkts = 0; @@ -2062,7 +2064,7 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id, goto out; } - n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id); + n_pkts_cpl = vhost_poll_enqueue_completed(dev, vq, pkts, count, dma_id, vchan_id); out: 
rte_spinlock_unlock(&vq->access_lock); @@ -2104,7 +2106,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, return 0; } - n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id); + n_pkts_cpl = vhost_poll_enqueue_completed(dev, vq, pkts, count, dma_id, vchan_id); return n_pkts_cpl; } @@ -2679,6 +2681,7 @@ static uint16_t virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, bool legacy_ol_flags) + __rte_exclusive_locks_required(vq->access_lock) { uint16_t i; uint16_t free_entries; @@ -2774,6 +2777,7 @@ static uint16_t virtio_dev_tx_split_legacy(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count) + __rte_exclusive_locks_required(vq->access_lock) { return virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count, true); } @@ -2783,6 +2787,7 @@ static uint16_t virtio_dev_tx_split_compliant(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count) + __rte_exclusive_locks_required(vq->access_lock) { return virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count, false); } @@ -2982,6 +2987,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct rte_mbuf **__rte_restrict pkts, uint32_t count, bool legacy_ol_flags) + __rte_exclusive_locks_required(vq->access_lock) { uint32_t pkt_idx = 0; @@ -3025,6 +3031,7 @@ static uint16_t virtio_dev_tx_packed_legacy(struct virtio_net *dev, struct vhost_virtqueue *__rte_restrict vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **__rte_restrict pkts, uint32_t count) + __rte_exclusive_locks_required(vq->access_lock) { return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count, true); } @@ -3034,6 +3041,7 @@ static uint16_t virtio_dev_tx_packed_compliant(struct virtio_net *dev, struct vhost_virtqueue *__rte_restrict vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **__rte_restrict pkts, uint32_t count) + __rte_exclusive_locks_required(vq->access_lock) { return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count, false); } From patchwork Mon Apr 11 11:00:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 109582 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1931EA0093; Mon, 11 Apr 2022 13:00:49 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C3385427F3; Mon, 11 Apr 2022 13:00:40 +0200 (CEST) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mails.dpdk.org (Postfix) with ESMTP id DDC7B427ED for ; Mon, 11 Apr 2022 13:00:39 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649674839; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=EbR7AF8gCYaQhhIbs8PPrDbSq300cUfE7XrEK1eWFkY=; b=LfwYkSQFupKwVWl6OUnLJGf4a6WFT3GxShxqu7oHTGQxD97YDy8AxVkuf2c/bU+vNgsMco VtBy2uFDK0LwWHQNKm++trX69ijSTkgLM7ydxOcN8lv/+ydxevs0nlbR6HQ9wpbmENMf1T d2EOPrv7fWAU4XPk1aL/etwSEippjxM= 
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-396-WCz-wDa2ORWpXo4QeT6EsA-1; Mon, 11 Apr 2022 07:00:36 -0400 X-MC-Unique: WCz-wDa2ORWpXo4QeT6EsA-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com [10.11.54.7]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 7FAFA804184; Mon, 11 Apr 2022 11:00:35 +0000 (UTC) Received: from dmarchan.remote.csb (unknown [10.40.192.83]) by smtp.corp.redhat.com (Postfix) with ESMTP id A40A9145B98E; Mon, 11 Apr 2022 11:00:33 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com, stable@dpdk.org, Sunil Pai G , Patrick Fu Subject: [RFC PATCH v3 3/8] vhost: fix async access Date: Mon, 11 Apr 2022 13:00:08 +0200 Message-Id: <20220411110013.18624-4-david.marchand@redhat.com> In-Reply-To: <20220411110013.18624-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20220411110013.18624-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david.marchand@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org vq->async accesses must be protected with vq->access_lock. Fixes: eb666d24085f ("vhost: fix async unregister deadlock") Fixes: 0c0935c5f794 ("vhost: allow to check in-flight packets for async vhost") Cc: stable@dpdk.org Signed-off-by: David Marchand Acked-by: Sunil Pai G Reviewed-by: Maxime Coquelin --- lib/vhost/vhost.c | 25 ++++++++++--------------- 1 file changed, 10 insertions(+), 15 deletions(-) diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index 2f96a28dac..a93e41f314 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -1753,27 +1753,23 @@ rte_vhost_async_channel_unregister(int vid, uint16_t queue_id) if (vq == NULL) return ret; - ret = 0; - - if (!vq->async) - return ret; - if (!rte_spinlock_trylock(&vq->access_lock)) { VHOST_LOG_CONFIG(ERR, "(%s) failed to unregister async channel, virtqueue busy.\n", dev->ifname); - return -1; + return ret; } - if (vq->async->pkts_inflight_n) { + if (!vq->async) { + ret = 0; + } else if (vq->async->pkts_inflight_n) { VHOST_LOG_CONFIG(ERR, "(%s) failed to unregister async channel.\n", dev->ifname); VHOST_LOG_CONFIG(ERR, "(%s) inflight packets must be completed before unregistration.\n", dev->ifname); - ret = -1; - goto out; + } else { + vhost_free_async_mem(vq); + ret = 0; } - vhost_free_async_mem(vq); -out: rte_spinlock_unlock(&vq->access_lock); return ret; @@ -1891,9 +1887,6 @@ rte_vhost_async_get_inflight(int vid, uint16_t queue_id) if (vq == NULL) return ret; - if (!vq->async) - return ret; - if (!rte_spinlock_trylock(&vq->access_lock)) { VHOST_LOG_CONFIG(DEBUG, "(%s) failed to check in-flight packets. 
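The effect can be sketched as follows (reduced, hypothetical code; the real field lives in struct vhost_virtqueue):

    #include <stdbool.h>
    #include <rte_spinlock.h>
    #include <rte_lock_annotations.h>

    struct vhost_async;

    struct vq {
            rte_spinlock_t access_lock;
            /* Must be accessed with some lock held; no specific lock is named. */
            struct vhost_async *async __rte_guarded_var;
    };

    /* Checked path: clang verifies access_lock is held by all callers. */
    static bool
    async_registered(struct vq *vq)
            __rte_exclusive_locks_required(vq->access_lock)
    {
            return vq->async != NULL;
    }

    /* "_thread_unsafe" path: the caller guarantees exclusivity by other
     * means, so the analysis is disabled for this function. */
    static bool
    async_registered_thread_unsafe(struct vq *vq)
            __rte_no_thread_safety_analysis
    {
            return vq->async != NULL;
    }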
virtqueue busy.\n", @@ -1901,7 +1894,9 @@ rte_vhost_async_get_inflight(int vid, uint16_t queue_id) return ret; } - ret = vq->async->pkts_inflight_n; + if (vq->async) + ret = vq->async->pkts_inflight_n; + rte_spinlock_unlock(&vq->access_lock); return ret; From patchwork Mon Apr 11 11:00:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 109583 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id F30B4A0093; Mon, 11 Apr 2022 13:00:55 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C72ED427FB; Mon, 11 Apr 2022 13:00:45 +0200 (CEST) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mails.dpdk.org (Postfix) with ESMTP id 659DE427FE for ; Mon, 11 Apr 2022 13:00:43 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649674842; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=fVvdbwrunLXXMS6fLUmusvGEXzKAfdcox8VnV8FtTKk=; b=K3mzOFwJMmgEiX+CoXv2k64I3HG5DB+YzaQj20xTsWC5QHi80XX3YyeNEpRn/UTKM92jz9 ZRz8o6kS4DeHPofH8b9IncvARjGMrh3csXweF027QKetDrqtQAWJ8G+wtIZvSlBeSrwNs1 DJojUTIo5gZwHh04oJfDYoE50FICn0g= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-288-hafzMZzFNZSXakn6KqrzAQ-1; Mon, 11 Apr 2022 07:00:39 -0400 X-MC-Unique: hafzMZzFNZSXakn6KqrzAQ-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com [10.11.54.7]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 711AB3811A2F; Mon, 11 Apr 2022 11:00:39 +0000 (UTC) Received: from dmarchan.remote.csb (unknown [10.40.192.83]) by smtp.corp.redhat.com (Postfix) with ESMTP id 0BDFD140EBD5; Mon, 11 Apr 2022 11:00:37 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com Subject: [RFC PATCH v3 4/8] vhost: annotate async accesses Date: Mon, 11 Apr 2022 13:00:09 +0200 Message-Id: <20220411110013.18624-5-david.marchand@redhat.com> In-Reply-To: <20220411110013.18624-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20220411110013.18624-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david.marchand@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org vq->async is initialised and must be accessed under vq->access_lock. 
Top level "_thread_unsafe" functions could be checked at runtime (clang provides a lock aware assert()-like check), but they are simply skipped because those functions are not called in-tree, and as a result, their annotations would not be tested. Signed-off-by: David Marchand Reviewed-by: Maxime Coquelin --- lib/vhost/vhost.c | 14 +++++++++----- lib/vhost/vhost.h | 2 +- lib/vhost/vhost_user.c | 2 ++ lib/vhost/virtio_net.c | 15 +++++++++++++++ 4 files changed, 27 insertions(+), 6 deletions(-) diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index a93e41f314..79236df000 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -334,6 +334,7 @@ cleanup_device(struct virtio_net *dev, int destroy) static void vhost_free_async_mem(struct vhost_virtqueue *vq) + __rte_exclusive_locks_required(vq->access_lock) { if (!vq->async) return; @@ -352,6 +353,7 @@ vhost_free_async_mem(struct vhost_virtqueue *vq) void free_vq(struct virtio_net *dev, struct vhost_virtqueue *vq) + __rte_no_thread_safety_analysis { if (vq_is_packed(dev)) rte_free(vq->shadow_used_packed); @@ -1622,10 +1624,10 @@ rte_vhost_extern_callback_register(int vid, } static __rte_always_inline int -async_channel_register(int vid, uint16_t queue_id) +async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq, + uint16_t queue_id) + __rte_exclusive_locks_required(vq->access_lock) { - struct virtio_net *dev = get_device(vid); - struct vhost_virtqueue *vq = dev->virtqueue[queue_id]; struct vhost_async *async; int node = vq->numa_node; @@ -1709,7 +1711,7 @@ rte_vhost_async_channel_register(int vid, uint16_t queue_id) return -1; rte_spinlock_lock(&vq->access_lock); - ret = async_channel_register(vid, queue_id); + ret = async_channel_register(dev, vq, queue_id); rte_spinlock_unlock(&vq->access_lock); return ret; @@ -1717,6 +1719,7 @@ rte_vhost_async_channel_register(int vid, uint16_t queue_id) int rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id) + __rte_no_thread_safety_analysis { struct vhost_virtqueue *vq; struct virtio_net *dev = get_device(vid); @@ -1732,7 +1735,7 @@ rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id) if (unlikely(vq == NULL || !dev->async_copy)) return -1; - return async_channel_register(vid, queue_id); + return async_channel_register(dev, vq, queue_id); } int @@ -1777,6 +1780,7 @@ rte_vhost_async_channel_unregister(int vid, uint16_t queue_id) int rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id) + __rte_no_thread_safety_analysis { struct vhost_virtqueue *vq; struct virtio_net *dev = get_device(vid); diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index 4a0aa35306..c2065e5324 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -299,7 +299,7 @@ struct vhost_virtqueue { struct rte_vhost_resubmit_info *resubmit_inflight; uint64_t global_counter; - struct vhost_async *async; + struct vhost_async *async __rte_guarded_var; int notif_enable; #define VIRTIO_UNINITIALIZED_NOTIF (-1) diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index dfa24fec09..ee276a28f1 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -2199,6 +2199,8 @@ static int vhost_user_set_vring_enable(struct virtio_net **pdev, struct vhu_msg_context *ctx, int main_fd __rte_unused) + /* vq->access_lock is taken in vhost_user_lock_all_queue_pairs() */ + __rte_no_thread_safety_analysis { struct virtio_net *dev = *pdev; bool enable = !!ctx->msg.payload.state.num; diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 
a8b91d4b20..0c40d5a7a5 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -51,6 +51,7 @@ static __rte_always_inline int64_t vhost_async_dma_transfer_one(struct virtio_net *dev, struct vhost_virtqueue *vq, int16_t dma_id, uint16_t vchan_id, uint16_t flag_idx, struct vhost_iov_iter *pkt) + __rte_exclusive_locks_required(vq->access_lock) { struct async_dma_vchan_info *dma_info = &dma_copy_track[dma_id].vchans[vchan_id]; uint16_t ring_mask = dma_info->ring_mask; @@ -99,6 +100,7 @@ static __rte_always_inline uint16_t vhost_async_dma_transfer(struct virtio_net *dev, struct vhost_virtqueue *vq, int16_t dma_id, uint16_t vchan_id, uint16_t head_idx, struct vhost_iov_iter *pkts, uint16_t nr_pkts) + __rte_exclusive_locks_required(vq->access_lock) { struct async_dma_vchan_info *dma_info = &dma_copy_track[dma_id].vchans[vchan_id]; int64_t ret, nr_copies = 0; @@ -1000,6 +1002,7 @@ static __rte_always_inline int async_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *m, uint32_t mbuf_offset, uint64_t buf_iova, uint32_t cpy_len) + __rte_exclusive_locks_required(vq->access_lock) { struct vhost_async *async = vq->async; uint64_t mapped_len; @@ -1057,6 +1060,7 @@ static __rte_always_inline int mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *m, struct buf_vector *buf_vec, uint16_t nr_vec, uint16_t num_buffers, bool is_async) + __rte_exclusive_locks_required(vq->access_lock) { uint32_t vec_idx = 0; uint32_t mbuf_offset, mbuf_avail; @@ -1186,6 +1190,7 @@ vhost_enqueue_single_packed(struct virtio_net *dev, struct rte_mbuf *pkt, struct buf_vector *buf_vec, uint16_t *nr_descs) + __rte_exclusive_locks_required(vq->access_lock) { uint16_t nr_vec = 0; uint16_t avail_idx = vq->last_avail_idx; @@ -1417,6 +1422,7 @@ static __rte_always_inline int16_t virtio_dev_rx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *pkt) + __rte_exclusive_locks_required(vq->access_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; uint16_t nr_descs = 0; @@ -1541,6 +1547,7 @@ rte_vhost_enqueue_burst(int vid, uint16_t queue_id, static __rte_always_inline uint16_t async_get_first_inflight_pkt_idx(struct vhost_virtqueue *vq) + __rte_exclusive_locks_required(vq->access_lock) { struct vhost_async *async = vq->async; @@ -1587,6 +1594,7 @@ static __rte_noinline uint32_t virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue *vq, uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id) + __rte_exclusive_locks_required(vq->access_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; uint32_t pkt_idx = 0; @@ -1691,6 +1699,7 @@ vhost_enqueue_async_packed(struct virtio_net *dev, struct buf_vector *buf_vec, uint16_t *nr_descs, uint16_t *nr_buffers) + __rte_exclusive_locks_required(vq->access_lock) { uint16_t nr_vec = 0; uint16_t avail_idx = vq->last_avail_idx; @@ -1748,6 +1757,7 @@ vhost_enqueue_async_packed(struct virtio_net *dev, static __rte_always_inline int16_t virtio_dev_rx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *pkt, uint16_t *nr_descs, uint16_t *nr_buffers) + __rte_exclusive_locks_required(vq->access_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; @@ -1766,6 +1776,7 @@ virtio_dev_rx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, static __rte_always_inline void dma_error_handler_packed(struct vhost_virtqueue *vq, uint16_t slot_idx, uint32_t nr_err, uint32_t *pkt_idx) + 
__rte_exclusive_locks_required(vq->access_lock) { uint16_t descs_err = 0; uint16_t buffers_err = 0; @@ -1793,6 +1804,7 @@ static __rte_noinline uint32_t virtio_dev_rx_async_submit_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id) + __rte_exclusive_locks_required(vq->access_lock) { uint32_t pkt_idx = 0; uint32_t remained = count; @@ -1863,6 +1875,7 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev, struct vhost_virtqueue static __rte_always_inline void write_back_completed_descs_split(struct vhost_virtqueue *vq, uint16_t n_descs) + __rte_exclusive_locks_required(vq->access_lock) { struct vhost_async *async = vq->async; uint16_t nr_left = n_descs; @@ -1895,6 +1908,7 @@ write_back_completed_descs_split(struct vhost_virtqueue *vq, uint16_t n_descs) static __rte_always_inline void write_back_completed_descs_packed(struct vhost_virtqueue *vq, uint16_t n_buffers) + __rte_exclusive_locks_required(vq->access_lock) { struct vhost_async *async = vq->async; uint16_t from = async->last_buffer_idx_packed; @@ -2076,6 +2090,7 @@ uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id) + __rte_no_thread_safety_analysis { struct virtio_net *dev = get_device(vid); struct vhost_virtqueue *vq; From patchwork Mon Apr 11 11:00:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 109584 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3D8EAA0093; Mon, 11 Apr 2022 13:01:02 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C633E42800; Mon, 11 Apr 2022 13:00:49 +0200 (CEST) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mails.dpdk.org (Postfix) with ESMTP id E567A42802 for ; Mon, 11 Apr 2022 13:00:47 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649674847; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=qjo1x/2/56bkF14EaV+Qw26PWqPfUsOuM5SU5Rz+zOY=; b=YpURmkP+tXnm1QMPdpXdcvZ9dQa0K1oRL7CSwlKTKbOYDVNmhZ6vKCyBQ7I+Q9xSOybXGq 8BsZT86NF6fBjBBwrdUtsFORGtp9eMBaR4z+OztuGA631YDSqW4VFovAmeXj9Grh4cpcvu MkFXiH8i+CYLAFgrKAynE5mb1V1j3oE= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-20-s1JtIzuUM6-ETiiplbjgKg-1; Mon, 11 Apr 2022 07:00:44 -0400 X-MC-Unique: s1JtIzuUM6-ETiiplbjgKg-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com [10.11.54.7]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id EF95D1857F07; Mon, 11 Apr 2022 11:00:43 +0000 (UTC) Received: from dmarchan.remote.csb (unknown [10.40.192.83]) by smtp.corp.redhat.com (Postfix) with ESMTP id 11D74140EBD5; Mon, 11 Apr 2022 11:00:41 +0000 (UTC) 
From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com Subject: [RFC PATCH v3 5/8] vhost: annotate need reply handling Date: Mon, 11 Apr 2022 13:00:10 +0200 Message-Id: <20220411110013.18624-6-david.marchand@redhat.com> In-Reply-To: <20220411110013.18624-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20220411110013.18624-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david.marchand@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org When a reply from the slave is required (VHOST_USER_NEED_REPLY flag), a spinlock is taken before sending the message. This spinlock is released if an error occurs when sending the message, and once a reply is received. A problem is that this lock is taken under a branch and annotating conditionally held locks is not supported. The code seems currently correct and, while we may rework the code, it is easier to simply skip checks on slave_req_lock for those helpers. Signed-off-by: David Marchand Reviewed-by: Maxime Coquelin --- lib/vhost/vhost_user.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index ee276a28f1..d101d5072f 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -2854,6 +2854,7 @@ send_vhost_reply(struct virtio_net *dev, int sockfd, struct vhu_msg_context *ctx static int send_vhost_slave_message(struct virtio_net *dev, struct vhu_msg_context *ctx) + __rte_no_thread_safety_analysis { int ret; @@ -3165,6 +3166,7 @@ vhost_user_msg_handler(int vid, int fd) static int process_slave_message_reply(struct virtio_net *dev, const struct vhu_msg_context *ctx) + __rte_no_thread_safety_analysis { struct vhu_msg_context msg_reply; int ret; From patchwork Mon Apr 11 11:00:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 109585 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7F322A0093; Mon, 11 Apr 2022 13:01:08 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B991041157; Mon, 11 Apr 2022 13:00:53 +0200 (CEST) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mails.dpdk.org (Postfix) with ESMTP id C471242803 for ; Mon, 11 Apr 2022 13:00:51 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649674851; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=TvhwSYjMz20SABOWxzyr2HY5wl3VaZrqLfEMKi08jKU=; b=ilCu61jFSuQe49neftF3NnvYWf+xGARkVc+TTWygUDD8xoSOETzkKnMj9AnV6eeKiPgw2l 
37Qx27HLl2TBOtHZPXDvR7Fb10YIW1Rdaysq6JbSzXduFJTtaDCs7aCEg696VHkL+j4DZX +7Nrray6w03QBlJZx+QnRo8WZUp7tTs= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-504-lXEz8zhiP2ur7xAEi9lcrQ-1; Mon, 11 Apr 2022 07:00:48 -0400 X-MC-Unique: lXEz8zhiP2ur7xAEi9lcrQ-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com [10.11.54.7]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A903429DD9A7; Mon, 11 Apr 2022 11:00:47 +0000 (UTC) Received: from dmarchan.remote.csb (unknown [10.40.192.83]) by smtp.corp.redhat.com (Postfix) with ESMTP id 45F88140EBD5; Mon, 11 Apr 2022 11:00:46 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com Subject: [RFC PATCH v3 6/8] vhost: annotate vDPA device list accesses Date: Mon, 11 Apr 2022 13:00:11 +0200 Message-Id: <20220411110013.18624-7-david.marchand@redhat.com> In-Reply-To: <20220411110013.18624-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20220411110013.18624-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david.marchand@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org vdpa_device_list access must be protected with vdpa_device_list_lock spinlock. Signed-off-by: David Marchand Reviewed-by: Maxime Coquelin --- lib/vhost/vdpa.c | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-) diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c index 8fa2153023..a3f9f8f072 100644 --- a/lib/vhost/vdpa.c +++ b/lib/vhost/vdpa.c @@ -22,21 +22,24 @@ /** Double linked list of vDPA devices. 
*/ TAILQ_HEAD(vdpa_device_list, rte_vdpa_device); -static struct vdpa_device_list vdpa_device_list = - TAILQ_HEAD_INITIALIZER(vdpa_device_list); +static struct vdpa_device_list vdpa_device_list__ = + TAILQ_HEAD_INITIALIZER(vdpa_device_list__); static rte_spinlock_t vdpa_device_list_lock = RTE_SPINLOCK_INITIALIZER; +static struct vdpa_device_list * const vdpa_device_list + __rte_guarded_by(&vdpa_device_list_lock) = &vdpa_device_list__; /* Unsafe, needs to be called with vdpa_device_list_lock held */ static struct rte_vdpa_device * __vdpa_find_device_by_name(const char *name) + __rte_exclusive_locks_required(&vdpa_device_list_lock) { struct rte_vdpa_device *dev, *ret = NULL; if (name == NULL) return NULL; - TAILQ_FOREACH(dev, &vdpa_device_list, next) { + TAILQ_FOREACH(dev, vdpa_device_list, next) { if (!strncmp(dev->device->name, name, RTE_DEV_NAME_MAX_LEN)) { ret = dev; break; @@ -100,7 +103,7 @@ rte_vdpa_register_device(struct rte_device *rte_dev, dev->device = rte_dev; dev->ops = ops; - TAILQ_INSERT_TAIL(&vdpa_device_list, dev, next); + TAILQ_INSERT_TAIL(vdpa_device_list, dev, next); out_unlock: rte_spinlock_unlock(&vdpa_device_list_lock); @@ -114,11 +117,11 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev) int ret = -1; rte_spinlock_lock(&vdpa_device_list_lock); - RTE_TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) { + RTE_TAILQ_FOREACH_SAFE(cur_dev, vdpa_device_list, next, tmp_dev) { if (dev != cur_dev) continue; - TAILQ_REMOVE(&vdpa_device_list, dev, next); + TAILQ_REMOVE(vdpa_device_list, dev, next); rte_free(dev); ret = 0; break; @@ -316,7 +319,7 @@ vdpa_find_device(const struct rte_vdpa_device *start, rte_vdpa_cmp_t cmp, rte_spinlock_lock(&vdpa_device_list_lock); if (start == NULL) - dev = TAILQ_FIRST(&vdpa_device_list); + dev = TAILQ_FIRST(vdpa_device_list); else dev = TAILQ_NEXT(start, next); From patchwork Mon Apr 11 11:00:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 109586 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 80C5CA0093; Mon, 11 Apr 2022 13:01:16 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id EC5ED41156; Mon, 11 Apr 2022 13:00:59 +0200 (CEST) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mails.dpdk.org (Postfix) with ESMTP id 2B906406B4 for ; Mon, 11 Apr 2022 13:00:58 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649674857; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=3xqLsarT95U2CkBdT3YBJYlLKK6NBEbq0FuUwWzGOnc=; b=ENKhzh42OyOvW2w9RGB3X+c9kuTcjYd66L9e7H0vMTwFnc4JO9nwBFgT48Q3hJiBbOXSEC vdjoWeQX1dWqYAQR3FL2KUzj3ITJeNJx2wiqEkFveuYLxBEJDBg37l1mOaF2ajhurCAPUL cjpwC/0TYSefmY7Fm5ZHyXdKTxiF7is= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-656-eppNJBUPNoOE1VCrdTCzFQ-1; Mon, 11 Apr 2022 07:00:52 -0400 X-MC-Unique: 
eppNJBUPNoOE1VCrdTCzFQ-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com [10.11.54.7]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 0CBD9801585; Mon, 11 Apr 2022 11:00:52 +0000 (UTC) Received: from dmarchan.remote.csb (unknown [10.40.192.83]) by smtp.corp.redhat.com (Postfix) with ESMTP id 71DBB145B989; Mon, 11 Apr 2022 11:00:50 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com Subject: [RFC PATCH v3 7/8] vhost: annotate IOTLB locks Date: Mon, 11 Apr 2022 13:00:12 +0200 Message-Id: <20220411110013.18624-8-david.marchand@redhat.com> In-Reply-To: <20220411110013.18624-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20220411110013.18624-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david.marchand@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This change simply annotates existing paths of the code leading to manipulations of the IOTLB r/w locks. clang does not support conditionally held locks, so always take iotlb locks regardless of VIRTIO_F_IOMMU_PLATFORM feature. vdpa and vhost_crypto code are annotated though they end up not taking a IOTLB lock and have been marked with a FIXME. Signed-off-by: David Marchand Reviewed-by: Maxime Coquelin --- lib/vhost/iotlb.h | 8 +++++++ lib/vhost/vdpa.c | 1 + lib/vhost/vhost.c | 11 +++++---- lib/vhost/vhost.h | 22 +++++++++++++----- lib/vhost/vhost_crypto.c | 7 ++++++ lib/vhost/virtio_net.c | 49 ++++++++++++++++++++++++++++++---------- 6 files changed, 75 insertions(+), 23 deletions(-) diff --git a/lib/vhost/iotlb.h b/lib/vhost/iotlb.h index 8d0ff7473b..8b97d308d1 100644 --- a/lib/vhost/iotlb.h +++ b/lib/vhost/iotlb.h @@ -11,24 +11,32 @@ static __rte_always_inline void vhost_user_iotlb_rd_lock(struct vhost_virtqueue *vq) + __rte_shared_lock_function(vq->iotlb_lock) + __rte_no_thread_safety_analysis { rte_rwlock_read_lock(&vq->iotlb_lock); } static __rte_always_inline void vhost_user_iotlb_rd_unlock(struct vhost_virtqueue *vq) + __rte_unlock_function(vq->iotlb_lock) + __rte_no_thread_safety_analysis { rte_rwlock_read_unlock(&vq->iotlb_lock); } static __rte_always_inline void vhost_user_iotlb_wr_lock(struct vhost_virtqueue *vq) + __rte_exclusive_lock_function(vq->iotlb_lock) + __rte_no_thread_safety_analysis { rte_rwlock_write_lock(&vq->iotlb_lock); } static __rte_always_inline void vhost_user_iotlb_wr_unlock(struct vhost_virtqueue *vq) + __rte_unlock_function(vq->iotlb_lock) + __rte_no_thread_safety_analysis { rte_rwlock_write_unlock(&vq->iotlb_lock); } diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c index a3f9f8f072..22b8f48b39 100644 --- a/lib/vhost/vdpa.c +++ b/lib/vhost/vdpa.c @@ -133,6 +133,7 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev) int rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m) + __rte_no_thread_safety_analysis /* FIXME: requires iotlb_lock? 
*/ { struct virtio_net *dev = get_device(vid); uint16_t idx, idx_m, desc_id; diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index 79236df000..f96e8e0376 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -383,6 +383,7 @@ free_device(struct virtio_net *dev) static __rte_always_inline int log_translate(struct virtio_net *dev, struct vhost_virtqueue *vq) + __rte_shared_locks_required(vq->iotlb_lock) { if (likely(!(vq->ring_addrs.flags & (1 << VHOST_VRING_F_LOG)))) return 0; @@ -434,6 +435,7 @@ translate_log_addr(struct virtio_net *dev, struct vhost_virtqueue *vq, /* Caller should have iotlb_lock read-locked */ static int vring_translate_split(struct virtio_net *dev, struct vhost_virtqueue *vq) + __rte_shared_locks_required(vq->iotlb_lock) { uint64_t req_size, size; @@ -473,6 +475,7 @@ vring_translate_split(struct virtio_net *dev, struct vhost_virtqueue *vq) /* Caller should have iotlb_lock read-locked */ static int vring_translate_packed(struct virtio_net *dev, struct vhost_virtqueue *vq) + __rte_shared_locks_required(vq->iotlb_lock) { uint64_t req_size, size; @@ -527,10 +530,9 @@ vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq) } void -vring_invalidate(struct virtio_net *dev, struct vhost_virtqueue *vq) +vring_invalidate(struct virtio_net *dev __rte_unused, struct vhost_virtqueue *vq) { - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_wr_lock(vq); + vhost_user_iotlb_wr_lock(vq); vq->access_ok = false; vq->desc = NULL; @@ -538,8 +540,7 @@ vring_invalidate(struct virtio_net *dev, struct vhost_virtqueue *vq) vq->used = NULL; vq->log_guest_addr = 0; - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_wr_unlock(vq); + vhost_user_iotlb_wr_unlock(vq); } static void diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index c2065e5324..b5724c6d2a 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -525,12 +525,15 @@ void __vhost_log_cache_write(struct virtio_net *dev, uint64_t addr, uint64_t len); void __vhost_log_cache_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, - uint64_t iova, uint64_t len); + uint64_t iova, uint64_t len) + __rte_shared_locks_required(vq->iotlb_lock); void __vhost_log_cache_sync(struct virtio_net *dev, struct vhost_virtqueue *vq); + void __vhost_log_write(struct virtio_net *dev, uint64_t addr, uint64_t len); void __vhost_log_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, - uint64_t iova, uint64_t len); + uint64_t iova, uint64_t len) + __rte_shared_locks_required(vq->iotlb_lock); static __rte_always_inline void vhost_log_write(struct virtio_net *dev, uint64_t addr, uint64_t len) @@ -580,6 +583,7 @@ vhost_log_used_vring(struct virtio_net *dev, struct vhost_virtqueue *vq, static __rte_always_inline void vhost_log_cache_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, uint64_t iova, uint64_t len) + __rte_shared_locks_required(vq->iotlb_lock) { if (likely(!(dev->features & (1ULL << VHOST_F_LOG_ALL)))) return; @@ -593,6 +597,7 @@ vhost_log_cache_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, static __rte_always_inline void vhost_log_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, uint64_t iova, uint64_t len) + __rte_shared_locks_required(vq->iotlb_lock) { if (likely(!(dev->features & (1ULL << VHOST_F_LOG_ALL)))) return; @@ -796,18 +801,23 @@ struct rte_vhost_device_ops const *vhost_driver_callback_get(const char *path); void vhost_backend_cleanup(struct virtio_net *dev); uint64_t __vhost_iova_to_vva(struct virtio_net *dev, 
struct vhost_virtqueue *vq, - uint64_t iova, uint64_t *len, uint8_t perm); + uint64_t iova, uint64_t *len, uint8_t perm) + __rte_shared_locks_required(vq->iotlb_lock); void *vhost_alloc_copy_ind_table(struct virtio_net *dev, struct vhost_virtqueue *vq, - uint64_t desc_addr, uint64_t desc_len); -int vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq); + uint64_t desc_addr, uint64_t desc_len) + __rte_shared_locks_required(vq->iotlb_lock); +int vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq) + __rte_shared_locks_required(vq->iotlb_lock); uint64_t translate_log_addr(struct virtio_net *dev, struct vhost_virtqueue *vq, - uint64_t log_addr); + uint64_t log_addr) + __rte_shared_locks_required(vq->iotlb_lock); void vring_invalidate(struct virtio_net *dev, struct vhost_virtqueue *vq); static __rte_always_inline uint64_t vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq, uint64_t iova, uint64_t *len, uint8_t perm) + __rte_shared_locks_required(vq->iotlb_lock) { if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))) return rte_vhost_va_from_guest_pa(dev->mem, iova, len); diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c index b1c0eb6a0f..7f8d11ad9f 100644 --- a/lib/vhost/vhost_crypto.c +++ b/lib/vhost/vhost_crypto.c @@ -506,6 +506,7 @@ static __rte_always_inline struct virtio_crypto_inhdr * reach_inhdr(struct vhost_crypto_data_req *vc_req, struct vhost_crypto_desc *head, uint32_t max_n_descs) + __rte_shared_locks_required(vc_req->vq->iotlb_lock) { struct virtio_crypto_inhdr *inhdr; struct vhost_crypto_desc *last = head + (max_n_descs - 1); @@ -552,6 +553,7 @@ static __rte_always_inline void * get_data_ptr(struct vhost_crypto_data_req *vc_req, struct vhost_crypto_desc *cur_desc, uint8_t perm) + __rte_shared_locks_required(vc_req->vq->iotlb_lock) { void *data; uint64_t dlen = cur_desc->len; @@ -570,6 +572,7 @@ copy_data(void *dst_data, struct vhost_crypto_data_req *vc_req, struct vhost_crypto_desc *head, struct vhost_crypto_desc **cur_desc, uint32_t size, uint32_t max_n_descs) + __rte_shared_locks_required(vc_req->vq->iotlb_lock) { struct vhost_crypto_desc *desc = *cur_desc; uint64_t remain, addr, dlen, len; @@ -718,6 +721,7 @@ prepare_write_back_data(struct vhost_crypto_data_req *vc_req, uint32_t offset, uint64_t write_back_len, uint32_t max_n_descs) + __rte_shared_locks_required(vc_req->vq->iotlb_lock) { struct vhost_crypto_writeback_data *wb_data, *head; struct vhost_crypto_desc *desc = *cur_desc; @@ -838,6 +842,7 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op, struct virtio_crypto_cipher_data_req *cipher, struct vhost_crypto_desc *head, uint32_t max_n_descs) + __rte_shared_locks_required(vc_req->vq->iotlb_lock) { struct vhost_crypto_desc *desc = head; struct vhost_crypto_writeback_data *ewb = NULL; @@ -990,6 +995,7 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op, struct virtio_crypto_alg_chain_data_req *chain, struct vhost_crypto_desc *head, uint32_t max_n_descs) + __rte_shared_locks_required(vc_req->vq->iotlb_lock) { struct vhost_crypto_desc *desc = head, *digest_desc; struct vhost_crypto_writeback_data *ewb = NULL, *ewb2 = NULL; @@ -1172,6 +1178,7 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto, struct vhost_virtqueue *vq, struct rte_crypto_op *op, struct vring_desc *head, struct vhost_crypto_desc *descs, uint16_t desc_idx) + __rte_no_thread_safety_analysis /* FIXME: requires iotlb_lock? 
*/ { struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(op->sym->m_src); struct rte_cryptodev_sym_session *session; diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 0c40d5a7a5..8a3e3aa297 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -180,6 +180,7 @@ vhost_async_dma_check_completed(struct virtio_net *dev, int16_t dma_id, uint16_t static inline void do_data_copy_enqueue(struct virtio_net *dev, struct vhost_virtqueue *vq) + __rte_shared_locks_required(vq->iotlb_lock) { struct batch_copy_elem *elem = vq->batch_copy_elems; uint16_t count = vq->batch_copy_nb_elems; @@ -526,6 +527,7 @@ vhost_shadow_enqueue_single_packed(struct virtio_net *dev, uint16_t *id, uint16_t *count, uint16_t num_buffers) + __rte_shared_locks_required(vq->iotlb_lock) { vhost_shadow_enqueue_packed(vq, len, id, count, num_buffers); @@ -607,6 +609,7 @@ static __rte_always_inline int map_one_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, struct buf_vector *buf_vec, uint16_t *vec_idx, uint64_t desc_iova, uint64_t desc_len, uint8_t perm) + __rte_shared_locks_required(vq->iotlb_lock) { uint16_t vec_id = *vec_idx; @@ -644,6 +647,7 @@ fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq, uint32_t avail_idx, uint16_t *vec_idx, struct buf_vector *buf_vec, uint16_t *desc_chain_head, uint32_t *desc_chain_len, uint8_t perm) + __rte_shared_locks_required(vq->iotlb_lock) { uint16_t idx = vq->avail->ring[avail_idx & (vq->size - 1)]; uint16_t vec_id = *vec_idx; @@ -727,6 +731,7 @@ reserve_avail_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq, uint32_t size, struct buf_vector *buf_vec, uint16_t *num_buffers, uint16_t avail_head, uint16_t *nr_vec) + __rte_shared_locks_required(vq->iotlb_lock) { uint16_t cur_idx; uint16_t vec_idx = 0; @@ -777,6 +782,7 @@ fill_vec_buf_packed_indirect(struct virtio_net *dev, struct vhost_virtqueue *vq, struct vring_packed_desc *desc, uint16_t *vec_idx, struct buf_vector *buf_vec, uint32_t *len, uint8_t perm) + __rte_shared_locks_required(vq->iotlb_lock) { uint16_t i; uint32_t nr_descs; @@ -835,6 +841,7 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, uint16_t avail_idx, uint16_t *desc_count, struct buf_vector *buf_vec, uint16_t *vec_idx, uint16_t *buf_id, uint32_t *len, uint8_t perm) + __rte_shared_locks_required(vq->iotlb_lock) { bool wrap_counter = vq->avail_wrap_counter; struct vring_packed_desc *descs = vq->desc_packed; @@ -900,6 +907,7 @@ static __rte_noinline void copy_vnet_hdr_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, struct buf_vector *buf_vec, struct virtio_net_hdr_mrg_rxbuf *hdr) + __rte_shared_locks_required(vq->iotlb_lock) { uint64_t len; uint64_t remain = dev->vhost_hlen; @@ -1036,6 +1044,7 @@ static __rte_always_inline void sync_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *m, uint32_t mbuf_offset, uint64_t buf_addr, uint64_t buf_iova, uint32_t cpy_len) + __rte_shared_locks_required(vq->iotlb_lock) { struct batch_copy_elem *batch_copy = vq->batch_copy_elems; @@ -1061,6 +1070,7 @@ mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *m, struct buf_vector *buf_vec, uint16_t nr_vec, uint16_t num_buffers, bool is_async) __rte_exclusive_locks_required(vq->access_lock) + __rte_shared_locks_required(vq->iotlb_lock) { uint32_t vec_idx = 0; uint32_t mbuf_offset, mbuf_avail; @@ -1191,6 +1201,7 @@ vhost_enqueue_single_packed(struct virtio_net *dev, struct buf_vector *buf_vec, uint16_t *nr_descs) 
__rte_exclusive_locks_required(vq->access_lock) + __rte_shared_locks_required(vq->iotlb_lock) { uint16_t nr_vec = 0; uint16_t avail_idx = vq->last_avail_idx; @@ -1252,6 +1263,7 @@ static __rte_noinline uint32_t virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint32_t count) __rte_exclusive_locks_required(vq->access_lock) + __rte_shared_locks_required(vq->iotlb_lock) { uint32_t pkt_idx = 0; uint16_t num_buffers; @@ -1309,6 +1321,7 @@ virtio_dev_rx_sync_batch_check(struct virtio_net *dev, struct rte_mbuf **pkts, uint64_t *desc_addrs, uint64_t *lens) + __rte_shared_locks_required(vq->iotlb_lock) { bool wrap_counter = vq->avail_wrap_counter; struct vring_packed_desc *descs = vq->desc_packed; @@ -1360,6 +1373,7 @@ virtio_dev_rx_batch_packed_copy(struct virtio_net *dev, struct rte_mbuf **pkts, uint64_t *desc_addrs, uint64_t *lens) + __rte_shared_locks_required(vq->iotlb_lock) { uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf); struct virtio_net_hdr_mrg_rxbuf *hdrs[PACKED_BATCH_SIZE]; @@ -1401,6 +1415,7 @@ static __rte_always_inline int virtio_dev_rx_sync_batch_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts) + __rte_shared_locks_required(vq->iotlb_lock) { uint64_t desc_addrs[PACKED_BATCH_SIZE]; uint64_t lens[PACKED_BATCH_SIZE]; @@ -1423,6 +1438,7 @@ virtio_dev_rx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *pkt) __rte_exclusive_locks_required(vq->access_lock) + __rte_shared_locks_required(vq->iotlb_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; uint16_t nr_descs = 0; @@ -1449,6 +1465,7 @@ virtio_dev_rx_packed(struct virtio_net *dev, struct rte_mbuf **__rte_restrict pkts, uint32_t count) __rte_exclusive_locks_required(vq->access_lock) + __rte_shared_locks_required(vq->iotlb_lock) { uint32_t pkt_idx = 0; @@ -1501,8 +1518,7 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id, if (unlikely(!vq->enabled)) goto out_access_unlock; - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_rd_lock(vq); + vhost_user_iotlb_rd_lock(vq); if (unlikely(!vq->access_ok)) if (unlikely(vring_translate(dev, vq) < 0)) @@ -1518,8 +1534,7 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id, nb_tx = virtio_dev_rx_split(dev, vq, pkts, count); out: - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_rd_unlock(vq); + vhost_user_iotlb_rd_unlock(vq); out_access_unlock: rte_spinlock_unlock(&vq->access_lock); @@ -1595,6 +1610,7 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id) __rte_exclusive_locks_required(vq->access_lock) + __rte_shared_locks_required(vq->iotlb_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; uint32_t pkt_idx = 0; @@ -1700,6 +1716,7 @@ vhost_enqueue_async_packed(struct virtio_net *dev, uint16_t *nr_descs, uint16_t *nr_buffers) __rte_exclusive_locks_required(vq->access_lock) + __rte_shared_locks_required(vq->iotlb_lock) { uint16_t nr_vec = 0; uint16_t avail_idx = vq->last_avail_idx; @@ -1758,6 +1775,7 @@ static __rte_always_inline int16_t virtio_dev_rx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *pkt, uint16_t *nr_descs, uint16_t *nr_buffers) __rte_exclusive_locks_required(vq->access_lock) + __rte_shared_locks_required(vq->iotlb_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; @@ -1805,6 +1823,7 @@ virtio_dev_rx_async_submit_packed(struct virtio_net 
*dev, struct vhost_virtqueue uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id) __rte_exclusive_locks_required(vq->access_lock) + __rte_shared_locks_required(vq->iotlb_lock) { uint32_t pkt_idx = 0; uint32_t remained = count; @@ -2154,8 +2173,7 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id, if (unlikely(!vq->enabled || !vq->async)) goto out_access_unlock; - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_rd_lock(vq); + vhost_user_iotlb_rd_lock(vq); if (unlikely(!vq->access_ok)) if (unlikely(vring_translate(dev, vq) < 0)) @@ -2173,8 +2191,7 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id, pkts, count, dma_id, vchan_id); out: - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_rd_unlock(vq); + vhost_user_iotlb_rd_unlock(vq); out_access_unlock: rte_spinlock_unlock(&vq->access_lock); @@ -2697,6 +2714,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, bool legacy_ol_flags) __rte_exclusive_locks_required(vq->access_lock) + __rte_shared_locks_required(vq->iotlb_lock) { uint16_t i; uint16_t free_entries; @@ -2793,6 +2811,7 @@ virtio_dev_tx_split_legacy(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count) __rte_exclusive_locks_required(vq->access_lock) + __rte_shared_locks_required(vq->iotlb_lock) { return virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count, true); } @@ -2803,6 +2822,7 @@ virtio_dev_tx_split_compliant(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count) __rte_exclusive_locks_required(vq->access_lock) + __rte_shared_locks_required(vq->iotlb_lock) { return virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count, false); } @@ -2814,6 +2834,7 @@ vhost_reserve_avail_batch_packed(struct virtio_net *dev, uint16_t avail_idx, uintptr_t *desc_addrs, uint16_t *ids) + __rte_shared_locks_required(vq->iotlb_lock) { bool wrap = vq->avail_wrap_counter; struct vring_packed_desc *descs = vq->desc_packed; @@ -2883,6 +2904,7 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, bool legacy_ol_flags) + __rte_shared_locks_required(vq->iotlb_lock) { uint16_t avail_idx = vq->last_avail_idx; uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf); @@ -2929,6 +2951,7 @@ vhost_dequeue_single_packed(struct virtio_net *dev, uint16_t *buf_id, uint16_t *desc_count, bool legacy_ol_flags) + __rte_shared_locks_required(vq->iotlb_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; uint32_t buf_len; @@ -2972,6 +2995,7 @@ virtio_dev_tx_single_packed(struct virtio_net *dev, struct rte_mempool *mbuf_pool, struct rte_mbuf *pkts, bool legacy_ol_flags) + __rte_shared_locks_required(vq->iotlb_lock) { uint16_t buf_id, desc_count = 0; @@ -3003,6 +3027,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, uint32_t count, bool legacy_ol_flags) __rte_exclusive_locks_required(vq->access_lock) + __rte_shared_locks_required(vq->iotlb_lock) { uint32_t pkt_idx = 0; @@ -3047,6 +3072,7 @@ virtio_dev_tx_packed_legacy(struct virtio_net *dev, struct vhost_virtqueue *__rte_restrict vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **__rte_restrict pkts, uint32_t count) __rte_exclusive_locks_required(vq->access_lock) + __rte_shared_locks_required(vq->iotlb_lock) { return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, 
count, true); } @@ -3057,6 +3083,7 @@ virtio_dev_tx_packed_compliant(struct virtio_net *dev, struct vhost_virtqueue *__rte_restrict vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **__rte_restrict pkts, uint32_t count) __rte_exclusive_locks_required(vq->access_lock) + __rte_shared_locks_required(vq->iotlb_lock) { return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count, false); } @@ -3096,8 +3123,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, goto out_access_unlock; } - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_rd_lock(vq); + vhost_user_iotlb_rd_lock(vq); if (unlikely(!vq->access_ok)) if (unlikely(vring_translate(dev, vq) < 0)) { @@ -3153,8 +3179,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, } out: - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_rd_unlock(vq); + vhost_user_iotlb_rd_unlock(vq); out_access_unlock: rte_spinlock_unlock(&vq->access_lock); From patchwork Mon Apr 11 11:00:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 109587 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AF366A0093; Mon, 11 Apr 2022 13:01:23 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0CC10427F2; Mon, 11 Apr 2022 13:01:03 +0200 (CEST) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by mails.dpdk.org (Postfix) with ESMTP id C681B427F0 for ; Mon, 11 Apr 2022 13:01:01 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1649674861; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=m8/vg2xBPfDinzm3oZ8kGsm9iC7Pd0Gzozqy8aSbRA8=; b=Y6H/Io5KZ+zDMthvEd2etRdYjh25OpVJsuWEFuFZfbFg9O8wNy6a8uMwfMyjZIUo2r3UEV B6DYMUuuBZP9vjY8QxKTPwScC8idx9vJWnJy6ia5YL70CaedqFjLjfIf5QSF++8bNRJ3Au 4EOTOXVTf5xPpSr4VQFTbkekIhVmsIc= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-263-tFOSld7ANqui1qJkUkKxJA-1; Mon, 11 Apr 2022 07:00:56 -0400 X-MC-Unique: tFOSld7ANqui1qJkUkKxJA-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com [10.11.54.7]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id B4DBF2A59549; Mon, 11 Apr 2022 11:00:55 +0000 (UTC) Received: from dmarchan.remote.csb (unknown [10.40.192.83]) by smtp.corp.redhat.com (Postfix) with ESMTP id 54448140EBD5; Mon, 11 Apr 2022 11:00:54 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com Subject: [RFC PATCH v3 8/8] vhost: enable lock check Date: Mon, 11 Apr 2022 13:00:13 +0200 Message-Id: <20220411110013.18624-9-david.marchand@redhat.com> In-Reply-To: <20220411110013.18624-1-david.marchand@redhat.com> References: 
<20220328121758.26632-1-david.marchand@redhat.com> <20220411110013.18624-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.85 on 10.11.54.7 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david.marchand@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Now that all locks in this library are annotated, we can enable the check. Signed-off-by: David Marchand Reviewed-by: Maxime Coquelin --- lib/vhost/meson.build | 2 ++ 1 file changed, 2 insertions(+) diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build index bc7272053b..197a51d936 100644 --- a/lib/vhost/meson.build +++ b/lib/vhost/meson.build @@ -17,6 +17,8 @@ elif (toolchain == 'icc' and cc.version().version_compare('>=16.0.0')) endif dpdk_conf.set('RTE_LIBRTE_VHOST_POSTCOPY', cc.has_header('linux/userfaultfd.h')) cflags += '-fno-strict-aliasing' + +annotate_locks = true sources = files( 'fd_man.c', 'iotlb.c',
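
For reference, a minimal sketch of the usage pattern these annotations enable once a component sets annotate_locks = true. This is illustrative only and not part of the series: the counter/counter_lock names are hypothetical, and it assumes the EAL annotation macros and the annotated spinlock API introduced earlier in the series are visible through rte_spinlock.h.

#include <rte_spinlock.h>

static rte_spinlock_t counter_lock = RTE_SPINLOCK_INITIALIZER;
/* Reads and writes must happen with counter_lock held; clang warns otherwise. */
static unsigned int counter __rte_guarded_by(&counter_lock);

/* Unsafe, needs to be called with counter_lock held. */
static unsigned int
counter_read_locked(void)
	__rte_exclusive_locks_required(&counter_lock)
{
	return counter;
}

unsigned int
counter_read(void)
{
	unsigned int val;

	rte_spinlock_lock(&counter_lock);
	val = counter_read_locked();
	rte_spinlock_unlock(&counter_lock);

	return val;
}

This mirrors the vdpa_device_list / __vdpa_find_device_by_name() split in patch 6/8: the _locked helper carries the lock requirement, the public entry point takes and releases the lock, and any call path that reaches the helper or the guarded variable without holding the lock is reported by clang's thread safety analysis (-Wthread-safety) at build time.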