From patchwork Wed Mar 30 13:49:49 2022
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 109026
From: David Marchand
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, jiayu.hu@intel.com,
 yuanx.wang@intel.com, xuan.ding@intel.com, Ruifeng Wang, Jan Viktorin,
 David Christensen, Bruce Richardson, Konstantin Ananyev
Subject: [RFC PATCH v2 2/9] eal: annotate spinlock and rwlock
Date: Wed, 30 Mar 2022 15:49:49 +0200
Message-Id: <20220330134956.18927-3-david.marchand@redhat.com>
In-Reply-To: <20220330134956.18927-1-david.marchand@redhat.com>
References: <20220328121758.26632-1-david.marchand@redhat.com>
 <20220330134956.18927-1-david.marchand@redhat.com>

clang offers thread safety checks which statically verify that locks are
taken and released in the code. To use those checks, the full code paths
leading to taking or releasing locks must be annotated with some
attributes. Wrap those attributes into our own set of macros.

Only the rwlock and the "normal" spinlock are instrumented for now.
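As a minimal sketch of what this enables (not part of the patch; all
names below are hypothetical), a variable can be tied to the lock that
protects it, and a helper can advertise the lock it expects to be held:

    #include <stdint.h>

    #include <rte_lock_annotations.h>
    #include <rte_spinlock.h>

    static rte_spinlock_t stats_lock = RTE_SPINLOCK_INITIALIZER;
    /* Only valid to touch while stats_lock is held. */
    static uint64_t stats_counter RTE_GUARDED_BY(&stats_lock);

    /* Callers must enter with stats_lock held; clang checks each
     * call site against this contract. */
    static void
    stats_add(uint64_t v) RTE_EXC_LOCK_REQUIRES(&stats_lock)
    {
        stats_counter += v;
    }

    static void
    stats_update(uint64_t v)
    {
        rte_spinlock_lock(&stats_lock);   /* annotated as acquiring */
        stats_add(v);
        rte_spinlock_unlock(&stats_lock); /* annotated as releasing */
    }

Forgetting the unlock, or calling stats_add() without holding the lock,
is then reported as a -Wthread-safety warning at build time instead of
showing up as a runtime race.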
A component may enable this check by setting annotate_locks = true in its
meson.build.

Note: those checks might be of interest outside of DPDK, but they require
that the locks in the including application are annotated as well. On the
other hand, applications out there might already have been using those
same checks. To be on the safe side, keep this instrumentation under an
RTE_ANNOTATE_LOCKS internal build flag.

Signed-off-by: David Marchand
---
 drivers/meson.build                    |  5 ++
 lib/eal/arm/include/rte_rwlock.h       |  4 ++
 lib/eal/arm/include/rte_spinlock.h     |  6 +++
 lib/eal/include/generic/rte_rwlock.h   | 21 +++++---
 lib/eal/include/generic/rte_spinlock.h | 19 +++++---
 lib/eal/include/meson.build            |  1 +
 lib/eal/include/rte_lock_annotations.h | 67 ++++++++++++++++++++++++++
 lib/eal/ppc/include/rte_rwlock.h       |  4 ++
 lib/eal/ppc/include/rte_spinlock.h     |  9 ++++
 lib/eal/x86/include/rte_rwlock.h       |  4 ++
 lib/eal/x86/include/rte_spinlock.h     |  9 ++++
 lib/meson.build                        |  5 ++
 12 files changed, 141 insertions(+), 13 deletions(-)
 create mode 100644 lib/eal/include/rte_lock_annotations.h

diff --git a/drivers/meson.build b/drivers/meson.build
index 1d8123b00c..c81c8c0eff 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -87,6 +87,7 @@ foreach subpath:subdirs
         build = true # set to false to disable, e.g. missing deps
         reason = '' # set if build == false to explain
         name = drv
+        annotate_locks = false
         sources = []
         headers = []
         objs = []
@@ -152,6 +153,10 @@ foreach subpath:subdirs
         enabled_drivers += name
         lib_name = '_'.join(['rte', class, name])
         cflags += '-DRTE_LOG_DEFAULT_LOGTYPE=' + '.'.join([log_prefix, name])
+        if annotate_locks and cc.has_argument('-Wthread-safety')
+            cflags += '-DRTE_ANNOTATE_LOCKS'
+            cflags += '-Wthread-safety'
+        endif
         dpdk_conf.set(lib_name.to_upper(), 1)
         dpdk_extra_ldflags += pkgconfig_extra_libs
diff --git a/lib/eal/arm/include/rte_rwlock.h b/lib/eal/arm/include/rte_rwlock.h
index 18bb37b036..bea398b6f1 100644
--- a/lib/eal/arm/include/rte_rwlock.h
+++ b/lib/eal/arm/include/rte_rwlock.h
@@ -13,24 +13,28 @@ extern "C" {

 static inline void
 rte_rwlock_read_lock_tm(rte_rwlock_t *rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_rwlock_read_lock(rwl);
 }

 static inline void
 rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_rwlock_read_unlock(rwl);
 }

 static inline void
 rte_rwlock_write_lock_tm(rte_rwlock_t *rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_rwlock_write_lock(rwl);
 }

 static inline void
 rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_rwlock_write_unlock(rwl);
 }
diff --git a/lib/eal/arm/include/rte_spinlock.h b/lib/eal/arm/include/rte_spinlock.h
index a973763c23..d47b13070b 100644
--- a/lib/eal/arm/include/rte_spinlock.h
+++ b/lib/eal/arm/include/rte_spinlock.h
@@ -23,36 +23,42 @@ static inline int rte_tm_supported(void)

 static inline void
 rte_spinlock_lock_tm(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_spinlock_lock(sl); /* fall-back */
 }

 static inline int
 rte_spinlock_trylock_tm(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     return rte_spinlock_trylock(sl);
 }

 static inline void
 rte_spinlock_unlock_tm(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_spinlock_unlock(sl);
 }

 static inline void
 rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_spinlock_recursive_lock(slr); /* fall-back */
 }

 static inline void
 rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_spinlock_recursive_unlock(slr);
 }

 static inline int
 rte_spinlock_recursive_trylock_tm(rte_spinlock_recursive_t *slr)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     return rte_spinlock_recursive_trylock(slr);
 }
diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h
index da9bc3e9c0..dabbac0131 100644
--- a/lib/eal/include/generic/rte_rwlock.h
+++ b/lib/eal/include/generic/rte_rwlock.h
@@ -23,6 +23,7 @@ extern "C" {

 #include <rte_common.h>
 #include <rte_atomic.h>
+#include <rte_lock_annotations.h>
 #include <rte_pause.h>

 /**
@@ -30,7 +31,7 @@ extern "C" {
  *
  * cnt is -1 when write lock is held, and > 0 when read locks are held.
  */
-typedef struct {
+typedef struct RTE_ANNOTATED_LOCK {
     volatile int32_t cnt; /**< -1 when W lock held, > 0 when R locks held. */
 } rte_rwlock_t;

@@ -58,7 +59,8 @@ rte_rwlock_init(rte_rwlock_t *rwl)
  *   A pointer to a rwlock structure.
  */
 static inline void
-rte_rwlock_read_lock(rte_rwlock_t *rwl)
+rte_rwlock_read_lock(rte_rwlock_t *rwl) RTE_SHARED_LOCK_ACQUIRES(rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     int32_t x;
     int success = 0;
@@ -90,7 +92,8 @@ rte_rwlock_read_lock(rte_rwlock_t *rwl)
  */
 __rte_experimental
 static inline int
-rte_rwlock_read_trylock(rte_rwlock_t *rwl)
+rte_rwlock_read_trylock(rte_rwlock_t *rwl) RTE_SHARED_LOCK_TRYLOCK(1, rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     int32_t x;
     int success = 0;
@@ -114,7 +117,8 @@ rte_rwlock_read_trylock(rte_rwlock_t *rwl)
  *   A pointer to the rwlock structure.
  */
 static inline void
-rte_rwlock_read_unlock(rte_rwlock_t *rwl)
+rte_rwlock_read_unlock(rte_rwlock_t *rwl) RTE_LOCK_RELEASES(rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     __atomic_fetch_sub(&rwl->cnt, 1, __ATOMIC_RELEASE);
 }
@@ -134,7 +138,8 @@ rte_rwlock_read_unlock(rte_rwlock_t *rwl)
  */
 __rte_experimental
 static inline int
-rte_rwlock_write_trylock(rte_rwlock_t *rwl)
+rte_rwlock_write_trylock(rte_rwlock_t *rwl) RTE_EXC_LOCK_TRYLOCK(1, rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     int32_t x;
@@ -153,7 +158,8 @@ rte_rwlock_write_trylock(rte_rwlock_t *rwl)
  *   A pointer to a rwlock structure.
  */
 static inline void
-rte_rwlock_write_lock(rte_rwlock_t *rwl)
+rte_rwlock_write_lock(rte_rwlock_t *rwl) RTE_EXC_LOCK_ACQUIRES(rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     int32_t x;
     int success = 0;
@@ -177,7 +183,8 @@ rte_rwlock_write_lock(rte_rwlock_t *rwl)
  *   A pointer to a rwlock structure.
  */
 static inline void
-rte_rwlock_write_unlock(rte_rwlock_t *rwl)
+rte_rwlock_write_unlock(rte_rwlock_t *rwl) RTE_LOCK_RELEASES(rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     __atomic_store_n(&rwl->cnt, 0, __ATOMIC_RELEASE);
 }
diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h
index 40fe49d5ad..c54421f5d3 100644
--- a/lib/eal/include/generic/rte_spinlock.h
+++ b/lib/eal/include/generic/rte_spinlock.h
@@ -22,12 +22,13 @@
 #ifdef RTE_FORCE_INTRINSICS
 #include <rte_common.h>
 #endif
+#include <rte_lock_annotations.h>
 #include <rte_pause.h>

 /**
  * The rte_spinlock_t type.
  */
-typedef struct {
+typedef struct RTE_ANNOTATED_LOCK {
     volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
 } rte_spinlock_t;

@@ -55,11 +56,12 @@ rte_spinlock_init(rte_spinlock_t *sl)
  *   A pointer to the spinlock.
  */
 static inline void
-rte_spinlock_lock(rte_spinlock_t *sl);
+rte_spinlock_lock(rte_spinlock_t *sl) RTE_EXC_LOCK_ACQUIRES(sl);

 #ifdef RTE_FORCE_INTRINSICS
 static inline void
 rte_spinlock_lock(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     int exp = 0;

@@ -79,11 +81,12 @@ rte_spinlock_lock(rte_spinlock_t *sl)
  *   A pointer to the spinlock.
  */
 static inline void
-rte_spinlock_unlock (rte_spinlock_t *sl);
+rte_spinlock_unlock(rte_spinlock_t *sl) RTE_LOCK_RELEASES(sl);

 #ifdef RTE_FORCE_INTRINSICS
 static inline void
-rte_spinlock_unlock (rte_spinlock_t *sl)
+rte_spinlock_unlock(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     __atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE);
 }
@@ -98,11 +101,12 @@ rte_spinlock_unlock (rte_spinlock_t *sl)
  *   1 if the lock is successfully taken; 0 otherwise.
  */
 static inline int
-rte_spinlock_trylock (rte_spinlock_t *sl);
+rte_spinlock_trylock(rte_spinlock_t *sl) RTE_EXC_LOCK_TRYLOCK(1, sl);

 #ifdef RTE_FORCE_INTRINSICS
 static inline int
-rte_spinlock_trylock (rte_spinlock_t *sl)
+rte_spinlock_trylock(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     int exp = 0;
     return __atomic_compare_exchange_n(&sl->locked, &exp, 1,
@@ -211,6 +215,7 @@ static inline void rte_spinlock_recursive_init(rte_spinlock_recursive_t *slr)
  *   A pointer to the recursive spinlock.
  */
 static inline void rte_spinlock_recursive_lock(rte_spinlock_recursive_t *slr)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     int id = rte_gettid();

@@ -227,6 +232,7 @@ static inline void rte_spinlock_recursive_lock(rte_spinlock_recursive_t *slr)
  *   A pointer to the recursive spinlock.
  */
 static inline void rte_spinlock_recursive_unlock(rte_spinlock_recursive_t *slr)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     if (--(slr->count) == 0) {
         slr->user = -1;
@@ -244,6 +250,7 @@ static inline void rte_spinlock_recursive_unlock(rte_spinlock_recursive_t *slr)
  *   1 if the lock is successfully taken; 0 otherwise.
  */
 static inline int rte_spinlock_recursive_trylock(rte_spinlock_recursive_t *slr)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     int id = rte_gettid();
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 9700494816..a20b8d7e23 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -27,6 +27,7 @@ headers += files(
         'rte_keepalive.h',
         'rte_launch.h',
         'rte_lcore.h',
+        'rte_lock_annotations.h',
         'rte_log.h',
         'rte_malloc.h',
         'rte_memory.h',
diff --git a/lib/eal/include/rte_lock_annotations.h b/lib/eal/include/rte_lock_annotations.h
new file mode 100644
index 0000000000..143087a56c
--- /dev/null
+++ b/lib/eal/include/rte_lock_annotations.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Red Hat, Inc.
+ */
+
+#ifndef RTE_LOCK_ANNOTATIONS_H
+#define RTE_LOCK_ANNOTATIONS_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#ifdef RTE_ANNOTATE_LOCKS
+
+#define RTE_ANNOTATED_LOCK \
+    __attribute__((lockable))
+
+#define RTE_GUARDED_BY(...) \
+    __attribute__((guarded_by(__VA_ARGS__)))
+#define RTE_GUARDED_VAR \
+    __attribute__((guarded_var))
+
+#define RTE_EXC_LOCK_REQUIRES(...) \
+    __attribute__((exclusive_locks_required(__VA_ARGS__)))
+#define RTE_EXC_LOCK_ACQUIRES(...) \
+    __attribute__((exclusive_lock_function(__VA_ARGS__)))
+#define RTE_EXC_LOCK_TRYLOCK(ret, ...) \
+    __attribute__((exclusive_trylock_function(ret, __VA_ARGS__)))
+
+#define RTE_SHARED_LOCK_REQUIRES(...) \
+    __attribute__((shared_locks_required(__VA_ARGS__)))
+#define RTE_SHARED_LOCK_ACQUIRES(...) \
+    __attribute__((shared_lock_function(__VA_ARGS__)))
+#define RTE_SHARED_LOCK_TRYLOCK(ret, ...) \
+    __attribute__((shared_trylock_function(ret, __VA_ARGS__)))
+
+#define RTE_LOCK_RELEASES(...) \
+    __attribute__((unlock_function(__VA_ARGS__)))
+
+#define RTE_NO_ANNOTATED_LOCK_CHECK \
+    __attribute__((no_thread_safety_analysis))
+
+#else /* ! RTE_ANNOTATE_LOCKS */
+
+#define RTE_ANNOTATED_LOCK
+
+#define RTE_GUARDED_BY(...)
+#define RTE_GUARDED_VAR
+
+#define RTE_EXC_LOCK_REQUIRES(...)
+#define RTE_EXC_LOCK_ACQUIRES(...)
+#define RTE_EXC_LOCK_TRYLOCK(...)
+
+#define RTE_SHARED_LOCK_REQUIRES(...)
+#define RTE_SHARED_LOCK_ACQUIRES(...)
+#define RTE_SHARED_LOCK_TRYLOCK(...)
+
+#define RTE_LOCK_RELEASES(...)
+
+#define RTE_NO_ANNOTATED_LOCK_CHECK
+
+#endif /* RTE_ANNOTATE_LOCKS */
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* RTE_LOCK_ANNOTATIONS_H */
diff --git a/lib/eal/ppc/include/rte_rwlock.h b/lib/eal/ppc/include/rte_rwlock.h
index 9fadc04076..9260a9f0a2 100644
--- a/lib/eal/ppc/include/rte_rwlock.h
+++ b/lib/eal/ppc/include/rte_rwlock.h
@@ -11,24 +11,28 @@ extern "C" {

 static inline void
 rte_rwlock_read_lock_tm(rte_rwlock_t *rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_rwlock_read_lock(rwl);
 }

 static inline void
 rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_rwlock_read_unlock(rwl);
 }

 static inline void
 rte_rwlock_write_lock_tm(rte_rwlock_t *rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_rwlock_write_lock(rwl);
 }

 static inline void
 rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_rwlock_write_unlock(rwl);
 }
diff --git a/lib/eal/ppc/include/rte_spinlock.h b/lib/eal/ppc/include/rte_spinlock.h
index 149ec245c7..1d64535ddd 100644
--- a/lib/eal/ppc/include/rte_spinlock.h
+++ b/lib/eal/ppc/include/rte_spinlock.h
@@ -20,6 +20,7 @@ extern "C" {

 static inline void
 rte_spinlock_lock(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     while (__sync_lock_test_and_set(&sl->locked, 1))
         while (sl->locked)
@@ -28,12 +29,14 @@ rte_spinlock_lock(rte_spinlock_t *sl)

 static inline void
 rte_spinlock_unlock(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     __sync_lock_release(&sl->locked);
 }

 static inline int
 rte_spinlock_trylock(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     return __sync_lock_test_and_set(&sl->locked, 1) == 0;
 }
@@ -47,36 +50,42 @@ static inline int rte_tm_supported(void)

 static inline void
 rte_spinlock_lock_tm(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_spinlock_lock(sl); /* fall-back */
 }

 static inline int
 rte_spinlock_trylock_tm(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     return rte_spinlock_trylock(sl);
 }

 static inline void
 rte_spinlock_unlock_tm(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_spinlock_unlock(sl);
 }

 static inline void
 rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_spinlock_recursive_lock(slr); /* fall-back */
 }

 static inline void
 rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_spinlock_recursive_unlock(slr);
 }

 static inline int
 rte_spinlock_recursive_trylock_tm(rte_spinlock_recursive_t *slr)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     return rte_spinlock_recursive_trylock(slr);
 }
diff --git a/lib/eal/x86/include/rte_rwlock.h b/lib/eal/x86/include/rte_rwlock.h
index eec4c7123c..7857192d3e 100644
--- a/lib/eal/x86/include/rte_rwlock.h
+++ b/lib/eal/x86/include/rte_rwlock.h
@@ -14,6 +14,7 @@ extern "C" {

 static inline void
 rte_rwlock_read_lock_tm(rte_rwlock_t *rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     if (likely(rte_try_tm(&rwl->cnt)))
         return;
@@ -22,6 +23,7 @@ rte_rwlock_read_lock_tm(rte_rwlock_t *rwl)

 static inline void
 rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     if (unlikely(rwl->cnt))
         rte_rwlock_read_unlock(rwl);
@@ -31,6 +33,7 @@ rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl)

 static inline void
 rte_rwlock_write_lock_tm(rte_rwlock_t *rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     if (likely(rte_try_tm(&rwl->cnt)))
         return;
@@ -39,6 +42,7 @@ rte_rwlock_write_lock_tm(rte_rwlock_t *rwl)

 static inline void
 rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     if (unlikely(rwl->cnt))
         rte_rwlock_write_unlock(rwl);
diff --git a/lib/eal/x86/include/rte_spinlock.h b/lib/eal/x86/include/rte_spinlock.h
index e2e2b2643c..a0be70570b 100644
--- a/lib/eal/x86/include/rte_spinlock.h
+++ b/lib/eal/x86/include/rte_spinlock.h
@@ -23,6 +23,7 @@ extern "C" {
 #ifndef RTE_FORCE_INTRINSICS
 static inline void
 rte_spinlock_lock(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     int lock_val = 1;
     asm volatile (
@@ -43,6 +44,7 @@ rte_spinlock_lock(rte_spinlock_t *sl)

 static inline void
 rte_spinlock_unlock (rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     int unlock_val = 0;
     asm volatile (
@@ -54,6 +56,7 @@ rte_spinlock_unlock (rte_spinlock_t *sl)

 static inline int
 rte_spinlock_trylock (rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     int lockval = 1;

@@ -121,6 +124,7 @@ rte_try_tm(volatile int *lock)

 static inline void
 rte_spinlock_lock_tm(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     if (likely(rte_try_tm(&sl->locked)))
         return;
@@ -130,6 +134,7 @@ rte_spinlock_lock_tm(rte_spinlock_t *sl)

 static inline int
 rte_spinlock_trylock_tm(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     if (likely(rte_try_tm(&sl->locked)))
         return 1;
@@ -139,6 +144,7 @@ rte_spinlock_trylock_tm(rte_spinlock_t *sl)

 static inline void
 rte_spinlock_unlock_tm(rte_spinlock_t *sl)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     if (unlikely(sl->locked))
         rte_spinlock_unlock(sl);
@@ -148,6 +154,7 @@ rte_spinlock_unlock_tm(rte_spinlock_t *sl)

 static inline void
 rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     if (likely(rte_try_tm(&slr->sl.locked)))
         return;
@@ -157,6 +164,7 @@ rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr)

 static inline void
 rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     if (unlikely(slr->sl.locked))
         rte_spinlock_recursive_unlock(slr);
@@ -166,6 +174,7 @@ rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr)

 static inline int
 rte_spinlock_recursive_trylock_tm(rte_spinlock_recursive_t *slr)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     if (likely(rte_try_tm(&slr->sl.locked)))
         return 1;
diff --git a/lib/meson.build b/lib/meson.build
index 24adbe44c9..909133ea64 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -112,6 +112,7 @@ foreach l:libraries
         reason = '' # set if build == false to explain why
         name = l
         use_function_versioning = false
+        annotate_locks = false
         sources = []
         headers = []
         indirect_headers = [] # public headers not directly included by apps
@@ -184,6 +185,10 @@ foreach l:libraries
         cflags += '-DRTE_USE_FUNCTION_VERSIONING'
     endif
     cflags += '-DRTE_LOG_DEFAULT_LOGTYPE=lib.' + l
+    if annotate_locks and cc.has_argument('-Wthread-safety')
+        cflags += '-DRTE_ANNOTATE_LOCKS'
+        cflags += '-Wthread-safety'
+    endif

     # first build static lib
     static_lib = static_library(libname,
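The trylock annotations above declare which return value means the lock
was taken, so only the successful branch is treated as a critical
section. A minimal sketch of the effect (hypothetical names, not part of
the series):

    #include <rte_lock_annotations.h>
    #include <rte_spinlock.h>

    static rte_spinlock_t cfg_lock = RTE_SPINLOCK_INITIALIZER;
    static int cfg_dirty RTE_GUARDED_BY(&cfg_lock);

    static void
    cfg_mark_dirty(void)
    {
        /* rte_spinlock_trylock() is annotated with
         * RTE_EXC_LOCK_TRYLOCK(1, sl): a return value of 1 means the
         * lock is now held. */
        if (rte_spinlock_trylock(&cfg_lock) == 0)
            return; /* lock busy: cfg_dirty must not be touched */

        cfg_dirty = 1; /* ok: inside the successful-trylock branch */
        rte_spinlock_unlock(&cfg_lock);
    }

Touching cfg_dirty after the failed-trylock return, or on the path where
trylock returned 0, would be diagnosed by the analysis.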
+ l + if annotate_locks and cc.has_argument('-Wthread-safety') + cflags += '-DRTE_ANNOTATE_LOCKS' + cflags += '-Wthread-safety' + endif # first build static lib static_lib = static_library(libname, From patchwork Wed Mar 30 13:49:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 109025 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 325B6A050D; Wed, 30 Mar 2022 15:50:24 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4D80742870; Wed, 30 Mar 2022 15:50:22 +0200 (CEST) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mails.dpdk.org (Postfix) with ESMTP id 5E71A4285D for ; Wed, 30 Mar 2022 15:50:20 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1648648219; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/XX9h/QUkJXksVFe6bfewV0ta1vAvt1eZBjxxgUSk+8=; b=hwUY9VjLTP3O2w/TdGcfxXdi4SUcDc5y8P2uAtz0jpfy6VFZnVRFQhwLtpCNHYMnzHNe2O 4lR1UgudJZRbzb+BNLpSz5Oz0/qrD/Tt9sELjWaihWIMYDnSpUWbS/oNcAmtXn8+nSNWhy Vqc1/O7P02FTwZ0xKdKwW4j5x5rosg0= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-50-hIhJkQ-oPDeG6QVfQlO6Tg-1; Wed, 30 Mar 2022 09:50:18 -0400 X-MC-Unique: hIhJkQ-oPDeG6QVfQlO6Tg-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com [10.11.54.2]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 2BAB6899EC5; Wed, 30 Mar 2022 13:50:18 +0000 (UTC) Received: from dmarchan.remote.csb (unknown [10.40.195.27]) by smtp.corp.redhat.com (Postfix) with ESMTP id DABA340D2821; Wed, 30 Mar 2022 13:50:16 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com Subject: [RFC PATCH v2 3/9] vhost: annotate virtqueue access lock Date: Wed, 30 Mar 2022 15:49:50 +0200 Message-Id: <20220330134956.18927-4-david.marchand@redhat.com> In-Reply-To: <20220330134956.18927-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20220330134956.18927-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.11.54.2 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david.marchand@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This change simply annotates existing paths of the code leading to manipulations of the vq->access_lock. One small change is required: vhost_poll_enqueue_completed was getting a queue_id to get hold of the vq, while its callers already knew of the vq. 
For the annotation sake, vq is now directly passed. vhost_user_lock_all_queue_pairs and vhost_user_unlock_all_queue_pairs are skipped since vq->access_lock are conditionally held. Signed-off-by: David Marchand --- lib/vhost/vhost.h | 2 ++ lib/vhost/vhost_user.c | 2 ++ lib/vhost/virtio_net.c | 16 ++++++++++++---- 3 files changed, 16 insertions(+), 4 deletions(-) diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index a9edc271aa..158460b7d7 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -834,6 +834,7 @@ vhost_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old) static __rte_always_inline void vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { /* Flush used->idx update before we read avail->flags. */ rte_atomic_thread_fence(__ATOMIC_SEQ_CST); @@ -872,6 +873,7 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq) static __rte_always_inline void vhost_vring_call_packed(struct virtio_net *dev, struct vhost_virtqueue *vq) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { uint16_t old, new, off, off_wrap; bool signalled_used_valid, kick = false; diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index 1d390677fa..87eaa2ab4a 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -2909,6 +2909,7 @@ vhost_user_check_and_alloc_queue_pair(struct virtio_net *dev, static void vhost_user_lock_all_queue_pairs(struct virtio_net *dev) + RTE_NO_ANNOTATED_LOCK_CHECK { unsigned int i = 0; unsigned int vq_num = 0; @@ -2926,6 +2927,7 @@ vhost_user_lock_all_queue_pairs(struct virtio_net *dev) static void vhost_user_unlock_all_queue_pairs(struct virtio_net *dev) + RTE_NO_ANNOTATED_LOCK_CHECK { unsigned int i = 0; unsigned int vq_num = 0; diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 5f432b0d77..514ee00993 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -1246,6 +1246,7 @@ vhost_enqueue_single_packed(struct virtio_net *dev, static __rte_noinline uint32_t virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint32_t count) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { uint32_t pkt_idx = 0; uint16_t num_buffers; @@ -1441,6 +1442,7 @@ virtio_dev_rx_packed(struct virtio_net *dev, struct vhost_virtqueue *__rte_restrict vq, struct rte_mbuf **__rte_restrict pkts, uint32_t count) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { uint32_t pkt_idx = 0; @@ -1955,11 +1957,11 @@ write_back_completed_descs_packed(struct vhost_virtqueue *vq, } static __rte_always_inline uint16_t -vhost_poll_enqueue_completed(struct virtio_net *dev, uint16_t queue_id, +vhost_poll_enqueue_completed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { - struct vhost_virtqueue *vq = dev->virtqueue[queue_id]; struct vhost_async *async = vq->async; struct async_inflight_info *pkts_info = async->pkts_info; uint16_t nr_cpl_pkts = 0; @@ -2062,7 +2064,7 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id, goto out; } - n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id); + n_pkts_cpl = vhost_poll_enqueue_completed(dev, vq, pkts, count, dma_id, vchan_id); out: rte_spinlock_unlock(&vq->access_lock); @@ -2104,7 +2106,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, return 0; } - n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id); + n_pkts_cpl = 
vhost_poll_enqueue_completed(dev, vq, pkts, count, dma_id, vchan_id); return n_pkts_cpl; } @@ -2679,6 +2681,7 @@ static uint16_t virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, bool legacy_ol_flags) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { uint16_t i; uint16_t free_entries; @@ -2774,6 +2777,7 @@ static uint16_t virtio_dev_tx_split_legacy(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { return virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count, true); } @@ -2783,6 +2787,7 @@ static uint16_t virtio_dev_tx_split_compliant(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { return virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count, false); } @@ -2982,6 +2987,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct rte_mbuf **__rte_restrict pkts, uint32_t count, bool legacy_ol_flags) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { uint32_t pkt_idx = 0; @@ -3025,6 +3031,7 @@ static uint16_t virtio_dev_tx_packed_legacy(struct virtio_net *dev, struct vhost_virtqueue *__rte_restrict vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **__rte_restrict pkts, uint32_t count) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count, true); } @@ -3034,6 +3041,7 @@ static uint16_t virtio_dev_tx_packed_compliant(struct virtio_net *dev, struct vhost_virtqueue *__rte_restrict vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **__rte_restrict pkts, uint32_t count) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count, false); } From patchwork Wed Mar 30 13:49:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 109027 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7324AA050D; Wed, 30 Mar 2022 15:50:39 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A1B5E428F9; Wed, 30 Mar 2022 15:50:26 +0200 (CEST) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mails.dpdk.org (Postfix) with ESMTP id 5CF84428F4 for ; Wed, 30 Mar 2022 15:50:25 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1648648223; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=JXqqOCgj9Xe/FQOqOIN7I2y64iYcA0IN2TJAaMdM+cI=; b=by7KTDL6czSiEVNhjGez+RZGCFGImJBQ2BmYYE8zyqz8yTrRll7savwfISUpOZeurwOVnT Vl/BF060ol10qkshihLdPhqkPJjKhb6LJSxs8uNnJ2I86Grdfvw+gi+e3GTQCsAsaMrMR5 BRHdgETtuv+GCcQ0EQxg53LOQ6MGO1c= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-558-5zP1lTi3MBSLGifenlX-FA-1; Wed, 30 Mar 2022 09:50:22 -0400 X-MC-Unique: 
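Passing the vq explicitly is what makes the annotation expressible: the
attribute has to name the lock, so the object that owns it must appear in
the prototype instead of being looked up from an id inside the body. A
reduced sketch of the pattern (hypothetical names, not part of the
series):

    #include <rte_lock_annotations.h>
    #include <rte_spinlock.h>

    struct queue {
        rte_spinlock_t access_lock;
        unsigned int inflight;
    };

    /* The contract "access_lock must be held" is now checkable,
     * because the annotation can reference the parameter q. */
    static unsigned int
    queue_drain(struct queue *q)
        RTE_EXC_LOCK_REQUIRES(q->access_lock)
    {
        unsigned int n = q->inflight;

        q->inflight = 0;
        return n;
    }

    static unsigned int
    queue_drain_locked(struct queue *q)
    {
        unsigned int n;

        rte_spinlock_lock(&q->access_lock);
        n = queue_drain(q);
        rte_spinlock_unlock(&q->access_lock);
        return n;
    }

Had queue_drain() taken only an integer id, there would be nothing for
the analysis to attach the lock requirement to.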
From patchwork Wed Mar 30 13:49:51 2022
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 109027
From: David Marchand
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, jiayu.hu@intel.com,
 yuanx.wang@intel.com, xuan.ding@intel.com, stable@dpdk.org, Patrick Fu
Subject: [RFC PATCH v2 4/9] vhost: fix async access
Date: Wed, 30 Mar 2022 15:49:51 +0200
Message-Id: <20220330134956.18927-5-david.marchand@redhat.com>
In-Reply-To: <20220330134956.18927-1-david.marchand@redhat.com>
References: <20220328121758.26632-1-david.marchand@redhat.com>
 <20220330134956.18927-1-david.marchand@redhat.com>

vq->async accesses must be protected with vq->access_lock.

Fixes: eb666d24085f ("vhost: fix async unregister deadlock")
Fixes: 0c0935c5f794 ("vhost: allow to check in-flight packets for async vhost")
Cc: stable@dpdk.org

Signed-off-by: David Marchand
Acked-by: Sunil Pai G
---
 lib/vhost/vhost.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 2f96a28dac..a93e41f314 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -1753,27 +1753,23 @@ rte_vhost_async_channel_unregister(int vid, uint16_t queue_id)
     if (vq == NULL)
         return ret;

-    ret = 0;
-
-    if (!vq->async)
-        return ret;
-
     if (!rte_spinlock_trylock(&vq->access_lock)) {
         VHOST_LOG_CONFIG(ERR,
             "(%s) failed to unregister async channel, virtqueue busy.\n",
             dev->ifname);
-        return -1;
+        return ret;
     }

-    if (vq->async->pkts_inflight_n) {
+    if (!vq->async) {
+        ret = 0;
+    } else if (vq->async->pkts_inflight_n) {
         VHOST_LOG_CONFIG(ERR, "(%s) failed to unregister async channel.\n",
             dev->ifname);
         VHOST_LOG_CONFIG(ERR, "(%s) inflight packets must be completed before unregistration.\n",
             dev->ifname);
-        ret = -1;
-        goto out;
+    } else {
+        vhost_free_async_mem(vq);
+        ret = 0;
     }

-    vhost_free_async_mem(vq);
-out:
     rte_spinlock_unlock(&vq->access_lock);

     return ret;
@@ -1891,9 +1887,6 @@ rte_vhost_async_get_inflight(int vid, uint16_t queue_id)
     if (vq == NULL)
         return ret;

-    if (!vq->async)
-        return ret;
-
     if (!rte_spinlock_trylock(&vq->access_lock)) {
         VHOST_LOG_CONFIG(DEBUG,
             "(%s) failed to check in-flight packets. virtqueue busy.\n",
@@ -1901,7 +1894,9 @@ rte_vhost_async_get_inflight(int vid, uint16_t queue_id)
         return ret;
     }

-    ret = vq->async->pkts_inflight_n;
+    if (vq->async)
+        ret = vq->async->pkts_inflight_n;
+
     rte_spinlock_unlock(&vq->access_lock);

     return ret;
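The fix follows the usual rule for trylock-guarded state: the field is
examined only once the lock is held, and the busy case simply reports
failure. A condensed sketch of the corrected shape (hypothetical names,
not part of the series):

    #include <rte_spinlock.h>

    struct async_state { int pkts_inflight_n; };
    struct queue {
        rte_spinlock_t access_lock;
        struct async_state *async;
    };

    static int
    queue_get_inflight(struct queue *q)
    {
        int ret = -1;

        if (!rte_spinlock_trylock(&q->access_lock))
            return ret; /* busy: do not peek at q->async */

        /* q->async may be set or cleared concurrently, so it is only
         * dereferenced inside the critical section. */
        if (q->async != NULL)
            ret = q->async->pkts_inflight_n;

        rte_spinlock_unlock(&q->access_lock);
        return ret;
    }

The original code read the pointer before taking the lock, racing with
unregister paths that free it under the lock.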
Top level "_thread_unsafe" functions could be checked at runtime (clang provides a lock aware assert()-like check), but they are simply skipped because those functions are not called in-tree, and as a result, their annotations would not be tested. Signed-off-by: David Marchand --- lib/vhost/vhost.c | 14 +++++++++----- lib/vhost/vhost.h | 2 +- lib/vhost/vhost_user.c | 2 ++ lib/vhost/virtio_net.c | 15 +++++++++++++++ 4 files changed, 27 insertions(+), 6 deletions(-) diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index a93e41f314..637aa65ffb 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -334,6 +334,7 @@ cleanup_device(struct virtio_net *dev, int destroy) static void vhost_free_async_mem(struct vhost_virtqueue *vq) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { if (!vq->async) return; @@ -352,6 +353,7 @@ vhost_free_async_mem(struct vhost_virtqueue *vq) void free_vq(struct virtio_net *dev, struct vhost_virtqueue *vq) + RTE_NO_ANNOTATED_LOCK_CHECK { if (vq_is_packed(dev)) rte_free(vq->shadow_used_packed); @@ -1622,10 +1624,10 @@ rte_vhost_extern_callback_register(int vid, } static __rte_always_inline int -async_channel_register(int vid, uint16_t queue_id) +async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq, + uint16_t queue_id) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { - struct virtio_net *dev = get_device(vid); - struct vhost_virtqueue *vq = dev->virtqueue[queue_id]; struct vhost_async *async; int node = vq->numa_node; @@ -1709,7 +1711,7 @@ rte_vhost_async_channel_register(int vid, uint16_t queue_id) return -1; rte_spinlock_lock(&vq->access_lock); - ret = async_channel_register(vid, queue_id); + ret = async_channel_register(dev, vq, queue_id); rte_spinlock_unlock(&vq->access_lock); return ret; @@ -1717,6 +1719,7 @@ rte_vhost_async_channel_register(int vid, uint16_t queue_id) int rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id) + RTE_NO_ANNOTATED_LOCK_CHECK { struct vhost_virtqueue *vq; struct virtio_net *dev = get_device(vid); @@ -1732,7 +1735,7 @@ rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id) if (unlikely(vq == NULL || !dev->async_copy)) return -1; - return async_channel_register(vid, queue_id); + return async_channel_register(dev, vq, queue_id); } int @@ -1777,6 +1780,7 @@ rte_vhost_async_channel_unregister(int vid, uint16_t queue_id) int rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id) + RTE_NO_ANNOTATED_LOCK_CHECK { struct vhost_virtqueue *vq; struct virtio_net *dev = get_device(vid); diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index 158460b7d7..7c44fba0a1 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -299,7 +299,7 @@ struct vhost_virtqueue { struct rte_vhost_resubmit_info *resubmit_inflight; uint64_t global_counter; - struct vhost_async *async; + struct vhost_async *async RTE_GUARDED_VAR; int notif_enable; #define VIRTIO_UNINITIALIZED_NOTIF (-1) diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index 87eaa2ab4a..0cfa2d6f82 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -2199,6 +2199,8 @@ static int vhost_user_set_vring_enable(struct virtio_net **pdev, struct vhu_msg_context *ctx, int main_fd __rte_unused) + /* vq->access_lock is taken in vhost_user_lock_all_queue_pairs() */ + RTE_NO_ANNOTATED_LOCK_CHECK { struct virtio_net *dev = *pdev; bool enable = !!ctx->msg.payload.state.num; diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 514ee00993..c0d9d3db3e 100644 --- a/lib/vhost/virtio_net.c +++ 
b/lib/vhost/virtio_net.c @@ -51,6 +51,7 @@ static __rte_always_inline int64_t vhost_async_dma_transfer_one(struct virtio_net *dev, struct vhost_virtqueue *vq, int16_t dma_id, uint16_t vchan_id, uint16_t flag_idx, struct vhost_iov_iter *pkt) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { struct async_dma_vchan_info *dma_info = &dma_copy_track[dma_id].vchans[vchan_id]; uint16_t ring_mask = dma_info->ring_mask; @@ -99,6 +100,7 @@ static __rte_always_inline uint16_t vhost_async_dma_transfer(struct virtio_net *dev, struct vhost_virtqueue *vq, int16_t dma_id, uint16_t vchan_id, uint16_t head_idx, struct vhost_iov_iter *pkts, uint16_t nr_pkts) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { struct async_dma_vchan_info *dma_info = &dma_copy_track[dma_id].vchans[vchan_id]; int64_t ret, nr_copies = 0; @@ -1000,6 +1002,7 @@ static __rte_always_inline int async_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *m, uint32_t mbuf_offset, uint64_t buf_iova, uint32_t cpy_len) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { struct vhost_async *async = vq->async; uint64_t mapped_len; @@ -1057,6 +1060,7 @@ static __rte_always_inline int mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *m, struct buf_vector *buf_vec, uint16_t nr_vec, uint16_t num_buffers, bool is_async) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { uint32_t vec_idx = 0; uint32_t mbuf_offset, mbuf_avail; @@ -1186,6 +1190,7 @@ vhost_enqueue_single_packed(struct virtio_net *dev, struct rte_mbuf *pkt, struct buf_vector *buf_vec, uint16_t *nr_descs) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { uint16_t nr_vec = 0; uint16_t avail_idx = vq->last_avail_idx; @@ -1417,6 +1422,7 @@ static __rte_always_inline int16_t virtio_dev_rx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *pkt) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; uint16_t nr_descs = 0; @@ -1541,6 +1547,7 @@ rte_vhost_enqueue_burst(int vid, uint16_t queue_id, static __rte_always_inline uint16_t async_get_first_inflight_pkt_idx(struct vhost_virtqueue *vq) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { struct vhost_async *async = vq->async; @@ -1587,6 +1594,7 @@ static __rte_noinline uint32_t virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue *vq, uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; uint32_t pkt_idx = 0; @@ -1691,6 +1699,7 @@ vhost_enqueue_async_packed(struct virtio_net *dev, struct buf_vector *buf_vec, uint16_t *nr_descs, uint16_t *nr_buffers) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { uint16_t nr_vec = 0; uint16_t avail_idx = vq->last_avail_idx; @@ -1748,6 +1757,7 @@ vhost_enqueue_async_packed(struct virtio_net *dev, static __rte_always_inline int16_t virtio_dev_rx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *pkt, uint16_t *nr_descs, uint16_t *nr_buffers) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; @@ -1766,6 +1776,7 @@ virtio_dev_rx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, static __rte_always_inline void dma_error_handler_packed(struct vhost_virtqueue *vq, uint16_t slot_idx, uint32_t nr_err, uint32_t *pkt_idx) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { uint16_t descs_err = 0; uint16_t buffers_err = 0; @@ -1793,6 +1804,7 @@ static __rte_noinline uint32_t virtio_dev_rx_async_submit_packed(struct 
virtio_net *dev, struct vhost_virtqueue *vq, uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { uint32_t pkt_idx = 0; uint32_t remained = count; @@ -1863,6 +1875,7 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev, struct vhost_virtqueue static __rte_always_inline void write_back_completed_descs_split(struct vhost_virtqueue *vq, uint16_t n_descs) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { struct vhost_async *async = vq->async; uint16_t nr_left = n_descs; @@ -1895,6 +1908,7 @@ write_back_completed_descs_split(struct vhost_virtqueue *vq, uint16_t n_descs) static __rte_always_inline void write_back_completed_descs_packed(struct vhost_virtqueue *vq, uint16_t n_buffers) + RTE_EXC_LOCK_REQUIRES(vq->access_lock) { struct vhost_async *async = vq->async; uint16_t from = async->last_buffer_idx_packed; @@ -2076,6 +2090,7 @@ uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id) + RTE_NO_ANNOTATED_LOCK_CHECK { struct virtio_net *dev = get_device(vid); struct vhost_virtqueue *vq; From patchwork Wed Mar 30 13:49:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 109029 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id DF859A050D; Wed, 30 Mar 2022 15:50:53 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8E2BC42901; Wed, 30 Mar 2022 15:50:32 +0200 (CEST) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mails.dpdk.org (Postfix) with ESMTP id 2A48342900 for ; Wed, 30 Mar 2022 15:50:31 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1648648230; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=X4Knv4FUdDPrJJ+Usnkm4yrb/flozyZqOJSx7cpuJVs=; b=EcsolGbB3suYkKfB0QgANCiaunLXI53TutBdSJO/4v72l6/FaX8dlsR020s9NrF+C8ooVl K53VQ6XfV0Ahner25+QFvRZbFsc1d5YtcNdDup9TWB4SwG0zVG0gq8PX2w5JuAdYkcGlFl 9QPAkvUA/Lh3WUYQow2GTFwD71h3Z0E= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-86-lSYhuMGrPb6EZ8jTYa9WaQ-1; Wed, 30 Mar 2022 09:50:29 -0400 X-MC-Unique: lSYhuMGrPb6EZ8jTYa9WaQ-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com [10.11.54.2]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id D1DBF3C23FA3; Wed, 30 Mar 2022 13:50:28 +0000 (UTC) Received: from dmarchan.remote.csb (unknown [10.40.195.27]) by smtp.corp.redhat.com (Postfix) with ESMTP id 8664F40D1B9B; Wed, 30 Mar 2022 13:50:27 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com Subject: [RFC PATCH v2 6/9] vhost: annotate need reply handling Date: Wed, 30 
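Marking the member with RTE_GUARDED_VAR is what turns the rule into a
diagnostic: any access to it outside a critical section is flagged. A
minimal sketch (hypothetical names, not part of the series):

    #include <rte_lock_annotations.h>
    #include <rte_spinlock.h>

    struct queue {
        rte_spinlock_t access_lock;
        /* guarded_var: must only be accessed with some lock held */
        void *async RTE_GUARDED_VAR;
    };

    static void
    queue_reset(struct queue *q)
    {
        q->async = NULL; /* flagged: writing 'async' requires a lock */

        rte_spinlock_lock(&q->access_lock);
        q->async = NULL; /* accepted: inside a critical section */
        rte_spinlock_unlock(&q->access_lock);
    }

Unlike RTE_GUARDED_BY(), which names one specific lock, RTE_GUARDED_VAR
only demands that some annotated lock be held, which fits a member that
is always reached through its owning vq.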
From patchwork Wed Mar 30 13:49:53 2022
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 109029
From: David Marchand
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, jiayu.hu@intel.com,
 yuanx.wang@intel.com, xuan.ding@intel.com
Subject: [RFC PATCH v2 6/9] vhost: annotate need reply handling
Date: Wed, 30 Mar 2022 15:49:53 +0200
Message-Id: <20220330134956.18927-7-david.marchand@redhat.com>
In-Reply-To: <20220330134956.18927-1-david.marchand@redhat.com>
References: <20220328121758.26632-1-david.marchand@redhat.com>
 <20220330134956.18927-1-david.marchand@redhat.com>

When a reply from the slave is required (VHOST_USER_NEED_REPLY flag), a
spinlock is taken before sending the message. This spinlock is released
if an error occurs when sending the message, or once a reply has been
received.

The problem is that this lock is taken under a branch, and annotating
conditionally held locks is not supported. The code currently seems
correct and, while it could be reworked, it is easier to simply skip the
checks on slave_req_lock for those helpers.

Signed-off-by: David Marchand
---
 lib/vhost/vhost_user.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 0cfa2d6f82..273cce5e41 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -2854,6 +2854,7 @@ send_vhost_reply(struct virtio_net *dev, int sockfd, struct vhu_msg_context *ctx

 static int
 send_vhost_slave_message(struct virtio_net *dev, struct vhu_msg_context *ctx)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     int ret;

@@ -3165,6 +3166,7 @@ vhost_user_msg_handler(int vid, int fd)
 static int
 process_slave_message_reply(struct virtio_net *dev,
     const struct vhu_msg_context *ctx)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     struct vhu_msg_context msg_reply;
     int ret;
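clang's analysis can only model functions whose lock state is identical
on every path, which is what rules out annotating these helpers. A
reduced sketch of the problematic shape (hypothetical names, not part of
the series):

    #include <stdbool.h>

    #include <rte_lock_annotations.h>
    #include <rte_spinlock.h>

    struct ctx { rte_spinlock_t req_lock; int fd; };

    /* Stand-in for the real transport send. */
    static int do_send(struct ctx *c) { (void)c; return 0; }

    static int
    send_and_maybe_lock(struct ctx *c, bool need_reply)
        RTE_NO_ANNOTATED_LOCK_CHECK
    {
        if (need_reply)
            rte_spinlock_lock(&c->req_lock);

        if (do_send(c) < 0) {
            if (need_reply)
                rte_spinlock_unlock(&c->req_lock);
            return -1;
        }

        /* On success the lock, when taken, is released later by the
         * reply path: the exit lock state depends on need_reply,
         * which the analysis cannot express. */
        return 0;
    }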
From patchwork Wed Mar 30 13:49:54 2022
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 109030
From: David Marchand
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, jiayu.hu@intel.com,
 yuanx.wang@intel.com, xuan.ding@intel.com
Subject: [RFC PATCH v2 7/9] vhost: annotate VDPA device list accesses
Date: Wed, 30 Mar 2022 15:49:54 +0200
Message-Id: <20220330134956.18927-8-david.marchand@redhat.com>
In-Reply-To: <20220330134956.18927-1-david.marchand@redhat.com>
References: <20220328121758.26632-1-david.marchand@redhat.com>
 <20220330134956.18927-1-david.marchand@redhat.com>

Accesses to vdpa_device_list must be protected with the
vdpa_device_list_lock spinlock.

Signed-off-by: David Marchand
---
 lib/vhost/vdpa.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index 8fa2153023..e2caa2bf28 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -22,21 +22,24 @@
 /** Double linked list of vDPA devices. */
 TAILQ_HEAD(vdpa_device_list, rte_vdpa_device);

-static struct vdpa_device_list vdpa_device_list =
-    TAILQ_HEAD_INITIALIZER(vdpa_device_list);
+static struct vdpa_device_list vdpa_device_list__ =
+    TAILQ_HEAD_INITIALIZER(vdpa_device_list__);
 static rte_spinlock_t vdpa_device_list_lock = RTE_SPINLOCK_INITIALIZER;
+static struct vdpa_device_list * const vdpa_device_list
+    RTE_GUARDED_BY(&vdpa_device_list_lock) = &vdpa_device_list__;

 /* Unsafe, needs to be called with vdpa_device_list_lock held */
 static struct rte_vdpa_device *
 __vdpa_find_device_by_name(const char *name)
+    RTE_EXC_LOCK_REQUIRES(&vdpa_device_list_lock)
 {
     struct rte_vdpa_device *dev, *ret = NULL;

     if (name == NULL)
         return NULL;

-    TAILQ_FOREACH(dev, &vdpa_device_list, next) {
+    TAILQ_FOREACH(dev, vdpa_device_list, next) {
         if (!strncmp(dev->device->name, name, RTE_DEV_NAME_MAX_LEN)) {
             ret = dev;
             break;
@@ -100,7 +103,7 @@ rte_vdpa_register_device(struct rte_device *rte_dev,
     dev->device = rte_dev;
     dev->ops = ops;
-    TAILQ_INSERT_TAIL(&vdpa_device_list, dev, next);
+    TAILQ_INSERT_TAIL(vdpa_device_list, dev, next);
 out_unlock:
     rte_spinlock_unlock(&vdpa_device_list_lock);

@@ -114,11 +117,11 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev)
     int ret = -1;

     rte_spinlock_lock(&vdpa_device_list_lock);
-    RTE_TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
+    RTE_TAILQ_FOREACH_SAFE(cur_dev, vdpa_device_list, next, tmp_dev) {
         if (dev != cur_dev)
             continue;

-        TAILQ_REMOVE(&vdpa_device_list, dev, next);
+        TAILQ_REMOVE(vdpa_device_list, dev, next);
         rte_free(dev);
         ret = 0;
         break;
@@ -316,7 +319,7 @@ vdpa_find_device(const struct rte_vdpa_device *start, rte_vdpa_cmp_t cmp,

     rte_spinlock_lock(&vdpa_device_list_lock);
     if (start == NULL)
-        dev = TAILQ_FIRST(&vdpa_device_list);
+        dev = TAILQ_FIRST(vdpa_device_list);
     else
         dev = TAILQ_NEXT(start, next);
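The renaming-plus-pointer indirection above is worth spelling out:
RTE_GUARDED_BY() flags reads and writes of the annotated variable, but
the TAILQ_* macros applied to the head only take its address, which is
not an access the analysis tracks. Hiding the real head behind a guarded
const pointer forces every macro use to read an annotated variable. A
reduced sketch (hypothetical names, not part of the series):

    #include <sys/queue.h>

    #include <rte_lock_annotations.h>
    #include <rte_spinlock.h>

    struct entry { TAILQ_ENTRY(entry) next; };
    TAILQ_HEAD(entry_list, entry);

    static struct entry_list entry_list__ =
        TAILQ_HEAD_INITIALIZER(entry_list__);
    static rte_spinlock_t entry_list_lock = RTE_SPINLOCK_INITIALIZER;
    /* All list accesses must now go through this guarded pointer. */
    static struct entry_list * const entry_list
        RTE_GUARDED_BY(&entry_list_lock) = &entry_list__;

    static void
    entry_add(struct entry *e)
    {
        rte_spinlock_lock(&entry_list_lock);
        TAILQ_INSERT_TAIL(entry_list, e, next);
        rte_spinlock_unlock(&entry_list_lock);
    }

Using TAILQ_INSERT_TAIL(entry_list, ...) without the lock held is then
diagnosed, where &entry_list__ would have slipped through.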
From patchwork Wed Mar 30 13:49:55 2022
From: David Marchand
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, jiayu.hu@intel.com,
 yuanx.wang@intel.com, xuan.ding@intel.com
Subject: [RFC PATCH v2 8/9] vhost: annotate IOTLB locks
Date: Wed, 30 Mar 2022 15:49:55 +0200
Message-Id: <20220330134956.18927-9-david.marchand@redhat.com>
In-Reply-To: <20220330134956.18927-1-david.marchand@redhat.com>
References: <20220328121758.26632-1-david.marchand@redhat.com>
 <20220330134956.18927-1-david.marchand@redhat.com>

This change annotates the existing code paths that lead to manipulations
of the IOTLB r/w locks.

clang does not support conditionally held locks, so the IOTLB locks are
now always taken, regardless of the VIRTIO_F_IOMMU_PLATFORM feature.

The vdpa and vhost_crypto code paths are annotated even though they end
up not taking an IOTLB lock; they have been marked with a FIXME.

Signed-off-by: David Marchand
---
 lib/vhost/iotlb.h        |  8 +++++++
 lib/vhost/vdpa.c         |  1 +
 lib/vhost/vhost.c        | 11 +++++----
 lib/vhost/vhost.h        | 22 +++++++++++++-----
 lib/vhost/vhost_crypto.c |  7 ++++++
 lib/vhost/virtio_net.c   | 49 ++++++++++++++++++++++++++++++----------
 6 files changed, 75 insertions(+), 23 deletions(-)

diff --git a/lib/vhost/iotlb.h b/lib/vhost/iotlb.h
index 8d0ff7473b..af5ec2da55 100644
--- a/lib/vhost/iotlb.h
+++ b/lib/vhost/iotlb.h
@@ -11,24 +11,32 @@
 
 static __rte_always_inline void
 vhost_user_iotlb_rd_lock(struct vhost_virtqueue *vq)
+    RTE_SHARED_LOCK_ACQUIRES(vq->iotlb_lock)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_rwlock_read_lock(&vq->iotlb_lock);
 }
 
 static __rte_always_inline void
 vhost_user_iotlb_rd_unlock(struct vhost_virtqueue *vq)
+    RTE_LOCK_RELEASES(vq->iotlb_lock)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_rwlock_read_unlock(&vq->iotlb_lock);
 }
 
 static __rte_always_inline void
 vhost_user_iotlb_wr_lock(struct vhost_virtqueue *vq)
+    RTE_EXC_LOCK_ACQUIRES(vq->iotlb_lock)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_rwlock_write_lock(&vq->iotlb_lock);
 }
 
 static __rte_always_inline void
 vhost_user_iotlb_wr_unlock(struct vhost_virtqueue *vq)
+    RTE_LOCK_RELEASES(vq->iotlb_lock)
+    RTE_NO_ANNOTATED_LOCK_CHECK
 {
     rte_rwlock_write_unlock(&vq->iotlb_lock);
 }
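The wrappers above declare that they acquire or release vq->iotlb_lock in
shared or exclusive mode, and at the same time opt their own bodies out of
the analysis, presumably because the underlying rte_rwlock calls were
themselves annotated earlier in this series and clang would otherwise see
the same capability acquired twice. A hedged sketch of the mechanism, with
raw clang attributes and illustrative (non-DPDK) names:

struct __attribute__((capability("rwlock"))) toy_rwlock {
    int readers;
};

struct toy_vq {
    struct toy_rwlock iotlb_lock;
    int access_ok;
};

static void toy_iotlb_rd_lock(struct toy_vq *vq)
    __attribute__((shared_lock_function(vq->iotlb_lock)))
    __attribute__((no_thread_safety_analysis))
{
    vq->iotlb_lock.readers++; /* stand-in for rte_rwlock_read_lock() */
}

static void toy_iotlb_rd_unlock(struct toy_vq *vq)
    __attribute__((unlock_function(vq->iotlb_lock)))
    __attribute__((no_thread_safety_analysis))
{
    vq->iotlb_lock.readers--;
}

/* A helper that must only run with the shared lock held. */
static int toy_check(struct toy_vq *vq)
    __attribute__((shared_locks_required(vq->iotlb_lock)))
{
    return vq->access_ok;
}

static int toy_datapath(struct toy_vq *vq)
{
    int ok;

    toy_iotlb_rd_lock(vq);
    ok = toy_check(vq); /* OK: clang knows the lock is held */
    toy_iotlb_rd_unlock(vq);
    /* Calling toy_check(vq) after the unlock would be flagged as a
     * call that requires holding 'vq->iotlb_lock'. */
    return ok;
}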
diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index e2caa2bf28..9304c0083d 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -133,6 +133,7 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev)
 
 int
 rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m)
+    RTE_NO_ANNOTATED_LOCK_CHECK /* FIXME: requires iotlb_lock? */
 {
     struct virtio_net *dev = get_device(vid);
     uint16_t idx, idx_m, desc_id;
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 637aa65ffb..f87d9735d2 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -383,6 +383,7 @@ free_device(struct virtio_net *dev)
 
 static __rte_always_inline int
 log_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     if (likely(!(vq->ring_addrs.flags & (1 << VHOST_VRING_F_LOG))))
         return 0;
@@ -434,6 +435,7 @@ translate_log_addr(struct virtio_net *dev, struct vhost_virtqueue *vq,
 /* Caller should have iotlb_lock read-locked */
 static int
 vring_translate_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint64_t req_size, size;
 
@@ -473,6 +475,7 @@ vring_translate_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 /* Caller should have iotlb_lock read-locked */
 static int
 vring_translate_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint64_t req_size, size;
 
@@ -527,10 +530,9 @@ vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
 }
 
 void
-vring_invalidate(struct virtio_net *dev, struct vhost_virtqueue *vq)
+vring_invalidate(struct virtio_net *dev __rte_unused, struct vhost_virtqueue *vq)
 {
-    if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-        vhost_user_iotlb_wr_lock(vq);
+    vhost_user_iotlb_wr_lock(vq);
 
     vq->access_ok = false;
     vq->desc = NULL;
@@ -538,8 +540,7 @@ vring_invalidate(struct virtio_net *dev, struct vhost_virtqueue *vq)
     vq->used = NULL;
     vq->log_guest_addr = 0;
 
-    if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-        vhost_user_iotlb_wr_unlock(vq);
+    vhost_user_iotlb_wr_unlock(vq);
 }
 
 static void
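The vring_invalidate() hunk above shows why the commit message insists
that clang does not support conditionally held locks: the analysis tracks
lock state per program point, not per predicate, so a lock taken and
released under the same condition cannot be modeled. A minimal reproducer,
reusing toy_vq and the rd_lock/rd_unlock wrappers from the previous
sketch ('features' here is an illustrative stand-in for dev->features):

static void toy_invalidate(struct toy_vq *vq, unsigned long features)
{
    if (features & 0x1)
        toy_iotlb_rd_lock(vq);

    vq->access_ok = 0;
    /* At this merge point clang reports, roughly, that
     * 'vq->iotlb_lock' is not held on every path through here... */

    if (features & 0x1)
        toy_iotlb_rd_unlock(vq);
    /* ...and the conditional unlock is reported as releasing a lock
     * that was not (provably) held. Dropping the conditionals, as the
     * patch does, makes both paths identical and silences the
     * analysis. */
}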
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 7c44fba0a1..f26171b601 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -525,12 +525,15 @@ void __vhost_log_cache_write(struct virtio_net *dev,
         uint64_t addr, uint64_t len);
 void __vhost_log_cache_write_iova(struct virtio_net *dev,
         struct vhost_virtqueue *vq,
-        uint64_t iova, uint64_t len);
+        uint64_t iova, uint64_t len)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock);
 void __vhost_log_cache_sync(struct virtio_net *dev,
         struct vhost_virtqueue *vq);
+
 void __vhost_log_write(struct virtio_net *dev, uint64_t addr, uint64_t len);
 void __vhost_log_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq,
-        uint64_t iova, uint64_t len);
+        uint64_t iova, uint64_t len)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock);
 
 static __rte_always_inline void
 vhost_log_write(struct virtio_net *dev, uint64_t addr, uint64_t len)
@@ -580,6 +583,7 @@ vhost_log_used_vring(struct virtio_net *dev, struct vhost_virtqueue *vq,
 static __rte_always_inline void
 vhost_log_cache_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq,
             uint64_t iova, uint64_t len)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     if (likely(!(dev->features & (1ULL << VHOST_F_LOG_ALL))))
         return;
@@ -593,6 +597,7 @@ vhost_log_cache_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq,
 static __rte_always_inline void
 vhost_log_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq,
             uint64_t iova, uint64_t len)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     if (likely(!(dev->features & (1ULL << VHOST_F_LOG_ALL))))
         return;
@@ -796,18 +801,23 @@ struct rte_vhost_device_ops const *vhost_driver_callback_get(const char *path);
 void vhost_backend_cleanup(struct virtio_net *dev);
 
 uint64_t __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
-            uint64_t iova, uint64_t *len, uint8_t perm);
+            uint64_t iova, uint64_t *len, uint8_t perm)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock);
 void *vhost_alloc_copy_ind_table(struct virtio_net *dev,
             struct vhost_virtqueue *vq,
-            uint64_t desc_addr, uint64_t desc_len);
-int vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq);
+            uint64_t desc_addr, uint64_t desc_len)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock);
+int vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock);
 uint64_t translate_log_addr(struct virtio_net *dev, struct vhost_virtqueue *vq,
-        uint64_t log_addr);
+        uint64_t log_addr)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock);
 void vring_invalidate(struct virtio_net *dev, struct vhost_virtqueue *vq);
 
 static __rte_always_inline uint64_t
 vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
             uint64_t iova, uint64_t *len, uint8_t perm)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)))
         return rte_vhost_va_from_guest_pa(dev->mem, iova, len);
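Note that the vhost.h hunks annotate declarations, not just definitions.
That is all clang needs: the attribute is part of the function's contract,
so call sites in every translation unit that includes the header get
checked. A hedged sketch with the toy types from the earlier examples
(names illustrative):

/* What would sit in a header: */
int toy_vring_translate(struct toy_vq *vq)
    __attribute__((shared_locks_required(vq->iotlb_lock)));

/* What would sit in any .c file including that header: */
static int toy_bad_caller(struct toy_vq *vq)
{
    /* clang warns here, in the caller's translation unit, roughly:
     * calling function 'toy_vring_translate' requires holding
     * 'vq->iotlb_lock' */
    return toy_vring_translate(vq);
}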
diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index b1c0eb6a0f..fccd28a71a 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -506,6 +506,7 @@
 static __rte_always_inline struct virtio_crypto_inhdr *
 reach_inhdr(struct vhost_crypto_data_req *vc_req,
         struct vhost_crypto_desc *head,
         uint32_t max_n_descs)
+    RTE_SHARED_LOCK_REQUIRES(vc_req->vq->iotlb_lock)
 {
     struct virtio_crypto_inhdr *inhdr;
     struct vhost_crypto_desc *last = head + (max_n_descs - 1);
@@ -552,6 +553,7 @@
 static __rte_always_inline void *
 get_data_ptr(struct vhost_crypto_data_req *vc_req,
         struct vhost_crypto_desc *cur_desc,
         uint8_t perm)
+    RTE_SHARED_LOCK_REQUIRES(vc_req->vq->iotlb_lock)
 {
     void *data;
     uint64_t dlen = cur_desc->len;
@@ -570,6 +572,7 @@ copy_data(void *dst_data, struct vhost_crypto_data_req *vc_req,
         struct vhost_crypto_desc *head,
         struct vhost_crypto_desc **cur_desc,
         uint32_t size, uint32_t max_n_descs)
+    RTE_SHARED_LOCK_REQUIRES(vc_req->vq->iotlb_lock)
 {
     struct vhost_crypto_desc *desc = *cur_desc;
     uint64_t remain, addr, dlen, len;
@@ -718,6 +721,7 @@ prepare_write_back_data(struct vhost_crypto_data_req *vc_req,
         uint32_t offset,
         uint64_t write_back_len,
         uint32_t max_n_descs)
+    RTE_SHARED_LOCK_REQUIRES(vc_req->vq->iotlb_lock)
 {
     struct vhost_crypto_writeback_data *wb_data, *head;
     struct vhost_crypto_desc *desc = *cur_desc;
@@ -838,6 +842,7 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
         struct virtio_crypto_cipher_data_req *cipher,
         struct vhost_crypto_desc *head,
         uint32_t max_n_descs)
+    RTE_SHARED_LOCK_REQUIRES(vc_req->vq->iotlb_lock)
 {
     struct vhost_crypto_desc *desc = head;
     struct vhost_crypto_writeback_data *ewb = NULL;
@@ -990,6 +995,7 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
         struct virtio_crypto_alg_chain_data_req *chain,
         struct vhost_crypto_desc *head,
         uint32_t max_n_descs)
+    RTE_SHARED_LOCK_REQUIRES(vc_req->vq->iotlb_lock)
 {
     struct vhost_crypto_desc *desc = head, *digest_desc;
     struct vhost_crypto_writeback_data *ewb = NULL, *ewb2 = NULL;
@@ -1172,6 +1178,7 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
         struct vhost_virtqueue *vq, struct rte_crypto_op *op,
         struct vring_desc *head, struct vhost_crypto_desc *descs,
         uint16_t desc_idx)
+    RTE_NO_ANNOTATED_LOCK_CHECK /* FIXME: requires iotlb_lock? */
 {
     struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(op->sym->m_src);
     struct rte_cryptodev_sym_session *session;
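The two FIXME-marked entry points (rte_vdpa_relay_vring_used() and
vhost_crypto_process_one_req()) take the opposite approach from the rest
of the patch: instead of asserting a lock requirement that their callers
do not satisfy yet, they opt out of the analysis entirely. Assuming
RTE_NO_ANNOTATED_LOCK_CHECK wraps clang's no_thread_safety_analysis
attribute, the effect looks like this (toy names from the earlier
sketches):

static int toy_unaudited(struct toy_vq *vq)
    __attribute__((no_thread_safety_analysis))
{
    /* The body of this function is not analyzed at all, so the
     * unlocked call below goes unreported -- which is exactly why the
     * FIXME comments matter: each opt-out must be revisited later. */
    return toy_vring_translate(vq);
}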
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index c0d9d3db3e..867492a338 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -180,6 +180,7 @@ vhost_async_dma_check_completed(struct virtio_net *dev, int16_t dma_id, uint16_t
 
 static inline void
 do_data_copy_enqueue(struct virtio_net *dev, struct vhost_virtqueue *vq)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     struct batch_copy_elem *elem = vq->batch_copy_elems;
     uint16_t count = vq->batch_copy_nb_elems;
@@ -526,6 +527,7 @@ vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
                 uint16_t *id, uint16_t *count,
                 uint16_t num_buffers)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     vhost_shadow_enqueue_packed(vq, len, id, count, num_buffers);
 
@@ -607,6 +609,7 @@ static __rte_always_inline int
 map_one_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
         struct buf_vector *buf_vec, uint16_t *vec_idx,
         uint64_t desc_iova, uint64_t desc_len, uint8_t perm)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint16_t vec_id = *vec_idx;
 
@@ -644,6 +647,7 @@ fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
             uint32_t avail_idx, uint16_t *vec_idx,
             struct buf_vector *buf_vec, uint16_t *desc_chain_head,
             uint32_t *desc_chain_len, uint8_t perm)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint16_t idx = vq->avail->ring[avail_idx & (vq->size - 1)];
     uint16_t vec_id = *vec_idx;
@@ -727,6 +731,7 @@ reserve_avail_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
                 uint32_t size, struct buf_vector *buf_vec,
                 uint16_t *num_buffers, uint16_t avail_head,
                 uint16_t *nr_vec)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint16_t cur_idx;
     uint16_t vec_idx = 0;
@@ -777,6 +782,7 @@ fill_vec_buf_packed_indirect(struct virtio_net *dev,
             struct vhost_virtqueue *vq,
             struct vring_packed_desc *desc, uint16_t *vec_idx,
             struct buf_vector *buf_vec, uint32_t *len, uint8_t perm)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint16_t i;
     uint32_t nr_descs;
@@ -835,6 +841,7 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
                 uint16_t avail_idx, uint16_t *desc_count,
                 struct buf_vector *buf_vec, uint16_t *vec_idx,
                 uint16_t *buf_id, uint32_t *len, uint8_t perm)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     bool wrap_counter = vq->avail_wrap_counter;
     struct vring_packed_desc *descs = vq->desc_packed;
@@ -900,6 +907,7 @@ static __rte_noinline void
 copy_vnet_hdr_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
         struct buf_vector *buf_vec,
         struct virtio_net_hdr_mrg_rxbuf *hdr)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint64_t len;
     uint64_t remain = dev->vhost_hlen;
@@ -1036,6 +1044,7 @@ static __rte_always_inline void
 sync_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq,
         struct rte_mbuf *m, uint32_t mbuf_offset,
         uint64_t buf_addr, uint64_t buf_iova, uint32_t cpy_len)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     struct batch_copy_elem *batch_copy = vq->batch_copy_elems;
 
@@ -1061,6 +1070,7 @@ mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
         struct rte_mbuf *m, struct buf_vector *buf_vec,
         uint16_t nr_vec, uint16_t num_buffers, bool is_async)
     RTE_EXC_LOCK_REQUIRES(vq->access_lock)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint32_t vec_idx = 0;
     uint32_t mbuf_offset, mbuf_avail;
@@ -1191,6 +1201,7 @@ vhost_enqueue_single_packed(struct virtio_net *dev,
             struct buf_vector *buf_vec,
             uint16_t *nr_descs)
     RTE_EXC_LOCK_REQUIRES(vq->access_lock)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint16_t nr_vec = 0;
     uint16_t avail_idx = vq->last_avail_idx;
@@ -1252,6 +1263,7 @@ static __rte_noinline uint32_t
 virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
     struct rte_mbuf **pkts, uint32_t count)
     RTE_EXC_LOCK_REQUIRES(vq->access_lock)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint32_t pkt_idx = 0;
     uint16_t num_buffers;
@@ -1309,6 +1321,7 @@ virtio_dev_rx_sync_batch_check(struct virtio_net *dev,
                struct rte_mbuf **pkts,
                uint64_t *desc_addrs,
                uint64_t *lens)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     bool wrap_counter = vq->avail_wrap_counter;
     struct vring_packed_desc *descs = vq->desc_packed;
@@ -1360,6 +1373,7 @@ virtio_dev_rx_batch_packed_copy(struct virtio_net *dev,
                struct rte_mbuf **pkts,
                uint64_t *desc_addrs,
                uint64_t *lens)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf);
     struct virtio_net_hdr_mrg_rxbuf *hdrs[PACKED_BATCH_SIZE];
@@ -1401,6 +1415,7 @@ static __rte_always_inline int
 virtio_dev_rx_sync_batch_packed(struct virtio_net *dev,
                struct vhost_virtqueue *vq,
                struct rte_mbuf **pkts)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint64_t desc_addrs[PACKED_BATCH_SIZE];
     uint64_t lens[PACKED_BATCH_SIZE];
@@ -1423,6 +1438,7 @@ virtio_dev_rx_single_packed(struct virtio_net *dev,
             struct vhost_virtqueue *vq,
             struct rte_mbuf *pkt)
     RTE_EXC_LOCK_REQUIRES(vq->access_lock)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     struct buf_vector buf_vec[BUF_VECTOR_MAX];
     uint16_t nr_descs = 0;
@@ -1449,6 +1465,7 @@ virtio_dev_rx_packed(struct virtio_net *dev,
             struct rte_mbuf **__rte_restrict pkts,
             uint32_t count)
     RTE_EXC_LOCK_REQUIRES(vq->access_lock)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint32_t pkt_idx = 0;
 
@@ -1501,8 +1518,7 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
     if (unlikely(!vq->enabled))
         goto out_access_unlock;
 
-    if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-        vhost_user_iotlb_rd_lock(vq);
+    vhost_user_iotlb_rd_lock(vq);
 
     if (unlikely(!vq->access_ok))
         if (unlikely(vring_translate(dev, vq) < 0))
@@ -1518,8 +1534,7 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
         nb_tx = virtio_dev_rx_split(dev, vq, pkts, count);
 
 out:
-    if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-        vhost_user_iotlb_rd_unlock(vq);
+    vhost_user_iotlb_rd_unlock(vq);
 
 out_access_unlock:
     rte_spinlock_unlock(&vq->access_lock);
@@ -1595,6 +1610,7 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue
         uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count,
         int16_t dma_id, uint16_t vchan_id)
     RTE_EXC_LOCK_REQUIRES(vq->access_lock)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     struct buf_vector buf_vec[BUF_VECTOR_MAX];
     uint32_t pkt_idx = 0;
@@ -1700,6 +1716,7 @@ vhost_enqueue_async_packed(struct virtio_net *dev,
             uint16_t *nr_descs,
             uint16_t *nr_buffers)
     RTE_EXC_LOCK_REQUIRES(vq->access_lock)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint16_t nr_vec = 0;
     uint16_t avail_idx = vq->last_avail_idx;
@@ -1758,6 +1775,7 @@ static __rte_always_inline int16_t
 virtio_dev_rx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
             struct rte_mbuf *pkt, uint16_t *nr_descs, uint16_t *nr_buffers)
     RTE_EXC_LOCK_REQUIRES(vq->access_lock)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     struct buf_vector buf_vec[BUF_VECTOR_MAX];
@@ -1805,6 +1823,7 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev, struct vhost_virtqueue
         uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count,
         int16_t dma_id, uint16_t vchan_id)
     RTE_EXC_LOCK_REQUIRES(vq->access_lock)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint32_t pkt_idx = 0;
     uint32_t remained = count;
@@ -2154,8 +2173,7 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
     if (unlikely(!vq->enabled || !vq->async))
         goto out_access_unlock;
 
-    if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-        vhost_user_iotlb_rd_lock(vq);
+    vhost_user_iotlb_rd_lock(vq);
 
     if (unlikely(!vq->access_ok))
         if (unlikely(vring_translate(dev, vq) < 0))
@@ -2173,8 +2191,7 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
             pkts, count, dma_id, vchan_id);
 
 out:
-    if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-        vhost_user_iotlb_rd_unlock(vq);
+    vhost_user_iotlb_rd_unlock(vq);
 
 out_access_unlock:
     rte_spinlock_unlock(&vq->access_lock);
@@ -2697,6 +2714,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
     struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
     bool legacy_ol_flags)
     RTE_EXC_LOCK_REQUIRES(vq->access_lock)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint16_t i;
     uint16_t free_entries;
@@ -2793,6 +2811,7 @@ virtio_dev_tx_split_legacy(struct virtio_net *dev,
     struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool,
     struct rte_mbuf **pkts, uint16_t count)
     RTE_EXC_LOCK_REQUIRES(vq->access_lock)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     return virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count, true);
 }
@@ -2803,6 +2822,7 @@ virtio_dev_tx_split_compliant(struct virtio_net *dev,
     struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool,
     struct rte_mbuf **pkts, uint16_t count)
     RTE_EXC_LOCK_REQUIRES(vq->access_lock)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     return virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count, false);
 }
@@ -2814,6 +2834,7 @@ vhost_reserve_avail_batch_packed(struct virtio_net *dev,
                 uint16_t avail_idx,
                 uintptr_t *desc_addrs,
                 uint16_t *ids)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     bool wrap = vq->avail_wrap_counter;
     struct vring_packed_desc *descs = vq->desc_packed;
@@ -2883,6 +2904,7 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev,
                struct vhost_virtqueue *vq,
                struct rte_mbuf **pkts,
                bool legacy_ol_flags)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint16_t avail_idx = vq->last_avail_idx;
     uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf);
@@ -2929,6 +2951,7 @@ vhost_dequeue_single_packed(struct virtio_net *dev,
                 uint16_t *buf_id,
                 uint16_t *desc_count,
                 bool legacy_ol_flags)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     struct buf_vector buf_vec[BUF_VECTOR_MAX];
     uint32_t buf_len;
@@ -2972,6 +2995,7 @@ virtio_dev_tx_single_packed(struct virtio_net *dev,
                 struct rte_mempool *mbuf_pool,
                 struct rte_mbuf *pkts,
                 bool legacy_ol_flags)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint16_t buf_id, desc_count = 0;
 
@@ -3003,6 +3027,7 @@ virtio_dev_tx_packed(struct virtio_net *dev,
             uint32_t count,
             bool legacy_ol_flags)
     RTE_EXC_LOCK_REQUIRES(vq->access_lock)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     uint32_t pkt_idx = 0;
 
@@ -3047,6 +3072,7 @@ virtio_dev_tx_packed_legacy(struct virtio_net *dev,
     struct vhost_virtqueue *__rte_restrict vq, struct rte_mempool *mbuf_pool,
     struct rte_mbuf **__rte_restrict pkts, uint32_t count)
     RTE_EXC_LOCK_REQUIRES(vq->access_lock)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count, true);
 }
@@ -3057,6 +3083,7 @@ virtio_dev_tx_packed_compliant(struct virtio_net *dev,
     struct vhost_virtqueue *__rte_restrict vq, struct rte_mempool *mbuf_pool,
     struct rte_mbuf **__rte_restrict pkts, uint32_t count)
     RTE_EXC_LOCK_REQUIRES(vq->access_lock)
+    RTE_SHARED_LOCK_REQUIRES(vq->iotlb_lock)
 {
     return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count, false);
 }
@@ -3096,8 +3123,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
         goto out_access_unlock;
     }
 
-    if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-        vhost_user_iotlb_rd_lock(vq);
+    vhost_user_iotlb_rd_lock(vq);
 
     if (unlikely(!vq->access_ok))
         if (unlikely(vring_translate(dev, vq) < 0)) {
@@ -3153,8 +3179,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
     }
 
 out:
-    if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-        vhost_user_iotlb_rd_unlock(vq);
+    vhost_user_iotlb_rd_unlock(vq);
 
 out_access_unlock:
     rte_spinlock_unlock(&vq->access_lock);
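A recurring shape in the virtio_net.c hunks is a function that stacks two
requirements: the virtqueue's access_lock held exclusively and its
iotlb_lock held shared. The attributes simply accumulate, and clang checks
each capability independently at every call site. A hedged sketch,
extending the toy types from the earlier examples with a second lock
(names illustrative):

struct __attribute__((capability("mutex"))) toy_spinlock {
    int locked;
};

struct toy_vq2 {
    struct toy_spinlock access_lock;
    struct toy_rwlock iotlb_lock;
};

static void toy_enqueue(struct toy_vq2 *vq)
    __attribute__((exclusive_locks_required(vq->access_lock)))
    __attribute__((shared_locks_required(vq->iotlb_lock)))
{
    (void)vq;
    /* Callers must hold access_lock exclusively and iotlb_lock at
     * least shared; clang emits one warning per missing capability
     * at each call site. */
}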
From patchwork Wed Mar 30 13:49:56 2022
From: David Marchand
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, jiayu.hu@intel.com,
 yuanx.wang@intel.com, xuan.ding@intel.com
Subject: [RFC PATCH v2 9/9] vhost: enable lock check
Date: Wed, 30 Mar 2022 15:49:56 +0200
Message-Id: <20220330134956.18927-10-david.marchand@redhat.com>
In-Reply-To: <20220330134956.18927-1-david.marchand@redhat.com>
References: <20220328121758.26632-1-david.marchand@redhat.com>
 <20220330134956.18927-1-david.marchand@redhat.com>

Now that all locks in this library are annotated, we can enable the
check.

Signed-off-by: David Marchand
---
 lib/vhost/meson.build | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build
index bc7272053b..197a51d936 100644
--- a/lib/vhost/meson.build
+++ b/lib/vhost/meson.build
@@ -17,6 +17,8 @@ elif (toolchain == 'icc' and cc.version().version_compare('>=16.0.0'))
 endif
 dpdk_conf.set('RTE_LIBRTE_VHOST_POSTCOPY', cc.has_header('linux/userfaultfd.h'))
 cflags += '-fno-strict-aliasing'
+
+annotate_locks = true
 sources = files(
         'fd_man.c',
         'iotlb.c',