From patchwork Tue Feb 7 10:45:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 123214 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id DC26141C2D; Tue, 7 Feb 2023 11:45:52 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BFD6F4282D; Tue, 7 Feb 2023 11:45:52 +0100 (CET) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mails.dpdk.org (Postfix) with ESMTP id B6A1D40042 for ; Tue, 7 Feb 2023 11:45:51 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1675766751; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=KfrSfjin3JAU8NqN1oy5MyRhMeTWyT3DX8RYXh24LRU=; b=gT7J53pJMbUwYZAvWL1VEzWXNr6axeyxqNp9TkUscAUQt8D16iVT7VAEAevT0eoftC9UaE 4o6evQ08uWja3MUxZ1c83JO3sr6VXSp9CRtJQnqsBtqme7t09NV6S8XgrbH2uOtbUmaHuX 4okAuV9tdQmqGTRYWCShvmCYZiY0ysA= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-70-_rUDxvbKN6iX1ziYK3rwBQ-1; Tue, 07 Feb 2023 05:45:46 -0500 X-MC-Unique: _rUDxvbKN6iX1ziYK3rwBQ-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com [10.11.54.8]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id ADB352932488; Tue, 7 Feb 2023 10:45:45 +0000 (UTC) Received: from dmarchan.redhat.com (ovpn-195-46.brq.redhat.com [10.40.195.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id 26568C15BA0; Tue, 7 Feb 2023 10:45:43 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com, mb@smartsharesystems.com, Anatoly Burakov , =?utf-8?q?Mattias_R=C3=B6nnblo?= =?utf-8?q?m?= , David Christensen , Bruce Richardson , Konstantin Ananyev Subject: [PATCH v6 1/9] eal: annotate spinlock, rwlock and seqlock Date: Tue, 7 Feb 2023 11:45:24 +0100 Message-Id: <20230207104532.2370869-2-david.marchand@redhat.com> In-Reply-To: <20230207104532.2370869-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20230207104532.2370869-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org clang offers some thread safety checks, statically verifying that locks are taken and released in the code. To use those checks, the full code leading to taking or releasing locks must be annotated with some attributes. Wrap those attributes into our own set of macros. rwlock, seqlock and the "normal" spinlock are instrumented. 
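
For readers unfamiliar with these clang attributes, here is a minimal, self-contained sketch (not part of this patch; toy_lock_t and the function names are invented for illustration) of the pattern the __rte_* macros wrap. Compiled with clang -Wthread-safety, the last function triggers a diagnostic:

    /* Toy example; build with: clang -Wthread-safety -c toy_lock.c */
    typedef struct __attribute__((lockable)) {
        int locked;
    } toy_lock_t;

    /* Implementations are waived from the analysis, as in the DPDK headers. */
    static void toy_lock(toy_lock_t *l)
        __attribute__((exclusive_lock_function(l)))
        __attribute__((no_thread_safety_analysis))
    {
        l->locked = 1; /* placeholder; a real lock would use atomics */
    }

    static void toy_unlock(toy_lock_t *l)
        __attribute__((unlock_function(l)))
        __attribute__((no_thread_safety_analysis))
    {
        l->locked = 0;
    }

    /* Contract exposed at the declaration level: callers must hold *l. */
    static void do_work(toy_lock_t *l)
        __attribute__((exclusive_locks_required(l)))
    {
        (void)l;
    }

    static void good(toy_lock_t *l)
    {
        toy_lock(l);
        do_work(l);   /* ok: l is held here */
        toy_unlock(l);
    }

    static void bad(toy_lock_t *l)
    {
        do_work(l);   /* warning: calling do_work() requires holding the lock */
    }

The __rte_lockable, __rte_exclusive_lock_function, __rte_unlock_function, etc. macros introduced below expand to exactly these attributes when RTE_ANNOTATE_LOCKS is defined, and to nothing otherwise, so non-clang builds are unaffected.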
Those checks might be of interest out of DPDK, but it requires that the including application locks are annotated. On the other hand, applications out there might have been using those same checks. To be on the safe side, keep this instrumentation under a RTE_ANNOTATE_LOCKS internal build flag. A component may en/disable this check by setting annotate_locks = true/false in its meson.build. Note: Doxygen preprocessor does not understand trailing function attributes (this can be observed with the rte_seqlock.h header). One would think that expanding the annotation macros to a noop in rte_lock_annotations.h would be enough (since RTE_ANNOTATE_LOCKS is not set during doxygen processing)). Unfortunately, the use of EXPAND_ONLY_PREDEF defeats this. Removing EXPAND_ONLY_PREDEF entirely is not an option as it would expand all other DPDK macros. The chosen solution is to expand the annotation macros explicitly to a noop in PREDEFINED. Signed-off-by: David Marchand Acked-by: Morten Brørup Reviewed-by: Maxime Coquelin Acked-by: Chenbo Xia --- Changes since v4: - hid annotations from Doxygen, - fixed typos, Changes since RFC v3: - rebased, - added some documentation, - added assert attribute, - instrumented seqlock, - cleaned annotations in arch-specific headers, Changes since RFC v2: - fixed rwlock trylock, - instrumented _tm spinlocks, - aligned attribute names to clang, --- doc/api/doxy-api.conf.in | 11 +++ .../prog_guide/env_abstraction_layer.rst | 24 ++++++ doc/guides/rel_notes/release_23_03.rst | 5 ++ drivers/meson.build | 5 ++ lib/eal/include/generic/rte_rwlock.h | 27 +++++-- lib/eal/include/generic/rte_spinlock.h | 31 +++++--- lib/eal/include/meson.build | 1 + lib/eal/include/rte_lock_annotations.h | 73 +++++++++++++++++++ lib/eal/include/rte_seqlock.h | 2 + lib/eal/ppc/include/rte_spinlock.h | 3 + lib/eal/x86/include/rte_rwlock.h | 4 + lib/eal/x86/include/rte_spinlock.h | 9 +++ lib/meson.build | 5 ++ 13 files changed, 186 insertions(+), 14 deletions(-) create mode 100644 lib/eal/include/rte_lock_annotations.h diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in index f0886c3bd1..e859426099 100644 --- a/doc/api/doxy-api.conf.in +++ b/doc/api/doxy-api.conf.in @@ -84,6 +84,17 @@ FILE_PATTERNS = rte_*.h \ PREDEFINED = __DOXYGEN__ \ RTE_HAS_CPUSET \ VFIO_PRESENT \ + __rte_lockable= \ + __rte_guarded_by(x)= \ + __rte_exclusive_locks_required(x)= \ + __rte_exclusive_lock_function(x)= \ + __rte_exclusive_trylock_function(x)= \ + __rte_assert_exclusive_lock(x)= \ + __rte_shared_locks_required(x)= \ + __rte_shared_lock_function(x)= \ + __rte_shared_trylock_function(x)= \ + __rte_assert_shared_lock(x)= \ + __rte_unlock_function(x)= \ __attribute__(x)= OPTIMIZE_OUTPUT_FOR_C = YES diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst index 35fbebe1be..3f33621e05 100644 --- a/doc/guides/prog_guide/env_abstraction_layer.rst +++ b/doc/guides/prog_guide/env_abstraction_layer.rst @@ -529,6 +529,30 @@ Misc Functions Locks and atomic operations are per-architecture (i686 and x86_64). +Lock annotations +~~~~~~~~~~~~~~~~ + +R/W locks, seq locks and spinlocks have been instrumented to help developers in +catching issues in DPDK. + +This instrumentation relies on +`clang Thread Safety checks `_. +All attributes are prefixed with __rte and are fully described in the clang +documentation. 
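
As an illustration of the documentation above, a hypothetical header fragment (my_ctx and my_ctx_stats_inc are made-up names, not DPDK APIs) could carry the locking contract on the declarations themselves:

    #include <stdint.h>
    #include <rte_lock_annotations.h>
    #include <rte_spinlock.h>

    struct my_ctx {
        rte_spinlock_t lock;
        uint32_t stats __rte_guarded_by(lock);
    };

    /* Callers must hold ctx->lock; clang checks every call site it can see. */
    void my_ctx_stats_inc(struct my_ctx *ctx)
        __rte_exclusive_locks_required(&ctx->lock);

With such declarations in a header, a compilation unit that calls my_ctx_stats_inc() without holding ctx->lock is flagged by clang, even though the function body lives elsewhere.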
+ +Some general comments: + +- it is important that lock requirements are expressed at the function + declaration level in headers so that other code units can be inspected, +- when some global lock is necessary to some user-exposed API, it is preferred + to expose it via an internal helper rather than expose the global variable, +- there are a list of known limitations with clang instrumentation, but before + waiving checks with ``__rte_no_thread_safety_analysis`` in your code, please + discuss it on the mailing list, + +A DPDK library/driver can enable/disable the checks by setting +``annotate_locks`` accordingly in its ``meson.build`` file. + IOVA Mode Detection ~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst index 1fa101c420..bf90feba68 100644 --- a/doc/guides/rel_notes/release_23_03.rst +++ b/doc/guides/rel_notes/release_23_03.rst @@ -55,6 +55,11 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Introduced lock annotations.** + + Added lock annotations attributes so that clang can statically analyze lock + correctness. + * **Updated AMD axgbe driver.** * Added multi-process support. diff --git a/drivers/meson.build b/drivers/meson.build index c6d619200f..bddc4a6cc4 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -91,6 +91,7 @@ foreach subpath:subdirs build = true # set to false to disable, e.g. missing deps reason = '' # set if build == false to explain name = drv + annotate_locks = false sources = [] headers = [] driver_sdk_headers = [] # public headers included by drivers @@ -167,6 +168,10 @@ foreach subpath:subdirs enabled_drivers += name lib_name = '_'.join(['rte', class, name]) cflags += '-DRTE_LOG_DEFAULT_LOGTYPE=' + '.'.join([log_prefix, name]) + if annotate_locks and cc.has_argument('-Wthread-safety') + cflags += '-DRTE_ANNOTATE_LOCKS' + cflags += '-Wthread-safety' + endif dpdk_conf.set(lib_name.to_upper(), 1) dpdk_extra_ldflags += pkgconfig_extra_libs diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h index 233d4262be..d45c22c189 100644 --- a/lib/eal/include/generic/rte_rwlock.h +++ b/lib/eal/include/generic/rte_rwlock.h @@ -30,6 +30,7 @@ extern "C" { #include #include +#include #include /** @@ -55,7 +56,7 @@ extern "C" { /* Writer is waiting or has lock */ #define RTE_RWLOCK_READ 0x4 /* Reader increment */ -typedef struct { +typedef struct __rte_lockable { int32_t cnt; } rte_rwlock_t; @@ -84,6 +85,8 @@ rte_rwlock_init(rte_rwlock_t *rwl) */ static inline void rte_rwlock_read_lock(rte_rwlock_t *rwl) + __rte_shared_lock_function(rwl) + __rte_no_thread_safety_analysis { int32_t x; @@ -119,6 +122,8 @@ rte_rwlock_read_lock(rte_rwlock_t *rwl) */ static inline int rte_rwlock_read_trylock(rte_rwlock_t *rwl) + __rte_shared_trylock_function(0, rwl) + __rte_no_thread_safety_analysis { int32_t x; @@ -150,6 +155,8 @@ rte_rwlock_read_trylock(rte_rwlock_t *rwl) */ static inline void rte_rwlock_read_unlock(rte_rwlock_t *rwl) + __rte_unlock_function(rwl) + __rte_no_thread_safety_analysis { __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ, __ATOMIC_RELEASE); } @@ -166,6 +173,8 @@ rte_rwlock_read_unlock(rte_rwlock_t *rwl) */ static inline int rte_rwlock_write_trylock(rte_rwlock_t *rwl) + __rte_exclusive_trylock_function(0, rwl) + __rte_no_thread_safety_analysis { int32_t x; @@ -186,6 +195,8 @@ rte_rwlock_write_trylock(rte_rwlock_t *rwl) */ static inline void rte_rwlock_write_lock(rte_rwlock_t *rwl) + 
__rte_exclusive_lock_function(rwl) + __rte_no_thread_safety_analysis { int32_t x; @@ -219,6 +230,8 @@ rte_rwlock_write_lock(rte_rwlock_t *rwl) */ static inline void rte_rwlock_write_unlock(rte_rwlock_t *rwl) + __rte_unlock_function(rwl) + __rte_no_thread_safety_analysis { __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_WRITE, __ATOMIC_RELEASE); } @@ -237,7 +250,8 @@ rte_rwlock_write_unlock(rte_rwlock_t *rwl) * A pointer to a rwlock structure. */ static inline void -rte_rwlock_read_lock_tm(rte_rwlock_t *rwl); +rte_rwlock_read_lock_tm(rte_rwlock_t *rwl) + __rte_shared_lock_function(rwl); /** * Commit hardware memory transaction or release the read lock if the lock is used as a fall-back @@ -246,7 +260,8 @@ rte_rwlock_read_lock_tm(rte_rwlock_t *rwl); * A pointer to the rwlock structure. */ static inline void -rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl); +rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl) + __rte_unlock_function(rwl); /** * Try to execute critical section in a hardware memory transaction, if it @@ -262,7 +277,8 @@ rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl); * A pointer to a rwlock structure. */ static inline void -rte_rwlock_write_lock_tm(rte_rwlock_t *rwl); +rte_rwlock_write_lock_tm(rte_rwlock_t *rwl) + __rte_exclusive_lock_function(rwl); /** * Commit hardware memory transaction or release the write lock if the lock is used as a fall-back @@ -271,7 +287,8 @@ rte_rwlock_write_lock_tm(rte_rwlock_t *rwl); * A pointer to a rwlock structure. */ static inline void -rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl); +rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl) + __rte_unlock_function(rwl); #ifdef __cplusplus } diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h index 73ed4bfbdc..8ca47bbfaa 100644 --- a/lib/eal/include/generic/rte_spinlock.h +++ b/lib/eal/include/generic/rte_spinlock.h @@ -22,12 +22,13 @@ #ifdef RTE_FORCE_INTRINSICS #include #endif +#include #include /** * The rte_spinlock_t type. */ -typedef struct { +typedef struct __rte_lockable { volatile int locked; /**< lock status 0 = unlocked, 1 = locked */ } rte_spinlock_t; @@ -55,11 +56,13 @@ rte_spinlock_init(rte_spinlock_t *sl) * A pointer to the spinlock. */ static inline void -rte_spinlock_lock(rte_spinlock_t *sl); +rte_spinlock_lock(rte_spinlock_t *sl) + __rte_exclusive_lock_function(sl); #ifdef RTE_FORCE_INTRINSICS static inline void rte_spinlock_lock(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { int exp = 0; @@ -79,11 +82,13 @@ rte_spinlock_lock(rte_spinlock_t *sl) * A pointer to the spinlock. */ static inline void -rte_spinlock_unlock (rte_spinlock_t *sl); +rte_spinlock_unlock(rte_spinlock_t *sl) + __rte_unlock_function(sl); #ifdef RTE_FORCE_INTRINSICS static inline void -rte_spinlock_unlock (rte_spinlock_t *sl) +rte_spinlock_unlock(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { __atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE); } @@ -99,11 +104,13 @@ rte_spinlock_unlock (rte_spinlock_t *sl) */ __rte_warn_unused_result static inline int -rte_spinlock_trylock (rte_spinlock_t *sl); +rte_spinlock_trylock(rte_spinlock_t *sl) + __rte_exclusive_trylock_function(1, sl); #ifdef RTE_FORCE_INTRINSICS static inline int -rte_spinlock_trylock (rte_spinlock_t *sl) +rte_spinlock_trylock(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { int exp = 0; return __atomic_compare_exchange_n(&sl->locked, &exp, 1, @@ -147,7 +154,8 @@ static inline int rte_tm_supported(void); * A pointer to the spinlock. 
*/ static inline void -rte_spinlock_lock_tm(rte_spinlock_t *sl); +rte_spinlock_lock_tm(rte_spinlock_t *sl) + __rte_exclusive_lock_function(sl); /** * Commit hardware memory transaction or release the spinlock if @@ -157,7 +165,8 @@ rte_spinlock_lock_tm(rte_spinlock_t *sl); * A pointer to the spinlock. */ static inline void -rte_spinlock_unlock_tm(rte_spinlock_t *sl); +rte_spinlock_unlock_tm(rte_spinlock_t *sl) + __rte_unlock_function(sl); /** * Try to execute critical section in a hardware memory transaction, @@ -177,7 +186,8 @@ rte_spinlock_unlock_tm(rte_spinlock_t *sl); */ __rte_warn_unused_result static inline int -rte_spinlock_trylock_tm(rte_spinlock_t *sl); +rte_spinlock_trylock_tm(rte_spinlock_t *sl) + __rte_exclusive_trylock_function(1, sl); /** * The rte_spinlock_recursive_t type. @@ -213,6 +223,7 @@ static inline void rte_spinlock_recursive_init(rte_spinlock_recursive_t *slr) * A pointer to the recursive spinlock. */ static inline void rte_spinlock_recursive_lock(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { int id = rte_gettid(); @@ -229,6 +240,7 @@ static inline void rte_spinlock_recursive_lock(rte_spinlock_recursive_t *slr) * A pointer to the recursive spinlock. */ static inline void rte_spinlock_recursive_unlock(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { if (--(slr->count) == 0) { slr->user = -1; @@ -247,6 +259,7 @@ static inline void rte_spinlock_recursive_unlock(rte_spinlock_recursive_t *slr) */ __rte_warn_unused_result static inline int rte_spinlock_recursive_trylock(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { int id = rte_gettid(); diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build index cfcd40aaed..b0db9b3b3a 100644 --- a/lib/eal/include/meson.build +++ b/lib/eal/include/meson.build @@ -27,6 +27,7 @@ headers += files( 'rte_keepalive.h', 'rte_launch.h', 'rte_lcore.h', + 'rte_lock_annotations.h', 'rte_log.h', 'rte_malloc.h', 'rte_mcslock.h', diff --git a/lib/eal/include/rte_lock_annotations.h b/lib/eal/include/rte_lock_annotations.h new file mode 100644 index 0000000000..9fc50082d6 --- /dev/null +++ b/lib/eal/include/rte_lock_annotations.h @@ -0,0 +1,73 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2022 Red Hat, Inc. + */ + +#ifndef RTE_LOCK_ANNOTATIONS_H +#define RTE_LOCK_ANNOTATIONS_H + +#ifdef __cplusplus +extern "C" { +#endif + +#ifdef RTE_ANNOTATE_LOCKS + +#define __rte_lockable \ + __attribute__((lockable)) + +#define __rte_guarded_by(...) \ + __attribute__((guarded_by(__VA_ARGS__))) +#define __rte_guarded_var \ + __attribute__((guarded_var)) + +#define __rte_exclusive_locks_required(...) \ + __attribute__((exclusive_locks_required(__VA_ARGS__))) +#define __rte_exclusive_lock_function(...) \ + __attribute__((exclusive_lock_function(__VA_ARGS__))) +#define __rte_exclusive_trylock_function(ret, ...) \ + __attribute__((exclusive_trylock_function(ret, __VA_ARGS__))) +#define __rte_assert_exclusive_lock(...) \ + __attribute__((assert_exclusive_lock(__VA_ARGS__))) + +#define __rte_shared_locks_required(...) \ + __attribute__((shared_locks_required(__VA_ARGS__))) +#define __rte_shared_lock_function(...) \ + __attribute__((shared_lock_function(__VA_ARGS__))) +#define __rte_shared_trylock_function(ret, ...) \ + __attribute__((shared_trylock_function(ret, __VA_ARGS__))) +#define __rte_assert_shared_lock(...) \ + __attribute__((assert_shared_lock(__VA_ARGS__))) + +#define __rte_unlock_function(...) 
\ + __attribute__((unlock_function(__VA_ARGS__))) + +#define __rte_no_thread_safety_analysis \ + __attribute__((no_thread_safety_analysis)) + +#else /* ! RTE_ANNOTATE_LOCKS */ + +#define __rte_lockable + +#define __rte_guarded_by(...) +#define __rte_guarded_var + +#define __rte_exclusive_locks_required(...) +#define __rte_exclusive_lock_function(...) +#define __rte_exclusive_trylock_function(...) +#define __rte_assert_exclusive_lock(...) + +#define __rte_shared_locks_required(...) +#define __rte_shared_lock_function(...) +#define __rte_shared_trylock_function(...) +#define __rte_assert_shared_lock(...) + +#define __rte_unlock_function(...) + +#define __rte_no_thread_safety_analysis + +#endif /* RTE_ANNOTATE_LOCKS */ + +#ifdef __cplusplus +} +#endif + +#endif /* RTE_LOCK_ANNOTATIONS_H */ diff --git a/lib/eal/include/rte_seqlock.h b/lib/eal/include/rte_seqlock.h index 1663af62e8..fcbb9c5866 100644 --- a/lib/eal/include/rte_seqlock.h +++ b/lib/eal/include/rte_seqlock.h @@ -215,6 +215,7 @@ rte_seqlock_read_retry(const rte_seqlock_t *seqlock, uint32_t begin_sn) __rte_experimental static inline void rte_seqlock_write_lock(rte_seqlock_t *seqlock) + __rte_exclusive_lock_function(&seqlock->lock) { /* To synchronize with other writers. */ rte_spinlock_lock(&seqlock->lock); @@ -240,6 +241,7 @@ rte_seqlock_write_lock(rte_seqlock_t *seqlock) __rte_experimental static inline void rte_seqlock_write_unlock(rte_seqlock_t *seqlock) + __rte_unlock_function(&seqlock->lock) { rte_seqcount_write_end(&seqlock->count); diff --git a/lib/eal/ppc/include/rte_spinlock.h b/lib/eal/ppc/include/rte_spinlock.h index 149ec245c7..3a4c905b22 100644 --- a/lib/eal/ppc/include/rte_spinlock.h +++ b/lib/eal/ppc/include/rte_spinlock.h @@ -20,6 +20,7 @@ extern "C" { static inline void rte_spinlock_lock(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { while (__sync_lock_test_and_set(&sl->locked, 1)) while (sl->locked) @@ -28,12 +29,14 @@ rte_spinlock_lock(rte_spinlock_t *sl) static inline void rte_spinlock_unlock(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { __sync_lock_release(&sl->locked); } static inline int rte_spinlock_trylock(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { return __sync_lock_test_and_set(&sl->locked, 1) == 0; } diff --git a/lib/eal/x86/include/rte_rwlock.h b/lib/eal/x86/include/rte_rwlock.h index eec4c7123c..1796b69265 100644 --- a/lib/eal/x86/include/rte_rwlock.h +++ b/lib/eal/x86/include/rte_rwlock.h @@ -14,6 +14,7 @@ extern "C" { static inline void rte_rwlock_read_lock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { if (likely(rte_try_tm(&rwl->cnt))) return; @@ -22,6 +23,7 @@ rte_rwlock_read_lock_tm(rte_rwlock_t *rwl) static inline void rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { if (unlikely(rwl->cnt)) rte_rwlock_read_unlock(rwl); @@ -31,6 +33,7 @@ rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl) static inline void rte_rwlock_write_lock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { if (likely(rte_try_tm(&rwl->cnt))) return; @@ -39,6 +42,7 @@ rte_rwlock_write_lock_tm(rte_rwlock_t *rwl) static inline void rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl) + __rte_no_thread_safety_analysis { if (unlikely(rwl->cnt)) rte_rwlock_write_unlock(rwl); diff --git a/lib/eal/x86/include/rte_spinlock.h b/lib/eal/x86/include/rte_spinlock.h index e2e2b2643c..0b20ddfd73 100644 --- a/lib/eal/x86/include/rte_spinlock.h +++ b/lib/eal/x86/include/rte_spinlock.h @@ -23,6 +23,7 @@ extern "C" { #ifndef RTE_FORCE_INTRINSICS static inline void 
rte_spinlock_lock(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { int lock_val = 1; asm volatile ( @@ -43,6 +44,7 @@ rte_spinlock_lock(rte_spinlock_t *sl) static inline void rte_spinlock_unlock (rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { int unlock_val = 0; asm volatile ( @@ -54,6 +56,7 @@ rte_spinlock_unlock (rte_spinlock_t *sl) static inline int rte_spinlock_trylock (rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { int lockval = 1; @@ -121,6 +124,7 @@ rte_try_tm(volatile int *lock) static inline void rte_spinlock_lock_tm(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { if (likely(rte_try_tm(&sl->locked))) return; @@ -130,6 +134,7 @@ rte_spinlock_lock_tm(rte_spinlock_t *sl) static inline int rte_spinlock_trylock_tm(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { if (likely(rte_try_tm(&sl->locked))) return 1; @@ -139,6 +144,7 @@ rte_spinlock_trylock_tm(rte_spinlock_t *sl) static inline void rte_spinlock_unlock_tm(rte_spinlock_t *sl) + __rte_no_thread_safety_analysis { if (unlikely(sl->locked)) rte_spinlock_unlock(sl); @@ -148,6 +154,7 @@ rte_spinlock_unlock_tm(rte_spinlock_t *sl) static inline void rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { if (likely(rte_try_tm(&slr->sl.locked))) return; @@ -157,6 +164,7 @@ rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr) static inline void rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { if (unlikely(slr->sl.locked)) rte_spinlock_recursive_unlock(slr); @@ -166,6 +174,7 @@ rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr) static inline int rte_spinlock_recursive_trylock_tm(rte_spinlock_recursive_t *slr) + __rte_no_thread_safety_analysis { if (likely(rte_try_tm(&slr->sl.locked))) return 1; diff --git a/lib/meson.build b/lib/meson.build index a90fee31b7..450c061d2b 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -120,6 +120,7 @@ foreach l:libraries reason = '' # set if build == false to explain why name = l use_function_versioning = false + annotate_locks = false sources = [] headers = [] indirect_headers = [] # public headers not directly included by apps @@ -202,6 +203,10 @@ foreach l:libraries cflags += '-DRTE_USE_FUNCTION_VERSIONING' endif cflags += '-DRTE_LOG_DEFAULT_LOGTYPE=lib.' 
+ l + if annotate_locks and cc.has_argument('-Wthread-safety') + cflags += '-DRTE_ANNOTATE_LOCKS' + cflags += '-Wthread-safety' + endif # first build static lib static_lib = static_library(libname, From patchwork Tue Feb 7 10:45:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 123215 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 083EC41C2D; Tue, 7 Feb 2023 11:46:02 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 98B1942D3C; Tue, 7 Feb 2023 11:45:55 +0100 (CET) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mails.dpdk.org (Postfix) with ESMTP id 7D34542D38 for ; Tue, 7 Feb 2023 11:45:54 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1675766754; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=iBNB6NpMSvvK7plI5xuz3+7gqsNHW438+IqDQ9cqIP8=; b=b6LvlhbeZqBjTefZaBQV9SgBMkDyFHXJXNlJ2fMYevgylkoMYwVdabZ3u3G2+LFCpqYycx 8jI1X283mVbrn8Pu1iFd3uKPKbmLWoPgHyzEhcoiKuwwoxbHCmcDKZ4vvQjFUTy7389hzv FNOdebSY5Jrj3s7STUAymbpjTQ1N5Gg= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-94-0sEzT2PFOVWtJqxj78fnXw-1; Tue, 07 Feb 2023 05:45:49 -0500 X-MC-Unique: 0sEzT2PFOVWtJqxj78fnXw-1 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com [10.11.54.6]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 737C985A588; Tue, 7 Feb 2023 10:45:49 +0000 (UTC) Received: from dmarchan.redhat.com (ovpn-195-46.brq.redhat.com [10.40.195.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id CAD412166B2A; Tue, 7 Feb 2023 10:45:47 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com, mb@smartsharesystems.com Subject: [PATCH v6 2/9] vhost: simplify need reply handling Date: Tue, 7 Feb 2023 11:45:25 +0100 Message-Id: <20230207104532.2370869-3-david.marchand@redhat.com> In-Reply-To: <20230207104532.2370869-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20230207104532.2370869-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Dedicate send_vhost_slave_message() helper to the case when no reply is needed. Add a send_vhost_slave_message_process_reply() helper for the opposite. This new helper merges both send_vhost_slave_message() and the code previously in process_slave_message_reply(). 
The slave_req_lock lock is then only handled in this helper which will make lock checks trivial. Signed-off-by: David Marchand Acked-by: Morten Brørup Reviewed-by: Maxime Coquelin Reviewed-by: Chenbo Xia --- lib/vhost/vhost_user.c | 119 ++++++++++++++++++----------------------- 1 file changed, 51 insertions(+), 68 deletions(-) diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index 8f33d5f4d9..60ec1bf5f6 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -2878,18 +2878,46 @@ send_vhost_reply(struct virtio_net *dev, int sockfd, struct vhu_msg_context *ctx } static int -send_vhost_slave_message(struct virtio_net *dev, - struct vhu_msg_context *ctx) +send_vhost_slave_message(struct virtio_net *dev, struct vhu_msg_context *ctx) +{ + return send_vhost_message(dev, dev->slave_req_fd, ctx); +} + +static int +send_vhost_slave_message_process_reply(struct virtio_net *dev, struct vhu_msg_context *ctx) { + struct vhu_msg_context msg_reply; int ret; - if (ctx->msg.flags & VHOST_USER_NEED_REPLY) - rte_spinlock_lock(&dev->slave_req_lock); + rte_spinlock_lock(&dev->slave_req_lock); + ret = send_vhost_slave_message(dev, ctx); + if (ret < 0) { + VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to send config change (%d)\n", ret); + goto out; + } - ret = send_vhost_message(dev, dev->slave_req_fd, ctx); - if (ret < 0 && (ctx->msg.flags & VHOST_USER_NEED_REPLY)) - rte_spinlock_unlock(&dev->slave_req_lock); + ret = read_vhost_message(dev, dev->slave_req_fd, &msg_reply); + if (ret <= 0) { + if (ret < 0) + VHOST_LOG_CONFIG(dev->ifname, ERR, + "vhost read slave message reply failed\n"); + else + VHOST_LOG_CONFIG(dev->ifname, INFO, "vhost peer closed\n"); + ret = -1; + goto out; + } + + if (msg_reply.msg.request.slave != ctx->msg.request.slave) { + VHOST_LOG_CONFIG(dev->ifname, ERR, + "received unexpected msg type (%u), expected %u\n", + msg_reply.msg.request.slave, ctx->msg.request.slave); + ret = -1; + goto out; + } + ret = msg_reply.msg.payload.u64 ? -1 : 0; +out: + rte_spinlock_unlock(&dev->slave_req_lock); return ret; } @@ -3213,42 +3241,6 @@ vhost_user_msg_handler(int vid, int fd) return ret; } -static int process_slave_message_reply(struct virtio_net *dev, - const struct vhu_msg_context *ctx) -{ - struct vhu_msg_context msg_reply; - int ret; - - if ((ctx->msg.flags & VHOST_USER_NEED_REPLY) == 0) - return 0; - - ret = read_vhost_message(dev, dev->slave_req_fd, &msg_reply); - if (ret <= 0) { - if (ret < 0) - VHOST_LOG_CONFIG(dev->ifname, ERR, - "vhost read slave message reply failed\n"); - else - VHOST_LOG_CONFIG(dev->ifname, INFO, "vhost peer closed\n"); - ret = -1; - goto out; - } - - ret = 0; - if (msg_reply.msg.request.slave != ctx->msg.request.slave) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "received unexpected msg type (%u), expected %u\n", - msg_reply.msg.request.slave, ctx->msg.request.slave); - ret = -1; - goto out; - } - - ret = msg_reply.msg.payload.u64 ? 
-1 : 0; - -out: - rte_spinlock_unlock(&dev->slave_req_lock); - return ret; -} - int vhost_user_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm) { @@ -3277,10 +3269,9 @@ vhost_user_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm) return 0; } -static int -vhost_user_slave_config_change(struct virtio_net *dev, bool need_reply) +int +rte_vhost_slave_config_change(int vid, bool need_reply) { - int ret; struct vhu_msg_context ctx = { .msg = { .request.slave = VHOST_USER_SLAVE_CONFIG_CHANGE_MSG, @@ -3288,29 +3279,23 @@ vhost_user_slave_config_change(struct virtio_net *dev, bool need_reply) .size = 0, } }; - - if (need_reply) - ctx.msg.flags |= VHOST_USER_NEED_REPLY; - - ret = send_vhost_slave_message(dev, &ctx); - if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to send config change (%d)\n", ret); - return ret; - } - - return process_slave_message_reply(dev, &ctx); -} - -int -rte_vhost_slave_config_change(int vid, bool need_reply) -{ struct virtio_net *dev; + int ret; dev = get_device(vid); if (!dev) return -ENODEV; - return vhost_user_slave_config_change(dev, need_reply); + if (!need_reply) { + ret = send_vhost_slave_message(dev, &ctx); + } else { + ctx.msg.flags |= VHOST_USER_NEED_REPLY; + ret = send_vhost_slave_message_process_reply(dev, &ctx); + } + + if (ret < 0) + VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to send config change (%d)\n", ret); + return ret; } static int vhost_user_slave_set_vring_host_notifier(struct virtio_net *dev, @@ -3339,13 +3324,11 @@ static int vhost_user_slave_set_vring_host_notifier(struct virtio_net *dev, ctx.fd_num = 1; } - ret = send_vhost_slave_message(dev, &ctx); - if (ret < 0) { + ret = send_vhost_slave_message_process_reply(dev, &ctx); + if (ret < 0) VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to set host notifier (%d)\n", ret); - return ret; - } - return process_slave_message_reply(dev, &ctx); + return ret; } int rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable) From patchwork Tue Feb 7 10:45:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 123216 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 23B8F41C2D; Tue, 7 Feb 2023 11:46:09 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BAD5B42D38; Tue, 7 Feb 2023 11:45:57 +0100 (CET) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by mails.dpdk.org (Postfix) with ESMTP id D6B6542D2C for ; Tue, 7 Feb 2023 11:45:56 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1675766756; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=PTWCi241gCwUP+T8k8FLEFinVLITgGmSOJW2tiRLkik=; b=RA64CfOabO59b0ax8xvoCIM+8nJvoOL6mkbfQ7dYLwrekkaC7MIV8Ahm41gRlmJ9om5zgF oCMxHF6JWun3Bgxdkeg2CB2zYqnsVvwTlsfstqmApBSxoZzApdYrURmk0C9a2HdgPRrKBL xBVO+ke2pZE6qc2jvSg+d4fAVXDwBTk= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, 
cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-79-LsKcksRBOqu1NtkYd1KpBA-1; Tue, 07 Feb 2023 05:45:53 -0500 X-MC-Unique: LsKcksRBOqu1NtkYd1KpBA-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 15CBC1800077; Tue, 7 Feb 2023 10:45:53 +0000 (UTC) Received: from dmarchan.redhat.com (ovpn-195-46.brq.redhat.com [10.40.195.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id 74C3C18EC5; Tue, 7 Feb 2023 10:45:51 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com, mb@smartsharesystems.com Subject: [PATCH v6 3/9] vhost: terminate when access lock is not taken Date: Tue, 7 Feb 2023 11:45:26 +0100 Message-Id: <20230207104532.2370869-4-david.marchand@redhat.com> In-Reply-To: <20230207104532.2370869-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20230207104532.2370869-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Be a bit more strict when a programmatic error is detected with regards to the access_lock not being taken. Mark the new helper with __rte_assert_exclusive_lock so that clang understands where locks are expected to be taken. Signed-off-by: David Marchand Acked-by: Morten Brørup Reviewed-by: Maxime Coquelin Reviewed-by: Chenbo Xia --- lib/vhost/vhost.c | 18 +++--------------- lib/vhost/vhost.h | 10 ++++++++++ lib/vhost/virtio_net.c | 6 +----- 3 files changed, 14 insertions(+), 20 deletions(-) diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index 19c7b92c32..8cd727ca2f 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -1781,11 +1781,7 @@ rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id) if (unlikely(vq == NULL || !dev->async_copy)) return -1; - if (unlikely(!rte_spinlock_is_locked(&vq->access_lock))) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "%s() called without access lock taken.\n", - __func__); - return -1; - } + vq_assert_lock(dev, vq); return async_channel_register(dev, vq); } @@ -1847,11 +1843,7 @@ rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id) if (vq == NULL) return -1; - if (unlikely(!rte_spinlock_is_locked(&vq->access_lock))) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "%s() called without access lock taken.\n", - __func__); - return -1; - } + vq_assert_lock(dev, vq); if (!vq->async) return 0; @@ -1994,11 +1986,7 @@ rte_vhost_async_get_inflight_thread_unsafe(int vid, uint16_t queue_id) if (vq == NULL) return ret; - if (unlikely(!rte_spinlock_is_locked(&vq->access_lock))) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "%s() called without access lock taken.\n", - __func__); - return -1; - } + vq_assert_lock(dev, vq); if (!vq->async) return ret; diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index 1f913803f6..82fe9b5fda 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -513,6 +513,16 @@ struct virtio_net { struct rte_vhost_user_extern_ops extern_ops; } __rte_cache_aligned; +static inline void +vq_assert_lock__(struct virtio_net *dev, 
struct vhost_virtqueue *vq, const char *func) + __rte_assert_exclusive_lock(&vq->access_lock) +{ + if (unlikely(!rte_spinlock_is_locked(&vq->access_lock))) + rte_panic("VHOST_CONFIG: (%s) %s() called without access lock taken.\n", + dev->ifname, func); +} +#define vq_assert_lock(dev, vq) vq_assert_lock__(dev, vq, __func__) + static __rte_always_inline bool vq_is_packed(struct virtio_net *dev) { diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index aac7aa9d01..cc9675ebe5 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -2367,11 +2367,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, vq = dev->virtqueue[queue_id]; - if (unlikely(!rte_spinlock_is_locked(&vq->access_lock))) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s() called without access lock taken.\n", - __func__); - return -1; - } + vq_assert_lock(dev, vq); if (unlikely(!vq->async)) { VHOST_LOG_DATA(dev->ifname, ERR, From patchwork Tue Feb 7 10:45:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 123217 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 236B541C2D; Tue, 7 Feb 2023 11:46:16 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1F06842D2C; Tue, 7 Feb 2023 11:46:02 +0100 (CET) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by mails.dpdk.org (Postfix) with ESMTP id E533F40042 for ; Tue, 7 Feb 2023 11:46:00 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1675766760; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=eOwGbt9CcP0zjshpwMuNvLPvD9rBDmotHjwdvgUIrgs=; b=N/4B8su4n0AgYoFV/nq61NmNoT/lOvRNYXkAdtXbi7s8JoaPZFZuiUiUBZRyoYLqWEYyC6 NwVvp/YIP7JsWa64/HvpphGFkLSLzNVLYRHvZix40+4YXON5tv7a4R+TBlP10omaSdbieL wa/GWgR8vncEGoKg6H3BdsOrH1BHycI= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-433-gH3638f8OjeV-31f4u2SAg-1; Tue, 07 Feb 2023 05:45:57 -0500 X-MC-Unique: gH3638f8OjeV-31f4u2SAg-1 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com [10.11.54.6]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id BF678801779; Tue, 7 Feb 2023 10:45:56 +0000 (UTC) Received: from dmarchan.redhat.com (ovpn-195-46.brq.redhat.com [10.40.195.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id 327E82166B29; Tue, 7 Feb 2023 10:45:55 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com, mb@smartsharesystems.com Subject: [PATCH v6 4/9] vhost: annotate virtqueue access lock Date: Tue, 7 Feb 2023 11:45:27 +0100 Message-Id: <20230207104532.2370869-5-david.marchand@redhat.com> In-Reply-To: <20230207104532.2370869-1-david.marchand@redhat.com> References: 
<20220328121758.26632-1-david.marchand@redhat.com> <20230207104532.2370869-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org vhost_user_lock/unlock_all_queue_pairs must be waived since clang annotations can't express taking a runtime number of locks. vhost_queue_stats_update() requirement can be expressed with a required tag. Signed-off-by: David Marchand Acked-by: Morten Brørup Reviewed-by: Maxime Coquelin Reviewed-by: Chenbo Xia --- Changes since RFC v3: - removed annotations needed for vhost async which went to the next patch, --- lib/vhost/vhost_user.c | 2 ++ lib/vhost/virtio_net.c | 4 +--- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index 60ec1bf5f6..70d221b9f6 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -2965,6 +2965,7 @@ vhost_user_check_and_alloc_queue_pair(struct virtio_net *dev, static void vhost_user_lock_all_queue_pairs(struct virtio_net *dev) + __rte_no_thread_safety_analysis { unsigned int i = 0; unsigned int vq_num = 0; @@ -2982,6 +2983,7 @@ vhost_user_lock_all_queue_pairs(struct virtio_net *dev) static void vhost_user_unlock_all_queue_pairs(struct virtio_net *dev) + __rte_no_thread_safety_analysis { unsigned int i = 0; unsigned int vq_num = 0; diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index cc9675ebe5..f2ab6dba15 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -52,12 +52,10 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring) return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring; } -/* - * This function must be called with virtqueue's access_lock taken. 
- */ static inline void vhost_queue_stats_update(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint16_t count) + __rte_exclusive_locks_required(&vq->access_lock) { struct virtqueue_stats *stats = &vq->stats; int i; From patchwork Tue Feb 7 10:45:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 123218 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2703641C2D; Tue, 7 Feb 2023 11:46:23 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7CA8E42D5A; Tue, 7 Feb 2023 11:46:07 +0100 (CET) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by mails.dpdk.org (Postfix) with ESMTP id D482742D0B for ; Tue, 7 Feb 2023 11:46:05 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1675766765; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=T9jHugDeC06gTAM10GQyEJV4zZkZz8c/o7BUQ+efIDM=; b=Nzk7v4W39kuPN4VkXEije0sQ9bnAjKsktbOFlSw0p/3gxeHQDSPdxuvBv/7vqQdgX2I7MN S1TqFvC5loDXMRDFdXlnWjbQ/HDZeuJ5sppeMcmYPDin3MOOdTmRRba/3+iUu5ClF86Vro CFUbFOcG8lMDBdmXX+USG7XLbi0Ti70= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-561-NlU3FG7zNtKWuGjnL5Cjkg-1; Tue, 07 Feb 2023 05:46:02 -0500 X-MC-Unique: NlU3FG7zNtKWuGjnL5Cjkg-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com [10.11.54.1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 9004F18E524F; Tue, 7 Feb 2023 10:46:00 +0000 (UTC) Received: from dmarchan.redhat.com (ovpn-195-46.brq.redhat.com [10.40.195.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id C03CF401014C; Tue, 7 Feb 2023 10:45:58 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com, mb@smartsharesystems.com Subject: [PATCH v6 5/9] vhost: annotate async accesses Date: Tue, 7 Feb 2023 11:45:28 +0100 Message-Id: <20230207104532.2370869-6-david.marchand@redhat.com> In-Reply-To: <20230207104532.2370869-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20230207104532.2370869-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org vq->async is initialised and must be accessed under vq->access_lock. 
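
A reduced sketch of what this annotation buys (vq_sketch, reset_async and caller are illustrative names with simplified types, not the actual vhost definitions; diagnostics only appear in builds with RTE_ANNOTATE_LOCKS and -Wthread-safety):

    #include <stddef.h>
    #include <rte_lock_annotations.h>
    #include <rte_spinlock.h>

    /* Simplified stand-in for the virtqueue structure. */
    struct vq_sketch {
        rte_spinlock_t access_lock;
        void *async __rte_guarded_var;
    };

    static void
    reset_async(struct vq_sketch *vq)
        __rte_exclusive_locks_required(&vq->access_lock)
    {
        vq->async = NULL; /* ok: the contract guarantees the lock is held */
    }

    static void
    caller(struct vq_sketch *vq)
    {
        rte_spinlock_lock(&vq->access_lock);
        reset_async(vq);
        rte_spinlock_unlock(&vq->access_lock);

        vq->async = NULL; /* clang warns: 'async' written without holding a lock */
    }
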
Signed-off-by: David Marchand Acked-by: Morten Brørup Reviewed-by: Maxime Coquelin Reviewed-by: Chenbo Xia --- Changes since v5: - rebased after packed support was added to async code, Changes since RFC v3: - rebased, - fixed annotations vq->access_lock -> &vq->access_lock, - reworked free_vq, --- lib/vhost/vhost.c | 4 ++++ lib/vhost/vhost.h | 2 +- lib/vhost/vhost_user.c | 10 +++++++--- lib/vhost/virtio_net.c | 41 +++++++++++++++++++++++++++++++++++++++++ 4 files changed, 53 insertions(+), 4 deletions(-) diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index 8cd727ca2f..8bccdd8584 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -369,6 +369,7 @@ cleanup_device(struct virtio_net *dev, int destroy) static void vhost_free_async_mem(struct vhost_virtqueue *vq) + __rte_exclusive_locks_required(&vq->access_lock) { if (!vq->async) return; @@ -393,7 +394,9 @@ free_vq(struct virtio_net *dev, struct vhost_virtqueue *vq) else rte_free(vq->shadow_used_split); + rte_spinlock_lock(&vq->access_lock); vhost_free_async_mem(vq); + rte_spinlock_unlock(&vq->access_lock); rte_free(vq->batch_copy_elems); vhost_user_iotlb_destroy(vq); rte_free(vq->log_cache); @@ -1669,6 +1672,7 @@ rte_vhost_extern_callback_register(int vid, static __rte_always_inline int async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) + __rte_exclusive_locks_required(&vq->access_lock) { struct vhost_async *async; int node = vq->numa_node; diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index 82fe9b5fda..c05313cf37 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -326,7 +326,7 @@ struct vhost_virtqueue { struct rte_vhost_resubmit_info *resubmit_inflight; uint64_t global_counter; - struct vhost_async *async; + struct vhost_async *async __rte_guarded_var; int notif_enable; #define VIRTIO_UNINITIALIZED_NOTIF (-1) diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index 70d221b9f6..8c1d60b76b 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -2168,6 +2168,7 @@ vhost_user_set_vring_enable(struct virtio_net **pdev, int main_fd __rte_unused) { struct virtio_net *dev = *pdev; + struct vhost_virtqueue *vq; bool enable = !!ctx->msg.payload.state.num; int index = (int)ctx->msg.payload.state.index; @@ -2175,15 +2176,18 @@ vhost_user_set_vring_enable(struct virtio_net **pdev, "set queue enable: %d to qp idx: %d\n", enable, index); - if (enable && dev->virtqueue[index]->async) { - if (dev->virtqueue[index]->async->pkts_inflight_n) { + vq = dev->virtqueue[index]; + /* vhost_user_lock_all_queue_pairs locked all qps */ + vq_assert_lock(dev, vq); + if (enable && vq->async) { + if (vq->async->pkts_inflight_n) { VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to enable vring. 
Inflight packets must be completed first\n"); return RTE_VHOST_MSG_RESULT_ERR; } } - dev->virtqueue[index]->enabled = enable; + vq->enabled = enable; return RTE_VHOST_MSG_RESULT_OK; } diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index f2ab6dba15..6672caac49 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -102,6 +102,7 @@ static __rte_always_inline int64_t vhost_async_dma_transfer_one(struct virtio_net *dev, struct vhost_virtqueue *vq, int16_t dma_id, uint16_t vchan_id, uint16_t flag_idx, struct vhost_iov_iter *pkt) + __rte_exclusive_locks_required(&vq->access_lock) { struct async_dma_vchan_info *dma_info = &dma_copy_track[dma_id].vchans[vchan_id]; uint16_t ring_mask = dma_info->ring_mask; @@ -151,6 +152,7 @@ static __rte_always_inline uint16_t vhost_async_dma_transfer(struct virtio_net *dev, struct vhost_virtqueue *vq, int16_t dma_id, uint16_t vchan_id, uint16_t head_idx, struct vhost_iov_iter *pkts, uint16_t nr_pkts) + __rte_exclusive_locks_required(&vq->access_lock) { struct async_dma_vchan_info *dma_info = &dma_copy_track[dma_id].vchans[vchan_id]; int64_t ret, nr_copies = 0; @@ -434,6 +436,7 @@ static __rte_always_inline void vhost_async_shadow_enqueue_packed_batch(struct vhost_virtqueue *vq, uint64_t *lens, uint16_t *ids) + __rte_exclusive_locks_required(&vq->access_lock) { uint16_t i; struct vhost_async *async = vq->async; @@ -450,6 +453,7 @@ vhost_async_shadow_enqueue_packed_batch(struct vhost_virtqueue *vq, static __rte_always_inline void vhost_async_shadow_dequeue_packed_batch(struct vhost_virtqueue *vq, uint16_t *ids) + __rte_exclusive_locks_required(&vq->access_lock) { uint16_t i; struct vhost_async *async = vq->async; @@ -611,6 +615,7 @@ vhost_async_shadow_enqueue_packed(struct vhost_virtqueue *vq, uint16_t *id, uint16_t *count, uint16_t num_buffers) + __rte_exclusive_locks_required(&vq->access_lock) { uint16_t i; struct vhost_async *async = vq->async; @@ -1118,6 +1123,7 @@ static __rte_always_inline int async_fill_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *m, uint32_t mbuf_offset, uint64_t buf_iova, uint32_t cpy_len, bool to_desc) + __rte_exclusive_locks_required(&vq->access_lock) { struct vhost_async *async = vq->async; uint64_t mapped_len; @@ -1195,6 +1201,7 @@ static __rte_always_inline int mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *m, struct buf_vector *buf_vec, uint16_t nr_vec, uint16_t num_buffers, bool is_async) + __rte_exclusive_locks_required(&vq->access_lock) { uint32_t vec_idx = 0; uint32_t mbuf_offset, mbuf_avail; @@ -1323,6 +1330,7 @@ vhost_enqueue_single_packed(struct virtio_net *dev, struct rte_mbuf *pkt, struct buf_vector *buf_vec, uint16_t *nr_descs) + __rte_exclusive_locks_required(&vq->access_lock) { uint16_t nr_vec = 0; uint16_t avail_idx = vq->last_avail_idx; @@ -1383,6 +1391,7 @@ vhost_enqueue_single_packed(struct virtio_net *dev, static __rte_noinline uint32_t virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint32_t count) + __rte_exclusive_locks_required(&vq->access_lock) { uint32_t pkt_idx = 0; uint16_t num_buffers; @@ -1610,6 +1619,7 @@ static __rte_always_inline int16_t virtio_dev_rx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *pkt) + __rte_exclusive_locks_required(&vq->access_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; uint16_t nr_descs = 0; @@ -1634,6 +1644,7 @@ virtio_dev_rx_packed(struct virtio_net *dev, struct vhost_virtqueue *__rte_restrict vq, struct 
rte_mbuf **__rte_restrict pkts, uint32_t count) + __rte_exclusive_locks_required(&vq->access_lock) { uint32_t pkt_idx = 0; @@ -1733,6 +1744,7 @@ rte_vhost_enqueue_burst(int vid, uint16_t queue_id, static __rte_always_inline uint16_t async_get_first_inflight_pkt_idx(struct vhost_virtqueue *vq) + __rte_exclusive_locks_required(&vq->access_lock) { struct vhost_async *async = vq->async; @@ -1761,6 +1773,7 @@ store_dma_desc_info_split(struct vring_used_elem *s_ring, struct vring_used_elem static __rte_noinline uint32_t virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id) + __rte_exclusive_locks_required(&vq->access_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; uint32_t pkt_idx = 0; @@ -1867,6 +1880,7 @@ vhost_enqueue_async_packed(struct virtio_net *dev, struct buf_vector *buf_vec, uint16_t *nr_descs, uint16_t *nr_buffers) + __rte_exclusive_locks_required(&vq->access_lock) { uint16_t nr_vec = 0; uint16_t avail_idx = vq->last_avail_idx; @@ -1925,6 +1939,7 @@ vhost_enqueue_async_packed(struct virtio_net *dev, static __rte_always_inline int16_t virtio_dev_rx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *pkt, uint16_t *nr_descs, uint16_t *nr_buffers) + __rte_exclusive_locks_required(&vq->access_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; @@ -1947,6 +1962,7 @@ virtio_dev_rx_async_packed_batch_enqueue(struct virtio_net *dev, struct rte_mbuf **pkts, uint64_t *desc_addrs, uint64_t *lens) + __rte_exclusive_locks_required(&vq->access_lock) { uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf); struct virtio_net_hdr_mrg_rxbuf *hdrs[PACKED_BATCH_SIZE]; @@ -2007,6 +2023,7 @@ virtio_dev_rx_async_packed_batch(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, int16_t dma_id, uint16_t vchan_id) + __rte_exclusive_locks_required(&vq->access_lock) { uint64_t desc_addrs[PACKED_BATCH_SIZE]; uint64_t lens[PACKED_BATCH_SIZE]; @@ -2022,6 +2039,7 @@ virtio_dev_rx_async_packed_batch(struct virtio_net *dev, static __rte_always_inline void dma_error_handler_packed(struct vhost_virtqueue *vq, uint16_t slot_idx, uint32_t nr_err, uint32_t *pkt_idx) + __rte_exclusive_locks_required(&vq->access_lock) { uint16_t descs_err = 0; uint16_t buffers_err = 0; @@ -2052,6 +2070,7 @@ dma_error_handler_packed(struct vhost_virtqueue *vq, uint16_t slot_idx, static __rte_noinline uint32_t virtio_dev_rx_async_submit_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id) + __rte_exclusive_locks_required(&vq->access_lock) { uint32_t pkt_idx = 0; uint16_t n_xfer; @@ -2124,6 +2143,7 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev, struct vhost_virtqueue static __rte_always_inline void write_back_completed_descs_split(struct vhost_virtqueue *vq, uint16_t n_descs) + __rte_exclusive_locks_required(&vq->access_lock) { struct vhost_async *async = vq->async; uint16_t nr_left = n_descs; @@ -2156,6 +2176,7 @@ write_back_completed_descs_split(struct vhost_virtqueue *vq, uint16_t n_descs) static __rte_always_inline void write_back_completed_descs_packed(struct vhost_virtqueue *vq, uint16_t n_buffers) + __rte_exclusive_locks_required(&vq->access_lock) { struct vhost_async *async = vq->async; uint16_t from = async->last_buffer_idx_packed; @@ -2220,6 +2241,7 @@ write_back_completed_descs_packed(struct vhost_virtqueue *vq, static __rte_always_inline uint16_t 
vhost_poll_enqueue_completed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id) + __rte_exclusive_locks_required(&vq->access_lock) { struct vhost_async *async = vq->async; struct async_inflight_info *pkts_info = async->pkts_info; @@ -2824,6 +2846,7 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, struct buf_vector *buf_vec, uint16_t nr_vec, struct rte_mbuf *m, struct rte_mempool *mbuf_pool, bool legacy_ol_flags, uint16_t slot_idx, bool is_async) + __rte_exclusive_locks_required(&vq->access_lock) { uint32_t buf_avail, buf_offset, buf_len; uint64_t buf_addr, buf_iova; @@ -3029,6 +3052,7 @@ static uint16_t virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, bool legacy_ol_flags) + __rte_exclusive_locks_required(&vq->access_lock) { uint16_t i; uint16_t avail_entries; @@ -3132,6 +3156,7 @@ static uint16_t virtio_dev_tx_split_legacy(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count) + __rte_exclusive_locks_required(&vq->access_lock) { return virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count, true); } @@ -3141,6 +3166,7 @@ static uint16_t virtio_dev_tx_split_compliant(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count) + __rte_exclusive_locks_required(&vq->access_lock) { return virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count, false); } @@ -3341,6 +3367,7 @@ vhost_dequeue_single_packed(struct virtio_net *dev, uint16_t *buf_id, uint16_t *desc_count, bool legacy_ol_flags) + __rte_exclusive_locks_required(&vq->access_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; uint32_t buf_len; @@ -3389,6 +3416,7 @@ virtio_dev_tx_single_packed(struct virtio_net *dev, struct rte_mempool *mbuf_pool, struct rte_mbuf *pkts, bool legacy_ol_flags) + __rte_exclusive_locks_required(&vq->access_lock) { uint16_t buf_id, desc_count = 0; @@ -3419,6 +3447,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct rte_mbuf **__rte_restrict pkts, uint32_t count, bool legacy_ol_flags) + __rte_exclusive_locks_required(&vq->access_lock) { uint32_t pkt_idx = 0; @@ -3462,6 +3491,7 @@ static uint16_t virtio_dev_tx_packed_legacy(struct virtio_net *dev, struct vhost_virtqueue *__rte_restrict vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **__rte_restrict pkts, uint32_t count) + __rte_exclusive_locks_required(&vq->access_lock) { return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count, true); } @@ -3471,6 +3501,7 @@ static uint16_t virtio_dev_tx_packed_compliant(struct virtio_net *dev, struct vhost_virtqueue *__rte_restrict vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **__rte_restrict pkts, uint32_t count) + __rte_exclusive_locks_required(&vq->access_lock) { return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count, false); } @@ -3588,6 +3619,7 @@ static __rte_always_inline uint16_t async_poll_dequeue_completed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id, bool legacy_ol_flags) + __rte_exclusive_locks_required(&vq->access_lock) { uint16_t start_idx, from, i; uint16_t nr_cpl_pkts = 0; @@ -3634,6 +3666,7 @@ static __rte_always_inline uint16_t virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t 
vchan_id, bool legacy_ol_flags) + __rte_exclusive_locks_required(&vq->access_lock) { static bool allocerr_warned; bool dropped = false; @@ -3780,6 +3813,7 @@ virtio_dev_tx_async_split_legacy(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id) + __rte_exclusive_locks_required(&vq->access_lock) { return virtio_dev_tx_async_split(dev, vq, mbuf_pool, pkts, count, dma_id, vchan_id, true); @@ -3791,6 +3825,7 @@ virtio_dev_tx_async_split_compliant(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id) + __rte_exclusive_locks_required(&vq->access_lock) { return virtio_dev_tx_async_split(dev, vq, mbuf_pool, pkts, count, dma_id, vchan_id, false); @@ -3799,6 +3834,7 @@ virtio_dev_tx_async_split_compliant(struct virtio_net *dev, static __rte_always_inline void vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue *vq, uint16_t buf_id, uint16_t count) + __rte_exclusive_locks_required(&vq->access_lock) { struct vhost_async *async = vq->async; uint16_t idx = async->buffer_idx_packed; @@ -3820,6 +3856,7 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev, struct rte_mbuf *pkts, uint16_t slot_idx, bool legacy_ol_flags) + __rte_exclusive_locks_required(&vq->access_lock) { int err; uint16_t buf_id, desc_count = 0; @@ -3871,6 +3908,7 @@ virtio_dev_tx_async_packed_batch(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint16_t slot_idx, uint16_t dma_id, uint16_t vchan_id) + __rte_exclusive_locks_required(&vq->access_lock) { uint16_t avail_idx = vq->last_avail_idx; uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf); @@ -3927,6 +3965,7 @@ static __rte_always_inline uint16_t virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, uint16_t dma_id, uint16_t vchan_id, bool legacy_ol_flags) + __rte_exclusive_locks_required(&vq->access_lock) { uint32_t pkt_idx = 0; uint16_t slot_idx = 0; @@ -4036,6 +4075,7 @@ static uint16_t virtio_dev_tx_async_packed_legacy(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, uint16_t dma_id, uint16_t vchan_id) + __rte_exclusive_locks_required(&vq->access_lock) { return virtio_dev_tx_async_packed(dev, vq, mbuf_pool, pkts, count, dma_id, vchan_id, true); @@ -4046,6 +4086,7 @@ static uint16_t virtio_dev_tx_async_packed_compliant(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, uint16_t dma_id, uint16_t vchan_id) + __rte_exclusive_locks_required(&vq->access_lock) { return virtio_dev_tx_async_packed(dev, vq, mbuf_pool, pkts, count, dma_id, vchan_id, false); From patchwork Tue Feb 7 10:45:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 123219 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4BFCC41C2D; Tue, 7 Feb 2023 11:46:32 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 04DD542D67; Tue, 7 Feb 2023 11:46:10 +0100 (CET) Received: from 
us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by mails.dpdk.org (Postfix) with ESMTP id 1BCE042D52 for ; Tue, 7 Feb 2023 11:46:09 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1675766768; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=kOBIU+X+IDSexcMm4j/+uveZOaNs9bCuFeWBDyatHIE=; b=THFbiSFm/neabJQBSYKdZr4pj1mqBW42ihufzWH7oY1P3akXNvkXp/9SRsmNb/IUSm7JlT 8Hbp1/JYu28qs6CtTr40GUSu+jLJUwuHCi+lxq2guZXqVfZo0B7gMu3wzpsJ2QsAI5x+U7 WegjlgfWoMvv4RLyPZMgXRVD3+LNzaQ= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-317-V3aMJW4hPMyxL2pTA1J1dw-1; Tue, 07 Feb 2023 05:46:04 -0500 X-MC-Unique: V3aMJW4hPMyxL2pTA1J1dw-1 Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com [10.11.54.9]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4EE89293248C; Tue, 7 Feb 2023 10:46:04 +0000 (UTC) Received: from dmarchan.redhat.com (ovpn-195-46.brq.redhat.com [10.40.195.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id B2160492C3F; Tue, 7 Feb 2023 10:46:02 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com, mb@smartsharesystems.com Subject: [PATCH v6 6/9] vhost: always take IOTLB lock Date: Tue, 7 Feb 2023 11:45:29 +0100 Message-Id: <20230207104532.2370869-7-david.marchand@redhat.com> In-Reply-To: <20230207104532.2370869-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20230207104532.2370869-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org clang does not support conditionally held locks when statically analysing taken locks with thread safety checks. Always take iotlb locks regardless of VIRTIO_F_IOMMU_PLATFORM feature. 
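To illustrate the limitation, here is a minimal sketch (hypothetical names, not code from this patch) of the conditional-locking pattern that clang's -Wthread-safety analysis cannot model, next to the unconditional form this patch switches to:

#include <stdbool.h>
#include <rte_rwlock.h>

struct vq {
	rte_rwlock_t iotlb_lock;
	bool iommu; /* stands in for the VIRTIO_F_IOMMU_PLATFORM feature bit */
};

/* Conditionally held: on return the lock may or may not be taken, a state
 * the purely static analysis cannot represent, so callers cannot be checked. */
static void
maybe_rd_lock(struct vq *vq)
{
	if (vq->iommu)
		rte_rwlock_read_lock(&vq->iotlb_lock);
}

/* Unconditional: the lock state is identical on every path,
 * so annotated callers can be verified. */
static void
always_rd_lock(struct vq *vq)
{
	rte_rwlock_read_lock(&vq->iotlb_lock);
}

Taking the lock unconditionally costs an uncontended rwlock operation even when no IOMMU is in use, which appears to be the trade-off accepted to make this code checkable.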
Signed-off-by: David Marchand Acked-by: Morten Brørup Reviewed-by: Maxime Coquelin Reviewed-by: Chenbo Xia --- lib/vhost/vhost.c | 8 +++----- lib/vhost/virtio_net.c | 24 ++++++++---------------- 2 files changed, 11 insertions(+), 21 deletions(-) diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index 8bccdd8584..1e0c30791e 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -563,10 +563,9 @@ vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq) } void -vring_invalidate(struct virtio_net *dev, struct vhost_virtqueue *vq) +vring_invalidate(struct virtio_net *dev __rte_unused, struct vhost_virtqueue *vq) { - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_wr_lock(vq); + vhost_user_iotlb_wr_lock(vq); vq->access_ok = false; vq->desc = NULL; @@ -574,8 +573,7 @@ vring_invalidate(struct virtio_net *dev, struct vhost_virtqueue *vq) vq->used = NULL; vq->log_guest_addr = 0; - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_wr_unlock(vq); + vhost_user_iotlb_wr_unlock(vq); } static void diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 6672caac49..49fc46e127 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -1688,8 +1688,7 @@ virtio_dev_rx(struct virtio_net *dev, struct vhost_virtqueue *vq, if (unlikely(!vq->enabled)) goto out_access_unlock; - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_rd_lock(vq); + vhost_user_iotlb_rd_lock(vq); if (unlikely(!vq->access_ok)) if (unlikely(vring_translate(dev, vq) < 0)) @@ -1707,8 +1706,7 @@ virtio_dev_rx(struct virtio_net *dev, struct vhost_virtqueue *vq, vhost_queue_stats_update(dev, vq, pkts, nb_tx); out: - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_rd_unlock(vq); + vhost_user_iotlb_rd_unlock(vq); out_access_unlock: rte_spinlock_unlock(&vq->access_lock); @@ -2499,8 +2497,7 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, struct vhost_virtqueue *vq, if (unlikely(!vq->enabled || !vq->async)) goto out_access_unlock; - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_rd_lock(vq); + vhost_user_iotlb_rd_lock(vq); if (unlikely(!vq->access_ok)) if (unlikely(vring_translate(dev, vq) < 0)) @@ -2520,8 +2517,7 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, struct vhost_virtqueue *vq, vq->stats.inflight_submitted += nb_tx; out: - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_rd_unlock(vq); + vhost_user_iotlb_rd_unlock(vq); out_access_unlock: rte_spinlock_unlock(&vq->access_lock); @@ -3543,8 +3539,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, goto out_access_unlock; } - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_rd_lock(vq); + vhost_user_iotlb_rd_lock(vq); if (unlikely(!vq->access_ok)) if (unlikely(vring_translate(dev, vq) < 0)) { @@ -3603,8 +3598,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, vhost_queue_stats_update(dev, vq, pkts, count); out: - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_rd_unlock(vq); + vhost_user_iotlb_rd_unlock(vq); out_access_unlock: rte_spinlock_unlock(&vq->access_lock); @@ -4150,8 +4144,7 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, goto out_access_unlock; } - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_rd_lock(vq); + vhost_user_iotlb_rd_lock(vq); if (unlikely(vq->access_ok == 0)) if (unlikely(vring_translate(dev, vq) < 0)) { @@ -4215,8 +4208,7 @@ rte_vhost_async_try_dequeue_burst(int vid, 
uint16_t queue_id, vhost_queue_stats_update(dev, vq, pkts, count); out: - if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) - vhost_user_iotlb_rd_unlock(vq); + vhost_user_iotlb_rd_unlock(vq); out_access_unlock: rte_spinlock_unlock(&vq->access_lock); From patchwork Tue Feb 7 10:45:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 123220 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E536141C2D; Tue, 7 Feb 2023 11:46:37 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 547B342D53; Tue, 7 Feb 2023 11:46:16 +0100 (CET) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mails.dpdk.org (Postfix) with ESMTP id 14AAE42D53 for ; Tue, 7 Feb 2023 11:46:14 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1675766774; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=GAR1EbyfPZfteXTs+lXEbIhqMebYNFWP+Jl3RzKsDms=; b=Yr8VuhgV2HBStCzo7Q/JwCFqyDa90VOkAiU305ocIV5msNhGy/c3TsmPqO+hMpoYO8AT88 mC80FabRyvU7k/X69LlcVlLU5c4mgZCvuK7UcGhcCwuhoG20mwLlYqBFketkxvIQgpkgou 3eSrzkMd9DPQhJrOxvRZoPeBWJczRZ4= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-606-quKTJ2voMZi3BL0nuiyr5A-1; Tue, 07 Feb 2023 05:46:08 -0500 X-MC-Unique: quKTJ2voMZi3BL0nuiyr5A-1 Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com [10.11.54.10]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 6661A101A521; Tue, 7 Feb 2023 10:46:08 +0000 (UTC) Received: from dmarchan.redhat.com (ovpn-195-46.brq.redhat.com [10.40.195.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id 50279492B22; Tue, 7 Feb 2023 10:46:06 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com, mb@smartsharesystems.com Subject: [PATCH v6 7/9] vhost: annotate IOTLB lock Date: Tue, 7 Feb 2023 11:45:30 +0100 Message-Id: <20230207104532.2370869-8-david.marchand@redhat.com> In-Reply-To: <20230207104532.2370869-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20230207104532.2370869-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The starting point for this is __vhost_iova_to_vva() which requires the lock to be taken. Annotate all the code leading to a call to it. 
vdpa and vhost_crypto code are annotated but they end up not taking an IOTLB lock and have been marked with a FIXME at the top level. Signed-off-by: David Marchand Acked-by: Morten Brørup Reviewed-by: Maxime Coquelin Reviewed-by: Chenbo Xia --- Changes since v5: - rebased after packed support was added to async code, Changes since RFC v3: - rebased, - moved unconditional lock acquire to a separate patch, - fixed annotations, --- lib/vhost/iotlb.h | 4 ++++ lib/vhost/vdpa.c | 1 + lib/vhost/vhost.c | 8 +++----- lib/vhost/vhost.h | 22 ++++++++++++++------ lib/vhost/vhost_crypto.c | 8 ++++++++ lib/vhost/virtio_net.c | 43 ++++++++++++++++++++++++++++++++++++++++ 6 files changed, 75 insertions(+), 11 deletions(-) diff --git a/lib/vhost/iotlb.h b/lib/vhost/iotlb.h index e27ebebcf5..be079a8aa7 100644 --- a/lib/vhost/iotlb.h +++ b/lib/vhost/iotlb.h @@ -11,24 +11,28 @@ static __rte_always_inline void vhost_user_iotlb_rd_lock(struct vhost_virtqueue *vq) + __rte_shared_lock_function(&vq->iotlb_lock) { rte_rwlock_read_lock(&vq->iotlb_lock); } static __rte_always_inline void vhost_user_iotlb_rd_unlock(struct vhost_virtqueue *vq) + __rte_unlock_function(&vq->iotlb_lock) { rte_rwlock_read_unlock(&vq->iotlb_lock); } static __rte_always_inline void vhost_user_iotlb_wr_lock(struct vhost_virtqueue *vq) + __rte_exclusive_lock_function(&vq->iotlb_lock) { rte_rwlock_write_lock(&vq->iotlb_lock); } static __rte_always_inline void vhost_user_iotlb_wr_unlock(struct vhost_virtqueue *vq) + __rte_unlock_function(&vq->iotlb_lock) { rte_rwlock_write_unlock(&vq->iotlb_lock); } diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c index 577cb00a43..a430a66970 100644 --- a/lib/vhost/vdpa.c +++ b/lib/vhost/vdpa.c @@ -146,6 +146,7 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev) int rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m) + __rte_no_thread_safety_analysis /* FIXME: requires iotlb_lock?
*/ { struct virtio_net *dev = get_device(vid); uint16_t idx, idx_m, desc_id; diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index 1e0c30791e..c36dc06a0a 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -52,7 +52,6 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = { #define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings) -/* Called with iotlb_lock read-locked */ uint64_t __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq, uint64_t iova, uint64_t *size, uint8_t perm) @@ -419,6 +418,7 @@ free_device(struct virtio_net *dev) static __rte_always_inline int log_translate(struct virtio_net *dev, struct vhost_virtqueue *vq) + __rte_shared_locks_required(&vq->iotlb_lock) { if (likely(!(vq->ring_addrs.flags & (1 << VHOST_VRING_F_LOG)))) return 0; @@ -435,8 +435,6 @@ log_translate(struct virtio_net *dev, struct vhost_virtqueue *vq) * Converts vring log address to GPA * If IOMMU is enabled, the log address is IOVA * If IOMMU not enabled, the log address is already GPA - * - * Caller should have iotlb_lock read-locked */ uint64_t translate_log_addr(struct virtio_net *dev, struct vhost_virtqueue *vq, @@ -467,9 +465,9 @@ translate_log_addr(struct virtio_net *dev, struct vhost_virtqueue *vq, return log_addr; } -/* Caller should have iotlb_lock read-locked */ static int vring_translate_split(struct virtio_net *dev, struct vhost_virtqueue *vq) + __rte_shared_locks_required(&vq->iotlb_lock) { uint64_t req_size, size; @@ -506,9 +504,9 @@ vring_translate_split(struct virtio_net *dev, struct vhost_virtqueue *vq) return 0; } -/* Caller should have iotlb_lock read-locked */ static int vring_translate_packed(struct virtio_net *dev, struct vhost_virtqueue *vq) + __rte_shared_locks_required(&vq->iotlb_lock) { uint64_t req_size, size; diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index c05313cf37..5750f0c005 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -563,12 +563,15 @@ void __vhost_log_cache_write(struct virtio_net *dev, uint64_t addr, uint64_t len); void __vhost_log_cache_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, - uint64_t iova, uint64_t len); + uint64_t iova, uint64_t len) + __rte_shared_locks_required(&vq->iotlb_lock); void __vhost_log_cache_sync(struct virtio_net *dev, struct vhost_virtqueue *vq); + void __vhost_log_write(struct virtio_net *dev, uint64_t addr, uint64_t len); void __vhost_log_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, - uint64_t iova, uint64_t len); + uint64_t iova, uint64_t len) + __rte_shared_locks_required(&vq->iotlb_lock); static __rte_always_inline void vhost_log_write(struct virtio_net *dev, uint64_t addr, uint64_t len) @@ -618,6 +621,7 @@ vhost_log_used_vring(struct virtio_net *dev, struct vhost_virtqueue *vq, static __rte_always_inline void vhost_log_cache_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, uint64_t iova, uint64_t len) + __rte_shared_locks_required(&vq->iotlb_lock) { if (likely(!(dev->features & (1ULL << VHOST_F_LOG_ALL)))) return; @@ -631,6 +635,7 @@ vhost_log_cache_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, static __rte_always_inline void vhost_log_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, uint64_t iova, uint64_t len) + __rte_shared_locks_required(&vq->iotlb_lock) { if (likely(!(dev->features & (1ULL << VHOST_F_LOG_ALL)))) return; @@ -834,18 +839,23 @@ struct rte_vhost_device_ops const *vhost_driver_callback_get(const char *path); void vhost_backend_cleanup(struct virtio_net *dev); uint64_t 
__vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq, - uint64_t iova, uint64_t *len, uint8_t perm); + uint64_t iova, uint64_t *len, uint8_t perm) + __rte_shared_locks_required(&vq->iotlb_lock); void *vhost_alloc_copy_ind_table(struct virtio_net *dev, struct vhost_virtqueue *vq, - uint64_t desc_addr, uint64_t desc_len); -int vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq); + uint64_t desc_addr, uint64_t desc_len) + __rte_shared_locks_required(&vq->iotlb_lock); +int vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq) + __rte_shared_locks_required(&vq->iotlb_lock); uint64_t translate_log_addr(struct virtio_net *dev, struct vhost_virtqueue *vq, - uint64_t log_addr); + uint64_t log_addr) + __rte_shared_locks_required(&vq->iotlb_lock); void vring_invalidate(struct virtio_net *dev, struct vhost_virtqueue *vq); static __rte_always_inline uint64_t vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq, uint64_t iova, uint64_t *len, uint8_t perm) + __rte_shared_locks_required(&vq->iotlb_lock) { if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))) return rte_vhost_va_from_guest_pa(dev->mem, iova, len); diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c index b448b6685d..f02bf865c3 100644 --- a/lib/vhost/vhost_crypto.c +++ b/lib/vhost/vhost_crypto.c @@ -490,6 +490,7 @@ static __rte_always_inline struct virtio_crypto_inhdr * reach_inhdr(struct vhost_crypto_data_req *vc_req, struct vhost_crypto_desc *head, uint32_t max_n_descs) + __rte_shared_locks_required(&vc_req->vq->iotlb_lock) { struct virtio_crypto_inhdr *inhdr; struct vhost_crypto_desc *last = head + (max_n_descs - 1); @@ -536,6 +537,7 @@ static __rte_always_inline void * get_data_ptr(struct vhost_crypto_data_req *vc_req, struct vhost_crypto_desc *cur_desc, uint8_t perm) + __rte_shared_locks_required(&vc_req->vq->iotlb_lock) { void *data; uint64_t dlen = cur_desc->len; @@ -552,6 +554,7 @@ get_data_ptr(struct vhost_crypto_data_req *vc_req, static __rte_always_inline uint32_t copy_data_from_desc(void *dst, struct vhost_crypto_data_req *vc_req, struct vhost_crypto_desc *desc, uint32_t size) + __rte_shared_locks_required(&vc_req->vq->iotlb_lock) { uint64_t remain; uint64_t addr; @@ -582,6 +585,7 @@ static __rte_always_inline int copy_data(void *data, struct vhost_crypto_data_req *vc_req, struct vhost_crypto_desc *head, struct vhost_crypto_desc **cur_desc, uint32_t size, uint32_t max_n_descs) + __rte_shared_locks_required(&vc_req->vq->iotlb_lock) { struct vhost_crypto_desc *desc = *cur_desc; uint32_t left = size; @@ -665,6 +669,7 @@ prepare_write_back_data(struct vhost_crypto_data_req *vc_req, uint32_t offset, uint64_t write_back_len, uint32_t max_n_descs) + __rte_shared_locks_required(&vc_req->vq->iotlb_lock) { struct vhost_crypto_writeback_data *wb_data, *head; struct vhost_crypto_desc *desc = *cur_desc; @@ -785,6 +790,7 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op, struct virtio_crypto_cipher_data_req *cipher, struct vhost_crypto_desc *head, uint32_t max_n_descs) + __rte_shared_locks_required(&vc_req->vq->iotlb_lock) { struct vhost_crypto_desc *desc = head; struct vhost_crypto_writeback_data *ewb = NULL; @@ -938,6 +944,7 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op, struct virtio_crypto_alg_chain_data_req *chain, struct vhost_crypto_desc *head, uint32_t max_n_descs) + __rte_shared_locks_required(&vc_req->vq->iotlb_lock) { struct vhost_crypto_desc *desc = head, *digest_desc; struct 
vhost_crypto_writeback_data *ewb = NULL, *ewb2 = NULL; @@ -1123,6 +1130,7 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto, struct vhost_virtqueue *vq, struct rte_crypto_op *op, struct vring_desc *head, struct vhost_crypto_desc *descs, uint16_t desc_idx) + __rte_no_thread_safety_analysis /* FIXME: requires iotlb_lock? */ { struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(op->sym->m_src); struct rte_cryptodev_sym_session *session; diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 49fc46e127..51dc3c96d3 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -233,6 +233,7 @@ vhost_async_dma_check_completed(struct virtio_net *dev, int16_t dma_id, uint16_t static inline void do_data_copy_enqueue(struct virtio_net *dev, struct vhost_virtqueue *vq) + __rte_shared_locks_required(&vq->iotlb_lock) { struct batch_copy_elem *elem = vq->batch_copy_elems; uint16_t count = vq->batch_copy_nb_elems; @@ -637,6 +638,7 @@ vhost_shadow_enqueue_single_packed(struct virtio_net *dev, uint16_t *id, uint16_t *count, uint16_t num_buffers) + __rte_shared_locks_required(&vq->iotlb_lock) { vhost_shadow_enqueue_packed(vq, len, id, count, num_buffers); @@ -728,6 +730,7 @@ static __rte_always_inline int map_one_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, struct buf_vector *buf_vec, uint16_t *vec_idx, uint64_t desc_iova, uint64_t desc_len, uint8_t perm) + __rte_shared_locks_required(&vq->iotlb_lock) { uint16_t vec_id = *vec_idx; @@ -765,6 +768,7 @@ fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq, uint32_t avail_idx, uint16_t *vec_idx, struct buf_vector *buf_vec, uint16_t *desc_chain_head, uint32_t *desc_chain_len, uint8_t perm) + __rte_shared_locks_required(&vq->iotlb_lock) { uint16_t idx = vq->avail->ring[avail_idx & (vq->size - 1)]; uint16_t vec_id = *vec_idx; @@ -848,6 +852,7 @@ reserve_avail_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq, uint64_t size, struct buf_vector *buf_vec, uint16_t *num_buffers, uint16_t avail_head, uint16_t *nr_vec) + __rte_shared_locks_required(&vq->iotlb_lock) { uint16_t cur_idx; uint16_t vec_idx = 0; @@ -898,6 +903,7 @@ fill_vec_buf_packed_indirect(struct virtio_net *dev, struct vhost_virtqueue *vq, struct vring_packed_desc *desc, uint16_t *vec_idx, struct buf_vector *buf_vec, uint32_t *len, uint8_t perm) + __rte_shared_locks_required(&vq->iotlb_lock) { uint16_t i; uint32_t nr_descs; @@ -956,6 +962,7 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, uint16_t avail_idx, uint16_t *desc_count, struct buf_vector *buf_vec, uint16_t *vec_idx, uint16_t *buf_id, uint32_t *len, uint8_t perm) + __rte_shared_locks_required(&vq->iotlb_lock) { bool wrap_counter = vq->avail_wrap_counter; struct vring_packed_desc *descs = vq->desc_packed; @@ -1021,6 +1028,7 @@ static __rte_noinline void copy_vnet_hdr_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, struct buf_vector *buf_vec, struct virtio_net_hdr_mrg_rxbuf *hdr) + __rte_shared_locks_required(&vq->iotlb_lock) { uint64_t len; uint64_t remain = dev->vhost_hlen; @@ -1124,6 +1132,7 @@ async_fill_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *m, uint32_t mbuf_offset, uint64_t buf_iova, uint32_t cpy_len, bool to_desc) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { struct vhost_async *async = vq->async; uint64_t mapped_len; @@ -1164,6 +1173,7 @@ static __rte_always_inline void sync_fill_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, struct 
rte_mbuf *m, uint32_t mbuf_offset, uint64_t buf_addr, uint64_t buf_iova, uint32_t cpy_len, bool to_desc) + __rte_shared_locks_required(&vq->iotlb_lock) { struct batch_copy_elem *batch_copy = vq->batch_copy_elems; @@ -1202,6 +1212,7 @@ mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *m, struct buf_vector *buf_vec, uint16_t nr_vec, uint16_t num_buffers, bool is_async) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { uint32_t vec_idx = 0; uint32_t mbuf_offset, mbuf_avail; @@ -1331,6 +1342,7 @@ vhost_enqueue_single_packed(struct virtio_net *dev, struct buf_vector *buf_vec, uint16_t *nr_descs) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { uint16_t nr_vec = 0; uint16_t avail_idx = vq->last_avail_idx; @@ -1392,6 +1404,7 @@ static __rte_noinline uint32_t virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint32_t count) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { uint32_t pkt_idx = 0; uint16_t num_buffers; @@ -1448,6 +1461,7 @@ virtio_dev_rx_sync_batch_check(struct virtio_net *dev, struct rte_mbuf **pkts, uint64_t *desc_addrs, uint64_t *lens) + __rte_shared_locks_required(&vq->iotlb_lock) { bool wrap_counter = vq->avail_wrap_counter; struct vring_packed_desc *descs = vq->desc_packed; @@ -1551,6 +1565,7 @@ virtio_dev_rx_batch_packed_copy(struct virtio_net *dev, struct rte_mbuf **pkts, uint64_t *desc_addrs, uint64_t *lens) + __rte_shared_locks_required(&vq->iotlb_lock) { uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf); struct virtio_net_hdr_mrg_rxbuf *hdrs[PACKED_BATCH_SIZE]; @@ -1598,6 +1613,7 @@ static __rte_always_inline int virtio_dev_rx_sync_batch_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts) + __rte_shared_locks_required(&vq->iotlb_lock) { uint64_t desc_addrs[PACKED_BATCH_SIZE]; uint64_t lens[PACKED_BATCH_SIZE]; @@ -1620,6 +1636,7 @@ virtio_dev_rx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *pkt) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; uint16_t nr_descs = 0; @@ -1645,6 +1662,7 @@ virtio_dev_rx_packed(struct virtio_net *dev, struct rte_mbuf **__rte_restrict pkts, uint32_t count) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { uint32_t pkt_idx = 0; @@ -1772,6 +1790,7 @@ static __rte_noinline uint32_t virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; uint32_t pkt_idx = 0; @@ -1879,6 +1898,7 @@ vhost_enqueue_async_packed(struct virtio_net *dev, uint16_t *nr_descs, uint16_t *nr_buffers) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { uint16_t nr_vec = 0; uint16_t avail_idx = vq->last_avail_idx; @@ -1938,6 +1958,7 @@ static __rte_always_inline int16_t virtio_dev_rx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *pkt, uint16_t *nr_descs, uint16_t *nr_buffers) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; @@ -1961,6 
+1982,7 @@ virtio_dev_rx_async_packed_batch_enqueue(struct virtio_net *dev, uint64_t *desc_addrs, uint64_t *lens) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf); struct virtio_net_hdr_mrg_rxbuf *hdrs[PACKED_BATCH_SIZE]; @@ -2022,6 +2044,7 @@ virtio_dev_rx_async_packed_batch(struct virtio_net *dev, struct rte_mbuf **pkts, int16_t dma_id, uint16_t vchan_id) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { uint64_t desc_addrs[PACKED_BATCH_SIZE]; uint64_t lens[PACKED_BATCH_SIZE]; @@ -2069,6 +2092,7 @@ static __rte_noinline uint32_t virtio_dev_rx_async_submit_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { uint32_t pkt_idx = 0; uint16_t n_xfer; @@ -2843,6 +2867,7 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *m, struct rte_mempool *mbuf_pool, bool legacy_ol_flags, uint16_t slot_idx, bool is_async) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { uint32_t buf_avail, buf_offset, buf_len; uint64_t buf_addr, buf_iova; @@ -3049,6 +3074,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, bool legacy_ol_flags) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { uint16_t i; uint16_t avail_entries; @@ -3153,6 +3179,7 @@ virtio_dev_tx_split_legacy(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { return virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count, true); } @@ -3163,6 +3190,7 @@ virtio_dev_tx_split_compliant(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { return virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count, false); } @@ -3174,6 +3202,7 @@ vhost_reserve_avail_batch_packed(struct virtio_net *dev, uint16_t avail_idx, uintptr_t *desc_addrs, uint16_t *ids) + __rte_shared_locks_required(&vq->iotlb_lock) { bool wrap = vq->avail_wrap_counter; struct vring_packed_desc *descs = vq->desc_packed; @@ -3317,6 +3346,7 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, bool legacy_ol_flags) + __rte_shared_locks_required(&vq->iotlb_lock) { uint16_t avail_idx = vq->last_avail_idx; uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf); @@ -3364,6 +3394,7 @@ vhost_dequeue_single_packed(struct virtio_net *dev, uint16_t *desc_count, bool legacy_ol_flags) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { struct buf_vector buf_vec[BUF_VECTOR_MAX]; uint32_t buf_len; @@ -3413,6 +3444,7 @@ virtio_dev_tx_single_packed(struct virtio_net *dev, struct rte_mbuf *pkts, bool legacy_ol_flags) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { uint16_t buf_id, desc_count = 0; @@ -3444,6 +3476,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, uint32_t count, bool 
legacy_ol_flags) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { uint32_t pkt_idx = 0; @@ -3488,6 +3521,7 @@ virtio_dev_tx_packed_legacy(struct virtio_net *dev, struct vhost_virtqueue *__rte_restrict vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **__rte_restrict pkts, uint32_t count) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count, true); } @@ -3498,6 +3532,7 @@ virtio_dev_tx_packed_compliant(struct virtio_net *dev, struct vhost_virtqueue *__rte_restrict vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **__rte_restrict pkts, uint32_t count) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count, false); } @@ -3661,6 +3696,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id, bool legacy_ol_flags) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { static bool allocerr_warned; bool dropped = false; @@ -3808,6 +3844,7 @@ virtio_dev_tx_async_split_legacy(struct virtio_net *dev, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { return virtio_dev_tx_async_split(dev, vq, mbuf_pool, pkts, count, dma_id, vchan_id, true); @@ -3820,6 +3857,7 @@ virtio_dev_tx_async_split_compliant(struct virtio_net *dev, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { return virtio_dev_tx_async_split(dev, vq, mbuf_pool, pkts, count, dma_id, vchan_id, false); @@ -3851,6 +3889,7 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev, uint16_t slot_idx, bool legacy_ol_flags) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { int err; uint16_t buf_id, desc_count = 0; @@ -3903,6 +3942,7 @@ virtio_dev_tx_async_packed_batch(struct virtio_net *dev, struct rte_mbuf **pkts, uint16_t slot_idx, uint16_t dma_id, uint16_t vchan_id) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { uint16_t avail_idx = vq->last_avail_idx; uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf); @@ -3960,6 +4000,7 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, uint16_t dma_id, uint16_t vchan_id, bool legacy_ol_flags) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { uint32_t pkt_idx = 0; uint16_t slot_idx = 0; @@ -4070,6 +4111,7 @@ virtio_dev_tx_async_packed_legacy(struct virtio_net *dev, struct vhost_virtqueue struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, uint16_t dma_id, uint16_t vchan_id) __rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { return virtio_dev_tx_async_packed(dev, vq, mbuf_pool, pkts, count, dma_id, vchan_id, true); @@ -4081,6 +4123,7 @@ virtio_dev_tx_async_packed_compliant(struct virtio_net *dev, struct vhost_virtqu struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, uint16_t dma_id, uint16_t vchan_id) 
__rte_exclusive_locks_required(&vq->access_lock) + __rte_shared_locks_required(&vq->iotlb_lock) { return virtio_dev_tx_async_packed(dev, vq, mbuf_pool, pkts, count, dma_id, vchan_id, false); From patchwork Tue Feb 7 10:45:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 123221 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id DA3FF41C2D; Tue, 7 Feb 2023 11:46:45 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CDCDB42D79; Tue, 7 Feb 2023 11:46:18 +0100 (CET) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by mails.dpdk.org (Postfix) with ESMTP id ED94542D5D for ; Tue, 7 Feb 2023 11:46:16 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1675766776; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=dv1ahMSUq5QgZ7MmUWvBVCmEGYTLTHPYnutA2SA54Sg=; b=I2tfTcyvnq7Miqwvb9J8E7FH4Cdo6a34oReSkMoTR6vf17ZHfVEN+/LrSUZF0zyQ0w5GJk jztLQnc0RskHdSyc51iKCauvz6dbRgDG5az0/yyo6rc9q0APA4w1la49UolD2YiyjOnrQN um5LZH3wur+XHeu2sOnHhkE/vNE3a40= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-654-g42YtZQJOzer-azXVnkyCA-1; Tue, 07 Feb 2023 05:46:12 -0500 X-MC-Unique: g42YtZQJOzer-azXVnkyCA-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com [10.11.54.1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 1FADE2932485; Tue, 7 Feb 2023 10:46:12 +0000 (UTC) Received: from dmarchan.redhat.com (ovpn-195-46.brq.redhat.com [10.40.195.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id 859BE400DFDB; Tue, 7 Feb 2023 10:46:10 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com, mb@smartsharesystems.com Subject: [PATCH v6 8/9] vhost: annotate vDPA device list accesses Date: Tue, 7 Feb 2023 11:45:31 +0100 Message-Id: <20230207104532.2370869-9-david.marchand@redhat.com> In-Reply-To: <20230207104532.2370869-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20230207104532.2370869-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Access to vdpa_device_list must be protected with vdpa_device_list_lock spinlock. 
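For reference, a minimal sketch of the guarded-variable pattern this patch applies to the vDPA device list (illustrative names; it assumes a component built with annotate_locks = true so that RTE_ANNOTATE_LOCKS is defined under clang):

#include <rte_spinlock.h>
#include <rte_lock_annotations.h>

static rte_spinlock_t cnt_lock = RTE_SPINLOCK_INITIALIZER;
/* Any access to cnt while cnt_lock is not held is reported by clang. */
static unsigned int cnt __rte_guarded_by(&cnt_lock);

static void
cnt_increment(void)
{
	rte_spinlock_lock(&cnt_lock);
	cnt++; /* OK: lock held */
	rte_spinlock_unlock(&cnt_lock);
}

static unsigned int
cnt_read_unchecked(void)
	__rte_no_thread_safety_analysis /* deliberately opts out of the check */
{
	return cnt;
}

Dropping the lock/unlock pair in cnt_increment() would then be flagged by clang with a "writing variable ... requires holding ..." style diagnostic, which is the protection wanted here for vdpa_device_list.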
Signed-off-by: David Marchand Acked-by: Morten Brørup Reviewed-by: Maxime Coquelin Reviewed-by: Chenbo Xia --- Changes since RFC v3: - rebased, --- lib/vhost/vdpa.c | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c index a430a66970..6284ea2ed1 100644 --- a/lib/vhost/vdpa.c +++ b/lib/vhost/vdpa.c @@ -23,21 +23,22 @@ /** Double linked list of vDPA devices. */ TAILQ_HEAD(vdpa_device_list, rte_vdpa_device); -static struct vdpa_device_list vdpa_device_list = - TAILQ_HEAD_INITIALIZER(vdpa_device_list); +static struct vdpa_device_list vdpa_device_list__ = + TAILQ_HEAD_INITIALIZER(vdpa_device_list__); static rte_spinlock_t vdpa_device_list_lock = RTE_SPINLOCK_INITIALIZER; +static struct vdpa_device_list * const vdpa_device_list + __rte_guarded_by(&vdpa_device_list_lock) = &vdpa_device_list__; - -/* Unsafe, needs to be called with vdpa_device_list_lock held */ static struct rte_vdpa_device * __vdpa_find_device_by_name(const char *name) + __rte_exclusive_locks_required(&vdpa_device_list_lock) { struct rte_vdpa_device *dev, *ret = NULL; if (name == NULL) return NULL; - TAILQ_FOREACH(dev, &vdpa_device_list, next) { + TAILQ_FOREACH(dev, vdpa_device_list, next) { if (!strncmp(dev->device->name, name, RTE_DEV_NAME_MAX_LEN)) { ret = dev; break; @@ -116,7 +117,7 @@ rte_vdpa_register_device(struct rte_device *rte_dev, dev->type = RTE_VHOST_VDPA_DEVICE_TYPE_NET; } - TAILQ_INSERT_TAIL(&vdpa_device_list, dev, next); + TAILQ_INSERT_TAIL(vdpa_device_list, dev, next); out_unlock: rte_spinlock_unlock(&vdpa_device_list_lock); @@ -130,11 +131,11 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev) int ret = -1; rte_spinlock_lock(&vdpa_device_list_lock); - RTE_TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) { + RTE_TAILQ_FOREACH_SAFE(cur_dev, vdpa_device_list, next, tmp_dev) { if (dev != cur_dev) continue; - TAILQ_REMOVE(&vdpa_device_list, dev, next); + TAILQ_REMOVE(vdpa_device_list, dev, next); rte_free(dev); ret = 0; break; @@ -336,7 +337,7 @@ vdpa_find_device(const struct rte_vdpa_device *start, rte_vdpa_cmp_t cmp, rte_spinlock_lock(&vdpa_device_list_lock); if (start == NULL) - dev = TAILQ_FIRST(&vdpa_device_list); + dev = TAILQ_FIRST(vdpa_device_list); else dev = TAILQ_NEXT(start, next); From patchwork Tue Feb 7 10:45:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: David Marchand X-Patchwork-Id: 123222 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 30DF441C2D; Tue, 7 Feb 2023 11:46:51 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CF80A42D6D; Tue, 7 Feb 2023 11:46:21 +0100 (CET) Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by mails.dpdk.org (Postfix) with ESMTP id AC3AC410DF for ; Tue, 7 Feb 2023 11:46:20 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1675766780; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=qX4BE0gpJ/Vf4ydS4AYQT0NAEi+IbYSH4uX7Y38q+y4=; 
b=NS/E/jp7j2/B1AqifMrgJ8tqTLaSa9btglDvN1N2/OLWMa6+KKwTBrarLDNu5kZGnRB6Tl +BfIRxn0G8oKdolKsH4qr0kTmMQod5yMvZKaNaMj5BH8ZZXYOBdmKO+j0s/4zZrxzcWS2+ lJ1ZGHriyJY3M3I345urSJHuqz94KeA= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-43-fH58OibcOEuKXYMqXZt2yg-1; Tue, 07 Feb 2023 05:46:16 -0500 X-MC-Unique: fH58OibcOEuKXYMqXZt2yg-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com [10.11.54.2]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id AB9341800077; Tue, 7 Feb 2023 10:46:15 +0000 (UTC) Received: from dmarchan.redhat.com (ovpn-195-46.brq.redhat.com [10.40.195.46]) by smtp.corp.redhat.com (Postfix) with ESMTP id 1B39F40398A0; Tue, 7 Feb 2023 10:46:13 +0000 (UTC) From: David Marchand To: dev@dpdk.org Cc: maxime.coquelin@redhat.com, stephen@networkplumber.org, chenbo.xia@intel.com, jiayu.hu@intel.com, yuanx.wang@intel.com, xuan.ding@intel.com, mb@smartsharesystems.com Subject: [PATCH v6 9/9] vhost: enable lock check Date: Tue, 7 Feb 2023 11:45:32 +0100 Message-Id: <20230207104532.2370869-10-david.marchand@redhat.com> In-Reply-To: <20230207104532.2370869-1-david.marchand@redhat.com> References: <20220328121758.26632-1-david.marchand@redhat.com> <20230207104532.2370869-1-david.marchand@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Now that all locks in this library are annotated, we can enable the check. Signed-off-by: David Marchand Acked-by: Morten Brørup Reviewed-by: Maxime Coquelin Reviewed-by: Chenbo Xia --- lib/vhost/meson.build | 2 ++ 1 file changed, 2 insertions(+) diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build index bc7272053b..197a51d936 100644 --- a/lib/vhost/meson.build +++ b/lib/vhost/meson.build @@ -17,6 +17,8 @@ elif (toolchain == 'icc' and cc.version().version_compare('>=16.0.0')) endif dpdk_conf.set('RTE_LIBRTE_VHOST_POSTCOPY', cc.has_header('linux/userfaultfd.h')) cflags += '-fno-strict-aliasing' + +annotate_locks = true sources = files( 'fd_man.c', 'iotlb.c',