From patchwork Wed Mar 22 17:05:24 2023
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 125429
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: David Marchand
To: dev@dpdk.org
Cc: stable@dpdk.org, Maxime Coquelin, Chenbo Xia, Yuanhan Liu
Subject: [PATCH] vhost: avoid sleeping under mutex
Date: Wed, 22 Mar 2023 18:05:24 +0100
Message-Id: <20230322170524.2314715-1-david.marchand@redhat.com>

Covscan reported:

 2. dpdk-21.11/lib/vhost/socket.c:852: lock_acquire:
    Calling function "pthread_mutex_lock" acquires lock "vhost_user.mutex".
23. dpdk-21.11/lib/vhost/socket.c:955: sleep:
    Call to "vhost_user_reconnect_init" might sleep while holding lock
    "vhost_user.mutex".

#  953|         vsocket->reconnect = !(flags & RTE_VHOST_USER_NO_RECONNECT);
#  954|         if (vsocket->reconnect && reconn_tid == 0) {
#  955|->               if (vhost_user_reconnect_init() != 0)
#  956|                         goto out_mutex;
#  957|         }

The reason for this warning is that vhost_user_reconnect_init() creates a
ctrl thread and calls nanosleep while waiting for this thread to be ready,
all while vhost_user.mutex is held.

Move the call to vhost_user_reconnect_init() out of this mutex.

While at it, a pthread_t value should be considered opaque. Instead of
relying on reconn_tid == 0, use an internal flag in
vhost_user_reconnect_init().
Coverity issue: 373686
Bugzilla ID: 981
Fixes: e623e0c6d8a5 ("vhost: add reconnect ability")
Cc: stable@dpdk.org

Signed-off-by: David Marchand
---
 lib/vhost/socket.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index 669c322e12..21002848e6 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -498,8 +498,12 @@ vhost_user_client_reconnect(void *arg __rte_unused)
 static int
 vhost_user_reconnect_init(void)
 {
+	static bool reconn_init_done;
 	int ret;
 
+	if (reconn_init_done)
+		return 0;
+
 	ret = pthread_mutex_init(&reconn_list.mutex, NULL);
 	if (ret < 0) {
 		VHOST_LOG_CONFIG("thread", ERR, "%s: failed to initialize mutex\n", __func__);
@@ -515,6 +519,8 @@ vhost_user_reconnect_init(void)
 		VHOST_LOG_CONFIG("thread", ERR,
 			"%s: failed to destroy reconnect mutex\n",
 			__func__);
+	} else {
+		reconn_init_done = true;
 	}
 
 	return ret;
@@ -866,6 +872,11 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
 	if (!path)
 		return -1;
 
+	if ((flags & RTE_VHOST_USER_CLIENT) != 0 &&
+			(flags & RTE_VHOST_USER_NO_RECONNECT) == 0 &&
+			vhost_user_reconnect_init() != 0)
+		return -1;
+
 	pthread_mutex_lock(&vhost_user.mutex);
 
 	if (vhost_user.vsocket_cnt == MAX_VHOST_SOCKET) {
@@ -961,11 +972,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
 	}
 
 	if ((flags & RTE_VHOST_USER_CLIENT) != 0) {
-		vsocket->reconnect = !(flags & RTE_VHOST_USER_NO_RECONNECT);
-		if (vsocket->reconnect && reconn_tid == 0) {
-			if (vhost_user_reconnect_init() != 0)
-				goto out_mutex;
-		}
+		vsocket->reconnect = (flags & RTE_VHOST_USER_NO_RECONNECT) == 0;
 	} else {
 		vsocket->is_server = true;
 	}