From patchwork Thu Sep  2 15:45:53 2021
X-Patchwork-Submitter: Gaoxiang Liu
X-Patchwork-Id: 97834
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Gaoxiang Liu
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, liugaoxiang@huawei.com, Gaoxiang Liu
Date: Thu, 2 Sep 2021 23:45:53 +0800
Message-Id: <20210902154553.249-1-gaoxiangliu0@163.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210827141925.1500-1-gaoxiangliu0@163.com>
References: <20210827141925.1500-1-gaoxiangliu0@163.com>
Subject: [dpdk-dev] [PATCH v7] vhost: fix crash on port deletion
List-Id: DPDK patches and discussions

rte_vhost_driver_unregister() and vhost_user_read_cb() can be called
at the same time by two threads. When the memory of vsocket is freed
in rte_vhost_driver_unregister(), that now-invalid memory can still be
accessed in vhost_user_read_cb(). The bug affects vhost in both server
and client mode.

E.g., the vhost-user port is created as server:
Thread1 calls rte_vhost_driver_unregister(). Before the listen fd is
deleted from the poll waiting fds, the "vhost-events" thread calls
vhost_user_server_new_connection(), and a new conn fd is added to the
fdset. The "vhost-events" thread then calls vhost_user_read_cb() and
accesses the invalid memory of vsocket while thread1 frees it.

E.g., the vhost-user port is created as client:
Thread1 calls rte_vhost_driver_unregister(). Before the vsocket is
deleted from the reconnect list, the "vhost_reconn" thread calls
vhost_user_add_connection() while trying to reconnect, and a new conn
fd is added to the fdset. The "vhost-events" thread then calls
vhost_user_read_cb() and accesses the invalid memory of vsocket while
thread1 frees it.
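As an illustration (not part of the patch), the ordering rule that the
fix enforces can be shown with a small, self-contained model. None of
this is DPDK code: struct fake_fdset_entry, fake_fdset_try_del() and
safe_unregister() are hypothetical stand-ins for the fdset entry,
fdset_try_del() and the teardown loop in rte_vhost_driver_unregister().

#include <pthread.h>
#include <sched.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_fdset_entry {
	pthread_mutex_t lock;
	bool cb_running;	/* a read/write callback is executing now */
	bool registered;	/* fd is still in the polled set */
};

/*
 * Like fdset_try_del(): refuse to delete while a callback is running,
 * instead of blocking, so the caller can drop its own locks and retry.
 */
static int
fake_fdset_try_del(struct fake_fdset_entry *e)
{
	pthread_mutex_lock(&e->lock);
	if (e->cb_running) {
		pthread_mutex_unlock(&e->lock);
		return -1;
	}
	e->registered = false;	/* no new callback can start after this */
	pthread_mutex_unlock(&e->lock);
	return 0;
}

/*
 * The safe ordering: remove the fd from the polled set first, and only
 * then free the memory a callback would dereference.
 */
static void
safe_unregister(struct fake_fdset_entry *e, void *conn_mem)
{
	while (fake_fdset_try_del(e) == -1)
		sched_yield();	/* plays the role of "goto again" */
	free(conn_mem);		/* no callback can reach conn_mem now */
}

int
main(void)
{
	struct fake_fdset_entry e = { PTHREAD_MUTEX_INITIALIZER, false, true };
	void *conn = malloc(64);

	safe_unregister(&e, conn);
	printf("fd removed from the polled set before conn memory was freed\n");
	return 0;
}

The reason for a try-del rather than a blocking delete is that the
unregister thread holds locks the callback may also need; failing fast
lets it release them and retry, avoiding deadlock as well as the
use-after-free.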
The fix is to move the fdset_try_del() calls ahead of freeing the
memory of conn, which avoids the race condition.

The core trace is:
Program terminated with signal 11, Segmentation fault.

Fixes: 52d874dc6705 ("vhost: fix crash on closing in client mode")

Signed-off-by: Gaoxiang Liu
Reviewed-by: Chenbo Xia

---
v2:
 * Fix coding style issues.
v3:
 * Add detailed log.
v4:
 * Add the reason for the crash when the vhost-user port is created as server.
v5:
 * Add detailed log for when the vhost-user port is created as client.
v6:
 * Add a 'path' check before deleting the listen fd.
 * Fix spelling issues.
v7:
 * Fix coding style issues.
---
 lib/vhost/socket.c | 107 ++++++++++++++++++++++-----------------------
 1 file changed, 53 insertions(+), 54 deletions(-)

diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index 5d0d728d5..d6f9414c4 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -1023,66 +1023,65 @@ rte_vhost_driver_unregister(const char *path)
 	for (i = 0; i < vhost_user.vsocket_cnt; i++) {
 		struct vhost_user_socket *vsocket = vhost_user.vsockets[i];
+		if (strcmp(vsocket->path, path))
+			continue;
 
-		if (!strcmp(vsocket->path, path)) {
-			pthread_mutex_lock(&vsocket->conn_mutex);
-			for (conn = TAILQ_FIRST(&vsocket->conn_list);
-			     conn != NULL;
-			     conn = next) {
-				next = TAILQ_NEXT(conn, next);
-
-				/*
-				 * If r/wcb is executing, release vsocket's
-				 * conn_mutex and vhost_user's mutex locks, and
-				 * try again since the r/wcb may use the
-				 * conn_mutex and mutex locks.
-				 */
-				if (fdset_try_del(&vhost_user.fdset,
-						conn->connfd) == -1) {
-					pthread_mutex_unlock(
-							&vsocket->conn_mutex);
-					pthread_mutex_unlock(&vhost_user.mutex);
-					goto again;
-				}
-
-				VHOST_LOG_CONFIG(INFO,
-					"free connfd = %d for device '%s'\n",
-					conn->connfd, path);
-				close(conn->connfd);
-				vhost_destroy_device(conn->vid);
-				TAILQ_REMOVE(&vsocket->conn_list, conn, next);
-				free(conn);
-			}
-			pthread_mutex_unlock(&vsocket->conn_mutex);
-
-			if (vsocket->is_server) {
-				/*
-				 * If r/wcb is executing, release vhost_user's
-				 * mutex lock, and try again since the r/wcb
-				 * may use the mutex lock.
-				 */
-				if (fdset_try_del(&vhost_user.fdset,
-						vsocket->socket_fd) == -1) {
-					pthread_mutex_unlock(&vhost_user.mutex);
-					goto again;
-				}
-
-				close(vsocket->socket_fd);
-				unlink(path);
-			} else if (vsocket->reconnect) {
-				vhost_user_remove_reconnect(vsocket);
+		if (vsocket->is_server) {
+			/*
+			 * If r/wcb is executing, release vhost_user's
+			 * mutex lock, and try again since the r/wcb
+			 * may use the mutex lock.
+			 */
+			if (fdset_try_del(&vhost_user.fdset, vsocket->socket_fd) == -1) {
+				pthread_mutex_unlock(&vhost_user.mutex);
+				goto again;
 			}
+		} else if (vsocket->reconnect) {
+			vhost_user_remove_reconnect(vsocket);
+		}
 
-			pthread_mutex_destroy(&vsocket->conn_mutex);
-			vhost_user_socket_mem_free(vsocket);
+		pthread_mutex_lock(&vsocket->conn_mutex);
+		for (conn = TAILQ_FIRST(&vsocket->conn_list);
+		     conn != NULL;
+		     conn = next) {
+			next = TAILQ_NEXT(conn, next);
 
-			count = --vhost_user.vsocket_cnt;
-			vhost_user.vsockets[i] = vhost_user.vsockets[count];
-			vhost_user.vsockets[count] = NULL;
-			pthread_mutex_unlock(&vhost_user.mutex);
+			/*
+			 * If r/wcb is executing, release vsocket's
+			 * conn_mutex and vhost_user's mutex locks, and
+			 * try again since the r/wcb may use the
+			 * conn_mutex and mutex locks.
+			 */
+			if (fdset_try_del(&vhost_user.fdset,
+					conn->connfd) == -1) {
+				pthread_mutex_unlock(&vsocket->conn_mutex);
+				pthread_mutex_unlock(&vhost_user.mutex);
+				goto again;
+			}
 
-			return 0;
+			VHOST_LOG_CONFIG(INFO,
+				"free connfd = %d for device '%s'\n",
+				conn->connfd, path);
+			close(conn->connfd);
+			vhost_destroy_device(conn->vid);
+			TAILQ_REMOVE(&vsocket->conn_list, conn, next);
+			free(conn);
+		}
+		pthread_mutex_unlock(&vsocket->conn_mutex);
+
+		if (vsocket->is_server) {
+			close(vsocket->socket_fd);
+			unlink(path);
 		}
+
+		pthread_mutex_destroy(&vsocket->conn_mutex);
+		vhost_user_socket_mem_free(vsocket);
+
+		count = --vhost_user.vsocket_cnt;
+		vhost_user.vsockets[i] = vhost_user.vsockets[count];
+		vhost_user.vsockets[count] = NULL;
+		pthread_mutex_unlock(&vhost_user.mutex);
+		return 0;
 	}
 	pthread_mutex_unlock(&vhost_user.mutex);
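For context, the application-side sequence that exercises this path
looks roughly as follows. This is an illustrative sketch, not part of
the patch: it assumes EAL is already initialized, the socket path is
made up, and error handling is reduced to the minimum. The calls
themselves (rte_vhost_driver_register(), rte_vhost_driver_start(),
rte_vhost_driver_unregister()) are the public API from rte_vhost.h.

#include <rte_vhost.h>

static int
vhost_port_lifecycle(void)
{
	/* illustrative socket path */
	const char *path = "/tmp/vhost-user0.sock";

	/*
	 * register + start spawn the "vhost-events" thread (and, in
	 * client mode, the reconnect thread) that used to race with
	 * unregister.
	 */
	if (rte_vhost_driver_register(path, RTE_VHOST_USER_CLIENT) < 0)
		return -1;
	if (rte_vhost_driver_start(path) < 0) {
		rte_vhost_driver_unregister(path);
		return -1;
	}

	/* ... port carries traffic; r/w callbacks fire on "vhost-events" ... */

	/*
	 * With this fix, unregister removes every conn fd (and the
	 * listen fd in server mode) from the fdset before freeing
	 * vsocket, retrying while a callback is still executing.
	 */
	return rte_vhost_driver_unregister(path);
}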