diff mbox series

vhost: fix crash on port deletion

Message ID 20210813142248.88-1-gaoxiangliu0@163.com (mailing list archive)
State Superseded, archived
Delegated to: Maxime Coquelin
Series vhost: fix crash on port deletion

Checks

Context Check Description
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/intel-Testing success Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-aarch64-unit-testing success Testing PASS
ci/Intel-compilation success Compilation OK
ci/github-robot: build success github build: passed
ci/checkpatch warning coding style issues

Commit Message

Gaoxiang Liu Aug. 13, 2021, 2:22 p.m. UTC
rte_vhost_driver_unregister() and vhost_user_read_cb()
can be called at the same time by two threads.
E.g., thread1 calls rte_vhost_driver_unregister() and frees the
memory of "conn".
Because the socket fd has not yet been deleted from the poll waiting
fds, the "vhost-events" thread calls fdset_event_dispatch(), which
calls vhost_user_read_cb() and accesses the freed memory of "conn".

The fix is to move the "fdset_try_del" before the memory of "conn"
is freed, avoiding the race condition.

The core trace is:
Program terminated with signal 11, Segmentation fault.

Fixes: 52d874dc6705 ("vhost: fix crash on closing in client mode")

v2:
fix coding style issues

v3:
add detailed log

Signed-off-by: Gaoxiang Liu <liugaoxiang@huawei.com>
---
 lib/vhost/socket.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

Comments

Xia, Chenbo Aug. 16, 2021, 6:44 a.m. UTC | #1
Hi Gaoxiang,

> -----Original Message-----
> From: Gaoxiang Liu <gaoxiangliu0@163.com>
> Sent: Friday, August 13, 2021 10:23 PM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; liugaoxiang@huawei.com; Gaoxiang Liu <gaoxiangliu0@163.com>
> Subject: [PATCH] vhost: fix crash on port deletion
> 
> The rte_vhost_driver_unregister() and vhost_user_read_cb()
> can be called at the same time by 2 threads.
> Eg thread1 calls rte_vhost_driver_unregister() and frees memory of
> "conn".
> Because socket fd has not been deleted from poll waiting fds,
> "vhost-events" thread calls fdset_event_dispatch, then calls
> vhost_user_read_cb(),
> and accesses invalid memory of "conn".
> 
> The fix is to move the "fdset_try_del" in front of free memory of conn,
> then avoid the race condition.

I'm a bit confused here.

First, are you talking about vhost as server or client? I guess it's
server, but you'd better clarify whether it's a bug in both modes or
only one.

Assuming we are talking about vhost as server:
we both know there are two types of socket fds. One is for incoming
new connections; let's call it the listen fd. The other, one per
connection, is for sending/receiving messages; let's call it the
conn fd. vhost_user_read_cb() is called when a conn fd gets some
messages. The original code first deletes the conn fd from the fdset
so that vhost_user_read_cb() will not be called, and then frees
'conn'. This seems correct to me.

You are moving the code that deletes the listen fd here. That fd
triggers the callback vhost_user_server_new_connection(), not
vhost_user_read_cb(). So I don't understand your point. Am I missing
something here?

BTW, note two things:
1. Please add the correct prefix when running 'git send-email'. E.g.,
for this patch, you should add --subject-prefix="PATCH v3".
2. The version change description should come after the Signed-off-by
tag, for example:

Signed-off-by: Gaoxiang Liu <liugaoxiang@huawei.com>
---
V2:
XXX
V3:
XXX
---
 lib/vhost/socket.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

Thanks,
Chenbo

> 
> The core trace is:
> Program terminated with signal 11, Segmentation fault.
> 
> Fixes: 52d874dc6705 ("vhost: fix crash on closing in client mode")
> 
> v2:
> fix coding style issues
> 
> v3:
> add detailed log
> 
> Signed-off-by: Gaoxiang Liu <liugaoxiang@huawei.com>
> ---
>  lib/vhost/socket.c | 27 ++++++++++++++-------------
>  1 file changed, 14 insertions(+), 13 deletions(-)
> 
> diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
> index 5d0d728d5..2eb8fcadd 100644
> --- a/lib/vhost/socket.c
> +++ b/lib/vhost/socket.c
> @@ -1024,6 +1024,20 @@ rte_vhost_driver_unregister(const char *path)
>  	for (i = 0; i < vhost_user.vsocket_cnt; i++) {
>  		struct vhost_user_socket *vsocket = vhost_user.vsockets[i];
> 
> +		if (vsocket->is_server) {
> +			/*
> +			 * If r/wcb is executing, release vhost_user's
> +			 * mutex lock, and try again since the r/wcb
> +			 * may use the mutex lock.
> +			 */
> +			if (fdset_try_del(&vhost_user.fdset, vsocket->socket_fd) ==
> -1) {
> +				pthread_mutex_unlock(&vhost_user.mutex);
> +				goto again;
> +			}
> +		} else if (vsocket->reconnect) {
> +			vhost_user_remove_reconnect(vsocket);
> +		}
> +
>  		if (!strcmp(vsocket->path, path)) {
>  			pthread_mutex_lock(&vsocket->conn_mutex);
>  			for (conn = TAILQ_FIRST(&vsocket->conn_list);
> @@ -1056,21 +1070,8 @@ rte_vhost_driver_unregister(const char *path)
>  			pthread_mutex_unlock(&vsocket->conn_mutex);
> 
>  			if (vsocket->is_server) {
> -				/*
> -				 * If r/wcb is executing, release vhost_user's
> -				 * mutex lock, and try again since the r/wcb
> -				 * may use the mutex lock.
> -				 */
> -				if (fdset_try_del(&vhost_user.fdset,
> -						vsocket->socket_fd) == -1) {
> -					pthread_mutex_unlock(&vhost_user.mutex);
> -					goto again;
> -				}
> -
>  				close(vsocket->socket_fd);
>  				unlink(path);
> -			} else if (vsocket->reconnect) {
> -				vhost_user_remove_reconnect(vsocket);
>  			}
> 
>  			pthread_mutex_destroy(&vsocket->conn_mutex);
> --
> 2.32.0
Gaoxiang Liu Aug. 20, 2021, 3:53 p.m. UTC | #2
Hi Chenbo,

I added a more detailed explanation in [PATCH v5], as below:
rte_vhost_driver_unregister() and vhost_user_read_cb()
can be called at the same time by two threads.
When the memory of vsocket is freed in rte_vhost_driver_unregister(),
that freed memory is accessed in vhost_user_read_cb().
It is a bug in both modes, vhost as server and vhost as client.

E.g., the vhost-user port is created as server:
Thread1 calls rte_vhost_driver_unregister().
Before the listen fd is deleted from the poll waiting fds,
the "vhost-events" thread calls vhost_user_server_new_connection(),
and a new conn fd is added to the fdset.
The "vhost-events" thread then calls vhost_user_read_cb() and
accesses the freed memory of vsocket while thread1 frees it.

E.g., the vhost-user port is created as client:
Thread1 calls rte_vhost_driver_unregister().
Before the vsocket of reconn is deleted from the reconn list,
the "vhost_reconn" thread calls vhost_user_add_connection()
when trying to reconnect, and a new conn fd is added to the fdset.
The "vhost-events" thread then calls vhost_user_read_cb() and
accesses the freed memory of vsocket while thread1 frees it.



On 08/16/2021 14:44, Xia, Chenbo wrote:
Hi Gaoxiang,

> -----Original Message-----
> From: Gaoxiang Liu <gaoxiangliu0@163.com>
> Sent: Friday, August 13, 2021 10:23 PM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; liugaoxiang@huawei.com; Gaoxiang Liu <gaoxiangliu0@163.com>
> Subject: [PATCH] vhost: fix crash on port deletion
>
> The rte_vhost_driver_unregister() and vhost_user_read_cb()
> can be called at the same time by 2 threads.
> Eg thread1 calls rte_vhost_driver_unregister() and frees memory of
> "conn".
> Because socket fd has not been deleted from poll waiting fds,
> "vhost-events" thread calls fdset_event_dispatch, then calls
> vhost_user_read_cb(),
> and accesses invalid memory of "conn".
>
> The fix is to move the "fdset_try_del" in front of free memory of conn,
> then avoid the race condition.

I'm a bit confused here.

First, are you talking about vhost as server or client? I guess it's
server, but you'd better clarify whether it's a bug in both modes or
only one.

Assuming we are talking about vhost as server:
we both know there are two types of socket fds. One is for incoming
new connections; let's call it the listen fd. The other, one per
connection, is for sending/receiving messages; let's call it the
conn fd. vhost_user_read_cb() is called when a conn fd gets some
messages. The original code first deletes the conn fd from the fdset
so that vhost_user_read_cb() will not be called, and then frees
'conn'. This seems correct to me.

You are moving the code that deletes the listen fd here. That fd
triggers the callback vhost_user_server_new_connection(), not
vhost_user_read_cb(). So I don't understand your point. Am I missing
something here?

BTW, note two things:
1. Please add the correct prefix when running 'git send-email'. E.g.,
for this patch, you should add --subject-prefix="PATCH v3".
2. The version change description should come after the Signed-off-by
tag, for example:

Signed-off-by: Gaoxiang Liu <liugaoxiang@huawei.com>
---
V2:
XXX
V3:
XXX
---
lib/vhost/socket.c | 27 ++++++++++++++-------------
1 file changed, 14 insertions(+), 13 deletions(-)

Thanks,
Chenbo

>
> The core trace is:
> Program terminated with signal 11, Segmentation fault.
>
> Fixes: 52d874dc6705 ("vhost: fix crash on closing in client mode")
>
> v2:
> fix coding style issues
>
> v3:
> add detailed log
>
> Signed-off-by: Gaoxiang Liu <liugaoxiang@huawei.com>
> ---
>  lib/vhost/socket.c | 27 ++++++++++++++-------------
>  1 file changed, 14 insertions(+), 13 deletions(-)
>
> diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
> index 5d0d728d5..2eb8fcadd 100644
> --- a/lib/vhost/socket.c
> +++ b/lib/vhost/socket.c
> @@ -1024,6 +1024,20 @@ rte_vhost_driver_unregister(const char *path)
>       for (i = 0; i < vhost_user.vsocket_cnt; i++) {
>            struct vhost_user_socket *vsocket = vhost_user.vsockets[i];
>
> +          if (vsocket->is_server) {
> +               /*
> +                * If r/wcb is executing, release vhost_user's
> +                * mutex lock, and try again since the r/wcb
> +                * may use the mutex lock.
> +                */
> +               if (fdset_try_del(&vhost_user.fdset, vsocket->socket_fd) ==
> -1) {
> +                    pthread_mutex_unlock(&vhost_user.mutex);
> +                    goto again;
> +               }
> +          } else if (vsocket->reconnect) {
> +               vhost_user_remove_reconnect(vsocket);
> +          }
> +
>            if (!strcmp(vsocket->path, path)) {
>                 pthread_mutex_lock(&vsocket->conn_mutex);
>                 for (conn = TAILQ_FIRST(&vsocket->conn_list);
> @@ -1056,21 +1070,8 @@ rte_vhost_driver_unregister(const char *path)
>                 pthread_mutex_unlock(&vsocket->conn_mutex);
>
>                 if (vsocket->is_server) {
> -                    /*
> -                     * If r/wcb is executing, release vhost_user's
> -                     * mutex lock, and try again since the r/wcb
> -                     * may use the mutex lock.
> -                     */
> -                    if (fdset_try_del(&vhost_user.fdset,
> -                              vsocket->socket_fd) == -1) {
> -                         pthread_mutex_unlock(&vhost_user.mutex);
> -                         goto again;
> -                    }
> -
>                      close(vsocket->socket_fd);
>                      unlink(path);
> -               } else if (vsocket->reconnect) {
> -                    vhost_user_remove_reconnect(vsocket);
>                 }
>
>                 pthread_mutex_destroy(&vsocket->conn_mutex);
> --
> 2.32.0

Patch

diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index 5d0d728d5..2eb8fcadd 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -1024,6 +1024,20 @@  rte_vhost_driver_unregister(const char *path)
 	for (i = 0; i < vhost_user.vsocket_cnt; i++) {
 		struct vhost_user_socket *vsocket = vhost_user.vsockets[i];
 
+		if (vsocket->is_server) {
+			/*
+			 * If r/wcb is executing, release vhost_user's
+			 * mutex lock, and try again since the r/wcb
+			 * may use the mutex lock.
+			 */
+			if (fdset_try_del(&vhost_user.fdset, vsocket->socket_fd) == -1) {
+				pthread_mutex_unlock(&vhost_user.mutex);
+				goto again;
+			}
+		} else if (vsocket->reconnect) {
+			vhost_user_remove_reconnect(vsocket);
+		}
+
 		if (!strcmp(vsocket->path, path)) {
 			pthread_mutex_lock(&vsocket->conn_mutex);
 			for (conn = TAILQ_FIRST(&vsocket->conn_list);
@@ -1056,21 +1070,8 @@  rte_vhost_driver_unregister(const char *path)
 			pthread_mutex_unlock(&vsocket->conn_mutex);
 
 			if (vsocket->is_server) {
-				/*
-				 * If r/wcb is executing, release vhost_user's
-				 * mutex lock, and try again since the r/wcb
-				 * may use the mutex lock.
-				 */
-				if (fdset_try_del(&vhost_user.fdset,
-						vsocket->socket_fd) == -1) {
-					pthread_mutex_unlock(&vhost_user.mutex);
-					goto again;
-				}
-
 				close(vsocket->socket_fd);
 				unlink(path);
-			} else if (vsocket->reconnect) {
-				vhost_user_remove_reconnect(vsocket);
 			}
 
 			pthread_mutex_destroy(&vsocket->conn_mutex);