From patchwork Fri Mar 11 16:35:12 2022
X-Patchwork-Submitter: "Wang, YuanX" <yuanx.wang@intel.com>
X-Patchwork-Id: 108681
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Yuan Wang <yuanx.wang@intel.com>
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, weix.ling@intel.com,
 yuanx.wang@intel.com
Subject: [PATCH] net/vhost: fix access to freed memory
Date: Sat, 12 Mar 2022 00:35:12 +0800
Message-Id: <20220311163512.76501-1-yuanx.wang@intel.com>

This patch fixes a heap-use-after-free reported by ASan.

It is possible for rte_vhost_dequeue_burst() to access a vq that has
already been freed, when numa_realloc() is called while the device is
running. The control plane takes vq->access_lock to protect the vq
from the data plane, but the lock stops working the moment the vq is
freed, allowing rte_vhost_dequeue_burst() to access the vq's fields
and trigger a heap-use-after-free error.

In the multi-queue case, the vhost PMD can also access queues that are
not yet ready as soon as the first queue becomes ready. This makes no
sense, and it lets numa_realloc() and rte_vhost_dequeue_burst() access
the same vq at the same time. By controlling vq->allow_queuing, we can
make the PMD access only the queues that are ready.
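For context, the driver's burst functions already bracket every vhost
access with the allow_queuing/while_queuing handshake this patch builds
on. Below is an abridged sketch of the receive path (statistics and
offload handling elided, so details may differ slightly from the exact
tree this applies to); it shows why clearing allow_queuing and then
draining while_queuing is enough to keep the data plane out of a vq:

static uint16_t
eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
	struct vhost_queue *r = q;
	uint16_t nb_rx = 0;

	/* First gate: do not touch the vhost device at all. */
	if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
		return 0;

	/* Announce that a burst is in flight... */
	rte_atomic32_set(&r->while_queuing, 1);

	/* ...then re-check, since the control plane may have cleared
	 * allow_queuing between the first read and the store above. */
	if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
		goto out;

	nb_rx = rte_vhost_dequeue_burst(r->vid, r->virtqueue_id,
					r->mb_pool, bufs, nb_bufs);
	/* ...statistics handling elided... */
out:
	rte_atomic32_set(&r->while_queuing, 0);
	return nb_rx;
}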
Fixes: 1ce3c7fe149 ("net/vhost: emulate device start/stop behavior")

Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Tested-by: Wei Ling <weix.ling@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/net/vhost/rte_eth_vhost.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 070f0e6dfd..8a6595504a 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -720,6 +720,7 @@ update_queuing_status(struct rte_eth_dev *dev)
 {
 	struct pmd_internal *internal = dev->data->dev_private;
 	struct vhost_queue *vq;
+	struct rte_vhost_vring_state *state;
 	unsigned int i;
 	int allow_queuing = 1;
 
@@ -730,12 +731,17 @@ update_queuing_status(struct rte_eth_dev *dev)
 	    rte_atomic32_read(&internal->dev_attached) == 0)
 		allow_queuing = 0;
 
+	state = vring_states[dev->data->port_id];
+
 	/* Wait until rx/tx_pkt_burst stops accessing vhost device */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		vq = dev->data->rx_queues[i];
 		if (vq == NULL)
 			continue;
-		rte_atomic32_set(&vq->allow_queuing, allow_queuing);
+		if (allow_queuing && state->cur[vq->virtqueue_id])
+			rte_atomic32_set(&vq->allow_queuing, 1);
+		else
+			rte_atomic32_set(&vq->allow_queuing, 0);
 		while (rte_atomic32_read(&vq->while_queuing))
 			rte_pause();
 	}
@@ -744,7 +750,10 @@ update_queuing_status(struct rte_eth_dev *dev)
 		vq = dev->data->tx_queues[i];
 		if (vq == NULL)
 			continue;
-		rte_atomic32_set(&vq->allow_queuing, allow_queuing);
+		if (allow_queuing && state->cur[vq->virtqueue_id])
+			rte_atomic32_set(&vq->allow_queuing, 1);
+		else
+			rte_atomic32_set(&vq->allow_queuing, 0);
 		while (rte_atomic32_read(&vq->while_queuing))
 			rte_pause();
 	}
@@ -967,6 +976,8 @@ vring_state_changed(int vid, uint16_t vring, int enable)
 	state->max_vring = RTE_MAX(vring, state->max_vring);
 	rte_spinlock_unlock(&state->lock);
 
+	update_queuing_status(eth_dev);
+
 	VHOST_LOG(INFO, "vring%u is %s\n",
 			vring, enable ? "enabled" : "disabled");
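For reviewers unfamiliar with the pattern, here is a minimal,
self-contained model of the quiescing logic (hypothetical names; C11
atomics stand in for rte_atomic32, and a bare spin stands in for
rte_pause()). It mirrors how update_queuing_status() waits out
in-flight bursts before a vq may be freed or reallocated by
numa_realloc():

#include <stdatomic.h>

/* Hypothetical stand-ins for the driver's per-queue flags. */
static atomic_int allow_queuing = 1;
static atomic_int while_queuing;

/* Data plane: runs on a polling lcore, like eth_vhost_rx(). */
static int
poll_queue(void)
{
	if (atomic_load(&allow_queuing) == 0)
		return 0;
	atomic_store(&while_queuing, 1);
	/* Re-check after announcing the in-flight burst. */
	if (atomic_load(&allow_queuing) == 0) {
		atomic_store(&while_queuing, 0);
		return 0;
	}
	/* ...the virtqueue may be touched safely here... */
	atomic_store(&while_queuing, 0);
	return 1;
}

/* Control plane: disable the queue, then wait out in-flight bursts,
 * mirroring update_queuing_status(). */
static void
quiesce_queue(void)
{
	atomic_store(&allow_queuing, 0);
	while (atomic_load(&while_queuing))
		;	/* rte_pause() in the real driver */
	/* Only now is it safe to free or realloc the virtqueue. */
}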