From patchwork Tue Dec 5 09:45:31 2023
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 134868
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: David Marchand
To: dev@dpdk.org
Cc: stable@dpdk.org, Eelco Chaudron, Maxime Coquelin, Chenbo Xia
Subject: [PATCH v2 1/5] vhost: fix virtqueue access check in datapath
Date: Tue, 5 Dec 2023 10:45:31 +0100
Message-ID: <20231205094536.2816720-1-david.marchand@redhat.com>
In-Reply-To: <20231023095520.2864868-1-david.marchand@redhat.com>
References: <20231023095520.2864868-1-david.marchand@redhat.com>
List-Id: DPDK patches and discussions

Now that a r/w lock is used, the access_ok field should only be updated
under a write lock.

Since the datapath code only takes a read lock on the virtqueue to check
access_ok, this lock must be released and a write lock taken before
calling vring_translate().
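For readers of the archive, a minimal sketch (not part of the patch) of the
calling pattern this change introduces: rte_rwlock_t has no read-to-write
upgrade primitive, so when access_ok is found false under the read lock, the
read-side locks are dropped, the translation is redone under the write lock,
and the current burst is abandoned. The function name rx_burst_sketch is
illustrative only; the other identifiers are the existing lib/vhost internals
touched by the patch.

static uint32_t
rx_burst_sketch(struct virtio_net *dev, struct vhost_virtqueue *vq, uint32_t count)
{
	rte_rwlock_read_lock(&vq->access_lock);
	vhost_user_iotlb_rd_lock(vq);

	if (unlikely(!vq->access_ok)) {
		/* Release the read-side locks first: the write lock cannot be
		 * taken while the read lock is still held. */
		vhost_user_iotlb_rd_unlock(vq);
		rte_rwlock_read_unlock(&vq->access_lock);

		/* access_ok is only updated under the write lock. */
		virtio_dev_vring_translate(dev, vq);
		return 0; /* give up this burst; the next call retries */
	}

	/* ... normal enqueue/dequeue work, still under the read lock ... */

	vhost_user_iotlb_rd_unlock(vq);
	rte_rwlock_read_unlock(&vq->access_lock);
	return count;
}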
Fixes: 03f77d66d966 ("vhost: change virtqueue access lock to a read/write one")
Cc: stable@dpdk.org

Signed-off-by: David Marchand
Acked-by: Eelco Chaudron
Reviewed-by: Maxime Coquelin
---
 lib/vhost/virtio_net.c | 60 +++++++++++++++++++++++++++++++-----------
 1 file changed, 44 insertions(+), 16 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 8af20f1487..d00f4b03aa 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -1696,6 +1696,17 @@ virtio_dev_rx_packed(struct virtio_net *dev,
 	return pkt_idx;
 }
 
+static void
+virtio_dev_vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
+{
+	rte_rwlock_write_lock(&vq->access_lock);
+	vhost_user_iotlb_rd_lock(vq);
+	if (!vq->access_ok)
+		vring_translate(dev, vq);
+	vhost_user_iotlb_rd_unlock(vq);
+	rte_rwlock_write_unlock(&vq->access_lock);
+}
+
 static __rte_always_inline uint32_t
 virtio_dev_rx(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	struct rte_mbuf **pkts, uint32_t count)
@@ -1710,9 +1721,13 @@ virtio_dev_rx(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
 	vhost_user_iotlb_rd_lock(vq);
 
-	if (unlikely(!vq->access_ok))
-		if (unlikely(vring_translate(dev, vq) < 0))
-			goto out;
+	if (unlikely(!vq->access_ok)) {
+		vhost_user_iotlb_rd_unlock(vq);
+		rte_rwlock_read_unlock(&vq->access_lock);
+
+		virtio_dev_vring_translate(dev, vq);
+		goto out_no_unlock;
+	}
 
 	count = RTE_MIN((uint32_t)MAX_PKT_BURST, count);
 	if (count == 0)
@@ -1731,6 +1746,7 @@ virtio_dev_rx(struct virtio_net *dev, struct vhost_virtqueue *vq,
 out_access_unlock:
 	rte_rwlock_read_unlock(&vq->access_lock);
 
+out_no_unlock:
 	return nb_tx;
 }
 
@@ -2528,9 +2544,13 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
 	vhost_user_iotlb_rd_lock(vq);
 
-	if (unlikely(!vq->access_ok))
-		if (unlikely(vring_translate(dev, vq) < 0))
-			goto out;
+	if (unlikely(!vq->access_ok)) {
+		vhost_user_iotlb_rd_unlock(vq);
+		rte_rwlock_read_unlock(&vq->access_lock);
+
+		virtio_dev_vring_translate(dev, vq);
+		goto out_no_unlock;
+	}
 
 	count = RTE_MIN((uint32_t)MAX_PKT_BURST, count);
 	if (count == 0)
@@ -2551,6 +2571,7 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, struct vhost_virtqueue *vq,
 out_access_unlock:
 	rte_rwlock_write_unlock(&vq->access_lock);
 
+out_no_unlock:
 	return nb_tx;
 }
 
@@ -3581,11 +3602,13 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 
 	vhost_user_iotlb_rd_lock(vq);
 
-	if (unlikely(!vq->access_ok))
-		if (unlikely(vring_translate(dev, vq) < 0)) {
-			count = 0;
-			goto out;
-		}
+	if (unlikely(!vq->access_ok)) {
+		vhost_user_iotlb_rd_unlock(vq);
+		rte_rwlock_read_unlock(&vq->access_lock);
+
+		virtio_dev_vring_translate(dev, vq);
+		goto out_no_unlock;
+	}
 
 	/*
 	 * Construct a RARP broadcast packet, and inject it to the "pkts"
@@ -3646,6 +3669,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	if (unlikely(rarp_mbuf != NULL))
 		count += 1;
 
+out_no_unlock:
 	return count;
 }
 
@@ -4196,11 +4220,14 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 
 	vhost_user_iotlb_rd_lock(vq);
 
-	if (unlikely(vq->access_ok == 0))
-		if (unlikely(vring_translate(dev, vq) < 0)) {
-			count = 0;
-			goto out;
-		}
+	if (unlikely(vq->access_ok == 0)) {
+		vhost_user_iotlb_rd_unlock(vq);
+		rte_rwlock_read_unlock(&vq->access_lock);
+
+		virtio_dev_vring_translate(dev, vq);
+		count = 0;
+		goto out_no_unlock;
+	}
 
 	/*
 	 * Construct a RARP broadcast packet, and inject it to the "pkts"
@@ -4266,5 +4293,6 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	if (unlikely(rarp_mbuf != NULL))
 		count += 1;
 
+out_no_unlock:
 	return count;
 }