From patchwork Thu Aug 31 14:44:50 2023
X-Patchwork-Submitter: Maxime Coquelin <maxime.coquelin@redhat.com>
X-Patchwork-Id: 130998
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com, echaudro@redhat.com
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Subject: [PATCH v2] vhost: add IRQ suppression
Date: Thu, 31 Aug 2023 16:44:50 +0200
Message-ID: <20230831144450.3829729-1-maxime.coquelin@redhat.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

Guest notifications offloading, introduced in v23.07, aims at offloading syscalls out of the datapath.

This patch optimizes the offloading by not offloading the guest notification for a given virtqueue if one is already being offloaded by the application.

With a single VDUSE device, we can already see a few notifications being suppressed when doing throughput testing with iperf3. We can expect many more to be suppressed when the offloading thread is under pressure.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/vhost/vhost.c |  4 ++++
 lib/vhost/vhost.h | 27 +++++++++++++++++++++------
 2 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index eb6309b681..7794f29c18 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -48,6 +48,8 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
 		stats.guest_notifications_offloaded)},
 	{"guest_notifications_error", offsetof(struct vhost_virtqueue,
 		stats.guest_notifications_error)},
+	{"guest_notifications_suppressed", offsetof(struct vhost_virtqueue,
+		stats.guest_notifications_suppressed)},
 	{"iotlb_hits", offsetof(struct vhost_virtqueue, stats.iotlb_hits)},
 	{"iotlb_misses", offsetof(struct vhost_virtqueue, stats.iotlb_misses)},
 	{"inflight_submitted", offsetof(struct vhost_virtqueue, stats.inflight_submitted)},
@@ -1516,6 +1518,8 @@ rte_vhost_notify_guest(int vid, uint16_t queue_id)
 
 	rte_rwlock_read_lock(&vq->access_lock);
 
+	__atomic_store_n(&vq->irq_pending, false, __ATOMIC_RELEASE);
+
 	if (dev->backend_ops->inject_irq(dev, vq)) {
 		if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
 			__atomic_fetch_add(&vq->stats.guest_notifications_error,
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 9723429b1c..3e78379e48 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -156,6 +156,7 @@ struct virtqueue_stats {
 	uint64_t iotlb_misses;
 	uint64_t inflight_submitted;
 	uint64_t inflight_completed;
+	uint64_t guest_notifications_suppressed;
 	/* Counters below are atomic, and should be incremented as such. */
 	uint64_t guest_notifications;
 	uint64_t guest_notifications_offloaded;
@@ -346,6 +347,8 @@ struct vhost_virtqueue {
 	struct vhost_vring_addr ring_addrs;
 
 	struct virtqueue_stats stats;
+
+	bool irq_pending;
 } __rte_cache_aligned;
 
 /* Virtio device status as per Virtio specification */
@@ -908,12 +911,24 @@ vhost_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old)
 static __rte_always_inline void
 vhost_vring_inject_irq(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
-	if (dev->notify_ops->guest_notify &&
-			dev->notify_ops->guest_notify(dev->vid, vq->index)) {
-		if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
-			__atomic_fetch_add(&vq->stats.guest_notifications_offloaded,
-				1, __ATOMIC_RELAXED);
-		return;
+	bool expected = false;
+
+	if (dev->notify_ops->guest_notify) {
+		if (__atomic_compare_exchange_n(&vq->irq_pending, &expected, true, 0,
+				__ATOMIC_RELEASE, __ATOMIC_RELAXED)) {
+			if (dev->notify_ops->guest_notify(dev->vid, vq->index)) {
+				if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+					__atomic_fetch_add(&vq->stats.guest_notifications_offloaded,
+						1, __ATOMIC_RELAXED);
+				return;
+			}
+
+			/* Offloading failed, fallback to direct IRQ injection */
+			__atomic_store_n(&vq->irq_pending, 0, __ATOMIC_RELEASE);
+		} else {
+			vq->stats.guest_notifications_suppressed++;
+			return;
+		}
 	}
 
 	if (dev->backend_ops->inject_irq(dev, vq)) {
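
For context (not part of the patch), below is a minimal sketch of the application side this optimization targets: a guest_notify callback that takes ownership of the kick and an offloading thread that performs the actual notification via rte_vhost_notify_guest(). It assumes the return convention visible in the diff above (non-zero from guest_notify means the notification was offloaded, zero makes vhost fall back to direct IRQ injection); the ring, struct pending_kick and notif_worker() are hypothetical names, not part of the vhost API.

/*
 * Minimal sketch of an application-side notification offloading thread.
 * Hypothetical names: notif_ring, struct pending_kick, notif_worker().
 */
#include <stdint.h>
#include <stdlib.h>

#include <rte_ring.h>
#include <rte_vhost.h>

struct pending_kick {
	int vid;
	uint16_t queue_id;
};

/* Filled at init, e.g. rte_ring_create("notif", 1024, SOCKET_ID_ANY, 0) */
static struct rte_ring *notif_ring;

/* Datapath side: defer the (syscall-heavy) guest kick to another thread. */
static int
app_guest_notify(int vid, uint16_t queue_id)
{
	struct pending_kick *kick = malloc(sizeof(*kick));

	if (kick == NULL)
		return 0; /* vhost will inject the IRQ directly */

	kick->vid = vid;
	kick->queue_id = queue_id;

	if (rte_ring_enqueue(notif_ring, kick) != 0) {
		free(kick);
		return 0;
	}

	return 1; /* notification offloaded */
}

/* Offloading thread: performs the actual guest notifications. */
static int
notif_worker(void *arg)
{
	struct pending_kick *kick;

	(void)arg;

	for (;;) {
		if (rte_ring_dequeue(notif_ring, (void **)&kick) != 0)
			continue;
		rte_vhost_notify_guest(kick->vid, kick->queue_id);
		free(kick);
	}

	return 0;
}

static const struct rte_vhost_device_ops app_ops = {
	.guest_notify = app_guest_notify, /* callback introduced in v23.07 */
	/* .new_device, .destroy_device, ... omitted */
};

With such a setup (ops registered through rte_vhost_driver_callback_register()), the suppression added by this patch means that while a kick for a virtqueue is still pending in the application, further datapath notifications for that virtqueue only increment guest_notifications_suppressed instead of queueing duplicate work; rte_vhost_notify_guest() clears irq_pending before injecting the interrupt, so later kicks can be offloaded again.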