From patchwork Mon May 23 16:13:44 2022
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 111602
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Yuan Wang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, xuan.ding@intel.com, xingguang.he@intel.com,
 sunil.pai.g@intel.com, yuanx.wang@intel.com
Subject: [PATCH v3 1/2] vhost: support clearing in-flight packets for async dequeue
Date: Tue, 24 May 2022 00:13:44 +0800
Message-Id: <20220523161345.289549-2-yuanx.wang@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220523161345.289549-1-yuanx.wang@intel.com>
References: <20220413182742.860659-1-yuanx.wang@intel.com>
 <20220523161345.289549-1-yuanx.wang@intel.com>

rte_vhost_clear_queue_thread_unsafe() supports clearing in-flight packets
for async enqueue only. Now that async dequeue is supported, this API
should cover async dequeue as well.

This patch also adds a thread-safe version of the API; the difference
between the two is that the thread-safe version takes the virtqueue
access lock before clearing.

These APIs may be used to clean up packets in the async channel in order
to prevent packet loss when the device state changes or when the device
is destroyed.
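As an illustration of the intended use (not part of the patch): an application
can combine the new thread-safe API with rte_vhost_async_get_inflight() to
drain a queue before unregistering its async channel. The helper name, the
512-entry burst size and the dma_id/vchan_id parameters below are assumptions
made only for this sketch.

    #include <rte_common.h>
    #include <rte_mbuf.h>
    #include <rte_vhost_async.h>

    /* Drain one virtqueue's in-flight async packets with the new
     * thread-safe API, then drop the async channel.
     */
    static void
    drain_and_unregister(int vid, uint16_t queue_id,
            int16_t dma_id, uint16_t vchan_id)
    {
        struct rte_mbuf *pkts[512];
        uint16_t n;

        /* rte_vhost_clear_queue() only trylocks the virtqueue and returns 0
         * when it is busy, so poll the in-flight counter until it drains.
         */
        while (rte_vhost_async_get_inflight(vid, queue_id) > 0) {
            n = rte_vhost_clear_queue(vid, queue_id, pkts,
                    RTE_DIM(pkts), dma_id, vchan_id);
            rte_pktmbuf_free_bulk(pkts, n);
        }

        rte_vhost_async_channel_unregister(vid, queue_id);
    }
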
Signed-off-by: Yuan Wang
---
 doc/guides/prog_guide/vhost_lib.rst    |  8 ++-
 doc/guides/rel_notes/release_22_07.rst |  5 ++
 lib/vhost/rte_vhost_async.h            | 25 ++++++++
 lib/vhost/version.map                  |  1 +
 lib/vhost/virtio_net.c                 | 82 +++++++++++++++++++++++++-
 5 files changed, 118 insertions(+), 3 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index 680da504c8..a789f0c26f 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -288,7 +288,13 @@ The following is an overview of some key Vhost API functions:
 
 * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count, dma_id, vchan_id)``
 
-  Clear inflight packets which are submitted to DMA engine in vhost async data
+  Clear in-flight packets which are submitted to the async channel in the
+  vhost async data path without performing any locking. Completed packets
+  are returned to applications through ``pkts``.
+
+* ``rte_vhost_clear_queue(vid, queue_id, **pkts, count, dma_id, vchan_id)``
+
+  Clear in-flight packets which are submitted to the async channel in the vhost async data
   path. Completed packets are returned to applications through ``pkts``.
 
 * ``rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id, struct rte_vhost_stat_name *names, unsigned int size)``

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 70b48d0fec..4c8b0c1b21 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -92,6 +92,11 @@ New Features
   Added vhost async dequeue API which can leverage DMA devices to
   accelerate receiving pkts from guest.
 
+* **Added thread-safe version of in-flight packet clear API in vhost library.**
+
+  Added an API which can clear the in-flight packets submitted to
+  the async channel in a thread-safe manner in the vhost async data path.
+
 Removed Items
 -------------

diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index a1e7f674ed..1db2a10124 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -183,6 +183,31 @@ uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 	struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
 	uint16_t vchan_id);
 
+/**
+ * This function checks async completion status and clears packets for
+ * a specific vhost device queue. Packets which are in flight will be
+ * returned in an array.
+ *
+ * @param vid
+ *  ID of vhost device to clear data
+ * @param queue_id
+ *  Queue id to clear data
+ * @param pkts
+ *  Blank array to get the returned packet pointers
+ * @param count
+ *  Size of the packet array
+ * @param dma_id
+ *  The identifier of the DMA device
+ * @param vchan_id
+ *  The identifier of the virtual DMA channel
+ * @return
+ *  Number of packets returned
+ */
+__rte_experimental
+uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id,
+	struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
+	uint16_t vchan_id);
+
 /**
  * The DMA vChannels used in asynchronous data path must be configured
  * first. So this function needs to be called before enabling DMA

diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index bc75d4d724..a1ed3a1205 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -94,6 +94,7 @@ EXPERIMENTAL {
 	rte_vhost_vring_stats_get;
 	rte_vhost_vring_stats_reset;
 	rte_vhost_async_try_dequeue_burst;
+	rte_vhost_clear_queue;
 };
 
 INTERNAL {

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 68a26eb17d..a90ae3cb96 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -26,6 +26,11 @@
 
 #define MAX_BATCH_LEN 256
 
+static __rte_always_inline uint16_t
+async_poll_dequeue_completed_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
+		struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
+		uint16_t vchan_id, bool legacy_ol_flags);
+
 /* DMA device copy operation tracking array. */
 struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
 
@@ -2155,7 +2160,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 
 	VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__);
-	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
+	if (unlikely(queue_id >= dev->nr_vring)) {
 		VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n",
 			dev->ifname, __func__, queue_id);
 		return 0;
@@ -2182,7 +2187,18 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 	}
 
-	n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id);
+	if (queue_id % 2 == 0)
+		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id,
+				pkts, count, dma_id, vchan_id);
+	else {
+		if (unlikely(vq_is_packed(dev)))
+			VHOST_LOG_DATA(ERR,
+				"(%d) %s: async dequeue does not support packed ring.\n",
+				dev->vid, __func__);
+		else
+			n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, pkts, count,
+				dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
+	}
 
 	vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl);
 	vq->stats.inflight_completed += n_pkts_cpl;
@@ -2190,6 +2206,68 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 	return n_pkts_cpl;
 }
 
+uint16_t
+rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts,
+		uint16_t count, int16_t dma_id, uint16_t vchan_id)
+{
+	struct virtio_net *dev = get_device(vid);
+	struct vhost_virtqueue *vq;
+	uint16_t n_pkts_cpl = 0;
+
+	if (!dev)
+		return 0;
+
+	VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__);
+	if (unlikely(queue_id >= dev->nr_vring)) {
+		VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n",
+			dev->ifname, __func__, queue_id);
+		return 0;
+	}
+
+	vq = dev->virtqueue[queue_id];
+
+	if (!rte_spinlock_trylock(&vq->access_lock)) {
+		VHOST_LOG_DATA(ERR,
+			"(%d) %s: failed to clear async queue id %d, virtqueue busy.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	if (unlikely(!vq->async)) {
+		VHOST_LOG_DATA(ERR, "(%s) %s: async not registered for queue id %d.\n",
+			dev->ifname, __func__, queue_id);
+		goto out_access_unlock;
+	}
+
+	if (unlikely(!dma_copy_track[dma_id].vchans ||
+			!dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) {
+		VHOST_LOG_DATA(ERR, "(%s) %s: invalid channel %d:%u.\n", dev->ifname, __func__,
+			dma_id, vchan_id);
+		goto out_access_unlock;
+	}
+
+	if (queue_id % 2 == 0)
+		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id,
+				pkts, count, dma_id, vchan_id);
+	else {
+		if (unlikely(vq_is_packed(dev)))
+			VHOST_LOG_DATA(ERR,
+				"(%d) %s: async dequeue does not support packed ring.\n",
+				dev->vid, __func__);
+		else
+			n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, pkts, count,
+				dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
+	}
+
+	vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl);
+	vq->stats.inflight_completed += n_pkts_cpl;
+
+out_access_unlock:
+	rte_spinlock_unlock(&vq->access_lock);
+
+	return n_pkts_cpl;
+}
+
 static __rte_always_inline uint32_t
 virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
 	struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id)

From patchwork Mon May 23 16:13:45 2022
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 111603
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Yuan Wang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, xuan.ding@intel.com, xingguang.he@intel.com,
 sunil.pai.g@intel.com, yuanx.wang@intel.com
Subject: [PATCH v3 2/2] examples/vhost: support clearing in-flight packets for async dequeue
Date: Tue, 24 May 2022 00:13:45 +0800
Message-Id: <20220523161345.289549-3-yuanx.wang@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220523161345.289549-1-yuanx.wang@intel.com>
References: <20220413182742.860659-1-yuanx.wang@intel.com>
 <20220523161345.289549-1-yuanx.wang@intel.com>

This patch allows vring_state_changed() to clear in-flight dequeue packets.
It also clears the in-flight packets in a thread-safe way in destroy_device().
Signed-off-by: Yuan Wang
---
 examples/vhost/main.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 5bc34b0c52..a66d6d4d18 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1539,6 +1539,25 @@ vhost_clear_queue_thread_unsafe(struct vhost_dev *vdev, uint16_t queue_id)
 	}
 }
 
+static void
+vhost_clear_queue(struct vhost_dev *vdev, uint16_t queue_id)
+{
+	uint16_t n_pkt = 0;
+	int pkts_inflight;
+
+	int16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
+	pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id);
+
+	struct rte_mbuf *m_cpl[pkts_inflight];
+
+	while (pkts_inflight) {
+		n_pkt = rte_vhost_clear_queue(vdev->vid, queue_id, m_cpl,
+				pkts_inflight, dma_id, 0);
+		free_pkts(m_cpl, n_pkt);
+		pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id);
+	}
+}
+
 /*
  * Remove a device from the specific data core linked list and from the
  * main linked list. Synchronization occurs through the use of the
@@ -1596,13 +1615,13 @@ destroy_device(int vid)
 		vdev->vid);
 
 	if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
-		vhost_clear_queue_thread_unsafe(vdev, VIRTIO_RXQ);
+		vhost_clear_queue(vdev, VIRTIO_RXQ);
 		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
 		dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
 	}
 
 	if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) {
-		vhost_clear_queue_thread_unsafe(vdev, VIRTIO_TXQ);
+		vhost_clear_queue(vdev, VIRTIO_TXQ);
 		rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ);
 		dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false;
 	}
@@ -1761,9 +1780,6 @@ vring_state_changed(int vid, uint16_t queue_id, int enable)
 	if (!vdev)
 		return -1;
 
-	if (queue_id != VIRTIO_RXQ)
-		return 0;
-
 	if (dma_bind[vid2socketid[vid]].dmas[queue_id].async_enabled) {
 		if (!enable)
 			vhost_clear_queue_thread_unsafe(vdev, queue_id);
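
The vhost_clear_queue() helper added above sizes its mbuf array from whatever
rte_vhost_async_get_inflight() returns and loops while that value is non-zero.
A slightly more defensive variant, shown here only as a sketch and not part of
the patch, enters the loop only for a positive in-flight count and sizes the
array per iteration; struct vhost_dev, free_pkts(), dma_bind and vid2socketid
are the example application's own definitions, as in the diff above.

    static void
    vhost_drain_queue(struct vhost_dev *vdev, uint16_t queue_id)
    {
        int16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
        int pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id);

        /* A non-positive value means nothing is in flight (or the query
         * failed), so there is nothing to drain.
         */
        while (pkts_inflight > 0) {
            struct rte_mbuf *m_cpl[pkts_inflight];
            uint16_t n_pkt;

            n_pkt = rte_vhost_clear_queue(vdev->vid, queue_id, m_cpl,
                    pkts_inflight, dma_id, 0);
            free_pkts(m_cpl, n_pkt);
            pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id);
        }
    }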