From patchwork Thu Jun 9 17:34:03 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 112610
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Yuan Wang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com, dev@dpdk.org
Cc: jiayu.hu@intel.com, xuan.ding@intel.com, sunil.pai.g@intel.com, Yuan Wang
Subject: [PATCH v5 1/2] vhost: support clear in-flight packets for async dequeue
Date: Fri, 10 Jun 2022 01:34:03 +0800
Message-Id: <20220609173404.1769210-2-yuanx.wang@intel.com>
In-Reply-To: <20220609173404.1769210-1-yuanx.wang@intel.com>
References: <20220413182742.860659-1-yuanx.wang@intel.com>
 <20220609173404.1769210-1-yuanx.wang@intel.com>

rte_vhost_clear_queue_thread_unsafe() only supports clearing in-flight
packets for async enqueue. Now that async dequeue is supported, this
API should cover it as well.

This patch also adds a thread-safe version of the API; the only
difference between the two is that the thread-safe version takes the
virtqueue access lock.

These APIs may be used to clean up packets in the async channel, so as
to prevent packet loss when the device state changes or when the
device is destroyed.
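
For illustration only (this sketch is not part of the patch), an
application could drain a queue with the new API roughly as follows.
The helper name drain_queue and the burst size of 512 are invented
here; the sketch assumes an async channel is already registered for
(vid, queue_id) and that dma_id/vchan_id match the configured DMA
channel.

    #include <rte_common.h>
    #include <rte_mbuf.h>
    #include <rte_vhost_async.h>

    static void
    drain_queue(int vid, uint16_t queue_id, int16_t dma_id, uint16_t vchan_id)
    {
        struct rte_mbuf *pkts[512];
        int inflight = rte_vhost_async_get_inflight(vid, queue_id);

        while (inflight > 0) {
            /* Poll completed copies; a return of 0 can also mean the
             * virtqueue was busy (trylock failed), so just retry. */
            uint16_t n = rte_vhost_clear_queue(vid, queue_id, pkts,
                    RTE_DIM(pkts), dma_id, vchan_id);
            rte_pktmbuf_free_bulk(pkts, n);
            inflight = rte_vhost_async_get_inflight(vid, queue_id);
        }
    }

The same call covers both directions: as the implementation below
shows, an even queue index takes the enqueue (RX) completion path and
an odd index the dequeue (TX) completion path.
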
Signed-off-by: Yuan Wang
Reviewed-by: Maxime Coquelin
Reviewed-by: Jiayu Hu
---
 doc/guides/prog_guide/vhost_lib.rst    |  8 ++-
 doc/guides/rel_notes/release_22_07.rst |  4 ++
 lib/vhost/rte_vhost_async.h            | 25 +++++++
 lib/vhost/version.map                  |  1 +
 lib/vhost/virtio_net.c                 | 93 +++++++++++++++++++++++++-
 5 files changed, 128 insertions(+), 3 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index cd3f6caa9a..606edee940 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -288,7 +288,13 @@ The following is an overview of some key Vhost API functions:
 
 * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count, dma_id, vchan_id)``
 
-  Clear inflight packets which are submitted to DMA engine in vhost async data
+  Clear in-flight packets which are submitted to async channel in vhost
+  async data path without performing locking on virtqueue. Completed
+  packets are returned to applications through ``pkts``.
+
+* ``rte_vhost_clear_queue(vid, queue_id, **pkts, count, dma_id, vchan_id)``
+
+  Clear in-flight packets which are submitted to async channel in vhost async data
   path. Completed packets are returned to applications through ``pkts``.
 
 * ``rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id, struct rte_vhost_stat_name *names, unsigned int size)``

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index d46f773df0..28ad615a66 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -170,6 +170,10 @@ New Features
   This is a fall-back implementation for platforms that
   don't support vector operations.
 
+* **Added thread-safe version of inflight packet clear API in vhost library.**
+
+  Added an API which can clear the inflight packets submitted to
+  the async channel in a thread-safe manner in the vhost async data path.
 
 Removed Items
 -------------

diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index a1e7f674ed..1db2a10124 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -183,6 +183,31 @@ uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 	struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
 	uint16_t vchan_id);
 
+/**
+ * This function checks async completion status and clear packets for
+ * a specific vhost device queue. Packets which are inflight will be
+ * returned in an array.
+ *
+ * @param vid
+ *  ID of vhost device to clear data
+ * @param queue_id
+ *  Queue id to clear data
+ * @param pkts
+ *  Blank array to get return packet pointer
+ * @param count
+ *  Size of the packet array
+ * @param dma_id
+ *  The identifier of the DMA device
+ * @param vchan_id
+ *  The identifier of virtual DMA channel
+ * @return
+ *  Number of packets returned
+ */
+__rte_experimental
+uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id,
+	struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
+	uint16_t vchan_id);
+
 /**
  * The DMA vChannels used in asynchronous data path must be configured
 * first. So this function needs to be called before enabling DMA

diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 4880b9a422..9329f88e79 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -95,6 +95,7 @@ EXPERIMENTAL {
 	rte_vhost_vring_stats_reset;
 	rte_vhost_async_try_dequeue_burst;
 	rte_vhost_driver_get_vdpa_dev_type;
+	rte_vhost_clear_queue;
 };
 
 INTERNAL {

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 68a26eb17d..4b28f65728 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -26,6 +26,11 @@
 
 #define MAX_BATCH_LEN 256
 
+static __rte_always_inline uint16_t
+async_poll_dequeue_completed_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
+		struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
+		uint16_t vchan_id, bool legacy_ol_flags);
+
 /* DMA device copy operation tracking array. */
 struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
 
@@ -2155,12 +2160,18 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 
 	VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__);
-	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
+	if (unlikely(queue_id >= dev->nr_vring)) {
 		VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n",
 			dev->ifname, __func__, queue_id);
 		return 0;
 	}
 
+	if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) {
+		VHOST_LOG_DATA(ERR, "(%s) %s: invalid dma id %d.\n",
+			dev->ifname, __func__, dma_id);
+		return 0;
+	}
+
 	vq = dev->virtqueue[queue_id];
 
 	if (unlikely(!rte_spinlock_is_locked(&vq->access_lock))) {
@@ -2182,11 +2193,89 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 	}
 
-	n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id);
+	if ((queue_id & 1) == 0)
+		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id,
+					pkts, count, dma_id, vchan_id);
+	else {
+		if (unlikely(vq_is_packed(dev)))
+			VHOST_LOG_DATA(ERR,
+					"(%s) %s: async dequeue does not support packed ring.\n",
+					dev->ifname, __func__);
+		else
+			n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, pkts, count,
+					dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
+	}
+
+	vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl);
+	vq->stats.inflight_completed += n_pkts_cpl;
+
+	return n_pkts_cpl;
+}
+
+uint16_t
+rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts,
+		uint16_t count, int16_t dma_id, uint16_t vchan_id)
+{
+	struct virtio_net *dev = get_device(vid);
+	struct vhost_virtqueue *vq;
+	uint16_t n_pkts_cpl = 0;
+
+	if (!dev)
+		return 0;
+
+	VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__);
+	if (unlikely(queue_id >= dev->nr_vring)) {
+		VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %u.\n",
+			dev->ifname, __func__, queue_id);
+		return 0;
+	}
+
+	if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) {
+		VHOST_LOG_DATA(ERR, "(%s) %s: invalid dma id %d.\n",
+			dev->ifname, __func__, dma_id);
+		return 0;
+	}
+
+	vq = dev->virtqueue[queue_id];
+
+	if (!rte_spinlock_trylock(&vq->access_lock)) {
+		VHOST_LOG_DATA(DEBUG, "(%s) %s: virtqueue %u is busy.\n",
+			dev->ifname, __func__, queue_id);
+		return 0;
+	}
+
+	if (unlikely(!vq->async)) {
+		VHOST_LOG_DATA(ERR, "(%s) %s: async not registered for queue id %u.\n",
+			dev->ifname, __func__, queue_id);
+		goto out_access_unlock;
+	}
+
+	if (unlikely(!dma_copy_track[dma_id].vchans ||
+				!dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) {
+		VHOST_LOG_DATA(ERR, "(%s) %s: invalid channel %d:%u.\n", dev->ifname, __func__,
+				dma_id, vchan_id);
+		goto out_access_unlock;
+	}
+
+	if ((queue_id & 1) == 0)
+		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id,
+				pkts, count, dma_id, vchan_id);
+	else {
+		if (unlikely(vq_is_packed(dev)))
+			VHOST_LOG_DATA(ERR,
+					"(%s) %s: async dequeue does not support packed ring.\n",
+					dev->ifname, __func__);
+		else
+			n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, pkts, count,
+					dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
+	}
 
 	vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl);
 	vq->stats.inflight_completed += n_pkts_cpl;
 
+out_access_unlock:
+	rte_spinlock_unlock(&vq->access_lock);
+
 	return n_pkts_cpl;
 }

From patchwork Thu Jun 9 17:34:04 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 112611
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Yuan Wang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com, dev@dpdk.org
Cc: jiayu.hu@intel.com, xuan.ding@intel.com, sunil.pai.g@intel.com, Yuan Wang
Subject: [PATCH v5 2/2] example/vhost: support to clear in-flight packets for async dequeue
Date: Fri, 10 Jun 2022 01:34:04 +0800
Message-Id: <20220609173404.1769210-3-yuanx.wang@intel.com>
In-Reply-To: <20220609173404.1769210-1-yuanx.wang@intel.com>
References: <20220413182742.860659-1-yuanx.wang@intel.com>
 <20220609173404.1769210-1-yuanx.wang@intel.com>

This patch allows vring_state_changed() to clear in-flight dequeue
packets. It also clears the in-flight packets in a thread-safe way
in destroy_device().
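
For context (again, not part of the patch), the new destroy path boils
down to the shape below. The helper name teardown_async, the burst
size, and the use of vchan_id 0 are assumptions for this sketch; it
relies on the thread-safe rte_vhost_clear_queue() from patch 1/2,
since destroy_device() runs without the virtqueue lock held.

    #include <rte_common.h>
    #include <rte_mbuf.h>
    #include <rte_vhost_async.h>

    static void
    teardown_async(int vid, int16_t dma_id)
    {
        struct rte_mbuf *pkts[512];
        uint16_t qids[] = { 0 /* VIRTIO_RXQ */, 1 /* VIRTIO_TXQ */ };
        unsigned int i;

        for (i = 0; i < RTE_DIM(qids); i++) {
            /* Drain completed copies until nothing is in flight, then
             * it is safe to unregister the async channel. */
            while (rte_vhost_async_get_inflight(vid, qids[i]) > 0) {
                uint16_t n = rte_vhost_clear_queue(vid, qids[i], pkts,
                        RTE_DIM(pkts), dma_id, 0);
                rte_pktmbuf_free_bulk(pkts, n);
            }
            rte_vhost_async_channel_unregister(vid, qids[i]);
        }
    }

vring_state_changed(), by contrast, keeps the thread-unsafe variant in
the diff below, as that callback runs while the vhost library holds the
virtqueue access lock, so the trylock in the thread-safe API would
simply fail there.
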
Signed-off-by: Yuan Wang
Reviewed-by: Maxime Coquelin
Reviewed-by: Jiayu Hu
---
 examples/vhost/main.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index e7fee5aa1b..a679ef738c 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1543,6 +1543,25 @@ vhost_clear_queue_thread_unsafe(struct vhost_dev *vdev, uint16_t queue_id)
 	}
 }
 
+static void
+vhost_clear_queue(struct vhost_dev *vdev, uint16_t queue_id)
+{
+	uint16_t n_pkt = 0;
+	int pkts_inflight;
+
+	int16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
+	pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id);
+
+	struct rte_mbuf *m_cpl[pkts_inflight];
+
+	while (pkts_inflight) {
+		n_pkt = rte_vhost_clear_queue(vdev->vid, queue_id, m_cpl,
+					pkts_inflight, dma_id, 0);
+		free_pkts(m_cpl, n_pkt);
+		pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id);
+	}
+}
+
 /*
  * Remove a device from the specific data core linked list and from the
  * main linked list. Synchronization occurs through the use of the
@@ -1600,13 +1619,13 @@ destroy_device(int vid)
 		vdev->vid);
 
 	if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
-		vhost_clear_queue_thread_unsafe(vdev, VIRTIO_RXQ);
+		vhost_clear_queue(vdev, VIRTIO_RXQ);
 		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
 		dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
 	}
 
 	if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) {
-		vhost_clear_queue_thread_unsafe(vdev, VIRTIO_TXQ);
+		vhost_clear_queue(vdev, VIRTIO_TXQ);
 		rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ);
 		dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false;
 	}
@@ -1765,9 +1784,6 @@ vring_state_changed(int vid, uint16_t queue_id, int enable)
 	if (!vdev)
 		return -1;
 
-	if (queue_id != VIRTIO_RXQ)
-		return 0;
-
 	if (dma_bind[vid2socketid[vid]].dmas[queue_id].async_enabled) {
 		if (!enable)
 			vhost_clear_queue_thread_unsafe(vdev, queue_id);