From patchwork Mon Jan 17 13:28:46 2022
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 105892
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Yuan Wang
To:
maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, xuan.ding@intel.com, wenwux.ma@intel.com, yvonnex.yang@intel.com, yuanx.wang@intel.com
Subject: [RFC 1/2] vhost: support clear in-flight packets for async dequeue
Date: Mon, 17 Jan 2022 13:28:46 +0000
Message-Id: <20220117132847.884998-2-yuanx.wang@intel.com>
In-Reply-To: <20220117132847.884998-1-yuanx.wang@intel.com>
References: <20220117132847.884998-1-yuanx.wang@intel.com>
List-Id: DPDK patches and discussions

rte_vhost_clear_queue_thread_unsafe() supports clearing in-flight packets
for async enqueue only. Now that async dequeue is supported, this API
should handle async dequeue as well.

This patch also adds a thread-safe version of this API; the difference
between the two is that the thread-safe version takes the virtqueue
access lock while clearing.

These APIs may be used to clean up packets in the async channel, to
prevent packet loss when the device state changes or when the device
is destroyed.
Signed-off-by: Yuan Wang
---
 doc/guides/prog_guide/vhost_lib.rst    |  7 ++-
 doc/guides/rel_notes/release_22_03.rst |  5 ++
 lib/vhost/rte_vhost_async.h            | 26 ++++++++++
 lib/vhost/version.map                  |  1 +
 lib/vhost/virtio_net.c                 | 71 +++++++++++++++++++++++++-
 5 files changed, 107 insertions(+), 3 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index bdce7cbf02..7cea2490e5 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -269,7 +269,12 @@ The following is an overview of some key Vhost API functions:
 * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count, dma_id, dma_vchan)``
 
-  Clear inflight packets which are submitted to DMA engine in vhost async data
+  Clear in-flight packets which are submitted to async channel in vhost
+  async data path without performing any locking. Completed packets are
+  returned to applications through ``pkts``.
+
+* ``rte_vhost_clear_queue(vid, queue_id, **pkts, count, dma_id, dma_vchan)``
+  Clear in-flight packets which are submitted to async channel in vhost
   async data path. Completed packets are returned to applications through
   ``pkts``.
 
 Vhost-user Implementations

diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6d99d1eaa9..e774919fc9 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -55,6 +55,11 @@ New Features
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* **Added thread-safe version of inflight packet clear API in vhost library.**
+
+  Added an API which can clear the inflight packets submitted to
+  the async channel in a thread-safe manner in the vhost async data path.
+
 
 Removed Items
 -------------

diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index b1249382cd..ed9389560b 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -186,6 +186,32 @@ __rte_experimental
 uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
 		uint16_t vchan);
+
+/**
+ * This function checks async completion status and clears packets for
+ * a specific vhost device queue. Packets which are in flight will be
+ * returned in an array.
+ *
+ * @param vid
+ *  ID of vhost device to clear data
+ * @param queue_id
+ *  Queue id to clear data
+ * @param pkts
+ *  Blank array to get return packet pointer
+ * @param count
+ *  Size of the packet array
+ * @param dma_id
+ *  The identifier of the DMA device
+ * @param vchan
+ *  The identifier of the virtual DMA channel
+ * @return
+ *  Number of packets returned
+ */
+__rte_experimental
+uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id,
+		struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
+		uint16_t vchan);
+
 /**
  * The DMA vChannels used in asynchronous data path must be configured
  * first.
 * So this function needs to be called before enabling DMA

diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 816a6dc942..16f7d57380 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -88,6 +88,7 @@ EXPERIMENTAL {
 	# added in 22.03
 	rte_vhost_async_dma_configure;
 	rte_vhost_async_try_dequeue_burst;
+	rte_vhost_clear_queue;
 };
 
 INTERNAL {

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 148709f2c5..510cd6ca8a 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -26,6 +26,11 @@
 
 #define MAX_BATCH_LEN 256
 
+static __rte_always_inline uint16_t
+async_poll_dequeue_completed_split(struct virtio_net *dev, uint16_t queue_id,
+	struct rte_mbuf **pkts, uint16_t count, uint16_t dma_id, uint16_t dma_vchan,
+	bool legacy_ol_flags);
+
 /* DMA device copy operation tracking array. */
 struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
 
@@ -2042,7 +2047,49 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 
 	VHOST_LOG_DATA(DEBUG, "(%d) %s\n", dev->vid, __func__);
-	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
+	if (unlikely(queue_id >= dev->nr_vring)) {
+		VHOST_LOG_DATA(ERR, "(%d) %s: invalid virtqueue idx %d.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	vq = dev->virtqueue[queue_id];
+
+	if (unlikely(!vq->async)) {
+		VHOST_LOG_DATA(ERR, "(%d) %s: async not registered for queue id %d.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	if (queue_id % 2 == 0)
+		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id,
+				pkts, count, dma_id, vchan);
+	else {
+		if (unlikely(vq_is_packed(dev)))
+			VHOST_LOG_DATA(ERR,
+				"(%d) %s: async dequeue does not support packed ring.\n",
+				dev->vid, __func__);
+		else
+			n_pkts_cpl = async_poll_dequeue_completed_split(dev, queue_id, pkts, count,
+				dma_id, vchan, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
+	}
+
+	return n_pkts_cpl;
+}
+
+uint16_t
+rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts,
+		uint16_t count, int16_t dma_id, uint16_t vchan)
+{
+	struct virtio_net *dev = get_device(vid);
+	struct vhost_virtqueue *vq;
+	uint16_t n_pkts_cpl = 0;
+
+	if (!dev)
+		return 0;
+
+	VHOST_LOG_DATA(DEBUG, "(%d) %s\n", dev->vid, __func__);
+	if (unlikely(queue_id >= dev->nr_vring)) {
 		VHOST_LOG_DATA(ERR, "(%d) %s: invalid virtqueue idx %d.\n",
 			dev->vid, __func__, queue_id);
 		return 0;
@@ -2056,7 +2103,27 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 	}
 
-	n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan);
+	if (!rte_spinlock_trylock(&vq->access_lock)) {
+		VHOST_LOG_DATA(ERR,
+			"(%d) %s: failed to clear async queue id %d, virtqueue busy.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	if (queue_id % 2 == 0)
+		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id,
+				pkts, count, dma_id, vchan);
+	else {
+		if (unlikely(vq_is_packed(dev)))
+			VHOST_LOG_DATA(ERR,
+				"(%d) %s: async dequeue does not support packed ring.\n",
+				dev->vid, __func__);
+		else
+			n_pkts_cpl = async_poll_dequeue_completed_split(dev, queue_id, pkts, count,
+				dma_id, vchan, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
+	}
+
+	rte_spinlock_unlock(&vq->access_lock);
 
 	return n_pkts_cpl;
 }

From patchwork Mon Jan 17 13:28:47 2022
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 105893
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Yuan Wang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, xuan.ding@intel.com, wenwux.ma@intel.com, yvonnex.yang@intel.com, yuanx.wang@intel.com
Subject: [RFC 2/2] example/vhost: support to clear in-flight packets for async dequeue
Date: Mon, 17 Jan 2022 13:28:47 +0000
Message-Id: <20220117132847.884998-3-yuanx.wang@intel.com>
In-Reply-To: <20220117132847.884998-1-yuanx.wang@intel.com>
References: <20220117132847.884998-1-yuanx.wang@intel.com>
List-Id: DPDK patches and discussions

This patch allows
vhost_clear_queue_thread_unsafe() to clear in-flight dequeue packets.

Signed-off-by: Yuan Wang
---
 examples/vhost/main.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 04a85262bc..050f983fd6 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1495,6 +1495,7 @@ vhost_clear_queue_thread_unsafe(struct vhost_dev *vdev, uint16_t queue_id)
 	uint16_t n_pkt = 0;
 	uint16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
 	struct rte_mbuf *m_enq_cpl[vdev->pkts_enq_inflight];
+	struct rte_mbuf *m_deq_cpl[vdev->pkts_deq_inflight];
 
 	if (queue_id % 2 == 0) {
 		while (vdev->pkts_enq_inflight) {
@@ -1503,6 +1504,13 @@ vhost_clear_queue_thread_unsafe(struct vhost_dev *vdev, uint16_t queue_id)
 			free_pkts(m_enq_cpl, n_pkt);
 			__atomic_sub_fetch(&vdev->pkts_enq_inflight, n_pkt, __ATOMIC_SEQ_CST);
 		}
+	} else {
+		while (vdev->pkts_deq_inflight) {
+			n_pkt = rte_vhost_clear_queue_thread_unsafe(vdev->vid,
+				queue_id, m_deq_cpl, vdev->pkts_deq_inflight, dma_id, 0);
+			free_pkts(m_deq_cpl, n_pkt);
+			__atomic_sub_fetch(&vdev->pkts_deq_inflight, n_pkt, __ATOMIC_SEQ_CST);
+		}
 	}
 }