From patchwork Wed Feb 16 07:04:16 2022
X-Patchwork-Submitter: "Ding, Xuan"
X-Patchwork-Id: 107676
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: xuan.ding@intel.com
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org,
 jiayu.hu@intel.com, sunil.pai.g@intel.com, Xuan Ding
Subject: [RFC 1/2] vhost: add unsafe API to check inflight packets
Date: Wed, 16 Feb 2022 07:04:16 +0000
Message-Id: <20220216070417.9597-2-xuan.ding@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220216070417.9597-1-xuan.ding@intel.com>
References: <20220216070417.9597-1-xuan.ding@intel.com>
List-Id: DPDK patches and discussions

From: Xuan Ding

In the async data path, when the vring state changes or the device is
destroyed, it is necessary to know the number of inflight packets in
the DMA engine. This patch provides a thread-unsafe API that returns
the number of inflight packets for a vhost queue without taking any
lock.

Signed-off-by: Xuan Ding
---
 doc/guides/prog_guide/vhost_lib.rst    |  5 +++++
 doc/guides/rel_notes/release_22_03.rst |  4 ++++
 lib/vhost/rte_vhost_async.h            | 14 ++++++++++++++
 lib/vhost/version.map                  |  1 +
 lib/vhost/vhost.c                      | 26 ++++++++++++++++++++++++++
 5 files changed, 50 insertions(+)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index 886f8f5e72..f95288d128 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -271,6 +271,11 @@ The following is an overview of some key Vhost API functions:
   This function returns the amount of in-flight packets for the vhost
   queue using async acceleration.
 
+* ``rte_vhost_async_get_inflight_thread_unsafe(vid, queue_id)``
+
+  Get the number of inflight packets for a vhost queue without
+  performing any locking.
+
 * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count, dma_id, vchan_id)``
 
   Clear inflight packets which are submitted to DMA engine in vhost async data
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index ff3095d742..37ef37bb20 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -149,6 +149,10 @@ New Features
   * Called ``rte_ipv4/6_udptcp_cksum_mbuf()`` functions in testpmd csum mode
     to support software UDP/TCP checksum over multiple segments.
 
+* **Added vhost API to get the number of inflight packets.**
+
+  Added an API which returns the number of inflight packets in the
+  vhost async data path without taking any lock.
 
 Removed Items
 -------------
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 838c4778cc..06b0b0a579 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -135,6 +135,20 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
 __rte_experimental
 int rte_vhost_async_get_inflight(int vid, uint16_t queue_id);
 
+/**
+ * This function is a lock-free version that returns the number of in-flight
+ * packets for the vhost queue which uses async channel acceleration.
+ *
+ * @param vid
+ *  id of vhost device to enqueue data
+ * @param queue_id
+ *  queue id to enqueue data
+ * @return
+ *  the number of in-flight packets on success; -1 on failure
+ */
+__rte_experimental
+int rte_vhost_async_get_inflight_thread_unsafe(int vid, uint16_t queue_id);
+
 /**
  * This function checks async completion status and clear packets for
  * a specific vhost device queue. Packets which are inflight will be
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 1202ba9c1a..03b46cb10e 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -87,6 +87,7 @@ EXPERIMENTAL {
 
 	# added in 22.03
 	rte_vhost_async_dma_configure;
+	rte_vhost_async_get_inflight_thread_unsafe;
 };
 
 INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 6bcb716de0..33dca62c6c 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -1907,6 +1907,32 @@ rte_vhost_async_get_inflight(int vid, uint16_t queue_id)
 	return ret;
 }
 
+int
+rte_vhost_async_get_inflight_thread_unsafe(int vid, uint16_t queue_id)
+{
+	struct vhost_virtqueue *vq;
+	struct virtio_net *dev = get_device(vid);
+	int ret = -1;
+
+	if (dev == NULL)
+		return ret;
+
+	if (queue_id >= VHOST_MAX_VRING)
+		return ret;
+
+	vq = dev->virtqueue[queue_id];
+
+	if (vq == NULL)
+		return ret;
+
+	if (!vq->async)
+		return ret;
+
+	ret = vq->async->pkts_inflight_n;
+
+	return ret;
+}
+
 int
 rte_vhost_get_monitor_addr(int vid, uint16_t queue_id,
 	struct rte_vhost_power_monitor_cond *pmc)

From patchwork Wed Feb 16 07:04:17 2022
X-Patchwork-Submitter: "Ding, Xuan"
X-Patchwork-Id: 107677
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: xuan.ding@intel.com
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, sunil.pai.g@intel.com, Xuan Ding
Subject: [RFC 2/2] examples/vhost: use API to check inflight packets
Date: Wed, 16 Feb 2022 07:04:17 +0000
Message-Id: <20220216070417.9597-3-xuan.ding@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220216070417.9597-1-xuan.ding@intel.com>
References: <20220216070417.9597-1-xuan.ding@intel.com>
List-Id: DPDK patches and discussions

From: Xuan Ding

In the async data path, call the rte_vhost_async_get_inflight_thread_unsafe()
API to directly return the number of inflight packets instead of
maintaining a local variable.
Signed-off-by: Xuan Ding
Reviewed-by: Maxime Coquelin
---
 examples/vhost/main.c | 28 +++++++++++++++------------
 examples/vhost/main.h |  1 -
 2 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 3e784f5c6f..ba7ab23f4e 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -972,10 +972,8 @@ complete_async_pkts(struct vhost_dev *vdev)
 	complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
 					VIRTIO_RXQ, p_cpl, MAX_PKT_BURST, dma_id, 0);
-	if (complete_count) {
+	if (complete_count)
 		free_pkts(p_cpl, complete_count);
-		__atomic_sub_fetch(&vdev->pkts_inflight, complete_count, __ATOMIC_SEQ_CST);
-	}
 }
@@ -1017,7 +1015,6 @@ drain_vhost(struct vhost_dev *vdev)
 	complete_async_pkts(vdev);
 	ret = rte_vhost_submit_enqueue_burst(vdev->vid, VIRTIO_RXQ,
 					m, nr_xmit, dma_id, 0);
-	__atomic_add_fetch(&vdev->pkts_inflight, ret, __ATOMIC_SEQ_CST);
 
 	enqueue_fail = nr_xmit - ret;
 	if (enqueue_fail)
@@ -1346,7 +1343,6 @@ drain_eth_rx(struct vhost_dev *vdev)
 	complete_async_pkts(vdev);
 	enqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid,
 					VIRTIO_RXQ, pkts, rx_count, dma_id, 0);
-	__atomic_add_fetch(&vdev->pkts_inflight, enqueue_count, __ATOMIC_SEQ_CST);
 
 	enqueue_fail = rx_count - enqueue_count;
 	if (enqueue_fail)
@@ -1518,14 +1514,17 @@ destroy_device(int vid)
 	if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
 		uint16_t n_pkt = 0;
+		int pkts_inflight;
 		int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
-		struct rte_mbuf *m_cpl[vdev->pkts_inflight];
+		pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid, VIRTIO_RXQ);
+		struct rte_mbuf *m_cpl[pkts_inflight];
 
-		while (vdev->pkts_inflight) {
+		while (pkts_inflight) {
 			n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, VIRTIO_RXQ,
-						m_cpl, vdev->pkts_inflight, dma_id, 0);
+						m_cpl, pkts_inflight, dma_id, 0);
 			free_pkts(m_cpl, n_pkt);
-			__atomic_sub_fetch(&vdev->pkts_inflight, n_pkt, __ATOMIC_SEQ_CST);
+			pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid,
+										VIRTIO_RXQ);
 		}
 
 		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
@@ -1629,14 +1628,17 @@ vring_state_changed(int vid, uint16_t queue_id, int enable)
 	if (dma_bind[vid].dmas[queue_id].async_enabled) {
 		if (!enable) {
 			uint16_t n_pkt = 0;
+			int pkts_inflight;
+			pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid, queue_id);
 			int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
-			struct rte_mbuf *m_cpl[vdev->pkts_inflight];
+			struct rte_mbuf *m_cpl[pkts_inflight];
 
-			while (vdev->pkts_inflight) {
+			while (pkts_inflight) {
 				n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, queue_id,
-							m_cpl, vdev->pkts_inflight, dma_id, 0);
+							m_cpl, pkts_inflight, dma_id, 0);
 				free_pkts(m_cpl, n_pkt);
-				__atomic_sub_fetch(&vdev->pkts_inflight, n_pkt, __ATOMIC_SEQ_CST);
+				pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid,
+											queue_id);
 			}
 		}
 	}
diff --git a/examples/vhost/main.h b/examples/vhost/main.h
index b4a453e77e..e7f395c3c9 100644
--- a/examples/vhost/main.h
+++ b/examples/vhost/main.h
@@ -52,7 +52,6 @@ struct vhost_dev {
 	uint64_t features;
 	size_t hdr_len;
 	uint16_t nr_vrings;
-	uint16_t pkts_inflight;
 	struct rte_vhost_memory *mem;
 	struct device_statistics stats;
 	TAILQ_ENTRY(vhost_dev) global_vdev_entry;