From patchwork Wed Jun 22 06:27:41 2022
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 113210
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Yuan Wang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com, dev@dpdk.org
Cc: jiayu.hu@intel.com, xingguang.he@intel.com, Yuan Wang, stable@dpdk.org,
 Wei Ling
Subject: [PATCH v3] examples/vhost: fix retry logic on eth rx path
Date: Wed, 22 Jun 2022 14:27:41 +0800
Message-Id: <20220622062741.1140109-1-yuanx.wang@intel.com>
In-Reply-To: <20220518162505.1691401-1-yuanx.wang@intel.com>
References: <20220518162505.1691401-1-yuanx.wang@intel.com>

drain_eth_rx() uses rte_vhost_avail_entries() to calculate the available
entries and decide whether a retry is required. However, this function
only works with split rings; for packed rings it returns an incorrect
value, causing unnecessary retries that result in a significant
performance penalty.

This patch fixes that by using the difference between the Rx burst count
and the number of packets actually enqueued as the retry condition.

Fixes: 4ecf22e356de ("vhost: export device id as the interface to applications")
Cc: stable@dpdk.org

Signed-off-by: Yuan Wang
Tested-by: Wei Ling
---
V3: Fix mbuf index.
V2: Rebase to 22.07 rc1.
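
Note, not part of the patch: below is a minimal self-contained sketch of the
enqueue-then-retry pattern the new code follows, in case the idea is easier to
read outside the diff. enqueue_burst() is a hypothetical stand-in for the
example's vdev_queue_ops[vid].enqueue_pkt_burst() callback; rte_delay_us() and
struct rte_mbuf are the only real DPDK APIs used.

#include <stdint.h>
#include <rte_cycles.h>
#include <rte_mbuf.h>

/* Hypothetical stand-in for vdev_queue_ops[vid].enqueue_pkt_burst(). */
uint16_t enqueue_burst(int vid, uint16_t queue_id,
		struct rte_mbuf **pkts, uint16_t count);

/*
 * Enqueue a burst, then retry only for the packets that were not accepted,
 * instead of pre-checking ring occupancy with rte_vhost_avail_entries()
 * (which is only meaningful for split rings).
 */
uint16_t
enqueue_with_retry(int vid, uint16_t queue_id, struct rte_mbuf **pkts,
		uint16_t rx_count, uint32_t max_retries, uint32_t delay_us)
{
	uint16_t enqueued = enqueue_burst(vid, queue_id, pkts, rx_count);
	uint32_t retry = 0;

	while (enqueued < rx_count && retry++ < max_retries) {
		rte_delay_us(delay_us);
		/* Resume from the first mbuf that did not fit. */
		enqueued += enqueue_burst(vid, queue_id, &pkts[enqueued],
				rx_count - enqueued);
	}
	return enqueued;
}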
---
 examples/vhost/main.c | 28 +++++++++++-----------------
 1 file changed, 11 insertions(+), 17 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index e7fee5aa1b..0fa6c096c8 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -634,7 +634,7 @@ us_vhost_usage(const char *prgname)
 {
 	RTE_LOG(INFO, VHOST_CONFIG, "%s [EAL options] -- -p PORTMASK\n"
 	"		--vm2vm [0|1|2]\n"
-	"		--rx_retry [0|1] --mergeable [0|1] --stats [0-N]\n"
+	"		--rx-retry [0|1] --mergeable [0|1] --stats [0-N]\n"
 	"		--socket-file <path>\n"
 	"		--nb-devices ND\n"
 	"		-p PORTMASK: Set mask for ports to be used by application\n"
@@ -1383,27 +1383,21 @@ drain_eth_rx(struct vhost_dev *vdev)
 	if (!rx_count)
 		return;
 
-	/*
-	 * When "enable_retry" is set, here we wait and retry when there
-	 * is no enough free slots in the queue to hold @rx_count packets,
-	 * to diminish packet loss.
-	 */
-	if (enable_retry &&
-	    unlikely(rx_count > rte_vhost_avail_entries(vdev->vid,
-			VIRTIO_RXQ))) {
-		uint32_t retry;
+	enqueue_count = vdev_queue_ops[vdev->vid].enqueue_pkt_burst(vdev,
+					VIRTIO_RXQ, pkts, rx_count);
 
-		for (retry = 0; retry < burst_rx_retry_num; retry++) {
+	/* Retry if necessary */
+	if (enable_retry && unlikely(enqueue_count < rx_count)) {
+		uint32_t retry = 0;
+
+		while (enqueue_count < rx_count && retry++ < burst_rx_retry_num) {
 			rte_delay_us(burst_rx_delay_time);
-			if (rx_count <= rte_vhost_avail_entries(vdev->vid,
-					VIRTIO_RXQ))
-				break;
+			enqueue_count += vdev_queue_ops[vdev->vid].enqueue_pkt_burst(vdev,
+					VIRTIO_RXQ, &pkts[enqueue_count],
+					rx_count - enqueue_count);
 		}
 	}
 
-	enqueue_count = vdev_queue_ops[vdev->vid].enqueue_pkt_burst(vdev,
-					VIRTIO_RXQ, pkts, rx_count);
-
 	if (enable_stats) {
 		__atomic_add_fetch(&vdev->stats.rx_total_atomic, rx_count,
 				__ATOMIC_SEQ_CST);