From patchwork Fri May 13 02:50:54 2022
X-Patchwork-Submitter: "Ding, Xuan"
X-Patchwork-Id: 111089
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: xuan.ding@intel.com
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, cheng1.jiang@intel.com,
 sunil.pai.g@intel.com, liangma@liangbit.com, Xuan Ding
Subject: [PATCH v6 1/5] vhost: prepare sync for descriptor to mbuf refactoring
Date: Fri, 13 May 2022 02:50:54 +0000
Message-Id: <20220513025058.12898-2-xuan.ding@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220513025058.12898-1-xuan.ding@intel.com>
References: <20220407152546.38167-1-xuan.ding@intel.com>
 <20220513025058.12898-1-xuan.ding@intel.com>
List-Id: DPDK patches and discussions

From: Xuan Ding

This patch extracts the descriptor-to-mbuf segment filling from
copy_desc_to_mbuf() into a dedicated function. In addition, the enqueue
and dequeue paths are refactored to use the same function,
sync_fill_seg(), to prepare batch copy elements, which simplifies the
code without performance degradation.
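
For readers less familiar with the vhost data path, here is a minimal,
self-contained sketch of the pattern applied by this patch: two
direction-specific copy helpers collapsed into a single routine selected
by a to_desc flag. The names below (fill_seg and the plain byte buffers)
are illustrative only and are not part of the vhost code:

    /* Illustrative sketch only, not vhost code: one helper handles both
     * copy directions, selected by a "to_desc" flag, mirroring how
     * sync_fill_seg() replaces two direction-specific helpers. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void
    fill_seg(uint8_t *desc_buf, uint8_t *mbuf_buf, uint32_t cpy_len, bool to_desc)
    {
            if (to_desc)
                    /* enqueue direction: mbuf -> descriptor buffer */
                    memcpy(desc_buf, mbuf_buf, cpy_len);
            else
                    /* dequeue direction: descriptor buffer -> mbuf */
                    memcpy(mbuf_buf, desc_buf, cpy_len);
    }

    int
    main(void)
    {
            uint8_t desc[8] = {0};
            uint8_t mbuf[8] = {1, 2, 3, 4, 5, 6, 7, 8};

            fill_seg(desc, mbuf, sizeof(desc), true);   /* like the enqueue path */
            fill_seg(desc, mbuf, sizeof(desc), false);  /* like the dequeue path */
            printf("desc[0] = %u\n", (unsigned)desc[0]);
            return 0;
    }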
Signed-off-by: Xuan Ding
Tested-by: Yvonne Yang
Reviewed-by: Maxime Coquelin
---
 lib/vhost/virtio_net.c | 78 ++++++++++++++++++++----------------------
 1 file changed, 38 insertions(+), 40 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 5f432b0d77..d4c94d2a9b 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -1030,23 +1030,36 @@ async_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq,
 }
 
 static __rte_always_inline void
-sync_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq,
+sync_fill_seg(struct virtio_net *dev, struct vhost_virtqueue *vq,
         struct rte_mbuf *m, uint32_t mbuf_offset,
-        uint64_t buf_addr, uint64_t buf_iova, uint32_t cpy_len)
+        uint64_t buf_addr, uint64_t buf_iova, uint32_t cpy_len, bool to_desc)
 {
     struct batch_copy_elem *batch_copy = vq->batch_copy_elems;
 
     if (likely(cpy_len > MAX_BATCH_LEN || vq->batch_copy_nb_elems >= vq->size)) {
-        rte_memcpy((void *)((uintptr_t)(buf_addr)),
+        if (to_desc) {
+            rte_memcpy((void *)((uintptr_t)(buf_addr)),
                 rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
                 cpy_len);
+        } else {
+            rte_memcpy(rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
+                (void *)((uintptr_t)(buf_addr)),
+                cpy_len);
+        }
         vhost_log_cache_write_iova(dev, vq, buf_iova, cpy_len);
         PRINT_PACKET(dev, (uintptr_t)(buf_addr), cpy_len, 0);
     } else {
-        batch_copy[vq->batch_copy_nb_elems].dst =
-            (void *)((uintptr_t)(buf_addr));
-        batch_copy[vq->batch_copy_nb_elems].src =
-            rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
+        if (to_desc) {
+            batch_copy[vq->batch_copy_nb_elems].dst =
+                (void *)((uintptr_t)(buf_addr));
+            batch_copy[vq->batch_copy_nb_elems].src =
+                rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
+        } else {
+            batch_copy[vq->batch_copy_nb_elems].dst =
+                rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
+            batch_copy[vq->batch_copy_nb_elems].src =
+                (void *)((uintptr_t)(buf_addr));
+        }
         batch_copy[vq->batch_copy_nb_elems].log_addr = buf_iova;
         batch_copy[vq->batch_copy_nb_elems].len = cpy_len;
         vq->batch_copy_nb_elems++;
@@ -1158,9 +1171,9 @@ mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
                         buf_iova + buf_offset, cpy_len) < 0)
                 goto error;
         } else {
-            sync_mbuf_to_desc_seg(dev, vq, m, mbuf_offset,
-                    buf_addr + buf_offset,
-                    buf_iova + buf_offset, cpy_len);
+            sync_fill_seg(dev, vq, m, mbuf_offset,
+                    buf_addr + buf_offset,
+                    buf_iova + buf_offset, cpy_len, true);
         }
 
         mbuf_avail -= cpy_len;
@@ -2473,8 +2486,8 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
           struct rte_mbuf *m, struct rte_mempool *mbuf_pool,
           bool legacy_ol_flags)
 {
-    uint32_t buf_avail, buf_offset;
-    uint64_t buf_addr, buf_len;
+    uint32_t buf_avail, buf_offset, buf_len;
+    uint64_t buf_addr, buf_iova;
     uint32_t mbuf_avail, mbuf_offset;
     uint32_t cpy_len;
     struct rte_mbuf *cur = m, *prev = m;
@@ -2482,16 +2495,13 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
     struct virtio_net_hdr *hdr = NULL;
     /* A counter to avoid desc dead loop chain */
     uint16_t vec_idx = 0;
-    struct batch_copy_elem *batch_copy = vq->batch_copy_elems;
-    int error = 0;
 
     buf_addr = buf_vec[vec_idx].buf_addr;
+    buf_iova = buf_vec[vec_idx].buf_iova;
     buf_len = buf_vec[vec_idx].buf_len;
 
-    if (unlikely(buf_len < dev->vhost_hlen && nr_vec <= 1)) {
-        error = -1;
-        goto out;
-    }
+    if (unlikely(buf_len < dev->vhost_hlen && nr_vec <= 1))
+        return -1;
 
     if (virtio_net_with_host_offload(dev)) {
         if (unlikely(buf_len < sizeof(struct virtio_net_hdr))) {
@@ -2515,11 +2525,12 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
             buf_offset = dev->vhost_hlen - buf_len;
             vec_idx++;
             buf_addr = buf_vec[vec_idx].buf_addr;
+            buf_iova = buf_vec[vec_idx].buf_iova;
             buf_len = buf_vec[vec_idx].buf_len;
             buf_avail = buf_len - buf_offset;
         } else if (buf_len == dev->vhost_hlen) {
             if (unlikely(++vec_idx >= nr_vec))
-                goto out;
+                goto error;
             buf_addr = buf_vec[vec_idx].buf_addr;
             buf_len = buf_vec[vec_idx].buf_len;
 
@@ -2539,22 +2550,9 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
     while (1) {
         cpy_len = RTE_MIN(buf_avail, mbuf_avail);
 
-        if (likely(cpy_len > MAX_BATCH_LEN ||
-                    vq->batch_copy_nb_elems >= vq->size ||
-                    (hdr && cur == m))) {
-            rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *,
-                        mbuf_offset),
-                    (void *)((uintptr_t)(buf_addr +
-                            buf_offset)), cpy_len);
-        } else {
-            batch_copy[vq->batch_copy_nb_elems].dst =
-                rte_pktmbuf_mtod_offset(cur, void *,
-                        mbuf_offset);
-            batch_copy[vq->batch_copy_nb_elems].src =
-                (void *)((uintptr_t)(buf_addr + buf_offset));
-            batch_copy[vq->batch_copy_nb_elems].len = cpy_len;
-            vq->batch_copy_nb_elems++;
-        }
+        sync_fill_seg(dev, vq, cur, mbuf_offset,
+                buf_addr + buf_offset,
+                buf_iova + buf_offset, cpy_len, false);
 
         mbuf_avail -= cpy_len;
         mbuf_offset += cpy_len;
@@ -2567,6 +2565,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
                 break;
 
             buf_addr = buf_vec[vec_idx].buf_addr;
+            buf_iova = buf_vec[vec_idx].buf_iova;
             buf_len = buf_vec[vec_idx].buf_len;
 
             buf_offset = 0;
@@ -2585,8 +2584,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
             if (unlikely(cur == NULL)) {
                 VHOST_LOG_DATA(ERR, "(%s) failed to allocate memory for mbuf.\n",
                         dev->ifname);
-                error = -1;
-                goto out;
+                goto error;
             }
 
             prev->next = cur;
@@ -2606,9 +2604,9 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
     if (hdr)
         vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags);
 
-out:
-
-    return error;
+    return 0;
+error:
+    return -1;
 }
 
 static void
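
As additional context for reviewers, below is a minimal, self-contained
sketch of the batched small-copy scheme that sync_fill_seg() feeds, as I
read it from the code above: copies at or below a size threshold are
queued into an element array and performed later in one pass, while
larger copies (or a full queue) fall back to an immediate memcpy(). All
names here (struct copy_elem, queue_or_copy, flush_batch, BATCH_MAX_LEN,
BATCH_SIZE) are hypothetical and are not the vhost API:

    /* Illustrative sketch only, not vhost code. */
    #include <stdint.h>
    #include <string.h>

    #define BATCH_MAX_LEN 256   /* queue copies at or below this size   */
    #define BATCH_SIZE     64   /* capacity of the pending-copy array   */

    struct copy_elem {
            void *dst;
            const void *src;
            uint32_t len;
    };

    static struct copy_elem batch[BATCH_SIZE];
    static uint32_t nb_elems;

    static void
    queue_or_copy(void *dst, const void *src, uint32_t len)
    {
            /* Large copies, or a full queue, are done immediately;
             * small ones are deferred and coalesced into one pass. */
            if (len > BATCH_MAX_LEN || nb_elems >= BATCH_SIZE) {
                    memcpy(dst, src, len);
                    return;
            }
            batch[nb_elems].dst = dst;
            batch[nb_elems].src = src;
            batch[nb_elems].len = len;
            nb_elems++;
    }

    static void
    flush_batch(void)
    {
            for (uint32_t i = 0; i < nb_elems; i++)
                    memcpy(batch[i].dst, batch[i].src, batch[i].len);
            nb_elems = 0;
    }

    int
    main(void)
    {
            static uint8_t src[512], dst[512];

            queue_or_copy(dst, src, 64);    /* deferred into the batch      */
            queue_or_copy(dst, src, 512);   /* above threshold: copied now  */
            flush_batch();                  /* performs the deferred copy   */
            return 0;
    }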