From patchwork Mon May 16 02:43:21 2022
X-Patchwork-Submitter: "Ding, Xuan"
X-Patchwork-Id: 111160
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: xuan.ding@intel.com
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, cheng1.jiang@intel.com, sunil.pai.g@intel.com, liangma@liangbit.com, Xuan Ding
Subject: [PATCH v7 1/5] vhost: prepare sync for descriptor to mbuf refactoring
Date: Mon, 16 May 2022 02:43:21 +0000
Message-Id: <20220516024325.96067-2-xuan.ding@intel.com>
In-Reply-To: <20220516024325.96067-1-xuan.ding@intel.com>
References: <20220407152546.38167-1-xuan.ding@intel.com> <20220516024325.96067-1-xuan.ding@intel.com>

From: Xuan Ding

This patch extracts the descriptor-to-buffer filling from copy_desc_to_mbuf() into a dedicated function. In addition, the enqueue and dequeue paths are refactored to use the same function, sync_fill_seg(), for preparing batch elements, which simplifies the code without degrading performance.
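For orientation before the diff, here is a minimal, self-contained sketch of the direction-flag pattern that sync_fill_seg() introduces (illustrative plain C only: memcpy stands in for rte_memcpy, and the real helper in the diff below additionally handles batched copies, dirty-page logging and packet tracing):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Copy one segment between a descriptor buffer and an mbuf data area.
 * to_desc selects the direction, so the enqueue path (mbuf -> descriptor)
 * and the dequeue path (descriptor -> mbuf) can share one helper. */
static inline void
fill_seg(void *desc_buf, void *mbuf_data, uint32_t cpy_len, bool to_desc)
{
	if (to_desc)
		memcpy(desc_buf, mbuf_data, cpy_len);
	else
		memcpy(mbuf_data, desc_buf, cpy_len);
}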
Signed-off-by: Xuan Ding Tested-by: Yvonne Yang Reviewed-by: Maxime Coquelin --- lib/vhost/virtio_net.c | 78 ++++++++++++++++++++---------------------- 1 file changed, 38 insertions(+), 40 deletions(-) diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 5f432b0d77..d4c94d2a9b 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -1030,23 +1030,36 @@ async_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, } static __rte_always_inline void -sync_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, +sync_fill_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *m, uint32_t mbuf_offset, - uint64_t buf_addr, uint64_t buf_iova, uint32_t cpy_len) + uint64_t buf_addr, uint64_t buf_iova, uint32_t cpy_len, bool to_desc) { struct batch_copy_elem *batch_copy = vq->batch_copy_elems; if (likely(cpy_len > MAX_BATCH_LEN || vq->batch_copy_nb_elems >= vq->size)) { - rte_memcpy((void *)((uintptr_t)(buf_addr)), + if (to_desc) { + rte_memcpy((void *)((uintptr_t)(buf_addr)), rte_pktmbuf_mtod_offset(m, void *, mbuf_offset), cpy_len); + } else { + rte_memcpy(rte_pktmbuf_mtod_offset(m, void *, mbuf_offset), + (void *)((uintptr_t)(buf_addr)), + cpy_len); + } vhost_log_cache_write_iova(dev, vq, buf_iova, cpy_len); PRINT_PACKET(dev, (uintptr_t)(buf_addr), cpy_len, 0); } else { - batch_copy[vq->batch_copy_nb_elems].dst = - (void *)((uintptr_t)(buf_addr)); - batch_copy[vq->batch_copy_nb_elems].src = - rte_pktmbuf_mtod_offset(m, void *, mbuf_offset); + if (to_desc) { + batch_copy[vq->batch_copy_nb_elems].dst = + (void *)((uintptr_t)(buf_addr)); + batch_copy[vq->batch_copy_nb_elems].src = + rte_pktmbuf_mtod_offset(m, void *, mbuf_offset); + } else { + batch_copy[vq->batch_copy_nb_elems].dst = + rte_pktmbuf_mtod_offset(m, void *, mbuf_offset); + batch_copy[vq->batch_copy_nb_elems].src = + (void *)((uintptr_t)(buf_addr)); + } batch_copy[vq->batch_copy_nb_elems].log_addr = buf_iova; batch_copy[vq->batch_copy_nb_elems].len = cpy_len; vq->batch_copy_nb_elems++; @@ -1158,9 +1171,9 @@ mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, buf_iova + buf_offset, cpy_len) < 0) goto error; } else { - sync_mbuf_to_desc_seg(dev, vq, m, mbuf_offset, - buf_addr + buf_offset, - buf_iova + buf_offset, cpy_len); + sync_fill_seg(dev, vq, m, mbuf_offset, + buf_addr + buf_offset, + buf_iova + buf_offset, cpy_len, true); } mbuf_avail -= cpy_len; @@ -2473,8 +2486,8 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *m, struct rte_mempool *mbuf_pool, bool legacy_ol_flags) { - uint32_t buf_avail, buf_offset; - uint64_t buf_addr, buf_len; + uint32_t buf_avail, buf_offset, buf_len; + uint64_t buf_addr, buf_iova; uint32_t mbuf_avail, mbuf_offset; uint32_t cpy_len; struct rte_mbuf *cur = m, *prev = m; @@ -2482,16 +2495,13 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, struct virtio_net_hdr *hdr = NULL; /* A counter to avoid desc dead loop chain */ uint16_t vec_idx = 0; - struct batch_copy_elem *batch_copy = vq->batch_copy_elems; - int error = 0; buf_addr = buf_vec[vec_idx].buf_addr; + buf_iova = buf_vec[vec_idx].buf_iova; buf_len = buf_vec[vec_idx].buf_len; - if (unlikely(buf_len < dev->vhost_hlen && nr_vec <= 1)) { - error = -1; - goto out; - } + if (unlikely(buf_len < dev->vhost_hlen && nr_vec <= 1)) + return -1; if (virtio_net_with_host_offload(dev)) { if (unlikely(buf_len < sizeof(struct virtio_net_hdr))) { @@ -2515,11 +2525,12 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct 
vhost_virtqueue *vq, buf_offset = dev->vhost_hlen - buf_len; vec_idx++; buf_addr = buf_vec[vec_idx].buf_addr; + buf_iova = buf_vec[vec_idx].buf_iova; buf_len = buf_vec[vec_idx].buf_len; buf_avail = buf_len - buf_offset; } else if (buf_len == dev->vhost_hlen) { if (unlikely(++vec_idx >= nr_vec)) - goto out; + goto error; buf_addr = buf_vec[vec_idx].buf_addr; buf_len = buf_vec[vec_idx].buf_len; @@ -2539,22 +2550,9 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, while (1) { cpy_len = RTE_MIN(buf_avail, mbuf_avail); - if (likely(cpy_len > MAX_BATCH_LEN || - vq->batch_copy_nb_elems >= vq->size || - (hdr && cur == m))) { - rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *, - mbuf_offset), - (void *)((uintptr_t)(buf_addr + - buf_offset)), cpy_len); - } else { - batch_copy[vq->batch_copy_nb_elems].dst = - rte_pktmbuf_mtod_offset(cur, void *, - mbuf_offset); - batch_copy[vq->batch_copy_nb_elems].src = - (void *)((uintptr_t)(buf_addr + buf_offset)); - batch_copy[vq->batch_copy_nb_elems].len = cpy_len; - vq->batch_copy_nb_elems++; - } + sync_fill_seg(dev, vq, cur, mbuf_offset, + buf_addr + buf_offset, + buf_iova + buf_offset, cpy_len, false); mbuf_avail -= cpy_len; mbuf_offset += cpy_len; @@ -2567,6 +2565,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, break; buf_addr = buf_vec[vec_idx].buf_addr; + buf_iova = buf_vec[vec_idx].buf_iova; buf_len = buf_vec[vec_idx].buf_len; buf_offset = 0; @@ -2585,8 +2584,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, if (unlikely(cur == NULL)) { VHOST_LOG_DATA(ERR, "(%s) failed to allocate memory for mbuf.\n", dev->ifname); - error = -1; - goto out; + goto error; } prev->next = cur; @@ -2606,9 +2604,9 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, if (hdr) vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags); -out: - - return error; + return 0; +error: + return -1; } static void From patchwork Mon May 16 02:43:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ding, Xuan" X-Patchwork-Id: 111161 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 53BBBA00BE; Mon, 16 May 2022 04:48:37 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C584342831; Mon, 16 May 2022 04:48:26 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id 486BB410F6 for ; Mon, 16 May 2022 04:48:24 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1652669304; x=1684205304; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=w56xqCmLDCgaivu1u3wG2gymPSOMMnT97D/z/E0XGJM=; b=nZdv4bio7F9XpFiXh0bRalMlCa5NUxvDb91KFifRCZzRN6wLc7laFVxV iKI3IPUzwIZmRVLIMFs+Ud89oea2hMDBO5uHWgrf9W1lyu653HCS2Se2D 7Ve2GQgwFP+I3e+E3g6E4JVKQrii74fMNJUIpQ3ctVt6BnmyEywOZfrjP msvFX54JlVaWEZJteg9mE8nEDwnbtlhrYPXY11bf1EOyZhPRiUGq1g3Yo PZdEmae4hLE1lCjrDY+qZquvZNXVUDgvNxu32xBTOPbI6CUdkJPqfFDzC CiqV60U+4gi/046Gkr7O5HfzoUO+icmsUESvL5LZzbzHJsjTi2XdLRnK4 Q==; X-IronPort-AV: E=McAfee;i="6400,9594,10348"; a="331338376" X-IronPort-AV: E=Sophos;i="5.91,229,1647327600"; d="scan'208";a="331338376" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga104.jf.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 May 2022 19:48:23 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.91,229,1647327600"; d="scan'208";a="544144681" Received: from npg-dpdk-xuan-cbdma.sh.intel.com ([10.67.110.228]) by orsmga006.jf.intel.com with ESMTP; 15 May 2022 19:48:21 -0700 From: xuan.ding@intel.com To: maxime.coquelin@redhat.com, chenbo.xia@intel.com Cc: dev@dpdk.org, jiayu.hu@intel.com, cheng1.jiang@intel.com, sunil.pai.g@intel.com, liangma@liangbit.com, Xuan Ding Subject: [PATCH v7 2/5] vhost: prepare async for descriptor to mbuf refactoring Date: Mon, 16 May 2022 02:43:22 +0000 Message-Id: <20220516024325.96067-3-xuan.ding@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220516024325.96067-1-xuan.ding@intel.com> References: <20220407152546.38167-1-xuan.ding@intel.com> <20220516024325.96067-1-xuan.ding@intel.com> X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Xuan Ding This patch refactors vhost async enqueue path and dequeue path to use the same function async_fill_seg() for preparing batch elements, which simplifies the code without performance degradation. Signed-off-by: Xuan Ding Tested-by: Yvonne Yang Reviewed-by: Maxime Coquelin --- lib/vhost/virtio_net.c | 22 ++++++++++++++-------- 1 file changed, 14 insertions(+), 8 deletions(-) diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index d4c94d2a9b..a9e2dcd9ce 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -997,13 +997,14 @@ async_iter_reset(struct vhost_async *async) } static __rte_always_inline int -async_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, +async_fill_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *m, uint32_t mbuf_offset, - uint64_t buf_iova, uint32_t cpy_len) + uint64_t buf_iova, uint32_t cpy_len, bool to_desc) { struct vhost_async *async = vq->async; uint64_t mapped_len; uint32_t buf_offset = 0; + void *src, *dst; void *host_iova; while (cpy_len) { @@ -1015,10 +1016,15 @@ async_mbuf_to_desc_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, return -1; } - if (unlikely(async_iter_add_iovec(dev, async, - (void *)(uintptr_t)rte_pktmbuf_iova_offset(m, - mbuf_offset), - host_iova, (size_t)mapped_len))) + if (to_desc) { + src = (void *)(uintptr_t)rte_pktmbuf_iova_offset(m, mbuf_offset); + dst = host_iova; + } else { + src = host_iova; + dst = (void *)(uintptr_t)rte_pktmbuf_iova_offset(m, mbuf_offset); + } + + if (unlikely(async_iter_add_iovec(dev, async, src, dst, (size_t)mapped_len))) return -1; cpy_len -= (uint32_t)mapped_len; @@ -1167,8 +1173,8 @@ mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, cpy_len = RTE_MIN(buf_avail, mbuf_avail); if (is_async) { - if (async_mbuf_to_desc_seg(dev, vq, m, mbuf_offset, - buf_iova + buf_offset, cpy_len) < 0) + if (async_fill_seg(dev, vq, m, mbuf_offset, + buf_iova + buf_offset, cpy_len, true) < 0) goto error; } else { sync_fill_seg(dev, vq, m, mbuf_offset, From patchwork Mon May 16 02:43:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ding, Xuan" X-Patchwork-Id: 111162 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 
A3E3AA00BE; Mon, 16 May 2022 04:48:43 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BECBE4280E; Mon, 16 May 2022 04:48:29 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id 4EA7142B5A for ; Mon, 16 May 2022 04:48:27 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1652669307; x=1684205307; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=Jp+zHdqvG30t8RC7x0aNp83adfDl7waXYzgZP9aiULg=; b=GTkKD966B5B+p6G6czft1iCWPXKxEjzmXLE3MamjFlnMx0/hDDXFcHSm FrN7ywWcRD0TWJdKImjmS7RQiEwQoyPNTN9V8yWnXHzfbK+R7LqWV0h9d UFpGVqtpkccU8QjKgV30FSVu6mehzp9zi9YtuD7pcml+tPIqdx+glYsVF v5bPWCmanIyf0AnnRGd+AvwekhukqqHuOcr2aRmBkkgcI3O5cul9uvFrt ZV62YNYGeDlCd5pBmiln9QidHQetKFtiKUttsDXsypH8+gSKlq6Qh80Sd qhrwfVYngp7x3tKAEzKhno8E2hQ2ATOWgZDpc37KlUAIiZy+/rsnVHDB/ g==; X-IronPort-AV: E=McAfee;i="6400,9594,10348"; a="331338381" X-IronPort-AV: E=Sophos;i="5.91,229,1647327600"; d="scan'208";a="331338381" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 May 2022 19:48:26 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.91,229,1647327600"; d="scan'208";a="544144699" Received: from npg-dpdk-xuan-cbdma.sh.intel.com ([10.67.110.228]) by orsmga006.jf.intel.com with ESMTP; 15 May 2022 19:48:24 -0700 From: xuan.ding@intel.com To: maxime.coquelin@redhat.com, chenbo.xia@intel.com Cc: dev@dpdk.org, jiayu.hu@intel.com, cheng1.jiang@intel.com, sunil.pai.g@intel.com, liangma@liangbit.com, Xuan Ding Subject: [PATCH v7 3/5] vhost: merge sync and async descriptor to mbuf filling Date: Mon, 16 May 2022 02:43:23 +0000 Message-Id: <20220516024325.96067-4-xuan.ding@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220516024325.96067-1-xuan.ding@intel.com> References: <20220407152546.38167-1-xuan.ding@intel.com> <20220516024325.96067-1-xuan.ding@intel.com> X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Xuan Ding This patch refactors copy_desc_to_mbuf() used by the sync path to support both sync and async descriptor to mbuf filling. 
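The practical difference between the two modes is when the virtio-net header can be processed: on the sync path the payload has already been copied when desc_to_mbuf() returns, whereas on the async path the copy is still in flight on the DMA engine. The patch therefore stashes the header in async_inflight_info and applies the offloads only when completions are polled (see patch 4/5). In outline, the logic added at the end of desc_to_mbuf() looks as follows (simplified excerpt of the diff below, comments added for illustration):

	if (is_async) {
		async_iter_finalize(async);
		if (hdr)
			/* payload copies still in flight: save the header and
			 * apply offloads once the DMA completion is polled */
			pkts_info[slot_idx].nethdr = *hdr;
	} else {
		if (hdr)
			vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags);
	}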
Signed-off-by: Xuan Ding Tested-by: Yvonne Yang Reviewed-by: Maxime Coquelin --- lib/vhost/vhost.h | 1 + lib/vhost/virtio_net.c | 48 ++++++++++++++++++++++++++++++++---------- 2 files changed, 38 insertions(+), 11 deletions(-) diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index a9edc271aa..00744b234f 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -180,6 +180,7 @@ struct async_inflight_info { struct rte_mbuf *mbuf; uint16_t descs; /* num of descs inflight */ uint16_t nr_buffers; /* num of buffers inflight for packed ring */ + struct virtio_net_hdr nethdr; }; struct vhost_async { diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index a9e2dcd9ce..5904839d5c 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -2487,10 +2487,10 @@ copy_vnet_hdr_from_desc(struct virtio_net_hdr *hdr, } static __rte_always_inline int -copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, +desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, struct buf_vector *buf_vec, uint16_t nr_vec, struct rte_mbuf *m, struct rte_mempool *mbuf_pool, - bool legacy_ol_flags) + bool legacy_ol_flags, uint16_t slot_idx, bool is_async) { uint32_t buf_avail, buf_offset, buf_len; uint64_t buf_addr, buf_iova; @@ -2501,6 +2501,8 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, struct virtio_net_hdr *hdr = NULL; /* A counter to avoid desc dead loop chain */ uint16_t vec_idx = 0; + struct vhost_async *async = vq->async; + struct async_inflight_info *pkts_info; buf_addr = buf_vec[vec_idx].buf_addr; buf_iova = buf_vec[vec_idx].buf_iova; @@ -2538,6 +2540,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, if (unlikely(++vec_idx >= nr_vec)) goto error; buf_addr = buf_vec[vec_idx].buf_addr; + buf_iova = buf_vec[vec_idx].buf_iova; buf_len = buf_vec[vec_idx].buf_len; buf_offset = 0; @@ -2553,12 +2556,25 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, mbuf_offset = 0; mbuf_avail = m->buf_len - RTE_PKTMBUF_HEADROOM; + + if (is_async) { + pkts_info = async->pkts_info; + if (async_iter_initialize(dev, async)) + return -1; + } + while (1) { cpy_len = RTE_MIN(buf_avail, mbuf_avail); - sync_fill_seg(dev, vq, cur, mbuf_offset, - buf_addr + buf_offset, - buf_iova + buf_offset, cpy_len, false); + if (is_async) { + if (async_fill_seg(dev, vq, cur, mbuf_offset, + buf_iova + buf_offset, cpy_len, false) < 0) + goto error; + } else { + sync_fill_seg(dev, vq, cur, mbuf_offset, + buf_addr + buf_offset, + buf_iova + buf_offset, cpy_len, false); + } mbuf_avail -= cpy_len; mbuf_offset += cpy_len; @@ -2607,11 +2623,20 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, prev->data_len = mbuf_offset; m->pkt_len += mbuf_offset; - if (hdr) - vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags); + if (is_async) { + async_iter_finalize(async); + if (hdr) + pkts_info[slot_idx].nethdr = *hdr; + } else { + if (hdr) + vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags); + } return 0; error: + if (is_async) + async_iter_cancel(async); + return -1; } @@ -2743,8 +2768,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, break; } - err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i], - mbuf_pool, legacy_ol_flags); + err = desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i], + mbuf_pool, legacy_ol_flags, 0, false); if (unlikely(err)) { if (!allocerr_warned) { VHOST_LOG_DATA(ERR, "(%s) failed to copy desc to mbuf.\n", @@ -2755,6 +2780,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct 
vhost_virtqueue *vq, i++; break; } + } if (dropped) @@ -2936,8 +2962,8 @@ vhost_dequeue_single_packed(struct virtio_net *dev, return -1; } - err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts, - mbuf_pool, legacy_ol_flags); + err = desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts, + mbuf_pool, legacy_ol_flags, 0, false); if (unlikely(err)) { if (!allocerr_warned) { VHOST_LOG_DATA(ERR, "(%s) failed to copy desc to mbuf.\n",

From patchwork Mon May 16 02:43:24 2022
X-Patchwork-Submitter: "Ding, Xuan"
X-Patchwork-Id: 111163
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: xuan.ding@intel.com
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, cheng1.jiang@intel.com, sunil.pai.g@intel.com, liangma@liangbit.com, Xuan Ding, Yuan Wang
Subject: [PATCH v7 4/5] vhost: support async dequeue for split ring
Date: Mon, 16 May 2022 02:43:24 +0000
Message-Id: <20220516024325.96067-5-xuan.ding@intel.com>
In-Reply-To: <20220516024325.96067-1-xuan.ding@intel.com>
References: <20220407152546.38167-1-xuan.ding@intel.com> <20220516024325.96067-1-xuan.ding@intel.com>

From: Xuan Ding

This patch implements the asynchronous dequeue data path for vhost split ring. A new API, rte_vhost_async_try_dequeue_burst(), is introduced.
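Before diving into the diff, a hedged usage sketch of the new API once this series is applied (burst size and queue choice are illustrative, and pkts must hold at least BURST_SZ entries; the full integration into the vhost example is shown in patch 5/5):

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_vhost.h>
#include <rte_vhost_async.h>

#define BURST_SZ 32

/* Poll one guest TX queue (host dequeue). Copies are offloaded to DMA
 * channel (dma_id, 0); only packets whose copies have completed are
 * returned in pkts. nr_inflight reports copies still pending in the
 * async channel (-1 if the call could not be processed). */
static uint16_t
poll_guest_tx(int vid, struct rte_mempool *mbuf_pool, int16_t dma_id,
	      struct rte_mbuf **pkts)
{
	int nr_inflight;

	return rte_vhost_async_try_dequeue_burst(vid, VIRTIO_TXQ, mbuf_pool,
			pkts, BURST_SZ, &nr_inflight, dma_id, 0);
}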
Signed-off-by: Xuan Ding Signed-off-by: Yuan Wang Tested-by: Yvonne Yang Reviewed-by: Maxime Coquelin --- doc/guides/prog_guide/vhost_lib.rst | 7 + doc/guides/rel_notes/release_22_07.rst | 5 + lib/vhost/rte_vhost_async.h | 37 +++ lib/vhost/version.map | 2 +- lib/vhost/virtio_net.c | 337 +++++++++++++++++++++++++ 5 files changed, 387 insertions(+), 1 deletion(-) diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst index f287b76ebf..09c1c24b48 100644 --- a/doc/guides/prog_guide/vhost_lib.rst +++ b/doc/guides/prog_guide/vhost_lib.rst @@ -282,6 +282,13 @@ The following is an overview of some key Vhost API functions: Clear inflight packets which are submitted to DMA engine in vhost async data path. Completed packets are returned to applications through ``pkts``. +* ``rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, + struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, + int *nr_inflight, uint16_t dma_id, uint16_t vchan_id)`` + + Receives (dequeues) ``count`` packets from guest to host in async data path, + and stored them at ``pkts``. + Vhost-user Implementations -------------------------- diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst index 88b1e478d4..564d88623e 100644 --- a/doc/guides/rel_notes/release_22_07.rst +++ b/doc/guides/rel_notes/release_22_07.rst @@ -70,6 +70,11 @@ New Features Added an API which can get the number of inflight packets in vhost async data path without using lock. +* **Added vhost async dequeue API to receive pkts from guest.** + + Added vhost async dequeue API which can leverage DMA devices to + accelerate receiving pkts from guest. + Removed Items ------------- diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h index 70234debf9..2789492e38 100644 --- a/lib/vhost/rte_vhost_async.h +++ b/lib/vhost/rte_vhost_async.h @@ -204,6 +204,43 @@ uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, __rte_experimental int rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id); +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * This function tries to receive packets from the guest with offloading + * copies to the async channel. The packets that are transfer completed + * are returned in "pkts". The other packets that their copies are submitted to + * the async channel but not completed are called "in-flight packets". + * This function will not return in-flight packets until their copies are + * completed by the async channel. 
+ * + * @param vid + * ID of vhost device to dequeue data + * @param queue_id + * ID of virtqueue to dequeue data + * @param mbuf_pool + * Mbuf_pool where host mbuf is allocated + * @param pkts + * Blank array to keep successfully dequeued packets + * @param count + * Size of the packet array + * @param nr_inflight + * >= 0: The amount of in-flight packets + * -1: Meaningless, indicates failed lock acquisition or invalid queue_id/dma_id + * @param dma_id + * The identifier of DMA device + * @param vchan_id + * The identifier of virtual DMA channel + * @return + * Number of successfully dequeued packets + */ +__rte_experimental +uint16_t +rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, + struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, + int *nr_inflight, uint16_t dma_id, uint16_t vchan_id); + #ifdef __cplusplus } #endif diff --git a/lib/vhost/version.map b/lib/vhost/version.map index 5841315386..8c7211bf0d 100644 --- a/lib/vhost/version.map +++ b/lib/vhost/version.map @@ -90,7 +90,7 @@ EXPERIMENTAL { # added in 22.07 rte_vhost_async_get_inflight_thread_unsafe; - + rte_vhost_async_try_dequeue_burst; }; INTERNAL { diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 5904839d5c..8290514e65 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -3171,3 +3171,340 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, return count; } + +static __rte_always_inline uint16_t +async_poll_dequeue_completed_split(struct virtio_net *dev, struct vhost_virtqueue *vq, + struct rte_mbuf **pkts, uint16_t count, uint16_t dma_id, + uint16_t vchan_id, bool legacy_ol_flags) +{ + uint16_t start_idx, from, i; + uint16_t nr_cpl_pkts = 0; + struct async_inflight_info *pkts_info = vq->async->pkts_info; + + vhost_async_dma_check_completed(dev, dma_id, vchan_id, VHOST_DMA_MAX_COPY_COMPLETE); + + start_idx = async_get_first_inflight_pkt_idx(vq); + + from = start_idx; + while (vq->async->pkts_cmpl_flag[from] && count--) { + vq->async->pkts_cmpl_flag[from] = false; + from = (from + 1) & (vq->size - 1); + nr_cpl_pkts++; + } + + if (nr_cpl_pkts == 0) + return 0; + + for (i = 0; i < nr_cpl_pkts; i++) { + from = (start_idx + i) & (vq->size - 1); + pkts[i] = pkts_info[from].mbuf; + + if (virtio_net_with_host_offload(dev)) + vhost_dequeue_offload(dev, &pkts_info[from].nethdr, pkts[i], + legacy_ol_flags); + } + + /* write back completed descs to used ring and update used idx */ + write_back_completed_descs_split(vq, nr_cpl_pkts); + __atomic_add_fetch(&vq->used->idx, nr_cpl_pkts, __ATOMIC_RELEASE); + vhost_vring_call_split(dev, vq); + + vq->async->pkts_inflight_n -= nr_cpl_pkts; + + return nr_cpl_pkts; +} + +static __rte_always_inline uint16_t +virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, + struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, + uint16_t dma_id, uint16_t vchan_id, bool legacy_ol_flags) +{ + static bool allocerr_warned; + bool dropped = false; + uint16_t free_entries; + uint16_t pkt_idx, slot_idx = 0; + uint16_t nr_done_pkts = 0; + uint16_t pkt_err = 0; + uint16_t n_xfer; + struct vhost_async *async = vq->async; + struct async_inflight_info *pkts_info = async->pkts_info; + struct rte_mbuf *pkts_prealloc[MAX_PKT_BURST]; + uint16_t pkts_size = count; + + /** + * The ordering between avail index and + * desc reads needs to be enforced. 
+ */ + free_entries = __atomic_load_n(&vq->avail->idx, __ATOMIC_ACQUIRE) - + vq->last_avail_idx; + if (free_entries == 0) + goto out; + + rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]); + + async_iter_reset(async); + + count = RTE_MIN(count, MAX_PKT_BURST); + count = RTE_MIN(count, free_entries); + VHOST_LOG_DATA(DEBUG, "(%s) about to dequeue %u buffers\n", + dev->ifname, count); + + if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count)) + goto out; + + for (pkt_idx = 0; pkt_idx < count; pkt_idx++) { + uint16_t head_idx = 0; + uint16_t nr_vec = 0; + uint16_t to; + uint32_t buf_len; + int err; + struct buf_vector buf_vec[BUF_VECTOR_MAX]; + struct rte_mbuf *pkt = pkts_prealloc[pkt_idx]; + + if (unlikely(fill_vec_buf_split(dev, vq, vq->last_avail_idx, + &nr_vec, buf_vec, + &head_idx, &buf_len, + VHOST_ACCESS_RO) < 0)) { + dropped = true; + break; + } + + err = virtio_dev_pktmbuf_prep(dev, pkt, buf_len); + if (unlikely(err)) { + /** + * mbuf allocation fails for jumbo packets when external + * buffer allocation is not allowed and linear buffer + * is required. Drop this packet. + */ + if (!allocerr_warned) { + VHOST_LOG_DATA(ERR, + "(%s) %s: Failed mbuf alloc of size %d from %s\n", + dev->ifname, __func__, buf_len, mbuf_pool->name); + allocerr_warned = true; + } + dropped = true; + break; + } + + slot_idx = (async->pkts_idx + pkt_idx) & (vq->size - 1); + err = desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkt, mbuf_pool, + legacy_ol_flags, slot_idx, true); + if (unlikely(err)) { + if (!allocerr_warned) { + VHOST_LOG_DATA(ERR, + "(%s) %s: Failed to offload copies to async channel.\n", + dev->ifname, __func__); + allocerr_warned = true; + } + dropped = true; + break; + } + + pkts_info[slot_idx].mbuf = pkt; + + /* store used descs */ + to = async->desc_idx_split & (vq->size - 1); + async->descs_split[to].id = head_idx; + async->descs_split[to].len = 0; + async->desc_idx_split++; + + vq->last_avail_idx++; + } + + if (unlikely(dropped)) + rte_pktmbuf_free_bulk(&pkts_prealloc[pkt_idx], count - pkt_idx); + + n_xfer = vhost_async_dma_transfer(dev, vq, dma_id, vchan_id, async->pkts_idx, + async->iov_iter, pkt_idx); + + async->pkts_inflight_n += n_xfer; + + pkt_err = pkt_idx - n_xfer; + if (unlikely(pkt_err)) { + VHOST_LOG_DATA(DEBUG, "(%s) %s: failed to transfer data.\n", + dev->ifname, __func__); + + pkt_idx = n_xfer; + /* recover available ring */ + vq->last_avail_idx -= pkt_err; + + /** + * recover async channel copy related structures and free pktmbufs + * for error pkts. + */ + async->desc_idx_split -= pkt_err; + while (pkt_err-- > 0) { + rte_pktmbuf_free(pkts_info[slot_idx & (vq->size - 1)].mbuf); + slot_idx--; + } + } + + async->pkts_idx += pkt_idx; + if (async->pkts_idx >= vq->size) + async->pkts_idx -= vq->size; + +out: + /* DMA device may serve other queues, unconditionally check completed. 
*/ + nr_done_pkts = async_poll_dequeue_completed_split(dev, vq, pkts, pkts_size, + dma_id, vchan_id, legacy_ol_flags); + + return nr_done_pkts; +} + +__rte_noinline +static uint16_t +virtio_dev_tx_async_split_legacy(struct virtio_net *dev, + struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, + struct rte_mbuf **pkts, uint16_t count, + uint16_t dma_id, uint16_t vchan_id) +{ + return virtio_dev_tx_async_split(dev, vq, mbuf_pool, + pkts, count, dma_id, vchan_id, true); +} + +__rte_noinline +static uint16_t +virtio_dev_tx_async_split_compliant(struct virtio_net *dev, + struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, + struct rte_mbuf **pkts, uint16_t count, + uint16_t dma_id, uint16_t vchan_id) +{ + return virtio_dev_tx_async_split(dev, vq, mbuf_pool, + pkts, count, dma_id, vchan_id, false); +} + +uint16_t +rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, + struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, + int *nr_inflight, uint16_t dma_id, uint16_t vchan_id) +{ + struct virtio_net *dev; + struct rte_mbuf *rarp_mbuf = NULL; + struct vhost_virtqueue *vq; + int16_t success = 1; + + dev = get_device(vid); + if (!dev || !nr_inflight) + return 0; + + *nr_inflight = -1; + + if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { + VHOST_LOG_DATA(ERR, "(%s) %s: built-in vhost net backend is disabled.\n", + dev->ifname, __func__); + return 0; + } + + if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) { + VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n", + dev->ifname, __func__, queue_id); + return 0; + } + + if (unlikely(dma_id >= RTE_DMADEV_DEFAULT_MAX)) { + VHOST_LOG_DATA(ERR, "(%s) %s: invalid dma id %u.\n", + dev->ifname, __func__, dma_id); + return 0; + } + + if (unlikely(!dma_copy_track[dma_id].vchans || + !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { + VHOST_LOG_DATA(ERR, "(%s) %s: invalid channel %d:%u.\n", dev->ifname, __func__, + dma_id, vchan_id); + return 0; + } + + vq = dev->virtqueue[queue_id]; + + if (unlikely(rte_spinlock_trylock(&vq->access_lock) == 0)) + return 0; + + if (unlikely(vq->enabled == 0)) { + count = 0; + goto out_access_unlock; + } + + if (unlikely(!vq->async)) { + VHOST_LOG_DATA(ERR, "(%s) %s: async not registered for queue id %d.\n", + dev->ifname, __func__, queue_id); + count = 0; + goto out_access_unlock; + } + + if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) + vhost_user_iotlb_rd_lock(vq); + + if (unlikely(vq->access_ok == 0)) + if (unlikely(vring_translate(dev, vq) < 0)) { + count = 0; + goto out; + } + + /* + * Construct a RARP broadcast packet, and inject it to the "pkts" + * array, to looks like that guest actually send such packet. + * + * Check user_send_rarp() for more information. + * + * broadcast_rarp shares a cacheline in the virtio_net structure + * with some fields that are accessed during enqueue and + * __atomic_compare_exchange_n causes a write if performed compare + * and exchange. This could result in false sharing between enqueue + * and dequeue. + * + * Prevent unnecessary false sharing by reading broadcast_rarp first + * and only performing compare and exchange if the read indicates it + * is likely to be set. 
+ */ + if (unlikely(__atomic_load_n(&dev->broadcast_rarp, __ATOMIC_ACQUIRE) && + __atomic_compare_exchange_n(&dev->broadcast_rarp, + &success, 0, 0, __ATOMIC_RELEASE, __ATOMIC_RELAXED))) { + + rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac); + if (rarp_mbuf == NULL) { + VHOST_LOG_DATA(ERR, "Failed to make RARP packet.\n"); + count = 0; + goto out; + } + /* + * Inject it to the head of "pkts" array, so that switch's mac + * learning table will get updated first. + */ + pkts[0] = rarp_mbuf; + pkts++; + count -= 1; + } + + if (unlikely(vq_is_packed(dev))) { + static bool not_support_pack_log; + if (!not_support_pack_log) { + VHOST_LOG_DATA(ERR, + "(%s) %s: async dequeue does not support packed ring.\n", + dev->ifname, __func__); + not_support_pack_log = true; + } + count = 0; + goto out; + } + + if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS) + count = virtio_dev_tx_async_split_legacy(dev, vq, mbuf_pool, pkts, + count, dma_id, vchan_id); + else + count = virtio_dev_tx_async_split_compliant(dev, vq, mbuf_pool, pkts, + count, dma_id, vchan_id); + + *nr_inflight = vq->async->pkts_inflight_n; + +out: + if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) + vhost_user_iotlb_rd_unlock(vq); + +out_access_unlock: + rte_spinlock_unlock(&vq->access_lock); + + if (unlikely(rarp_mbuf != NULL)) + count += 1; + + return count; +} From patchwork Mon May 16 02:43:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ding, Xuan" X-Patchwork-Id: 111164 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C4AA5A00BE; Mon, 16 May 2022 04:48:54 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B6C4542830; Mon, 16 May 2022 04:48:37 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id BBD2442B6F for ; Mon, 16 May 2022 04:48:35 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1652669315; x=1684205315; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=7eJjQh1nZdMxaTnpfIF8e42qrjnfzLxtcQquFmBnBdE=; b=Kg+0uKIzXVJ1ZpCAC72hJ86nyTfY5Io/iJ7h9DUAgN7vnj8uwg1d3SNV 33NoRkKqk+t7XCD7BtDhdy0Kp+4LsL6ZYBKbmvzO3VI0aUpxliVd1PFHd 6ec1uoL4TgnfVwmkaF8KZ1QXFD/hFn7HAnEqmQoqdSEcZ0mTW4m+75OAA +h8TKwpdFQqlBJK7g7Og4p3An5nE6wwZPG+5ng5vX3ixD7l1v8q+pYwA/ UGnge4JpPC1aYm1z/vlzAbb5YbtdDFOfkdHCowCQwuUL/SHKvXsNGK9tm pxkvzoTP4NIamjgwDTMJko6HLkNmKZoRyT2pUfcwa2v7jjAEGJltqP6X6 Q==; X-IronPort-AV: E=McAfee;i="6400,9594,10348"; a="331338390" X-IronPort-AV: E=Sophos;i="5.91,229,1647327600"; d="scan'208";a="331338390" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 May 2022 19:48:35 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.91,229,1647327600"; d="scan'208";a="544144759" Received: from npg-dpdk-xuan-cbdma.sh.intel.com ([10.67.110.228]) by orsmga006.jf.intel.com with ESMTP; 15 May 2022 19:48:32 -0700 From: xuan.ding@intel.com To: maxime.coquelin@redhat.com, chenbo.xia@intel.com Cc: dev@dpdk.org, jiayu.hu@intel.com, cheng1.jiang@intel.com, sunil.pai.g@intel.com, liangma@liangbit.com, Xuan Ding , Wenwu Ma , Yuan Wang Subject: [PATCH v7 5/5] examples/vhost: support async dequeue data path Date: Mon, 16 
May 2022 02:43:25 +0000 Message-Id: <20220516024325.96067-6-xuan.ding@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220516024325.96067-1-xuan.ding@intel.com> References: <20220407152546.38167-1-xuan.ding@intel.com> <20220516024325.96067-1-xuan.ding@intel.com> X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Xuan Ding This patch adds the use case for async dequeue API. Vswitch can leverage DMA device to accelerate vhost async dequeue path. Signed-off-by: Wenwu Ma Signed-off-by: Yuan Wang Signed-off-by: Xuan Ding Tested-by: Yvonne Yang Reviewed-by: Maxime Coquelin --- doc/guides/sample_app_ug/vhost.rst | 9 +- examples/vhost/main.c | 286 ++++++++++++++++++++--------- examples/vhost/main.h | 32 +++- examples/vhost/virtio_net.c | 16 +- 4 files changed, 245 insertions(+), 98 deletions(-) diff --git a/doc/guides/sample_app_ug/vhost.rst b/doc/guides/sample_app_ug/vhost.rst index a6ce4bc8ac..09db965e70 100644 --- a/doc/guides/sample_app_ug/vhost.rst +++ b/doc/guides/sample_app_ug/vhost.rst @@ -169,9 +169,12 @@ demonstrates how to use the async vhost APIs. It's used in combination with dmas **--dmas** This parameter is used to specify the assigned DMA device of a vhost device. Async vhost-user net driver will be used if --dmas is set. For example ---dmas [txd0@00:04.0,txd1@00:04.1] means use DMA channel 00:04.0 for vhost -device 0 enqueue operation and use DMA channel 00:04.1 for vhost device 1 -enqueue operation. +--dmas [txd0@00:04.0,txd1@00:04.1,rxd0@00:04.2,rxd1@00:04.3] means use +DMA channel 00:04.0/00:04.2 for vhost device 0 enqueue/dequeue operation +and use DMA channel 00:04.1/00:04.3 for vhost device 1 enqueue/dequeue +operation. The index of the device corresponds to the socket file in order, +that means vhost device 0 is created through the first socket file, vhost +device 1 is created through the second socket file, and so on. Common Issues ------------- diff --git a/examples/vhost/main.c b/examples/vhost/main.c index c4d46de1c5..50e55de22c 100644 --- a/examples/vhost/main.c +++ b/examples/vhost/main.c @@ -63,6 +63,9 @@ #define DMA_RING_SIZE 4096 +#define ASYNC_ENQUEUE_VHOST 1 +#define ASYNC_DEQUEUE_VHOST 2 + /* number of mbufs in all pools - if specified on command-line. */ static int total_num_mbufs = NUM_MBUFS_DEFAULT; @@ -116,6 +119,8 @@ static uint32_t burst_rx_retry_num = BURST_RX_RETRIES; static char *socket_files; static int nb_sockets; +static struct vhost_queue_ops vdev_queue_ops[RTE_MAX_VHOST_DEVICE]; + /* empty VMDq configuration structure. 
Filled in programmatically */ static struct rte_eth_conf vmdq_conf_default = { .rxmode = { @@ -205,6 +210,20 @@ struct vhost_bufftable *vhost_txbuff[RTE_MAX_LCORE * RTE_MAX_VHOST_DEVICE]; #define MBUF_TABLE_DRAIN_TSC ((rte_get_tsc_hz() + US_PER_S - 1) \ / US_PER_S * BURST_TX_DRAIN_US) +static int vid2socketid[RTE_MAX_VHOST_DEVICE]; + +static inline uint32_t +get_async_flag_by_socketid(int socketid) +{ + return dma_bind[socketid].async_flag; +} + +static inline void +init_vid2socketid_array(int vid, int socketid) +{ + vid2socketid[vid] = socketid; +} + static inline bool is_dma_configured(int16_t dev_id) { @@ -224,7 +243,7 @@ open_dma(const char *value) char *addrs = input; char *ptrs[2]; char *start, *end, *substr; - int64_t vid; + int64_t socketid, vring_id; struct rte_dma_info info; struct rte_dma_conf dev_config = { .nb_vchans = 1 }; @@ -262,7 +281,9 @@ open_dma(const char *value) while (i < args_nr) { char *arg_temp = dma_arg[i]; + char *txd, *rxd; uint8_t sub_nr; + int async_flag; sub_nr = rte_strsplit(arg_temp, strlen(arg_temp), ptrs, 2, '@'); if (sub_nr != 2) { @@ -270,14 +291,23 @@ open_dma(const char *value) goto out; } - start = strstr(ptrs[0], "txd"); - if (start == NULL) { + txd = strstr(ptrs[0], "txd"); + rxd = strstr(ptrs[0], "rxd"); + if (txd) { + start = txd; + vring_id = VIRTIO_RXQ; + async_flag = ASYNC_ENQUEUE_VHOST; + } else if (rxd) { + start = rxd; + vring_id = VIRTIO_TXQ; + async_flag = ASYNC_DEQUEUE_VHOST; + } else { ret = -1; goto out; } start += 3; - vid = strtol(start, &end, 0); + socketid = strtol(start, &end, 0); if (end == start) { ret = -1; goto out; @@ -338,7 +368,8 @@ open_dma(const char *value) dmas_id[dma_count++] = dev_id; done: - (dma_info + vid)->dmas[VIRTIO_RXQ].dev_id = dev_id; + (dma_info + socketid)->dmas[vring_id].dev_id = dev_id; + (dma_info + socketid)->async_flag |= async_flag; i++; } out: @@ -990,7 +1021,7 @@ complete_async_pkts(struct vhost_dev *vdev) { struct rte_mbuf *p_cpl[MAX_PKT_BURST]; uint16_t complete_count; - int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id; + int16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[VIRTIO_RXQ].dev_id; complete_count = rte_vhost_poll_enqueue_completed(vdev->vid, VIRTIO_RXQ, p_cpl, MAX_PKT_BURST, dma_id, 0); @@ -1029,22 +1060,7 @@ drain_vhost(struct vhost_dev *vdev) uint16_t nr_xmit = vhost_txbuff[buff_idx]->len; struct rte_mbuf **m = vhost_txbuff[buff_idx]->m_table; - if (builtin_net_driver) { - ret = vs_enqueue_pkts(vdev, VIRTIO_RXQ, m, nr_xmit); - } else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) { - uint16_t enqueue_fail = 0; - int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id; - - complete_async_pkts(vdev); - ret = rte_vhost_submit_enqueue_burst(vdev->vid, VIRTIO_RXQ, m, nr_xmit, dma_id, 0); - - enqueue_fail = nr_xmit - ret; - if (enqueue_fail) - free_pkts(&m[ret], nr_xmit - ret); - } else { - ret = rte_vhost_enqueue_burst(vdev->vid, VIRTIO_RXQ, - m, nr_xmit); - } + ret = vdev_queue_ops[vdev->vid].enqueue_pkt_burst(vdev, VIRTIO_RXQ, m, nr_xmit); if (enable_stats) { __atomic_add_fetch(&vdev->stats.rx_total_atomic, nr_xmit, @@ -1053,7 +1069,7 @@ drain_vhost(struct vhost_dev *vdev) __ATOMIC_SEQ_CST); } - if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) + if (!dma_bind[vid2socketid[vdev->vid]].dmas[VIRTIO_RXQ].async_enabled) free_pkts(m, nr_xmit); } @@ -1325,6 +1341,32 @@ drain_mbuf_table(struct mbuf_table *tx_q) } } +uint16_t +async_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id, + struct rte_mbuf **pkts, uint32_t rx_count) +{ + uint16_t 
enqueue_count; + uint16_t enqueue_fail = 0; + uint16_t dma_id = dma_bind[vid2socketid[dev->vid]].dmas[VIRTIO_RXQ].dev_id; + + complete_async_pkts(dev); + enqueue_count = rte_vhost_submit_enqueue_burst(dev->vid, queue_id, + pkts, rx_count, dma_id, 0); + + enqueue_fail = rx_count - enqueue_count; + if (enqueue_fail) + free_pkts(&pkts[enqueue_count], enqueue_fail); + + return enqueue_count; +} + +uint16_t +sync_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id, + struct rte_mbuf **pkts, uint32_t rx_count) +{ + return rte_vhost_enqueue_burst(dev->vid, queue_id, pkts, rx_count); +} + static __rte_always_inline void drain_eth_rx(struct vhost_dev *vdev) { @@ -1355,25 +1397,8 @@ drain_eth_rx(struct vhost_dev *vdev) } } - if (builtin_net_driver) { - enqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ, - pkts, rx_count); - } else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) { - uint16_t enqueue_fail = 0; - int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id; - - complete_async_pkts(vdev); - enqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid, - VIRTIO_RXQ, pkts, rx_count, dma_id, 0); - - enqueue_fail = rx_count - enqueue_count; - if (enqueue_fail) - free_pkts(&pkts[enqueue_count], enqueue_fail); - - } else { - enqueue_count = rte_vhost_enqueue_burst(vdev->vid, VIRTIO_RXQ, - pkts, rx_count); - } + enqueue_count = vdev_queue_ops[vdev->vid].enqueue_pkt_burst(vdev, + VIRTIO_RXQ, pkts, rx_count); if (enable_stats) { __atomic_add_fetch(&vdev->stats.rx_total_atomic, rx_count, @@ -1382,10 +1407,31 @@ drain_eth_rx(struct vhost_dev *vdev) __ATOMIC_SEQ_CST); } - if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) + if (!dma_bind[vid2socketid[vdev->vid]].dmas[VIRTIO_RXQ].async_enabled) free_pkts(pkts, rx_count); } +uint16_t async_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id, + struct rte_mempool *mbuf_pool, + struct rte_mbuf **pkts, uint16_t count) +{ + int nr_inflight; + uint16_t dequeue_count; + uint16_t dma_id = dma_bind[vid2socketid[dev->vid]].dmas[VIRTIO_TXQ].dev_id; + + dequeue_count = rte_vhost_async_try_dequeue_burst(dev->vid, queue_id, + mbuf_pool, pkts, count, &nr_inflight, dma_id, 0); + + return dequeue_count; +} + +uint16_t sync_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id, + struct rte_mempool *mbuf_pool, + struct rte_mbuf **pkts, uint16_t count) +{ + return rte_vhost_dequeue_burst(dev->vid, queue_id, mbuf_pool, pkts, count); +} + static __rte_always_inline void drain_virtio_tx(struct vhost_dev *vdev) { @@ -1393,13 +1439,8 @@ drain_virtio_tx(struct vhost_dev *vdev) uint16_t count; uint16_t i; - if (builtin_net_driver) { - count = vs_dequeue_pkts(vdev, VIRTIO_TXQ, mbuf_pool, - pkts, MAX_PKT_BURST); - } else { - count = rte_vhost_dequeue_burst(vdev->vid, VIRTIO_TXQ, - mbuf_pool, pkts, MAX_PKT_BURST); - } + count = vdev_queue_ops[vdev->vid].dequeue_pkt_burst(vdev, + VIRTIO_TXQ, mbuf_pool, pkts, MAX_PKT_BURST); /* setup VMDq for the first packet */ if (unlikely(vdev->ready == DEVICE_MAC_LEARNING) && count) { @@ -1478,6 +1519,26 @@ switch_worker(void *arg __rte_unused) return 0; } +static void +vhost_clear_queue_thread_unsafe(struct vhost_dev *vdev, uint16_t queue_id) +{ + uint16_t n_pkt = 0; + int pkts_inflight; + + int16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id; + pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vdev->vid, queue_id); + + struct rte_mbuf *m_cpl[pkts_inflight]; + + while (pkts_inflight) { + n_pkt = rte_vhost_clear_queue_thread_unsafe(vdev->vid, queue_id, m_cpl, + pkts_inflight, dma_id, 0); + 
free_pkts(m_cpl, n_pkt); + pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vdev->vid, + queue_id); + } +} + /* * Remove a device from the specific data core linked list and from the * main linked list. Synchronization occurs through the use of the @@ -1535,27 +1596,79 @@ destroy_device(int vid) vdev->vid); if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) { - uint16_t n_pkt = 0; - int pkts_inflight; - int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id; - pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid, VIRTIO_RXQ); - struct rte_mbuf *m_cpl[pkts_inflight]; - - while (pkts_inflight) { - n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, VIRTIO_RXQ, - m_cpl, pkts_inflight, dma_id, 0); - free_pkts(m_cpl, n_pkt); - pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid, - VIRTIO_RXQ); - } - + vhost_clear_queue_thread_unsafe(vdev, VIRTIO_RXQ); rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ); dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false; } + if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) { + vhost_clear_queue_thread_unsafe(vdev, VIRTIO_TXQ); + rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ); + dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false; + } + rte_free(vdev); } +static inline int +get_socketid_by_vid(int vid) +{ + int i; + char ifname[PATH_MAX]; + rte_vhost_get_ifname(vid, ifname, sizeof(ifname)); + + for (i = 0; i < nb_sockets; i++) { + char *file = socket_files + i * PATH_MAX; + if (strcmp(file, ifname) == 0) + return i; + } + + return -1; +} + +static int +init_vhost_queue_ops(int vid) +{ + if (builtin_net_driver) { + vdev_queue_ops[vid].enqueue_pkt_burst = builtin_enqueue_pkts; + vdev_queue_ops[vid].dequeue_pkt_burst = builtin_dequeue_pkts; + } else { + if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].async_enabled) + vdev_queue_ops[vid].enqueue_pkt_burst = async_enqueue_pkts; + else + vdev_queue_ops[vid].enqueue_pkt_burst = sync_enqueue_pkts; + + if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].async_enabled) + vdev_queue_ops[vid].dequeue_pkt_burst = async_dequeue_pkts; + else + vdev_queue_ops[vid].dequeue_pkt_burst = sync_dequeue_pkts; + } + + return 0; +} + +static inline int +vhost_async_channel_register(int vid) +{ + int rx_ret = 0, tx_ret = 0; + + if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].dev_id != INVALID_DMA_ID) { + rx_ret = rte_vhost_async_channel_register(vid, VIRTIO_RXQ); + if (rx_ret == 0) + dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].async_enabled = true; + } + + if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].dev_id != INVALID_DMA_ID) { + tx_ret = rte_vhost_async_channel_register(vid, VIRTIO_TXQ); + if (tx_ret == 0) + dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].async_enabled = true; + } + + return rx_ret | tx_ret; +} + + + /* * A new device is added to a data core. First the device is added to the main linked list * and then allocated to a specific data core. 
@@ -1567,6 +1680,8 @@ new_device(int vid) uint16_t i; uint32_t device_num_min = num_devices; struct vhost_dev *vdev; + int ret; + vdev = rte_zmalloc("vhost device", sizeof(*vdev), RTE_CACHE_LINE_SIZE); if (vdev == NULL) { RTE_LOG(INFO, VHOST_DATA, @@ -1589,6 +1704,17 @@ new_device(int vid) } } + int socketid = get_socketid_by_vid(vid); + if (socketid == -1) + return -1; + + init_vid2socketid_array(vid, socketid); + + ret = vhost_async_channel_register(vid); + + if (init_vhost_queue_ops(vid) != 0) + return -1; + if (builtin_net_driver) vs_vhost_net_setup(vdev); @@ -1620,16 +1746,7 @@ new_device(int vid) "(%d) device has been added to data core %d\n", vid, vdev->coreid); - if (dma_bind[vid].dmas[VIRTIO_RXQ].dev_id != INVALID_DMA_ID) { - int ret; - - ret = rte_vhost_async_channel_register(vid, VIRTIO_RXQ); - if (ret == 0) - dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = true; - return ret; - } - - return 0; + return ret; } static int @@ -1647,22 +1764,9 @@ vring_state_changed(int vid, uint16_t queue_id, int enable) if (queue_id != VIRTIO_RXQ) return 0; - if (dma_bind[vid].dmas[queue_id].async_enabled) { - if (!enable) { - uint16_t n_pkt = 0; - int pkts_inflight; - pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid, queue_id); - int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id; - struct rte_mbuf *m_cpl[pkts_inflight]; - - while (pkts_inflight) { - n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, queue_id, - m_cpl, pkts_inflight, dma_id, 0); - free_pkts(m_cpl, n_pkt); - pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid, - queue_id); - } - } + if (dma_bind[vid2socketid[vid]].dmas[queue_id].async_enabled) { + if (!enable) + vhost_clear_queue_thread_unsafe(vdev, queue_id); } return 0; @@ -1887,7 +1991,7 @@ main(int argc, char *argv[]) for (i = 0; i < nb_sockets; i++) { char *file = socket_files + i * PATH_MAX; - if (dma_count) + if (dma_count && get_async_flag_by_socketid(i) != 0) flags = flags | RTE_VHOST_USER_ASYNC_COPY; ret = rte_vhost_driver_register(file, flags); diff --git a/examples/vhost/main.h b/examples/vhost/main.h index e7f395c3c9..2fcb8376c5 100644 --- a/examples/vhost/main.h +++ b/examples/vhost/main.h @@ -61,6 +61,19 @@ struct vhost_dev { struct vhost_queue queues[MAX_QUEUE_PAIRS * 2]; } __rte_cache_aligned; +typedef uint16_t (*vhost_enqueue_burst_t)(struct vhost_dev *dev, + uint16_t queue_id, struct rte_mbuf **pkts, + uint32_t count); + +typedef uint16_t (*vhost_dequeue_burst_t)(struct vhost_dev *dev, + uint16_t queue_id, struct rte_mempool *mbuf_pool, + struct rte_mbuf **pkts, uint16_t count); + +struct vhost_queue_ops { + vhost_enqueue_burst_t enqueue_pkt_burst; + vhost_dequeue_burst_t dequeue_pkt_burst; +}; + TAILQ_HEAD(vhost_dev_tailq_list, vhost_dev); @@ -87,6 +100,7 @@ struct dma_info { struct dma_for_vhost { struct dma_info dmas[RTE_MAX_QUEUES_PER_PORT * 2]; + uint32_t async_flag; }; /* we implement non-extra virtio net features */ @@ -97,7 +111,19 @@ void vs_vhost_net_remove(struct vhost_dev *dev); uint16_t vs_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count); -uint16_t vs_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id, - struct rte_mempool *mbuf_pool, - struct rte_mbuf **pkts, uint16_t count); +uint16_t builtin_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id, + struct rte_mbuf **pkts, uint32_t count); +uint16_t builtin_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id, + struct rte_mempool *mbuf_pool, + struct rte_mbuf **pkts, uint16_t count); +uint16_t sync_enqueue_pkts(struct 
vhost_dev *dev, uint16_t queue_id, + struct rte_mbuf **pkts, uint32_t count); +uint16_t sync_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id, + struct rte_mempool *mbuf_pool, + struct rte_mbuf **pkts, uint16_t count); +uint16_t async_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id, + struct rte_mbuf **pkts, uint32_t count); +uint16_t async_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id, + struct rte_mempool *mbuf_pool, + struct rte_mbuf **pkts, uint16_t count); #endif /* _MAIN_H_ */ diff --git a/examples/vhost/virtio_net.c b/examples/vhost/virtio_net.c index 9064fc3a82..2432a96566 100644 --- a/examples/vhost/virtio_net.c +++ b/examples/vhost/virtio_net.c @@ -238,6 +238,13 @@ vs_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id, return count; } +uint16_t +builtin_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id, + struct rte_mbuf **pkts, uint32_t count) +{ + return vs_enqueue_pkts(dev, queue_id, pkts, count); +} + static __rte_always_inline int dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr, struct rte_mbuf *m, uint16_t desc_idx, @@ -363,7 +370,7 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr, return 0; } -uint16_t +static uint16_t vs_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count) { @@ -440,3 +447,10 @@ vs_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id, return i; } + +uint16_t +builtin_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id, + struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count) +{ + return vs_dequeue_pkts(dev, queue_id, mbuf_pool, pkts, count); +}
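Taken together, patch 5/5 replaces the example app's per-call branching (builtin / sync / async) with a per-device ops table that is filled once when the device is created. A minimal, standalone sketch of that dispatch pattern (stand-in types for illustration only; the real table, vdev_queue_ops[], and the selection logic in init_vhost_queue_ops() are in the diff above):

#include <stdbool.h>
#include <stdint.h>

struct dev;    /* stand-in for struct vhost_dev */
struct pool;   /* stand-in for struct rte_mempool */
struct pkt;    /* stand-in for struct rte_mbuf */

typedef uint16_t (*enqueue_burst_t)(struct dev *d, uint16_t qid,
		struct pkt **pkts, uint32_t count);
typedef uint16_t (*dequeue_burst_t)(struct dev *d, uint16_t qid,
		struct pool *mp, struct pkt **pkts, uint16_t count);

struct queue_ops {
	enqueue_burst_t enqueue_pkt_burst;
	dequeue_burst_t dequeue_pkt_burst;
};

/* Called once per device: install async handlers only for the
 * direction(s) a DMA channel was bound to via --dmas (txd/rxd). */
static void
select_queue_ops(struct queue_ops *ops, bool rx_async, bool tx_async,
		enqueue_burst_t sync_enq, enqueue_burst_t async_enq,
		dequeue_burst_t sync_deq, dequeue_burst_t async_deq)
{
	ops->enqueue_pkt_burst = rx_async ? async_enq : sync_enq;
	ops->dequeue_pkt_burst = tx_async ? async_deq : sync_deq;
}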