From patchwork Mon May  4 06:26:20 2015
X-Patchwork-Submitter: Ouyang Changchun
X-Patchwork-Id: 4602
From: Ouyang Changchun <changchun.ouyang@intel.com>
To: dev@dpdk.org
Date: Mon, 4 May 2015 14:26:20 +0800
Message-Id: <1430720780-27525-1-git-send-email-changchun.ouyang@intel.com>
X-Mailer: git-send-email 1.7.12.2
Subject: [dpdk-dev] [PATCH] virtio: Fix enqueue/dequeue can't handle chained
 vring descriptors.

Vring enqueue needs to handle two cases:

1. The vring descriptors are chained together: the first one holds the
   virtio header, and the rest carry the real data.
2. There is only one descriptor: the virtio header and the real data
   share that single descriptor.

Vring dequeue needs to handle the same two cases.
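
For illustration, below is a minimal, self-contained sketch of the copy
loop this patch introduces on the enqueue side. Everything in it is a
simplified stand-in invented for the example, not the DPDK code: plain
memcpy() replaces rte_memcpy(), host pointers replace guest-physical
addresses (so there is no gpa_to_vva() step), virtio header handling is
omitted, and copy_to_chain() is a hypothetical helper.

/* Hypothetical, simplified model of the enqueue copy loop; struct
 * vring_desc here is a stand-in for the real vring descriptor. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define VRING_DESC_F_NEXT 1
#define MIN(a, b) ((a) < (b) ? (a) : (b))

struct vring_desc {
	uint64_t addr;	/* buffer address (a host pointer in this sketch) */
	uint32_t len;	/* buffer length */
	uint16_t flags;	/* VRING_DESC_F_NEXT when chained */
	uint16_t next;	/* index of the next descriptor in the chain */
};

/* Copy src[0..src_len) into the chain starting at desc[idx], spilling
 * into the next descriptor whenever the current one fills up. Trailing
 * descriptors that receive no data get their length zeroed, mirroring
 * the do/while loop the patch adds. Returns the bytes copied. */
static uint32_t
copy_to_chain(struct vring_desc *desc, uint16_t idx,
	      const uint8_t *src, uint32_t src_len)
{
	struct vring_desc *d = &desc[idx];
	uint32_t offset = 0;
	uint32_t len_to_cpy = MIN(src_len, d->len);

	do {
		if (len_to_cpy > 0) {
			memcpy((void *)(uintptr_t)d->addr,
			       src + offset, len_to_cpy);
			d->len = len_to_cpy;
			offset += len_to_cpy;
			if (d->flags & VRING_DESC_F_NEXT) {
				d = &desc[d->next];
				len_to_cpy = MIN(src_len - offset, d->len);
			} else
				break;
		} else {
			/* No data left for this descriptor. */
			d->len = 0;
			if (d->flags & VRING_DESC_F_NEXT)
				d = &desc[d->next];
			else
				break;
		}
	} while (1);

	return offset;
}

int main(void)
{
	uint8_t buf_a[8], buf_b[8];
	struct vring_desc desc[2] = {
		{ (uintptr_t)buf_a, sizeof(buf_a), VRING_DESC_F_NEXT, 1 },
		{ (uintptr_t)buf_b, sizeof(buf_b), 0, 0 },
	};
	const uint8_t payload[12] = "chained-data";

	/* 12 bytes split across two 8-byte descriptors: 8 + 4. */
	printf("copied %u bytes\n",
	       copy_to_chain(desc, 0, payload, sizeof(payload)));
	return 0;
}

The pre-patch code copied rte_pktmbuf_data_len(buff) bytes into the
first data descriptor unconditionally, which is exactly the FIXME this
patch removes: a packet larger than one descriptor was never split.
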
Signed-off-by: Changchun Ouyang
Tested-by: Qian Xu
Signed-off-by: Qian Xu
Acked-by: Huawei Xie
---
 lib/librte_vhost/vhost_rxtx.c | 60 +++++++++++++++++++++++++++++++------------
 1 file changed, 44 insertions(+), 16 deletions(-)

diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c
index 510ffe8..3135883 100644
--- a/lib/librte_vhost/vhost_rxtx.c
+++ b/lib/librte_vhost/vhost_rxtx.c
@@ -59,7 +59,7 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	struct virtio_net_hdr_mrg_rxbuf virtio_hdr = {{0, 0, 0, 0, 0, 0}, 0};
 	uint64_t buff_addr = 0;
 	uint64_t buff_hdr_addr = 0;
-	uint32_t head[MAX_PKT_BURST], packet_len = 0;
+	uint32_t head[MAX_PKT_BURST];
 	uint32_t head_idx, packet_success = 0;
 	uint16_t avail_idx, res_cur_idx;
 	uint16_t res_base_idx, res_end_idx;
@@ -113,6 +113,10 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	rte_prefetch0(&vq->desc[head[packet_success]]);
 
 	while (res_cur_idx != res_end_idx) {
+		uint32_t offset = 0;
+		uint32_t data_len, len_to_cpy;
+		uint8_t plus_hdr = 0;
+
 		/* Get descriptor from available ring */
 		desc = &vq->desc[head[packet_success]];
 
@@ -125,7 +129,6 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 
 		/* Copy virtio_hdr to packet and increment buffer address */
 		buff_hdr_addr = buff_addr;
-		packet_len = rte_pktmbuf_data_len(buff) + vq->vhost_hlen;
 
 		/*
 		 * If the descriptors are chained the header and data are
@@ -136,24 +139,44 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 			desc = &vq->desc[desc->next];
 			/* Buffer address translation. */
 			buff_addr = gpa_to_vva(dev, desc->addr);
-			desc->len = rte_pktmbuf_data_len(buff);
 		} else {
 			buff_addr += vq->vhost_hlen;
-			desc->len = packet_len;
+			plus_hdr = 1;
 		}
 
+		data_len = rte_pktmbuf_data_len(buff);
+		len_to_cpy = RTE_MIN(data_len, desc->len);
+		do {
+			if (len_to_cpy > 0) {
+				/* Copy mbuf data to buffer */
+				rte_memcpy((void *)(uintptr_t)buff_addr,
+					(const void *)(rte_pktmbuf_mtod(buff, const char *) + offset),
+					len_to_cpy);
+				PRINT_PACKET(dev, (uintptr_t)buff_addr,
+					len_to_cpy, 0);
+
+				desc->len = len_to_cpy + (plus_hdr ? vq->vhost_hlen : 0);
+				offset += len_to_cpy;
+				if (desc->flags & VRING_DESC_F_NEXT) {
+					desc = &vq->desc[desc->next];
+					buff_addr = gpa_to_vva(dev, desc->addr);
+					len_to_cpy = RTE_MIN(data_len - offset, desc->len);
+				} else
+					break;
+			} else {
+				desc->len = 0;
+				if (desc->flags & VRING_DESC_F_NEXT)
+					desc = &vq->desc[desc->next];
+				else
+					break;
+			}
+		} while (1);
+
 		/* Update used ring with desc information */
 		vq->used->ring[res_cur_idx & (vq->size - 1)].id =
 							head[packet_success];
-		vq->used->ring[res_cur_idx & (vq->size - 1)].len = packet_len;
-
-		/* Copy mbuf data to buffer */
-		/* FIXME for sg mbuf and the case that desc couldn't hold the mbuf data */
-		rte_memcpy((void *)(uintptr_t)buff_addr,
-			rte_pktmbuf_mtod(buff, const void *),
-			rte_pktmbuf_data_len(buff));
-		PRINT_PACKET(dev, (uintptr_t)buff_addr,
-			rte_pktmbuf_data_len(buff), 0);
+		vq->used->ring[res_cur_idx & (vq->size - 1)].len =
+							offset + vq->vhost_hlen;
 
 		res_cur_idx++;
 		packet_success++;
@@ -583,7 +606,14 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
 		desc = &vq->desc[head[entry_success]];
 
 		/* Discard first buffer as it is the virtio header */
-		desc = &vq->desc[desc->next];
+		if (desc->flags & VRING_DESC_F_NEXT) {
+			desc = &vq->desc[desc->next];
+			vb_offset = 0;
+			vb_avail = desc->len;
+		} else {
+			vb_offset = vq->vhost_hlen;
+			vb_avail = desc->len - vb_offset;
+		}
 
 		/* Buffer address translation. */
 		vb_addr = gpa_to_vva(dev, desc->addr);
@@ -602,8 +632,6 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
 		vq->used->ring[used_idx].id = head[entry_success];
 		vq->used->ring[used_idx].len = 0;
 
-		vb_offset = 0;
-		vb_avail = desc->len;
 		/* Allocate an mbuf and populate the structure. */
 		m = rte_pktmbuf_alloc(mbuf_pool);
 		if (unlikely(m == NULL)) {
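
On the dequeue side, the fix boils down to deciding where the payload
starts. Here is a short sketch of that decision, reusing the simplified
struct vring_desc from the enqueue example above; locate_payload() is a
hypothetical helper written for illustration, not a vhost function.

/* Hypothetical helper: find the payload after the virtio header.
 * If the first descriptor is chained, that whole descriptor is the
 * header and the data starts in the next one; otherwise header and
 * data share the single descriptor and we skip vhost_hlen bytes. */
static struct vring_desc *
locate_payload(struct vring_desc *desc, uint16_t idx, uint32_t vhost_hlen,
	       uint32_t *vb_offset, uint32_t *vb_avail)
{
	struct vring_desc *d = &desc[idx];

	if (d->flags & VRING_DESC_F_NEXT) {
		d = &desc[d->next];	/* data lives in the next descriptor */
		*vb_offset = 0;
		*vb_avail = d->len;
	} else {
		*vb_offset = vhost_hlen;	/* data follows the header */
		*vb_avail = d->len - vhost_hlen;
	}
	return d;
}

Before the patch, rte_vhost_dequeue_burst() jumped to desc->next
unconditionally, so a guest that placed header and data in one single
descriptor made vhost read an unrelated descriptor; that is exactly the
second case the commit message calls out.
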